many people seem to be excited about nVidia's new line of GPUs, which is reasonable, since at CES they really made it seem like these new bois are insane for their price.

Jensen (the CEO guy) said that with the power of AI, the 5070, at a sub-$600 price, is in the same class as the 4090, which sits at an over-$1500 price point.

Here's my idea: they talk a lot about upscaling, generating frames and pixels, and so on. I think what they mean by both having similar performance is that the 4090 with no AI upscaling achieves frame rates similar to the 5070 running DLSS and whatever else.

So yes, for pure “gaming” performance, in games that support it, the two will put out similar frame rates. But there will be artifacts.
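
If that reading is right, the arithmetic looks something like this rough sketch (the FPS numbers and the 4x frame-gen factor are assumptions for illustration, not benchmarks):

```python
# Back-of-the-envelope: how frame generation lets a slower card "match"
# a faster one in displayed FPS. All numbers here are hypothetical.

native_fps_4090 = 100   # assumed native render rate of the 4090
native_fps_5070 = 28    # assumed native render rate of the 5070

# Frame generation inserts AI-generated frames between rendered ones;
# a 4x mode would mean roughly 3 generated frames per rendered frame.
frame_gen_multiplier = 4

displayed_fps_5070 = native_fps_5070 * frame_gen_multiplier

print(f"4090 native:         {native_fps_4090} fps")
print(f"5070 native:         {native_fps_5070} fps")
print(f"5070 with frame gen: {displayed_fps_5070} fps (mostly generated frames)")
```

The displayed number catches up, but the extra frames are generated rather than rendered, which is where the artifacts come from.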

For ANYTHING besides these “gaming” use cases, it will probably be closer to the 4080 or whatever (idk GPU naming…).

So if you care about inference, Blender, or literally anything non-gaming: you probably shouldn't care about this.

i’m totally up for counter arguments. maybe i’m missing something here, maybe i’m being a dumdum <3

imma wait for amd to announce their stuff and just get the top one, for the open drivers. not an nvidia person myself, but their research seems spicy. currently still slogging along with a 1060 6GB

  • .Donuts

    I was ready to do some due diligence, but the specs don't lie: the 5070 is lower in all the specs that matter, like CUDA cores, Shader cores, Tensor cores, VRAM, and even base clock speed.

    There might be some improved use cases because of more modern architecture and offloading certain tasks to a powerful CPU, but it’s looking bleak, yeah.

    Minor pet peeve: it’s NVIDIA, full caps.

    • @[email protected]

      Regarding your pet peeve, when was the change? I always want to write it as nVidia too, or maybe now Nvidia. Was that something from back in the early GeForce days, or am I just imagining things?

      • @glimse

        It was never either of those. It started as nVIDIA and made the N uppercase during the pandemic (or around that time)

        [Edit] I could have just checked before commenting, but no… I decided I'd rather be wrong, I guess. This is the correct answer:

        They started as nVIDIA but used NVIDIA interchangeably for decades. In 2020, all caps became “official”

        • @[email protected]

          I think I conflated capitalization between Nvidia and their nForce chipset. I had an nForce motherboard for my Athlon build.

          • @glimse

            Oh yeah, I forgot about those things! nVIDIA nForce… Should have gone with nFORCE

        • @[email protected]

          I found a couple brand guides from the 2010s showing all caps. If they changed, it was before that.

          • @glimse

            It’s been their official name since like the 2000s but they didn’t seem to nForce it (sorry lol) until a few years ago. I remember reading an article about the updated guidelines.

            I think it’s funny that their logo still shows a lowercase n

    • @[email protected]

      “CUDA cores, Shader cores, Tensor cores”

      You should never compare those across architectures. Just like CPUs, GPUs can do more or less work per clock per core. Within an architecture you can use them to get an idea, but across architectures it's apples to oranges.

      Ex: the GTX 680 had 3x the cores of the GTX 580, but at best performed 2x as fast, and in practice closer to 1.5x.
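
      Putting rough numbers on that (the speedup figure is the ballpark one above; the per-core factor is just what it implies, not a measured value):

      ```python
      # Why raw core counts don't compare across GPU architectures.
      # Figures are the approximate ones from the 680 vs 580 example above.

      gtx580_cores = 512     # Fermi CUDA cores
      gtx680_cores = 1536    # Kepler CUDA cores

      core_ratio = gtx680_cores / gtx580_cores   # 3.0x on paper
      observed_speedup = 1.5                     # rough real-world uplift

      # Per-core throughput implied by the observed speedup (ignoring clocks):
      implied_per_core = observed_speedup / core_ratio

      print(f"Core count ratio:        {core_ratio:.1f}x")
      print(f"Observed speedup:        {observed_speedup:.1f}x")
      print(f"Implied per-core factor: {implied_per_core:.2f}x")  # ~0.5x
      ```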