• Maple
    22 • 1 year ago

    You know what, good. If the past 5 years have been any indication, we need to stop pushing the bleeding edge as much and focus on stability.

    I want games that run well just as much as I want them to look good.

    • @[email protected]
      0 • 1 year ago

      I don’t buy them for bragging rights, I buy them because I want to future proof my builds. Idgaf about anyone knowing anything about my builds or costs. I simply want performance and quality. Don’t generalize.

          • Scratch
            3 • 1 year ago

            Hello, I am a gamer first, Blender modeller and animator, like 4th. I needed to upgrade my GPU to actually work in Cycles. So, as it was around my birthday, I bought a top-end Nvidia card.

            I can buy a GPU for more than one reason.

              • Scratch
                3 • 1 year ago

                I know you’re just trying to troll and be obnoxious, but legitimately my render times dropped by an obscene amount.

                20 mins per frame down to about 30 seconds over a 250 frame animation is very significant for me.
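
                Worked out, that speedup is enormous. A quick sanity check in Python (the frame count and per-frame times are the ones from this comment; nothing else is assumed):

```python
# Back-of-the-envelope check of the render-time savings described above:
# 250 frames at 20 min/frame on the old GPU vs ~30 s/frame on the new one.
frames = 250
old_seconds = 20 * 60              # old GPU: 20 minutes per frame
new_seconds = 30                   # new GPU: ~30 seconds per frame

old_total_h = frames * old_seconds / 3600   # total hours before the upgrade
new_total_h = frames * new_seconds / 3600   # total hours after the upgrade
speedup = old_seconds / new_seconds

print(f"before: {old_total_h:.1f} h, after: {new_total_h:.2f} h, ~{speedup:.0f}x faster")
# before: 83.3 h, after: 2.08 h, ~40x faster
```

                That’s roughly 83 hours of rendering cut to about two, a roughly 40x speedup.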

  • Altima NEO
    8 • 1 year ago

    I mean, it’s not like AMD has been able to compete for several generations. I wouldn’t expect them to now.

    Though I wish they would, because nVidia is getting away with some bullshit pricing thanks to not having any competition.

  • @[email protected]
    4 • 1 year ago

    AMD seems incapable of competing with Nvidia at the high end; they can’t make FSR as good as DLSS, and they’re still far behind in RT.

    • @[email protected]
      2 • 1 year ago

      And AI/ML workloads. Nvidia gets lots of shit and is more expensive, but you get a better ecosystem with their cards.

      • @[email protected]
        2 • 1 year ago

        A good portion of this though is the CUDA stranglehold nvidia has. Good luck getting a neural net accelerated on OpenCL or Vulkan Compute.

        • Scratch
          2 • 1 year ago

          AMD do seem to be taking steps in the right direction here, but a more balanced landscape is still a while away.

    • @[email protected]
      2 • 1 year ago

      I think it’s more that they’re unwilling. AMD goes after low hanging fruit and targets the mass market. In essence, they’re willing to let NVIDIA invest in all of the new tech, and then they implement whatever gets popular.

      So unless they decide to truly prioritize their GPU business, they’ll be happy to target the quiet majority who care mostly about price to performance while focusing on innovating on the CPU side of the business where they make their real money.

      I’m sure they could compete on the GPU side if they threw money at the problem, but they don’t see a need to when it’s decently popular and they’re seeing a lot more growth and profit on the CPU side.

        • @[email protected]
          1 • 1 year ago

          And that’s how it has been for a long time at AMD.

          Look at CPUs, they were in a comfortable second place as the economy option for many years, and when they tried something new, it blew up in their face (Bulldozer).

          Ryzen was all about the chiplet design first and architecture improvements second. They didn’t go for the most innovative core design or smallest process (they didn’t even have a fab); they went for the economical option (chiplets have better yields). They were able to catch up with Intel with IPC gains, but Ryzen was pretty uninteresting aside from that.

          Even today, Zen 4 is just an iteration on the chiplet design, and they’re beating Intel because Intel struggled with lithography issues, and Intel is also trying novel things that haven’t resulted in a clear win vs AMD. So AMD is happy to attack yields (chiplets) and innovate by extension (add-on cache) instead of trying something radical with core design.

          Their GPUs are going the same way. NVIDIA is trying hard with RT cores, whereas AMD mostly reused regular shader cores initially. NVIDIA is building a huge model for DLSS; AMD just applies a simple, one-size-fits-most filter on top. NVIDIA goes for the best experience at the high end; AMD just goes for a pretty good experience for most.
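
          To make the filter-vs-model contrast concrete, here’s a toy sketch. Note that `nearest_upscale` is a made-up illustration, not FSR’s actual algorithm (FSR 1 is a hand-tuned, edge-adaptive spatial upscaler, and DLSS runs a trained network over multiple frames plus motion vectors), but it shows the key property of the fixed-filter approach: it only ever looks at nearby pixels in the current frame.

```python
# Toy illustration of a fixed spatial upscaler: a hard-coded rule that
# repeats nearby pixels, with no training data involved anywhere.
# (Made-up example; real FSR is edge-adaptive and far more elaborate.)
def nearest_upscale(img, factor):
    """Upscale a 2D grid of pixel values by repeating each pixel."""
    out = []
    for row in img:
        wide = [p for p in row for _ in range(factor)]  # repeat horizontally
        out.extend([wide] * factor)                     # repeat vertically
    return out

low_res = [[1, 2],
           [3, 4]]
print(nearest_upscale(low_res, 2))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

          A fixed rule like this works on any game with no per-title training, which is the one-size-fits-most trade-off; the cost is that it can’t reconstruct detail the way a learned model sometimes can.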

          I don’t see that changing, that has been AMD’s main playbook since Intel overtook them after the x64 transition.