• @[email protected]

    They aren’t making graphics cards anymore, they’re making AI processors that happen to do graphics using AI.

    • @Knock_Knock_Lemmy_In

      What if I’m buying a graphics card to run Flux or an LLM locally? Aren’t these cards good for those use cases?

      • @[email protected]

        Oh yeah, for sure. I’ve run Llama 3.2 on my RTX 4080; it struggles, but it’s not obnoxiously slow. I think they’re betting that more software will ship with integrated LLMs that run locally on users’ PCs instead of relying on cloud compute.
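        For anyone curious what that looks like in practice, here’s a minimal sketch using the Hugging Face transformers pipeline (just one of several ways to run it locally). It assumes a CUDA build of PyTorch, `pip install transformers accelerate`, and access to the gated meta-llama/Llama-3.2-1B-Instruct checkpoint; the prompt is obviously just an example:

        ```python
        # Minimal local-inference sketch: run a small Llama 3.2 model on your own GPU.
        import torch
        from transformers import pipeline

        pipe = pipeline(
            "text-generation",
            model="meta-llama/Llama-3.2-1B-Instruct",
            torch_dtype=torch.bfloat16,  # roughly halves VRAM vs. float32
            device_map="auto",           # places the model on the GPU if one is visible
        )

        messages = [{"role": "user", "content": "Explain DLSS in one paragraph."}]
        out = pipe(messages, max_new_tokens=200)
        print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
        ```

        Swapping in a larger Llama checkpoint is just a matter of changing the model id, VRAM permitting.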

    • @daddy32

      Except you can’t use them for AI commercially, or at least not in a data center setting; Nvidia’s GeForce driver EULA disallows datacenter deployment.

      • @[email protected]

        Data centres want the even beefier cards anyhow, but I think Nvidia envisions everyone running local LLMs on their PCs, because AI will be integrated into the software itself instead of relying on cloud compute. My RTX 4080 can struggle through Llama 3.2.