• @Knock_Knock_Lemmy_In
    1 day ago

    What if I’m buying a graphics card to run Flux or an LLM locally. Aren’t these cards good for those use cases?

    • @[email protected]
      1 day ago

      Oh yeah, for sure. I've run Llama 3.2 on my RTX 4080 and it struggles, but it's not obnoxiously slow. I think they're betting that more software will ship with integrated LLMs that run locally on users' PCs instead of relying on cloud compute.
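
      If anyone wants to try it, here's a rough sketch of that kind of setup using the llama-cpp-python bindings. The GGUF filename is just a placeholder; swap in whichever Llama 3.2 build you actually download.

      ```python
      # Minimal local-inference sketch with llama-cpp-python.
      from llama_cpp import Llama

      llm = Llama(
          model_path="./llama-3.2-3b-instruct.Q4_K_M.gguf",  # placeholder filename
          n_gpu_layers=-1,  # offload all layers to the GPU (e.g. an RTX 4080)
          n_ctx=4096,       # context window size
      )

      out = llm.create_chat_completion(
          messages=[{"role": "user", "content": "Explain why local inference matters."}],
          max_tokens=256,
      )
      print(out["choices"][0]["message"]["content"])
      ```

      Offloading everything with n_gpu_layers=-1 is what keeps it usable; a quantized 3B model fits comfortably in the 4080's 16 GB of VRAM.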