• @daddy32
    21 days ago

    Except you cannot use them for AI commercially, or at least not in a data center setting.

    • @[email protected]
      21 days ago

      Data centres want the even beefier cards anyhow, but I think Nvidia envisions everyone running local LLMs on their PCs, with the models integrated into software instead of relying on cloud compute. My RTX 4080 can struggle through Llama 3.2.
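
      (If anyone wants to try it, here is a minimal sketch of local inference through the Ollama Python client; the model tag and prompt are just examples, and it assumes the Ollama runtime is already installed with the llama3.2 weights pulled.)

          import ollama

          # Ask the locally served Llama 3.2 model a question. Ollama runs the
          # model on the local GPU (or CPU), so no cloud compute is involved.
          response = ollama.chat(
              model="llama3.2",
              messages=[{"role": "user", "content": "Why run an LLM locally instead of in the cloud?"}],
          )
          print(response["message"]["content"])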