• Alphane Moon (OP)
    210 days ago

    That’s fair. But do you see where I am coming from?

    Marketing around TOPs isn’t everything.

    “Interesting” is a relative term. I find upscaling older SD content interesting. You can’t just dismiss this use case because it doesn’t fit into your argument.

    Getting a local LLM running with an Nvidia GPU is extremely simple (Llama 1B is not as good as cloud LLMs, of course, but it does have valid use cases). Can you provide a five-bullet-point guide for setting up a local LLM with 32 GB RAM? (64 GB RAM isn’t that common in laptops.)

    • @[email protected]
      10 days ago

      1. Install LM Studio
      2. Profit

      If you want to use the NPU instead:

      1. Apply for the beta branch (3.6.x) at LM Studio
      2. Install the LM Studio beta
      3. Profit

      Edit: Almost forgot, the AMD drivers (under review) for the latest NPU-containing CPUs (7xxx and upward) should come with the spring kernel update to 6.3, fingers crossed. It’s been two years; they took their sweet time. Windows support was available at release…
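
      Once LM Studio is installed and a model is loaded, it can serve an OpenAI-compatible API on localhost (port 1234 by default). A minimal stdlib-only sketch of talking to that endpoint — the model identifier here is an assumption, use whatever name LM Studio shows for your loaded model:

      ```python
      import json
      import urllib.request

      # LM Studio's default local server endpoint (OpenAI-compatible).
      API_URL = "http://localhost:1234/v1/chat/completions"

      def build_payload(prompt, model="llama-3.2-1b-instruct"):
          """Build an OpenAI-style chat completion request body.

          The model name is a placeholder; substitute the identifier
          shown in LM Studio for the model you actually loaded.
          """
          return {
              "model": model,
              "messages": [{"role": "user", "content": prompt}],
              "temperature": 0.7,
          }

      def ask(prompt):
          """POST the prompt to the local server and return the reply text."""
          req = urllib.request.Request(
              API_URL,
              data=json.dumps(build_payload(prompt)).encode("utf-8"),
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              body = json.load(resp)
          return body["choices"][0]["message"]["content"]
      ```

      With the server running you'd call `ask("Summarize this paragraph: ...")`; everything stays on-device, which is the whole point of the 1B-class local models mentioned above.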