Hello, I’ve been hearing a lot about this new DeepSeek LLM and was wondering: would it be possible to get the 600+ billion parameter model running on my GPU? I’ve heard something about people getting it to run on their MacBooks. I have an i7 4790K, 32GB DDR3, and a 7900 XTX with 24GB VRAM. I’m running Arch Linux; this computer is really just for AI stuff, not so much gaming. I did try running the distilled 14B parameter model, but it didn’t work for me (I was using GPT4All to run it). I’m thinking about getting one of the NVIDIA 5090s in the future. Thanks in advance!

  • The Hobbyist
    2 days ago

    To run the full 671B model (404GB on disk), you would need more than 404GB of combined GPU memory and system RAM, and that’s just to run it at all; you would most probably want it all in GPU memory to make it run fast. For scale: your 24GB of VRAM plus 32GB of RAM comes to 56GB, nowhere near enough for the full model.

    With 24GB of GPU memory, the largest model from the R1 series that would fit is the 32b-qwen-distill-q4_K_M (20GB in size), available at ollama (and possibly elsewhere).
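
    A minimal sketch, assuming ollama is already installed (the tag is inferred from the model name above; check the ollama library page for the exact one):

        # download the ~20GB quantized distill, then chat with it
        $ ollama pull deepseek-r1:32b-qwen-distill-q4_K_M
        $ ollama run deepseek-r1:32b-qwen-distill-q4_K_M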

  • Eager Eagle
    2 days ago

    They run smaller variations of it on their personal machines. There are models that fit in almost any machine, but IME the first one that is actually useful is the 32B, which you can probably run on the XTX. Anything smaller is only good for the more trivial tasks.

    • Rimu
      2 days ago

      Just to confirm – I tried the 7B and it was fast but pretty untrustworthy. OP’s 24 GB of VRAM should be enough to run a medium-sized version, though…

  • Here_for_the_dudes
    2 days ago

    I run the 32B version on my 6700 XT with a Ryzen 7 3700X using ollama. It runs well, but it gets a bit slower on complex problems. I once ran a 70B Llama model, but it took a long time to finish.
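
    When a model doesn’t fully fit in VRAM, ollama splits it between GPU and CPU, which is what makes the big models slow. A quick way to see the split, assuming a reasonably recent ollama:

        # with a model loaded, ollama reports where it was placed
        $ ollama ps
        # the PROCESSOR column shows e.g. "100% GPU" or "25%/75% CPU/GPU"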

    • @ieatpwns
      2 days ago

      Hey, not to sidetrack OP’s post or your own, but I’m new to the home LLM space and I was wondering: once you have the model set up, is there a GUI? And how do you input tasks for it to do?
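
      Out of the box, ollama itself is a terminal chat; GUI frontends such as Open WebUI are separate projects that sit on top of it. A minimal sketch of both ways of feeding it a task (model tag assumed from the thread above):

          # interactive chat in the terminal
          $ ollama run deepseek-r1:32b
          >>> Summarize the plot of Dune in two sentences.

          # or one-shot, handy for scripting
          $ ollama run deepseek-r1:32b "Write a commit message for a typo fix"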

    • @[email protected]
      2 days ago

      I have the same GPU, but I always run 7B/8B variants as EXL2. Do you use GGUF so the model can spill over into system RAM?
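
      GGUF is what makes the RAM spill-over work: llama.cpp (and ollama, which builds on it) lets you put only part of the model’s layers in VRAM and keep the rest in system RAM. A rough sketch with llama.cpp, assuming a ROCm/HIP build and a hypothetical model filename:

          # offload 40 layers to VRAM, keep the remainder in system RAM
          $ llama-cli -m deepseek-r1-32b-q4_k_m.gguf -ngl 40 -c 8192 -p "hello"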

    • @[email protected]
      2 days ago

      I also have a 6700 XT, but I can’t get ollama running on it; it just defaults to the CPU (Ryzen 5600). I plan to tackle this problem on a free weekend, and now I have a new reason to solve it.
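
      A commonly reported workaround for this card: ROCm doesn’t officially support the 6700 XT (gfx1031), so you spoof the closest supported target (gfx1030). This is an assumption about your setup, but it’s worth a try:

          # tell ROCm to treat the 6700 XT as a gfx1030 part
          $ HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve

          # if ollama runs as a systemd service, set it there instead
          $ sudo systemctl edit ollama
          # add:  [Service]
          #       Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"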

      • Here_for_the_dudes
        2 days ago

        Well, I don’t know what you are running, but on Debian or Fedora it automatically installed the drivers and picked the GPU. I had a problem like this once, where it had the wrong drivers (but that was on an NVIDIA GPU).

  • @[email protected]
    2 days ago

    I don’t know how big the original model is, but I have an RX 6700 XT and I can easily run the Llama 3 8B distill of DeepSeek R1 with 32k context. I just haven’t figured out how to get good results yet; it always does the <think></think> thing.
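
    The <think>…</think> block is R1’s chain-of-thought, emitted before the actual answer; if it gets in the way, one option is to strip it after the fact. A small sketch (model tag assumed):

        # drop everything between the think tags, keep only the final answer
        $ ollama run deepseek-r1:8b "What is 2+2?" \
            | perl -0777 -pe 's/<think>.*?<\/think>\s*//s'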