Hello, I’ve been hearing a lot about this new DeepSeek LLM and was wondering: would it be possible to get the 600+ billion parameter model running on my GPU? I’ve heard something about people getting it to run on their MacBooks. I have an i7 4790K, 32GB DDR3, and a 7900 XTX with 24GB VRAM. I’m running Arch Linux, and this computer is mostly for AI stuff, not so much gaming. I did try running the distilled 14B parameter model, but it didn’t work for me (I was using GPT4All to run it). I’m thinking about getting one of the NVIDIA 5090s in the future. Thanks in advance!
I also have a 6700 XT, but I can’t get Ollama running on it; it just defaults to the CPU (Ryzen 5600). I plan to tackle this problem on a free weekend, and now I have a new reason to solve it.
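For what it’s worth, the 6700 XT (gfx1031) isn’t on ROCm’s officially supported list, which is a common reason Ollama falls back to the CPU. A frequently reported workaround is overriding the reported GFX version to the supported gfx1030 target; this is a sketch of that workaround, not guaranteed to work on every setup:

```shell
# Workaround: make ROCm treat the 6700 XT (gfx1031) as gfx1030.
# Unofficial override; verify it behaves correctly on your hardware.
HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve

# If ollama runs as a systemd service, set the variable there instead:
sudo systemctl edit ollama
#   [Service]
#   Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
sudo systemctl restart ollama
```

After restarting, `ollama ps` should show the loaded model running on the GPU instead of the CPU.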
On some Linux distros like Arch Linux you might need to install an ollama-rocm package too.
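On Arch that roughly looks like the following (a sketch based on the packages in the extra repo; package names and model tags may differ on your system):

```shell
# On Arch, the ROCm backend for Ollama is a separate package:
sudo pacman -S ollama-rocm
sudo systemctl enable --now ollama

# Pull and run a model, then check whether it landed on the GPU:
ollama run deepseek-r1:14b
ollama ps   # the PROCESSOR column shows the GPU/CPU split
```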
Well, I don’t know what you are running, but on Debian or Fedora it automatically installed the drivers and picked the GPU. I had a problem like this once, where it had the wrong drivers (but that was with an NVIDIA GPU).