I’m currently getting a lot of timeout errors and delays while processing the analysis. What GPU could I add to this setup? Please advise.

  • @[email protected]
    6 days ago

    Is that still true, though? My impression is that AMD works just fine for inference with ROCm and llama.cpp nowadays, and you get much more VRAM per dollar, which means you can stuff a bigger model in there. You might get fewer tokens per second than with a comparable Nvidia card, but that shouldn’t really be a problem for a home assistant, I believe. Even an Arc A770 should work with IPEX-LLM. Buy two Arc or Radeon cards with 16 GB of VRAM each and you can fit a Llama 3.2 11B or a Pixtral 12B without any quantization (rough math below). Just make sure ROCm supports that specific Radeon card if you go for team red.
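
    A quick back-of-the-envelope sketch of the VRAM math behind that claim. The 1.2 overhead factor for KV cache and runtime buffers is an assumption for illustration, not a measured value:

    ```python
    # Rough VRAM estimate for running a model unquantized (fp16/bf16 weights).
    # The overhead factor below is an assumed fudge for KV cache, activations,
    # and runtime buffers; real usage depends on context length and backend.

    def vram_gb(params_billion: float,
                bytes_per_param: float = 2.0,  # fp16/bf16 weights
                overhead: float = 1.2) -> float:  # assumed runtime overhead
        """Very rough VRAM needed for the weights plus typical runtime overhead."""
        weights_gb = params_billion * 1e9 * bytes_per_param / 1024**3
        return weights_gb * overhead

    for name, size_b in [("Llama 3.2 11B", 11), ("Pixtral 12B", 12)]:
        need = vram_gb(size_b)
        print(f"{name}: ~{need:.0f} GB at fp16 -> fits in 2 x 16 GB (32 GB total), "
              f"not in a single 16 GB card")
    ```

    That lands around 25–27 GB for these models, so a single 16 GB card needs quantization, while two of them leave headroom even unquantized (ignoring the overhead of splitting layers across GPUs).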

    • @OpossumOnKeyboard
      6 days ago

      I’m also curious. I’ve heard good things about AMD and ROCm this past year too. Obviously it’s still not on par with Nvidia (and maybe never will be), but given the price, I’ve been tempted to try it.