My office computer has a Ryzen 7 5700, an RX 580X, and 32 GB of RAM. Running Ollama with DeepSeek-V2 or Llama 3 is much slower than ChatGPT in the browser. The same goes for my newer, more powerful home computer.

What kind of hardware do you need to run with comparable responsiveness to ChatGPT? How much does it cost? Assuming such hardware is commercially available, where do you find it?

  • JoYo 🇺🇸 · 5 hours ago

    It’s all dependent on VRAM. If your GPU can load the distilled models without maxing out its VRAM, they’ll run just as fast as on any server farm (back-of-envelope sketch below).

    > RX 580X

    It looks like your video card only has 8 GB of VRAM. That will be your bottleneck.
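
    A rough sketch of that VRAM math (the models and quant levels below are illustrative assumptions, not benchmarks):

    ```python
    # Rough VRAM estimate for running a local LLM entirely on the GPU.
    # Weights dominate; KV cache and runtime buffers add a bit on top.

    def vram_needed_gb(params_billions: float, bits_per_weight: int,
                       overhead_gb: float = 1.5) -> float:
        weights_gb = params_billions * bits_per_weight / 8
        return weights_gb + overhead_gb  # KV cache, driver buffers, etc.

    for name, params_b, bits in [
        ("8B model @ 4-bit quant", 8, 4),    # typical Ollama default quant
        ("8B model @ FP16", 8, 16),
        ("70B model @ 4-bit quant", 70, 4),
    ]:
        print(f"{name}: ~{vram_needed_gb(params_b, bits):.1f} GB VRAM")
    ```

    An 8 GB card fits the 8B model at 4-bit (~5.5 GB), but FP16 (~17.5 GB) or a 70B model (~36.5 GB) spills into system RAM and onto the CPU, which is where the dramatic slowdown comes from.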

  • zelifcam · 8 hours ago

    Install LocalAI and ensure it’s using acceleration. It’s one of the best solutions we have at the moment.

    Are you sure you’re not running these small models on the CPU with no acceleration? Because I’m running these small models pretty quickly. Nearly instant responses on an NVIDIA Titan Xp in a gaming rig I built around 2017.
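
    One quick way to sanity-check that, assuming a stock Ollama install listening on its default port (the model name is just an example): time a generation through the API and compute tokens per second from the stats it returns. Single digits on a small model usually means CPU fallback.

    ```python
    # Quick tokens/sec check against a local Ollama server.
    # Assumes Ollama's default API at localhost:11434; model name is an example.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3",
            "prompt": "Explain VRAM in one sentence.",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        stats = json.load(resp)

    # eval_count = generated tokens, eval_duration = time spent in nanoseconds
    print(f"{stats['eval_count'] / (stats['eval_duration'] / 1e9):.1f} tokens/sec")
    ```

    `ollama ps` also shows whether a loaded model is split between CPU and GPU.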

    • @[email protected]
      link
      fedilink
      49 hours ago

      AMD is quite awful in this regard. Right now with my RX 6650 XT using Vulkan acceleration, I get the same speed as running on my Ryzen 5 7600.

  • @JASN_DE · 9 hours ago

    You’d need basically a small server rack filled with datacenter GPUs. Expect costs in the mid to high five figures (rough math below).

    But: running smaller models on a typical gaming GPU is quite doable.
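
    Ballpark math behind that figure (the 70B example, VRAM size, and prices are rough assumptions, not quotes):

    ```python
    # Back-of-envelope cost of serving a large model at datacenter speed.
    # All figures are rough assumptions for illustration only.

    model_params_b = 70      # a 70B-class model
    bytes_per_weight = 2     # FP16
    gpu_vram_gb = 80         # one H100/A100-class card
    price_per_gpu = 30_000   # very rough street price

    weights_gb = model_params_b * bytes_per_weight   # ~140 GB of weights
    gpus_needed = -(-weights_gb // gpu_vram_gb)      # ceiling division -> 2

    print(f"~{weights_gb} GB of weights -> {gpus_needed} GPUs")
    print(f"GPUs alone: ~${gpus_needed * price_per_gpu:,}")
    # Add chassis, CPUs, RAM, networking, power and cooling, and the
    # mid-to-high five-figure estimate above is about right.
    ```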

  • @[email protected]
    link
    fedilink
    English
    -59 hours ago

    > What kind of hardware do you need to run with comparable responsiveness to ChatGPT?

    Generally you need $8,000 to $10,000 worth of equipment to get comparable responsiveness from a self-hosted LLM.