I’ve been looking into self-hosting LLMs or Stable Diffusion models using something like LocalAI and/or Ollama together with LibreChat.

Some questions to get a nice discussion going:

  • Any of you have experience with this?
  • What are your motivations?
  • What are you using in terms of hardware?
  • Considerations regarding energy efficiency and associated costs?
  • What about renting a GPU? Privacy implications?
  • Greg Clarke
    26 months ago

    That’s 128 GB of RAM; the GPU has 24 GB of VRAM. Ollama has gotten pretty smart with resource allocation: smaller models can fit solely in VRAM, but I can still run larger models from RAM.
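    For anyone wanting to steer that split manually: Ollama exposes a `num_gpu` parameter in a Modelfile that caps how many layers are offloaded to VRAM, with the rest running from system RAM. A minimal sketch (the model name and layer count here are just example values, not a recommendation):

    ```
    FROM llama2:13b
    # Offload only 20 layers to the GPU; remaining layers run from system RAM
    PARAMETER num_gpu 20
    ```

    Build it with `ollama create mymodel -f Modelfile` and run it as usual; lowering `num_gpu` trades speed for VRAM headroom.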

    • JackGreenEarth
      16 months ago

      Any tips on how to get stable diffusion to do that? I’m running it through Krita’s AI Image Generation plugin, and with my 6GB VRAM and 16GB RAM, the VRAM is quite limited if I want to inpaint larger images, I keep getting ‘out of VRAM’ errors. How do I make it switch to RAM when VRAM is full? Or with Jan for that matter, how can I get it to partially use RAM and partially VRAM so I can get it to run models larger than 7B?