• (deleted-account)
    2 years ago

    I run Stable Diffusion locally. It’s free, though you do need a decent video card.

    • @[email protected]
      link
      fedilink
      English
      12 years ago

      Any news of a llama.cpp equivalent for SD? It would be handy to be able to run it slowly without a GPU, and it might even be competitive with other free options in terms of images generated per day.

        • @[email protected]
          link
          fedilink
          English
          32 years ago

          It’s a portable, standard-C++, CPU-based implementation of the code to do inference (i.e. text generation) with the LLaMA language model. You get a command-line program that takes a prompt and the model weights and eventually outputs more text.
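
          Roughly, using it looks like this. A hedged sketch wrapping the CLI from Python; the `./main` binary name, model path, and flags here are assumptions that depend on how you built it, so check the repo’s README:

          ```python
          # Sketch: drive the llama.cpp CLI from Python. The binary name, model path,
          # and flags (-m, -p, -n) are assumptions for a typical build.
          import subprocess

          def generate(prompt: str, model: str = "models/7B/ggml-model-q4_0.bin") -> str:
              result = subprocess.run(
                  ["./main", "-m", model, "-p", prompt, "-n", "128"],
                  capture_output=True, text=True, check=True,
              )
              return result.stdout  # the prompt plus the generated continuation

          print(generate("The Fediverse is"))
          ```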

          You could do the same thing and run Stable Diffusion on the CPU at some relatively slow speed, but I don’t know if anyone has written code for it.
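
          (To be clear, the regular Python implementation does run on a CPU if you just never move it to a GPU, it’s just slow; what’s missing is a lean llama.cpp-style port. A rough sketch with Hugging Face diffusers, assuming an SD 1.5 checkpoint:)

          ```python
          # Sketch: CPU-only Stable Diffusion via Hugging Face diffusers.
          # Checkpoint name and step count are assumptions; expect minutes per image.
          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5",
              torch_dtype=torch.float32,  # fp32 on CPU; fp16 is mainly a GPU thing
          )
          pipe = pipe.to("cpu")

          image = pipe("a watercolor lighthouse at dusk",
                       num_inference_steps=25).images[0]
          image.save("out.png")
          ```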

            • @[email protected]
              link
              fedilink
              English
              22 years ago

              Wow, that actually claims to be really fast on CPU. Why aren’t people using this all the time instead of the annoying online services?

              • Inductor
                2 years ago

                That guide references this version for more CPU performance. I haven’t tried using a CPU, but from my experience with raytraced rendering on a CPU, it’s probably very slow compared to a GPU. It might still be faster than online services with a queue, though.