Hi, you’ve found this subreddit Community, welcome!

This Community is intended to be a replacement for r/LocalLLaMA, because I think we need to move beyond centralized Reddit in general (although obviously the API situation is part of it too).

I will moderate this Community for now, but if you want to help, you are very welcome, just contact me!

I will mirror or rewrite posts from r/LocalLLaMA for this Community for now, but maybe we could eventually all move to this Community (or any Community on Lemmy, seriously, I don’t care about being mod or “owning” it).

  • scrollbars · 4 points · 1 year ago

    Hello! This is the one community I was a bit worried I wouldn’t find an equivalent of outside of reddit. Hopefully more of us migrate over.

  • @[email protected] · 3 points · 1 year ago

    Thank you for using a decent platform. I doubt more than 20 people will migrate from reddit… but it makes the world a better place anyway.

  • @[email protected] · 2 points · 1 year ago

    Late to the party, but thanks for setting this up! I suspect the overlap between people using local LLMs and people hungry for reddit alternatives is higher than average.

  • @floatingBurger · 1 point · 1 year ago

    Thank you so much for setting up here. Have you looked into the GitHub project that allows cloning submissions automatically? Just curious, since this must be a lot for an individual to do.

  • @[email protected] (mod) · 0 points · 1 year ago

    I could help with moderation, but I have a question: how do I set up LLaMA on my Mac? Any tips?

      • @gh0stcassette · 2 points · 1 year ago

        Adding to this: text-generation-webui (https://github.com/oobabooga/text-generation-webui) works with the latest bleeding-edge llama.cpp via llama-cpp-python, and it has a nice graphical front-end. You do have to manually tell pip to install llama-cpp-python with the right compiler flags to get GPU acceleration working, but the llama-cpp-python GitHub and the ooba GitHub explain how to do this.
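
        For reference, a minimal install sketch, assuming an Apple-silicon Mac and a Metal build (the exact flag names have changed across llama-cpp-python versions, so check its README first):

            CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

        On an NVIDIA machine you would pass the cuBLAS/CUDA flag there instead of the Metal one.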

        You can even set up GPU acceleration through Metal on M1 Macs. I’ve seen some fucking INSANE performance numbers online for the higher-RAM MacBook Pros (20+ tokens/sec, I think with a 33b model, but it might have been 13b; either way, impressive).
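
        Once llama-cpp-python is built with Metal, loading a model with GPU offload only takes a few lines. A minimal sketch, assuming you already have a quantized model file locally; the path and parameters below are placeholders:

            from llama_cpp import Llama

            # n_gpu_layers=-1 offloads all layers to the GPU (Metal on Apple silicon)
            llm = Llama(
                model_path="./models/llama-13b.q4_k_m.gguf",  # placeholder path
                n_gpu_layers=-1,
                n_ctx=2048,
            )

            out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
            print(out["choices"][0]["text"])

        text-generation-webui wires up the same backend behind its UI, so the webui route and the plain-Python route should give comparable speed.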