I am seeking recommendations for alternative free, privacy-focused AI chatbots that can either be accessed via a web browser or installed as offline software through a user-friendly process that does not require coding knowledge.

Specifically, I am interested in exploring options similar to DuckDuckGo or Venice, which prioritize user data protection and anonymity. Additionally, I would appreciate suggestions for offline AI chatbots that can be easily installed on various operating systems without requiring technical expertise.

Notably, while there are numerous resources for evaluating proxies and other privacy-focused software, there appears to be a lack of comprehensive lists or reviews focused specifically on free AI chatbots that prioritize user privacy. If such resources exist, I would appreciate any guidance or recommendations.

    • @474D
      22 days ago

      LM Studio is what I use; extremely simple and runs well

      • fmstrat
        1 day ago

        I’m trying to switch to this from Ollama after seeing the benchmarks; it’s so much faster. But it has given me nothing but CUDA incompatibility issues, where Ollama runs smooth as butter. Hopefully I get some feedback on my repo discussion. Same Docker setup as my working Ollama, but Ollama has much more detailed docs.

        Ignore that, thought you said LMDeploy.
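
        For anyone following along with the Ollama route anyway: once the container is up (the stock `ollama/ollama` image listens on port 11434), you can talk to it from plain Python with no extra libraries. A minimal sketch, assuming the server is running locally and a model named `llama3` has already been pulled; swap in whichever model you actually use:

        ```python
        import json
        import urllib.request

        # Ollama's HTTP API listens on port 11434 by default.
        # Assumes `llama3` was pulled beforehand (e.g. `ollama pull llama3`).
        payload = json.dumps({
            "model": "llama3",
            "prompt": "In one sentence, what is a local LLM?",
            "stream": False,  # return one JSON object instead of a stream
        }).encode()

        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.loads(resp.read())["response"])
        ```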

  • sunzu2
    32 days ago

    Additionally, I would appreciate suggestions for offline AI chatbots that can be easily installed on various operating systems without requiring technical expertise.

    The path you’re heading down will likely require you to upskill at some point.

    Good luck!

    But there are a decent number of options for a non-tech person to run a local LLM; unless you’ve got a good gaming PC, i.e. a high-end graphics card with plenty of VRAM, it ain’t as usable.

    A CPU/RAM-only setup is too slow for chatbot use… Maybe Apple Silicon could work, but I’m not sure; it does have better memory bandwidth than traditional PC architectures.
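
    To put rough numbers on the hardware question: a model’s weights need roughly (parameter count × bytes per weight) of memory, and quantization is what shrinks that enough to fit consumer hardware. A back-of-the-envelope sketch; the 1.2× overhead factor for the KV cache and runtime buffers is my own rough assumption, not a measured figure:

    ```python
    # Back-of-the-envelope RAM/VRAM estimate for loading an LLM.
    # bytes_per_weight: ~2.0 for fp16, ~0.5 for 4-bit quantization.
    # The 1.2 overhead factor (KV cache, runtime buffers) is a rough guess.
    def estimate_gb(params_billions: float, bytes_per_weight: float) -> float:
        return params_billions * bytes_per_weight * 1.2

    for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
        print(f"{name}: ~{estimate_gb(params, 0.5):.0f} GB at 4-bit, "
              f"~{estimate_gb(params, 2.0):.0f} GB at fp16")
    ```

    On those numbers a 4-bit 7B model fits in roughly 4 GB, while a 4-bit 70B model wants around 42 GB, which lines up with the 64 GB Apple Silicon report below.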

    • @[email protected]
      22 days ago

      I can confirm that Apple silicon works for running the largest Llama models. 64GB of RAM. Dunno if it would work with less as I haven’t tried. It’s the M1 Max chip, too. Dunno how it’d do on the vanilla chips.

  • @[email protected]
    32 days ago

    Look up LM Studio. It’s free software that lets you easily install and run local LLMs. Note that you need a good graphics card and a lot of RAM for it to be really useful.
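
    Worth adding: beyond the chat GUI, LM Studio can expose whatever model you’ve loaded through an OpenAI-compatible local server (default port 1234), so scripts stay on your machine too. A minimal sketch, assuming the `openai` Python package is installed and the local server is running; the model name here is a placeholder for whatever you have loaded:

    ```python
    from openai import OpenAI

    # LM Studio's local server speaks the OpenAI API on port 1234 by default.
    # No real key is needed for a local server; any placeholder string works.
    client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

    reply = client.chat.completions.create(
        model="local-model",  # placeholder; LM Studio serves the loaded model
        messages=[{"role": "user", "content": "Say hi in five words."}],
    )
    print(reply.choices[0].message.content)
    ```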

  • @[email protected]
    22 days ago

    I’ve been very happy with GPT4All. It’s open source and privacy-focused, since everything runs on your own hardware. It provides a clean GUI for downloading various LLMs to chat with.
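
    If you ever outgrow the GUI, GPT4All also ships Python bindings that use the same locally stored models, so everything still runs offline. A minimal sketch, assuming `pip install gpt4all`; the model filename is just an example from their catalog and will be downloaded on first use if it isn’t already present:

    ```python
    from gpt4all import GPT4All

    # Loads (and on first run downloads) a quantized local model file;
    # the filename below is an example from GPT4All's model catalog.
    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

    # chat_session keeps conversation history for follow-up prompts.
    with model.chat_session():
        print(model.generate("Why run an LLM locally?", max_tokens=120))
    ```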