I’m not a huge fan of AI, but I do use it occasionally. I found a tool on Flathub that I’ve been trying out, so I thought I would share it in case any of you need something like it. The interface is also pretty nice if you use GNOME (it doesn’t look as nice on KDE). It’s unstable on my Arch install.

      • @[email protected]
        link
        fedilink
        English
        122 months ago

        Way more models available, faster in my experience, more reliable, a local ChatGPT-compatible API, and advanced fine-tuning features. There have been some additions to Alpaca since I last used it, so maybe I’ll try it again soon, but since I don’t use it regularly I stick with GPT4All because it just works, and when I tried Alpaca, it didn’t.

        • @[email protected]
          link
          fedilink
          52 months ago

          I’d say Alpaca is pretty much identical these days; the only major difference is the interface. If you need more power, running something like Open WebUI with Ollama makes more sense.
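
          As I understand it, both Alpaca and Open WebUI ultimately talk to an Ollama server, which also exposes an OpenAI-compatible endpoint. A minimal sketch in Python, assuming a local Ollama on its default port 11434 and a model you have already pulled (the model name here is just an example):

          ```python
          import json
          import urllib.request

          # Ollama's OpenAI-compatible chat endpoint (default port 11434).
          url = "http://localhost:11434/v1/chat/completions"
          payload = {
              "model": "llama3",  # assumes you've already run `ollama pull llama3`
              "messages": [{"role": "user", "content": "Why is the sky blue?"}],
          }

          req = urllib.request.Request(
              url,
              data=json.dumps(payload).encode(),
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              reply = json.loads(resp.read())

          print(reply["choices"][0]["message"]["content"])
          ```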

  • @rsolva · 8 points · 2 months ago

    Nice! It can also connect to a remote instance of ollama 👍
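
    If I understand it right, pointing a client at a remote instance is just a matter of changing the host in the URL; the API is the same as a local one. A tiny sketch, where the hostname is a placeholder for your own server:

    ```python
    import json
    import urllib.request

    # Same Ollama API as a local install; only the host differs.
    # "my-server.local" is a placeholder for your remote machine.
    url = "http://my-server.local:11434/api/generate"
    payload = {"model": "llama3", "prompt": "Hello!", "stream": False}

    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```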

    • @[email protected]
      link
      fedilink
      292 months ago

      Local AI models are a thing, you know. Just run this on a computer with no internet access and you’ll swiftly see whether it relies on servers or not.

    • Match!! · 13 points · 2 months ago

      so like, turn your network connection off and try it

          • Everett · 3 points · 2 months ago

            I can run Llama 3 on my desktop with a 3060; answers are near-instant.

        • Match!! · 13 points · 2 months ago

          Model training is often exorbitantly expensive in energy and storage, but running the model is not: GPT-3 cost $7 million in electricity alone to train, but the model itself is “only” about 500 GB and can be run locally on CPU processing alone.
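
          To make the CPU-only point concrete, here is a minimal sketch using the llama-cpp-python bindings; the model path is a placeholder, and n_gpu_layers=0 keeps inference entirely on the CPU:

          ```python
          from llama_cpp import Llama  # pip install llama-cpp-python

          # Load a quantized GGUF model entirely on the CPU.
          # The path is a placeholder; point it at any GGUF model you've downloaded.
          llm = Llama(model_path="./models/llama-3-8b-q4.gguf", n_gpu_layers=0)

          out = llm("Q: Why is the sky blue? A:", max_tokens=64)
          print(out["choices"][0]["text"])
          ```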

        • @[email protected]
          link
          fedilink
          English
          62 months ago

          You can run a basic model on pretty mid-range hardware; the smaller ones are only 1-2 GB in size.

        • @k4j8 · 3 points · 2 months ago

          To be fair, the first time I tried running local AI (and it actually worked), I was so surprised that I unplugged my Ethernet and tried again. I’m still surprised, but it is possible for those massive amounts of training data to be compressed into a model of only 10 or 20 GB.

    • z3rOR0ne · 7 points · 2 months ago (edited)

      Just use firejail to sandbox it and find out.
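
      For reference, firejail's --net=none flag gives the process a fresh network namespace with only a loopback interface, so any attempt to reach an outside server fails immediately. A sketch of that test from Python, where the launch command is a placeholder:

      ```python
      import subprocess

      # --net=none: only a loopback interface exists inside the sandbox,
      # so any attempt to phone home fails instantly. "alpaca" is a
      # placeholder for whatever command launches the app on your system.
      subprocess.run(["firejail", "--net=none", "alpaca"], check=True)
      ```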

    • @TheGrandNagus · 6 points · 2 months ago

      It absolutely is. I even tested it with WiFi turned off.