Wanted to share a resource I stumbled on that I can’t wait to try and integrate into my projects.

A GPT4All model is a 3–8 GB file that you can download and plug into the GPT4All open-source ecosystem software. Nomic AI supports and maintains this software ecosystem to ensure quality and security, while spearheading the effort to allow any person or enterprise to easily train and deploy their own on-edge large language models.
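
To make the "download a model and plug it in" workflow concrete, here is a minimal sketch using the official gpt4all Python bindings (`pip install gpt4all`); the model filename is just an example, and any entry from the GPT4All model list should work:

```python
from gpt4all import GPT4All

# The first run downloads the model file (a few GB) into the local model directory;
# after that everything runs on-device, with no API key or network call required.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # example model name

with model.chat_session():
    reply = model.generate("Explain what a quantized model is in one sentence.", max_tokens=128)
    print(reply)
```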

  • @[email protected]
    link
    fedilink
    English
    2
    edit-2
    1 year ago

    I've been playing around with it for a couple of weeks, and its local server option made it really easy to use with LangChain. Orca Mini is amazingly fast, though it seems to need well-crafted prompts, which I'm still working out :D. It even lets you see the server-side chat, which is really useful when you chain prompts with LangChain (see the sketch at the end of this thread).

    • @[email protected]OP
      link
      fedilink
      English
      11 year ago

      How does it compare to commercially available options, namely for code generation, text summarization, and answering programming questions? I'm curious whether they trained it on code.

      • @[email protected]
        link
        fedilink
        English
        21 year ago

        I'm really not able to answer this precisely, since I've only used the commercial alternatives to play around with. What I can say is that the "Nous - Vicuna" model didn't feel worse than GPT-3.5 overall (and there are a dozen other models available), just a bit slower, depending on your computer. The GPT4All team also curates its list of models, which is really comfortable given the million new models appearing every day, and the app keeps getting new features. We also chose this system because self-hosting is safer, keeps us in control, and is free. Plus, we try to use the LLM only where it's needed in our small project, so I should be able to give more insight about that later, but overall it is more than usable.

        • @[email protected]OP
          link
          fedilink
          English
          21 year ago

          Thanks for your insight. I, too, hope to reach a conclusion and share it with the community once I've formulated one. Over the next month I hope to get something working.
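
Since the first comment above mentions pairing GPT4All's local server option with LangChain, here is a rough sketch of how that connection can look. GPT4All's local API server speaks an OpenAI-compatible protocol, so LangChain's OpenAI chat wrapper can simply point at it; the port, model name, and dummy API key below are assumptions to check against your own GPT4All server settings:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Point LangChain's OpenAI-compatible client at the local GPT4All server.
llm = ChatOpenAI(
    base_url="http://localhost:4891/v1",   # assumed default port of GPT4All's local API server
    api_key="not-needed",                  # the local server ignores the key, but the client requires one
    model="orca-mini-3b-gguf2-q4_0.gguf",  # whichever model is loaded in the GPT4All app
)

# Chain a prompt template into the local model; the server-side chat view in the
# GPT4All app shows exactly what each chained call sends.
prompt = ChatPromptTemplate.from_template("Summarize in two sentences: {text}")
chain = prompt | llm

print(chain.invoke({"text": "GPT4All runs quantized LLMs locally on CPU or GPU."}).content)
```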