• @[email protected]
    link
    fedilink
    English
    92
    8 months ago

    Also check out LM Studio and GPT4All. Both let you run private ChatGPT alternatives from Hugging Face off your RAM and CPU (and can also offload to the GPU).

    • @Just_Pizza_Crust
      link
      English
      26
      8 months ago

      I’d also recommend Oobabooga if you’re already familiar with Automatic1111 for Stable Diffusion. I’ve found that being able to write the first part of the bot’s response gets much better results and makes it fabricate false info much less.

        • Turun
          link
          fedilink
          English
          3
          8 months ago

          And llamafile, which is a chatbot in a single executable file.

      • @EarMaster
        link
        English
        8
        8 months ago

        I feel like you’re all making these names up… but they were probably suggested by an LLM altogether…

      • @[email protected]
        link
        fedilink
        English
        40
        edit-2
        8 months ago

        Mistral is thought to be almost as good. I’ve used the latest version of Mistral and found the quality of its output more or less identical.

        It’s not as fast, though, as I’m running it off 16 GB of RAM and an old GTX 1060 card.

        If you use LM Studio, I’d say it’s actually better, because you can give it a pre-prompt so that all of its answers stay within predefined guardrails (e.g. you are Glorb the cheese pirate and you have a passion for mink fur coats).
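A pre-prompt is just a system message sent ahead of the user’s turn. Here is a minimal sketch of the OpenAI-style chat payload that LM Studio’s local server accepts; the persona is the one above, and the port in the comment is LM Studio’s usual default, but check your own setup:

```python
import json

def build_chat_payload(preprompt: str, user_msg: str) -> dict:
    """Wrap a guardrail "pre-prompt" and a user message in a chat payload."""
    return {
        "messages": [
            {"role": "system", "content": preprompt},  # the guardrail/persona
            {"role": "user", "content": user_msg},
        ],
        "temperature": 0.7,
    }

payload = build_chat_payload(
    "You are Glorb the cheese pirate and you have a passion for mink fur coats.",
    "Introduce yourself.",
)
print(json.dumps(payload, indent=2))

# To actually send it to a running local server, something like:
# requests.post("http://localhost:1234/v1/chat/completions", json=payload)
```

Every answer the model gives is then conditioned on that system message, which is all the “guardrails” amount to.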

        There’s also the benefit of being able to load in uncensored models if you would like questionable content created (erotica, sketchy instructions on how to synthesize crystal meth, etc).

      • @Hestia
        link
        English
        1
        8 months ago

        Depends on your use case. If you want uncensored output then running locally is about the only game in town.

    • @[email protected]
      link
      fedilink
      English
      7
      8 months ago

      Something I’m really missing is a breakdown of how good these models actually are compared to each other.

      A demo on Hugging Face couldn’t tell me the boiling point of water, while the author’s own example prompt asked for the boiling point of some chemical.

      • @TriPolarBearz
        link
        English
        1
        8 months ago

        Maybe you could ask for the boiling point of dihydrogen monoxide (DHMO), a very dangerous substance.

        More info at DHMO.org

        • @[email protected]
          link
          fedilink
          English
          2
          edit-2
          8 months ago

          I asked about H2O first but got no proper answer.

          I heard dihydrogen monoxide has a melting point below room temperature, and they seem to find it everywhere, causing huge oxidation damage to our infrastructure; it’s even found inside our crops.

          Truly scary stuff.

    • @[email protected]
      link
      fedilink
      English
      5
      8 months ago

      I can’t find a way to run any of these on my home server and access them over HTTP. It looks like it’s possible, but you need a GUI to install it in the first place.

        • @[email protected]
          link
          fedilink
          English
          3
          edit-2
          8 months ago

          (edit: here was wrong information - I apologize to the OP!)

          Plus, a GUI install is not exactly the best for reproducibility, which at least I aim for with my server infrastructure.

          • @[email protected]
            link
            fedilink
            English
            2
            8 months ago

            You don’t need to run an X server on the headless server. As long as the libraries are compiled into the client software (the GUI app), it will work. No GUI needs to be installed on the headless server, and the libraries are already present in any common Linux distro (and support would be compiled into a GUI-only app unless it was Wayland-only).

            I agree that a GUI-only installer is a bad thing, but the parent was saying they didn’t know how it could be done. “ssh -X” (or -Y) is how.

            • @[email protected]
              link
              fedilink
              English
              2
              8 months ago

              That’s a huge today-I-learned for me, thank you! I think I’ll throw xeyes on it just to use ssh -X for the first time in my life. I actually assumed wrong.

              I’ll edit my post accordingly!

  • stevedidWHAT
    link
    English
    79
    8 months ago

    Open source good, together monkey strong 💪🏻

    Build cool village with other frens, make new things, celebrate as village

    • @Zeon
      link
      English
      4
      edit-2
      8 months ago

      It’s free/libre software, which is even better, because it gives you more freedom than just ‘open-source’ software. Make sure to check the licenses of the software you use. The GPL, MIT, and Apache 2.0 are all Free Software licenses. Anyways, together monkey strong 💪

  • @TootSweet
    link
    English
    64
    8 months ago

    It seems like usually when an LLM is called “Open Source”, it’s not. It’s refreshing to see that Jan actually is, at least.

    • @[email protected]
      link
      fedilink
      English
      14
      edit-2
      8 months ago

      Jan is just a frontend. It supports various models under multiple licences. It also supports some proprietary models.

  • WetFerret
    link
    English
    10
    8 months ago

    I would also recommend faraday.dev as a way to try out different models locally, using either the CPU or GPU. I believe they have a build for every desktop OS.

  • @randon31415
    link
    English
    8
    8 months ago

    I have recently been playing with llamafiles, particularly LLaVA, which, as far as I know, is the first multimodal open-source LLM (others might exist; this is just the first one I have seen). I was having it look at pictures of prospective houses I want to buy and asking if it sees anything wrong with them.

    The only problem I ran into is that Windows 10 cmd doesn’t like the sed command, and I don’t know of an alternative.

    • @ripcord
      link
      English
      2
      8 months ago

      Install Cygwin and put it in your PATH.

      You can use grep, awk, sed, etc. from either bash or the Windows command prompt.

      • @randon31415
        link
        English
        1
        8 months ago

        Wait, can you just install sed?

        • @Falcon
          link
          English
          1
          8 months ago

          If you can find a copy, yeah. GNU sed isn’t written for Windows, but I’m sure you can find another version of sed that targets Windows.
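If hunting down a Windows sed build is a pain, Python can stand in for simple substitutions, assuming it’s installed. A rough sketch; the text and pattern below are just examples:

```python
import re

def sed_substitute(text: str, pattern: str, replacement: str) -> str:
    """Rough stand-in for `sed 's/pattern/replacement/g'` on one string."""
    return re.sub(pattern, replacement, text)

print(sed_substitute("llama file is a chat bot", r"chat bot", "chatbot"))
```

From cmd you can wire the same idea to files with `python -c` plus stdin/stdout redirection, which covers most of what the llamafile instructions use sed for.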

  • ElPussyKangaroo
    link
    English
    5
    8 months ago

    Any recommendations from the community for models? I use ChatGPT for light work like touching up a draft I wrote, etc. I also use it for data-related tasks like reorganization, identification, etc.

    Which model would be appropriate?

    • @Falcon
      link
      English
      8
      8 months ago

      Mistral-7B is a good compromise of speed and intelligence. Grab it as a 4-bit GPTQ quant.

    • Infiltrated_ad8271
      link
      fedilink
      12
      8 months ago

      The question is quickly answered: none is currently that good, open or not.

      Anyway, it seems that this is just a manager. I see some competitors available that I’ve heard good things about, like Mistral.

    • @Falcon
      link
      English
      5
      edit-2
      8 months ago

      Many are close!

      In terms of usability, though, they are better.

      For example, ask GPT-4 for an example of cross-site scripting in Flask and you’ll get an ethics discussion. Grab an uncensored model off Hugging Face and you’re off to the races.

      • tubbadu
        link
        fedilink
        English
        1
        8 months ago

        Seems interesting! Do I need high-end hardware, or can I run them on the old laptop I use as a home server?

        • @Falcon
          link
          English
          1
          8 months ago

          Oh no, you need a 3060 at least :(

          It requires CUDA. These models are essentially large mathematical equations that compute the probability of the next word.

          The equations are derived by trying different combinations of values until one works well (this is the “learning” in machine learning). The trick is changing the numbers in a way that gets better each time (see e.g. gradient descent).
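The “gets better each time” trick can be sketched in a few lines. A toy example, minimizing a made-up one-parameter loss rather than a real language model:

```python
# Gradient descent on a single parameter w, minimizing (w - 3)**2.
# Real models adjust billions of parameters, but the update rule is the same idea.
def loss(w):
    return (w - 3.0) ** 2

def gradient(w):
    return 2.0 * (w - 3.0)  # derivative of the loss with respect to w

w = 0.0   # arbitrary starting guess
lr = 0.1  # learning rate: how big each adjustment is
for _ in range(100):
    w -= lr * gradient(w)  # step in the direction that lowers the loss

print(round(w, 4))  # ends up at 3.0, the minimum of the loss
```

Each step nudges w a little in whichever direction makes the loss smaller, which is all “learning” means here.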

          • @ripcord
            link
            English
            2
            8 months ago

            How’s the guy who said he’s running off a 1060 doing it?

              • @ripcord
                link
                English
                2
                8 months ago

                Then you don’t need a 3060 at least

          • tubbadu
            link
            fedilink
            English
            1
            8 months ago

            Oh this is unfortunate ahahahaha
            Thanks for the info!

        • @SpiceDealer
          link
          English
          9
          8 months ago

          And well deserved too. I’m not even mad. I really should real the actual article.

          • @ripcord
            link
            English
            3
            8 months ago

            I really should real the actual article.

            And proofread.

    • @Fades
      link
      English
      25
      edit-2
      8 months ago

      It’s literally in the goddamn title, just click the fucking link Jesus Christ

      Shit, don’t even have to click the link it’s in the fuckin url