• @joneskind
    English
    9
    7 months ago

    That’s a nice hobby

    I would suggest installing a local instance of an LLM (mistral or llama3, for example) to widen your sources of information. Go straight to Wikipedia instead of “googling” things, if you don’t already.

    Anyway, I didn’t know about Kagi, so I might take my own advice and give it a try.

    • peopleproblems
      English
      8
      7 months ago

      How big are they, and what do I need to use them well?

      • @[email protected]
        English
        4
        7 months ago

        Take a look at Ollama.ai and just follow the installation instructions. A decent GPU is recommended, and the models are around 10GB iirc.
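
        Once it’s installed and you’ve pulled a model (for example with “ollama pull llama3”), you can query its local HTTP API from a script. Rough sketch in Python, assuming the default port 11434 and the llama3 model:

        # Minimal sketch: ask a question of a local Ollama server.
        # Assumes the server is running and "ollama pull llama3" was done.
        import requests

        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "llama3",
                "prompt": "Explain in one sentence what a local LLM is.",
                "stream": False,  # one JSON object instead of a token stream
            },
            timeout=120,
        )
        resp.raise_for_status()
        print(resp.json()["response"])

        If you just want a chat prompt in the terminal, “ollama run llama3” does that without any code.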

      • @joneskind
        English
        2
        7 months ago

        Most 7B-8B models run just fine with 4-bit quantization and won’t use more than 4 or 5 GB of VRAM (rough math below).

        The only metric that really matters is the amount of VRAM, because the whole model has to fit in VRAM for fast inference.

        You can fall back to CPU and system RAM, but it is painfully slow.

        If you have an Apple Silicon Mac it can be even simpler, since the GPU shares the system’s unified memory.
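
        Back-of-the-envelope numbers behind the “4 or 5 GB” claim, counting weights only (the KV cache and runtime overhead add a bit on top):

        # Rough VRAM needed just for the quantized weights of a model.
        def weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
            bytes_total = params_billions * 1e9 * bits_per_weight / 8
            return bytes_total / 1e9  # decimal GB

        for params in (7, 8):
            for bits in (4, 8, 16):
                print(f"{params}B @ {bits}-bit ≈ {weight_vram_gb(params, bits):.1f} GB")

        # 7B @ 4-bit ≈ 3.5 GB and 8B @ 4-bit ≈ 4.0 GB, which lands in the
        # 4-5 GB range once you add some overhead.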

        • Venia Silente
          English
          2
          7 months ago

          I have an Intel Celeron Mobile laptop with an iGPU and, I think, 256 MB of VRAM. How many Bs does that get me for the LLM?

          Only half-joking. That’s my still-functional old daily driver, now serving as a homelab.

          • @joneskind
            English
            2
            7 months ago

            Well, I’ve got good news and bad news.

            The bad news is that you won’t do shit with that, my dear friend.

            The good news is that you won’t need it because the duck is back.