I have experience running servers, but I'd like to know whether this is feasible; I just need a private, self-hosted LLM running with roughly GPT-3.5-level quality.
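
For context, the kind of setup I mean: tools like llama.cpp's `llama-server` or Ollama can expose an OpenAI-compatible HTTP API, so querying a private model could look like the sketch below (the host, port, and model name are placeholders, not anything I'm actually running yet):

```python
import requests

# Sketch of querying a self-hosted model through an OpenAI-compatible
# endpoint (llama.cpp's `llama-server` and Ollama both offer one).
# BASE_URL and the model name are placeholders for whatever you run.
BASE_URL = "http://localhost:8080/v1"

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": "local-model",  # some servers ignore this, others require it
        "messages": [
            {"role": "user", "content": "Hello from my private LLM server!"},
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```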

    • @TheBigBrotherOP · 5 months ago

      Yeah, I did see something related to what you mentioned and I was quite interested. What about quantized models?

      • @[email protected]
        link
        fedilink
        English
        35 months ago

        A quantized model with more parameters is generally better than a full-precision model with fewer. If you squeeze a 14B-parameter model down to 4-bit integer quantization, it'll still generally outperform a 7B-parameter equivalent at 16-bit floating point.
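
        To make the memory side of that concrete, here's a rough back-of-the-envelope calculation. Real quantized files run somewhat larger, since scales and some tensors stay at higher precision, so treat these as lower-bound estimates:

        ```python
        # Approximate weight memory: parameters * bits per weight / 8 bits per byte.
        def weight_memory_gb(params_billions: float, bits_per_weight: float) -> float:
            """Rough weight storage in GB, ignoring quantization overhead."""
            return params_billions * 1e9 * bits_per_weight / 8 / 1e9

        print(f"14B @ 4-bit int : ~{weight_memory_gb(14, 4):.0f} GB")  # ~7 GB
        print(f" 7B @ 16-bit fp : ~{weight_memory_gb(7, 16):.0f} GB")  # ~14 GB
        ```

        So the 4-bit 14B model needs about half the memory of the FP16 7B model while usually benchmarking better, which is why quantized-but-bigger tends to win.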

        • @TheBigBrotherOP · 5 months ago

          Interesting information, mate. I'm reading up on the subject, thx for the help 👍👍