Abacus.ai:

We recently released Smaug-72B-v0.1, which has taken first place on Hugging Face's Open LLM Leaderboard. It is the first open-source model to achieve an average score above 80.

  • @[email protected]
    link
    fedilink
    English
    1010 months ago

    Unless you’re getting used datacenter-grade hardware for next to free, I doubt this. You need 130 GB of VRAM across your GPUs.

    • ivanafterall · 6 points · 10 months ago

      So can I run it on my Radeon RX 5700? I overclocked it some and am running it as a 5700 XT, if that helps.

      • @[email protected] · 2 points · 10 months ago

        To run this model locally at GPT-4 writing speed you need at least 2 x 3090 or 2 x 7900 XTX. VRAM is the limiting factor in 99% of cases for inference. You could try a smaller model like Mistral-Instruct or SOLAR with your hardware, though.
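The VRAM figures thrown around in the thread follow from a simple back-of-envelope calculation: weight memory is roughly parameter count times bytes per parameter. A minimal sketch (the function name is illustrative; this ignores KV cache and activation overhead, which add more on top):

```python
def weight_vram_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate decimal GB needed just to hold the model weights.

    Ignores KV cache, activations, and framework overhead, so real
    requirements are somewhat higher than this estimate.
    """
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

# A 72B-parameter model at common precisions:
for label, bits in [("fp16", 16), ("8-bit", 8), ("4-bit", 4)]:
    print(f"72B at {label}: ~{weight_vram_gb(72, bits):.0f} GB")
# fp16 → ~144 GB, 8-bit → ~72 GB, 4-bit → ~36 GB
```

This is why the ~130 GB figure above corresponds to near-full precision, while a 4-bit quantized copy fits across 2 x 24 GB cards (hence the 2 x 3090 / 2 x 7900 XTX suggestion, with headroom left for the KV cache).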