Leaderboard scores can often be misleading, since there are other factors to consider:

  • Censorship: Is the model censored?
  • Verbosity: How concise is the output?
  • Intelligence: Does the model know what it is talking about?
  • Hallucination: How often does the model make up facts?
  • Domain Knowledge: What specializations does the model have?
  • Size: Which models are best at 70B, 30B, and 7B respectively?

And much more! What models do you use and would recommend to everyone?

The model that has personally caught my attention the most is the original 65B LLaMA. It seems genuine and truly has a personality. Everyone should chat with the original, non-fine-tuned version if they get a chance. It’s an experience that is quite unique within the sea of “As an AI language model” OpenAI tunes.
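If you want to try the raw base model yourself, here’s a minimal sketch of how I’d prompt it as a plain text completer with Hugging Face transformers. This assumes you already have the weights converted to HF format locally; the model path, prompt, and sampling settings below are just placeholders:

```python
# Minimal sketch: chatting with a base (non-fine-tuned) model as a plain
# text completer. Assumes the 65B weights are already converted to Hugging
# Face format at a local path (placeholder below) and that you have enough
# memory to load them (here via bitsandbytes 4-bit quantization).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-65b-hf"  # placeholder, not a real repo id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",   # spread the model across available GPUs/CPU
    load_in_4bit=True,   # requires bitsandbytes; drop if you have the VRAM
)

# A base model has no chat template: you set the scene and let it continue.
prompt = (
    "A conversation between a curious human and a stranger.\n"
    "Human: Hello, who are you?\n"
    "Stranger:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.8,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since a base model is a pure completer, it will happily continue both sides of the conversation, so you may want to cut generation off at the next “Human:” or trim the output yourself.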

  • @Audalin

    Wizard-Vicuna-30B-Uncensored works pretty well for most purposes. It feels like the smartest of all the models I’ve tried. Even when it hallucinates, it gives me enough to refine a Google search on some obscure topic. As usual, hallucinations are also easily counteracted by light, non-argumentative gaslighting.

    It isn’t very new, though. What’s the current SOTA for universal models of similar size (both foundation and chat-tuned)?