Meta has released Llama 3.1. It seems to be a significant improvement over an already quite good model: it is now multilingual, has a 128K-token context window, has some form of tool-calling support and, overall, performs better on benchmarks than its predecessor.

With this new version, they also released a 405B-parameter model, along with updated 70B and 8B versions.

I’ve been using the 3.0 version and was already satisfied with it, so I’m excited to try this one.

  • @[email protected]
    4 months ago

    It doesn’t follow instructions and insists on being “conversational” despite being told not to be.

    • RachelRodent
      4 months ago

      That is the base model. Wait for people to fine-tune it for specific tasks.