• @x00z
    2 months ago

    I tried llama3.1:8b and it’s absolutely horrible.

    • @brucethemoose
      2 months ago

      You can use larger “open” models through free or dirt-cheap APIs though.
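
      For example, anything with an OpenAI-compatible endpoint (OpenRouter and the like) works with the standard client. A minimal sketch, assuming the `openai` Python package; the model slug and key variable name below are just placeholders, not exact:

      ```python
      # Hedged sketch: call a larger open-weight model through a cheap
      # OpenAI-compatible API. The model slug and env var name are
      # illustrative placeholders, not confirmed values.
      import os

      from openai import OpenAI

      client = OpenAI(
          base_url="https://openrouter.ai/api/v1",  # any OpenAI-compatible endpoint works here
          api_key=os.environ["OPENROUTER_API_KEY"],  # placeholder key name
      )

      resp = client.chat.completions.create(
          model="qwen/qwen-2.5-72b-instruct",  # placeholder slug for a larger open model
          messages=[{"role": "user", "content": "Hello from a hosted open model"}],
      )
      print(resp.choices[0].message.content)
      ```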

      TBH local LLMs are still kinda “meh” unless you have a high-VRAM GPU. I agree that 8B is kinda underwhelming, but the step up to something like Qwen 14B is enormous.
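
      If you want to try that step up locally, here's a rough sketch using the `ollama` Python client, assuming the model tag has already been pulled (e.g. `ollama pull qwen2.5:14b`); the tag is illustrative:

      ```python
      # Hedged sketch: chat with a local ~14B model via Ollama.
      # Assumes `pip install ollama` and a running Ollama server.
      import ollama

      response = ollama.chat(
          model="qwen2.5:14b",  # illustrative tag for the "step up" size
          messages=[{"role": "user", "content": "How much VRAM does a 14B model need?"}],
      )
      print(response["message"]["content"])
      ```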