• @x00z
    1 month ago

    I tried llama3.1:8b and it’s absolutely horrible.

    • @brucethemoose
      30 days ago

      You can use larger “open” models through free or dirt-cheap APIs though (see the sketch below).

      TBH local LLMs are still kinda “meh” unless you have a high-VRAM GPU. I agree that 8B is underwhelming, but the step up to something like Qwen 14B is enormous.
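
      For the curious, here is a minimal sketch of what “through an API” can look like, assuming an OpenAI-compatible provider such as OpenRouter. The model slug and environment-variable name are illustrative, not prescriptive; check your provider’s docs for the real values.

      ```python
      import os

      from openai import OpenAI

      # Any OpenAI-compatible endpoint works; OpenRouter is one example.
      client = OpenAI(
          base_url="https://openrouter.ai/api/v1",
          api_key=os.environ["OPENROUTER_API_KEY"],  # assumed env-var name
      )

      resp = client.chat.completions.create(
          # Illustrative slug; pick any hosted open model from the provider's catalog.
          model="qwen/qwen-2.5-72b-instruct",
          messages=[{"role": "user", "content": "Why does more VRAM help with local LLMs?"}],
      )
      print(resp.choices[0].message.content)
      ```

      The point is that the client code is identical to talking to a local server; only the base URL, key, and model name change, so swapping between a hosted open model and a local one is a config change, not a rewrite.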