• @brucethemoose
      1 month ago

      Yep.

      32B fits on a “consumer” 3090 (24 GB) when quantized, and I use it every day.
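
      A back-of-envelope sketch of why that works — the helper function and the ~4.5 bits/weight figure (roughly a Q4_K_M-style quant) are my own illustrative assumptions, not from the comment:

      ```python
      # Rough VRAM estimate: weights at a given quantization width,
      # plus a flat allowance for KV cache and runtime overhead.
      # All numbers are illustrative assumptions.

      def vram_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 3.0) -> float:
          weights_gb = params_b * bits_per_weight / 8  # 8 bits/weight ~= 1 GB per 1B params
          return weights_gb + overhead_gb

      print(round(vram_gb(32, 4.5), 1))  # 21.0 -> fits a 24 GB 3090
      print(round(vram_gb(72, 4.5), 1))  # 43.5 -> needs ~2x 24 GB, or a big unified-memory APU
      ```

      By the same arithmetic, a 72B model at ~4.5 bits/weight lands in the 40+ GB range, which is why it's out of reach for a single 3090 but plausible for 2025 APUs with large unified memory.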

      72B will fit neatly on 2025 APUs, though we may have an even better update by then.

      I’ve been using local LLMs for a while, but Qwen 2.5, specifically 32B and up, really feels like an inflection point to me.

    • @brucethemoose
      1 month ago

      BTW, as I wrote that post, Qwen 32B coder came out.

      Now a single 3090 can beat GPT-4o at coding, specifically, and do it way faster!

      • @A_A
        1 month ago

        Great news 😁🥂, someone should make a new post on this!