@[email protected] to TechnologyEnglish • 10 days agoOpenAI and others seek new path to smarter AI as current methods hit limitationswww.reuters.commessage-square15fedilinkarrow-up156arrow-down14cross-posted to: [email protected]
arrow-up152arrow-down1external-linkOpenAI and others seek new path to smarter AI as current methods hit limitationswww.reuters.com@[email protected] to TechnologyEnglish • 10 days agomessage-square15fedilinkcross-posted to: [email protected]
@A_A (English) • 10 days ago
… “Alibaba (LLM)” … is it this ? … ?
Qwen2.5: A Party of Foundation Models!
https://qwenlm.github.io/blog/qwen2.5/
@brucethemoose (English) • 10 days ago
Yep. 32B fits on a “consumer” 3090, and I use it every day. 72B will fit neatly on 2025 APUs, though we may have an even better update by then. I’ve been using local LLMs for a while, but Qwen 2.5, specifically 32B and up, really feels like an inflection point to me.
@brucethemoose (English) • edited • 10 days ago
BTW, as I wrote that post, Qwen 32B Coder came out. Now a single 3090 can beat GPT-4o, and do it way faster! In coding, specifically.
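For context, the “32B fits on a 3090” claim comes down to simple arithmetic: at roughly 4-bit quantization, a 32B-parameter model’s weights occupy well under the 3090’s 24 GB of VRAM. A minimal back-of-the-envelope sketch (the 4.5 bits/weight figure and the overhead constant are my assumptions, typical for a ~Q4 quant, not numbers from this thread):

```python
# Rough VRAM estimate for running a quantized LLM locally.
# Assumptions (not from the thread): ~4.5 bits per weight on average
# (typical for a ~Q4 GGUF quant) plus a few GB for KV cache and runtime overhead.

def vram_estimate_gb(params_billion: float, bits_per_weight: float = 4.5,
                     overhead_gb: float = 3.0) -> float:
    """Approximate GB of VRAM needed for weights plus runtime overhead."""
    weights_gb = params_billion * bits_per_weight / 8  # bits -> bytes -> GB
    return weights_gb + overhead_gb

RTX_3090_VRAM_GB = 24

print(vram_estimate_gb(32))  # → 21.0 GB: fits on a 24 GB 3090
print(vram_estimate_gb(72))  # → 43.5 GB: needs multiple GPUs or a large unified-memory APU
```

Under these assumptions a 32B model squeezes onto a single 24 GB card, while 72B does not, which matches the comment’s point about waiting for 2025 APUs with larger memory pools.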
Great news 😁🥂, someone should make a new post on this!