Also, competition is stiff. Alibaba is currently handing them their butts with Qwen 2.5. DeepSeek (a Chinese startup), Tencent, and Mistral (French) are giving them a run for their money too, and some labs even "continue-train" their old weights.
A small startup called Arcee AI actually "distilled" logits from several other models (Llama, Mistral) and used the data to continue-train Qwen 2.5 14B (which is itself Apache 2.0). The result is called SuperNova Medius, and it's quite incredible for a 14B model… SOTA as far as I know, even with their meager GPU resources.
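For anyone unfamiliar: logit distillation means training the student to match the teacher's full output distribution over the vocabulary, not just the top token. Here's a minimal NumPy sketch of the classic temperature-softened KL objective — a generic illustration of the technique, not Arcee's actual pipeline:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the vocabulary axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened logits.

    Scaled by T^2 so gradient magnitudes stay comparable to a
    hard-label cross-entropy loss (the standard convention)."""
    p = softmax(teacher_logits, T)  # teacher's softened distribution
    q = softmax(student_logits, T)  # student's softened distribution
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * T * T)

# Toy example: one token position, vocabulary of 4 tokens
teacher = np.array([[4.0, 1.0, 0.5, 0.1]])
student = np.array([[3.5, 1.2, 0.4, 0.2]])
loss = distill_loss(student, teacher)  # small positive number
```

In practice the teacher's logits are precomputed and stored (often sparsified to the top-k of the vocabulary) so that distillation can run without keeping the teacher in memory.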
A company called Upstage "expands" models to larger parameter counts by continue-training them. Look up the SOLAR series.
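If I understand the SOLAR paper correctly, their "depth up-scaling" is roughly: take two copies of the base model's layer stack, drop the overlapping middle layers, stack the remainder into one deeper model, and continue-train it. A toy sketch, with integer indices standing in for transformer blocks:

```python
def depth_upscale(layers, overlap):
    """Depth up-scaling sketch (the idea behind Upstage's SOLAR 10.7B).

    From two copies of an n-layer stack, keep the bottom (n - overlap)
    layers of the first copy and the top (n - overlap) layers of the
    second, yielding a 2*(n - overlap)-layer model to continue-train.
    """
    n = len(layers)
    assert 0 < overlap < n
    return layers[: n - overlap] + layers[overlap:]

# SOLAR's reported numbers: a 32-layer base with overlap 8 -> 48 layers
base = list(range(32))
scaled = depth_upscale(base, 8)  # 48 layers
```

The point of dropping the overlap (rather than naively stacking both full copies) is that the seam between the two copies is less jarring, so the continued pretraining converges faster.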
And quite notably, Nvidia continue-trained Llama 3.1 70B and published the weights as Nemotron 70B. It was the best 70B model for a while, and may still be in some areas.
And some companies, like Cohere, continuously train the same model slowly and offer it over an API, but occasionally publish the weights as promotion.
The fact that AI with open-source licenses exists is already a good thing, as is the competition. In my opinion it is not enough, though, because the sector can still consolidate further into an oligopoly.
Trying to prevent OpenAI from becoming a for-profit seems like a questionable tactic to me. It's as if Mozilla wanted to become a for-profit company in order to make Firefox more competitive with Chrome, and Google opposed the move.
Well, for one, I directly disagree with Altman's fundamental premise: they don't need to "scale" AI so dramatically to make it better.
See Qwen 2.5 from Alibaba: a fraction of the size, made with a tiny fraction of the H100 GPUs, highly competitive, and (mostly) Apache licensed. And frankly, OpenAI is pointedly ignoring all sorts of open research that could make their models notably better or more efficient, even with the vast resources and prestige they have… they seem most interested in anticompetitive efforts to regulate competitors who would make them look bad, using the spectre of actual AGI (which has nothing to do with transformer LLMs) to scare people.
Even if it did so for the wrong reasons, I feel Google would be right to oppose Mozilla axing its nonprofit division if Mozilla were somehow in a position similar to OpenAI's. Its stated mission of producing a better, safer browser would basically become a lie.
OpenAI has different priorities: they want to achieve AGI, so they explore the capabilities of AI rather than looking at what the competition does in those directions in order to replicate and/or improve on it. They only optimize to make their services faster and less resource-hungry.
Also, becoming a for-profit organization doesn't mean eliminating your nonprofit division. The two parts separate and become independent, although the nonprofit ends up receiving considerable funds from the for-profit side's fundraising.
That is the case with Mastercard, whose nonprofit foundation is one of the richest in the world. In that scenario Mozilla would split into two entities: one would focus on making a profit and making Firefox more competitive, while the other would keep doing what Mozilla does today.