The narrative now being pushed by OpenAI, Microsoft, and freshly minted White House “AI czar” David Sacks to explain why DeepSeek was able to create a large language model that outpaces OpenAI’s while spending orders of magnitude less money and using older chips is this: DeepSeek used OpenAI’s data unfairly and without compensation. Sound familiar?
Both Bloomberg and the Financial Times are reporting that Microsoft and OpenAI have been probing whether DeepSeek improperly trained its R1 model, which is taking the AI world by storm, on the outputs of OpenAI models.
It is, as many have already pointed out, incredibly ironic that OpenAI, a company that has been obtaining large amounts of data from all of humankind largely in an “unauthorized manner,” and, in some cases, in violation of the terms of service of those from whom it has been taking, is now complaining about the very practices by which it has built its company.
OpenAI is currently being sued by the New York Times for training on its articles, and its argument is that this is perfectly fine under copyright law’s fair use protections.
“Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness,” OpenAI wrote in a blog post. In its motion to dismiss in court, OpenAI wrote “it has long been clear that the non-consumptive use of copyrighted material (like large language model training) is protected by fair use.”
If OpenAI argues that it is legal for the company to train on whatever it wants for whatever reason it wants, then it stands to reason that it doesn’t have much of a leg to stand on when competitors use common strategies from the world of machine learning to make their own models.
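The strategy at issue here is reportedly distillation: training a smaller or cheaper “student” model to imitate the output distribution of a stronger “teacher” model. Nothing public confirms exactly how DeepSeek trained R1, so the sketch below is a generic illustration of the technique, not a description of their pipeline; the function names and the temperature value are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, softened by temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened output distribution and
    the student's. Minimizing this pushes the student to mimic the teacher."""
    p = softmax(teacher_logits, temperature)  # teacher's "soft targets"
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# When the student already matches the teacher, the loss is zero;
# any disagreement produces a positive loss to train against.
teacher = np.array([1.0, 2.0, 3.0])
student = np.array([0.5, 0.5, 2.0])
loss = distillation_loss(teacher, student)
```

A higher temperature flattens both distributions, exposing the teacher’s relative preferences among wrong answers, which is much of what makes distillation effective compared to training on hard labels alone.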
Because you could charge more for “AiPUs” than you already are for GPUs since capitalists have brain rot. Maybe we just need to invest in that open source GPU project if it’s still around.
That’s what I said.
If a GPU and a hypothetical AiPU are the same tech, but nVidia could charge more for the AiPU, then why would they make and sell GPUs?
It’s the same reason why they don’t clamp down on their pricing now: they don’t care if you are able to buy a GPU, they care that Twitter or Tesla or OpenAI are buying them 10k at a time.
Yeah and then in this “free market” system someone can come make cheaper GPUs marketed at gamers and there ya go. We live again.
Except “free market” ideals break down when there are high barriers to entry, like… chip fabrication.
Also, that’s already what’s happening? If you don’t want to pay for nVidia, you can get AMD or Intel ARC for cheaper. So again, there’s literally no reason for nVidia to change what they’re doing.
I know you’re right. But I’m just making pro consumer suggestions, like anybody but us scrubs at the bottom gives a fuck about those. Moving the marketing to a different component would lower the perceived and real value of GPUs for us lowly consumers to once again partake. But it’s not like it matters because we’re at some strange moment in time where the VRAM on cards isn’t matching what the games say they need.