The whole “all AI is bad” stance is disconnected from reality and amounts to primitivism.
John J. Hopfield’s work is SCIENCE in caps. A decade of investigation during the 80s, when computational power couldn’t really do much with those models. And now it has been shown that those models work really well given proper computational power.
Also, not all AI is generative AI that takes money out of fanfic writers’ pockets or a useless hallucinating chatbot. Neural networks are commonly used in science as a very useful tool for many tasks. Image recognition is nowadays practically a solved problem thanks to their research. Protein folding. Dataset reduction. Fluent text-to-speech. Speech recognition… AI may be getting more traction nowadays because of generative AIs (which also have their own merit, like it or not), but there is much more to it.
As with any technological advance, there are shitty use cases and good use cases. You cannot condemn a whole technology just for the shitty uses of some greedy capitalists. Well… you can condemn it. But then I will classify you as a primitivist.
Scientific theory that resulted in practical applications useful to people is why the Nobel Prize was created to begin with. So it is a well-deserved prize. More so than many others.
Generative AI is really causing a negative association with AI in general to the point where a proper rebranding is probably in order.
Agreed. Which is why we should call it Machine Learning (or Data Science) and continue to torch OpenAI until it is no more.
‘AI hate’ is usually connected with insane claims like ‘we have a “reasoning” model’.
That shit needs to die in a fire.
I’m still waiting for the full-planet weather model. That will be something.
To be fair, the protein folding thing is legitimately impressive and an actual good use for the technology that isn’t just harvesting people’s creativity for profit.
The way to tell so often seems to be whether someone has called it AI or Machine Learning.
AI? “I put this through chatgpt” (or “The media department has us by the balls”)
ML? “I crunched a huge amount of data in a huge amount of ways, and found something interesting”
Actually, I endorse the fact that we are less shy about calling “AI” algorithms that do exhibit emergent intelligence and broad knowledge. AI used to be a legitimate name for the field that encompasses ML, and we have come to understand a lot of interesting things about intelligence thanks to LLMs nowadays, like the fact that training on next-word prediction is enough to create pretty complex world models, that transformer architectures are capable of abstraction, or that morality arises naturally when you try to acquire all the prerequisites for having a normal discussion with a human.
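To make the “next-word prediction” objective concrete, here is a minimal toy sketch (my own illustration, not from any particular LLM): a bigram model “trained” purely by counting which word follows which in a tiny corpus. Real LLMs replace the counting table with a transformer trained on vast corpora, but the objective, predicting the next token, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; any whitespace-separated text works here.
corpus = "the cat sat on the mat the cat ate".split()

# "Training": count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent next word after `token`, or None if unseen."""
    counts = follows[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

The counting table is the crudest possible world model; the surprising result with LLMs is how much structure emerges when the same objective is scaled up.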