- cross-posted to:
- aicompanions
Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’::Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.
It is not an issue of processing power; it’s a problem with the basic operating principle of LLMs: they predict what they “think” is a plausible bit of text to follow the text so far, with no built-in notion of whether it is true.
Sure, the output could be verified by some other machine-learning tool, but we have no idea how that would work.
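To make the “predict the next bit of text” point concrete, here is a toy sketch of next-token prediction using a simple bigram frequency model (purely illustrative; real LLMs use neural networks over learned token embeddings, not raw counts):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each token, which tokens follow it and how often."""
    counts = defaultdict(Counter)
    tokens = text.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent follower of `token`.

    Note what this does NOT do: it has no concept of truth or facts,
    only of which continuation is statistically likely -- the same
    property that makes hallucination hard to engineer away.
    """
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # "cat" -- the most frequent follower
```

The model happily emits a fluent-looking continuation whether or not it corresponds to anything real, which is the intuition behind “this isn’t fixable.”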
But I strongly doubt LLMs are a stepping stone on the way to true AIs. If you want to get to the moon you can’t just build higher and higher towers.
Also quantum computers aren’t really suited to run artificial neural networks as far as I know.
Very good points. I have very limited knowledge about the inner workings of most LLMs, I just know the tidbits I’ve read here and there.
As far as quantum computers go, my current understanding is that once they reach the point where they can be used commercially, they should easily be able to model/run artificial neural networks. Based on the stuff I’ve seen from Dr. Michio Kaku, quantum computers will eventually have the capacity to do pretty much anything.
I hadn’t looked up what Michio Kaku had said about quantum computing before, but his take does not look well-regarded.
“His 2023 book on quantum supremacy has been criticized by quantum computer scientist Scott Aaronson on his blog. Aaronson states ‘Kaku appears to have had zero prior engagement with quantum computing, and also to have consulted zero relevant experts who could’ve fixed his misconceptions.’”
I’m hardly an expert on the subject, but as I understand it, quantum computers have some very niche uses, mostly in cryptography and certain forms of simulation.