So they are moving away from general models and specializing them for particular tasks, as a certain kind of AI agent.
Queries will probably be routed to agents defined within a narrow domain, and those agents will probably be much less prone to error.
I think it's a good next step. Expecting general intelligence to arise out of LLMs just by scaling up training is obviously a heavily criticized idea on Lemmy, and this move is consistent with the apparent limitations of that approach.
If you think about it, assigning special "thinking" steps to AI models makes less sense for a general model, and much more sense for well-defined scopes.
We will probably curate these scopes very thoroughly over time, and people will start trusting the accuracy of the answers thanks to these more tailored design approaches.
When we have many effective tailored agents for specialized tasks, we may be able to chain those agents together into compound agents that can reliably carry out the many tasks we expected from AI in the first place.
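To make the chaining idea concrete, here's a minimal sketch of what composing narrow agents into a compound agent could look like. Everything here is hypothetical (the agent names, the stubbed model calls) — it just shows the composition pattern, not any real framework's API:

```python
# Hypothetical narrow agents: in practice each would call a model
# tuned and scoped for exactly one kind of task.
def summarizer_agent(text: str) -> str:
    return f"summary({text})"

def translator_agent(text: str) -> str:
    return f"translation({text})"

def chain(*agents):
    """Compose narrow agents into one compound agent by
    piping each agent's output into the next."""
    def compound(task: str) -> str:
        for agent in agents:
            task = agent(task)
        return task
    return compound

# A compound agent that first summarizes, then translates.
compound_agent = chain(summarizer_agent, translator_agent)
print(compound_agent("long report"))  # translation(summary(long report))
```

The appeal is that each link in the chain stays inside a scope where its error rate is low, so the compound behavior is easier to trust and to debug than one general model doing everything at once.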