• FaceDeer

    Indeed, and many of the more advanced AI systems currently out there already use LLMs as just one component. Retrieval-augmented generation, for example, adds a separate “memory” that gets searched, with the relevant bits inserted into the LLM’s context when it’s answering questions. LLMs have also been trained to call external APIs for the things they’re bad at, like math. The LLM is typically still the central “core” of the system, though; the other pieces are routine sorts of computing we’ve had a handle on for decades.
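    Roughly, the RAG loop is: search the memory, splice the hits into the prompt, then let the LLM answer. A toy sketch of that flow (the names here, like `MEMORY`, `search_memory`, and `call_llm`, are made-up placeholders rather than any particular library’s API):

    ```python
    # Toy sketch of retrieval-augmented generation (RAG).
    # A real system would use embeddings + a vector store for the search and an
    # actual model API for generation; these stand-ins just show the data flow.

    MEMORY = [
        "The Eiffel Tower is 330 metres tall.",
        "Wolfram Alpha can evaluate symbolic math expressions.",
        "Retrieval-augmented generation bolts a searchable memory onto an LLM.",
    ]

    def search_memory(query: str, top_k: int = 2) -> list[str]:
        """Rank stored snippets by naive keyword overlap with the query."""
        words = set(query.lower().split())
        return sorted(MEMORY,
                      key=lambda s: len(words & set(s.lower().split())),
                      reverse=True)[:top_k]

    def call_llm(prompt: str) -> str:
        """Stub for the LLM call itself."""
        return f"[model answers using a {len(prompt)}-char prompt]"

    def answer(question: str) -> str:
        snippets = search_memory(question)             # 1. search the "memory"
        notes = "\n".join(f"- {s}" for s in snippets)  # 2. splice hits into the context
        return call_llm(f"Notes:\n{notes}\n\nQ: {question}")  # 3. LLM generates the answer

    print(answer("How tall is the Eiffel Tower?"))
    ```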

    IMO it still boils down to a continuum. If an AI system has an LLM in it but also a Wolfram Alpha API, a web search API, and other such “helpers”, then that system should be considered as a whole when asking how “intelligent” it is.
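    To make that concrete, here’s a rough sketch of such a composite system, with the LLM as the core and the other pieces wired in as helpers (the tool stubs and the keyword-based router are invented for illustration; real systems have the model emit structured tool calls instead):

    ```python
    # Rough sketch of an LLM "core" with tool helpers bolted on.
    # The tools and the routing heuristic are illustrative stand-ins only.

    def math_tool(expression: str) -> str:
        # Stand-in for something like a Wolfram Alpha API call.
        return str(eval(expression, {"__builtins__": {}}))

    def web_search(query: str) -> str:
        # Stand-in for a web search API call.
        return f"[top result for '{query}']"

    def llm_core(prompt: str) -> str:
        # Stand-in for the LLM at the centre of the system.
        return f"[LLM answer to: {prompt}]"

    def system_answer(question: str) -> str:
        """The system as a whole: route to whichever component fits."""
        if any(op in question for op in "+-*/"):
            return math_tool(question)               # LLMs are bad at arithmetic
        if question.lower().startswith(("who", "what", "when", "where")):
            facts = web_search(question)             # ground the answer in a search
            return llm_core(f"{question}\nSearch result: {facts}")
        return llm_core(question)

    print(system_answer("2 * (3 + 4)"))
    print(system_answer("Who invented the transistor?"))
    ```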

    • @fidodo

      Lol yup, some people think they’re real smart for noticing how limited LLMs are, but they don’t realize that the researchers who actually work on this are years ahead of them in experimentation and theory and have already worked through all of this and more. They’re not just making the individual models better, they’re also figuring out how to combine them into something more generally intelligent instead of super specialized.