• @[email protected]
    link
    fedilink
    English
    197 days ago

    AGI is currently just a buzzword anyway…

    Microsoft defines AGI in contracts in dollars of earnings…

    If you travelled back in time five years and showed the current best GPT to someone, they would probably accept it as AGI.

    I’ve seen multiple experts on German television explaining that LLMs will reach AGI within a few years…

    (That does not mean that the CEO guy isn’t a fool. Let’s wait for the first larger problem that requires not writing new code, but dealing with a bug, something undocumented, or similar…)

    • @cynar
      40
      7 days ago

      LLMs can’t become AGIs. They have no ability to actually reason. What they can do is use predigested reasoning to fake it. It’s particularly obvious with certain classes of problems, where they fall down. I think the fact they fake it so well tells us more about human intelligence than about AI.

      That being said, LLMs will likely be a critical part of a future AGI. Right now, they are a lobotomised speech centre. Different groups are already starting to tie them to other forms of AI. If we can crack building a reasoning engine, then a full AGI is possible. An LLM might even form its internal communication method, akin to our internal monologue.

      • @[email protected]
        link
        fedilink
        97 days ago

        While I haven’t read the paper, the comment’s explanation seems to make sense. It supposedly contains a mathematical proof that building AGI from a finite dataset is an NP-hard problem. I still have to read it and parse out the reasoning; if it holds, it would make for a great argument in cases like these.

        https://lemmy.world/comment/14174326

        • Redjard
          2
          7 days ago

          If that is true, how does the brain work?

          Call everything you have ever experienced the finite dataset.
          Constructing your brain from DNA works in a timely manner.
          Training it does too: you get visibly smarter over time, on a roughly linear scale.

          • @[email protected]
            link
            fedilink
            8
            edit-2
            7 days ago

            I think part of the problem is that LLMs stop learning at the end of the training phase, while a human never stops taking in new information.

            Part of why I think AGI is so far away is that running the training in real time, like a human does, would take more compute than currently exists. They should be focusing on doing more with less compute, finding new, more efficient algorithms and architectures, not throwing more and more GPUs at the problem. Right now, 10x the GPUs gets you maybe 5–10% better accuracy on whatever benchmark, which is not a sustainable direction.

            • @veni_vedi_veni
              2
              7 days ago

              How does conversation context work though? Is that memory not a form of learning?

              • @[email protected]
                link
                fedilink
                6
                edit-2
                7 days ago

                The context window is a fixed size. If the conversation gets too long, the start gets pushed out, and the AI no longer remembers anything from the beginning of the conversation. It’s more like a human having a notepad in front of them: the AI can reference it, but it can’t learn from it.
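
                A rough sketch of that in plain Python: keep the newest messages, and once a token budget is exceeded, everything older silently drops out. The function name is invented for illustration, and the word-count lambda is a fake stand-in for a real tokenizer.

```python
# Illustrative sketch only: how a fixed context window "forgets".
# count_tokens is a fake stand-in for a real tokenizer (it just counts words).

def fit_to_context(messages, max_tokens, count_tokens=lambda m: len(m.split())):
    """Keep only the most recent messages that fit within max_tokens."""
    kept, total = [], 0
    for msg in reversed(messages):          # walk from newest to oldest
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break                           # everything older is silently dropped
        kept.append(msg)
        total += cost
    return list(reversed(kept))             # restore chronological order

chat = [
    "my name is Ada",        # this fact falls out of the window below
    "nice to meet you",
    "what is 2+2?",
    "4",
    "what is my name?",
]
print(fit_to_context(chat, max_tokens=8))
# → ['what is 2+2?', '4', 'what is my name?']  (the name is gone)
```

                The model can still read whatever survives in the window, notepad-style, but nothing evicted from it leaves any trace.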

                • @[email protected]
                  link
                  fedilink
                  37 days ago

                  Also, a key part of how GPT-based LLMs work today is that they get the entire context window as input all at once, whereas a human has to listen or read one word at a time and remember the start of the conversation on their own.

                  I have a theory that this is one of the reasons LLMs don’t understand the progression of time.
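
                  That stateless, all-at-once behaviour can be sketched like this. The fake_llm function is a made-up stand-in for a real model call; the point is that every turn the whole transcript is rebuilt and handed over as a single input, and nothing persists inside the "model" between calls.

```python
# Illustrative sketch only: a stateless chat loop. fake_llm is a made-up
# stand-in for a model call; it just reports how much context it received.

def fake_llm(prompt: str) -> str:
    lines = prompt.count("\n") + 1
    return f"(reply based on {lines} prompt lines)"

transcript = []

def chat_turn(user_msg: str) -> str:
    transcript.append(f"User: {user_msg}")
    prompt = "\n".join(transcript)      # the ENTIRE history, every single turn
    reply = fake_llm(prompt)            # the "model" itself keeps no state
    transcript.append(f"Assistant: {reply}")
    return reply

print(chat_turn("hello"))       # the model is given 1 line of context
print(chat_turn("and again"))   # now it is given all 3 lines again, at once
```

                  A human participant receives the conversation incrementally and carries it forward in memory; here, the "memory" lives entirely outside the model, in the transcript that gets re-sent.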

      • Zos_Kia
        1
        7 days ago

        “They have no ability to actually reason”

        I’m curious about this kind of statement. “Reasoning” is not a clearly defined scientific term, in that it has a myriad different meanings depending on context.

        For example, there is research showing that LLMs cannot use “formal reasoning”, a branch of mathematics dedicated to proving theorems. However, the majority of humans can’t use formal reasoning either. By that standard, humans would be “unable to actually reason” and therefore not Generally Intelligent.

        At the other end of the spectrum, if you take a more casual definition of reasoning, for example Aristotle’s discursive reasoning, then that’s an ability LLMs definitely have. They can produce sequential movements of thought, where one proposition leads logically to another, such as answering the classic: “If humans are mortal, and Socrates is a human, is Socrates mortal?” They demonstrate this ability beyond their training data, meaning their weights do encode a “world model”, which they use to solve new problems absent from that data.

        Whether or not this is categorically the same as human reasoning is immaterial in this discussion. The distinct quality of human thought is a metaphysical concept which cannot be proved or disproved using the scientific method.