• @[email protected]
    link
    fedilink
    English
    0
    1 year ago

    How do you define “intelligence” in this context?

    Do you think gpt-4 is self-aware?

    Do you believe this LLM tech has the ability to make judgement calls, say? Or understand meaning?

    What has been your experience with the accuracy / correctness of the answers it has provided? Does it match claims that mistakes or “hallucinations” occur often?

    • @flossdaily
      link
      English
      2
      edit-2
      1 year ago

      You’re wandering into one of the great questions of our age: what is intelligence? I don’t have a great answer. All I know is that gpt-4 can REASON, and does so better than the average human.

      Is gpt-4 self-aware? Yes. To an extent. It knows what it is, and can use that information in its reasoning. It knows it’s an LLM, but not which model.

      Can it make judgement calls? Yes. Better than the average human.

      Understand meaning? Absolutely. To a jaw-dropping extent.

      Accuracy and correctness… Depends on the type of question.

      What you need to understand is that gpt-4 isn’t a whole brain. Think of it as if we have managed to reproduce the language center of the brain. I believe this is the mechanism for higher reasoning in the human brain.

      But just as in humans with right-brain injuries, the language center is disconnected from reality at times.

      So, when you think about gpt-4 as the most important, difficult to solve part of the brain, you start to understand that with some minimal supporting infrastructure, you now have something very similar to a complete brain.

      You can use vector databases to give it long-term memory, and any kind of data retrieval used to augment its prompts improves accuracy and reduces hallucinations almost entirely.
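      A minimal sketch of what that retrieval-augmented prompting loop looks like. The names `embed`, `vector_db`, and `ask_llm` are hypothetical stand-ins for a real embedding model, vector database client, and LLM API, not any specific library:

```python
# Sketch of retrieval-augmented prompting ("long-term memory").
# `embed`, `vector_db`, and `ask_llm` are hypothetical stand-ins.
def answer_with_memory(question, vector_db, embed, ask_llm, k=3):
    # 1. Embed the question and pull the k most similar stored memories.
    query_vec = embed(question)
    memories = vector_db.search(query_vec, top_k=k)

    # 2. Stuff the retrieved text into the prompt as grounding context.
    context = "\n".join(m.text for m in memories)
    prompt = (
        "Answer using ONLY the context below; say you don't know "
        "if the answer isn't there.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. The model now answers from retrieved facts rather than
    #    free-associating, which is why hallucinations drop.
    return ask_llm(prompt)
```

      The accuracy gain comes from step 2: the model is constrained to text you retrieved, instead of whatever its weights happen to suggest.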

      With my very mediocre programming skills, I managed to build a system that is curious, has a long-term memory, and can do a wide variety of tasks, enough to easily replace an entire customer service team, tech support team, sales team, and marketing team.

      That’s just ME, working with the gpt-4 that’s available to the public with a bunch of guardrails on it. Today.

      Imagine a less-restricted system, with infrastructure built by an experienced enterprise coding team, and with just one more generation of LLM improvement? That could wipe out half the white collar workforce.

      If LLM improvement were only geometric, and not even exponential (as it clearly is), in 10 years these things will be smarter AND MORE CREATIVE than all humans.

      The truth is that we’re going to be there in 5 years.

      • @[email protected]
        link
        fedilink
        English
        1
        1 year ago

        Appreciate the detailed response!

        Indeed, intelligence is …a difficult thing to define. It’s also a fascinating area to ponder. The reason I asked was to get an idea of where your head is at with the claims you made.

        Now, I admit I haven’t done a lot with gpt-4 but your comments make me think it is worth the time to do so.

        So you indicate gpt-4 can reason. My understanding is that gpt-4 is an LLM, basically a large-scale Markov chain, trained to respond with appropriate output based on input (questions).

        On the one hand, my initial reaction is: no, it doesn’t reason it just mimics or simulates human reasoning that came before it in text form.

        On the other hand, if a program could perfectly simulate whatever processes are involved in reasoning by a human to the point that they’re indistinguishable, is it not, in effect, reasoning? (I suppose this amounts to a sort of Turing Test but for reasoning exercises).

        I don’t know how gpt-4-class LLMs work yet. I imagine, being a Markov model (specifically a Markov chain), if the model is trained on human language then the underlying semantics are sort of implicitly captured in the statistical model. Like, simplistically, if many sentences reflect human knowledge that cars are vehicles and not animals, then it’s statistically unlikely for anyone to write about attributes and actions of animals when talking about cars. I assume the LLM is of such a scale that it permits this apparently emergent behavior.
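        That "semantics as statistics" intuition can be seen even in a toy bigram Markov chain. The corpus below is made up purely for illustration; real LLMs condition on far longer contexts with learned representations, not raw counts:

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration only.
corpus = (
    "the car drove down the road . "
    "the car parked in the garage . "
    "the dog barked at the mailman . "
    "the dog chased the car ."
).split()

# Count which word follows each word (a bigram Markov chain).
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

# After "car" the model only sees driving/parking continuations;
# "barked" never follows "car", so its probability is zero.
print(transitions["car"].most_common())
print(transitions["dog"].most_common())
```

        Nothing in the counts "knows" that cars are vehicles, yet the statistics end up encoding that cars drive and dogs bark, exactly the implicit capture of semantics described above.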

        I am skeptical about judgement calls. I would think some sensory input would be required. I guess we have to outline various types of judgement calls to really dig into this.

        I am willing to accept that gpt-4 simulates the portions of the brain that deal with semantics and syntax, in both the receiving and transmitting directions. And, maybe to some degree, knowledge and understanding.

        I think “very similar to a complete brain” is an overstatement as the brain also does some amazing things with vision, hearing, proprioception, touch, among other things. Human brains can analyze situations and take initiative, analyze things and understand how they work and apply that to their repair, improvement, duplication, etc. We can understand and solve problems, and so on. In other words I don’t think you’re giving the brain anywhere near enough credit. We aren’t just Q&A machines.

        We also have to be careful of the human tendency to anthropomorphize.

        I’m curious to look into vector databases and their applications here. Addition of what amounts to memory, or like extended context, sounds extremely interesting.
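        The core trick in those vector databases is just nearest-neighbor search over embedding vectors, usually by cosine similarity. The 3-D vectors below are made-up toys standing in for real embeddings, which have hundreds or thousands of dimensions:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stored "memories" with toy embedding vectors (invented values).
memories = {
    "the user prefers email over phone calls": [0.9, 0.1, 0.2],
    "the user's last ticket was about billing": [0.1, 0.8, 0.3],
}

# Toy embedding of the query "how should I contact them?"
query = [0.85, 0.15, 0.25]

# Retrieve the memory whose vector points in the most similar direction.
best = max(memories, key=lambda text: cosine(query, memories[text]))
print(best)
```

        The query vector points almost the same direction as the email-preference memory, so that's what gets retrieved, which is how "extended context" gets pulled back in without re-sending the whole conversation history.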

        Interesting to ponder what the world would be like with AGI taking over the jobs of most knowledge workers, artists, and so on. (I wonder if someone could create a CEO replacement…)

        What does it mean for a capitalist society with masses of people permanently unemployed? How does the economy work when nobody can afford to buy anything because they’re unemployed? Does this create widespread poverty and collapse or a post-scarcity economy in some sectors?

        Until robots mechanically evolve to Asimov’s vision, at least, manual labor is safe. Truly being able to replace a human body with a robot is still a ways off due to lack of progress on several fronts.