• Asuka
    English
    0 points · 10 months ago

    I think it’s a big mistake to conclude that, because the most basic LLMs are just autocomplete, or because LLMs can hallucinate, what big LLMs do doesn’t constitute “thinking”. No, GPT-4 isn’t conscious, but it very clearly “thinks”.

    It’s started to feel to me like current AIs are reasonable recreations of parts of our minds. It’s like they’re our ability to visualize, to verbalize, and to an extent, to reason (at least the way we intuitively reason, not formally), but separated from the “rest” of our thought processes.

    • @fidodo
      English
      3 points · 10 months ago

      Depends on how you define thinking. I agree that LLMs could be a component of thinking, specifically knowledge and recall.