I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find especially that the emotional barrier is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?
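One way I've seen it explained is with a toy next-word sampler. This is a sketch, not how real LLMs work internally (they use neural networks, not lookup tables), but it makes the core point concrete: a program can produce grammatical-looking text purely from statistics about which words follow which, with no representation of meaning or intent anywhere in it.

```python
import random

# Toy bigram "language model": it picks each next word purely from
# counts of what followed that word in its training text. There is
# no model of meaning here -- only word-succession statistics.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Record, for every word, the words that followed it.
successors = {}
for cur, nxt in zip(corpus, corpus[1:]):
    successors.setdefault(cur, []).append(nxt)

def generate(start, length=8, seed=0):
    """Sample a 'sentence' one word at a time from the counts."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        choices = successors.get(words[-1])
        if not choices:
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("the"))
```

The output looks vaguely sentence-like, yet nobody would say this script "knows" anything about cats or mats. A real LLM is this idea scaled up enormously, with far richer statistics, which is exactly why its output is so much more convincing.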

  • @[email protected]
    26 months ago

I’m not positive I’m understanding your term “naturalistic”, but no neuroscientist would say “we are just neurons”. Similarly, no neuroscientist would deny that neurons are a fundamental part of consciousness and thought.

You have plenty of complex chemical processes constantly interacting with your brain; the neurons there aren’t all of who you are.

But without the neurons there, you aren’t anyone anymore. You cease to live. Destroying some of those neurons will change you fundamentally.

    There’s no disputing this.

    • HucklebeeOP
      16 months ago

      I agree with you, and you worded what I was clumsily trying to say. Thank you:)

By naturalism I mean the philosophical idea that only natural laws and forces are present in this world, or, as an extension, the idea that there is only matter.