I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is difficult to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find especially that the emotional barrier is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?
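One way I've seen the point made concrete is with a toy Markov-chain text generator: it strings words together purely from co-occurrence counts, with no model of meaning, truth, or intent, yet the output can look sentence-like. This is only a rough analogy for how an LLM predicts the next token, not a description of how ChatGPT actually works — a minimal sketch in Python:

```python
import random

# Toy bigram "language model": record which word follows which,
# then sample from those counts. It has no concept of meaning,
# truth, or intent -- yet its output can look sentence-like.
corpus = (
    "the model predicts the next word the model does not know "
    "what the words mean the output can still sound convincing"
).split()

# Build a table: word -> list of words observed to follow it.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length=10, seed=0):
    """Emit `length` words by repeatedly sampling a plausible successor."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        word = rng.choice(follows.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The generator never "intends" anything malicious or benign; it just emits statistically plausible continuations. Real LLMs are vastly larger and learn richer statistics, but the "no intent behind the fluency" point carries over.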

    • @kaffiene
      39 months ago

In the sense that the “argument” is an intuition pump. As an anti-AI argument it’s weak: you could replace the operator in the Chinese room with an individual neuron and conclude that our brains don’t know anything, either

      • @themusicman
        39 months ago

        Exactly. The brain is analogous to the room, not the person in it. Try removing a chunk of a brain and see how well it can “understand”

        • @kaffiene
          19 months ago

          Yeah? Of course?