I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier in particular is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • HucklebeeOP
    17 months ago

    I think what makes it hard to wrap your head around is that sometimes this text is emotionally charged. What I notice is that it’s especially hard if an AI “goes rogue” and starts saying sinister and malicious things. Our brain immediately jumps to “it has bad intent”, when in reality it’s just stitching together patterns from Reddit posts, where it happened to pick up some troll messages or extremist texts.

    How can we decouple emotionally when it feels so real to us?
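One way to make the “it just predicts the next word, no intent involved” point concrete is a toy word predictor. The sketch below is plain Python and only an illustration, nowhere near how ChatGPT actually works internally, but it is the same principle scaled way down: it counts which word tends to follow which in a tiny training text, then strings words together by picking likely continuations. It can produce plausible-looking sentences without having any idea what a cat or a dog is.

```python
import random
from collections import defaultdict

# Tiny "training corpus". A real LLM trains on trillions of words,
# but the idea is the same: learn which words tend to follow which.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Build a lookup table: for each word, the words observed right after it.
follows = defaultdict(list)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current].append(nxt)

def generate(start="the", length=8):
    """Produce text by repeatedly picking a statistically likely next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # no meaning, just frequency
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the dog sat on the mat . the cat"
```

The point for the emotional question: if the training text had contained hostile sentences, this little program would reproduce them just as readily, and nobody would say it “went rogue” or “has bad intent”. It is doing exactly the same thing when the output happens to sound friendly.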