I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier, especially, is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • FuglyDuck
    5 months ago

    It’s basically regurgitating things.

    It’s trained on an immense amount of data, and has learned that 89% of the time, when someone asks “what is the answer to the ultimate question of life, the universe, and everything?”, the reply is “42”, along with an explanation that it’s a reference to Douglas Adams’ Hitchhiker’s Guide to the Galaxy.

    So, when you ask that… it just replies “42”, and gives a mash-up of information mostly consistent with the pop culture reference.

    It has no idea what “42” is, whether it’s a real question or a real answer, or whether the whole thing is a joke. It only knows that that’s how people in its training data responded (see the toy sketch at the end of this comment).

    (In this example, the other 11% of people either have never read the book, which makes them losers, or are making some other random quip.)
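
    A minimal sketch of that idea in Python. This is a toy, not how real LLMs actually work (they predict one token at a time with a neural network trained on billions of examples), and the prompt text, replies, and roughly 89/11 split below are all made up for illustration:

    ```python
    from collections import Counter
    import random

    # Hypothetical "training data": (prompt, reply) pairs the toy model has seen.
    # Roughly 8 out of 9 replies are the "42" answer, matching the made-up 89% figure.
    training_data = (
        [("ultimate question of life?", "42, a Hitchhiker's Guide reference")] * 8
        + [("ultimate question of life?", "huh? what book?")] * 1
    )

    def reply(prompt: str) -> str:
        # Count how often each reply followed this prompt in the training data...
        counts = Counter(r for p, r in training_data if p == prompt)
        replies = list(counts.keys())
        weights = list(counts.values())
        # ...and pick one in proportion to how often it appeared.
        # No lookup of facts, no notion of whether "42" is true, a joke, or malicious.
        return random.choices(replies, weights=weights, k=1)[0]

    print(reply("ultimate question of life?"))  # usually the "42" answer, sometimes not
    ```

    The real thing does this word by word over vastly more data, which is why the output sounds fluent while still being nothing more than “what usually comes next”.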