I have a lot of conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult one to get across, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier in particular is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • HucklebeeOP

    Hmm, now that I read this, I have a thought: it might also be hard to wrap our heads around this issue because we all talk about AI as if it were an entity. Even the sentence “it makes shit up” gives the AI credit for “thinking” about things. It doesn’t make shit up; it does exactly what it was trained to do: generate plausible sentences. And it succeeds.

    Maybe the answer is just to stop talking about AIs as “saying” things, and start talking about GenAI as “generating sentences”? That way, we emotionally distance ourselves from “it”, and it becomes harder to ascribe consciousness to an AI.
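
    To make the “generating sentences” framing concrete, here is a toy sketch in plain Python (a tiny bigram chain, nothing like a real LLM’s architecture, and the corpus is made up): the generation loop just picks a plausible next word given the previous one, and at no point is there a step that checks truth or intent.

        import random
        from collections import defaultdict

        # A tiny made-up corpus. A real LLM trains on vastly more text,
        # but the generation loop below is the same basic idea.
        corpus = (
            "the cat sat on the mat . "
            "the dog sat on the rug . "
            "the cat chased the dog . "
            "the dog chased the cat ."
        ).split()

        # Record which words tend to follow which (a bigram "model").
        followers = defaultdict(list)
        for prev, nxt in zip(corpus, corpus[1:]):
            followers[prev].append(nxt)

        def generate(start="the", max_words=12):
            # Repeatedly pick a plausible next word given only the previous one.
            # Nothing here checks whether the sentence is true, only that each
            # word is a likely continuation.
            words = [start]
            for _ in range(max_words):
                options = followers.get(words[-1])
                if not options:
                    break
                words.append(random.choice(options))
            return " ".join(words)

        print(generate())  # e.g. "the dog sat on the mat . the cat chased ..."

    The point isn’t that an LLM is a bigram chain (it isn’t), just that the output loop is “pick a plausible continuation”, so fluent-sounding text doesn’t imply any knowing or intending behind it.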