• @[email protected]
    6 months ago

    Right, it’s shocking that he snaps the pencil because the listeners were playing along, and then he suddenly went from pretending to have a friend to pretending to murder said friend. It’s the same reason you might gasp when a friendly NPC gets murdered in your D&D game: you didn’t think they were real, but you were willing to pretend they were.

    The AI hype doesn’t come from people who are pretending. It’s a different thing.

    • @Aceticon
      6 months ago

      For the keen observer there’s quite a difference between a make-believe gasp and a genuine reaction gasp, mostly in terms of timing, which is even more noticeable for unexpected events.

      Make-believe requires thinking, so it happens more slowly than instinctive and emotional reactions. That’s why modern acting centers on approaches like Method Acting, where the actor is supposed to be “living truthfully under imaginary circumstances”: letting themselves believe “I am this person in this situation” and feeling what’s going on as if it were happening to them, thus genuinely living the moment and reacting to events. Audience members who are good observers and/or highly empathetic can tell faking from genuine feeling.

      So in this case, even if the audience were playing along as you say, that doesn’t mean they were intellectually simulating their reactions, especially in a setting where those individuals are not the center of attention. In my experience most people just let themselves go along with it (i.e. let their instincts do their thing) unless they feel they’re being judged, or for some psychological or even physiological reason have difficulty behaving naturally around other humans.

      So it makes some sense that this situation showed people’s instinctive reactions.

      And if you look, even here on Lemmy, at people doggedly making the case that AI actually thinks, you can read not just their words but also how they use them and which ones they choose: the methods they use for thinking (as reflected in how they pick arguments and put them together, most notably with “arguments on vocabulary”, i.e. “proving” their point by interpreting the words that form definitions differently) and how strongly, i.e. emotionally, bound they are to that conclusion of theirs that AI thinks. It’s fair to say that those who rely most on their instincts when interacting with LLMs, rather than cold intellect, are the most convinced that the thing truly thinks.