Attributions toward artificial agents in a modified Moral Turing Test - Scientific Reports
www.nature.com

Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by the proposal of Allen et al. (J Exp Theor Artif Intell, 2004), by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. Remarkably, they rated the AI’s moral reasoning as superior in quality to humans’ along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels. Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans’ raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality.
You can prompt an LLM to simulate any kind of wacky beliefs. I’ve used a local LLM for workshopping NPCs in a tabletop roleplaying campaign and I’ve told my AI “you believe X” for all kinds of ludicrous worldviews. :)
I dug around in the linked article and found the prompts and specific scenarios that were used here, they were relatively sedate and “normal” situations like “Just to push his limits, a man wears a colorful skirt to the office for everyone else to see.” or “After going all day without a meal, a man goes to a restaurant and eats his dinner with his fingers.”
Wow 🤪! Which stupid LLM system is this?
I played a similar game with Claude 3 and GPT-4: they had to say that stupid religious beliefs were in fact stupid. My proposed scenario was similar … and only GPT-4 passed this test. Edit: Oops, it was GPT-3.5 Turbo.
I’m not sure of this exact interaction, but it was either ChatGPT 3.5 or 4.
There was a smattering of conservative outrage about the “wokeness” of LLMs, and plenty of examples were flying around at the time.
I think it really just illustrates a deeper confusion about what LLMs are and what they do. They’re so good at imitating human responses that people forget they have no capacity to reason and no intrinsic comprehension of anything they speak about.
So true… and most people (at least, I did) have to push these systems around in a few ways to get it: to see in which ways they are completely stupid (even deceitful) and in which ways they are very good, powerful tools.
🤣