Attributions toward artificial agents in a modified Moral Turing Test - Scientific Reports
www.nature.com

Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by Allen et al.’s (Exp Theor Artif Intell 352:24–28, 2004) proposal, by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. Remarkably, they rated the AI’s moral reasoning as superior in quality to humans’ along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels. Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans’ raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality.
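For anyone wondering what "performed significantly above chance levels" amounts to in the source-identification task: a claim like that is typically backed by something as simple as a binomial test of correct human-vs-computer guesses against a 50% chance level. A minimal Python sketch of that idea is below; the counts and trial structure are made up for illustration and are not the paper's data or its actual analysis.

```python
# Minimal sketch of testing "above chance" source identification.
# The counts below are HYPOTHETICAL, not taken from the study.
from scipy.stats import binomtest

n_trials = 299 * 10     # hypothetical: 299 raters each judging 10 evaluations
n_correct = 1700        # hypothetical number of correct "human vs. computer" calls

# One-sided binomial test against a 50% chance level.
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2%}, p = {result.pvalue:.3g}")
```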
Current LLMs tend to regurgitate “best practice” responses. They can statistically guess the “correct” answer to a question because it is the answer experts cite most often. I wonder if that is what is behind this? As the authors of the research point out, the significance here is not the AI’s appearance of superior intelligence; it’s that this is yet another example of how people may be influenced by AI.
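If you want to see that “statistical guessing” concretely, here is a rough sketch using the Hugging Face `transformers` package and the small `gpt2` checkpoint (my choices for illustration; nothing to do with the study or GPT-4). The model does nothing more than rank the most probable next tokens given a prompt, which is the mechanism being described above.

```python
# Sketch: an LLM just ranks statistically likely continuations of a prompt.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Stealing from a friend is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)                 # five most probable next tokens

for p, tok_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(tok_id))!r}: {float(p):.3f}")
```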
That, and the fact that I’m guessing they hand-picked the results instead of just using the first response given. Ultimately, LLMs aren’t AI: they aren’t forming their own thoughts, they’re generating text based on input that was produced by humans. So saying people rated “AI” responses better than humans’ is already disingenuous.
Probably. They’ve mastered the art of corporate-speak, another natural-language task that doesn’t require precise abstract reasoning.
At this point I’m kind of convinced that the set of moral philosophies most people would agree with in practice is the empty set, so I’m not surprised those kinds of answers do better.
I took one of the more complicated questions from an expert advice column and fed it into ChatGPT. This was before it could perform live searches, and the answer it gave was pretty close to the expert’s own answer.
“A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source.”
A representative sample is probably 299 absolute idiots… I’d also question which people they actually had write the human essays…