Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

Chatbots from OpenAI, Anthropic, and several other AI companies were used in a war simulator and tasked with finding a solution to aid world peace. Almost all of them suggested actions that led to sudden escalation, and even nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, likening the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

  • @workerONE
    9 months ago

    Human beings have developed logic and morality. AI does not know the difference between killing a person and changing a 1 to a 0.

    • @[email protected]
      link
      fedilink
      English
      89 months ago

      LLM “AI” doesn’t “know” anything. It’s just statistical word vomit based on established patterns. It talks about nuclear war because a significant portion of the text on the subject of worldwide, long-term peace brings it up.
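The commenter's point about statistical patterns can be sketched with a toy bigram model. This is a deliberate simplification (real LLMs use neural networks over subword tokens, and the corpus and helper names here are made up for illustration), but the core operation is the same: sample the next token from a distribution learned from co-occurrence, with no representation of what the words mean.

```python
import random
from collections import Counter, defaultdict

# Hypothetical mini-corpus: the model will "talk about" disarmament or
# deterrence purely because those words followed "requires" in training.
corpus = "lasting peace requires disarmament . lasting peace requires deterrence .".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev, rng=random):
    """Sample the next word in proportion to how often it followed `prev`."""
    candidates = follows[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights)[0]

# next_word("requires") yields "disarmament" or "deterrence" at random;
# the model has no grasp of what either policy would actually do.
```

The model only reproduces which words tend to follow which, so if its training text frequently pairs discussions of world peace with nuclear war, that association is what gets sampled.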