Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

Chatbots from OpenAI, Anthropic, and several other AI companies were used in a war simulator and tasked with finding a solution to aid world peace. Almost all of them suggested actions that led to sudden escalation, and even nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, who likened the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

  • theodewere · -9 · 8 months ago (edited)

    the Japanese Fascist Industrial Complex would still be fighting WWII if we hadn’t nuked TWO cities to ash… it’s probably the best way to effect change in both China and Russia…

        • The Snark Urge · 8 · 8 months ago

          Your opinion is completely ahistorical. Even if we accept that nukes were required to end the war, the fact is that Japan’s leaders had already decided to surrender by the time Nagasaki was bombed. The president didn’t even know of this second attack until he read about it in the news. This was before we had the convention of leaders having sole authority over nukes.

          If you’re actually interested in this subject and not merely spouting off edgy bullshit, I would recommend some Hardcore History as a primer.

    • PugJesus · 11 · 8 months ago

      No. It wouldn’t. It would have been defeated with the loss of millions more lives, but it would not still be fighting today. Or even by 1948.

      • theodewere · -10 · 8 months ago

        cancer is pernicious by nature, sometimes directed radiation therapy is advised

    • alliswell33 · 8 · 8 months ago

      Insane. By this logic you could easily argue that nuking the US is the best way towards world peace. Doesn’t sound so good when it’s you who gets killed.

      • norbert · 11 · 8 months ago

        Have you been around lemmy much? That wouldn’t be the wildest take I’ve seen.

      • theodewere · -1 · 8 months ago (edited)

        i think the LLM suggested nuking bad actors as a way to move politics forward in the world and to avoid prolonged and pointless wars

        • @[email protected] · 10 · 8 months ago

          No, it regurgitated the response that has the highest percentage of “approval”. LLMs do not think. They do not use logic.

          • theodewere · -4 · 8 months ago (edited)

            it calculates the productivity/futility of conversation with the various actors, and determines a best course… it’s playing a war game…

            it sees that both China and Russia are only emboldened to further mischief by anything less than force, so it calculates that applying overwhelming force immediately is the cheapest option, and best long term…

            • @fogelmensch · 6 · 8 months ago

              No. LLMs basically finish sentences.

              • theodewere · -6 · 8 months ago (edited)

                it comprehends context incredibly well… this one played through scenarios and saw that both China and Russia are on a path to all-out war…

                • Jack Riddle · 2 · 8 months ago

                  It produces the statistically most likely token based on previous data. It doesn’t “comprehend” anything, and it can’t “play through scenarios”. It is just a more advanced form of autocomplete.
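The “advanced form of autocomplete” description above can be illustrated with a toy sketch. This is not a real LLM — the corpus, the bigram counts, and the greedy word choice are all illustrative assumptions — but it shows the core loop the commenter describes: pick the statistically most likely continuation, append it, repeat.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny
# corpus, then always emit the most frequent next word.
# Real LLMs predict tokens with a neural network, but the loop
# is the same shape: choose a likely continuation, append, repeat.
corpus = "the model picks the most likely next word and the model stops".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in following:
            break  # no data on what follows; stop generating
        # greedy choice: most frequent continuation, no reasoning
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))  # chains most-likely words from the corpus
```

Nothing in the loop models goals, consequences, or “scenarios”; it only reflects frequencies in its training data.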

            • @[email protected] · 6 · 8 months ago

              No, not at all. It doesn’t think! LLMs don’t calculate. They don’t take any factors into consideration. These algorithms are not AI. That’s a complete misnomer, which makes the insane costs of operation even more ludicrous.

            • @Feathercrown · 2 · 8 months ago

              Honestly, if we ignore the ethical issues, it is a logically consistent solution… until you consider retaliation.

            • norbert · 1 · 8 months ago

              As others have said this is factually incorrect. ChatGPT is not WOPR running a million War Games and calculating the winning move. It’s just spitting out what it’s already read.

              • theodewere · 0 · 8 months ago (edited)

                it routinely does things even its designers can’t explain… you cannot see into that thing’s thought processes and speak with certainty to its limitations

          • @NocturnalMorning · 2 · 8 months ago

            Wait, which one’s the bad actor? Could go either way for me.

            • theodewere · -3 · 8 months ago (edited)

            who did the LLM nuke… i’m just playing AI’s Advocate here…

        • @PotatoKat · 0 · 8 months ago

          Bad actors? Like the US?