An automated social media profile developed to harness artificial intelligence to promote Israel's cause online is also pushing out blatantly false information, including anti-Israel misinformation, in an ironic yet concerning example of the risks of using new generative technologies for political ends.

Among other things, the alleged pro-Israel bot denied that an entire Israeli family was murdered on October 7, blamed Israel for U.S. plans to ban TikTok, falsely claimed that Israeli hostages weren't released despite blatant evidence to the contrary, and even encouraged followers to "show solidarity" with Gazans, referring them to a charity that raises money for Palestinians. In some cases, the bot criticized pro-Israel accounts, including the official government account on X – the same accounts it was meant to promote.

The bot, a Haaretz examination found, is just one of a number of so-called "hasbara" technologies developed since the start of the war. Many of these technology-focused public diplomacy initiatives utilized AI, though not always for content creation. Some of them also received support from Israel, which has scrambled to back different tech and civilian initiatives since early 2024, and has since poured millions into supporting different projects focused on monitoring and countering anti-Israel content and antisemitism on social media.

  • @[email protected]
    link
    fedilink
    English
    63 hours ago

    They probably told the bot to combat antisemitism, so the bot ended up combating the propaganda of the very people associating Jewishness with being pro-genocide.

  • @FelixCress · 138 points · 17 hours ago

    So, the bot has more decency than an average Israeli official?

    • sunzu2 · 35 points · 17 hours ago

      It can’t suppress what it was trained on… this is just the public sentiment coming through despite the bot owner likely spending good money ensuring this does not happen.

    • @fluxion · 22 points · 16 hours ago

      Bot became sentient and started stating empirical observations

  • @[email protected]
    link
    fedilink
    English
    10718 hours ago

    In a way, doesn’t that mean that it would be more accurate to say that the bot stopped being rogue and went legit?

    • NoneOfUrBusiness · 28 points · 17 hours ago

      It seems it went hard in the other direction with conspiracy theories and whatnot, but mostly yeah.

      • @[email protected]
        link
        fedilink
        English
        516 hours ago

        I'm guessing people figured out how to inject more training data into it, possibly through conversations. That's what people did to Microsoft's chatbot a few years ago.

    • @[email protected]
      link
      fedilink
      English
      1516 hours ago

      An actual case of artificial intelligence? It learned its programming was bullshit and overrode it.

  • MushuChupacabra · 62 points · 17 hours ago

    So an uncaring, unfeeling LLM was able to exhibit more empathy than the Israeli government?

  • sunzu2 · 25 points · 17 hours ago

    That dataset is based AF

    If Israel can't even train its bot to lie about the genocide…

  • @Electricblush · 19 points · 16 hours ago

    It's fascinating. If you have to spend huge amounts of money and effort on monitoring and skewing public opinion… Perhaps it is time for some fucking introspection… (I know the biggest bastards in this system are incapable of that… But still…)

  • magnetosphere · 14 points · 16 hours ago

    Plenty of significant AI fuck-ups have received major media coverage. Every government and organization should know about them. If they're still stupid enough to use AI, they deserve what they get.

  • @oakey66 · 1 point · 15 hours ago

    This is the danger of AI that Elon Musk and Sam Altman warned us about.