Israel Quietly Embeds AI Systems in Deadly Military Operations

  • u/lukmly013 💾 (lemmy.sdf.org) · 1 year ago

    Continue reading with one of the options below
    FREE ACCOUNT
    Read this article
    Free newsletters
    SUBSCRIPTION
    Unlimited access to Bloomberg.com
    Unlimited access to the Bloomberg app
    Subscriber-only newsletters

    :-/

    • @inspxtr · 1 year ago

      I posted the archive link in another comment, as I couldn’t access it.

      I wonder whether anyone has a lemmy bot that can automatically detect these paywalled articles and reply with an archive link? (A rough sketch of the idea follows the replies below.)

        • @inspxtr · 1 year ago

          Hm, not sure why that happens. Maybe try archive.today or the Wayback Machine and see which one works for you.
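
    A minimal sketch of that bot idea, assuming the Wayback Machine's public availability endpoint (https://archive.org/wayback/available); the paywalled-domain list and the example URL are invented for illustration, not a real bot's config:

    ```python
    # Sketch: detect a (presumed) paywalled link and look up an archived copy.
    import requests
    from urllib.parse import urlparse

    PAYWALLED_DOMAINS = {"bloomberg.com", "wsj.com", "ft.com"}  # illustrative only

    def looks_paywalled(url: str) -> bool:
        """Crude check: does the link's host match a known paywalled domain?"""
        host = urlparse(url).hostname or ""
        return any(host == d or host.endswith("." + d) for d in PAYWALLED_DOMAINS)

    def find_archive_link(url: str) -> str | None:
        """Return the closest Wayback Machine snapshot URL, or None if absent."""
        resp = requests.get(
            "https://archive.org/wayback/available",
            params={"url": url},
            timeout=10,
        )
        resp.raise_for_status()
        closest = resp.json().get("archived_snapshots", {}).get("closest", {})
        return closest.get("url") if closest.get("available") else None

    if __name__ == "__main__":
        link = "https://www.bloomberg.com/some-article"  # hypothetical URL
        if looks_paywalled(link):
            print(find_archive_link(link) or "no snapshot found")
    ```

    A full bot would also need to watch new posts via the Lemmy API and post the result as a comment; archive.today has no comparably documented lookup API, so the Wayback Machine is the easier first target.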

  • wanderingmagus · 1 year ago

    Technology

    Israel Quietly Embeds AI Systems in Deadly Military Operations

    Selecting targets for air strikes and executing raids can now be conducted with unprecedented speed, according to army officials

    By Marissa Newman
    July 16, 2023 at 12:00 AM EDT

    The Israel Defense Forces have started using artificial intelligence to select targets for air strikes and organize wartime logistics as tensions escalate in the occupied territories and with arch-rival Iran.

    Though the military won’t comment on specific operations, officials say that it now uses an AI recommendation system that can crunch huge amounts of data to select targets for air strikes. Ensuing raids can then be rapidly assembled with another artificial intelligence model called Fire Factory, which uses data about military-approved targets to calculate munition loads, prioritize and assign thousands of targets to aircraft and drones, and propose a schedule.

    While both systems are overseen by human operators who vet and approve individual targets and air raid plans, according to an IDF official, the technology is still not subject to any international or state-level regulation. Proponents argue that the advanced algorithms may surpass human capabilities and could help the military minimize casualties, while critics warn of the potentially deadly consequences of relying on increasingly autonomous systems.

    “If there is a mistake in the calculation of the AI, and if the AI is not explainable, then who do we blame for the mistake?” said Tal Mimran, a lecturer of international law at the Hebrew University of Jerusalem and former legal counsel for the army. “You can wipe out an entire family based on a mistake.”
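
    The description of Fire Factory above (calculate munition loads, prioritize thousands of targets, assign them to platforms, propose a schedule) is, in the abstract, a priority-based assignment problem under capacity constraints. Nothing about Fire Factory's internals is public, so the sketch below is only a generic illustration of that class of scheduler; every name, field and number in it is invented.

    ```python
    # Generic sketch of priority-based assignment under capacity limits.
    # All names and numbers are invented; this illustrates the problem
    # class only, not any real system.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        priority: int  # higher values are scheduled first
        load: int      # capacity units the task consumes

    @dataclass
    class Platform:
        name: str
        capacity: int  # total capacity units available
        assigned: list[Task] = field(default_factory=list)

        def used(self) -> int:
            return sum(t.load for t in self.assigned)

    def schedule(tasks: list[Task], platforms: list[Platform]) -> list[Task]:
        """Greedy pass: take tasks in descending priority and place each on
        the first platform with spare capacity; return what didn't fit."""
        leftover = []
        for task in sorted(tasks, key=lambda t: t.priority, reverse=True):
            for p in platforms:
                if p.used() + task.load <= p.capacity:
                    p.assigned.append(task)
                    break
            else:
                leftover.append(task)
        return leftover
    ```

    A real planner would face a far harder optimization problem (timing, routing, deconfliction between platforms), which is presumably where the hours-to-minutes speedup claimed below comes from.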

    [Image: Israel’s Iron Dome air defense system intercepts rockets launched from Gaza City, on May 10. The system, in service for over a decade, is an example of Israel’s early adoption of AI-based technologies in warfare. Photographer: Majdi Fathi/NurPhoto/Getty Images]

    Details of the army’s operational use of AI remain largely classified, yet statements from military officials suggest that the IDF has gained battlefield experience with the controversial systems through periodic flareups in the Gaza Strip, where Israel frequently carries out air strikes in response to rocket attacks. In 2021, the IDF described the 11-day conflict in Gaza as the world’s first “AI war,” citing its use of artificial intelligence to identify rocket launchpads and deploy drone swarms.

    Israel also conducts raids in Syria and Lebanon, targeting what it says are weapons shipments to Iran-backed militias like Hezbollah. In recent months, Israel has been issuing near-daily warnings to Iran over its uranium enrichment, vowing it will not allow the country to obtain nuclear weapons under any circumstances. Should the two enter into a military confrontation, the IDF anticipates that Iranian proxies in Gaza, Syria and Lebanon would retaliate, setting the stage for the first serious multi-front conflict for Israel since a surprise attack by Egypt and Syria 50 years ago sparked the Yom Kippur War.

    AI-based tools like Fire Factory are tailored for such a scenario, according to IDF officials. “What used to take hours now takes minutes, with a few more minutes for human review,” said Col. Uri, who heads the army’s digital transformation unit and who spoke at the IDF headquarters in Tel Aviv on the condition that only his first name be used for security reasons. “With the same amount of people, we do much more.” The system, these officials stressed, is designed for all-out war.

    [Image: Israeli artillery in the Golan Heights during the Yom Kippur War in October 1973. Photographer: Jean-Claude Francolon/Gamma-Rapho/Getty Images]

    Expanding Intelligence?

    The IDF has long made use of AI, but in recent years it has expanded those systems across various units as it seeks to position itself as a global leader in autonomous weaponry. Some of these systems were built by Israeli defense contractors; others, like the StarTrack border control cameras, which are trained on thousands of hours of footage to identify people and objects, were developed by the army. Collectively, they comprise a vast digital architecture dedicated to interpreting enormous amounts of drone and CCTV footage, satellite imagery, electronic signals, online communications and other data for military use.

    Dealing with this torrent of information is the purpose of the Data Science and Artificial Intelligence Center, run by the army’s 8200 unit. Based within the intelligence division, that unit is where many of the country’s tech multi-millionaires, including Palo Alto Networks Inc.’s Nir Zuk and Check Point Software Technologies Ltd. founder Gil Shwed, did their mandatory military service before forming successful startups. According to a spokesman, the Center was responsible for developing the system that “transformed the entire concept of targets in the IDF.”

    The secretive nature of how such tools are developed has raised serious concerns, including that the gap between semi-autonomous systems and entirely automated killing machines could be narrowed overnight. In such a scenario, machines would be empowered to both locate and strike targets, with humans removed entirely from positions of decision-making.

    “It’s just a software change that could make them go to not being semi but to being completely autonomous,” said Catherine Connolly, an automated decision researcher at Stop Killer Robots, a coalition of nongovernmental organizations that includes Human Rights Watch and Amnesty International. Israel says it has no plans to remove human oversight in coming years.

    [Image: Fire Factory software. Source: Israel Defense Forces]

    Another worry is that the fast adoption of AI is outpacing research into its inner workings. Many algorithms are developed by private companies and militaries that do not disclose proprietary information, and critics have underlined the built-in lack of transparency in how algorithms reach their conclusions. The IDF acknowledged the problem, but said output is carefully reviewed by soldiers and that its military AI systems leave behind technical breadcrumbs, giving human operators the ability to recreate their steps.

    “Sometimes when you introduce more complex AI components, neural networks and the like, understanding what ‘went through its head,’ figuratively speaking, is pretty complicated. And then sometimes I’m willing to say I’m satisfied with traceability, not explainability. That is, I want to understand what is critical for me to understand about the process and monitor it, even if I don’t understand what every ‘neuron’ is doing,” said Uri.

    The IDF declined to talk about facial recognition technology, which has been strongly criticized by human rights groups, although it did say it has refrained from integrating AI into recruitment software out of concern that it could discriminate against women and potential cadets from lower socioeconomic backgrounds.

    The main advantage of integrating AI into battlefield systems, according to some experts, is the potential to reduce civilian casualties. “I think that there’s an efficiency and effectiveness benefit to using these technologies correctly. And within good functioning technological parameters, there can be very, very high precision,” said Simona R. Soare, a research fellow at the London-based International Institute for Strategic Studies. “It can help you with a lot of things that you need to do on the go, in the fog of battle. And that is very difficult to do on the best of days.”

    “There are also many things that can go wrong, too,” she added.

    Ethical Concerns

    While Israeli leaders have outlined their intention to make the country an “AI superpower,” they’ve been vague on the details. The Defense Ministry declined to comment on how much it’s invested in AI, and the army would not discuss specific defense contracts, though it did confirm that Fire Factory was developed by Israeli defense contractor Rafael.

    Further obscuring the picture is that, unlike during the nuclear arms race, when leaking details of weapons’ capabilities was a key aspect of deterrence, autonomous and AI-assisted systems are being developed by governments, militaries and private defense companies in secret.

    “We can assume that the Americans and even the Chinese and maybe several other countries have advanced systems in those fields as well,” said Liran Antebi, a senior researcher at the Israel-based Institute for National Security Studies. But unlike Israel, “they have, as much as I know, never demonstrated operational use and success.”

    For now, there aren’t any limitations on these systems. Despite a decade of UN-sponsored talks, there is no international framework establishing who bears responsibility for civilian casualties, accidents or unintended escalations when a computer misjudges.

    “There’s also a question of testing and the data that these systems are trained on,” said Connolly from the Stop Killer Robots coalition. “How precise and accurate can you know a system is going to be unless it’s already been trained and tested on people?”

    Such concerns are why Mimran, the law lecturer at Hebrew University, believes that the IDF should exclusively use AI for defensive purposes. During his tenure in the army, Mimran manually vetted targets to make sure that attacks complied with international law. That taught him that, regardless of technology, “there is a point where you need to make a value-based decision.”

    “And for that,” he said, “we cannot rely on AI.”
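
    Uri’s “traceability, not explainability” standard, quoted above, corresponds to a common engineering pattern: an append-only decision log that records each recommendation’s inputs, model version, output and the human reviewer’s ruling, so the chain of steps can be replayed later even when the model itself is opaque. The sketch below is a generic illustration of that pattern only; it assumes nothing about the IDF’s actual systems, and every field name is invented.

    ```python
    # Sketch of an append-only decision trail for model recommendations.
    # Field names are invented for illustration.
    import hashlib
    import json
    import time

    def log_recommendation(log_file, model_version: str, inputs: dict,
                           output: dict, reviewer: str, decision: str) -> None:
        """Append one JSON Lines record tying a recommendation to its review."""
        record = {
            "timestamp": time.time(),
            "model_version": model_version,
            # A hash of the exact inputs lets auditors later confirm a replay
            # used the same data, without storing raw inputs in the log.
            "input_digest": hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()
            ).hexdigest(),
            "output": output,
            "reviewer": reviewer,
            "decision": decision,  # e.g. "approved" or "rejected"
        }
        log_file.write(json.dumps(record) + "\n")

    if __name__ == "__main__":
        with open("decisions.jsonl", "a") as f:
            log_recommendation(f, "model-1.2", {"sensor": "A", "score": 0.93},
                               {"recommendation": "flag for review"},
                               "operator-7", "rejected")
    ```

    Such a trail supports the distinction Uri draws: an auditor can reconstruct and monitor the process end to end without claiming to explain what each “neuron” did.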