• @BeatTakeshi
    65
    10 months ago

    “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”

    “Because we usually carried out the attacks with dumb bombs, and that meant literally dropping the whole house on its occupants. But even if an attack is averted, you don’t care – you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”

    Are we still supposed to believe that the pursuit of AI development is for the good of Humanity?

    Fuck you Google for opening Nimbus to the IDF, via a contract that contains a clause saying that you can’t break it whatever the reason. Fucking moronic disgrace to humanity all you bunch

  • @apfelwoiSchoppen
    54
    10 months ago

    Another case where AI is used as a slick marketing term for a black box. A box in which humans selected indiscriminate bombing and genocide. Sure there is new technology used, but at the end of the day it is just military industry marketing to justify humans mass murdering other humans.

    • @Deestan
      42
      10 months ago

      It’s phrenology again.

      You really want to do something, but it feels evil and you don’t want to be evil so you slap some pseudoscience on it and relax. It’s done for Reasons now.

      • JackFrostNCola
        4
        10 months ago

        Man, Black Mirror just writes itself these days

  • @cosmicrookie
    32
    10 months ago

    We were just following orders (from the AI).

    It's almost funny, isn't it?

    • @ceenote
      17
      10 months ago

      Skynet won’t need terminators. Fascists are much cheaper.

    • @cone_zombie
      8
      10 months ago
      if is_hamas(new_target):
      
          x1 = new_target.x - 1000
          y1 = new_target.y - 1000
          x2 = new_target.x + 1000
          y2 = new_target.y + 1000
      
          airstrike(x1, y1, x2, y2, phosphorus=True)
      
  • Flying Squid
    23
    10 months ago

    Maybe don’t use something that is rarely discussed without using the word “hallucination” in your plans to FUCKING KILL PEOPLE?

        • @[email protected]
          0
          10 months ago

          LLMs hallucinate all the time. The hallucination is the feature. Depending on how you design the neural network, you can get an AI that doesn't hallucinate. LLMs have to hallucinate, because they're mimicking human speech patterns and predicting one of many possible responses.

          A model that tries to predict locations of people likely wouldn’t work like that.

  • @Whirling_Cloudburst
    17
    10 months ago

    We warned the world over ten years ago that this shit was going to happen. It will only get worse when AI drone swarms can be deployed on the cheap.

  • PugJesus
    16
    10 months ago

    Responding to the publication of the testimonies in +972 and Local Call, the IDF said in a statement that its operations were carried out in accordance with the rules of proportionality under international law. It said dumb bombs are “standard weaponry” that are used by IDF pilots in a manner that ensures “a high level of precision”.

    Fucking lmao

  • ivanafterall
    14
    10 months ago

    I wonder how many of the “Hamas targets” are children? Is it higher or lower than 36,999?

  • AutoTL;DR (bot)
    3
    10 months ago
    This is the best summary I could come up with:


    The Israeli military’s bombing campaign in Gaza used a previously undisclosed AI-powered database that at one stage identified 37,000 potential targets based on their apparent links to Hamas, according to intelligence sources involved in the war.

    In addition to talking about their use of the AI system, called Lavender, the intelligence sources claim that Israeli military officials permitted large numbers of Palestinian civilians to be killed, particularly during the early weeks and months of the conflict.

    Israel’s use of powerful AI systems in its war on Hamas has entered uncharted territory for advanced warfare, raising a host of legal and moral questions, and transforming the relationship between military personnel and machines.

    The testimony from the six intelligence officers, all who have been involved in using AI systems to identify Hamas and Palestinian Islamic Jihad (PIJ) targets in the war, was given to the journalist Yuval Abraham for a report published by the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call.

    According to conflict experts, if Israel has been using dumb bombs to flatten the homes of thousands of Palestinians who were linked, with the assistance of AI, to militant groups in Gaza, that could help explain the shockingly high death toll in the war.

    Experts in international humanitarian law who spoke to the Guardian expressed alarm at accounts of the IDF accepting and pre-authorising collateral damage ratios as high as 20 civilians, particularly for lower-ranking militants.


    The original article contains 2,185 words, the summary contains 238 words. Saved 89%. I’m a bot and I’m open source!

  • @zerog_bandit
    -23
    10 months ago

    It sounds sinister until you remember that Hamas wipes its ass with the Geneva Convention and regularly disguises fighters as civilians.

    • FenrirIII
      17
      10 months ago

      But does it dress them as children? Stop making excuses for genocide