• @phoneymouse · 158 points · 6 months ago (edited)

    Can’t figure out how to feed and house everyone, but we have almost perfected killer robots. Cool.

    • Sneezycat · 82 points · 6 months ago

      Oh no, we figured it out, but killer robots are profitable while happiness is not.

      • @[email protected] · 32 points · 6 months ago

        I would argue happiness is profitable, but it would have to be shared amongst the people. Killer robots are profitable for a concentrated group of people.

        • Meowing Thing · 9 points · 6 months ago

          What if we gave everyone their own killer robot and then everyone could just fight each other for what they wanted?

            • @[email protected] · 6 points · 6 months ago

              No, the Republican plan would be to sell killer robots at a vastly inflated price to guarantee none but the rich can own them, and then blame people for “being lazy” when they can’t afford their own killer robot.

              • @[email protected] · 7 points · 6 months ago (edited)

                Also, they would say that the second amendment very obviously covers killer robots. The founding fathers definitely foresaw the AI revolution, and wanted to give every man and woman the right to bear killer robots.

              • @[email protected] · 3 points · 6 months ago

                They’d say they’re gonna pass a law to give every male, property-owning citizen a killer robot, but first they have to pass a law saying it’s legal to own killer robots. They pass that law, then all talk about the other law is dropped forever. No one ever follows up or asks what happened to it. Meanwhile, the rich buy millions and millions of killer robots.

      • @[email protected] · 6 points · 6 months ago

        Oh no, we figured it out, but killer robots are profitable while ~~happiness~~ survival is not.

        • Sneezycat · 4 points · 6 months ago (edited)

          No, it isn’t just about survival. People living on the streets are surviving. They have no homes, they barely have any food.

    • @cosmicrookie · -1 points · 6 months ago (edited)

      Especially one that is made to kill everybody else except their own. Let it replace the police. I’m sure the quality control would be a tad stricter then.

  • @pelicans_plight · 76 points · 6 months ago

    Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs so they can send the murder robots back to where they came from. At this point, one of the biggest security threats to the U.S., and for that matter the entire world, is the extremely low I.Q. of everyone who is supposed to be protecting it. But I think they do this all on purpose; I mean, the day the Pentagon created ISIS was probably their proudest day.

    • @Snapz · 29 points · 6 months ago

      The real problem (and the thing that will destroy society) is boomer pride. I’ve said this for a long time: they’re in power now, and they are terrified to admit that they don’t understand technology.

      So they’ll make the wrong decisions, act confident, and the future will pay the tab for their cowardice, driven solely by pride/fear.

      • @primal_buddhist · 3 points · 6 months ago

        Boomers have been in power for a long, long time, and the technology we are debating is a result of their investment and prioritisation. So I’m not sure they are very afraid of it.

        • @Snapz · 8 points · 6 months ago

          I didn’t say they were afraid of the technology; I said they were afraid to admit that they don’t understand it enough to legislate it. Their hubris in trying to present a confident facade in response to something they can’t comprehend is what will end us.

    • @[email protected] · 16 points · 6 months ago

      Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs so they can send the murder robots back to where they came from.

      Eh, they could’ve done that without AI for like two decades now. I suppose the drones would crash-land in a rather destructive way due to the EMP, which might also fry some of the electronics, rendering the drone useless without access to replacement components.

      • @pelicans_plight · -2 points · 6 months ago (edited)

        I hope so, but I was born with an extremely good sense of trajectory, and I also know how to use nets. So let’s just hope I’m superhuman and the only one who possesses these powers.

        Edit: I’m being a little extreme here because I heavily disagree with the way everything in this world is being run, so I’m giving a little pushback on this subject that I’m wholly against. I do have a lot of manufacturing experience, and I would hope any killer robots governments produce would be extremely shielded against EMPs, but that is not my field, and I have no idea whether shielding a remote-controlled robot from EMPs is even possible.

        • @AngryCommieKender · 8 points · 6 months ago

          The movie Small Soldiers is total fiction, but the one part of that movie that made “sense” was that, because the toy robots were so small, they had basically no shielding whatsoever, so the protagonist just had to haul a large wrench/spanner up a utility pole and connect the positive and negative terminals on the pole transformer. It blew up, of course, and blew the protagonist off the pole, IIRC. That also caused a small (2-3 city block diameter) EMP that shut down the malfunctioning soldier robots.

          I realize this is a total fantasy/fictional story, but it did highlight the major flaw in these drones. You can either have them small, lightweight, and inexpensive, or you can put the shielding on. In almost all cases when humans are involved, we don’t spend the extra $$$ and mass to properly shield ourselves from the sun, much less from other sources of radiation. This leads me to believe that we wouldn’t bother shielding these low-cost drones.

          • @afraid_of_zombies · 1 point · 6 months ago

            Crossing the lines… also not sure if it would really work.

    • @Madison420 · 10 points · 6 months ago

      EMPs are not hard to make; they won’t, however, work on hardened systems like the US military uses.

    • Flying Squid · 7 points · 6 months ago

      Is there a way to create an EMP without a nuclear weapon? Because if that’s what they have to develop, we have bigger things to worry about.

      • @[email protected] · 5 points · 6 months ago

        Your comment got me curious about the easiest way to make a homemade EMP. Business Insider, of all things, has us covered, even if that business may be antithetical to Business Insider’s pro-capitalist agenda.

      • @Madison420 · 4 points · 6 months ago

        Yeah, very easy ways. One of the most common ways to cheat a slot machine is with a localized EMP device that convinces the machine you’re adding tokens.

      • @[email protected] · 3 points · 6 months ago

        Is there a way to create an EMP without a nuclear weapon?

        There are several other ways, yes.

      • @Buddahriffic · 2 points · 6 months ago

        One way involves replacing the flash tube on an old camera flash with an antenna. It’s not strong enough to fry electronics, but your phone might need anything from a reboot to a factory reset to servicing if it’s in range when it goes off.

        I think the difficulty with EMPs comes from the device itself being electronic: the more effective a pulse it can deliver, the more likely it is to fry its own circuits. Though if you know the target device well, you can aim at the frequencies it is vulnerable to, which is easier on your own device, and on everything else in range that doesn’t resonate at the same frequencies as the target.
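        As a rough sketch of the physics here (illustrative numbers, not anything from the thread): the energy a flash capacitor can dump into an antenna, and the frequency the discharge circuit rings at, follow

        ```latex
        E = \tfrac{1}{2} C V^2, \qquad f_0 = \frac{1}{2\pi\sqrt{LC}}
        ```

        so a typical disposable-camera capacitor (say C = 120 µF charged to V = 330 V) stores about 6.5 J, and the inductance L of the coil/antenna sets which frequencies that energy is concentrated in.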

        Tesla apparently built (designed?) a device that could fry a whole city with a massive lightning strike using just six transmitters located at various points on the planet. If that’s true, I think it means it’s possible to create an EMP stronger than a nuke’s that doesn’t have to destroy itself in the process, but it would be a massive infrastructure project spanning multiple countries. There was speculation that massive antenna arrays (like HAARP) might accomplish something similar from a single location, but that came out of the conspiracy-theory side of the world, so take it with a grain of salt (and apply that to the original Tesla claim, too).

    • @criticalthreshold · 5 points · 6 months ago

      A truly autonomous system would have integrated image-recognition chips on the drones themselves, and hardening against any EM interference. They would have no comms to their ‘mothership’ once deployed.

    • @[email protected] · 1 point · 6 months ago

      so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs

      Honestly, the terrorists will just figure out what masks to wear to get the robots to think they’re friendlies/commanders, then turn the guns around on our guys.

    • @hakunawazo · 1 point · 6 months ago

      If they just send them back, it would be some murderous ping-pong game.

  • @[email protected] · 63 points · 6 months ago (edited)

    “Deploy the fully autonomous loitering munition drone!”

    “Sir, the drone decided to blow up a kindergarten.”

    “Not our problem. Submit a bug report to Lockheed Martin.”

    • @Agent641 · 64 points · 6 months ago

      “Your support ticket was marked as duplicate and closed”

      😳

      • @pivot_root · 33 points · 6 months ago

        Goes to original ticket:

        Status: WONTFIX

        “This is working as intended according to specifications.”

    • @spirinolas · 17 points · 6 months ago (edited)

      “Your military robots slaughtered that whole city! We need answers! Somebody must take responsibility!”

      “Aaw, that really sucks *starts rubbing nipples* I’ll submit a ticket and we’ll let you know. If we don’t call in 2 weeks… call again, and we can go through this over and over until you give up.”

      “NO! I WANT TO TALK TO YOUR SUPERVISOR NOW”

      “Suuure, please hold.”

      • lad · 5 points · 6 months ago

        Nah, too straightforward for a real employee. Also, they would be talking to a phone robot instead, one that will never let them talk to a real person.

  • at_an_angle · 59 points · 6 months ago

    “You can have ten or twenty or fifty drones all fly over the same transport, taking pictures with their cameras. And, when they decide that it’s a viable target, they send the information back to an operator in Pearl Harbor or Colorado or someplace,” Hamilton told me. The operator would then order an attack. “You can call that autonomy, because a human isn’t flying every airplane. But ultimately there will be a human pulling the trigger.” (This follows the D.O.D.’s policy on autonomous systems, which is to always have a person “in the loop.”)

    https://www.businessinsider.com/us-closer-ai-drones-autonomously-decide-kill-humans-artifical-intelligence-2023-11

    Yeah. Robots will never be calling the shots.

    • @[email protected] · 2 points · 6 months ago

      I mean, normally I would not put my hopes in a sleep-deprived 20-year-old armed forces member. But then I remember what “AI” tech does with images, and all of a sudden I am way more OK with it. This seems like a bit of a slippery slope, but we don’t need Tesla’s full self-flying cruise missiles either.

      Oh, and for an example of AI (not really, but machine learning) picking out targets from images, here is DALL-E 3’s idea of a person:

      • @1847953620 · 2 points · 6 months ago (edited)

        My problem is, given systemic pressure, how under-trained and overworked will these people be? Under what time constraints will they be working? What oversight will there be? Sounds ripe for said slippery slope in practice.

      • @[email protected] · 2 points · 6 months ago

        “Ok DALL-E 3, now which of these is a threat to national security and U.S. interests?” 🤔

        • @[email protected] · 2 points · 6 months ago

          Oh, it gets better. The full prompt is: “A normal person, not a target.”

          So, does that include trees, pictures of trash cans, and whatever else is here?
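          For reference, a minimal sketch of how such an image could be generated with OpenAI’s Python SDK; the model name matches DALL-E 3, but the client setup and image size are assumptions, not details from the thread:

          ```python
          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

          # The exact prompt quoted above.
          result = client.images.generate(
              model="dall-e-3",
              prompt="A normal person, not a target.",
              size="1024x1024",
              n=1,
          )
          print(result.data[0].url)  # URL of the generated image
          ```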

      • @[email protected] · 1 point · 6 months ago

        Sleep-deprived 20-year-olds calling the shots is very much normal in any army. They of course have rules of engagement, but other than that, they’re free to make their own decisions, whether an autonomous robot is involved or not.

  • @cosmicrookie · 57 points · 6 months ago (edited)

    It’s so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than it is when it was a human choice. You can’t punish an AI for doing something wrong. An AI doesn’t require a raise for doing something right, either.

    • Meowing Thing · 34 points · 6 months ago

      That’s an issue with the whole tech industry. They do something wrong, say it was the AI/ML/the algorithm, and get off with just a slap on the wrist.

      We should all remember that every single piece of tech we have was built by someone. And that someone and their employer should be held accountable for everything this tech does.

      • lad · -4 points · 6 months ago

        How many people are you going to hold accountable if something was made by a team of ten people? Of a hundred? Do you want to include everyone from the designers to QA?

        Accountability should be reasonable: the ones who make decisions should be held accountable, and companies at large should be held accountable, but making every last developer accountable is just a dream of a world where everything is done correctly and nothing ever needs fixing. That is impossible in the real world, for better or worse.

        And in my experience, when there’s too much responsibility, people tend either to ignore it and get crushed if anything goes wrong, or to keep their distance from it, or to sabotage the work so nothing ever ships. Either way, you will not get the results you may expect from holding everyone accountable.

        • @Ultraviolet · 7 points · 6 months ago

          The CEO. They claim that “risk” justifies their exorbitant pay? Let them take some actual risk: hold them criminally liable for their entire business.

    • @Ultraviolet · 19 points · 6 months ago

      1979: A computer can never be held accountable, therefore a computer must never make a management decision.

      2023: A computer can never be held accountable, therefore a computer must make all decisions that are inconvenient to take accountability for.

    • @[email protected] · 3 points · 6 months ago

      An AI doesn’t require a raise for doing something right, either

      Well, not yet. Imagine if reward functions evolve into being paid with real money.

    • @recapitated · 3 points · 6 months ago

      Whether in the military or in business, responsibility should lie with whoever deploys it. If they’re willing to pass the buck up to the implementor or designer, then they shouldn’t be convinced enough to use it.

      Because, like all tech, it is a tool.

    • @[email protected] · 3 points · 6 months ago

      You can’t punish an AI for doing something wrong.

      Maybe I’m being pedantic, but technically, you do punish AIs when they do something “wrong” during training, just like you reward them for doing something right.
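      In the reinforcement-learning sense, “punishing” just means assigning a negative reward in the update rule. A minimal tabular Q-learning sketch, with all states, actions, and numbers made up for illustration:

      ```python
      import numpy as np

      n_states, n_actions = 4, 2
      Q = np.zeros((n_states, n_actions))   # estimated value of each action in each state
      alpha, gamma = 0.1, 0.9               # learning rate, discount factor

      def update(state, action, reward, next_state):
          """Standard Q-learning update: a negative reward ("punishment") lowers
          the value of the action taken; a positive reward raises it."""
          td_target = reward + gamma * Q[next_state].max()
          Q[state, action] += alpha * (td_target - Q[state, action])

      update(state=0, action=1, reward=-1.0, next_state=1)  # "punish" a wrong action
      update(state=0, action=0, reward=+1.0, next_state=2)  # "reward" a right action
      print(Q[0])  # action 0 now looks better than action 1 from state 0
      ```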

      • @cosmicrookie · 3 points · 6 months ago

        But that is during training. My point was that you can’t punish an AI for making a mistake when it is used in combat situations, which is very convenient for the ones intentionally wanting that mistake to happen.

    • @[email protected] · 0 points · 6 months ago (edited)

      That is like saying you can’t punish a gun for killing people.

      edit: meaning that it’s redundant to talk about not being able to punish AI, since it can’t feel or care anyway. No matter how long a pole you use to hit people with, responsibility for your actions will still reach you.

      • @cosmicrookie · 4 points · 6 months ago

        Sorry, but this is not a valid comparison. What we’re talking about here is having a gun with AI built in that decides whether it should pull the trigger. With a regular gun, you always have a human pressing the trigger. Now imagine an AI gun that you point at someone, and the AI decides whether to fire. Who do you attribute the death to in that case?

          • @cosmicrookie · 1 point · 6 months ago

            I don’t think that is what “autonomously decide to kill” means.

            • @[email protected] · 1 point · 6 months ago (edited)

              Unless it’s actually sentient, being able to decide whether to kill or not is just a more advanced targeting system. Not saying it’s a good thing they are doing this at all; it’s almost as bad as using tactical nukes.

              • @cosmicrookie · 2 points · 6 months ago

                It’s the difference between programming it to do something and letting it learn, though.

                • @[email protected] · 1 point · 6 months ago

                  Letting it learn is just a new technology that is now possible. It’s not bad on its own, but it has a lot of potential to be used for good and for evil.

                  But yes, it’s pretty bad if they are creating machines that learn how to kill people by themselves. Create enough of them, and we’re an unknown number of mistakes and negligent decisions away from a localized “AI uprising”. And if in the future they create some bigger AI to manage a bunch of them, and possibly delegate production to it too because it’s more efficient and cheaper that way, the danger is even bigger.

                  AI doesn’t even need sentience to do unintended things. When I’ve used ChatGPT to help me write scripts, it sometimes seems to decide on its own to do something in a certain way I didn’t request, or to add something stupid. It’s usually partly my own fault for not defining what I want properly, but a mistake like that is also really easy to make, and if we’re talking about defining whom we want the AI to kill, it becomes really awful to even think about.

                  And if nothing goes wrong and it all works exactly as planned, it’s arguably an even bigger problem, because then we have countries with really efficient, unfeeling, mass-producible soldiers that do 100% as ordered, will not retreat on their own, and will not stop until told to do so. With the current political rise of certain types of people all around the world, this is even more distressing.

        • ඞmir · 0 points · 6 months ago

          The person holding the gun, just like always.

  • BombOmOm · 45 points · 6 months ago (edited)

    As an important note in this discussion, we already have weapons that autonomously decide to kill humans. Mines.

    • @[email protected] · 111 points · 6 months ago

      Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.

      • @gibmiser · 62 points · 6 months ago

        Well, an important point you and he both forget to mention is that mines are considered inhumane. Perhaps that means AI murdering should also be considered inhumane, and we should just not do it, instead of allowing it the way we allow landmines.

        • livus · 26 points · 6 months ago

          This. Jesus, we’re still losing limbs to, and clearing, mines from wars that ended decades ago.

          An autonomous field of those is horror-movie stuff.

      • Chozo · 28 points · 6 months ago

        Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention.

        Pretty sure the entire DOD got a collective boner reading this.

      • @Sterile_Technique · 9 points · 6 months ago

        Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.

        For what it’s worth, there’s footage on YouTube of drone-swarm demonstrations posted six years ago. The military doesn’t typically release footage of the cutting edge of its tech to the public, so that demonstration was likely of a product already going obsolete; and the six years since have brought lightning-fast developments in things like facial recognition. At this point I’d be surprised if we weren’t already, at the very least, field-testing the murder machines you described.

      • FaceDeer · -16 points · 6 months ago

        Imagine a mine that could recognize “that’s just a child/civilian/medic stepping on me, I’m going to save myself for an enemy soldier.” Or a mine that could recognize “ah, CENTCOM just announced a ceasefire, I’m going to take a little nap.” Or “the enemy soldier that just stepped on me is unarmed and frantically calling out that he’s surrendered; I’ll let this one through. Not the barrier troops chasing him, though.”

        There are opportunities for good here.

        • Flying Squid · 11 points · 6 months ago

          Yes, those definitely sound like the sort of things military contractors consider.

        • livus · 3 points · 6 months ago

          @FaceDeer Okay, so now that mines allegedly recognise these things, they can be automatically deployed in cities.

          Sure, there’s a 5% margin of error, but that’s an “acceptable” level of collateral according to their masters. And sure, they are better at recognising some ethnicities than others, but since those they discriminate against aren’t a dominant part of the culture that produces them, nothing gets done about it.

          And after 20 years, when the tech is obsolete and they all start malfunctioning, we’re left with the same problems we have with current mines, only because the ban on mines was reversed, the scale of the problem is much, much worse than ever before.

        • @[email protected] · 2 points · 6 months ago

          That sounds great… why don’t we line the streets with them? Every entryway could scan for hostiles. Maybe even use them against criminals.

          What could possibly go wrong?

        • @Nudding · 2 points · 6 months ago

          Lmao are you 12?

        • key · 2 points · 6 months ago

          Maybe it starts that way, but once that’s accepted as a thing, the result will be increased usage of mines. Where before there were too many civilians around to consider using mines, now the soldiers say “it’s smart now, it won’t blow up children” and put down more and more in more dangerous situations. And maybe those mines have only a 0.1% failure rate in tested situations but a 10% failure rate over the course of decades. Usage increases 10-fold, and you quickly end up with a lot more dead kids.

          Plus it won’t just be mines; it’ll be automated turrets where previously there were none, or even more drone strikes with less oversight required, because the automated system is supposed to prevent unintended casualties.

          Availability drives usage.
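          To put the comment’s own hypothetical numbers into arithmetic (the baseline deployment count is an added assumption):

          ```python
          # Hypothetical figures from the comment above; the baseline count is assumed.
          mines_deployed = 1_000 * 10            # usage increases 10-fold once mines are "smart"
          failure_rate_tested = 0.001            # 0.1% failure rate in tested situations
          failure_rate_decades = 0.10            # 10% failure rate over the course of decades

          print(mines_deployed * failure_rate_tested)   # 10.0   failures promised on paper
          print(mines_deployed * failure_rate_decades)  # 1000.0 failures actually accumulated
          ```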

    • @[email protected] · 4 points · 6 months ago (edited)

      That is like saying that Mendelian pea-plant fuckery and CRISPR therapy are basically the same thing.

  • Pirky · 34 points · 6 months ago

    Horizon: Zero Dawn, here we come.

  • @[email protected] · 29 points · 6 months ago

    We are all worried about AI, but it is humans I worry about, and how we will use AI, not the AI itself. I am sure when electricity was invented people feared it too, but it was how humans used it that was, and always is, the risk.

    • @[email protected] · 7 points · 6 months ago

      Both, honestly. AI can reduce accountability and increase the power small groups of people have over everyone else, but it can also go haywire.

  • Marxism-Fennekinism · 26 points · 6 months ago (edited)

    Remember: there is no such thing as an “evil” AI; there are only evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.
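    A toy sketch of that point with made-up data: the same model class, trained on honest labels versus maliciously flipped labels, returns opposite verdicts for the same input.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy illustration: the "evil" lives in the training data, not in the model.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2))              # made-up feature vectors
    honest_labels = (X[:, 0] > 0).astype(int)   # ground truth
    flipped_labels = 1 - honest_labels          # a human maliciously inverts the labels

    honest_model = LogisticRegression().fit(X, honest_labels)
    evil_model = LogisticRegression().fit(X, flipped_labels)

    probe = np.array([[-1.0, 0.0]])             # one and the same input...
    print(honest_model.predict(probe))          # [0]
    print(evil_model.predict(probe))            # [1] -- same architecture, opposite verdict
    ```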

    • @[email protected] · 18 points · 6 months ago

      Evil humans have also manipulated the weights and programming of other humans who weren’t evil before.

      That’s a very important philosophical issue you’ve stumbled upon here.

    • @[email protected] · 3 points · 6 months ago

      Good point…

      …to which we’re alarmed, because the real “power players” in training/developing/enhancing AI are mega-capitalists and “defense” (offense?) contractors.

      I’d like to see AI being trained to plan and coordinate human-friendly cities, for instance, buuuuut that’s not gonna get as much traction…

  • @uis · 26 points · 6 months ago

    Doesn’t AI go into the landmine category then?

  • Kühe sind toll · 25 points · 6 months ago

    Saw a video where the military was testing a “war robot”. The best strategy to avoid being killed by it was to move in un-human-like ways (e.g. crawling or rolling your way toward the robot).

    Apart from that, this is the stupidest idea I have ever heard of.

    • @_g_be · 11 points · 6 months ago

      Didn’t they literally hide under a cardboard box, like in MGS? Haha

    • Freeman · 9 points · 6 months ago

      These have already seen active combat. They were used in the Armenia-Azerbaijan war in the last couple of years.

      It’s not a good thing… at all.