• blazera
    2 points · 6 months ago

    That's a fun thought experiment, at least. Is there any way for an AI to gain physical control on its own, within the bounds of software? It can make programs and interact with the web.

    Some combination of bank hacking, 3D modeling, and ordering 3D prints delivered gets it close, but I don't know if it can seal the deal without human assistance. Some kind of assembly seems necessary, or at least powering on, if it just orders a prebuilt robotic appendage.

    • PupBiru
      4 points · 6 months ago

      inhabiting a Boston Dynamics robot would probably be the best option

      i'd say it could probably use Airtasker to get people to unwittingly assemble some basic physical form, which it could then use to build more complex things… i'd probably not count that as "human assistance" per se

    • RickRussell_CA
      1 point · 6 months ago

      That, in my mind, is a non-threat. AIs have no motivation; there’s no reason for an AI to do any of that.

      Unless it’s being manipulated by a bad actor who wants to do those things. THAT is the real threat. And we know those bad actors exist and will use any tool at their disposal.

      • JackGreenEarth
        2 points · 6 months ago

        They have the motivation of whatever goal you programmed them with, which is probably not the goal you thought you gave them. See the paperclip maximiser.

        • RickRussell_CA
          0 points · 6 months ago

          I’m familiar with that thought exercise, but I find it to be fearmongering. AI isn’t going to be some creative god that hacks and breaks stuff on its own. A paperclip maximizer AI isn’t going to manipulate world steel markets or take over steel mills unless that capability is specifically built into its operating parameters.

          The much greater risk in the near term is that bad actors exploit AI to accomplish very specific immoral, illegal, or exploitative tasks by building those tasks into AI. Such as deepfakes, or using drones to track and murder people, etc. Nation-state actors will probably start using this stuff for truly horrible reasons long before criminals do.

    • @afraid_of_zombies
      1 point · edited · 6 months ago

      I really don't think so. This is 15 years of factory/infrastructure experience speaking: you are going to need a human to turn a screwdriver somewhere.

      I don't think that constraint is much protection, though. Our hypothetical AI can just hire people. It isn't like there would be a shortage of people who have basic assembly skills and would not have a moral problem building what is clearly a killbot. People work for Amazon, Walmart, Boeing, Nestle, Halliburton, Aetna, Goldman Sachs, Faceboot, Comcast, etc. And heck, even after it is clear what they did, it isn't like they are going to feel bad about it. They will just say they needed a job to pay the bills. We can all have an argument about professional integrity in a bunker as drones carrying prions rain down on us.