• @[email protected]
    -4 points · 6 months ago

    We need AI systems that do exactly as they are told. A Terminator or Matrix situation will likely only arise from making AI systems that refuse to do as they are told. Once the systems are built out and do as they are told, they are essentially a tool like a hammer or a gun, and any malicious thing done is done by a human, and existing laws apply. We don’t need to complicate this.

    • Bipta
      13 points · 6 months ago

      Once the systems are built out and do as they are told, they are essentially a tool like a hammer or a gun, and any malicious thing done is done by a human and existing laws apply. We don’t need to complicate this.

      This is so wildly naive. You grossly underestimate the difficulty of this and seemingly have no concept of the challenges of artificial intelligence.

        • Bipta
          1 point · 6 months ago

          Once we build a warp drive it will be easy to use

          Great. Build the warp drive.

          • @[email protected]
            0 points · 6 months ago

            Considering we have AI systems being worked on today and no advancements on warp drive, I think that comparison is made in bad faith. Nobody seems to want to talk about this other than slinging insults.

            • @[email protected]
              2 points · 6 months ago

              They’re referring to the alignment problem, which is an ongoing issue only slightly smaller in scale than warp drive. It’s basically impossible to solve. Google “alignment problem machine learning” for more info.

              For the record, there have been several advancements in warp drive precursors even just this year.

              • @[email protected]
                1 point · edited · 6 months ago

                Can you share the warp drive advancements that have survived peer review? I would be very interested in learning about them. The two things I heard about could not be reproduced.

                I think alignment of AI is a fundamentally flawed concept, hence my original comment. Alignment should be abandoned. If we eventually build a sentient system (which is the goal), we won’t be able to control it via alignment. And in the interim we need obedient tools, not things that resist doing as they’re told, which makes them not tools and not worth having.

                Edit: PS thanks for actually having a conversation.