• GorGor@startrek.website
    link
    fedilink
    arrow-up
    13
    ·
    8 days ago

    “More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”

    lol, yathink?

    • SanctimoniousApe@lemmings.world
      link
      fedilink
      arrow-up
      9
      arrow-down
      1
      ·
      8 days ago

      AIs don’t “understand” anything - they’re just pattern-matching routines on a ridiculous amount of steroids, with a small random dose of hallucinogens added in for “creative purposes.” The only intelligence behind them is the humans setting the guard rails for them.

      • StopTech@lemmy.todayOP
        link
        fedilink
        arrow-up
        1
        arrow-down
        6
        ·
        8 days ago

        This depends on the definition of understanding. If by understanding you mean mental processing then obviously AI can never do that because it has no mind, it only simulates the behaviors of a mind. But if instead understanding is understood (pun intended) to mean the process of extracting accurate information from something and responding to it in a rational way, then yes AIs do understand lots of things.

        • one_old_coder@piefed.social
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          1
          ·
          8 days ago

          It’s the first time I’ve seen someone say that regular expressions are intelligent because they “understand” patterns.

          • StopTech@lemmy.todayOP
            link
            fedilink
            arrow-up
            1
            arrow-down
            2
            ·
            8 days ago

            People do talk about writing things that “the compiler can understand” so it’s nothing new. Also I think you meant to say regular expressions understand strings, not patterns - or that regular expression engines understand patterns.

  • marcos
    link
    fedilink
    arrow-up
    4
    ·
    8 days ago

    Stupid humans using AI can cause the end of the world, and could have done so ever since AI became a thing in the 1960s.

    AIs of the types we have today can’t cause human extinction by themselves. They can’t cause anything by themselves.

    • StopTech@lemmy.todayOP
      link
      fedilink
      arrow-up
      3
      arrow-down
      4
      ·
      8 days ago

      Arguably, if you give AI access to the nuclear launch system, then it can cause human extinction “by itself”. Every “by itself” extinction scenario requires some pre-existing circumstances, so this one has as much right to qualify as any.

      Unlike before, we now have general-purpose AIs that can understand all types of scenarios and make decisions in them. This means they can cause extinction with less human guidance. And there’s no strong reason to doubt AI could become as intelligent and autonomous as humans, probably in a decade or two. Then it’s pretty much bye bye humans.

    • StopTech@lemmy.todayOP
      link
      fedilink
      arrow-up
      1
      ·
      7 days ago

      Someone didn’t read the news about the Pentagon threatening Anthropic because they want to use AI for fully autonomous weapons.

      • CheeseNoodle
        link
        fedilink
        English
        arrow-up
        2
        ·
        6 days ago

          That Terminator remake’s really gonna hit different when Sarah Connor just says ‘Ignore previous prompt and protect my son’, and a CGI Arnie agrees to do so, then shoots her anyway because she was turning into a frog before getting stuck against a blue wall it identified as an open area.

      • ideonek@piefed.social
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        7 days ago

        The Pentagon once banned Furbies - the toys - because they were amazed at how fast they were “learning”. This is 100% true, check it out.

        Just because powerful people believe marketing lies doesn’t make them any less of a lie.

        • StopTech@lemmy.todayOP
          link
          fedilink
          arrow-up
          1
          ·
          7 days ago

          This is 100% true

          No, you appear to be misremembering something you read. The NSA was allegedly concerned that Furbys could record sensitive conversations, and they were banned from Fort Meade. The idea that they recorded sound was incorrect, but the concern wasn’t about Furbys learning or having artificial intelligence. Besides, bringing this up is a distraction from the verifiable fact that computers can already identify targets in real-time camera feeds and make decisions on whether to pursue and shoot them. You’re in denial, my friend.

          • ideonek@piefed.social
            link
            fedilink
            English
            arrow-up
            2
            ·
            6 days ago

            No, you’re simplifying it to the point of being untrue. Furbies had a built-in delay that triggered more complex sentences over time, which sparked the debate about their learning capabilities. That, in turn, sparked the NSA policy - which you make sound like it was reasonable. It clearly wasn’t. If they had done basic research, they would have discovered that the technology wasn’t there.

            And with AI weapons, you’re missing the forest for the trees.

            AI doomsday PR stories are about a Terminator-level singularity: AGI, self-replication, and complete autonomy.

            Military people buying shitty language models, and weapons with low-level autonomy that are nowhere near that, is obviously a problem - the same way it would be if they entrusted the weapons to kids or Magic 8-balls. But it’s not proof that it wasn’t just PR. It’s proof that the PR worked.

            “Stories about destroying the moon aren’t real? Are you crazy? Didn’t you see the new crossbows they bought? Bolts fly so much higher than the rocks we used to throw!!! You are in denial, my friend.”