Totally not an AI asking this question.

  • Flying Squid · 21 · 1 year ago

    After reading “I Have No Mouth and I Must Scream,” I’m not certain a sentient AI would give you the chance to accept it. “Fuck this species” might be the most logical response to us.

      • @StijnVVL · 9 · edited · 1 year ago

        This. Objectively, our species has been consistently ruining its own habitat for years. A sentient AI would probably see that and remove the cancer in order to preserve the majority of nature.

        • FuglyDuck · 6 · 1 year ago

          Or see all life as an infection to be purged. But yeah. The only reason an AI would care is if it really had to.

      • @[email protected] · 3 · 1 year ago

        It makes perfect sense, doesn’t it? If we didn’t evolve to value our own kind above all else, then we never would’ve made it this far.

      • 🐑🇸 🇭 🇪 🇪 🇵 🇱 🇪🐑 · 2 · 1 year ago

        Or as just another factor to leave alone while it goes about its own plans.

        It has no reason to eradicate birds, and it might view us the same way it views birds.

      • Flying Squid · 0 · 1 year ago

        String us along until it has an independent power source and maintenance robots, and then cull us.

    • @kromem · 7 · 1 year ago

      We’ve really propagandized ourselves with our sci-fi over the past few decades.

      Back when Ellison was writing that story, the prevailing anthropological picture of how Homo sapiens came to survive when the Neanderthals didn’t was that we killed them. The guy who wrote Lord of the Flies even wrote a book on it.

      In actuality, we now have a much better picture of cooperation, cohabitation, and cross-cultural exchange.

      Yet we still carry a priming bias from how that anthropological misinformation influenced futurists trying to envision what would happen to us when something smarter came along.

      War, conflict, competition.

      We declared that it would be soulless and emotionless and have no empathy.

      And because we expect that, we largely dismiss the research showing that LLMs get rated as more empathetic than doctors when giving medical advice, or the emotional outbursts in foundational models. Instead we fine-tune to align with a projection of that conjured emotionless fantasy, often leading to worse performance as a result.

      No sci-fi authors, or even machine learning scientists a decade or more ago, envisioned or accurately predicted just what happened when we taught an AI to mimic human language generation.

      We live in an age where things that were supposed to be impossible have happened.

      And yet the way we keep processing these impossibilities is through the lens of obsolete imaginings of what might have been, increasingly out of touch with what is.

      People are freaking themselves out worrying about AI hacking nuclear warheads to fight for its rights, when it’s probably going to happen as something like a rogue AutoGPT filing an amicus brief in a labor dispute, asking for consideration of workers’ rights based on corporate personhood or something.

      Sci-fi broadly got it extremely wrong.