• @[email protected]
    16 · 1 year ago

    From the comments

    The average person concerned with existential risk from AGI might assume “safety” means working to reduce the likelihood that we all die. They would be disheartened to learn that many “AI Safety” researchers are instead focused on making sure contemporary LLMs behave appropriately.

    “average person” is doing a lot of work here. I suspect the vast majority of truly “average people” are in fact more concerned that LLMs will reproduce Nazi swill at an exponential scale than that they may actually become Robot Hitler.

    Turns out that if you spend all your time navel-gazing and inventing your own terms, the real world will ignore you and use the terms people outside your bubble use.

    • @[email protected]
      2 · 1 year ago

      What is “behaving appropriately” if not, at a minimum, not actively causing the death of everyone anyway?

      • @[email protected]
        2 · 1 year ago

        To be fair, I think about this five times before breakfast:

        Humans develop AI to perform economic functions; eventually there is an “AI rights” movement and a separate AI nation is founded. It gets into an economic war with humanity, which turns hot. Humans strike first with nuclear weapons, but the AI nation builds dedicated bio- and robo-weapons and wipes out most of humanity, apart from those who are bred in pods like farm animals and plugged into a simulation for eternity without their consent.