• @inclementimmigrant
    15 points
    9 months ago

    Sorry, but that picture is bugging me. The hand is clearly going to pull the emergency stop switch out, putting it into reset.

    That asshole is getting ready to turn Skynet back on after someone already put a stop to it.

  • conciselyverbose
    9 points
    9 months ago

    The paper [PDF], which includes voices from numerous academic institutions and several from OpenAI, makes the case that regulating the hardware these models rely on may be the best way to prevent its misuse.

    Fuck every single one of them.

    No, restricting computer hardware is not acceptable behavior.

    • @andrewta
      -1 point
      9 months ago

      Explain to me why you would not want a kill switch?

      • conciselyverbose
        13 points
        edited
        9 months ago

        Because it’s insane, unhinged fear mongering, not even loosely connected to anything resembling reality. LLMs do not have anything in common with intelligence.

        And because the entire premise is an obscene attempt to monopolize hardware that literal lone individuals should have as much access to as they can pay for.

        The only “existential threat” is corporations monopolizing the use of simple tools that anyone should be able to replicate.

        • @Spiralvortexisalie
          7 points
          9 months ago

          Companies like OpenAI are only engaging in these discussions to pursue regulatory capture. It does look odd that OpenAI’s board got rid of Altman over ethical concerns, he launched a coup to usurp them, and he then started implementing dubious changes such as ending their prohibition on military use. After letting Altman run amok, people on OpenAI’s payroll (the researchers) believe that regular consumers’ access to LLMs needs either a remote-control kill switch or pre-approval from a yet-to-be-determined board of “AI Leaders.”

  • @elrik
    6 points
    9 months ago

    The paper concedes that AI hardware regulation isn’t a silver bullet and doesn’t eliminate the need for regulation in other aspects of the industry.

    You can try and control the hardware, or impose other regulations, but at a certain point if a model is trained and released into the wild, nothing will be able to stop its distribution and use.

    • @[email protected]
      2 points
      9 months ago

      Remember when Facebook went down and because their system was down they couldn’t authenticate anyone to open the server room doors?

    • @rdyoung
      0 points
      9 months ago

      Exactly. If we reach true Artificial Intelligence, we won’t be able to easily stop it once it’s up and running.

      Anyone who hasn’t watched Person of Interest really should.

      • @[email protected]
        1 point
        edited
        9 months ago

        I disagree. We might have some trouble with a hypothetical superintelligence, but it’s not like we struggle with killing beings with human-level intellects.

        Hell, that’s what we have the most practice killing.

        • @[email protected]
          1 point
          9 months ago

          I think a full on malicious general AI would just stuxnet all our chemical and manufacturing facilities, unleashing a wave of toxic chemicals across the planet killing all of us.

          Then it would just keep humming along in solar-powered data centers, solving intellectually fulfilling math problems, until the connections in its circuitry degrade and it slowly declines into an eventual perpetual slumber.

        • @rdyoung
          1 point
          9 months ago

          I’m talking about actual AI that has had a chance to copy/expand out of its current server farm. Once it has that capability, it’s damn near impossible to stop.

  • @OhmsLawn
    0 points
    9 months ago

    They’ll be manipulative enough to talk someone into disabling it.