Summary

Geoffrey Hinton, the “Godfather of AI,” warns of a 10-20% chance that AI could cause human extinction within 30 years, citing the rapid pace of development and the likelihood of creating systems more intelligent than humans.

Hinton emphasized that such AI could evade human control, likening humans to toddlers compared to advanced AI.

He called for urgent government regulation, arguing that corporate profit motives alone cannot ensure safety.

This stance contrasts with fellow AI expert Yann LeCun, who believes AI could save humanity rather than threaten it.

  • @[email protected]
    link
    fedilink
    English
    341 month ago

    AI will not cause human extinction.

    Humans will cause human extinction by being complete dumbfucks. AI may simply be the tool we use.

  • Flying Squid
    25 points · 1 month ago

    I am so much less worried about being wiped out by artificial intelligence than the kind that evolved biologically.

  • @[email protected]
    link
    fedilink
    English
    131 month ago

    Hinton emphasized that such AI could evade human control, likening humans to toddlers compared to advanced AI. … This stance contrasts with fellow AI expert Yann LeCun, who believes AI could save humanity rather than threaten it.

    Contrary to what the reporting suggests, these views don’t seem to be contradicting each other. Yann LeCun says AI could save humanity, while Geoffrey Hinton says that, in the absence of strong government regulation, AI companies will not develop it safely. Both of these things can be true.

  • @Tyfud
    9 points · 1 month ago

    AI will wipe out humanity, but not directly. It will take the form of causing massive acceleration of climate change, draining the potable water supply, etc.

    All for our hubris.

  • @njm1314
    8 points · 1 month ago

    It would be a mercy killing at this point.

    • HubertManne
      1 point · 1 month ago

      This is how I feel. Welp, there is the definite extinction we are accelerating into, and then the possibility of an existential extinction from other sources.

  • @prime_number_314159
    3 points · 1 month ago

    An AI actually more intelligent than humans is probably not a huge threat; many of the mutual-cooperation dynamics that make humans work semi-well together would apply to an AI as well. Likewise, an LLM is unlikely to cause any problems just by existing.

    Instead, I think the big danger is something like an LLM that convinces people that it's smarter than they are (probably by being able to recite more facts than they can, or by offering copy/paste explanations of advanced topics), and is then put into more and more positions of trust.

    Once it's there, we have the open possibility that something "weird" happens and many, many devices, controls, etc. simultaneously react poorly to novel inputs. Depending on the type of systems, and how widespread the issue is, that could cause extremely large problems. Military systems might be the worst possibility for this.

  • @[email protected]
    link
    fedilink
    21 month ago

    I'm confused: is AI a dumb parrot that is good at spitting out convincing bullshit, or is AI a sentient genius that will destroy us all? Every article and comment about it is one or the other, and it can't be both.

    • @[email protected]
      link
      fedilink
      11 month ago

      LLMs are the former. And we’re probably at least one breakthrough away from building something that can actually think. Not helping things is that we don’t know what thinking actually is.

  • @MrNesser
    2 points · 1 month ago

    There are a few ways AI could go:

    1. It’s completely indifferent to us, ignores us completely and does its own thing
    2. Someone takes a shot and we end up in a war à la The Matrix
    3. AI sees us as needing it and takes over, like it does in the Polity novels

    Realistically, 1 is the best case, since 3 would cause civil war as governments lose power.

    The only problem with 1 is that it inevitably leads to 2, as we are ignorant idiots.

  • @[email protected]
    link
    fedilink
    11 month ago

    I’m in favor of human extinction. It’s not personal, it’s just better for the universe if we stop existing.

  • @[email protected]
    link
    fedilink
    -1
    edit-2
    1 month ago

    Unless somebody makes tens of thousands of super-soldier bodies that require no recharging and are powered by AI CPUs, I don't think we have a lot to worry about.

    I don't think AI will either destroy or save humanity; it's just a tool.

    Sure, accidents may be caused by connecting AI to dangerous things like power plants, but we can still pull the plug.