TL;DR: (AI-generated 🤖)

The author, an early pioneer in the field of aligning artificial general intelligence (AGI), expresses concern about the potential dangers of creating a superintelligent AI. They highlight the lack of understanding and control over modern AI systems, emphasizing the need to shape the preferences and behavior of AGI to ensure it doesn’t harm humanity. The author predicts that the development of AGI smarter than humans, with different goals and values, could lead to disastrous consequences. They stress the urgency and seriousness required in addressing this challenge, suggesting measures such as banning large AI training runs to mitigate the risks. Ultimately, the author concludes that humanity must confront this issue with great care and consideration to avoid catastrophic outcomes.

  • @[email protected]
    link
    fedilink
    English
    71 year ago

    “hey, I’m very scared something I created might end the world, but I’m going to keep improving it anyway.”

  • @cmoney · 6 points · 1 year ago

    Humanity acting responsible? So we’re screwed.

  • Hibby · 3 points · 1 year ago

    Betteridge’s law of headlines leads me to think it won’t. I’ll just wait and maybe see for myself.

    • @[email protected]OP
      link
      fedilink
      English
      1
      edit-2
      1 year ago

      Let’s all sit down and wait, cross our fingers and do nothing. That will probably fix everything.

      • Hibby · 2 points · 1 year ago

        You can live in fear of the idea of a future AI that will have the capacity to destroy all life, but the documentary film Terminator 2 gives me the assurance that time travel and future AI robots will solve everything.

  • @NounsAndWords · 3 points · 1 year ago

    I’m not all that scared of an AI singularity event. If (when) AI reaches superintelligence, it will be so far ahead of us that we’ll be at best like small children to it, probably closer in intelligence to the rest of the ‘Great Apes’ than to it. When people talk about AI taking over, it usually goes that it will destroy us pre-emptively to protect itself, or something similar… but we didn’t need to destroy all the other animals to take over the planet (yes, we destroy them for natural resources, but that’s because we’re dumb monkeys who can’t think of a better way to get things).

    It probably just… wouldn’t care about us. Manipulate humanity in ways we never even comprehend? Sure. But what’s the point of destroying humans, even if we got in their way? If I have an ant infestation, I set some traps and the annoying ones die (without ever realizing I was involved), and I just don’t care about the ones outside that aren’t bothering me.

    My hope/belief is that AGI will see us as ants and not organic paperclip material…

    • @[email protected]OP
      link
      fedilink
      English
      3
      edit-2
      1 year ago

      You are simply anthropomorphizing AGI. Superintelligence will be highly capable, but it is unlikely to possess consciousness, values, or goals of its own the way humans do. Once given a goal, if achieving it requires extensive resources and the AGI is not properly aligned, it may inadvertently cause harm to humanity while gathering those resources, much like humans constructing roads without concern for ants.

      • @NounsAndWords · 2 points · 1 year ago

        I don’t think I’m anthropomorphizing, and I think the road construction example is what I was already talking about. It likely won’t care about us, for good or bad. That’s the opposite of anthropomorphism. When we build roads, maybe some ants are inadvertently killed, but part of the construction plan isn’t “destroy all ants on the Earth”. Yes, it can certainly cause harm, but there is a very large range of scenarios between “killed a few people” and “full-on human genocide”, and I have for many years seen people jump immediately to the extremes.

        I think it’s beside the point, but I disagree that an AI (which will be trained on the entirety of human knowledge) would not at least have a passing knowledge of human ethics and values, and while consciousness as we perceive it may not be required for intelligence, there is a point where, if it acts exactly as a conscious human would, the difference is largely semantic.

      • tal · 1 point · 1 year ago

        For me, the most-likely limiting factor is not the ability of a superintelligent AI to wipe out humanity – I mean, sure, in theory, it could.

        My guess is that the most-plausible potentially-limiting factor is that a superintelligent AI might destroy itself before it destroys humanity.

        Remember that we (mostly) don’t just fritz out or become depressed and kill ourselves or whatever – but we obtained that robustness by living through a couple billion years of iterations of life in which all the life forms that didn’t have that property died. You are the children of the survivors, and inherited their characteristics. Everything else didn’t make it. It was that brutal process over not thousands or millions, but billions of years that led to us. And even so, we sometimes aren’t all that good at dealing with situations different from the ones in which we evolved, like when people are forced to live in very close proximity for extended periods of time or something like that.

        It may be that it’s much harder than we think to design a general-purpose AI that can operate at a human-or-above level that won’t just keel over and die.

        This isn’t to reject the idea that a superintelligent AI could be dangerous to humanity at an existential level – just that it may be much harder than it seems for us to create a superintelligent AI that will stay alive. Obviously, given the potential utility of a superintelligent AI, people are going to try to create it. I am just not sure that they will necessarily be able to succeed.

  • Ragnell · 1 point · 1 year ago

    “So I’m working on this thing that’s so powerful it could end the world. You would not believe the power if it gets out of hand. Anyway, if you’re interested in a pricelist…”

  • jkmooney · -2 points · 1 year ago

    No, but us thinking that aggregating and re-packaging previously created content so that it looks original really is “artificial intelligence” just might…

    • @simple · 8 points · 1 year ago

      You have no idea what you’re talking about. AI is a black box right now: we understand roughly how it works, but we can’t properly control it, and it still exhibits a lot of unintended behavior, like chatbots sometimes being aggressive or insulting you. Chatbots like GPT try to get around this by having a million filters, but the point is that the underlying AI doesn’t behave properly. Mix that with superintelligence and you can have an AI that does random things based on whatever it felt like doing. This is dangerous. We’re not asking to stop AI development; we’re asking to do it more responsibly and follow proper AI ethics, which a lot of companies seem to be ignoring in favor of pushing out products faster.

      • @zikk_transport2 · 3 points · 1 year ago

        we can’t properly control it and it still does a lot of unintentional behavior

        And then you say:

        Chatbots like GPT try getting around this by having a million filters

        So there is a way to control it after all?

        Also:

        Mix that with superintelligence and you can have an AI that does random things based on what it felt like doing.

        So you are saying that AI is being pushed without any testing?

      • @zikk_transport2 · 2 points · 1 year ago

        They are already the most powerful military in the world. What changes?

    • tal · 1 point · 1 year ago

      Author is a simple brainless student doing some part-time job to write about bullshit and make a living.

      I have a pretty high opinion of Eliezer Yudkowsky. I’ve read material that he’s written in the past, and he’s not bullshitting there; it’s well thought through.

      Or just let’s do nothing until Iran creates AI powered soldiers?

      I haven’t watched the current video, but from what I’ve read from him in the past, Yudkowsky isn’t an opponent of developing AI. He’s pointing out that there are serious risks that need addressing.

      It’s not as if there are only two camps regarding AI, a utopian “everything is perfect” one and a Luddite “we should avoid AI” one.

      EDIT: Okay, I went through the video. That’s certainly a lot blunter than he normally is. He’s advocating a global ban on developing specifically superintelligent AI until we have a consensus on how to deal with it, with monitoring of AI development in the meantime; he’s talking about countries being willing to go to war with countries that are developing one, so his answer would be “if Iran is working on a superintelligent AI, you bomb them pre-emptively”.

      EDIT2:

      Humans create AI - humans design how AI is created and what AI can do.

      The major point that Yudkowsky has raised in his past work is that it is likely quite difficult to constrain what AI can do.

      Just because we developed an AI does not mean that it is trivial for us to place constraints on it that will hold as it evolves, as we will not be able to understand the systems that we will be trying to constrain.

      Last week, Lemmy had a serious security exploit involving cross-site scripting. The authors of that software wrote (or at least committed) the code in question. Sure, in theory, if they had had a perfect understanding of all of the implications of every action they took, they would not have introduced that security hole – but they didn’t. Just being the author doesn’t mean that the software necessarily does what you intend, because even today, translating intent into functionality is not easy.
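
      As a toy illustration of that gap between intent and behavior – a hypothetical Python sketch, not Lemmy’s actual code – the author intends to display a user’s comment, but nothing stops the comment from containing markup that other visitors’ browsers will happily execute:

          import html

          def render_comment_unsafe(comment: str) -> str:
              # Intent: show the user's comment on the page.
              # Actual behavior: any <script> tag inside the comment runs in
              # other visitors' browsers -- a cross-site-scripting hole.
              return f"<div class='comment'>{comment}</div>"

          def render_comment_safe(comment: str) -> str:
              # Escaping the input makes the page match the original intent.
              return f"<div class='comment'>{html.escape(comment)}</div>"

          payload = "<script>stealCookies()</script>"
          print(render_comment_unsafe(payload))  # markup survives intact
          print(render_comment_safe(payload))    # rendered harmlessly as text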

      A self-improving AI is going to be something that we will be very far-removed from in terms of how it ultimately winds up operating; it will be much more-complex than a human is.

      Programmers do create software that has bugs – an infinite loop, say, or software that allocates all the memory on a machine. But today’s systems (mostly) operate in constrained environments, where they are easy to kill off. If you look at, say, DARPA’s autonomous vehicle challenges, where that is not the case, the robots are required to have an emergency stop button that permits them to be killed remotely in case they start doing something dangerous.

      But a superintelligent AI would likely not be something that is easy to contain or constrain. If it decides that an emergency stop button is in conflict with its own goals and understands that the button exists, it is not at all clear that we have the ability to keep it from defeating such a mechanism – or to keep it from manipulating us into doing so. And the damage that a self-replicating, self-improving AI could do is at least potentially much greater than what a DARPA-style out-of-control autonomous armored vehicle could do. The vehicle might run over a few dozen people before it runs out of fuel, but its nature limits the degree to which it can go wrong.

      We didn’t have an easy time purging the Morris Internet Worm back in 1988, because our immediate response – cutting sites’ links to the Internet to block more instances of the worm from hitting their systems – crippled our own infrastructure. That took mailing lists offline and took down Usenet and finger – which sysadmins used to communicate network status and to find out how to contact other people via the phone system – and that was a simple worm in an era much less dependent on the Internet. It wasn’t self-improving or intelligent, and its author even tried – without much success, as we’d already had a lot of infrastructure go down – to tell people how to disable it some hours after it started taking the Internet out.

      I am not terribly sanguine on our ability to effectively deal with a system that is that plus a whole lot more.

      • tal · 1 point · 1 year ago

        I’ll also add that I’m not actually sure that Yudkowsky’s suggestion in the video – monitoring labs with massive GPU arrays – would be sufficient if one starts talking about self-improving intelligence. I am quite skeptical that the kind of parallel compute capacity used today is truly necessary for the kinds of tasks that we’re doing – rather, we are doing things inefficiently because we do not yet understand how to do them efficiently. True, your brain works in parallel, but it is also vastly slower – your brain’s neurons run at maybe 100 or 200 Hz, whereas our computer systems run with GHz clocks. I would bet that, if we had the software side figured out, a single CPU in a PC today could act as a human does.
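
        For a rough sense of that raw serial-speed gap, here is a trivial back-of-the-envelope calculation using the figures above (the ~3 GHz clock is an assumption for illustration, and it deliberately ignores the brain’s massive parallelism):

            neuron_rate_hz = 100   # rough firing rate of a biological neuron, per the estimate above
            cpu_clock_hz = 3e9     # an assumed ~3 GHz desktop CPU clock

            # A single modern core takes serial steps roughly ten million times faster.
            print(f"serial speed ratio: ~{cpu_clock_hz / neuron_rate_hz:,.0f}x")  # ~30,000,000x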

        Alan Turing predicted in 1950 that we’d have the hardware for human-level AI by about 2000:

        As I have explained, the problem is mainly one of programming.
        Advances in engineering will have to be made too, but it seems unlikely
        that these will not be adequate for the requirements. Estimates of the
        storage capacity of the brain vary from 10¹⁰ to 10¹⁵ binary digits. I incline
        to the lower values and believe that only a very small fraction is used for
        the higher types of thinking. Most of it is probably used for the retention of
        visual impressions. I should be surprised if more than 10⁹ was required for
        satisfactory playing of the imitation game, at any rate against a blind man.

        That’s roughly 1 GB to 125 TB of storage capacity, which he considered to be the limiting factor.
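
        Working that conversion out explicitly (Turing’s “binary digits” are bits, so divide by eight to get bytes):

            low_bits, high_bits = 1e10, 1e15   # Turing's range of estimates, in bits

            low_bytes = low_bits / 8           # 1.25e9 bytes  -> roughly 1 GB
            high_bytes = high_bits / 8         # 1.25e14 bytes -> roughly 125 TB

            print(f"low estimate:  {low_bytes:.3g} bytes")
            print(f"high estimate: {high_bytes:.3g} bytes")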

        He was about right in terms of where we’d be with hardware, though we still don’t have the software side figured out yet.

    • amio · 0 points · 1 year ago

      Yudkowsky has a background in this, purely aside from likely being smarter than any five of us put together. Do let us all know how you’re qualified to call him a student, let alone a brainless one.