• @[email protected]
    21 months ago

    I wouldn’t say there’s no mysticism in the singularity, at least not in the sense you’re implying here. While it uses secular and scientific aesthetics rather than being overtly religious, the roadmap it presents relies on specific assumptions about the nature of consciousness, intelligence, and identity that may sound plausible but aren’t really any more rational than having an immortal soul that gets judged at the end of days.

    And it doesn’t help that, when confronted with any questioning of how plausible any of this is, there’s a tendency to assume a sufficiently powerful AI can solve the problem and leave it at that. It’s no less of a Deus ex Machina if you call it an AI instead of a God just to focus on the Machina instead of the Deus.

    • While it uses secular and scientific aesthetics rather than being overtly religious, the roadmap it presents relies on specific assumptions about the nature of consciousness, intelligence, and identity that may sound plausible but aren’t really any more rational than having an immortal soul that gets judged at the end of days.

      Do we have any scientific evidence that consciousness, intelligence, and identity reside anywhere other than the brain? People who lose all of their limbs don’t become more stupid. People who get artificial hearts don’t become soulless automatons. Certainly, the brain needs chemicals and hormones produced elsewhere in the body, but we routinely and successfully produce these chemicals artificially for people whose natural production is faulty; it’s only a matter of scale.

      We’re certainly far from a complete understanding of the brain, but we’re not that far. There are no great unknowns.

      there’s a tendency to assume a sufficiently powerful AI can solve the problem and leave it at that.

      I think that’s entirely reasonable. We’re limited by our biology; perhaps we’ll find a limit to our technology that puts an upper bound on AI growth, but we haven’t observed that horizon yet. It’s no more irrational to assume there’s no such limit than to assume the limit is so low that we can’t get super-intelligent general AI before we hit it.