AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather
The real risk of AI isn’t that it’ll kill you. It’s that a small group of billionaires will control the tech forever.

  • @Pohl
    link
    English
    2
    1 year ago

    A claim that we have a computing model that shares a design with the operation of a biological brain is philosophical conjecture.

    If we had a theory of mind that was complete, it would simply be a matter of counting up the number of transistors required to approximate varying degrees of intelligence. We do not. We have no idea how the computational meat we all possess enables us to translate sensory input into a continuous sense of self.

    It is totally valid to believe that ML computing is a match to the biological model and that it will cross a barrier at some point. But it is a belief that is not supported by empirical evidence. At least not yet.

    • @[email protected]
      link
      fedilink
      English
      5
      edit-2
      1 year ago

      A claim that we have a computing model that shares a design with the operation of a biological brain is philosophical conjecture

      Mathematical, actually. See the 1943 McCulloch and Pitts paper for why neural networks are called that.

      We use logic and math to approximate neurons.
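
      For illustration, here’s a toy version of the 1943 unit (my own sketch, not code from the paper): a McCulloch-Pitts neuron outputs 1 exactly when the weighted sum of its binary inputs reaches a threshold, which is already enough to implement Boolean logic.

      ```python
      # Toy McCulloch-Pitts neuron: binary inputs, fixed weights, hard threshold.
      def mp_neuron(inputs, weights, threshold):
          return int(sum(x * w for x, w in zip(inputs, weights)) >= threshold)

      # AND gate: fires only when both inputs are 1.
      print(mp_neuron([1, 1], [1, 1], threshold=2))  # 1
      print(mp_neuron([1, 0], [1, 1], threshold=2))  # 0

      # OR gate: fires when at least one input is 1.
      print(mp_neuron([0, 1], [1, 1], threshold=1))  # 1
      ```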

      • @[email protected]
        link
        fedilink
        English
        1
        1 year ago

        We have also recently trained a model against a small fly or worm (I can’t remember which), and it behaved identically to the original organism. It’s the complicated networks, the ones with multiple sub-networks, essentially, that are our current weak spot.
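
        For what it’s worth, work along those lines usually comes down to plain supervised fitting: record the organism’s sensory inputs and motor outputs, then train a small network to reproduce the mapping. A minimal sketch, assuming synthetic stand-in data (all shapes and names here are invented for illustration, not from any real study):

        ```python
        import numpy as np

        # Stand-in for recorded organism data: 8 sensory channels -> 2 motor outputs.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 8))
        Y = np.tanh(X @ rng.normal(size=(8, 2)))

        # One hidden layer, trained by gradient descent on mean squared error.
        W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
        W2 = rng.normal(scale=0.1, size=(16, 2)); b2 = np.zeros(2)

        for _ in range(2000):
            H = np.tanh(X @ W1 + b1)          # hidden activations
            P = H @ W2 + b2                   # predicted motor outputs
            G = 2 * (P - Y) / len(X)          # gradient of MSE w.r.t. P
            GH = (G @ W2.T) * (1 - H ** 2)    # backprop through tanh
            W2 -= 0.1 * (H.T @ G); b2 -= 0.1 * G.sum(0)
            W1 -= 0.1 * (X.T @ GH); b1 -= 0.1 * GH.sum(0)

        print("final MSE:", float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean()))
        ```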

    • @[email protected]
      link
      fedilink
      English
      1
      1 year ago

      We have literally created small organism brains with neural networks that behaved nearly identically, using ML and neural nets. I don’t know about you, but I would call that pretty damn good empirical evidence. We do not know the specific mechanism by which a brain chemically generates its weights, so to speak, and computes, but we understand that at its simplest the unit is a neuron with a weight, and that depending on that weight (sensitivity, whatever you want to call it) it produces an output pretty damn consistently.

      The brain is multiple networks working simultaneously with the ability to self-learn. That architecture is what is missing from our current ML models if you want general artificial intelligence; we are also missing foundational algorithms for choosing weights, rather than randomly assigning them and hoping for the best, to facilitate memory and cleaner network integration. You need specialized networks for each critical function (motor control, emotional regulation, etc.), and then a system that can interpret or create weights in a way that lets you imprint an “image”, for lack of a better term, to create memories. Consciousness would then just be the network that interprets each network’s output and decides which systems need to be engaged next, or whether an end state was reached. Which, imo, is clearly demonstrated by split-brain individuals.
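
      A crude sketch of the layout I mean: specialized sub-networks feeding a coordinator that decides which system engages next. None of this is a real model; the sub-networks here are random, untrained stand-ins:

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      def layer(n_in, n_out):
          # Random untrained weights, standing in for a trained sub-network.
          return rng.normal(scale=0.1, size=(n_in, n_out))

      # One specialized sub-network per critical function.
      subnets = {
          "motor":   layer(16, 4),
          "emotion": layer(16, 4),
          "memory":  layer(16, 4),
      }

      # Coordinator: reads all sub-network outputs and scores which
      # system to engage next (or whether an end state was reached).
      coordinator = layer(3 * 4, len(subnets))

      def step(stimulus):
          outputs = [np.tanh(stimulus @ W) for W in subnets.values()]
          scores = np.concatenate(outputs) @ coordinator
          probs = np.exp(scores) / np.exp(scores).sum()   # softmax
          return list(subnets)[int(probs.argmax())], probs

      engaged, probs = step(rng.normal(size=16))
      print("engage next:", engaged, "probabilities:", probs.round(3))
      ```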

      If we had a theory of mind that was complete, it would simply be a matter of counting up the number of transistors required to approximate varying degrees of intelligence.

      I think this is exactly what we fundamentally lack: a way to interpret how the brain calculates, and how it uses chemical weights, so to speak, to vary its output. If we can’t judge that efficiency, we can’t just count all the transistors and say “it’s this smart”, because a model could literally be trained to output the letter “s” for everything, even if it’s the size of ChatGPT. I think we very well could state the capacity and limits of our brains by counting neurons, but whether a brain reaches that potential depends on how efficiently it was trained, and that is where approximating intelligence becomes insanely difficult.
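
      The “size alone tells you nothing” point is easy to show concretely: two networks with identical architectures and parameter counts, where one responds to its input and the other produces the same output no matter what. Toy code of mine, not a claim about any real model:

      ```python
      import numpy as np

      rng = np.random.default_rng(2)

      def param_count(weights):
          return sum(w.size for w in weights)

      # Same architecture, same parameter count...
      responsive = [rng.normal(size=(8, 32)), rng.normal(size=(32, 4))]
      degenerate = [np.zeros((8, 32)), np.zeros((32, 4))]  # always outputs zeros

      # ...completely different behavior.
      for name, (W1, W2) in (("responsive", responsive), ("degenerate", degenerate)):
          for trial in range(2):
              x = rng.normal(size=8)
              out = np.tanh(x @ W1) @ W2
              print(name, param_count([W1, W2]), "params ->", out.round(2))
      ```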