There’s a video on YouTube where someone has managed to train a network of rat neurons to play Doom; the way they did it seems reminiscent of how we train ML models.

I am under the impression from the video that real neurons are a lot better at learning than simulated ones (and much less power-demanding).

Could any ML problems, such as natural language generation, be solved using real neurons instead, and would that be in any way practical?

Ethically, at this point, is this neuron array considered conscious in any way?

  • @[email protected]
    link
    fedilink
    20
    edit-2
    9 months ago

    Afaik, an actual neuron is computationally more powerful than a perceptron, so in theory yeah, for sure.

    If you’re a subscriber to the Chinese Room thought experiment, we are already just a bunch of really good “LLMs”.
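
    For comparison, a perceptron is nothing more than a weighted sum plus a threshold. A minimal sketch (the weights here are hand-picked for illustration, not trained):

    ```python
    def perceptron(inputs, weights, bias):
        # Weighted sum of the inputs, followed by a hard threshold
        s = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if s > 0 else 0

    # With these hand-picked weights it computes logical AND
    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, perceptron(x, weights=[1, 1], bias=-1.5))
    ```

    A real neuron integrates spikes over time and adapts its own dynamics, none of which this model captures.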

    • @themusicman
      link
      6
      9 months ago

      First time I’ve come across the Chinese Room, but it’s pretty obviously flawed. It’s not hard to see that, collectively, the contents of the room may understand Chinese in both scenarios. The argument boils down to “it’s not true understanding unless some component part understands it on its own,” which is rubbish - you can’t expect to still understand a language after removing part of your brain.

      • @[email protected]
        link
        fedilink
        1
        9 months ago

        Hah, tbh, I didn’t realize it was originally formulated to argue against consciousness in the room. When I first heard it, it was presented as a proper thought experiment with no “right” answer, so I honestly remembered it as a sort of illustration of the illusion that is consciousness. It’s been a while since I’ve discussed it with others, though; mostly I’ve just thought about it in the context of recent AI advancements.

    • @[email protected]OP
      link
      fedilink
      4
      9 months ago

      I’ve always thought we have something resembling an LLM as one component of our brains, and that the brain has the ability to train new models by itself to solve new problems.

      • ErzatzCadillac
        link
        fedilink
        English
        4
        edit-2
        9 months ago

        Actually, we do: the cerebellum is what the neural networks in LLMs were partially based on. It’s essentially a huge collection of input/output modules that the other parts of the brain are wired into, and it performs various computations. It also handles motor control for the body and figures out how to do this through reinforcement learning. (The way the reinforcement learning works is different from LLMs, though, because it’s a biological process.) So when you throw a ball, for example, various modules in the cerebellum take in inputs from the visual centers, arm muscles, etc., and compute the outputs needed to produce the throwing motion to reach your target.
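
        That feedback loop can be caricatured in a few lines: a single motor parameter gets nudged by an error signal until the outcome matches the target. This is a toy error-driven sketch, nothing like actual cerebellar circuitry, and the “physics” is made up for illustration:

        ```python
        def learn_throw(target, steps=200, lr=0.1):
            """Trial-and-error tuning of one 'throw strength' parameter."""
            strength = 0.0
            for _ in range(steps):
                distance = strength * 2.0   # stand-in physics: how far the ball lands
                error = target - distance   # feedback: how far off we were
                strength += lr * error      # nudge the motor command toward the target
            return strength

        strength = learn_throw(target=10.0)
        print(round(strength * 2.0, 2))  # landing distance converges on 10.0
        ```

        Each trial shrinks the error a bit, so after enough throws the command lands the ball on target without any module ever “knowing” the physics.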

        We also have the cerebrum, though, which, along with the rest of the brain, is the magic voodoo that creates our consciousness and self-awareness and which we can’t recreate with a computer.