• daannii

    I’m familiar with the Chinese room, and yes, that’s exactly what I was trying to imply with my example of how a video looks like a person and acts like a person, but is not a person. I didn’t want to go into the Chinese room experiment, but that was what I was thinking of.

    The heuristics that humans use are not really like the probability statistics that learning models use. The models use probability cutoffs; we use incredibly error-prone shortcuts. Ours aren’t really “estimates” in the statistical sense. They are biases in attention and reasoning.
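
    To make the contrast concrete, here is a minimal sketch (my own toy example in Python, with made-up numbers, not any particular model’s sampler) of the kind of probability cutoff an LLM applies, often called top-p or nucleus sampling. It’s pure arithmetic on a distribution, nothing like a gut-level shortcut:

    ```python
    import numpy as np

    def sample_with_cutoff(token_probs, p_cutoff=0.9, rng=None):
        """Nucleus-style sampling: keep the most probable tokens whose
        cumulative probability reaches p_cutoff, renormalize, then draw."""
        rng = rng or np.random.default_rng()
        order = np.argsort(token_probs)[::-1]            # most probable first
        cumulative = np.cumsum(token_probs[order])
        keep = order[: np.searchsorted(cumulative, p_cutoff) + 1]
        truncated = token_probs[keep] / token_probs[keep].sum()
        return rng.choice(keep, p=truncated)

    probs = np.array([0.55, 0.25, 0.12, 0.05, 0.03])     # toy next-token distribution
    print(sample_with_cutoff(probs))                     # index of the sampled token
    ```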

    I enjoyed your speculation about using analog processing to get closer to real human brains, versus virtual/digital processing.

    I think you are partially correct, because it is closer to biology, as you said. But it also can’t change, which is not like biology. 🤷

    Humans don’t actually compute real probability. In fact, humans are so poor at true statistical probability, due to our biases and heuristics, that it’s actually amazing any human was able to break free from that hard-wired method and discover the true mathematical way of calculating probability.

    It quite literally goes against human nature, by which I mean that brains are not designed to deal with probability that way.

    We actually have trouble truly understanding anything besides “very likely, basically assured” and “very unlikely, basically no chance.”

    We round percentages to one of those two categories when we think about them. (I’m simplifying, but you get what I’m saying.)

    This is why people constantly complain that weather predictions are wrong. To us, a 70% chance of rain means it certainly will rain, and when it doesn’t, we feel lied to.
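
    A quick simulation shows why the forecaster can still be right (a toy sketch with made-up numbers, not real forecast data): if it rains on about 70% of the days that carried a 70% forecast, the prediction was perfectly calibrated, even though it stayed dry roughly 300 times out of 1000.

    ```python
    import random

    random.seed(42)
    forecasts = 1000                        # days with a "70% chance of rain" forecast
    rainy = sum(random.random() < 0.7 for _ in range(forecasts))

    print(f"Rained on {rainy}/{forecasts} days ({rainy / forecasts:.0%})")
    # ~70%: the forecaster was well calibrated, yet on ~300 days the intuitive
    # reading "it will certainly rain" feels like a lie.
    ```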

    I mentioned emotion, and you are 100% correct that it’s a tricky concept in neuroscience (you actually seem pretty educated about this topic).

    It is ill-defined. However, the more specific emotions I refer to are approach/avoidance, and their ability to attract attention or discourage it.

    To clarify, both approach and avoidance emotions can attract attention.

    Emotional salience, by definition, is what grabs attention at an emotional level and becomes interesting, either because you like it or because you don’t like it. (I’m simplifying.)

    Stimuli with neutral emotional salience will not grab attention; they will be ignored and will not affect learning to the same degree as something that is emotionally salient.

    Your personal priorities will feed into this as well, depending on your mood, your personality, and whatever else you have going on in your life.

    It’s always changing.

    LLMs, by contrast, have set weights and objectives that don’t fluctuate.

    The loop I describe is not the same as an algorithm loop.

    An algorithm loop feeds in data and cycles through fixed steps until it reaches the desired outcome.

    Sort of like those algorithms for Rubik’s Cube solutions (idk if you know what I’m talking about).

    Repeat the steps enough times and you will solve the puzzle.
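
    Something like this toy sketch (my own illustration, using a sorting pass instead of actual cube moves): the rule itself never changes between iterations; you just keep applying it until the goal state appears.

    ```python
    def one_pass(state):
        """One fixed rule: swap any adjacent pair that is out of order."""
        state = list(state)
        for i in range(len(state) - 1):
            if state[i] > state[i + 1]:
                state[i], state[i + 1] = state[i + 1], state[i]
        return state

    state = [3, 1, 4, 1, 5, 9, 2, 6]
    while state != sorted(state):      # loop until the desired outcome
        state = one_pass(state)        # the rule is never altered

    print(state)  # solved, and the algorithm is exactly what it was at the start
    ```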

    That’s not the same as constantly altering and evolving the entire system, which never goes back to how it was before. It’s never stable.

    Every new cognitive event starts differently than the last because it is influenced by the preceding events. In neuroscience we call this priming.

    It literally changes the chance of a neuron firing again. So the system is not running the same program over and over; it’s running an updated program on updated hardware, every single iteration.
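
    Here is a toy model of that idea (my own drastic simplification, not a biophysical neuron model): every event nudges the unit’s own excitability, so the state that greets the next event depends on the entire preceding history.

    ```python
    import random

    class PrimedNeuron:
        def __init__(self, base_p=0.30):
            self.p_fire = base_p                 # current firing probability

        def stimulate(self):
            fired = random.random() < self.p_fire
            if fired:
                self.p_fire = min(0.95, self.p_fire + 0.10)  # primed: easier to fire next time
            else:
                self.p_fire = max(0.05, self.p_fire - 0.02)  # slow drift back toward baseline
            return fired

    random.seed(1)
    neuron = PrimedNeuron()
    for event in range(5):
        fired = neuron.stimulate()
        print(f"event {event}: fired={fired}, next p_fire={neuron.p_fire:.2f}")
    # There is no reset: each event runs on a slightly different "program".
    ```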

    That’s the process of learning, and it cannot be separated from the process of experience or decision-making, at any level, within or beyond awareness.

    May I ask what your expertise area is in? Are you a computer scientist?

    You do seem to know a bit more about neuroscience than the average person. I also rarely meet anyone who has heard of the Chinese room thought experiment.

    Also I agree we are getting into philosophical areas.

    • mirshafie@europe.pub

      But the Chinese room argument is very flawed, at least if we assume that consciousness does in fact arise in the brain and not through some supernatural phenomenon.

      Suppose we know the exact algorithm that gives rise to consciousness. The Chinese room argument states that if a person carries out the algorithm by hand, the person does not become conscious. Checkmate, atheists.

      This is flawed because it is not the individual axons, synapses, neurotransmitters, or voltage potentials within the brain that are conscious. Instead, it appears that consciousness arises when these computations are carried out in concert. Thus consciousness is not a physical object itself; it is an evolving pattern resulting from the continuous looping of the algorithm.

      Furthermore, consciousness and intelligence are not the same thing. Intelligence is the ability to make predictions, even if it’s just a single-neuron on/off gate connected to a single sensory cell. Consciousness is likely the experience of being able to make predictions about our own behavior, a meta-intelligence resulting from an abundance of neurons and interconnections. There is likely no clear cutoff boundary of neural complexity where consciousness arises, below which no consciousness can exist. But it’s probably useful to imagine such a boundary.

      Basically, what if thinking creatures are simply auto-correct on steroids (as Linus Torvalds put it)? What’s unreasonable about treating intelligence as a matter of statistics, especially given that it’s such a powerful tool for modeling every other aspect of our universe?
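
      Even a stripped-down version of that idea makes predictions from nothing but counted statistics. A toy bigram model (my own sketch, orders of magnitude simpler than any real LLM):

      ```python
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ate the fish".split()

      counts = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          counts[prev][nxt] += 1              # count what tends to follow what

      def predict(word):
          """Return the statistically most likely next word."""
          return counts[word].most_common(1)[0][0]

      print(predict("the"))   # 'cat': seen after 'the' most often
      ```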