• @[email protected]
    -5
    2 days ago

    That seems a very reasonable argument for the impossibility of achieving AGI with current models…

    The first concept is one I was already kind of thinking about. Current LLMs are incredibly inefficient, and there seems to be some theoretical barrier in efficiency that no model has been able to surpass. Hence the claim that current models would probably need trillions of parameters just to stop hallucinating, not to mention giving them the ability to do more things than just answering questions. This supposed AGI, even if it only worked with words, would need to be able to handle more “types of conversations” than just being the answerer in a question-answer dialog.

    But I had not thought of the need to repurpose the same area of the brain (biological or artificial) for different tasks on the fly, if I have understood correctly. And it seems pretty clear that current models are unable to do that.

    Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of which is probably language.

    Getting a little poetic: I don’t think the phrase is “I think, therefore I am”, but “I can think ‘I think, therefore I am’, therefore I am”.

    • @[email protected]
      4
      2 days ago

      Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of which is probably language.

      Does a dog have the Buddha nature?

      …meaning to say: just because you happen to have the habit of identifying your consciousness with language (that’s TBH where the “stuck in your head” thing came from) doesn’t mean that language is necessary for, or even a component of, consciousness, rather than merely an object of consciousness. And consciousness isn’t necessary to do many things either; e.g. I’m perfectly able to stop at a pedestrian light while lost in thought.

      I don’t think the phrase is “I think, therefore I am”, but “I can think ‘I think, therefore I am’, therefore I am”.

      What Descartes was actually getting at is “I can’t doubt that I doubt; therefore, at least my doubt exists”. He had a bit of an existential crisis. Unsolicited Advice has a video about it.

      • @[email protected]
        -2
        2 days ago

        It may be because of the habit.

        But when I think about how to define consciousness and distinguish it from instinct or reactiveness (like stopping at a red light), I think that what makes a consciousness a consciousness must be that it is able to modify itself without external influence.

        A dog may be able to react and learn how to react to the external world. But can it modify itself the way a human brain can?

        A human being can sit alone in a room and start processing information by itself in a loop, completely changing that flux of information into something different, even changing the brain in the process.

        For this to happen, I think some form of language, some way of “speaking to yourself”, is needed: some way for the brain to generate an output that can immediately be taken as input.

        At this point, of course, this is far more philosophical than technical. And maybe it even comes down to the semantics of “what is consciousness”.

        • lad
          1
          2 days ago

          A dog may be able to react and learn how to react to the external world. But can it modify itself the way a human brain can?

          As per current psychology’s view, yes, even if to a smaller extent. There are problems with how we define consciousness, and right now with LLMs most of the arguments usually relate to the Chinese room and philosophical zombie thought experiments, imo.