“An intriguing open question is whether the LLM is actually using its internal model of reality to reason about that reality as it solves the robot navigation problem,” says Rinard. “While our results are consistent with the LLM using the model in this way, our experiments are not designed to answer this next question.”

The paper, “Emergent Representations of Program Semantics in Language Models Trained on Programs,” can be found here.

Abstract

We present evidence that language models (LMs) of code can learn to represent the formal semantics of programs, despite being trained only to perform next-token prediction. Specifically, we train a Transformer model on a synthetic corpus of programs written in a domain-specific language for navigating 2D grid world environments. Each program in the corpus is preceded by a (partial) specification in the form of several input-output grid world states. Despite providing no further inductive biases, we find that a probing classifier is able to extract increasingly accurate representations of the unobserved, intermediate grid world states from the LM hidden states over the course of training, suggesting the LM acquires an emergent ability to interpret programs in the formal sense. We also develop a novel interventional baseline that enables us to disambiguate what is represented by the LM as opposed to learned by the probe. We anticipate that this technique may be generally applicable to a broad range of semantic probing experiments. In summary, this paper does not propose any new techniques for training LMs of code, but develops an experimental framework for and provides insights into the acquisition and representation of formal semantics in statistical models of code.
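As a toy illustration of the probing methodology the abstract describes (this is not the paper's actual setup; the dimensions, the synthetic encoding, and the least-squares probe below are all invented for the sketch), one can fit a linear probe that tries to read a latent "state" back out of hidden vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 latent "grid states" linearly encoded in 16-dim hidden vectors.
n_states, hidden_dim, n_samples = 4, 16, 400
encode = rng.normal(size=(n_states, hidden_dim))    # unknown encoding directions
labels = rng.integers(0, n_states, size=n_samples)  # ground-truth intermediate states
hidden = encode[labels] + 0.1 * rng.normal(size=(n_samples, hidden_dim))

# Linear probe: least-squares fit from hidden vectors to one-hot state targets.
onehot = np.eye(n_states)[labels]
probe, *_ = np.linalg.lstsq(hidden, onehot, rcond=None)

# If the states are linearly decodable, the probe recovers them far above chance.
preds = (hidden @ probe).argmax(axis=1)
accuracy = (preds == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

High probe accuracy alone does not settle whether the model or the probe is doing the work, which is exactly why the paper introduces an interventional baseline.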

  • @A_A
    3 points · 19 days ago

    I feel that many of us, when confronted with this, react like Zaphod Beeblebrox (President of the Galaxy) in The Hitchhiker's Guide to the Galaxy, when he says:
    … “whenever I stop and think why did I want to do something? – how did I work out how to do it? – I get a very strong desire to just stop thinking about it” …
    We don’t want to be surpassed by machines … and I think this explains the large number of downvotes.

    • @atrielienz
      13 points · 19 days ago

      I’m actually pretty sure the downvotes are because LLMs don’t think. They don’t even process. They pick the highest number and spit out the information attached to it.
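For what it's worth, "pick the highest number and spit out the information attached to it" roughly matches greedy decoding, which can be sketched in a few lines (the vocabulary and scores below are made up, not from any real model):

```python
import numpy as np

# Toy vocabulary and made-up next-token scores (logits) from a hypothetical model.
vocab = ["the", "cat", "sat", "mat"]
logits = np.array([0.1, 2.3, 0.7, 1.5])

# Softmax turns the scores into a probability distribution over the vocabulary.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Greedy decoding: emit the token with the highest probability.
next_token = vocab[int(np.argmax(probs))]
print(next_token)  # → "cat"
```

Real systems often sample from the distribution (temperature, top-k, nucleus sampling) rather than always taking the argmax, so "picks the highest number" describes only the greedy special case.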

      • @[email protected]
        1 point · 19 days ago

        Do I “think” or does my brain pick the closest neuron and spit out a function based on that input?

        If we could recreate the universe, would I do the exact same thing in the exact same situation?

        • @atrielienz
          10 points · 19 days ago

          I’m sorry. Because you don’t understand how your brain works, you’re suggesting that it must work the same way as something similar brains created, because you don’t know how either thing works. That’s not an argument.

          • @[email protected]
            -4 points · 19 days ago (edited)

            No, I’m not suggesting that.

            I’m suggesting that if we don’t even understand how consciousness works for ourselves, we cannot make claims about how it will look for other things.

            Deterministically, free will does not exist; if we cannot exercise free will, we cannot have independent thoughts, any more than a machine can.

            The truth is we don’t really know shit; we’re biological machines that think they’re in control of themselves based on inputs. If we ever discover true AGI it will be by accident, as we fiddle with technologies such as LLMs or any other complex model.

            • @atrielienz
              5 points · 19 days ago

              Okay. Feed a new species that hasn’t been named yet into an LLM. Does it name that new creature? Can it decide what family, phylum, etc. it belongs to? Does it pick up the specific attributes of that new species?

              • @[email protected]
                0 points · 19 days ago (edited)

                It might be able to pick those things out, I certainly couldn’t.

                Edit: So ChatGPT correctly identified a new species from 4 days ago as a type of Storm Petrel and a new flower from Sri Lanka as an Orchidaceae. Far better than I could do.

                • @atrielienz
                  4 points · 19 days ago

                  That is very deliberately not in the spirit of the question I asked. It’s almost like you’re intent on misunderstanding on purpose just so you can feel like you’re right.

                  • @[email protected]
                    1 point · 19 days ago

                    You asked if it could do a task I wasn’t even capable of doing, and this was your assessment of consciousness.

            • @[email protected]
              3 points · 19 days ago (edited)

              I’m suggesting that if we don’t even understand how consciousness works for ourselves, we cannot make claims about how it will look for other things.

              So is a rock conscious? I guess we’ll never know… But AI!?! Definitely conscious! smh.

            • @tabular
              3 points · 18 days ago (edited)

              I suspect others are talking about “thinking” only objectively.

              A) If an LLM has no input, then there are no processes going on at all which could be described as thinking (objectively verifiable: what is the program doing?).

              B) If an LLM had a subjective experience when given input, presumably it has none when all processes are stopped (subjective, unverifiable).

      • @A_A
        -5 points · 19 days ago

        Science cannot say much about what it is to think, since it doesn’t understand the brain well enough … and the day we can fully explain it, we will also be able to replicate it on computers.

        • @atrielienz
          4 points · 19 days ago

          Science can and does quantify what our brains do versus what an LLM does, though. That’s the point. That’s why the brain knows when it’s supplying wrong information or guessing, but the LLM does not.

          The LLM can provide wrong information. What it can’t do is intentionally lie.

          • @A_A
            -3 points · 19 days ago

            I agree with you that we are much better than LLMs in that we can verify our errors (and we can do many things LLMs can’t).
            Still, I am happy to have access to their vast memory, and I know where they fail most of the time, so I can work with them in a productive way.
            The day we provide them (or DNNs) with “will” is, I think, when they will become (more) dangerous.

            • @[email protected]
              1 point · 19 days ago (edited)

              Gotta use extra heavy quotes around “will”… Mixing up AI pseudo-science with fascist pseudo-science.

            • @atrielienz
              0 points · 19 days ago (edited)

              Nah fam. You’re comparing this thing to yourself and other humans because, to you, it sounds like us. “Sounds like” is the phrasing conspiracy theorists use to draw correlations, whether or not they make sense, without filling in any actual facts. Either you know and understand how LLMs work, are researching this, and have a valid rebuttal, or you don’t. An assumption does not make anything you said fact. People who are researching this have already refuted this claim.

              You don’t even know what the word “will” would mean in this context, or whether it would provide an LLM with the kind of consciousness required to be sapient. So can we stop arguing if you just admit that you like using LLMs, so you have a bias and a poor understanding of what they are and aren’t?

        • @[email protected]
          2 points · 19 days ago (edited)

          Science cannot say much about what it is to think since it doesn’t understand the brain well enough

          Wild pseudo-scientific generalization.

          and the day we can fully explain it, we will also be able to replicate it on computers.

          There are many, many things that are fully explained but will never be replicated on computers, e.g. any numerical problem whose size exceeds the computer itself.