• Veraticus
    0 • 1 year ago

    Even animals are more intelligent than LLMs. An LLM is a text creation tool; if it comes up with a new solution, it’s not because it thought of it. It’s because there was already some connection for the solution in the text it ingested.

    • @BitSound
      0 • 1 year ago

      Oh good, so I can have a cockroach write code for me like ChatGPT can?

      LLMs can reason and don’t just regurgitate their training data. Have you actually used one? It’s plainly obvious that they can reason. It’s primitive compared to what we’ll see in the future, since they’re only getting better, but they can do it today.

      • Veraticus
        2 • 1 year ago

        I have; it being “obvious” does not mean it is correct, and in fact, in this instance, you are wrong.

        • @BitSound
          1 • 1 year ago

          What exactly is the behavior that an LLM could display that would change your mind?

    • @Eheran
      -2 • 1 year ago

      Nonsense. Just make something up yourself and ask it. If you then think “maybe that was on the internet before,” try again with an even more absurd question. Ask it how to stack XYZ. Whatever you want.

      • Veraticus
        1 • 1 year ago

        Let’s do better, and ask GPT4 itself what it thinks about this!

        GPT-4 is a powerful generative model, but its ability to come up with entirely new and novel solutions is limited. The model operates within the constraints of its pre-trained data and lacks true understanding or the capability for long-term reasoning. While it can generate answers that may appear creative, these are usually reconfigurations of existing data. Therefore, GPT-4 cannot produce entirely new solutions or frameworks in the way that a human with deep understanding and creative reasoning might.

        • @BitSound
          0 • 1 year ago

          I wouldn’t put much stock in that either way, TBH. OpenAI has probably forced a particular type of response to that question with RLHF.

          • Veraticus
            1 • 1 year ago

            It is an accurate assessment of its capabilities and design. While obviously LLMs hallucinate, this is basically the expert consensus as well.

            • @BitSound
              0 • 1 year ago

              Can you point to any sort of expert consensus or support your claim in any way? Or are you just hallucinating like an LLM does?

        • @Eheran
          -2 • 1 year ago

          1. Ask random humans and some will tell you they themselves are not intelligent or others aren’t.
          2. This is a pre-programmed response from OpenAI.
          • Veraticus
            2 • 1 year ago

            So you’re claiming that it’s lying, and that the broad expert consensus that it is neither creative nor intelligent is also a lie? OK.

            • @Eheran
              0 • edited • 1 year ago

              Lying would imply intelligence. But yes, it hallucinates/lies all the time.

              What consensus are you talking about? Here’s an example link that says otherwise. And another one.

              • Veraticus
                2 • 1 year ago

                I mean, I started this conversation from an amateur article that did not and does not understand AI, so the fact that you can find more like it is not exactly surprising. You just need to do more research on this; feel free to talk to ChatGPT itself about it, or Google for better sources.