• @Eheran · -2 points · 1 year ago

    Nonsense. Just make something up yourself and ask it. If you then think “maybe that was on the internet before”, then try again with an even more absurd question. Ask it how to stack XYZ. Whatever you want.

    • Veraticus · 1 point · 1 year ago

      Let’s do better and ask GPT-4 itself what it thinks about this!

      GPT-4 is a powerful generative model, but its ability to come up with entirely new and novel solutions is limited. The model operates within the constraints of its pre-trained data and lacks true understanding or the capability for long-term reasoning. While it can generate answers that may appear creative, these are usually reconfigurations of existing data. Therefore, GPT-4 cannot produce entirely new solutions or frameworks in the way that a human with deep understanding and creative reasoning might.
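
      If you want to reproduce that kind of self-assessment query yourself, here is a minimal sketch using the openai Python package and the chat completions API; the model name and prompt wording are illustrative, not the exact query used above.

      # Minimal sketch: ask GPT-4 the same kind of question via the OpenAI API.
      # Assumes the official openai Python package (v1+) and OPENAI_API_KEY in the environment.
      from openai import OpenAI

      client = OpenAI()

      response = client.chat.completions.create(
          model="gpt-4",  # illustrative model name
          messages=[
              {
                  "role": "user",
                  "content": (
                      "Can you come up with entirely new and novel solutions, "
                      "or do you mainly recombine patterns from your training data?"
                  ),
              }
          ],
      )

      print(response.choices[0].message.content)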

      • @BitSound · 0 points · 1 year ago

        I wouldn’t put much stock in that either way, TBH. OpenAI has probably forced a particular type of response from it with RLHF for that question.

        • Veraticus · 1 point · 1 year ago

          It is an accurate assessment of its capabilities and design. While obviously LLMs hallucinate, this is basically the expert consensus as well.

          • @BitSound · 0 points · 1 year ago

            Can you point to any sort of expert consensus or support your claim in any way? Or are you just hallucinating like an LLM does?

      • @Eheran · -2 points · 1 year ago

        1. Ask random humans and some will tell you they themselves are not intelligent, or that others aren’t.
        2. This is a pre-programmed response from OpenAI.

        • Veraticus · 2 points · 1 year ago

          So you’re claiming that it’s lying, and the broad expert consensus that it is indeed neither creative nor intelligent is also a lie? ok.

          • @Eheran · 0 points · edited · 1 year ago

            Lying would imply intelligence. But yes, it hallucinates/lies all the time.

            What consensus are you talking about? Example link that says otherwise, and another one.

            • Veraticus · 2 points · 1 year ago

              I mean, I started this conversation from an amateur article that did not and does not understand AI, so the fact that you can find more like it is not exactly surprising. You just need to do more research on this; feel free to talk to ChatGPT itself about it or Google for better sources.