• @kaffiene
    13
    8 months ago

    It’s not intelligent; it’s producing output that is statistically appropriate for the prompt. The prompt included some text that looked like a copyright waiver.

    • @feedum_sneedson
      3
      8 months ago

      Maybe that’s intelligence. I don’t know. Brains, you know?

      • @kaffiene
        2
        8 months ago

        It’s not. It’s reflecting its training material. LLMs and other generative AI approaches lack a model of the world, which is obvious from the mistakes they make.

        • @[email protected]
          1
          8 months ago

          You could say our brain does the same. It just trains in real time and has much better hardware.

          What are we doing but applying things we’ve already learnt that are encoded in our neurons? They aren’t called neural networks for nothing.

          • @kaffiene
            2
            8 months ago

            You could say that but you’d be wrong.

        • @feedum_sneedson
          0
          8 months ago

          Tabula rasa, piss and cum and saliva soaking into a mattress. It’s all training data and fallibility. Put it together and what have you got (bibbidy boppidy boo). You know what I’m saying?

          • @kaffiene
            2
            8 months ago

            Magical thinking?

            • @feedum_sneedson
              0
              8 months ago

              Okay, now you’re definitely poo-flicking, as I said literally nothing in my last comment. It was nonsense. But I bet you don’t think I’m an LLM.