• Deconceptualist
    46 points · 4 months ago

    As others are saying, it’s 100% not possible, because LLMs are (as Google optimistically describes them) “creative writing aids”, or more accurately, predictive word engines. They run on mathematical probability models. They have zero concept of what the words actually mean, what humans are, or even what they themselves are. There’s no “intelligence” present except for filters that have been hand-coded in (which is of course human intelligence, not AI).

    “Hallucinations” is a total misnomer, because the text generation isn’t tied to reality in the first place; it’s just mathematically “which next word is most likely”.

    https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/
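
    To make that concrete, here’s a toy sketch of the idea: a bigram model over a made-up corpus. Real LLMs are vastly larger neural networks over tokens, but generation is still “pick a likely next word, repeat”.

    ```python
    import random
    from collections import Counter, defaultdict

    # Made-up corpus standing in for the web-scale text an LLM trains on.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count which word follows which; P(next | current) is just frequency.
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def next_word(word):
        """Sample the next word in proportion to how often it followed `word`."""
        options = following[word]
        return random.choices(list(options), weights=list(options.values()))[0]

    # Generate text with no notion of meaning, only of what tends to come next.
    words = ["the"]
    for _ in range(8):
        words.append(next_word(words[-1]))
    print(" ".join(words))
    ```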

    • Captain Aggravated
      9 points · 4 months ago

      Remember the game people used to play that went something like “type ‘my girlfriend is’ and then let your phone keyboard’s auto-suggestion take it from there”? LLMs are that.
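
      A minimal sketch of that game, assuming an invented typing history in place of whatever the keyboard actually learned; the loop is literally “tap the top suggestion”:

      ```python
      from collections import Counter, defaultdict

      # Invented typing history standing in for the keyboard's learned model.
      history = ("my girlfriend is my best friend and my girlfriend is "
                 "the best cook and my dog is my best friend").split()

      follows = defaultdict(Counter)
      for cur, nxt in zip(history, history[1:]):
          follows[cur][nxt] += 1

      # Play the game: start the phrase, then keep tapping the top suggestion.
      phrase = ["my", "girlfriend", "is"]
      for _ in range(6):
          suggestions = follows[phrase[-1]]
          if not suggestions:
              break
          phrase.append(suggestions.most_common(1)[0][0])

      print(" ".join(phrase))  # loops on itself, like the keyboard game does
      ```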

    • @[email protected]
      8 points · 4 months ago

      An LLM once explained to me that it didn’t know; it simulated an answer. I found that an apt description.

    • @[email protected]
      3 points · 4 months ago

      I was wondering: are people working on networks that train to build a modular model of the world, in order to understand it and predict events in it?

      I imagine that’s basically what our brains do.

      • @[email protected]
        5 points · 4 months ago

        Many attempts, some well-funded.

        They have been successful in very limited domains; the F-35’s integrated sensor suite is one example.

      • Natanael
        3 points · 4 months ago

        Nothing properly universal yet, but a lot of task-specific models exist, with integrations with logic engines and similar tools. Performance varies a lot.

        You might want to take a look at Wolfram Alpha’s plugin for ChatGPT for something that’s public.
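
        As a sketch of that integration pattern (not how the Wolfram plugin is actually built; the “=” check is a stand-in for a real router, and sympy plays the logic engine):

        ```python
        from sympy import solve, symbols, sympify

        x = symbols("x")

        def answer(question: str) -> str:
            # Hypothetical router: a real system lets the model decide when
            # to call the tool; here we just look for something equation-shaped.
            if "=" in question:
                lhs, rhs = question.split("=")
                roots = solve(sympify(lhs) - sympify(rhs), x)
                return f"x = {roots}"  # computed by the engine, not predicted
            return "no symbolic form found, fall back to text generation"

        print(answer("x**2 - 4 = 0"))  # -> x = [-2, 2]
        ```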

      • Deconceptualist
        2 points · 4 months ago

        Yeah, I’m sure folks are working on it, but I’m not knowledgeable enough to speak to the details.

    • @_number8_
      -1 points · 4 months ago

      All we know about ourselves is what’s in our memories. The way normal writing or talking works is just picking which words sound best, in order.

      • Deconceptualist
        13 points · 4 months ago (edited)

        That’s not the whole story. “The dog swam across the ocean.” is a grammatically valid sentence with correct word order. But you probably wouldn’t write it because you have a concept of what a dog actually is and know its physiological limitations make the sentence ridiculous.

        The LLMs don’t have that kind of smarts. They just blindly mirror what we do. Since humans generally don’t put those specific words together, the LLMs avoid it too, based solely on probability. If lots of people started making bold claims about ocean-faring canids (e.g. as a joke), the LLMs would absolutely jump on board, with no critical thinking of their own.
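
        A toy bigram model shows exactly that; the probability of “ocean” after “the” tracks nothing but how often people wrote it (both corpora below are invented):

        ```python
        from collections import Counter, defaultdict

        def next_word_probs(text, word):
            """Frequency-based P(next | word) - all a toy bigram model knows."""
            tokens = text.split()
            counts = defaultdict(Counter)
            for cur, nxt in zip(tokens, tokens[1:]):
                counts[cur][nxt] += 1
            total = sum(counts[word].values())
            return {w: n / total for w, n in counts[word].items()}

        plain = "the dog swam across the pond . the dog ran across the yard ."
        # Same corpus after a hypothetical ocean-faring-canid joke catches on:
        meme = plain + " the dog swam across the ocean . the dog swam across the ocean ."

        print(next_word_probs(plain, "the").get("ocean", 0.0))  # 0.0  - nobody wrote it
        print(next_word_probs(meme, "the").get("ocean", 0.0))   # 0.25 - now it's "likely"
        ```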

        • @theherk
          0 points · 4 months ago

          Humans do the same thing. Have you heard of religion?

          • @[email protected]
            1 point · 4 months ago

            Have you heard of music theory and psychoacoustics (frankly, even painting with oils uses the same ideas)? What we hear depends on both what we expect and what we actually get, in timing, duration, color, amplitude, etc.

            Religion works about the same way: it uses the concepts of the impossible, the unreachable, and the transcendent. Kicking something to the left and then back into place is not the same as never touching it.

      • Deconceptualist
        4 points · 4 months ago

        OK, maybe there’s a possibility someday with that approach. But that doesn’t reflect my understanding of, or (limited) experience with, the major LLMs (ChatGPT, Gemini) out in the wild today. Right now they confidently advise ingesting poison because it’s grammatically sound and they found it on some BS Facebook post.

        If ML engineers can design an internal concept of what constitutes valid information (a hard problem for humans, let alone machines), maybe there’s hope.

        • QuantumSoul
          2 points · 4 months ago

          Ethical and healthy is a whole other, harder problem lol. Reasoning and thinking will have to come first.

      • Natanael
        4 points · 4 months ago

        The problem is that they hold many different internal concepts with conflicting information, with no mechanism for determining truthfulness, checking accuracy, or pruning bad information, and they’ll sample from all of it randomly when answering stuff.
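
        A crude sketch of what that looks like; the claims and weights here are invented, and the point is just that answering is sampling, with no truth-checking step anywhere:

        ```python
        import random

        # Invented stand-in for conflicting associations absorbed in training.
        claims = {
            "capital of Australia": [("Canberra", 60), ("Sydney", 40)],
        }

        def answer(question):
            # Nothing here asks which claim is true - only which is frequent.
            options, weights = zip(*claims[question])
            return random.choices(options, weights=weights)[0]

        for _ in range(5):
            print(answer("capital of Australia"))  # sometimes right, sometimes not
        ```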