• @[email protected]
    5 points · 1 year ago

    It doesn’t “lie”, though; it just generates a plausible sequence of words. The sort-of fortunate thing is that facts are often plausible, and it’s been trained on a lot of facts. But facts aren’t the only word sequences that are plausible, and LLMs are trained to be creative, which means sometimes choosing a next word that isn’t the best fit, and that can mean the generated sentence isn’t factual (a rough sketch of that sampling step follows this comment).

    Calling it a “lie” suggests that it knows the truth, or that it is being deceptive. But that’s giving “spicy autocomplete” too much credit. It simply generates word salads that may or may not contain truths.
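
    A minimal sketch of that sampling step, not taken from any actual model or library: the vocabulary and logit values below are invented purely to show how non-greedy (“creative”) decoding can pick a plausible-but-wrong next word.

    ```python
    # Toy illustration only: real LLMs compute logits over tens of thousands of
    # tokens with a neural network; the vocabulary and numbers here are invented
    # to show the sampling step itself.
    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Pick one token from {token: logit}; higher temperature = more randomness."""
        if temperature == 0:
            # Greedy decoding: always take the single most likely token.
            return max(logits, key=logits.get)
        # Softmax with temperature, then draw a token proportionally to its probability.
        scaled = {tok: score / temperature for tok, score in logits.items()}
        top = max(scaled.values())
        weights = {tok: math.exp(s - top) for tok, s in scaled.items()}
        total = sum(weights.values())
        probs = {tok: w / total for tok, w in weights.items()}
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    # Hypothetical next-token distribution after "The capital of Australia is".
    # The factual continuation is the most probable one, but not the only plausible one.
    logits = {"Canberra": 3.0, "Sydney": 2.2, "Melbourne": 1.5, "Perth": 0.8}

    print(sample_next_token(logits, temperature=0))    # always "Canberra"
    print(sample_next_token(logits, temperature=1.0))  # usually "Canberra", sometimes "Sydney"
    ```

    At temperature 0 the most probable (and here, factual) token always wins; at a higher temperature the sampler occasionally commits to “Sydney”, and everything generated afterwards is chosen to be plausible given that wrong start, not to be true.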

    • SokathHisEyesOpen
      3 points · 1 year ago

      The industry term for it is “hallucination”, but I’m not sure that fits either.

      • @[email protected]
        2 points · 1 year ago

        It’s better than “lying”, but it still implies consciousness. It also implies that it’s doing something different from what it normally does.

        In reality, it’s always just generating plausible words.

        • SokathHisEyesOpen
          0 points · edited · 1 year ago

          It is certainly more complex than a predictive text machine. It does seem to understand the concept of objective truth and facts versus interpretation and inaccurate information. It never intentionally provides false information, but sometimes it thinks it is giving factual information when it is really drawing on the abundance of inaccurate information it was trained on. I’m honestly surprised at how accurate it usually is, considering it was trained on public data from places like Reddit, where common inaccuracies have reached the level of folklore.

          • @[email protected]
            2 points · 1 year ago

            “It is certainly more complex than a predictive text machine”

            No, it literally isn’t. That’s literally all it is.

            “It does seem to understand”

            Because people are easily fooled, but what it seems like isn’t what’s actually happening.

            “but sometimes it thinks it is giving factual information”

            It’s incapable of thinking. All it does is generate a plausible sequence of words.