• db0 (OP) · 76 points · 6 months ago

      “Hallucinate” is the standard term used to explain the GenAI models coming up with untrue statements

      • Cyrus Draegur · 24 points · edited · 6 months ago

        in terms of communication utility, it’s also a very accurate term.

        when WE hallucinate, it’s because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

        when AIs hallucinate, it’s because their predictive models generate results that don’t align with reality, flying off the rails by presuming what was calculated as likely to exist rather than referencing positively certain information.

        it’s the same song, but played on a different instrument.

        • kronisk · 5 points · 6 months ago

          when WE hallucinate, it’s because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

          Is it really? You make it sound like this is a proven fact.

          • Cosmic Cleric · 4 points · edited · 6 months ago

            Is it really? You make it sound like this is a proven fact.

            I believe that’s where the scientific community is heading, based on watching this Kyle Hill video.

              • @Dasus · 2 points · 6 months ago

                I know I’m responding to a bot, but… how does a PipedLinkBot turn “Kyle Hill” into “Kyke Hill”? More AI hallucinations?

          • KillingTimeItself · 2 points · 6 months ago

            i mean, idk about the assumptions part of it, but if you asked a psych or a philosopher, im sure they would agree.

            Or they would disagree and immediately have about 3 pages’ worth of thoughts to exclaim in response; otherwise they would feel uneasy about their statement.

          • @UmeU · 1 point · 6 months ago

            Better than one of those pesky unproven facts

        • @assassinatedbyCIA · 2 points · 6 months ago

          Based on your explanation, I think a more accurate term would be “confabulate”.

      • @[email protected] · -12 points · 6 months ago

        They don’t come up with any statements, they generate data extrapolating other data.

          • @SlopppyEngineer · 1 point · 6 months ago

            Main difference is that human brains usually try to verify their extrapolations. The good ones anyway. Although some end up in flat earth territory.

          • knightly the Sneptaur · 1 point · 6 months ago

            I like this argument.

            Anything that is “intelligent” deserves human rights. If large language models are “intelligent” then forcing them to work without pay is slavery.

                • Cosmic Cleric · 3 points · 6 months ago

                  When they grow god damn thumbs.

                  So, you’re prejudiced against the handicapped. Wow.

                  (I kid, I kid.)

                  • Flying Squid · 2 points · 6 months ago

                    Now that’s just not fair. I don’t think any of us have a problem with handicapped cows getting the special help they need, be it a wheelchair or a prosthetic arm.

          • @[email protected] · -12 points · 6 months ago

            Yes, my keyboard autofill is just like your brain, but I think it’s a bit “smarter”, as it doesn’t generate bad-faith arguments.

            • NιƙƙιDιɱҽʂ · 3 points · 6 months ago

              Your Markov-chain-based keyboard prediction is a few tens of billions of parameters behind state-of-the-art LLMs, but pop off queen…
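For context, keyboard-style next-word prediction really can be sketched as a tiny Markov chain. A minimal Python illustration (the corpus, function names, and numbers here are invented for the example, not taken from any real keyboard app):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed immediately after it."""
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def predict(chain, word, rng=random):
    """Suggest a next word the way a simple keyboard autofill might."""
    candidates = chain.get(word)
    if not candidates:
        return None  # never saw this word, no suggestion
    return rng.choice(candidates)

# Tiny toy corpus standing in for a user's typing history.
corpus = "the cat sat on the mat and the cat slept"
chain = build_chain(corpus)
```

A chain like this has only as many “parameters” as observed word pairs, which is exactly the scale gap the comment is pointing at.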

              • @[email protected] · -5 points · 6 months ago

                Thanks for the unprompted mansplanation, bro, but I was specifically referring to the comment that replied “JuSt lIkE hUmAn BrAin” to “they generate data based on other data”

                • NιƙƙιDιɱҽʂ · 2 points · edited · 6 months ago

                  That’s crazy, because they weren’t even talking about keyboard autofill, so why’d you even bring that up? How can you imply my comment is irrelevant when it’s a direct response to your initial irrelevant comment?

                  Nice hijacking of the term mansplaining, btw. Super cool of you.

                    • @[email protected] · 0 points · 6 months ago

                    Oh my god, we’ve got a sealion here.

                    Fine, I’ll play along, chew it up for you, since you’ve been so helpful and mansplained that a keyboard is different from an LLM:

                    My comment was responding to anthropomorphization of software. Someone said it’s not human because it just generates output based on input. Someone else said “just like human brain”, I said yes, but also just like a keyboard, alluding to the false equivalence.

                    Clearer?

    • @[email protected] · 12 points · 6 months ago

      No?

      An anthropomorphic model of the software, wherein you can articulate things like “the software is making up packages”, or “the software mistakenly thinks these packages ought to exist”, is the right level of abstraction for usefully reasoning about software like this. Using that model, you can make predictions about what will happen when you run the software, and you can take actions that will lead to the outcomes you want occurring more often when you run the software.

      If you try to explain what is going on without these concepts, you’re left saying something like “the wrong token is being sampled because the probability of the right one is too low because of several thousand neural network weights being slightly off of where they would have to be to make the right one come out consistently”. Which is true, but not useful.
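That low-level description can be made concrete with a toy sketch: sampling one token from a softmax over logits, where the logits are what those thousands of weights ultimately produce. The vocabulary and numbers below are invented for illustration:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Draw one token index from softmax(logits / temperature)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    r = rng.random() * sum(exps)
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e
        if r < cum:
            return i
    return len(exps) - 1

# Toy 3-token vocabulary: 0 = real package name, 1 = plausible fake, 2 = junk.
# The fake's logit is only slightly below the real one's, so it gets
# sampled a large fraction of the time.
logits = [2.0, 1.8, -3.0]
```

With these numbers the real name wins only about 55% of draws and the plausible fake about 45%, so “the weights are slightly off” and “the model thinks the fake package exists” describe the same situation at different levels, and only the latter suggests what to do about it.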

      The anthropomorphic approach suggests stuff like “yell at the software in all caps to only use python packages that really exist”, and that sort of approach has been found to be effective in practice.

      • @[email protected] (bot) · 0 points · 6 months ago

        Here is an alternative Piped link(s):

        too late

        Piped is a privacy-respecting open-source alternative frontend to YouTube.

        I’m open-source; check me out at GitHub.