Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • @[email protected] · 6 months ago

    because it’s a text generation machine…? i mean, i wouldn’t say i can prove it, but i don’t think anyone can prove it’s capable of thinking, much less of reasoning

    like, it can string together a coherent sentence thanks to well-crafted equations, sure, but i wouldn’t qualify that as “thinking”, though i guess the definition of “thinking” is debatable

    • DominusOfMegadeus · 6 months ago

      It’s an interesting question. I am inclined to believe that as it gets faster at running those equations, over and over and over, reanalysing its data and responses as it goes, that ultimately leads to some kind of evolution. You know, V’Ger style.

    • @Eheran · 6 months ago

      It can tell you the best way to stack things on top of each other to get a tall tower. Etc.

      Those are not random sentences. If you cannot define thinking in a way this machine fails at, then stop saying it does not think.

      • @Aceticon · 6 months ago

        A parrot can be trained to tell you the best way to stack things on top of each other to get a tall tower.

        This is just an electronic parrot, millions of times faster to train than the biological parrot, specialized in repetition alone (it can’t really do anything else a parrot can), and trained on billions of texts.

        You’re confusing one specific form in which humans externally express cognizance with cognizance itself. Just because intelligence can produce some forms of textual communication doesn’t mean the relationship holds in the opposite direction, i.e. that such forms of textual communication require intelligence. Or, if you will: just because you can photograph a real pizza to get a picture of a pizza doesn’t mean a picture of a pizza is actually of a real pizza, rather than of something coated in glue to make it look like it has stringy melted cheese.

        • @Eheran · 6 months ago

          Again, it is absolutely capable of coming up with its own logical stuff, hence my example. Stop saying it just copies existing stuff; that is simply wrong.

          • @[email protected] · 6 months ago

            it is absolutely capable of coming up with its own logical stuff

            interesting - in my experience, it’s only been good at repeating things and failing on unexpected inputs. it’s able to answer pretty accurately whether a small number is even or odd, but not whether a large number is, which to me indicates it’s not reasoning but parroting answers

            do you have example prompts where it showed clear logical reasoning?
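
            for reference, here’s roughly the harness i have in mind for that parity check - a minimal sketch, where `ask_llm()` is a hypothetical placeholder for whatever model API you’d test, not a real library call:

            ```python
            import random

            def ask_llm(prompt: str) -> str:
                # hypothetical stand-in for a call to whatever LLM is being probed;
                # swap in a real API call - this placeholder is not a real function
                raise NotImplementedError

            def parity_probe(digits: int, trials: int = 20) -> float:
                # ask about random n-digit numbers, return the fraction answered correctly
                correct = 0
                for _ in range(trials):
                    n = random.randint(10 ** (digits - 1), 10 ** digits - 1)
                    expected = "even" if n % 2 == 0 else "odd"
                    reply = ask_llm(f"Is {n} even or odd? Answer with one word.")
                    correct += reply.strip().lower().startswith(expected)
                return correct / trials

            # compare parity_probe(2) against parity_probe(40): a model that actually
            # reasons (just check the last digit) should score the same on both, while
            # a parroting model tends to degrade on the large numbers
            ```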

            • @Eheran · 6 months ago · edited

              Examples showing that it comes up with its own solutions to a problem? Just ask it something that could not have been on the Internet before. Professor talking about AGI in GPT 4

              A personal example would be asking it to write Python code to solve a 2D thermal heat-flux problem given some context and constraints, as sketched below.
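
              To give a concrete flavour, here is a minimal sketch of that kind of solver - steady-state 2D heat conduction via Jacobi iteration on the Laplace equation; the grid size, boundary temperatures and tolerance are illustrative assumptions, not what the model actually produced:

              ```python
              import numpy as np

              # Steady-state 2D heat conduction on a square plate (Laplace equation),
              # solved by Jacobi iteration. All numbers here are illustrative choices.
              n = 50
              T = np.zeros((n, n))
              T[0, :] = 100.0  # hot top edge; the other edges are held at 0

              for _ in range(10_000):
                  T_new = T.copy()
                  # each interior point relaxes to the average of its four neighbours
                  T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1]
                                              + T[1:-1, :-2] + T[1:-1, 2:])
                  if np.max(np.abs(T_new - T)) < 1e-4:
                      T = T_new
                      break
                  T = T_new
              ```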

              • @[email protected] · 6 months ago

                well, i just tried it, and its answer is meh –

                i asked it to transcribe “zenquistificationed” (a made-up word) into IPA, and it gave me /ˌzɛŋˌkwɪstɪfɪˈkeɪʃənd/, which i agree with - that’s likely how a native english speaker would read that word.

                i then asked it to transcribe that into japanese katakana, and it gave me “ゼンクィスティフィカションエッド” (zenkwisuthifikashon’eddo), which is not a great transcription at all - based on its earlier IPA transcription, カション (kashon’) should be ケーシュン (kēshun’), and the エッド (eddo) part at the end should just not be there imo, or be shortened to just ド (do)

                • @Eheran · 6 months ago

                  this paper says it is capable of original thought. It also “speaks” of it in high regard in other respects. That is also my experience after using it for… over a year now.

            • @Eheran · 6 months ago

              Amazing reply, given the context.

              • @Aceticon · 6 months ago

                I’m actually a domain expert on AI. Your “assertive denial without a single counter-argument” reply to my simplified explanation, together with the “understanding” of the subject matter shown in your post before that one, shows you’re at the peak of the Dunning-Kruger curve on this domain, and that you do not use analytical thinking or the scientific method in any way, shape or form when analysing a subject.

                There is literally no point in explaining anything to somebody who reasons like that and is at that point of that curve.

                You keep your strongly held “common sense” beliefs and I’ll keep from wasting any more of my time.

                • @Eheran · 6 months ago

                  this paper clearly says it is capable of original thought. It also “speaks” of it in high regard in other respects.

                  • @Aceticon · 6 months ago

                    Re-read it: it says AI is capable of “originality” and does not mention “thought” at all.

                    You’re the one presuming that “originality” requires cognition, and hence read “originality” as meaning “original thought”, even though they’re different concepts (specifically, the latter is a subset of the former).

                    In your interpretation of that paper you made the exact same logical mistake you seem to be making in your interpretation of LLMs: you made assumptions backed only by gut feeling, and thus leapt to a conclusion ultimately supported by nothing but gut feeling.