• @HighlyRegardedArtist
    13
    2 months ago

    Or, hear me out, there was NO figuring of any kind, just some magic LLM autocomplete bullshit. How hard is this to understand?

      • @HighlyRegardedArtist
        1
        2 months ago

        I have to disagree with that. To quote the comment I replied to:

        AI figured the “rescued” part was either a mistake or that the person wanted to eat a bird they rescued

        Where’s the “turn of phrase” in this, lol? It could hardly read more plainly: they assume this “AI” can “figure” stuff out, which is simply false for LLMs. I’m not trying to attack anyone here, but spreading misinformation is not ok.

        • @[email protected]
          4
          2 months ago

          I’ll be the first one to explain to people that AI as we know it is just pattern recognition, so yeah, it was a turn of phrase, thanks for your concern.

          • @HighlyRegardedArtist
            -1
            2 months ago

            Ok, great to know. Nuance doesn’t come across well on the internet, so your intention wasn’t clear, given all the uninformed hype & grifters around AI. Being somewhat blunt helps get the intended point across. ;)

        • @[email protected]
          2
          2 months ago

          My point wasn’t that LLMs are capable of reasoning. My point was that the human capacity for reasoning is grossly overrated.

          The core of human reasoning is simple pattern matching: regurgitating what we have previously observed. That’s what LLMs do well.

          LLMs are basically at the toddler stage of development, but with an extraordinary vocabulary.
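
          To make the pattern-matching point concrete, here’s a toy sketch (the corpus and names are made up purely for illustration, and this is obviously nothing like a real LLM): it “continues” a prompt by sampling whichever word tended to follow the previous one in text it has already seen. Counted patterns, no understanding.

          # Toy "autocomplete": predict the next word purely from observed word pairs.
          from collections import Counter, defaultdict
          import random

          corpus = (
              "the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog . the dog chased the cat ."
          ).split()

          # Count which word follows which in the observed text (the "patterns").
          following = defaultdict(Counter)
          for current, nxt in zip(corpus, corpus[1:]):
              following[current][nxt] += 1

          def autocomplete(word, length=8):
              """Extend a prompt by sampling likely next words from observed counts."""
              out = [word]
              for _ in range(length):
                  candidates = following.get(out[-1])
                  if not candidates:
                      break
                  words, counts = zip(*candidates.items())
                  out.append(random.choices(words, weights=counts, k=1)[0])
              return " ".join(out)

          print(autocomplete("the"))  # e.g. "the dog sat on the mat . the cat"

          A real model works over learned token probabilities rather than raw word-pair counts, but the “predict the next token from what was seen before” framing is the same.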