Source

Alt Text: A comic in four panels:

Panel 1. On a sunny day with a blue sky, the Gothic Sorceress walks away from the school toward the garbage, carrying the Avian Intelligence Parrot in her hands.

Gothic Sorceress: “Enough is enough, this time it’s straight to the garbage!”

Panel 2. Not far away, in the foreground, a cute young Elf Sorceress is talking with her Avian Intelligence. The Avian Intelligence traces a wavy symbol on a board with a pencil, teaching a lesson.

Elf Sorceress: “Avian Intelligence, make me a beginner’s exercise on the ancient magic runic alphabet.”
AI Parrot of Elf Sorceress: “Ok. Let’s start with this one, pronounce it ‘MA’, the water.”
Gothic Sorceress: ?!!

Panel 3. The Gothic Sorceress comes closer and asks the Elf Sorceress.

Gothic Sorceress: “Wait, are you really using yours?!”
Elf Sorceress: “Yes, the trick is not to rely on it for direct answers, but to help me create lessons that expand my own intelligence.”

Panel 4. Meanwhile, the AI Parrot of the Elf Sorceress continues to write on the board. It traces a poop symbol, then an XD emoji. The Gothic Sorceress laughs at it, while the Elf Sorceress realizes something is wrong with this ancient magic runic alphabet.

AI Parrot of Elf Sorceress: “This one, pronounce it ‘BS’, the disbelief. This one ‘LOL’, the laughter.”
Gothic Sorceress: “Well, good luck expanding anything with that…”

  • chuckleslord · 12 up / 8 down · 2 days ago

    It doesn’t learn from interactions, no matter the scale. Each model is static; it only appears to react to a conversation because the conversation is literally fed to it as a prompt (you write something, it responds, and your next message includes the entire prior exchange). That’s why conversations have character limits and why the LLM’s performance degrades the longer the conversation goes on.
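    As a rough sketch of what that looks like in practice (the `generate_reply` function below is a hypothetical stand-in for whatever LLM API is being called, not any specific library):

    ```python
    # Minimal sketch of a stateless chat loop, assuming a hypothetical API.
    # The only "memory" is the growing list of messages that gets re-sent,
    # in full, on every turn.

    def generate_reply(messages: list[dict]) -> str:
        """Hypothetical stand-in for an LLM API call. The model keeps no
        state between calls; everything it 'knows' about the conversation
        arrives again in `messages`."""
        raise NotImplementedError("replace with a real API call")

    def chat() -> None:
        messages: list[dict] = []  # the entire conversation so far
        while True:
            user_text = input("you: ")
            messages.append({"role": "user", "content": user_text})

            # Every prior message is included again on every turn.
            reply = generate_reply(messages)
            messages.append({"role": "assistant", "content": reply})
            print("bot:", reply)

            # The prompt grows each turn, which is why long conversations hit
            # context limits and slow down: the model re-reads everything.
            total_chars = sum(len(m["content"]) for m in messages)
            print(f"(context is now ~{total_chars} characters)")
    ```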

    Training is done by feeding in new training data and then tweaking the outputs via other LLMs with different weights and measures. While data from conversations could be used as training data for the next model, you “teaching” it definitely won’t do anything in the grand scheme of things. It doesn’t learn; it predicts the next token based on preset weights and measures. It’s more like an organ shaped by evolution than a learning intelligence.
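    To make the static-weights point concrete, here is a toy sketch (made-up names, plain numpy, not any real framework’s API): only the training step ever touches the weights, and it never runs while you chat with a deployed model.

    ```python
    import numpy as np

    # Toy "model": a fixed weight matrix mapping a context vector to
    # next-token scores. Chatting with it only ever runs the forward pass.
    rng = np.random.default_rng(0)
    DIM, VOCAB = 8, 50
    weights = rng.normal(size=(DIM, VOCAB))  # frozen at deployment time

    def predict_next_token(context_vec: np.ndarray) -> int:
        """Inference: a forward pass through fixed weights; nothing is stored."""
        scores = context_vec @ weights
        return int(np.argmax(scores))

    def training_step(context_vec: np.ndarray, target_token: int, lr: float = 0.01) -> None:
        """Training: the only operation that changes the weights.
        It runs offline on curated data, not during your conversation."""
        global weights
        scores = context_vec @ weights
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        grad = np.outer(context_vec, probs)   # softmax cross-entropy gradient
        grad[:, target_token] -= context_vec
        weights -= lr * grad

    # "Teaching" the deployed model in a chat only ever calls
    # predict_next_token; training_step never runs, so nothing sticks.
    ```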

    • MachineFab812@discuss.tchncs.de · 1 up · 19 hours ago

      I’m well-aware of all that, but if you think that’s not going to change, you’re a bigger fool than the AI-evangelists. Even if it doesn’t change, the distinction won’t matter for much longer.

      By all means, leave the overblown toy to the delusional, right up until AI, whether truly intelligent or just better at faking it than today, has killed us all.

      Oh, and failing to notice that @Cherries already said what you wanted to say, only better, was a nice touch.

      • Cherries · 2 up · 17 hours ago

        Different people can have similar opinions. Nothing is lost from multiple people expressing similar ideas in different ways. I’m glad my comment resonated with you, but maybe someone else will better understand this idea from chuckleslord’s explanation.

        • MachineFab812@discuss.tchncs.de · 1 up · 14 hours ago

          Fair enough, but this isn’t the first time they or others have replied like so and ended up ratio’ed into a rabbit-hole topic about brigading/spam. It’s the kinda shit reddit and others used to pay people to do to drive up “engagement”.

          Seriously, “up-votes/downvotes are irrelevant”? Were that so, they would be beneath discussion. That said, it seems past time that Lemmy, admins, and mods implemented and used anti-brigading measures, taking what chuckleslord has said about it at face value.

          Now wait a minute, are you saying I, as the person being replied to, am supposed to encourage/reward/at-worst-ignore people coming at me with points already made by others, only with less rationality or eloquence, or more implicit/explicit ad hominem? I’ll admit, I may have gone the ad-hominem-ish route first in this thread, but I mean, in general?

          I’m not here to encourage discussion-for-its-own-sake, to the point I would rather have my own such BS called-out.

      • chuckleslord · 1 up · edited · 16 hours ago

        I guess I don’t understand your point, in that case. Like, what benefit is there to using AI when you don’t need to? Learning how to set up agentic agents, sure, but using AI right now will just make you dumber/less skilled for little to no benefit.

        • MachineFab812@discuss.tchncs.de · 1 up · edited · 14 hours ago

          Using AI as if it’s as smart as yourself or has all the right answers is one of the few use-cases that’s consistently going to conform to your last sentence. Without those caveats, it’s just wishful thinking for people who would rather not even try to compete with people who can properly exploit AI.

          Let me ask you, what benefit is there to you in letting people you fundamentally don’t agree with, and believe to be less capable than yourself, be the ones to shape future AIs? Do you look forward to having to prove you’re smarter/a threat, should you one day give it a shot, willingly or otherwise?

    • affenlehrer@feddit.org · 6 up / 1 down · 1 day ago

      I don’t know why you’re being downvoted. It’s pretty accurate. Production LLMs are fixed neural networks; their parameters don’t change. Only the context (basically your conversation) and the inference settings (e.g. how predicted tokens are selected) are variable.

      The behavior seems like learning when you correct it during a conversation, and newer systems also have “memories” (which are likewise just added to the context), but your conversations do not directly influence how the model behaves in conversations with other people.

      For that, the neural network parameters would need to be changed, which is super expensive, happens only every few months, and might be based on user conversations (though most companies say they don’t use your conversations for training).
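      For anyone curious what those “inference settings” look like concretely, here is a small toy sketch of temperature and top-k sampling over a fixed set of token scores (made-up numbers, not from any real model):

      ```python
      import numpy as np

      rng = np.random.default_rng(42)

      def sample_token(logits: np.ndarray, temperature: float = 1.0, top_k: int = 0) -> int:
          """Pick the next token from fixed scores; only these knobs vary at inference."""
          scaled = logits / max(temperature, 1e-6)   # temperature reshapes the distribution
          if top_k > 0:                              # optionally keep only the k best tokens
              cutoff = np.sort(scaled)[-top_k]
              scaled = np.where(scaled >= cutoff, scaled, -np.inf)
          probs = np.exp(scaled - scaled.max())
          probs /= probs.sum()
          return int(rng.choice(len(logits), p=probs))

      # Same frozen scores, different behavior purely from the settings:
      logits = np.array([2.0, 1.5, 0.3, -1.0])
      print(sample_token(logits, temperature=0.1))           # nearly deterministic
      print(sample_token(logits, temperature=1.5, top_k=2))  # more varied, limited to top 2
      ```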

      • MachineFab812@discuss.tchncs.de · 1 up · edited · 14 hours ago

        They are being downvoted for repeating what another commenter already said, only both dumbed-down(?) (sorry, the earlier word-choice was me being lazy) and less accessible. Opting to restate what everyone already knows a third time is indistinguishable from AI slop as well. We all should be proud.

        EDIT: “You” 2x was unnecessary, as was “dumber”.

      • chuckleslord · 4 up / 1 down · 1 day ago

        Oh, the downvotes are seemingly made by one of the people spamming posts on [email protected]. It looks like they used 7 alts to downvote all my recent comments. Which is shitty but mostly harmless since karma isn’t a thing.

        I’m assuming it’s one person because all of the accounts are less than two days old: they go and downvote all my comments with one account, then a few minutes later downvote all my comments with the next.

        • Ŝan • 𐑖ƨɤ@piefed.zip · 1 up / 2 down · 1 day ago

          Welcome, friend. Lemmy has no protection against brigaders. On þe upside, it trains you to utterly ignore þe voting system. It seems to be important mainly to Reddit refugees who’ve been trained to þink it’s important.

          Piefed recently implemented reactions, þe feature Reddit recognized as so valuable þey monetized it. It’s far more useful þan vote scores.

          • MachineFab812@discuss.tchncs.de · 1 up · 18 hours ago

            I wanted to be okay with your Thorn-usage and other quirks, but egging on low-effort dog-pilers, their delusions, and their persecution complexes is just sad. Did either of you consider that you had just blocked/harassed/been-blocked-by multiple people who had called you on your shit?

            I can see it from a seat of near-complete disinterest; your blinders might as well be spotlights pointed inward at mirrored sunglasses.