• kratoz29
    1 day ago

    Is that it?

    One of the things I like most about AI is that it explains in detail each command it outputs for you. Granted, I am aware it can hallucinate, so if I have the slightest doubt about it I usually look on the web too (I use it a lot for basic Linux stuff and Docker).

    Some people won’t give a fuck about what it says and will just copy & paste unknowingly? Sure, that happened in my teenage days too, when all the info was spread across many blogs and wikis…

    As usual, it is not the AI tool that could fuck up our critical thinking, but ourselves.

    • @LovableSidekick
      24 hours ago

      I love how they chose the term “hallucinate” instead of saying it fails or screws up.

        • @pulsewidth
          15 hours ago

          A hallucination is a false perception of sensory experiences (sights, sounds, etc.).

          LLMs don’t have any senses; they have input, algorithms, and output. They also have desired output and undesired output.

          So, no, ‘hallucination’ fits far worse than failure or error or bad output. However, assigning the term ‘hallucination’ does serve the billionaires in marketing their LLMs as actually sentient.

    • @[email protected]
      23 hours ago

      I see it exactly the same way. I bet you could find similar articles about calculators, PCs, the internet, smartphones, smartwatches, etc.

      Society will handle it sooner or later