When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

  • Queen HawlSera

    It’s a fucking Chinese Room. Real AI is not possible. We don’t know what makes humans think, so of course we can’t make machines do it.

    • @[email protected]

      You forgot the ever important asterisk of “yet”.

      Artificial General Intelligence (“Real AI”) is all but guaranteed to be possible. Because that’s what humans are. Get a deep enough understanding of humans, and you will be able to replicate what makes us think.

      Barring that, there are other avenues for AGI. LLMs aren’t one of them, to be clear.

      • @[email protected]

        I actually don’t think a fully artificial human-like mind will ever be built outside of novelty, purely because we ventured down the path of binary computing.

        Great for mass calculation, but horrible for the kinds of complex pattern recognition that the human mind excels at.

        The singularity point isn’t going to be the matrix or skynet or AM, it’s going to be the first quantum device successfully implanted and integrated into a human mind as a high speed calculation sidegrade “Third Hemisphere.”

        Someone capable of seamlessly balancing between human pattern recognition abilities and emotional intelligence, while also capable of performing near-instant multiplication of 15-dimensional matrices with 100 entries per dimension.

  • @[email protected]

    I’d love to see more AI providers getting sued for the blatantly wrong information their models spit out.

    • @[email protected]

      I don’t think they should be liable for what their text generator generates. I think people should stop treating it like gospel. At most, they should be liable for misrepresenting what it can do.

      • @[email protected]

        If these companies are marketing their AI as being able to provide “answers” to your questions they should be liable for any libel they produce.

        If they market it as “come have our letter generator give you statistically associated collections of letters to your prompt” then I guess they’re in the clear.

      • @[email protected]

        So you don’t think these massive megacompanies should be held responsible for making disinformation machines? Why not?

      • @[email protected]

        I want them to have more warnings and disclaimers than a pack of cigarettes. Make sure the users are very much aware they can’t trust anything it says.

      • Stopthatgirl7OP

        If they aren’t liable for what their product does, who is? And do you think they’ll be incentivized to fix their glorified chat boxes if they know they won’t be held responsible for it?

        • @lunarul

          Their product doesn’t claim to be a source of facts. It’s a generator of human-sounding text. It’s great for that purpose and they’re not liable for people misusing it or not understanding what it does.

          • Stopthatgirl7OP

            So you think these companies should have no liability for the misinformation they spit out. Awesome. That’s gonna end well. Welcome to digital snake oil, y’all.

            • @lunarul

              I did not say companies should have no liability for publishing misinformation. Of course if someone uses AI to generate misinformation and tries to pass it off as factual information they should be held accountable. But it doesn’t seem like anyone did that in this case. Just a journalist putting his name in the AI to see what it generates. Nobody actually spread those results as fact.

      • @[email protected]

        If we’ve learned any lesson from the internet, it’s that once something exists it never goes away.

        Sure, people shouldn’t believe the output of their prompt. But if you’re generating that output, a site can use the API to generate a similar output for a similar request. A bot can generate it and post it to social media.

        Yeah, don’t trust the first source you see. But if the search results are slowly being colonized by AI slop, it gets to a point where the signal-to-noise ratio is so poor it stops making sense to only blame the poor discernment of those trying to find the signal.
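
        As a rough sketch of how little effort that republishing takes (assuming an OpenAI-style chat completions endpoint; the key, model name, and prompt below are placeholders, not anything from the article):

        ```python
        import requests

        API_KEY = "sk-..."  # placeholder credential

        # Any site or bot can ask a text-generation API the same question the
        # journalist asked, then republish whatever comes back, verified or not.
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "model": "gpt-4o-mini",  # hypothetical model choice
                "messages": [{"role": "user", "content": "Who is Martin Bernklau?"}],
            },
            timeout=30,
        )

        # Nothing in this pipeline checks the answer before it gets posted elsewhere.
        answer = resp.json()["choices"][0]["message"]["content"]
        print(answer)
        ```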

  • @[email protected]

    It’s frustrating that the article treats the problem like the mistake was including Martin’s name in the data set, and muses that that part isn’t fixable.

    Martin’s name is a natural feature of the data set, but when they should be talking about fixing the AI model to stop hallucinations or allow humans to correct them, it seems the only fix is to censor the incorrect AI response, which gives the implication that it was saying something true but salacious.

    Most of these problems would go away if AI vendors exposed the reasoning chain instead of treating their bugs as trade secrets.

    • @Arbiter

      Or just stop using buggy AIs for everything.

    • 100

      just shows that these "AI"s are completely useless at what they are trained for

      • @[email protected]

        They’re trained for generating text, not factual accuracy. And they’re very good at it.

  • @[email protected]

    The worrying truth is that we are all going to be subject to these sorts of false correlations and biases and there will be very little we can do about it.

    You go to buy car insurance, and find that your premium has gone up 200% for no reason. Why? Because the AI said so. Maybe someone with your name was in a crash. Maybe you parked overnight at the same GPS location where an accident happened. Who knows what data actually underlies that decision or how it was made, but it was. And even the insurance company itself doesn’t know how it ended up that way.

    • @[email protected]

      We’re already there, no AI needed. Rates are all generated by computer. Ask your agent why your rate went up and they’ll say “idk computer said so”.

  • @rovingnothing29

    Isn’t this literally a subplot in the movie Brazil?

  • @AbouBenAdhem

    Or… maybe he really is a criminal mastermind, desperately trying to do damage control after AI blew his cover. /s

  • sunzu2

    These are not hallucinations, whatever that is supposed to mean lol

    The tool is working as intended and getting wrong answers due to how it works. His name frequently had these words around it online, so the AI told the story it was trained on. It doesn’t understand context. I am sure you can also ask it clarifying questions and it will admit it is wrong and correct itself…

    AI🤡

      • snooggums

        Hallucinations is a fancy word for being wrong.

        • chiisana

          The models are not wrong. They are nothing but statistical models that are really good at predicting the next word likely to follow, based on the prior information given. They don’t have an understanding of the context of the words, just that statistically they’re likely to follow. As such, all LLM outputs are correct to their design.

          The users’ assumption/expectation of the output being factual is what is wrong. Hallucination is a fancy word in an attempt to make the users not feel as upset when the output passage doesn’t match their assumption/expectation.
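
          A minimal toy sketch of that “predict the next word” design (the vocabulary and probabilities here are invented purely for illustration, not taken from any real model):

          ```python
          import random

          # Toy "language model": a context maps to a probability distribution over
          # possible next words, learned only from how often words co-occur in text.
          next_token_probs = {
              ("the", "reporter", "is", "a"): {
                  "journalist": 0.20,
                  "convicted": 0.45,  # frequent neighbour in court-coverage text
                  "conman": 0.35,
              },
          }

          def generate_next(context):
              """Sample the next word from the learned distribution.

              There is no fact-checking step anywhere: the output is whatever is
              statistically likely to follow, whether or not it is true.
              """
              dist = next_token_probs[tuple(context)]
              words, weights = zip(*dist.items())
              return random.choices(words, weights=weights, k=1)[0]

          print(generate_next(["the", "reporter", "is", "a"]))
          # Most runs print "convicted" or "conman" because those words dominate the
          # (imaginary) training text around the context, not because they are true.
          ```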

          • snooggums

            The users’ assumption/expectation of the output being factual is what is wrong.

            So randomly spewing out bullshit is the actual design goal of AI models? Why does it exist at all?