German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his culture blog articles would be picked up by the chatbot, according to German public broadcaster SWR.

The answers shocked Bernklau. Copilot falsely claimed Bernklau had been charged with and convicted of child abuse and exploiting dependents. It also claimed that he had been involved in a dramatic escape from a psychiatric hospital and had exploited grieving women as an unethical mortician.

Bernklau believes the false claims may stem from his decades of court reporting in Tübingen on abuse, violence, and fraud cases. The AI seems to have combined this online information and mistakenly cast the journalist as a perpetrator.

Microsoft attempted to remove the false entries but only succeeded temporarily. They reappeared after a few days, SWR reports. The company’s terms of service disclaim liability for generated responses.

  • @[email protected]
    2 months ago

    > There’s no easy way to solve this problem

    How about not replacing search engines with this evidently non-functional scam, for instance…?

    > It’s user error

    No. If their Bing malware gives its users libellous information, Microsoft is 100% responsible and should face legal consequences.

    Since this happened in the EU, it will hopefully lead to them being fined where it hurts, and to their LLM malware being removed from public use until it works properly (spoilers: LLMs by definition can’t work properly, except maybe as fiction generators).

    If not, well, model collapse will get rid of this nonsense soon enough, I suppose (garbage in, garbage out works quite fast when you plug the output back into the input), though cleaning the Internet of all the LLM-generated garbage will probably take decades. Hopefully the idiots responsible will be fined to pay for the costs.

    • @Eranziel
      2 months ago

      Agreed. The solution to this is to stop using LLMs to present information authoritatively, especially when it is aimed directly at the general public. The average person has no idea how an LLM works, and therefore no idea why they shouldn’t trust it.