Language models generate text based on statistical probabilities. That is how Microsoft’s Copilot came to make serious false accusations against a veteran court reporter.
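A minimal sketch of that mechanism, with invented probabilities and no real model behind it, shows why this failure mode exists: the model picks whichever continuation is statistically likely given the surrounding words, with no check against reality.

```python
import random

# Toy illustration, not any real model's API: an LLM repeatedly samples the
# next token from a probability distribution conditioned on the text so far.
# The probabilities below are invented for this example.
next_token_probs = {
    "reported on": 0.40,        # true for a court reporter, common in training data
    "covered": 0.35,
    "was charged with": 0.15,   # plausible-sounding but false continuation
    "was convicted of": 0.10,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one continuation, weighted by probability, with no fact check."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Because a reporter's name co-occurs with crime vocabulary in court coverage,
# the false continuations still carry real probability mass, so sampling
# enough times will eventually produce a defamatory sentence.
print("The court reporter", sample_next_token(next_token_probs), "several crimes.")
```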

  • AwkwardLookMonkeyPuppet
    3 months ago

    And there are governments right now using AI for all sorts of things, including criminal investigation and legislation.

  • @[email protected]
    3 months ago

    This is really bad. It shows that LLMs cannot be trusted, since there is no oversight mechanism to correct them. According to this video (in German), Microsoft, which was questioned about the incident, claims that the Copilot feature is to be seen as entertainment rather than a search engine or anything that could be considered a serious source. Although the search results about the victim were deleted after he reached out to Microsoft, the same false results reappeared a few days later.

    With the rise of fake news over the last decade, combined with a growing lack of media literacy, such a feature can destroy lives, especially when people tend to ignore the sources. A victim has hardly any way to prove that these claims are wrong and are the product of a hallucinating LLM. And even if he could, the internet doesn’t forget. Fake news keeps circulating right alongside legitimate news.

    Edit: According to Copilot, among the crimes he supposedly committed is child molestation. That makes it even worse!