@[email protected] to [email protected] • 3 months ago
LLMs produce racist output when prompted in African American English (www.nature.com)
35 comments • cross-posted to: science[email protected]
@[email protected] • 3 months ago (edited)
Pretty much. It was trained on human writing, and then people are all surprised when it has human biases.
@[email protected] • 3 months ago
An LLM needs to evaluate and modify its preliminary output before actually sending it. In the context of a human mind, that’s called thinking before opening your mouth.
@Gradually_Adjusting • 3 months ago
Who among us couldn’t benefit from a little more of that?
@[email protected] • 3 months ago
Humans aren’t always very good at that, and LLMs were trained on stuff written by humans, so here we are.
@Gradually_Adjusting • 3 months ago
Exciting new product from the tech industry: Fruit from the poisoned tree!
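The comment above about an LLM evaluating and modifying its preliminary output before sending it is, roughly, a draft-then-critique loop. Below is a minimal sketch of that idea; the `generate()` function and the prompt wording are hypothetical stand-ins, not any real model API.

```python
# Minimal sketch of a draft-then-critique loop, per the comment above.
# `generate` is a hypothetical stand-in for an LLM call, not a real library API.

def generate(prompt: str) -> str:
    """Placeholder for a call to an actual LLM backend."""
    return f"[model output for: {prompt!r}]"


def respond_with_review(user_prompt: str, max_revisions: int = 2) -> str:
    """Draft a reply, critique it for biased language, and revise before returning."""
    draft = generate(user_prompt)
    for _ in range(max_revisions):
        critique = generate(
            "Review the draft below for biased or stereotyped language and "
            f"list any problems, or say 'no problems'.\n\nDraft:\n{draft}"
        )
        if "no problems" in critique.lower():
            break  # the critique pass found nothing to fix
        draft = generate(
            "Rewrite the draft to fix the listed problems while keeping its meaning."
            f"\n\nDraft:\n{draft}\n\nProblems:\n{critique}"
        )
    return draft


if __name__ == "__main__":
    print(respond_with_review("Summarize the argument in this comment thread."))
```

Of course, the extra passes cost latency and tokens, and as the replies point out, they only help to the extent that the critique step isn’t carrying the same biases as the draft.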