@[email protected] to Technology (English) • 10 months ago
Google explains Gemini's "embarrassing" AI pictures of diverse Nazis — www.theverge.com (22 comments)
@jacksilver (English) • 19 points • 10 months ago
It's done because the underlying training data is heavily biased to begin with. It's been a known issue for a long time with AI/ML — for example, racist cameras have been an issue for decades: https://petapixel.com/2010/01/22/racist-camera-phenomenon-explained-almost/. So they do this to try to correct for biases in their training data. It's a terrible idea, and it shows the rocky path forward for GenAI, but it's easier than actually fixing the problem ¯\_(ツ)_/¯
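To make the "underlying data is biased" point concrete, here is a minimal sketch (my own illustration — not anything Google has described about Gemini) of the classic reweighting approach: give each training example a weight so every group contributes equally in aggregate, instead of letting over-represented groups dominate the model:

```python
from collections import Counter

def balanced_sample_weights(labels):
    """Per-example weights that equalize total group influence.

    Each group's weights sum to the same value, so a group with few
    examples gets a larger weight per example.
    """
    counts = Counter(labels)
    n_groups = len(counts)
    total = len(labels)
    # weight = total / (n_groups * count_of_that_group)
    return [total / (n_groups * counts[lab]) for lab in labels]

# Toy imbalanced dataset: four "A" examples, one "B" example.
# A-examples each get weight 0.625 (sum 2.5); the lone B gets 2.5.
weights = balanced_sample_weights(["A", "A", "A", "A", "B"])
```

This fixes the arithmetic of the imbalance, but not its cause — which is the commenter's point: post-hoc corrections (whether reweighting or output-side steering) are cheaper than collecting representative data, and they fail in embarrassing ways when applied bluntly.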