Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis::Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • @[email protected]
    15 · 9 months ago

    I’m pretty sure it’s generating racially diverse Nazis because companies tinker with the prompts under the hood to counterbalance biases in the training data. A naive implementation of generative AI wouldn’t output Black or Asian Nazis.

    it doesn’t have EQ or sociological knowledge.

    It sort of does (in a poor way), but they call it bias and try to dampen it.

    • @kaffiene
      2 · 9 months ago

      I don’t disagree. The article complained about the lack of nuance in the generated responses, and I was responding to the ability of LLMs and generative AI to exhibit that. I agree with your points about bias.

    • Echo Dot
      0 · 9 months ago

      At the moment AI is basically just a complicated kind of echo. It is fed data and it parrots it back to you with quite extensive modifications, but it’s still the original data deep down.

      At some point that won’t be true and it will be a proper intelligence. But we’re not there yet.

      • @[email protected]
        5 · 9 months ago

        Nah, the problem here is literally that they would edit your prompt and add “of diverse races” to it before handing it to the black box, since the black box itself tends to reflect the built-in biases of training data and produce black prisoners and white scientists by itself.
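
        The kind of silent prompt rewriting described here can be sketched roughly like this. Everything below is a hypothetical illustration of the general technique, not Google's actual code: the keyword list and the exact qualifier text are made-up assumptions.

        ```python
        # Hypothetical sketch of server-side prompt rewriting: the user's prompt
        # is augmented with a diversity qualifier before it ever reaches the
        # image model. The keyword list and qualifier are illustrative only.

        PEOPLE_KEYWORDS = {"person", "people", "man", "woman", "soldier", "scientist"}

        def rewrite_prompt(user_prompt: str) -> str:
            """Append a qualifier if the prompt appears to depict people."""
            words = {w.strip(".,!?").lower() for w in user_prompt.split()}
            if words & PEOPLE_KEYWORDS:
                return user_prompt + ", of diverse races"
            return user_prompt

        # The image model only ever sees the rewritten string:
        print(rewrite_prompt("a german soldier in 1943"))
        ```

        The point of the sketch is that the black-box model behaves exactly as trained; the historically inaccurate output comes from the rewriting layer applying the qualifier regardless of context.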

      • @kaffiene
        1 · 9 months ago

        I pretty much agree with that