• @FMT99
    75 · 6 months ago

    Why would you ask a bot to generate a stereotypical image and then be surprised when it generates a stereotypical image? If you give it a simplistic prompt, it will come up with a simplistic response.

    • @[email protected]
      -11 · 6 months ago

      So the LLM answers what’s relevant according to stereotypes instead of what’s relevant… in reality?

      • @Grimy
        21 · edited · 6 months ago

        It just means there’s a bias in the data that is probably being amplified during training.

        It answers what’s relevant according to its training.
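        The amplification point above can be sketched in a few lines. This is a toy illustration under an assumed, hypothetical 70/30 skew in the training data (the numbers and labels are made up for the example): even a moderate bias in the data becomes total once the model always picks its single most likely answer, which is one way training-time bias gets amplified at generation time.

        ```python
        from collections import Counter

        # Hypothetical skewed training data: 70% of examples carry one label.
        training_data = ["majority"] * 70 + ["minority"] * 30

        freq = Counter(training_data)
        p_majority = freq["majority"] / len(training_data)  # learned frequency: 0.7

        # Greedy (most-likely) generation always emits the majority label,
        # so a 70/30 skew in the data shows up as 100/0 in the outputs.
        generations = [max(freq, key=freq.get) for _ in range(100)]
        amplified = generations.count("majority") / len(generations)

        print(p_majority, amplified)  # 0.7 in the data → 1.0 in greedy outputs
        ```

        Sampling from the learned distribution instead of taking the argmax would reproduce the 70/30 split rather than amplify it, which is why decoding strategy matters as much as the data.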