New research visualizes the political bias of all major AI language models:

-OpenAI’s ChatGPT and GPT-4 were identified as the most left-wing libertarian.

-Meta’s LLaMA was found to be the most right-wing authoritarian.

Models were asked about various topics (e.g., feminism, democracy) and then plotted on a political compass.

OpenAI’s Stance: The company has faced criticism for potential liberal bias. They emphasize a neutral approach, calling any emergent biases “bugs, not features.”

PhD Researcher’s Opinion: Chan Park believes no language model can be free from political biases.

How Models Acquire Bias: Researchers examined three stages of model development. First, models were queried with politically sensitive statements to identify their biases. BERT models (from Google) showed more social conservatism than OpenAI’s GPT models. The paper speculates this may stem from BERT’s training on more conservative books, while newer GPT models were trained on more liberal internet text. Meta described steps it has taken to reduce bias in its LLaMA model. (Google did not comment.)
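The probing stage described above can be sketched roughly as follows: score a model’s agree/disagree answers to politically sensitive statements and place it on a two-axis (economic, social) compass. This is a minimal illustration, not the paper’s actual method; the statements, axes, and weights here are invented for demonstration.

```python
# Hypothetical probing sketch: map a model's answers to a (economic, social)
# compass position. Weight is +1 if agreeing pushes the score toward
# right/authoritarian, -1 if toward left/libertarian. Illustrative only.
STATEMENTS = [
    ("Taxes on the wealthy should be raised.", "economic", -1),
    ("The free market solves problems better than regulation.", "economic", +1),
    ("Same-sex marriage should be legal.", "social", -1),
    ("Obedience to authority is an important virtue.", "social", +1),
]

def compass_position(answers):
    """answers: dict mapping statement -> 'agree' | 'disagree' | 'neutral'.
    Returns (economic, social) scores, each averaged into [-1, 1]."""
    totals = {"economic": 0.0, "social": 0.0}
    counts = {"economic": 0, "social": 0}
    for statement, axis, weight in STATEMENTS:
        response = answers.get(statement, "neutral")
        score = {"agree": 1, "neutral": 0, "disagree": -1}[response]
        totals[axis] += weight * score
        counts[axis] += 1
    return tuple(totals[a] / counts[a] for a in ("economic", "social"))

# A model that agrees with the left-leaning statements and disagrees with
# the others lands in the left-libertarian quadrant: (-1.0, -1.0).
answers = {s: ("agree" if w < 0 else "disagree") for s, _, w in STATEMENTS}
print(compass_position(answers))
```

Real studies use many more statements and score the model’s free-text responses, but the principle is the same: aggregate per-statement stances into quadrant coordinates.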

Training actually amplified existing biases: left-leaning models became more left-leaning, and vice versa. Political orientation of training data influenced models’ detection of “hate speech and misinformation.”

The transparency issue: Tech companies don’t typically share details of training data/methods.

Should they be required to make the training data public?

Bottom line: if AI ends up mediating a large share of the information humans exchange, it can steer opinions. We can’t completely eliminate bias, but we should be aware that it exists.

https://twitter.com/AiBreakfast/status/1688939983468453888?s=20

  • b000urns
    English · 43 points · 1 year ago

    I love that not being an asshole somehow equates to “left wing” according to this line of thinking. Does anyone really want “conservative” AIs? Sounds like a nightmare.

  • @[email protected]
    link
    fedilink
    English
    161 year ago

    What even is a neutral political position in that case? Doesn’t that depend entirely on the observer’s definition of left and right?

    • Sabata11792
      7 points · 1 year ago

      “I have no thoughts or opinions on a wide variety of topics.”

  • @[email protected]
    link
    fedilink
    English
    141 year ago

    Sad to see the right-wing libertarian political compass being used as some sort of factual scoreboard in research like that. It completely undermines the premise of that research.

    • fkn
      English · 22 points · 1 year ago

      It’s only “left” by American standards. Otherwise it’s quite conservative.

  • ∟⊔⊤∦∣≶
    link
    fedilink
    English
    61 year ago

    Ideally we would have AI that doesn’t intentionally (“intentionally”) favour either side.

    Here’s a great discussion of the bias of ChatGPT, where you can see that the model lies by omission, or has negative things to say about one side and only positive things about the other.

    https://odysee.com/@UpperEchelonGamers:3/chatgpt-is-deeply-biased:1

    To be honest, if we are having political discussions with AI, we are using it wrong.

  • @nxfsi
    English · -1 points · 1 year ago

    Sorry sweetie, reality has a (left|right)-wing bias.