Google Gemini seems to have been programmed to provide cookie-cutter responses when asked whether Trump tried to overturn the election.
When you point this out to Gemini, it says it isn’t programmed to avoid any topics or viewpoints.
Even saying you’ll accept a variety of sources and viewpoints on the topic so you can reach your own conclusion just results in it saying it can’t answer.
When asked whether it has been trained on research papers, case law, indexed news stories, and even Wikipedia, it says that it has.
I think asking the “AI” for information about itself is inherently flawed. It cannot guarantee that the information it provides is true. The only thing Gemini itself can verify is that it cannot answer that specific prompt at that specific time.
I was unable to get it to mention Trump, though it did link to a webpage by campaign.legal.
Damn, it’s not saying anything about those court cases.
Very true, but the fact that it refuses to answer, even though I already know it has enough information in its index to provide at least some facts and viewpoints, shows that it has been programmed not to answer.
Well, yeah. That’s what it said.
It’s trained by reading the horrible morass of stuff on the Internet. Topics with larger amounts of disinformation are areas where these models are very prone to making mistakes. Cross those topics with ones where misinformation, or even the appearance of misinformation, is particularly damaging to the world or to the company’s reputation, and you have a good list of topics that are probably not good candidates to let your chatbot talk about.
It doesn’t do “reasoning” or “critical thinking” in the way you might expect from something that can communicate so articulately. It doesn’t know what’s accurate or not, only what’s likely to be stated on the Internet. Unfortunately, people on the Internet are very likely to say some bonkers things about the 2020 election in particular, and anything political in general, even in sources that might normally be ranked higher for factuality, like a news publication.
It’s not just Trump; it’s anything political.
This type of AI isn’t an expert, it’s a mimic. It knows how to mimic patterns, and it’s been told to mimic something knowledgeable and helpful based on all the text on the Internet, where people regularly present themselves as knowledgeable regardless of their basic sanity.