I was in a group chat where the bro copied and pasted my question into ChatGPT, took a screenshot, and pasted it into the chat.

As a joke, I said that Gemini disagreed with the answer. He asked what it said.

I made up an answer and then said I did another fact check with Claude, and ran it through “blast processing” servers to “fact check with a million sources”.

He replied that ChatGPT 5.1 is getting unreliable even on the $200-a-month plan, and that he's considering switching to a smarter agent.

Guys - it’s not funny anymore.

  • cmhe · 4 days ago
    Have you ever used a chatbot? The fact that an answer is unverifiable doesn't stop them from answering at all.

    Yes, I've used chatbots. And yes, I know they always manage to generate answers full of conviction even when they're wrong. I never said otherwise.

    My point is that the person using a chatbot/LLM needs to be able to easily verify whether a generated reply is right or wrong; otherwise it doesn't make much sense to use an LLM at all, because they could have just researched the answer directly instead.