• @NevermindNoMind

    I’ll give you an example that comes to mind. I had a question about the political leanings of a school district, so I asked the bots whether the district had any recent controversies: a conservative takeover of the school board, bans on CRT, actions against transgender students, book bans, defying the state’s COVID vaccine or mask requirements, things like that. Bing Chat and ChatGPT (which had internet access at the time) both said they couldn’t find anything like that; I think Bing turned up some small-potatoes local controversy from the previous year, and both bots noted that the Congressional district containing the school district leaned Dem in the last election. When I asked Bard the same question, it confidently told me that this same school district had recently been overrun by conservatives in a recall and had gone on to do all kinds of horrible things. It was a long and detailed response. I was surprised and asked for sources, since my own searching hadn’t turned any of that up, and at that point Bard admitted it lied.

    I don’t know, my experience with Bard has been way worse than just evasive lying. I routinely ask all three (and now Anthropic’s, since they opened that up) the same copy-and-paste questions to see the differences, and whenever I paste my question into Bard I think, “wonder what kind of bullshit it’s going to come up with now.” I don’t use it much because I don’t trust it. It seems like you’re more familiar with Bard, so maybe your experience is different.

    • @FringeTheory999OP

      Interesting. Next time I’ll try a similar scenario and see what happens.