• @[email protected]
    4 days ago

    Just because you haven’t personally gotten an egregiously wrong answer doesn’t mean it won’t give one, which means you have to check anyway. Google’s AI famously recommended adding glue to your pizza to make the cheese more stringy. Just a couple of weeks ago I got blatantly wrong information about quitting SSRIs, with its source links directly contradicting its confidently stated conclusion. I had to spend EXTRA time researching just to make sure I wasn’t being gaslit.

    • @Donkter
      4 days ago

      Google’s AI is famously shitty. ChatGPT, especially the most recent version, is very good.

      Also don’t use LLMs for sensitive stuff like quitting SSRIs yet.

      • @[email protected]
        2 days ago

        That’s the thing. I didn’t want to use it. The AI’s input was entirely unsolicited, and luckily I knew better than to trust it. I doubt the average user is going to care enough to get a second opinion.

        • @Donkter
          2 days ago

          Right, that’s a problem you have with the corporation, not the tool itself.