• Greg Clarke
    36 days ago

    That’s my point: if the model returns a hallucinated source, you can probably disregard its output. But if the model provides an accurate source, you can verify its output. Depending on the information you’re researching, this approach can be much quicker than using Google. Out of interest, have you experienced source hallucinations on ChatGPT recently (in the last few weeks)? I have not experienced source hallucinations in a long time.

    • @[email protected]
      26 days ago

      I use GPT (4o, premium) a lot, and yes, I still sometimes experience source hallucinations. It also will sometimes hallucinate incorrect things not in the source. I get better results when I tell it not to browse. The large context of processing web pages seems to hurt its “performance.” I would never trust gen AI for a recipe. I usually just use Kagi to search for recipes and have it set to promote results from recipe sites I like.

    • @[email protected]
      15 days ago

      I have stopped using OpenAI services and now only use AI services through the duck.ai website, to try to protect my privacy.