• @[email protected]
    link
    fedilink
    English
    396 months ago

    Oh, it’s worse than that.

    Google’s “AI” results feed you things from 10-year-old Reddit posts that are subtle (but sometimes, also not so subtle) bullshit.

    Whatever they’re using to curate training data is evidently pretty awful at detecting shitposts.

    • Carighan Maconar · 15 points · 6 months ago

      Bold of you to assume they’re curating their training data.

    • Diplomjodler · 8 points · 6 months ago

      Those underpaid Indians probably aren’t very good at picking up irony, even if they give a shit.

    • @CosmoNova · 4 points · 6 months ago

      Most of the curation or fine-tuning is done in low-income African countries, so this is hardly surprising. They’re cheap labour, but you can’t expect them to reliably detect sarcasm or notice mistakes in specialized fields. They basically give a thumbs up whenever the AI sounds convincing. Of course, that includes instances where it’s confidently wrong, and that appears to be most of the time with this model.

    • @[email protected]
      link
      fedilink
      English
      26 months ago

      It’s not a training data issue; look up Retrieval-Augmented Generation. It’s basically pulling stuff straight off the web at answer time and taking it as gospel.
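      For what it’s worth, a minimal RAG loop looks roughly like the sketch below. This is assumed, illustrative Python; `search_web` and `generate` are hypothetical placeholders for a real search backend and LLM call, not any actual API:

      ```python
      # Minimal sketch of Retrieval-Augmented Generation (RAG).
      # search_web() and generate() are hypothetical stand-ins for a real
      # search backend and LLM API; the point is the data flow, not the names.

      def search_web(query: str, k: int = 3) -> list[str]:
          """Stub retrieval step: a real system would hit a search index and
          return the top-k page snippets verbatim -- including, potentially,
          a ten-year-old Reddit shitpost."""
          return [f"[web snippet {i} for: {query}]" for i in range(1, k + 1)]

      def generate(prompt: str) -> str:
          """Stub LLM call: a real system would send the prompt to a model."""
          return f"(answer conditioned on {len(prompt)} chars of prompt)"

      def rag_answer(question: str) -> str:
          # The retrieved text is pasted straight into the prompt and treated
          # as ground truth; nothing here checks whether the source is a joke.
          context = "\n\n".join(search_web(question))
          prompt = f"Context:\n{context}\n\nUsing only the context, answer: {question}"
          return generate(prompt)

      print(rag_answer("how do I keep cheese from sliding off pizza?"))
      ```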

      • @Melvin_Ferd · -1 points · 6 months ago

        That’s bullwhip. Why can’t it just think for itself?