• @SlopppyEngineer · 37 points · 7 months ago

      “If it’s on the internet it must be true” implemented in a billion-dollar project.

      • @[email protected]
        link
        fedilink
        57 months ago

        Not sure what would frighten me more: the fact that this is training data, or if it was hallucinated.

        • @[email protected]
          link
          fedilink
          English
          4
          edit-2
          7 months ago

          Neither; in this case it’s an accurate summary of one of the results, which happens to be a shitpost on Quora. See, LLM search results can work as intended and authoritatively repeat search results with zero critical analysis!

      • @[email protected]
        link
        fedilink
        English
        47 months ago

        Pretty sure AI will start telling us “You should not believe everything you see on the internet as told by Abraham Lincoln”

    • kate · -1 points · 7 months ago

      Can’t even rly blame the AI at that point

      • @[email protected]
        link
        fedilink
        117 months ago

        Sure we can. If it gives you bad information because it can’t differentiate between a joke and good information… well, seems like the blame falls exactly at the feet of the AI.

        • kate · 5 points · 7 months ago

          Should an LLM try to distinguish satire? Half of Lemmy users can’t even do that

          • @[email protected]
            link
            fedilink
            97 months ago

            Do you just take what people say on here as fact? That’s the problem: people are taking LLM results as fact.

          • ancap shark · 1 point · 7 months ago

            If it’s being used to give the definitive answer to a search, then it should. If it can’t, then it shouldn’t be used for that.