• @[email protected]
    -5
    13 hours ago

    It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

    How good are the human answers? I mean, I expect that an AI’s error rate is currently higher than that of an “expert” in their field.

    But I’d guess the AI is quite a bit better than, say, the average Republican.

    • Balder
      1
      edit-2
      6 hours ago

      I guess you don’t get the issue. You give the AI some text and ask it to summarize the key points, and in a percentage of those summaries it gives you wrong info.

      There’s no point in comparing this to a human, since this is usually done for automation, that is, to serve a lot of people or process a large quantity of articles. At best you can compare it to the automated summaries that existed before LLMs, which might not capture all the info, but won’t make up random facts that aren’t in the article.

      • @[email protected]
        2
        5 hours ago

        I’m more interested in the technology itself, rather than its current application.

        I feel like I’m watching a toddler take her first steps, wondering what she will eventually accomplish in her lifetime. But the loudest voices aren’t cheering her on: they’re sitting in their recliners, smugly claiming she’s useless because she can’t even run a marathon, let alone compete with actual athletes!

        Basically, the best AIs currently have college-level mastery of language and the reasoning skills of children. They are already far more capable and productive than anti-vaxxers, or our current president.

        • Balder
          1
          edit-2
          5 hours ago

          It’s not that people simply decided to hate AI. It was the sensationalist media hyping it up to the point of scaring people (“it’ll take all your jobs”), and companies shoving it down our throats by putting it in every product, even when it gets in the way of the functionality people actually want to use. Even my company “forces” us all to use X prompts every week as a sign of being “productive”. Literally every IT consultancy in my country has a ChatGPT wrapper they’re trying to sell, and each one thinks that’s what makes them different.

          The result couldn’t have been different: when something gets this much exposure, it also gets a lot of hate, especially when it’s forced on people.