Can’t be bothered to read the whole news article? Get a quick AI summary in the post itself. Uses a specialised summariser (not just asking an LLM “summarise this”). Summaries are 60% identical to human summaries and >95% accurate in keeping original meaning.

News summaries have moved to [email protected]. The bot has been updated to use a better web scraping method and improved summarisation.

If you don’t like this, please just block the community; there’s no need to complain or downvote.

  • I’m using an LLM architecture that’s better suited to summarisation, meaning it won’t invent false facts the way traditional GPTs do. The worst errors it has made are a couple of cases of misattribution of actions, which are easily spotted from the context of the whole summary.

    The AI has no more misinformation than a human journalist. It is not biased in its summary. It does not assert falsehoods out of malice. I have been accused of spreading misinformation for many articles, and yes, the bot does repeat misinformation, but it is the misinformation stated in the original human-authored article. It is not my bot’s job to pass judgement, but simply to make your ability to do so easier.

    • @[email protected]
      94 days ago

      I’m using an LLM architecture that’s better suited to summarisation, meaning it won’t invent false facts the way traditional GPTs do.

      What architecture is that? If you have an LLM that doesn’t hallucinate, there will surely have been papers written about the breakthrough.

      The AI has no more misinformation than a human journalist.

      And that dear reader was when the work of foolishness became something much more sinister.

      Humans, and trust in humans, are important. The internet divorced the human face and the accumulation of trust from the news, which has allowed engineered alternative facts to enter the mainstream consciousness, which might be the single biggest harmful development in a modern age which has no shortage of them. I am not trying to tell you that your summarizer project is automatically responsible for that. But be cautious about what future you’re constructing.

    • missingno
      74 days ago

      “My LLM will simply never make a mistake ever”

      I don’t believe you.

        • odd
          43 days ago

          But that’s what you are saying. You either admit that this is a horrible idea, or you are confident that your AI never makes a mistake. It’s black and white, really.

          • Wow, that’s one hell of a false choice. I fully admit my AI makes mistakes, but I believe it makes no more mistakes than a human.

            The guy who made that accusation himself made 2 mistakes (an invented false quote and a false attribution). That’s 2 mistakes in 4 messages, an error rate of 50%; my bot has a measured error rate of <5%.

            It makes errors; it’s just that the rate at which it makes them is far lower than a human’s.
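The error-rate comparison above is simple arithmetic, mistakes divided by outputs. A minimal sketch, assuming the thread’s own numbers (the 2-mistakes-in-4-messages count and the “<5%” bot rate are the poster’s claims, not independently measured figures):

```python
# Sketch of the error-rate comparison from the thread above.
# The inputs are the poster's own claimed numbers, not verified data.

def error_rate(mistakes: int, outputs: int) -> float:
    """Mistakes per output, as a fraction."""
    return mistakes / outputs

human_commenter = error_rate(2, 4)  # 2 mistakes across 4 messages -> 0.5
bot_claimed = 0.05                  # "<5%", as claimed by the poster

print(f"human commenter: {human_commenter:.0%}")  # 50%
print(f"bot (claimed):  <{bot_claimed:.0%}")      # <5%
```

Note that comparing a two-person spat’s 4-message sample against a bot’s aggregate rate is not a like-for-like measurement; the sketch only reproduces the arithmetic as stated.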