Can’t be bothered to read the whole news article? Get a quick AI summary in the post itself. Uses a specialised summariser (not just asking an LLM “summarise this”). Summaries are 60% identical to human summaries and >95% accurate in keeping original meaning.

News summaries have moved to [email protected]. The bot has been updated to use a better web scraping method and improved summarisation; a rough sketch of what such a pipeline might look like is below.

If you don’t like this, please just block the community; no need to complain or downvote.
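
For the technically curious: the thread never names the bot’s actual scraper or model, so the sketch below is purely illustrative. It assumes trafilatura for article extraction and facebook/bart-large-cnn (a seq2seq model fine-tuned on news) as the “specialised summariser”; both are stand-ins, not the bot’s confirmed stack.

```python
# Illustrative sketch only: the bot's real scraper and summariser are
# not named in this thread. trafilatura and facebook/bart-large-cnn
# are assumptions standing in for "a better web scraping method" and
# "a specialised summariser (not just asking an LLM 'summarise this')".
import trafilatura
from transformers import pipeline

# A seq2seq model fine-tuned specifically for news summarisation,
# as opposed to a general-purpose chat model given a prompt.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_url(url: str) -> str | None:
    downloaded = trafilatura.fetch_url(url)   # fetch the page HTML
    text = trafilatura.extract(downloaded)    # strip boilerplate, keep article body
    if not text:
        return None
    result = summarizer(
        text[:4000],      # crude input cap; the model's context window is limited
        max_length=150,   # output length cap, in tokens
        min_length=40,
        do_sample=False,  # deterministic decoding
    )
    return result[0]["summary_text"]
```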

  • @[email protected]
    21
    4 days ago

    If you don’t like this, please just block the community; no need to complain or downvote.

    Best of luck with that!

    I’m actually not trying to poop on some cool new thing you’re setting up, but I think it is pretty clear at this point that building a system that uses an LLM to produce factual information for people is a recipe for well-deserved criticism.

    Also, pay your journalists. Anything that takes them out of the equation will at some point lead to X and YouTube being the only sources of news, feeding everybody whatever somebody feels like paying to produce and distribute for “free.”

  • missingno
    18
    4 days ago

    This is genuinely harmful. LLMs will hallucinate, which means that using them as a substitute for reading the news will result in the spread of misinformation. And in an era where we see just how dangerous misinformation can be, I beg you to please not do this.

    “95% accurate” means 5% lies, which is 5% too many.

    • I’m using an LLM architecture that’s better suited to summarisation, meaning it won’t invent false facts the way traditional GPTs do. The worst errors it has made are a couple of cases of misattribution of actions, which are easily spotted within the context of the whole summary.

      The AI produces no more misinformation than a human journalist. It is not biased in its summaries. It does not assert falsehoods out of malice. I have been accused of spreading misinformation for many articles, and yes, the bot does repeat misinformation, but it is the misinformation stated in the original human-authored article. It is not my bot’s job to pass judgement, but simply to make it easier for you to do so.

      • @[email protected]
        9
        4 days ago

        I’m using an LLM architecture that’s better suited to summarisation, meaning it won’t invent false facts the way traditional GPTs do.

        What architecture is that? If you have an LLM that doesn’t hallucinate, there would surely be papers written about the breakthrough.

        The AI produces no more misinformation than a human journalist.

        And that, dear reader, was when the work of foolishness became something much more sinister.

        Humans, and trust in humans, are important. The internet divorced the human face and the accumulation of trust from the news, which has allowed engineered alternative facts to enter mainstream consciousness, perhaps the single most harmful development in a modern age that has no shortage of them. I am not trying to tell you that your summarizer project is automatically responsible for that. But be cautious about what future you’re constructing.

      • missingno
        7
        4 days ago

        “My LLM will simply never make a mistake ever”

        I don’t believe you.

          • odd
            4
            3 days ago

            But that’s what you are saying. You either admit that this is a horrible idea, or you are confident that your AI never makes a mistake. It’s black and white, really.

            • Wow, that’s one hell of a false choice. I fully admit my AI makes mistakes, but I believe it makes no more mistakes than a human.

              The guy who made said mistake actually made 2 mistakes (an invented false quote and a false attribution); that’s 2 mistakes in 4 messages, an error rate of 50%. My bot has a measured error rate of <5%.

              It makes errors; the rate at which it makes them is just far lower than a human’s.
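
              (How a “measured error rate of <5%” would actually be computed is never stated in this thread. One plausible approach is to spot-check each summary sentence against the source article with an off-the-shelf NLI model and count the unsupported sentences; the sketch below assumes that method and the roberta-large-mnli checkpoint purely for illustration.)

              ```python
              # Hypothetical sketch: the thread does not say how the "<5%" figure
              # was measured. One plausible method is counting summary sentences
              # that an NLI model does not find entailed by the source article.
              from transformers import pipeline

              nli = pipeline("text-classification", model="roberta-large-mnli")

              def error_rate(source: str, summary_sentences: list[str]) -> float:
                  unsupported = 0
                  for sentence in summary_sentences:
                      # Ask the NLI model whether the article entails the sentence.
                      # Crude character cap so the pair fits the model's 512-token window.
                      verdict = nli([{"text": source[:2000], "text_pair": sentence}])[0]
                      if verdict["label"] != "ENTAILMENT":
                          unsupported += 1
                  return unsupported / max(len(summary_sentences), 1)
              ```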

  • @regrub
    18
    4 days ago

    A 1/20 chance of introducing secondhand inaccuracies into news, when it’s already hard enough for news to be accurate.

    Fuck HilariousChaos. I’m glad I blocked them and their low-effort communities a long time ago.