• Alphane Moon
    17 hours ago

    I’ve been reading (and subscribing to) Ars Technica for a long time (20+ years reading, ~10 year sub).

    While they have pretty solid coverage on many topics (science, US public policy, general tech), their coverage of Apple has always been very biased. The Apple fanboys in the comments are also extremely annoying and pathetic.

    EDIT: Added the paragraphs in question:

    The BBC stories about the error-prone AI have often seemed to lack understanding of how the Apple Intelligence notification summaries work—for example, in suggesting that all users received the offending notification about Mangione. The wording of the summaries varies on individual devices depending on what other notifications were received around the same time.

    Nevertheless, it’s a serious problem when the summaries misrepresent news headlines, and edge cases where this occurs are unfortunately inevitable. Apple cannot simply fix these summaries with a software update. The only answers are either to help users understand the drawbacks of the technology so they can make better-informed judgments or to remove or disable the feature completely. Apple is apparently going for the former.

    We’re oversimplifying a bit here, but generally, LLMs like those used for Apple’s notification summaries work by predicting portions of words based on what came before and are not capable of truly understanding the content they’re summarizing.

    Further, these predictions are known to not be accurate all the time, with incorrect results occurring a few times per 100 or 1,000 outputs. Deploying this technology at scale without users really understanding how it works is risky at best, whether it’s with the iPhone’s summaries of news headlines in notifications or Google’s AI summaries at the top of search engine results pages. Even if the vast majority of summaries are perfectly accurate, there will always be some users who see inaccurate information.
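    To make the quoted point about word prediction concrete, here is a minimal sketch. It is only my own illustration under stated assumptions (a made-up corpus, invented volume numbers, a trivial bigram model instead of a real neural network), not how Apple Intelligence actually works:

        # Toy sketch only (my illustration, not Apple's implementation): a bigram
        # "predict the next word from what came before" model. Real LLMs use neural
        # networks over sub-word tokens and huge contexts, but the core loop is the
        # same: pick a likely continuation, append it, repeat.
        from collections import Counter, defaultdict

        corpus = (
            "markets rally after fed decision . "
            "markets fall after fed decision . "
            "markets rally on strong jobs report ."
        ).split()

        # Count, for each word, how often every other word follows it.
        next_counts = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            next_counts[prev][nxt] += 1

        def summarize(start_word, max_words=5):
            """Greedy generation: always take the most frequent next word."""
            out = [start_word]
            for _ in range(max_words):
                candidates = next_counts.get(out[-1])
                if not candidates:
                    break
                out.append(candidates.most_common(1)[0][0])
            return " ".join(out)

        print(summarize("markets"))
        # Prints "markets rally after fed decision ." even if the story being
        # summarized was about markets falling: the model just chains the
        # statistically most likely words, with no notion of what is true.

        # The scale point from the quoted paragraph, with assumed numbers:
        error_rate = 1 / 1000            # "a few times per 100 or 1,000 outputs"
        summaries_per_day = 100_000_000  # hypothetical fleet-wide volume
        print(f"~{error_rate * summaries_per_day:,.0f} wrong summaries per day")

    Even at a 0.1 percent error rate, multiplying by the volume of notifications leaves a large absolute number of bad summaries, which is the article’s core point.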

    It’s no wonder that we are stuck in an era of corrupt oligarchic regimes, weakened democracy, rising authoritarianism and an inability to solve our pressing problems.

    I will add that this is not doomerism on my part. History always goes in cycles; the current regime will eventually reach a point where it won’t be viable under the weight of its own contradictions. But that doesn’t mean we have yet reached the trough of the cycle.

    We are in for some interesting times.

    • paraphrand
      1 day ago

      If you go over to r/singularity, they have the same “it’s an LLM, you can’t expect no errors! AGI here we come” attitude too.

      Constantly apologizing for LLMs and promising the next hit will be better/perfect is… it’s like a cult. More so than Apple fans, because they see it as reshaping reality via the singularity.

      I’m disappointed that Apple jumped in while the error rate is still high. It’s almost like everyone on the side of modern AI just wants us all to get over it, get used to the errors, and trust in future iterations.