Apple is failing to effectively monitor its platforms or scan for images and videos of the sexual abuse of children, child safety experts allege, which is raising concerns about how the company can handle growth in the volume of such material associated with artificial intelligence.

The UK’s National Society for the Prevention of Cruelty to Children (NSPCC) accuses Apple of vastly undercounting how often child sexual abuse material (CSAM) appears in its products. In a year, child predators used Apple’s iCloud, iMessage and FaceTime to store and exchange CSAM in a higher number of cases in England and Wales alone than the company reported across all other countries combined, according to police data obtained by the NSPCC.

Through data gathered via freedom of information requests and shared exclusively with the Guardian, the children’s charity found Apple was implicated in 337 recorded offenses of child abuse images between April 2022 and March 2023 in England and Wales. In 2023, Apple made just 267 reports of suspected CSAM on its platforms worldwide to the National Center for Missing & Exploited Children (NCMEC), which is in stark contrast to its big tech peers, with Google reporting more than 1.47m and Meta reporting more than 30.6m, per NCMEC’s annual report.

  • Flying Squid

    I honestly do not know a good way to monitor these platforms for CSAM. Obviously, they need to be monitored, but the choices, as far as I can tell, are either to pay people to look at traumatic things all the time or to use AI to do it. The former sounds like either torture or titillation depending on who you are, neither of which is a good thing, and I’m not convinced the latter will work.

    • @Docus

      “Obviously, they need to be monitored” is a flawed view. Do you also, obviously, want all your mail to be checked and all your phone calls to be recorded?

      Obviously, most of us would like the producers of CSAM, and the people spreading it, to be held to account. But arguing that if we could just monitor these platforms we would reduce the problem is not credible.

    • @SlopppyEngineer

      Well, to get the AI to have enough training data, you have to pay people to look at traumatic things all the time first. Once that is done, you still need some people to check the AI’s output to catch false positives, since being falsely charged can be devastating. There are already stories of people being arrested for CSAM because they sent a picture of their child in a swimming pool to the grandparents.

      • Flying Squid

        Good point. So no matter what, you end up essentially either torturing people (sure, they’re being paid, but it’s still a form of psychological torture to have to look at CSAM images all the time) or hiring people who want to look at CSAM. Both are terrible options, but I guess the former is really the only viable one.

        And, of course, the humans hired to do this are always from developing nations, so companies can get away with paying them a few dollars a day.