• MudMan
    12 • 18 hours ago

    Oh, so it’s an on-device service initially used to stop accidental sharing of nudes and available as a service call to other apps for that and other types of content.

    Look, I’m not saying you can’t be creeped out by phones automatically categorizing the content of your photos. I’m saying you’re what? Five, ten years late to that and killing this instance won’t do much about it. Every gallery app on Android is doing this. How the hell did you think searching photos by content was working? The default Android Gallery app by Google does it, Google Photos sure as hell does it, Samsung’s Gallery does it. The very minimum I’ve seen is face matching for people.
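
    As a rough sketch of how that search-by-content works once a local model has tagged each photo: the classifier runs on-device, and search is just a lookup in a local label index. The function names below are invented for illustration, not any gallery app's actual API, and the tagging model itself is stubbed out.

```python
from collections import defaultdict

# Hypothetical sketch: an on-device classifier has already produced labels
# for each photo; searching "dog" is then a purely local index lookup,
# with no photo data leaving the device.

def build_index(photo_labels: dict[str, list[str]]) -> dict[str, set[str]]:
    """Map each label to the set of photos a local classifier tagged with it."""
    index: dict[str, set[str]] = defaultdict(set)
    for photo, labels in photo_labels.items():
        for label in labels:
            index[label].add(photo)
    return index

def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return all photos tagged with the queried label (empty set if none)."""
    return index.get(query, set())
```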

    This is one of those things where gen-AI panic gets people to finally freak out about things big data brokers have been doing for ages (using ML, too).

    • @CrayonRosary
      5 • 9 hours ago

      Every gallery app on Android is doing this.

      FossifyGallery is not doing this.

    • @Kyrgizion
      7 • 17 hours ago

      It’s almost a certainty they’re checking the hashes of your pics against a database of known CSAM hashes as well. Which, in and of itself, isn’t necessarily wrong, but you just know scope creep will mean they’ll be checking for other “controversial” content somewhere down the line…
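
      A minimal sketch of that kind of hash check, assuming exact cryptographic hashes. Real matching systems (e.g. PhotoDNA) use perceptual hashes that survive resizing and re-encoding; this toy version only catches byte-identical files, and the "known hash" entry here is just a placeholder.

```python
import hashlib

# Placeholder database entry (this is the SHA-256 of b"test", standing in
# for a real known-content hash list).
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(image_bytes: bytes) -> bool:
    """True if the file's exact hash appears in the known-hash set."""
    return sha256_of(image_bytes) in KNOWN_HASHES
```

      Note the limitation this sketch makes obvious: flipping a single byte (recompressing, cropping) changes the SHA-256 completely, which is exactly why real systems rely on perceptual hashing instead.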

      • MudMan
        7 • 16 hours ago

        That does not seem to be the case at all, actually. At least according to the GrapheneOS devs. The article quotes them on this and links a source tweet.

        Since… you know, Twitter, here’s the full text:

        Neither this app nor the Google Messages app using it is part of GrapheneOS and neither will be, but GrapheneOS users can choose to install and use both. Google Messages still works without the new app.

        The app doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.

        It’s unfortunate that it’s not open source and released as part of the Android Open Source Project and the models also aren’t open let alone open source. It won’t be available to GrapheneOS users unless they go out of the way to install it.

        We’d have no problem with having local neural network features for users, but they’d have to be open source. We wouldn’t want anything saving state by default. It’d have to be open source to be included as a feature in GrapheneOS though, and none of it has been so it’s not included.

        Google Messages uses this new app to classify messages as spam, malware, nudity, etc. Nudity detection is an optional feature which blurs media detected as having nudity and makes accessing it require going through a dialog.

        Apps have been able to ship local AI models to do classification forever. Most apps do it remotely by sharing content with their servers. Many apps already have client- or server-side detection of spam, malware, scams, nudity, etc.

        Classifying things like this is not the same as trying to detect illegal content and reporting it to a service. That would greatly violate people’s privacy in multiple ways and false positives would still exist. It’s not what this is and it’s not usable for it.

        GrapheneOS has all the standard hardware acceleration support for neural networks but we don’t have anything using it. All of the features they’ve used it for in the Pixel OS are in closed source Google apps. A lot is Pixel exclusive. The features work if people install the apps.
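
        The service-call pattern the tweet describes — an app asks a local model for a label and only marks the content with a warning, nothing is reported upstream — can be sketched roughly like this. All names here (ContentClassifier, Verdict, handle_incoming) are hypothetical stand-ins, not the closed-source Android service's actual API, and the "model" is a trivial stub.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str        # e.g. "spam", "scam", "nudity", "ok"
    confidence: float

class ContentClassifier:
    """Stand-in for the on-device ML service; makes no network calls."""
    def classify(self, content: bytes) -> Verdict:
        # A real implementation would run a neural network here; this stub
        # just pattern-matches to keep the example self-contained.
        if b"wire me money" in content:
            return Verdict("scam", 0.97)
        return Verdict("ok", 0.51)

def handle_incoming(message: bytes, classifier: ContentClassifier) -> str:
    """The app's side: request a local verdict and mark content for the user.

    Nothing is sent to a server and nothing is reported — the only effect
    is a warning shown (or not shown) to the user.
    """
    verdict = classifier.classify(message)
    if verdict.label != "ok":
        return f"warn:{verdict.label}"
    return "show"
```

        The design point the sketch illustrates is the one the tweet makes: classification happens locally and the result only changes what the user sees, which is categorically different from scanning content and reporting matches to a third party.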