Google didn't tell Android users much about Android System SafetyCore before it hit their phones, and people are unhappy. Fortunately, you're not stuck with it.
It’s almost a certainty they’re checking the hashes of your pics against a database of known CSAM hashes as well. Which, in and of itself, isn’t necessarily wrong, but you just know scope creep will mean they’ll be checking for other “controversial” content somewhere down the line…
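(For concreteness, the kind of hash matching being described would look roughly like the sketch below. This is purely hypothetical: the knownHashes.txt database file is made up, and real systems like PhotoDNA use perceptual hashes rather than plain cryptographic ones so that resized or re-encoded copies still match.)

```kotlin
import java.io.File
import java.security.MessageDigest

// Hypothetical illustration only: compute a SHA-256 digest of each image and
// check it against a local set of known-bad hashes. A cryptographic hash like
// this only catches byte-identical files; real matching systems use perceptual
// hashes so near-duplicates still hit.
fun sha256Hex(file: File): String {
    val digest = MessageDigest.getInstance("SHA-256").digest(file.readBytes())
    return digest.joinToString("") { "%02x".format(it) }
}

fun main() {
    // knownHashes.txt is a made-up stand-in for a hash database, one hex digest per line.
    val knownHashes = File("knownHashes.txt").readLines().toHashSet()
    File("Pictures").walk().filter { it.isFile }.forEach { pic ->
        if (sha256Hex(pic) in knownHashes) {
            println("match: ${pic.name}")
        }
    }
}
```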
That does not seem to be the case at all, actually. At least according to the GrapheneOS devs. The article quotes them on this and links a source tweet.
Since… you know, Twitter, here’s the full text:
Neither this app or the Google Messages app using it are part of GrapheneOS and neither will be, but GrapheneOS users can choose to install and use both. Google Messages still works without the new app.
The app doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.
It’s unfortunate that it’s not open source and released as part of the Android Open Source Project and the models also aren’t open let alone open source. It won’t be available to GrapheneOS users unless they go out of the way to install it.
We’d have no problem with having local neural network features for users, but they’d have to be open source. We wouldn’t want anything saving state by default. It’d have to be open source to be included as a feature in GrapheneOS though, and none of it has been so it’s not included.
Google Messages uses this new app to classify messages as spam, malware, nudity, etc. Nudity detection is an optional feature which blurs media detected as having nudity and makes accessing it require going through a dialog.
Apps have been able to ship local AI models to do classification forever. Most apps do it remotely by sharing content with their servers. Many apps already have client- or server-side detection of spam, malware, scams, nudity, etc.
Classifying things like this is not the same as trying to detect illegal content and reporting it to a service. That would greatly violate people’s privacy in multiple ways and false positives would still exist. It’s not what this is and it’s not usable for it.
GrapheneOS has all the standard hardware acceleration support for neural networks but we don’t have anything using it. All of the features they’ve used it for in the Pixel OS are in closed source Google apps. A lot is Pixel exclusive. The features work if people install the apps.
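The “check content locally without sharing it with a service” part is the key distinction. For anyone curious what that pattern looks like in general, here’s a minimal sketch using TensorFlow Lite as a stand-in; SafetyCore’s actual interface isn’t public, so the model file, input shape, and labels below are placeholders, not its real API.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Placeholder labels for illustration; not SafetyCore's actual categories or API.
private val labels = listOf("ok", "spam", "scam", "nudity")

// Runs a bundled model entirely on-device: the content being classified never
// leaves the phone. Creating the interpreter per call is wasteful and only done
// here to keep the sketch self-contained.
fun classifyLocally(pixels: FloatArray): String {
    val interpreter = Interpreter(File("content_classifier.tflite"))
    val output = Array(1) { FloatArray(labels.size) }
    interpreter.run(arrayOf(pixels), output)   // inference happens locally, no network call
    interpreter.close()
    val scores = output[0]
    val best = scores.indices.maxByOrNull { scores[it] } ?: 0
    return labels[best]
}
```

Nothing in that flow makes a network call or reports a result anywhere; the model and the content both stay on the device, which is the distinction the GrapheneOS devs are drawing.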