I know that scanning images for CSAM is kind of dystopian and scary. However, that doesn’t mean we need to open ourselves up to abusive material being sent to us.

What I think we need is publicly available ML models that can be run voluntarily on each device to block CSAM from being shown or stored.
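
Roughly what I have in mind, as a minimal sketch (the model file `nsfw_model.onnx`, the input name/shape, and the threshold are all placeholder assumptions, not a real product):

```python
# Hypothetical sketch of a voluntary on-device filter: score an incoming
# image with a locally run model before the app shows or stores it.
# The model file, input name/shape, and threshold are all assumptions.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("nsfw_model.onnx")  # placeholder local model

def should_block(path: str, threshold: float = 0.9) -> bool:
    # Preprocess to whatever the model expects (224x224 RGB assumed here).
    img = Image.open(path).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32)[None] / 255.0
    score = float(session.run(None, {"input": x})[0].squeeze())  # assumed single score output
    return score >= threshold  # True -> drop the image instead of rendering it
```

The point is that everything runs locally and the user opts in; nothing leaves the device.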

Publicly available models would help, but implementing them could be a slippery slope. If popular encrypted messaging apps start building this feature in, it’s possible it will become illegal to turn it off or to use versions of the app with the scanner removed. That would mean we’d effectively be stuck with a bad egg in our code.

Maybe the best answer is to not give individuals with a questionable history the ability to message you.

Does anyone else have a thought?

  • @WhoRoger · 1 year ago
    Don’t see why not. You can download a database of hashes and compare against it locally. Granted, those hashes aren’t “free”, but that’s due to the legal status of such material. The principle itself - comparing hashes - can be FOSS.
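
    Something like this, as a minimal sketch - assuming a downloaded database of known-bad hashes, one hex digest per line. Real systems use perceptual hashes (PhotoDNA, PDQ) rather than plain SHA-256, since a cryptographic hash breaks on any re-encode; this only shows the matching principle:

    ```python
    import hashlib

    def load_hash_db(path: str) -> set[str]:
        # One lowercase hex digest per line in the downloaded database.
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    def matches_db(image_path: str, db: set[str]) -> bool:
        # Hash the raw file bytes and check membership locally.
        with open(image_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in db  # True -> refuse to display/store the file
    ```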

    Yeah, people can look into the algorithms to see how they work, circumvent them, etc., but that’s no different from… anything else. If someone is motivated enough to distribute the material, they’ll make their own network. FOSS doesn’t make any difference here.