I know that scanning images for CSAM is kind of dystopian and scary. However, that doesn’t mean we need to leave ourselves open to abusive material being sent to us.

What I think we need is a publicly available ML model that can be run on each device, voluntarily, to block CSAM from being shown or stored.
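The idea of an opt-in, on-device gate could look something like the sketch below. This is purely illustrative: `score_image`, `should_block`, and the threshold value are all hypothetical names I've made up, and the actual classifier call is stubbed out where a locally bundled ML model would run.

```python
def score_image(image_bytes: bytes) -> float:
    """Placeholder for on-device ML inference.

    A real implementation would run a locally bundled model here and
    return a probability in [0, 1] that the image is abusive material.
    """
    return 0.0  # stub: a real model call would go here


def should_block(image_bytes: bytes, threshold: float = 0.9) -> bool:
    """Gate run by the messaging app before an incoming image is
    stored or rendered. Returns True if the image should be dropped
    or quarantined instead of displayed."""
    return score_image(image_bytes) >= threshold
```

The key design point is that everything runs locally: the image never leaves the device for scanning, and because the check is opt-in, the user (not a server) decides whether the gate is active.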

Publicly available models would help, but implementing them could be a slippery slope. If popular encrypted messaging apps start shipping this feature built in, it’s possible it will become illegal to turn it off or to use versions of the app with the scanner removed. We would then effectively be stuck with a bad egg in our code.

Maybe the best answer is simply not to let individuals with a questionable history message you.

Does anyone else have a thought?

  • @Cypher
    1 year ago

    The justifications for closed source scanners are slim: even knowing how a scanner works, it would be difficult to alter CSAM to completely avoid detection, and any gaps could quickly be closed.

    We need an open source scanner that can be integrated safely, and with trust, into FOSS.

    This will only happen with government permission as anyone developing this without permission obviously opens themselves up to legal action.

    The FOSS community needs to get governments on side with this, but I don’t know where lobbying would best be started. Perhaps the EU would be most receptive to this approach?

    • Possibly linux (OP)
      1 year ago

      I figured they could just release an ML model that was trained on CSAM internally.