• @[email protected]OP
    link
    fedilink
    91 year ago

    By safe I mean privacy: is there a possibility that someone can “intercept” the photos of the child? Sorry if I didn’t explain it well.

    • @shadowintheday2 · 16 points · 1 year ago

      Interception by a third party is highly unlikely, as the transport layer of basically everything is encrypted nowadays. What remains unknown is what Meta can do once the file is on their servers, as you’ll have to trust Zuckk’s word and Zuckk’s encryption.
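
      A minimal Python sketch of that distinction, using the `cryptography` package's Fernet as a stand-in (WhatsApp actually uses the Signal protocol, so everything here is illustrative only):

      ```python
      # Transport encryption vs end-to-end encryption, in miniature.
      # Illustrative only -- real messengers use the Signal protocol, not Fernet.
      from cryptography.fernet import Fernet

      # Transport-only encryption (think plain TLS): the server terminates the
      # encrypted connection, so it holds a key and can read the plaintext.
      transport_key = Fernet.generate_key()
      server_side = Fernet(transport_key)
      in_transit = server_side.encrypt(b"photo bytes")
      assert server_side.decrypt(in_transit) == b"photo bytes"  # server CAN read it

      # End-to-end encryption: only the two devices hold the key; the server
      # merely relays an opaque blob it cannot decrypt.
      e2e_key = Fernet.generate_key()  # negotiated between sender and recipient
      sender, recipient = Fernet(e2e_key), Fernet(e2e_key)
      blob = sender.encrypt(b"photo bytes")  # all the server ever sees
      assert recipient.decrypt(blob) == b"photo bytes"
      ```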

      • Carighan Maconar · 5 points · 1 year ago

        But the Signal people also say the E2E encryption is trustworthy, no? (WhatsApp’s, I mean.)

        • @[email protected]
          link
          fedilink
          31 year ago

          If Meta really didn’t know your messages and encryption keys, they wouldn’t be able to recover every single one of your messages even if you forgot your password.

          • Carighan Maconar · 2 points · 1 year ago

            Last time I needed that, they could not. They needed either the backup (which is less secure and private, but it’s your choice whether to use it; I think it uploads to Google Drive or similar?) or another still-working device that is linked to the same account.

      • @[email protected]
        link
        fedilink
        11 year ago

        It’s end-to-end encrypted, so they can’t see it in transit. What they could potentially do is access it once it’s on your device and decrypted.

        • @[email protected]
          link
          fedilink
          English
          1
          edit-2
          1 year ago

          Or through the backup, which is unencrypted by default. It goes to Google Drive, and there’s no guarantee that it doesn’t go to Meta.

    • @[email protected]
      link
      fedilink
      15
      edit-2
      1 year ago

      In computer security it always depends on your threat model. WhatsApp is supposed to be end-to-end encrypted, so nobody can intercept your messages. However: once someone flags a message as inappropriate, this gets circumvented and the messages are forwarded to Meta. That is only supposed to happen when something is flagged, so it’s unlikely in a family group. I trust that this actually works the way Meta tells us, though I can’t be sure, because I haven’t dissected the app and it may change in the future. And there is lawful intercept.
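
      A sketch of how such flag-and-forward reporting can coexist with E2E encryption: the crypto is never broken, because the reporting client already holds the plaintext. All names below are hypothetical, not WhatsApp's actual API, and the message count is a guess:

      ```python
      # Reporting doesn't break E2E: the recipient's app already has the
      # decrypted messages and simply uploads them when the user flags a chat.
      def send_to_moderation(payload: dict) -> None:
          print("uploading over ordinary HTTPS:", payload)  # stand-in for a POST

      def report_chat(decrypted_history: list[str], reporter: str) -> None:
          send_to_moderation({
              "reporter": reporter,
              "messages": decrypted_history[-5:],  # plaintext the client already has
          })

      report_chat(["hi", "look at this", "photo.jpg"], reporter="+4915100000000")
      ```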

      Mind that people can download or screenshot messages and forward them or do whatever they like with the pictures.

      And another thing: if you have sync enabled, Google Photos will upload the pictures you take to their cloud servers, so they’ll end up there, and Apple does the same with iCloud. As far as I know, both platforms automatically scan pictures to help fight crime and child exploitation. We aren’t allowed to know how those algorithms work in detail. I doubt a toddler in clothes or wrapped in a blanket will trigger the automated detection; they claim a ‘high level of accuracy’. But people generally advise against taking pictures of unclothed children with a smartphone. Bad incidents have already happened.

      Edit: Apple seems to have pushed for cloud scanning initially, but currently that doesn’t happen any more. They have some on-device filters, as far as I understand.

      • kirklennon · 4 points · 1 year ago

        > As far as I know, both platforms automatically scan pictures to help fight crime and child exploitation.

        Apple doesn’t. They should but they don’t. They came up with a really clever system that would do the actual scanning on your device immediately before uploading to iCloud, so their servers would never need to analyze your photos, but people went insane after they announced the plan.

        • @[email protected]
          link
          fedilink
          4
          edit-2
          1 year ago

          Oh, I didn’t know that. I don’t use Apple products and just read the news; I must have missed how the story turned out, so thanks for the info.

          Technically, I suppose it doesn’t make a huge difference. It still gets scanned by Apple software, and sent to them if it’s deemed suspicious. And the algorithm on a device is probably limited by processing power and energy budget, so it might even be less accurate. But this is just my speculation. I think all of that is more of a marketing stunt: this way the provider reduces cost, since they don’t need additional servers to filter the messages, and in the end it doesn’t really matter where exactly the content is processed if it’s a continuous chain like the Apple ecosystem.

          The last story I linked, about the dad who was incriminated for sending the doctor a picture, would play out the same way regardless.

          Edit: I googled it, and it seems the story with Apple has changed multiple times. The last article I read says they don’t even do on-device scanning any more, just a ‘nude filter’, whatever that is. I’m cautious around cloud services anyway. And all of that might change and also affect old pictures. We only just avoided mandatory content filtering in the EU, and upload filters and the like are debated regularly. Also, the US has updated its laws regarding internet crime and prevention of child exploitation in recent years. I’m generally unsure where we’re headed with this.

          • kirklennon · 1 point · 1 year ago

            The proposal was only for photos stored in iCloud. Apple has a legitimate interest in not hosting abuse material on their servers. The plan was also calibrated for one-in-one-trillion false positives: it would require multiple matches before an account could be flagged, followed by a manual review by an employee before anything was reported to the authorities. It was very carefully designed.
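
            A back-of-the-envelope calculation shows how requiring multiple matches drives the accidental-flagging rate down. The numbers below are invented for illustration, not Apple's published parameters:

            ```python
            # How a match threshold suppresses false positives (invented numbers).
            from math import comb

            p = 1e-6    # assumed chance one innocent photo falsely matches a hash
            n = 10_000  # photos uploaded by a single account
            k = 30      # matches required before the account can be flagged

            # P(at least k false matches out of n), i.e. the binomial tail. The
            # terms shrink so fast that summing a few dozen of them is plenty.
            tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, k + 40))
            print(f"P(innocent account flagged) ~ {tail:.1e}")  # astronomically small
            ```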

            • @[email protected]
              link
              fedilink
              2
              edit-2
              1 year ago

              Do you happen to know a good source for information on this? I don’t want to hijack this discussion, since it’s not that closely related to the original subject… But I’d be interested in more technical information. Most news articles seem to be a bit biased, and I get it: both privacy and protection of children are sensitive topics, and there are feelings involved.

              One in a trillion sounds like the probability of a hash collision. So that would just be checking whether they already have the specific image in their database. It’ll trigger if someone downloaded an already-known image, but it won’t detect new images taken with a camera. I’m somewhat fine with that.
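
              That intuition in code form, roughly. Real systems use perceptual hashes such as Apple's NeuralHash or Microsoft's PhotoDNA, which survive resizing and recompression; plain SHA-256 below only matches bit-identical files, but the principle of matching against a list of known images is the same:

              ```python
              # Sketch of known-image matching against a curated hash database.
              import hashlib

              known_hashes = {"d2a8f7..."}  # placeholder entries, curated elsewhere

              def is_known_image(image_bytes: bytes) -> bool:
                  digest = hashlib.sha256(image_bytes).hexdigest()
                  # A brand-new camera photo yields a digest the database has never
                  # seen, so only previously catalogued images can ever match.
                  return digest in known_hashes
              ```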

              And I was under the impression that iPhones connected to iCloud sync the pictures by default? So “only for photos stored in iCloud” would in practice mean every image you take, unless you deliberately changed the settings on your iPhone?

              • kirklennon · 1 point · 1 year ago

                > Do you happen to know a good source for information on this?

                Apple released detailed whitepapers and information about it when the system was originally proposed, but since they shelved it, I don’t think those are still readily available.

                > One in a trillion sounds like the probability of a hash collision.

                Basically yes, but they’re assuming a much greater likelihood of a single hash collision. The system would upload a receipt of the on-device scan along with each photo, and a threshold number of matches would be set to achieve the one-in-a-trillion confidence level; I believe the initial estimate was roughly 30 images. In other words, you’d need to be uploading literally dozens of CSAM images for your account to get flagged. And these accompanying receipts use advanced cryptography, so it’s not as if they’re seeing “oh, this account has 5 potential matches and this one has 10”; anything below the threshold would show zero flags. Only when enough “bad” receipts showed up for the same account would they collectively flag it.
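
                The threshold property can be sketched with toy Shamir secret sharing: imagine each matching photo's receipt carries one share of a per-account key, and the server can recover that key (and thereby open the match metadata) only once it holds at least the threshold number of shares. Apple's actual construction combined private set intersection with threshold secret sharing and is far more involved; this is just the core idea:

                ```python
                # Toy Shamir secret sharing: below the threshold the server learns
                # nothing; at the threshold the secret pops out. Parameters illustrative.
                import random

                PRIME = 2**127 - 1  # prime field modulus
                K = 5               # threshold (Apple's was reportedly ~30)

                def make_shares(secret: int, n: int, k: int = K):
                    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
                    f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
                    return [(x, f(x)) for x in range(1, n + 1)]

                def reconstruct(shares):
                    # Lagrange interpolation at x = 0 recovers the constant term
                    secret = 0
                    for xi, yi in shares:
                        num = den = 1
                        for xj, _ in shares:
                            if xj != xi:
                                num = num * -xj % PRIME
                                den = den * (xi - xj) % PRIME
                        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
                    return secret

                account_key = 42                         # stands in for the per-account key
                shares = make_shares(account_key, n=10)  # one share per "matching" photo
                print(reconstruct(shares[:K]))      # 42 -- threshold met, key recovered
                print(reconstruct(shares[:K - 1]))  # random junk -- below threshold
                ```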

                > And I was under the impression that iPhones connected to iCloud sync the pictures by default?

                This is for people who use iCloud Photo Library, which you have to turn on.

                • @[email protected]
                  link
                  fedilink
                  1
                  edit-2
                  1 year ago

                  Thank you very much for typing that out for me! This seems to be the first sound solution I’ve read about; I think I would happily deploy something like that on my own (potential) server. I’ll have to think about it and try to dig up more information.

                  Lately I’ve been following the news about EU data retention, and all they come up with are solutions that amount to proper surveillance of every citizen, are a slippery slope, and come with many downsides. The justification is always “won’t somebody please think of the children”, and the proposed solution is to break end-to-end encryption for everyone. They could have just implemented something like this. Okay, I do actually know why they don’t… there is a lobby pushing for general surveillance, and protecting children is just its superficial argument to gain acceptance. So they’re not interested in effective solutions to the specific problem at all; they want something that actually is a slippery slope and can also be used for other purposes later on.

                  Such a hash database would at least detect known illegal content, and it doesn’t even trigger on legal content, for example if someone underage consensually sends nudes to their partner. And having it react only to multiple images makes it less likely that someone can be attacked by being sent a single illegal picture (planted evidence) and instantly getting flagged and raided by the police. In all the real cases I’ve read about, they found hundreds of images on the criminal’s hard disk. And the police have already said they can’t handle loads of false positives: they’re understaffed and overworked, and a solution that generates many false positives would leave them even less time to deal with the actual criminals.

                  So this sounds like a solution that Apple put some thought into. It tackles a lot of the issues that previously were arguments for me to advocate against CSAM filters.

    • @[email protected]
      link
      fedilink
      51 year ago

      No, but they will get the metadata. The image itself should be secure. But then your recipient downloads it, uploads it to Google’s cloud, and so on.
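
      To make “metadata” concrete, here is a hypothetical view of what a relay server can see around an E2E-encrypted photo (the field names are invented, not WhatsApp’s actual wire format):

      ```python
      # The payload is E2E-encrypted; the envelope the server routes by is not.
      message_as_seen_by_server = {
          "sender": "+4915100000000",         # who is talking
          "recipient": "+4916000000000",      # to whom
          "sent_at": "2024-03-01T10:22:31Z",  # and when
          "payload_type": "image",            # that it's a picture at all
          "payload_size": 2_481_152,          # roughly how large
          "payload": b"\x8f\x02\xa1...",      # the only E2E-encrypted part
      }
      ```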

    • Big P · 4 points · 1 year ago

      If someone is able to intercept WhatsApp messages, they aren’t using that ability to look at photos of your baby; they’re using it to spy on government officials.

    • atro_city · 2 points · 1 year ago

      You have to trust that Meta doesn’t do anything with your pictures before they’re sent, and that the person you’re sending them to doesn’t back up their WhatsApp data to Google.

      It’s more secure to use Signal.