Canadian Sikh Facebook users receive notifications that their posts are being taken down because they’re in violation of Indian law

  • @[email protected]
    link
    fedilink
    English
    46
    edit-2
    1 year ago

    So they received a takedown request from the Indian government, mistook the users for being in India, followed the law they're required to follow in India, and when it was brought to their attention that those users were actually based in Canada, they went back and allowed the posts. This doesn't seem as malicious as people are making it out to be. They should probably work on their geo-blocking, but with 3 billion users across 150+ countries, each with its own local laws, it's probably safer to be aggressive about removing content when requested.
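
    "Better geo-blocking" here would basically mean scoping a takedown to the jurisdiction that ordered it instead of hiding the post from everyone. A rough sketch of that idea in Python (the data model, names, and country codes are made up for illustration, not how Facebook actually implements it):

    ```python
    # Hypothetical per-jurisdiction takedown check. The Post/TakedownOrder
    # structure is invented for this example.
    from dataclasses import dataclass, field

    @dataclass
    class TakedownOrder:
        country: str  # jurisdiction that issued the order, e.g. "IN"

    @dataclass
    class Post:
        body: str
        takedowns: list[TakedownOrder] = field(default_factory=list)

    def is_visible(post: Post, viewer_country: str) -> bool:
        """Hide the post only from viewers in jurisdictions that ordered removal."""
        return all(order.country != viewer_country for order in post.takedowns)

    # An order issued under Indian law should not hide the post from a viewer
    # whose request resolves to Canada.
    post = Post(body="example", takedowns=[TakedownOrder(country="IN")])
    print(is_visible(post, "CA"))  # True  - still visible in Canada
    print(is_visible(post, "IN"))  # False - blocked in India
    ```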

    • @blackfire · 3 · 1 year ago

      I think this comes under Hanlon's razor: "Never attribute to malice that which is adequately explained by stupidity."

      • @[email protected]
        link
        fedilink
        English
        11 year ago

        Well, they don't, which is why they're taking down posts as required by the countries they operate in and willing to accept a noticeable false positive rate to do it.

          • @[email protected]
            link
            fedilink
            English
            11 year ago

            and willing to accept a noticeable false positive rate to do it.

            It’d probably help if you fully read the comments you’re replying to lol

              • @[email protected]
                link
                fedilink
                English
                0
                edit-2
                1 year ago

                Your first comment was incredibly vague… I was responding to this part:

                Glad corporations get the power to make these decisions.

                However, a high false positive rate is different from assuming every post is "guilty until proven innocent", and the two aren't mutually exclusive either. A current example would be the automated removal of CSAM on Lemmy: a model was built to remove CSAM, and it has a high rate of false positives. Does that mean it assumes everything is CSAM until it can confirm it isn't? No. It could work that way, but that's an implementation detail I don't know the specifics of; a high false positive rate doesn't necessarily mean it does.
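
                To make the distinction concrete, here's a toy contrast between a threshold classifier that happens to have false positives and an actual "remove unless cleared" policy. The scores and threshold are invented for illustration; this isn't how Lemmy's CSAM tooling actually works:

                ```python
                # Toy contrast: (a) remove only above a confidence threshold vs.
                # (b) remove everything until a review clears it. Purely illustrative.
                THRESHOLD = 0.8

                def threshold_removal(score: float) -> bool:
                    """(a) Remove only when the model is fairly confident; innocent
                    posts with unlucky high scores become the false positives."""
                    return score >= THRESHOLD

                def remove_unless_cleared(reviewed_and_clean: bool) -> bool:
                    """(b) 'Guilty until proven innocent': down until cleared."""
                    return not reviewed_and_clean

                print(threshold_removal(0.85))       # True  - a false positive if the post was innocent
                print(threshold_removal(0.10))       # False - most posts are left alone
                print(remove_unless_cleared(False))  # True  - removed just for being unreviewed
                ```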

                But really, who cares? The false positive rate matters for site usability, sure, but the rest is an implementation detail of an AI model; it isn't a court of law. Nobody's putting you in Facebook prison because they accidentally mistook your post for rule-breaking.