• @chalupapocalypse
    link
    English
    17
    2 months ago

    They would have to hire a shitload of people to police it all, along with the rest of the questionable shit on there, like jailbait or whatever other shit they turned a blind eye to until it showed up on the news

    Not saying it’s right but from a business standpoint it makes sense

    • @brucethemoose
      link
      English
      5
      edit-2
      2 months ago

      Don’t they flag stuff automatically?

      Not sure what they’re using on the backend, but open source LLMs that take image inputs are good now. Like, they can read garbled text from a meme and interpret it with context, easily. And this is apparently a field that’s been refined over years due to the legal need for CSAM detection anyway.

      • @T156
        link
        English
        2
        2 months ago

        They do, but they’d still need someone to go through the flags and check. Reddit gets away with it as it is, like Facebook groups do, by offloading the moderation to users, with the admins only being roped in for ostensibly big things like ban evasion/site-wide bans, or lately, if the moderators don’t toe the company line exactly.

        I doubt that they would use an LLM for that. That’s very expensive and slow, especially at the volume of images they would need to process. Existing CSAM detectors aren’t as expensive and are faster. They basically compute a hash for each image and compare it against a database of known CSAM hashes.
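
        The hash-and-compare approach described above can be sketched roughly like this. Production systems use robust perceptual hashes (e.g. PhotoDNA, whose algorithm is not public); this toy "average hash" over an 8x8 grayscale grid is purely illustrative of the compare-against-known-hashes idea, and all function names here are made up for the sketch:

        ```python
        # Toy sketch of hash-based image matching. NOT a real CSAM detector:
        # real systems use proprietary, far more robust perceptual hashes.

        def average_hash(pixels):
            """pixels: 8x8 grid of 0-255 grayscale values -> 64-bit int.

            Each bit records whether that pixel is brighter than the mean,
            so small changes to the image flip only a few bits.
            """
            flat = [p for row in pixels for p in row]
            avg = sum(flat) / len(flat)
            bits = 0
            for p in flat:
                bits = (bits << 1) | (1 if p > avg else 0)
            return bits

        def hamming(a, b):
            """Number of differing bits between two hashes."""
            return bin(a ^ b).count("1")

        def is_flagged(img_hash, known_hashes, max_distance=5):
            """Flag if the hash is near any hash in the known-bad database."""
            return any(hamming(img_hash, h) <= max_distance for h in known_hashes)
        ```

        The key property is that matching is a cheap nearest-hash lookup, not a model inference, which is why this scales to huge image volumes where an LLM would not.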

        • @brucethemoose
          link
          English
          1
          edit-2
          2 months ago

          Small LLMs are quite fast these days, even the multimodal ones. Same with small models explicitly used to filter diffusion output.

    • @ripcord
      link
      English
      0
      2 months ago

      A shitload of people, like as many as 10!