• @brucethemoose
    link
    English
    247
    edit-2
    2 months ago

    It requires them to restrict certain categories of video, so that users cannot share content on cyberbullying, promoting eating disorders, promotion of self harm or incitement to hatred on a number of grounds.

    Wow, what a horrible, restraining overreach.

    I am shedding tears for the 1.2% engagement loss this would cost Reddit next quarter. Imagine what they have to pay devs for filtering abusive videos!

    (I hate to sound so salty, but it’s mind-boggling that they would fight this so vehemently instead of just… filtering abusive content? Which they already do for anything that actually costs them any profit.)

    • @Lost_My_Mind
      link
      English
      85
      2 months ago

      Well…the problem is reddit’s size.

      I’m not part of reddit anymore because they filtered me out for abusive content.

      The content that was so abusive? I told a story on /r/Cleveland about the time 35 years ago I got my bike stolen.

      I wasn’t accusing any current reddit user of being the thief. But reddit bots flagged me as being abusive to other users.

      We don’t even know if that guy who stole my bike 35 years ago is even still alive, much less an active redditor on /r/Cleveland. So who am I being abusive to, when I say it’s a bad idea to let strangers ride your bike without some kind of assurance you’ll get it back?

      • @GreenKnight23
        link
        English
        94
        2 months ago

        I got banned when I told a literal Nazi, who said that Jews should literally die, that he should drink bleach to purify his genes before he contaminated the gene pool.

        I still stand by it. My grandfather fucked up Nazis, and I’ll fuck up Nazis too.

        • @[email protected]
          link
          fedilink
          English
          12
          2 months ago

          This is a common tactic. I’ve seen people describe the same process many times before.

          1. Nazi says literal Nazi shit.
          2. Person gets baited into responding.
          3. Person gets ban hammer. Nazi does not.
          4. Nazi moves on to next target. Repeat from step 1.

          They usually trot this out when they see a comment or account they want to silence. That’s how the fascists do censorship on reddit.

          It’s happened to me too, and since then I’ve seen people saying the same general thing has happened to them. They must know that Reddit’s content moderators, the “Anti-Evil Operations” or whatever bullshit, are on their side. It’s the only explanation. Probably the Nazis went and got jobs there. Or maybe it’s just that spez is a Nazi himself. Beneath the thin veneer of the default subreddits, Reddit has always been a very right-leaning platform.

      • @brucethemoose
        link
        English
        29
        edit-2
        2 months ago

        Fair. +1

        But also, that just sounds like they’re cheaping out on content filtering. And, you know, it kinda broke the enthusiastic community moderation that made the site great in the first place.

        • @Lost_My_Mind
          link
          English
          20
          2 months ago

          Yes, that’s true. This all happened about 3 weeks after their IPO. I didn’t buy in, because I thought reddit had a decent chance of falling on its ass on the free market. It’s a 10+ year old company that’s never made a profit. It’s reasonable to assume it might fail.

          3 weeks after I declined and they went public, I suddenly got 3 temporary bans in a week, and the 3rd one was a permanent ban. All by autobots.

          • @Dead_or_Alive
            link
            English
            11
            2 months ago

            Yeah, same here. The last post I made was to argue for more disabled access to European historical sites in the r/europe subreddit.

            After everything I’ve posted, THAT is what got me banned.

            After losing my appeal, I changed all my prior posts to AI-generated gibberish.

            Fuck Reddit, salt your posts so they can’t use your content to make money on search or train AI.

            • @Lost_My_Mind
              link
              English
              7
              2 months ago

              I wish I could, but I have hundreds of thousands if not millions of comments.

              Look at my time here, and now look at how many comments I have here, and know that I am running at MAYBE 5% of my posting capacity.

              Lemmy is just barren of content if you don’t care about politics, Linux, or Star Trek.

          • @lemonmelon
            link
            English
              3
              2 months ago

            Decepticons probably wouldn’t be any more permissive, though.

      • @UnderpantsWeevil
        link
        English
          16
          2 months ago

        Well…the problem is reddit’s size.

        They’ve never been shy about targeting certain subs and communities for shutdown when it suits their commercial interests. This has nothing to do with size and everything to do with the nature of the content itself.

        These videos are pure clickbait. They feed engagement. They build up lots of enthusiasm both among content providers and active users. And, as a consequence, they make the company money.

        But reddit bots flagged me as being abusive to other users.

        Bots will flag any post purely based on keyword searches and AI parsing of sentiment. It’s got nothing to do with your actual statement. But it also depends heavily on who you are, where you post, and how often other users flag you. Very possibly you simply got “Report”-flagged a bunch of times by other users for some reason, and that, plus a naive parsing, was all the AI bot needed to know.
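        To illustrate, the kind of heuristic described above can be sketched in a few lines. This is a hypothetical toy, not Reddit’s actual system; the keyword list, threshold, and function names are all invented:

```python
# Toy sketch of naive auto-moderation: keyword matching plus a
# user-report threshold, as described in the comment above.
# All names and thresholds here are hypothetical.

FLAGGED_KEYWORDS = {"stolen", "thief", "abuse"}  # crude keyword list
REPORT_THRESHOLD = 3  # user reports needed before the bot acts


def should_flag(text: str, report_count: int) -> bool:
    """Flag a post if it contains a flagged keyword AND has enough reports."""
    words = set(text.lower().split())
    has_keyword = bool(words & FLAGGED_KEYWORDS)
    return has_keyword and report_count >= REPORT_THRESHOLD
```

        A bike-theft story plus a handful of reports trips this kind of filter regardless of who the statement is actually about, which is exactly the failure mode described above.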

        But I’ll also bet the post wasn’t getting thousands of unique interactions and external visits. If you’d been a power-poster who was posting a face-cam rant rather than a text blob, I suspect you’d have been fine.

      • @rozodru
        link
        English
        13
        2 months ago

        Same with me and /r/Toronto: I got banned for stating that a long-dead prime minister was horrible to Indigenous people. They used the excuse that I was submitting too many articles about crime in the city, since that subreddit’s mods automatically remove any content about crime, or any pro-Palestinian content.

        Post god knows how many photos of the CN Tower, the fucking sun setting, or snow… hey, that’s great! Anything that’s newsworthy and potentially paints the city in a bad light? Nope, censored. It’s so bad that I’m convinced the mods there are being paid under the table by the City.

      • @bulwark
        link
        English
        4
        2 months ago

        I hate Reddit as much as the next guy, but that just sounds like an asshole mod.

        • @Lost_My_Mind
          link
          English
          6
          2 months ago

          In 3 different unrelated subs, and all said to be performed as an automated action?

          Also of note, I got permabanned on May 7th.

          May 4th I joined Mastodon, using the same email as my reddit email.

      • @fluxion
        link
        English
        2
        edit-2
        2 months ago

        I’m gonna have to ask you to stop abusing whatever random reddit mod flagged you back then in case they might be here. Or else.

    • @chalupapocalypse
      link
      English
      17
      2 months ago

      They would have to hire a shitload of people to police it all, along with the rest of the questionable shit on there, like jailbait or whatever else they turned a blind eye to until it showed up on the news.

      Not saying it’s right but from a business standpoint it makes sense

      • @brucethemoose
        link
        English
        5
        edit-2
        2 months ago

        Don’t they flag stuff automatically?

        Not sure what they’re using on the backend, but open source LLMs that take image inputs are good now. Like, they can easily read garbled text from a meme and interpret it with context. And this is apparently a field that’s been refined over years due to the legal need for CSAM detection anyway.

        • @T156
          link
          English
          2
          2 months ago

          They do, but they’d still need someone to go through the flags and check. Reddit gets away with it, like Facebook groups do, by offloading moderation to users, with the admins only being roped in for ostensibly big things like ban evasion and site-wide bans, or lately, if the moderators don’t toe the company line exactly.

          I doubt that they would use an LLM for that. That’s very expensive and slow, especially for the volume of images that they would need to process. Existing CSAM detectors aren’t as expensive, and are faster. They basically compute a hash for the image, and compare it to known hashes for CSAM.
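          The hash-and-compare approach described above can be sketched roughly like this. Note this is a simplified illustration: real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding, while this toy uses exact SHA-256 matching, and the known-hash set here is a placeholder rather than a real database:

```python
# Minimal sketch of hash-based image matching: compute a digest for the
# image and check it against a set of known digests. Real deployments
# source these hashes from organizations like NCMEC; this set is fake.
import hashlib

KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example-banned-image-bytes").hexdigest(),
}


def is_known_match(image_bytes: bytes) -> bool:
    """Return True if the image's hash appears in the known-bad set."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

          Set membership makes each lookup effectively constant-time, which is why this is so much cheaper than running every upload through a multimodal model.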

          • @brucethemoose
            link
            English
            1
            edit-2
            2 months ago

            Small LLMs are quite fast these days, even the multimodal ones. Same with small models explicitly used to filter diffusion output.

      • @ripcord
        link
        English
        0
        2 months ago

        A shitload of people, like as many as 10!

    • @GeneralInterest
      link
      English
      8
      2 months ago

      I hate to sound so salty, but it’s mind-boggling that they would fight this so vehemently, instead of just… filtering abusive content?

      I guess it’s just enshittification. Profits are their first priority.

    • @reddig33
      link
      English
      2
      2 months ago

      I wonder what the investors like Condé Nast/Advance Publications think of this?

    • @[email protected]
      link
      fedilink
      English
      -11
      2 months ago

      I’m a little surprised to hear people so willing to let the government of Ireland determine who they are allowed to hate and for what reasons.

      • @brucethemoose
        link
        English
        14
        edit-2
        2 months ago

        Because at this point, I trust the Irish government more than Reddit to make proper moderation judgements.

        It’s not a high bar, and not ideal… but in my eyes, social media’s raging internal issues have come to overshadow the fear of governments worming their way in.

      • @reliv3
        link
        English
        1
        edit-2
        2 months ago

        You are making a useless point that requires minimal effort or thought. It would be better if you actually shared a tangible concern, rather than a strawman argument meant to stir up irrational fear in people reading your comment.

        For example, you could have shared which group of people you think should be a protected class under Irish law but is not, or which group currently is a protected class and should not be. At least then you would have raised a real concern about how the Irish government determines hate speech; right now, all you are doing is fearmongering.

        • @[email protected]
          link
          fedilink
          English
          2
          2 months ago

          I oppose letting anyone define hate speech as a matter of principle, because even if I agree with the definition completely now, I may not continue to agree with the definition in the future. Look at what has been happening in the USA since the October 7 attack: a lot of people I had considered my political allies turned out to have beliefs I consider to be hateful, and meanwhile these people consider my own beliefs hateful. The solution is not to empower a single central authority to decide which sort of hate is allowed. It is (as it has always been) to maintain the principle of free speech.

          • wanderingmagus
            link
            fedilink
            English
              1
              2 months ago

            How about incitements to violence and outright explicit disinformation/misinformation, like:

            • [group] should be [violent act]
            • [group] are [dehumanizing pejorative] that deserve [violent act]
            • [dogwhistle for the actual Nazis, like the 14 words, terminology specifically referencing the Final Solution, etc]
            • [hard r] are [extreme dehumanizing pejorative] and don’t deserve [human rights]
            • [violent or repulsive act] the [slur]
            • “Despite only making up 13%…”
            • “Whites create and forget, [slur]s copy and remember…”