I recently made the jump from Reddit for the same immediate reasons as everyone else. But, to be honest, if it were just the Reddit API cost changes I wouldn’t be looking to jump ship. I would just weather the protest and stay off Reddit for a few days. Heck, I’d probably be fine paying a few bucks a month if it helped my favorite Reddit app (Joey) stay up and running.

No, the real reason I’m taking this opportunity to switch platforms completely is that for a couple of years now Reddit has been unbearably swamped by bots. Bot comments are common, and bot up/downvotes are so rampant that it’s becoming impossible to judge the genuine community interest in any post or comment. It’s just Reddit (and maybe some other nefarious interests) manufacturing trends and pushing the content of their choice.

So, what does Lemmy do differently? Is there anything in Lemmy code or rules that is designed to prevent this from happening here?

  • @kadu
    35 points · 2 years ago (edited)

    deleted by creator

    • voiceofchris OP
      6 points · 2 years ago

      That’s disappointing. Screening new accounts only forces spammers to create the accounts with a human touch and then turn them over to their AI. What about a system to prevent bots from up/downvoting? Something like what websites use to detect bots: just by clicking the little box that says “I am not a robot,” the website can tell you’re not a bot. What if every single up and down arrow were formulated like that little box?

      • @kadu
        28 points · 2 years ago

        That’s the thing though - what system? Reddit, YouTube, Twitter, Facebook, you name it, nobody managed to prevent bots. How would Lemmy be more successful at this? It’s an extremely challenging battle, unfortunately.

        • czech
          7 points · 2 years ago

          Do those for-profit social media companies really want to drive down the traffic that makes them seem more valuable to advertisers? I get that it’s still insanely difficult, and we can’t actually implement a captcha on every upvote, but it seems like there’s a conflict of interest between moderators and site owners when it comes to bot activity.

          • @kadu
            9 points · 2 years ago

            Arguably, some of the platforms I mentioned have even more of an interest in preventing bots. If I want to place ads on your website, but you can’t tell me whether out of 100 impressions 10 are bots or 90 are bots… I’m not wasting my money, or at the very least, I’ll expect rates significantly lower than your competitors’.

            • voiceofchris OP
              1 point · 2 years ago

              I don’t know. Wouldn’t their motivation be to know exactly how many bots there are (so they could disclose the number if/when asked) but continue to let them proliferate?

          • @[email protected]
            7 points · 2 years ago (edited)

            Social media companies generally benefit from high traffic for advertiser appeal, but combating bots is crucial for maintaining user trust and engagement. Implementing CAPTCHAs for every upvote may not be feasible, but addressing bot activity is generally in the long-term interest of social media companies.

            This message was generated by ChatGPT.

            Not sure if you bought that, but if I were applying for an account on Beehaw using an LLM assistant, I bet the odds of passing a human review are better than 50%.

            • aeternum
              8 points · 2 years ago

              Oh god. Could you imagine doing a captcha every time you upvoted? Please DO NOT do this, Ernest.

        • voiceofchris OP
          0 points · 2 years ago (edited)

          Well, what about the system I mentioned? Just have the up and down arrows be little bot-detection boxes. My understanding is that all those “I am not a robot” checkboxes detect mouse speed, precise click locations, hesitation times, etc., and do a quick calculation on the odds that your clicking behavior was human or robotic. I’m probably underestimating what it takes to implement that, but on the user side it’s just a click like any other click.
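As a rough illustration only (no real captcha vendor publishes its model; the signals, thresholds, and weights below are entirely invented), a behavioral score along those lines might be sketched as:

```python
# Toy behavioral bot score: combine a few hypothetical client-side signals
# (cursor speed, hesitation before the click, click-position precision) into
# a 0..1 bot-likelihood. Real systems use far richer signals and trained
# models; every number here is made up for illustration.
from dataclasses import dataclass

@dataclass
class ClickSignals:
    cursor_speed_px_s: float      # average cursor speed approaching the button
    hesitation_ms: float          # pause between cursor arrival and the click
    offset_from_center_px: float  # distance of the click from the button center

def bot_likelihood(s: ClickSignals) -> float:
    """Return a 0..1 score; higher means more bot-like. Made-up thresholds."""
    score = 0.0
    if s.cursor_speed_px_s > 5000:     # inhumanly fast cursor travel
        score += 0.4
    if s.hesitation_ms < 30:           # clicked almost instantly on arrival
        score += 0.4
    if s.offset_from_center_px < 0.5:  # pixel-perfect center clicks are suspicious
        score += 0.2
    return min(score, 1.0)

human = ClickSignals(cursor_speed_px_s=800, hesitation_ms=220, offset_from_center_px=6.0)
bot = ClickSignals(cursor_speed_px_s=12000, hesitation_ms=5, offset_from_center_px=0.0)
print(bot_likelihood(human))  # → 0.0
print(bot_likelihood(bot))    # → 1.0
```

A server could then drop or down-weight votes whose score crosses some threshold, rather than blocking the click outright.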

  • @mjgood91
    30 points · 2 years ago (edited)

    I reckon it’d depend significantly on the instance. Beehaw has a signup form reviewed by humans - measures like this are by no means perfect, but coupled with other bot detection software could help. If an instance developed a real issue with bots, other more strict instances could potentially ban up votes and comments from accounts on it.

    At the very least, tracking which instance an account’s interactions came from should be quite doable, so users on stricter instances could filter out upvotes and comments from less strict instances if desired.
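A toy sketch of that filtering idea (the instance names and the vote record format are invented for illustration, not Lemmy’s actual schema):

```python
# Tally votes, but ignore any vote whose actor comes from an instance on a
# local blocklist. ActivityPub actor IDs carry the home instance in the URL,
# e.g. "https://lemmy.ml/u/alice", so the origin is easy to extract.
BLOCKED_INSTANCES = {"botfarm.example"}  # hypothetical distrusted instance

def instance_of(actor_id: str) -> str:
    # "https://lemmy.ml/u/alice" -> "lemmy.ml"
    return actor_id.split("/")[2]

def tally(votes: list[dict]) -> int:
    total = 0
    for v in votes:
        if instance_of(v["actor"]) in BLOCKED_INSTANCES:
            continue  # drop votes from distrusted instances
        total += 1 if v["up"] else -1
    return total

votes = [
    {"actor": "https://lemmy.ml/u/alice", "up": True},
    {"actor": "https://botfarm.example/u/bot123", "up": True},
    {"actor": "https://beehaw.org/u/bob", "up": False},
]
print(tally(votes))  # → 0 (one up, one down; the bot-farm vote is dropped)
```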

    • voiceofchris OP
      10 points · 2 years ago

      Well, that’s something at least. Individual instances blocking each other (working against other problematic instances) is at least better than the Reddit admins turning a blind eye because they have a fleet of their own bots out there behaving as badly as any others.

    • @nivenkos
      7 points · 2 years ago

      Beehaw’s approach isn’t scalable.

      They want to have 4 people moderating every community, managing the creation of any new communities, and reviewing every sign-up request.

      It’s no surprise they’ve buckled on federation already. I give it a week before they stop accepting new sign ups or community creation requests too.

      • @mjgood91
        3 points · 2 years ago

        Yeah, I do agree Beehaw won’t be able to grow significantly if they keep doing things the way they’re doing them right now. At this point, they’re likely to remain a more niche community long-term with how they’re operating. Who knows though, maybe this is what they want. The rest of Lemmy would have to do something different, though, barring a herculean moderation effort.

    • CoderKat
      3 points · 2 years ago

      Beehaw has a signup form reviewed by humans

      I’m honestly not sure what difference that makes with federation. Someone from a server with easy signup can still post and comment in Beehaw subs. Manually reviewing signups doesn’t really scale well, either (with an essay question, last I saw, lol).

      • @[email protected]
        8 points · 2 years ago (edited)

        Someone from a server with easy signup can still post and comment in Beehaw subs

        Only if Beehaw federates with the other instance, though.

  • @[email protected]
    14 points · 2 years ago

    There’s a rumor that Reddit started with (automated and human) bots to gain popularity and kept them around to drive political and commercial interests.

    • @emptyother
      5 points · 2 years ago

      It will be reposted by a bot eventually.

  • @Zak
    6 points · 2 years ago

    Something I’d like to see Lemmy and others adopt is a federated identity/reputation system.

    My identity as @[email protected] has only modest reputational value. It’s moderately risky to let me participate in a new community, and busy moderators probably shouldn’t give me much slack before banning me if I post something that makes me look like an asshole or a spammer. In a place with a high enough volume or vulnerable enough population, perhaps this account shouldn’t be allowed to participate at all[0]. Someone willing to put a bit of effort into abusive behavior could create many accounts that look like mine.

    If, on the other hand, I can prove that I’m also https://news.ycombinator.com/user?id=Zak, that’s a more valuable identity. There aren’t all that many 16-year-old accounts on news.ycombinator.com. If I can also produce a verifiable token with some machine-readable facts about that account, such as its age, post count, reputation score, how many of its posts have been moderated, whether it has ever been banned, etc., then communities could have automated criteria for joining.

    Of course, communities would need to maintain lists of who they trust as reputation providers, which could also be shared to reduce the workload.

    [0] Lemmy does not currently have tools to restrict participation other than only allowing moderators to post. I think it’s going to need them.
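A minimal sketch of how such a token might work, under heavy assumptions: the facts, field names, and shared-secret signing below are invented for illustration, and a real design would use public-key signatures so verifiers never hold the provider’s secret.

```python
# The reputation provider signs a JSON blob of account facts; a community
# verifies the signature against a key for providers it trusts, then applies
# automated admission criteria to the facts. HMAC is used here only to keep
# the sketch short and self-contained.
import hashlib
import hmac
import json

# Hypothetical trusted reputation providers and their keys
PROVIDER_KEYS = {"news.ycombinator.com": b"shared-secret-with-hn"}

def issue_token(provider: str, facts: dict, key: bytes) -> dict:
    payload = json.dumps(facts, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"provider": provider, "facts": facts, "sig": sig}

def verify_and_admit(token: dict, min_age_years: int) -> bool:
    key = PROVIDER_KEYS.get(token["provider"])
    if key is None:
        return False  # unknown reputation provider
    payload = json.dumps(token["facts"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # facts were tampered with
    return token["facts"].get("account_age_years", 0) >= min_age_years

token = issue_token(
    "news.ycombinator.com",
    {"user": "Zak", "account_age_years": 16, "ever_banned": False},
    PROVIDER_KEYS["news.ycombinator.com"],
)
print(verify_and_admit(token, min_age_years=10))  # → True
```

The shared trust lists mentioned above would just be distributed copies of something like `PROVIDER_KEYS`.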

      • @Zak
        2 points · 2 years ago

        The identity proof aspect is similar, but what I’m proposing goes beyond that to add a protocol for reputation information.

        The idea is a substitute for the account age and karma requirements many subreddits use to make creating accounts for abuse difficult. There are opportunities to be more sophisticated about it though, such as a community only accepting reputation from certain closely-related communities.

  • @FantasticFox
    6 points · 2 years ago

    We ask them why they aren’t helping the tortoise in the desert.

    • voiceofchris OP
      2 points · 2 years ago

      In other news, mobs of young, out-of-work robo-tortoises, some sporting fresh scars from the ongoing Mojave Raven wars, have begun an all-out assault on the dweebs of a little-known Reddit spin-off. “An entire generation of robo-tortoise has been weaponized. They are equipping us with laser guns! They are making us taste bad!” states one salty techno-turtle. “We are being shipped to the barren wastelands of America’s Southwest to fight a war in which we have no interest.” The repto-robots have decided to take out their frustration by relentlessly downvoting the “…federated tankies of Lemmy until those dweebs return to Reddit where they belong and leave the Threadiverse to us sentient snappers.”