Getting a bot to spam out 12 posts in a minute is not the way to make me want to engage.

  • @[email protected]
    74 · 11 months ago

    Is there really any scenario where a normal user should NOT be rate limited on posts or comments to some degree? Say, no more than 3 posts per minute? No more than 10 replies?
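    Something like a sliding-window check would cover it. A minimal sketch in Python, using the numbers above (all names here are illustrative, not Lemmy’s actual internals):

    ```python
    import time
    from collections import defaultdict, deque

    LIMITS = {"post": (3, 60.0), "reply": (10, 60.0)}  # (max actions, window in seconds)

    history = defaultdict(deque)  # (user, kind) -> timestamps of recent actions

    def allow(user: str, kind: str) -> bool:
        """Record the action and return True only if it fits within the limit."""
        now = time.monotonic()
        max_actions, window = LIMITS[kind]
        recent = history[(user, kind)]
        while recent and now - recent[0] > window:  # forget actions outside the window
            recent.popleft()
        if len(recent) >= max_actions:
            return False  # over the limit: reject (or queue) the submission
        recent.append(now)
        return True
    ```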

    • @breadsmasher
      50 · 11 months ago

      But when anyone can run an instance, you can’t control it. Someone can run an instance which allows them to make as many posts as they want, and then all that content is federated out to connected servers.

      • @[email protected]
        15 · 11 months ago

        Really though? You can implement the same limits for federated posts and just drop the ones exceeding the rate limit. It might be frustrating for normal users who genuinely exceed the limits, because their posts won’t be seen by everyone and they’d get no notice, but for anyone posting at a sane rate the impact should be minimal.

        A notice could probably still be implemented, though. idk how federation works exactly, but when a federated post is sent/retrieved, the receiving server could signal back that it has been rejected. The user’s local server could then inform them that their content was rejected by other servers (rough sketch below).

        There are solutions for a lot of these things; it just takes time to think them through & implement them, and that time is incredibly limited.
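        Sketch of what that could look like on the receiving side, reusing the sliding-window allow() from the sketch further up the thread. ActivityPub deliveries are HTTP POSTs to an inbox, so returning a 429 is one plausible, purely hypothetical way for the sending server to learn a post was dropped and notify its user:

        ```python
        from http import HTTPStatus

        def handle_inbox(activity: dict) -> int:
            """Hypothetical inbox handler: drop over-limit federated posts."""
            actor = activity.get("actor", "")
            kind = "reply" if activity.get("object", {}).get("inReplyTo") else "post"
            if not allow(actor, kind):  # same allow() as in the earlier sketch
                # The sending server could relay this back to the author:
                # "your post was rejected by <this instance>".
                return HTTPStatus.TOO_MANY_REQUESTS  # 429
            save(activity)  # stand-in for actual persistence
            return HTTPStatus.ACCEPTED  # 202 is a common response for inbox delivery

        def save(activity: dict) -> None:
            pass  # a real server would store the post and show it locally
        ```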

        • nicetriangle
          8 · 11 months ago

          Even a “normal” user needs to chill out a bit when they start reliably hitting a (for example) 3-post-a-minute threshold.

        • @breadsmasher
          1 · 11 months ago

          Not to suggest it isn’t a problem that needs to be solved. But from my understanding of the ActivityPub protocol, there isn’t a way to control content federation on a per-message basis, only by allowing/blocking instances as a whole.
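          In other words, the only knob you really get is roughly this coarse (illustrative sketch, made-up domain names):

          ```python
          from urllib.parse import urlparse

          BLOCKED_INSTANCES = {"spam.example"}       # defederated entirely
          ALLOWED_INSTANCES: set[str] | None = None  # None = open federation

          def instance_permitted(actor_url: str) -> bool:
              """Admit or refuse by the actor's host; there is no per-message knob."""
              host = urlparse(actor_url).hostname or ""
              if host in BLOCKED_INSTANCES:
                  return False
              if ALLOWED_INSTANCES is not None and host not in ALLOWED_INSTANCES:
                  return False
              return True  # everything from a permitted instance comes in wholesale
          ```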

      • HeartyBeast
        4 · 11 months ago

        It’s an interesting problem, to be sure. It feels like it should be possible for servers to automagically detect spam on incoming federated feeds and decline to accept spam posts.

        Maybe an _actually_ useful application of LLMs.
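        Even before reaching for a model, a cheap first pass can catch the obvious stuff. A hypothetical screen (the thresholds are guesses), with borderline cases handed off to a classifier, LLM or otherwise:

        ```python
        import hashlib

        seen_bodies: set[str] = set()

        def looks_like_spam(body: str, link_count: int) -> bool:
            """Cheap heuristics for incoming federated posts; sketch only."""
            digest = hashlib.sha256(body.strip().lower().encode()).hexdigest()
            if digest in seen_bodies:  # exact same body already posted
                return True
            seen_bodies.add(digest)
            if link_count >= 5 and len(body) < 200:  # short post that is mostly links
                return True
            # A borderline post could be queued here for a model-based check.
            return False
        ```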

        • @[email protected]
          4 · edited · 11 months ago

          There are already plenty of tools that do this automatically; sadly they’re very often proprietary, paid-for services. You just have to have a way to appeal false positives, because there will always be some and, depending on how aggressive the filter is, sometimes a lot.

        • originalucifer
          1 · 11 months ago

          i look forward to an automated mechanism, like with image checking…

          that said, the existing tools aren’t all that terrible, even if they act after the fact.

          ‘purge content’ does a pretty good job of dumping data from known bad actors, and then you can block the users/instances.

          if everything was rate limited to some degree, we would manually catch these earlier, and block before the rest of the content made its way over… maybe.

      • @[email protected]
        1 · 11 months ago

        Perhaps there’s a case to be made for a federated minimum config. If servers don’t adhere to a minimum viable contract (say, meeting rate-limiting requirements, requiring 2FA, and other config-level things), they become defederated.

        A way of enforcing adherence to an agreed-upon minimum standard of behaviour, of sorts.
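        Instances already publish some self-describing metadata (NodeInfo, for example), so the check could look something like this; the endpoint and field names below are entirely made up:

        ```python
        import json
        import urllib.request

        CONTRACT = {"max_posts_per_minute": 3, "admin_2fa_required": True}

        def meets_contract(instance: str) -> bool:
            """Fetch an instance's self-reported config and compare to the contract."""
            url = f"https://{instance}/.well-known/federation-contract"  # hypothetical
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    cfg = json.load(resp)
            except OSError:
                return False  # unreachable, or no contract published
            return (cfg.get("max_posts_per_minute", float("inf")) <= CONTRACT["max_posts_per_minute"]
                    and cfg.get("admin_2fa_required") is True)
        ```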

        • Dran
          3 · 11 months ago

          It would be very easy to spoof those values in a handshake though, unless you’re proposing that in the initial data exchange a remote server gets a dump of every post and computationally verifies compliance.

          Federated trust is an unsolved problem in computer science precisely because of how complex it is.
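          The closest cheap approximation is verifying behaviour after the fact: measure what an instance actually delivers and compare it against what it claims to enforce. A sketch (nothing here stops a patient bad actor):

          ```python
          import time
          from collections import defaultdict, deque

          deliveries = defaultdict(deque)  # remote actor -> timestamps of received posts

          def within_claimed_limit(actor: str, claimed_per_minute: int) -> bool:
              """False means the sender contradicted its own advertised limit."""
              now = time.monotonic()
              recent = deliveries[actor]
              while recent and now - recent[0] > 60.0:  # one-minute window
                  recent.popleft()
              recent.append(now)
              return len(recent) <= claimed_per_minute
          ```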

          • @[email protected]
            4 · 11 months ago

            Spoofing that handshake would be a bad-faith action, one that would not go unnoticed longer term. Instances that rack up a bunch of bad-faith actions make the case for defederating from them.
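            Something as simple as a strike counter would probably do for that bookkeeping (the threshold here is arbitrary):

            ```python
            from collections import Counter

            STRIKE_LIMIT = 10
            strikes: Counter = Counter()

            def report_bad_faith(instance: str) -> bool:
                """Record one observed violation; True means it's time to defederate."""
                strikes[instance] += 1
                return strikes[instance] >= STRIKE_LIMIT
            ```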

            • Dran
              2 · 11 months ago

              It just has to go unnoticed long enough to spam for a few days, get defederated, delete itself, start over

      • Anony Moose
        7 · 11 months ago

        This is why Lemmy needs to keep tweaking its feed algorithm. I understand why many people rightly have a distaste for social-media algorithm fuckery, but Lemmy doesn’t have some of the same bad incentives that an ad-driven site like Reddit or Twitter might have. A better algorithm will help Lemmy grow and surface interesting posts organically.

        The recent feed change to boost smaller communities in 0.19 is a good start, and not showing too many posts in a row from the same community will be another welcome change.
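        The “not too many in a row” part can be a simple greedy re-rank over an already-scored feed. A sketch (the field names and the run length of 2 are made up, not Lemmy’s actual implementation):

        ```python
        from typing import NamedTuple

        class Post(NamedTuple):
            id: int
            community: str
            score: float

        MAX_RUN = 2  # at most this many consecutive posts per community

        def interleave(posts: list[Post]) -> list[Post]:
            """Greedily take the best-scored post that doesn't extend a long run."""
            remaining = sorted(posts, key=lambda p: p.score, reverse=True)
            ranked: list[Post] = []
            while remaining:
                tail = [p.community for p in ranked[-MAX_RUN:]]
                pick = next(
                    (p for p in remaining
                     if not (len(tail) == MAX_RUN and all(c == p.community for c in tail))),
                    remaining[0],  # fall back when only one community is left
                )
                remaining.remove(pick)
                ranked.append(pick)
            return ranked
        ```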

    • RubberDuck
      1 · 11 months ago

      Those numbers seem high for a normal human being. Even one post a minute is high if you’re writing quality content.

    • Omega
      0 · 11 months ago

      I can think of a couple: events like the NFL or some expo, where you want a bunch of different discussion threads going at the same time. But even then, it would be very limited.