Please. Captcha by default. Email domain filters. Auto-block federation from servers that don’t comply. By default. Urgent.

meme not so funny

And yes, to refute some comments, this publication is being upvoted by bots. A single computer was needed, not “thousands of dollars” spent.

  • @[email protected]
    link
    fedilink
    English
    121 year ago

    I disagree. I think the solution is moderation. Basically, have a set of tools that identify likely bots, and let human moderators make the call.

    If you require admins to manually approve accounts, admins will either automate approvals or stop approving. That’s just how people tend to operate imo. And the more steps you put between people wanting to sign up and actually getting an account, the fewer people you’ll get to actually go through with it.

    So I’m against applications. What we need is better moderation tools. My ideal would be a web of trust: you get more privileges as more trusted people vouch for you. Trust should start from the admins, flow to the mods, and then to regular users.
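As a rough illustration (not anything Lemmy implements today), a web of trust like this can be modeled as trust flowing outward from the admins, each vouch granting one level less than the voucher holds. The names and level counts below are made up:

```python
from collections import deque

def trust_levels(admins, vouches, max_level=3):
    """Compute trust levels. vouches maps a user to the users they vouch for.
    Admins start at max_level; each vouch grants one level less than the
    voucher's own level, and a user keeps the best level offered to them."""
    levels = {a: max_level for a in admins}
    queue = deque(admins)
    while queue:
        user = queue.popleft()
        for trusted in vouches.get(user, ()):
            candidate = levels[user] - 1
            if candidate >= 0 and candidate > levels.get(trusted, -1):
                levels[trusted] = candidate
                queue.append(trusted)
    return levels

# admins -> mods -> regular users, as described above
levels = trust_levels(
    admins={"admin"},
    vouches={"admin": {"mod"}, "mod": {"alice"}, "alice": {"bob"}},
)
```

Here `admin` ends at level 3, `mod` at 2, `alice` at 1, and `bob` at 0; privileges (posting, voting, reporting, modding) could then be gated on level.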

    But lemmy isn’t that sophisticated. Maybe it will be some day, IDK, but it’s the direction I’d like to see things go.

    • @[email protected]
      link
      fedilink
      English
      15
      edit-2
      1 year ago

      HackerNews does something similar: new users can’t downvote until they’ve earned enough upvotes from other users.

      We could extend that and not let upvotes from brand-new accounts properly register. The vote would still show on the comment/post, but the ranking would only be influenced by seasoned users. That way, users could scroll down a thread, see a very highly upvoted comment sitting bang in the middle, and think for themselves: “huh, probably bots”.
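A minimal sketch of that idea, assuming a 30-day account-age cutoff for "seasoned" (the cutoff and the data shapes are illustrative, not Lemmy's actual vote model):

```python
from datetime import datetime, timedelta

# Assumed cutoff: accounts younger than this still display their votes,
# but those votes don't influence ranking.
SEASONED_AFTER = timedelta(days=30)

def tally(votes, now):
    """votes: list of (account_created_at, vote_value) pairs.
    Returns (display_score, ranking_score)."""
    display = sum(v for _, v in votes)
    ranking = sum(v for created, v in votes if now - created >= SEASONED_AFTER)
    return display, ranking

now = datetime(2024, 1, 1)
old = now - timedelta(days=90)   # seasoned account
new = now - timedelta(days=2)    # brand-new (possibly bot) account
# three bot-aged upvotes inflate the displayed count but not the ranking
print(tally([(old, 1), (new, 1), (new, 1), (new, 1)], now))  # (4, 1)
```

The mismatch between the displayed score (4) and the ranking score (1) is exactly the "huh, probably bots" signal described above.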

      Very hierarchical solution, heavily reliant on the mods not playing favourites or having their own agenda.

      • @[email protected]
        link
        fedilink
        English
        41 year ago

        heavily reliant on the mods

        As any solution to this sort of problem should be, IMO.

        If the mods suck, then go make another community. If enough of the mods are good, they can be a huge part of the solution. I’m envisioning this:

        1. users report other users
        2. mods ban users based on reports, if the reports have merit
        3. admins block instances based on reports from mods, if the reports are consistent
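The escalation above could be sketched roughly like this; the report and ban thresholds are invented purely for illustration:

```python
from collections import Counter

# Assumed thresholds (illustrative only)
REPORTS_TO_REVIEW = 3        # reports before a mod reviews a user
BAD_USERS_TO_DEFEDERATE = 5  # banned users before admins block an instance

def users_for_review(reports):
    """Step 1 -> 2: reports is a list of reported usernames like
    'spammer@bad.example'; return users with enough reports for mod review."""
    counts = Counter(reports)
    return [user for user, n in counts.items() if n >= REPORTS_TO_REVIEW]

def instances_to_block(banned_users):
    """Step 3: if bans cluster on one instance, surface it to the admins."""
    counts = Counter(u.split("@", 1)[1] for u in banned_users)
    return [host for host, n in counts.items() if n >= BAD_USERS_TO_DEFEDERATE]
```

The "if the reports have merit" judgment stays with human mods; the code only aggregates, which also keeps the process transparent and auditable.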

        Transparency keeps the mods honest, or at least allows users in the community to name and shame bad mods.

        Some automated tools to help mods out are always welcome.

    • HTTP_404_NotFound · -1 points · 1 year ago

      I disagree. I think the solution is moderation.

      But that is basically agreeing with the context of what I said.

      So I’m against applications.

      I don’t like them either, but, the problem is, I don’t have ANY other tools at my disposal for interacting with and viewing the other users.

      What we need is better moderation tools.

      Just a way to audit user activity and comments would be a big start. I honestly cannot find a method for doing this via the UI; I have to resort to querying the database just to dig up details.

      • @[email protected]
        link
        fedilink
        English
        21 year ago

        Maybe I’ll work on this. I’m trying to get more involved in lemmy-adjacent dev work, so this might be a fun project.

        • HTTP_404_NotFound · 0 points · 1 year ago

          Well, I dug around and just now built a pretty simple WebAssembly GUI for Lemmy.

          It would appear the API is actually missing easy ways to just… query users and actions. However, skipping past the Lemmy API and going directly to the database is easy enough.

          And from there, it’s pretty easy to run each user’s post history through, say, a piece of ML that detects the potential for spam and reports on the highest-risk users.
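A hedged sketch of that pipeline: the SQL table and column names are assumptions about Lemmy's Postgres schema, and the "ML" here is just a keyword heuristic standing in for a real classifier:

```python
# Assumed schema: a `person` table and a `post` table with `creator_id`.
# Run this query with any Postgres client (e.g. psycopg) against the DB,
# then feed the grouped results to the scoring functions below.
QUERY = """
SELECT p.name, po.body
FROM person AS p
JOIN post AS po ON po.creator_id = p.id
"""

# Placeholder for a trained spam model: a crude keyword check.
SPAM_HINTS = ("free crypto", "click here", "limited offer")

def spam_risk(posts):
    """Fraction of a user's posts that contain a spam hint."""
    if not posts:
        return 0.0
    hits = sum(any(h in body.lower() for h in SPAM_HINTS) for body in posts)
    return hits / len(posts)

def highest_risk(history, threshold=0.5):
    """history: dict of user -> list of post bodies.
    Returns users at or above the risk threshold, riskiest first."""
    return sorted(
        (u for u, posts in history.items() if spam_risk(posts) >= threshold),
        key=lambda u: -spam_risk(history[u]),
    )
```

Swapping `spam_risk` for an actual trained model is the only change the rest of the pipeline would need.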

          • @[email protected]
            link
            fedilink
            English
            11 year ago

            I wonder how hard that would be to throw into a separate service? We could probably set up a replica of the main db to run the ML algo against and ship it as an optional add-on service.

            Admins could then set it to update the model on some schedule and to flag likely spammers/scammers.

            Sounds feasible.
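One way the optional add-on service might look, as a sketch: it periodically pulls user histories from the read replica and flags likely spammers for admin review. The fetch, scoring, and reporting hooks are all placeholders to be wired up to the real database and model:

```python
import time

def run_flagging_pass(fetch_histories, score, threshold=0.5):
    """One pass: fetch_histories() -> {user: [post bodies]} (e.g. from the
    replica DB); score(posts) -> risk in [0, 1]. Returns flagged users."""
    risks = {user: score(posts) for user, posts in fetch_histories().items()}
    return {user: r for user, r in risks.items() if r >= threshold}

def main_loop(fetch_histories, score, report, interval_s=3600):
    """Admin-configurable schedule: run a pass, hand flagged users to the
    report hook (e.g. a mod inbox), then sleep until the next pass."""
    while True:
        report(run_flagging_pass(fetch_histories, score))
        time.sleep(interval_s)
```

Running against a replica rather than the live database keeps the scoring workload off the instance itself, which is what makes shipping it as an optional service practical.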

            • HTTP_404_NotFound · 1 point · 1 year ago

              Querying users is actually extremely easy.

              Querying users and posts, then performing sentiment analysis, is also extremely easy.

              (From the database, that is.)

              The API is a bit limited.

              I am messing around with the database right now… and… well, it maintains a LOT of data.