I made a robot moderator. It models trust flow through a network built from voting patterns, and detects users and posts/comments that are accumulating a large amount of “negative trust,” so to speak.
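
Roughly, the idea is something like this (a simplified sketch, not the real implementation; the names, weights, and normalization here are illustrative only):

```python
from collections import defaultdict

def propagate_trust(votes, iterations=20, damping=0.85):
    """votes: list of (voter, target, value) tuples, value +1 or -1.

    Each vote transfers a slice of the voter's current trust to the
    target, so a downvote from a trusted account matters more than a
    pile of downvotes from accounts that are themselves distrusted.
    """
    users = {u for v in votes for u in (v[0], v[1])}
    trust = {u: 1.0 for u in users}  # everyone starts neutral
    for _ in range(iterations):
        incoming = defaultdict(float)
        for voter, target, value in votes:
            # Clamp at zero so distrusted voters carry no weight.
            incoming[target] += value * max(trust[voter], 0.0)
        # Normalize so scores stay bounded across iterations.
        norm = max((abs(s) for s in incoming.values()), default=0.0) or 1.0
        trust = {u: (1 - damping) + damping * incoming[u] / norm
                 for u in users}
    return trust  # strongly negative scores are the candidates to flag
```

The real bot also has to handle things like vote decay and per-comment scoring, but that's the shape of it.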

In its current form, it is supposed to run autonomously. In practice, I have to step in and fix its boo-boos when it makes them, which happens occasionally but not often.

I think it’s working well enough at this point that I’d like to experiment with a mode where it acts as an assistant to an existing moderation team, instead of taking its own actions. I’m thinking of having it auto-report suspect comments instead of autonomously deleting them. There are other modes that could be useful, but that seems like a good place to start. Is anyone interested in trying the experiment in one of your communities? I’m pretty confident at this point that it can ease moderation load without causing many problems.
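
Concretely, the assistant mode would just be a different action at the end of the same pipeline. Something like this (hypothetical names and threshold; the real bot's internals differ):

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()  # current behavior: the bot removes content itself
    ASSISTANT = auto()   # proposed: the bot only files reports for the mod team

def act_on(comment_id: str, trust: float, mode: Mode,
           threshold: float = -0.5) -> str:
    """Choose an action for a comment based on its author's trust score.
    Returns a description of the action, for the audit log."""
    if trust > threshold:
        return f"ignore {comment_id}"
    if mode is Mode.AUTONOMOUS:
        return f"remove {comment_id}"
    return f"report {comment_id} (trust={trust:.2f})"
```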

[email protected]

  • @[email protected]
    -1 · 12 hours ago

    Tbh, I don’t really care to engage in arguments anymore. It never goes anywhere. It’s pretty dumb.

    • @[email protected]OP
      7 · 12 hours ago

It seems like you were perfectly happy to engage in arguments when it was you outputting the argument. At me. When asked to engage in a rational discussion, you bailed, with contempt for the concept.

      Annnnd that’s why you are banned. Like I say, the bot is working.