Humor me for a second. Quick poll. You can just write “Yes” or “No” in the comments.

  • mozzOP
    28 months ago

    I love talking with people who disagree with me. Honestly, I think there’s something wrong with me, given how much I like arguing on the internet.

    But also, I think the presence of so many people who deliberately come in to distort the narrative is a bad thing. For one, it absolutely swings elections: the rise of social media influence campaigns as a factor in elections has been pretty well documented, and it’s not a good thing.

    It’s obviously not easy to tell someone who’s saying X because they believe it from someone who’s saying X because it’s one of 50 accounts they run that are designed to push X as a popular opinion on social media. Every social media platform except Twitter has dedicated teams for this stuff. Lemmy’s admins are already overwhelmed running their instances, and it’s not fair to ask them to also take on this more or less impossible task. But just leaving it alone, letting the shills post 10 times a day about how Biden betrayed us and how, as good Americans, we shouldn’t vote in the election, and leaving it to the users to argue it out with them every single time, every single day, isn’t really the way either.

    I don’t think you can do it by viewpoint; creating an echo chamber is obviously a bunch of crap. I did do some little experiments with a couple of approaches: having an LLM detect certain bad-faith techniques automatically, and mechanically detecting shill accounts through their behavior (a toy sketch of the second idea is below). It’s obviously not a simple or easy thing to do, and I definitely haven’t found something where I can say “yes, this is the way, this works well.”
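
    For a flavor of the mechanical side: one kind of check is flagging accounts that post near-identical text, since one person running 50 accounts tends to reuse material. This is a toy illustration, not the code I actually wrote; the similarity measure and threshold are arbitrary:

    ```python
    # Toy coordinated-posting check: flag pairs of distinct accounts whose
    # posts are near-duplicates. Quadratic scan; fine for a sketch.
    from difflib import SequenceMatcher

    def find_coordinated(posts: list[tuple[str, str]],
                         threshold: float = 0.9) -> set[tuple[str, str]]:
        """posts is a list of (account, text) pairs; returns pairs of
        distinct accounts that posted near-identical text."""
        flagged = set()
        for i, (acct_a, text_a) in enumerate(posts):
            for acct_b, text_b in posts[i + 1:]:
                if acct_a == acct_b:
                    continue  # one account repeating itself is a different problem
                if SequenceMatcher(None, text_a, text_b).ratio() >= threshold:
                    flagged.add(tuple(sorted((acct_a, acct_b))))
        return flagged
    ```

    Run something like that over a day’s worth of posts and it already surfaces the laziest copy-paste operations; the smarter ones obviously need more than this.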

    • @Eheran
      28 months ago

      At this point I somewhat agree with you. It’s the same way emails are filtered, because there is SO much spam (or worse) floating around. But in a forum like this, I think it should be done on a user basis, not a message basis. So if someone constantly posts something and never actually reacts to replies, so there’s no discussion, just dumping, then that should be stopped, regardless of what the actual message is, what the intentions are, or whether it’s a real person or a bot. Randomly dumping messages has hardly any value. But how to detect that is indeed the issue, even with LLMs, since they can be used to counter themselves. Still, they would help against the low-effort posts; a crude user-level check is sketched below.
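
      Detecting the dumping pattern itself might not even need an LLM. Something like this could be a starting point (all the thresholds are made up, just to illustrate the idea):

      ```python
      # Toy "dump detector": high post volume, near-zero follow-up in the
      # user's own threads. Thresholds are placeholders, not tuned values.
      def is_dumping(posts_per_day: float,
                     replies_received: int,
                     replies_given: int,
                     min_volume: float = 10.0) -> bool:
          if posts_per_day < min_volume:
              return False  # low-volume accounts aren't the problem here
          if replies_received == 0:
              return False  # nobody has replied yet, nothing to react to
          follow_up_rate = replies_given / replies_received
          return follow_up_rate < 0.1  # answers fewer than 1 in 10 replies
      ```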

      • mozzOP
        28 months ago

        Yeah. That was more or less the conclusion I came to: it’s too hard for the LLM to follow the flow of conversation well enough to really determine whether someone’s “acting in good faith” or whatever, and it’s way too easy for it to interpret someone making a joke as being serious, that kind of thing. (Or maybe GPT-4 can do it, if you want to pay for API access for every user who wants to post, but I don’t want to do that.)

        But it seems even a cheap LLM is pretty capable of distilling down what people are claiming (or implying as an assumption), whether someone challenged them on it, and whether they then responded substantively, responded combatively, changed the subject, or never responded. That seems to work, and you can do it fairly cheaply; the rough shape of the loop is sketched below. And it means that someone who puts out 50 messages a day (which isn’t hard to do) would then have to answer 50 messages a day coming back asking questions, which is a lot more demanding, and it creates a lot more room for an opinion that doesn’t hold up under examination to get exposed as such. But it wouldn’t weigh in on what the “right answer” is, and it wouldn’t censor anyone who wanted to be there and participate in the discussion.
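
        Stripped way down, the loop looks something like this. The model name and prompt are placeholders, and the real prompts need a lot more care than a one-liner:

        ```python
        # Rough shape of the claim / challenge / response classifier.
        # Placeholder model and prompt; illustrative, not the real pipeline.
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        def classify_response(claim: str, challenge: str,
                              response: str | None) -> str:
            """Returns one of: substantive, combative, subject_change, no_response."""
            if response is None:
                return "no_response"  # never answering is itself the signal
            out = client.chat.completions.create(
                model="gpt-3.5-turbo",  # the "cheap LLM"; any small model would do
                messages=[{
                    "role": "user",
                    "content": (
                        f"Claim: {claim}\n"
                        f"Challenge to the claim: {challenge}\n"
                        f"Author's response to the challenge: {response}\n"
                        "Answer with exactly one word: substantive, combative, "
                        "or subject_change."
                    ),
                }],
            )
            return out.choices[0].message.content.strip().lower()
        ```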

        IDK. Because you asked, I dusted off the code I had from before just now, and I was reminded of how not-happy-with-it-yet I was 🙂. I think there’s a good idea somewhere in there, though.