There’s a new one that suddenly popped up in my feed, but the reports are obviously being “resolved” by the mods of that community. They suggested I block their community, but I won’t, because that is how an instance turns into a cesspit. How do we report disinformation communities directly to the admins?

Edit: the admins did remove the community in question, so I’m going to take that as the official stance on disinformation communities and assume that any community (right wing, left wing, or other) that is intentionally spreading disinformation will be removed. That makes me feel much better about the situation, since this kind of thing is pretty much guaranteed to pop up again.

  • @[email protected]
    29
    1 year ago

    It’s hosted on this instance: https://lemm.ee/c/vaccines

    “All reports calling post here missinformation will be ignored unless the post says that covid vaccines are healthy. Which is dangerous missinformation because covid vaccines kill.”

    • CommunityLinkFixerBotB
      6
      1 year ago

      Hi there! Looks like you linked to a Lemmy community using a URL instead of its name, which doesn’t work well for people on different instances. Try fixing it like this: [email protected]

    • @[email protected]
      -8
      1 year ago

      Given that even Twitter is full of that these days, is a Lemmy community the end of the world? Idiots are going to keep believing their stupid beliefs.

      • @[email protected]
        22
        1 year ago

        End of the world? No.

        By the same token, a few bugs in my house is not the end of the world, but I’d still prefer to have screens on the window and keep a flyswatter handy 😉

        • shootwhatsmyname
          -2
          1 year ago

          The “bugs” you’re referring to are actual people, and “your house” is my house too. We are both anonymous users on a general purpose instance shared with ~15k other people. If you start removing people from our house, and I don’t want you to remove those people, I think it’s fair to have a good-faith conversation about this.

          How do you suggest determining whether or not something is considered disinformation?

          • @[email protected]
            5
            1 year ago

            I’m also not advocating for killing trolls that bother me… so take care not to belabor a quick metaphor.

            The vast majority of disinformation comes in a few key topics related to current hot button political issues and is generally pushed by recognizable sources. It’s not unreasonable to expect admins to check into user reports of disinformation and organized trolling against known sources. I’m not an admin so I’m not going to write up the specific criteria right here and now.

            Choosing not to do so is also a conscious choice to host such content.

            • shootwhatsmyname
              -1
              1 year ago

              Hey, it’s okay to break down a metaphor if I don’t think it’s applicable to the conversation.

              Yes, I totally agree with you. I think admins should review reported content and do some investigation if needed.

              I guess I have a problem with removing users and communities based on someone’s opinion of the content itself. Vote manipulation, brigading, creating multiple accounts to push an agenda, repeated automated posting, and even organized trolling like you mentioned are not direct opinions on the content posted. They are clearly defined and relatively easy to identify. “Disinformation,” “recognizable sources,” and “hot button political issues” are direct opinions about the content or subject of a post or community. They are not clearly defined, and they differ greatly from person to person.

              I asked you to suggest a definition or criteria of disinformation to move us from the “what” to the “how.” Thinking about how this might be regulated practically might help you understand why I think it’s problematic to remove users and communities based solely on someone’s opinion of their content.

              • @[email protected]
                4
                edit-2
                1 year ago

                Believe me, I do understand why it could be considered problematic. My disagreement is with the idea that it’s better to have no policy than an imperfect one, or one that relies on some discretion.

                My point in highlighting that disinformation centers around a few hot button issues is to reinforce that we’re not talking about some nebulous or opinion-driven debate; rather, there are a few key disinformation strategies that take advantage of the “bullshit asymmetry” to poison real discussion. They are easily identified because they are well documented and reported on.

                I’m simply unconvinced by arguments that it’s too hard to identify and nip such malicious communities in the bud. Even less so by arguments that doing so is somehow a slippery slope.