• @sparr
    37
    2 months ago

    I am sad that the current generation of federated social media/networks still doesn’t have much, if any, implementation of web of trust functionality. I believe that’s the only solution to bots/AI/etc content in the future. Show me content from people/accounts/profiles I trust, and accounts they trust, etc. When I see spam or scams or other misbehavior, show me the trust chain connecting me to it so I can sever it at the appropriate level instead of having to block individual accounts. (e.g. “sorry mom, you’ve trusted too many political frauds, I’m going to stop trusting people you trust”)
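The "show me the trust chain connecting me to it" idea above amounts to a shortest-path search over a trust graph. A minimal sketch in Python; all account names and the `trusts` edge map are made up for illustration, not any existing implementation:

```python
from collections import deque

# Hypothetical trust edges: who trusts whom.
trusts = {
    "me": ["mom", "alice"],
    "mom": ["uncle_bob", "frank"],
    "frank": ["spammer"],
    "alice": ["carol"],
}

def trust_chain(start, target):
    """BFS over trust edges; returns the shortest chain linking start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in trusts.get(path[-1], []):
            if nxt in seen:
                continue
            if nxt == target:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no connection at all

print(trust_chain("me", "spammer"))  # ['me', 'mom', 'frank', 'spammer']
```

With the chain in hand, you can sever at whichever level fits: stop trusting `frank`, or stop extending trust through `mom` entirely.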

    • @takeda
      14
      2 months ago

      I think this would be a great feature request: https://github.com/LemmyNet/lemmy/issues

      I would definitely use it if it were implemented. Make it work like it does in GPG, where you can rank users based on how much you trust them, and that trust is then propagated to others.
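For reference, GPG's model counts introducers against thresholds: a key is considered valid if signed by at least one fully trusted or three marginally trusted introducers. A rough sketch of how that could translate to accounts; the trust assignments and vouch data are hypothetical:

```python
# GPG-style validity sketch (simplified): an account counts as valid if
# vouched for by >= 1 fully trusted or >= 3 marginally trusted accounts.
FULL, MARGINAL = "full", "marginal"

my_trust = {"alice": FULL, "bob": MARGINAL, "carol": MARGINAL, "dave": MARGINAL}
vouches = {  # who vouches for whom (hypothetical data)
    "newcomer": ["bob", "carol", "dave"],
    "stranger": ["bob"],
}

def is_valid(account, need_full=1, need_marginal=3):
    voucher_list = vouches.get(account, [])
    full = sum(my_trust.get(v) == FULL for v in voucher_list)
    marginal = sum(my_trust.get(v) == MARGINAL for v in voucher_list)
    return full >= need_full or marginal >= need_marginal

print(is_valid("newcomer"))  # True: three marginal vouches reach the threshold
print(is_valid("stranger"))  # False: one marginal vouch is not enough
```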

      • @[email protected]
        9
        2 months ago

        This concept reminds me of a certain browser extension that marks trans allies and transphobic accounts/websites, using aggregated user votes with thresholds that mark transphobes as red and trans allies as green.

    • @[email protected]
      4
      2 months ago

      I guess the question is how specifically you implement such a system, in this case for software like Lemmy. Should instances have a trust level with each other? Should you set a trust when you subscribe to a community? I’m not sure how you can make a solution that will be simple for users to use (and it needs to be simple for users, we can’t only have tech people on Lemmy).

      • @sparr
        1
        2 months ago

        For the simplest users, my initial idea is just a binary “do you trust them?” for each person (aka “friends”) and each non-person (aka “follow”), and maybe one global binary of “do you trust who they trust?” that defaults to yes. Anything more complex than that can be optional.
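That minimal model could be sketched as a small data structure. This is only an illustration of the binary-trust idea; the field names and the one-hop `trusted_by` check are assumptions, not a proposed Lemmy design:

```python
from dataclasses import dataclass, field

@dataclass
class TrustSettings:
    """Minimal binary-trust model; field names are assumptions for illustration."""
    friends: set = field(default_factory=set)   # people I trust
    follows: set = field(default_factory=set)   # non-person accounts I trust
    extend_trust: bool = True                   # global "trust who they trust?"

    def trusts(self, account, trusted_by=None):
        """trusted_by: set of accounts that already trust `account` (one hop)."""
        if account in self.friends or account in self.follows:
            return True
        if self.extend_trust and trusted_by:
            # extended trust: any of my friends trusting them is enough
            return bool(self.friends & set(trusted_by))
        return False

me = TrustSettings(friends={"mom"}, follows={"news_bot"})
print(me.trusts("mom"))                        # True: direct friend
print(me.trusts("frank", trusted_by={"mom"}))  # True: via mom, extend_trust on
print(me.trusts("frank"))                      # False: no connection
```

Flipping `extend_trust` to `False` is the "sorry mom" lever: direct friends stay trusted, but nobody gets in on their say-so.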

        • @[email protected]
          1
          2 months ago

          But how does this work when you follow communities? Do you need to trust every single poster in a community?

          • @sparr
            2
            2 months ago

            You’d see posts in a community/group/etc based on your trust of the community, unless you’ve explicitly de-trusted the poster or you trust someone who de-trusts them (and you haven’t broken that chain).

            • @[email protected]
              1
              2 months ago

              Right, so if I have no connection to someone else, it’d be “neutral” and I’d see the post. If I trust them transitively, then it would be a trusted post and if I distrust them transitively, it would be a distrusted post.

              I think implementing such a thing would not only be complicated but also quite computationally demanding - I mean you’d need to calculate all of this for every single user?
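The trusted/neutral/distrusted labeling described above could be sketched as a graph search where a distrust edge encountered along the way severs the chain. Names and edge maps here are hypothetical; the naive per-pair search is exactly the cost worry raised, since each lookup walks the graph:

```python
from collections import deque

trust_edges = {"me": {"alice"}, "alice": {"bob"}}      # hypothetical trust graph
distrust_edges = {"me": set(), "alice": {"mallory"}}   # alice de-trusts mallory

def label(viewer, author):
    """Walk trust edges from viewer; a distrust edge hit first severs the chain."""
    seen = {viewer}
    queue = deque([viewer])
    while queue:
        cur = queue.popleft()
        if author in distrust_edges.get(cur, ()):
            return "distrusted"
        for nxt in trust_edges.get(cur, ()):
            if nxt == author:
                return "trusted"
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return "neutral"

print(label("me", "bob"))      # trusted: me -> alice -> bob
print(label("me", "mallory"))  # distrusted: alice de-trusts mallory
print(label("me", "eve"))      # neutral: no connection either way
```

A real system would presumably cache the reachable set per viewer and update it incrementally when an edge changes, rather than recomputing per post.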

    • @[email protected]
      2
      2 months ago

      Yes! Web of trust is the only way. Everything else can be scammed. I am kinda wondering if it could be invite-based, and if severing could be automated for social media. “We just banned a third person who came in on your invitations. Goodbye.”
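The automated-severing idea above could be sketched as a ban counter that walks up the invite chain. All names and the limit of three are illustrative:

```python
from collections import defaultdict

# Hypothetical invite records: each account remembers who invited it.
inviter_of = {"spammer1": "mallory", "spammer2": "mallory", "spammer3": "mallory"}
ban_count = defaultdict(int)
BAN_LIMIT = 3  # a third banned invitee takes the inviter down too

def ban(account):
    """Ban an account; if its inviter reaches the limit, ban the inviter as well."""
    banned = [account]
    inviter = inviter_of.get(account)
    if inviter is not None:
        ban_count[inviter] += 1
        if ban_count[inviter] >= BAN_LIMIT:
            banned += ban(inviter)  # recurse up the invite chain
    return banned

first = ban("spammer1")
second = ban("spammer2")
third = ban("spammer3")
print(third)  # ['spammer3', 'mallory']: third strike severs the inviter
```

The recursion means a ban can cascade several levels up if an inviter's own inviter also hits the limit.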