Everyone, I have something very important to say about The Agora.

The Problem

Let me be super clear about something people don’t seem to understand about Lemmy and the fediverse. Votes mean absolutely nothing. Less than nothing.

In the fediverse, anyone can open an instance and create as many users as they want, so one person can easily vote 10,000 times. I’m serious. This is not hard to do.

Voting at best is a guide to what is entertaining.

As soon as you attach an incentive to votes, the vast majority of them will be fake. They might already be mostly fake.

If you try to make any decision using votes as a guide, someone WILL manipulate votes to control YOU.

One solution (think of others too!)

A council of trusted users.

The admin and top mods could set up a group to decide who to ban and which instances to defederate from. You won’t get it right 100% of the time, but you also won’t be controlled by one guy in his basement running 4 instances and 1,000 alts.

Now I’m gonna go back to shitposting.

  • @[email protected]
    1 year ago

    A public/private key pair is more effective. That’s how “https” sites work: SSL/TLS uses certificates to authenticate who is who. Every site with https has an SSL certificate, which basically contains the site’s public key. During the handshake the site signs data with its private key, and you verify that signature with the public key from the certificate, which proves the data actually came from the key holder. Certificates are granted by a certificate authority, which is basically the identity service you are talking about. Certificates are themselves signed by the certificate authority, so you can tell that someone didn’t just man-in-the-middle you and swap out the certificate, and the site can serve you the certificate directly instead of you needing to go elsewhere to find it.
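
    As a minimal sketch of that sign-and-verify idea (just an illustration in Python with the third-party cryptography package, which is my own choice, not anything this thread prescribes):

    ```python
    # Minimal sketch: sign with a private key, verify with the matching public key.
    # Assumes the third-party "cryptography" package (pip install cryptography).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # kept secret by the site
    public_key = private_key.public_key()        # published via the certificate

    message = b"data the site signs during the handshake"
    signature = private_key.sign(message)        # only the private key holder can produce this

    try:
        public_key.verify(signature, message)    # anyone with the public key can check it
        print("signature checks out: the sender holds the private key")
    except InvalidSignature:
        print("signature invalid: tampered data or a different key")
    ```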

    The problem with this is severalfold. You would need some kind of digital identity organization (or several) to handle sensitive user data. Such an organization would need to:

    1. Be trusted. Trust is the key to having these things work. Certificate authorities are often large companies with a vested interest in having people keep business with them, so they are highly unlikely to mess with people’s data. If you can’t trust the organization, you can’t trust any certificate issued or signed by them.

    2. Be secure. Leaking data or being compromised is completely unacceptable for this type of service.

    3. Know your identity. The ONLY way to be 100% sure that it isn’t someone just making a new account and a new key or certificate (e.g. bots) would be to verify someone’s details through some kind of identification. This is pretty bad for several reasons. First, it puts more data at risk in the event of a security breach. Second, there is the risk of doxxing, or of connecting your real identity to your online identity, should your data be leaked. Third, it could allow impersonation using leaked keys (though I’m sure there’s a way to cryptographically timestamp signatures and then just mark the key as invalid; a rough sketch of that follows after this list). Fourth, you could allow one person to make multiple certificates for various accounts to keep them separately identifiable, but this would also potentially enable making many alts.
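
    Here is a purely illustrative sketch of that “timestamp and revoke” aside; the names and the in-memory revocation list are my own assumptions:

    ```python
    # Illustrative only: signatures carry a signing time, and a revocation record
    # marks when a leaked key stopped being trusted. A real system would need a
    # trusted timestamping service (so a signer cannot backdate) and persistent storage.
    import time

    revoked_at = {}  # key_id (str) -> unix time (float) the key was reported leaked

    def revoke(key_id):
        revoked_at[key_id] = time.time()

    def signature_still_trusted(key_id, signed_at):
        """A signature stays trusted only if it was made before its key was revoked."""
        cutoff = revoked_at.get(key_id)
        return cutoff is None or signed_at < cutoff

    # Example: a signature made before revocation stays valid; a later one does not.
    early_signature_time = time.time() - 60                               # signed a minute ago
    revoke("user-key-123")
    print(signature_still_trusted("user-key-123", early_signature_time))  # True
    print(signature_still_trusted("user-key-123", time.time() + 60))      # False
    ```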

    There may be less aggressive ways of verifying the individual humanness of a user; just preventing bots, as in that third point, may be better. For example: a simple sign-up with questions to weed out bots, which generates an identity (a certificate / key) that you can then add to your account (a rough sketch of that flow is below). That would move the bot target away from the various Lemmy instances and solely onto the certificate authorities. Certificate authorities would probably need to be a small number of trusted sources, because making them “spin up your own” means that anyone could do just that with less pure intentions, or with modified code that lets them impersonate other users as bots. That sucks, because it goes against the fundamental idea that anyone should be able to do it themselves and against the open-source ideology. Additionally, you would need to invest in tools to prevent DDoS attacks and ChatGPT bots.
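
    A rough sketch of that sign-up flow, with hypothetical names (the bot-check question and the raw-bytes “certificate” are my own simplifications, not part of any real proposal):

    ```python
    # Hypothetical sketch: a certificate authority signs a user's public key after a
    # bot check, and a Lemmy instance later verifies that signature before trusting
    # the account. Uses the third-party "cryptography" package; names are illustrative.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    ca_key = Ed25519PrivateKey.generate()        # held only by the certificate authority
    ca_public = ca_key.public_key()              # distributed to every instance

    def passes_bot_check(answers):
        # Stand-in for the "questions to weed out bots" step.
        return answers.get("favourite_fediverse_software") == "lemmy"

    # 1. The user generates a key pair and answers the sign-up questions.
    user_key = Ed25519PrivateKey.generate()
    user_public_bytes = user_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )

    # 2. The CA signs the user's public key only if the bot check passes.
    assert passes_bot_check({"favourite_fediverse_software": "lemmy"})
    identity_cert = ca_key.sign(user_public_bytes)

    # 3. An instance accepts the account only if the CA's signature verifies.
    def instance_accepts(public_bytes, cert):
        try:
            ca_public.verify(cert, public_bytes)
            return True
        except InvalidSignature:
            return False

    print(instance_accepts(user_public_bytes, identity_cert))   # True
    print(instance_accepts(b"some other key", identity_cert))   # False
    ```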

    User authentication authorities most certainly exist; however, it wouldn’t surprise me a bit if there were no suitable drop-in solutions for this. This is in and of itself a fairly difficult project because of the scale needed to start, as well as the effort put into verifying that users are human. It’s also a service that would have to be completely free to be accepted, yet cannot just shut down without blocking further users from signing up. I considered perhaps charging instances a small fee (e.g. $1/mo) once they pass a certain threshold of users, to allow issuing further certificates to their instance, but it’s the kind of thing I think would need to be decoupled from Lemmy to have a chance of surviving through more widespread use.

    • DarkwingDuck
      1 year ago

      What the fuck happened to the internet? What happened to “never share your real name or any identifying information on the internet”?

      “some kind of digital identity organization(s) to be handling sensitive user data”

      Like Equifax? Excuse me if I am a little skeptical of “trusted” organizations handling my data.

    • @[email protected]
      1 year ago

      Interesting idea, but I don’t think it would be practical to verify identities for a global community. If you’ve ever worked in a bar or another business that checks ID (and are from the US), you know how hard it is just to verify the identity of US citizens. For a global community, US and EU users would be the easiest to verify, and citizens of smaller countries would be much harder. How do you handle countries with extremely corrupt governments, where it’s easy to bribe an official for “real” documents for fictitious people?