Fair play regarding the tooling being there, then; I had the impression it wasn’t even possible currently. I guess I’d now wonder how ubiquitous its usage is.
My concern with your second part is that law enforcement would not be able to deal with the issue quickly; in the case of an abandoned relay, it could take a fair few days or weeks before any action is taken. The problem with such illegal content is that in many places even unwittingly having it in your browser cache would put you massively at risk. It needs to be removed, and the user prevented from continuing, as immediately as possible; anything less puts the people using the network at risk. If such a risk exists, it’s going to put most people off (and entirely understandably). I know I avoided browsing lemmy for a fair while when the problem here was still being figured out, and I thankfully never saw anything, but I’m still wary of browsing on my lunch break at work, for example.
Also FWIW, I think Google does scan Gmail and Drive for this stuff, and IIRC all US-based social networks have an obligation to do so as well, but I might not be 100% correct on that.
There is no “delete a user from the internet” button. It doesn’t exist. Even if a single admin could ban a user from the entire network, which would give an immense amount of power to any admin, all that user has to do is make a new account to get around it. That’s true for Nostr, AP, Twitter, Facebook, e-mail, etc. This is why spam exists and will always exist. AP or nostr or whoever isn’t going to solve spam or abuse of online services; the best we can do is mitigate the bulk of it. Relays and instances can share ban lists in nostr or AP, that sharing can be automated, and that is the way to mitigate the problem. There is, however, a “delete a person from society” button we can press, and pressing it is law enforcement’s job. That, conveniently, also deletes them from the internet. It’s just not a button we trust anybody but the government to press. We do have a “delete a user from most of AP/Nostr” button in the form of shared blocklists.
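The automated blocklist sharing described above can be sketched very roughly. To be clear, this is a hypothetical illustration, not any real AP or nostr API: the peer URLs, the JSON shape (a plain array of user IDs), and the idea of a published `blocklist.json` endpoint are all assumptions; real mechanisms (Mastodon domain blocks, nostr mute lists) use different formats and transports.

```python
# Hypothetical sketch: an instance periodically pulls blocklists published
# by admins it trusts and merges them into its local ban set. URLs and
# the JSON format are invented for illustration.
import json
from urllib.request import urlopen

TRUSTED_PEERS = [
    "https://relay-a.example/blocklist.json",
    "https://instance-b.example/blocklist.json",
]

def fetch_blocklist(url: str) -> set[str]:
    """Fetch a peer's published blocklist (assumed: a JSON array of IDs)."""
    with urlopen(url) as resp:
        return set(json.load(resp))

def merged_blocklist(local: set[str], peers: list[str]) -> set[str]:
    """Union the local ban set with every reachable trusted peer's list."""
    result = set(local)
    for url in peers:
        try:
            result |= fetch_blocklist(url)
        except OSError:
            continue  # an unreachable peer shouldn't break the whole sync
    return result
```

Note that a plain union makes trust all-or-nothing: anything a peer lists gets banned locally. A real deployment would likely want per-peer filtering or a review queue instead.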
As we get stronger and stronger anti-spam/anti-abuse measures, we make it harder and harder to join and participate in networks like the internet. This isn’t actually a problem for spammers: they have a financial incentive, so they can pay people to fill out captchas, do SMS verifications, and whatever else they need to do. All we do by increasing the cost to spam is change which kinds of spam are profitable to send. Other abuse of services that isn’t spam has its own intrinsic motivations that may outweigh the cost of making new accounts. At a certain level of anti-spam mitigation, you end up hurting end users more than spammers.

A captcha and e-mail verification blocks something like 90% of spam attempts and is a very small barrier for users, but even that has accessibility implications. Requiring them to receive an SMS? An additional 10%, but now you’ve excluded people who don’t have their own cell phone or who use a VoIP provider. You’ve made it more dangerous for people to use your service to seek help for things like addiction or domestic abuse, as their partner or family member may share the same phone. You’ve made it harder to engage in dissent against the government in authoritarian regimes. You’ve also made it much more difficult to run a relay, since running a relay now requires access to an SMS service, payment for that SMS service, etc. Require them to receive a letter in the mail? An additional 10%, but now you’ve excluded people who don’t have a stable address or mail access. Plus now it takes a week to sign up for your website, and that’s before even getting into apartment numbers and the complications you’d face there.

For a listing to be placed on Google Maps, maybe a letter in the mail is a reasonable hurdle to have; after all, Google only wants to list businesses which have a physical address. For posting to Twitter? It’s pretty ludicrous.
I generally trust relay admins to make moderation decisions, otherwise I wouldn’t be on their instance or relay in the first place. And my trust extends to other admins they work with and share ban lists with. And that’s fine. But remember that any person with any set of motivations can be a relay or instance admin. That person could be the very troll we are trying to stop with these anti-spam and anti-abuse measures. What I don’t trust is any random person on the internet being able to make moderation decisions for the entire internet. Which means that any approach to bans would need to be federated and built on mutual trust between operators.
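One way to encode that mutual trust without handing any single admin network-wide power is to honor a ban only when several trusted peers agree on it. A minimal sketch, assuming each peer publishes a flat list of banned IDs; the threshold value and the list format are made up for illustration:

```python
from collections import Counter

def quorum_bans(peer_lists: list[list[str]], threshold: int = 2) -> set[str]:
    """Return IDs banned by at least `threshold` trusted peers.

    A single malicious or compromised admin can pad their own list,
    but cannot unilaterally get anyone banned network-wide.
    """
    # Count each ID once per peer (set() deduplicates within a list).
    counts = Counter(uid for lst in peer_lists for uid in set(lst))
    return {uid for uid, n in counts.items() if n >= threshold}

# Example: only "mallory" appears on two of the three peers' lists.
peers = [["mallory", "spam-bot"], ["mallory"], ["troll"]]
print(quorum_bans(peers))  # {'mallory'}
```

The trade-off is responsiveness: a quorum scheme reacts more slowly to genuinely urgent content than a straight union of lists, which is exactly the tension with illegal material discussed earlier.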