Lemmy.world is very popular, and one of the largest instances. They do a great job with moderation. There are a lot of positives with lemmy.world.
Recently, over the last month, federation issues have become more and more drastic. Some comments from lemmy.world take days to synchronize with other instances, or never arrive at all.
The current incarnation of ActivityPub as implemented in Lemmy has rate issues with a very popular instance, so lemmy.world is becoming an island. This is bad because it fractures the discussion and encourages more centralization on lemmy.world, which actually weakens the ability of the federated universe to survive a single instance failing or just turning off.
For the time being, I encourage everyone to post to communities hosted on other instances so that the conversation can be consistently accessed by people across the entire Fediverse. I don't think it's necessary to move your user account, because I believe your client posts to the host instance of a community whenever you comment in that community.
Update: other threads about the delays. Great writeup: https://lemmy.world/post/13967373
Other people having the same issue: https://lemmy.world/post/15668306 https://aussie.zone/comment/9155614 https://lemmy.world/post/15654553 https://lemmy.world/post/15634599 https://aussie.zone/comment/9103641
The problem is how Lemmy itself was built.
Decentralization should be a background thing: hosts provide server space, like you would get from a service such as AWS, while the front end is a single website and users never know which server their content is hosted and backed up on.
If the front end is a single website, then it can be taken down, and provides a central weak point.
Not really, if in the background everything is divided over a bunch of servers and backed up by other servers.
It's the only way to solve the centralization of users: take the option away from them and handle it in the background.
Let’s say whoever is running the front end doesn’t like a community and blocks it… How do we prevent that?
You don’t have “somebody running the front end” though, it’s all done by the people providing hosting services.
Think "crypto philosophy as a message board", but instead of having everyone sync the whole history, you split all data randomly in a way that guarantees it is stored on three servers at all times.
Heck, you could also have multiple front ends if you wanted, all pulling and pushing data to the same servers, and that way you could log in from any of them. The front end would only have an influence on UI/UX; in the background the data would always come from the same places, and for this reason the front-end dev wouldn't have the power to block communities.
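For what it's worth, here's a minimal sketch of how that "same data regardless of front end" placement could work, assuming a rendezvous-hash scheme with a replication factor of 3. The server names are placeholders and none of this is how Lemmy actually works:

```python
import hashlib

# Hypothetical pool of storage servers; the names are placeholders.
SERVERS = ["alpha.example", "beta.example", "gamma.example",
           "delta.example", "epsilon.example"]

REPLICAS = 3  # "stored on three servers at all times"

def placement(object_id: str, servers=SERVERS, replicas=REPLICAS):
    """Pick `replicas` servers for an object via rendezvous (HRW) hashing.

    Every front end that runs the same function gets the same answer,
    so none of them has to own the data or keep a routing table.
    """
    def score(server: str) -> int:
        digest = hashlib.sha256(f"{server}:{object_id}".encode()).hexdigest()
        return int(digest, 16)

    # Highest-scoring servers win; adding or removing a server only
    # moves the objects that hashed to it, not everything.
    return sorted(servers, key=score, reverse=True)[:replicas]

print(placement("community/technology/post/12345"))
```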
Ok. So just like Lemmy but communities are spread using some hash table over multiple existing nodes?
Yep, divide everything so no one has real power, and it's the users who decide what they want and don't want on their feed. Allow the hosts to decide if they want to host NSFW content or not (and the users can make the same choice), and make it so the users don't have to decide what instance they register with; their credentials are just stored in the database.
For the front end you then have two ways to deal with it: a single front end where the hosts "vote" on how to run it (crypto style), or the hosts act as a database that anyone can access, which allows anyone to create a front end…
Your second scenario where the hosts decide and the users choose the hosts is what Lemmy is doing now
Not really though; with Lemmy the host and front end are the same. You access Lemmy from hacker talks, I access it from shitjustworks; both our respective instances host our respective data and provide us a UI to access their instance, and they could decide to only give us access to the content that they host, as some instances already do.
What I'm talking about is front-end devs not having to host any of the content themselves: they just access the database hosted by others, show that info in the UI they developed, and push changes to the database when users sign up or post comments.
The front end doesn't have control over where the info is stored and doesn't store anything locally; the back end doesn't have control over who's pulling and pushing data. Hosts can choose to filter out NSFW content from their own servers, but that just means the system won't pick them to host that content and will instead pick servers that don't mind hosting it.
In this way the hosting is the same as any hosting service but completely decentralized, the data is open to all, and no host can wipe it because of the backups (contrary to Lemmy, where if an instance disappears all the content it hosted is gone).
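To make the "pick servers that don't mind hosting it" part concrete, here's a rough sketch where placement simply filters the pool by each host's policy before choosing replicas. The hosts, their policies, and the replica count are invented for illustration, not taken from any real implementation:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    allow_nsfw: bool  # the host-level opt-out described above

# Hypothetical pool; names and policies are made up for illustration.
POOL = [
    Host("alpha.example", allow_nsfw=True),
    Host("beta.example", allow_nsfw=False),
    Host("gamma.example", allow_nsfw=True),
    Host("delta.example", allow_nsfw=True),
]

def place(object_id: str, is_nsfw: bool, replicas: int = 3):
    """Filter the pool by policy first, then pick replicas deterministically."""
    # Hosts that opted out of NSFW simply never get picked for it.
    candidates = [h for h in POOL if h.allow_nsfw or not is_nsfw]

    def score(h: Host) -> int:
        return int(hashlib.sha256(f"{h.name}:{object_id}".encode()).hexdigest(), 16)

    return [h.name for h in sorted(candidates, key=score, reverse=True)[:replicas]]

print(place("post/678", is_nsfw=True))   # only the NSFW-friendly hosts
print(place("post/679", is_nsfw=False))  # any host in the pool
```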
Like RAID 6 or higher. Data is distributed with included redundancy to make up for nodes dropping off.
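As a toy illustration of what that redundancy buys you, here's a single-parity example (so closer to RAID 5 than RAID 6, and not anything Lemmy actually does): any one node can drop off and the missing chunk is rebuilt from the rest.

```python
# Toy single-parity example: three data chunks on three nodes plus one
# parity chunk on a fourth node. Any one node can drop off and the data
# is still recoverable; RAID 6 extends the same idea with a second,
# independent parity block so two nodes can fail at once.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

chunks = [b"comment ", b"text spl", b"it in 3 "]                 # nodes 1-3
parity = xor_bytes(xor_bytes(chunks[0], chunks[1]), chunks[2])   # node 4

# Node 2 drops off the network:
lost = 1
survivors = [c for i, c in enumerate(chunks) if i != lost]

# Rebuild the missing chunk from the surviving chunks plus the parity.
rebuilt = parity
for c in survivors:
    rebuilt = xor_bytes(rebuilt, c)

assert rebuilt == chunks[lost]
print(rebuilt)  # b'text spl'
```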
But then community admins would need to back up their communities, as RAID is not backup.
I just don't know how that would work with the data… restoration, distribution across new nodes, etc. But then… I'm not a dev.
RAID doesn't work in this context, because we're assuming we have antagonistic peers, so central control of any element gives away control of the whole system.
In a redundant array of inexpensive disks, there's the assumption that there's a benevolent administrator organizing everything.
I get that… sort of.
In RAID the admin supplies the disks and creates the pools, and the RAID platform does the rest. Is this really different?
In the analogy, an admin starts a pool of 1, other admins join their nodes to the pool, and the system handles distributing content across the nodes in the pool. No RAID level selection, as the system aims for optimal redundancy.
I just expect this setup to run into similar issues surrounding equitable data and load distribution, as not all nodes will be equal in power, storage capacity, bandwidth, etc., which is something actual RAID arrays shouldn't have to deal with… (see the weighted-placement sketch below).
But it’s cool to think about.
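For what it's worth, one common answer to the unequal-node problem is to weight placement by capacity, e.g. weighted rendezvous hashing, so bigger nodes win proportionally more objects. A minimal sketch with made-up node names and weights (not something Lemmy or any real pool software necessarily does):

```python
import hashlib
import math
from collections import Counter

# Hypothetical nodes with capacity weights (bigger weight = more room).
NODES = {"small.example": 1.0, "medium.example": 4.0, "big.example": 8.0}

def weighted_placement(object_id: str, nodes=NODES, replicas=2):
    """Weighted rendezvous hashing: a node's chance of winning an object
    is proportional to its weight, so capacity differences are absorbed
    by placement instead of overloading the small nodes."""
    def score(name: str) -> float:
        digest = hashlib.sha256(f"{name}:{object_id}".encode()).digest()
        # Map the hash to a float strictly inside (0, 1).
        h = (int.from_bytes(digest, "big") + 1) / (2**256 + 1)
        return -nodes[name] / math.log(h)

    return sorted(nodes, key=score, reverse=True)[:replicas]

# Rough check: the primary placement should land on each node roughly
# in proportion to its weight (about 8/13 of the time for big.example).
tally = Counter(weighted_placement(f"object-{i}", replicas=1)[0]
                for i in range(10_000))
print(tally)
```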
As a DevOps Architect, let me make it simple:
With a single front end, you have a bottleneck. If you have one domain (website) that everybody goes to in order to reach the front end, that domain is the single point of failure.
In my line of work, we use load balancers and sub-domains to divide the work and provide resilience (High Availability), but at the end of the day, if the DNS for that site goes down, we’re down.
Also, as Jet mentioned,
whoever controls the domain (website) controls the content. You can't have multiple groups controlling a single domain. Whoever buys it controls it. If they don't like content, they could easily block access to it. I'm oversimplifying the inner workings, so if you want more details, let me know.
EDIT: subtext called me out on my crap English. I have nobody to blame but myself. English is my first language.
I just want to let you know that “whom” is only ever used as an object. In your sentences, I think you should have used “whoever”.
The easiest way to remember which you should use is to think about the difference between s/he==who and her/him==whom.
She gave the ball to him. Who gave the ball to whom?
She controls the domain. Who controls the domain?
The domain is controlled by him. The domain is controlled by whom?
Updated the comment with your recommendation. Yeah. I suck at writing.
I’m pretty sure most adults suck at writing so you’re no worse than the regular person!
In this case the solution, as I mentioned in other comments, is to make the back end a decentralized database that's accessible to anyone. The people developing a front end don't host the data, and you can use any of the available front ends to connect to your account, since it's not attached to any specific front end (your info is in the database).
Front-end devs would be competing to provide the best UI/UX, but in the end everyone would have access to the same data, and front-end devs couldn't get in the way of it; if they did, people could just go to another website without losing anything.
You could potentially run into issues with data storage reliability:
I understand that these things could still happen, to a similar extent, with the current model of Lemmy, but they are less likely to occur, given that you can choose which instance to join. None of these issues are unsolvable, but this is not a simple "better" alternative; it's more complicated than that.
All this being said, there is a service that I have heard a little bit about that is sort of similar to what you appear to be looking for, called Nostr.