And I guess this question is two parts: 1. Regarding the current Lemmy implementation, and 2. The ActivityPub protocol in general
It’s a great question! To answer it, we’d need to look not just at what the ActivityPub protocol says, but also at exactly how the codebase implements it, and how the server actually performs on the machines it’s deployed on.
We might look specifically at:
- How many requests (and of what types) does a typical end user send to their local instance?
- How many requests (etc.) does an instance send to its peer instances?
- How is that instance-to-instance traffic controlled by the number of subscriptions, posts, or other variables?
- How does instance performance respond to different kinds of request load?
- How have instance operators tuned the Lemmy server or its backends to manage different loads?
Because ActivityPub is built on HTTP, different types of request are expressed as different URL paths and HTTP methods. This should make it straightforward to characterize the servers’ behavior under different kinds of load (e.g. lots of local posts; lots of remote posts; lots of new user accounts; etc.)
Doing this “for serious” as an engineering project would require some amount of testing infrastructure, e.g. the ability to replay various kinds of traffic against a Lemmy server while monitoring its performance.
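As a starting point before building full replay infrastructure, you could profile logged traffic by bucketing requests on HTTP method and path. The log lines and paths below are invented for illustration; real Lemmy endpoints may differ:

```python
from collections import Counter

# Hypothetical ActivityPub-style request lines as they might appear in an
# access log; actual Lemmy paths may differ.
LOG_LINES = [
    "POST /inbox",
    "GET /api/v3/post/list",
    "POST /inbox",
    "GET /c/technology",
    "GET /api/v3/post/list",
]

def workload_profile(lines):
    """Bucket requests by method + first path segment for a rough workload profile."""
    counts = Counter()
    for line in lines:
        method, path = line.split(" ", 1)
        segment = "/" + path.strip("/").split("/")[0] if path != "/" else "/"
        counts[(method, segment)] += 1
    return counts

profile = workload_profile(LOG_LINES)
print(profile)  # e.g. POST /inbox (federation) vs. GET /api (browse traffic)
```

A profile like this separates inbound federation deliveries (POSTs to inboxes) from local browse traffic, which is exactly the distinction the questions above hinge on.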
Speaking of scale only, bigger instances are certainly better. More and smaller instances increase the coordination overhead significantly (remember that your instance saves and serves a copy of any remote post; in the extreme case, every server ends up holding a copy of every other server’s content). Also, the more instances, the more peers each server has to ask for an update.
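A back-of-envelope sketch of that replication overhead: if every subscribed remote instance stores its own copy of each post, total storage grows with the number of instances. All numbers here are made up for illustration:

```python
# Storage replication sketch: each post exists once on its home instance,
# plus one copy on every subscribed remote instance. Figures are invented.

def total_copies(posts: int, subscribed_instances: int) -> int:
    # home copy + one pushed copy per subscribed remote instance
    return posts * (1 + subscribed_instances)

# Same 1,000 posts, two network shapes:
print(total_copies(1000, 50))   # 50 subscribed instances -> 51,000 copies
print(total_copies(1000, 500))  # 500 subscribed instances -> 501,000 copies
```

The point is that splitting the same user base across ten times as many instances multiplies the stored copies roughly tenfold, with no extra content to show for it.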
Many small instances have other benefits though, among them higher resilience and independence.
At the same time, don’t smaller instances generally end up making fewer copies of remote posts? Fewer users means they’ll only be subscribed to, and viewing, a few communities.
Still, all posts of any subscribed-to community (and all of their votes, which is the biggest issue at the moment) need to be pushed to yet another instance. IIRC this is how a feed works.
I believe in the fediverse updates are almost always sent, not requested
Every complex system (and federated systems like Lemmy qualify) has more than one potential bottleneck that can become a problem in different conditions.
- Right now, the common performance bottleneck for Lemmy instances is heavy database reads caused by users browsing. Many of these queries are written inefficiently and can be optimized, and there are things that can be done in Postgres to scale as well. But browse traffic is one kind of workload that can reach its limits, and it gets stressed when lots of users are active on one big instance.
- Federated networks CAN experience federated replication load when there are lots of instances to deliver federation messages to. If I comment on this post, and the server hosting the community has to deliver the comment to (pinky to mouth) one million instances… that’s a different kind of workload and it gets stressed when there are lots of different instances subscribed to a single community.
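The fan-out in that second bullet is easy to put numbers on: every activity (post, comment, vote) on a community must be delivered once to each remote instance with a subscriber. The figures below are purely illustrative:

```python
# Federation delivery sketch: deliveries scale with the number of
# subscribed instances, not the number of users. All numbers invented.

def deliveries(activities: int, subscribed_instances: int) -> int:
    # one delivery per activity per subscribed remote instance
    return activities * subscribed_instances

# Same hypothetical 100,000 activities, two network shapes:
print(deliveries(100_000, 10))      # 10 medium instances -> 1,000,000 deliveries
print(deliveries(100_000, 10_000))  # 10,000 tiny instances -> 1,000,000,000 deliveries
```

This is why the shape of the network matters: the same user base fragmented into single-user instances multiplies the delivery work by orders of magnitude.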
The Goldilocks zone is a medium number of medium-sized instances. Then each federation message can efficiently power browse traffic for a lot of users, and no one instance gets overwhelmed with browse traffic.
In practice, this is not how networks organize. There will be both instances that are “too large” and lots of small instances. Right now, the Lemmy network is small and federation traffic is not a meaningful bottleneck. Browse traffic is, and that’s what the devs are working on. But with time, the limits of both these things can be pushed further out, improving scalability of the network in both directions.
Makes sense. I also think the original post would make more sense if it were talking about the distribution of large communities. For instance, if the largest communities weren’t concentrated on the same 2-4 servers, I think browsing would be a lot smoother. In addition, users who, say, care a lot more about a certain community might choose the less populated server that hosts it as their home instance.
(Just a thought, I have done ZERO work with federation or ActivityPub)
I had a similar curiosity… Like if I make my own instance but it’s just me, is that even a net positive for the network? Now there’s a new instance pulling everything I want to it, rather than another bigger instance that might have shared those subscriptions among its users…
If it is a hindrance on the entire network, it would be interesting to explore the idea of “instance pools” in the future: many smaller instances act as a single instance when it comes to network federation, but still maintain separate data, moderation, and management. I would assume these instances would just have to share the same federation list.
I’m not aware of real-world fediverse examples, but a common approach to this kind of thing in distributed systems in general would be to introduce a federation hub/proxy that relays messages from the main instance, and whose only job is to handle the fan-out load required to turn one stream of federation messages from the server into many streams to all the federated subscribers. There are also approaches that pool multiple such proxies.
Again, no idea how these suggestions apply to ActivityPub, but many systems do this kind of thing and I’d sort of expect it would work with ActivityPub (or could be made to work with some effort once the network is big enough to warrant it).
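To make the proxy idea concrete, here’s a toy sketch: the main instance hands each federation message to a relay once, and the relay takes on the work of delivering it to every subscriber. Everything here is hypothetical (the class, the URLs, the message format); ActivityPub itself says nothing about such a component:

```python
# Toy fan-out relay sketch. Real deliveries would be signed HTTP POSTs
# to each instance's inbox; here they are just recorded in a list.

class FanOutRelay:
    def __init__(self):
        self.subscribers = []   # in reality: inbox URLs of federated instances
        self.delivered = []     # stand-in for actual outgoing HTTP requests

    def subscribe(self, inbox_url: str) -> None:
        self.subscribers.append(inbox_url)

    def relay(self, message: str) -> int:
        # One incoming message becomes len(subscribers) outgoing deliveries,
        # so the origin server only pays for a single send.
        for inbox in self.subscribers:
            self.delivered.append((inbox, message))
        return len(self.subscribers)

relay = FanOutRelay()
for url in ["https://a.example/inbox", "https://b.example/inbox"]:
    relay.subscribe(url)
sent = relay.relay("Create/Note")
print(sent)  # 2
```

The design point is that the origin instance’s outbound work becomes O(1) per message, with the O(N) fan-out pushed into a component that can be scaled independently.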
I haven’t kept the links handy, but the devs have said several times that federation load is not a problem right now; it’s browse traffic setting the db on fire on big instances.
So a single-user instance is fine for now, but someday, as the network grows, there may be enough instances that federation replication becomes a notable source of load.
Right now, I wouldn’t worry about it… you’re on an instance that’s well run and has lots of capacity. That’s the best kind of place for you to be for now.
It’s probably optimal to have hundreds to low thousands of users per instance, but it’s going to depend entirely on how much overlap the users have in the external communities they access. If you have 1,000 users each subscribed to a single, different community with no overlap, then there’s no benefit at all.
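The overlap point can be sketched numerically: an instance receives each remote community’s activity once, however many local users read it, so inbound federation traffic scales with the number of *distinct* subscribed communities, not the user count. The community names and counts below are invented:

```python
# Overlap sketch: inbound federation streams = union of all local users'
# subscriptions, not their sum. All figures are illustrative.

def inbound_streams(user_subscriptions: list[set[str]]) -> int:
    # Distinct communities the instance must receive updates for.
    distinct = set().union(*user_subscriptions)
    return len(distinct)

# 1,000 users all following the same 5 communities: 5 inbound streams.
print(inbound_streams([{"news", "tech", "memes", "music", "art"}] * 1000))  # 5
# 3 users with zero overlap: one stream per user.
print(inbound_streams([{"a"}, {"b"}, {"c"}]))  # 3
```

With heavy overlap, one delivered copy serves a thousand readers; with zero overlap, grouping users onto one instance saves nothing, which is exactly the caveat above.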