As Bluesky begins to open up more and more, it’s felt increasingly pertinent to try to wrap my head around it. To that end, I decided to write out my rough understanding of it from its documentation, in the hopes that it may help others, and that others may in turn correct any misunderstandings of mine.


As Bluesky themselves note, the architecture is laid out across Personal Data Servers (PDSs), Relays, & App Views. The intent is that each of these may be deployed and/or developed independently of Bluesky, with some caveats to each.
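
To make those roles a bit more concrete, here’s a minimal sketch that pokes at one example of each via their public XRPC endpoints. The hostnames are Bluesky’s own deployments (bsky.social as a PDS, bsky.network as their Relay, public.api.bsky.app as their App View), and the response fields reflect my reading of the docs, so treat it as illustrative rather than authoritative:

```typescript
// Rough sketch of the three service roles, using Bluesky-run hosts.

// PDS: holds a user's signed data repository and their account.
const pds = await fetch(
  "https://bsky.social/xrpc/com.atproto.server.describeServer"
).then((r) => r.json());
console.log("Handle domains this PDS offers:", pds.availableUserDomains);

// Relay: crawls many PDSs and rebroadcasts their repos as a single firehose.
const relay = await fetch(
  "https://bsky.network/xrpc/com.atproto.sync.listRepos?limit=3"
).then((r) => r.json());
console.log("A few repos the Relay knows about:", relay.repos);

// App View: indexes the firehose into application-level views (profiles, feeds, threads).
const profile = await fetch(
  "https://public.api.bsky.app/xrpc/app.bsky.actor.getProfile?actor=bsky.app"
).then((r) => r.json());
console.log("A profile as assembled by the App View:", profile.handle);
```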

First & foremost, and somewhat glossed over, is the assumption that ordinary people will have the knowledge or interest to deploy their own Personal Data Servers. From what I’ve seen, the documentation doesn’t really touch on this, despite it being touted as such a major benefit of the architecture.

Second, which is recognized in their documentation, is that due to the high volumes of data involved, there are likely to be few Relays deployed rather than many. See the following:

The federation architecture allows anyone to host a Relay, though it’s a fairly resource-demanding service. In all likelihood, there may be a few large full-network providers, and then a long tail of partial-network providers. Small bespoke Relays could also service tightly or well-defined slices of the network, like a specific new application or a small community.

This inarguably undercuts much of its benefit as a distributed network, given that Relays are what enable much of the transfer of data across it.

It is noted that this may be avoided via direct server-to-server networking, but since that’s mentioned almost as an afterthought, we’ll have to see how it shakes out.
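
For a sense of what that data transfer looks like in practice, here’s a minimal sketch of tapping a Relay’s firehose. It assumes Bluesky’s own relay host and a runtime with a global WebSocket; the frames are DAG-CBOR encoded, so a real consumer would also need a CBOR/CAR decoder (e.g. the @atproto packages), and this only counts raw frames to show the volume involved:

```typescript
// Subscribe to the Relay's repo-event firehose and count incoming frames.
const ws = new WebSocket(
  "wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos"
);

let frames = 0;
ws.onmessage = () => {
  frames += 1;
  if (frames % 1000 === 0) {
    console.log(`${frames} repo events received so far`);
  }
};
ws.onerror = (event) => console.error("firehose error:", event);
```

Even without decoding anything, the frame count climbs quickly, which gives a sense of why running a full-network Relay is so resource-demanding.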

Third, data portability across a distributed network is absolutely an achievement, but it must be scrutinized. Their own language concerning PDSs indicates they expect them to be as prone to ephemerality as existing fediverse instances, see:

We assume that a Personal Data Server may fail at any time, either by going offline in its entirety, or by ceasing service for specific users.

Data portability then is reliant on a few crucial details:
Clear communication of the need to safely store recovery keys and backups.

Retention of recovery keys in some way (people never lose recovery keys, right?).

Device safety/stability to ensure access to your Authenticated Transfer client’s backed-up data, and sufficient storage for said backup (a sketch of what such an export might look like follows below).
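
To make the backup piece concrete, here’s a rough sketch of exporting a full repo from a PDS as a CAR file. The handle and PDS host are placeholders, media blobs would need to be fetched separately, and the recovery key for your DID is a separate concern not covered by this export:

```typescript
// Export an account's signed repository (posts, likes, follows, etc.) as a CAR file.
import { writeFile } from "node:fs/promises";

const handle = "example.bsky.social";  // placeholder handle
const pdsHost = "https://bsky.social"; // placeholder: the PDS hosting that account

// Resolve the handle to its DID...
const { did } = await fetch(
  `${pdsHost}/xrpc/com.atproto.identity.resolveHandle?handle=${handle}`
).then((r) => r.json());

// ...then pull the whole repo as a CAR archive.
const car = await fetch(
  `${pdsHost}/xrpc/com.atproto.sync.getRepo?did=${did}`
).then((r) => r.arrayBuffer());

await writeFile(`${handle}.car`, new Uint8Array(car));
console.log(`Saved ${car.byteLength} bytes of repo data for ${did}`);
```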


From that last quote, note the phrase about PDSs, “…or by ceasing service for specific users”, and then see their documentation on PDS Entryways:

Bluesky runs many PDSs. Each PDS runs as a completely separate service in the network with its own identity. They federate with the rest of the network in the exact same manner that a non-Bluesky PDS would.
[…]
To enable this, we introduced a PDS Entryway service. This service is used to orchestrate account management across Bluesky PDSs and to provide an interface for interacting with bsky.social accounts.

What’s noteworthy here is that, in creating Bluesky Social, they’ve essentially created a model that I foresee others building on the AuthTransfer protocol emulating. Many everyday people won’t be spinning up their own PDSs, in the same way that few people spin up their own fediverse instances. So instead of PDS Entryways, what may emerge are AuthTransfer Entryways/Gateways for whatever variety of apps may eventually be built on the protocol.

Similar to different fediverse platforms, you may then eventually see AuthTransfer platforms that pair Entryway services with an App View, as Bluesky itself is presently doing. Arguably this may leave the AuthTransfer network no more decentralized (they go back & forth on describing their approach as decentralized and distributed) than the ActivityPub network is.


Lastly, regarding custom feeds and composable moderation, there is something on a protocol level here that those using ActivityPub may look to and improve on (and may already be doing so).

In some cruder ways, however, these are already in play on the fediverse. Custom feeds exist here on Lemmy via different communities and instances. More topic-focused instances (on Lemmy as well as other fediverse platforms) in particular can collaboratively produce distinct local and federated/all feeds. To a limited degree, the same may be said of “composable moderation” with community moderation and user/instance blocking.

Mastodon even permits sharing one’s mute/block lists, albeit somewhat clunkily.
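
As a small sketch of that clunkiness, Mastodon does expose your block list over its REST API (an OAuth token with the read:blocks scope is assumed here, and the instance is a placeholder), but actually sharing it with someone else still amounts to exporting a list and having them re-import it:

```typescript
// Fetch the authenticated user's block list and print account addresses
// in a form that could be re-imported elsewhere.
const instance = "https://mastodon.example"; // placeholder instance
const token = process.env.MASTODON_TOKEN;    // placeholder access token

const blocks = await fetch(`${instance}/api/v1/blocks`, {
  headers: { Authorization: `Bearer ${token}` },
}).then((r) => r.json());

console.log(blocks.map((account: { acct: string }) => account.acct).join("\n"));
```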

Altogether, the AuthTransfer protocol definitely makes some interesting improvements, but not without some awkward tradeoffs that Bluesky seems to be talking around instead of speaking about more plainly.


Addendum, as I wasn’t sure if I was about to hit a character limit:
The idea of regular people spinning up a Personal Data Server is already pretty laughable, but it’s accentuated by the idea that they might also go out of their way to pay for a domain name to sort of establish(?) their identity across the AuthTransfer network. Many will likely simply have handles like those around here, e.g. @name.atentryservice.tld.

There’s also a kind of weird disconnect throughout the documentation around the idea that people might want to operate multiple handles/identities for different platforms, or for different purposes on the same platform. A lot of thought seems to have gone into owning/maintaining a singular identity, but not as much into multiple identities.

Fwiw, as I understand it, data portability is possible without a custom domain; what changes is your handle/name. A custom domain only seems necessary if you want to prove and maintain your identity across AuthTransfer services/platforms that permit custom domains in handles. It’s basically a more direct form of the website verification one may find on federated platforms like Mastodon.

Without it, you’d be jumping between different domains in your username/handle, similar to how you do on ActivityPub platforms, e.g. @uniquename.bsky.social -> @uniquename.otherATprotoplatform.tld.
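
To make that verification mechanism concrete, here’s a rough sketch of how a custom-domain handle resolves. The domain is a placeholder, and a did:plc identity resolvable through plc.directory is assumed; the handle is just a domain that points at your DID, and the DID document points back at whichever PDS currently hosts the account:

```typescript
// Resolve a custom-domain handle to its DID, then look up the DID document.
const handle = "example.com"; // placeholder custom-domain handle

// Option A: the domain serves its DID over HTTPS.
// (Option B is a DNS TXT record at _atproto.example.com containing "did=did:plc:...")
const did = await fetch(`https://${handle}/.well-known/atproto-did`)
  .then((r) => r.text())
  .then((t) => t.trim());

// The DID document names the handle and the PDS currently hosting the account,
// which is what lets the account move PDSs without changing its handle.
const didDoc = await fetch(`https://plc.directory/${did}`).then((r) => r.json());
console.log("alsoKnownAs:", didDoc.alsoKnownAs); // e.g. ["at://example.com"]
console.log(
  "current PDS:",
  didDoc.service?.find((s: any) => s.id === "#atproto_pds")?.serviceEndpoint
);
```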