
Okay well. I work on Bluesky and helped build the AT Protocol. I'm sorry Sam differs with us on this, and I'm glad that Activity Pub is already there for him. However, Sam doesn't understand the ATProto very well and I want to clear it up a bit.

Before I do, let me just say: Bluesky and the AT Proto are in beta. The stuff that seems incomplete or poorly documented is incomplete and poorly documented. Everything has moved enormously faster than we expected it to. We have around 65k users on the beta server right now. We _thought_ that this would be a quiet, stealthy beta for us while we finished the technology and the client. We've instead gotten a ton of attention, and while that's wonderful it means that we're getting kind of bowled over. So I apologize for the things that aren't there yet. I haven't really rested in over a month.

ATProto doesn't use crypto in the coin sense. It uses cryptography. The underlying premise is actually pretty similar to git. Every user runs a data repository where commits to the repository are signed. The data repositories are synced between nodes to exchange data, and interactions are committed as records to the repositories.

The purpose of the data repository is to create a clear assertion of the user's records that can be gossiped and cached across the network. We sign the records so that authenticity can be determined without polling the home server, and we use a repository structure rather than signing individual records so that we can establish whether a record has been deleted (signature revocation).
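
To make that concrete, here's a toy sketch of the signed-repo idea in TypeScript. It's illustrative only -- the hashing, the commit shape, and the names are stand-ins, not the actual ATProto data model or wire format:

    // Toy model of a signed data repository (not the real ATProto structures).
    import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

    type RepoRecord = { path: string; value: unknown };
    type Commit = { root: string; prev: string | null; sig: string };

    const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

    // The signing key stays with the user's repo host; consumers only need the public key.
    const { publicKey, privateKey } = generateKeyPairSync("ed25519");

    function commitRepo(records: RepoRecord[], prev: Commit | null): Commit {
      // Stand-in for the real repo's root hash over the current record set.
      const root = sha256(JSON.stringify(records.map(r => [r.path, r.value])));
      const sig = sign(null, Buffer.from(root), privateKey).toString("base64");
      return { root, prev: prev ? prev.root : null, sig };
    }

    // Anyone holding a copy of the repo plus the public key can check authenticity
    // without asking the home server.
    function verifyCommit(c: Commit): boolean {
      return verify(null, Buffer.from(c.root), publicKey, Buffer.from(c.sig, "base64"));
    }

Deletion falls out of the repository structure: a record that is no longer reachable from the latest signed root is gone, so you don't need a separate revocation channel to prove it.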

Repositories are pulled through replication streams. We chose not to push events to home servers because you can easily overwhelm a home server with a lot of burst load when some content goes viral, which in turn makes self-hosting too expensive. If a home server wants to crawl & pull records or repositories it can, and there's a very sensible model for doing so based on its users' social graph. However the general goal is to create a global network that aggregates activity (such as likes) across the entire network, and so we use large-scale aggregation services to provide that aggregated firehose. Unless somebody solves federated queries with sufficient performance, any network that's trying to give a global view is going to need similar large indexes. If you don't want a global view that's fine; then you want a different product experience, and you can do that with ATProto. You can also use a different global indexer than the one we provide, same as with search engines.

The schema is a well-defined machine language which translates to static types and runtime validation through code generation. It helps us maintain correctness when coordinating across multiple servers that span orgs, and any protocol that doesn't have one is informally speccing its logic across multiple codebases and non-machine-readable specs. The schema helps the system with extensibility and correctness, and if there was something off the shelf that met all our needs we would've used it.
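
As a rough illustration of what the codegen buys you -- note this is a toy, not real Lexicon syntax:

    // Hypothetical schema definition; field names only loosely echo Lexicon.
    const postSchema = {
      lexicon: 1,
      id: "com.example.feed.post", // made-up NSID for illustration
      type: "record",
      properties: {
        text: { type: "string", maxLength: 300 },
        createdAt: { type: "string" },
      },
      required: ["text", "createdAt"],
    } as const;

    // What code generation would emit: a static type...
    interface Post {
      text: string;
      createdAt: string;
    }

    // ...and a runtime validator, so every org's server enforces the same rules
    // instead of re-implementing them informally from prose specs.
    function validatePost(v: unknown): v is Post {
      if (typeof v !== "object" || v === null) return false;
      const o = v as Partial<Post>;
      return typeof o.text === "string" &&
        o.text.length <= 300 &&
        typeof o.createdAt === "string";
    }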

The DID system uses the recovery key to move from one server to another without coordinating with the server (i.e. because it suddenly disappeared). It supports key rotations and it enables very low-friction moves between servers without any loss of past activity or data. That design is why we felt comfortable just defaulting to our hosting service: because we made it easy to switch away after the fact if/when you learn there's a better option. Given that the number one gripe about ActivityPub's onboarding is server selection, I think we made the right call.
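
For a sense of what that looks like in practice, here's a simplified sketch of the kind of document a DID resolves to (values are placeholders and real documents carry more detail; the rotation/recovery keys themselves aren't shown here):

    // Simplified, illustrative DID document shape.
    const didDocument = {
      id: "did:plc:abc123",                    // stable identifier (placeholder)
      alsoKnownAs: ["at://alice.example.com"], // current handle
      verificationMethod: [
        { id: "#atproto", type: "Multikey", publicKeyMultibase: "zQ3s..." },
      ],
      service: [
        {
          id: "#atproto_pds",
          type: "AtprotoPersonalDataServer",
          serviceEndpoint: "https://pds.example.com", // swap this to move hosts
        },
      ],
    };

Because the current host is just a service endpoint in that document, switching servers is an update to the document rather than a migration of your identity.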

We'll keep writing about what we're doing and I hope we change some minds over time. The team has put a lot of thought into the work, and we really don't want to fight with other projects that have a similar mission.




Reading this I hear someone passionate about technology for the sake of technology. Which is cool, I totally get the desire to build things oneself, but it doesn't really address the substantive questions people are asking about AT:

An open protocol exists that broadly does what you want to do. That protocol is stable and widely used. That in itself, regardless of the quality of the protocol, already represents an OKish argument to strongly consider using it. If you're going to go NIH your replacement needs to not just be better but substantially better, and you should also show understanding of the original open spec.

So far, Bluesky's already been caught redoing little things in ways that show a lack of reading/understanding of open specs (.well-known domain verification), and the proposed technological improvements over ActivityPub fall into three categories:

1. Something ActivityPub already supports as a "SHOULD" or a "MAY": there's arguments to be made these would be better as "MUST", but either way there's no reason AT couldn't've just implemented ActivityPub with these added features.

2. Highly debatable improvements - as highlighted by this article. I do think some of this article is hyperbole but it certainly highlights that some of the proposed benefits are not clear cut.

3. Such a minor improvement as to be nowhere near worth the incompatibility.

All that coupled with the continuous qualifiers of it being incomplete/beta/WIP when there's a mature alternative really just doesn't present well.


I run a single-user ActivityPub instance with a minimal following and a small number of people across multiple instances that I follow. From a user perspective ActivityPub is fine; I have no complaints.

However from an Ops perspective ActivityPub is incredibly chatty. If this had to scale to a larger instance the costs would spiral fast. Operationally and cost-efficiency-wise, ATProto is a better looking protocol already. For a single individual user this won't necessarily be obvious right off the bat. But it will tend to manifest in either overworked operations people or slow, janky instance performance.

While it's certainly a reasonable question whether the world needs another federated social protocol or not, ATProto definitely solves real problems with the ActivityPub protocol.


Being technically better is usually not a good enough reason to be incompatible. I'm not sure why people don't get this, but it is almost always true.

Starting from scratch, just because you can theoretically design a better system, is one of the worst things to do to users. Theoretically better also rarely wins in the marketplace anyway.

If you want a slightly lighter position: Software needs to be built to be migrated from and to, as a base-level requirement. Those communities that do this tend to end up with happy users who don't spend lots of toil trying to keep up. This is true regardless of whether the new thing is compatible with the old - if it doesn't take any energy or time, and just works, people care less about what you change.

Yes, it is hard. Yes, there are choices you make sometimes that burn you down the road. That is 100% guaranteed to happen. So make software that enables change to occur.

The overall amount of developer toil and waste created en masse by people who think they are making something "better", usually with no before/after data (or at best, small useless metrics sampled from a very small group), almost always vastly dwarfs all improvement that occurs as a result.

If you want to spend time helping developers/users, then understand where they spend their time, not where you spend your time.


Something that I think your analysis is missing is that with a decentralized product, ops is also user experience.

The whole point is that it needs to be reasonably easy for people to run and scale their own servers. If people are constantly being burned out, quit, or run out of money, then it has an impact on regular users.


Assuming you have meaningful data that shows that, instead of handwavy conjecture, sure.

I don't think most projects do ;)


What I’m saying is that server admins should be considered users as well. That’s not something that needs to be driven by data.


I wholeheartedly agree.

From a purely technical analysis ATProto looks better as a protocol to me. But I don't use Bluesky; I use ActivityPub, because the people I want to be connected to are there and not on Bluesky. I do think you could probably make improvements to ActivityPub that reduce operational costs. It's not something I feel the need to tackle right this moment because my usage doesn't really incur those costs.


I think this is mostly silly. Barely anyone uses ActivityPub. If BlueSky only moderately catches on and 100M people start using it, the total number of ActivityPub users would be a rounding error.

> If you want to spend time helping developers/users, then understand where they spend their time, not where you spend your time.

You're spending your time with ActivityPub, so this advice should apply to you, too. The bulk of potential users are spending their time on anything but ActivityPub. And as for developers, one of course needs to attract them, but I hear the people behind BlueSky have a couple of bucks, the ambition to create a huge potential new market, and a track record of creating a couple of things. I don't think they'll have trouble finding developers.

If BlueSky comes up with a better architecture, ActivityPub clients should rebase. BlueSky should pretend they don't exist, except if they have some nice schemas or solved some problem efficiently, try to maintain compatibility with that unless there's even the slightest reason to deviate.

Didn't ActivityPub have enough of a head start? Why didn't ActivityPub just use Diaspora? Why prioritize ActivityPub over OStatus?


"I think this is mostly silly. Barely anyone uses ActivityPub. If BlueSky only moderately catches on and 100M people start using it, the total number of ActivityPub users would be a rounding error."

Says everyone everywhere who thinks they made something better!

"If BlueSky comes up with a better architecture, ActivityPub clients should rebase. BlueSky should pretend they don't exist, except if they have some nice schemas or solved some problem efficiently, try to maintain compatibility with that unless there's even the slightest reason to deviate."

Look, I'm not suggesting whoever does it first gets to dictate it, but literally everyone thinks their thing will be better enough to attract lots of users or be worth it, and most never actually do/are. They do, however, cause lots and lots and lots of toil!

Your position is exactly what leads people over this cliff - better architecture does not matter. It doesn't. Technical goodness is not an end unto itself. It's a means, often to reduce cost or increase efficiency, and unfortunately rarely, to deliver new features or a better experience. Reducing cost or increasing efficiency are great. But architecture is not the product. The product is the product.


> The overall amount of developer toil and waste created en masse by people who think they are making something "better", usually with no before/after data […], almost always vastly dwarfs all improvement that occurs as a result.

So, why is the Fedi not built on RSS/Websub/etc. then?


Don't know enough about this particular topic to offer a view, but in general, because people care more about releasing their better thing and pretending they are helping than about the hard work of actually helping.

This is true everywhere?



> ActivityPub is incredibly chatty.

> ATProto is a better looking protocol already.

Are there benchmarks for this? What's the level of difference here? Request frequency seems closely linked to activity and XRPC bodies are JSON just as ActivityPub so message size should be within order of magnitude at least. Are there architectural differences that reduce request frequency significantly?

> If this had to scale to a larger instance the costs would spiral fast.

Are we talking bandwidth costs or processing? I know the popular ActivityPub implementation is widely considered to be pretty inefficient processing-wise for reasons unrelated to the protocol itself: is that a factor here?


One example of the chattiness is a flow where more than one person is following the same individual on another server. That person will have to push new messages to every single one of the people following them. This means that if 10 users are following me from the same server I will not have 1 push for that instance, I'll have 10 pushes for the same single unchanged message. This is built into the protocol. That's a lot of throughput for something that could be much, much less chatty.

Now on my single instance it's not too bad because 1. I follow maybe 40 people and 2. I have like max 10 followers. For an instance with people with high follower counts across multiple other instances it could get to be a problem fast.

Edit: my previous description used fetch when it should have used push.


Isn't that "more than once instance" rather than "more than one user"?

I think the weirdness is that with Bluesky all that cost is still there, but it's now handled by a small group of massive megacorps. That's a real tangible benefit to self-hosters, but you could have that on top of AP by running your service off what would essentially be a massive global cache of AP content, which is what the indexer is.


Yes, I edited my comment to be more clear. I shouldn't have referred to it as fetch because it's actually push.


I should note that as a protocol I suspect that ATProto is less chatty which does translate to reduced costs. It adds features on top that some may or may not want which increase costs in other ways but only for the people who want to utilize those features. It's not exactly an Apples to Apples comparison.


> This is built into the protocol.

I haven't implemented AP from scratch so I may be missing technical details precluding this, but it sounds like it may at least partially be covered by https://www.w3.org/TR/activitypub/#shared-inbox-delivery

... in which case it may be an implementation issue?

Mind you, there is liberal use of "MAY" there which I find is always a problem with specs: that would likely lead to mandatory chattiness of outgoing requests if you're federated with a lot of instances without shared inbox support, but should at least solve for incoming.


In theory it would help. In practice since a server can't rely on this it probably devolves to just ignoring that feature.


There can be an interrogation endpoint/message of supported versions/extensions to the base protocol, that's a very normal thing. If it supports bundled delivery, send a single bundle if not send them all individually.


Yep, and I'm sure there are some instances that do exactly that. But in a distributed protocol you only get the benefit if both sides of a given interaction support the optimization. For something in the spec that is optional, you can't rely on it and you aren't forced to implement it, so it's not irrational to just ignore it. Which typically means you only get occasional marginal benefit.


I mean, it depends. The vast majority of fedi traffic is Mastodon. Add it to Mastodon and it makes an impression and a real difference. At first Mastodon-to-Mastodon comms, but others will take notice, it will find its way into libraries, and then it's smooth sailing.


> Which typically means you only get occasional marginal benefit.

Which is why most browser-server communication is still limited by HTTP/1.0.

Except it isn't. That's not how this works out when there are benefits to all participants to implement these optimizations.


Instead of baselessly speculating, you could find out that nearly every server uses shared inbox.


I'm not too into the weeds of ActivityPub yet, but in my head it seems like you could add some things to the protocol to optimize this (rough sketch after the list):

1. The server with the sender constructs a map keyed by receiving server that contains a list of all users on that server who should receive the message.

2. The sending server iterates the map. If the receiving server has multiple recipients, do a check to see if the receiving server supports this kind of 'bundled' delivery.

  2a. If so, send the message once with a list of receivers.

  2b. The receiving server processes the one message and delivers it to all the users.

3. If not, the sending server sends it in the traditional way, with multiple pushes.
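
Something like the sketch below -- note the capability check and the bundled payload shape are entirely made up here, just to show the flow:

    // Hypothetical bundled-delivery fan-out. "supportsBundledDelivery" and the
    // "recipients" field are not part of ActivityPub; they stand in for whatever
    // capability negotiation and extension you'd actually define.
    type Activity = { id: string; type: string; actor: string; object: unknown };

    async function deliver(activity: Activity, followerInboxes: string[]): Promise<void> {
      // 1. Group follower inboxes by the server that hosts them.
      const byServer = new Map<string, string[]>();
      for (const inbox of followerInboxes) {
        const origin = new URL(inbox).origin;
        byServer.set(origin, [...(byServer.get(origin) ?? []), inbox]);
      }

      for (const [origin, inboxes] of byServer) {
        if (inboxes.length > 1 && await supportsBundledDelivery(origin)) {
          // 2a/2b. One POST per server; the receiving server fans out locally.
          await fetch(`${origin}/inbox`, {
            method: "POST",
            headers: { "content-type": "application/activity+json" },
            body: JSON.stringify({ ...activity, recipients: inboxes }),
          });
        } else {
          // 3. Fall back to the traditional per-recipient push.
          for (const inbox of inboxes) {
            await fetch(inbox, {
              method: "POST",
              headers: { "content-type": "application/activity+json" },
              body: JSON.stringify(activity),
            });
          }
        }
      }
    }

    async function supportsBundledDelivery(origin: string): Promise<boolean> {
      // Placeholder for a real capability probe (e.g. a nodeinfo-style lookup).
      return false; // conservative default when the other side is unknown
    }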


Sidekiq seems to be the culprit from what I've seen and read from people who've run into this issue. It gets overloaded fast if you don't have enough processing power in front of the queue. Lighter implementations apparently do something different, or are more efficient in handling their queues without whatever overhead Sidekiq adds.


It's weird to me that all the complaints about @proto being NIH focus on ActivityPub when, if anything, it's closer to an evolution of Secure Scuttlebutt. The two are so fundamentally different I do not understand the complaints.


> The two are so fundamentally different I do not understand the complaints.

You will find that many people do not dig into details. You can post on Mastodon, you can post on Bluesky, therefore they must be similar.

It does mean that learning about things becomes a superpower, because you can start to tell if a criticism is founded in actual understanding or something more superficial.


Bluesky is a Twitter-like platform like Mastodon, so the comparison makes sense if you look at the primary implementations.

SSB is also decentralised, while ATProto is federated - like ActivityPub.


User Identity in ATProto is decentralized, it's meant to use W3C DIDs.

That's actually one of the things that bugs me about ActivityPub... unless I'm running my own single-user instance I won't have control over my own identity.

It's also very weird how, even on a supposedly "federated" system, the only way to ensure you can access content from all instances (even if they differ in philosophy or are on opposite sides of some "inter-instance war") is to have separate accounts for each side... it kind of defeats the point of federation. There are even places like Lemmy which, instead of using blocklists, use allowlists, so they will only federate with pre-approved instances.


> User Identity in ATProto is decentralized

It is but that's largely conceptual so doesn't really mean anything in the context of the protocol. These are message exchange protocols so the defining element is whether the messaging is federated or decentralised.

Fwiw AP also supports DIDs; I just haven't seen any implementations use them, since the spec strongly recommends other IDs (a mistake imo).

> It's also very weird how...

What you're describing here is a cultural phenomenon, not a technological one, so isn't really relevant to the discussion: ActivityPub siloing isn't a feature of the protocol, it's an emergent feature of the ecosystem/communities.

It's also worth mentioning it isn't usually implemented as you describe, unless you're specifically concerned with maximising your own reach from a publishing perspective: most instances allow individual users to follow individual users on another "blocked" instance - it's usually just promotion/sharing & discovery that are restricted.


> most instances allow individual users to follow individual users on another "blocked" instance - it's usually just promotion/sharing & discovery that are restricted.

If they are still allowing access to all forms of third party content through their own instance (even if they restrict the discoverability) then they are still risking being held responsible for that content. So imho, that would be a mistake.

Personally, if I were to host my own instance under such a protocol, I'd rather NOT allow any potentially illegal content that might come from an instance I don't trust to be distributed/hosted by my node.

The problem, imho, is in the way the content needs to be cached/proxied through the node of the user in order for the user to be able to consume it. This is an issue in the design of how federation typically works.

I'd rather favor a more decentralized approach that uses standards to ensure a user identity can carry over across different nodes of content providers, whether those nodes directly federate among themselves or not.

There should be a separation between identity providers and content providers, in such a way that identity providers have freedom to access different content providers, and content providers can take care of moderation without necessarily having to worry about content from other content providers with maybe different moderation standards.

I'm not saying ATProto is that solution... but it seems to me it's a step in the right direction, since they separate the "Personal Data Server" from the "Big Graph Services" that index the content. I can host my own personal single-user server without having all the baggage of federating all the content I want to consume. The protocol is better suited for that use case.

In services using ActivityPub, instances are designed for hosting communities, they come with baggage that's overkill for a single-user service but that's still mandated due to how the communications work, they expect each instance to do its own indexing/discovery/proxying. So they are bound to be heavier and more troublesome to self-host, and at the same time, from what I've seen, the cross-instance mechanisms for aggregation in services like Mastodon are lacking.


I agree that the siloing happening with AP instances is not a good thing and it's the main reason I have not bothered with the fediverse. But this isn't a technological limitation with the protocol at all but a policy chosen by the operators. What makes you think that this won't apply to BlueSky or other ATProto instances (when they allow federation at all)?


If the protocol is designed in such a way that it allows the operators of one instance to have full control over what some user identities can access in the whole network, then it's an issue in the protocol. Imho, the problem is likely inherent to the way the AP fediverse commonly defines "federation".

I'd rather favor a more decentralized structure that allows users to directly access content from any content provider that hosts it through the protocol (i.e. without necessarily requiring another specific instance to index that content from their side; if they index it, great, but if they don't it should still be possible to access it using the same user account from a different index), with a common protocol that allows a separation between identity management and content providers.

From what I understood, ATProto is closer to that concept.


> User Identity in ATProto is decentralized, it's meant to use W3C DIDs.

The W3C spec leaves all the hard parts to vendors, which is why the only DID implementation up to now has been Microsoft's, which relies on an AD server in Azure. Much decentralise.

Bluesky's isn't that, but a hash of some sort, which is centrally decentralised ... on their servers? I think this is one of the bits of AT that isn't finished yet.

But "W3C DID" is not a usable spec in itself, it's a sketch at best.


There's a growing base of users who have reached the epiphany (by multiple paths) that both identities & content-addressing MUST be cryptographically-rooted, or else users' privacy & communications will remain at the mercy of feudal centralizers with endless strong incentives to work against their interests.

For such users, any offering without these is a non-starter, dead-on-arrival.

People with resistance to this epiphany sound like those who used to insist, "HTTP is fine" (even when it put people at risk) or "MD5 is fine" (long after it was cryptographically broken). Most will get it eventually, either through painful tangible experiences or the gradual accumulation of social proof.

A bolt-on/fix-up of an older protocol might work, if done with extreme competence & broad consensus. And, some in the ActivityPub world had the cryptoepiphany very early! Ideas for related upgrades have been kicked around for a long time. But progress has been negligible, & knee-jerk resistance strong, & the deployed-habits/technical-debts make it harder there than in a green-field project.

Hence: a new generation of systems that bake the epiphany in at their core – which is, ultimately, a more robust approach than a bolt-on/fix-up.

Because so many of those recently experiencing this cryptoepiphany reached it via experience with cryptotokens, many of these systems enthusiastically integrate other aspects of the cryptotoken world – which of course turns off many people, for a variety of good and bad reasons.

But the link with cryptotokens is plausibly inessential, at least at the get-go. The essentials of grounding identity & addressing in cryptography predate Bitcoin by decades, and had communities-of-practice totally independent of the cryptoeconomics world.

A relative advantage Bluesky may have is their embrace of cryptographic addressing behind-the-scenes, without pushing its details to those who might confuse it with promotional crypto-froth. Users will, if all goes well, just see the extra security, scalability, and user sovereignty against abuses that it offers. We'll see.


HTTP is fine for a lot of uses. So is MD5.

Crytography and security in general are often cargo-culted without any consideration for the negative implications.

> this cryptoepiphany

Bro are you for real.


It was clear that MD5 didn't meet the goals it was designed for in 1994, when experts recommended it be phased out for its originally intended uses.

It's not fine here in 2023.

If you need a secure hash, it's been proven broken for 10 years now.

If you don't need a secure hash, others are far more performant.

Using it, or worse, advocating for its use, is a way to signal your thinking is years behind the leading edge, and also best practices, and even justifiable practices.

HTTP's simplicity could make it tolerable for some places where world-readability is a goal - but people, echoing your sentiments here, have said it was "fine" even in situations where it was putting people at risk.

Major browser makers recognize the risk, and are now subtly discouraging HTTP, and this discouragement will grow more intense over time.


> If you're going to go NIH your replacement needs to not just be better but substantially better, and you should also show understanding of the original open spec.

I just disagree with this in principle. I wonder what the tech equivalent of "laissez faire" would be.

To this day I don't understand how anyone in the tech world thinks they can make a single demand of anyone else. Even as a customer I believe you can really only demand that the people you are paying deliver what is contractually and legally required. But outside of that ... I just don't understand people's mentality on this subject.

What is it about this specific area of the web that attracts these ideological zealots? I had the same head-scratching moment in the early 2000s when RSS and Atom were duking it out.


> I don't understand how anyone in the tech world thinks they can make a single demand of anyone else.

Ultimately, yes you're right, no-one can make actual "demands" of anyone else. My language above is demanding, certainly, but ultimately I'm just arguing opinion. I cannot control any outcomes of what Bluesky or any other enterprise choose to pursue.

> What is it about this specific area of the web that attracts these ideological zealots?

I think it stems from the unprecedented success of such zealots in the 1980s, which have differentiated the technological landscape of software technologies from previous areas of engineering by making them more approachable, accessible, interoperable and ultimately democratised. That's largely been the result of people arguing passionately on the internet to advocate for that level of openness, collab & interop.


> I think it stems from the unprecedented success of such zealots in the 1980s,

I would challenge this belief, that the success of Linux, or the Web technologies (TCP, HTTP, HTML, etc.) were primarily the result of the zealots from the 80s. I would challenge the belief that protocols for Twitter-like communication fall into the same category as things like POSIX, TCP/IP, HTTP, HTML, etc.

> That's largely been the result of people arguing passionately on the internet to advocate for that level of openness, collab & interop.

My own opinion is the key to success was a large number of people writing useful code and an even larger number of people using that code.

One example that comes to mind is how HTML was spun out from w3c into WhatWG. Controversial at the time to say the least but IMO necessary to get away from the bickering of semantic web folks who were grinding the progress to a halt. HTML 5 won the day over XHTML (actually to my disappointment). The reason wasn't the impassioned arguments of the semantic web zealots, it was the working implementations delivered by the WhatWG members to serve the literal millions of people using their applications. Another example is the success of Linux over Hurd - the latter being a technology initially supported by the most vocal and ideological of all the zealots.

It is a simple fact that if AT garners sufficiently useful implementations and those implementations garner sufficient numbers of users - all of the impassioned arguments against it will have been for naught. So those arguing should probably stop so that they can focus on implementing ActivityPub (or whatever they think is best) and attracting users. I guarantee you that if ActivityPub attracts millions of users then Bluesky will suddenly see the light. They will change course just like every big tech company did when Linux became successful.

It is also why I think all of the hot-air on RSS and Atom was literally wasted. As technologies both have thus-far failed to attract enough users to make it worth it. I would bet that the same will be true of both AT and ActivityPub. Unless someone develops an application that uses one or the other and it manages to attract as many users as Twitter then people are just wasting time for nothing.


Calling out undesirable behavior and demanding people act better is how society improves. See things you don't like, campaign for change. Zealotry or activism - the label depends mostly on whether you agree with the cause.

That you have no legal basis to enforce those demands does not mean that you cannot make them. They can be ignored of course but they may not be. Not all interactions have to be governed by a literal contract as opposed to a social contract.


This is a great summary of what my concerns are as well. I'd maybe add one more point:

* Improvements that'd layer cleanly on top of ActivityPub if they'd made any attempt at all.

E.g. being able to "cheaply" ensure that you have a current view of all of a given user's objects is not covered in ActivityPub - you're expected to basically want to get the current state of one specific object, because most of the time that is what you'll want.

So maybe it falls in the "highly debatable" category, but we also have a trivial existing solution from another open spec: RemoteStorage mandates ETags where a parent "directory" object's ETag will change if a contained object changes, and embeds that in a JSON-LD collection as the directory listing. If you feel strongly that this is needed to be able to rapidly sync changes to a large collection, an ActivityPub implementation can just support that mechanism without any spec changes being needed at all (but documenting it in case other implementations want to do so would be nice). Heck, you can "just" add minimal RemoteStorage compatibility to your ActivityPub implementation, since that too uses Webfinger, and exposing your posts as objects in a RemoteStorage share would be easy enough.
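
For illustration, a sketch of what that sync loop could look like -- the folder-listing shape below is only approximately RemoteStorage's (treat field names as illustrative); the load-bearing part is plain HTTP conditional requests:

    // Skip whole subtrees whose collection-level ETag hasn't changed.
    const lastSeenEtag = new Map<string, string>(); // url -> ETag from a previous sync

    function conditionalHeaders(url: string): Record<string, string> {
      const etag = lastSeenEtag.get(url);
      return etag ? { "If-None-Match": etag } : {};
    }

    async function syncFolder(url: string): Promise<void> {
      const res = await fetch(url, { headers: conditionalHeaders(url) });
      if (res.status === 304) return; // nothing under this subtree changed: prune it
      lastSeenEtag.set(url, res.headers.get("etag") ?? "");

      // Approximate listing shape: entries ending in "/" are subfolders.
      const listing = (await res.json()) as { items: Record<string, { ETag: string }> };
      for (const name of Object.keys(listing.items)) {
        if (name.endsWith("/")) await syncFolder(url + name);
        else await syncDocument(url + name);
      }
    }

    async function syncDocument(url: string): Promise<void> {
      // Same conditional GET at the leaf level; handling of the body is elided.
      const res = await fetch(url, { headers: conditionalHeaders(url) });
      if (res.status !== 304) lastSeenEtag.set(url, res.headers.get("etag") ?? "");
    }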

Want to do a purely "pull" based ActivityPub version the way AT is "pull"? Support the above (w/fallback of assuming every object may have changed if you care about interop), make your "inbox" endpoints do nothing and tell people you've added an attribute to the actor objects to indicate you prefer to pull and so to not push to you.

Upside is, if any of the AT functionality turns out to be worthwhile, it'll be trivial to "steal" the good bits without dropping interop.

(Also, I wanted to see exactly how AT did pulls, and looked at the AT Proto spec, and now I fully concur with the title of this article)


Your added point fits in with the initial red flag for me - before I saw pfraze's (excellent) post here - that the vast majority of what I've read advocating for ATProto says very clearly: "Account portability is the major reason why we chose to build a separate protocol.".

Account portability is in no way incompatible with ActivityPub. It's not built into the spec, but it's also not forbidden or prevented by the spec in any way. From ActivityPub's perspective it's an implementation detail.

Would it be nice if it was built into the spec: yes. Does that justify throwing out the baby with the bathwater?

I was hoping pfraze would offer some better reasoning, but while the post above is a great read for the technically curious, it's very "in the weeds", so it doesn't really address the important high-level questions. Textbook "technologists just want to technologise" vibes: engineers love reinventing things because working out problems for yourself is the fun part, even if many people have come together to collaborate on solving those problems before.


An unspecified "implementation detail" is essentially another way of saying that it doesn't work.

I've ported my account on ActivityPub a couple of times, and it's a horrendous experience -- not only do I lose all my posts and have to manually move a ton of bits, but the server you port from continues to believe you have an account and doesn't like to show you direct links on there anymore.

The latter could probably be easily solved, the former needs to be built into the spec or it will continue to be broken.


100% agree with everything in your post. I don't see how it contradicts anything I've said though.

> the former needs to be built into the spec or it will continue to be broken.

Absolutely, but ActivityPub doesn't preclude that. There's no reason for that proposed feature to be incompatible with the spec., or to have to exist in an implementation that is incompatible with ActivityPub.

Fwiw Mastodon, the most popular ActivityPub implementation, is (as is often the case with open standards) not actually fully compliant with the spec. They implement features they need as they need them & propose them. This is obviously a potential source of integration pains, but as long as the intent to be compatible is still there, it's still a better situation.


And I believe CalcKey(?) is adding account content migration, which is why Eugen is suddenly talking about account migration again.


Even framing it as 'account portability' is missing the point. I want to own my identity and take it with me anywhere. I shouldn't need to transfer anything; it's mine. This model is fundamentally incompatible with how ActivityPub works (no, running your own server is not the same thing as decoupling identity from the concept of a server itself). Could that change in the future? Maybe, but not without prior art. Even if you think @proto, farcaster, ssb, nostr, or others are doomed, we should be applauding them for attempting to push the needle forward.


It's not incompatible with how ActivityPub works at all.

The ActivityPub spec says that object ids should be https URIs, not that they must. The underlying ActivityStreams spec just requires them to be unique.

All that's needed to provide full portability without a "transfer" is for an implementation to use URIs of e.g. DIDs, or any other distributed URI scheme. Optionally, if you want full backwards compatibility, point the id to a proxy and add a separate URI until there's broader buy-in.

I think there'd be benefit in updating the ActivityPub spec to be less demanding of URIs, and instead of saying that they "should" be https require them to be a secure transport, and maybe provide a fallback mechanism if the specific URI mechanism is not known (e.g. allow implementations to provide a fallback proxy URL), but the main challenge there is not the spec but getting buy-in from at least Mastodon. The approach of providing an https URI but giving a transport-neutral id separate from the origin https URI would on the other hand degrade gracefully in the absence of buy-in.
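
E.g. something along these lines, where the extension property name is made up purely to show the shape of the idea:

    // Hypothetical: a normal AP object that also carries a transport-neutral id.
    const note = {
      "@context": "https://www.w3.org/ns/activitystreams",
      type: "Note",
      id: "https://social.example/users/alice/notes/1",   // plain https id: works today
      "example:portableId": "did:example:alice/notes/1",  // made-up extension property
      attributedTo: "https://social.example/users/alice",
      content: "Hello",
    };

Implementations that don't know about the extension just keep using the https id as they do today, which is the graceful-degradation part.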


> the main challenge there is not the spec but getting buy-in from at least Mastodon

Which is why promoting a new competing and incompatible protocol is a good way to push for change.

You want Microsoft to become more open? Then start promoting a pure FLOSS alternative until they can't ignore it anymore.


You could do the same a lot easier by pushing an incompatible change to ActivityPub that'd be easy to adopt.


I ran into this trap very early in my entrepreneurial journey. Developed a data "standard" for events in a vacuum, convinced that nothing else in the world could possibly exist. Then spent a good year learning the lessons that already-existing standards had learned and adapted from years ago.

A protocol like this doesn't work without community adoption. And the best way to get the community to adopt is to build on what's already there rather than trying to reinvent the fridge.


Not having read the article, I got pretty far into your second paragraph before realizing you all aren't talking about configuring a modem over a serial link. It's funny because the "Hayes AT command set" protocol is also an obtuse crock. I was really hoping you were going to open my mind with some deep wisdom straight out of 1981.


Yeah, I wish they hadn't clobbered the name of an existing, well-known protocol. It's still used in drivers for cellular modems (I'm working with it right now), which are getting more and more numerous for IoT applications.


Makes it really hard to search for stuff. This has happened with Matrix, Go. Although using "golang" resolves the issue with Go.


Indeed. As much as the k-based KDE app names might look or sound silly, they're mostly quite searchable due to being a tad different.


They might be searchable on the wider internet if you’re looking for info on them, but I can never remember them when I need them on the actual system.


Your launcher doesn't find them by description or substring?


Not reliably enough.

On my windows PC, I type ‘note’ and slam enter to open notepad a thousand times a day without a problem. On my KDE desktop ‘text’ seems to 50/50 bring up… whatever the text editor is called and 50/50 something else. Apparently “Kate” is what I’m after.

There is real value in naming things after what they do and it’s my sole gripe with KDE that they have stupid names.


On my cinnamon setup, typing "note" pulls up the settings panel for my installed basic text editor. :-|


I agree that 'go' has to be the worst naming from an SEO perspective. The only ones worse are single-letter names. And TBH, they might be _better_ simply because they aren't a common English-language word.


better than .net, admittedly.


nodejs has not solved this problem with "node" which is an admittedly shitty name for a very old amateur radio networking package.


It's a cheeky way to say @ protocol.


Maybe they could call it the Strudel protocol?


Man! I like that name already


"The protocol formerly known as Strudel"?


Fetus


I think for the fetus to work, the @ should represent a kernel or object, but in a generic form, such as / representing the top of the tree.

I can also see the term becoming controversial, such as male and female vs. plug and socket. Some see the former as blatantly sexual and the latter as requiring a dirty mind to be sexual.

Maybe it's best to hold off on that reference until we get more voices to chime in.


"Rocambole" would be even easier to search for


Ok, if I make a protocol I will call it "rocambole", but I will be thinking of the sand leek rather than the Victorian fictional adventurer.


LOL I was mostly thinking of the swiss roll but I know it's only called rocambole in Brazil. It's got all sorts of names all over the world according to Wikipedia


I had no idea what it was. Of the things I found, one seemed plausible and the other worth mentioning.

Glad it was something else entirely. The localization for Brazil can be rocambole.


Speaking of which, does anybody know a good freestanding C library that doesn't use dynamic allocation for the Hayes AT protocol?


I've certainly had to implement AT commands in C, but within a proprietary codebase.

It's tough to do in a freestanding way given that it's a command-response protocol. It's very convenient to depend on the specific UART API that's available.


Yes, I came for a critique of the Hayes AT commands.


I thought exactly the same. I will dive into configuring a Raspberry Pi to work with a 5G hat/SIM and will use at commands for this (first time ever). I was looking forward to reading how horrible an experience it will be and was a bit confused :D


> the "Hayes AT command set" protocol is also an obtuse crock

Yeah, but it's an obtuse crock that you can literally still hear in your memories. How many protocols can say that? It has a special place in my heart, I think.


I keep a Hayes Smart Modem 300 within arm's reach of my desk just in case.


That's good. In case of home intruders you'll have a heavy, sturdy bludgeon at hand.


It's all the rage nowadays. Microsoft pushing LoRA, which has nothing to do with LoRa.


Hahaha… I clicked thinking the same. I was ready for a good rant on Hayes modem commands… :)


I had the same reaction. It seemed really weird to get upset about such an old thing.


Nostalgic outrage is one of the more interesting forms of nostalgia.


> the "Hayes AT command set" protocol is also an obtuse crock.

I will make sure to reuse this name if I ever find myself developing an obtuse crock.


Yes, I'm going to call my object database the ObtuseCrock. Everything that inherits from this base will be an obtuseCrock object. These can be addressed obtuseCrock.[1] or by name directly.


You sound like you may also be old enough to remember dealing with AT keyboard scan codes directly.


I can't say that I have had that pleasure, or if I have I didn't realize it by name. I've just spent a lot of time hammering out reliability issues on embedded cell modems with buggy firmware and power supplies that droop too much on 2G transmissions.


I assumed Hayes AT at first as well.


So anything later than 1981 is wisdomless?


ATH


I had the same reaction. (-_-)


This is completely irrelevant to the article and response


ATDT12398192831


Having done a decent bit of hacking around ActivityPub, when I read the (thin, which 'pfraze and others have copped to) documentation I immediately went "oh, this is going to be way more scalable than ActivityPub once it's done."

It's not all roses. I'm not sold on lexicons and xrpc, but that's probably because I am up to my eyeballs in JSON Schema and OpenAPI on a daily basis and my experience and my existing toolkit probably biases me. I think starting with generally-accepted tooling probably would've been a better idea--but, in a vacuum, they're reasonably thought-out, they do address real problems, and I can't shake the feeling that the fine article is spitting mad for the sake of being spitting mad.

While federation isn't there yet, granted, the idea that you can't write code against this is hogwash. There's a crazily thriving ecosystem already from the word jump, ~1K folks in the development Discord and a bunch of tooling being added on top of the platform by independent developers right now.

Calm down. Nobody's taking ActivityPub away from people who like elephants.


If they'd proposed (or even just implemented) improvements that at least suggested they'd considered existing options and wanted to try to maximise the ability to do interop (even with proxies), I'd have been more sympathetic. But AT to me seems to be a big ball of Not Invented Here, which makes me worry that either they didn't care to try, or that they chose to make interop worse for a non-technical reason.


During this stage of discovery I'm completely comfortable with ground up rethinks.

I don't feel we have the correct solution, and there is no commercial reason to get this thing shipped. Now is the time to explore all the possibilities.

Once we have explored the problem space we should graft the best bits together for a final solution, if needed.

I'm not sure I see the value of standardizing on a single protocol. Multiple protocols can access the same data store. Adopting one protocol doesn't preclude other protocols. I believe Developers should adopt all the protocols.


Ground-up rethinks that take into account whether or not there's an actual reason to make a change are good. Ground-up rethinks that throw things away for the sake of throwing them away, even when what they end up doing would layer cleanly, are not. They're at best lazy. At worst, intentional attempts at diluting effort. I'm hoping they've only been lazy.


I'm not disagreeing. To say that there is only one way or to project presumed goals and intentions is too far for me.

I firmly believe that protocols are developed through vigorous rewrites and aren't nearly as important as the data-stores they provide access to. I would like our data-stores to be stable, and as required we develop protocols. Figuring out a method to deal with whatever the hosted data-store's chosen protocol is seems correct to me. I just don't see mutual exclusivity. Consider the power of supporting both protocols.


> "oh, this is going to be way more scalable than ActivityPub once it's done."

Can you elaborate on this?


I think this is referring to the content-hashed user posts. Using this model one can pull content from _anywhere_ without having to worry about MITM forgeries etc. This opens up the structure of the network, basically decentralizing it even _more_.

Correct me if I'm wrong on this though.


ActivityStreams just requires an object to have a unique URI. ActivityPub says it "should" be a https URI. However, since this URI is expected to be both unique and unchanging (if you put up the same content with a different id, it's a different object), you can choose to use it as the input to a hash function and put the posts in a content-addressable store.

Mastodon will already check the local server first if you paste the URL of a post from another server in the Mastodon search bar, so it's already sort-of doing this, but only with the local server.

So you can already do that with ActivityPub. If it becomes a need, people will consider it. There's already been more than one discussion about variations over this.

(EDIT: More so than an improvement on scaling, this would be helpful in ensuring there's a mechanism for posts to still be reachable by looking up their original URI after a user moves off a - possibly closing - server, though)

The Fediverse also uses signatures, though they're not always passed on - fixing that (ensuring the JSON-LD signature is always carried with the post) would be nice.


one important difference is:

> expected

Because the URI is only _expected_ to be immutable, not required, servers consuming these objects need to consider the case where the expectation is broken.

For example, imagine the serving host has a bug and returns the wrong content for a URI. At scale this is guaranteed to happen. Because it can happen, downstream servers need to consider this case and build infrastructure to periodically revalidate content. This then propagates into the entire system. For example, any caching layer also needs to be aware that the content isn't actually immutable.

With content hashes such a thing is just impossible. The data self-validates. If the hash matches, the data is valid, and it doesn't matter where you got it from. Data can be trivially propagated through the network.
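
The self-validating property compresses to a few lines of logic (sketch, assuming SHA-256; real systems have their own addressing formats):

    import { createHash } from "node:crypto";

    // If the hash matches, the bytes are the bytes -- it doesn't matter which
    // cache, mirror, or peer handed them to you.
    function checkBlock(expectedHash: string, bytes: Buffer): Buffer {
      const actual = createHash("sha256").update(bytes).digest("hex");
      if (actual !== expectedHash) throw new Error("corrupt or forged block");
      return bytes; // safe to cache and re-serve forever under this hash
    }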


The URI is expected to be immutable, and the URI can be used as a key. Whether the object itself is immutable depends on the type of object. A hash over the content cannot directly be used that way, but it can e.g. be used to derive the original URI in a way that allows for predictable lookups without necessarily having access to the origin server.

Posts are explicitly not immutable, so they do need to be revalidated, and that's fine.

For a social network immutable content is a bad thing. People want to be able to edit, and delete, for all kinds of legitimate reasons, and while you can't protect yourself against people keeping copies you can at least make the defaults better.


> Posts are explicitly not immutable, so they do need to be revalidated, and that's fine.

OK that's my point. In the AT protocol design the data backing posts is immutable. This makes sync, and especially caching a lot easier to make correct and robust because you never need to worry about revalidation at any level.

> People want to be able to edit, and delete

Immutable in this context just means the data blocks are immutable. You can still model logically mutable things, and implement edit/delete/whatever. Just like how Git does this.


But to model mutable things over immutable blocks you need to revalidate which blocks are still valid.

You need to know that the user expects you to now have a different view. That you're not mutating individual blocks but replacing them has little practical value.

It'd be nice to implement a mechanism that made it easier to validate whole collections of ActivityPub objects in one go, but that just requires adding hashes to collections so you don't need to validate individual objects. Nothing in ActivityPub precludes an implementation from adding an optional mechanism for doing that the same way e.g. RemoteStorage does (JSON-LD directories equivalent to the JSON-LD collections in ActivityPub, with ETags at the collection level required to change if subordinate objects do).


> you need to revalidate which blocks are still valid.

No you don't. Sorry if I'm misunderstanding, but it sounds like maybe you don't have a clear idea of how systems like git work. One of their core advantages is what we're talking about here -- that they make replication so much simpler.

When you pull from a git remote you ask the remote what the root hash is, then you fetch all the chunks reachable from that hash which you don't yet have. If the remote says you need the chunk with hash X, and you have a chunk with hash X, then you have the data. You don't have to worry if it has changed. Once you have all the chunks reachable from the latest head, you have the latest state of the entire repository. That's it.

(I mean simple in the sense of clear/direct/correct, not in the sense of "easy". It's certainly the case that a design based on consuming a stream of change events is a lot less code).
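
A sketch of that flow, to be concrete -- not git's or ATProto's actual wire protocol, just the shape of the traversal:

    // "Remote" is any peer that can name its root hash and serve chunks by hash.
    type Chunk = { hash: string; children: string[]; data?: unknown };

    interface Remote {
      rootHash(): Promise<string>;
      getChunk(hash: string): Promise<Chunk>;
    }

    const localChunks = new Map<string, Chunk>(); // everything we already hold, by hash

    async function pull(remote: Remote): Promise<void> {
      const pending = [await remote.rootHash()];
      while (pending.length > 0) {
        const hash = pending.pop()!;
        if (localChunks.has(hash)) continue; // content-addressed: same hash, same data
        const chunk = await remote.getChunk(hash);
        localChunks.set(hash, chunk);
        pending.push(...chunk.children); // walk everything reachable from the root
      }
      // The local store now holds the full latest state of the repository.
    }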


> When you pull from a git remote you ask the remote what the root hash is, then you fetch all the chunks reachable from that hash which you don't yet have. If the remote says you need the chunk with hash X, and you have a chunk with hash X, then you have the data. You don't have to worry if it has changed. Once you have all the chunks reachable from the latest head, you have the latest state of the entire repository. That's it.

Yes, I know how Merkle trees work and what they allow you to do. In other words you use the hash to validate which blocks are still valid/applicable. Just as I said, you need to revalidate. In this context (a single user updating a collection that has an authoritative location at any given point in time) it effectively just serves as a shortcut to prune the tree of what you need to consider re-retrieving.

It is also exactly why I pointed at RemoteStorage, which models the same thing with a tree of ETags, rooted in the current state of a given directory, to provide the same shortcut. RemoteStorage does not require them to be hashes from a Merkle tree, as long as they are guaranteed to update if any contained object updates (you could e.g. keep a database of version numbers if you want to, as long as you propagate changes up the tree), but it's easy to model as a Merkle tree. Since RemoteStorage also uses JSON-LD as a means to provide directories of objects, it provides a ready-made model for a minimally invasive way of transparently adding this to an ActivityPub implementation in a backwards-compatible way.

(In fact, I'm toying with the idea of writing an ActivityPub implementation that also supports RemoteStorage, in which case you'd get that entirely for "free").

> (I mean simple in the sense of clear/direct/correct, not in the sense of "easy". It's certainly the case that a design based on consuming a stream of change events is a lot less code).

That is, if anything, poorly fleshed out in ActivityPub. In effect you want to revalidate incoming changes with the origin server unless you have agreed some (non-standard) authentication method, so really that part could be simplified to a notification that there has been a change. If you layer a Merkle-like hash on top of the collections you could batch those notifications further. If we ever get to the point where scaling ActivityPub becomes hard, then a combination of those two would be an easy update to add (just add a new activity type that carries a list of actor URLs and hashes of the highest root to check for updates).
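
E.g. a completely made-up activity along these lines (nothing like it exists in ActivityPub today):

    // Hypothetical batched "something changed" digest.
    const changeDigest = {
      "@context": "https://www.w3.org/ns/activitystreams",
      type: "example:CollectionDigest", // invented extension type
      actorsChanged: [
        { actor: "https://a.example/users/alice", rootHash: "9f2c..." },
        { actor: "https://a.example/users/bob", rootHash: "41d0..." },
      ],
    };

A receiver compares each rootHash against what it last saw and only re-crawls the actors whose collections actually changed.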


> Using this model one can pull content from _anywhere_ without having to worry about MITM forgeries etc

Does that make it more difficult to implement the right to be forgotten and block spam and trolls?


Doesn't it make it easier? A list of hashes which should be blacklisted means servers obeying rulings are never at risk of returning that data (this would also work offensively: poll for forbidden hashes and see who responds).


...and now you have to track which instance is authorized to block which hash, creating a lot of extra complexity. Plus, we need to trust all instances to really delete stuff.

It makes life really easy for spammers.


[flagged]


Please edit out swipes from your HN comments, as the guidelines ask: https://news.ycombinator.com/newsguidelines.html.

Your comment would be fine without that first bit.


I explained the point twice.


My reaction whenever I see such headline is always "there must be an engineer out there who worked on this, I wonder how they feel about this". I had the same reaction today, opened comments and you were at the top. I love HN sometimes.


A few years ago I had to work with an obscure 90's era embedded microcontroller with a proprietary language and IDE, and a serial adapter based programmer and debugger. The IDE sucked, and programming would fail 9 times out of 10, but at least the debugger was solid.

By complete chance, I happened to interview someone that had, "wrote the debugger for that obscure microcontroller back in the 90's," tucked away in their resume. It was hard not to spend the entire interview session just picking their brain for anecdotes about it.


You were interviewing them for an unrelated role? Just trying to understand why that would be 'complete chance' and that you (by the sounds of it) didn't hire them to keep picking their brain on it.


The role, although also in the field of embedded software development, was indeed unrelated to that particular technology or its use within the company.


Was it some MCU from Rabbit Semiconductor?


It must be great to wake up to some jerk shitting all over your work on the front page of HN.

Fortunately not something I've had to experience!


While praise is best, I'd rather have my work be criticized than ignored. Nobody criticizes something that doesn't matter to them.


Never doing anything of note pays off yet again!

(speaking for myself, not you, to be clear)


I spend my days scraping and reverse engineering various embedded legacy systems and a recurring thought is "Who is the braindead person who specified or implemented this?" Then I realize those bad technical decisions most of the time end up in production due to business concerns. Often they show a clear misunderstanding of how the underlying tech works, i.e. inexperience. Only rarely those bad designs seem to stem from pure incompetence.

That said, my frustration still gets converted into not-so-polite comments in the source code the culprits will never see.


I always have the same thought before I set out and write a blog post like this. Shame this author doesn't.


> The schema is a well-defined machine language which translates to static types and runtime validation through code generation.

Can you elaborate on this point? There are multiple mature solutions in this space like OpenAPI+JSON Schema, GraphQL, and gRPC. All try to solve the same problems to varying degrees and provide similar benefits like generated static types and runtime validation. Was there something unique for you that made these tools not appropriate and prevented you from building upon an existing ecosystem?


I don't know what AT is doing, but I can say that while JSON Schema is okay as a validation schema it is less okay as a codegen schema. I don't know if there is a fundamental divide between these two uses, but in JSON Schema there is definitely an impedance mismatch.

For example, the JSON Schema structures: `anyOf`, `oneOf` and `allOf` are fairly clear when applied toward validation. But how do you map these to generating code for, say, C++ data structures?

You can of course minimize the problem by restricting to a subset of JSON Schema. But that leaves others. For example, a limited JSON Schema `object` construct can be mapped to, say, a C++ `struct` but it can also map to a homogeneous associative array, eg `std::map` or a fixed heterogeneous AA, eg `std::tuple`. It can also be mapped to an open-ended hetero AA which has no standard form in C++ (maybe `std::map<string,variant<...>>`). Figuring out the mapping intended by the author of the JSON Schema is non trivial and can be ambiguous.

At some level, I think this is an inherent divide as each language one targets for codegen supports different types of data structures differently. One can not escape that codegen is not always a fully functional transformation.


There are indeed few languages that can model all of JSON Schema in their type system. Typescript comes close. However, you can just use a subset as you said.

I don't really understand why this is a problem. Unless you're using things like Haskell, Julia, or Shapeless Scala, you generally accept that not everything is modeled at the type level. I don't know the nuances of the C++ types you mentioned, but I have not encountered the ambiguity you described in Typescript or the JVM. E.g. JSON Schema is pretty clear that any object can contain additional unspecified keys (std::map I assume) unless additionalProperties: false is specified.

> `anyOf`, `oneOf` and `allOf` for [...] C++ data structures?

Like I said, I don't know C++ well enough, but these have clear translations in type theory which are supported by multiple languages. I don't know if C++ types are powerful enough to express this.

allOf is an intersection type and anyOf is a union type. oneOf is challenging, but usually modelled OK enough as a union type.
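
For instance, a rough hand-written TypeScript illustration of those mappings (not generator output):

    // allOf -> intersection, anyOf -> union; oneOf only approximately, since a
    // plain TypeScript union does not enforce "exactly one branch matches".
    interface HasId { id: string }
    interface HasName { name: string }

    type AllOfExample = HasId & HasName;   // allOf: [HasId, HasName]
    type AnyOfExample = HasId | HasName;   // anyOf: [HasId, HasName]
    type OneOfExample = HasId | HasName;   // oneOf: best-effort as a union

    const ok: AllOfExample = { id: "42", name: "widget" };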


Thanks for the comment. It helps me think how to clarify what I was trying to say.

What I wanted to express is that using JSON Schema (or any such) for validation encounters a many-to-one mapping from multiple possible types across any/all given programming languages to a single JSON Schema form. That is, instances of multiple programming language types may be serialized to JSON such that their data may be validated according to a single, common JSON Schema form. This is fine, no problem.

OTOH, using JSON Schema (or any such) for codegen reverses that mapping to be one-to-many. It is this that leads to ambiguity and problems.

Restricting to a subset of JSON Schema only goes so far. For example, we can not discard JSON Schema `object` as it is too fundamental. But, given a simple `object` schema that happens to specify all properties have a common type `T` it is ambiguous to generate C++'s `class` or `struct` or a `std::map<string,T>`. Likewise, a JSON Schema `array` can be mapped to a large set of possible collection types.

To fight the ambiguity, one possibility is to augment the schema with language-specific information. At least, if we have a JSON Schema `object` we may add a (non `required`) property to provide a hint. Eg, we may add a `cpp_type` property. Then again, the overhead of using a codegen schema is typically only worthwhile if we will generate code in multiple languages, so this type-hinting approach means growing our hints to include a `java_type`, `python_type`, etc. This is minor overhead compared to writing language types in "long hand" but still somewhat unsatisfying. With enough type-theory expertise (which I lack) perhaps it is possible to abstractly and precisely name the desired type which codegen for each language can then implement without ambiguity. But, given the wealth of types, even sticking with just a programming language's standard library, this abstraction may be fraught with complication. I think of the remaining ambiguity between specifying use of C++'s `std::map` vs `std::unordered_map` given an abstract type hint of, say, `associative_array`. Or `std::array`, `std::list`, `std::vector`, `std::tuple` if given a JSON Schema `array`.
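
As a rough illustration of that hinting idea, with a made-up `cpp_type` annotation that is not part of JSON Schema:

    // JSON Schema fragments written as TypeScript literals for brevity.
    // "cpp_type" is a made-up, non-standard annotation that a codegen tool
    // could read to disambiguate struct vs associative-array mappings.
    const countsSchema = {
      type: "object",
      additionalProperties: { type: "integer" },
      cpp_type: "std::map<std::string, int>",  // hypothetical hint
    } as const;

    const personSchema = {
      type: "object",
      properties: {
        name: { type: "string" },
        age: { type: "integer" },
      },
      required: ["name", "age"],
      cpp_type: "struct",                      // hypothetical hint
    } as const;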

I don't think this is a failing of JSON Schema per se but is an inherent problem for any codegen schema to confront. Something new must enter the picture to remove the ambiguity. In (my) practice, this ambiguity is killed simply by making limiting choices in the implementation of the codegen. This is fine until it isn't and the user says, "what do you mean I can't generate a `std::map`!". Ask me how I know. :)


Clear, yes, that is indeed a problem. I haven't personally encountered this to be a huge problem, but that might be ecosystem dependent.


I mean the parent said OpenAPI + JSON Schema, not JSON Schema alone. OpenAPI has a ton of generators that are tweakable in the extreme.


There's a discourse clash.

AT and Bluesky are unfinished. Not ready for primetime. It's not fair to compare it to mature, well-developed stuff with W3C specs and millions of active users on thousands of servers with numerous popular forks.

But, also, everyone who hates Mastodon and spent the last months-years complaining about it is treating your project like the promised land that will lead them into the Twitterless future, somehow having gotten the impression that it's finished and ready to scale.

I think most critiques of Bluesky/AT are actually responding to this even if the authors don't realize it. They're frustrated at the discourse, the potshots, and the noise from these people.


> It's not fair to compare it to mature, well-developed stuff

Really? So rather than try to compare, contrast, and course-correct a project in its early stages by understanding the priors and alternatives, we should only do retrospectives after it has matured?

I would have thought this was the whole point of planning in early development: figuring out what you actually need to make? And that is usually a relative proposition; a project rarely exists in a vacuum!

We should never just uncritically go ahead with the first draft in anything, especially not in the protocols we use, as they have this annoying habit of sticking around once adopted and being very hard to change after the fact.


"We _thought_ that this would be a quiet, stealthy beta for us while we finished the technology and the client"

Then maybe don't invite journalists onto your Quiet Stealthy Beta!!!

I hope you get rest, work is not that important


The issue there is that journalists are a big part of why Twitter became mainstream, so it makes sense to invite them to your Twitter clone


Sure, but that's not really compatible with the "quiet, stealthy beta" thing OP claims they were aiming for. If it was meant to be a quiet beta journalists should probably have been invited at a later point.


It’s a waitlist plus invite codes given to users. We have had only a vague influence over who joined.


Isn't this just way more evidence you don't even know your users?


Can anyone know 65,000 people?


Thanks for taking the time to write this up. I’m thrilled that there is some motion happening to make social a bit less terrible and your write up is maybe the most objective, least rhetorical commentary I have seen on bluesky so far (side note, is it too late to change the name? I always end up calling it “BS”).

I’m still waiting for an invite to try it myself but I’m excited to see how it compares to mastodon.

I’m also curious to know if AT Protocol can be used for multiple service types. I’m more interested in a reddit replacement (lemmy) than a Twitter replacement (mastodon) and I’m hoping we can see another full scale fediverse come to fruition.


I have not looked much at AT, but it seems it solves many of the same problems as Matrix. Instead of redoing all the crypto and everything, why not build on Matrix? There were other Twitter-like things on Matrix before.

As far as I'm aware it was developed with some connection to Matrix, so this isn't a case of not knowing about it; there are likely some engineering reasons why this was not done.

Would be interesting to understand.


AT proto has some significant similarities to Matrix:

* Both work by self-authenticating git-style replication of Merkle trees/DAGs

* Both define strict data schemas for extensible sets of events (Matrix uses JSON schema - https://github.com/matrix-org/matrix-spec/tree/main/data/eve... and OpenAPI; AT uses Lexicons)

* Both use HTTPS for client-server and server-server traffic by default.

* Both are focused on decentralised composable reputation - e.g. https://matrix.org/blog/2020/10/19/combating-abuse-in-matrix... on the Matrix side, or https://paulfrazee.medium.com/the-anti-parler-principles-for... on the bluesky side, etc.

* Both are designed as big-world communication networks. You don't have the server balkanisation that affects ActivityPub.

* Both eschew cryptocurrency systems and incentives.

* Both have names which everyone complains about being hard to google, despite "AT protocol" and "Matrix protocol" or "Matrix.org" being trivial to search for :P

There are some significant differences too:

* Matrix aspires to be the secure communication layer for the open web.

* AT aspires (I think) to be an open decentralised social networking protocol for the internet.

* AT has portable identity by default. We've been working on this on Matrix (e.g. MSC1228 - https://github.com/matrix-org/matrix-spec-proposals/pull/122... and MSC2787 - https://github.com/matrix-org/matrix-spec-proposals/blob/nei...) and have a new MSC (and implementation on Dendrite) in progress right now which combines the best bits of MSC1228 & MSC2787 into something concrete, at last. In fact the proto-MSC is due to emerge today. EDIT: and here it is: https://github.com/matrix-org/matrix-spec-proposals/blob/keg...

* AT is proposing an asymmetrical federation architecture where user data is stored on Personal Data Servers (PDS), but indexing/fan-out/etc is done by Big Graph Servers (BGS). Matrix is symmetrical and by default federates full-mesh between all servers participating in a conversation, which on one hand is arguably better from a self-sovereignty and resilience perspective - but empirically has created headaches where an underpowered server joins some massive public chatroom and then melts. Matrix has improved this by steady optimisation of both protocol and implementation (i.e. adding lazy loading everywhere - e.g. https://matrix-org.github.io/synapse/latest/development/syna...), but formalising an asymmetrical architecture is an interesting different approach :)

* AT is (today) focused on public conversations (e.g. prioritising big-world search and indexing etc), whereas Matrix focuses both on private and public communication - whether that's public chatrooms with 100K users over 10K servers, or private encrypted group conversations. For instance, one of Matrix's big novelties is decentralised access control without finality (https://matrix.org/blog/2020/06/16/matrix-decomposition-an-i...) in order to enforce access control for private conversations.

* Matrix also provides end-to-end encryption for private conversations by default, today via Double Ratchet (Olm/Megolm) and in the nearish future MLS (https://arewemlsyet.com). We're also starting to work on post quantum crypto.

* Matrix is obviously ~7 years older, and has many more use cases fleshed out - whether that's native VoIP/Video a la Element Call (https://element.io/blog/introducing-native-matrix-voip-with-...) or virtual worlds like Third Room (https://thirdroom.io) or shared whiteboarding (https://github.com/toger5/TheBoard) etc.

* AT's lexicon approach looks to be a more modular way to extend the protocol than Matrix's extensible event schemas - in that AT lexicons include both RPC definitions as well as the schemas for the underlying datatypes, whereas in Matrix the OpenAPI evolves separately to the message schemas.

* AT uses IPLD; Matrix uses Canonical JSON (for now)

* Matrix is perhaps more sophisticated on auth, in that we're switching to OpenID Connect for all authentication (and so get things like passkeys and MFA for free): https://areweoidcyet.com

* Matrix has an open governance model with >50% of spec proposals coming from the wider community these days: https://spec.matrix.org/proposals

* AT has done a much better job of getting mainstream uptake so far, perhaps thanks to building a flagship app from day one (before even finishing or opening up the protocol) - whereas Element coming relatively late to the picture has meant that Element development has been constantly slowed by dealing with existing protocol considerations (and even then we've had constant complaints about Element being too influential in driving Matrix development).

* AT backs up all your personal data on your client (space allowing), to aid portability, whereas Matrix is typically thin-client.

* Architecturally, Matrix is increasingly experimenting with a hybrid P2P model (https://arewep2pyet.com) as our long-term solution - which effectively would end up with all your data being synced to your client. I'd assume bluesky is consciously avoiding P2P having been overextended on previous adventures with DAT/hypercore: https://github.com/beakerbrowser/beaker/blob/master/archive-.... Whereas we're playing the long game to slowly converge on P2P, even if that means building our own overlay networks etc: https://github.com/matrix-org/pinecone

I'm sure there are a bunch of other differences, but these are the ones which pop to the top of my head, plus I'm far from an expert in AT protocol.

It's worth noting that in the early days of bluesky, the Matrix team built out Cerulean (https://matrix.org/blog/2020/12/18/introducing-cerulean) as a demonstration to the bluesky team of how you could build big-world microblogging on top of Matrix, and that Matrix is not just for chat. We demoed it to Jack and Parag, but they opted to fund something entirely new in the form of AT proto. I'm guessing that the factors that went into this were: a) wanting to be able to optimise the architecture purely for social networking (although it's ironic that ATproto has ended up pretty generic too, similar to Matrix), b) wanting to be able to control the strategy and not have to follow Matrix's open governance model, c) wanting to create something new :)

From the Matrix side; we keep in touch with the bluesky team and wish them the best, and it's super depressing to see folks from ActivityPub and Nostr throwing their toys in this manner. It reminds me of the unpleasant behaviour we see from certain XMPP folks who resent the existence of Matrix (e.g. https://news.ycombinator.com/item?id=35874291). The reality is that the 'enemy' here, if anyone, are the centralised communication/social platforms - not other decentralisation projects. And even the centralised platforms have the option of seeing the light and becoming decentralised one day if we play our parts well.

What would be really cool, from my perspective, would be if Matrix ended up being able to help out with the private communication use cases for AT proto - as we obviously have a tonne of prior art now for efficient & audited E2EE private comms and decentralised access control. Moreover, I /think/ the lexicon approach in AT proto could let Matrix itself be expressed as an AT proto lexicon - providing interop with existing Matrix rooms (at least semantically), and supporting existing Matrix clients/SDKs, while using AT proto's ID model and storing data in PDSes etc. Coincidentally, this matches work we've been doing on the Matrix side as part of the MIMI IETF working group to figure out how to layer Matrix on top of other existing protocols: e.g. https://datatracker.ietf.org/doc/draft-ralston-mimi-matrix-t... and https://datatracker.ietf.org/doc/draft-ralston-mimi-matrix-m... - and if I had infinite time right now I'd certainly be trying to map Matrix's CS & SS APIs onto an AT proto lexicon to see what it looks like.

TL;DR: I think AT proto is cool, and I wish that open projects saw each other as fellow travellers rather than competitors.


> Both define strict data schemas for extensible sets of events (Matrix uses JSON schema

Matrix uses JSONSchema to define event schemas, but how can they be considered strict if the Matrix spec doesn't specify that any of them have to be validated apart from the PDU fields and a sprinkling of authorization events?

> Matrix has an open governance model with >50% of spec proposals coming from the wider community these days.

Do you have a percentage for the proportion of spec proposals from the wider community making it into spec releases?


This is a great comment, and I think one of the top comments on this post in that you actually know what you’re talking about. I found the breakdown comparing the protocols’ similarities and differences really enlightening.

It’s refreshing to see this perspective, and I appreciate trying to quash the us-vs-them thing, so thanks.


Ah, this is a wonderful analysis! Thank you, I've favorited it. :)

I'd love to get the decentralized protocols to work together. I work on braid.org, where we want to find standards for decentralized state sync, and would love to help facilitate a group dialogue. Hopefully I can connect with you more in the future.


have been tracking braid since the outset - providing state sync at the HTTP layer itself is a really interesting idea. would be great to figure out how to interop the various DAG based sync systems, or at least map between them. very happy to chat about this!


> it's super depressing to see folks from ActivityPub and Nostr throwing their toys in this manner

it's literally just one guy on a Mastodon somewhere


Matrix has implied E2E encryption, which most of the incumbent players do not like too much...


I'm still positive about Bluesky, but the company hasn't yet proven itself the way many non-profits have. You could clear up a lot of misconceptions by having multiple demonstrations of the protocol:

If I don't want to use the BGS, how can I access the PDSs of the people I follow and get, e.g., the 10 latest entries for each?

Starting from a handle, let's say @pfrazee.com, how do I fetch your content using curl without the BGS?

Of course, there are also other issues regarding monetization, and if BGS becomes the de facto way to get content, then some PDSs might become locked to your official BGS.

I hope you are willing to work with known non-profits such as Mozilla to form a wider consortium of players in the BGS space, which is mostly inaccessible to self-hosters.


I would definitely be curious for elaboration on what requirements the project had that weren't met by OpenAPI or gRPC or Cap'n Proto or Apache Thrift or any of the other existing things that solve this general category of problem.


Hi, original author here. Some comments:

> We sign the records so that authenticity can be determined without polling the home server, and we use a repository structure rather than signing individual records so that we can establish whether a record has been deleted (signature revocation).

Why do you need an entirely separate protocol to do this? Email had this exact same problem, yet was able to build protocols on top of it in order to fix the authenticity problem. This is the issue: instead of using ActivityPub, which is simpler to implement, more generic, and significantly easier for developers to understand, you invented an overly-complex alternative that doesn't work with the rest of the federated Internet.

> The schema is a well-defined machine language which translates to static types and runtime validation through code generation. It helps us maintain correctness when coordinating across multiple servers that span orgs, and any protocol that doesn't have one is informally speccing its logic across multiple codebases and non-machine-readable specs.

OpenAPI specs already exist and do the same job. They support much more tooling and are much easier for developers to understand. There is objectively no reason why you could not have used them; you are literally just making GET and POST requests with XRPC. If you really wanted to you could've used GraphQL.
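
To illustrate the point, this is roughly what an XRPC call looks like as plain HTTP. The endpoint name is taken from elsewhere in this thread; the request and response field names here are my assumption, so check the lexicon before relying on them:

    // Sketch only: an XRPC "procedure" is an HTTP POST with a JSON body.
    async function createSession(host: string, identifier: string, password: string) {
      const res = await fetch(`${host}/xrpc/com.atproto.server.createSession`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ identifier, password }),
      });
      if (!res.ok) throw new Error(`XRPC call failed: ${res.status}`);
      return res.json(); // expected to include an access token for later calls
    }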

There are plenty of protocols which do not include machine-readable specs (including TCP, IP, and HTTP) that are incredibly reliable and work just fine. If you make the protocol simple to understand and easy to implement, you really don't need this (watch Simple Made Easy by Rich Hickey).

> The DID system uses the recovery key to move from one server to another without coordinating with the server (ie because it suddenly disappeared).

Why is this necessary? The likelihood of a server just randomly disappearing is incredibly low. There are community standards and things like the Mastodon Server Covenant that make this essentially a non-issue. You're storing all of a user's post history on their own device in the case of an immediate outage. That's equivalent to Gmail storing all of your emails on your device in case you want to immediately pack up and move to another email provider. That is an extremely high cost (I have 55k tweets, that would be a nightmare to host locally) for an outcome that is very unlikely.

> It supports key rotations and it enables very low friction moves between servers without any loss of past activity or data.

This forces community servers to store even more data, data that may not even be relevant or useful. Folks might have gigabytes of attachments and hundreds of thousands of tweets. That is not a fast or easy thing to import if you're hosting a community server. This stacks the decks against community servers.

Most people want some of their content archived, not all, and there is no reason why archival can't be separate from where content is posted. Those can be two separate problems.

> That design is why we felt comfortable just defaulting to our hosting service; because we made it easy to switch off after the fact if/when you learn there's a better option. Given that the number one gripe about activitypub's onboarding is server selection, I think we made the right call.

Mastodon is able to do this on top of ActivityPub. Pleroma works with it. Akkoma works with it. There's already a standard for this. Why are you inventing an unnecessary one?

Mastodon also changed their app to use Mastodon.Social as the default server, so this is a non-issue.


I think it’s important to say this: I think asking questions is great, and I’m glad that we’re not just taking statements at face value because making social suck less is a worthy goal.

However, you are coming across as highly adversarial here. Mostly because you immediately follow your questions with assertions, indicating that your questions may be rhetorical rather than genuine.

I’m not accusing you of anything per se, but I very much want a dialog to happen in this space and I think your framing is damaging the chances of that happening.


Whether on Twitter or Mastodon, people deep into that type of social network love TO SHOUT LIKE THIS to get likes or boosts.

It is why passersby like me can't get into either Twitter or Mastodon when it is a culture of getting outraged and shouting at each other, to collect a choir of people nodding and agreeing in the replies: "well done for saying it like it is."

These people forgot how humans talk and have arguments outside of their Internet echo chambers.

Anger, insults and hate sell more (create more engagement) than reasoned arguments. No one would have posted this on HN if it were otherwise. So don't worry, they are not hurting their chances; the next topic that can be summarised with an angry title like "xxx is the most obtuse crock of shit" will get great traction on HN.


They're explicitly not debating in good faith:

"Also I don't care if I'm spreading FUD or if I'm wrong on some of this stuff. I spent an insane amount of time reading the docs and looking at implementation code, moreso than most other people. If I'm getting anything wrong, it's the fault of the Bluesky authors for not having an understandable protocol and for not bothering to document it correctly."

(https://urbanists.social/@sam/110340956133434975)


Yeah, the way he conflates "crypto" to refer to both cryptography and cryptocurrency, and the rhetoric itself, is quite odd: https://urbanists.social/@sam/110340265606422596

It's unfortunate because there are some valid points in his criticism.


"crypto" was used to mean cryptography long before it was used to mean currency, and in some circles still primarily means cryptography.


> in some circles still primarily means cryptography.

It still does in my circle. The overloading of "crypto", though, has become such a source of confusion and misunderstanding that I have stopped using it and just use the full word, be it cryptography or cryptocurrency, instead.


I don't think it's "not in good faith" to say "I made a real substantial effort to understand this, and am trying to describe it accurately; if at this point my descriptions don't match the reality, it's not my fault but that of the people who made it impossible to understand".

(Of course it's perfectly possible, for all I know, that SW is not debating in good faith. But what you quote doesn't look to me like an admission of bad faith.)


I don't see what charitable take could possibly be made wrt "I don't care if I'm spreading FUD" even with all these caveats.


Well, I thought I already described what seemed to me to be a charitable and reasonable take on it.

"I put as much effort in as can reasonably be expected; I tried to evaluate it fairly; but the documentation and supporting code is so bad that I may have made mistakes. If so, blame them for making it impossible to evaluate fairly, not me for falling over their tripwires."

If something is badly documented and badly implemented, then I think it's OK to say "I think this is badly designed" even if you found it incomprehensible enough that you aren't completely certain that some of what looks like bad design is actually bad explanation.

If some of the faults you think you see are in fact "only" bad documentation, then in some sense you're "spreading FUD". But after putting in a certain amount of effort, I think it's reasonable to say: I've tried to understand it, I've done my best, and they've made that unreasonably difficult; any mistakes in my account of what they did are their fault, not mine.

(I should reiterate that I haven't myself looked at the AT protocol or Bluesky's code or anything, and I don't know how much effort SW actually put in or how skilled SW actually is. It is consistent with what I know for SW to be just maliciously or incompetently spreading FUD, and I am not saying that that would be OK. Only that what SW is admitting to -- making a reasonable best effort, and possibly getting things wrong because the protocol is badly documented -- is not a bad thing even when described with the words "I don't care if I'm spreading FUD".)


I agree, thank you for stating that in a respectful way.

The linked article/toot and certain replies seriously make me want to just shut down my personal Mastodon server and move on from the technology altogether.


> The likelihood of a server just randomly disappearing is incredibly low. There are community standards and things like the Mastodon Server Covenant that make this essentially a non-issue.

This has actually happened. It's a real problem. For example, "Mastodon instance mstdn.plus with over 4K users suddenly broke" https://lapcatsoftware.com/articles/mastodon.html

As far as I'm concerned, the Mastodon Server Covenant is a joke.


I came here to say this.

Another example: Mastodon.lol, which had 12,000 users, literally shut down a few hours ago. They did manage to give notice, but the point remains that people had to move instances, cannot take their posts with them, and it’s a giant PITA, server covenant or not.

To call this stuff a “non-issue” seems incredibly obtuse, especially when the data portability piece is clearly an afterthought by the Mastodon devs, and something that ActivityPub would need some major changes to accomplish. Changes that the project leads have been fairly against implementing.


there's also a trend of most servers not even being compliant with the 'covenant'


Y’all should see the dead letters in the publish queues from dead indie servers; thousands have gone offline, but their addresses will get looked up forevermore


> Email had this exact same problem, yet was able to build protocols on top of it in order to fix the authenticity problem.

On the contrary, email has no solution to the authenticity problem that’s being talked about. Even what there is is a right mess and not even slightly how you would choose to build such a thing deliberately.

If you want to verify authenticity via SPF/DKIM/DMARC, you have to query DNS on the sender’s domain name. This works to verify at the time you receive the email, but doesn’t work persistently: in the future those records may have changed (and regular DKIM key rotation is even strongly encouraged and widely practised).

What you are replying to says that AT wants to be able to determine authenticity without polling the home server, and establish whether a record has been deleted. Email has nothing like either of those features.
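
To make that concrete: verifying a DKIM signature means fetching the signer's current public key from DNS at that moment, roughly like this (selector and domain are illustrative):

    // Sketch: a DKIM verifier resolves <selector>._domainkey.<domain> to get
    // the signer's public key. If that TXT record is rotated or removed later,
    // the same signature can no longer be checked, which is the point above.
    import { promises as dns } from "node:dns";

    async function fetchDkimKey(selector: string, domain: string): Promise<string> {
      const records = await dns.resolveTxt(`${selector}._domainkey.${domain}`);
      // Each TXT record arrives as string chunks; join them back together.
      return records.map((chunks) => chunks.join("")).join("");
    }

    // e.g. fetchDkimKey("selector1", "example.com") -- values are illustrative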


I think they're talking about GPG, not SPF/DKIM/DMARC.

Which is a risky thing to do, because most people don't associate GPG with positive feelings about well designed solutions, but they're right in that it works well, solves the problem and is built squarely on top of email.

The reason that it's not generally well received is that there's no good social network for distributing the keys, and no popular clients integrate it transparently.


In this case GPG, DKIM and even S/MIME are on equal standing. Validity can be checked only on reception because there are no validity-stapling mechanisms.


I’m curious about this. So email that I’ve sent, let’s say from a gmail account to an iCloud account, isn’t guaranteed to be verifiable years later because of dkim key rotation?

That’s not great. I wonder if the receiver could append a signed message upon receipt with something like “the sender’s identity was valid upon receipt”.


The receiver absolutely does that with the Authentication-Results header, but can you trust its integrity in your mailbox, your email provider and all your email clients (to not modify it)? It's indeed not great for non-repudiation.


> I wonder if the receiver could append a signed message upon receipt with something like “the sender’s identity was valid upon receipt”.

That's exactly what does happen: if you view the raw message in GMail/iCloud, you should see a DMARC pass/fail header added by the receiving server (iCloud in your example).

(Well not exactly, it's not signed, but I'm not sure that's necessary? Headers are applied in order, like a wrapper on all the content underneath/already present, so you know in this case it was added by iCloud not GMail, because it's coming after (above) 'message received at x from y' etc.)
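
Concretely, the receiving server's verdict ends up in an Authentication-Results header that looks roughly like this (server and domain values illustrative):

    Authentication-Results: mx.icloud.example;
      spf=pass smtp.mailfrom=gmail.com;
      dkim=pass header.d=gmail.com;
      dmarc=pass header.from=gmail.com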


Thanks for the response. Do you know if this extra “DKIM sig was verified” header is part of a protocol, or is it just something that is done because otherwise bad stuff happens?

I’m also curious how the original comment about DKIM/SPF/DMARC not being sufficient due to key rotation factors into the conversation, now that we've discussed this.


I'm not sure off the top of my head, I'd guess it's a MAY or SHOULD. Verifying DKIM/SPF/DMARC is optional anyway, if you want to just read everything without caring you can; you've received the message by that point, I can't see what bad stuff would happen if it wasn't added.

Key rotation would have the same effect as 'DNS rotation' (if you stopped leasing the domain, or changed records) - you might get a different result if you attempted to re-verify later.

I just don't really see it as a problem, you check when you receive the message; why would you check again later? (And generally you 'can't', not as a layman user of GMail or whatever - it's not checked in the client, but the actual receiving server. Once it's received, it delivers the message, doesn't even have it to recheck any more. Perhaps a clearer example: if you use AWS SES to receive, ultimately to an S3 bucket or whatever for your client or application, SES does this check, and then you just have an eml file in S3, there's no 'hey SES take this message back and run your DKIM & virus scan on it again'.)


It's just for humans, it's not usually used for anything else. For machines we have ARC (Authenticated Received Chain) which basically contains almost the same info but signed across the entire chain.


The notion that server disappearance is a non-issue is quite misleading. Servers go offline for various reasons, such as technical difficulties, financial constraints, or legal issues. Recovering and transferring data without relying on the original server is essential for users to maintain control over their data and identities. DIDs and recovery keys provide a valuable solution to this problem, ensuring user autonomy.

Your reply fails to address that push-based systems are prone to overwhelming home servers due to burst loads when content becomes viral. By implementing pull-based federation, the AT Protocol allows for a more balanced and efficient distribution of resources, making self-hosting more affordable and sustainable in the long run.


> The likelihood of a server just randomly disappearing is incredibly low.

Everything else aside, this is completely untrue.

I self-hosted my first fediverse account on Mastodon and got fed up with the complexity of it for a single person instance and shut it off one day (2018 or so?).

On another account, at some point 50% of the people I followed vanished because the 2 servers that everyone in that bubble was on just went offline. Took a while to recreate the list manually.

This may be anecdotal but I've seen it happen very often. Also people on small instances blocking mastodon.social for its mod policies comes close to this experience.


Alternatively: the likelihood of any one server going away tomorrow is small, but the likelihood of something in your social graph going away tomorrow is high.


> I have 55k tweets, that would be a nightmare to host locally)

they're tweets, how much could they cost? @ 280 bytes each, that's like 15MB. double it for cryptographic signatures and reply-to metadata. is that really too much to ask for the capacity to transfer to another host at any time?

(also, leaving aside the fact that 55k tweets puts you in the 0.1% of most prodigious users)


I have every post made on BlueSky up to a certain point last weekend and it's only 3 GB.

I have every email I've ever received or sent (and not deleted) and it's only 4GB.

Should something require that I download all that every time I log in? No. But having a local copy is amazing, and a truly federated system should support local copies and even be able to depend on them.

The Mastodon Server Covenant is a joke; the only enforcement is to remove the server from the list of signup servers, which will not matter if it just fell over dead because the admin died/doesn't care/got arrested/got a job.


How have you pared down your email to just 4GB?


Not sure, I guess I don't send or receive many attachments and delete marketing/spam.

My work email is 10gb.


As of my last Twitter export, I had 54425 tweets and the tweet data comes to 110M. But there's also 2G of media files that goes with it.


So… it not only fits in every popular cloud storage provider's free tier, but also on your phone.

Seriously. This is setting off all sorts of red flags. I’m old enough not to trust non-W3C standards.


How did we get to 55k tweets being a nightmare for any social media platform?

A quick search got me to Twitter stats from 2013, when people were posting 200 billion tweets per year. That's 6-7 orders of magnitude more. You don't get a 10000x improvement just by federating and hosting multiple nodes.


The discussion here was about archiving each user's tweets on their own client device - this is where the 55k was brought up as a problem. I still think it's a low number, even if it includes plenty of images.


With a decent amount of images and videos this can easily be 100+GB. Even if it's a fraction of that, not something I want to sync down to my device.


> double it for cryptographic signatures and reply-to metadata

Ah, email, where a message of 114 characters with no formatting ends up over 9KB due to authentication and signatures, spam analysis stuff, delivery path information and other metadata. Sigh. Although I doubt this will end up as large as email, the lesson is that metadata can end up surprisingly large.

In this instance, I think 1–2KB is probably more realistic than the half kilobyte of “double it”.


There’s also all the media to go along with them.


Pretty sure a nontrivial percentage of people's smartphones have that many thumbnails for the photo gallery app alone.

Anybody who has had a smartphone for a decade likely has at minimum 10k photos in their cloud locker with local thumbnails.


Ok, so add some more megabytes to that. Most people don't have that much microblogging data.


I actually think photos could potentially add up to quite a lot!


Sure, they could. Most people don't post tons of hi res photos. But I'm sure there are ways you could optimize to not have all the content on local device, if it's such a big deal. But this is a really strange point to me to be hung up on.


I think you’d be surprised at the number of photos posted, but also many people post tons of gifs (especially reaction gifs) which are fairly large.


"The likelihood of a server just randomly disappearing is incredibly low."

No. Just no.

If (IF!) some distributed social network breaks through and hundreds of millions or billions of people are participating, they are going to do things that The Powers That Be don't like. For better or worse, when that happens they will target servers, and servers WILL just disappear. Domains will disappear. Hosting providers will disappear. You can take that straight to the bank and cash it.

Uncoordinated moves are table stakes for a real distributed social network at scale. The fact that the AT Protocol provides this affordance on day one is greatly to its credit.


> That's equivalent to Gmail storing all of your emails on your device in case you want to immediately pack up and move to another email provider. That is an extremely high cost (I have 55k tweets, that would be a nightmare to host locally) for an outcome that is very unlikely.

If your identity is separate from your Gmail account (as it can be with a custom domain, for email and for bluesky), this seems like a very plausible and desirable thing to be able to do. Just recently there was an article about how Gmail is increasing the number of ads in the inbox; for some people that might change the equation of whether Gmail's UX is better than it is bad. If packing up and leaving is low-friction enough, people might do it (and that would also put downward pressure on the provider to not make the experience suck over time)

And that's not even getting into things like censorship, getting auto-banned because you tripped some alarm, hosts deciding they no longer want to host (which has happened to some Mastodon instances), etc.


> The likelihood of a server just randomly disappearing is incredibly low.

It happens all the time. mastodon.social, the oldest and biggest Mastodon instance, has filled up with cached ghost profiles of users on dead instances. Last I checked, I could still find my old server in there, which hasn't existed for several years.


Email has only solved the "authenticity problem" by centralizing to a tiny number of megaproviders with privileged trusted relationships. Forestalling that sort of "solution" seems to me one of the Blueksy team's design goals.

Servers go down or get flaky all the time for various reasons. Easy relocation (with no loss of content & relationships) and signed content (that remains readable/verifiable even through server bounciness) soften the frustrations.

55k tweets is little challenge to replicate, just like 50k signatures is little challenge to verify, here in the 2020s.

If Mastodon does everything better with a head start, it should have no problem continuing to serve its users, and new ones.

Alas, even just the Mastodon et al community's emphasis on extreme limits on visibility & distribution – by personal preferences, by idiosyncratic server-to-server discourse standards, by sysop grudges, whatever – suppresses a lot of the 'sizzle' that initially brought people to Twitter.

Bluesky having an even slightly greater tilt towards wider distribution, easier search, and relationships that can outlive server drama may attract some users who'd never be satisfied by Mastodon's twisty little warrens & handcrafted patterns-of-trust.

There's room for multiple approaches, different strokes for different folks.


> Why do you need an entirely separate protocol to do this? Email had this exact same problem, yet was able to build protocols on top of it in order to fix the authenticity problem.

But if we started today, we wouldn't build email that way. There are so many baked-in well-intended fuckups in email that reflect a simpler time where the first spam message was met with "wtf is this, go away!" I remember pranking a teacher with a "From: president@whitehouse.gov" spoofed header in the 90s.

Email is the way it is because it can't be changed, not because it shouldn't be.


I'm sorry but this is ridiculous. Just because a protocol exists doesn't mean that if someone doesn't build on top of it, you can describe it as a crock of shit.


> > The DID system uses the recovery key to move from one server to another without coordinating with the server (ie because it suddenly disappeared).

> Why is this necessary? The likelihood of a server just randomly disappearing is incredibly low.

The likelihood of a server just randomly disappearing at any point in time is low. The likelihood of said server disappearing altogether, based on the 20+ years of the internet, can & will approach 100% as the decades go on. Most of the websites I know in the early 2000s are defunct now. Heck, I have a few webcomic sites from the 2010s in my bookmarks that are nxdomain'd.

Also, as noted by lapcat, these sudden server disappearances will happen. Marking this problem as a non-issue is not, in any realm of possibility, a good UX decision.

https://news.ycombinator.com/item?id=35883409

This is coupled with the fact that Mastodon (& ActivityPub in general) don't have to do anything when it comes to user migration: The current system in place on Mastodon is completely optional, wherein servers can simply choose to not allow users to migrate.

https://news.ycombinator.com/item?id=35883570

https://news.ycombinator.com/item?id=35884682

> There are community standards and things like the Mastodon Server Covenant that make this essentially a non-issue.

*The Covenant is not enforced in code by Mastodon's system, nor by ActivityPub's protocol.* It's heavily reliant on good faith & manual human review, with no system-inherent capabilities to check whether the server actually allows user data to be exported.

> You're storing all of a user's post history on their own device in the case of an immediate outage. That's equivalent to Gmail storing all of your emails on your device in case you want to immediately pack up and move to another email provider. That is an extremely high cost (I have 55k tweets, that would be a nightmare to host locally) for an outcome that is very unlikely.

An outcome *that can still happen*. As noted by the incidents linked above, they're happening within the Mastodon platform itself, with many users from those incidents being unable to fully recover their own user data. Assuming that this isn't needed at all is the equivalent of playing with lightning.


The recovery key bit is the one part I actually like.

But improving on the ActivityPub user migration story is also a minor/trivial change away from doing much better than today: you just need to change ActivityPub IDs to either a fully content-addressable hash or a reference to a base that is under user control, plus a revocation-key-style mechanism for letting the user sign claims about their identity in order to allow unilateral moves.
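
A rough sketch of that idea (my own illustration, not an existing ActivityPub mechanism):

    // Derive an object's ID from a hash of its canonical bytes rather than
    // from the hosting server's URL, so the ID stays meaningful wherever the
    // object ends up being served from after a move.
    import { createHash } from "node:crypto";

    function contentAddressedId(canonicalBytes: Buffer): string {
      return "hash:sha256:" + createHash("sha256").update(canonicalBytes).digest("hex");
    }

    // A unilateral move is then a statement like "objects under this key are
    // now served from newBase", signed with the user's key; see the signing
    // sketch further down the thread for the verify half.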


you say "objectively" a lot when most of what you write seems to be just overly emotional flashy wringing


> Mastodon also changed their app to use Mastodon.Social as the default server, so this is a non-issue.

They have instead created another issue.

Before, there was a usability issue where normal users looking for an alternative social network got confused by the sign-up process and gave up.

If that wasn’t an issue why did the Mastodon devs decide to select a default server in the app after seeing this?

Now, they have traded that off and created a centralization issue going against the point of encouraging federation.

This only shows that centralization wins in the end.


> You're storing all of a user's post history on their own device in the case of an immediate outage. That's equivalent to Gmail storing all of your emails on your device in case you want to immediately pack up and move to another email provider.

In the world I currently live in, all my emails are stored locally on my devices. Also, text files take up little to no storage, so why does it matter?


> Why is this necessary? The likelihood of a server just randomly disappearing is incredibly low. There are community standards and things like the Mastodon Server Covenant that make this essentially a non-issue.

I literally read about a case over a month ago where some obscure Mastodon-server admin blocked someone's account on their server so it was impossible to move to another instance. The motivation was "I don't want capitalist here, can change my mind for money" (slightly paraphrasing). Basically, it's stupid to use any Mastodon instance other than the few largest ones or your own.

That's why BlueSky's approach makes sense.

> with the rest of the federated Internet.

You're saying it like it's a thing that won and not a niche project for <10M users globally.


Y’all are doing amazing work and bsky is a joy to use and I’m really excited for the technical journey.

I’m guessing with more success will come more haters unfortunately. :-/


Some of this seems very familiar, in a good way, which makes me very interested in Bluesky and the AT protocol. I worked with XRIs and was on the XDI TC. I also was on the fringes of some of the early DID spec work, and experimented with Telehash for a while.

I know Jeremie Miller is on the board, is Telehash or something similar being used within Bluesky?

Also, I'm sure you get this a lot, but I'd love a BlueSky invite please.

=Bill.Barnhill w dot a dot barnhill at gmail.com


Seems like a sane approach.

IMHO, any protocol that isn't signing content (like Mastodon) is merely moving the problem. Signatures allow people to authenticate content and sources and to build networks of trust on top of that. So Bluesky is getting that right. Unsigned content should not be acceptable in this century.

Signed content immediately solves two issues:

- reputation: reputation is based on a history of content, liked and appreciated by trusted sources, that is associated with an identity and its set of public keys. You can know 100% for sure whether content is reputable or not: either it is signed by some identity with a known reputation or it is not.

- AI/bots could of course sign content with some key, but it would be hard to fake reputation. Not impossible of course, but you'd have to work at it for some time to build up the reputation. And people can moderate content and destroy the reputation of the identity and keys.

The whole problem with essentially all social media networks so far is a complete and utter lack of trustworthiness. You could be reading content by an AI, a seemingly bonafide comment from somebody you trust might actually come from some Chinese, Russian or North Korean troll farm, or you are just wading through mountains of click bait posted by "viral" marketing companies, scammers, or similarly obnoxious/malicious publishers. Twitter's blue tick is laughably inadequate for dealing with this problem. And people can yell whatever without having to worry about their reputation; which causes them to behave in all sorts of nasty ways.

Signed content addresses a lot of that. You can still choose to be nasty, obnoxious, misleading, malicious, etc. but not without staking and risking your reputation.
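
A minimal sketch of the mechanics (key distribution and revocation, the genuinely hard parts, are omitted):

    // Sketch: an author signs the canonical bytes of a post with an Ed25519
    // key; a reader verifies against the public key they already associate
    // with that author's reputation.
    import { generateKeyPairSync, sign, verify } from "node:crypto";

    const { publicKey, privateKey } = generateKeyPairSync("ed25519");

    const post = Buffer.from(JSON.stringify({ text: "hello fediverse", seq: 1 }));
    const signature = sign(null, post, privateKey);

    // A reader who trusts this publicKey (because of its posting history) checks:
    const authentic = verify(null, post, publicKey, signature);
    console.log(authentic); // true: the content really came from that keyholder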

Mastodon not having a lot of these issues (yet) is more a function of its relative obscurity than of any built-in features. I like Mastodon mainly because it still feels a bit like Twitter before that got popular. But it's not sustainable. If a few hundred million people join, it will get just as bad as other networks. In other words, I don't see how this could last unless they address this. I don't think that this should be technically hard. You need some kind of key management and a few minor extensions to the protocol. The rest can be done client side.

That would be more productive than this rant against the AT protocol.


How much of this is actually implemented? Correct me if I'm wrong but bluesky doesn't actually implement federation yet?

As someone who wants to like bluesky, I feel like a lot of the scepticism comes from the seeming prioritisation of a single centralised server, over federation, for what is "sold" as a decentralised system.


my advice is don't stress, whatever you do some people will not like it

for some reason people think what we have now is good, and they say "don't reinvent the wheel", but there is no wheel, what we have is just garbage. 50 years later we still can't beat the "unix pipe"

we have to keep trying to make a wheel


Thanks for the great perspective.

I want to say that "if there was something off the shelf that met all our needs we would've used it" has been the justification for many over-engineered, not-invented-here projects. On the other hand, in many cases it is completely legitimate.


Not sure if you'll see this, but I work for an internal tooling startup and I'm trying to mess around with your API. Our tool has support for REST, plus Python/JS support with a limited set of libraries. I've been trying to figure out how to connect to the API via curl so I can write some blog posts about building with y'all, and I've been struggling.

I can get com.atproto.server.createSession to return me a token, but then when I try to transition into another endpoint (such as app.bsky.richtext.facet) I only get 404s. Are there any examples of using y'all with REST? Happy to take my question elsewhere if y'all have any place for folks to ask questions.


There's a Discord server for Bluesky/ATProto developers. I would try there.

https://discord.gg/3srmDsHSZJ


Please consider renaming your protocol to something that is possible to Google for.


It is literally the first result for "atproto", "at proto", "atprotocol" and "at protocol" on Google. How much more Google-able would you like it to be?


> ATProto doesn't use crypto in the coin sense. It uses cryptography. The underlying premise is actually pretty similar to git.

It's annoying how the word blockchain got associated with the worst of cryptocurrency excesses, isn't it?

Also:

https://medium.com/@shemnon/is-a-git-repository-a-blockchain...


Not to derail this thread, but is there any way of seeing whether I'm actually on the waitlist? I remember signing up late last year, but there was never any confirmation and I never got the survey which I've heard people talking about. I'm super interested to try out Bluesky, but haven't been able to find anyone with an invite.


Also would like to know this, I haven’t received any confirmation email or survey either, and each time I “join the waitlist” it just gives me a success message as if I’m not already on the list.


I would like to know this as well. I never got a confirmation and I could have sworn I signed up months ago when it first opened. I signed up again a few weeks ago and didn't receive a confirmation then either.

I suspect it's due to me using an uncommon tld for my primary email.

Email: blade -at- coates dot life.


Only 65K users? Why does Nostr have 10 times[1] as many users already? I guess you chose to close the doors and only let 65K users come in as beta testers?

[1] Users with a profile are at about 2.2 million, see : https://stats.nostr.band/


Because people are actually using this as a community now and there's a desire to not just open the floodgates before all the moderation pieces etc are in place. I joined when it was roughly 15-20k users and each wave of invites brought distinct subgroups. Then, the original short invite code tokens began getting brute forced and there was a large wave that completely disrupted the balance of things.

There's the protocol, but then there's also the community – and communities are ecosystems not just technical problems.


It's hard to remember sometimes that every project is building both technology and community.

Technical merits of the different approaches aside... this is a calm, kind and well reasoned response to a rather, uh, emotional critique. Thanks for bringing the discourse back to where it should be.


My two cents to the author: make an open-source reference implementation of your own protocol.


If you’re not implementing a cryptocurrency then how will you incentivize folks to host instances?

What DID are you using? Is it unique to AT or something more widely available?

Any tips for getting an invite to Bluesky for us nerds on here?


People do all sorts of things without cryptocurrency as an incentive

They host Mastodon instances, contribute to OSS, edit Wikipedia etc etc


Mastodon is not very popular. Contributing to OSS and Wikipedia is a time investment and not a capital contribution.



