> But what does this buy us? For the most part it means that every piece of content everyone writes needs to be replicated to every point in the federation mesh to be useful.
This is because ActivityPub is tied to RDF and linked-data concepts. This isn't specific to ActivityPub; it's a criticism of the linked-data web generally. As we will see later, the author doesn't quite realize that linked data can be fetched on demand and otherwise represented as just an IRI, reducing data replication to a minimum. It depends on the specific ActivityPub implementation.
> This isn’t creating a distributed network, this is creating a whole bunch of massive points of failure.
This is a pretty ungenerous characterization. It's a distributed system in which robustness is not part of the protocol itself; it has to be solved elsewhere, which leaves room for innovation.
> Meanwhile, what does this buy you? Immediacy of updates? Okay, does it really matter how quickly you see someone’s new content, if it’s worth seeing?
I think this is another ungenerous characterization. It buys a federated set of communities and applications that can exchange linked data in real time.
> For example, user-agents and RSS providers can use the If-Modified-Since: header to only get the feed updates if the content has changed since the last fetch.
ActivityPub also has a diffing mechanism in place, via Update activities.
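The conditional-fetch pattern the quote describes is cheap to implement on the client side. Here's a minimal sketch; the function names and the shape of the cache are mine for illustration, not from any particular feed reader:

```python
def conditional_headers(last_modified):
    """Headers for a conditional feed fetch. `last_modified` is the
    Last-Modified value remembered from the previous response, or
    None on the very first fetch."""
    return {"If-Modified-Since": last_modified} if last_modified else {}

def apply_response(status, response_last_modified, prev_last_modified,
                   cached_entries, fetched_entries):
    """Interpret the server's answer to the conditional GET."""
    if status == 304:
        # Not Modified: keep the cached entries and the old timestamp.
        return cached_entries, prev_last_modified
    if status == 200:
        # Feed changed: replace the cache, remember the new timestamp.
        return fetched_entries, response_last_modified
    raise RuntimeError(f"unexpected HTTP status {status}")
```

The point is that an unchanged feed costs the server one 304 response and zero bytes of body, which is what makes naive polling tolerable at scale.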
> Oh, and ActivityPub stuff generally also means that all attachments and other content also get replicated.
This is not true and is entirely implementation-specific. If Mastodon creates local caches of federated data, that's its design choice. Other applications can always fetch on demand and fall back to the pull model.
That being said, static sites are not served by ActivityPub, so that portion of the criticism is definitely spot on.
Atom is well-specified, and has provisions for feedback. But what user agents / popular libraries / whatever else actually uses the feedback (commenting) features? Which servers that serve Atom feeds support that, too, and how well?
Sure, you could probably force most of these cases with some hacked up RSS implementation, and I can also use a longsword to mow my lawn. I'm personally very excited to see what ActivityPub brings with it and projects like PeerTube show that it's an extremely capable protocol even if we're still learning how to best use it and implement it.
These are not "hacked up RSS"; they're just RSS (well, more often Atom or JSON Feed) combined with other technologies like Webmention. These are all, like ActivityPub, W3C or IETF recommendations/standards. I've mentioned Micro.blog before, but it's worth mentioning again: it's a centralized server producing a Twitter-esque timeline, but the sources of posts might be individual blogs using any number of different blogging engines. When I post articles to my WordPress site, people can reply on Micro.blog and have those replies show up both in their timelines and as actual comments on my blog. (And in fact, behind the scenes, a "hosted" Micro.blog site is actually using all the same technologies, with Jekyll as the blog engine.)
As much as I like many things about Mastodon, I'm not convinced that it's a superior philosophy to Micro.blog's; the dark cloud to federation's silver lining is that the admins of any instance that your social graph connects to (that is, you follow someone on that instance and/or vice-versa) can break those connections without warning and with no recourse to you, beyond moving to another instance and hoping it doesn't happen again.
And, speaking more generally, I think the author definitely has some points that shouldn't be dismissed out of hand: it's possible that ActivityPub might indeed be better than other feed protocols at solving certain problems, but it's worth looking at each use case and asking if that's really true in this case. PeerTube might be one that ActivityPub excels at and that feeds don't, for instance. (I don't see anything about PeerTube's implementation that proves what it's doing with ActivityPub couldn't be done just as easily with, say, JSON Feed, but I haven't dug into that.)
Overall I think your argument is very strong if you can provide an experience comparable to Mastodon. If not, then it's not clear why the benefits are worth it.
The catch right now is that Micro.blog is a paid service. I don't see any reason why someone couldn't set up something very similar built on the same tech--other than, of course, the willingness to put in the time and money. :)
"For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem."
Also, that user was completely right re: Dropbox, but they missed the market value of packaging all of that into a few-clicks install, on any platform, with 99% uptime (also, I believe that user has acknowledged in other threads that arguing Dropbox wasn't a value-add was a mistake). On the other hand, from my uneducated perspective on ActivityPub, it seems to be trading extra complexity for... push?
> It's funny how often that comment—which I made as a 22-year-old undergrad—resurfaces. Someone even reached out to me 2 weeks ago because they wanted to use it in an article as an example of "the disconnect between the way users and engineers see software"!
> I like to think that I've gained a lot of perspective over the last 11 years; it's pretty clear to me that point #1 was short-sighted and exhibited a lot of tunnel vision. Looking back, though, I still think that thread was a reasonable exchange. My 2nd and 3rd points were fair, and I conceded much of point 1 to you after your reply (which was very high quality).
> Obviously, we have the benefit of hindsight now in seeing how well you were able to execute. Kudos on that!
That is why we are using Atom on top of Pubsub in XMPP to build social networking features. The sub-layer is changing (Atom is embedded into Pubsub XML on top of TCP real-time streams, i.e. XMPP), but in the end the content is not "transformed", just "transported".
Atom already provides most of what ActivityPub provides regarding the description of the content (rich content, attachments, external links, comments, reply-to...).
You just need to put it on top of a layer that specifies how this content can be accessed, subscribed to, and delivered. In our case it's XMPP Pubsub, but it can also be plain HTTP requests. The good thing is that moving from one to the other is pretty easy.
This is a static page generated from Atom articles published on an XMPP Pubsub feed built from TechCrunch RSS/Atom feeds https://nl.movim.eu/?node/news.movim.eu/TechCrunch; we can even generate an Atom feed back from it easily. In a similar way you can handle personal "blogs" https://email@example.com with likes, shares and comments :)
The XMPP Standards Foundation has been working for more than 10 years now on integrating these concepts together and gluing existing standards together to deliver social features next to IM features and so on...
You don't have to create "new standards that will cover everything else" (see https://xkcd.com/927/) if you can just glue the existing ones together.
I'm really lovestruck by the capabilities of ActivityPub and I want to work on a federated Tumblr alternative, but I just haven't found the time with all the personal stuff I have going on currently. Ideally, though, you wouldn't store everything on your instance (unless you'd rather cache it; now THAT makes perfect sense to me). You'd let the browser request certain things via AJAX, and maybe your instance could tell the other instance, "hey guy, you keep asking me for this waaay too often, can you cache it for me? kthxbai", and those who don't do it would be considered broken instances and possibly rejected? Not sure; I feel like the protocol is still way too young. Also not sure about the security implications, but let's assume that sane SSL cryptomagic is a requirement for blogging instances (or just all of them).
A problem with opportunistic caching would be degraded user experience for the first user that sees some specific piece of content, since they'd have to wait (potentially for seconds) for their instance to fetch the content from the origin instance.
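To make the trade-off concrete, here's a rough sketch of the fetch-on-demand model being discussed; the class name and TTL are invented for illustration, not from any real implementation. Remote objects are stored only as IRIs and dereferenced (then cached) on first request, so only the first viewer pays the latency cost:

```python
import time

class FetchOnDemandCache:
    """Sketch of fetch-on-demand: remote objects are kept only as
    IRIs and dereferenced, then cached, the first time someone asks."""

    def __init__(self, fetch, ttl=3600, clock=time.time):
        self.fetch = fetch      # callable: iri -> deserialized object
        self.ttl = ttl          # seconds a cached copy stays fresh
        self.clock = clock      # injectable clock, handy for testing
        self._store = {}        # iri -> (expires_at, object)

    def get(self, iri):
        hit = self._store.get(iri)
        if hit and hit[0] > self.clock():
            return hit[1]       # cached copy: no network round trip
        obj = self.fetch(iri)   # first request pays the fetch latency
        self._store[iri] = (self.clock() + self.ttl, obj)
        return obj
```

Every request after the first is served locally until the TTL expires, which is exactly why only the first user sees the seconds-long stall.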
While you can use Atom/RSS with WebSub, WebSub doesn't expressly use any particular syndication format (it can distribute any content directly, not just Atom/RSS feeds) and hasn't required one, IIRC, since one of the PubsubHubbub drafts (I think 0.3, but maybe 0.4).
Maybe RSS could be dropped? Are there news readers that support RSS but not Atom?
Unfortunately yes. Gnus has not gotten Atom support yet (people keep using external programs to translate). Also far too many feeds still output RSS instead of Atom. RSS is a horrible format and there is no reason today to use it instead of Atom.
And there is existing tech for the popular reply/like/repost UX for this, based on Webmention.
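For the curious, the sending half of Webmention is tiny: per the W3C spec, you POST two form-encoded URLs (`source`, your reply; `target`, the post you're replying to) to the endpoint the target page advertises. A sketch, with endpoint discovery omitted and all URLs made up:

```python
import urllib.parse

def build_webmention(endpoint, source, target):
    """Build the HTTP request for sending a webmention: a
    form-encoded POST of `source` and `target` to the target's
    advertised endpoint."""
    body = urllib.parse.urlencode({"source": source, "target": target})
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return endpoint, headers, body
```

The receiving side then fetches `source`, verifies it really links to `target`, and decides how (or whether) to display it, which is where the reply/like/repost semantics come in.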
h-feed hasn’t been a massive success.
Usually things that integrate more into the whole indieweb ecosystem, but some are (plugins for) more traditional feed readers.
One of the problems I had with blog comments is spam. Blogger would queue up moderated comments and send you an email about them. The idea occurred to me that that was redundant and it would be better to cut out the middleman - you are getting email notifications about comments needing moderation, and email already had long established and much better spam filtering. Why not have people submit comments via email? That way you can also reply to people privately. This is also convenient for people who read their feeds via RSS-to-email or in newsreaders/email clients like Thunderbird and Gnus.
Another aspect of comments via email is that it is much more like a letter to the editor. Your blog posts are articles, and you can choose to update them with corrections and comments that people send in, instead of trying to manage a half-baked message board that is the standard LiveJournal/Blogger/Wordpress/Disqus comments section.
I think an easy and valuable way to get non-geeks to use web page feeds more is to provide the option to sign up for the feed via email, by offering a "newsletter" option.
The first things the e-mail ecosystem should do to become more adequate to the modern world: deprecate all code pages except UTF-8 for every part of a message (headers included); eliminate the overquoting tradition (including the full text of the message being replied to) and introduce an efficient thread-tracing mechanism instead; establish a more reasonable signature culture (i.e. people and software should stop appending redundant and useless information to the bottom of every message); and make Markdown a core part of the standard (while HTML should better be deprecated).
I also agree that email should be a personal preference. I think that DFeed shows how open discussion systems should be built. DFeed provides a unified discussion system that is simultaneously accessible through email, NNTP, a web interface, and Atom feeds. That would be a really nice way to offer a comment discussion system on a blog.
The equivalent scenario in polling land produces a better result: an oversubscribed producer that begins dropping requests will typically just cause a transient delay, as clients retry at a later time.
RSS feeds commonly don't include their full history - certainly this is true of the biggest providers like Blogspot.
Finally a good client will know to adjust its polling delay according to the traffic of a particular feed. This is a <5 line function, and something Google Reader was implementing a decade ago.
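Something along these lines; the halving factor and the bounds are invented for illustration, not Google Reader's actual numbers:

```python
def next_poll_delay(post_times, min_delay=300, max_delay=86400):
    """Pick a polling interval from a feed's recent activity.

    post_times: publication timestamps (in seconds, newest first) of
    the last few entries. Poll at roughly half the average gap
    between posts, clamped to sane bounds.
    """
    if len(post_times) < 2:
        return max_delay            # not enough history: poll rarely
    gaps = [a - b for a, b in zip(post_times, post_times[1:])]
    avg_gap = sum(gaps) / len(gaps)
    return max(min_delay, min(max_delay, avg_gap / 2))
```

A feed posting hourly gets polled every half hour; a dormant feed decays toward one poll a day, which is what keeps aggregate polling load proportional to actual activity.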
Pull-based systems of course suffer from latency, but they require vastly less centralized state and coordination than a push-based system, and they suffer from much less painful failure modes.
Most exciting of all (in my opinion), is that pull-based designs can exist on top of all the infrastructure that already existed, rather than requiring dedicated active network components all of their own -- that's a whole lot of crap code we can avoid rewriting over and over again for the next decade.
- It's tough to monetize since true p2p software can just get pirated. You can't gatekeep to make sure people pay.
- The shortage of IPv4 addresses, helped along by buggy, insecure operating systems, made closed-to-the-outside NAT the standard, leaving peers inaccessible without manual configuration or some helper technology.
- Data hungry government and corporations see independent communication channels as an obstacle to their goals. (See the recent story about five eyes pushing for backdoored crypto in Australia.)