I explain that a bit in the blog post and video. I plan to implement more of the protocol to allow for following (inbox) and posting (outbox). Those are still ActivityPub, but Mastodon is the most popular implementation of the protocol, so framing it that way helps people understand how it can be used.
The problem can be easily sidestepped by referring to it as a "Mastodon alternative", rather than a "Mastodon instance".
(Even if it weren't that easy, I'd still agree that referring to it as a "Mastodon instance" is both wrong and weird, regardless of how popular and relatable Mastodon is.)
"Mastodon instance that has nothing to do with Mastodon besides implementing a small subset of the ActivityPub protocol that Mastodon also uses, with 6 files" is kind of long, I guess.
Well, this is quite timely. I was looking for a way to implement a partially static ActivityPub instance (because I want to have an easy blog-to-Mastodon gateway, and my site is entirely static), and the example data has already saved me a couple of hours - thanks!
Glad it helped. My goal is to make a static site/mastodon bridge too. I'm working on the /inbox portion next and then will tackle the outbox stuff in a future post.
I agree. I wish ActivityPub were more static-file friendly. Many of the URLs don't require parameters, but almost all of them use them. I wanted to show how you could do it without them.
Which RFC says query parameters are a good idea? The .well-known RFC (5785) itself states:
> well-known URIs are not intended for general information retrieval or establishment of large URI namespaces on the Web.
The WebFinger RFC appears to stuff its entire protocol into .well-known! And it includes some fine examples such as:
GET /.well-known/webfinger?
resource=acct%3Abob%40example.com&
rel=http%3A%2F%2Fwebfinger.example%2Frel%2Fprofile-page&
rel=http%3A%2F%2Fwebfinger.example%2Frel%2Fbusinesscard HTTP/1.1
Host: example.com
I would say that .well-known should be kept simple for both security and correctness. Dynamic content can be exploited more easily, as directly evidenced by you bringing CGI into the discussion. Additionally, .well-known is a global namespace and should not be a broad query interface when those queries can be reasonably furnished by other means.
My experience with .well-known is that it's more or less an application level DNS and identity service. But this looks like those hacks where you can read Wikipedia using `dig`.
The point is that for many small setups being able to just provide static files would have been sufficient if it wasn't for the way the webfinger spec requires query string arguments.
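For what it's worth, here's a rough sketch (Python, with a hypothetical one-JSON-file-per-account layout under `webfinger/`) of the tiny shim you end up needing purely because of the query string — the file lookup itself is trivially static:

```python
from urllib.parse import parse_qs

def webfinger_path(query_string):
    """Map a webfinger query string to a pre-rendered static file.

    Assumes a hypothetical layout with one JSON file per account,
    e.g. webfinger/acct:bob@example.com.json.
    """
    params = parse_qs(query_string)  # percent-decodes values for us
    resources = params.get("resource")
    if not resources:
        return None  # RFC 7033 requires the resource parameter
    # Ignore any rel= filters and just serve the full document.
    return "webfinger/" + resources[0] + ".json"

print(webfinger_path("resource=acct%3Abob%40example.com&rel=self"))
# webfinger/acct:bob@example.com.json
```

That whole script exists only to pick a filename — which is exactly the kind of thing a static host can't run for you.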
This doesn't explain why query parameters in a .well-known URI are a bad idea, it just explains that you don't like them. That's fine, and I agree with you, but I'm asking for the actual technical reason people are claiming query parameters in a .well-known URI are a bad idea.
I’d say it’s a bad idea when you can implement most .well-known features by just touching or copying static files into the right directory, and then suddenly a .well-known feature comes along that expects something smarter than a static file server, and now you’re doing complex routing just to patch that request over to a process that can actually service it.
This is the same argument people made against CGI, and web apps in general, and that argument turned out to be on the wrong side of history. What makes .well-known URIs different?
It's not the same argument at all. Functionality that genuinely needs to be dynamic is fine. But when the functionality could have been easily accommodated statically, it forces sites that otherwise could remain entirely static to be served by a dynamic setup, or to redirect somewhere that runs a dynamic script just for that one endpoint. If there were a good reason, such as the functionality being impossible to provide statically, it'd have been fine, but providing webfinger with static files would've been easy with just a tiny bit more care.
E.g. here's a suggested change to webfinger that would have made the dynamic part purely optional:
* Change the basic URL format to /.well-known/webfinger/<acct>
* Still allow the "rel" parameter, but allow the server to ignore it and return the full set of resources.
Now all you lose is the ability to do filtering with "rel=", and the "failure mode" is simply that you get the whole static file returned if it's not supported.
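To illustrate the proposal (this path scheme is hypothetical — it's the change suggested above, not what RFC 7033 actually specifies), resolving a request becomes a plain path join, so a dumb static file server handles every request correctly:

```python
from pathlib import Path
from urllib.parse import unquote

def resolve(path, query=""):
    """Resolve the hypothetical /.well-known/webfinger/<acct> scheme
    to a pre-rendered file. Any rel= filtering in `query` is simply
    ignored, so the full static document is the valid fallback."""
    prefix = "/.well-known/webfinger/"
    if not path.startswith(prefix):
        return None
    acct = unquote(path[len(prefix):])
    return Path(".well-known/webfinger") / acct  # one JSON file per account

print(resolve("/.well-known/webfinger/acct:bob@example.com", "rel=self"))
```

A server that does understand rel= can still filter; one that doesn't returns a superset, which is the benign failure mode described above.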
I'm not sure that's true. You can map these URIs, params and all, to static files behind the httpd. Nothing is forcing anyone into running a dynamic application. Even if you don't want to map characters, all the param characters are valid in unix filenames.
> You can map these URIs, params and all, to static files behind the httpd. Nothing is forcing anyone into running a dynamic application.
Sorry, but this is a form of equivocation[1] at best. More honestly, it's a simple contradiction; at the point where you're "map[ping] these URIs, params and all, to static files", you are being forced into running such an application.
Although it's not a rigorous, well-defined term, it's widely understood what a "static" site is. It's the sort of thing you get with e.g. Neocities or GitHub Pages or one of its clones—where you cannot rely on being able to mess with the server configuration (past the point of specifying the hostname your site should respond to, if even that). Any more involved configuration moves it out of this realm and towards the dynamic systems that many people are not running and not interested in running—for various reasons, including pricing, maintenance, and sheer complexity/brittleness.

A static site is a place where you dump a directory of files and the host doesn't do much beyond serving that file with the appropriate media type when someone requests it, which is pretty much the only thing that people can trust to work reliably if and when they move all their crud to another host and/or server configuration someday.
The problem with this is that the webfinger spec allows additional parameters (e.g. "rel"), so if you do this you violate the spec.
(EDIT: Actually, given it does a full regex match it might work; though it'll return supersets of the intended results if rel= constraints are present, and possibly break if the rel= argument is put after the "resource=" in the URL, so it would need refinement; of course it depends on using a server which allows arbitrary regex-based rewrites, which rules out a lot of object storage etc.)
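The parameter-ordering problem is real: `rel=...&resource=...` and `resource=...&rel=...` are equivalent requests but distinct strings, so a single literal rewrite rule can't match both. A sketch of the usual fix — canonicalize (sort) the parameters before using them as a lookup key — which only helps, of course, if you control a layer that can run it:

```python
from urllib.parse import parse_qsl, urlencode

def canonical_key(query_string):
    """Sort query parameters so equivalent requests with different
    parameter orderings map to the same rewrite/cache key."""
    pairs = sorted(parse_qsl(query_string))
    return urlencode(pairs)  # re-encodes values consistently too

a = canonical_key("resource=acct:bob@example.com&rel=self")
b = canonical_key("rel=self&resource=acct:bob@example.com")
# both normalize to the same key, so one rule covers both orderings
```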
It may happen to work for some clients some of the time, until it suddenly doesn't.
Oh I love this! I've been trying to understand ActivityPub from first principles, looking for a really simple hackable server. (Really what I want is a server that lets me write the ActivityPub equivalent of CGI scripts to easily host diverse content.)
This little static file experiment is a great teaching tool. To folks complaining about conflating ActivityPub with Mastodon, part of his goal here is to do the minimum ActivityPub implementation so that his account looks a certain way on Mastodon. Other ActivityPub instances might be asking for different requests; he's doing the Mastodon minimum.
Conflating Mastodon and ActivityPub might fly in your title or your SEO, but this is a technical community of technical people and we're all clickbait-averse pedants, so maybe use an honest title here?
It's definitely interesting to read how ActivityPub actually operates, but conflating the two may give the wrong idea.