Mastodon Instance with 6 Files (justingarrison.com)
31 points by JustinGarrison on Dec 6, 2022 | 29 comments



That's not a Mastodon instance, that's a(n incomplete) implementation of ActivityPub, the underlying federation protocol.

It's definitely interesting to read how ActivityPub actually operates, but conflating the two may give the wrong idea.


I explain that a bit in the blog post and video. I plan to implement more of the protocol to allow for following (inbox) and posting (outbox). Those are still ActivityPub, but Mastodon is the most popular implementation of the protocol, so people can understand how it can be used.


The problem can be easily sidestepped by referring to it as a "Mastodon alternative", rather than a "Mastodon instance".

(Even if it weren't that easy, I'd still agree that referring to it as a "Mastodon instance" is both wrong and weird, regardless of how popular and relatable Mastodon is.)


Or even "ActivityPub profile" or "fediverse profile". That's what it is: a non-interactive profile that appears on, but isn't limited to, Mastodon.

But it's clickbait that worked, so maybe discussing that is equally pointless.


"Mastodon instance that has nothing to do with Mastodon besides implementing a small subset of the ActivityPub protocol that Mastodon also uses with 6 Files" is kind of long I guess.


I’m not very good with SEO, but it doesn’t roll off the tongue


Well, this is quite timely. I was looking for a way to implement a partially static ActivityPub instance (because I want to have an easy blog-to-Mastodon gateway, and my site is entirely static), and the example data has already saved me a couple of hours - thanks!


Glad it helped. My goal is to make a static site/mastodon bridge too. I'm working on the /inbox portion next and then will tackle the outbox stuff in a future post.


At the very end of the post:

> Those 6 files is all you need to create a Mastodon user. Here are some caveats you may have already noticed.

> - Following doesn’t work

> - Posts don’t work

> - Only 1 user per domain

The post might be technically interesting, but I find the title misleading.


> GET https://server/.well-known/webfinger?resource=acct:user@doma...

Admittedly I've never looked at AP or Mastodon, but query parameters on .well-known addresses are a terrible idea. Those resources should be static.


I agree. I wish ActivityPub were more static-file friendly. Many of the URLs don't require parameters, but almost all of them use them. I wanted to show how you could do it without them.
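
For anyone curious what the static piece looks like: a single-user site only needs one WebFinger document, which can be generated ahead of time and served for every request to /.well-known/webfinger regardless of the query string. A minimal sketch (the domain, username, and actor URL are placeholders, not values from the post):

    # Minimal sketch: pre-generate the single WebFinger response for a
    # one-user, static-file setup. Domain/username/actor URL are placeholders.
    import json

    DOMAIN = "example.com"    # placeholder
    USERNAME = "justin"       # placeholder
    ACTOR_URL = f"https://{DOMAIN}/{USERNAME}"  # wherever the actor JSON lives

    webfinger = {
        "subject": f"acct:{USERNAME}@{DOMAIN}",
        "links": [
            {
                "rel": "self",
                "type": "application/activity+json",
                "href": ACTOR_URL,
            }
        ],
    }

    # Serve this file for /.well-known/webfinger and ignore ?resource=...,
    # which is only safe because there is exactly one account on the domain.
    with open("webfinger", "w") as f:
        json.dump(webfinger, f, indent=2)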


Previously covered in the Mastodon 3.5 release thread this past year.

<https://news.ycombinator.com/item?id=30862612>


Any reason for this, or is it just your aesthetics? The RFC explicitly allows it.


Which RFC says query parameters are a good idea? The .well-known RFC (5785) itself states:

> well-known URIs are not intended for general information retrieval or establishment of large URI namespaces on the Web.

The WebFinger RFC appears to stuff its entire protocol into .well-known! And it includes some fine examples such as:

    GET /.well-known/webfinger?
        resource=acct%3Abob%40example.com&
        rel=http%3A%2F%2Fwebfinger.example%2Frel%2Fprofile-page&
        rel=http%3A%2F%2Fwebfinger.example%2Frel%2Fbusinesscard HTTP/1.1
    Host: example.com

I would say that .well-known should be kept simple for both security and correctness. Dynamic content can be exploited more easily, as directly evidenced by you bringing CGI into the discussion. Additionally, .well-known is a global namespace and should not be a broad query interface when those queries can be reasonably furnished by other means.

My experience with .well-known is that it's more or less an application level DNS and identity service. But this looks like those hacks where you can read Wikipedia using `dig`.


The point is that, for many small setups, being able to just provide static files would have been sufficient if it weren't for the way the webfinger spec requires query string arguments.


This doesn't explain why query parameters in a .well-known URI is a bad idea, it just explains that you don't like them. That's fine, and I agree with you, but I'm asking for the actual technical reason people are claiming query parameters in a .well-known URI is a bad idea.


I’d say it’s a bad idea when you can implement most .well-known features by just touching or copying static files into the right directory, and then suddenly a .well-known feature comes along that expects something smarter than a static file server, and now you’re doing complex routing just to patch that request over to a process that can actually service it.


This is the same argument people made against CGI, and web apps in general, and that argument turned out to be on the wrong side of history. What makes .well-known URIs different?


It's not the same argument at all. When there's functionality that needs to be dynamic, that's fine. When there's functionality that could have been easily accommodated without it, it forces sites that otherwise could remain entirely static to be served with a dynamic setup, or to redirect somewhere that runs a dynamic script just for that. If it were for a good reason, such as the functionality being impossible to provide statically anyway, it'd have been fine, but serving webfinger from static files would've been easy with just a tiny bit more care.

E.g. here's a suggested change to webfinger that would've made the dynamic part purely optional:

* Change the basic URL format to /.well-known/webfinger/<acct>

* Still allow the "rel" parameter, but allow the server to ignore it and return the full set of resources.

Now all you lose is the ability to do filtering with "rel=", and the "failure mode" is simply that you get the whole static file returned if it's not supported.
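
To make the difference concrete, here's a rough sketch of the two lookup styles; the path-based variant is the hypothetical proposal above, not anything in RFC 7033:

    from urllib.parse import quote

    account = "acct:bob@example.com"

    # Current spec (RFC 7033): the account lives in a query parameter, so a
    # plain static file server can't resolve it without rewrite tricks.
    current = f"https://example.com/.well-known/webfinger?resource={quote(account)}"

    # Hypothetical path-based variant suggested above: the account is part of
    # the path, so the response can simply be a file at that location, and a
    # server that ignores "rel" filtering just returns the whole document.
    proposed = f"https://example.com/.well-known/webfinger/{quote(account)}"

    print(current)
    print(proposed)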


The actual technical reason is that it prevents us from implementing it with static files.


I'm not sure that's true. You can map these URIs, params and all, to static files behind the httpd. Nothing is forcing anyone into running a dynamic application. Even if you don't want to map characters, all the param characters are valid in unix filenames.

Anyway, here's someone doing webfinger with a static file, in a way that works for serving multiple users: https://gist.github.com/aaronpk/5846789


> You can map these URIs, params and all, to static files behind the httpd. Nothing is forcing anyone into running a dynamic application.

Sorry, but this is a form of equivocation[1] at best. More honestly, it's a simple contradiction; at the point where you're "map[ping] these URIs, params and all, to static files", you are being forced into running such an application.

Although it's not a rigorous, well-defined term, it's widely understood what a "static" site is. It's the sort of thing you get with e.g. Neocities or GitHub Pages or one of its clones—where you cannot rely on being able to mess with the server configuration (past the point of specifying the hostname your site should respond to, if even that). Any more involved configuration moves it out of this realm and towards the dynamic systems that many people are not running and not interested in running—for various reasons, including pricing, maintenance, and sheer complexity/brittleness. A static site is a place where you dump a directory of files and the host doesn't do much beyond serving that file with the appropriate media type when someone requests it, which is pretty much the only thing that people can trust to work reliably if and when they move all their crud to another host and/or server configuration someday.

1. https://en.wiktionary.org/wiki/equivocation


The problem with this is that the webfinger spec allows additional parameters (e.g. "rel"), so if you do this you violate the spec.

(EDIT: Actually, given it does a full regex match it might work; though it'll return supersets of the intended results if rel= constraints are present, and possibly break if the rel= argument is put after the "resource=" in the URL, so it would need refinement; of course it depends on using a server which allows arbitrary regex-based rewrites, which rules out a lot of object storage etc.)

It may happen to work for some clients some of the time, until it suddenly doesn't.


That’s really cool! Thanks for sharing, but doesn’t a rewrite rule still require an active web server component rather than plain static file hosting?

I’m not sure you could implement something similar with S3.


Also, the expectation is that your server will do remote lookups as well, so this is a great way to do cascading DOS.


Oh I love this! I've been trying to understand ActivityPub from first principles, looking for a really simple hackable server. (Really what I want is a server that lets me write the ActivityPub equivalent of CGI scripts to easily host diverse content.)

I've been failing to find a simple example server other than maybe Darius Kazemi's https://github.com/dariusk/express-activitypub

This little static file experiment is a great teaching tool. To folks complaining about conflating ActivityPub with Mastodon, part of his goal here is to do the minimum ActivityPub implementation so that his account looks a certain way on Mastodon. Other ActivityPub instances might be asking for different requests; he's doing the Mastodon minimum.
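
(For anyone trying to reproduce that "Mastodon minimum": besides the WebFinger file, the key piece is the actor document it points at. A rough sketch of what one usually contains; the field list reflects what Mastodon commonly expects, and the URLs and key are placeholders, not values from the post.)

    # Sketch of a minimal ActivityPub actor document that Mastodon can render.
    # All values are placeholders; a real setup needs a genuine RSA public key
    # so Mastodon can verify signed fetches.
    import json

    ACTOR_URL = "https://example.com/justin"  # placeholder

    actor = {
        "@context": [
            "https://www.w3.org/ns/activitystreams",
            "https://w3id.org/security/v1",
        ],
        "id": ACTOR_URL,
        "type": "Person",
        "preferredUsername": "justin",
        "inbox": f"{ACTOR_URL}/inbox",
        "publicKey": {
            "id": f"{ACTOR_URL}#main-key",
            "owner": ACTOR_URL,
            "publicKeyPem": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
        },
    }

    with open("justin", "w") as f:
        json.dump(actor, f, indent=2)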


Thanks for the note! That’s exactly why I made it and created the post.

I plan to implement the active parts of the spec in future posts.


the conflation of Mastodon and ActivityPub continues


Conflating Mastodon and ActivityPub might fly in your title or your SEO, but this is a technical community of technical people and we're all clickbait-averse pedants, so maybe use an honest title here?



