Hacker News
Hades: An experimental HATEOAS-based HTTP/2 reverse proxy for JSON API back ends (github.com/gabesullice)
57 points by mooreds on Aug 5, 2018 | 8 comments


A bit of a philosophical point here, but does anyone have a strong opinion on JSON API[0] vs jsonschema's Hyper-schema?

There are even more options out there, but from what I understand these two are in direct competition (though their specs seem really similar). Right now the landscape looks like:

- validation: jsonschema-validation

- hypermedia/HATEOAS: jsonschema-hyperschema, JSON API, HAL+JSON

- semantics: JSON LD

[0]: http://jsonapi.org/

[1]: http://json-schema.org/


We currently use a combo of JSONAPI + JSON Schema validation and it is working well for us. We had to write a lot of tooling on both the front end and back end, and resolve a bunch of problems with expressing things like aggregates efficiently for complicated UIs (if I want a count of a user's posts, I don't need all their posts, just count them for me), using things like metadata in the request. Our front end developers really like the consistency, and working on it from the back end is very sane, despite having to think slightly more generally than normal for APIs. We looked into jsonschema-hyperschema, but it didn't seem to express the constraints in the spec that we wished it did, though I'm unsure whether that is still true given our current knowledge of the solution.
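As a sketch of that aggregate pattern (the field names here are illustrative, not this team's actual API), a JSON:API response can carry a count in a relationship's `meta` object without shipping the related records:

```python
# Hypothetical JSON:API response for GET /users/1: the "posts" relationship
# carries only an aggregate count in `meta`, not the full list of posts.
response = {
    "data": {
        "type": "users",
        "id": "1",
        "attributes": {"name": "Alice"},
        "relationships": {
            "posts": {
                "links": {"related": "/users/1/posts"},
                "meta": {"count": 42},  # aggregate only; no post payloads
            }
        },
    }
}

# The client reads the aggregate without ever fetching /users/1/posts.
count = response["data"]["relationships"]["posts"]["meta"]["count"]
print(count)  # → 42
```

The spec's `meta` objects are explicitly free-form, which is what makes them usable as this kind of escape hatch.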

I will say that I'll strongly advocate against APIs that aren't built in this fashion for all future projects. Having the consistency granted by JSONAPI is wonderfully freeing.


A few observations/questions, if you don't mind:

1. I thought both JSON API and jsonschema-hyperschema were basically equivalent -- did you find some point that they weren't?

I'm not sure if this is what you were referring to, but JSON API has affordances for stuff like "meta" built in, whereas with jsonschema-hyperschema you'd have to ensure that the endpoint itself returned, let's say, an "Envelope&lt;Thing&gt;" which had the fields you expect -- is that what put you off? The straightforwardness of having certain affordances included directly in the spec?

2. I'm thinking of focusing on jsonschema-validation + jsonschema-hyperschema (or maybe JSON API) going forward, but the more famous approaches right now are Swagger, API Blueprint, and RAML -- I don't want to commit to them because they often move slower and have made some questionable choices (IMO) in their divergences from the underlying standards they're based on.

jsonschema-hyperschema seems to be functionally equivalent to Swagger while being more minimal, but I haven't done any in-depth analysis to prove that.

Why did you (and/or your team) not choose Swagger/RAML/API Blueprint?


I will say that we probably didn't do as much initial homework on the competing standards as we could have; picking a spec accessible enough and flexible enough for everyone to know religiously was much more important than the particular one we chose. So we didn't do much vetting of hyperschema and friends once we understood JSONAPI well enough. Meta is an extraordinarily nice escape hatch that we have leaned on as we've learned patterns that scale when adopting the spec across the business. It is very nice for calculated values, and really lets you have a grab bag of "stuff that isn't quite CRUD" or expensive-to-calculate values that may need to be contextualized somehow. Though such things can be modeled as CRUD depending on the business case, our experience is that some stuff just doesn't belong on another resource but feels awkward as an attribute.

However, since the JSONAPI backend has matured we have sought ways to describe and document the resources and their interactions for both back end and front end tooling, automated and otherwise. What we've found is that Swagger and friends (I will return to your hyperschema question shortly) are good when your resources are nonstandard and the CRUD interactions are complex. When your spec (in our case JSONAPI) dictates the CRUD behavior explicitly, the only moving parts are the resources themselves. But in order to get buy-in from that tooling, you must enumerate all the various CRUD routes.

When you have users, posts, and comments, you might have the following routes:

    GET /users    
    GET /users/id    
    POST /users    
    PATCH /users/id    
    DELETE /users/id    
    GET /users/id/comments    
    GET /users/id/relationships/comments    
    POST /users/id/relationships/comments    
    DELETE /users/id/relationships/comments    
    GET /users/id/posts    
    GET /users/id/relationships/posts    
    POST /users/id/relationships/posts    
    DELETE /users/id/relationships/posts
That's just from the user side. Imagine as we add more of these connections, and potentially want to have users_v1 and users_v2, and maybe we add 10-15 extra resources and their admin counterparts. All those routes are boring and uninteresting. The important part is the resource itself, but Swagger and Hyperschema seemed to want us to specify that combinatorially large number of possible routes as well as describe the resources themselves in full detail. That's annoying; I'd rather describe my few dozen resources well, and just make the JSONAPI spec homework for devs to read to learn how the legos fit together.
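Because JSON API fixes the route shapes, the enumeration above can be derived mechanically from a resource name and its relationships rather than written by hand. A rough sketch (not this team's actual tooling; resource names are from the example above):

```python
# Derive the standard JSON:API CRUD and relationship routes for a resource,
# instead of enumerating them by hand in a Swagger/Hyperschema document.
def jsonapi_routes(resource, relationships):
    base = f"/{resource}"
    routes = [
        ("GET", base),
        ("POST", base),
        ("GET", f"{base}/{{id}}"),
        ("PATCH", f"{base}/{{id}}"),
        ("DELETE", f"{base}/{{id}}"),
    ]
    for rel in relationships:
        routes.append(("GET", f"{base}/{{id}}/{rel}"))
        for verb in ("GET", "POST", "DELETE"):
            routes.append((verb, f"{base}/{{id}}/relationships/{rel}"))
    return routes

routes = jsonapi_routes("users", ["comments", "posts"])
print(len(routes))  # → 13, matching the hand-written list above
```

Five base routes plus four per relationship reproduces the thirteen-route listing for `users`; adding a resource or relationship never requires touching the route table by hand.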

From there, tooling can be written on client and server to assert consistent behavior by funneling resource definitions through and producing payloads/validation to keep your server in check. With sufficiently good documentation tooling, the CRUD routes are entirely bookkeeping and can be abstracted away into your framework. We have a single `create` function in our backend for all our resources, for example, and it just asks the resources about themselves to figure out how to do its mechanics.
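A minimal sketch of that single-`create` idea, assuming resources describe their own attributes (the thread doesn't show the real implementation; every name here is invented):

```python
# Each resource describes itself; one generic create() consults that
# description instead of having per-resource endpoint code.
RESOURCES = {
    "users": {"attributes": {"name"}},
    "posts": {"attributes": {"title", "body"}},
}

def create(resource_type, payload, store):
    spec = RESOURCES[resource_type]
    attrs = payload["data"].get("attributes", {})
    unknown = set(attrs) - spec["attributes"]
    if unknown:
        raise ValueError(f"unknown attributes: {unknown}")
    record = {"type": resource_type, "id": str(len(store) + 1), **attrs}
    store.append(record)
    return record

store = []
record = create("users", {"data": {"attributes": {"name": "Alice"}}}, store)
print(record)  # → {'type': 'users', 'id': '1', 'name': 'Alice'}
```

The point is that the CRUD mechanics live in one place, and the per-resource work shrinks to writing the declarative description.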

I think we could generate JSON-LD if we needed to. Our internal DSL gives us the flexibility to spit out JSON Schema validation on the fly, so there's no reason we couldn't also do that with some tweaks.
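Emitting JSON Schema from a declarative resource definition is straightforward in principle; a toy version (not the commenter's actual DSL) might look like:

```python
# Turn a resource definition into a JSON Schema for its resource objects.
def to_json_schema(resource_type, attributes):
    return {
        "type": "object",
        "required": ["type", "id", "attributes"],
        "properties": {
            "type": {"const": resource_type},
            "id": {"type": "string"},
            "attributes": {
                "type": "object",
                # Toy version: attribute values are unconstrained ({}).
                "properties": {name: {} for name in sorted(attributes)},
                "additionalProperties": False,
            },
        },
    }

schema = to_json_schema("users", {"name", "email"})
print(schema["properties"]["type"])  # → {'const': 'users'}
```

The same definition could feed a JSON-LD context generator, since both outputs are just projections of the one resource description.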

My twitter is in my profile. Shoot me a message there if you have some deeper questions, though I'm more than happy to try and answer here.


Thanks for the detailed answer, I appreciate you taking the time to write that all out. I will certainly hit you up via Twitter if I have some more questions.

The DSL you've mentioned sounds really convenient -- I've been wanting to write something like this for general use recently but can't pick which hill to die on with regards to language and how many/which specs to support. Sounds like a good development experience you guys have set up for yourselves there @ Albert.


Really nice. This kind of functionality should be part of static web servers / Amazon S3 in order for serverless catalog applications to really work, in particular for geospatial catalog standards that want to replace CSW.

Unrelated: I see a Go port of jq is being pulled in as a dependency. Doesn't requiring knowledge of that query language on the client side break the HATEOAS expectations?


Since this is tied to JSON API, it seems like you should do this by referring to the named relationship -- something like 'X-Push-Related: author' if you were fetching the article resource from the JSON API docs.

The server would then respond with a 'Link: /articles/1/author' header.
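Concretely, the exchange being proposed might look like the following (X-Push-Related is this commenter's suggested header, not necessarily anything the project implements; the Link syntax follows RFC 8288):

```
GET /articles/1 HTTP/2
Accept: application/vnd.api+json
X-Push-Related: author

HTTP/2 200
Content-Type: application/vnd.api+json
Link: </articles/1/author>; rel="related"
```

The client names a relationship from the document it already understands, rather than supplying a query expression, which keeps the interaction within what the hypermedia itself advertises.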


Interesting project. I just ran into the need for clients to be able to specify resources they want pushed in a big REST API refactoring I'm working on. It seems like this is really needed in the HTTP/2 Server Push spec. I just wish they'd picked a better name than X-Push-Please.



