Hacker News
A standard for building APIs in JSON (jsonapi.org)
199 points by lobo_tuerto on Mar 28, 2015 | 103 comments

For people who have been dealing with the churn of JSON API, represented throughout this thread, I'm genuinely sorry.

Let me try to give some perspective on the history. For a long time, JSON API was more of a set of guidelines than a strict specification. A lot of our early adopters hated all of the MAYs in the spec (you can see examples of that throughout this thread), so we decided to tighten things up considerably towards the end of last year.

That meant trying to eliminate the vast majority of options in the primary spec, and moving all optional features to named extensions that clients could programmatically detect.

Of course, significantly tightening up the semantics forced us to grapple with harder problems and pushed some ambiguities into the light of day. RC2 in particular was an attempt to seriously pare down the scope of the main spec, while making the semantics of what was left stricter. Dan (the primary editor) and I spent countless hours discussing various details, and people contributed hundreds and hundreds of (very useful!) comments during this period about various details.

RC3 was a smaller delta, but I could easily imagine that one of the changes had a large impact on existing APIs.

My overall goal for the project from the beginning was to nail down a full protocol (both the format and the wire protocol) that could be implemented by multiple languages on both sides. Originally, it was created because I was frustrated by how ambiguous "REST-style API" was during the development of Ember Data.

The earliest versions of JSON API didn't really nail things down well enough to deliver on that promise, but I hope that the latest versions will be able to. Time will tell.

I appreciate the massive amount of work and the recent updates to tighten the spec.

One challenge with the churn is a lack of any high-level changelog (git commit history doesn't count). I have a team working off an earlier version of the spec, and I check back semi-frequently. But I haven't been able to find a document outlining "here are the major changes since the previous versions." I understand that would represent more work on top of work, but for such a large spec, the changes have been disorienting.

A "standard" can be called a standard only when it's accepted as one. Prior to that, it's just a proposal.

In case you're thinking, "Wow, I bet in the real world this would be a misery to impose on a team - I'm sure no one would really want to do this!", well, I worked briefly with a team where they imposed this. It was part of the reason why I say "briefly".

At Gragg, I've been experimenting with a different approach: I wrote a self-describing schema language for all of the JSON payloads that'd be sent. I built a service and tested it with a "blob" type that doesn't validate, then wrote the spec once things were clear enough. It turns out that writing a client from such a spec is usually super simple, and you get to see all of the things the spec can do.

Here are the basic things that have been most useful:

    Maybe x, for optional parameters.
        Process by defining what to do when null.
        Doesn't have to be tagged to be useful.
    Repetition in the form of [x] and Map String x.
        Process only via a for-each loop.
    Tagged-sums-of-products: ["tag", arg1, arg2, ...]
        Process only with a switch statement dispatching over arr[0].
    Record types like {"id": 3, "name": "Gandalf", "color": "grey"}
        Feel free to base logic on record.name etc.
    Primitives: dates, ints, strings, blobs.
The tagged-sums bit is probably a dealbreaker for me at this time: I picked this up from Haskell and I would not easily let go. The syntax in JSON is a little clumsy, but it's still important. Cf. Abelson and Sussman's lectures, "All interesting programs start with a case dispatch."
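The tagged-sum idea above can be sketched quickly. This is a minimal illustration, not code from the Gragg schema language: the "circle"/"rect" tags and their arguments are invented, but the processing rule is exactly the one described, a single switch dispatching over arr[0].

```python
import json

def process_shape(value):
    """Dispatch over a tagged sum of the form ["tag", arg1, arg2, ...].

    The "circle" and "rect" tags are hypothetical examples; the point is
    that consumers handle such values with one switch on value[0].
    """
    tag = value[0]
    if tag == "circle":
        (radius,) = value[1:]
        return 3.14159 * radius * radius
    elif tag == "rect":
        width, height = value[1:]
        return width * height
    else:
        raise ValueError(f"unknown tag: {tag!r}")

payload = json.loads('["rect", 3, 4]')
print(process_shape(payload))  # 12
```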

I've started using it on a personal project and it doesn't seem so bad. What issues did you have? (So I can be forewarned.)

Well, obviously it's very prescriptive. In theory that should mean it requires less mental effort because it tells you what you should be doing - in practice, I found it required more effort to do things the precise way it wanted. I was far from alone in this belief.

My main issue was with the "related/linkage" stuff (which may or may not have changed somewhat since I was exposed to it), which is unpleasant, to my eyes, in the way it is expressed, and also verbose and often duplicative in what it produces.

A related problem was the Node.js implementation we were using, which generally felt like more of a hindrance than a help (and which we ended up extending beyond all recognition, making matters dramatically worse - though that was no fault of JSON API itself directly, other than by association).

Best of luck!

We're starting to build API endpoints with this where I work. We didn't have compound documents in an earlier version of our API, and we ended up having to make a lot of roundtrips for some things. So I had been thinking about something like the jsonapi "include" mechanism, and when I saw how jsonapi was doing it, it was pretty close to what I would have done. That made me feel better about going along with their decisions in areas I haven't thought about.
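To make the "include" mechanism concrete, here is a minimal sketch of a compound document and how a client might consume it. The key names follow the eventual 1.0-era shape ("relationships", "included"), and the article/person data is invented:

```python
import json

# A hypothetical compound document: the primary resource plus related
# resources side-loaded under "included", so the client makes one
# roundtrip instead of one per relationship.
doc = json.loads("""
{
  "data": {
    "type": "articles", "id": "1",
    "relationships": {
      "author": {"data": {"type": "people", "id": "9"}}
    }
  },
  "included": [
    {"type": "people", "id": "9", "attributes": {"name": "Dan"}}
  ]
}
""")

# Index included resources by (type, id) so linkage resolves locally.
index = {(r["type"], r["id"]): r for r in doc["included"]}
linkage = doc["data"]["relationships"]["author"]["data"]
author = index[(linkage["type"], linkage["id"])]
print(author["attributes"]["name"])  # Dan
```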

They keep saying that they're closing in on 1.0, but there are a few issues left:


They've been making a lot of changes lately:


It's good that they want to get all of the breaking changes done so that they can declare a stable 1.0. But it also seems like they're going to try to call it stable right after making a bunch of changes, which seems risky.

I also pitched JSON-LD/Hydra at work, because they're w3c-backed and JSON-LD has some uptake. But the other people who looked at those specs found them hard to digest. And I agree; as an implementer, I can read the jsonapi docs quickly and have a pretty clear idea of what to do. But with JSON-LD/Hydra, not so much.

I get the sense that JSON-LD/Hydra is more flexible than jsonapi, but I think jsonapi does what we need. And if it does what we need, then additional flexibility might actually be a drawback. I guess we'll see how it goes.

Thanks for the feedback. Personally, I feel like we nailed down the important changes earlier this month, and I agree that further churn at this point is likely to cause more harm than good.

For what it's worth, the issues you linked to are mostly about adding more rigor to possibly underspecified areas, not changing things that are already specified, but those things could easily be done after 1.0.

Oh boy, cue the haters. I'd like to address a few things that JSON API gets right, and where it fails horrendously (just in time for 1.0!)

The good:

- It opens a path for standardized tooling among API clients. Rather than having a whole mess of JSON-over-HTTP clients with a hard-coded understanding of the particular API, one could theoretically use a hypertext client to interact with any API using this standard.

- It establishes a baseline of what an API server must implement.

The bad:

- It tries to control not just the media type, but the protocol (HTTP) and server implementation. This is problematic because it dictates how your server must implement its routes, and is tightly coupled with HTTP.

- It tries to be very prescriptive, but it cannot cover all edge cases without being exhaustive. This comes at a heavy burden for implementers to get every part of the specification correct. Despite this, some extremely basic features are missing, such as providing an index document so that a client can enter the API knowing only a single entry point (as it stands now, clients must have a priori knowledge of what resources exist on the server).

The ugly:

- Since so many parts of the base specification are optional MAYs, and there is no prescribed mechanism for feature detection, there is no way to figure out if a server supports a particular feature of this spec without trying to do something and failing.

- The spec has made many breaking changes on the road to 1.0 (as other commenters have mentioned, and there is still room for breaking changes).

At this point, I think that this project gained traction due to the clout of the original authors (Yehuda Katz and Steve Klabnik) and the promise of no-more-bikeshedding, though I would argue that the bikeshedding has just shifted from particular APIs to the spec. Disclosure: I authored and maintained some libraries that implement this spec, and authored another media type for hypertext APIs.

> Since so many parts of the base specification are optional MAYs, and there is no prescribed mechanism for feature detection, there is no way to figure out if a server supports a particular feature of this spec without trying to do something and failing

> The spec has made many breaking changes on the road to 1.0

Interestingly, most of the breaking changes on the road to 1.0 were about drastically reducing MAY in favor of MUSTs.

Optional features were moved to named extensions, and there is now an explicit way to negotiate about those extensions.
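For reference, the negotiation worked roughly like this in the drafts of that era. Treat the exact parameter names (`ext`, `supported-ext`) and the `bulk` extension as details to verify against the spec rather than gospel:

```
GET /articles HTTP/1.1
Accept: application/vnd.api+json; ext="bulk"

HTTP/1.1 200 OK
Content-Type: application/vnd.api+json; supported-ext="bulk,jsonpatch"
```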

For anyone who was put off earlier on by the number of MAYs, know that we heard you loud and clear. It may (no pun intended) be worth another look.

have you been doing REST over something other than HTTP?

Now that you bring it up, the JSON API specification does not even mention REST or hypermedia actually. What I was trying to get at is that media types don't have to be coupled to the protocol. One can exchange an HTML document over HTTP just as easily as one could transmit that document over another protocol.

Some media types do consider their applicability over multiple protocols, for example: http://amundsen.com/blog/archives/1151

I used to rave about how RESTful APIs with JSON were so much better than SOAP and other RPC style interfaces, because the responses were so simple and easy to parse.

This holds true for small projects, but as soon as you are working within a large, complex system that involves multiple teams, the benefits of using standards pay off.

I struggled in the past working on projects with companies that had built dozens of loosely-defined APIs, built with good design intentions in mind but suffering later from incomplete or inconsistent implementations. The app codebase became much fatter in an attempt to abstract away those differences into a consistent mental model.

When complexity and communication reaches a certain threshold, it makes sense to invest in standardizing the APIs and responses. I've seen the payoff and am convinced: client libraries and frontend implementations get much simpler, documentation becomes easier, and discussions about how to make changes or design new APIs all but go away.

On the other hand, for small teams and simple services, using a standard like this is probably overkill, unless everyone involved is used to doing APIs this way.

> but as soon as you are working within a large, complex system that involves multiple teams, the benefits of using standards pay off.

That depends a lot on the nature of the project and the standards involved. In my experience, introducing complexity early on in the hope that it will pay off when the project itself becomes complex, leads to exactly what you would intuitively expect: a huge combinatorial code nightmare where productive work asymptotically approaches zero as time progresses.

There is a failure mode in the development of enterprise projects where actual work is being done only at the fringes where the tight external standards have loopholes allowing for the introduction of actual functionality through the backdoor. The resulting systems are of course extremely brittle.

> for small teams and simple services, using a standard like this is probably overkill

There are also simple standards deserving of the name. Small teams and simple services use them quite readily.

The "problem" I see with JSON API is not one of complexity (it really isn't very complex), it's specificity. It's designed to cover a good range of common web-centric data interchange problems, but like any higher-order standard it carries the weight of certain abstractions and concepts. The pain comes, in my opinion, not from a project/team-size mismatch but in cases where these abstractions are ill-suited for the problem at hand.

One of the key factors why plain JSON has become so popular: it's completely malleable. As a result, the actual JSON data on the wire can closely reflect the internal data structures (or at least a logical representation of them) of producers and consumers. The price for this is a relatively tight coupling, but the pain of it is lessened somewhat by the simplicity of the format.

In the end, the old adage of the structure of the project mirroring the structure of the organization probably holds true. When selecting a standard to work with, people choose one that innately reflects how their company works, and they do it for good reason: to reduce friction.

So much fear and loathing in this thread, which puzzles me. Anyone who sets out to write a hypermedia API in JSON is going to end up with a structure not dissimilar to this one. If this fits your needs, then use it by all means, and let your clients use an off-the-shelf lib for consuming your service.

I'm not a fan of the spec, but overall it seems to strike a good balance between what is specified and what is left out. To compare this to SOAP is missing the point.

What killed SOAP was an insistence on treating HTTP as a dumb transport - thereby breaking the constraints of the interwebs, the inherent brittleness of RPC, and the lunacy of the WS-* stack.

None of that applies here, it's closer in spirit to AtomPub, which is still a pretty decent standard, but just happens to be in XML which everybody hates nowadays.

I think a lot of the commentators in this thread seriously believe that having a different data format and interaction style for every API on the internet is somehow "simpler" than adopting a loosely defined set of conventions and writing them down somewhere as a standard.

It seems to me this standard is only really useful for APIs to be consumed by a specific GUI or GUI framework, as opposed to "data-centered" APIs that exist to provide access to a particular dataset for myriad purposes. Having a "links" section makes no sense in the latter case -- that's what docs/Swagger/RAML are for.

My stab at imposing some consistency on data APIs boils down to a header and a data section. The header's job is to describe the data; the most obvious utility comes from including a "count: 235" field or paging data.

Less-obvious is having self-describing data, namely including the path and query parameters in the header, so you could ingest the data sans request and still know what it represents.
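A minimal sketch of that header-plus-data envelope; the field names ("header", "path", "params") and sample data are invented to illustrate the idea of a payload that stays self-describing without its originating request:

```python
import json

# The header carries metadata: the count, and the path and query
# parameters that produced the data, so the payload can be ingested
# sans request and still describe what it represents.
response = {
    "header": {
        "count": 235,
        "path": "/people",
        "params": {"state": "married", "page": 2},
    },
    "data": [
        {"id": 3, "name": "Gandalf", "color": "grey"},
    ],
}

# Round-trip through JSON to show the envelope survives serialization.
doc = json.loads(json.dumps(response))
print(doc["header"]["count"])  # 235
```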

But it's a little bikesheddy, and that might be because data-only APIs are so freaking simple that no standard is really necessary. If so, I must question writing a standard around how we happen to build GUIs today, as it seems doomed to SOAPy failure. But hey, I'm not "full-stack"...

One of my clients built their API to the jsonapi standard. Then, the standard changed, and it's no longer compliant. Oops.

Same issue here. Now so off-track that I'll probably just change the content-type and call it a day.

I'm really sorry that this was your experience. I give a little bit of historical context elsewhere on this thread[1].

[1]: https://news.ycombinator.com/item?id=9280602

And that is why any sane (experienced?) API and/or standard author should have versioning in mind at all times. In my experience there is no such thing as "the" API or "the" standard.

Their API is versioned (the json api version happened to be v2). But they don't feel the need to build a brand new v3 just to chase a moving target, especially since customers (and they themselves) are already using their v2.

It's only at RC3 right now. Why would you expect an unstable spec to be static?

There was an unversioned spec posted for months that looked usable, then suddenly they announced a complete rewrite as "RC2". Calling it a release candidate was misleading, though - almost everything got changed again for "RC3".

I can definitely see why you'd see RC2 as a complete rewrite, but I don't understand why you'd feel that way about RC3. There were definitely changes, but they were much smaller and motivated by significant feedback from users who opened issues in response to RC2.

For example, RC2 mandated that all fields were dasherized. We made field names opaque in RC3.

At this point, we're pretty much nailing down tiny details and included this language with RC3:

    JSON API is at a third release candidate state. This means 
    that it is not yet stable, however, libraries intended to 
    work with 1.0 should implement this version of the 
    specification. We may make very small tweaks, but 
    everything is basically in place.

Release Candidates should already be at a largely stable point, _ready for release_.


My point is it became an RC3 only a few days ago. If OP used this a year ago it wasn't an RC, just an unstable spec.

It's a valiant effort, but as someone who has dealt with unraveling a strict API built with SOAP, I am going to pass. If I want something standardized, I will use a schema-based solution such as protobuf. It's already a quasi-standard, has clients in many languages, and is a binary protocol.

We also should not confuse an API with a transport protocol. I will tip my hat to this not being as verbose as past attempts, but why reinvent the wheel yet again? It's not like prior attempts didn't function as expected - they did - but we in the industry chastised them for being too strict.

Let's work on improving the semantics and documentation around what constitutes an API. Swagger is an excellent example of this.

IMO, protobuf is at the same layer as JSON, not JSON API. It's a serialization format.

I'm a fan of Swagger, especially because it's not tied to one language and helps developers design their APIs in an easy way. The tools provided for describing your API using the Swagger spec are really fascinating.

This tries to standardize API transports, which doesn't make sense to me, because there is no need for it. If you are going to develop an API with (open) SDK clients that support consuming your services, then I don't need to care whether your API transport is written in JSON API. Making transports human-readable in particular seems like a waste of resources in my eyes. These APIs are meant to be consumed by machines, and debugging can and should be done with tools, not by enforcing a standard.

I was recently exposed to this standard on a greenfield Sinatra project. It quickly became burdensome to maintain all of the linkages. It really deteriorates your ability to keep things DRY in the JSON generation (we were using RABL). Of course, these problems were exacerbated by other issues with how the data was structured and how the client wanted to access resources.

Roar[0] works great for this, we use it for our HAL APIs and I'm very happy with it.

[0] https://github.com/apotonick/roar

Yeah... no. Standardizing something like this is useless and only full of edge cases. Keep APIs free-form and flexible; imposing some sort of restriction is how you end up with bigger messes than what you started with. The point of building a unique application isn't to create uniform systems across the map.

How is this any different from the HAL specification? It seems like an exact copy, except the word "includes" replaces "embedded".

One of my primary motivations when I started this (a few years ago now) was a straightforward representation of graphs of data (including bits of data without dedicated URLs), rather than trees of data.

Consider the case of a blog. Each blog post has many comments. Each post has an author, and each comment has an author. Some of those entities may have dedicated URLs, and others may not. Additionally, the authors are highly repetitive; you want to refer to them by identifier, not by embedding them.

Because a tree of data can be represented easily as a graph, but not vice versa, JSON API provides a straightforward way to provide a list of records that are linked to each other. The goal is simple, straightforward processing regardless of the shape of the graph.
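The blog example above can be sketched as a flat list of linked records. This layout is illustrative, not the spec's literal document structure: the point is that authors appear once and are referred to by identifier, and that processing is one identity-map lookup regardless of graph shape.

```python
import json

# Posts and comments both point at authors by (type, id); each author
# appears once in a flat list instead of being embedded repeatedly.
resources = json.loads("""
[
  {"type": "people",   "id": "9", "attributes": {"name": "Yehuda"}},
  {"type": "posts",    "id": "1", "author": {"type": "people", "id": "9"}},
  {"type": "comments", "id": "5", "author": {"type": "people", "id": "9"},
                                  "post":   {"type": "posts",  "id": "1"}}
]
""")

# Build one identity map, then follow references through it.
identity_map = {(r["type"], r["id"]): r for r in resources}
comment = identity_map[("comments", "5")]
author = identity_map[(comment["author"]["type"], comment["author"]["id"])]
print(author["attributes"]["name"])  # Yehuda
```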

On graphs, I was really hoping to see provisions in the spec to support the property graph model, specifically for properties on relationships. We are building APIs that access data in a graph DB, and all the data in the graph has had an obvious representation in the JSON API model except for attributes on edges.

As an example, let's say that my graph DB has People nodes and a MARRIED_TO relationship between two people to indicate they are married. A MARRIED_TO edge could have a "married_on" property containing the date of the marriage.

Where would the "married_on" attribute be represented in JSON API? I could stuff it in the "meta" member of the link object, but that feels loose and hacky. Maybe it could live in the "linkage object" along with the "type" and "id" members. But the linkage object appears to currently be a closed set of only those two members.
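For concreteness, here is what the "stuff it in meta" option might look like. The "married-to" relationship and "married_on" key are this commenter's hypothetical example, and the document shape is a sketch of a relationship object carrying both linkage and a meta member, not something the spec prescribes for edge properties:

```python
import json

doc = json.loads("""
{
  "data": {
    "type": "people", "id": "1",
    "relationships": {
      "married-to": {
        "data": {"type": "people", "id": "2"},
        "meta": {"married_on": "2010-06-12"}
      }
    }
  }
}
""")

# The edge property rides alongside the linkage, but nothing in the spec
# tells a generic client what "meta" means here - hence "loose and hacky".
rel = doc["data"]["relationships"]["married-to"]
print(rel["meta"]["married_on"])  # 2010-06-12
```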

Is this requirement to present a full property graph model not as common as I would imagine? I'm a bit behind on my sync with the current state of the spec. This is the first time I've tried to elaborate this need.

At first I was confused by this comment because anything built with links is a graph. Now that I've caught your meaning, and seen the complicated "included" structure, I wonder if it wouldn't have been better to just add an "_id" property to HAL. HAL's designers, however, would probably just say that "author" should be its own resource, and be linked or embedded as required. That is, if it's too heavy to embed, then link it.

One of the primary ways in which JSON API is different from HAL (or JSON-LD) is that it specifies both a format for data as well as a full protocol for fetching and manipulating that data.

Most of the trouble in building a JSON API comes not from deciding on the format, but in nailing down the precise requests and responses that handle common kinds of interactions. This became very clear to me as I worked on Ember Data.

Besides minor things like _links -> links, it specifies a lot of behavior more explicitly than HAL. HAL is a very minimal specification and intentionally doesn't nail down every corner case, preferring implementors to choose what's right for them, perhaps at the cost of a more powerful generic client.

Edit: and hey, why not, I'm the author of a generic HAL client myself: https://github.com/deontologician/restnavigator

FYI: JSON-LD is already W3C-standardized and may be interesting for APIs, especially in the area of m2m communication:

See http://de.slideshare.net/lanthaler/building-next-generation-...

But my "fear" is that it will all get too complicated, too. Currently I really like working with JSON Schema (http://json-schema.org/), because it's simple and extensible.

Wow. No one needs this. API means application programming interface. It starts and ends its standardization at the application level.

Make your API consistent and write decent documentation for it. That's all anybody needs, and it will be simpler than trying to conform to some insane metastandard.

This reminds me of hypermedia APIs. Generally speaking, you're provided with an index of resources via XML/JSON; you then fetch those resources, which provide one or many levels of additional fetch-able results. In essence, your API data is almost browse-able. The objectives are idempotent responses, fetching as little as possible, providing new functionality via new resources, and resource discover-ability for auto RPC. https://github.com/swagger-api/swagger-spec/

We do have (optional) hypermedia in the spec itself.

Please don't make JSON into JXML.

Hydra and JSON-LD are already available: http://www.hydra-cg.com (even if it's a little bit too verbose).

You can have a look of a simple way to do it in php with symfony : https://github.com/dunglas/DunglasJsonLdApiBundle

I had to write an API with lots of one-to-many and many-to-many relationships, with iOS, Android and JS apps accessing the APIs, so it needed a means to sideload data. I found JSON API to be a good guide to building the API. Of course, I was a bit sad when the spec changed quite radically in its last iteration, but, hey, my API still works, so that's not so bad.

Does anyone know when we can expect Ember Data to be fully compatible with JSON API, in a stable manner? Our architecture has Sinatra for the backend and Ember.js for the frontend. We're always struggling to support JSON API for our third-party clients while maintaining compatibility with Ember Data.

At EmberConf, it was announced that Ember Data 1.0 would ship with JSON API compliance out of the box, in June.

By what virtue is this a standard? The very generic name? For HAL, there is at least an internet draft: https://tools.ietf.org/html/draft-kelly-json-hal-06

The first step in the standardization process is to register with IANA. We've done that. Then, you get a bunch of implementations. Then, you go to the IETF.

Any idea how much this overlaps with RAML? I've seen some projects based around RAML but none so far with JSON API, and I'm trying to understand what advantage there is either way, since I don't see JSON API being more expressive toward this goal.

They're fundamentally different. RAML is about describing an existing API. JSON API is a way to build your API in the first place.

Here is my personal standard for a JSON API:

1. The format should pass a basic JSON linter.

2. (where applicable) The document should represent the logical object type that you'd expect from the endpoint.

Anything beyond this is getting too close to XML for my tastes.

As a co-author of JSON API, I'd like to address the value proposition of the specification.

First of all, JSON API provides guidance for a lot of API design decisions that are usually contentious because, although they may seem trivial, they must be made with care in order to be consistent and viable. For instance, JSON API provides guidance for:

* fetching related resources together with primary resources in order to minimize requests

* limiting the fields included in a response document in order to minimize payload size

* paginating data with links that work well with any pagination strategy (page-based, offset-based, and cursor-based)

* sorting results, even potentially based on related resource fields

* representing heterogeneous collections and polymorphic relationships

* representing errors

In my opinion, some of the most useful guidance is related to the representation of relationships, which can include:

* embedded linkage data which "links" primary resources with related resources in a compound document.

* relationship URLs which can retrieve linkage data and directly manipulate the relationship without affecting the underlying resources.

* related resource URLs which can retrieve related resources themselves.

In addition, JSON API supports extensions and has official extensions for performing bulk updates and JSON Patch operations. These particular extensions provide extremely useful mechanisms for transactionally working with multiple resources.
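As a rough illustration of the fetch-side guidance above, requests in the spec's style look like the following. The `articles` resource and its field names are made up for the example; the `include`, `fields`, `sort`, and `page` query parameter families are the ones the spec defines or reserves:

```
GET /articles?include=author,comments.author
GET /articles?fields[articles]=title,body&fields[people]=name
GET /articles?sort=-created,title
GET /articles?page[number]=3&page[size]=10
```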

All of this guidance has been forged over the course of two years based on the feedback and contributions of hundreds of developers. Even if you were to incorporate some of this guidance in your APIs piecemeal, it would probably save your team many hours of design discussions.

However, the bigger value proposition of JSON API is just now beginning to be realized. Since we've tightened the "spec" into a proper spec by eliminating many MAYs and SHOULDs, it is now possible to reliably build implementations with a guarantee of compatibility. It's regrettable that it took us so long to move from providing loose guidelines to a more rigid spec, but I truly believe that the awkward intermediate phase provided invaluable feedback that ultimately informed the design of the spec and will improve adoption of 1.0.

We are seeing client libraries being built in JavaScript, iOS, and Ruby, and server libraries in PHP, Node.js, Ruby, Python, Go, and .NET. Although not all are (yet) compliant with the latest changes to the spec, we are tracking progress carefully as the spec nears 1.0 to ensure that we smooth over any rough spots encountered by implementors. I'm personally involved in developing Orbit.js and JSONAPI::Resources, both of which are nearing JSON API compliance.

I can say that it's incredibly satisfying to build applications with compatible client and server libraries. It lets me focus on the design of my application's domain instead of myriad details related to transforming data representations and protocol usage. Even better is the knowledge that other clients can easily use my API and all of its advanced features, regardless of their language and framework. This is ultimately the promise of JSON API, and it won't be long before it's realized.

Five years ago, much of my team's time was being consumed fighting with the WSDL and SOAP specs, and their various subtly incompatible or outright broken implementations on our target platforms.

A switch to JSON-RPC 2.0 meant everyone now understood the entirety of the specifications our APIs relied on, and if we couldn't find a good implementation for our target, we could either fix one, or write one ourselves in just a few hours.

The productivity gain was effectively infinite.

I look at the JSON API specification in all its architecture astronaut glory, and weep.

I love that the logo is invalid JSON. (see: http://www.json.org/)

ugh..please no.

When you're working in a small/medium sized team, everyone seems to have an opinion about format of an API. I'm thankful that a resource like this exists because it's easy to read and seems very well thought out.

Treating integer values as strings really rubs me the wrong way. They might as well wrap boolean and null values in quotes too!

It's hard to take the API seriously when they fk up some of the best parts of the JSON spec.

IDs are defined as strings, not integer values. These examples happen to use IDs that look like integers, but you can just as easily use things like UUIDs.
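A tiny sketch of why string IDs help: numeric-looking IDs and UUIDs get handled uniformly, as opaque strings. The resource data here is invented:

```python
import uuid

# Both resources carry string ids; clients compare them as opaque
# strings whether they look numeric or not.
resource_a = {"type": "people", "id": "9"}
resource_b = {"type": "people", "id": str(uuid.uuid4())}

for r in (resource_a, resource_b):
    assert isinstance(r["id"], str)
```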

> first-name

No, thanks.

Also 'linkage'. And 'data', the name you give things when you're not naming them.

Attribute names are opaque to the base spec. Dasherization is only a recommendation (see http://jsonapi.org/recommendations/#naming) and not by any means required.

How do they expect to use it in JS? obj['first-name']? No way.

Seems to be treading a lot of the same ground as OData.

Let's see... JSON-RPC, JSON API, HAL, Swagger, OData, Jsend, Google's JSON style guide, more?

You might want to add http://loopback.io/, which is a Node.js module for implementing JSON APIs. It has its own way of doing it, so we could say it's a standard from the industry. Maybe it's too early to call it a "standard", but it's getting traction. They're sending people around to give talks and one-hour training events. I'm coordinating a team of developers using it. So far it's OK.

I should clarify: I'd love to see standardization in this area, but I'd also like to see a clear winner.

Exactly! Swagger, WSDL and other standards already solved this problem. Why not put the effort on making those better?

Standard? Proposed by whom? Accepted by whom? Any names, documents? Standard is a serious word.

Currently: registered media type with IANA. Eventually we'll pursue an IETF RFC.

But where are the discussions? I can't find any link to discussions with developers on your site.

Currently, discussion happens on the GitHub repo, though we've thought about putting up a Discourse https://github.com/json-api/json-api/issues/295

From the example on the first page, it looks like what's needed is a JavaScript object graph tool like JSOG [1], which allows a large cyclic object graph to be serialized and deserialized to/from JSON.

[1] https://github.com/jsog/jsog

Looks like XML implemented as JSON. If you want that much structure, use XML.

What about RAML? You can hook it up to JSON Schema validation.

When this is done, they will have re-invented SOAP.

I hope not!

You are mostly there. Granted, even your JSON is nicer than SOAP, but it is very, very far from plain JSON.

About the only useful thing in the spec is the separate errors list, so that I can easily check whether a request worked as it should.

Also, there is apparently no way to authenticate, so you either have to allow everybody to create/alter whatever on your server, or nobody. Yikes.

jsonapi bears absolutely no resemblance to SOAP. I do not understand this comment at all.

Our cool new thing is not like that old obsolete thing everybody hates. Not! Not! Not!

Please keep HN comments substantive.

Why would authentication need to be specified here to limit what a user can do in a conforming server?

Does it support streaming?

What!? No JSON-LD?

Does anyone else find it funny that it's a .org domain?

Oh great. We killed XML because the astro-architects had turned it into a bunch of crap, and now they are doing the same thing to JSON.

If I get my way, whatever I work on will be tailored to break if the client assumes this "standard".

While I wouldn't have anything break itself to make a point, I'd much rather have documentation than a standard API format. Let's form a new JSON API standard:

The result MUST be in JSON format.

Documentation of the API MUST exist.

Errors SHOULD set the HTTP status code as appropriate.

That is all.

I would say there is no need for this.

JSON/REST are beloved because it's simple and just works. Originally XML itself was simple too with just DTD as "Schema".

BUT then all the additional standards like XML Schema, XSLT, and XML-RPC/SOAP turned it into an ugly duck.

  JSON API is extracted from the JSON transport implicitly 
  defined by Ember Data's REST adapter. 
As JSON-API comes from the Ember.js camp, I remember the Ember vs. Angular discussions on HN 1-2 years ago. Google Trends shows Ember.js has gone nowhere. https://www.google.com/trends/explore#q=AngularJS%2C%20Ember... (and now there is also React). So I guess the "Ember.js way" to do things failed. At some point we should learn from history, and not turn JSON into SOAP/WSDL - https://xkcd.com/927/ .

XML-RPC/SOAP and many other remote procedure call transport techniques failed in the long term and are a big pile of legacy mess (CORBA, DCOM, RMI, SAP RFC, .NET Remoting, XML-RPC, etc.): http://en.wikipedia.org/wiki/Remote_procedure_call

> JSON/REST are beloved because it's simple and just works.

There is no consensus on the details of "REST". Every implementation has its subtle quirks and solutions to the same basic problems. Every client library must be customized to adapt to these differences. Since the fundamental aspects aren't agreed upon, there is little chance of defining compatible features at a higher level.

On the other hand, the value proposition for JSON API is rooted in consensus [1]

[1] https://news.ycombinator.com/item?id=9281102

> As JSON-API comes from the Ember.js camp

JSON API has been proven in many languages and frameworks. The JSON API team is diverse and has only one member on the Ember core team (@wycats). More importantly, JSON API has been influenced by hundreds of contributors with diverse backgrounds and specialties.

At this point, JSON API is pretty far from its original extraction from Ember Data. It has come full circle to the point that a JSON API Adapter is being written from scratch for Ember Data (and is now almost fully compatible).

> On the other hand, the value proposition for JSON API is rooted in consensus [1]

> JSON API has been proven in many languages and frameworks. The JSON API team is diverse and has only one member on the Ember core team (@wycats). More importantly, JSON API has been influenced by hundreds of contributors with diverse backgrounds and specialties.

CORBA (one of the cited failed examples above) is notable for having been rooted in consensus. It pulled together dozens of use-cases from all over industry... and became a giant, hulking, overly complex, unpleasant-to-use mess.

I guess I'm confused then. If I wanted a complicated standard but one with broad consensus and lots of library support behind it, why wouldn't I just use XML-RPC? Other than not being XML, what does JSON API offer? Because at first glance, it looks just about as verbose and unfriendly as XML-RPC to me.

JSON-RPC is what you get most of the time anyway, since very few people have a clue what REST is. Not that it's a bad design by itself; many such APIs are concise and workable without being REST.

What mostly concerns me is the added overhead of fitting everything under a single standard. Sometimes you need a state-aware API, sometimes a data-retrieval API, sometimes a remote interface for manipulating objects constrained by server-side rules.

Why it all has to be one standard escapes me. You mostly get absurdly convoluted operations when you try to fit one model under another.

JSON API adheres to RESTful principles and constraints: it uses HTTP verbs to fetch and manipulate resources and provides allowances for hypermedia. XML-RPC is neither RESTful nor supportive of hypermedia and has no place in building a REST API.
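To make the contrast concrete, here's a tiny sketch of the verb-to-operation mapping JSON API prescribes (the `verbFor` helper is made up for illustration, not part of any library):

```javascript
// JSON API maps CRUD operations onto standard HTTP verbs;
// updates use PATCH rather than PUT, and XML-RPC-style
// POST-for-everything is out.
const verbFor = (op) =>
  ({ fetch: "GET", create: "POST", update: "PATCH", remove: "DELETE" }[op]);

console.log(verbFor("fetch"));  // "GET"
console.log(verbFor("update")); // "PATCH"
```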

I'm definitely in the lenient camp on a [simple -- complex] scale. So I agree JSON/REST are simple and just work for most tasks. That's certainly how I've mostly used the two technologies.

But I definitely see some benefits with json-api. Here are two:

1. When writing large single-page apps in JS, client <-> backend communication will need some structure. Could be a few project-internal guidelines, or something like json-api.

2. Client frameworks and backend frameworks are often separate projects (e.g. Django/express/sinatra server-side or React/Backbone/Mithril client-side). It's helpful with a standard to converge on for authors of backend json/rest serializers, and data adapters client-side.

I'm no expert on RPCs, but I'm pretty sure it's something quite different from jsonapi.

Your Google Trends search is misleading, since you compared different kinds of terms ("ember.js" vs. "angularjs").

here is the trend for "ember.js vs angular.js" http://www.google.com/trends/explore#q=ember.js%2C%20angular...

I used the official names, or at least what Wikipedia lists as official name: http://en.wikipedia.org/wiki/Ember.js , http://en.wikipedia.org/wiki/AngularJS. Adding "React" (Facebook's React) is even more difficult.

These kinds of searches are hard to get right. The name of the Ember project is "ember"; have a look at the website. The search term "ember" is too general for comparison, though.

I guess the point is that google trends is often a bad metric.

What the hell is a PATCH request? This is not part of HTTP, as far as I can recall. Why not use PUT instead?

I don't get why so much negativity towards this. I have recently used this in a project where I was working on an iOS client to talk with a REST API based on JSON API.

I'm not sure about the server-side implementation, but using it client-side was a breeze. I managed to automate my web-service calls so that the required JSON is parsed/generated and the resources are updated client-side automatically.

Then, all I had to do to interact with new resources was register them client-side with their mappings.

@interface ModelResource : NSObject

+ (NSString *)resourceName;
+ (NSString *)resourcePluralName;

- (void)createMappings;

- (void)addAttributeMapping:(NSString *)attributeName toProperty:(NSString *)propertyName;

- (void)addLinkMapping:(NSString *)linkName toProperty:(NSString *)propertyName withResourceName:(NSString *)resourceName;

- (void)fromJSON:(NSDictionary *)jsonData;
- (NSDictionary *)toJSON;

@end


@interface UserModel : ModelResource

@property (nonatomic, copy) NSString *fullName;
@property (nonatomic, copy) NSString *firstName;

@property (nonatomic, weak) EmailModel *emailAddress;

@end


@implementation UserModel

+ (NSString *)resourceName { return @"user"; }
+ (NSString *)resourcePluralName { return @"users"; }

- (void)createMappings {
    [super createMappings];

    [self addAttributeMapping:@"fullName" toProperty:STR_PROP(fullName)];
    [self addAttributeMapping:@"firstName" toProperty:STR_PROP(firstName)];

    [self addLinkMapping:@"emailAddress" toProperty:STR_PROP(emailAddress) withResourceName:[EmailModel resourceName]];
}

@end
