
JSON API - steveklabnik
http://jsonapi.org/
======
almost
I think it's great that we're talking about standardising how we build RESTful
JSON APIs. However, I don't think this has got it quite right yet.

What's the reason for the top-level rel? It seems like it's just there to stop
the URLs from being repeated and to save space, but isn't that what gzip is
for? Why complicate the data format and require all that extra logic when gzip
would remove most of the redundancy before transmission anyway?

Also the name is a bit of an annoying land grab. It'll make it hard to talk
about JSON APIs without getting them confused with JSON APIs that specifically
use "JSON API".

Lastly, it really seems based on a Rails ActiveRecord-style data store: it's
assuming IDs are the most important thing and that links are all relations
that point to other objects within the system. Proper hyperlinks can point
anywhere and can link together disparate systems which don't necessarily all
use the exact same formats.

~~~
wycats
> What's the reason for the top level rel? It seems like it's just there to
> stop the urls from being repeated and to save space, but isn't that what
> gzip is for?

It also makes it possible to cache things locally indexed on their IDs, and to
form URLs that make requests for just the precise documents that aren't
available locally. In order to achieve this, it's necessary to have both (1)
IDs, and (2) a way to convert a list of IDs into a single request for all of
the documents at once.
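A rough sketch of the pattern described here (the helper names and URL shape are hypothetical, not part of the spec): keep a local cache keyed by ID, then form one URL covering only the documents that are missing.

```python
# Hypothetical illustration of ID-based client caching: given the IDs
# referenced by a compound document and a local cache, build a single
# URL that fetches only the documents we don't already have.

def missing_ids(referenced_ids, cache):
    """Return the referenced IDs not already present in the local cache."""
    return [i for i in referenced_ids if i not in cache]

def bulk_url(resource, ids):
    """Form one request URL for several documents at once (assumed URL shape)."""
    return "/%s?ids=%s" % (resource, ",".join(str(i) for i in ids))

cache = {9: {"id": 9, "name": "@d2h"}}   # person 9 already seen locally
referenced = [9, 12, 17]                 # IDs linked from a new response

need = missing_ids(referenced, cache)
print(bulk_url("people", need))          # /people?ids=12,17
```

With URLs alone as identifiers, the client would have no obvious way to collapse the two missing documents into one request like this.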

> Also the name is a bit of an annoying land grab

HAL is "The Hypertext Application Language". In general, people tend to be
using generic names for these things, so I chose an available, generic name.

> Lastly, it really seems based on a rails active record style data store,
> it's assuming ids are the most important thing and that links are all
> relations that point to other objects within the system

I reviewed a large number of server-side solutions (Firebase, Parse, CouchDB,
Django, Rails) and they all had the concept of an ID for the document. As I
said above, this ID is useful to keep track of which documents have already
been cached locally, and how to formulate a URL that makes a request for just
the missing documents. I don't consider this solution to be particularly tied
to ActiveRecord.

Less importantly, it is also more convenient to cache documents on the server
using their IDs (or slugs, or whatever the storage wants to use), and allow a
top-level configuration to define how to generate URLs. This allows server-
side solutions to serialize and cache documents without having to be plugged
into the router architecture, but enforces a URL-centric view once the HTTP
response is built.

~~~
troels
> It also makes it possible to cache things locally indexed on their IDs, and
> to form URLs that make requests for just the precise documents that aren't
> available locally. In order to achieve this, it's necessary to have both (1)
> IDs, and (2) a way to convert a list of IDs into a single request for all of
> the documents at once.

Couldn't the id column contain a canonical URL then? E.g.:

    
    
        {
          "posts": {
            "id": "http://example.com/posts/1",
            "title": "Rails is Omakase",
            "rels": {
              "author": "http://example.com/people/9"
            }
          },
          "people": [{
            "id": "http://example.com/people/9",
            "name": "@d2h"
          }]
        }

~~~
Orva
I think the biggest benefit from this kind of specification would be for data
that is (partly) distributed and (partly) shared between different hosts, as
there would be at least some common ground for how clients and servers
communicate. IDs are very abstract and don't necessarily tie data to a
particular host. Of course, for some data domains the ID could be a URL, but
that is a decision made by the data provider. Making it a spec decision would
be crippling for overall use.

~~~
troels
I'd say it's the other way around. URLs are opaque to the client; IDs imply
more knowledge of the implementation. E.g. the client would have to know which
host to communicate with and how to construct URLs from IDs. With hyperlinked
documents, all the client needs to know is HTTP.

~~~
wycats
URLs are opaque, and can often serve as very useful IDs, but, alone, they
imply a one-at-a-time model of fetching documents, and this spec is trying to
provide a way to easily request only the documents a client needs in a
compound document.

Keep in mind that this spec actually requires that every ID be able to be
readily converted into a URL based on information found in the same payload,
so URLs are still front-and-center in the design. It just separates out the
notion of a unique identifier, so that it can be used in other kinds of
requests.
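As a rough illustration of that last point (this is not the spec's exact template syntax, just the general idea), converting an ID into a URL using template information carried in the same payload could look like:

```python
# Hypothetical sketch: a top-level templates map lets the client turn a
# bare ID into a full URL without constructing URLs from scratch.

def url_for(templates, resource, id_):
    """Expand a (simplified) URL template for one resource ID."""
    return templates[resource].replace("{id}", str(id_))

# Assumed shape of template info delivered alongside the documents:
templates = {"people": "http://example.com/people/{id}"}

print(url_for(templates, "people", 9))  # http://example.com/people/9
```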

------
bitcracker
It amazes me how programming languages and APIs look more and more like Lisp.
Modern languages copy essential features from Lisp, and JSON, one of the most
popular JS data formats, looks almost identical to Lisp s-expressions.

Someday also more people will realize how useful and effective the equivalence
of control structures and data really is.

JSON:

        { "posts": { "id": "1", "title": "Rails is Omakase",
          "rels": { "author": 9, "comments": [ 5, 12, 17, 20 ] } } }

LISP:

        (posts (id 1) (title "Rails is Omakase")
          (rels (author 9) (comments (5 12 17 20))))

~~~
dualogy
Yeah. Well. Also known as "in the end, everything is just an abstract syntax
tree". But while a few people will always delight and excel in reading and
writing everything as s-expressions, many of us will probably always find
either line-breaks or different kinds of braces, brackets and little syntactic
doodads make for easier and saner writing and reading -- even if the parser
needs to do a bit more work.

No matter what kind of code I'm looking at: "could I express this as
s-expressions?" Sure. "Would I want to?" Hell no.

~~~
bitcracker
> "could I express this as s-expressions?" Sure. "Would I want to?" Hell no.

Of course it is possible to implement syntactic sugar in Lisp which supports
JSON style expressions. DSLs are common in Lisp, and that actually became a
weakness of Lisp (so-called "DSL hell").

The interesting thing about s-expr is that Lisp doesn't need special data
conversion tools to handle them. Even control structures are expressed as
s-expr, and they can be created and modified dynamically which means that even
code can be exchanged at runtime on the fly.

~~~
arianvanp
This is one of the most attractive things about Lisp. The fact that the
language has no notion of compile time, eval time and run time. The user just
doesn't have to care about it. Very powerful.

~~~
vsync
This actually isn't true for Common Lisp.

There is a distinction between reader macros and compiler macros, for example,
which is relevant for letting the use of special syntax be optional for end
users.

Certain things also need to be defined if you want them to be available in the
compile-time environment. And, sometimes you have to do a bit of extra work if
you want to have literal objects in your compilation environment and pass them
to runtime.

Check out <http://www.lispworks.com/documentation/HyperSpec/Body/03_bc.htm>
though it will probably take you a few readings to make sense of it; I know it
did for me.

But for the most part things happen automatically.

------
ch0wn
I've read a lot of RFCs and drafts for media types lately, and what strikes me
reading this spec is the very liberal use of MUST, which seems to me like an
unnecessary violation of the robustness principle that Jon Postel first
introduced in RFC 761 (TCP). Mike Amundsen describes it in his book 'Building
Hypermedia APIs […]':

    
    
        Media type designers should keep Postel in mind. Designers can make
        supporting the Robustness Principle easier for implementor by keeping the
        number of MUST elements in the compliance profile to a minimum. The fewer
        MUST elements implementors need to support, the more likely it is that they
        will be able to craft compliant representations using that media type.

~~~
steveklabnik
Yes, I want to reduce some of the MUSTs, or at least justify them more
strongly. Right now they're based on what our running code absolutely needs.

~~~
wycats
Indeed. In my experience, looser requirements in this kind of thing just lead
to tears on the part of client and server implementations. I'll happily reduce
some of the _MUST_ s to _SHOULD_ s or _MAY_ s if it makes sense for the
communication channel to consider them optional.

~~~
ch0wn
That's great to hear. One particular example which I found needlessly strict
is this:

    
    
        The request MUST contain a Content-Type header whose value is
        application/json.  It MUST also include application/json as the only or
        highest quality factor.
    

It makes sense for a fully compliant implementation to send those headers, but
the way I understand MUST here is that a server would reject any request
without them.
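For what it's worth, the "highest quality factor" language refers to HTTP content-negotiation q-values. A small sketch of what checking that might look like (`parse_accept` is a made-up helper, not anything from the spec):

```python
# Parse an Accept-style header into (media_type, q) pairs, highest q
# first, so a server can check that application/json ranks at the top.

def parse_accept(header):
    """Very simplified q-factor parsing; a real parser handles more cases."""
    entries = []
    for part in header.split(","):
        pieces = [p.strip() for p in part.split(";")]
        q = 1.0  # q defaults to 1.0 when absent
        for p in pieces[1:]:
            if p.startswith("q="):
                q = float(p[2:])
        entries.append((pieces[0], q))
    return sorted(entries, key=lambda e: -e[1])

accept = "text/html;q=0.8, application/json"
print(parse_accept(accept)[0][0])  # application/json
```

Under a strict reading of the MUST, a request whose top-ranked type wasn't application/json would be rejected outright.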

~~~
steveklabnik
I've filed an issue for you about this:
<https://github.com/json-api/json-api/issues/2>

------
rcsorensen
It's great to see this laid out in a single place.

Couple of things that might be nice to see here:

* Pagination concerns

You call out "meta: meta-information about a resource, such as pagination",
but that doesn't say whether things are 0/1 based, what the names of the
values for per-page are, how to indicate length of the underlying collection,
etc.

* Search concerns

I don't know whether this is an area that has best practices yet, but having
it said and decided on something called "jsonapi.org" could save many people
many hours of pain in the future.

* Elective compound documents

Bit more of a reach, but there have been a bunch of times I wanted to say
"this resource, and these relations of it, and those relations of those". And
in some cases partial documents (id and name alone) of those tertiary
relations.

~~~
steveklabnik
I am literally sitting on a plane right now, but as soon as I get off, will be
registering this type with IANA. So that helps #2.

#3 will be do-able I think. I'm interested in this too.

#1 is something that needs a good answer, yes. I THINK it's out of the scope
of this, as there are already registered REL values that handle this, but we
should clarify.

I will be happy to answer anyone else's questions after I land, it's time to
turn off electronic devices.

~~~
1qaz2wsx3edc
Of all people.

What about hypermedia instead of rels?

I feel like `ids` param is a hack, clearly the system has a group of objects,
should that not be its own collection?

For instances:

`GET /friends?ids=1,34,54`

Could be:

`GET /friends/best.json`, where `best`, in some form known to the system,
represents 1,34,54.

I don't want a RESTful JSON API. I want a REST JSON API.

~~~
steveklabnik
> What about hypermedia instead of rels?

... can you elaborate a bit more on what this means? I don't understand what
you're trying to say.

> I feel like `ids` param is a hack, clearly the system has a group of
> objects, should that not be its own collection?

I'm not 100% sure what you mean here either, but I'm reading it as "Why not
use a comma rather than passing a list of GET parameters?"

The answer is "I don't think that's particularly important either."
Constructing your own URIs is against the very spirit of REST. Let the server
do that for you.

~~~
tomjen3
You shouldn't ever request ids from a server; you should request an id -- as a
single item -- or a collection defined by the server and named (such as
user/friends.json, not users?id=foo,bar,baz,foobar).

Basically, REST APIs map exactly one resource to a URL and should never use
the hack that is ?.

~~~
steveklabnik
That is simply not true. Can you provide me with some sort of citation on
this? Fielding doesn't talk about URL construction in his thesis, and, in
fact, specifically mentions things like collections and multiple entities
residing in one resource.

Also, ? is not a 'hack', I don't know where you're getting that from either.

------
coderzach
Why not use JSON HAL?

<http://stateless.co/hal_specification.html>

~~~
mahmoudimus
I second this, what's the point of not using hal?

~~~
wycats
There are several reasons I chose not to use HAL:

* HAL embeds child documents recursively, while JSON API flattens the entire graph of objects at the top level. This means that if the same "people" are referenced from different kinds of objects (say, the author of both posts and comments), this format ensures that there is only a single representation of each person document in the payload.

* Similarly, JSON API uses IDs for linkage, which makes it possible to cache documents from compound responses and then limit subsequent requests to only the documents that aren't already present locally. If you're lucky, this can even completely eliminate HTTP requests.

* HAL is a serialization format, but says nothing about how to update documents. JSON API thinks through how to update existing records (leaning on PATCH and JSON Patch), and how those updates interact with compound documents returned from GET requests. It also describes how to create and delete documents, and what 200 and 204 responses from those updates mean.

In short, JSON API is an attempt to formalize similar ad hoc client-server
interfaces that use JSON as an interchange format. It is specifically focused
around using those APIs with a smart client that knows how to cache documents
it has already seen and avoid asking for them again.

It is extracted from a real-world library already used by a number of
projects, which has informed both the request/response aspects (absent from
HAL) and the interchange format itself.

~~~
nona
Well, I'm guessing your second and third points could be tacked on to HAL.

I see your point with the first one, although I must say in our APIs we have
rarely encountered duplication. And the recursive nature of HAL makes it
really easy to generate on the server-side.

(By the way, we've introduced a sideloading convention where sub-resources are
only present under _embedded when you pass along ?embedded=author,comments.
This way clients can easily request only what's needed.)

------
akamel
This protocol seems to be a solution for Ember; there are already other very
similar protocols. Why is this called jsonapi.org and not emberjsonapi.org?

As noted in other comments, there is JSON HAL.

There is also OData (you might not appreciate that it is an MS initiative, but
it's pretty well established and has many providers):
<http://www.odata.org/libraries/>

~~~
wycats
JSON HAL is a document format only; it does not formalize a protocol. JSON
API is a solution for any "smart" client that is capable of caching documents
and intelligently limiting subsequent requests. In general, I believe it will
be broadly useful for JavaScript frameworks (and native libraries) that want
to abstract the nitty-gritty of how a document comes over the wire from its
"model" representation.

~~~
akamel
got it; so how would you compare it to odata?

~~~
wycats
Take a look:
<http://www.odata.org/documentation/odata-v2-documentation/json-format/>

~~~
akamel
:) yes i'm very familiar with it...

~~~
wycats
And [this][1] looks similar to JSON API to you?

[1]: http://www.odata.org/documentation/odata-v2-documentation/json-format/#6_Representing_Collections_of_Entries

~~~
akamel
The format is not identical! I was looking for a philosophical comparison
between the standard (OData) and JSON API; I am not seeing much more than a
simplified response format (a transform, really).

I was hoping to see something more substantive in a comparison other than:
'results are returned under "d" instead of directly under "posts"'.

Most projects need to answer the simple question of 'why?' -- 'why do I as a
project exist?' jsonapi.org's answer is full of 'Ember'; hence why my original
comment hinted that it should possibly be called 'emberjsonapi'.

If this is intended to be generic, it should ditch references to Ember and
instead refer to similar standards, explaining why it is better than them.

By all means, if your intent is to make a more elegant standard (again,
comparing to OData) then that's a worthy goal. Stating _that_ will help your
'consumers' understand what they are getting.

edit: clean up

~~~
wycats
The references to Ember are in the introduction only and provide some
historical context for the project. I believe that it's important for
standards to come out of real experience, and that the context of that real
experience will help others understand the goals. That's why I provided the
historical background.

JSON API's design is based around a smart client that wants the ability to
avoid making unnecessary requests for documents it already has, and to provide
a format that avoids unnecessary duplication in compound documents. It also
aims to be relatively easy to implement, both on the client and server, with
tools and frameworks that are already widely in use, following familiar
idioms.

~~~
upthedale
But you haven't yet answered Akamel's question of why this exists, with
comparisons to the existing standards (OData in this case).

Everything I've seen just says that this is simply a subset of OData's
functionality. And the features that are being asked for by users commenting
on the OP are ones already provided by the existing standard.

And what about querying the data? There have already been questions elsewhere
in this thread about how this would work. OData already provides very rich
query support, including document projection (returning a subset of a
document) through to complex queries navigating multiple relationships (I
could easily ask, through a GET query string, to return all the actors who
have starred in films that belong to the comedy genre -- navigating 3
collections: actors, films, genres). If queries are out of scope, then fine,
but it's clear people want to query their data.

So please answer what this offers over existing standards if you want to
compete.

Oh, and in response to your earlier post of what OData JSON looks like, you're
much better served by linking to v4, not v2 as you did:

<http://docs.oasis-open.org/odata/odata-json-format/v4.0/csprd01/odata-json-format-v4.0-csprd01.html#_Toc355172930>

That doesn't look so dissimilar.

------
calebio
Is there any reason the top level objects are represented as an array of
objects vs an object keyed on ID? Sure the ID would have to then be a string,
but I feel that keying it on ID with direct lookup far outweighs having to
search for your item each time.

I feel that if you have an author with many comments:

        {
          "authors": { "1": { "id": "1", "comments": [1, 2] } },
          "comments": { "1": {}, "2": {} }
        }

it would be easier to say results['comments']['1'] instead of performing some
type of search across them each time.

~~~
oinksoft
Ordering. The ECMAScript standard does not specify property enumeration order.

~~~
alexkcd
The related documents are indexed by id from the primary document, so ordering
doesn't matter, and it makes sense to use the format suggested by calebio, as
it's both more efficient and terse. However, the primary document and other
nested entities which require ordering should return an array of objects.

Edit: clarification

~~~
calebio
I may be missing something, but how is searching an array of JSON objects more
efficient than direct lookup by ID?

~~~
alexkcd
I agree with you. "OP" was referring to your suggestion. I edited my previous
comment to clarify.

~~~
wycats
OP here. There are a lot of these kinds of decisions that need to be made for
an API like this. In general, I went with what we're already doing if it was
a toss-up. A big strength of JSON API, imho, is that it's an extraction from a
real-world system that a number of people are already using in some form.

It's important to note that the goal of JSON API is to be consumed by a
general-purpose client (like Ember Data), so the JSON will likely be processed
once and indexed as needed. In the system we extracted this from, the Array is
loaded into a Store, which indexes the documents by type and id, so future
lookups are quite efficient.
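A minimal sketch of that process-once-then-index approach (names are hypothetical; the real Store does considerably more):

```python
# Process the payload's arrays once, indexing documents by type and id,
# so later lookups are direct dictionary accesses.

payload = {
    "posts": [{"id": "1", "title": "Rails is Omakase"}],
    "people": [{"id": "9", "name": "@d2h"}],
}

store = {}
for type_, documents in payload.items():
    store[type_] = {doc["id"]: doc for doc in documents}

print(store["people"]["9"]["name"])  # @d2h
```

So the wire format can stay array-based while the client still gets O(1) lookups after a single indexing pass.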

~~~
alexkcd
That's a fair point, but using arrays only where ordering matters adds nice
semantics that can be used by a general-purpose client. Using a map for
related documents makes it explicit that document entries are unique and
unordered (it's implicitly assumed to be true in OP's case).

In addition, you'll find that you're using an array at the toplevel only for
the primary document when it is a collection, and for nested collections
within documents (such as comment ids). Semantics that general-purpose clients
can make use of.

~~~
wycats
Also a fair point.

The original reason for using Arrays (and something that still carries some
weight with me), is that people expected that Arrays be presented in a
particular order returned by the server. Indeed, the semantics of a to-many
relationship need to be set-like (in order to avoid nasty concurrent
modification issues), but people really wanted the ability to return an array
and have it "work as expected". In general, the right way to handle position,
imho, is to use a `position` attribute and sort on the client. After saying
all of that, perhaps this is a good reason to use ID indices, so people don't
get the wrong idea.

I'll sleep on it :)
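For what it's worth, the "`position` attribute, sort on the client" idea is simple enough to sketch (the field names here are hypothetical):

```python
# Treat the to-many relationship as a set on the wire, and recover order
# on the client from an explicit position attribute.

comments = [
    {"id": 5, "position": 2},
    {"id": 12, "position": 1},
]

ordered = sorted(comments, key=lambda c: c["position"])
print([c["id"] for c in ordered])  # [12, 5]
```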

~~~
alexkcd
I'm not saying you shouldn't use arrays altogether, just that you shouldn't
use them when you have uniqueness & no order.

For example, consider the posts.comments.users relationship. Here "posts" is
the primary document (and let's say a collection), "comments" and "users" are
related documents. The same user may have commented in multiple posts within a
single response, so which `position` attribute would you use in the related
"users" document? The answer is you don't, you can't, because the user appears
in different positions in different posts. The order of comments is defined by
the "post" document's "comments" _array_. Each comment document contains a
user id. You look up the user from the _unique_ "users" _map_ by that id.
There is no order that makes sense for related documents, since by nature of
being _related documents_ their entries may appear in different
places/positions in the parent document, where they are already referred to
from ordered collections (e.g. arrays of ids) or singular fields.

Hope that clears it all up :)

------
stilkov
First, I think it's awesome that you guys are documenting this for others to
re-use, whether it ends up being the one true format or not. Having options to
choose from is doubtlessly good.

Some thoughts:

1\. I'm not sure about the name. There will definitely be many JSON APIs that
don't use (your) JSON API for a long time, even if this becomes hugely
popular. I don't see how this won't lead to avoidable confusion in the
future. Given that it's very document-centric, why not use something like
"JSON Doc API" or similar?

2\. In the ID approach, why are the base URIs the client needs to know about
not always discoverable, e.g. using standard link relations? Or phrased
differently, why would I ever _not_ want to use the "URL Template Shorthands"
approach mentioned later?

3\. Why use application/json and not something more specific? I can see some
reasons, but would be interested in yours.

4\. On creation, if I accept the pain of generating an ID on the client and
can construct the URI using the template, why can't I use PUT instead of POST?

5\. If I use a POST to create something, why don't I get a 201 Created with a
Location header?

6\. I'd suggest to upgrade the "MAY" for caching to a "SHOULD".

/edited to match @steveklabnik's numbers

~~~
steveklabnik
1\. All media types are 'document centric.' And real REST APIs serve up
documents. So it seems fine to me. Also, IANA does not have an 'api+json' type
registered (until I did so last night), so the name isn't taken.

2\. If you're transitioning _to_ this kind from some sort of older kind.
Remember, this is extracted from real, working software; it's not some sort of
thing we imagined up. Not everyone is super on the hypermedia bandwagon yet,
and some will need to transition kind of slowly.

3\. I filed for 'application/vnd.api+json' yesterday, and so we'll be
changing the document as soon as the IANA gets back to me.

4\. You could, in theory. Allowing PUT seems fine, it just doesn't often seem
to be the case, so we didn't include it. I wouldn't mind having that in there.

5\. You should be, this is an oversight.

6\. That's very possible.

~~~
stilkov
> 1\. All media types are 'document centric.'

What I meant is that this is a particular kind of backend API, a very "model-
centric" one. Nothing wrong with that; I just don't think this is the one and
only kind, and thus it shouldn't take on the generic name.

> 2\. If you're transitioning _to_ this kind from some sort of older kind.

Understood. Maybe an approach is to allow for this to be specified optionally,
with the fallback of being hard-coded if it's not present?

~~~
steveklabnik
I think that the examples appear 'model-centric' because we're trying to reach
people that build very model-centric sites as of now, but resources can be
anything, so I don't think it's super specific. This is a good thing to think
about, though.

> with the fallback of being hard-coded if it's not present?

See above for some other good stuff about the IDs that wycats knew that I
wasn't as current on.

------
westurner
In terms of <http://en.wikipedia.org/wiki/Linked_data>, there are a number of
standard (overlapping) URI-based schemas for describing data with structured
attributes:

* <http://schema.org/docs/full.html>

* <http://schema.rdfs.org/all.json>

* <http://schema.rdfs.org/all.ttl> (Turtle RDF Triples)

* <http://rdfs.org/sioc/spec/>

* <http://json-ld.org/>

* <http://json-ld.org/spec/latest/json-ld/>

* <http://json-ld.org/spec/latest/json-ld-api/>

* <http://www.w3.org/TR/ldp/> Linked Data Platform TR defines a RESTful API standard

* <http://wiki.apache.org/incubator/MarmottaProposal> implements LDP 1.0 Draft and SPARQL 1.1

------
jeffamcgee
The twitter API originally used pages, but they realized it was a mistake:
<https://dev.twitter.com/docs/working-with-timelines>. The way the Facebook
API does it is a lot more sane:
<http://developers.facebook.com/docs/reference/api/pagination/>

I think that you should specify the format for cursor based paging of resource
collections. One way to do it would be to require a url to get more results:

    
    
        {
          "posts": [...],
          "meta": {
            "next": "/posts/search?q=baseball&after=1234"
          }
        }
    

Another option would be for it to be a key/value pair that must be added to
the URL:

    
    
        {
          "posts": [...],
          "meta": {
            "next": "after=1234"
          }
        }
    

Either way, REST clients should treat it as a meaningless string.
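A hedged sketch of such a client (the page contents and `fetch` are stand-ins for real HTTP responses and calls; the point is that the cursor is opaque and just gets followed):

```python
# A client that follows an opaque "next" cursor from meta until the
# server stops sending one, without ever interpreting the cursor string.

pages = {
    None: {"posts": [1, 2], "meta": {"next": "after=2"}},
    "after=2": {"posts": [3], "meta": {}},
}

def fetch(cursor):
    """Stand-in for an HTTP GET using the server-supplied cursor."""
    return pages[cursor]

def all_posts():
    posts, cursor = [], None
    while True:
        page = fetch(cursor)
        posts.extend(page["posts"])
        cursor = page["meta"].get("next")
        if cursor is None:
            return posts

print(all_posts())  # [1, 2, 3]
```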

~~~
steveklabnik
Currently, we don't say anything about searching. That's really an
application-level concern, not something that needs to be in this spec.

(So you'd define your own rels and use them, doesn't affect this level of
abstraction.)

------
gavinjoyce
A few additions that I'd like to see:

* Standardized paging

* Optional side-loading - GET /albums.json?include=artists,songs

* Multiple meta elements - so we'd have "albums_meta", "artists_meta" and "songs_meta" in the example above. This allows us to include 'has_n' relationship paging data.

~~~
wycats
Thanks for the feedback :)

Standardized paging seems to come up a lot, so it seems like a good thing to
add once the core spec stabilizes. Optional sideloading also has come up a few
times, and seems easy to add as a _MAY_ in the spec. I need to flesh out the
meta stuff in general, and the ability to have "meta anywhere" as well as top-
level metas is coming.

~~~
gavinjoyce
Nice, thanks. I've been building an alternative to ActiveModel::Serializers
which provides these features.

<https://github.com/RestPack/restpack-serializer>

In the light of your proposals, I'll either implement JSON API or switch back
to AM:Serializers. Side-loading, paging and Ember Data compatibility are my
main goals.

~~~
steveklabnik
I'd love to have your feedback from what you've learned about building stuff
with restpack. <https://github.com/json-api/json-api/> is the repo, please
open up issues for any questions/comments :)

------
pserwylo
This is important. We've just been implementing a SOAP interface to our
software, and have been discussing the best way to provide a JSON API using
the same mechanism. The sticking point is probably that there is no standard
way to define an API like there is with SOAP.

One question though: I notice that the language they use is very similar to a
typical RFC. Is this an RFC? And if not, why not? I'm a little naive about the
process for submitting them and getting them accepted, but it would be great
if this ended up as an official RFC that could be referred to just like
SOAP/WSDL.

~~~
upthedale
I strongly suggest you have a look at OData. The latest developments define a
similarly lightweight JSON data format. It does everything this proposal does,
and more (methinks the things this doesn't do will likely be added as time
goes, simply reinventing the wheel - e.g. see the discussions above on
pagination or rich query support).

To address your issues around defining a standard JSON API - OData is itself
the standard API for any OData service. All that would differ between your
OData service and mine is the schema and the data inside. How you explore that
schema and access that data is what OData defines.

------
southpolesteve
What timing! I just released the first version of a rails engine that
automatically builds APIs to match Ember Data. It also uses active model
serializers. Check it out here: <https://github.com/southpolesteve/api_engine>

I would love to get some feedback on the initial version. I will definitely be
implementing more of the OP's spec this weekend.

------
eranation
This is great, but I can't seem to find the NoSQL-flavored support. It seems
great for relational-style data models, but (I'm sure I'm missing something) I
didn't see support for fully nested document models. Did I entirely miss the
point?

~~~
steveklabnik
The implementation detail of "SQL or NoSQL" should not bubble up to your API.
The data store you use is totally irrelevant, that's the entire point of
encapsulation.

~~~
almost
You've just summed up what bothers me about this format.

~~~
ch0wn
This sounds to me as if you're not really in search of a loosely coupled
hypermedia API schema, but an RPC mechanism.

~~~
almost
Exactly the opposite: I don't want a scheme that's tied to assumptions based
on relational databases.

------
mahmoudimus
The PATCH mechanisms seem like RPC to me. Having an operation that is passed
in the payload, i.e. "replace", is awkward.

Why can't you just PATCH a resource?

    
    
        PATCH /resource
    
        {
            "src": "newvalue.png"
        }

~~~
wycats
The `PATCH` mechanism is an HTTP verb (RFC 5789:
<http://tools.ietf.org/html/rfc5789>) using a standard patching mechanism (RFC
6902: <http://tools.ietf.org/html/rfc6902>). Both are RFCs that seemed like
good foundations to build on.

~~~
mahmoudimus
Ah, I hadn't heard of RFC6902. I'll give it a read.

I'm still of the opinion that it might be overkill to use only replace from
that RFC when you can just PATCH the actual field to change.

Is there some bit of wisdom or experience that I'm missing?

~~~
wycats
The main reason was to unify patches to attributes with patches to
relationships, which do require richer semantics.

It also makes it really easy to add a compound PATCH (updates to
posts/1/title, posts/1/rels/author, posts/2/body, etc. all at the same time)
in a single format. Once I bought into JSON Patch for the rest of this stuff,
I figured I may as well use it for attributes :)
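To make that unification concrete, here's a toy subset of RFC 6902 -- just the "replace" operation -- showing one format patching both an attribute and a relationship (`apply_replace` is a simplified illustration, not a full JSON Patch implementation):

```python
# Apply JSON Patch "replace" operations to a nested dict, so the same
# patch document can update an attribute and a relationship.

def apply_replace(doc, ops):
    for op in ops:
        assert op["op"] == "replace"       # only this subset is handled here
        keys = op["path"].strip("/").split("/")
        target = doc
        for k in keys[:-1]:                # walk down to the parent container
            target = target[k]
        target[keys[-1]] = op["value"]
    return doc

post = {"title": "Rails is Omakase", "rels": {"author": 9}}
patch = [
    {"op": "replace", "path": "/title", "value": "Omakase"},
    {"op": "replace", "path": "/rels/author", "value": 12},
]
apply_replace(post, patch)
print(post)  # {'title': 'Omakase', 'rels': {'author': 12}}
```

The same shape extends naturally to the compound PATCH case mentioned above, since each operation carries its own path.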

~~~
mahmoudimus
Interesting, how can I get more involved with how this is going to shape up?

~~~
steveklabnik
Here it is: <https://github.com/json-api/json-api>

------
tericho
This is fantastic, looking forward to promoting it for widespread adoption
once it's stable.

------
tomjen3
That scheme is horrible. ? should never be part of a REST-like URL, and you
certainly shouldn't request more than one id at a time -- the data you need to
show should be included in the JSON string.

~~~
stilkov
> ? should never be part of a REST-like URL

Why would you say something like that? There's no basis for that at all. From
a REST POV, URIs are just opaque identifiers, the characters they're made up
from don't matter a bit.

------
dacort
I was a little confused that an "author" rel would be retrieved from a
"people" resource. Is there a way to define that authors are people, other
than the client knowing this?

~~~
steveklabnik
1\. I think that this is a typo

2\. This spec is for protocol-level semantics, you define your application
semantics with a profile link in the meta section.

------
jorisw
Needs an introduction paragraph explaining what the purpose of the site is.

------
the1
please at least include referer link in the response

~~~
steveklabnik
That'd normally be included in headers, it's not really relevant to a media
type.

