
API Design Guide - andybons
https://cloud.google.com/apis/design/
======
andreygrehov
I would like to add Microsoft's API Guidelines [1] here, which is also a well
written document and can be helpful to anyone designing an API.

[1]: [https://github.com/Microsoft/api-guidelines/blob/master/Guidelines.md](https://github.com/Microsoft/api-guidelines/blob/master/Guidelines.md)

~~~
camus2
It's interesting that both of these guidelines kind of reject HATEOAS by
mandating explicit versioning. It seems that HATEOAS was never really a thing;
it's just too complicated to implement in practice. In that sense, REST in
practice has always been just RPC without a clear spec for procedure calls,
like XML-RPC or JSON-RPC.

~~~
zerocrates
It's just never been clear to me what HATEOAS is really supposed to be good
for. Sure, a client can follow the links in an automated fashion, but how is
it supposed to know what the resources actually are and which links it needs
to follow, which resources it has to create or modify, to actually accomplish
anything?

The general idea of returning links to related resources and/or actions is
fine and good, but the rhetoric tends to go further, to encompass claims like
the API being "self-documenting" or amenable to a universal client. It always
seems to me that this Big Idea just presupposes the existence of a "smart"
client that can really understand the links, one that there doesn't seem to be
much sign of.

GitHub's API proudly notes its use of hypermedia and URL Templates in its
responses, but I still have to go read the documentation to decide which link
I need to use, and what to fill into those variable slots in the URLs. The
template doesn't do much for me that text in the documentation saying "these
GET parameters are accepted/required" wouldn't do just as well.

~~~
squeaky-clean
> Sure, a client can follow the links in an automated fashion, but how is it
> supposed to know what the resources actually are and which links it needs to
> follow, which resources it has to create or modify, to actually accomplish
> anything?

I've never understood this either. My API client isn't smart enough to follow
links and write logic for me, so when they say "the client" can "discover",
they must be referring to me, not my code? Well, I'd much rather read
documentation than click hyperlinks inside an API.

~~~
jalfresi
This is because the idea is that you would create a new media type to
represent your resource. It is this media-type definition that would determine
what rel-types there are and how a client should interpret them.

For example, the spec for the HTML media type says that when a client sees a
link with the rel-type "stylesheet", it should fetch the resource using HTTP
GET.

As REST requires that media-types be registered the idea would be that we
would eventually get a set of media-types that cover things like audio
playlists, and how to interact with them.

So any "intelligence" required by a client would be baked into the
implementation of the media-type processor. Instead of "client libs" for
specific web services, you would have a general media-type parser/processor
which could be re-used by clients of different web services to process common
media-types.

But apparently individual client libs for each web service that overloads JSON
are better.
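The media-type-processor idea described above might look something like this minimal sketch. Everything here is hypothetical: the media type, the "track" rel semantics, and the registry; the point is that the "intelligence" is registered per media type, not per service.

```python
# One processor per *media type*, reusable across any service that serves
# that type, instead of one client lib per service.

PROCESSORS = {}

def processor(media_type):
    """Register a handler for a media type."""
    def register(fn):
        PROCESSORS[media_type] = fn
        return fn
    return register

@processor("application/vnd.example.playlist+json")
def handle_playlist(doc):
    # The (hypothetical) playlist spec would say: a "track" rel is a
    # resource the client fetches with HTTP GET.
    return [link["href"] for link in doc["links"] if link["rel"] == "track"]

def process(media_type, doc):
    return PROCESSORS[media_type](doc)

doc = {"links": [{"rel": "track", "href": "/tracks/1"},
                 {"rel": "author", "href": "/users/9"}]}
print(process("application/vnd.example.playlist+json", doc))  # ['/tracks/1']
```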

~~~
tokenizerrr
> But apparently individual client libs for each web service that overloads
> JSON is better.

I mean, there are so many different types of resources, and every API I
interact with definitely invents its own. How often do you come across an API
offering a playlist of music?

------
theptip
An interesting design question arises around nested resources. Google in this
doc buys into deeply nested structures, e.g.
`//calendar.googleapis.com/users/john smith/events/123` (from [1]).

I think this pattern is unambiguously sensible when the child objects are
strictly scoped under the parent.

But it's less clear how to represent resources that are shared between
multiple parents; for example, what if event 123 can be referenced under
another user's API resource as well? If we permit
`//calendar.googleapis.com/users/bob/events/123`, now we have multiple URLs
referring to the same object, and things can get quite tricky in the
implementation.

Django Rest Framework strongly discourages (and makes it quite hard to
implement) nested resources, FWIW.

I've found that a policy of only permitting one level of nesting seems to be a
good balance for shared objects, e.g.

`//calendar.googleapis.com/users/john smith/events/` returns:

``` [ { "url": "/events/123" }, ... ] ```

Interested to know how others have solved this problem.

[1]:
[https://cloud.google.com/apis/design/resource_names](https://cloud.google.com/apis/design/resource_names)

~~~
combatentropy
Here's one way to tackle it, if there's an SQL database behind your REST API:

      /schema/table/key

So:

      //www.example.com/calendar/events/123

To address many records, like all that belong to Bob, use the query string
instead of purely the path:

      //www.example.com/calendar/events/?user=bob

~~~
theptip
This is, in my experience, the "standard" design; it gives you a lot of
freedom to change what filters you allow, and to stack them. Nobody's going to
get fired for this design, and there's a lot of prior art around it to draw
examples from. It also has the benefit of keeping the API surface small and
clean.

But it makes it a bit weird to do HATEOAS; you _could_ do `GET Bob => {events:
"calendar/events/?user=bob"}` -- but then you're hyperlinking to a search and
not a resource.

It also tells less of a narrative in the structure; `user=bob` is just another
filter that you can apply to the events set. But we get a chance to describe
the shape of the data a bit more if we choose to declare an intermediate
resource (/users/) and attach some links to it (=> /users/bob/events).

Now, if there are ten ways that you need to slice your `events` set, and
?user=bob is but one of them, then scoping a sub-resource /users/bob/events/
isn't that useful/descriptive.

As an aside, I think this is where HATEOAS is nice; it makes it very easy to
navigate an API as a developer, see what actions are possible at every node,
and hopefully learn the intent of the author of the API without having to chew
through a set of API documentation. Django Rest Framework's API browser is a
great example here.

------
daliwali
What they describe is not REST. Nowhere does this document mention hyperlinks,
a strict requirement of the REST architectural style.

The best analogy would be a simple web page, which usually contains hyperlinks
that a client can follow to discover new information. Unfortunately, web
developers' understanding of REST ends with HTML, and they re-invent the
wheel, badly, every time they create an ad hoc JSON-over-HTTP service.

There is a standardized solution for machine-to-machine REST: JSON-LD [1],
with best practices [2] to follow, and even some formalized specs [3][4]. To
Google's credit, they are now parsing JSON-LD in search results, which is much
nicer to read and write than the various HTML-based microdata formats.

On a related note, REST has nothing to do with pretty URLs, naming
conventions, or even HTTP verbs. That is to say, it is independent of the HTTP
protocol, but maps quite naturally to it.

[1]: [http://json-ld.org/](http://json-ld.org/)

[2]: [http://json-ld.org/spec/latest/json-ld-api-best-practices/](http://json-ld.org/spec/latest/json-ld-api-best-practices/)

[3]: [http://micro-api.org/](http://micro-api.org/)

[4]: [http://www.markus-lanthaler.com/hydra/](http://www.markus-lanthaler.com/hydra/)

~~~
mcherm
> What they describe is not REST. [...] a strict requirement of the REST
> architectural style. [...]

You have a word "REST" for which you are apparently granted access to Plato's
"true" definitions, which enables you to tell me that REST requires
hyperlinks, but not naming conventions or HTTP verbs.

I reject your definition.

Go ahead and use that word "REST" however you like. I will continue using it
to describe what you consider to be "ad hoc JSON-over-HTTP services".

Sure, I've read Fielding's dissertation.[1] I think stateless, cacheable,
layered systems are a great idea. I think "code on demand" is (usually) a
stupid one... even if it does turn out to work surprisingly well for web
browsers. But none of those matter.

I work with people who build "ad hoc JSON-over-HTTP services". They spend
hundreds of millions of dollars building ad hoc JSON-over-HTTP services. They
call them "REST" services.

I have to talk to these people, so I call them "REST" services too. Because
I'd rather build something useful than spend time telling people that they're
using a word "wrong", when the only _real_ meaning of a word is whatever it
will bring to mind in the person you are communicating with.

[1]:
[http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm](http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm)

~~~
daliwali
I am not the authority on what REST is. That would be Roy Fielding, who has
explicitly stated that hypermedia is a requirement[1]. So go ahead and tell
Mr. Fielding that his definition of REST is incorrect.

I am well aware that REST no longer means what it originally described, which
is why I think it should go by another term that is not burdened by being a
marketing buzzword.

[1]: [http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven](http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven)

~~~
mcherm
> So go ahead and tell Mr. Fielding that his definition of REST is incorrect.

Not "incorrect", but I'd be happy to tell him that his term has been co-opted
by the programming masses to mean something vaguely related to the original
meaning but less precise. I suspect Mr. Fielding already knows that.

> I am well aware that REST no longer means what it originally described,
> which is why I think it should go by another term that is not burdened by
> being a marketing buzzword.

I agree there. I think changing how the masses use the term is a lost cause.
(Consider the incredibly hard-fought battle to reclaim the original definition
of "hacker", which after decades did succeed in establishing it as a secondary
definition. That's the _most_ successful case I've ever seen.) So I think
ryeguy has it right: call Fielding's definition "HATEOAS", "real REST",
"hypermedia REST", or something.

------
nevi-me
Very interesting read! I like that GOOG is pushing gRPC more on their own
services. I've been a gRPC user since Sep/Oct last year, and it's made
developing for Android, Node.js, JVM, Python more pleasant from a networking
perspective. The ease of just moving logic from Node.js to a Java gRPC server,
and then redirecting the HTTP2 proxy to the right place, has been awesome.

I've started teaching some people on the team how to use gRPC, and we're
definitely going to be using it where permissible on client projects.

------
camus2
Please drop fixed headers from web pages. If you want easy access to the top
of the page, use anchor links instead. On a laptop, headers often take a big
chunk of the available screen. It just pisses me off every time I see a page
with a fixed header. All your readers aren't using iMacs...

~~~
chatmasta
This is a marketing website. I'm sure they A/B tested the fixed header and it
probably converts better than otherwise.

~~~
camus2
This isn't a marketing website; this is the documentation website for Google
Cloud. I'm logged in to Google Cloud right now and it's still displaying that
header.

~~~
chatmasta
Yeah? Who's paying Google the big bills? The people who are looking to
"CONTACT SALES", which is conveniently a link in the fixed header.

------
etaty
I'm curious: has anyone moved to GraphQL without regrets?

~~~
e1g
We've been using GraphQL for everything since late 2015. All recent code is
GraphQL-first, and all old code is proxied by a GraphQL layer in front of it.

Our application helps BigCos understand whether they pay people fairly and run
smart pay reviews. It's a relatively small codebase, ~100k LOC, but its
essential complexity is in managing and connecting dispersed data about
employees and markets. GraphQL allows us to represent the natural links within
this data, and then the app frontends can present whatever business
information is helpful in a given page/sidebar/widget/card without separate
endpoints.

With REST, we had the same problems with every feature: over/under-fetching,
not being able to express relationships well, and not being able to evolve the
schema easily. When we tried to work around these issues (e.g.
"v2?fields=a,b,c"), we ended up with a poorly implemented subset of GraphQL
that doesn't benefit from Facebook's experience. To compare to the world of
databases, I view REST as a key-value protocol and GraphQL as SQL with joins
and functions. If all you need is to look up a document, don't overcomplicate
it. But if you need to express relations, you don't want to do that in
userland.

The only advantage of REST is using a widely known standard with rich tooling
and well-published "best practices" (that just try to work around REST
limitations).

~~~
daliwali
REST describes relationships very well, via hyperlinks. You navigated to this
page via a hyperlink. If your API doesn't do that, it's not REST; this is what
people usually mean when they point out that an API isn't complying with the
REST style.

~~~
e1g
REST limits my knowledge only to the primary key of the relationship.

Given a trivial question like "here's a user; I need her friends and their
countries of birth", the REST answer is a separate endpoint or an O(n)
operation on the client, because these are _separate resources_. You want the
country flag too? I'm sorry, I can't support that requirement.

With GraphQL, it's -

        user(handle: "daliwali") { 
          name,
          friends {
             country {
               name, population, flag
             }
          }
        }

No separate endpoint. You want to show the client how many dogs those friends
have, and whether any of them play frisbee? No problem, and no backend
engineers involved.

This is why I said REST is a key-value protocol like Redis. With Redis you can
either embed small objects (which is not a relationship) or keep a PK of the
relationship (which means many queries). The more consumers your API has, and
the higher the latency, the more expensive either of those choices becomes.

~~~
daliwali
You are correct that in a REST system every resource has a "primary key": the
URL.

Where you are wrong is in thinking that REST mandates that a resource can
_only_ be accessed by its "primary key". There is nothing stopping me from
requesting the following URL:

    GET /users/daliwali?fields=name&include=friends,friends.country

Any client would be able to follow that link and get something out of it,
whether it's JSON or HTML, without needing specialized tooling such as a
GraphQL client. A hyperlink makes that query widely accessible and
interoperable with any HTTP client.
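Server-side, handling a ?fields=&include= query like the one above can be sketched as follows. This is a hypothetical illustration, not a standard: the field and relationship names are made up, and `related` stands in for data already fetched from storage.

```python
def apply_sparse_query(resource, related, fields=None, include=None):
    """Trim a resource to the requested fields and inline requested
    relationships, mimicking /users/daliwali?fields=name&include=friends."""
    # Keep only requested fields (or everything if none requested).
    result = {k: v for k, v in resource.items() if not fields or k in fields}
    # Inline each requested relationship alongside the resource.
    for rel in include or []:
        result[rel] = related[rel]
    return result

user = {"name": "daliwali", "email": "x@example.com", "age": 99}
related = {"friends": [{"name": "e1g", "country": "NZ"}]}
print(apply_sparse_query(user, related, fields=["name"], include=["friends"]))
# {'name': 'daliwali', 'friends': [{'name': 'e1g', 'country': 'NZ'}]}
```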

~~~
daliwali
In reply to fixermark: there is also nothing stopping you from making very
complicated queries in REST. There is not even a requirement that the server
must respond immediately. For example, I could request:

    POST /queries

With some raw database query as the payload (please don't actually do this),
and the server could respond with HTTP 202 Accepted, meaning that it's going
to take some time to process; meanwhile, check back at the URL in the Location
header when it's finished.

REST does not mandate any upper bound on complexity; that's up to you to
decide.
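The 202-Accepted flow described above can be sketched with an in-memory job store standing in for the server. All names and shapes here are illustrative, not part of any spec beyond the status-code semantics.

```python
import uuid

JOBS = {}  # in-memory stand-in for the server's job store

def post_query(payload):
    """POST /queries -> 202 Accepted plus a Location header to poll."""
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"status": "pending", "query": payload, "result": None}
    return 202, {"Location": f"/queries/{job_id}"}

def get_query(job_id):
    """GET /queries/<id> -> 202 while processing, 200 with the result once done."""
    job = JOBS[job_id]
    if job["status"] != "done":
        return 202, job  # still processing; check back later
    return 200, job["result"]

status, headers = post_query({"filter": "all events, expensively"})
assert status == 202  # client now polls headers["Location"]
```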

~~~
squeaky-clean
But at this point it's so different, and you've added so much work, that I
doubt anyone would recognize it as REST. You're basically rewriting GraphQL
yourself. And sure, you're allowed to do that, but why?

~~~
daliwali
> it's so different and you've added so much work to it

With web pages, it's standard and effortlessly handled by the browser. Given a
form with inputs specified by the server, a client sends a request with media
type application/x-www-form-urlencoded or multipart/form-data. There is a vast
number of implementations for practically every language and platform. A
machine client can send a form too, or JSON for that matter.

You have it backwards: Facebook is attempting to rewrite web standards by
itself. And that is undermining the open web.

------
KabuseCha
Fantastic read!

But I'm still looking for some books on good API design; does anybody have any
recommendations?

~~~
rainhacker
If you don't mind Java, Effective Java by Josh Bloch has good API design
material. Josh designed the collections API in Java.

~~~
jzsprague
Very good read, but not really about web/HTTP/REST APIs.

------
pbreit
What is the current consensus on client libraries? Braintree, for example,
requires that you use their client libraries, whereas Stripe makes them
optional. With Google's gRPC thing I can definitely understand using libraries
for performance. Otherwise, isn't making simple REST calls without custom
libraries sufficient for most uses? Or, if you want a library, something
generic like Unirest [1]?

1\. [http://unirest.io/](http://unirest.io/)

~~~
thesandlord
I've used the simpler GCP APIs (like the Machine Learning ones) with direct
REST calls. I've also been forced to use the REST API for things like Google
Sheets because the client library documentation was so confusing.

For more complicated services, using a client library makes sense. Why
reinvent the wheel?

With gRPC/Swagger/OpenAPI/etc you can also generate your own client stubs if
you need to.

IMO, if you require a client library, there better be a really good reason...

(I work at Google Cloud, and often work with the API/libraries team. Opinions
are my own)

~~~
pbreit
Isn't building a client library for a REST API the definition of reinventing
the wheel?

------
RubenSandwich
Seems pretty good. Specifically, this part of the guide is pretty well written:
[https://cloud.google.com/apis/design/resources](https://cloud.google.com/apis/design/resources).
One thing that is surprising to me, however, is that there is no mention of
using HTTP status codes in responses.

~~~
programd
Using HTTP status codes in your responses is a trap. It conflates the API
transport with the actual semantics of the API. The goal of HTTP error
responses is to say that something went wrong in the transport layer. The goal
of API error responses is to say that something went wrong in your service.
For example, your HTTP REST server may be perfectly fine, but your back-end DB
may be misbehaving. Having a separate API-level error response, for example an
explicit field called "error" in your JSON response, is what I consider best
practice. Frequently your client needs to know the difference, for example to
determine what kind of error to show the user or what retry strategy to use.

In typical HTTP REST services this transport/API error split makes it really
easy to create client code which only needs to check two conditions - if the
HTTP response code is 200 or not, and if the error value is set or not. You
also don't have to shoehorn your error handling into the very limited set of
errors provided by the HTTP protocol.

The other real-world advantage of this is that when you outgrow HTTP as the
transport protocol for performance reasons, it makes porting the API really
easy, e.g. to protobuf RPC or even raw TCP. The error is already defined as
part of the API, and you don't need to rewrite all your client code to deal
with mapping multiple HTTP response codes to your new transport. It's good
future-proofing that I've seen pay off in at least a couple of real-world
cases.

Bottom line - your server should always return HTTP status 200 and a separate
API error response.

There's also a reasonable debate to have about whether non-error responses
should also include an explicit "error" field with some default OK value.
There may be good reasons to leave it out, e.g. if you want to save bandwidth,
but I consider that a fairly insignificant point. For consistency my APIs
always return a default OK error field on non-error responses - your mileage
may vary.
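The always-200 envelope being advocated can be sketched as follows; the field names ("error", "data") are this commenter's convention, not a standard, and the two client-side checks are exactly the ones described above.

```python
# Server side: transport errors stay in HTTP; application errors live in
# an explicit "error" field of the envelope.

def ok(data):
    return {"error": None, "data": data}

def fail(code, message):
    return {"error": {"code": code, "message": message}, "data": None}

def handle(response_status, body):
    """Client side: two checks cover both failure classes."""
    if response_status != 200:
        # Something went wrong in the transport layer (proxy, routing, ...).
        raise ConnectionError(f"transport-level failure: HTTP {response_status}")
    if body["error"] is not None:
        # The service itself reported a problem (e.g. a misbehaving DB).
        raise RuntimeError(f"API-level failure: {body['error']['message']}")
    return body["data"]

print(handle(200, ok({"id": 1})))  # {'id': 1}
```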

~~~
johnjuuljensen
HTTP has some very useful status codes, which convey standard conditions found
in most APIs: 200, 201, 400, 401, 403, 404, 406, 429, 500.

Also note that not all APIs will be used by developers. Often it's someone
less proficient with programming, and you can't rely on them to check the
content of the return message. Help them help themselves by making curl (or
whatever) bitch when there is an error.

It takes a bit more work to design an API that works over multiple transports,
but a good framework, such as ServiceStack.net if you're in .NET land, mostly
does it for you. By advocating 200 for all responses you're basically
reverting to SOAP and WCF (Windows Communication Foundation).

To each his own though, and for internal stuff, it might make a lot more
sense.

~~~
justinsaccount
The problem with things like 404 is that you then need to differentiate between

"Hi, this is the application and the object/document/whatever you are looking
for was not found"

with

"Hi, this is the server and the api endpoint was not found"

An API I use returns a standard Apache 404 error page when the item you are
looking up doesn't exist. If the API endpoint were renamed, my code that
consumes it wouldn't have any idea anything was wrong.
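One pragmatic client-side guard against that failure mode: an application-level 404 usually comes back in the API's own media type, while a web server's "endpoint not found" page is typically text/html. This is a heuristic sketch, not a guarantee, and the names are illustrative.

```python
def classify_404(content_type, body):
    """Distinguish 'the app says the object is missing' from
    'the server says the endpoint is missing' by media type."""
    if content_type.startswith("application/json"):
        return "resource-not-found"   # the app answered: object missing
    return "endpoint-not-found"       # likely the server's default error page

print(classify_404("application/json", '{"error": "no such event"}'))
# resource-not-found
print(classify_404("text/html", "<html>...default Apache 404...</html>"))
# endpoint-not-found
```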

~~~
johnjuuljensen
Yes, an additional 4xx code to help differentiate between api endpoints and
resources would be nice.

I didn't mean to imply that HTTP status codes could stand alone as error
messages, so for a 404 error I'd also respond with more data, to help the user
identify the issue.

~~~
justinsaccount
Yeah. The bigger issue with this app is that it responds with a default Apache
404 page instead of a 404 code plus its usual XML response.

------
arohner
Step 1) document the endpoints enough that outside developers can write their
own clients.

It took quite a bit of work for me to get a native Clojure client working to
connect to the Google Cloud SDK. That was after wrestling with jar hell around
gRPC and calling the Java client from Clojure, which is decidedly not pretty.

~~~
euyyn
For future reference, here's the HTTP/REST documentation for the Cloud APIs:
[https://cloud.google.com/apis/docs/overview](https://cloud.google.com/apis/docs/overview)

E.g. for the GKE API: [https://cloud.google.com/container-engine/reference/rest/](https://cloud.google.com/container-engine/reference/rest/)

(For people who would rather use an existing library, this lets you pick one
of 7 languages and start from there:
[https://cloud.google.com/docs/](https://cloud.google.com/docs/) )

~~~
arohner
Yes, but that doesn't deal with authentication. My problem was getting the JWT
stuff working. The solution ended up being:
[https://gist.github.com/arohner/8d94ee5704b1c0c1b206186525d9...](https://gist.github.com/arohner/8d94ee5704b1c0c1b206186525d9f7a7)

~~~
euyyn
Ah, yeah, coding the OAuth2 flow in a new language (vs using an existing
library) is tricky, at the very least because OAuth2 itself is complex. The
best docs I know for that are here
[https://developers.google.com/identity/protocols/OAuth2Servi...](https://developers.google.com/identity/protocols/OAuth2ServiceAccount)
(we should link to it from the docs of the Cloud APIs).

From a glimpse, what's said there matches what your code is doing, so thanks a
lot for sharing it!

------
Dirlewanger
Protocol Buffers...GraphQL...JSON-API...so many damn choices for API
implementation! Next we need someone's essay of a blog post
comparing/contrasting them all.

Also, the Protocol Buffers link in the 3rd paragraph is a 404.

~~~
scarface74
If you're using a good framework like C# Web API, you don't have to choose.
Just implement them all by adding them to the pipeline, and the framework will
automatically serialize/deserialize based on the Accept header.

~~~
mbesto
Interesting, do you have more details about this?

~~~
scarface74
You register your custom serializer at application startup...

[http://www.strathweb.com/2014/11/formatters-asp-net-mvc-6/](http://www.strathweb.com/2014/11/formatters-asp-net-mvc-6/)

Your controller action looks like this:

    public List<Employee> Get() {
        // ...
        return employeeList;
    }

Based on the serializers you have registered and the request's Accept header,
Web API will serialize the list into JSON, XML, BSON (all built in) or a
custom serializer that you add, like Protocol Buffers:

[http://www.infoworld.com/article/2982579/application-architecture/working-with-protocol-buffers-in-web-api.html](http://www.infoworld.com/article/2982579/application-architecture/working-with-protocol-buffers-in-web-api.html)

Basically you can have all four registered and the client decides what it
accepts.
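The negotiation the framework is doing can be sketched language-neutrally (Python here): a registry of serializers keyed by media type, selected by the request's Accept header. The registry and the XML shape below are illustrative, not Web API's actual internals.

```python
import json

# One serializer per media type; the client's Accept header picks one.
SERIALIZERS = {
    "application/json": json.dumps,
    "application/xml": lambda d: "<employees>" + "".join(
        f"<employee>{e}</employee>" for e in d) + "</employees>",
}

def serialize(accept, data):
    """Pick a serializer by Accept header, defaulting to JSON."""
    return SERIALIZERS.get(accept, json.dumps)(data)

print(serialize("application/json", ["alice", "bob"]))
# ["alice", "bob"]
print(serialize("application/xml", ["alice"]))
# <employees><employee>alice</employee></employees>
```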

------
ex3ndr
They didn't mention one very important thing: querying only the required data
and making connections between resources. For example, you need to download
some git commits with user profiles. A user is a different resource than a git
repo. How can we request such data in one single request? Then you will also
need to load referenced issues (if present), which are implemented as a
different resource.

GraphQL solves these problems in a very nice and flexible way.

~~~
kevincox
If the data is different there is very little need to ask in a single request.
Just send two parallel requests. Of course, in some cases getting a list of
objects is cheaper than multiple requests, but that is only if the objects are
"related" and stored together.

However I do agree that GraphQL is great for a lot of use cases.

------
rodionos

      HTTP Method DELETE. Payload: empty.

I know DELETE is not supposed to have any payload, but using PATCH is awkward
if you have to delete multiple resources based on a query or a filter. You
need to specify a 'delete' action as part of the PATCH request, which means
the payload model has to be different. Just awkward.

~~~
veesahni
A bulk deletion isn't defined as part of their "Standard Methods" .. For stuff
that doesn't fit the standard methods, they have this page on custom methods:
[https://cloud.google.com/apis/design/custom_methods](https://cloud.google.com/apis/design/custom_methods)

That said, I'd implement a bulk deletion in one of three ways:

:::: ONE

DELETE /thingies/id1,id2,id3

When all is OK, the response is a 200. However, if one of the deletions fails,
how you handle the response is more complicated.

:::: TWO

POST /thingies/bulk_delete

and have your IDs listed in the body. Still the same problem with handling the
response.

:::: THREE

POST /bulk

Instead of implementing bulk deletion at all, think about how you can
implement "bulk requests" as a higher-level feature of the API.

So you can transport a bunch of deletes as part of a single HTTP request, and
get back a bunch of response codes packaged into a single response.

~~~
rodionos

      POST /thingies/bulk_delete

Exactly. That's the approach we've taken. It has worked better for us than
sending a delete action via PATCH.

------
amingilani
My biggest pain when designing a REST API is a standard authentication method
that won't drive me crazy. So far I've always used 3rd-party modules to
implement different kinds of authentication, but I never quite understood them
in depth.

Apart from HTTP Basic Auth, but please don't use that.

~~~
camus2
> Apart from HTTP Basic Auth, but please don't use that.

What's wrong with basic auth over HTTPS? You can delegate authentication with
OAuth and then use OAuth for authorization, but authentication still has to be
done somewhere.
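For what it's worth, Basic auth on the wire is just a base64-encoded "user:password" in the Authorization header (which is why it's only safe over HTTPS). A minimal sketch, with made-up credentials:

```python
import base64

def basic_auth_header(user, password):
    """Build the Authorization header for HTTP Basic auth (RFC 7617)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("alice", "s3cret"))
# {'Authorization': 'Basic YWxpY2U6czNjcmV0'}
```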

~~~
zeveb
> What's wrong with basic auth with HTTPS?

The only thing wrong that I can see is that it's 2017 and browsers still don't
have a good (indeed, AFAIK, _any_) UI for logging _out_.

~~~
nandhp
Or for staying logged in across sessions (or not). But it should be fine for
an API.

------
somedumbguy22
I wonder if someone from the Apigee team wrote these, as Google recently
acquired Apigee [1], and the guidelines are mostly in line with what Apigee
recommends [2].

[1] [https://techcrunch.com/2016/09/08/google-will-acquire-apigee-for-625-million/](https://techcrunch.com/2016/09/08/google-will-acquire-apigee-for-625-million/)

[2] [https://apigee.com/about/resources/ebooks/web-api-design](https://apigee.com/about/resources/ebooks/web-api-design)

~~~
moca
This guide has been in use since 2014, including for the recently launched
Cloud Spanner API.

Disclaimer: co-author of the design guide.

------
jeppebemad
I don't see the actual guidelines, only Contents, Introduction, and
Conventions, on iOS Chrome/Safari. Also, the fixed buttons overflow.

~~~
hathawsh
The page's responsive design is buggy. On narrow screens, the entire left
column disappears. All the important content is only accessible from that left
column.

~~~
nandhp
I don't think it disappears, it just moves into the menu button, which seems
fairly typical for navigation sidebars on narrow screens.

------
andyfleming
Do any of these API guides have good guidance around batch endpoints like
handling a PATCH on multiple resources as a single request?

~~~
tofflos
I've used two strategies:

1\. If it's a transaction (all-or-nothing) I POST a new transaction resource
which references all the target resources. Remember that you can create as
many resources as you want. There is no need to have a 1:1 mapping between
your resources and the database.

2\. If it's a batch statement (some-can-fail-some-can-succeed) I simply stick
to issuing multiple requests. This frees me and my clients from having to
write complicated code that deals with partial success, and it can still be
very fast, especially with request pipelining.

If issuing multiple statements is too slow then I would attempt to increase
the speed of the stack before adding the complexity of having to deal with
partial success.

So, in conclusion: no really good ideas on how to write batch statements. I
don't really think REST APIs, with their one HTTP response code, map very well
to that use case. But I hope one of the two strategies above will be useful to
you.
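Strategy 1 above (the all-or-nothing transaction resource) can be sketched like this. Everything here is illustrative: the resource names, the in-memory store, and the choice of 409 for a conflicting transaction.

```python
STORE = {"a": 1, "b": 2, "c": 3}  # stand-in for the real data store

def post_transaction(deletes):
    """POST a transaction resource referencing the targets; commit all-or-nothing."""
    missing = [k for k in deletes if k not in STORE]
    if missing:
        # Reject the whole transaction; nothing is deleted.
        return 409, {"error": f"missing resources: {missing}"}
    for k in deletes:
        del STORE[k]
    return 201, {"deleted": deletes}

print(post_transaction(["a", "z"]))  # (409, ...) and STORE is untouched
print(post_transaction(["a", "b"]))  # (201, ...) and both are gone
```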

------
novaleaf
On a related note, does anyone know a good SaaS for API documentation?
Preferably one that could take JSDoc imports or other code-based generated
docs...

~~~
jbattle
I'm not sure exactly what you're looking for, but try this tool from the
Swagger people:

[http://editor.swagger.io/#/](http://editor.swagger.io/#/)

It gets a little laggy for large Swagger docs, but it's quick and easy for
smaller API docs.

