
Microsoft REST API Guidelines - alpb
https://github.com/Microsoft/api-guidelines/blob/master/Guidelines.md
======
deathanatos
Pagination is one of those things that I feel so many of these guidelines get
wrong. LIMIT/OFFSET style pagination (or as MS likes to call it, TOP/SKIP)
results in O(n²) operations; an automated tool trying to pull the entirety of
a large collection in such a scenario performs terribly. I have to again
recommend the excellent "Pagination Done the Right Way" presentation[1] from
Use the Index, Luke (an equally excellent site).

Just return a link to the "next page" (and make that link opaque); while this
removes the ability of the client to jump to an arbitrary page on its own, in
practice I've never seen that matter, and an opaque URL lets you seek by
something indexable by the DB instead of paginating by LIMIT/OFFSET; HTTP
even includes a standardized Link header for just this purpose[2].
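A minimal sketch of that keyset ("seek") approach with an opaque cursor surfaced in an RFC 5988 `Link` header. The `/items` URL, the in-memory rows, and the cursor format are all invented for illustration; a real service would run the equivalent indexed SQL query.

```python
import base64
import json

# Stand-in for a table with an indexed primary key.
ROWS = [{"id": i, "name": f"item-{i}"} for i in range(1, 101)]

def encode_cursor(last_id):
    # The cursor is opaque to clients; here it just wraps the last-seen key.
    return base64.urlsafe_b64encode(json.dumps({"after": last_id}).encode()).decode()

def decode_cursor(cursor):
    return json.loads(base64.urlsafe_b64decode(cursor))["after"]

def get_page(cursor=None, limit=10):
    after = decode_cursor(cursor) if cursor else 0
    # Equivalent SQL: SELECT * FROM items WHERE id > :after ORDER BY id LIMIT :limit
    page = [r for r in ROWS if r["id"] > after][:limit]
    headers = {}
    if len(page) == limit:
        # Opaque "next page" link in a standard Link header (RFC 5988).
        headers["Link"] = f'</items?cursor={encode_cursor(page[-1]["id"])}>; rel="next"'
    return page, headers
```

Because the client only ever follows the `rel="next"` link, the server is free to change how the cursor works without breaking anyone.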

I also think the generalized URL is more in line with Fielding's definition of
REST, in that the response links to another resource. (I don't know if being
in the header invalidates this; to me, it does not.)

If you get the metadata out of the data section of your response and move it
to the headers, where it belongs, this usually lets you keep the
"collection" of items you are returning as a plain array (because if you need
to put metadata alongside it, you need:

    
    
      {
         "the_data_you_really_want": [1, 2, 3],
         "metadata, e.g., pagination": {}
      }
    

vs.

    
    
      ["the", "data", "you", "wanted", "in a sensible format"]
    

)

(I've seen metadata-pushed-into-the-body turn API endpoints that literally
need to return no more than "true" or "false" into objects that then force a
client to know and look up the correct key in an object… _sigh_.)

[1]: [http://use-the-index-luke.com/no-offset](http://use-the-index-luke.com/no-offset)

[2]: [https://tools.ietf.org/html/rfc5988](https://tools.ietf.org/html/rfc5988)

~~~
foota
Returning just an array from a JSON endpoint that may contain sensitive data
is a vulnerability: [http://haacked.com/archive/2008/11/20/anatomy-of-a-subtle-json-vulnerability.aspx/](http://haacked.com/archive/2008/11/20/anatomy-of-a-subtle-json-vulnerability.aspx/)

(edit: may be mitigated in newer browsers)

~~~
niftich
The obvious mitigation is don't visit untrusted endpoints from your API
client, and/or don't run a script interpreter inside your API client.

~~~
foota
That's not what the vulnerability is here. If you have a server endpoint that
returns a bare array, rather than wrapping it as a property on an object, an
attacker can write a webpage that overrides the Array prototype and requests
the JSON from your endpoint as a script, thereby bypassing the cross-origin
check it would be subject to as an XHR request. So if you have an endpoint
that returns sensitive information as an array, you were vulnerable (it seems
browsers probably no longer let you override the Array prototype) to the
attacker stealing a signed-in user's secrets.

~~~
niftich
On the contrary, that's precisely the vulnerability:

1. The API server needs authentication, which gets cached in the user-agent.

2. The same user-agent, with the same authentication context, is then
directed to a malicious site.

3. The user-agent contains a JavaScript interpreter, and the malicious
website serves a script that overrides the Array prototype.

4. The same script then executes a CSRF request to the API server, reusing
the cached authentication context, and thereby stealing the response.

A browser fits all the necessary conditions that must exist for this
vulnerability to be exploited, but if you're the consumer of the API, you
don't have to use a browser as your user-agent. You can write your own, which
you'd do if you're making these requests programmatically in a non-web
application.

Regardless as to which user-agent you use, a viable mitigation is to not visit
untrusted endpoints with the same cookie jar or authentication context.

Another viable mitigation is to omit including a script interpreter in your
custom API client. This is in fact usually the case.

~~~
foota
Sorry, you are right.

------
mythz
Funny that Microsoft uses OData as an example of a bad URL:

[https://github.com/Microsoft/api-guidelines/blob/master/Guidelines.md#71-url-structure](https://github.com/Microsoft/api-guidelines/blob/master/Guidelines.md#71-url-structure)

Whilst continuing to praise OData as "the best way to REST" -
[http://www.odata.org](http://www.odata.org)

~~~
bunderbunder
Microsoft came up with OData about 10 years ago, but it's now steered by a
technical committee with a lot of participants. I wouldn't necessarily be so
quick to assume that the OASIS committee, or Microsoft's representatives on
it, speaks for all of Microsoft when it uses that slogan, let alone for an
internal guidelines committee that was apparently convened fairly recently
(given the age of the document).

~~~
mythz
This isn't just some poorly labelled content on a 10-year-old legacy website
from a time before they knew any better: the OData website got a recent
redesign, sporting new stock photos and an even larger font for its
disingenuous labeling, and it continues to push OData crapware under the REST
banner to fool devs/CIOs into thinking that by adopting OData they're taking
advantage of the best form of REST. That stands in contrast to, and devalues,
sincere efforts like this one, where they're actually looking to promote good
HTTP API practices.

~~~
andybak
I recently had the misfortune of having to tackle OData to talk to Dynamics
CRM. The choice was between that and SOAP, so it was the lesser of two evils.

But compared to the other nice, modern REST APIs I'm used to, it smelt of
mould and enterprisey cobwebs. I'd have preferred an old-school RPC API over
that. At least APIs that abuse REST from that direction tend to have the
virtue of simplicity.

~~~
tracker1
For that matter, I know it's "bad" to use RPC, but IMHO it's sometimes the
cleanest/simplest interface you can offer for an API.

~~~
ChrisAtWork
There's nothing wrong with RPC APIs. Other than being "Not Cool" due to the
legacy of SOAP, they can deliver some very nice value.

Various RPC mechanisms like Bond and Protocol Buffers (and more recently
gRPC) are trying hard to make RPC cool again. Personally, I hope they
succeed, as REST (like any technology) doesn't work for everything.

~~~
tracker1
For that matter, communication abstractions over WebSockets... which have
been most interesting to me in terms of offering some better security choices.

------
blakeyrat
I'd love to see some guidance on how you're supposed to do file uploads
(think: uploading an image avatar for a user) and fit it into the REST
structure. These guidelines don't even mention "x-www-form-urlencoded" except
in a bit of fluff about CORS.

Made more frustrating by Microsoft's own WebAPI2 basically having zero support
for file uploads, meaning we had to go way outside the library to code support
for it.

Not sure why that's such a blind spot. Doesn't _every_ web service need to
upload files eventually?

~~~
gedy
POST /users/{id}/avatar

?

What additional guidance would you expect?

~~~
ranyefet
You can encode the uploaded image as a base64 string on the client and then
just post it as part of your JSON payload.
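A sketch of that approach (the field names are illustrative): the image bytes are base64-encoded into an ordinary JSON body. This is fine for small avatars, but note the ~33% size overhead and the full in-memory encoding, which is why it may not suit large files.

```python
import base64
import json

def build_avatar_payload(image_bytes, content_type="image/png"):
    # Embed the raw bytes as a base64 string inside the JSON body.
    return json.dumps({
        "content_type": content_type,
        "data": base64.b64encode(image_bytes).decode("ascii"),
    })

def decode_avatar_payload(body):
    # Server side: recover the original bytes from the JSON body.
    return base64.b64decode(json.loads(body)["data"])
```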

~~~
adamors
This is what I've been doing, and it's very simple both for client apps and
on the server. I don't know if it's feasible for larger files, though.

------
daxfohl
Well presented. It would be great if there were a language/framework that
made this guaranteed. As-is, everything just returns a 500 error on any
exception, lets you return errors with 200, allows updates on GET, etc. Even
the Haskell ones.

~~~
tarequeh
Handling errors over a REST API is something I've struggled with. What's the
best way to handle them? Data validation errors are different from
system/server errors, and it's tough to establish a universally applicable
error response structure.

I used to be in favor of sending 200 responses with error codes, but I'm now
gravitating back towards relaying the most relevant HTTP error and letting
the clients handle it.

~~~
MichaelGG
Any app of decent size will probably end up outgrowing HTTP's error codes.
200 + error is OK, except it can mess with caching. A 5xx with details in the
response is fine.

You might be tempted to map some errors, like "item not found" to 404, and so
on. But you still need to provide the real error code. So you're not gaining
much.

Honestly, I don't get the obsession with using HTTP features to represent
part of an API. It never saves work; you're writing a custom client each time
anyway. From a purely code perspective, you're going to deserialize the body
or a context object from a header. Moving that data into multiple headers can
only require more code, not less. Same for verbs. I've _never_ gotten any
benefit beyond GET-for-read, POST-for-write.

Elasticsearch is a good example. The URL space is overloaded with special
things, allows you to create objects you can't reference, and so on. They use
verbs, except you still have extra parameters tacked on. There's zero benefit
to me, the user, in them making it REST-like.

Maybe if the REST crowd creates a machine-usable spec like WSDL (just
"simpler"), then all these HTTP headers could be put to use.

~~~
paulddraper
The advantage is that there is some level of standardization.

404? That means the entity doesn't exist. 302? I should look somewhere else.
401? The server doesn't know who I am.

Accept? I can specify the format. ETag? I can get a faster response if I
include the token in the next request.

This stuff is really, really common, and people can learn your API very
quickly. A transparent caching server can improve performance.
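The ETag flow mentioned above can be sketched as a toy in-process handler (not a real HTTP stack; the resource body and hashing scheme are invented). A client that replays the token gets a 304 and no body, which is exactly what lets transparent caches help.

```python
import hashlib

RESOURCE = b'{"name": "widget", "price": 10}'

def etag_for(body):
    # A strong ETag derived from the representation itself.
    return '"%s"' % hashlib.sha256(body).hexdigest()[:16]

def handle_get(if_none_match=None):
    # Returns (status, headers, body), mimicking an HTTP response.
    tag = etag_for(RESOURCE)
    if if_none_match == tag:
        return 304, {"ETag": tag}, b""  # client's cached copy is still valid
    return 200, {"ETag": tag}, RESOURCE
```

The client learns nothing API-specific here: `ETag`/`If-None-Match` behave the same on every conforming service.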

Sure, with a custom protocol you can get a tight system. Hell, write your own
transport layer for even more control. But it will take longer to learn and
harder to interoperate.

~~~
MichaelGG
The time spent learning that a 404 in this case means "the object ID isn't
found" versus "this path doesn't exist" pretty much negates any benefit: you
still have to include sub-codes. Same for "access denied because the token
expired" vs. "invalid token". Not to mention all the stuff that'll get
crammed into 400/500/503.
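The sub-code point can be sketched as an error body that carries a machine-readable code alongside the HTTP status (the `code` values and payload shape here are illustrative, not from any spec):

```python
import json

def not_found_error(object_id):
    # The 404 says "client error, not found"; the body's "code" field is the
    # "real" error code clients switch on, distinguishing a missing object
    # from, say, a nonexistent route.
    status = 404
    body = json.dumps({
        "code": "OBJECT_NOT_FOUND",
        "message": "No object with id %r" % object_id,
    })
    return status, body
```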

If your app is simple enough that _all_ errors map 1:1 to HTTP, great. Or if
it doesn't need that level of error management. Otherwise HTTP just confuses
the issue.

~~~
paulddraper
So, you just want to explain the error further? Wonderful. RFC 2616:

> the server SHOULD include an entity containing an explanation of the error
> situation

---

The 3-digit status code tells consumers (1) the status category (success,
redirect, client error, server error) and (2) a more specific status within
that category. It does that in a way that doesn't require me turning to your
API docs every 3 seconds.

------
ZalandoTech
Zalando released our RESTful API guidelines a few months back -- they're very
comprehensive and open-source. Feedback and suggestions welcome:
[https://zalando.github.io/restful-api-guidelines/](https://zalando.github.io/restful-api-guidelines/)

------
dehora
It's great to see Microsoft release these guidelines. It's good work, a broad
document with a lot of interesting topics covered. You can always debate the
details (the pagination discussion here is very interesting), but having seen
first hand at Zalando how much dedication and effort goes into API guidelines
to support a large number of service teams, plus releasing them into the
public, it's no small feat and they deserve credit for doing this.

There's naturally been some discussion around REST/HTTP APIs and design
styles. One of the things we've tried to do with the Zalando guidelines
([https://zalando.github.io/restful-api-guidelines](https://zalando.github.io/restful-api-guidelines))
is go into more detail on using HTTP and JSON, and on how REST properties are
used. Zalando, like everyone else, has had to think through versioning and
compatibility, so it was interesting to read what Microsoft do here. The
Zalando guidelines take a different approach, using media types for
versioning and documenting just one set of compatibility rules plus a
deprecation model, and it's working very well in practice so far
([http://zalando.github.io/restful-api-guidelines/compatibility/Compatibility.html](http://zalando.github.io/restful-api-guidelines/compatibility/Compatibility.html)).

Btw, in case anyone from Microsoft working on the guidelines is reading and
ever wanted to swap guideline notes or ideas, that would be awesome. And once
again, great job releasing the doc :)

------
yeukhon
[https://evertpot.com/dropbox-post-api/](https://evertpot.com/dropbox-post-api/)

This discusses some options for GET vs. POST.

In Kibana, everything is done over GET as GET parameters, and I find that
extremely annoying and a poor design.

A lot of public APIs also don't honor PATCH or have any intention of
supporting it. Most APIs I have worked with only use PUT for modification.
Anything that resembles "creation" is automatically a POST.
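The PUT/PATCH distinction can be sketched with RFC 7386 JSON Merge Patch semantics (a minimal implementation for illustration, not a full library): PATCH sends only the changed fields, while PUT replaces the whole representation.

```python
def json_merge_patch(target, patch):
    # RFC 7386 semantics: objects merge recursively, a null value deletes a
    # key, and any non-object patch value replaces the target wholesale.
    if not isinstance(patch, dict):
        return patch
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)      # null means "remove this member"
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result
```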

~~~
dozzie
> In Kibana, everything is done over GET as get parameters, and I find that
> extremely annoying and a poor design.

Kibana (4.x), being a website to display queries and some charts while
weighing over a hundred megabytes, is itself a clear example of poor design.

~~~
yeukhon
We ran into a situation exactly like that: we hit the length limit of a GET
request.

Thanks.

------
iheart2code
Very cool document. I kind of got stuck at delta queries, though. How do you
implement that? I can't find any reference to delta/removed queries on Mongo,
Postgres, or MySQL. Do you just keep all records in the database and add a
"removed" field? How would that solution work with user privacy & users truly
wanting to remove data?

~~~
ChrisAtWork
Delta queries are pretty easy, assuming you have a rich data store behind
your API.

As others have said, your data store needs to be able to say, "Show me
changes since XYZ". Most of the big apps can do that, and from there the
problem is one of API semantics.

This doc addresses the API Semantics, rather than the application design. To
try to solve the design problem would be impossible as every application is
different.

~~~
iheart2code
I understand what the doc is trying to convey, I just hadn't ever thought of
the application-level logic for record deltas before, so I got incredibly
distracted from the document. It made me wonder how other people, not using
temporal tables, are accomplishing this right now.

~~~
ChrisAtWork
I don't think I've ever seen deltas done at scale with actual temporal
tables, but rather with transaction IDs of some sort.

For example (based on the doc) when you register a delta link, there's a row
written to Azure Tables saying, "owner:iheart2code, deltaToken:foo,
lastRecordIdSeen:123".

When you then make the Delta request, we look up "foo" for you, find that the
last id you've seen is 123, and then only give you records from the
transaction table with an id larger than that.
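A minimal sketch of that causal delta-token scheme (the in-memory stores, field names, and token format are invented; the real system described above uses Azure Tables):

```python
# Change log: every write appends a row with a monotonically increasing id,
# so "changes since X" is just a filter on txn_id -- causal, not temporal.
CHANGE_LOG = [
    {"txn_id": 1, "record": "user:1", "op": "create"},
    {"txn_id": 2, "record": "user:2", "op": "create"},
    {"txn_id": 3, "record": "user:1", "op": "update"},
]
DELTA_TOKENS = {}  # token -> last transaction id this client has seen

def register_delta(owner):
    token = "%s-token" % owner  # would be opaque/random in a real system
    DELTA_TOKENS[token] = CHANGE_LOG[-1]["txn_id"] if CHANGE_LOG else 0
    return token

def fetch_delta(token):
    last_seen = DELTA_TOKENS[token]
    changes = [c for c in CHANGE_LOG if c["txn_id"] > last_seen]
    if changes:
        DELTA_TOKENS[token] = changes[-1]["txn_id"]  # advance the cursor
    return changes
```

No clocks are involved anywhere, which is what sidesteps the sync and race-condition problems mentioned below.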

Temporal is always a can of worms, as clocks are impossible to keep in sync
and there are endless race conditions.

Making the delta tokens causal, rather than temporal, is the way to go.
Anything else is brutal in the distributed systems world...

------
captn3m0
Interesting to see Microsoft talk about "human-readable URLs". I still
remember the mess kb.microsoft (or the surrounding MS infra) was at one time.

Nice to see them support such stuff, still.

------
spdustin
Seems ... strange to bash their own "Best Way to REST" offering, OData [0]

[0]: [http://www.odata.org](http://www.odata.org)

------
mikro2nd
Funny thing: I've been thinking about API versioning quite a bit lately, and
the best solution I've come up with is the one thing not covered at all in
this document: put an `api-version` header in the request. I've seen both of
the schemes recommended here, and I like neither very much. So what's wrong
with my (header) solution?

~~~
JimDabell
Neither of the schemes mentioned here is good, as they change the URI for a
resource, which breaks all sorts of things. Could you imagine if every
website wanting to switch from HTML 4 to HTML 5 had to update its URIs from
[https://www.example.com/HTMLv4/contact.html](https://www.example.com/HTMLv4/contact.html)
to
[https://www.example.com/HTMLv5/contact.html](https://www.example.com/HTMLv5/contact.html)?
It would be chaos.

For instance, if client application A talks to the service using version 1.0
of the API and client application B talks to the service using version 2.0 of
the API, then those client applications can't interoperate because they are
seeing two different sets of resources.

Your solution isn't far off the approach recommended by everybody who rejects
the different URI approach. You don't need an API version header. When your
representation format for a resource changes, provide a `version` parameter
for the media type. For example: `Content-Type:
application/vnd.example.foo+json;version=2`.

This is exactly how HTTP and the `Content-Type` header are supposed to work –
if your representation for a resource changes, you don't change the URI, you
change the media type.
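Parsing such a versioned media type on the server might look like the following sketch (the parsing is deliberately naive; a production server would use a proper media-type parser that handles quoting):

```python
def parse_media_type(header):
    # Split "application/vnd.example.foo+json;version=2" into the bare
    # media type and its parameters.
    parts = [p.strip() for p in header.split(";")]
    media_type, params = parts[0], {}
    for p in parts[1:]:
        if "=" in p:
            key, _, value = p.partition("=")
            params[key.strip().lower()] = value.strip()
    return media_type, params
```

The URI never changes; the server picks the representation based on the parsed `version` parameter.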

~~~
ChrisAtWork
The "?api-version" approach in the doc is there for exactly the reason you
call out. Azure uses this approach.

By omitting "/v1.0" from the path, the actual URLs become far more durable,
as they're not version-dependent. There are pros and cons to this, as there
are with everything. In Azure's case it's great, as you can then use URLs as
resource identifiers and put ACLs on them. If they were versioned via the
path, this wouldn't work.

Other services, such as Xbox, and (very old) Azure, put the API version in a
custom header (x-ms-version, for example). This proved to be extremely
problematic, and every team that did this had stories of all the problems it
caused and then spent time removing it.

I've never seen a detailed proposal for putting the API version in the
Content-Type header, and will go look.

The Content-Type header does seem to have many of the same drawbacks as
putting the version in any other header. For example, people could not make
queries from a browser just by entering a URL, as there is no way to specify
a header. Nor could someone bookmark an API GET request, which is also quite
handy.

Ease of use is huge, and I am in the camp that a version in the URL (path or
parameter) is much easier in every way than a header. Even with curl it's
easier (I can never remember how to specify headers without the man page).

~~~
pdrayton
One slight downside of a custom header to specify a version is that OPTIONS
preflight calls don't include the value of the custom header, so your
preflight handler gets to say yes or no without knowing which version is
being called. Putting the API version in the URL or query string fixes this.

As for bookmarking a GET request, this is /almost/ doable even when following
the MSFT guidelines, since they say that service implementors MUST also
provide a query-string alternative to required custom headers (section 7.8),
and that service implementors MUST support explicit versioning. The only fly
in this ointment is that the versioning part of the spec only offers two
places for specifying the version, URL and query string, and seems to leave
no room for other options.

Personally, I think the Accept-header flavor with custom MIME types is the
most flexible for minor (backwards-compatible) versions -- see GitHub's API
for an example -- but it certainly isn't the simplest to work with, whether
in client libraries, curl/wget command-line use, or API consumer tools
(almost none let you fiddle with Accept headers). Since API ease of use is
such a big factor for adoption, passing versions in the URL or the query
string is most likely an OK lowest common denominator for APIs that seek the
widest possible reach.

------
aligajani
Roy Fielding thinks this isn't REST. He says REST APIs != HTTP APIs. So, read
with caution. Also, I noticed the MSFT guide doesn't mention HATEOAS.

~~~
fooyc
HATEOAS is one of the steps too far that contributed to making REST suck more
for everyone.

People following it also tend to follow it like a dogma, and find that their
APIs are slow, too meta/abstract, and hard to consume.

------
danpalmer
Being that guy again, (and sacrificing my karma) but...

This is not REST, it contains nothing about hypermedia, entities having
knowledge of their URIs, or any way of discovering features of an API
programmatically.

While I'm sure there's plenty of good stuff in here (it looks otherwise
fairly comprehensive), APIs will continue to be disparate systems requiring
custom code for every integration until we can embrace the full benefits of
REST and develop the technology and conventions/standards for integrating
RESTful services.

Edit: for an example of what's possible with 'real' REST, check out this talk:
[https://vimeo.com/20781278](https://vimeo.com/20781278) – better
documentation, easy parsing, API optimisation without client changes,
discovery, and the API is fully human readable/usable as well.

~~~
SwellJoe
REST seems like an elephant in a room with some blind folks. Everybody who
touches it thinks it's something different than the next guy, and they're all
describing only one element of the thing.

That said, I don't actually know what this elephant looks like, either,
because everybody I've read on the subject seems to have only a partial
understanding of it...thus, I have a partial understanding of it.

If you know what the whole elephant looks like, and have good resources for
what makes something _actually_ REST, I'd be interested.

~~~
jcrites
The best way to understand REST is by example. First check out the blog post
by Roy Fielding mentioned in this thread, which summarizes the constraints
[1]. Then let's examine how several popular RESTful services work against
those constraints.

Here are some popular, well-known services to use as examples: Google.com,
Amazon.com, Facebook.com, news.ycombinator.com.

- You enter these services through the site root with no prior knowledge of
the services aside from standard media types (HTML/CSS/JS).

- Hypermedia is the engine of application state, meaning that you load a
webpage (a.k.a. resource) that contains hypermedia links to other resources,
and you navigate between states in the application by following those
hypermedia links. For example, the Hacker News homepage contains links to
news articles and comment threads about them, which contain links to reply
to those comments, and so on.

- The application describes how to compose requests using standard media
types such as web forms and JavaScript. Because the pages instruct the client
about what requests to compose and to which URL, the services have control
over their own namespaces, which have no fixed resource names or hierarchy.

I consider REST most easily understood as a departure from application
protocol design prior to the Web, where each application and service had a
unique, custom binary protocol, and a client had to be reprogrammed to
interact with each one. REST is a set of constraints for designing services
that run at Internet scale and that highly decouple clients and servers.
Fielding's thesis on REST has more detail on the principles from which the
REST architectural style was derived.

[1] [http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven](http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven)

~~~
vfaronov
The big problem with using the hypertext Web as an example of REST is that
there is a human operator literally driving that "engine of application
state". Most API clients cannot afford to compute their state transitions on a
network of 80 billion neurons.

~~~
JimDabell
No there isn't: a great deal of the web is loaded without direct human
requests. For example:

* Web crawlers

* Archival services

* Embedded resources (stylesheets, JavaScript, images)

* Newsfeeds

~~~
Avernar
All those examples are hard-coded algorithms in the bulk-copy category. They
pretty much just use the GET method to download all of a site or a specific
subset of it.

To truly drive a REST API requires a human or an AI. The AI doesn't need to
be human-equivalent, just smart enough to intelligently interpret the
hypermedia. If the API changes, the AI would be able to adapt.

I'm talking about the complexity of the computer on the Enterprise in ST:TNG:
you give it a command and it figures out how to go about solving it.

~~~
JimDabell
I don't know where you got the idea that REST needs futuristic AI, and you
haven't really explained why you think that way.

Why does a REST client have to _intelligently_ interpret the hypermedia? Why
are hard-coded algorithms a problem? Why does it need to adapt to changes in
the API? None of those things are requirements of REST; you've inserted them
for seemingly no reason.

For instance, consider an unattended webcam you want to use to show off, e.g.
the growth of a flower bed remotely. It has the URI for a web service that
returns a resource describing actions it can take:

    
    
        {
            "actions": [
                {
                    "rel": "report-problem",
                    "href": "https://example.com/report-problem",
                    "method": "POST",
                    "accept": "application/problem+json"
                },
                {
                    "rel": "update-frame",
                    "href": "https://example.com/update-frame",
                    "method": "POST",
                    "accept": "image/*"
                }
            ]
        }
    

The definition of the report-problem relationship is "This is the action to
take when an error occurs."

The definition of the update-frame relationship is "This is the action to take
when a new frame is available to upload."

This API defines one media type and two relationships. It requires no
futuristic artificial intelligence or human intervention, it just needs to
speak HTTP, parse simple JSON, and trigger the actions based on simple
conditions.
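That webcam client can be sketched as selecting actions by link relation rather than by hard-coded URL (reusing the JSON document from the example above; the helper name is mine):

```python
# The actions document the webcam fetches from its one known entry-point URI.
ACTIONS_DOC = {
    "actions": [
        {"rel": "report-problem", "href": "https://example.com/report-problem",
         "method": "POST", "accept": "application/problem+json"},
        {"rel": "update-frame", "href": "https://example.com/update-frame",
         "method": "POST", "accept": "image/*"},
    ]
}

def find_action(doc, rel):
    # The client hard-codes only the relation names; the URLs come from the
    # document, so the server can move them between responses at will.
    for action in doc["actions"]:
        if action["rel"] == rel:
            return action
    raise LookupError("no action with rel %r" % rel)
```

No intelligence is needed: "when a frame is ready, POST it to whatever `update-frame` currently points at" is a trivial conditional.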

~~~
vfaronov
It also doesn't provide much benefit over a plain RPC-like interface, because,
for practical reasons, the webcam will be coded to send requests directly to
[https://example.com/report-problem](https://example.com/report-problem) and
[https://example.com/update-frame](https://example.com/update-frame) (the
"bookmarks"). If you want to change those URLs, your best hope is that the
webcam will understand a 307 or 308 redirect.

~~~
JimDabell
RPC vs REST is an entirely different argument. We're talking about REST best
practices here, not "let's do RPC instead".

There's no reason for the webcam to hard-code URIs, and that's a violation of
the architectural constraints. Just arbitrarily deciding to disregard facets
of the architectural pattern without giving a reason does not add to this
discussion.

None of this relates to whether or not you need futuristic AI or human
intervention. I think from the example, it's pretty clear you need neither for
a REST API.

~~~
vfaronov
> We're talking about REST best practices here, not "let's do RPC instead".

I don't think so. Avernar above seems to argue against REST as an
architectural style (RPC being the default as everybody uses it). I thought
you were trying to defend REST with your example, and I wanted to point out a
weakness in that example.

> There's no reason for the webcam to hard-code URIs, and that's a violation
> of the architectural constraints.

The webcam has to hardcode _some_ URIs (entry points). Consider that it could
be using other services: perhaps a social network to post updates on. Then it
must hardcode another entry point URI. But at that point -- and since every
resource is supposedly independent and only explicit relations matter -- why
can't the webcam treat the "problem report service" and the "update frame
service" as two entirely separate services to hardcode? Which constraint of
REST does this violate?

In your example, insisting that the webcam go through the entry point doesn't
even buy the server a lot of flexibility. You reduce the client's knowledge
from 2 URLs to 1 URL -- a URL that you may still want to change etc.

An extra HTTP roundtrip every time can well be an unpleasant overhead. HTTP
client caches are sparsely supported and notoriously complex (RFC 7234 is 41
pages long). Rolling your own cache is, well, rolling your own.

And then there is the human problem. I was blown away by a discussion [1]
where people reported using _cryptography_ to prevent _in-house_ clients from
hardcoding URLs. I was also saddened when I read the JSON API spec [2] -- a
great example of REST, I think -- only to discover that most existing
implementations [3] disregard the linking aspect and hardcode the URL
structure that the spec uses for examples. But this is not a complaint against
your webcam example so much as it is a general hurdle with REST.

[1] [http://blog.ploeh.dk/2013/05/01/rest-lesson-learned-avoid-hackable-urls/](http://blog.ploeh.dk/2013/05/01/rest-lesson-learned-avoid-hackable-urls/)

[2] [http://jsonapi.org/](http://jsonapi.org/)

[3] [http://jsonapi.org/implementations/](http://jsonapi.org/implementations/)

> None of this relates to whether or not you need futuristic AI or human
> intervention. I think from the example, it's pretty clear you need neither
> for a REST API.

Absolutely.

------
raz32dust
Jeez... does anyone else find the all-caps headings hard on the eyes? Is it
necessary? Sorry, I have nothing constructive to add; I just had to bring
this up.

------
zeveb
Wow, they REALLY LIKE TO SHOUT THEIR HEADINGS.

Otherwise, what I've read so far looks like a really good start. Say what one
will about Microsoft's products, but there are a lot of smart folks there.

~~~
mianos
From the same people who brought you the XML SOAP API. Smart people. Too
smart?

------
intellix2
Surprised people are still talking about REST after GraphQL

