
Microsoft REST API Guidelines - excerionsforte
https://github.com/Microsoft/api-guidelines/blob/master/Guidelines.md
======
jedberg
Every single person writing a REST api should have to memorize this table:

GET - Return the current value of an object, is idempotent;

PUT - Replace an object, or create a named object, when applicable, is
idempotent;

DELETE - Delete an object, is idempotent;

POST - Create a new object based on the data provided, or submit a command,
NOT idempotent;

HEAD - Return metadata of an object for a GET response. Resources that support
the GET method MAY support the HEAD method as well, is idempotent;

PATCH - Apply a partial update to an object, NOT idempotent;

OPTIONS - Get information about a request, is idempotent.

Most importantly, that PUT is idempotent.
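The distinction can be sketched with a toy in-memory store (all names here are invented for illustration): repeating a PUT leaves the server in the same state, while repeating a POST keeps creating new objects.

```python
# Toy in-memory resource store illustrating why PUT is idempotent
# and POST is not.

store = {}
next_id = 0

def put(name, value):
    """PUT /objects/{name}: replace (or create) the named object."""
    store[name] = value          # repeating this changes nothing further

def post(value):
    """POST /objects: create a brand-new object each time."""
    global next_id
    next_id += 1                 # every call allocates a new resource
    store[next_id] = value
    return next_id

put("config", {"retries": 3})
put("config", {"retries": 3})    # second PUT: same final state
post({"retries": 3})
post({"retries": 3})             # second POST: a second object now exists
```

A retried PUT is therefore safe over a flaky connection; a retried POST is not, unless the server deduplicates (e.g. via an idempotency key).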

Credit to arkadiytehgraet for retyping the table to be readable. Please give
them an upvote for the effort.

~~~
JMTQp8lwXL
There should be an option for deleting many objects. For that, you have to do
a POST request on AWS S3:

[https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteOb...](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html)

Instead of polluting existing method names to mean new things, HTTP could
offer more method names.

Having POST alternatively mean "send a command" makes it meaningless. The
command could do anything.
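For reference, the linked S3 multi-object delete is a POST to `/?delete` carrying an XML list of keys. A sketch of building that request body (shape based on the AWS docs; check them before relying on details):

```python
# Build the XML body for S3's DeleteObjects (multi-object delete) request.
import xml.etree.ElementTree as ET

def delete_objects_body(keys):
    root = ET.Element("Delete")
    for key in keys:
        obj = ET.SubElement(root, "Object")
        ET.SubElement(obj, "Key").text = key
    return ET.tostring(root, encoding="unicode")

body = delete_objects_body(["logs/a.txt", "logs/b.txt"])
```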

~~~
treve
What advantage is there to having a bulk operation vs many operations?

With HTTP2 you can send a ton of requests in parallel.

~~~
JMTQp8lwXL
A lot of sites might have an HTTP/2 upstream server, but everything downstream
is still communicating via HTTP/1.1. I would imagine you only get the benefits
of HTTP/2 multiplexing if you use HTTP/2 end-to-end.

That means your CDN, your load balancers, and finally your actual
applications -- and any other intermediaries -- all communicating via HTTP/2.

Not to say that HTTP/2 isn't the solution, but end-to-end adoption will lag
what a quick look at "number of sites using HTTP/2" suggests, since you can't
easily inspect the behavior of the intermediary servers.

~~~
treve
Getting onto H2 is worth investing in then though, and drastically simplifies
things that would otherwise need bulk operations everywhere.

I'm not entirely sure what you mean by downstream, but in my experience H2
is really well supported almost everywhere these days. Ymmv ofc

~~~
JMTQp8lwXL
What I mean is, typically, there are many layers between an end user's network
request and reaching the server where it will be fulfilled. For example, you
might have your site fronted by Cloudflare. They provide DDOS protection, etc.
That Cloudflare server might accept HTTP/2 requests, and then open a new
request to your load balancer server. Then, your load balancer opens another
network request to the server fulfilling the request. The connections between
these servers may only support HTTP/1.1. So you lose the benefits of HTTP/2
since you can't multiplex the network request end-to-end.

~~~
treve
In this particular example (S3), I don't think this is really a concern.

Anyway, I'm curious... what components do you run into that don't support this
yet? At least in your examples I can't think of any major players that don't
do HTTP/2 out of the box, except, well, old versions.

Is this more of a hypothetical? In my experience HTTP/2 is pretty much
ubiquitous for anything current.

~~~
bebop
There are environments that do not allow http2 or websocket traffic on the
network. Most government/big enterprise have these restrictions.

------
littlecranky67
I've designed quite a few REST APIs, but I've come to the conclusion that all
those semantic/HATEOAS or other REST guidelines don't always apply or make
sense, depending on the problem domain.

I worked in finance and designed a REST API, and besides the standard
user/account objects, basically ALL the data and operations were neither
idempotent, cacheable, nor durable, and often couldn't possibly be designed
using HATEOAS et al.

Quotes, orders, offers and transactions carry lots of monetary amounts which
are sent in the user's currency, auto-converted depending on the user
requesting them (and currency conversion is part of the business). Most
offers are only valid for a limited amount of time (seconds to minutes)
because of changing market rates. There is also no "changing" of objects as in
PATCH/DELETE; all you do is trigger operations via the API, and every action
you ever took is kept in a log (for regulatory reasons, but also to display
to the end user).

There are ways to try to hammer this to fit HATEOAS et al., and I put some
effort into it, but I would have ended up splitting DTOs into
idempotent/immutable and non-idempotent/mutable parts spread across different
services, bloating the DTOs themselves (i.e. including all available
currencies in a quote/offer), and expressing the validity/expiry of objects
via HTTP caching (instead of in the DTOs). That would have resulted in a
complex and hard-to-read API, significantly worse performance (due to lots of
unneeded data & calculations), and some insane design decisions (like keeping
expired offers/quotes around just so they are still available at their service
URL with an id, even though the business requirements would never store
expired offers).

Sometimes you just need to use your own head, accept that the problem domain
might not be covered by other "guidelines", and come up with a sane design
yourself.

~~~
pas
REST shines where there is a big path-dependent interaction graph, i.e.
feature switches, permissions, plans/packages and other stateful stuff.

------
SSchick
At MS, on our team, no manager gave the slightest damn about being RESTful or
any proper, consistent API design. Everything was constantly rushed and just
'tacked on': random API versions were created, contracts were broken, nothing
was consistent. It was a mess. That said, I believe most of these guidelines
are completely ignored in most teams.

~~~
dpark
It’s almost as if there are different groups in Microsoft with different
priorities. Why is it so hard to get 130k people in complete alignment across
all areas? Seems like a simple problem.

~~~
Someone1234
> It’s almost as if there are different groups in Microsoft with different
> priorities. Why is it so hard to get 130k people in complete alignment
> across all areas? Seems like a simple problem.

I feel like this could have been said without the sarcasm and it would have
been a great discussion starter. Instead a potentially valid point/retort will
be obscured by the tone and discussion surrounding it.

For example:

> Unfortunately large organizations often struggle to get groups into
> alignment on issues like this, unless there's a strong mandate from the top
> down (e.g. Amazon).

No sarcasm or snark, same basic substance.

~~~
mixmastamyk
And not correct either. As mentioned, if Satya said so it would happen within
months.

~~~
dpark
There are a limited number of mandates that can come from the top. Upper
management loses credibility when they issue endless mandates, even if all of
the mandates are reasonable.

It’s also unrealistic that everyone underneath can focus on many mandates at
once. Which leads to the loss of credibility when mandates start getting
ignored out of necessity.

~~~
mixmastamyk
Good thing then this isn’t an endless mandate and only a single idea with a
short list of requirements.

~~~
dpark
“Endless mandates” meaning an endless number of mandates, not that individual
mandates are endless.

At Microsoft, simple mandates like this one number _at least_ in the hundreds.

~~~
mixmastamyk
It could be done easily if deemed a priority. Defeatism is not a compelling
argument.

~~~
dpark
Yes, for any specific mandate like this, it could easily be accomplished. The
point is that _in aggregate_ there are too many of these to accomplish.
Competent management will choose the highest value things to focus on. This
isn’t defeatism. This is realism. Focusing on everything is the same as
focusing on nothing.

Realistically, mandating RESTful APIs at the exec level is unlikely to be a
big win for Microsoft. The teams working on APIs at scale are largely already
doing this (and you’ll notice multiple groups represented by the authors). The
teams that aren’t doing this are largely not building APIs that benefit a
great deal from RESTful APIs, because they’re building internal APIs or
similar and RESTfulness would be nice to have but not particularly impactful.

------
vearwhershuh
The primary innovation in REST is HATEOAS (which isn't mentioned in the
document at all). JSON isn't a hypertext, so it just isn't a good format for
RESTful services.

[http://intercoolerjs.org/2016/01/18/rescuing-rest.html](http://intercoolerjs.org/2016/01/18/rescuing-rest.html)

[http://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.html](http://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.html)

Doesn't anyone notice this? I feel like I'm taking crazy pills.

[https://www.youtube.com/watch?v=HOK6mE7sdvs](https://www.youtube.com/watch?v=HOK6mE7sdvs)

~~~
clintonb
The trouble with HATEOAS is that it requires extra work on the client. The
client is supposed to start at the root of the API, request URLs, and cache
them. Getting the data one wants requires multiple HTTP calls, many of which
are supposed to be cached.

I tried implementing this and found it toilsome. It was far easier to use
versioned URLs that followed a documented pattern.
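The link-following flow described above might look something like this sketch (the `_links` shape is a common convention, e.g. HAL, not something any particular API guarantees; `fetch` stands in for an HTTP GET):

```python
# Minimal HATEOAS-style client: start at the API root, discover URLs
# from the response, and cache them for later calls.

link_cache = {}

def resolve(rel, fetch):
    """Return the URL for a link relation, fetching the root only on a miss."""
    if rel not in link_cache:
        root = fetch("/")                    # one extra HTTP round trip
        link_cache.update({r: l["href"] for r, l in root["_links"].items()})
    return link_cache[rel]

# fetch would normally do network I/O; stubbed here for illustration.
fake_root = {"_links": {"orders": {"href": "/v1/orders"},
                        "users": {"href": "/v1/users"}}}
url = resolve("orders", lambda path: fake_root)
```

Even this tiny version shows the cost: every client needs cache logic and an extra round trip, versus simply constructing `/v1/orders` from documentation.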

When I checked about three years ago there wasn’t much in the open source
community that I could build atop for clients. I also didn’t want to maintain
an SDK client in addition to the API itself.

~~~
sk5t
"Toilsome" is probably the most kindly yet still accurate thing one could say
about exposing HATEOAS to the real world, at this particular point in time.

~~~
vearwhershuh
Every web 1.0 app developer in the world implemented HATEOAS to a great extent
without even thinking about it.

It's only toilsome when you try to shoe-horn it into a traditional data API,
rather than accept it as a unique descriptive aspect of the early web
architecture.

------
mumblemumble
Every time I see a discussion about REST guidelines, I get a little bit more
happy about having switched to gRPC.

~~~
asdkhadsj
Man, I wanted to love gRPC. The disconnect between Protobufs and languages
just felt too great though. You could do some weird things that just made
every language feel, to some degree, non-idiomatic.

I switched to Twirp at one point to retain the simplicity of RPC + Protobuf,
but avoid some of the complexity we didn't need via gRPC... but even that
suffered, of course, from the Protobuf problem.

Finally I'm back to plain HTTP and JSON. We don't worry too much about REST
fundamentals, and honestly we're more like an ad-hoc (JSON) RPC over HTTP, but
it's simple.

The only problem is documentation. The _one thing_ that I found perfect with
Protobuf. Seems really hard to have everything here.

~~~
addcn
How would you articulate "the protobuf problem" to a gRPC novice like me?

Also re http/rest docs -- check out my open source project -- it's sort of
like Git but for Rest APIs
[https://github.com/opticdev/optic](https://github.com/opticdev/optic)

~~~
asdkhadsj
I go over it in a bit more detail here:
[https://news.ycombinator.com/item?id=21621592](https://news.ycombinator.com/item?id=21621592)

But in short, Protobuf is inherently a language of its own, like JSON etc.
But it's feature-rich enough that it can cause a fair number of
incompatibilities with a language's preferred style or usage of features.

Where the incompatibility shows up depends on the language. I found it to be
very different between Rust and Go, for example.

------
danellis
Why isn't this called "HTTP API Guidelines"? It doesn't seem to have much to
do with REST at all. For example, it says that people should be able to
construct URLs, whereas the REST style uses the URLs found in resources.

To be clear, I'm not saying that there is anything wrong with the practices
they propose here, just that they're not what they're claiming they are.

------
dang
Discussed in 2016:
[https://news.ycombinator.com/item?id=12122828](https://news.ycombinator.com/item?id=12122828)

------
erjjones
Ideally, they wouldn't re-invent the wheel again and would just stick with the
OData protocol. They already have the platform to do so -
[https://docs.microsoft.com/en-us/odata/resources/roadmap](https://docs.microsoft.com/en-us/odata/resources/roadmap)

------
jmuguy
What do other folks here think of how MS handles query params, specifically
stuff like filtering?

[https://github.com/Microsoft/api-guidelines/blob/master/Guidelines.md#97-filtering](https://github.com/Microsoft/api-guidelines/blob/master/Guidelines.md#97-filtering)

When working with the Graph API for 365, I thought it was really weird how you
had to pass some params:

    GET https://api.contoso.com/v1.0/products?$filter=name eq 'Milk'
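Presumably the spaces aren't sent literally on the wire; the client percent-encodes the query string. A sketch with the standard library (`api.contoso.com` is the docs' placeholder host):

```python
# Encode an OData-style $filter expression into a query string.
from urllib.parse import urlencode

query = urlencode({"$filter": "name eq 'Milk'"})
url = "https://api.contoso.com/v1.0/products?" + query
```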

~~~
Ididntdothis
At first look I think using spaces is a bad idea but maybe it’s ok.

~~~
wnevets
yeah seeing spaces is kinda weird IMO

------
laurent123456
I find that MS are often quite good at developing good APIs and documenting
them.

In this new doc I particularly like the Delta Queries section [0]. It's
something that's difficult to get right but with this you can pretty much copy
and paste their guidelines for your project.

0: [https://github.com/Microsoft/api-
guidelines/blob/master/Guid...](https://github.com/Microsoft/api-
guidelines/blob/master/Guidelines.md#10-delta-queries)

------
perlgeek
> An example URL that is not friendly is:

>
> [https://api.contoso.com/EWS/OData/Users('jdoe@microsoft.com'...](https://api.contoso.com/EWS/OData/Users\('jdoe@microsoft.com'\)/Folders\('AAMkADdiYzI1MjUzLTk4MjQtNDQ1Yy05YjJkLWNlMzMzYmIzNTY0MwAuAAAAAACzMsPHYH6HQoSwfdpDx-2bAQCXhUk6PC1dS7AERFluCgBfAAABo58UAAA='\))

A well-deserved dig at the sharepoint API.

------
MiyamotoAkira
This keeps bothering me:

>7.1 URL structure

>Humans SHOULD be able to easily read and construct URLs.

>This facilitates discovery and eases adoption on platforms without a
well-supported client library.

The URL is ephemeral in REST, because you create the documents on the fly.
They can be linked or not to things that you store in the datastore. That lets
you easily change things around as needed, because the URL is not the API;
the hyperlinks are the API. The URL is like a memory pointer: you shouldn't

------
vkaku
Too much emphasis on interfaces and not enough on programmability. I don't
even know if they themselves follow their own rules.

Also, they define DELETE as idempotent, which is a little different from how
some of us write APIs.

~~~
alkonaut
Note that idempotent in this context means “2 calls will end up with the same
server state as one call”.

It does not mean for example that the second call can’t return a different
response than the first.
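A tiny illustration of that distinction: deleting twice leaves the server in the same state both times, but the second response differs (404 vs 204; the handler here is a made-up sketch, not any real framework).

```python
# Idempotent DELETE: repeated calls converge on the same server state,
# even though the status codes differ.

store = {"42": "order"}

def delete(key):
    if key in store:
        del store[key]
        return 204               # No Content
    return 404                   # Not Found: different response, same state

first = delete("42")             # 204, object gone
second = delete("42")            # 404, object still gone
```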

~~~
fyp
The reason people like idempotency is that it is great for "at least once"
systems (as opposed to "exactly once", which is hard to guarantee).

But REST's version of idempotency isn't good for this. If you retry your
request multiple times (due to a flaky connection or whatever), it only
guarantees the same server state if your duplicate requests are bunched up.

For example, if you do a DELETE and then recreate the object with a POST, a
duplicate straggler DELETE still floating around will end up deleting the
newly created object.

~~~
thexa4
You can make it idempotent again by requiring API users to use an If-Match
header with the ETag of the state they're currently expecting.

Also allows you to implement optimistic concurrency. See:
[https://developer.mozilla.org/en-
US/docs/Web/HTTP/Headers/If...](https://developer.mozilla.org/en-
US/docs/Web/HTTP/Headers/If-Match)

~~~
fyp
I said this in another comment and got downvoted too. Can the downvoters
explain what they are downvoting for? What's wrong with if-match?

------
BstBln
Also seems they never heard about versioning via content negotiation :)

~~~
excerionsforte
Versioning through HTTP headers was the spark for why I was looking around for
guidelines. I think it is a better way to go than URL versioning.
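One common shape for header-based versioning is a vendor media type in `Accept`; the `vnd.contoso` name below is made up for illustration, not from the guidelines:

```python
# Extract the requested API version from an Accept header such as
# "application/vnd.contoso.v2+json", falling back to a default.
import re

def requested_version(accept_header, default=1):
    m = re.search(r"vnd\.contoso\.v(\d+)\+json", accept_header)
    return int(m.group(1)) if m else default

v = requested_version("application/vnd.contoso.v2+json")
```

The URL stays stable across versions; clients opt into a version via content negotiation.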

~~~
BstBln
I think so too, but even if one disagrees, they should have mentioned it.
Otherwise you'd think they had never heard of it :)

------
jayd16
Ok, this may be a dumb question but is there any risk to using/enforcing PATCH
in your APIs in this day and age? I still avoid it but that's probably just
cargo culting now, right?
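Part of the historic unease is that HTTP itself doesn't pin down PATCH semantics. JSON Merge Patch (RFC 7386) does: objects merge recursively and `null` deletes a member. A minimal sketch of those merge rules:

```python
# Minimal JSON Merge Patch (RFC 7386) semantics: dicts merge
# recursively, null removes a member, anything else replaces.

def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch                     # non-object patch replaces outright
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)        # null means "remove this member"
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

doc = {"title": "Hello", "author": {"name": "Ann", "email": "a@x"}}
patched = merge_patch(doc, {"author": {"email": None}, "tags": ["api"]})
```

With well-defined semantics like these (signalled via `Content-Type: application/merge-patch+json`), PATCH is arguably no riskier than PUT, beyond its non-idempotence.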

------
jdprt
Where's the section saying "so get rid of your SOAP WS already OK?" That would
be lovely!

~~~
will4274
What's wrong with SOAP? SOAP is a lot more than REST, so you'd need to explain
an additional technology beyond REST in order to recommend moving away from
SOAP.

------
isvalid
Cool guidelines!

------
BstBln
"Some services MAY add fields to responses without changing version numbers.
Services that do so MUST make this clear in their documentation"

WTF

~~~
cosmodisk
I remember doing an integration with a finance company's API. I read their
documentation, wrote the code, all worked fine, and we were moving towards the
go-live date. A week later, my integration failed. I tried hell knows how many
things and eventually contacted their lead dev asking what I was doing
wrong... The answer was "Oh, emm, yes, we've kind of changed the response
format slightly, it's not in the documentation"... I got that fixed, and it
was working again. Then we were supposed to go live the next week. Two days
before the go-live date, I got told we were cancelling all business with the
company... All the integration code went into the bin.

~~~
BstBln
Assuming you're working with JSON, it's very easy to ignore extra fields, and
then there's no reason to deploy any fixes for that :)
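That tolerance is just a matter of picking out only the fields you use when parsing, so a service adding response fields (as the quoted guideline allows) can't break the integration. The field names below are illustrative:

```python
# Parse only the known fields; unknown additions are silently ignored.
import json

payload = json.loads('{"id": 7, "status": "paid", "new_field": "surprise"}')
order = {"id": payload["id"], "status": payload["status"]}  # ignore the rest
```

A format *change* (renamed or retyped fields), as in the anecdote above, would of course still break this.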

~~~
gt2
Where did it say it was an extra field? It says format change, which could
certainly cause problems!

edit: ah you must be referring to the GP. But the anecdote just says format
change.

~~~
jsight
Yes, I think BstBln was using a context aware parser to formulate his reply.

~~~
BstBln
Indeed :)

------
einrealist
This is more an opinionated design document for HTTP than for REST. There is
not a single word about semantic formats or HATEOAS. I really expected more
from Microsoft in this area.

Had I the time to write such a design document, I would start with resources,
versioning, URIs and semantic documents. I would write about entity models,
linking (links, link templates) and actions. I would write about
representations and about how representations can support optimizations,
embedding of resources and entity expansion, which would otherwise be
addressed by inventions like GraphQL. Only afterwards would I write about
HTTP as a transfer protocol. But that part can be brief, because the HTTP
specification is already out there.

