GET - Return the current value of an object, is idempotent;
PUT - Replace an object, or create a named object, when applicable, is idempotent;
DELETE - Delete an object, is idempotent;
POST - Create a new object based on the data provided, or submit a command, NOT idempotent;
HEAD - Return metadata of an object for a GET response. Resources that support the GET method MAY support the HEAD method as well, is idempotent;
PATCH - Apply a partial update to an object, NOT idempotent;
OPTIONS - Get information about a request, is idempotent.
Most importantly: PUT is idempotent.
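To make the idempotency column concrete, here's a toy sketch (an in-memory store, not any particular framework) of why retrying a PUT is safe while retrying a POST is not:

```go
package main

import "fmt"

// Hypothetical in-memory store illustrating the idempotency column above.
type store struct {
	objects map[string]string
	nextID  int
}

// PUT: replace-or-create at a name the client chose. Calling it twice
// leaves the same state as calling it once -- idempotent.
func (s *store) put(name, value string) {
	s.objects[name] = value
}

// POST: the server picks the identity, so each call creates a new
// object -- NOT idempotent.
func (s *store) post(value string) string {
	s.nextID++
	id := fmt.Sprintf("obj-%d", s.nextID)
	s.objects[id] = value
	return id
}

func main() {
	s := &store{objects: map[string]string{}}
	s.put("a", "v1")
	s.put("a", "v1") // retried PUT: no new state
	s.post("v2")
	s.post("v2") // retried POST: duplicate object!
	fmt.Println(len(s.objects)) // 3
}
```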
Credit to arkadiytehgraet for retyping the table to be readable. Please give them an upvote for the effort.
Never use code formatting for text, as it makes it unreadable on mobile devices.
For my own (and others') convenience, here is the unformatted rule table from the parent comment:
Until such time that the site supports quoted text a little civil disobedience is warranted. It's not like it's a hard problem.
Continuing to use code formatting after being made aware of the problem wouldn't be civil disobedience either. It would merely inconvenience the large number of people who read HN on mobile devices, for no purpose.
(Edited to remove a somewhat unrelated philosophical note.)
We all do it from time to time; even you did it a few days ago. Monospace has its purpose; maybe HN should simply fix it.
So yes, given HN's meager formatting options, monospace is good for that! :-)
You may note that later in that same comment there is a longer quote that I formatted in non-code italics.
But hey, if I encouraged you to create an HN account, that's great! Speaking as someone who writes code in a proportional font, we may have some interesting discussions ahead of us. ;-)
Because once upon a time the “H” in “HN” meant something.
Italics works fine for quote IMO.
I propose using four underscores as a delimiter of a quote block, like asciidoctor, in addition to italics. I think it looks better than the markdown way of using ">" in front of a quote as it can get quite cumbersome and ugly when you're quoting multiple lines of something. However, I generally enjoy using markdown more if it works on a website. It just looks ugly IMO if it doesn't.
I think you'll find the practice of prefixing quoted lines with ">" originated with old Unix email and Usenet.
Thank you for the effort. I've updated my post with your text.
You're perfectly fine to design an API around RPC calls and POST. There are all of the issues around RPC that you will have to deal with (long running operations, versioning, serde and marshalling of arguments, argument and response typing and interface definitions, codegen of the IDL to produce client and server stubs that will require modification and back/forward porting of implementation).
But an API definition like this one covers a lot of that as part of the protocol, most of which is defined by HTTP anyway. You also get encryption, compression, caching, and tooling "for free".
Instead of polluting existing method names to mean new things, HTTP could offer more method names.
Having POST alternatively mean "send a command" makes it meaningless. The command could do anything.
What about fetching many objects or creating many objects or updating many objects?
Edit: I was wrong. It's allowed to put a body in a GET request, but it isn't OK to use it according to the HTTP spec (which is admittedly kind of weird). Source: https://stackoverflow.com/a/983458
Edit 2: according to an edit in the SO answer, it looks like I wasn't actually wrong, or at least not since 2014, when these RFCs became the standard for HTTP.
However, it is semantic nonsense to put it on a DELETE or GET.
DELETE is defined to remove the resource identified by the URL; GET is defined to fetch the resource identified by the URL. There is no purpose for an entity.
> A payload within a GET request message has no defined semantics
> The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI.
Which, to me, sounds like: “it was forbidden to put info in the body, and now it's just not especially encouraged”. But maybe I'm over thinking the whole thing.
If your ids are UUIDs, each is 36 characters long. If your URL consisted of nothing but UUIDs, you could safely delete 55.5 (repeating) of them, assuming the commonly cited ~2,000-character safe URL limit.
Probably in violation of some standard, but seems sensible enough.
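The arithmetic behind that figure (the ~2,000-character safe URL limit is my assumption here, and this ignores separators, which would shave a few off):

```go
package main

import "fmt"

// How many 36-character UUIDs fit in a URL of the given length?
// 2000 / 36 = 55.5 repeating, hence the figure above.
func fitUUIDs(urlLimit, uuidLen int) int {
	return urlLimit / uuidLen
}

func main() {
	fmt.Println(fitUUIDs(2000, 36)) // 55 whole UUIDs
}
```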
Was recently dealing with a list of 100+ simple text fields each with unique identifying meta info.
Most usage of the form would only involve updating a few at a time, but on initial entry or a large update all of them could be updated or deleted.
A RESTful approach would have me submit 100+ individual PUT requests. However, a smaller instance of our server would likely be affected by 100s of requests at a time, and it would significantly complicate the client code to make all of those requests at once and handle the errors for each.
It’s actually a small payload, even with 100 items, so it’s ridiculous to send them individually in order to be “correct” if it
- leads to a worse user experience.
- increases load on the server.
- complicates the codebase.
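A single batch endpoint sidesteps all three problems. A minimal sketch of what the payload and server-side handling could look like (all field names here are assumptions, not anything from the parent's system):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// One request carries many field updates at once, instead of 100+
// individual PUTs. Deletions ride along in the same payload.
type FieldUpdate struct {
	ID     string `json:"id"`
	Value  string `json:"value,omitempty"`
	Delete bool   `json:"delete,omitempty"`
}

// applyBatch applies all updates in one pass, so the whole batch can
// be wrapped in a single transaction server-side.
func applyBatch(store map[string]string, updates []FieldUpdate) {
	for _, u := range updates {
		if u.Delete {
			delete(store, u.ID)
		} else {
			store[u.ID] = u.Value
		}
	}
}

func main() {
	payload := []byte(`[{"id":"f1","value":"hello"},{"id":"f2","delete":true}]`)
	var updates []FieldUpdate
	if err := json.Unmarshal(payload, &updates); err != nil {
		panic(err)
	}
	store := map[string]string{"f2": "old"}
	applyBatch(store, updates)
	fmt.Println(store) // map[f1:hello]
}
```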
With HTTP2 you can send a ton of requests in parallel.
For one, having it be transactional.
>With HTTP2 you can send a ton of requests in parallel.
That's orthogonal. And they still have a cost (in time and size of transfer).
That means your CDN, your load balancers, and finally your actual applications -- and any other intermediaries -- all communicating via HTTP/2.
Not to say that HTTP/2 isn't the solution, but verifying it end-to-end is going to be harder than looking up "number of sites using HTTP/2", since you can't easily inspect the behavior of the intermediary servers.
I'm not entirely sure what you mean with downstream, but in my experience H2 is really well supported almost everywhere these days. Ymmv ofc
Anyway, I'm curious... what components do you run into that don't support this yet? At least in your examples I can't think of any major players that don't do HTTP/2 out of the box, except, well, old versions.
Is this more of a hypothetical? In my experience HTTP/2 is pretty much ubiquitous for anything current.
I think though that most operations don't need this. It's not an optimization I would blindly add to any operation, unless there's a specific case that warrants it.
I don't think that this is an anti-pattern. Most APIs do '1 thing at a time'.
The GET would be cacheable because the query would be repeatable and idempotent. In theory you'd never have to make the POST more than once, unless you are creating a new query, which would make a new ID.
See section 13.
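One way to sketch that pattern: derive the query ID from a hash of the query body, so repeating the same POST lands on the same resource and the results GET stays cacheable. The paths and the hashing choice below are assumptions, not from the guidelines:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// queryID maps a query body to a stable ID: the same POST body always
// produces the same ID, so the POST is effectively repeatable and
// GET /queries/{id}/results is safe to cache.
func queryID(body []byte) string {
	sum := sha256.Sum256(body)
	return hex.EncodeToString(sum[:8]) // short, stable ID for the sketch
}

func main() {
	id := queryID([]byte(`{"filter":"name eq 'Milk'"}`))
	fmt.Println("GET /queries/" + id + "/results")
}
```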
There have been proposed standardized fixes such as generalizing the HTTP SEARCH method that first appeared in WebDAV, but none have yet gone far.
For example, how do you have an up-vote link? What verb is that supposed to be? And links are all GET aren't they? Up-vote isn't idempotent. How do you make a correct up-vote link?
How do we have this disconnect between HTTP verbs and HTML? How did it work before AJAX?
(I'm not a web developer.)
So why were they in the original HTTP spec if they weren't supported by original HTTP agents?
HTTP/1.0 spec (1996) defined GET, POST, and HEAD.
HTTP/1.1 spec (1997) defined all of the other, newer methods.
> supported by original HTTP agents
The first web browsers I know about were Mosaic (1993) and Netscape Navigator (1994) predating the HTTP/1.0 RFC.
This is before me being old enough to follow technology, so below is a good portion of conjecture and hearsay (I'd love to be corrected by someone who actually remembers it):
Tim Berners-Lee's initial web browser was intended as an editing tool as well as a viewer, and these ideas came back when they went to write up HTTP as a "proper" IETF standard. It was not uncommon (a bit less so today, but back then especially) to design specs around what they thought it should do, not what they'd tried/done in practice. Making an official spec is the chance to get this stuff in.
At some point, a bunch of the file-specific stuff was split off to WebDAV as an extra thing (work on that started in 1996; HTTP 1.1 was finished in 1997) -- or was it rejected, with WebDAV created as a new home for those ideas? But PUT, among some others, survived.
An “upvote” is probably an idempotent action. Calling an “upvote” endpoint moves an object into an “upvoted” state, regardless of its previous state. However, it’s probably semantically a PATCH, since it’s a partial update of a resource.
There is no real disconnect here.
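A toy model of that: the upvote sets membership rather than incrementing a counter, so a retried request can't double-count (all names here are hypothetical):

```go
package main

import "fmt"

type post struct {
	upvoters map[string]bool
}

// upvote is idempotent: it moves the user into the "upvoted" state
// regardless of the previous state, so calling it twice for the same
// user leaves the same state (and score) as calling it once.
func (p *post) upvote(user string) {
	p.upvoters[user] = true
}

func (p *post) score() int { return len(p.upvoters) }

func main() {
	p := &post{upvoters: map[string]bool{}}
	p.upvote("alice")
	p.upvote("alice") // retried request: no double-count
	fmt.Println(p.score()) // 1
}
```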
That being said, I too have seen things like this on the Internet. I would assume Google has some sort of heuristic to determine whether or not they can safely prefetch a link, but who really knows? You should absolutely avoid making this mistake in your own code.
EDIT: Since I didn't make my case well enough, here's an example. Let's say you want to charge a credit card. Are you doing a `POST /charge`? Or would it be better to say `card.charge(500)`? My point isn't that there shouldn't be a proper REST API below it, but rather that we should offer abstractions so normal semi-technical people can use APIs and don't have to think too much like a computer.
On the other hand, if I offer a web service and someone asks “but what if I can’t make HTTP requests?” I’m _perfectly_ happy to tell them to piss off.
People think in ambiguity; adding a new item or replacing an existing item, as in this example, shouldn't be something an application needs to guess at.
I really don’t want to worry about setting http headers to make an add-if-not-exists call for example. The API in a client library with a method “TryAdd(...)” is understandable as an API, and the setting of headers and 4XX code you might receive on a duplicate is just returned as false.
POST /my-account/charge is not an abstraction, it's pretty much exactly what people do and have done.
The only exception is on resource creation (POST), since there's no existing version/generation number to match on. You can get around that by letting the client generate the id (e.g. a random UUID) and rejecting the request if the id already exists. I haven't decided if this is a good idea or not, since the client can potentially choose really weird vanity ids (especially if you're using uuid in hex or base64). But if you don't expose your internal ids in any UI, that seems fine too.
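A sketch of that client-generated-id approach, with a plain duplicate check standing in for what a real server might express as `PUT` plus `If-None-Match: *` and a 409 response (all names hypothetical):

```go
package main

import "fmt"

type server struct {
	resources map[string]string
}

// create returns false (think HTTP 409 Conflict) if the id already
// exists, which makes a retried create harmless: the client keeps the
// same random id across retries, so duplicates are rejected.
func (s *server) create(id, body string) bool {
	if _, exists := s.resources[id]; exists {
		return false
	}
	s.resources[id] = body
	return true
}

func main() {
	s := &server{resources: map[string]string{}}
	id := "9f2c7a10-5b3d-4e61-8c44-0a1b2c3d4e5f" // hypothetical client-generated UUID
	fmt.Println(s.create(id, "v1")) // true: created
	fmt.Println(s.create(id, "v1")) // false: duplicate retry rejected
}
```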
I worked in finance and designed a REST API, and besides the standard user/account object, basically ALL the data and operations were neither idempotent, cacheable, nor durable, and often couldn't possibly be designed using HATEOAS et al.
Quotes, orders, offers and transactions carry lots of monetary amounts which are sent in user currency, which is auto-converted depending on the user requesting it (and currency conversion rates is part of the business). Most offers are only valid a limited amount of time (seconds to minutes) because of changing market rates. There is also no "changing" of object as in PATCH/DELETE, all you do is trigger operations via the API and every action you ever did is kept in a log (regulatory wise, but also to display for the end user).
There is some way to try to hammer this thing to fit HATEOAS et al., and I put some effort into it, but I would have ended up splitting DTOs into idempotent/immutable and non-idempotent/mutable parts spread across different services, bloating the DTOs themselves (i.e. including all available currencies in a quote/offer), and expressing the validity/expiry of objects via HTTP caching (instead of the DTOs). That would have resulted in a complex and hard-to-read API, significantly worse performance (due to a lot of unneeded data & calculations), and some insane design decisions (like keeping expired offers/quotes around just so they are still available at their service URL with an id, even though the business requirements would never store expired offers).
Sometimes you just need to use your own head, accept that the problem domain might not be covered by other "guidelines", and come up with a sane design yourself.
Managers will care if your API's are a contract with your end customer (e.g., your product is providing APIs), and you're watching your NPS scores.
I feel like this could have been said without the sarcasm and it would have been a great discussion starter. Instead a potentially valid point/retort will be obscured by the tone and discussion surrounding it.
> Unfortunately large organizations often struggle to get groups into alignment on issues like this, unless there's a strong mandate from the top down (e.g. Amazon).
No sarcasm or snark, same basic substance.
A top-down mandate cannot align 130k employees on every topic, even if that attempt were an appropriate thing for upper management to spend their time on.
There might be a more interesting conversation if context were given. If the comment about management not caring about RESTful APIs is coming from someone working as part of a team focused on public APIs, it’s a lot more interesting than someone working on desktop clients.
Edit: You know what, you’re right. I read the comment as a flippant “pft, that’s not my experience” and responded as such. I should have assumed better intent. Maybe the commenter was questioning why their experience doesn’t match up with this. Maybe they were asking why management isn’t pushing this effectively. I assumed a specific intent and should have considered different potential intentions.
It’s also unrealistic that everyone underneath can focus on many mandates at once. Which leads to the loss of credibility when mandates start getting ignored out of necessity.
At Microsoft, simple mandates like this one number at least in the hundreds.
Realistically, mandating RESTful APIs at the exec level is unlikely to be a big win for Microsoft. The teams working on APIs at scale are largely already doing this (and you’ll notice multiple groups represented by the authors). The teams that aren’t doing this are largely not building APIs that benefit a great deal from RESTful APIs, because they’re building internal APIs or similar and RESTfulness would be nice to have but not particularly impactful.
Doesn't anyone notice this? I feel like I'm taking crazy pills.
I tried implementing this and found it toilsome. It was far easier to use versioned URLs that followed a documented pattern.
When I checked about three years ago there wasn’t much in the open source community that I could build atop for clients. I also didn’t want to maintain an SDK client in addition to the API itself.
It's only toilsome when you try to shoe-horn it into a traditional data API, rather than accept it as a unique descriptive aspect of the early web architecture.
It only became problematic when we tried to shoe-horn it into traditional data APIs.
> JSON isn't a hypertext, it just isn't a good format for REST-ful services.
The point of REST was to take ideas from the browser and apply them to other services.
But HATEOAS is proving that REST only really makes sense if you've got a browser on the other end.
> Doesn't anyone notice this? I feel like I'm taking crazy pills.
The first comment I saw on this article was, _every single person writing a REST api should have to memorize this table_.
If you were trying to sell people on a new library and said, "everyone here has to memorize this table," you'd get laughed at as a crazy person.
That being said, I have never seen an API client developed that was driven mainly by HATEOAS.
I think that's a good idea, just like REST and HATEOAS are both ideas trying to solve problems.
I'm just skeptical that it'd work in practice. It seems vanishingly unlikely that consumers will use it correctly enough that changing the URLs in that mechanism wouldn't result in making the breakage even more mysterious.
When I say "it only makes sense if you've got a browser on the other end," the issue is that you need a client approaching the complexity of a browser to do all the redirection and such that makes this level of flexibility work for the API implementer.
I believe something like that is possible, but this isn't the way.
Apart from being able to tell the client what functions they are allowed to use, and a very superficial form of documentation, I don’t really get the advantage of it.
I switched to Twirp at one point to retain the simplicity of RPC + Protobuf, but avoid some of the complexity we didn't need via gRPC... but even that suffered, of course, from the Protobuf problem.
Finally I'm back to plain HTTP and JSON. We don't worry too much about REST fundamentals, and honestly we're more like an ad-hoc (JSON) RPC over HTTP, but it's simple.
The only problem is documentation. The one thing that I found perfect with Protobuf. Seems really hard to have everything here.
Also, re: http/rest docs -- check out my open source project -- it's sort of like Git but for REST APIs: https://github.com/opticdev/optic
But in short, Protobuf is inherently a language of its own, like JSON etc. But it's feature-rich enough that it can cause a fair number of incompatibilities with a language's preferred style or usage of features.
Where the incompatibility shows up depends on the language. I found it to be very different between Rust and Go, for example.
Could you elaborate on this?
Sure, take Go for example. Protobuf to Go works well, but there are some features of the Protobuf language that just don't exist in Go. Enums, for example. While Go does have constructs that are similar to Enums, they're just different enough to make it a bit weird.
This basic problem gets worse when you try to use it, though. The `oneof` construct, for example, is sort of impossible in Go. IIRC the Go implementation had to use runtime type checking to give some resemblance to the Protobuf spec.
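For anyone who hasn't seen it, this is roughly the shape the Go protobuf generator emits for a `oneof`: an unexported interface, one wrapper struct per variant, and a runtime type switch at the use site (names here are simplified, not actual generated code):

```go
package main

import "fmt"

// The variant marker interface: only the wrapper structs below
// implement it, which is how the generated code fakes a sum type.
type isAvatar interface{ isAvatar() }

type AvatarURL struct{ URL string }
type AvatarBytes struct{ Data []byte }

func (*AvatarURL) isAvatar()   {}
func (*AvatarBytes) isAvatar() {}

type Profile struct {
	Avatar isAvatar // which variant is set is only known at runtime
}

// describe consumes the oneof with the runtime type switch mentioned
// above -- there is no compile-time exhaustiveness check.
func describe(p Profile) string {
	switch a := p.Avatar.(type) {
	case *AvatarURL:
		return "url: " + a.URL
	case *AvatarBytes:
		return fmt.Sprintf("%d raw bytes", len(a.Data))
	default:
		return "unset"
	}
}

func main() {
	fmt.Println(describe(Profile{Avatar: &AvatarURL{URL: "https://example.com/a.png"}}))
}
```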
Rust (which I focus on now) was far better with Protobufs. As far as I remember, there wasn't much of Protobuf that broke idiomatic Rust. However, there was plenty of idiomatic Rust that broke Protobufs, IIRC: things like complex data structures behind enum variants, enum variants as structs, tuple values, etc. IIRC it was bad enough that I usually used an abstraction layer to write my idiomatic data structures and convert to/from the Protobuf structs.
Which is how I left it, altogether. I realized I had a ton of glue code trying to make up for the incompatibilities in Protobufs + (Go|Rust), such that it would just be easier to drop it - at least as far as my code is concerned.
We now struggle with documentation, something Protobuf did excellently, but at least the code smell is gone.
> The `oneof` construct, for example, is sort of impossible in Go. IIRC the Go implementation had to use runtime type checking to give some resemblance to the Protobuf spec.
It looks like they took a reasonable approach, and maybe the deeper issue is that protobuf assumes you want direct access to those structures. That's not an unreasonable assumption given its domain, but I can see a red flag: the code generator is solving the problem by generating a forest of types, but that's also something a developer would never do.
Another approach would be to make the oneof implementation more opaque and let you access things via methods. While you'd always want to allow the consumer to ask "which kind of avatar" is this, you could also let the consumer query "get me the avatar image url" and that could return either success or an error.
> I realized I had a ton of glue code trying to make up for the incompatibilities in Protobufs + (Go|Rust), such that it would just be easier to drop it
That's the acid test for whether it works. And it means you can figure out if your language is good by porting a non-trivial codebase using an existing API and see how much glue is required.
I think it's because JSON is inherently smaller, and the spec primarily focuses on basic types that handle data.
With JSON I can write idiomatic code, in any language, and the translation to and from my code is correct. I don't need to abstract away my JSON code for arbitrary reasons, it works.
I'm not sure why that is TBH, I just know that it's a restriction of Protobuf I don't find myself running into with JSON.
To be clear, I'm not saying that there is anything wrong with the practices they propose here, just that they're not what they're claiming they are.
When working with the Graph API for 365 I thought it was really weird how you had to pass some params
GET https://api.contoso.com/v1.0/products?$filter=name eq 'Milk'
In this new doc I particularly like the Delta Queries section. It's something that's difficult to get right, but with this you can pretty much copy and paste their guidelines for your project.
A well-deserved dig at the sharepoint API.
> 7.1 URL structure
> Humans SHOULD be able to easily read and construct URLs.
> This facilitates discovery and eases adoption on platforms without a well-supported client library.
The URL is ephemeral in REST. That is because you create the documents on the fly. They can be linked or not to things that you store in the datastore. This allows you to easily change things around as needed, because the URL is not the API; the hyperlinks are the API. The URL is like a memory pointer. You shouldn't care about it.
Also, they define DELETE as idempotent, which is a little different from how some of us write APIs.
It does not mean for example that the second call can’t return a different response than the first.
But REST's version of idempotency isn't good for this. If you retry your request multiple times (due to flaky connection or whatever), it only guarantees the same server state if your duplicate requests are bunched up.
For example, if you do a DELETE then create it again with a POST, and there is a duplicate straggler DELETE floating around, it will end up deleting your new recreation.
Also allows you to implement optimistic concurrency. See: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If...
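A minimal sketch of that optimistic-concurrency flow, with an integer version counter standing in for a real ETag value (names hypothetical):

```go
package main

import "fmt"

type resource struct {
	body string
	etag int // version counter; real servers often use a content hash
}

// update only succeeds if the client's If-Match value still equals the
// resource's current version; otherwise the server would answer
// 412 Precondition Failed and the client must re-read and retry.
func (r *resource) update(ifMatch int, newBody string) bool {
	if ifMatch != r.etag {
		return false // 412: someone else changed the resource first
	}
	r.body = newBody
	r.etag++
	return true
}

func main() {
	r := &resource{body: "v1", etag: 1}
	fmt.Println(r.update(1, "v2")) // true: matched current ETag
	fmt.Println(r.update(1, "v3")) // false: stale ETag, lost the race
}
```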
"Do the same thing" isn't specific enough. Does it mean the state of the system? Does it mean the response code? Does it mean the response body? Does it mean all of these things? All seem like reasonable interpretations.
It doesn’t mean “do the same thing” or “get the same response”, only “end up in the same place”.
You can do a DELETE of x, get an OK response, and the important state is that x is now deleted on the server. Then the next call to delete x does nothing (which is different from the first, which deleted x!). So to be idempotent it has to do something different. The response in the second case can be either “ok” (because the thing is deleted) or e.g. “x doesn’t exist”.
It also feels very natural for DELETE to be idempotent IMHO, so I would be curious what strange thing you do
edit: ah you must be referring to the GP. But the anecdote just says format change.
Though doing it without bumping a version should be rare, especially without a big documentation notice.
Of course you want to mention that a new field was added and what it is for, but why do you need a big notice? Your clients should just continue to work.
That does not seem at all weird to me. The client side of it is the same as "don't SELECT *".
Had I the time to write such a design document, I would start with resources, versioning, URIs and semantic documents. I would write about entity models, linking (links, link templates) and actions. I would write about representations and about how representations can support optimizations, embedding of resources and entity expansion, which would otherwise be addressed by inventions like GraphQL.
And only afterwards, I would write about HTTP as a transfer protocol. But that part can be brief, because there is already the HTTP specification out there.