The "true spirit" of REST, to me, is that there's a certain set of things you can do when creating an API that will let you re-use the huge amount of HTTP middleware that's been written and get correct (and useful!) semantics from it. Caches (browser-, edge-, and server-side-), load balancers, forward- and reverse-proxies, application-layer firewalls, etc. will all "just work" for your software if you do REST correctly, and won't have any weird edge-cases.
Re-implementing those same semantics in your own messaging protocol / format, without intertwining the concerns of the protocol and the message format, throws away any/all of those benefits. You need a protocol that guarantees that middleware can "look inside" the messages it's passing (or at least their metadata), in order for any of this to work. That's why HTTP has both a transparent part (req path+headers; resp status code) and an opaque part (req and resp bodies) to each message: the transparent part is there for data that affects middleware behavior, while the opaque part is there for data that doesn't.
Note that that doesn't mean you're stuck with HTTP1. SPDY/HTTP2 is effectively an entirely different protocol—but it keeps the same semantics of requiring certain properties of the metadata tagged onto each message at the protocol level, so that anything that speaks the protocol can use that metadata to inform its decisions.
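The transparent/opaque split above is easy to sketch: middleware makes its decision purely from the method, path, and headers, and never parses the body. A toy cache-decision function, with illustrative names (nothing here is from a real proxy):

```python
# Sketch: middleware deciding from only the transparent part of the message
# (method + headers); the opaque body is never inspected.

def cacheable(method, headers):
    """Toy cache decision: True if the response may be stored and reused."""
    if method != "GET":
        return False                      # only safe reads are cache candidates
    cc = headers.get("Cache-Control", "")
    return "no-store" not in cc and "private" not in cc
```

Any middleware that speaks the protocol can apply a rule like this without knowing anything about the application's message format.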
Doesn't that assume that the middleware also implements REST correctly? As the article points out, there are all too many HTTP libraries that only support GET and POST, not the more "esoteric" verbs like PUT and DELETE.
The article says "most client and server applications don’t support all verbs or response codes" which, in my experience, is not true. There are some, but it's definitely not "most". For one, if you're writing both sides, don't build on client or server software that sucks. Sure, HTML forms don't support PUT and DELETE, but how often do you use HTML forms instead of ajax requests (which do support those methods)? And, if you subscribe to a little bit of CQRS and Event Sourcing ideas, PUT and DELETE don't really make sense, since you should really be POSTing commands to do those things. Where they do make sense is when manipulating files directly, which I don't think comes up much these days.
Look into WebDAV and you'll find plenty :) LOCK, UNLOCK, COPY, MOVE, MKCOL, PROPFIND, PROPPATCH. And also REPORT, SEARCH, and various version-control operations.
Oh sure, but the context is REST apis, not WebDAV :)
PATCH allows two users to update different fields on the same resource at the same time with no conflicts. Otherwise, they'd use PUT and the last update would win unless they use something (client-side) such as compare and set.
PUT and DELETE are just HTTP verbs and don't directly have anything to do with REST. You can support all verbs and have a garbage REST implementation, and (while obviously more difficult) have a great REST implementation for a handful of verbs.
What we really need is a set of verbs that allow us to reliably distinguish between these four types of operations:
1. Pure read.
2. Impure (stateful) read.
3. Idempotent write.
4. Any other write.
There's no particular reason to separate inserts, updates, deletes etc as part of the protocol - they're all just different kinds of writes, and middleware doesn't derive any benefit from being able to distinguish them. Thus, this can be a part of the payload.
On the other hand, the difference between reads and writes, and the two subdivisions within each, do matter for purposes such as caching and automated error recovery (e.g. a proxy can repeat an idempotent write a few times before returning the error to the originator of the request).
In REST, we have GET for #1 and #2, PUT and DELETE for #3, and POST for #4. In practice, this is often simplified to just POST used for both #3 and #4, but that loses a valuable distinction (but is unfortunately often necessary because of the lack of support for other methods). On the other hand, the PUT/DELETE distinction is largely pointless.
It's not worth making the distinction between idempotent and non-idempotent writes given the possibility of network partitions. Every write must be made idempotent because all you can do is retry in this case. PUT is merely an optimization on idempotent POST requests, one that will become progressively rarer as HTTPS continues to spread.
POST requests can be made idempotent by binding the result of processing that request to a future, so every subsequent request simply returns the already computed result.
Impure reads are the default, and pure reads are designated by long-lived cache headers, i.e. a pure read is just an impure read whose response stays valid for as long as the cache headers say.
So really you just need GET and POST, and server-side frameworks should make POST requests idempotent by embedding some notion of futures for side-effecting operations. See the Waterken server for the first development platform to really get this right.
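A minimal sketch of that idea, with illustrative names (an in-memory dict standing in for the durable store a real server would need):

```python
# Sketch: making POST idempotent by binding each request to a stored result
# ("future"). Retries with the same id replay the reply instead of re-running
# the side effect.

_results = {}  # request id -> cached response (durable storage in practice)

def handle_post(request_id, operation):
    """Run `operation` at most once per request_id; replay stored reply on retry."""
    if request_id in _results:
        return _results[request_id]   # retry: return the already computed result
    result = operation()              # first delivery: perform the side effect
    _results[request_id] = result
    return result
```

With this in place, a client facing a network partition can simply resend the same POST until it gets an answer, and the server guarantees the side effect happened at most once.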
It can be as long as you have some kind of identity information associated with the request, and provided alongside any requests that are the 'same'.
e.g. Instead of saying "Insert a transaction for £20", you say, "Insert a transaction that entity A calls 1234, for £20".
Stripe use something like this to ensure that as long as you call their API to ask for a payment and use the same key, they won't charge the person twice.
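On the client side this is just "generate a key once, reuse it verbatim on every retry" — in the style of the `Idempotency-Key` request header Stripe's API documents (the function and field names below are illustrative):

```python
import uuid

# Client-side sketch of an idempotency key: one key per logical operation,
# resent unchanged on every retry of that operation.

def new_charge_request(amount_pence):
    key = str(uuid.uuid4())                       # generated once, then reused on retries
    headers = {"Idempotency-Key": key}
    body = {"amount": amount_pence, "currency": "gbp"}
    return headers, body
```

The crucial part is that retries must not call `new_charge_request` again; they must resend the same headers and body.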
Requesting an insert can be idempotent. Each non-idempotent operation is tagged with a unique identifier, if the server doesn't have inserted data tagged under that identifier, then it performs the insert, if it does, then it returns the usual success code as if it had just performed the insert. This unique identifier is simply a durable representation of a future which I described above.
This would seem to require the identifiers to be attached to records in perpetuity (i.e. it would basically require client-generated IDs everywhere), so that the server can reliably verify that this operation has already produced a record. I can see it working, but it's a far-reaching change, that may not be easy to adopt for existing data storage schemas.
> This would seem to require the identifiers to be attached to records in perpetuity (i.e. it would basically require client-generated IDs everywhere)
Not sure what you mean by client-generated. The server-side app generates the id because it has to store and interpret it. This can be as simple as a sequence number, similar to how TCP guarantees delivery. It depends on the schema really.
This is the simplest way to ensure you can perform arbitrary retries of POST in case of a network partition.
This data doesn't even need to be integrated with your app's schema, although that's ideal so you can manage the storage lifetime. But you could use an entirely separate store for GUIDs and cached replies, and so it becomes transparent to your app and just becomes another layer.
> Not sure what you mean by client-generated. The server-side app generates the id because it has to store and interpret it.
Per the description above:
"Each non-idempotent operation is tagged with a unique identifier"
Since the operation originates on the client, the client has to tag it with the identifier, no? And the server has to store this identifier in a way that associates it with any data affected by that operation in a non-idempotent way.
Or are you saying that the client first has to make a round-trip to the server to generate the ID, and then use that server-provided ID for the actual POST?
> Or are you saying that the client first has to make a round-trip to the server to generate the ID, and then use that server-provided ID for the actual POST?
This is always the case for REST given HATEOAS, ie. you've already made some hypermedia requests to obtain the URL of the endpoint to which you will POST.
Unless the resource you're posting to actually is the public entry point of your service, but that would be very unusual.
Wait, but what about the request used to obtain the URL of endpoint to which you're posting? Isn't that one then not idempotent (since it would create new URLs every time)?
Generally speaking, what is the flow like? Suppose I allocated myself an endpoint, but then never posted anything to it - what does the endpoint actually contain then, if queried? Do unused ones get "garbage collected" somehow eventually?
> Wait, but what about the request used to obtain the URL of endpoint to which you're posting? Isn't that one then not idempotent (since it would create new URLs every time)?
Not necessarily. The numbers don't have to be stored before they're actually used by clients. For instance, you could return a simple integer, like an object version #, and an HMAC(integer, resource URL) to ensure the client can't tamper with it. You only store data under that integer when the user successfully POSTs to that resource for the first time.
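A sketch of that tamper-proof token, assuming a server-side secret (`SECRET` and the function names are illustrative): the server hands out the integer plus an HMAC over it and the resource URL, and later only needs to re-verify the pair, not store it.

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # placeholder; a real key would be configured

def issue_token(n, resource_url):
    """Return (n, tag) where tag = HMAC(secret, n:url); nothing is stored."""
    mac = hmac.new(SECRET, f"{n}:{resource_url}".encode(), hashlib.sha256)
    return n, mac.hexdigest()

def verify_token(n, resource_url, tag):
    """Recompute the HMAC and compare in constant time."""
    mac = hmac.new(SECRET, f"{n}:{resource_url}".encode(), hashlib.sha256)
    return hmac.compare_digest(mac.hexdigest(), tag)
```

Storage is only allocated when the client actually POSTs with a valid token, so unused allocations cost nothing.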
> Generally speaking, what is the flow like? Suppose I allocated myself an endpoint, but then never posted anything to it - what does the endpoint actually contain then, if queried?
There are many possible designs here. I prefer something like this in CRUD-like contexts [1], because it gets me a fully auditable change history, optimistic concurrency control and all the storage needed is fully integrated into the app schema in a sensible way.
Combined with the HMAC described above, you don't need to preemptively allocate any storage for idempotent POSTs, and you can use simple integers as your unique identifiers. So a POST against a version number that's not the latest version would simply return the version that followed if their POST was the one that succeeded, or a redirect to the latest version if someone else had updated it.
Hopefully you can imagine various relaxations on this to make other trade-offs. For instance, a much simpler approach would be to return GUIDs for each possible POST, and you just store each GUID associated with the object when a POST succeeds. If the object's current GUID is the GUID the user is posting, return the current data with 200 OK; otherwise return a 301 Moved to a URL referencing the latest version and let the client try to apply their updates to that. The storage can be reclaimed after a suitable period of time; say a month if your app is for browser clients.
Because applications have state. In fact, the vast majority of reads in any applications are stateful (the database backing your app is also state, you know; I'm not talking just about session state here).
By "pure" here I mean that we're talking about a pure function of its inputs.
Indeed, I forgot some. Then again, last time I tried to use PATCH, I found that support for it in various places in the stack was patchy enough to make it a non-starter.
You can use tools like Fiddler, or browser's built-in HTTP viewers, to observe interactions between your application and the server.
At a minimum, error codes and verbs help these tools highlight different behavior and problems. A 4xx class error, which usually means that your client-side application or your user did something wrong, is very different from a 5xx class error, which means that something really screwed up in the server.
Thus, the response codes make it very easy to use a 3rd party tool to diagnose malfunctions, even when the 3rd party tool doesn't know much about the application.
Likewise, at a minimum, a GET request is a read and doesn't change state. This is also useful when using a 3rd party tool to diagnose an application that runs over HTTP.
I do think that the article points out just how confusing proper REST can be. The semantics don't always match what the API is actually doing, either. Perhaps it's best to just limit oneself to the GET and POST verbs, 200, a few 4xx error codes, and 500. Perhaps it's also best to assume that REST semantics are for diagnostics and not for application-level control?
When I was given the task of defining how our multi-robot server would interface with our user interfaces, I eventually settled on REST. Most of what I knew about REST had been obtained that week. I implemented something pretty vanilla with Django and it all felt pretty elegant. I didn't have to worry about defining a protocol, there was pretty much already one for me:
- GET, PUT, POST, DELETE (I learned there were others, but they were kind of niche/obsolete)
- 200s for success, 400s for client (request) error, 500s for server error.
It was all nice and worked great (and still does, years later).
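The convention in those two bullets can be sketched as follows (illustrative names, not the actual Django code; a dict stands in for the real models):

```python
# Sketch: verbs map to CRUD operations, outcomes map to 2xx/4xx/5xx classes.

robots = {}  # stand-in for the robot database

def handle(method, robot_id, body=None):
    """Return (status, payload) following the verb/status convention above."""
    try:
        if method == "GET":
            if robot_id not in robots:
                return 404, None              # 4xx: client asked for a missing resource
            return 200, robots[robot_id]
        if method == "PUT":
            robots[robot_id] = body           # idempotent create-or-replace
            return 200, body
        if method == "DELETE":
            if robot_id not in robots:
                return 404, None
            return 200, robots.pop(robot_id)
        return 405, None                      # unsupported verb
    except Exception:
        return 500, None                      # 5xx: something broke server-side
```

Nothing clever is happening here, which is rather the point: the protocol already defines the dispatch and the error classes.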
But over time numerous people started telling me that no, it's actually all wrong for one reason or another. I've heard that I should never use anything except GET and POST. That I should ALWAYS return a 200 and provide error metadata as a response if there actually was an error. That POST is actually meant for updates and PUT is meant for new entities. That the opposite is true. That neither is true and I should always use POST for any mutation of state. etc.
I feel like I had success because I approached it from a position of ignorance, meaning I just implemented a simple, sane REST API and was none the wiser that I was doing it wrong.
> - GET, PUT, POST, DELETE (I learned there were others, but they were kind of niche/obsolete)
> - 200s for success, 400s for client (request) error, 500s for server error.
You did a good job there. Ignore the naysayers; they only complain because then they have to do more work to handle your correct error codes and actually send correct HTTP methods.
> I feel like I had success because I approached it from a position of ignorance, meaning I just implemented a simple, sane REST API and was none the wiser that I was doing it wrong.
None of it is REST though, it's just one more RPC over HTTP protocol, which is also what TFAA advertises. Though your version uses HTTP as more than a trivial transport which I guess is nice.
We read then manipulate the state of a multi-robot fleet by making HTTP GET, PUT, POST, DELETE calls that affect a database and robot broker/manager services. The state of the robots and the configuration of the system is represented in JSON. Interaction with the system is stateless, meaning you can jump in at any time, understand the full state of the system, and manipulate it safely. Interaction of a healthy system will involve numerous physical and virtual clients/servers, many of which are physically on the move most of the time.
If this isn't REST then the Wikipedia page on REST needs updating or I'm just not explaining it well enough.
I agree that the communication you described isn't completely REST style but rather "just" using HTTP as an application protocol. There is no real downside in doing so but it shouldn't be called REST.
REST based on the Richardson maturity model[1] involves:
Level 0: Using HTTP as the transport protocol: e.g. SOAP
Level 1: Identifiable objects: e.g. /object/object_id
Level 2: Using HTTP as the application protocol: e.g. using POST to create new objects and DELETE to remove them
Level 3: Hypermedia controls: e.g. responses that link to the operations available next (HATEOAS)
I'm not sure where I misguided people that I was just doing RPC. Almost all endpoints are as you described:
`/robots/robot_id/exceptions`
`/maps/map_id/destinations`
`/maps/map_id/areas/speed_limit_areas`
Performing operations is not done via RPC but rather by POSTing a new `mission` to the queue.
There's no HATEOAS, but that would generally be silly, since this isn't an API that requires easy discovery and consumption by third parties. I'm not sure I subscribe to HATEOAS being required for REST, but I don't really care to argue.
I think I'm starting to get a sense that people can be really opinionated on this stuff, and I'm still lost as to what there is to gain by it. To suggest what I'm describing isn't REST would be to say that the first sections of the Wikipedia page are wrong. So are we so far off-base that we need to revise the Wikipedia page? Or is this more just an opinion?
Thanks for sharing your thoughts. Maybe there's a chance to de-mystify my confusion on why there's so many opinions on something that, to me, seems so simple to define.
> I'm not sure where I misguided people that I was just doing RPC.
Sadly the person you respond to is an idiot, their points 1 and 3 have literally nothing to do with REST. You could have all endpoints be /435645646 yet do rest, you can have the most beautifully crafted URLs in the world and do rpc, they're orthogonal concerns. Most people do the latter, incidentally.
> Performing operations is not done via RPC but rather by POSTing a new `mission` to the queue.
That's still RPC. Encoding your procedure calls via HTTP verbs doesn't make it not RPC, it makes it (as originally noted) leverage HTTP as more than a trivial transport.
> I think I'm starting to get a sense that people can be really opinionated on this stuff
Well yeah imagine you see a nice essay defining or formalising a concept (hyperlinked application interfaces) and creating a word/acronym for it (REST), then you see the world around it coopt it without any of the meaning to qualify something which already existed but has seemingly fallen out of fashion (RPC in this case). That's bothersome.
> seems so simple to define.
If your simple definition of REST is just that you're using HTTP, why would you need a separate acronym for it?
I'm with you on HATEOAS, but I suspect OP is selling OP's system a bit short, and you're filling in the blanks in the least charitable way. The resource at "/maps/map_id/destinations", for example, probably has links to particular destinations, and those destinations probably have e.g. "modify this destination" forms. You might prefer for the URL to be "/ab129f294b", but that is your own taste rather than anything about REST. There is nothing wrong with memorable URLs.
>> Performing operations is not done via RPC but rather by POSTing a new `mission` to the queue.
> That's still RPC. Encoding your procedure calls via HTTP verbs doesn't make it not RPC, it makes it (as originally noted) leverage HTTP as more than a trivial transport.
RPC uses POST, and sometimes GET, usually to a single URL. OP is creating, modifying, and deleting resources, each at its own URL, using the proper HTTP verbs. You might be getting hung up on the fact that the "mission" resource corresponds to something conceptual rather than something physical like an individual robot? Again, that's your own taste, rather than anything about REST. Or do you mean instead that OP should invent some new verbs appropriate to robots, like MOVE or ACTUATE? That would be a bit goofy...
To be clear, I think we're talking about REST without HATEOAS. And I think this is better, imo, as you can write your client knowing the protocol. I've not come across a convincing description of why HATEOAS is a good thing at all. And I say this as someone who thinks capability based design is a good thing.
The reasons I have against it are that
1. I think it complicates the client because it needs to discover the api at runtime instead of just coding it up in a straight forward manner.
2. Receiving the capabilities seems to imply that these are the activities that can succeed. But they don't have to succeed. So you still have to handle errors. So you may as well know up front what the whole API is and handle errors, etc.
3. Dynamic apis are resistant to pipelining if you don't know what the next url will be.
But I'm here to learn, so please hit me with a clue stick and explain why REST sans HATEOAS is not better.
HATEOAS is a great concept, but we can only really take advantage of it if and when we move from this troglodyte era of APIs. The fact that we still consider normal to write new "client libraries" for each new service created is just absurd. Imagine if we had to write a new browser plugin for each new site we published - that's at the level we're at on APIs!
To be more specific, HATEOAS is an essential component to comply with the Uniform Interface constraint of REST, which, as the dissertation describes, allows for decoupling and independent evolvability of the client and server.
Since we still accept the idea that client libraries and the programs that use them should be completely coupled with the services they communicate with, and it's fine to force thousands or millions of developers to update their software because an URL on some service changed, HATEOAS doesn't feel particularly useful.
>Imagine if we had to write a new browser plugin for each new site we published
You mean like Electron applications? :P
But seriously, I don't know what kind of clients you want to use, but if I want to store data on an object store, for example, I know what the verbs are. And if a new one is added, then at some point I need to change something on the client side to factor that in. Unless the server is also serving the UI as they do with web pages.
<semi-rhetorical-strawman>If I'm using something like $AccountingProgram and my bank puts in a new possible action like "donate" to let me easily donate to charities, which will return important information like deductions and the relevant fields, etc., then how might my accounting program become aware of this new concept?</semi-rhetorical-strawman>
Maybe I'm too troglodyte to see it. My engineering senses are tingling with excitement that it would be cool that it automatically gets through to the client (without also being open to click/UI hijacking from shitheads) but I just don't see how it can happen.
> Imagine if we had to write a new browser plugin for each new site we published
Still happens all the time in the mobile world. Not as obvious in the desktop world because the site is the app; you just download it anew each time instead of once. But there are Chrome plugins for many sites — the Chrome web store has a lot of them. Most of the sites can work fine without them; nevertheless, they exist.
> But over time numerous people started telling me that no, it's actually all wrong for one reason or another. I've heard that I should never use anything except GET and POST.
I used to work for a place where people said exactly that. It was then followed with comments that anything other than GET and POST posed a security risk.
There's still a staggering amount of ignorance out there of HTTP; even amongst those who claim to be web developers.
The term 'REST API' has become conflated with 'nice API'. Nobody will object to saying 'shall we make it a REST API?', because it's like saying 'shall we make it a nice API?'.
REST has (had?) a very specific meaning, as defined by Roy Fielding in his dissertation. People seem to be objecting against using the term 'REST' for things that are not REST in that sense. But of course that's just arguing whether you use the correct word for what you built, not whether what you built is the right solution for your problem. So if they are trying to argue that it isn't, then their argument is fallacious.
I was excited when I first came across a RESTful solution I could use because I was living in a world where WCF was still considered groundbreakingly simple, and there was still asmx hanging around that I was avoiding like the plague. If there's a better model out there, and an implementation I can do hands-on research with, I'm always open to new ideas. But I am usually turned off by developers who feel the need to dump on what everyone else is doing instead of just proposing their new solution enthusiastically. Just like how I enjoy hearing how other people propose to use RESTful API's semantically, but not if they're going to spend half the time just ripping on how I've been doing it successfully for years.
> I didn't have to worry about defining a protocol, there was pretty much already one for me:
Well, you have it wrong with "not defining a protocol". You didn't define your
protocol systematically, instead you have defined it ad hoc, but you still
defined it.
> - GET, PUT, POST, DELETE (I learned there were others, but they were kind of niche/obsolete)
Really? You only have three modifying operations and one that reads the state?
Or is it that you crammed all the others into an informally specified
almost-RPC in a single POST request?
>That POST is actually meant for updates and PUT is meant for new entities
No. Any creation or update can be a POST. If your operation is idempotent, then you can also make it a PUT. The subset of POSTs that are idempotent in your application is exactly what could be made a PUT.
So a set operation, foobar=50, could be a POST, but since it's idempotent, it could also be a PUT.
An increment operation, foobar++, would have to be a POST; it could not be a PUT since it is not idempotent.
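That distinction in code (toy names; `state` stands in for the resource): repeating the set changes nothing further, while every repeat of the increment changes the state again — which is why only the former is safe to retry blindly.

```python
# "set" is idempotent (PUT-safe); "increment" is not (POST, and it needs its
# own deduplication if it may be retried).

state = {"foobar": 0}

def put_set(value):
    state["foobar"] = value      # foobar = 50: applying twice == applying once

def post_increment():
    state["foobar"] += 1         # foobar++: applying twice != applying once
```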
REST purism is very similar to religious extremism
There are a lot of them saying that your system is not pure REST, that it would be a joke to call it REST, that most "REST" APIs out there are not real REST and only they know the true way of doing stuff
If only they shipped something or made something work given real-world constraints
It is definitely not "definitely the other way around".
POST is meant for anything and everything, absolutely including updates. PUT is meant for writing-or-overwriting a complete or partial resource, at a URI that is the Request URI. In practice, for most applications, this makes PUT impractical for new entities (because the requester cannot know the correct URI for the new resource), but in some applications it can make sense, and either way there's more to it than you assert.
Like usual, the accepted answer is somewhat correct but not quite the best. Idempotence is the thing from which correct understanding of PUT and POST is derived.
The fact that the request-URI is to be the new home of the entity being PUT is the thing from which correct understanding of PUT is derived. Idempotence is a trivial consequence of that (so not necessary in its own right), and is not sufficient. Read the RFCs.
I'm going to have to call bullshit on that one. REST is one of the more successful strategies we've come up with for connecting systems, this is just another case of letting perfect stand in the way of good enough. Using GET for non-destructive operations and POST for updates and deletes is a nice, portable compromise. I've been trying hard for years to find a reason to bother with PUT, but so far I've found it not worth the effort.
> I've been trying hard for years to find a reason to bother with PUT, but so far I've found it not worth the effort.
And right there, at your final sentence, you basically described why REST has more or less failed.
GET and POST are useless for implementing a complete application protocol. You'd basically overload these http verbs to the point where you would implement your own protocol. And that's what most people do anyway. You choose to not use PUT, some other person chooses to not use PATCH or HEAD and I choose to curse vehemently every time I have to use someone's service.
REST is nothing but loosely connected guidelines that nobody uses in the same manner.
REST has "failed" in the same way that many original visions of the web/APIs/protocols have "failed" - you have a few "no-true-Scotsman" purists complaining about differences in implementation; meanwhile a great many real-world developers are quite happy and productive in a REST-like paradigm and don't particularly care that their API doesn't fit some Platonic ideal.
Could things be better? No doubt. But REST (or something like it) is largely "the way things are built" these days and most people don't mind. Calling it a failure is quite a stretch IMO.
I'm not claiming that REST is a protocol. I'm saying that the de facto state of the REST paradigm today is analogous to some protocols which have been twisted to support use-cases far beyond what their designers intended, in ways that make purists squeamish, but make developers happy that their shit works.
> use-cases far beyond what their designers intended
OK, I get what you're saying. Problem is that as far as I can see REST is perfect for S3-like services, Maps(?) and slightly more than basic CRUD applications.
You are bound to discover its limits very soon. The gazillions of books and blogposts out there heralding it as a serious interface are not helping either.
BTW, I don't know about other people criticizing, I am definitely not a purist. But after ~10 years of dealing with REST in various capacities(startups to Enterprises), I am a bit tired and can't wait for something to replace it.
Maybe it is fair to call REST a failure in that it is flawed, and there aren't perfect implementations. But I just had to go through a SOAP XML integration, and it was ten times more painful than the worst REST experience I've had. So I still see REST as one of the biggest tech wins in a long time.
Really? I made one yesterday based on a WSDL from a customer. The whole API was auto-generated in type-safe Scala in seconds. After adding authentication configuration and some sensible timeouts, I had the whole thing running in two hours. SoapUI auto-generated a mock test and load tests with which I could mimic specific weird responses.
The REST API I had to forward through however had no good documentation, no client library, so I was forced to write json serializers and reverse engineer the code. Overall, when you have a well written SOAP interface (rare, admitted), development time can be greatly reduced.
In order to get the well written SOAP interface, it helps to have the right tools and be used to using them. I think that increases the barrier to entry on using SOAP, which leads to the idea that SOAP is terrible and crufty compared to REST.
I learned SOAP relatively recently, well after it was a fad. Today, my experience would be like yours if I had to build a SOAP service from just a WSDL. But when I started out? It would have taken me much longer to wrap my head around all of it. I can't imagine how confusing it would be if I had to do it all without the benefit of working on a team whose focus is a SOAP-based product. I dunno if I would know where to begin.
Compare that to REST. Even if you know next-to-nothing to start, you can do a basic REST tutorial for just about any stack in an hour, and get a basic implementation of an API up in an afternoon.
Did you really just complain that REST is bad because you couldn't find an auto-generation tool to do your work for you? Because that's hilarious.
Also, you don't seem to have looked terribly hard, because there are a few really powerful tools out there for autogenerating server and client code for REST APIs, as well as mock servers and a whole host of other tooling (See: Swagger, RAML, API Blueprint for starting points).
I wonder how you arrive at your conclusions. It's rather hilarious, to be honest. I've found that good machine readable specifications of rest interfaces are as rare as well designed SOAP interfaces.
I've used a variety of tools to generate and test REST interfaces in the last ten years. To a mixed success, I must say.
>REST is nothing but loosely connected guidelines that nobody uses in the same manner.
I don't see a problem with this. In fact, I'd love it if everyone accepted this instead of whining that something isn't truly "RESTful."
For instance, the app I'm working on deliberately does not implement HATEOAS. I appreciate the academic effort behind REST and RESTfulness, but ultimately I view it only as an ideal to tend toward. As you said, it's a guideline.
> As the article points out, there are all too many HTTP libraries that only support GET and POST, not the more "esoteric" verbs like PUT and DELETE.
GET and POST are all you need: "Return representation of available operations" and "apply this operation". It's the lambda calculus applied to the web (see Waterken's web-calculus for a more formal treatment).
You might say that this is too anemic a foundation and you want more built into the protocol level to handle some common tasks, but I'm not convinced it's necessary or even desirable. At some point requirements will change and some of those tasks will be supplanted, but we'll have to live with them forever if we bake it into the protocol.
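To make the "GET lists operations, POST applies one" model above concrete, here is a toy, in-memory sketch. All names, paths, and the account example are illustrative inventions, not any real service's API:

```python
# Toy sketch of the web-calculus style interaction described above:
# GET returns a representation that advertises its operations, POST
# applies one of them. Everything here is hypothetical.

def represent(balance):
    """What a GET would return: state plus the operations it affords."""
    return {
        "balance": balance,
        "operations": {
            "deposit":  {"href": "/account/deposit",  "method": "POST"},
            "withdraw": {"href": "/account/withdraw", "method": "POST"},
        },
    }

def apply_operation(balance, name, amount):
    """What a POST would do: apply an operation discovered via GET."""
    if name not in represent(balance)["operations"]:
        raise ValueError(f"operation not advertised: {name}")
    return balance + amount if name == "deposit" else balance - amount
```

The client never hardcodes operation URLs; it only knows how to read the representation and submit what it finds there.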
You're just supporting my argument and weakening REST's case.
I can do you one better. If you managed to reduce everything down to GET and POST, here's an idea: just use POST for everything. Boom. You just re-invented SOAP.
> You're just supporting my argument and weakening REST's case.
How? Perhaps you should actually elaborate your argument. REST doesn't depend on the use of verbs, it's an architecture that elaborates the requirements for object designation (URLs), object lifetimes (statelessness) and hypermedia-driven service discovery (HATEOAS). Only GET and POST in HTTP are required to fulfill these requirements. If there's actually something wrong with that, then lay it out.
> If you managed to reduce everything down to GET and POST here's an idea; just use POST for everything. Boom. You just re-invented SOAP
Except you can't in a world with side-effects. You could do exactly what you say if every POST were guaranteed to be idempotent. Every request could carry a full payload like a POST request and a unique identifier to ensure at-most-once semantics. That would be a fine protocol, and totally REST compatible. What's the problem exactly?
Finally, SOAP carries far more baggage than you imply. It's a false equivalency.
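The at-most-once scheme sketched above (every request carries a unique identifier, and the result is bound to it) can be written down in a few lines. The class and method names here are made up for illustration:

```python
# Minimal sketch of at-most-once POST semantics via client-generated
# request identifiers, as described above. Replayed requests return
# the cached result instead of re-applying the operation.

class AtMostOnce:
    def __init__(self):
        self._results = {}   # request id -> cached result

    def handle(self, request_id, apply, payload):
        """Apply `apply(payload)` once per id; replays get the bound value."""
        if request_id not in self._results:
            self._results[request_id] = apply(payload)
        return self._results[request_id]
```

With this in place, resending the same request id is harmless, which is exactly the property that makes a POST safe to retry.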
It's not a false equivalency. Books have been written about it and describe how you defeat REST by doing what you describe.
When you encapsulate most of your operations under POST, you basically create your own arbitrary protocol. No assumptions can be made about your service. You can make your own assumptions about it, but others can't.
> That would be a fine protocol, and totally REST compatible. What's the problem exactly?
The problem would be that it wouldn't be REST compatible. You should clearly read a few books on the subject. Friendly advice, not a snarky comment.
REST already has idempotent mutating operations, like PUT and DELETE. It's not just GET that's idempotent. So clearly the problem is that you redefine your own protocol, with your own semantics, even though we already have plenty of standardized methods/response codes.
> If there's actually something wrong with that, then lay it out.
You have obviously missed a ton of bibliography on the subject. Before reading random blog posts from random people who have their arbitrary assumptions about what REST is, I'd suggest reading a few books on the subject.
Honestly, sorry if I sound snarky, but after 10 years of working with such APIs, and after reading countless books on the subject, it never ceases to amaze me how people still think that GET and POST are enough. But I don't blame you; I blame the browsers, which basically broke the protocol by only using those two.
> When you encapsulate most of your operations under POST you basically create your arbitrary protocol. No assumptions can be made about your service.
And you shouldn't make such assumptions, you should just use the input parameters exposed in hypermedia which you obtained from a public service entry point. It's called encapsulation and that's REST. Again, what's the problem? All I hear is complaining that REST doesn't work the way other architectures work. Big surprise.
Certainly REST's HATEOAS can sometimes make a service less efficient as compared to some alternatives (so it can preserve encapsulation and support upgrade), but that's not the claim you're making. You're claiming some kind of insufficiency.
> The problem would be that it wouldn't be REST compatible. You should clearly read a few books on the subject. Friendly advice, not a snarky comment.
I've read Fielding's thesis, thanks. I understand REST perfectly well.
> REST already has idempotent mutating operations, like PUT and DELETE. It's not just GET that's idempotent.
So what? PUT and DELETE don't have the necessary semantics. GET representation, apply operation listed in representation via POST. What more do you need?
> Honestly sorry if I sound snarky, but after 10 years on working with such APIs, and after reading countless books on the subject, it never ceases to amaze me how people still think that GET and POST are enough
You still haven't pointed out a single reason why GET+POST are not enough or how they "break REST". I'm not asking for "a ton of bibliography", I'm asking for a single example. An existence proof that my claim is false. It should be trivial if this shit ton of bibliography exists.
Frankly, most of this vaunted "bibliography" since Fielding's thesis has been non-REST crap. It's amazing how easily people can misunderstand a 150 page thesis.
> "REST is nothing but loosely connected guidelines that nobody uses in the same manner"
But is that really a huge problem?
Look at more rigorous RPC standards with formal interface definitions: RMI, XML-RPC, CORBA, SOAP, Thrift, AMF... I'm sure there are loads and loads of real-world systems using these and they have their place, but REST has succeeded in a large niche that they have not.
That's really the winning point of REST: it's just formal enough to put most people on the same page, and loose enough to adapt to different requirements without too much effort.
It's useful to remember that REST was born and adopted mostly in reaction to SOAP and XML-RPC (which in turn were basically replacing CORBA and RMI to work around the firewall). XML-RPC was too loose, and SOAP was too formal (way too formal). REST hit the sweet spot.
However, it's sad that the result was often that people just retooled their crappy SOAP / RPC systems to use decent URLs and called it REST. My litmus test is usually "returning 200 OK on all calls" - if you're doing that, that's not REST. Use proper HTTP return codes and put additional error messages in the payload, it's not hard. Also, if you have a job queue, give each job a URL with the ID and GET that to retrieve status.
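The litmus test above (distinct status codes, error detail in the payload, and a pollable URL per job) might look like the following sketch. The paths, codes, and function names are one plausible convention, not a prescription:

```python
# Sketch of the advice above: give each outcome its own status code,
# put extra error detail in the body, and give each queued job its own
# URL to GET for status. All identifiers here are illustrative.

def submit_job(jobs, job_id, payload):
    if job_id in jobs:
        return 409, {"error": "job already exists"}
    jobs[job_id] = {"status": "queued", "payload": payload}
    return 202, {"status_url": f"/jobs/{job_id}"}   # poll this URL

def job_status(jobs, job_id):
    if job_id not in jobs:
        return 404, {"error": f"no job {job_id}"}
    return 200, {"status": jobs[job_id]["status"]}
```

A client that understands HTTP gets meaningful behavior for free: a 409 says "don't resubmit", a 404 says "this job never existed", and the 202 body says where to look next.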
REST is a well-defined semantics that everybody bastardized because they find it shameful to say they use JSON-RPC on their resume or elsewhere. There's a difference there.
Exactly. The whole "RESTful APIs" thing never was - I've been saying this since I first heard the term used. It's not a standard, just HTTP APIs, implemented wildly.
Nothing wrong with that, but don't call it a standard and don't try to name it something it isn't.
You don't need PUT nor DELETE. It is OK to use POST[1].
Most people don't understand REST. It is not a standard, it is not a set of practices. It is an architectural style and the web is built on it.
You can choose to follow the style (which is so much more than naming methods and URIs) or fight it. The dissertation is pretty clear about it[2], but most people ignore it because there are no code samples.
Ignoring the dissertation is OK. It was designed for people to understand how the web was designed, not as an implementation guide.
Ignoring REST is impossible, the web is built on it.
- The uniform interface is in every URI (if you use them, you use REST).
- The code-on-demand constraint is everywhere (if you use JavaScript, you're using REST).
- Hypermedia is everywhere (if you have links on your web page, you're using REST).
- The client-server constraint I don't even have to bother explaining (although WebRTC might bring P2P back in the game).
- The layering is everywhere (if you do haproxy, varnish, squid or similar, you're using REST).
The _RESTful API_ thing is just a myth. People needed a name to discourage tunneling RPCs over HTTP, so they invented these loosely defined terms to push the idea. It should be called HTTPful, because it tells more about avoiding re-implementing HTTP features than it tells about the REST style.
And yet you don't provide a single concrete reason, and you even go on to describe the non-REST way your team uses REST.
> REST is one of the more successful strategies we've come up with for connecting systems, this is just another case of letting perfect stand in the way of good enough. Using GET for non-destructive operations and POST for updates and deletes is a nice, portable compromise. I've been trying hard for years to find a reason to bother with PUT, but so far I've found it not worth the effort.
Note how none of this (and only a few parts of what the author of TFA described) is actually REST as defined by Roy.
Taking ideas from REST and putting it into your JSON-RPC API doesn't make your API REST. But you call it REST anyways.
It's like reading a book on building a house and using ideas to build a shed. Sure your shed has electricity and a sink but it's still a shed, not a house.
Same with terms like RESTful. You don't call your shed a "house like building" as it's missing key pieces of a house.
REST requires HATEOAS. The only real implementation is the world wide web. Everything else is really JSON-RPC or some other protocol with a few ideas borrowed from REST.
It is considered best practice to separate POST and PUT by making PUT idempotent while POST is not: https://stackoverflow.com/questions/18485621/what-is-meant-b.... This allows you to reason better about API calls in code by their HTTP method without having to check the fields sent.
What you just described is only the bare shell of REST. It would also apply to many types of API that don't follow REST principles. One thing this article gets right is that nobody is really RESTful because nobody knows what it means.
There's usually a dollop of RPC in most APIs - and there's nothing wrong with that - but the 'pure vision' of REST is similar to the 'pure vision' of the Semantic Web. It's a dream for the next life not something we will receive in this one.
I'm fine with using ideas that make sense and rejecting the ones that don't. It's still solid advice for API design, and the most successful approach we've come up with. Doing REST to the letter always turns into some kind of modern art installation with lots of hammers looking for nails; this is the technology department, religion is down the corridor. I have full confidence that the authors considered these ideas, not laws.
So… why do you call it REST rather than just HTTP since that's exactly what it is? You're using GET and POST for what they were built for, great, that's just HTTP, why not call it that?
> I have full confidence that the authors considered these ideas, not laws.
That's like saying the authors of the word "bicycle" considered two wheels an idea, not a law, and you'll call your 8-wheeled ATV a bicycle because it has wheels and you think wheels are nice and solve your problem and the religion department is down the corridor.
Yeah wheels are nice and solve your problem, your motorised monstrosity is not a bicycle though.
I'm leaning more and more in the direction of ditching the term "REST API" and just saying our APIs are Web or HTTP APIs. I'm beginning to think people just say REST API because it looks better on their CV, not because its helpful to other developers who may end up using it.
I've made this argument in a much better form elsewhere but I don't have much time.
To my mind the benefit of REST purism was that it killed SOAP. A similar thing has happened at other stages of tech development (e.g. CSS purism cured us of all those awful <table> excesses).
The purism was a mistake and the details were wrong but it was necessary to get everyone behind a common goal - getting rid of something worse.
Although PUT is not necessary, using it for idempotent calls as it was designed actually helps developers using the API understand the intent of the call better. They don't have to worry about undesired consequences/side effects. Which I think is a useful pattern.
> Although PUT is not necessary, using it for idempotent calls as it was designed actually helps developers using the API understand the intent of the call better. They don't have to worry about undesired consequences/side effects.
This can also be solved by futures. Any POST that you'd like to retry, which will be any of them given the unpredictability of network partitions, can be bound to a future so the operation is applied only once and all future attempts simply return the bound value.
The point of having something like PUT (that is known to be idempotent) is that it can be retried by any of the layers in the stack, instead of having a full roundtrip back to the client on every retry.
And it's not always safe to retry POSTs. If POST is an insert, for example, retrying it would produce two inserts. And this can have interesting consequences when processing responses. Suppose that you have sent a POST request, but before you could read the response, connection dropped. You didn't get a chance to read the status code, so you don't know if your insert succeeded or not. If it didn't, you want to retry - but you'll need to do a GET first to check the current state of affairs. OTOH, if you're doing a PUT, you can just retry immediately without re-checking.
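The retry asymmetry described above can be sketched as follows. The "network" here is simulated by a callable that may return None (a dropped response); no real HTTP client is involved, and all names are illustrative:

```python
# Sketch of the retry asymmetry described above: a PUT can be resent
# blindly after a dropped response, while a POST must first check
# current state (a GET, in effect) before resending.

def retry_put(send, url, body, attempts=3):
    """PUT is idempotent, so a dropped response can simply be resent."""
    for _ in range(attempts):
        resp = send("PUT", url, body)
        if resp is not None:
            return resp
    raise TimeoutError("no response after retries")

def retry_post(send, already_applied, url, body, attempts=3):
    """POST must verify state before resending: the lost response
    may have been a success, and a blind resend would double-insert."""
    for _ in range(attempts):
        resp = send("POST", url, body)
        if resp is not None:
            return resp
        if already_applied(url, body):
            return "applied"
    raise TimeoutError("no response after retries")
```

Note the extra `already_applied` check the POST path needs; that round trip is exactly the cost the comment above describes.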
> And it's not always safe to retry POSTs. If POST is an insert, for example, retrying it would produce two inserts. [...] Suppose that you have sent a POST request, but before you could read the response, connection dropped.
My previous post already covered idempotent POSTs via futures. The fact is, only the application can decide whether a POST is safe to retry arbitrarily or must be made safe by binding to a future. The PUT is merely a small optimization that is now obviated by the spread of HTTPS.
> The PUT is merely a small optimization that is now obviated by the spread of HTTPS.
Idempotence can be useful knowledge for client-side caches, so HTTPS doesn't obviate the value of the PUT (or, for the same reason, DELETE) vs. POST distinction.
If I understand your point correctly, you're saying that when something is talking over HTTPS, the middleware doesn't get to see the verbs, and so it can't optimize for them anyway. But if part of your connection is using different transports, then middleware in those segments can observe the verbs and react accordingly (including e.g. local retries in face of adverse network conditions, to avoid expensive end-to-end roundtrips).
While technically correct, I'm not sure how this is useful. You use HTTPS to communicate with an endpoint you trust, and anything beyond that is internal network infrastructure which is much more reliable than the HTTPS hops. The utility of local retries on this last hop don't seem compelling.
I agree that it's potentially useful, but it also adds its share of complexity to any implementation I've come across. And most proposed uses I've seen do not obviously benefit from the idempotent approach; from my experience it often complicates the server implementation. I've found using the REST approach for URLs, separating reads from writes, and using status codes a good compromise.
HTTP-RPC is. People call it REST nowadays even when it isn't, nor do they understand the state transfer part of REST. Not saying it's your case specifically, but I've seen exactly zero HTTP-RPC APIs implementing REST but a shit ton of people claiming their JSON-RPC interface was.
PUT is for the case where the client knows the location of the resource to be created, whereas POST is for the case where all that is known is the location of a logical parent resource.
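That distinction can be sketched with an in-memory store standing in for the server. The dict, paths, and ID scheme are all illustrative:

```python
# Sketch of the distinction above: PUT writes to a URL the client
# already knows, POST asks the parent collection to mint a new one.

import itertools

_ids = itertools.count(1)   # stand-in for server-side ID assignment

def put(store, url, body):
    """Idempotent: repeating the same PUT leaves exactly one resource."""
    created = url not in store
    store[url] = body
    return 201 if created else 200

def post(store, parent_url, body):
    """Not idempotent: every POST creates a fresh child resource."""
    url = f"{parent_url}/{next(_ids)}"
    store[url] = body
    return 201, url
```

The idempotence of PUT falls straight out of the addressing: writing the same body to the same URL twice cannot create two resources.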
In my opinion, you can't really understand REST until you understand HATEOAS - the two concepts work together and REST (and the restrictions it imposes) isn't really very meaningful without HATEOAS.
Twilio Conference 2011: Steve Klabnik, Everything You Know About REST Is Wrong: http://vimeo.com/30764565
I might be unlucky, but I've personally never encountered a single REST API that implements HATEOAS (among the many third-party APIs I've integrated into software I've been working on).
If almost nobody who implements a REST API has HATEOAS in mind, doesn't it mean that REST is de facto independent from HATEOAS, no matter what the initial theoretician said about what REST should contain ?
In other words, if REST means something different for almost everybody than REST for the author (and a minority of people aware of the author's first intention), then in my opinion it doesn't mean that everybody is wrong about REST, it just means that the word «REST» has evolved to a slightly more relaxed definition.
Using GET/POST/PUT/DELETE with a defined semantic, the confidence that GET is idempotent, and the proper use of HTTP status codes.
Back in 2005, it was really common to have only GET routes, even for updates or deletions, or worse: to have a single URL, http://example.org/action, which concentrated all the API surface, different behavior being triggered by the type of the payload (JSON or even XML). Also, all the errors were `200 OK` but with a payload which contained the error. It was all done on top of HTTP but nothing was really using the HTTP tools (route + method + status code).
Every single API / webservice had its own logic & semantics; working with third parties was a nightmare… It's exactly this kind of mess that the modern trend of «non-dogmatic REST» really solved.
> If it's just doing HTTP, why not call it HTTP?
Is it really REST ? No.
Is everybody calling it REST ? Yes.
Can we change how everybody calls it ? I don't think so, and I don't really think it matters.
Many things are poorly named[1], but as soon as it gets into popular language we need to use it for what it means to people, not for ourselves.
[1] Is a «quantum leap» a nano-scale step forward ? Where is the isomorphism in an Isomorphic web app ?
> Using GET/POST/PUT/DELETE with a defined semantic, the confidence that GET is idempotent, and the proper use of HTTP status codes.
That's literally got nothing to do with REST though, that's straight out of RFC 7231 (sections 4 "request methods" and 6 "response status codes") and the IANA HTTP Method Registry.
- Technically REST and HTTP are two different things, I totally agree with you.
- Historically, before REST became popular, people were doing complete nonsense on top of HTTP, with no respect for the spec whatsoever (see my comment above). This madness was stopped because of REST! It's only when REST gained in popularity that people started to learn HTTP, and since then people have developed a lot of semantically valid HTTP interfaces and called them REST APIs.
To sum up my previous points: REST as imagined by Roy Fielding never caught on, and the word REST is now almost unanimously used to describe «HTTP-compliant APIs». May HATEOAS rest in peace :)
> A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). Servers must have the freedom to control their own namespace.
I'm not getting it. We have an API layer serving both mobile and web clients. When we change the API layer, we have to be careful to not remove any fields willy-nilly that mobile might be using. Instead we hard-deprecate mobile versions and remove fields later. It keeps the application stable but makes future architecture depend on past architecture.
Server instruction on URL construction seems to be doing a lot of work in order to not fix the problem. It does not buy us the ability to just switch around field names and relations and hierarchy without concerns for what consumers were using the old system.
That's the real problem with server / client coupling, not that the client needs to magically know which route a created resource has. Sticking a link in the response body just seems silly.
Without reinventing SOAP in JSON form, by this I mean providing machine-readable schema information via the API, I can't see a way through this. But even that wouldn't solve the problem, you'd need some kind of intelligence on the client side for managing an API that might shift around under it without warning.
If we change a field from createdDate to createDate, is there a way to use HATEOAS to communicate the name change?
It's just saying that the "URLs" are not fixed in the client (aside from the one root URL) and are provided by the server and dereferenced from content types.
> If we change a field from createdDate to createDate, is there a way to use HATEOAS to communicate the name change?
No. In REST/HATEOAS, the content types are fixed and documented, the hierarchy is mobile. Content type alterations impact clients which may not be generic over type contents.
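To make concrete what HATEOAS does let you move: link targets, not field names. The client hardcodes only the entry point and the link relations its (fixed, documented) media type defines. This sketch loosely imitates a HAL-style document; all paths and relation names are invented:

```python
# Sketch of the mobility described above: the hierarchy of URLs can
# change freely, because clients dereference link relations from each
# response rather than hardcoding paths.

ENTRY = {
    "_links": {
        "orders":    {"href": "/v2/customer-orders"},  # server may move these
        "customers": {"href": "/people"},              # without breaking clients
    }
}

def follow(doc, rel):
    """Resolve a link relation instead of a hardcoded URL."""
    try:
        return doc["_links"][rel]["href"]
    except KeyError:
        raise LookupError(f"no link relation {rel!r} in document")
```

Renaming a field like createdDate is a content-type change and breaks clients; moving /orders to /v2/customer-orders is just a new href and breaks nothing.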
1) Almost every gripe in it refers to bad implementations, not bad specs.
2) It doesn't even mention HATEOAS. I'm no fan, but usually arguments about whether your API is REST or not revolve around how/whether you've done HATEOAS.
3) The rest of it is pulp tech writing that sounds like it was vomited out to meet some kind of publishing deadline.
Case in point:
> Consider, for example, when we might use the 200 OK response code. Should we use it to indicate the successful update of a record, or should we use 201 Created?
Here's a clue - read the spec! (SPOILER ALERT: the word "Created" is the giveaway). To quote:
> 200 OK
> The request has succeeded. The information returned with the response is dependent on the method used in the request, for example:
> <snip>
> 201 Created
> The request has been fulfilled and resulted in a new resource being created.
"2) It doesn't even mention HATEOAS. I'm no fan, but usually arguments about whether your API is REST or not revolve around how/whether you've done HATEOAS."
You know that. I know that. But all the comments down to this point in the discussion have been over whether GET, PUT, POST, etc., are necessary or sufficient.
…and having implemented HATEOAS in a service (via json+hal), we’re moving far away (GraphQL) because of versioning, payload weight, and lack of flexibility. I’m sure there are things we could have done better, but the reality is that our clients need things that work differently than a “proper” REST/HATEOAS service.
We tried. We failed (insofar as delivering a working service counts as failure, but we know it can't grow the way we want it to grow because the payload weight is far too big). I'm not sure if it was us, or the fact that REST can't map to everything we want it to be (it can't, see the attempts to model actions performed on objects like IIRC fishworks did), but we are moving on.
Like any other architectural model, REST is good for some things, not for others. Versioning and payload weight seem like weird problems to have, though. Seems like JSON might have been a poor encoding for your use case.
By the way, do you have a link to that work by fishworks?
Any sufficiently complex data model will run into an issue where you need a mechanism to selectively render resources differently for different purposes. Putting new routes in place is a heavyweight way of doing this, and query-parameter mechanisms are a real pain. GraphQL started because RESTful APIs at Facebook were causing great pain for their mobile clients (multiple round trips for data, full data sets, versioning issues on server and client). We experienced the same problem for similar (but smaller) datasets.
I did a quick look for the fishworks commentary, but I can’t find the discussion (this was from before the Borgacle consumption of Sun) about how much of a mismatch it was to implement VM controls through a REST interface. Is it legit to POST /vm/:id/restart or should it be a PATCH /vm/:id with a payload of { "action": "restart" }? Or…
Many advocates of a particular programming style (OO, REST, TDD are some of the more vocal, but FP advocates are right in there, too) seem to adopt the view that it is appropriate for everything. There’s definitely some impedance mismatch if you try to be REST “pure“ between that and the real world, the same as there will be if you go “pure“ for OO or FP for the same sort of things, but there will be different impedance mismatches.
I really tried with HATEOAS, because it makes sense, but the overhead has turned out to outweigh the benefits for everything we've needed.
It does, it's just at a different level. REST is not a protocol, it's a set of architectural constraints, but it's well defined. Only one of the constraints (code-on-demand) is optional. HATEOAS isn't, since you need it to achieve the Uniform Interface that allows you to decouple the clients from the services.
What is it about CRUD over HTTP that drives some people nuts? A bit too much overhead, not perfect for high performance/low level data channels, and not perfectly standardized. But it piggybacks over a wildly popular level 7 protocol that takes care of security in a well tested way, already plays well with proxies/load balancers, has thousands of implementations in most languages, is well understood by network admins and usually already handled if you're shipping software to customers. It has a lot going for it.
Sure - everybody ends up reimplementing async jobs and polling... some people prefer XML/json/edn/etc... some people get pedantic on 3xx and non-200 2xx status codes... differing standards on referencing other objects/collections/etc... some people use POST where they should use PUT (or insist upon using PATCH and OPTIONS). It has just as many faults.
But it's wildly successful for a reason, and dismissing it is probably going to result in relearning a bunch of hard-earned lessons about integrating across lots of very heterogeneous systems and environments.
It's wildly successful for a reason, which is that its primary use is by web browsers for document transfer. And as a result firewalls allow HTTP over port 80 or HTTPS over 443. For a lot of cases you simply cannot communicate at all unless you use HTTP. The reason for its success is that there is simply no other choice, whether it's good or bad.
I think this article is a big lie, because it's based on the premise that what is being described is in fact REST.
First, it has nothing to do specifically with the mechanics of the HTTP protocol. There was an emergent pattern as people built APIs over HTTP: they could re-use much of the semantics they had in common with HTML. HTML over HTTP naturally led to the definition of REST, where forms, semantics, and hyperlinks are not just an optional feature but a necessity.
Formal specifications, meaning hypermedia API media types, are just emerging; Hydra[1] and Micro API[2], to give examples. Fielding wrote about REST in 2000, it was rediscovered by industry about a decade later, and I hope that two decades later people will rediscover its utility for APIs. My interpretation is that Fielding did not "invent" REST, but rather formally described emergent behavior on the web at large. Implementations may have differed wildly but had many features in common.
The problems described in the article are the result of not following a media type suited for APIs. HTML is a wildly successful media type for machine-human interaction, there's no reason why there can't also be one or a few for machine-machine interaction.
In complete agreement here, especially your description of Fielding's thesis. If you're looking to create an architecture that scales well and permits discoverability, then it makes a heck of a lot of sense to examine and formalise the properties of real-world architectures that have achieved this.
More to the point, I think the original article's criticisms are pretty disingenuous. And his decision to ignore 'complicating factors' associated with network transport and caching: gee, perhaps you're ignoring these because you CAN largely ignore these if you architect RESTful APIs. I mean c'mon, stuff like this makes me think the author is either very naive or very ignorant: "The vocabulary of HTTP methods and response codes is too vague and incomplete to get agreement on meanings. No governing body - at least to my knowledge - has convened to set things straight."
Yes, yes they have. Remember SOAP? That's an OASIS "standard", it's highly specified in detail and it meets the author's desire for 'content' being independent of the transmission channel (you know, for when you want to implement a web API over two tin cans and a piece of string). He should just use that, or the OASIS ratified SOAP v2 (i.e. ebms3/as4), which is also 'transport neutral'. Have fun with that.
The rest of the criticisms basically boil down to "sometimes people don't implement it right" and "I don't understand HTTP response codes". There's really nothing that magical about REST. When you browse the web you are basically using a 'human-friendly' interface to a RESTful system. When APIs use the same architectural style it's the same thing, but for robots. That's pretty much it.
I see the point that REST is not perfect, but better than "awful". IMHO it qualifies for pretty nice:
Point 1: 200 is shorthand for 2xx. If I'm too lazy to look up what the code is for "forbidden" or "insert something fancy here", I just return a 400. Does its job and underlines that it's the user's fault. ;)
Point 2: PUT and DELETE worked for me very well. I remember just one project where everything had to be "tunneled" through GET. Because "exotic" toolchain/frameworks, at least there was a canonical way to do so.
Point 3: I think that's actually the nice thing about REST: its vocabulary is limited. Thinking about the plethora of unstructured HTTP XML RPCs and tons of structurally different high-level JS APIs for each vendor, REST seems quite relieving.
Point 4: I don't really get that. Chrome's dev tools work great, I get all the information I need. There are many tools out there for mega-comfortable debugging.
Point 5: That's definitely a thing. I wish the article was only around that point.
> I see the point that REST is not perfect, but better than "awful".
:+1: and they are also better than what the author advocates. One thing in the author's proposal that absolutely puts me off is this:
> JSON-Pure APIs (...) have only one response code to confirm proper receipt of a message - typically 200 OK for HTTP.
This is terrible. PLEASE do not design your APIs like this; it will confuse the hell out of all bots, browsers, and everyone who depends on status codes (and that's all web clients). Your 404 error pages will be cached and stored in Google, redirects will not be remembered, error pages not retried, etc.
I don't get the idea of JSON-Pure. The author claims that one benefit is:
> All errors, warnings, and data are placed in the JSON response payload.
But a RESTful API doesn't forbid you from returning informative error messages in JSON. Why not keep the old HTTP semantics known by everyone and return JSON with an informative error message?
I also think the author confuses HTTP semantics as defined in actual web standards with the purely academic work of Roy Fielding. The meaning of status codes or methods has nothing to do with RESTfulness; it's just plain HTTP semantics as defined by the RFCs.
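The two approaches aren't in tension: you can keep the standard status code and still put a rich error message in the JSON body. A sketch, where the body shape is just one plausible convention:

```python
# Sketch of the point above: return the proper HTTP status code AND
# an informative JSON error body, rather than 200-for-everything.

import json

def error_response(status, message, detail=None):
    """Build a (status_code, json_body) pair for an error."""
    body = {"error": {"message": message}}
    if detail is not None:
        body["error"]["detail"] = detail
    return status, json.dumps(body)
```

Middleware and generic clients act on the status code; humans and application code read the body. Nothing is lost by doing both.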
Sorry for the controversial question, and I'd probably raise an eyebrow if I saw a candidate choose XML over JSON when taking on a recent project... but really, why JSON? No comments, no multiline, no schema. I understand that XML is bad because ____ (not cool, verbose, old school, only enterprises use it?)
I am not a huge YAML fan but it seems this is the only human readable form of JSON.
The lack of a self-describing, human-readable schema is mind-boggling. So SOAP and XSD are bad(?), so instead we got Swagger(?). Is it really better?
Was ditching XML for JSON just due to cosmetic / trending reasons?
We still use HTML for markup. Why everyone ditched XML for JSON beyond just trending reasons is an interesting question.
> but really, why JSON? No comments, no multiline, no schema
Limitations are good. I find it really easy to read. You can install browser extensions to make it even more readable.
Schema can come from a well written specification.
> Was ditching XML for JSON just due to cosmetic / trending reasons?
No, as said before, it's easy to write / read / generate / parse. No awkward DOM style management if you want to parse out a value. I really hated it with XML.
> You can install browser extensions to make it even more readable.
Most browsers can display XML readably by default.
> Schema can come from a well written specification.
That doesn't replace a schema though.
> No, as said before, it's easy to write / read / generate / parse. No awkward DOM style management if you want to parse out a value.
"DOM-style management" of XML documents which can be replaced by JSON is equivalent to hand-rolling a JSON parser from the raw bytes. It's not the hardest thing in the world, but it's not exactly easy either.
With most XML applications equivalent to JSON, that part would be tucked away in the relevant library, e.g. the XML-RPC serialisation format is almost isomorphic to JSON[0], and in Python converting between native objects and the XML-RPC serialisation is just `loads` and `dumps`[1] — though you don't usually have to do that at all since the XML-RPC client will handle the serialisation and deserialisation for you automatically — which is more or less what the JSON library provides[2].
And JSON is convenient in dynamically typed languages because it maps more or less directly to their native dynamic structures, that's not quite the case for languages like C++ or Haskell.
[0] null (nil) is an extension but it natively supports binaries and datetimes
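To illustrate the point about the serialisation being tucked away in the library: Python's standard library exposes the XML-RPC wire format directly. A minimal sketch (the method name here is made up):

```python
import xmlrpc.client

# Serialise a parameter tuple into the XML-RPC wire format...
xml = xmlrpc.client.dumps((42, "hello"), methodname="example.echo")

# ...and parse it back into native Python objects.
params, method = xmlrpc.client.loads(xml)
print(params, method)  # (42, 'hello') example.echo
```

In everyday use even this is hidden: `xmlrpc.client.ServerProxy` handles the round trip for you, much like a JSON library's `loads`/`dumps` plus an HTTP client.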
> that's not quite the case for languages like C++ or Haskell.
So is handling XML in those languages easier?
BTW if your server / services feel more native with XML serialization / parsing to native data structures use that. Picking JSON just because everybody else does it is not a good idea.
However if you make a public web API consumed mostly by browsers (JS), then yeah, pick JSON and REST.
Both are "crap" since they're dynamic information which has to be brought into a statically typed world. However depending on the schema it might be possible to automatically generate the binding and hide the dynamic resolution.
> BTW if your server / services feel more native with XML serialization / parsing to native data structures use that. Picking JSON just because everybody else does it is not a good idea.
> However if you make a public web API consumed mostly by browsers (JS), then yeah, pick JSON and REST.
Browsers had native XMLHttp support before they had JSON support.
The fact that JSON used to be a subset of JavaScript, and that the browser runs JavaScript natively, is not convincing.
Other than the evil eval(jsonString) (before JSON.parse was available), which is basically asking for XSS, what benefit do you get from JSON being consumable by JavaScript?
I think XMLHttp was available in browsers (in a vendor specific annoying way perhaps but still available) way before JSON.parse was available... am I missing something?
eval() was present from day 1 and is the reason JSON looks like it does. Obviously it requires complete trust in the source (historically, the same origin).
> Ever notice how nobody calls their API “RESTpure”? Instead they call it “RESTful” or “RESTish”. That’s because nobody can agree on what all the methods, payloads, and response codes really mean.
These reasons have nothing to do with why people use the term "RESTful" rather than "REST". RESTful is the compromise approach, cribbing some of the concepts outlined in Fielding's paper but dispensing with others: among them a de-emphasis of linking, and a URL naming pattern. In RESTful lingo, HTTP verbs describe types of actions, versus saying something about the idempotency and safety guarantees of a request, as in the REST paper.
In REST, I'm not sure that a lot of these issues are that contentious. I do think that some of the emphasized points in "RESTful" design practice can create more contention in API design, but that's the downfall of that one pattern, it has nothing to do with what Fielding described.
> No governing body - at least to my knowledge - has convened to set things straight
IANA has a ton of info on link relations, and they're thoroughly specified. The IETF has tons of API/REST-related specifications. Profiles allow for defining what your data means, and there are open-source curated lists of these profiles already, so if you were designing an API you could even leverage existing work to make your design easier.
> Roy is probably a great guy and he certainly had a lot of great ideas. However, I don’t believe that RESTful APIs was one of them.
This statement is just downright hysterical given the relative disparity here. So much of the web is owed to Fielding's paper; there have been countless books, blog posts, and human hours devoted to the work he's done. Who's this guy? Why is this such a trend in blog posts in this community? It just looks foolish and comes off as petty...
Most of the problems described with REST in this article are examples of REST implemented improperly. We've had years to get it right; things like HTTP verb support, debugging, discoverability are all by and large a solved problem. The one point that stands is that it is deeply tied to HTTP, but I consider that to be a positive: a well designed REST API means the message is only about the content of what it's serving and not about negotiation of that content.
Here in the "JSON Pure API" you see a reinvention of HTTP request and response concepts built into the API payload, leaving the implementation of negotiation up to the consumer of the API. You lose all the benefits of years of development that have gone into browsers and web servers to handle this for you.
The main problem with REST is that people tend to call any JSON endpoint they build a REST API (and hence the term, "RESTful") which leads to a misunderstanding of what REST actually is.
Right, there's a great many blog posts that can be TL;DRed as "We implemented REST wrong, and it didn't work. I guess REST is overrated."
There are two major components that make an API truly REST, as opposed to the XML-RPC and SOAP-style APIs that were popular in the early 2000s: proper use of the HTTP verbs, and the use of hypermedia.
Most so-called RESTful APIs have adopted the verbs part of it successfully and correctly, but the vast majority have completely whiffed on hypermedia.
Your HTTP API is a state machine whether you like it or not. Hypermedia gives you a means to describe the state transitions as a part of your API. Without it, you are essentially requiring each of your API consumers to re-implement the state transitions for themselves. (The aggravating part of that, however, is that there's not too many good hypermedia-based client libraries, because it seems that most of the proponents would prefer to navel-gaze in their ivory towers, thinking about RFCs and getting the standards perfect. But I digress.)
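For a concrete picture, a HAL-style response might carry its available state transitions as links (the resource and relation names below are made up for illustration):

```json
{
  "id": 42,
  "status": "pending",
  "_links": {
    "self":    { "href": "/orders/42" },
    "cancel":  { "href": "/orders/42/cancellation" },
    "payment": { "href": "/orders/42/payment" }
  }
}
```

A client that understands the media type follows the advertised relations rather than hard-coding URL patterns; if "cancel" is absent, that transition simply isn't available in the current state.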
I'm not sure I see the advantages. It's a lot more verbose, based on the few examples I'm seeing there, and strongly reminds me of SOAP, XML Schema, and the rest of that over-complicated stack.
I suppose some amount of complication would be fine if the benefits were clear, but I feel like I could implement a client and a server for the less-verbose API that you start with, in the amount of time it would take to implement a server alone with all those bells and whistles. And I can't think of any scenario where they would actually add value.
I think the reason why RESTful APIs with very simple JSON payloads became so popular is because they get the job done with so little effort and so few abstract concepts to grok. This feels like a step backwards in that respect.
> Most of the problems described with REST in this article are examples of REST implemented improperly
If so many people get an idea wrong, then there is a problem with that idea. With SOAP or JSON/XML-RPC, there is a normative spec, and your implementation either passes the spec or it doesn't; something either is SOAP or it isn't, and that is provable. There is no spec describing what exactly is REST and what isn't. You can't demonstrate something is or isn't REST in a strict fashion.
A dissertation is not a spec; it is a discussion. Roy Fielding never wrote a REST or HATEOAS spec, which leads to you complaining about how people get them wrong. They didn't get anything wrong: they took some of his ideas and rejected the rest, because nobody can tell them what is REST and what isn't.
I think the first comment/reply to the post hits the nail on the head, by Florian Klein[0]:
> Awesome! You replaced REST with REST. How revolutionary:
> Instead of transferring application state through the wire via one of its representations,
> you now transfer its state through the wire via one of its representations.
> Do I need to go any further?
> I think so, because to me your confusion comes mainly from the vocabulary you seem to misinterpret. You didn't even talk about hypermedia or links! That's kinda strange in an article about REST.
> Even though I agree this vocabulary can be confusing, you should not throw stones at the wrong subject (REST).
> Your problem is HTTP, right?
> Rest is not tied to http at all, and your point about making response bodies self contained is an honorable idea, but it doesn't change anything to your application being RESTful or not.
I work building, debugging, deploying, and integrating JSON-formatted RESTish web APIs every day. I find purist APIs to be the most painful to use, and I find the this-is-pretty-much-RPC-over-HTTP approach the second most painful, unless it's perfectly built for my use case.
There is a perfect middle ground for most API implementations, and really it comes down to the architects and software engineers implementing the API being able to think like API consumers: thinking through most use cases for their APIs up front, then taking a position of "80% of people will use it this way, so we'll cater to the masses but support the other 20% like this".
There are definitely the marketing types who sell REST as the be-all-end-all solution to APIs, and all the other bells and whistles that go along with it. That isn't unique to REST.
I'm not on a mission to appease 100% of all use cases for an API or integration. Such a solution will never exist, despite people making out like there must be some "holy grail" of API definition and design out there and that %insert_current_fad% is wrong. No. We're doing our best, and today's APIs, with their test consoles and interactive walk-throughs for people new to the API, are a hell of a lot better than what we had before.
> most client and server applications don’t support all verbs or response codes for the HTTP protocol. For example, most web browsers have limited support for PUT or DELETE. And many server applications often don’t properly support these methods either.
I have never, as in ever, stumbled upon this problem. So I googled it. It turns out that what he means is that HTML forms don't support PUT and DELETE. In a world where everybody uses JavaScript anyway, I cannot see how this is a flaw in RESTful API's. It might be a flaw in HTML forms though.
PATCH is easy, especially when you use one of the two more useful standards (RFC 7386: JSON Merge Patch, or RFC 6902: JSON Patch).
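JSON Merge Patch in particular is tiny; a sketch of the RFC 7386 merge algorithm in Python:

```python
def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON Merge Patch to a target document."""
    # A non-object patch simply replaces the target wholesale.
    if not isinstance(patch, dict):
        return patch
    # Patching a non-object target starts from an empty object.
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)   # null means "delete this member"
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

doc = {"title": "Hello", "author": {"name": "Jo", "email": "jo@example.com"}}
patch = {"title": "Hi", "author": {"email": None}}
print(json_merge_patch(doc, patch))
# {'title': 'Hi', 'author': {'name': 'Jo'}}
```

The whole spec is about a page: nested objects merge recursively, nulls delete, and anything else replaces.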
PUT is not much use for large resources (like those typically found in business systems), since it requires you to provide every field; but usually you would want the back end to assign at least the primary identifier, date created, etc. I seem to remember reading that even Roy felt PUT was not well designed.
POST is a catch all for any other update/insert type activity.
To paraphrase Winston Churchill, "HTTP verbs are a poor system, but better than all the other systems that have been tried from time to time."
When you say you want the server to assign the identifier and additional fields, then I guess this would be considered read-only metadata against the resource. I don't recall seeing anything stating that in my RESTful travels.
Seems like most of the comments hate on REST because it's been poorly implemented by API developers. Why is the solution yet another API protocol (JSON-Pure) and not stricter enforcement? (Idealistic, I know...)
The point about SOAP not requiring documentation makes no sense either. You'd still need to document what the underlying fields in the various endpoints are. (We build against a lot of terribly documented SOAP APIs and it's pure torture.)
In terms of PUT (and PATCH) not being extensively used - it comes down to your use case. For the idempotent micro-services we build APIs against, there is a massive difference in the behavior expected for POST/PUT/PATCH and it would be pretty burdensome (and limiting) to have to create parsing code on the server for POST.
I liked that the article made me think of using WebSockets for a pure JSON API, but I think it misses a lot of what is nice about REST, and much of what it criticises is actually HTTP. REST, as a set of verbs that act on resources, is really useful.
I used to find it awkward to implement services in REST, i.e. some action that is triggered and may outlive the request cycle, until I started thinking of service commands as items in a work queue that get processed by a worker. So when a service is requested, I can see it as a resource being created.
From the article: "The way forward: JSON-Pure APIs".
I can see a little of this. It's better today to send parameters in HTTP data in JSON format rather than encoding them in the URL. If you're doing a pure GET, you can send parameters with the URL, but anything that changes server side state probably shouldn't be done that way.
Of course, what's happening is that the JSON crowd is re-inventing SOAP, but, whatever.
> Of course, what's happening is that the JSON crowd is re-inventing SOAP, but, whatever.
SOAP uses schemas and the like; they just keep reinventing the same RPC as ever, except they can't actually bring themselves to use the word. At least JSON-RPC is honest there, and it ticks all of TFA's boxes:
* uses whatever's under as a trivial transport (covers 1, 3 and 6)
* all data and metadata are part of the higher protocol (covers 2, 4 and 5)
Hence also reinventing CORBA IIOP, DCE, Apollo NCS, Xerox Courier...
Actual innovation in this area seems to be limited to JS background data sync libs. Ironically, reference-capable protocols like CORBA anticipated this whereas flattening it all REST-style was a step backwards.
I think REST is a perfectly acceptable and well-adopted pattern. There's no lie to it. I think GraphQL could displace it but adoption will be key.
What I am sick of with REST APIs is having to code them from scratch in dynamic languages each time and hook up all that laborious plumbing!
I read in an O'Reilly book on the subject that good REST APIs are declared. To that end I've been looking to tools like PostgREST, PostGraphQL, Swagger, and the ill-named Servant. Define the data, declare the resource routes, and you're good to go; generate the server stubs, the client code, and the documentation from the specification.
You don't get that from adopting a new standard every few years.
> REST became popular when it was detailed and promoted by Roy Fielding as part of his doctoral dissertation entitled Architectural Styles and the Design of Network-based Software Architectures in the year 2000. Roy is well known for his contributions to development of the web, especially the HTTP specification.
So far so good.
> Roy advocated using the request methods he helped define in the HTTP standards to impart meaning to HTTP requests.
Not even close.
REST may be one of the most misunderstood ideas in all of computer science. As described in Fielding's dissertation, REST is a software architectural style whose main idea is that hypertext drives state changes, aka Hypermedia as the Engine of Application State (HATEOAS). Fielding leaves the transport protocol as an implementation detail.
In other words, an automated agent (API consumer) should interact with an API in the same way that a human interacts with a Web site. User browses to well-known URL. Page displays content with links. User clicks on a link. And so on.
A human doesn't need to read documentation on using any particular website any more than an automated agent should need to carry service-specific instructions on using an API. Media types and hyperlinks do all the heavy lifting.
The stuff about structured URLs and response codes came later, was created by others, and has almost nothing to do with REST's central idea. It's a different architectural style altogether.
From Fielding himself:
> I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today’s example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating.
On first pass, I totally agree with the author. I haven't reviewed the proposed alternative yet but fingers crossed.
That said, I think he misses the single biggest issue with REST APIs that I continuously encounter and which has caused me to consider them sub-par. Representative "State" Transfer. REST APIs are only good for transferring around the state of stateful objects.
However, CRUD operations are only part of the equation in most modern software. Much of what we do is performing actions on those stateful objects (or, better yet in the microservice world, avoid state altogether). REST provides no identifiable mechanism for performing actions outside of CRUD operations and is, therefore, a completely impractical solution. Most real companies' attempts to build a "RESTful" API just end up being RPC over HTTP dressed up to look like REST.
> REST provides no identifiable mechanism for performing actions outside of CRUD operations and is, therefore, a completely impractical solution.
The mechanism is identifiable once you start to really think in REST terms. The problem is that people think in RPC, and so try to shove its model instead.
REST is based on names (resources), not commands. Therefore, we must implement our actions as resources. Instead of having a "transfer()" command, you have a Transfer resource type, which you create to start one. Eg:
POST /transfers ... (Body describes parameters)
201 Created, Location: /transfers/34
This change from a command to a resource additionally provides the benefits of REST, necessary for communication over an unreliable network: rather than keep your connection open while the action is processed, you get an URL you can poll to check if it is finished. This works even if the client goes offline for some time.
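A minimal sketch of the server-side state behind that pattern (the class, routes, and field names here are illustrative, not from any particular framework):

```python
import itertools

class TransferService:
    """'Action as resource': POST creates a Transfer, GET polls it."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._store = {}

    def create(self, params):
        # Handles POST /transfers -> 201 Created, Location: /transfers/<id>
        tid = next(self._ids)
        self._store[tid] = {"params": params, "status": "pending"}
        return 201, f"/transfers/{tid}"

    def get(self, tid):
        # Handles GET /transfers/<id> -> 200 with current state, or 404
        transfer = self._store.get(tid)
        return (404, None) if transfer is None else (200, transfer)

    def mark_complete(self, tid):
        # A background worker would call this when the action finishes.
        self._store[tid]["status"] = "complete"

svc = TransferService()
status, location = svc.create({"amount": 100, "to": "acct-9"})
print(status, location)           # 201 /transfers/1
svc.mark_complete(1)
print(svc.get(1)[1]["status"])    # complete
```

The client never holds a connection open for the duration of the action; it just keeps the URL and polls (or gets notified) at its leisure.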
To add to my previous post: it's not like this is a weird concept; every e-commerce site out there implements this. Imagine if we followed the RPC model: the user would click on "submit", and then the browser would wait for a reply when the order was finished being delivered - multiple days later!
Obviously this is impractical, so people follow the natural REST model: they create an Order resource, then send you a link with which you can follow the status of that resource.
But bring the "API" word and suddenly all of the preconceptions push developers to implement everything as function calls.
The promise of APIs is simple. Send some data, something happens, get some data back.
But at about hour 4 of wrangling a bearer token to authenticate your barely-documentated PATCH request that is returning a strange error about your application/x-www-form-urlencoded body, you start to realize that APIs in theory are very different from APIs in practice.
(That being said, I don't love the "solution". It's very simplistic, and it seems like the author doesn't really understand what he dislikes about APIs. I don't think we need another protocol, but rather higher-level tools for dealing with them.)
I've never actually seen a REST API in practice, just HTTP APIs, by which I mean I've never seen a fully hypertext driven interaction with data. Let me sketch what I would expect that to look like.
I fetch from a URI (http://example.com/meep/boris) and get back a response with Content-Type application/meep; version=3. My browser searches for a viewer for that content type. The content type defines the actions I can take on the content, either locally or talking to the server, just as it does for any other protocol passing data over the wire. That understanding has to be out of band for the model.
But to be truly RESTful, the viewers must also be accessible via hypertext. For example, the response with the content could include a link header to point me to a usable default viewer for the content type. So you have to have a content type that whatever you're using (browser, SPA in the browser, etc.) knows how to interpret as a handler for content types. Whether it uses PUT, PATCH, or BABOON in HTTP verbs should be irrelevant. That's part of the protocol of the content type handler.
It's that second part that's a problem. The security implications are a bit worrisome. And if you're shipping handlers only for your own data, you control all the pieces, so there's no point going through all the decoupling.
Here's the rub: you're completely right. The problem is that, for the most part, when you don't have rockstars making JSON-Pure APIs, you end up with half of HTTP redone in some god-awful manner.
I once worked with an API where they implemented their own HTTPS, and because their own HTTPS didn't support gzip, they removed the quotes from JSON keys to save bandwidth. JSON-Pure probably is better when working with knowledgeable people, but REST is better than what most people come up with when not following REST.
> Ever notice how nobody calls their API “RESTpure”? Instead they call it “RESTful” or “RESTish”.
Isn't that just because RESTful is a play on words?
> For example, most web browsers have limited support for PUT or DELETE.
Really? How? If anything, browsers restrict these methods to exactly how they should be used (DELETE can't have a request body but PUT can).
It's true that folks don't use PUT/DELETE much (the alternative works perfectly well, though), but to me that's because they're unnecessary complexity, not because of browser support.
Relating to this... recently I needed to hook a web app up to a JSON-RPC API and also extend that API, working with a legacy system. It was interesting to note that there was only one POST verb used in the entire API, and the data format was always the same. At first I grumbled and was annoyed at having to use this system, as it was old as hell, but at some point I realized, "oh, I don't have to worry about verbs now", or about where to grab things from: params, query string, headers, or body... so it was actually easier. I still hated it, though, because it wasn't REST and I was pretty biased toward the status quo.
What I'm trying to say is that having more choices, or more ways to do something, isn't necessarily a good thing: it makes it easier for people to do it in so many different ways that they screw it up or just make it weird for everyone else. REST is kind of weird that way, and I agree with this article for the most part.
New mobile-first products should consider GraphQL over REST. There's a lot of benefits that you will probably end up doing yourself in a bastardized version of REST anyways, like field restrictions and multiplexing multiple unrelated data requests over the same network call.
I think there are two types of developers: pragmatic ones who put their focus on making things better/easier, and those who generally don't give a shit as long as they have some kind of standard to go by. The pragmatic ones are the ones who come up with new frameworks, new languages, new standards. The rest are, sort of, like sheep. And we are all sheep at some point. It's just a problem when the sheep have a problem with someone trying to come up with a new thing.
So when I see a blog article like the author's, I think we should just hear him out, get what we can out of it, and either support him or let him be. Sure, he may not be perfect, but at least he has tried scratching an itch where he saw a potential problem and looked for a solution. And the problem he has is actually shared by thousands of developers, so he is definitely not alone.
Every couple years (hell, every couple months now), we face some new technology, or some new way of doing things and there is always the initial set of people who come across as being highly offended by it.
Like back in the days when XML was all the hot stuff: when JSON came out, most bypassed it, stating it's not sophisticated enough, doesn't have any XPath support, bla bla bla, doesn't have a reference book, must not be ready for the enterprise. It was the same thing with SOAP when REST first came about. And this type of pattern repeats throughout the years, with frameworks, languages, and probably everything else.
People hate change. I think we all get that; it's just the way it is. However, over time the same people who were against something slowly give in to the very thing they opposed, as more people start talking or hearing about it. I'm sure it has happened to you at some point.
So folks, don't just judge a book by its cover. Give it some time and perhaps even give it a try. RESTful may be good as it is for you now, but I'm sure you also know that it is not as good as it gets. There are those who believe there are problems with the RESTful approach we have now and who look for a new standard that solves them. They may be right, and perhaps they will come out with something we will all be using some day. Who knows.
I've developed both REST and XML-RPC APIs. The right tool for the right job.
Sometimes REST is that tool.
I also tend to combine JSON with REST. I still enjoy using proper HTTP codes when possible. So if you're sending a JSON payload to update an object that doesn't exist, I'll give you a 404. But at the same time, you're able to create that object by defining its properties in a JSON payload in the next request, not in some obscure URL parameters or POST parameters.
I don't see anything wrong with using proper HTTP methods and return codes when possible.
I think what bothers me most about REST is that the endpoint is not sufficient to get started. The schema is never known. I hate SOAP and XML-RPC, but at least you can know for sure what endpoints, parameters, and variable types are appropriate. With REST, you must have the documentation or source code of the service you communicate with; they offer no discoverability.
I won't use REST again. I got an opportunity to use GraphQL recently in my profession and all of my projects will be using it in the future.
> I think what bothers me most about REST is that the endpoint is not sufficient to get started. The schema is never known.
If you are actually doing REST, the interpretation of a resource representation is fully specified by the media-type (insofar as if there is additional schema, etc., information necessary, how to find that given the actual representation is also defined by the media type.)
Obviously, this is not true in the all-too-common REST-minus-HATEOAS.
You can sprinkle in links/rels and use custom/vnd MIME types (including links to the correct JSON or XML schema), and then a developer can figure out how an API works just from an endpoint, yes...
but that's not what this person is saying. The thing about SOAP is that it is very concrete, and a standard. Yes, you get implementation glitches/bugs here and there, but for the most part you have a standard definition for an object and a standard way to describe it.
It's so standard you can point a debugging tool like SoapUI at it and it will construct a nice little form for filling out all the objects you want to pass with the correct types.
There are other terrible things about SOAP and it's difficult to version/add functions/update/blah blah, but it was concrete.
You could create tools to auto-discover all the paths in HATEOAS implementations, but it's still not quite as concrete as SOAP. Most people don't, and instead rely on documentation to build clients with the right schemas/requests/types. REST/HATEOAS are concepts, and you document your formats, marshallers, etc. yourself. SOAP/WSDL describes the entire data interchange in a much more concrete format.
As much as I hate SOAP, I wish we had a more concrete interchange standard that wasn't as terribly complex.
Swagger is a decent middle ground. There are a lot of warts, but it's serviceable and fairly prevalent, with implementations in a variety of languages. Mainly I like to use the YAML version to quickly communicate REST APIs. I've also recently started appreciating JSON Schema for this reason: it's lighter weight than XML schemas, and the inclusion of regex-based field validation is pretty handy.
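As a sketch, a JSON Schema fragment with a regex-validated field might look like this (the field names and pattern are made up):

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "required": ["id", "sku"],
  "properties": {
    "id":  { "type": "integer", "minimum": 1 },
    "sku": { "type": "string", "pattern": "^[A-Z]{3}-[0-9]{4}$" }
  }
}
```

The same document drives validation on the server, documentation for consumers, and form generation in tooling.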
REST without using hypertext data types is indeed non-discoverable. You have experienced why REST is almost pointless without HATEOAS. Next time, look into using REST with HATEOAS, as it was intended to be used.
Previously, using HATEOAS, gathering large amounts of data, say for some kind of reporting dashboard, required a great many separate HTTP requests; it was far less efficient in I/O and performance than going without.
Anyway, I find GraphQL provides this solution very nicely: I can ask questions about the great big world and request only specific fields, like a user's name, the names of their 10 closest friends, and their availability status, all in a single call.
With HATEOAS this could be 10 to 20 calls, and maybe contain extra information in the response I'm not necessarily interested in.
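For comparison, the single-call query described above might look something like this in GraphQL (the schema and field names here are hypothetical):

```graphql
{
  user(id: "123") {
    name
    friends(first: 10) {
      name
      availabilityStatus
    }
  }
}
```

One round trip, and the response contains exactly the requested fields, nothing more.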
I was looking for someone to mention GraphQL... For lots of cases this is a great way to reduce client calls and give the client exactly what it is asking for.
Did I miss something in the article? I was hoping that it would expound on why REST should not be trusted. A more apt title would be:
"REST is totally flawed" or something similar.
Saying REST is a big lie is somewhat misleading when the overall context of what it is and what it should be is still up for debate.
Does it matter, when you can't completely and fully define what REST should be, but you are out there building awesome products while the rest of the web world is still up in arms over what it has to be?
I think this article actually misses some good criticism of REST that I've encountered over the years:
- how do you perform queries across resources?
- how do you perform actions that don't map to CRUD actions (e.g. tell a server to reboot itself)
- how do you perform actions that are transactional across resources?
I'm not saying there aren't answers to these questions, but doing any of these things in REST is not straight-forward.
This article is a complete strawman. His description of a supposed "REST"ful API is the least RESTful API I've seen in a while:
> The what-we-actually-intended-to-use request method embedded in the request payload, e.g. DELETE
Don't do this.
> The what-we-actually-intended-to-use response code embedded in the response payload, e.g. 206 Partial content.
Don't do this!
> If you’ve ever worked with a RESTful API, you know they are almost impossible to debug.
You're begging the question.
There exist plenty of APIs that abuse the HTTP methods and status codes, which I feel is really the core argument being made. But completely ignoring HTTP and its purpose is throwing the baby out with the bathwater. Read and understand the RFCs for HTTP, for a start; unfortunately, I'd wager that far too many devs of ostensibly RESTful APIs do not, and it shows when you get a response with a status code that makes zero sense. The vast majority of HTTP APIs I've interacted with violate both the semantic meaning of the methods and that of the status codes. (GitHub's is about the best I've ever seen.) A "RESTful HTTP API" is an API that uses the mechanisms in HTTP to accomplish the ideas of REST. (I'm not trying to equate HTTP and REST; I just think abuses of HTTP are a major impediment to understanding REST. Using HTTP well will naturally help you accomplish REST.)
> can anyone out there explain to me what 417 Expectation failed really means?
You've not read and understood the RFCs for HTTP. 417 Expectation Failed is obvious if you have; an expectation (on the request) failed (cannot be met by the server). An "expectation" is denoted on the request through use of the "Expect" header. The only existing expectation is 100-continue. Even if you do not know this by heart (and I don't expect that), it's readily findable:
1. Google "http rfc status codes"; unfortunately it's the second result; Google doesn't understand that the second result is an updated version of the first. Regardless, if you go for the first result, it points you to the second (-ish, b/c the RFC was split into multiple).
2. You select "417 Expectation Failed" in the Table of Contents.
3. You read the extremely straightforward explanation. If you don't understand what the Expect header is for, the RFC links you to it.
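The registered status codes aren't even exotic knowledge — most standard libraries carry them. Python's stdlib, for instance, knows the name for 417:

```python
# The IANA-registered phrase for each status code ships with the
# Python standard library; no RFC spelunking required.
from http import HTTPStatus

status = HTTPStatus(417)
print(status.phrase)  # Expectation Failed
print(HTTPStatus.EXPECTATION_FAILED.value)  # 417
```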
So what are the ideas of REST? Start at its (de-)acronym: "Representational state transfer". That is, transfer of a resource. A "resource" is just an "object" or a concept, a thing — the actual concrete thing represented by a resource is going to be determined by your domain specific problem. E.g., "a user's profile data", "a message in a thread", "a news article" are all "resources" in that they embody some concept or idea that we want to communicate the underlying state of. You also need a standard, or uniform method of uniquely identifying, or locating these resources, which is what a URL is for (you then see why it's lit. uniform resource locator). So we build URLs to stand in as names for resources.
In order to transfer the state of a resource, embodied at a URL, you need to send it across a wire. You need to serialize it into some representation that's going to get transferred. That's what HTTP is supposed to help you do.
If you were writing a RESTful API, embedding things like the status of the operation in the response body should feel wrong, because the status isn't conceptually a part of the resource you were trying to operate on in the first place; go back to our example of "a post in a thread" — what's a status got to do with that? While technically, yes, HTTP's entity body is capable of transferring arbitrary binary data, using it that way simply reinvents wheels, such as needing to signal the success or failure of operations on resources. HTTP's purpose, alongside the ideas of REST, is to pull out the common bits that occur when writing code that transfers representations of stuff around: caching, getting an OK to transfer large content prior to writing it all out to the wire, knowing the status of the operation, or pagination of collections of resources. (I cannot count the number of times I've witnessed API designers reinvent pagination, badly!)
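To make the contrast concrete, a toy illustration (both response shapes are made up for this example):

```python
# Two ways to report "not found". In the first, middleware sees only
# 200 OK and an opaque blob; in the second, the outcome lives in the
# transparent part of the message, where caches, proxies, and devtools
# can act on it without parsing the body.
import json

# Status tunneled inside the body:
tunneled = (200, json.dumps({"ok": False, "error": "not found"}))

# Status on the status line, body reserved for the representation:
http_style = (404, json.dumps({"detail": "no such post"}))

print(tunneled[0], http_style[0])  # 200 404
```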
The manner in which people like to use HTTP, with effectively only GET and POST (maybe!) is more akin to RPCs to me. It works. You can do that. But you then need to handle caching, pagination, status of operations, etc. on your own, and you'll end up, I believe, reinventing HTTP. Doing a lot of that (I think, and I think this was Roy's original point) is more effective if you structure your operations around transferring representations of resources around; this is especially visible in caching, because in caching you need the representation of a resource, because that's what a cache works with by its very nature. (As opposed to, say, making opaque method calls on a remote instance.)
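The caching point can be sketched in a few lines. A toy server (made-up store and URLs) that hands out a validator with each representation can answer conditional GETs cheaply — something an opaque RPC endpoint gives a cache no way to do:

```python
# Minimal sketch of validation-based caching over representations.
# The "server" derives an ETag from the representation's bytes and
# answers a matching If-None-Match with 304 Not Modified (no body).
import hashlib

store = {"/movies/1": b'{"title": "Waterworld"}'}


def server_get(url, if_none_match=None):
    body = store[url]
    etag = hashlib.sha1(body).hexdigest()
    if if_none_match == etag:
        return 304, etag, b""  # cache's copy is still fresh
    return 200, etag, body


# First fetch: full representation plus its validator.
status, etag, body = server_get("/movies/1")
# Revalidation: the cache replays the ETag and gets a cheap 304.
status2, _, body2 = server_get("/movies/1", if_none_match=etag)
print(status, status2)  # 200 304
```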
Go read [1]; nothing really in there is bound to HTTP, just that HTTP makes a lot of it easier. Also, while I understand that Roy has a lot of arguments around how representations should be hypertext — and I agree with them, mostly — I think that comes second to the ideas that:
1. A URL represents a resource
2. The point of GET / PUT / DELETE is to transfer the state of that resource.
If you don't understand those two points, I don't think you'll understand the arguments behind hypertext.
> They are easy to debug since transaction information is found in easy-to-read JSON inside the payloads using a single, domain specific vocabulary.
Pushing everything into an opaque blob removes the ability for any tooling to pull out high level information. Chrome devtools, httpie, etc., all disprove this point.
> Problem #1: There is little agreement on what a RESTful API is
I agree. I also feel like too many people who think they know have not read anything from Roy Fielding, or even the HTTP RFCs.
> The REST vocabulary is not fully supported
That depends mostly on the claim that:
> most client and server applications don’t support all verbs or response codes for the HTTP protocol. For example, most web browsers have limited support for PUT or DELETE. And many server applications often don’t properly support these methods either.
This isn't true: JavaScript in every browser in respectable use supports these methods; Android and iOS fully support HTTP; most server-side languages have excellent tooling for them. A claim like this requires proof, or at least a concrete example to back it up. (And even were it true, it would only invalidate the use of HTTP as an aid to accomplishing REST, not REST itself.)
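As a quick sanity check on the server/client-tooling side: even Python's bare stdlib constructs PUT and DELETE requests without fuss (the URL below is a placeholder; nothing is actually sent):

```python
# urllib.request has accepted an explicit method= since Python 3.3,
# so the "esoteric" verbs need no special handling at all.
from urllib.request import Request

put = Request("http://example.com/movies/1", data=b"{}", method="PUT")
delete = Request("http://example.com/movies/1", method="DELETE")
print(put.get_method(), delete.get_method())  # PUT DELETE
```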
> Problem #3: The REST vocabulary is not rich enough for APIs
POST is, essentially, a catch-all for odd operations not supported by other verbs.
> Imagine we create an application where we want to send a “render complete” response back to an HTTP client
If we have a resource that represents a rendering job, if you GET a representation of that render job, it can include some indication of completeness. The bigger problem here is actually HTTP's polling, IMO. Websockets might serve this specific example better, but this singular example doesn't invalidate that most web APIs boil down to CRUD $object of $type, which HTTP supports phenomenally well.
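A sketch of that "render job as a resource" idea (the URLs and fields are invented): the client polls the job's URL with GET, and completeness is just part of the representation.

```python
# Toy render-job resource. A real server would update the job from a
# background worker; finish() stands in for that here.
jobs = {"/renders/7": {"state": "rendering", "progress": 0.4}}


def get_render(url):
    """GET the job's representation; completeness is just a field."""
    return 200, jobs[url]


def finish(url):
    jobs[url] = {"state": "complete", "progress": 1.0,
                 "result": "/renders/7/output.png"}


status, body = get_render("/renders/7")  # still in progress
finish("/renders/7")
status, body = get_render("/renders/7")  # representation now says done
print(body["state"])  # complete
```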
> Problem #5: RESTful APIs are usually tied to HTTP
Well, yes. At the end of the day, it has to be tied to something. HTTP is a pretty good something, with decent tooling.
> They use only one response code to confirm proper receipt of a message - typically 200 OK for HTTP.
Yes, you can send a single bit back. HTTP attempts to be a bit richer than this. E.g., if something is in progress, and attempting to retrieve the data I just stored will fail until some server-side job is complete, HTTP can easily signal this. Muxing everything into 200 OK removes that information.
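One concrete case of that richness: 202 Accepted says "received, but not yet applied", which a blanket 200 OK cannot express. A toy stand-in (the `/jobs/1` URL is made up):

```python
# A write that is queued for a background job rather than applied
# inline. 202 Accepted + a Location to poll carries strictly more
# information than a bare 200 OK would.
from http import HTTPStatus


def store_async(payload):
    # Pretend the payload was handed to a queue, not yet persisted.
    return HTTPStatus.ACCEPTED, {"Location": "/jobs/1"}


status, headers = store_async({"title": "Waterworld"})
print(int(status), headers["Location"])  # 202 /jobs/1
```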
> They completely separate the response content from the transmission mechanism. All errors, warnings, and data are placed in the JSON response payload.
HTTP already has this: the response content is the body of the response. The transmission mechanism is HTTP. I don't want errors, warnings, etc., in the payload, where they are opaque and unusable by common tooling.
> They can easily be moved or shared between transmission channels such as HTTP/S, WebSockets, XMPP, telnet, SFTP, SCP, or SSH.
These channels do not support transferring the state of something. (Okay, HTTP does.) telnet is a mostly dumb pipe, and the same goes for SSH. You would need to build some custom, non-interoperable layer on top of them; HTTP is the standardized version of that.
So much of my confusion is mapping POST and PUT to CREATE and UPDATE, which I've always felt was an arbitrary semantic decision. Why the unambiguous words CREATE and UPDATE are not used in the HTTP spec as method names is beyond me.
In the second article the author explains how a 'JSON-Pure' API would function. It completely throws out the OSI model and provides no way for the application layer to react to transmission errors, which HTTP status codes provide.
As the first comment in the article mentions, this guy just replaced REST with REST. Nothing revolutionary. I thought he was going to talk about some rpc mechanism or something.
Agree totally. REST is hard to work with and very confusing, since every application does it differently. I have seen applications returning HTML responses for error cases. REST is better than SOA or CORBA, but we need something better.
Like other posters have said, the author appears to be pointing out shortcomings with HTTP, not REST. Roy Fielding made it fairly clear that REST is not strictly associated with HTTP. REST, as an architectural style, is defined by a set of constraints: https://en.wikipedia.org/wiki/Representational_state_transfe.... Anything that meets these constraints is considered "REST". Most of the constraints sound like common sense for API design, and where most APIs are disqualified from being truly "REST" and merely "RESTish" is in trying (or not) to fulfill the Uniform Interface/HATEOAS constraint: that the client dynamically traverses information and operations from one resource to another via hypermedia links.
Interestingly there's yet a deeper problem with fully RESTful APIs (Hypermedia APIs), where REST's Stateless Protocol constraint combined with HATEOAS creates an API where clients need to undergo multiple HTTP round trips to load the data they need. For example, suppose your app lets users browse movies. You might have a sequence like:
client: "hey, I'm gonna act like a browser and hit api.com and take it from there"
GET api.com
=>
{
  "movies": {
    "rel": "content/movies",
    "href": "/movies"
  }
}
client: "hmm, ok I guess I'll click the movies link"
GET api.com/movies
=>
{
  "count": 4,
  "prev": null,
  "next": "/movies/page/2",
  "items": [
    {"href": "/movies/1"},
    {"href": "/movies/2"},
    {"href": "/movies/3"},
    {"href": "/movies/4"}
  ]
}
client: "ok, I guess I'll fetch each of those movies (I kinda wish the server had just told me what those contents were in the first place)"
GET api.com/movies/1... etc.
=>
{
  "href": "/movies/1",
  "rel": "self",
  "title": "Waterworld",
  "image": "/images/waterworld.png"
}
...
And don't forget the client-side logic to join/order all this data and handle errors. This problem is called "underfetching" and it's present in true REST APIs by design. Ironically, many "RESTful" APIs break from the REST constraints specifically to avoid this problem.
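The round-trip cost is easy to count with a toy hypermedia client (the fake in-memory "API" below mirrors the example responses above):

```python
# A link-following client against a fake API: one request for the
# entry point, one for the listing, then one per movie.
api = {
    "/": {"movies": {"href": "/movies"}},
    "/movies": {"items": [{"href": f"/movies/{i}"} for i in range(1, 5)]},
    **{f"/movies/{i}": {"href": f"/movies/{i}", "title": f"Movie {i}"}
       for i in range(1, 5)},
}
requests_made = []


def get(url):
    requests_made.append(url)
    return api[url]


root = get("/")
listing = get(root["movies"]["href"])
movies = [get(item["href"]) for item in listing["items"]]
print(len(requests_made))  # 6 round trips to show 4 movies
```

Embed the movie data in the listing response and the same page needs two requests — which is exactly the REST constraint most "RESTful" APIs quietly break.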
> Roy Fielding made it fairly clear that REST is not strictly associated with HTTP. REST, as an architectural style, is defined by a set of constraints:
Out of curiosity, do you know of any examples of RESTful APIs that use a protocol other than HTTP (say like IMAP or NNTP)?
He isn't going to provide you an example, because there's no meaningful one in production. REST uses HTTP because it makes no sense to use it with anything else; REST only makes sense with HTTP. With another socket protocol, there is no need to bother with headers and the like.
You're right, there are no other protocols at nearly the scale of HTTP that use REST. I was just pointing out that REST was not strictly associated with HTTP in Roy Fielding's definition. If you look at my other link (Richardson Maturity Model), you can see that RPC-over-HTTP is vulnerable to the same criticisms in the OP's link. Hence, I think the issues discussed in the article don't singularly apply to REST.
That is the real test, beyond technical jargon: if someone provides a working example that works over different protocols, as you have suggested, that will be the end of the discussion :)
Please don't post unsubstantive comments. If something is wrong, explain how so we can learn. Alternatively, if you don't have time to do that, please just don't post anything.
REST is successful for the same reason React Native is successful.
Before REST, there was SOAP, similar in spirit to the JSON-Pure solution proposed by this author. SOAP's promise was a version of Java's "Write once, run everywhere". In the case of SOAP, this meant that if you have a SOAP client, you could consume data from any SOAP server with only the domain specific part different.
Like Java, SOAP over-promised. In practice, there was horrible incompatibility between all the different SOAP providers and consumers; people lacked features, so they hacked them onto the protocol, only to discover later that the default SOAP client in $THAT_OTHER_LANGUAGE didn't allow them to consume that hack as easily as the first language did. And so on, and so forth.
REST, on the other side, hardly promises anything at all. In practice, most people doing a REST API agree on a rough idea of what URLs look like, and yes let's do our best and embrace HTTP verbs and status codes. In a way, REST is "learn once, use everywhere" - not unlike React Native's motto.
This became wildly successful because the clients were all compatible from the start (after all, HTTP was widespread already). Compared to SOAP and most RPC setups, the clients are super underpowered; you need to do a bit of extra work. The only way to know what URLs to compose is to read the API docs (because HATEOAS is beautiful in theory only). The data is often JSON, but maybe not, and you're again going to have to pay attention to the docs.
In practice, though, it keeps the developer firmly in control. Most developers using REST actually understand the entire protocol. They know enough HTTP to be dangerous, they can look at what goes over the line and it makes sense. The moment you add more features to the protocol, you just make it more complicated and obtuse.
I'm not convinced that's a way forward, not without serious (GraphQL-level) benefits.