I wholeheartedly agree with the majority of this. In my experience “REST” has been like a religion in which hardly anyone either read the founding text or bothered to interpret very much of it. Fielding’s thesis does not require all of what adherents have claimed it does. An API does not a priori need to adhere to any sect of “REST”, though for many use cases it will benefit. For example, a machine-to-machine API might well have verb URLs to execute something. That’s verboten, a sin but not an excommunicating taboo, in “REST”. The torch carriers can get very specific about a great many aspects, too. I find it refreshing when someone makes good use of HTTP in a way that REST would forbid. For example, if memory serves, Elasticsearch’s search API uses a GET verb, appropriate for retrieving, with a request body to provide information that would be super awkward in the URI. This API is not intended for UI usage, so why not do this?
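For the curious, here is roughly what that call looks like (a minimal sketch, assuming a local Elasticsearch instance and a made-up index name):

    import requests

    # Hypothetical index; assumes Elasticsearch on the default local port.
    url = "http://localhost:9200/articles/_search"
    body = {"query": {"match": {"title": "rest"}}}

    # GET with a request body: fine per newer HTTP RFCs, and a good semantic
    # fit for a read-only search too complex to cram into the URI.
    print(requests.get(url, json=body).json())

    # Elasticsearch also accepts POST, for clients that can't send a GET body.
    print(requests.post(url, json=body).json())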
User interfaces can very much benefit from strategic adoption of application state transitions modeled via hypermedia-style links, that’s entirely different.
How likely is it, however, that even if Roy wrote an article in a major publication and gave talks at conferences about what’s not really REST, the community would change its use of “RESTful”?
>How likely is it, however, that even if Roy wrote an article in a major publication and gave talks at conferences about what’s not really REST, the community would change its use of “RESTful”?
Ha, if agile is anything to go by, people on LinkedIn will be describing their recruiting process or marketing strategy or breakfast as RESTful.
Our recruiting process is based on a set of idempotent responses. For example, if you send in a cover letter for a full-stack position, we immediately respond with an email asking you to confirm that you are a rockstar ninja.
> This API is not intended for UI usage, so why not do this?
There are HTTP clients that are incapable of sending a request body with a GET request, because the original HTTP/1.1 RFC [1] says that a client must not send a request body in a GET request. This has changed with newer RFCs, but 2616-compliant clients still exist.
If you're going to do this, you need to support POST as well, to accommodate those clients. Indeed, Elasticsearch accepts both methods. At that point perhaps you might consider whether it's worth doing just to have a conceptually clearer HTTP method. My take on TFA as it pertains to this issue would be: why bother with the strict adherence to perceived REST principles here? The search API endpoint is not a resource that you're requesting, identified by that URL; it's an operation you're performing, so why not just treat it like the RPC that it really is?
[2] "A message-body MUST NOT be included in a request if the specification of the request method (section 5.1.1) does not allow sending an entity-body in requests"
Conceptually clearer HTTP methods are another valid direction to consider. Of course, the farther we wander from hypermedia, the farther we wander from the HT part of HTTP. At some point we recognize that HTTP became popular because firewalls let port 80 and port 443 traffic in and we have code-level APIs to do work; we just try to make our work look like it's hypertext.
>I find it refreshing when someone makes good use of HTTP in a way that REST would forbid
I've written many "REST" interfaces in my time and definitely tried to make the "right" choice vs. the dogmatically correct one when it made sense. I still have a copy of API documentation I wrote "with apologies to Roy Fielding."
This is all pedantry. I don't think I've ever talked to a developer who cares.
All people care about is whether or not you have a sane API to work with over HTTP. It uses appropriate verbs, maybe, and has meaningful endpoints and request and response objects you work with, hopefully.
You can make up a new acronym for it like "WNI" or Web Negotiation Interface, but it doesn't change anything that people care about.
There are bigger problems in the world. No need to keep redefining dirt.
If you spend a considerable amount of time pondering such things, it's probably a sign that you need to be growing somewhere else in life and using those skills to enhance your existing ones.
This shouldn't even be a thing to think about. HTTP verbs and error codes don't map well to many real use cases.
You might argue that "GET" does map well to a query. However, you can't send a body with a GET, so you need to put that information in the URL, which is limited and causes other issues to consider.
If we just admit that we're really doing RPC anyway, there is no point in using any other verb than "POST", passing in the arguments via the body, and returning any other codes than 200 and 400 with the result (or error) in the response. If we use JSON for this (as we likely already do), you have JSON-RPC.
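To illustrate, a minimal sketch of that style (the endpoint and method name are hypothetical, using JSON-RPC 2.0 framing):

    import requests

    # Everything goes through POST; success or failure lives in the payload,
    # not in the HTTP verb or status code.
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "searchUsers",  # hypothetical method name
        "params": {"name": "alice", "limit": 10},
    }
    data = requests.post("https://api.example.com/rpc", json=payload).json()

    if "error" in data:
        print("error:", data["error"]["message"])
    else:
        print("result:", data["result"])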
> You can make up a new acronym for it like "WNI" or Web Negotiation Interface, but it doesn't change anything that people care about.
I actually care about it. The word REST forces us to think about irrelevant things, such as whether something should be a PUT or a POST, or which error code maps best to an exception - only to end up with an API that is more annoying to implement, document and use.
We would do well to transition out of this insanity driven by buzzword compliance. If we managed to transition out of XML, we can do this too.
As soon as you ignore HTTP constructs, you drop infrastructural benefits and the benefits of particular levels of abstraction reacting to those responses, which are often fully intended to be used.
You can’t get frustrated that HTTP isn’t a raw socket.
If you didn’t care about any of that you might as well use websockets for everything. And then as soon as you do, you lose out on everything that requires an HTTP request/response.
You end up rebuilding things that already exist. It’s a waste of time.
> As soon as you ignore HTTP constructs, you drop infrastructural benefits and the benefits of particular levels of abstraction reacting to those responses, which are often fully intended to be used.
Can you turn this word salad into a real world example? If it's "you can cache GET requests through a proxy" or something of the sort: Fine, use HTTP GET for that. You don't need to do absolutely everything through RPC. Obviously, a web client using RPC needs to be shipped through HTTP GET, for instance.
> If you didn’t care about any of that you might as well use websockets for everything. And then as soon as you do, you lose out on everything that requires an HTTP request/response.
No. I'm fine with using HTTP for RPC. I'm fine with request/response, it maps well to RPC. What I have a problem with is shoehorning RPC into HTTP verbs and codes for literally no benefit. That's what distinguishes "real world REST" from the more reasonable JSON-RPC.
So not only do they not map cleanly to actual applications, but if you intermix your app's request statuses and error codes, you don't have any way to distinguish an error in your application from an error in your transport.
Does a 400 response mean that your HTTP is malformed, or that the app-specific request payload is malformed? Depends on the app...
You and the article author basically agree but you insist that you can just say “hey let’s just get over it, guys!” instead of deconstructing it. But the root of the problem is a fundamental misunderstanding. It doesn’t work to just get over it when you don’t understand what “it” is.
The poetry of writing semantically meaningful URLs is gone. We need really detailed descriptions of the interface boundary, of who is responsible and of how things are encoded, such that one can use any language they wish for either side of the call.
I think we rather realize that the original meaning was misconstrued and then have to use different terminology to describe the original idea because the original terms have been co-opted. It looks like an old idea being presented as new and indeed it is new to many.
The author suggests that "there's just far too much confusion about what REST means to rescue it" and proposes differentiating APIs into two other categories instead, but in my opinion this not only introduces more confusion, but even there doesn't seem to be any constraint that would prevent the same confusion from arising again. There's no library to enforce a certain standard. It's just another set of well-intentioned guidelines.
In a few years we would be reading "should we rebrand Hypermedia APIs?" kind of blog posts.
> in my opinion this not only introduces more confusion
I don't think that's possible. I've officially banned the word "REST" from being used in technical discussions. Using that word is not just useless, it's harmful.
Note that this is not an issue with the original paper, but everything that happened after it.
The problem is not REST per se, the problem is people thinking a machine-to-machine API "should" conform to REST. I have not seen a coherent argument for why HATEOAS should be a useful property for API's beside "REST says so".
That said, we should not throw the baby out with the bathwater. Some REST principles like statelessness are good design for API's also.
100% agree, HATEOAS is rather useless for machine-to-machine. And it shouldn't come as a surprise that most APIs decide to not commit to this design constraint.
I prefer to think of real-world APIs as coming in varying degrees of RESTfulness depending on the design constraints they have chosen to conform to. Good engineering is about making a judicious analysis of the trade-offs involved; if HATEOAS doesn't bring you anything, dropping it is the right thing to do.
HATEOAS was a little more useful when the commonly used media was XML rather than JSON. XML has XPath, which I was able to use to find the URLs for the different actions I want to perform, should they ever change. In practice, however, those URLs never changed or were constructed in a very predictable way.
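The lookup itself was simple enough. A sketch, with an invented XML shape and rel names:

    import xml.etree.ElementTree as ET

    # Hypothetical hypermedia response for an order resource.
    doc = ET.fromstring("""
    <order id="42">
      <status>pending</status>
      <link rel="self" href="/orders/42"/>
      <link rel="cancel" href="/orders/42/cancel"/>
    </order>
    """)

    # Find the action URL by its rel instead of hardcoding the path,
    # so the server is free to move it.
    cancel_url = doc.find(".//link[@rel='cancel']").get("href")
    print(cancel_url)  # /orders/42/cancel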
To be fair, you need at least one hard-coded URL in your client, don't you? To hit the root entrypoint.
And if you have one hard-coded URL... and use it as the root of other URLs, then it's no harder to support than anything else.
URLs are useful in REST because the API often crosses domain boundaries and company boundaries. One site links to another and so on. Most APIs are not like that.
In fact they're only like that when they serve, say, static image assets off a CDN, and guess what, then passing back a URL in your API is self-evident and natural.
If you look at the hydra API example[1] linked by OP, this is exactly what it needs: one entrypoint URL.
If your API only needs a single domain name (as over 99% of the APIs are), clients can just have the base URL encoded in a constant or a configuration file.
What HATEOAS ostensibly gives you is not the ability to painlessly switch domains, but rather to change your entire URL path structure and even the entire schema represented by your REST endpoints.
This feature turns out to be quite useless for most REST clients. Cosmetic changes to URL structure aside, any useful change (in other words, a schema change) will break the client logic, which is still hard-coded to deal with a predefined schema, predefined types of resources and a predefined set of actions that can be performed on each of them.
Roy Fielding accurately notes that a truly "RESTful" app should let the engine of its state be completely driven by the server. In other words the client cannot have any _specific_ logic related to "state transfers".
If your client is not doing this, like a browser does, then what does it have left to gain from implementing HATEOAS? I'm honestly interested in hearing good arguments, since I haven't heard anything that managed to convince me yet.
I've always worked all my APIs up to Level 2 (HTTP verbs) of the Richardson Maturity Model, but I've never felt convinced that HATEOAS is worth the effort. It seems like the majority of HTTP API designers out there have the same opinion.
> What HATEOAS ostensibly gives you is not the ability to painlessly switch domains, but rather to change your entire URL path structure and even the entire schema represented by your REST endpoints.
Which would be utterly pointless, because the fact this schema is opaque to the clients means that it's meaningless to the clients.
So you have no reason to change it.
> I've always worked all my APIs up to Level 2 (HTTP verbs) of the Richardson Maturity Model, but I've never felt convinced that HATEOAS is worth the effort. It seems like the majority of HTTP API designers out there have the same opinion.
The "maturity model" is honestly non-sense and Fielding also thinks that.
It's kinda like first level to being a bird is flapping your hands, second step is whistling with your mouth... No, you're not any closer to being a bird, you need to be the entire thing for the benefits of being a bird to come about.
Trying to stick to the CRUD concept of HTTP verbs is rather harmful. CRUD is a common starting point for a domain's verb structure, but being restricted to it is crippling and results in bizarre obscure semantics or underdeveloped business constraints.
I'd say to hell with all of it. I stick to HTTP as-is, and that's it. Also it's telling I think that to this day HTML forms only support GET and POST. Even HTML isn't RESTful I guess. /s
While Level 3, as currently implemented by some APIs, is a cargo-culted version of Fielding's idea of HATEOAS, Levels 1 and 2 of the model have some usefulness for some types of APIs.
What you get by implementing what is called a "RESTful API" nowadays is something that is very different from the original concept of REST, but is still useful to some applications, over plain RPC.
Let's unpack it. Fielding defines a RESTful system as one which implements the following constraints:
1. Follows a Client-Server model
2. Is Stateless (request-response model)
3. Indicates cacheability of responses
4. Has a Uniform Interface defined by the following restrictions:
   - Identification of resources (URIs)
   - Manipulation of resources through representations (media types)
   - Self-descriptive messages
   - Hypermedia as the engine of application state (HATEOAS)
5. Isolates access to non-adjacent layers
6. (Optional) Supports code on demand.
Applying all the restrictions stated above gives you immense benefits: for instance, cacheability, statelessness and uniform resource identification make your system easy to distribute and scale to handle large amounts of traffic; self-descriptive messages, manipulation of resources through representations and code-on-demand let you have a system that can evolve gracefully without requiring a big-bang protocol upgrade.
The half-assed "RESTful APIs" of today implement only a subset of the constraints listed above. They are always client-server, stateless, isolate access to non-adjacent layers and implement two of the uniform interface restrictions (URIs and manipulation of resources through representations). Sometimes they also indicate cacheability. However, they do not support HATEOAS (including most of the level 3 wannabes), code-on-demand and, most importantly, self-descriptive messages.
The lack of the constraints above means our modern APIs cannot achieve some of the benefits the web browser does: most notably flexibility and open-ended evolution. But they do have some other benefits over traditional RPC systems.
I think the maturity model itself is quite useless, but the following properties are useful:
1. Correct HTTP verbs: Differentiating between PUT and POST will let your intermediaries know when the request can be retried.
2. URIs uniquely identify resources: beyond the obvious benefit (cacheability), it allows for easy sharding, distribution and smart redirection by any intermediary server.
With traditional HTTP-RPC (even with a standardized protocol like gRPC or SOAP), your intermediary has to be fully aware of the RPC schema to cache or to know when it can retry. If you want to add a sharding gateway, you need to develop a custom solution. A Level 2 "RESTful API" still has all these benefits.
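As a sketch of the retry half of that (simplified; a real intermediary would also honor cache and retry headers, and the names here are made up):

    import requests

    # Idempotent methods per the HTTP spec: an intermediary can retry these
    # with zero knowledge of the API's schema. POST may have side effects.
    IDEMPOTENT = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS"}

    def forward(method, url, retries=3, **kwargs):
        for _ in range(retries):
            try:
                return requests.request(method, url, **kwargs)
            except requests.ConnectionError:
                if method.upper() not in IDEMPOTENT:
                    raise  # never blindly replay a non-idempotent request
        raise RuntimeError(f"{method} {url} failed after {retries} attempts")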
> Applying all the restrictions stated above gives you immense benefits: for instance, cacheability, statelessness and uniform resource identification make your system easy to distribute and scale to handle large amounts of traffic; self-descriptive messages, manipulation of resources through representations and code-on-demand let you have a system that can evolve gracefully without requiring a big-bang protocol upgrade.
Suppose that you wanted or needed any or all of these things: REST doesn't give them to you for free. If REST were a standard that people actually followed, as opposed to an almost meaningless buzzword, maybe you'd get some of these things for free. The original REST paper doesn't specify any of these things to a degree where they would "just work". If it did, it would be a standard so complex that nobody would implement it properly.
> With traditional HTTP-RPC (even with a standardized protocol like gRPC or SOAP), your intermediary has to be fully aware of the RPC schema to cache or to know when it can retry. If you want to add a sharding gateway, you need to develop a custom solution. A Level 2 "RESTful API" still has all these benefits.
The only thing you're reasonably going to be able to cache through a "dumb" intermediary is GETs for static or rarely changing resources. You don't have to handle these through RPC; you can use plain HTTP handlers for that. You don't need to go "full REST".
You keep calling it the system, as if REST gives scale and benefits to any system. No. It's for large-grained, mostly static resources directly navigated by a user, i.e. the web.
Fielding never claimed this architectural style works for machine apis. And it doesn’t.
What is the problem with hard-coded URLs? It is certainly simpler than having to extract the URLs from some other request. If they don't change (and they shouldn't!) I don't see the problem.
Hard-coded URLs represent a larger API surface area. Hypermedia-driven APIs, e.g. HATEOAS, permit the server to treat URLs as an implementation detail. Some people and some problems benefit from APIs with a large surface area. Some people and problems benefit from APIs with a small surface area.
I tend to like APIs with small surface areas because it means I don't have to drag along outdated endpoints across minor versions because the org doesn't want to v2 anything. URLs as an implementation detail makes backwards compatibility much easier. So many times it's been useful to combine or split endpoints as the business needs change without affecting clients.
The worst part about using HATEOAS for me has been clients who are used to constructing URLs continue to do so and create a ticket where the obvious first question is "Are you constructing that URL?"
Even if the client fetches the URL dynamically, nothing about the semantics of that URL's parameters or results can change, or the client will break; you can only add extra optional things that the old clients will ignore. All those URLs are still very much part of the API surface area even if they're not hardcoded: their functionality still needs to have out-of-band documentation (contrary to the HATEOAS assumptions - "self-descriptive messages" IMHO can work only for syntax and are an unrealistic goal as far as semantics is concerned) and fixed behavior to match that documentation. In all realistic cases the client can't properly interpret the data without programmer intervention if the model changes.
So this gives you only the freedom to change the URL naming scheme, but if you want to change the functionality in any way that's not 100% compatible, you still have to drag along outdated endpoints with the old functionality or you'll break clients.
But this goes so much against your comment that I'm probably misunderstanding you - can you give an illustrative example of how exactly this allows you to combine or split endpoints without affecting client software that sends/takes data from those endpoints and does some action (business logic) based on that data?
And another take on why clients being able to manually craft URLs is bad design: [0]. But prohibiting clients from manually constructing URLs really is not that hard: [1].
> In the future, I plan to make URLs opaque when building level 3 APIs. Instead of http://foo.ploeh.dk/customers/1234/orders, I'm going to make it http://foo.ploeh.dk/DC884298C70C41798ABE9052DC69CAEE
> [...] Obviously, that means that my API will have to maintain some sort of two-way lookup table that can map DC884298C70C41798ABE9052DC69CAEE to a request for customer 1234's orders
I pity the people who had to work with him, and the many thousands of hours this has probably ended up costing.
Oh yeah, having a table with UUIDs instead of customer numbers for primary keys does waste many thousands of hours, that's why CQRS is also a terrible idea.
Do you realize that without "constructing a URL" you need to go through a few round trips just to find the resource URL you actually need?
If you force your users through garbage, you get support tickets like that. Instead of blaming the users, think about why your great approach is actually problematic for them.
The single entry point has no benefits unless the API is multi-company, with no one party specifically hosting it. Is this the case? If not, your HATEOAS and single-URL approach is pure cargo cult.
URLs are not supposed to change since a URL identifies a resource. It is weird arguing for HATEOAS as a mitigation for breaking a more fundamental constraint of REST.
> URLs are not supposed to change since a URL identifies a resource
Strictly speaking, you're wrong: a URL specifies the location of a resource, and a mechanism for retrieving it. If the location of the resource changes, so would the URL.
Even without this philosophical argument, there is an actual practical issue that URLs do change all the time. Both HATEOAS and Persistent URLs are mechanisms to cope with that: HATEOAS proposes to write clients that instead of using hardcoded URLs would use late-bound URLs supplied by the origin server; PURL proposes to hide the origin server behind a proxy that would translate hardcoded URLs used by the clients to the current scheme and either proxy or redirect the requests.
The main problem with either solution is that URLs use domain names as their "authority" part, so whenever you lose control of the old domain, the "entry point" URL has to change and there is nothing much you can do about it.
Yes, I know that URLs (URNs, actually, IIRC) were supposed to be immutable and eternal, with redirects set up (and maintained somehow, by someone, forever) to point to the current locations, but that didn't work out for obvious reasons.
Well, if the API moves to a different domain name, HATEOAS will not help you one whit. You still need to hardcode the entry point.
I still don't see any reasonable scenario where an API would change the (site-relative) URLs without also changing the semantics of the API. And if the semantics change, you would have to rewrite the client anyway.
In any case, if you change the URLs without changing the semantics, you should respond 301 on the old URL, so even a client with hardcoded URLs would work correctly without the need for HATEOAS.
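For example, a sketch with Flask (the paths are made up):

    from flask import Flask, redirect

    app = Flask(__name__)

    # Keep the old URL scheme alive as a permanent redirect, so clients
    # with hardcoded URLs survive the reorganization.
    @app.route("/api/v1/customers/<int:cid>/orders")
    def old_orders(cid):
        return redirect(f"/api/orders?customer={cid}", code=301)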
Well the problem is exactly that they do change sometimes. And having only one hard-coded URL is easier than having twelve of those.
> It is certainly simpler than having to extract the URLs from some other request.
It's not extracted from another request; it's extracted from the response to the original request, and using a field from the local struct is not harder than using a global constant.
The practical problem I have seen with systems (mis?)designed like this is a scenario when the client needs to do action X on resource Y attached to thingy Z - and they already have all the required info to do that action - then they still need to do a request on Z to get the URL for Y and do a request on Y to get the url for action X, which adds extra requests and latency for no good reason. Like, there's no significant difference if there is some original request that has been made, but often that "original" request is otherwise unnecessary and only gets made to enable that extra layer of indirection.
> The practical problem I have seen with systems (mis?)designed like this is a scenario when the client needs to do action X on resource Y attached to thingy Z - and they already have all the required info to do that action - then they still need to do a request on Z to get the URL for Y and do a request on Y to get the url for action X, which adds extra requests and latency for no good reason.
If they have "all the info they need", they have the identity of Y, which in a system designed around REST is the URL; and if Y is a resource, then its URL is the URL for actions on it, and the type of thing it is will tell you what method to use and (where one is applicable) what resource to send to accomplish the action.
> Well the problem is exactly that they do change sometimes.
Well they shouldn't! And if the API arbitrarily changes URLs around without reason, then surely they might also change the URL for the initial request.
If you treat them as public interface, they shouldn't, yes. But if you treat them as internal implementation details, they absolutely could.
One can also think about it as static vs dynamic dispatch: you make an initial request and you get a vtable of "method name -> method pointer" as part of the response. Or you can hardcode the method pointers and make it the remote server's job to properly choose the appropriate internal endpoint.
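In code, the two flavors look something like this (a sketch; the response shape is invented):

    import requests

    BASE = "https://api.example.com"  # the one hardcoded entry point

    # Dynamic dispatch: the response carries a vtable of links.
    account = requests.get(f"{BASE}/accounts/123").json()
    # e.g. account["_links"] == {"deposit": "/accounts/123/deposit", ...}
    requests.post(BASE + account["_links"]["deposit"], json={"amount": 100})

    # Static dispatch: the client hardcodes the "method pointer", and the
    # server must keep /accounts/<id>/deposit routed correctly forever.
    requests.post(f"{BASE}/accounts/123/deposit", json={"amount": 100})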
I disagree. Even if an API is effectively used by a machine, the root cause either is human interaction or it makes sense to also allow human interaction (e.g. for testing, troubleshooting, interactive documentation). HATEOAS can help to define formats that solve both problems.
Obviously, since defining formats and driving logic through defined and identified-in-band formats is an element but not the whole of HATEOAS. OTOH, applications using pseudo-REST without HATEOAS typically hardcode both URLs and formats rather than relying on in-band identification of both.
> I have not seen a coherent argument for why HATEOAS should be a useful property for API's beside "REST says so".
Loose coupling and extensibility, the same reason that it's used on the web, which is quite critically a machine-to-machine protocol (while obviously humans consume it via user agents, automatic crawling and other machine-to-machine processing are key to the success of the web), and HATEOAS is key to why it works.
Another problem with REST is that Fielding himself describes it as a "large grain resource approach".
It's no surprise we love HTTP and URLs when we have to return, say, images, downloads or entire documents in our APIs. Just put the URL in, serve it on your CDN.
I think their point is that REST means nothing beyond "uses HTTP verbs" these days. Most companies/teams can't even agree on the differences between POST, PUT and PATCH, let alone status codes, payload structure, authentication, etc. So using the term REST will mean something different to every person using it. Similar to how a Ruby, PHP and Python engineer will envision a different architecture if you tell them to build a "backend".
They call REST anything that uses JSON over HTTP nowadays. What I don't get is, when they don't know or don't want to do REST, why they are not just calling their API JSON/HTTP.
Somehow someone thought it’s a good idea to name a spec JSON:API[1] which is a ton of fun in search engines. The spec itself is quite alright though and it makes requests rather predictable.
HTML (/browsers) has agreed-upon payloads, status codes and authentication. The lack of PUT/DELETE/etc. is more a function of use cases than of the HTTP/HTML standard, and there are plenty of example servers that are fully RESTful. That was also the original use case of web browsers (as a document-store/multimedia+hyperlink-enhanced FTP).
> I've officially banned the word "REST" from being used in technical discussions.
So if I want to refer to the architecture of that name described in Fielding’s Architectural Styles and the Design of Network-based Software Architectures, what is the approved circumlocution?
Hypermedia APIs, on the other hand, are an active area of research, so it makes perfect sense for them not to be standardized yet. And once a specific standard does emerge, it will have a different name too, of course.
I don't think anyone cares, really. It's a well-established name; people know what a REST-style API means. It's not perfect, but nothing is; it's good enough. Let's not change its name. What would be the benefit of it?
It would only cause even more confusion for no good reason.
I concur that most don't know what it means. Years ago I started reading around, trying to understand it, and all I found was that the ones who supposedly did understand really couldn't explain it clearly; and they all had different definitions.
Reminds me of "if you can't explain it simply, you don't understand it well enough"
Nitpickers do care. When they start adding HATEOAS and shoehorning semantics into HTTP verbs to make the API more RESTful, they make everything worse to use.
The problem is that REST is a buzzword, so you want to keep the word somehow, but throw out the nonsense.
> Nitpickers do care. When they start adding HATEOAS and shoehorning semantics into HTTP verbs to make the API more RESTful, they make everything worse to use.
Sounds like bad work organisation or bad priorities are a factor too. These same people could just as well make sure some other spec is followed to the letter, e.g. the email validation logic from another HN submission [1].
No, it does not deserve a rebrand. It owns the ecosystem of shit that it brought about over the last two decades.
I cannot tell you how many meetings I've sat through where engineers have wasted time discussing the RESTiness of a given "API" call, or which verb makes the most sense.
Anyway, we're only a couple of years of groupthink away from adopting something like JSON-RPC, and then it's welcome back to 1999 for all.
I think that's a valid discussion. Each method has specific semantics in the HTTP spec, and consumers of an API (both automated and human) will expect you to follow them.
For example, I expect to be able to retry a PUT request without much concern. But if I'm not able to do that then that's a problem that would have been worth a discussion.
I think the problem with REST is that these kinds of semantics discussions are helpful _if_ you go all in. Frankly, I've never seen anything more than toy apps that went all in. Most of the world wants JSON-RPC and is bending towards REST to be "correct." I suspect that giving in to RPC would be cleaner overall.
Until we build proper RPC composition (via lightweight monads, I'm sure) and let that sweep the world.
Excellent point. It's like what I said about Facebook: it's great for me in the sense that all the folks I have no interest in contacting are gathered in one place. So now I can avoid them all in one simple way.
> I cannot tell you how many meetings I've sat through where engineers have wasted time...
We have similar REST experts who are making everyone else's life hell.
Oh yes, the best were those who seriously claimed, in my company, that actual endpoints should have been random UUIDs. Because, you know, you must get there only through "discoverability" and hypermedia links.
And then another couple of hours wasted discussing how to encode some complex query into URL parameters.
On the other hand it was a good lesson to avoid such discussions in the future using any feasible excuse.
The term really is a bit too academic to understand for a lot of people who are unfamiliar with API design, but comes down to fairly simple concepts when explained in plain language.
I compare it to how monads have an ivory tower definition (the famous 'A monad is just a monoid in the category of endofunctors!') for mathematicians, but can be explained in practical terms much more helpfully, even if that loses some theory.
Years ago I edited the Wikipedia page--it has always been thorny. The trouble is that the definition of REST is so abstract as to be completely meaningless to the average person reading the article.
If you feel like spending some of your finite lifespan on something less than useful, read the talk page. You will find this same edit war playing out over 10+ years between the theorists ('a style of client-server stateless API design where resources undergo state transitions') and the realists, who (mis-)define it as an HTTP API.
Even though the theorists are correct, I think it's important to recognize that Wikipedia is going to be the first source for literally millions of people. It would be a tragedy not to acknowledge that.
If you pick an API stack, consider your users.
If you build an API for general consumption that you want to gradually evolve and that should remain valid years from now, a RESTful API with HATEOAS is a great choice.
If you need to drive a UI from your API and you control that UI, use something that makes you more productive and can be easily changed on both ends quickly, without regard to backward compatibility.
Not sure I understand:
> No business can be modeled purely as data transfers.
Most businesses can, unless you have a very different definition of what a data transfer is.
Most businesses can, in a convoluted way. But should they?
Take HN as an example. People can upvote comments. Should you represent the action of upvoting as a resource creation, in a REST way? Or as an action?
Take Amazon: how do you represent the action of putting a cart item aside? The action of paying? The action of changing which card is the default one?
Thanks for putting it succinctly. I was going over some REST design principles and the "no verbs in the URL" thing is just really awkward and not intuitive (and something being non-intuitive leads to complexity in my opinion).
As in your examples, people think in terms of data (noun) and what they want to do with it (verb). Cramming everything into 4-ish verbs makes it awkward.
The solution given for this is usually to put the action in the body. But that feels like working around a limitation that doesn't even have to be there.
> People can upvote comments. Should you represent the action of upvoting as a resource creation, in a REST way?
Maybe yes. Likes are a relationship between a User and a Post/Comment. In a standard normalised relational DB model, they would have their own table. So `PUT /like` would add a Like to that table.
I'm anti-normalization - I believe that many normalized tables are an antipattern and that likes are better stored not as their own table but as a list within the comment row - and I still believe that `PUT /like` is the right pattern to use, and that likes should be thought of as their own resource in a RESTful way.
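Concretely, something like the following sketch (Flask, in-memory storage; a variation on `PUT /like` that puts the user in the URL so the idempotency is explicit):

    from flask import Flask

    app = Flask(__name__)
    likes = {}  # comment_id -> set of user_ids; stands in for real storage

    # The like is a resource identified by (comment, user), so PUT is
    # naturally idempotent: liking twice is the same as liking once.
    @app.route("/comments/<int:cid>/likes/<int:uid>", methods=["PUT"])
    def put_like(cid, uid):
        likes.setdefault(cid, set()).add(uid)
        return "", 204

    # And unliking is a DELETE on the same resource.
    @app.route("/comments/<int:cid>/likes/<int:uid>", methods=["DELETE"])
    def delete_like(cid, uid):
        likes.get(cid, set()).discard(uid)
        return "", 204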
I'm pretty sure Netflix, Spotify, Notion, Airtable, GitHub and many others can be modeled as pretty pure data transfers. What am I missing? What internet-based service is not using data transfer as its base for building services on top of?
Indeed you can interpret my comment like that, so I will rephrase it another way: REST represents everything as resources and the creation, update and retrieval of resources.
Unless your business is fundamentally about storage without any kind of associated logic, REST will be a bad design choice. Why limit yourself to 4 verbs? This is insane.
Your business has more semantics than create, read, update, delete.
Git has commit, rebase, branch, tag, revert, amend and clone, and so does GitHub. A good API will represent these actions as actions and not as a convoluted mess of resource creations or updates.
There is an impedance mismatch between REST and most businesses, like there is an impedance mismatch between an ORM and a relational database.
I think we're talking about slightly different things. I'm using REST as an interface to storage (of blobs/objects/whatever), just like you would SQL SELECT, UPDATE, INSERT, DELETE. I then add more things that listen to changes to that storage, so the data is sort of the API.
You're talking about putting the API layer upfront so you'd have a REBASE verb in the same layer that we have GET/POST or SELECT/INSERT etc.
Your example of git easily abstracts on my view since it already does those basic operations, but not on yours since you'd need a separate REBASE verb and so on as a part of the protocol.
I think we just have different expectations of the protocol.
> I'm using REST as an interface to storage
> I then add more things that listen to changes to that storage, so the data is sort of the API.
I get it, but that's exactly the issue.
Data changes are a lossy way to represent business actions/intents.
Why would you represent your service as storage? Is your business about data storage? If not, that's the wrong abstraction.
What's the advantage of this?
> You're talking about putting the API layer upfront so you'd have a REBASE verb in the same layer that we have GET/POST or SELECT/INSERT etc.
I would definitely _not_ do that.
Separation of concerns tells me not to mix the transport layer with other concerns. HTTP is just a transport; business semantics have nothing to do in this layer.
"there's just far too much confusion about what REST means to rescue it"
No, there is no confusion. REST is exactly what's in Roy Fielding's thesis. Anything else is derived from REST, often with dismal ends. RESTful, RESTlike and RESTish are weasel words designed to say something along the lines of 'it's REST, but with these additions', or 'it's REST, but with these constraints loosened', or a mix of both.
I was there at the beginning of SOA in 2004 and SOA then was an amazingly useful concept that worked well. The term SOA was hijacked to mean so many things, often by consultants pitching their version of SOA enablement or training, and at a rapid pace. By 2009 the meaning of SOA was so buzzwordy and clouded that people were trying to invent new terms to refer to the original concept.
If someone wants to create a new thing, create a new thing and give attribution to the shoulders you stand on (i.e., ABC is based on REST but is different in these ways: 1, 2, 3).
SOA was overloaded even in 2004, when IBM was promoting it heavily. In 2021, it is so inclusive that even microservices fall under its purview.
> often by consultants pitching
You do realize the term originated with consultants, like many other terms used today (DDD, Gang of Four, Agile, anything from Fowler/Martin, etc.). Consultants may not always be the best at implementing things, but I've always found that they do a great job of naming and marketing repeatable business/software processes.
You're probably right. The Thomas Erl book came out in 2004, so it was popular at least a year before that.
Btw, I am a consultant. As for implementing things - there are consultants that track toward the marketing/image side of the spectrum and there are those that track toward the shipping/substance side of the spectrum, as in all things.
On naming you are spot on. Being good at naming things is almost a job requirement. I've been at larger companies where there were multiple training sessions on how to name things for marketing, and on when to hire outside consultants to come up with the names. I was on one project where tech leadership was outspoken and stepped on some toes. They respun the project's concept slightly and changed the name to sidestep the animosity.
I only remember the timing because I was at the MySpace parent company before/during its hypergrowth, so all sorts of architectural patterns were being discussed/evaluated/used. This was during a time when complaining about SOAP/XML was the trend, greybeards liked to remind the young'uns about the nightmares of COM/CORBA, and Domain-Driven Design and Patterns of Enterprise Application Architecture were the new shiny things.
> As for implementing things
That's on me, I took a cheap jab at consultants. I've worked with many great consultants, more often than not people I've worked with in the past that went independent (or someone they similarly worked with).
> Being good at naming things is almost a job requirement
I wish someone had told me that moving up the seniority ladder also meant more PowerPoints, presentations, and buzzwords. Every startup CEO I've worked with seems to fall for the allure of language, since they're now point person to investors and are expected to provide 12-16 hours of board meeting material a year. Finding succinct language to explain ideas that will likely cost millions of dollars is hard work.
> changed the name to sidestep the animosity
I had a boss that liked to tell me "Don't focus on getting more people talking louder about your idea, but to stay quiet so the loud few are heard". I didn't need to get people to love my ideas, just not speak out against them. Neutrality is sometimes just as good as an endorsement, and that's doubly true with engineering work when you're usually trying to convince non-or-slightly technical peers to invest in your recommendations.
> greybeards liked to remind the young'uns about the nightmares of COM/CORBA
Yeah, I dealt with COM/CORBA at its tail end too. It was horrible, but could be made to work. Was forced to use it as a key piece in a collaborative testbed and simulation framework running and communicating at 4 research centers (UK, US east, US west, AUS).
> a cheap jab at consultants
Understandable, I'm among the first to admit many consultants deserve it. One time a co-worker and I travelled from NY to our corporate mothership in VA for meetings. We got there and our first meeting was delayed. I took out my laptop and started working, trying to get some code in while we waited. My coworker took out his newspaper and read that. Same coworker told me 'He always makes sure he comes out ahead on expenses.'
> moving up the seniority ladder also meant more PowerPoints, presentations, and buzzwords.
I hear that. I get to code maybe 20% of my time now as a tech lead on a bigger project. Previous tech lead positions were on small projects where everyone needed to code as well as wear their other hats. It's a step up in pay and, in a way, impact, but I question whether I like it. I may stay pure technical. I am at least getting to code Rust in my personal time.
> get people to love my ideas, just not speak out against them
I wish more people knew that. We tried to convert people to the OASIS XDI specification, but I think W3C and others saw it as a rejection of their approach. Arguing with them to convert them just made them more vocal in their objections and eventually the standard's ratification failed. Was a decent spec, though it did need a fair bit of supporting specs still.
No. Just no. It was the "branding" of Agile by consulting companies and publishers as a product they could sell that got us into the sh*t show we're now in. The same has happened with REST and everything else the marketers touch. No. The _last_ thing we need is another vacuous rebranding. Unsurprisingly, most of the commenters here seem to get that, and are already calling out the b.s. when given the chance.
The point about the World Wide Web being an implementation of REST was really interesting, and now that I think about it, web pages are one of the few things that usually respect HATEOAS. I'm reminded of an article by the author of intercooler.js and htmx, "HATEOAS is for Humans" (https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...), that made the same point but that I didn't fully understand when I read it.
Well-written article. It's always fun to inspire coworkers/peers to actually read Fielding's dissertation and then have a proper discussion about what REST actually is/means. There are those who get it and are eager to think about the architectural side of the networked apps they work on/with. And then there are those who just want to code / get things done. I can't be angry at the latter. But thinking more about architectural implications would bring us forward in the long term.
Whenever I use or write an OpenAPI/Swagger definition, my heart breaks. At least it gets work done. But that's also the reason why I personally still prefer good old server-side rendered HTML with progressive enhancement. There is so much overhead involved in defining some arbitrary HTTP API and then building a UI on top of it in some overly complex way. HTML is such a powerful tool to drive the client (the browser).
Rebranding misses the point, because most people who claim to provide a REST API are actually providing a web API that completely misses many of the REST principles.
What we need to do is retrain everyone to use the more general term "Web API" and reclaim the term "REST" for architecture and design decision discussions.
For years I have called my APIs "HTTP APIs", as they usually do not follow the abstract definition of REST. And because I've seen so many APIs called REST that barely follow any definition of a sane API.
The whole term is used as a synonym for an API by management and by developers who never took a closer look at the definition/paper.
I think the situation is a bit like OOP, TDD and many other things that cause flame wars. When good ideas come up, people tend to be religious about them. Instead of taking the good ideas and utilizing them for better solutions, we take things to extremes and fight about the tiniest things.
REST is already an architectural style, nothing more than a set of guidelines to design interfaces in order to ensure you get specific qualities and operational advantages.
These articles with bold statements regarding the need to change buzzwords already miss the whole point to begin with, and provide absolutely nothing of value.
That's kind of my point. "Use BEST Practices" is nothing more than a set of guidelines, a bold statement to ignore the buzzwords and just do what is currently understood to work the best with today's technologies, whatever buzzwords those practices are currently called, which change over time.
For example: BEST Practices a couple decades ago said to use XML over HTTP, while BEST Practices today say to use JSON over HTTPS.
Plus, it makes you think of eating pizza, instead of sleeping!
> That's kind of my point. "Use BEST Practices" is nothing more than a set of guidelines (...)
I think you're missing the whole point.
REST refers to a specific architecture style which comprises a set of very specific design criteria which are explicitly and very clearly specified in a doctoral dissertation.
The term REST is a term that was coined to refer to that very specific set of design criteria once applied as a whole. Not in part, not as a cherry-picked assortment of choices, but as a whole.
So no, the term doesn't apply to a loose set of guidelines which some guy decided to use just because.
It would be like stating you're using HTTP just because you set up a protocol that does request-response and might have support for annotations passed as key:value pairs resembling headers, but at the same time had no verbs.
I'm not a fan of the complexity things like gRPC introduce to most languages (the PHP tooling is horrible, for example), and I really don't like the limitations of HTTP verbs and the box that trying to conform to CRUD puts you in.
I second this (although I'd just stick with JSON-RPC 1.0). REST has its use cases, but alas it's now so ubiquitous that developers don't even think about whether they should or should not use it.
For machine-to-machine communication between heterogeneous systems, the simplicity of JSON-RPC means one can implement a client easily in any language. Also, creating method stubs is trivial. The payload is still human-readable should you use it from the browser, and it's not prone to debates and interpretations around the meaning of HTTP verbs. All you have to know is what method you want to call and with what arguments... something that is natural to all developers, while they almost always get the endpoints/resources/verbs thing wrong.
I’ve been in GraphQL land for 4-5 years now, and all these discussions about restfulness, Http and other details make me cringe.
It feels like watching apes fighting over whether the cylinder goes in the triangle or the star-shaped hole. It does not fit! Realize this and move on!
Unless your company is only about CRUD (with no JOINs), the REST model is severely limited or mismatched to your business domain.
Just use GraphQL already. It's actually simple, I promise! Self-documenting, introspectable, supported by so many code generation and validation tools, etc...
I know I’ll get downvoted for saying this, the HN crowd thinks « GraphQL = complicated » for some reason.
GraphQL is inherently more complicated. It's one more step at a minimum, and more honestly two or three more steps.
I can build an endpoint and point it straight to a parameterized SQL query.
With GraphQL, I build an endpoint, and point it straight to a GraphQL... schema? And then I have to define where that data comes from, and then how to return it...
It's just professional negligence to suggest it's not more involved.
The 15 seconds and 10 lines of code you "lose" defining your GraphQL schema save you hours soon after, when you would otherwise be fixing the problems created by a poorly structured REST endpoint and endless debates.
With REST: I can build an endpoint and point it straight to a parameterized SQL query.
With GraphQL: I can build a resolver and point it straight to a parameterized SQL query.
So yes, I insist that it's overall much simpler to use GraphQL, having done both extensively.
Here is a GraphQL server in Python. (Note that it automatically provides documentation, a GraphQL interactive playground, introspection, etc.) Do the same with REST and tell me where the overhead is:
    from ariadne import QueryType, make_executable_schema
    from ariadne.asgi import GraphQL
    from starlette.applications import Starlette

    type_defs = """
        type Query {
            projects(first: Int): [Project!]!
        }

        type Project {
            id: Int
            name: String
        }
    """

    query = QueryType()

    @query.field("projects")
    async def resolve_projects(_, info, first=10):
        # `projects` (a table object) and `database` (an async connection)
        # are assumed to be defined elsewhere, e.g. SQLAlchemy Core plus
        # the `databases` package.
        stmt = projects.select().limit(first)
        return await database.fetch_all(stmt)

    # Serve with any ASGI server, e.g.: uvicorn thismodule:app
    app = Starlette()
    app.mount(
        "/graphql",
        GraphQL(make_executable_schema(type_defs, query)),
    )
(It could be made even shorter with Graphene, but shorter != better in my opinion).
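And calling it is a single POST (assuming the app above is served on localhost:8000, e.g. via uvicorn):

    import requests

    query = "{ projects(first: 5) { id name } }"
    resp = requests.post("http://localhost:8000/graphql", json={"query": query})
    print(resp.json())  # {"data": {"projects": [...]}}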
I've liked GraphQL the few times I've used it, but I think the standard way of doing pagination with `node`s and `edge`s makes me feel less like I'm working with my data and more like I'm working with some odd abstraction. I think if I used GraphQL more I could get over it, but that aspect always makes me wish for just a bit more.
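For anyone who hasn't run into it, the connection shape in question looks roughly like this (Relay-style; the data is made up):

    {
      "data": {
        "comments": {
          "edges": [
            {"cursor": "Y3Vyc29yOjE=", "node": {"id": "1", "text": "hi"}}
          ],
          "pageInfo": {"hasNextPage": true, "endCursor": "Y3Vyc29yOjE="}
        }
      }
    }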
> REST is just the coolest thing in web service API design right now, isn't it?
Is it? I still make my APIs REST-like, but I think in terms of “coolness”, GraphQL is more appealing to people isn’t it? Personally I reason that in my APIs GraphQL would be extra complexity for no benefit in my specific use-cases so far. And REST-like serves me well. But for me I choose this way because of ergonomics and because it allows me to implement the things I do with about the least amount of complexity. Coolness doesn’t play a part anymore. But if it did I would surely think GraphQL would be more cool.
> Client applications would not need to be hard-coded with any domain-specific knowledge of the server-side APIs they interact with. Instead, they would discover all the available resources and operations dynamically, at runtime. A client application developed for one hypermedia API could be easily forked and modified for another hypermedia-driven web service. And "smart clients", which are capable of consuming any hypermedia API with a common grammar, could become a reality.
This is WSDL and WADL, basically, except there's no common grammar to render a usable interface. The only real thing like what is described is a browser.
If you want a universal interface, you make a web app, and the browser is the universal UI. Otherwise you just make a crappy console app and a crappy custom protocol using a HTTP API (or if you hate compatibility, simplicity and convenience: gRPC).
To answer the poster's question, no, we're not going to rebrand REST. There are tons of improperly used terms that have been around for ages, they do not get rebranded. You just have to make a new thing and give it a new name and hope that it too isn't misused. ("DevOps" is the most glaring example to me, but also add "hacker", "crypto", "web", "cloud", "container", etc)
This is interesting, and I broadly agree with Kieran's complaints about the quasi-REST-lite that passes for web APIs today.
Sadly there's not much tooling or guidance around to try and encourage people to do REST properly. It really hasn't done well at migrating out of academia, as the author points out, and that's starting to look like a missed opportunity for industry. Other comments here seem to tacitly accept that (lots of "Oh well... is what it is."). Everyone's learned to operate within the status quo, but it's quite costly for API consumers to deal with a range of idiosyncratic APIs (and it's bad for providers too: trying to produce something ergonomic and consumable takes significant effort). Hypermedia would make a lot of that cost go away by making APIs inherently discoverable and eliminating all those unique client libraries and boilerplate.
(Plug follows)
To try and address that and, in the process, nudge the world towards building and using true hypermedia APIs, I and a couple friends started building https://intertron.dev - a mechanism to turn any web API into a proper, hypermedia REST API. Contact us on that page if this is a topic of interest, we'd love to talk. We can let you have a play with the pre-launch version too.
I access a url with some parameters, I get a JSON object out, and it's documented what side effects or whatever happen. Why do we need more structure than that? I don't really see the need.
It has been my experience that most programming terms end up being business- or marketing-driven if they gain much traction. My pet peeve at one time was API. The term contains the word "interface", meaning external code will interface with your code. However, business calls everything an API, and from their perspective it means a unit of software work. Probably the goofiest one is AI. When people say they are using AI it doesn't tell me anything. What is a VM? Python, Perl and other interpreted languages are said to run in a VM. Isn't a container a VM? A container is just a host-OS VM. I remember when VMware only had host-OS VMs, until they came out with the hypervisor VM, ESXi. Docker is a brand of container, and both are VMs. The point is that it is human nature that any word that becomes popularized will take on business and marketing purposes. You just have to roll with it, unless one is expressly carrying out technical communication; then clarity of definition does matter.
I've always felt a little silly having "REST" on my resume, but too many job descriptions mentioned it as a requirement. Once during an interview, I couldn't resist challenging the interviewer asking me about "RESTful APIs" that I had built in the past, and I explained pretty much the same thing to him. Thankfully, he was quite open to admitting that what most people are doing is not REST.
This is what happens to every umbrella term that covers a hodgepodge of broad ideas and patterns. Nobody knows what OOP is because it's a bullet list of a dozen concepts that varies with which OOP guru you ask. Similarly, REST is a long list of loose ideas and broad guidance.
We should rebrand REST to Rest to emphasize that the original meaning is lost and that we just use it as shorthand for an HTTP-like API. When it comes to the web, it's best to set our standards as low as possible, because being right doesn't seem to have any advantage over being wrong w.r.t. adoption.
That is exactly the point! Most people talking about "REST" are just talking about web service APIs or HTTP APIs, because they are not aware of the actual meaning. REST became a shallow term.
It makes absolutely no sense to even suggest that REST should be rebranded "HTTP API" or even "hypermedia API", because REST is neither protocol-specific nor is "hypermedia" its only (or even main) design trait.
It's like someone who is entirely unfamiliar with REST feels compelled to make bold statements about traits he doesn't fully grasp.
> I think hypertext is wider than HTTP as far as this blog post is concerned.
Yes? Did you wildly misread my comment?
REST is either the original definition, in which it is hypertext-driven and protocol-agnostic, or a worthless buzzword for http APIs which are not hypertext-driven but are very much protocol-specific.
That's a reference to HATEOAS, but the whole point is that REST refers to an entire collection of design traits, where hypermedia is only one of them, and you cannot have REST if you do not meet other design requirements first, such as resource-driven architecture.
Making REST all about HATEOAS is almost as bad as making REST all about hard coded paths to resources.
> If you take REST as what it’s become (...)
REST didn't become anything. It has always been the same thing. Its specification is set in stone in the doctoral thesis, and complemented with subsequent posts and public statements from the author.
This sort of post has nothing to do with REST at all. It has zero to do with technology or design. This is vacuous and baseless buzzword bingo that adds nothing of value and just manifests a fundamental misunderstanding of the subject it intends to discuss.
We already have RPC-over-HTTP. It's not REST; everyone knows that. That's a very old concept, as is the resistance to implementing HATEOAS. There is nothing of value in this. It's just a clear sign that the author jumped to the part where he feels compelled to coin buzzwords by appropriating established and well-defined concepts, while skipping the part where he gets acquainted with the subject, thus showing a fundamental misunderstanding of the whole topic.
It is the style of APIs people commonly call REST APIs which he suggests should be rebranded to HTTP APIs, because they are not really REST and don't need to be.
He is not suggesting rebranding the architectural style known as REST.
> It is the style of APIs people commonly call REST APIs which he suggests should be rebranded to HTTP APIs.
First of all, the title of the blog post is literally "should we rebrand REST?"
Secondly, these appeals to rebranding REST APIs are already telling of the author's lack of familiarity with, and insight into, the topic. The concept of RPC-over-HTTP is already widely established, as is the age-old stopgap solution to REST misnomers, incoherences and buzzword abuses that is the Richardson maturity model.
I mean, look at the date that article was posted: 2010. It's over a decade old. This debate is not new, and it is already settled. We know what REST is. We know that HATEOAS is a key part, and one that no one implements. We know that there are plenty of ignorant folks starving to put buzzwords on their CVs. We know that most people end up using resource-driven RPC-over-HTTP with no hypermedia or discoverability. Just because a blogger failed to do his homework and happened to feel emboldened to coin new buzzwords, that doesn't mean he's making any point at all or providing any value.
The blog post explicitly mentions the Richardson Maturity model.
He also says that even systems or designs consciously trying to achieve Level 3 in the Richardson model (HATEOAS), such as JSON-LD with Hydra or JSON:API, are still not RESTful in practice.
This seems to be the main point of the article, but it is quite hidden. The idea of an app that can display and manipulate data on a distributed network of servers using a well-defined set of JSON-based content types is very interesting. But this is not what the vast majority of currently existing "REST APIs" (even those who are using a more hypermedia-oriented format like JSON:API or JSON-LD) are trying to do.
I personally think it is too late to rebrand "REST". Like a lot of confusing names (NoSQL, Serverless, "Cloud", "JavaScript"), it is here to stay. But the main point the author is trying to make is that there are true Hypermedia API clients (which are virtually non-existent in the wild) and HTTP API with various concepts borrowed from the REST model (but are not fully RESTful themselves).