Why is REST so popular? Because it's easy to implement and works for lots of use cases. I'm sorry that you found places it doesn't, but in the real world, having been through that SOAP pain it's being compared to, I'd say there's not even a comparison. Everyone seems to want to find a reason to dislike product/technology/feature X but in this case, X is just better than anything we've had for a 90% adoption case.
What is with medium.com? Why is it that so many links to this site are full of hateful hipsteresque opinions looking to sound smarter and more insightful than they actually are?
I avoid Medium posts as much as possible. Everyone on there is an expert with very strong opinions telling me how every technology older than 2 years and not written in JavaScript is obsolete/dead/not the right way/the new <insert a dead technology>.
And what's with the UI on their publications? They take up the top 25% of the screen with branding and a navbar, and the bottom 10% asking me to sign in, and both stick to the screen. Who approved that?
I have another opinion about that. Read as many Medium posts as possible, but keep a critical mind. Medium is nothing but a blogging platform, not a science platform, not a newspaper. Medium is important because it makes it easy for everyone to post their mindset or agenda. No one has to agree with anyone's opinion, but it's important to know about those opinions, because it makes it easier to have a dialogue.
Agreed, it's a content platform where the content fights for attention with the medium.com branding. How many people do you really "never miss a story from"? I generally avoid medium.com for this reason alone.
I'm further puzzled when I see companies using medium for their company blog as well. All I can figure is it must be recommended in some user guide to "growth hacking."
Medium posters borrow clout from medium which makes their post have clout. If you think of medium posters as just bloggers with their own domain or blogspot, you'll see them differently.
Isn't it just a content container? I mean, saying "I avoid medium" sounds like saying "I avoid wordpress" or "I avoid blogspot", speaking of content quality and not look & feel. As for the UI, I'm logged in, so I have to suppose your percentages (25% and 10%) don't apply to the logged-in case; besides, as soon as I scroll down, all those frills disappear and there's only the text.
It's a content container. A content container that for some of us is now stereotyped as "content in here is often low-quality", which means that in the vast amount of content on offer each day we're more likely to skip links leading to medium.com. If people regularly spread bad blogspot links around, blogspot would have the same reputation. (I'm sure there is tons of spam on blogspot, but when people share blogspot links in my circles they usually are blogs by people that have been at it for ages and care more about content than appearances, so for me blogspot is a high-quality signal)
Being in a known content container is great if you don't have your own "brand" and as long as people associate the container with good content. If they don't, or if your content is way above average, it pulls you down (which provides motivation for below-average writers to write on them, hiding in the crowd, and motivation for good writers to leave)
It is a centralized content container, which makes click-bait titles like this one better material. So you can create a controversial opinion on a subject, get views, make it an entertaining even if exaggerated read, and it gets featured among other pieces, especially if you use the right popular tags; then your post is on people's Medium app and newsletter subscription.
WordPress, on the other hand, is open source software you can host yourself, so you can't just slap some tags on a post and get featured at the top of a newsletter. This is also why some authors prefer Medium: easy to set up and easier to get an audience, but then you have to resort to these marketing techniques to drive your views up.
Disclaimer: I have written some medium posts with catchy/controversial titles to test said techniques, call me part of the problem.
I happen to have worked with the system author coded/maintained. We're not in touch and I'm not here to defend him, but he's no hipster. Rather, he had to integrate a lot of heterogeneous services, as that system was acting like a hub between many departments in the company, with various tech skills and resources. If anything, I suspect he's more kind of unsatisfied by changes that he deemed unnecessary.
As a note aside, the xmlrpc endpoints of the aforementioned system worked fine and saved us time.
Having implemented many web service APIs in both SOAP and REST I have the same opinion.
XMLRPC or JSONRPC seem to be the happy middle ground.
The posted article hit home with me as I had to re-implement working SOAP services in REST because you know, management buzzwords and new shiny.
I quickly found, as the article articulates, as soon as you enter the land of verbs and workflows REST starts to stumble and becomes very network chatty. And when that network chattiness is backed by other network chattiness the grumblings of why the hell you can't just return a deep object graph of data from an endpoint ensue.
But you can with REST. Most of this conversation is a strawman against REST. Creating or changing resources can impact other resources. It's obvious / logical. Responses can include the things that they impacted along the way. Much of this thread amounts to "I used this shitty REST API once" or similar for RPC.
Well, it depends on how you reckon REST. I can assure you the real world issues I've run into don't have anything to do with the frameworks used. The impedance mismatch between what one would consider a sane API that could just as easily be implemented as a compiled library and the mappings of that API to HTTP verbs and what is considered proper REST were where the problems were.
My question would be, outside of convenience to SPA developers, what do you see as the specific advantages to REST over XML/JSON RPC or SOAP?
I don’t really remember SOAP since that was over a decade ago, but what I like about REST is that it harmonizes the front-end and back-end structures of the application and it makes things predictable. It also puts up some guardrails against obviously dumb behaviour like deleting records via a GET, and with JSON API it makes it possible for things to snap together once you’ve got the structure right. Ember Data + Rails set up to follow JSON API is just unbelievably productive. Pagination, filtering, optionally including other resources, linking (both to the related resource itself and to the relationship itself): it’s really fast, consistent, powerful, flexible, and secure. It’s not always performant, but when performance is important I make one little RPC or nonstandard REST endpoint (say, return a link to a big TSV blob) and I move on with life.
It lacks conventions though, so every API is more quirky and even sloppier than a bad REST API. At least with REST there’s some existing structure and it’s not completely up to the imagination of someone in a hurry.
I don't remember the history prior to the meeting well enough today to give an authoritative version. So for fear of getting some of it wrong I will say nothing. However, I'll say that the answer to your question is "no".
If you control client and server, and the server's functionality will remain limited in scope for the foreseeable future, xmlrpc is the boss.
We had a Django/DRF-based internal microservice. It was a pain to build and maintain. We switched to xmlrpc (it's in the Python stdlib), removed tons of intermediary code, and wrote only a few bits of new code.
This talk by Jonas Neubert opened my eyes to how xmlrpc can be the glue for Python (which is already glue).
The pain you're referring to is relative to the language you used and when you touched it. SOAP is a comprehensive and well-defined specification, and when implemented properly you forget it's there because it just works.
After about 2003 major vendors had their implementations locked down pretty well. In Visual Studio you just implement a basic controller and the remainder is configuration. If you wanted to consume something from Biztalk, PeopleSoft, any Oracle Product, or any other Enterprise product you could just add a service reference to a WSDL URI and a tool would generate your interface classes and DTO classes for you in your language of choice.
In the Open Source world things were very different. Whenever I would provide a service to be consumed by a vendor, I would provide reference implementations in C#, JAVA, Python, and PHP. I would spend about 15mins on C# and JAVA, then the rest of the day fiddling with Python and to a lesser extent PHP.
PHP and Python have had SOAP libraries for a decade but they require considerably more effort to even consume SOAP. I have never tried to stand up a SOAP Service with them but I can't imagine it's any good.
Around 2012 I remember working with a partner company that was using RAILS for their platform. It was an absolute nightmare for them to integrate with our existing SOAP service layer. SOAP protocol libraries were the least of their issues. No client certificate authentication in their HTTP libraries. No serious XML support. They wrote their own implementation from scratch.
Well, that's actually where the SOAP pain is: it used to be that you couldn't make any good use of it unless you were using walled-garden vendor software. A lot of people dislike Windows, Visual Studio and anything else made by Microsoft. Same goes for Oracle. So then you are left with everything else, which basically means: everything without SOAP, Enterprise editions of runtimes, service buses and the likes.
There are a lot more developers out there not working for an enterprise and not using those tools. Especially the developers doing open source work or doing small scale work.
While nowadays it's fairly easy to make a Java Spring Boot application consume and serve SOAP, with automated WSDL imports and all the WS-* specifics, this was pretty much never the case with anything new and free (as in speech).
> when implemented properly you forget it's there because it just works.
I think that's the link missing for most developers. SOAP was intended to allow machine-generated SDKs to remove all of the sharp edges of dealing with it, and in that regard it was largely successful. What brought it down was the advent of non-"enterprise" web development—development happening outside of a .Net or Java IDE that generated code for you. If you ever had to handroll a wrapper for a SOAP endpoint though, god help you. I honestly believe SOAP was the thing that made XML seem uncool by comparison to JSON and REST. JSON still doesn't solve data representation problems as well as XML, we've just learned to accept "good enough" in its stead.
> PHP and Python have had SOAP libraries for a decade but they require considerably more effort to even consume SOAP. I have never tried to stand up a SOAP Service with them but I can't imagine it's any good.
I've tried suds and zeep. Neither really worked in my specific cases. My strategy from now on is to write a bit of code in Visual C#, analyze the XML traffic, and then generate that same XML myself using Python/PHP. The SOAP protocol is actually pretty simple once you know how it works.
SOAP has a lot of problems, but it's pretty amazing that you don't need a client side library to use it in Visual C#.
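The "generate the XML yourself" approach described above can be sketched with nothing but the Python standard library. The service namespace, the operation (`GetQuote`), and its parameter are made up for illustration; a real service's WSDL would dictate the actual names:

```python
# Build a SOAP 1.1 envelope by hand with xml.etree instead of a SOAP
# client library. The payload would then be POSTed with a SOAPAction
# header using any plain HTTP client; no SOAP stack required.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/hypothetical-service"  # placeholder namespace

ET.register_namespace("soap", SOAP_NS)
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")

# One operation element inside the Body, with its parameters as children.
op = ET.SubElement(body, f"{{{SVC_NS}}}GetQuote")
ET.SubElement(op, f"{{{SVC_NS}}}symbol").text = "ACME"

payload = ET.tostring(envelope, encoding="unicode")
```

Once you have sniffed one working request from a reference client, reproducing it like this is usually mechanical.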
REST is not popular; there are only a few RESTful public APIs in the wild. The rest (unavoidable pun, sorry) are simple HTTP APIs with JSON serialization, which map with varying degrees of coupling to the internal data layer.
The main cause: you don't need REST's limitations to achieve the same goals.
OP has a strong opinion about why we need to reimplement RPC over HTTP every time, for every application, and write clients for every popular platform to be able to consume it.
Every time I use REST from any Google Cloud API my hair stands on end, and you will definitely have nightmares if you look into their Python client code.
This is the no-true-scotsman fallacy applied to REST. In point of fact, if REST has been around for nearly 17 years and there are so few of these APIs in the wild, then it's been incredibly unsuccessful in its aims.
The truth is that these sites are, in fact, RESTful and that REST is just mediocre at what it professes to do. The requirement to treat all operations as resources + HTTP verbs is the primary leaky abstraction that everyone seems to want to gloss over.
I've been curious to see one (for years). Everyone talks about how this REST service is being done wrong, but few will link to services being done "right".
The article laid out a detailed explanation of shortcomings of REST. You might not agree with everything in it, but you offer no real rebuttal, instead dismissing it as a “hateful hipsteresque opinion,” without acknowledging any of the actual criticisms the author gave. This strikes me as unfair, haughty and lazy.
> ...having been through that SOAP pain it's being compared to, I'd say there's not even a comparison
OK, that is indeed the most usual response to "Why REST?". The main reason why people like REST is "because SOAP". It's a false dichotomy that the industry has fallen for.
Oh, yeah, and you can run the GETs directly in your browser/cURL. I like that part too, but it only gives you so much.
REST is more of a philosophy, than a standard. Hence everyone does it differently, and you have no chance to use the same library to talk REST with multiple different services (unless you make that library an overcomplicated beast).
XMLRPC? It's a universal standard, that is just a few pages long and everyone can grasp it in their lunch break. Nobody would complain that your API is not "XMLRPC enough". It works (almost) the same way everywhere. You can get the XMLRPC library that's been built in since Python 2.2 and be reasonably sure that you are going to be able to talk to a random modern XMLRPC API. You'd have other such libraries for every major language. Ditto for JSONRPC, if XML sounds too scary (though it doesn't really matter much - it's a mostly transparent implementation detail).
I wish people would stop bringing up SOAP as an excuse for REST. Yes, it was worse, but that does not mean that REST is particularly good.
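To illustrate how little ceremony the stdlib route mentioned above involves, here is a minimal sketch using Python's built-in `xmlrpc` modules. The `add` method, addresses, and values are illustrative only:

```python
# A complete XML-RPC service and client in one file, using only the
# standard library. The server binds an ephemeral port and handles a
# single request in a background thread so the client can call it.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    """Any plain function can be exposed as an RPC method."""
    return a + b

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add, "add")
port = server.server_address[1]

# Serve exactly one request in the background...
threading.Thread(target=server.handle_request, daemon=True).start()

# ...and call it like a local function; marshalling to XML over HTTP
# happens transparently.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)
server.server_close()
```

That symmetry — any language's XML-RPC library can call this server the same way — is the "works (almost) the same way everywhere" property the comment describes.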
REST allows you to make decisions about the http interaction out-of-band, where SOAP provides (and often requires) the ability to describe all decisions about types and parameterization and exception cases in-band.
The REST way allowed one to get started with something small and simple that people could agree to just by talking it over together. With SOAP you had to make all those decisions up front and put it in the specification. I believe SOAP is so complex that it's an analog to CORBA/IDL.
REST is (at least initially) simpler and less specific than SOAP. That's its strength.
> I believe SOAP is so complex that it's an analog to CORBA/IDL.
SOAP was intended to facilitate the same programming model but more loosely coupled. So that's not at all far off. And for what it intended to be—a specification for machine-generated bindings and SDKs—it was quite successful. It just didn't make the usability leap over to web development that happens outside an IDE.
I think “hipsteresque opinions looking to sound smarter and more insightful than they actually are” is a very good definition for much of Medium’s content.
It may have to do with the platform being used by individuals trying to create a brand of themselves, resulting in a high percentage of sensationalist and controversial posts. Along with the necessity to write on a regular basis.
didn't read all that stuff, busy getting stuff done.
I agree with this. When I end up on a new project and I'm not the lead, and there is a lead who is pedantic regarding how they want their URLs crafted (or wants to implement a complicated query pattern, or introduce an extra layer of objects to satisfy an abstract notion of purity), I'll just go with the flow. Accidental complexity, pattern seeking and cargo-culting are personality traits of many programmers, and I've come to the conclusion that it's best to accept this, and get on with the job of actually delivering value.
The problem becomes evident when you return to the result of that flow a year later to change something, and wish you had been more mindful when getting things done.
Cargo cults are not good. Finding and adhering to regular patterns that logically underlie what you do saves time and mental effort, while also preventing certain classes of errors. (No, these are not GoF design patterns.)
I’m certainly not advocating spaghetti code, balls of mud, or a thoughtless lack of architecture. What I’m specifically referring to is wilful anti-pragmatism that betrays a sort of insecurity...it’s hard to define but easier to recognize.
Finding the balance between pointless over-engineering and rigidity on one hand, and good software design on the other, consistently, is what seems to distinguish great programmers I’ve worked with, from those who are “just” very good.
But, like I said, it often isn’t worth the trouble fighting about these things, even when we see them, and getting on with the job is more important than ensnaring oneself in religious debates on projects.
me: already groaning at the thought of fixing stuff after that one who didn't "read all that stuff," the "left", and thankfully the right was too busy pontificating to do anything so there's no fixing to be done.
>Why is REST so popular? Because it's easy to implement and works for lots of use cases.
Could have given the exact same non-argument for SOAP -- which in its time dominated corporate services.
Whether it's "easy to implement and works for lots of use cases" is a moot point if there were something even easier to implement that worked even better for real use cases.
Why stop at "easy" when you can have easier AND more coherent?
Besides, REST is anything but easy. Case in point: almost nobody has ever correctly implemented the original REST spec -- that's why everybody calls their implementations "REST-ful": they are loosely inspired deviations that cargo-cult a lot of useless junk.
Supposedly the "real illuminated REST" (like real communism) doesn't even concern HTTP; it's a philosophy beyond web services. And yet it's all the web-related garbage in the introductory examples that everybody follows (or tries to).
It’s an architectural style defined by Roy Fielding’s thesis (who also was the editor of HTTP/1.1 RFC). It attempted to describe the Web’s architecture in neutral terms and how it was derived by combining previous styles.
Some people took this and crafted a quasi religion out of it, but that’s partly because vendors in 2002 were all lined up trying to replace the web with a CORBA equivalent and then almost succeeded with SOAP/WS-*. People got strident to fight the dollars that were lined up. It’s easy to forget there was no powerful web/internet community with social media and blogging platforms in those days, it was all mailing lists and a couple of conferences vs. Marketing budgets, sales teams, and agenda-wielding engineers on standards bodies from Microsoft, IBM, BEA, Sun, HP, etc.
All RESTful means is that something is attempting to conform to the style. There never was a spec.
>It’s an architectural style defined by Roy Fielding’s thesis (who also was the editor of HTTP/1.1 RFC). It attempted to describe the Web’s architecture in neutral terms and how it was derived by combining previous styles.
Its application to what are essentially RPC needs is what I consider cargo cult.
The people you describe used to have blogs with crappy WordPress themes that barely got noticed by Google; now they are on a big site, so they get discovered, I guess?
It's full of click-bait because of the opportunity for articles to go viral. Spread is built into the platform, so you have people writing the most click-bait articles possible.
I don't get why the main thread is an attack to the blogging platform instead of a discussion on the topic.
IMHO REST is not easy to implement. It's easy to say "oh... okay, I think I've got it... let me try", but it's very hard to implement RESTful APIs, and the author mentions a few valid pain points.
Take a simple use case: a user forgot their password. What's the proper way of handling this? A PATCH on a User by ID? If the ID is an integer, you have to query the user by email or username before you can even request a new password. If the PATCH handles this case, what other cases does it handle? Can a user request a new password and change their age at the same time; is that a valid case? How do you structure your controller to route to these special cases? Do you create a custom endpoint, thus making your API less RESTful?
One thing I like about GraphQL is this idea of mutations, in many cases they are analogous to RPC calls, `userForgotPassword(usernameOrEmail) -> forgotPasswordResult`.
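A rough, hypothetical sketch of what such a mutation maps to on the wire, written as a tiny JSON-RPC-flavoured dispatcher. The method name mirrors the mutation above; the field names and behaviour are assumptions for illustration, not any real API:

```python
# A minimal name-based dispatcher: the intent lives in the payload,
# not in a URL/verb combination, so the forgot-password case needs no
# resource modelling at all.
import json

def user_forgot_password(username_or_email):
    # A real implementation would look up the user and enqueue a reset
    # email; here we just report success.
    return {"ok": True, "sentTo": username_or_email}

METHODS = {"userForgotPassword": user_forgot_password}

def handle(raw_request):
    """Parse the request, dispatch on the method name, wrap the result."""
    req = json.loads(raw_request)
    result = METHODS[req["method"]](*req["params"])
    return json.dumps({"result": result})

response = handle(json.dumps(
    {"method": "userForgotPassword", "params": ["alice@example.com"]}))
```

Adding a second operation is one more entry in the table, with no debate about which verb or URL it "should" be.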
Because there is no "boo" button. Every post on Medium just has likes or nothing; you can like or comment, and that's it. Making a critical comment is too much for most users, who just want to say "I disagree".
If Medium had some kind of downvote, it would regulate itself a lot more, and users who disagree would not have to expose themselves by writing a critical comment.
Right now it's full of "content hackers" trying to build a reputation.
So long as an article makes a reasonable attempt to present a viewpoint, downvoting does not add anything to the issue. If you disagree, upvote the replies that you agree with (unfortunately, Medium makes that more complicated than it should be). If no-one can be bothered to say what's wrong with the article, maybe there isn't much wrong with it.
For example, xmlrpc has been recommended in the comments as an alternative to consider. Maybe what's wrong with Medium is that the informative replies are on HN... (to be fair, grpc is mentioned on Medium.)
You think this guy is a hateful hipster, but most of what he says probably resonates with most software vets. To me, it's mostly against the REST zealots who demand REST is done in a very particular manner. I've seen companies with a very stable, robust RPC framework that had a small faction of REST zealots who were extremely against it because it didn't do things according to REST. It didn't matter to them how stable it was or how well it worked.
Also, REST is relatively not popular - what percentage of the world's APIs use REST do you think?
TBH, you're the one coming off like the hateful hipster to me.
The orthodoxy (aka "standard") is there for a reason. It's very easy to shoot yourself in the foot.
If you're only doing small-scale internal interop, especially where you control both ends of all connections, you don't need standards, just do whatever works, but if you have scale dreams, thinking about the rules (What is 'state'? How is it represented? Where?) and why they are there will save you from a lot of headache going forward.
You need a standard within the ecosystem you play in, if only to reduce the amount of work integrating bits and pieces.
For example, I work on an application that has reasonably tight integration between pagination on the front end and parameters / headers on the back end. Works quite well, particularly since we can control both ends. It's not completely ad-hoc - we chose one specific idiom for pagination that had reasonable support, but not necessarily the most widespread - and then extended it slightly when we needed a few operations that had no direct support (e.g. multi-delete, multi-update).
But if you're integrating from everywhere, there's no upside on a unifying standard because any unifying standard will either have so many wrinkles, complexities and caveats to cover every special use case that nobody will implement it correctly or understand it correctly; or it will be ill-suited to many domains, forcing a poor mental model, increasing the probability of bugs and reducing extensibility.
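The kind of offset/limit pagination contract described in the comment above can be sketched roughly like this; the parameter names are one common idiom (not a standard), and the shape of the response body is an assumption:

```python
# A rough sketch of a pagination contract where the server controls both
# ends: the client sends offset/limit, the server returns one page plus
# the metadata the front end needs to render page controls.
def paginate(items, offset=0, limit=20):
    """Return one page of `items` together with paging metadata."""
    page = items[offset:offset + limit]
    return {
        "items": page,
        "offset": offset,
        "limit": limit,
        "total": len(items),  # lets the client compute the page count
    }

result = paginate(list(range(95)), offset=40, limit=20)
```

Extending this with extra bulk operations (multi-delete, multi-update) is then a local decision between the two ends, which is exactly the flexibility being described.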
There's no difference between debugging REST and JSON-RPC over HTTP, so there's no point to compare them. The other alternatives, that I've mentioned, are less easy to debug.
It takes just a few clicks to generate a complete set of requests from a WSDL (a set, as in "one request per operation"). I think that's a much better user experience than hand-crafting JSON to use with curl.
The nice thing about SOAP was it had a full-fledged type system built in. REST is certainly easier to debug, but it's also more likely you'll be required to debug it, since many of the problems you'll run into in a REST interface would have been caught at compile time with SOAP.
SOAP's big problem, IMO, was the crazy insistence on URI formatted namespaces, which took a simple XML message and turned it into something bloated and confusing.
I think there’s a tendency in software for people to start out without understanding all the complexities they’re going to encounter. I think this is just human nature.
When you start out doing RPC you think, I don’t want to bother with schemas, I don’t want to bother with hierarchical error codes, I don’t foresee the need to set the user’s password but not retrieve it. So you don’t want to bother with a technology which makes your life more difficult, to solve problems you don’t have and cannot foresee.
So you choose something simple. But you run into all these problems anyway, because they exist, no matter if you were capable of foreseeing them or not.
But by then it’s too late. You’ve written 50 KLOC and you just have to keep going.
I believe this is why many technologies become popular which are actually too simple to handle the types of problems they try to solve.
Yup. It's the mechanism behind programming as a pop culture. Kids without a lot of experience are sick of the old way because it's too hairy and complicated, they come out with a fresh new approach that isn't nearly as broadly applicable, then it gets improved until it's fit for general purpose, at which point it's hairy and complicated and the cycle starts again.
I don't think that everything is standing still, though. Usually each successive generation has an edge on the previous one; either the previous generation was constrained by memory or CPU or bandwidth and had self-limiting architecture because of it, or the next generation needs to solve a problem involving an order of magnitude more data or compute and it needs a different approach.
But, of course, not everyone (or, realistically, not many people at all) is constrained by the thing that causes the revolution; people usually just get on the bandwagon because you must, if you don't you won't be as employable, won't be as hip, you'll find it harder to employ engineers to work on your project, etc.
"All good solutions to existing problems have been discovered. There may be new problems which need new solutions, but no one will ever improve on what we have already done. Anyone who thinks they can is a child playing in the dirt."
I think that approach of creating abstractions and concepts just in time is good, tbh. It allows things to get incrementally complex. I wish there were a way to manage concepts through that process, so that you could have all of the simple, easy-to-change stuff for new concepts while using the more comprehensive and often complex stuff for more "hardened" concepts.
One element of it is also that in many cases the path to "becoming big" starts with bootstrapped, experimental projects where you may not know whether it's going to survive and exist in a few years, or whether you'll throw it away in 3 months.
So in those cases you may still choose to be scrappy, knowing that it will come back to bite you; but at that stage, that is thought of as the success scenario and a good problem to have (if it happens at all).
This is in contrast to large serious projects for large existing companies where you can much more confidently know that X and Y are going to be required because from day 1 you know the project is not just some casual thing.
This is also why I think the software creation process and tools need to seriously think about having adjustable safety/pain knobs to allow for cheap scrappy prototyping but also allow to tighten the screws for production.
You can kind of see a glimpse of this between various programming languages, particularly in their type system. But the general concept is broader than just that.
It occurs to me that young people may be better at bootstrapping in a scrappy way, because they don't yet have the knowledge and experience to immediately consider all the things that would be required in a mature implementation. Speaking for myself, now that I'm approaching middle age (37), if I contemplate developing a new product from scratch, I risk paralysis by analysis, overthinking every aspect of it. I certainly didn't do that when I was 21.
Scrappy bootstrapping is a double-edged sword. On the one hand, it brings us great new products. On the other hand, ignoring some real-world concerns can be a major problem for users. As just one example, consider the impact for people with disabilities (e.g. blind, mobility impaired) who need to use an app that was developed with no regard for accessibility. And I've blissfully ignored other real-world concerns myself. For instance, the first desktop app that I worked on (in my 20s) had no support for HTTP proxies (as often found in corporate networks back then).
You're right of course, like the first time you hit a race condition (with a week of debugging) and build a distributed lock system. You publish it and people find it useful!
Only to realize later postgres offers fine locking capabilities far beyond what you've created (now that you get it).
Then you realize that all anybody is doing is creating subsets of Erlang (half serious). So why aren't we all using that?
In the grand scope, is it really a bad thing?
We end up with X ways to solve problem Y and you never know X + 1 could have some advantages. Exploration should be encouraged IMO!
Unfortunately, we get a whole lot of ad-hoc, informally-specified, bug-ridden, slow reinventions of the wheel for every better mousetrap. Usually (though not always) the best simple solutions come from those who understand where the complexity lies.
Then you have phase two of engineer development, where having been burned by something surprising engineers overbuild everything. That's when you get technologies that are too complex to handle the problems they try to solve. That's when you get seven layers of dependency injection for something that could be a single-line algebraic statement with five variables.
That's exactly what YAGNI/KISS encourages. Don't make things more complicated than they need to be right now. That includes used technologies.
I'd much rather use a simplistic tool while I still get away with it, risking having to switch to a more complicated tool later, than starting off with something way too complicated for what I need.
Chances are the difficulty isn't in switching from REST to SOAP, or the other way around, but in dealing with all the assumptions that permeate those 50 KLOC. In the end it just comes down to having a clean code base that can be steered in another direction.
Note that I only mean not dealing with what cannot easily be foreseen. Turning a blind eye to what you should know will soon be a problem is a different story. As an example, if you ignore concurrency from the start, that will be hard to set straight later.
But you can't just decide to avoid making assumptions, because the shape of your API strongly dictates the shape of your handler methods. I've never seen an API migration that didn't involve rewriting nearly the entire API layer.
Even worse, you have to maintain both the old and new versions of the API code, because turning off the old API endpoint is going to take at least a year.
> "Any sufficiently complicated X contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Y." (where X is the new "simple" way and Y is whatever the greybeards are using.)
I agree 100% with this article. A simple RPC API spec takes minutes to define. 'Rest'ifying takes much longer; there are a million little gotchas and no real standard. Everyone has a different opinion of how it should be done. Data is spread across verbs, urls, query params, headers, and payloads. Everyone thinks everyone else doesn't 'get' REST. If you try to suggest something other than REST in the office you become the subject of a witch hunt. It really is a cargo cult of pointlessness. My co-workers have spent sooo much time trying to get Swagger to generate documentation correctly as well as generate client-side APIs, and there are countless gotchas we are still dealing with. It really is SOAP 2.0, when a simple JSON/RPC protocol would have done fine. Don't get me started with conflating HTTP server errors with application errors. And trying to do action-like requests with a mindset optimized for CRUD. How much time have we wasted figuring out the 'standard' way to do just a login API call RESTfully? Please comment below how to do it, I love the endless debate of what is REST and what is not.
I agree with a lot of the things in your post, but this one in particular has produced the most grief for me:
> Don't get me started with conflating http server errors with applications errors.
I've wasted so much time dealing with 404 errors that were returned by the webserver itself (not the app) because the endpoint I was hitting was wrong or had moved, and vice-versa when I was correctly hitting the app but got a 404 error back from the API and I thought that the endpoint was wrong. And, of course, similar issues for 500 errors and the app itself dying versus the app processing normally and indicating an expected failure response via a 500 error code.
To add to all that badness, a lot of JS libraries' async API methods have different callbacks for success and for failure response codes, so you end up having to lump together business logic (for resources not found) and retry/error-handling logic (for the server not working correctly) in the same failure callback. It'd be much cleaner if all the business logic could be handled in one callback and all of the failure logic in another. And, of course, you only even get to this level of badness once you figure that out; you can still waste quite a bit of time before you realize your callback is not being called because the JS framework is interpreting the expected 404 your API endpoint returns for non-existent things differently than your business logic does.
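One way to untangle that is to classify every outcome yourself before any business logic runs. A minimal sketch, where `classifyOutcome` and its input shape are made up for illustration:

```javascript
// Sketch: separate transport failures from business outcomes ourselves,
// rather than letting the HTTP library fork on status codes for us.
// Input is either { networkError } (the server never answered) or
// { response: { status, body } } (the server answered, whatever the status).
function classifyOutcome(result) {
  if (result.networkError) {
    // Transport failed: this is retry / alerting territory.
    return { kind: 'transport', error: result.networkError };
  }
  // The server answered. Even a 404 or 500 may be an *expected* business
  // outcome here, so hand everything to one business-logic handler.
  return {
    kind: 'business',
    status: result.response.status,
    body: result.response.body,
  };
}
```

With this shape, one handler receives all business outcomes (including "not found") and another receives only genuine transport failures.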
I still wouldn't go back to SOAP, but I do tend to prefer HTTPS/JSON-based APIs that don't abuse verbs, HTTP error codes, and mixes of URLs/params/headers/payloads. Better to put all of that stuff inside the JSON payload where it will only be handled by the application business logic, rather than mixing it in with all of the HTTP constructs that are used for other things as well.
Agreed, I'm very much in favor of JSON body in, JSON response out. The URL is just a way to hierarchically organize the endpoints, just like in binary APIs where public methods are organized in classes and namespaces.
I've always made up my own error codes, which I embed in the 200 response, since I end up having to map HTTP codes to what they mean anyway, and many libraries have their own behavior for various HTTP codes or cannot distinguish anything beyond 200-versus-everything-else (some Lua engines, for example).
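The "errors travel inside the 200 body" convention could look something like this sketch; the envelope field names (`ok`, `code`, `message`, `data`) are invented for illustration:

```javascript
// Sketch: an application-level envelope carried inside an HTTP 200,
// so business errors never touch HTTP status codes.
function okEnvelope(data) {
  return { ok: true, data };
}

function errEnvelope(code, message) {
  // `code` is an application-defined error code, not an HTTP status.
  return { ok: false, code, message };
}

// The client branches on the envelope, not on the transport:
function handle(envelope) {
  return envelope.ok
    ? `data: ${JSON.stringify(envelope.data)}`
    : `error ${envelope.code}: ${envelope.message}`;
}
```

The HTTP status then only ever describes the transport ("the server answered"), while the envelope describes what the application actually decided.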
The proper REST API should be specified as a set of domain-specific document formats (media-types) and have a custom browser as a client. Turns out, we already have HTML and web browsers, so there is little point in actually building such APIs. It's always more appropriate to build a website instead. On the other hand, what is usually called 'REST' is nothing else but RPC where 'procedure call' = 'http method + url'. There is nothing wrong with that (with the exception of the name), but trying to satisfy any REST/HATEOAS constraints on top of an RPC foundation seems difficult and pointless.
Don't agree at all. There is a huge difference between calling a function "foo()" that makes an RPC call and the relatively equivalent REST call "http.GET('/foo')". The former feels like a function call, and callers will assume it operates like one. However, in reality the former is not a function, it's making a network call, and it's incredibly unreliable.
In theory, the latter does the same thing, but it's far more explicit: the developer knows it relies on the network, and accordingly that it may fail. Developers will be more inclined to plan for errors when the possibility of such errors is more obvious.
What's funny is that most consumers of REST APIs do it through a wrapper that turns it back into a statically typed RPC. REST truly is a useless middleman that no one realizes they just don't need.
I doubt that most consumers of REST APIs are doing anything statically typed. I would guess that the vast majority of REST consumers are written in browser javascript. Server side, I bet at least half are written in a dynamic language.
I don't think the parent here is talking about whether the language is static/dynamic.
The point is that most REST calls are wrapped in a statically 'dispatched' function call.
So in most cases you'd do something like:
let foo = () => http.GET('/foo');
// ...
foo(); // foo is statically dispatched in the source code here
It seems like "RPC" is being used pretty loosely in this thread. Whether foo in your example is RPC or not depends on its signature, if it attempts to synchronously return the response from the server, then it is RPC, if it just returns a Promise then it isn't.
> There is a huge difference between calling a function "foo()" that makes an RPC call and the relatively equivalent REST call "http.GET('/foo')".
Is it really a useful distinction? Let's rename our 'foo()' to 'dangerously_unreliable_with_unpredictable_latency_foo()'. Is there still a huge difference?
> accordingly that it may fail. Developers will be more inclined to plan for errors when the possibility of such errors are more obvious.
That part of your comment looks suspiciously similar to the usual argument against exception handling to me.
Dangerous and unreliable don't really capture it... how about potentially_async_call_relying_on_network_that_raises_lots_of_exceptions_foo()? Then I'd agree they are pretty similar. However, http.get("foo") often says the same thing more succinctly.
And to your second point... yes, all developers should check possible exceptions, just like all children should brush their teeth. However, if you have bad habits and you aren't good all the time, then at least brush your teeth after eating sweets, and likewise, developers should please check for exceptions around network calls.
Some time ago, another HN user commented that a lack of good async support in languages and libraries caused some of the issues with early RPC. With more languages introducing futures and promises as return values of asynchronous functions, don’t you think we might finally have the tools to express that unreliability in a simple function call?
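Something like this sketch is what I have in mind; `remoteCall` and its `transport` parameter are invented for illustration, with `transport` standing in for whatever actually moves the bytes (fetch, a socket, a message bus):

```javascript
// Sketch: an RPC whose signature advertises its unreliability.
// It returns a Promise and takes an explicit timeout, so the call site
// cannot mistake it for a plain local function.
function remoteCall(transport, request, { timeoutMs = 1000 } = {}) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`rpc timed out after ${timeoutMs}ms`)),
      timeoutMs
    );
    Promise.resolve(transport(request)).then(
      (value) => { clearTimeout(timer); resolve(value); },
      (error) => { clearTimeout(timer); reject(error); }
    );
  });
}
```

The caller then writes `await remoteCall(...)` inside a try/catch, so the failure modes are visible right at the call site instead of hidden behind an innocent-looking function name.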
How is that different from any function call that starts something in the background? I have used, and even written myself, simple job code (with threading) that has functions like (simplified):
int start(void (*job)(void*), void (*done)(void*), void* tag);
`job` is called on a separate thread at some point (goes through a job scheduler) and once it is done, `done` is called at the "main" thread (at some synchronization point, usually during the event loop), `tag` is just passed around for context. `start` returns zero on failure.
I don't see RPCs as anything different conceptually (after all the job might also fail). The only issue someone might have is when expecting a synchronous API, but even in non-networked code there are tons of asynchronous APIs.
The main difference is that networks are inherently unreliable. Threaded or multi-processed applications can also be unreliable but for different reasons.
The goal should be to inform the caller of the types of errors that may pop up. Obviously, with network or RPC calls, the caller should handle the case where the network is down. With threaded apps, the potential errors are more subtle, but the caller should definitely be aware that it's not a synchronous call. The function header you proposed is a bit clumsy due to the c semantics, but gets the general point across well enough.
This. REpresentational State Transfer stands in opposition to Remote Procedure Calls - except for a very narrow subset of hypertext/hypermedia applications.
The part I find most interesting about Fielding's thesis[1] is the introduction with its architectural overview. He managed to map out modern Web apps perfectly: they can be REST (a Web app with a db/storage backend, perhaps extended with something like WebDAV), which is amenable to multilevel caching; a smart client with movable data (a JSON API plus a JS app); or a smart client with movable code (JS/Ajax, executing JS delivered by the server on the client - subtly different from a "pure" JS/JSON app, which in turn is similar to an XML/XSLT app).
I don't know why the hype of rest lead people to insist on conflating their architectures.
Kind of my point. It's a great dissertation, with some great ideas in it. One of them is REST (Web pages). But the other architectures are well documented in there too - with trade-offs.
Edit: not sure about "time wasted bickering over it". Bickering is always time wasted. Careful analysis of software architecture, patterns, and figuring out what you're actually trying to achieve - that is time well spent.
There are fundamental trade-offs between REST and different patterns - depending on where the truth of your data resides, whether you need ACID or not, and where (what part of) your code executes.
Well, I'm glad you can put together an RPC api that quick, but the reason REST is so ubiquitous and why arguing against it is going to make you the subject of a witch hunt is because it's so easy to consume. Your API is useless if people don't want to use it.
But like the article mentioned, clients using REST are used to dealing with wrappers written for their language anyway. They'd prefer to not bother with URLs, query strings, and MIME types, and simply consume an API in the language that feels natural for them.
You can argue that REST is easier to debug for developers, but nothing makes XML-RPC or binary protocols inherently _less_ easier to debug. It depends on the platform and library you're using.
I wholeheartedly agree with the article. Well done.
Or, in the case of SOAP, simply cannot work out how to consume it at all. In at least one case, a SOAP service was offered, I had a good-quality SOAP client library in a popular language, and I still couldn't work out how to make a single working request.
Not all APIs are public though. You could be picking a communication protocol for services within a single technical organization. In that case you can decide to train everybody to use Thrift, for example, if the pros are strong enough.
Are you assuming REST is easier than RPC to develop and/or consume?
After moving to a REST-based API there were endless meetings between co-workers about what is and isn't a good REST url. Our clients often come to us with dumb mistakes. Unlike RPC, where the parameters go in a single place, with REST the parameters are spread across the verb, URL, headers, query params, etc.
I think they were suggesting that it was more difficult to design/develop (as per your example), but easier to consume if designed/developed well, and that this was the proper tradeoff.
Yes, I'm assuming REST is easier because of tools like curl, postman and even the major browsers with HTTP GET and great dev tools.
With RPC your consumers probably need to know some coding, and maybe even a specific language, framework, or library.
So yes, for me REST is easier, and I always love to see landing pages like this one - https://freegeoip.net - where the client can test the API in a few seconds by copying an example into their browser address bar. This is a simple use case, but I hope you get my meaning.
I don't really understand the debate here regarding tooling. REST, SOAP and RPC are all ways to define/codify the API and the parameters. It all goes over HTTP at the end. So Curl, postman and all other HTTP-enabling tools/libraries can be used.
SOAP even has handy discovery tools that frameworks can consume and construct entire APIs in most popular languages.
It's just that no one uses SOAP from a browser because XML is a royal PITA to write in JS. I would assume it's because all JS developers are too busy writing more libraries for and layers over JSON.
It's precisely the 'simple' examples which don't address any of the complexities in the original article. read-only properties in a resource being PUT back, for example. You're not having to deal with that with simple read-only services like freegeoip.
It's often new employees fresh out of college who are the worst. They can be obsessed with doing things 'right', which you can't blame them because they have little experience. The internet tells them REST is right, so anyone who doesn't agree with them is wrong.
Would be fine, but could include more if you were following a spec like JSON API 1.1. I really don’t have any of the problems you seem to have. But I work in a high-level programming language, so maybe it’s that? Either way, I find RPCs get messy when they get bigger. Sometimes it’s the right move, but for web apps I generally prefer REST.
If you're defining an RPC protocol in just a few minutes, you're leaving a ton of stuff out. Anyone that assumes an RPC call will successfully complete or assumes the network is always there, is writing buggy code. An "RPC protocol" makes writing such buggy code easier. A REST protocol makes it slightly harder. In theory, they are almost identical. But in practice, developers equate RPC calls with function calls, which they are definitely not.
I've written over 10 applications that use Stripe, and every single time the first thing I did was use Stripe's official library for the language I was working in.
There's nothing wrong with wrapping REST APIs, as long as you wrap them in something that makes it clear they are a REST API. There's a huge difference between calling foo() and calling http.get("foo").
How is it any different in any way than any other async function? You just end up providing an unusually large number of parameters via headers and body, then at some point in the future the request completes with values and/or errors. What separates them?
Classic RPC functions would never be async, since the idea behind RPC is to replace a sync local function with a sync remote function, without having to make any changes to the calling code.
"Async RPC" is a more recent idea, but still gets referred to as "RPC", so the complaints about classic RPC still get raised since the term is overloaded.
Reading the article I wondered why we even integrate services so deeply with HTTP. The things I care about are the ability to cache at the HTTP layer and the option to move endpoints, both of which can be added independently of the actual protocol. Moving a service to a protocol other than HTTP could be an interesting option.
I think some reasons for still using HTTPS as a starting point are the ease of proxying (load balancing), the support for virtual server names, the fact it's easy to use from a web browser, the built-in encryption with TLS.
Plus, for any given large organizational customer, HTTP/HTTPS are allowed through their firewall/other network security apparatus, whereas other ports require a bunch of special exceptions from the security people. Said people often refuse to give exceptions no matter how reasonable the request might be, so everybody ends up doing everything on port 80 or 443.
> A simple RPC API spec takes minutes to define. 'Rest'ifying takes much longer, there are a million little gotchas, no real standard.
The thing is, RPC is fundamentally broken due to the nature of distributed computing, and REST is not. All the time you have to spend doing things right is … the time necessary to do things right.
And REST really is very simple. The problem is the cargo-cult nature of folks who don't really understand it.
If I had a nickel for every time someone said "don't really understand REST".. often people who both think they understand it say it to each other. It's pretty funny. Guess what, we understand it fine, and we don't like it.
Surprised to have scrolled this far and not see one mention of GraphQL. It has a discoverable, schema based design, strongly typed. It segments requests into three types, queries, mutations, and subscriptions. Queries are simple data fetching. Mutations can be treated like RPC calls. Subscriptions are for long lived connections to receive live updates for data queries. I think it fixes a lot of problems with REST. I think it works extremely well with microservice architecture.
Where I'm at we have an entity component system in postgres for the DB (an Entity table with just an id as primary key, then all other tables only have foreign keys to the Entity table). We were implementing random REST routes which tried to line up with typeful ideas which don't exist in the DB but which the page structure of the site exposes. Switched to GraphQL: a bunch of methods on a ReadEntity, an UpdateEntity, and a CreateEntity. Currently implementing a clientside ECS to mirror this so that the clientside can work on intermediate entities and then submit them together. The GraphQL server's real simple, it just focuses on access control. The frontend gets to grab whatever components it needs for a given React component. Have ideas on how optimization can be added via prefetch hints to avoid staggered loading.
Sorry if this feels like a tangent hijack rant, but figured I'd drop a line on trying to explain what makes GraphQL so good
Lots of comments here arguing REST is popular because it's easy, but there's another higher level reason too: it forces you to think about the network.
In far too many RPC protocols, calling functions that operate over a network are treated like normal functions. A function call, almost by definition, fails to take into account network errors, and race conditions where multiple events overlap. Network calls are not function calls, and the fact that REST calls are relatively distinct from normal function calls is a good thing.
If a network (or endpoint) fails you usually have only a few options at runtime: retry, skip, or stop. That is pretty much all you want to know. Everything else is specific to the endpoint, which is more about contracts and constraints than about networking. You either use the endpoint correctly or not. Using a database like MySQL has similar constraints, and decent engineers know how to work with it and where it is happening.
Yep, and no amount of REST / other introspective boilerplate can help about the fundamental problem of not being synchronized.
There is no solution to the "A knows X, but B does not know that A knows X, or A does not know that B knows that A knows X, or B does not know that A knows that B knows that A knows X..." problem.
Other than that, I think at some level networking is nothing more than function calls that can take a long time and/or fail.
However, most of the Getting Started articles that you find about any publicly published REST API usually starts you off with a bunch of curl commands.
Even if the networking aspects are completely hidden from you in your application, your formative experiences with the REST API almost certainly were with the network requests.
Given the dozens of languages and libraries that interact with APIs, I don't see the issue of starting with a curl command. It's a common denominator, programmers of almost all languages understand, like international sign language. No one would seriously use curl commands in production, but load up a command window, and it's an easy way to start messing around.
Agree with your second point though... network requests are hard. They are not much different from real distributed programming. Making REST requests should be done with care.
I do use curl commands in production for stuff like user data in an aws cloudformation template. Having an api I can hit with curl that returns json I can parse with jq is super convenient.
I don't think the parent commenter sees curl-centric getting started guides as an issue either. Actually from this thread's context I assumed they thought it was a good thing.
I worry that this is something that can be applied to programming in general these days.
Back in the day of the C64 etc, programmers had to worry about the underlying hardware as they were for the most part working with assembly.
But as increasingly abstract languages have come to be (most modern ones use virtual machines and garbage collectors), the programmer never has to consider the hardware their code has to interact with at some point. The end result is all manner of bloat and memory leakage.
I don't think you closely read what parent wrote. Do you think a function call is "too abstract" because "nobody knows what it's doing under the hood"?
CORBA is the best protocol I ever dealt with. Strong contractual semantics, exceptions, interface definitions. Spiced with transparent compression, encryption and bi-directional communication. All those goodies were already available 12 years ago.
The only downside - it required a reliable and precise implementation that took a lot of effort. I always used IIOP.NET for most of my gigs and it was excellent. I also ended up as an active IIOP.NET contributor.
CORBA ORB implementation was a fine art few could grasp. And this was the biggest drawback - the standard was (and is) excellent, but most implementations tend to be complex and shaky.
I still actively use CORBA. The server usually offers two kind of endpoints: REST for third-party integrations (which are usually naive and simplistic), and CORBA for the system itself. I've built nice things with such architecture that involved worldwide deployments including embedded hardware. I am very proud of my involvement and the fact that I could help to improve the everyday routine for many people worldwide.
I was somewhat involved with CORBA in its early days. It had some very smart people driving it. It was derived from work already being done by the large companies like IBM, DEC, Apollo, Sun, HP, and Microsoft.
But CORBA was haunted by a key principle that limited its influence. Unlike the IP protocol stacks, which are layered from the lowest wire protocols on up to the highest layers, CORBA dictated the highest-level protocols and didn't address the lower-level protocols. Different CORBA implementations couldn't talk to each other; consequently, CORBA didn't work for my company because we were trying to design a product that could work across heterogeneous networks of workstations and servers.
If the CORBA folks were so smart (and they were), how could this happen? Why didn't they design the original CORBA protocols from, say, UDP or TCP on up? The CORBA members were all from different companies, and all had different independent products. There was fierce competition in this space, so it was impossible for the members of CORBA to agree on the low-level networking protocols, because doing so would harm some companies' product lines while benefiting others.
Thanks for this input. I always love reading things that get me excited to try and look into stuff I had previously discarded, probably due to "popular" opinion.
I feel like this is the root of the author's agitation:
"I don’t care. Trees are recognized by their own fruits. What took me a few hours of coding and worked very robustly, with simple RPC, now takes weeks..."
He seems unhappy that REST doesn't work the way his familiar tool (RPC) does. I myself worked with middleware-messages-over-TCP systems for a decade before switching to web apis. I don't have this issue. And I personally don't follow the "holy specification", and REST works just fine for me.
The problem is the assumption of a "simple RPC" protocol... there is no such thing. There are network issues, proxy errors, and two-sided race conditions that complicate any network related code. Network programming is distributed programming, which is not easy. RPC protocols try to mask that difficulty, but more often than not they sweep it under the rug.
RPCs make it easy to get started on a dev machine. Making network calls appear like function calls definitely speeds things up when the network is working perfectly, but it papers over the complexities of debugging network issues and complex distributed race conditions, which will inevitably come up later.
Being explicit with network communication has its benefits.
(corollary: for the same reason as above, I don't like lazy evaluation of database queries that exist in many languages. When you hit a database, it should be intentional, you should know about it, and should properly prepare for any necessary network issues, caching, and cursors. Many modern web frameworks gloss over this).
I don't see how those things are really solved with rest, particularly given that people will often be using a wrapper rather than building their URLs manually everywhere.
Definitely not "solved", just made more explicit, kind of like a warning sign in the road. A while ago I worked with an RPC system where the RPC calls looked just like normal function calls... everything worked well, until it didn't.
Anytime a computer program consumes a potentially scarce resource (e.g., network, disk, database, etc), there should be some warning-sign or flag raised to the developer. RPC hides that, whereas REST makes it more explicit. So much of programming is social, and the mechanics of REST, even though theoretically identical to RPC, raise many social flags warning of danger ahead.
Or in other words, there is absolutely no technical justification for using this overly verbose and unmaintainable mess. It's just some vague philosophical thing that has no clear benefits. Like OOP.
I doubt that these qualities are actually achieved. Generally I don't think there have been many (if any) languages where excessive verbosity built in to the language has proven to be a good idea. Think COBOL, Java...
It's worse if not only statements are longer, but you are actually forced to treat similar things in very different ways (like URL resource vs query string vs post parameters). It just increases code complexity without any clear benefit.
I'm not really sure I see the difference. One of the first things done is to wrap all the calls to the webservice in some kind of API (often using a pre-existing library), which puts you back exactly where you were with calling a function which happens to make a web request.
We developers need to understand there are no one-size fits all solutions. No protocol is optimal for all use-cases. Design is always a question of trade offs. Architectures are means to an end.
The OP's story is a bit weird, because it seems they had a system which worked very well with XML-RPC, but they changed it to REST for no apparent reason except that "REST is the future". Regardless of the merits of REST vs RPC, such a change will require a major redesign of the system. The resource-oriented world view of REST is very different from the procedure-call oriented XML-RPC. You really need to clarify what benefit you hope to achieve before attempting such a redesign.
The problem with the article is it doesn't really consider the use cases where REST is appropriate and where it is not. Rather it blames everything on the protocol itself, which is considered "good" or "bad" completely disconnected from use cases.
> And you’re gone for hours, reinventing the wheel.
So don't do that. The fault lies not with the technology.
Sorry, but these kinds of excuses always remind me of homeopathy zealots explaining why their technology didn't work in this case.
REST, just like OOP, is not a means to an end. It's fundamentally wrong. It's trying to shoehorn strange philosophical viewpoints into what's a technical problem. It's trying to decompose problems that can't be decomposed. It's... never the right solution. I've never seen it succeed. Like, ever.
Well, the exception perhaps being that its object-verb syntax supports function name completion in IDEs better. But I think it's a net negative, since it has more negative effects on architecture (wrong structure, because developers are encouraged to invent vague concepts to be home to methods that do less and less. It leads to endless bikeshedding).
Not to speak of other bad ideas that once defined what OOP was, and are now commonly seen as wrong - like inheritance or even multiple inheritance.
I've seen quite a few good projects written in an "OOP" language, but not in an OOP style - mostly misuse classes for namespacing (which I don't think is a good idea either since it makes usages of namespaced things hard to find).
I've never seen a "true" OOP project that wasn't quite a mess and couldn't have been written much cleaner in a plain old procedural style:
Use freestanding functions, the most successful abstraction to date.
Stop with that singleton bullshit. Most things that need to be managed exist precisely once in a program (talk to the sound card, to the printer, to the network, to the graphics card, to the file system, allocate memory...). Making classes first and instantiating once (or how many times? how can I know by looking at that handle?) is just plain silly, overly verbose, and confusing to the consumer of the API.
Don't couple allocation and initialization. It's a stupid idea. It leads to pointless, inefficient, and (in some languages) error-prone one-by-one allocations.
Flat fixed structs for data organization. By default expose struct size for massive decrease in memory allocation. Expose almost all fields (except for truly platform / implementation-defined ones) and stop with that silly getter/setter boilerplate.
Mostly use data tables (with direct integer indexing mostly) like in a relational database, for data-driven architectures. (CS/programming technology TODO: How can we switch between AOS/SOA more seamlessly? Maybe we can get inspiration from Graphics APIs?)
Don't use silly flexible-schema XML/object hierarchies to "compensate" for having no idea what's in the data. It doesn't help.
Make interfaces ("vtables" if you will) only sometimes where they are needed, not by default. Don't call this inheritance. Bullshit. It's an interface, not more, not less. If you think interface descriptions must typically be bundled with each data item, think harder. They are independent data.
We don't need no friggin "doer" objects for every simple thing. It doesn't help a bit, but only makes things less readable and more complex. Just do what needs to be done!
Most of the things you mention are considered anti-patterns in OOD nowadays anyway:
- singletons
- getters/setters everywhere (not in favour of public fields, which introduce tight coupling just as much, but of a 'tell don't ask' style, which allows localising functionality prone to change)
- introducing interfaces upfront - it goes against the Reused Abstraction Principle and the Rule of Three (discover rather than design abstractions; apply an interface only when you have at least 3 classes that would adhere to it)
Both OOD and functional programming try to reach loose coupling and composability by different means. All these 'patterns' have usually some more sane general architectural concern standing behind.
It would be interesting to know whether old school procedural approach allows you to achieve all those architectural benefits in large scale applications.
REST has the same problem as object oriented programming. It's too skeuomorphic. Lots of web applications are wrappers around conceptually monolithic resources, so it's convenient to use a protocol that makes that assumption. But as soon as you need to nest resources, or perform some action that has nothing to do with CRUD, or do just about anything interesting, the metaphor begins to fall apart.
(That's also why OOP has all these "patterns." Many are just attempts to cope with the "object" metaphor falling apart. "Is" an AttackingRock a Monster, or "is" it an Obstacle? Hmm...)
Although the article somewhat exaggerates the problem, and in fact REST is great for many applications, the truth is (IMHO) REST is indeed overrated and the RPC style (don't confuse it with SOAP!) is over-vilified in IT culture. In many (though far from all) cases a concise JSON-RPC API would be a much more elegant solution than REST. I believe this is a great example of where the "right tool for the job" principle should be applied rather than a buzzword cult.
Many comments here emphasize the problem of developers expecting RPC functions to be as reliable as local functions. Well, that's their own problem, IMHO. The only appropriate solution is to remind them they are doing it wrong, the same way REST gurus keep reminding everybody they are doing REST wrong. In fact there is a huge number of "developers" around who invoke all their file system, database and network (REST, RPC, SOAP or whatever) calls synchronously, don't validate inputs or outputs, and don't even wrap the calls in try/catch. Believe me, I used to support a fairly popular API and had to explain this stuff to so many people complaining about their apps hanging or panicking whenever our REST API had quirks (e.g. a field missing from the response) or failed.
This article expressed a lot of what I’ve been mulling for years.
I must’ve spent hours of my life poring over the Wikipedia HTTP Response Codes page, looking for the most expressive error code for my situation. It’s barmy.
You don't need to. For me, REST can be as simple as: encode the type of request into the URL, request parameters into URL parameters and/or query parameters, request data into a JSON payload. Use GET for read-only operations, and if you're really not particular about it, use POST for everything else. Return 200 for success, 400 for client error and 500 for server error. Transport a more detailed application error code and description in the response body. That's it.
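A minimal sketch of the scheme described above; the paths, error codes and handler logic are invented purely for illustration:

```python
import json

def handle_request(method, path, payload):
    """Map an operation to (status, JSON body) per the simple scheme:
    200 success, 400 client error, 500 server error; the detailed
    application error code and description travel in the body."""
    try:
        if method == "GET" and path == "/users/42":
            return 200, json.dumps({"id": 42, "name": "Ada"})
        if method == "POST" and path == "/users":
            data = json.loads(payload)
            if "name" not in data:
                # Client error: stable application code plus human detail.
                return 400, json.dumps({"error": "MISSING_FIELD",
                                        "detail": "name is required"})
            return 200, json.dumps({"id": 43, "name": data["name"]})
        return 400, json.dumps({"error": "UNKNOWN_OPERATION"})
    except Exception as exc:
        return 500, json.dumps({"error": "INTERNAL", "detail": str(exc)})

status, body = handle_request("POST", "/users", '{"nickname": "x"}')
print(status)  # -> 400
```

The point of the convention is that the three status buckets stay meaningful to generic infrastructure while everything application-specific lives in the body.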
You don't even need that. I don't think there is anything wrong with returning a 200 response with a JSON body that has some 'error' tag built into it. It may not be purely RESTful, but if it's obvious to the developer interacting with the API, who cares.
Yep and when I have to investigate production issues, you're the guy that makes me do slow wildcard queries of the detailed error text instead of just filtering on the (indexed) response code. You push your laziness onto the rest of the world that way.
The worst one I worked with recently would return 200 and only a human readable error message (no status). On top of that the message is sometimes phrased differently. Here is an example from memory: "The field email is not valid." and "You provided an invalid username."
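The two styles being complained about can be contrasted directly; the payloads below are hypothetical, but they show why a stable machine-readable code beats prose-only errors that are "sometimes phrased differently":

```python
import json

# Style A: human text only -- clients must pattern-match prose that may change.
resp_a = '{"message": "You provided an invalid username."}'

# Style B: stable code plus human text -- clients branch on the code.
resp_b = '{"code": "INVALID_FIELD", "field": "username", "message": "..."}'

def is_validation_error(raw):
    body = json.loads(raw)
    if "code" in body:
        return body["code"] == "INVALID_FIELD"  # robust: exact match on a code
    # Fragile fallback: guess from wording, which varies between endpoints
    # ("is not valid" vs "invalid") and breaks the moment copy is reworded.
    return "invalid" in body.get("message", "").lower()

print(is_validation_error(resp_a), is_validation_error(resp_b))  # -> True True
```

Both calls happen to return True here, but only style B keeps working when someone rewrites the message text.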
I disagree; at a minimum, use 200/400/500. Each grouping of status codes defines semantics which are implemented by generic clients, servers, middleboxes, monitoring agents, etc. These are things out of your control, often run by a range of different companies. Debugging these is hard.
That means I can't use any kind of generic retry/back-off code because I've now got to start dealing with your custom errors, and if you have HTML versions of the info Google will start demoting you.
You're assuming you want your API to work with a specialized crawler, like Google-bot. If that's important, then sure, design your API so Google-bot can crawl it nicely, but then it's Google designing your API, not you.
I think you missed his primary point. By doing what you describe above, generic client-side code to handle retries/back-off etc. is rendered useless. Your users now have to implement something custom for this (and if you're doing network operations and don't do this, you likely don't have a very robust system).
> You're assuming you want your API to work with a specialized crawler, like Google-bot.
Not really, I just don't
In general, the key point is about working in a generic way, given that it's so simple.
I don't like the idea of returning a human-readable message that says there was an error alongside a machine-readable status that says everything was fine. I already have far too many cases in my data of human text explaining that a value is missing.
Not at all. Some languages and libraries don't handle non-200 status codes nicely; they may raise exceptions for each code, or maybe not. For example, Python's basic urllib library makes it easy to handle 200 responses but a hassle to handle anything else (you need to wrap the call in a try/except).
For some APIs, it definitely does make sense to use proper status codes, but for others, it's like fitting square pegs into triangular holes.
Well, most consumers work better with 400 errors (e.g. in Angular 1 or even Angular 2, the 4xx error codes end up inside the error clause of the promise/observable).
I agree. There are plenty of freedoms built into the REST way that enable you to create more or less detailed responses.
"I don’t care. Trees are recognized by their own fruits.
What took me a few hours of coding and worked very robustly, with simple RPC, now takes weeks..."
Weeks? For a REST API? No, I don't think so. REST gives you the tools to be pragmatic and quick, so use them.
gRPC is the future; I'm amazed that nobody seems to be using it. Easy endpoint definitions and code generation in almost every popular language. Much faster than REST and zero boilerplate code. The client libraries even have HTTP baked in, so no "controllers" or route mapping to write. It's simply fantastic.
If you run into a language without gRPC support you just stand up a JSON proxy and pretend it's REST.
I tried some experiments earlier this year with radically simpler RPC calling conventions. It's called NSOAP, and is available for express, koa and React. It gets rid of HTTP verbs and treats the url like code. https://github.com/nsoap-official/nsoap-express
While on the surface that might seem like a workable idea, getting input validation right is going to require more syntax (making it far less clean). Specifically, URLs are only ever string data, so without type annotation everything is strings, even if it looks like a number, array, or even more complex data type.
> While on the surface that might seem like a workable idea, getting input validation right is going to require more syntax (making it far less clean)
Input validation will go into the router, which I have created for Express, Koa and React. Application code will not have to deal with validation or parsing.
> Specifically, URLs are only ever string data, so without type annotation everything is strings, even if it looks like a number, array, or even more complex data type.
You'd have to pass more complex data types either in body (as JSON) or as parameters with quoting. Current router does however, infer types to the extent of:
//Params inferred as string, number, boolean and number.
curl "http://www.example.com/search(Jeswin,20,true,x)?x=100"
I read the article up to the point where they consider using REST to create an entry in the reset_password_email table or some such thing. That's stupid and ludicrous.
That isn't even a use case for REST. I think the writer needs to consider that HTTP APIs cover many different use cases with subtle differences. REST exists to solve one of those use cases, i.e. data-model interactions over HTTP. But HTTP APIs can also implement function calls. Trying to emulate a function call using REST is a terrible idea, and the blame lies with the developer who thought it would be a good one. Not with REST.
Consider the logout operation. That's a valid function call, but not one using REST or any of its principles; it's just a valid use case for HTTP. Now consider API methods that deal with a user's profile information, such as creating the user profile entry and later, maybe, updating the address or status of the user. That would be a perfect example use case for REST.
TL;DR
I think the author fails to understand that REST isn't and doesn't try to be a solution for every possible use case of an HTTP API. It's simply a framework for structuring APIs that directly, or almost directly interact with the models exposed by a webservice.
Now, onto RPC. RPC is a binary protocol over TCP. It's not even HTTP. RPC vs HTTP is a very valid argument depending on the use case/constraints of the project. But RPC vs REST is comparing two very different things. REST, IMO, derives most of its flexibility from being built over HTTP, and not from being some kind of magic sauce, as it's touted by many to be.
The author specifies two different RPC protocols in their article: XML-RPC and JSON-RPC. Both often do push plain text over HTTP.
> REST, IMO, derives most of it's flexibility from being built over HTTP, and not because REST is some kind of magic sauce that it's touted by many to be
In that case, there is no difference between REST, XML-RPC, JSON-RPC, and SOAP. That could be a valid opinion to have, but I suspect it’s not one most people would agree with.
REST isn't appropriate as a description of JSON-based apis because JSON isn't a natural hypertext and thus makes HATEOAS difficult to implement. Most JSON APIs described as REST are really RPC APIs with a bit of URL layout taken from the REST world.
Unfortunately XML-RPC was such a nightmare that calling something JSON-RPC was out of the question. Shame.
There are a few blog posts up on the intercooler website that discuss this that I found enlightening:
The article talks about how to communicate between systems. But many systems don’t need to be distributed.
If you can avoid writing a distributed system, that’s easier than even a “better REST”, if such a thing were to exist.
For example, old-style non-SPA web applications can use the underlying logic classes directly, can throw exceptions, need not serialize data, can use one connection to the DB with one transaction capable of rollback, and so on.
Or monolithic servers rather than microservices.
Sometimes you need RPC, but for those situations where you don’t, avoiding RPC completely is a significant reduction in complexity.
HTTP is just a way to communicate state - a protocol that _can_ be used for implementing the architectural style. Unfortunately, most of the rant is about HTTP.
Once you see HTTP just as one example of a more abstract way to interact in a client-server model, your focus will shift towards the more important topic: semantic formats that represent state, formats that make it easy to write clients against. And if you do it right, you build a vocabulary that represents your domains. A good format to start with is JSON-LD. But please don't just describe the entities of your business domain. You have to describe semantics of how the client can interact with the server and change state (like links, actions, feeds). And if you build a business around it, put these semantics / vocabularies at a central place - build something like https://schema.org for your own corporation.
Then use HTTP for communication and building out your systems. And it will be more robust, flexible and scalable than anything built on SOAP.
Those who refuse to learn history are doomed to repeat it.
REST is not the end of history, there can and should be successor architectures, but RPC is not that successor. It’s been around a very long time and fell out of fashion for good reason. It is a convenient approach that can be used if you have major control over all the interfaces, endpoint implementations and underlying infrastructure, as Google does for its use of gRPC. It really falls over if you want independent implementations and variable infrastructures over a large scale, as is the case for most Internet / Web interactions.
RPC is fundamentally flawed in that it tries to pretend that the network doesn’t exist and that distributed systems don’t have fundamentally different concerns than single computer systems. A good overview of this history from 2009 is here:
https://www.scribd.com/mobile/document/24415682/RPC-and-its-...
Keep in mind that REST only became popular around 2007, it was an uphill battle to popularize it from 2001 onwards. The web had grown to a mammoth, and vendors couldn’t make money off it, so wanted to replace it with CORBA or some other RPC (SOAP). It took a concerted effort to fight that in standards bodies, on mailing lists, on blogs. Those days didn’t have Github or social media or a myriad programmer conferences. The posts are still up if you want to see them. This history of having to fight to be perceived as relevant and useful is why REST tends to have a bit of misguided religion behind it. SQL proponents had the same issue longer ago.
The Web and REST led to the largest increase in networked computer interoperability in history, after the TCP/IP suite and Telnet. The architectural style is what catalyzed JavaScript into such a ubiquitous language, as it made mobile code a first-class citizen in the architecture. It's what catalyzed Google into a powerhouse, as it made self-describing, uniform interfaces the standard, enabling spidering, indexing and analytics on a global scale. Moving on from it will be harder than people realize.
By all means, be an engineer: use RPC if it fits your problem and constraints better. Use event streams if they fit your problem better. Use GraphQL or SPARQL if a data query interface is a better solution for your needs. There is no one architecture. But please, spare us the rants about how the world would be better if we all did RPC; they come across as very divorced from the wide variety of problems and suitable architectures out there.
> It really falls over if you want independent implementations and variable infrastructures over a large scale, as is the case for most Internet / Web interactions.
But what exactly? I can't really think of anything that would work without an interface description (like an HTML form).
Let's have a few informal RPC signature conventions, for example for retrieving a Web page (the equivalent of HTTP GET). It would make developers' lives easier. But essentially there is no problem with replacing links with remote procedure calls.
HTTP GET is one of the most formalized, optimized and tested standards in the world. Replacing that with "an informal RPC signature" would be a nightmare for developers: misunderstood semantics, little to no interoperability, poor performance, poor scalability, and so on.
If you think there is no problem replacing hyperlinks with RPCs, be my guest, do so on all your forthcoming projects, and see if it works. It sounds like a bad idea to me but we are both just random people on the internet.
I author REST APIs and try to do them properly, with mediatypes, link relations, and all the good hypermedia stuff most people avoid. Nonetheless, the author's post reflects the sort of rant I've given to coworkers at the watercooler, or anyone who'd listen.
The author nicely preempts the debate about HATEOAS and "most RESTful APIs aren't REST" and shows that the debate is part of the problem. It is. It's not a spec but an architectural style, and people are bad at design (and, quite honestly, they have better things to do with their time), so broad-stroke ideas about how to lay out your system aren't as useful as a framework or codegen. So the least-effort solution wins, where you half-ass the first three characteristics of REST-as-seemingly-observed-in-the-wild (HTTP, JSON, templated URLs) and call it a day. If it's good enough for Stripe and Twitter, it's good enough for you.
No wonder we're in this boat; once upon a time REST was just architectural and intellectual wankery, specified in an obtuse and hard-to-read thesis by a guy who's smarter than most of us combined. It sat unknown for years, until AJAX became all the rage and people began making API calls from jQuery to load and splice parts of a page for rich interactivity. Here, a public HTTP endpoint that served easily parseable JSON made sense. Some prominent companies released public APIs in this style, and then the blogspam began.
Within a short time, REST and "RESTful" were cool and forward-looking, SOAP was old and crufty, and any kind of ad-hoc RPC was bad. Implementing a REST API wasn't solely about usability but also about signalling that your company was forward-looking too, rationality be damned, which is why most cargo-cult implementations look the same: HTTP, schemaless JSON, templated URLs. Endless debates about HATEOAS begin and come to an unsatisfying conclusion, because real developers' concerns about architectural purity are dwarfed by their desire to build a mostly-working API that looks recognizable to the public, and move on.
Serious players looking for reliable internal RPC develop stuff like Thrift and gRPC, which are thoughtful reimplementations of the ideas behind 80s and 90s RPC, but with the added advantage of coming from a single vendor and not committee hell. Meanwhile, Facebook also reinvents SQL in JSON and gives it to the client, what could go wrong?
Maybe the reason "REST" won is because it was easy to glean and misunderstand well enough to put out something simple and good enough. This dominance of the low-end is going to be hard to undo.
As I mentioned elsewhere in the thread, there are two posts on the intercooler site that specifically addresses this issue that I found very compelling:
The difference is that a real HATEOAS API provides clients with context through the payload for all service interactions. HTTP becomes just the transport layer. Roy Fielding never wanted to point to anything else. He was inspired by how HTML enabled human-to-service interaction through a Browser (the client) and formulated an architectural style for human/machine-to-service interaction. The emphasis must be on how we design formats. Communicating the state is just freestyle, HTTP being the prominent one.
Thank you. This is one that I've been puzzling over for years. If anyone has a straightforward answer, and can provide example REST responses with and without "HATEOAS", I'd really appreciate it.
See the 'Richardson Maturity Model' levels [1], and contrast level 2 with 3. Level 3, satisfying the HATEOAS constraint, has hyperlinks leading to other resources, and the relationship between the origin and destination resource is qualified (with link relations [2]).
Essentially, everything is a graph, resources are the nodes, and the labels on edges are the relations, also serving as an inventory of state transitions.
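To make the contrast concrete, here is a hypothetical order resource at level 2 (plain fields) versus level 3 (the same resource plus qualified links); the field names and URLs are invented for illustration:

```python
# Level 2: the client must already know how to build every related URL.
level2 = {
    "id": 1001,
    "status": "open",
    "customer_id": 77,
}

# Level 3 (HATEOAS): the response itself advertises the related resources
# and available state transitions as qualified links (rel -> href).
level3 = {
    "id": 1001,
    "status": "open",
    "links": [
        {"rel": "self",     "href": "/orders/1001"},
        {"rel": "customer", "href": "/customers/77"},
        {"rel": "cancel",   "href": "/orders/1001/cancellation"},
    ],
}

# A generic client can discover what it may do next without hardcoded URLs:
available = {link["rel"] for link in level3["links"]}
print("cancel" in available)  # -> True
```

In graph terms, `level3` carries its outgoing edges with it; `level2` leaves the edges implicit in out-of-band documentation.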
The "HATEOAS debate" is the trope that says people will inevitably debate whether a particular API that claims to adhere to REST does in fact adhere to REST, because many APIs that namedrop REST don't satisfy the HATEOAS constraint. This is rarely a practical debate, but the fact that it keeps coming up -- and people take the time to explain -- is part of the problem with REST. This was the author's point.
I think the problem I have with HATEOAS is that I really can't see what difference it makes to the API. I mean, I get that it's always better to return a URI rather than simply an ID. But the link relations feel like they're from a time when we all thought the semantic web (RDF and OWL) was going to be a very big deal.
Maybe REST was the best protocol for the great public-API explosion of the past decade, when startups wanted to expose a public API to anyone on the internet. The most important requirement was that the most developers could access the API with a minimum of technical knowledge and tools, and the APIs were simple. I am less and less sure that REST is the best solution for communicating between internal services, which know a lot about each other and where you can spend time onboarding developers on specific techniques and stacks.
I don’t know yet because I haven’t had as much experience with other solutions as with REST. I am curious about Thrift and Protocol Buffers. I have worked once on a project migrating from REST to Protocol Buffers for an internal API and it noticeably improved performance, but code complexity remained more or less the same.
Well, REST is an architecture. Protocol Buffers are (de)serialization.
I've tried it before and, although it has the properties I like (language-agnostic schema definition, ...), it was finicky to get working with one of the biggest IDEs out there (IntelliJ IDEA). Also, it doesn't support Kotlin yet (Java interop is the only option).
The only place I can see Protocol Buffers being useful is inside Google, because it was a pretty crappy dev experience. And I would be pretty confident in saying that not many Googlers/ex-Googlers outside Google use it. Because if they did, somebody would have made the dev experience much, much better.
I use protobuf at work all the time (not at Google) and prefer it over almost all other serialization formats. To be fair, I mainly work in Scala, C# and JavaScript, where great implementations exist. In the case of Scala, IntelliJ just needs to know that the generated source files are just that, and then it "just works" for me.
Decades of horrific developer pain disagree with you. The "tools" all have different ideas of what is correct: good style, the right namespacing, the right encodings...
We’re about the same age. How is it that you’ve not experienced the same horrible almost-but-not-quite interoperability messes with XML and SOAP that I have over the last 20 years?
Man, it's almost like representational state transfer and remote procedure calls are two different things, and sometime your service does fine with the simpler one of those, and other times your service needs finer-grained controls.
But if we were thinking critically about how these protocols differed then we couldn't write an entertaining polemic, right?
REST is not an RPC protocol. If you need RPC you should use an RPC protocol. REST is not universally applicable. The verbs don't map well to anything beyond simple CRUD operations. If you want to do something that isn't one of those operations, it is going to be painful.
You can get all the benefits of plain-text serialization by using JSON or even XML as the payload serialization format with a much richer set of verbs. You can even use HTTP as the lower-level protocol if you want and gain all the benefits that gives you.
Most of the pain people experience is when they try to use REST when they need an RPC protocol instead.
POST represents a non-idempotent operation. GET, PUT and DELETE are idempotent (PATCH, strictly speaking, is not guaranteed to be). The awkwardness surrounding PUT/PATCH stems from the need to ensure that those requests remain idempotent.
Nothing bothers me more than an idempotent request (e.g. a search query) using POST. If you design your API starting with "what should happen if the client makes this request multiple times?", then it becomes much easier to model with HTTP.
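The practical payoff of that design question is that generic retry logic is only safe for methods known to be idempotent; a rough sketch (the `send` transport function is a stand-in, not a real library call):

```python
import time

IDEMPOTENT = {"GET", "PUT", "DELETE", "HEAD"}  # PATCH is debatable in practice

def request_with_retry(method, url, send, attempts=3):
    """Retry transient failures, but only when repeating is known to be safe."""
    last_exc = None
    tries = attempts if method in IDEMPOTENT else 1  # POST: never auto-retry
    for _ in range(tries):
        try:
            return send(method, url)
        except ConnectionError as exc:
            last_exc = exc
            time.sleep(0)  # a real client would back off exponentially here
    raise last_exc

calls = []
def flaky_send(method, url):
    """Hypothetical transport that fails once, then succeeds."""
    calls.append(method)
    if len(calls) < 2:
        raise ConnectionError("transient")
    return 200

print(request_with_retry("GET", "/things", flaky_send))  # -> 200
```

A POST-only "search" endpoint forfeits exactly this: the client has no way to know the request is safe to replay.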
HTTP 401 Unauthorized actually means "unauthenticated". 403 Forbidden actually means "unauthorized". Yeah, it is a bit of a mess, but the Referer header is misspelt, so what can you do.
A GET request's body has no defined semantics, so your server is firmly in the realm of "nonstandard implementation detail" if you choose to take into account the body in your decision-making.
> "A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request." (RFC 7231)
Yet another discussion where people talk past each other because everybody means different things by "REST". It certainly is not the "REpresentational State Transfer" as invented originally.
It is hard to evaluate an idea without being tied to an implementation. How would you evaluate Monads as an idea without talking about Haskell?
The primary problem with REST is that most people learn it by example. Then they believe it is about web services returning JSON or about pretty URLs. It is not.
I hate working with pedantic programmers who chastise you for using the wrong HTTP method or not formatting URLs correctly because it's not RESTful, and then a few years later jump onto some new tech, like GraphQL or something else, and start evangelizing the new religion. That's how some "senior" devs maintain their importance: using this esoteric knowledge to keep lesser programmers out. And if you try to catch up, you're playing a fool's game; by the time you learn all the peculiarities of their interpretation of "RESTful", they have already jumped to GraphQL. Same for React (remember this one: presentation (HTML) and logic (JS) should not mix!?), but now that we've switched from Backbone to React, suddenly it's OK. It also applies to random methodologies like agile, scrum, etc. A lot of cargo-culting, and not a lot of actual engineering.
REST is like Agile. Almost everyone who claims to do it only kind of do it, and those who really do it get all wrapped up in the theology of it and turn it into an unworkable mess.
Personally, I don't care about REST, I never did. However this made me giggle:
> "REST offers better compatibility. How so? Why do so many REST webservices have “/v2/” or “/v3/” in their base URLs then? Backwards and forward compatible APIs are not hard to achieve, with high level languages, as long as simple rules are followed when adding/deprecating parameters. As far as I know, REST doesn’t bring anything new on the subject."
The author has a serious misunderstanding about what backwards compatibility or lack thereof means.
APIs ending up with "/v2" and "/v3" in their endpoints is absolutely the right approach to backwards compatibility, as long as the previous version of the API stays online and continues to be supported.
Backwards incompatible evolution means breakage. No matter what features you like to add or change, by introducing backwards incompatible changes you're going to break people's software and that's never a good thing.
The author is also naive in thinking that there can be "simple rules for deprecating parameters". There's no such thing. Nobody pays attention to your service announcements or policies until their software breaks.
Also, "high-level programming languages" have absolutely nothing to say on this matter. Yes, it's simpler to update a library that breaks compatibility within a statically typed language, but that still comes after the software breaks due to the network protocol changing, it implies active maintenance with associated costs, and mainstream languages have no facilities to efficiently describe network-level protocols.
What are the best options for building a JSON-RPC client/server in JavaScript?
Essentially I am looking at a Node.js-based server and a browser-based client as of now, but cross-language compatibility would be nice to have.
The gRPC pure-JavaScript client [3] is labelled "incomplete and experimental".
axon [4] looks interesting but seems to have received no activity in almost a year. I am also slightly skeptical of adopting a tj project given his departure from the Node community.
Jayson [1] was the only one I could find in the search results [2] which is actively maintained and well documented.
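Whichever library you pick, the JSON-RPC 2.0 envelope itself is tiny. Here is a minimal dispatcher sketch in Python (the `add` method and the dispatch logic are illustrative; the same request/response shape ports directly to Node):

```python
import json

def dispatch(raw, methods):
    """Handle a single JSON-RPC 2.0 request string; return the response JSON."""
    req = json.loads(raw)
    try:
        result = methods[req["method"]](*req.get("params", []))
        resp = {"jsonrpc": "2.0", "result": result, "id": req["id"]}
    except KeyError:
        # -32601 is the spec-defined "Method not found" error code.
        resp = {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    return json.dumps(resp)

methods = {"add": lambda a, b: a + b}
out = dispatch('{"jsonrpc": "2.0", "method": "add", "params": [2, 3], "id": 1}',
               methods)
print(json.loads(out)["result"])  # -> 5
```

A production server would add batching, notification handling (requests without an `id`), and the other spec-defined error codes, but the wire format really is just this.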
I'm currently developing a product using gRPC-Web as an interface between our client and server-side code. It's very pleasant to work with, especially when using TypeScript on the client side.
Oh, and for those who want to be able to query APIs from the command line without prior knowledge, the following works if you have reflection enabled on your gRPC server:
q3k@anathema ~ $ alias grpc="docker run --rm -it --net=host returnpath/grpc_cli"
q3k@anathema ~ $ grpc ls 127.0.0.1:50051
helloworld.Greeter
grpc.reflection.v1alpha.ServerReflection
q3k@anathema ~ $ grpc ls -l 127.0.0.1:50051 helloworld.Greeter
filename: helloworld.proto
package: helloworld;
service Greeter {
rpc SayHello(helloworld.HelloRequest) returns (helloworld.HelloReply) {}
}
q3k@anathema ~ $ grpc type 127.0.0.1:50051 helloworld.HelloRequest
message HelloRequest {
optional string name = 1[json_name = "name"];
}
q3k@anathema ~ $ grpc call 127.0.0.1:50051 helloworld.Greeter.SayHello 'name: "gRPC World"'
connecting to 127.0.0.1:50051
Rpc succeeded with OK status
Response:
message: "Hello gRPC World"
gRPC is nice... my only gripes are that it isn't quite as debuggable as JSON, it's often overkill, and because it moves so fast (right now) I've seen updates introduce real errors into the protocol (they get fixed in a few days, but when they occur it's a terror to debug).
SOAP also has other stuff: proper handling of binary (MTOM), and enforced WSDL/XSD contracts. XSD is a very powerful, standardized language for expressing a contract, which in turn allows you to share it. When you're working across organisation/institution borders that's very helpful. REST is not there yet. If I had to communicate each little gotcha of my REST API every time, that'd be really tough.
So for me the power of SOAP is how it helps me to share interface contracts in clear, precise ways.
Moreover, SOAP also has the WS-I profiles for security-related stuff, and that is quite advanced too.
My understanding is that REST is good for public web APIs; however, for tightly coupled systems the best was SOAP/XML-RPC.
Nowadays the best middle ground should probably be a SOAP-like wrapper for JSON-RPC (instead of XML-RPC), i.e. using JSON-RPC as the payload instead of XML. Does such a thing exist?
So: SOAP/XML is too verbose, and REST is not good for tightly coupled systems. What other options do we have? Ideally something like SOAP but using JSON-RPC.
You manage to both dismiss HATEOAS for lack of discoverability and then claim there's no automatically generatable client? If I claimed a WSDL was "discoverability for SOAP", would that make the purpose of HATEOAS clear?
I find REST APIs a bit annoying, primarily because of caching behaviour. For example, if I GET a list of resources and then PATCH a particular resource from the list, the browser will still keep the old list in cache and not update it (with the newly patched resource) until the cache expires. For this reason, I just build my own JSON APIs using POST requests (to bypass the cache) with some application-specific caching in the browser via IndexedDB. It's not that much more work, but the experience is significantly better for the user.
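An alternative to switching everything to POST is to keep GET but tell the browser not to reuse the cached list without checking with the server first. A hedged sketch of the relevant response headers (the version tag and exact policy are illustrative; the right choice depends on your setup):

```python
def list_response_headers():
    """Headers for a GET list endpoint whose contents change after PATCHes.
    'no-cache' means the cached copy must be revalidated on every use;
    pairing it with an ETag lets an unchanged list cost only a 304, not a
    full response body."""
    return {
        "Content-Type": "application/json",
        "Cache-Control": "no-cache",  # always revalidate with the server
        "ETag": '"v42"',              # hypothetical version tag for the list
    }

headers = list_response_headers()
print(headers["Cache-Control"])  # -> no-cache
```

This keeps the request in the GET/idempotent bucket, so intermediaries and generic clients can still reason about it.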
I agree with the article. Fortunately, REST is dead. I'm fully confident that in 5 years, newcomers to web development will simply learn GraphQL.
I realize that's a strident statement, but I really believe that.
It is basically a restrictive implementation of RPC, which captures the reality that clients often need to ask for follow up data based on the result of a query. There are a couple needs that don't have fully baked solutions yet, but it's already extremely handy. It reminds me of what BEM did for taking the decision fatigue out of writing CSS.
> newcomers to web development will simply learn GraphQL.
I doubt it. Newcomers are supposed to learn HTTP. When one knows HTTP, one mostly knows REST.
GraphQL is not a replacement for REST; it's a query language on top of it. It's a useful tool for fetching object graphs, but REST is not going away anytime soon. GraphQL doesn't deal with security, caching and other issues that REST solves, for instance. GraphQL also pushes a lot of complexity towards both the server and the client. On the server, it's an ORM on top of whatever ORM developers are already using.
REST can be as simple, or as verbose and rigid, as you want it to be. Defining an API carries certain public expectations such as request/response shapes and basic HTTP status codes, but again it can be easy or more detailed. There are RESTafarian and pragmatic ways to implement REST APIs.
Much of the problem with any tech is that engineers/developers are supposed to make complex things simple, but instead sometimes make simple things more complex because it seems smarter. Complexity is needed only enough to make a thing simple to interface with and consume. REST is an example of web services done right, and simpler compared to SOAP, which was supposed to be the Simple Object Access Protocol but forgot the simple part and just kept adding layers.
APIs should be easy to work with and as non changing as possible in the public signatures and routes/data/configs or else versioning should come in on major changes which should be handled wisely. Sometimes that is RESTafarian or pragmatic, sometimes more HTTPRPC, sometimes that means schemas for consumption, sometimes pretty urls, sometimes ids and name blurbs, sometimes responses wrapped in app messages/wrappers, sometimes directly on HTTP(s) response bodies, and it all depends on if it is public or private and if you control both endpoints, the clients and business needs.
So my problem is that REST stuff is usually a horrible mess with lots of special-casing, while SOAP gets ridiculously leaky and requires very tight coupling and lots of tooling to allow servers or clients to implement it cleanly, because the envelope formats etc. are really complex. If you've got something which can consume the WSDL and generate client code then you're good - provided it has a sane connection handler etc. etc. etc. because usually in my experience they then try to do too much for you and are very vague about what that actually is.
Needs something else. Preferably something already invented. My employer has a SOAP API which is very annoying, written in a framework that lets us throw any exception we like back to the clients, thus exposing internal details by accident in an interface which we then can't change, because people might be processing those exception names to understand the errors they're getting when their requests aren't valid. This is where the idea of RPC needs a gatekeeper (oh for a framework-level option to say "you can only throw exceptions to the client which are in this namespace", or maybe "only ones which derive from a class in this namespace").
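That gatekeeper is easy to approximate at the API boundary even without framework support. A hedged sketch (all names hypothetical): a wrapper that lets only a whitelisted exception hierarchy through and launders everything else into a generic error, so internal exception names never become part of the contract.

```python
class ClientVisibleError(Exception):
    """Base class for errors we deliberately expose to API clients."""

class InvalidRequest(ClientVisibleError):
    pass

def api_boundary(handler):
    # Wrap a handler so internal exception names never leak to clients.
    def wrapped(*args, **kwargs):
        try:
            return handler(*args, **kwargs)
        except ClientVisibleError:
            raise  # documented, intentional error: pass it through
        except Exception as exc:
            # Launder anything else into a generic, stable error.
            raise ClientVisibleError("internal error") from exc
    return wrapped

@api_boundary
def handler(x):
    if x < 0:
        raise InvalidRequest("x must be non-negative")
    return {"result": 1 / x}  # may raise ZeroDivisionError internally

try:
    handler(0)
except ClientVisibleError as e:
    assert str(e) == "internal error"  # internal detail hidden
```

Clients can then safely match on the small, stable set of `ClientVisibleError` subclasses without coupling to internals.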
We also have a REST API, which is reasonably good at hitting the RESTful ideas, but not great everywhere because yup, there's that ambiguity in the conceptual framework itself.
There's also a lot of code involved in actually consuming the thing.
I need to read up on alternatives. But our customers aren't going to change any time soon!
Anyone have much experience with gRPC? We are using Swagger/jsonschema for our REST APIs and it's quite the nightmare in terms of development cost. IMO, writing schemas in a self-describing "language" like JSON should be a crime; it's too verbose and easy to make mistakes. At run time it gets bad because you need to unmarshal the JSON objects into Python classes which can be too much overhead for large request or response sizes.
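To make the unmarshalling point concrete: turning every JSON object in a large response into a typed Python object means constructing one instance per object and touching every field. A minimal sketch (hypothetical `Item` type, not any particular framework's machinery):

```python
from dataclasses import dataclass

@dataclass
class Item:
    id: int
    name: str

def unmarshal(payload):
    # One object construction per element; for responses with many
    # thousands of elements, this per-object cost is the overhead
    # the comment is describing.
    return [Item(id=obj["id"], name=obj["name"]) for obj in payload]

items = unmarshal([{"id": 1, "name": "a"}, {"id": 2, "name": "b"}])
assert items[1].name == "b"
```

Binary schema-first systems like gRPC move both the schema definition and the (de)serialization into generated code, which is a large part of their appeal here.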
XMLRPC with WSDL was bad enough that I only had to use it once. We migrated our WSDL stack to REST (2007-8), which was an improvement, but it was fraught with fat clients for each language integration to uniformly handle SSO authorization and other sundries.
Has the author considered something built upon Protocol Buffers, which can have reasonable versioning semantics, is portable, and supports status code canonicalization, like gRPC?
What was wrong with WSDL? I also only used it just once or so, but found it to be transparent. Generated a client (set of functions) for two languages and never touched the WSDL or SOAP directly afterwards.
SOAP was designed to be a "fit all cases" solution, and that turned out to be an over-complicated, over-engineered solution that pretty much everybody hated.
REST grew organically from people using HTTP, liking it, and thinking "hey, I could actually make an API from HTTP calls and that would be pretty neat". Sure, it's not great for a lot of cases, but it gets the job done and is the natural way to query a server with a browser. So if I can make a website and the API at the same time, then that's that much less work for me.
This is exactly the same type of rant we get about node.js, Electron or Python. When people choose these technologies, they know it's crap. Writing a server in an interpreted, dynamically typed language is pretty dumb. Running a desktop app as a local web service rendering in an embedded browser is somewhat less than ideal. And using one of the least efficient languages for data processing is somewhat counter-intuitive...
The point is: people like working with these. Get over it.
Any structured approach is difficult and prone to errors. People who so often jump onto the REST bandwagon often have poor to no understanding of what REST is, of what HTTP is, and how it all works.
Comparing REST to SOAP, is to me like comparing HTML to SASS/LESS.
SOAP had so many flaws, no need to repeat them. But the fact that you could not trust SOAP software created with Microsoft tooling to work with something created with Java tooling was a disaster. Sometimes I ended up building custom SOAP libraries to connect with the other services. It was promised to solve all our problems, but ended up creating more.
REST is not a "standard" library, it is just a basic guideline on how to use the HTTP protocol "out of the box".
What the author seems to be looking for, is one hammer to fix all problems.
If you are spending too much time writing REST services, maybe your frameworks are flawed, you could be automating more or your interface is too complex?
SOAP is a very complicated spec, and intended to be used to automatically generate bindings in whatever language you're using, based on the service definition file (WSDL). It's not intended to be human-readable.
So to provide a SOAP framework you need to not only provide all the plumbing for dealing with the actual messages, but all the tooling to auto-generate the client library for an API. The whole mechanism is very complex, and there were lots of places where the spec was not specific enough, or the implementations had slightly different opinions about interpretation.
Then on top of that, Microsoft for a good while purposefully deviated from the spec to provide "enhancements" that other implementations had to reverse engineer to be compatible.
Then on top of that you had a whole raft of extensions to the spec like WS-Security, WS-Reliability, and (ironically for a "standard") WS-Interoperability. The chances that all of those specific extensions worked across different implementations were even more remote.
So, there were two big reasons in my opinion, that SOAP failed and REST shined:
1. The only way you could really trust that your SOAP client would generate compatible bindings for an API is if you wrote the API. This defeats the purpose of having public APIs. The author's complaint about REST implementation taking hours when his known tools take minutes rings hollow to me because I've spent weeks trying to get two simple SOAP systems communicating.
2. SOAP reinvents the wheel in a much more complicated fashion. Many of those WS-* extensions were written to provide things that you already get if you embrace the existing networking stack (TCP/IP, HTTP, etc.). SOAP was trying to be transport-agnostic, so it ignored the existing stack and had to reinvent most of it itself. One of the key principles of REST is to embrace HTTP and let the existing system do the work for you.
For example, SOAP used POST requests for everything. So all the tools that work with HTTP requests on the wire have to be rewritten to understand SOAP semantics instead of just looking at the request method.
Most of the use-cases in practice of WS-Security is covered by TLS/SSL. Theoretically WS-Security provides more flexibility, and can handle use-cases that TLS/SSL can't, but in practice it leads to over-complicated and less-secure systems.
WS-Reliability exists because you might use SOAP over something that isn't TCP. Most of the guarantees of WS-Reliability you could have gotten for free by embracing TCP.
While I generally like transparent systems, and RPC pretends to be one, I do not like unreliable systems. Since network communication is by its nature very unreliable, it is even worse when some magic technology tries to hide that unreliability and calls itself transparent. So much for RPC.
Yes, RESTful is not always as simple as it seems at first glance. But that article doesn't suggest anything helpful besides arguing why RESTful implementations are difficult. So if someone is going to write the next 'REST is the new SOAP' article, please provide some sane alternative idea (it doesn't have to be a finished implementation).
Both Google and Microsoft have published their API design guidelines, which are very much RESTful. Both companies have applied their guidelines to a broad set of products at very large scale. At least, we have not seen alternatives that would be applicable or scalable to companies at Google or Microsoft scale.
API design is like UI design or car design or clothes design. It is a form of craft, which does take time and skills. API design is for customers to have best experience, it was never meant to save time or work for the designers.
Disclosure: contributor of Google API Design Guide.
> You want to use PUT to update your resource? OK, but some Holy Specifications state that the data input has to be a “complete resource”, i.e follow the same schema as the corresponding GET output.
If you were working in a strongly typed language with RPC calls, you would see the same problems, or symptoms thereof. For example, if you had the two RPC calls storeFoo and retrieveFoo, you'd expect them to both take Foo objects, no? Something like,
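(A code snippet appears to have been lost from the comment here. Given the surrounding text, it presumably looked something like the following sketch, with hypothetical `Foo` fields: both calls speak the same type.)

```python
from dataclasses import dataclass

@dataclass
class Foo:
    name: str
    value: int

_db = {}

def storeFoo(foo: Foo) -> None:
    # Takes a complete Foo...
    _db["foo"] = foo

def retrieveFoo() -> Foo:
    # ...and returns the very same shape.
    return _db["foo"]

storeFoo(Foo(name="a", value=1))
assert retrieveFoo() == Foo(name="a", value=1)
```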
and PUT/GET in HTTP yearns for the same dichotomy.
> So what do you do with the numerous read-only parameters returned by GET (creation time, last update time, server-generated token…)? You omit them and violate the PUT principles?
Yes, and just call it POST, since it's no longer symmetric and violates PUT principles. REST has nothing against POST. Again, this problem would be reflected similarly in RPC:
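(Another snippet seems to be missing here. Presumably it showed the asymmetric variant, something like this sketch with hypothetical types, where the input type deliberately lacks the server-generated fields:)

```python
from dataclasses import dataclass

@dataclass
class NewFoo:
    # Client-supplied fields only: what you're allowed to send.
    name: str

@dataclass
class Foo(NewFoo):
    # Server adds read-only fields (creation time, token, ...).
    created_at: float = 0.0

def storeFoo(new: NewFoo) -> Foo:
    # Input and output types diverge, just like asymmetric POST/GET.
    return Foo(name=new.name, created_at=1234.5)

created = storeFoo(NewFoo(name="a"))
assert (created.name, created.created_at) == ("a", 1234.5)
```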
(or perhaps storeFoo ignores some fields on Foo, etc.)
> creation time
This is a fantastic example, because I actually had this problem w/ a non-RESTful API. We took a generic sort of "create record" and recorded the current time as the "start" time. The problem was when the device lost network connectivity (which was essentially always; it was just a matter of "how bad is the latency"): the creation of the record would lag, sometimes by hours, and the eventual "creation time" it was tagged with was wrong. We should have trusted the device, because it knew much better than the server when the record was actually created.
Now, this isn't going to be the case all the time; sometimes you literally want just the time the DB INSERT statement happened, and that's fine.
> Pick your poison, REST clearly has no clue what a read-only attribute is
POST a "create" request, GET has the newly created fields. Symmetric PUT/GET is nice, but show me where in HTTP it is recorded as an absolute.
> Meanwhile, a GET is dangerously supposed to return the password (or credit card number)
Not only does REST not dictate this, nobody in their right mind should do this. GETs just return the resource. What that resource is, what data is represented — that's up to you in HTTP just as it is in RPC.
> lots of resource parameters are deeply linked or mutually exclusive(ex. it’s either credit card OR paypal token, in a user’s billing info)
and you're in the same hot water, again, just with RPC. If your type system allows it, you can make them mutually exclusive there, perhaps something like,
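(The snippet is elided from the comment here. With a sum type it presumably looked something like this sketch, rendered as a Python union with hypothetical names:)

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class CreditCard:
    number: str

@dataclass
class PaypalToken:
    token: str

# Exactly one of the two alternatives, enforced by the type itself:
BillingInfo = Union[CreditCard, PaypalToken]

def charge(info: BillingInfo) -> str:
    if isinstance(info, CreditCard):
        return "charged card " + info.number
    return "charged paypal " + info.token

assert charge(PaypalToken(token="tok_1")) == "charged paypal tok_1"
```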
but then, that cleanly translates back into a RESTful API's information too. Now, JSON is typically used, and it doesn't really expose a real sum type, which is a shame. You can work around it w/ something like,
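(The JSON workaround snippet is also missing from the comment. A common shape is a tagged union, where a discriminator field says which of the mutually exclusive alternatives is present; sketched here with hypothetical field names, parsed in Python:)

```python
import json

# The "type" tag discriminates between the mutually exclusive shapes.
payload = json.loads('{"billing": {"type": "paypal_token", "token": "tok_1"}}')

def parse_billing(obj):
    kind = obj["type"]
    if kind == "credit_card":
        return ("card", obj["number"])
    if kind == "paypal_token":
        return ("paypal", obj["token"])
    raise ValueError("unknown billing type: " + kind)

assert parse_billing(payload["billing"]) == ("paypal", "tok_1")
```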
and it works well enough. If you can't express it in the type system (in either RPC or REST) then you have to do some validation, but that's true regardless of whether RPC or REST is in use.
> you’d violate specs once more: PATCH is not supposed to send a bunch of fields to be overridden
…how is that a violation of the spec?
> The PATCH method requests that a set of changes described in the request entity be applied to the resource
> With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version.
That's exactly what sending a subset of fields is. It's an ad hoc new media type describing a set of changes.
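In fact this "subset of fields" pattern was later standardized as JSON Merge Patch (RFC 7386): absent fields are untouched, `null` removes a field, nested objects recurse, and anything else replaces. A simplified sketch of those merge semantics:

```python
def merge_patch(target, patch):
    # Simplified JSON Merge Patch (RFC 7386): null removes a field,
    # nested objects recurse, everything else replaces.
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = merge_patch(result.get(key, {}), value)
    return result

doc = {"name": "old", "tags": ["a"], "meta": {"x": 1, "y": 2}}
patched = merge_patch(doc, {"name": "new", "meta": {"y": None}})
assert patched == {"name": "new", "tags": ["a"], "meta": {"x": 1}}
```

So a PATCH body that is "just the fields to change" is not a spec violation; it is a perfectly coherent (and now standardized) set of instructions.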
> So here you go again, take your paperboard and your coffee mug, you’ll have to specify how to express these instructions, and their semantics.
> OK, but I hope you don’t need to provide substantial context data; like a PDF scan of the termination request from the user.
If it's not a simple "delete this thing", that's fine. POST is still there.
> * For exemple, lots of developers use PUT to create a resource directly on its final URL (/myresourcebase/myresourceid), whereas the “good way” of doing it is to POST on a parent URL (/myresourcebase), and let the server indicate, with an HTTP redirection, the new resource’s URL.*
Either is fine.
> Using “HTTP 401 Unauthorized” when a user doesn’t have access credentials to a third-party service sounds acceptable, doesn’t it? However, if an ajax call in your Safari browser gets this error code, it’ll startle your end customer with a very unexpected password prompt.
Only if you accept Basic authentication, and indicate that in your headers. It is, I agree, somewhat unfortunate that the browsers do not let you control this behavior. I don't feel that this is an issue for many people these days. (If you're using something like a JWT, you're not going to hit this, since you'll likely be using something like Bearer for an authentication scheme.)
> Or you’ll shamelessly return “HTTP 400 Bad Request” for all functional errors, and then invent your own clunky error format, with booleans, integer codes, slugs, and translated messages stuffed into an arbitrary payload.
Would you not need to stuff that information into some form of error or exception type in the RPC world? (The error "code" might be standardized, e.g., JSONRPC does this, but the associated data cannot be.) But unless you clearly fall into one of the predefined categories, and even if you do, it doesn't hurt to settle on a standard "error" type/format.
REST is about talking about the resources, about defining formats / objects that clearly indicate whatever state/data they represent. For most people, this is "just" going to be a JSON document, but all too often you see the same concept or state expressed five different ways in not-RESTful APIs. For example, a ZIP code that is sometimes just a string, sometimes a {"zip_code": "12345"}, sometimes a {"zip": 12345}; the author here is failing to understand that a common type exists, and that he should form an actual format around it that he can refer to globally (this field is a zip — and we defined that here, and we know it always looks the same, always).
Frankly, I feel like most of that issue is from JavaScript and JSON, since both discourage any form of static typing of data, and humans naturally just get the immediate job done, but at the cost of having the same concept expressed six different ways.
> Or, well, yes, actually it remember its authentication session, its access permissions… but it’s stateless, nonetheless. Or more precisely, just as stateless as any HTTP-based protocol, like simple RPC mentioned previously.
This isn't what "stateless" refers to. It's stateless in that the requests are (generally speaking) independent of each other. One might need authentication, and you might have to get that somewhere, and that might need a request, yes. Storing data of course changes the state of the server. But you can disconnect and reconnect and reissue that next request without caring.
> The split of data between “resources”, each instance on its own endpoint, naturally leads to the N+1 Query problem.
It can, but again, doesn't need to, and is no more susceptible to this than an RPC API. In an RPC API, you still need to determine what to expose, and how many API calls and round trips that will require…
> “The client does not need any prior knowledge of the service in order to use it”. This is by far my favourite quote. I’ve found it numerous times, under different forms, especially when the buzzword HATEOAS lurked around;
I feel like this was expressed by Fielding to encapsulate two ideas:
* You can push code, on demand.
* You can refer to associated resources, or even state transitions, via hyperlinks.
Most supposedly (but not really) RESTful APIs do neither, so it's entirely a moot point. I think people way over-read this to mean absolutely no prior knowledge, whereas Fielding intended it to mean something closer to "only knowledge of the media types being returned", which is actually quite a bit of prior knowledge. But the two bullets above still allow you to encode a considerable amount of flexibility into an API, considerably more than something that ignores those points.
Frankly, I feel like if you start with an RPC protocol, you'll eventually want some stuff: caching. Retries would be nice, but you need to know if you can retry the operation. Metadata (headers). Partial requests. Pagination. Can you encode all of this with RPC protocols? Absolutely! But it comes up so often, it would be nice to standardize it, and many of these things (caching, pagination, range requests) get a lot easier if you stop talking about operations and start talking about the data (resources) being operated on, which is — to me — the big "ah ha!" of HTTP, and why HTTP is what it is today. That is, HTTP is a highly evolved RPC protocol.
We recently had to design an internal system to manage student data. We went with REST, and the first thing I did was document all APIs in a repository using Swagger.
It's not really true that there are no standards: they defined OpenAPI, it has a nice browser-based editor to update the YAML files, and it's easy to read and maintain.
All changes to the APIs are done via pull requests, so changes are reviewed.
The HTTP verbs are also a non-issue: there are 4... FOUR that you need to match with CRUD. Is that hard? Even if it is, read the docs.
Well, of course comparing state-of-the-art JSON-RPC with spaghetti REST will give JSON-RPC a head start.
In truth, the error is in the premise: “we switched to REST because everyone’s doing it / it’s the way web services are developed now”.
Well, no, and that’s the issue with most “REST” APIs that are actually JSON-RPC with HTTP verbs (and that ignore that using JSON for REST is impossible to begin with): they were written to hit a bullet point on a feature list.
I agree with almost every point of this article. REST is terrible. The only thing I don't agree is that SOAP is bad. Web services are awesome if used with some limitations. Major platforms have autogeneration, so you are spending zero time to build a robust API. I've yet to see anything as simple and fast as Web services. I think that world migrating to REST is a huge mistake.
I’m fine with REST as a transport mechanism though the resource overload constraint (Swagger) is the sort of thing that might make me dump it.
If I want to overload get with a custom search end point, I should be able to do that without someone’s idea of a perfect resource definition telling me I’m wrong.
This is the sort of thing that made me remove WCF from my resume.
Simple and flexible. This should be any standards mantra.
Here's the secret of REST: it's all about the information and definitely not verbs.
The author begins by confusing HTTP with REST. I understand that HTTP is probably the most common protocol for implementing RESTful architectures, but since the author is devoting so much time to trashing REST, they ought to correctly make this distinction. Hence, the complaint that HTTP verbs aren't adequate to express certain problems isn't a critique of REST, it's a critique of HTTP. This problem is present throughout the entire article.
Then, the author complains about the specifications of REST (i.e. "What a scoop in the software world" with respect to the client/server architecture). Not all applications are well-suited to client/server architectures -- those applications are not well-suited for REST. Some principles of any architecture may seem simple, but I don't think that's an inherent problem.
The author really revealed their ignorance with:
>Rest is awesome, because it is STATELESS. Yes there is
>probably a huge database behind the webservice, but it doesn't
>remember the state of the client. Or, well, yes, actually it
>remember its authentication session, its access permissions…
>but it’s stateless, nonetheless.
REST is not stateless, RESTful communication is. This just means that each request from the client contains the information necessary to create an appropriate response -- and that this is possible for all resources. That implies that issuing temporary authentication tokens is not RESTful, since an initial authentication request is required to create a second "meaningful" request. This is a desirable trait for many classes of applications.
However, all of this being said, the proposition 'REST is sometimes used inappropriately' is true, and it is interesting and constructive to identify situations where REST is being used inappropriately. It isn't clear to me that RPC calls are appropriate in all (or even most) places REST is used.
Rants like the one in this article are really toxic, simply because they make people think this is an acceptable way to approach technological problems. What would actually be useful is to:
* identify and find examples of misuses of REST
* try to create a taxonomy of REST abuses and recommendations for those scenarios
I think REST's ambiguity and arbitrary nature is a real thing for those that are opinionated. (not me so much) Strange but JSON RPC did away with those types of arguments but wasn't used maybe because the name sounds complicated (two acronyms too much)? Now we have GraphQL that seems to be the best answer in ease of use of consumption and design. Rest easy.
The old spec, https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html, seems to say that 401 can mean either unauthenticated or unauthorized. If you haven't signed in, it means unauthenticated. If you have signed in, then it means unauthorized. Were the status codes written before authenticate and authorize had the narrow meanings of today?
Meanwhile, 403 seems to be reserved for when the server just generally doesn't want to do what you're asking it to do. It may tell you, it may not, it doesn't have to tell you, so there. "Authorization will not help..."
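In today's narrower vocabulary, the split the spec gestures at comes down to a simple decision, sketched here (hypothetical helper, not from any framework):

```python
def status_for(authenticated: bool, permitted: bool) -> int:
    # 401: we don't know who you are; (re-)authenticating may help.
    # 403: we know exactly who you are, and the answer is still no.
    if not authenticated:
        return 401
    if not permitted:
        return 403
    return 200

assert (status_for(False, False), status_for(True, False), status_for(True, True)) == (401, 403, 200)
```

The "Authorization will not help" phrasing in the 403 description is the spec's way of saying: don't bother retrying with different credentials.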
REST is freedom (with all its downsides when you can do what you want), SOAP is like military (with all its strong rules).
It's like comparing a micro web framework with a full web framework.
I think it always depends what you want to do in which scale with REST. What's great for a single hobby developer would be bad for a big world wide corporation.
If you really want more rules, more restrictions and more boundaries for hundreds of developers working on your project, you should definitely look at something built on top of it (GraphQL, OData) or one of the replacement architectures like gRPC, if you have high demands on your infrastructure.
For me as a developer, I'm always happy when things are not overcomplicated at the beginning, but some developers like more structure down the road. At the end of the day you need to get stuff done, and I think it's better to educate yourself about REST in general than about some other big framework which you then need to implement somehow everywhere.
This is really tangential, sorry, but I'd like to mention that the (US) military is strongly opinionated in a relatively small number of ways. Additionally, the rules are self-enforced throughout most of its structure, and almost none of the rules being broken will cause the system or its components to fail. The US military is actually incredibly flexible, because the (mostly) self-enforced strong opinions inform the decisions of subsets of the military all the way down to the individual level, allowing each component to be effective almost regardless of the health of the rest of the system. SOAP was incredibly inflexible. I think perhaps a better analogy would be to compare SOAP to some defense contractors, where things must work a certain way, or they grind to a halt and become very expensive to debug and triage. Also, incidentally, I'm interested to hear about your experiences with gRPC. I have just played with it, but it sounds like you had some issues scaling with it? Or didn't like it so much? I much prefer REST because of its inherent flexibility, as you do, but I am interested to hear from people who scaled with gRPC.
It's only easy if the system is built once and never changes.
For example, does an "item sold" event from 8 years ago have the same data fields and fundamentally mean the same thing, with the same outcomes, as the ones recorded today?
If not, you've got to start talking about stuff like writing multiple versions of code to play-back multiple versions of events slightly differently. Or at least have two versions in place, while some background process "upgrades" the old events.
And that's assuming you can even imagine a good way to re-express old obsolete behavior into a direct modern equivalent.
In contrast, I'm suggesting a mutable model which emits events. The events are used for triggering current logic, and are kept primarily for auditing/data-mining purposes.
It seems to me REST offers no real advantages over other specifications/protocols from a server-side perspective. And the reason for REST's proliferation has been the rise of SPA JS frameworks.
I'll be honest and say I refuse to read this based on its title. I've used SOAP somewhat recently. It's shittier than anything I've done in programming and I've done plenty of PHP 5 stuff. There's so much inane stuff in SOAP, and it's rare that a SOAP client in one language will work properly with a SOAP server in another.
Yes, it's HTTP-based APIs. But they've perverted the HTTP spec with weird extensions like SOAP attachments. So good luck doing SOAP operations by hand.
Not to mention it's insanely poorly specified. For example, SOAP requests can be GET or POST, but most things only support POST anyway. Why not specify it! God only knows.
I really hate SOAP a lot. Comparing it to REST for some Medium rant makes me irrationally angry.
REST is an anti-standard. It doesn't tell you what to do, it just gives you some guidelines of sorts. And honestly, you don't even have to follow them - the spirit of REST is in simple JSON + HTTP-based operations.
POST /accounts/ {username, contact_email, password} -> {account_id}
POST /accounts/:account_id/subscriptions/ {subscription_type} -> {subscription_id}
POST /accounts/:account_id/send_activation_reminder_email/ -> {}
DELETE /accounts/:account_id/subscriptions/:subscription_id/ {reason, immediate} -> {}
GET /accounts/:account_id/ -> {full data tree}
...Or an unending number of other interpretations. You can, for example, flatten the tree and put `/subscriptions/` at the root. Doesn't matter. It isn't that ridiculously hard to understand or formulate.
And if you want, you can make everything perfectly RESTful and make everything a resource. Kubernetes and its API objects concept are great examples. Deploying services with POST/PATCH/PUT? Just add some layers of abstraction.
Having REST bindings for many languages is not a ridiculous thing to do either. If your API is huge, autogenerate it somehow. Swagger/OpenAPI exists. Or, maybe don't make bindings at all. If I wanted to write payment handling code for Stripe, I'd feel no hesitation doing it by making the HTTP requests directly. It's not hard or scary. Them having rich, quality libraries for many languages is a marketing decision, and a damn good one in my opinion. It does things that no cross-language abstraction layer could to make life as simple as possible.
Honestly, people can do whatever they want, but I genuinely do not get why REST-based services are considered so bad. Honestly, if anything is difficult nowadays, it's authentication. I'm a big fan of OAuth2, but that's a thing that isn't fun to write client libraries for.
(I have. The problem isn't that it's complicated; the problem is that nobody implements it right. Google's OIDC implementation has all kinds of bugs. Losing scopes on refresh is incredibly annoying.)
and when someone joins the team and raises endless arguments with you about how what is already working isn't REST, because their interpretation is different, you have problems.
and when you have a client that insists that you're not following REST conventions because your endpoints take POST for everything and don't take actual PUT commands, and they are not going to retool their interpretation of REST to match yours, you have problems.
These problems would exist anyway - interacting with other people usually brings some problems with it - but putting things under this banner of REST implies that there is some 'right' way of doing things which contributes to the interaction problems.
Just acknowledging "hey, this could have been done 4 ways, this is the way that was chosen, deal with it" - regardless of whether the REST acronym is involved or not - should be the way to go, but an implied 'standard' creates more hurdles.
I could say the exact same thing about RPC. API design is difficult, largely in part because programmer types often refuse to acknowledge the "design" part. REST actually improves this by providing a sane set of ideas to base your web API design off of. It only makes it worse if you want it to.
REST has plenty of nice advantages people ignore too, like for example, often with REST APIs, ACL and authentication can be handled much easier because you can use URLs as a primitive for permissions.
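The URLs-as-permission-primitive point can be made concrete with a toy ACL. A hedged sketch (hypothetical users and paths): because every REST resource lives at a URL, authorization can reduce to prefix matching, something operation-named RPC endpoints don't give you for free.

```python
# Hypothetical ACL: each user is allowed a set of URL prefixes.
ACL = {
    "alice": ["/accounts/42/"],
    "admin": ["/"],
}

def allowed(user: str, url: str) -> bool:
    # Permission check is just a URL-prefix match on the resource tree.
    return any(url.startswith(prefix) for prefix in ACL.get(user, []))

assert allowed("alice", "/accounts/42/subscriptions/7/")
assert not allowed("alice", "/accounts/99/")
assert allowed("admin", "/accounts/99/")
```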
When I see articles like this, I recognize a fellow expert.
The same way JSON is better in practice than XML, despite missing some key tooling, JSON-RPC 2 is better than SOAP or REST in practice, because it was designed by someone with experience who had to actually get shit done.
Every time someone tells me that I don't get REST and links me to Fielding's dissertation, all I hear is the bleating of goats and the peals of Dunning-Kruger.
As an industry, I do think we are extremely quick to ditch old lessons learned in favor of the new shiny, which still gives us the same ol' problems. We like to re-dress the same problems in different coats, but still have the same problems at hand. We think the "new shiny" is solving the problems, just because there's not yet a structure highlighting the problem. And therefore it's "nice", but it will eventually creep up on you. Also, if you had a bad experience with tech X, you will hate it and disregard everything it actually solves and does well.
So for reasons unclear you tried to replace a working system with a design pattern you didn’t really understand, and you were disappointed with the results? What were you hoping to get from this madness?
Honestly, REST is a great idea. Yes, people get religious about HTTP verbs and URL structures. But you don’t need to. Prioritise clarity to humans above everything else, and you’ll be fine for the most part.
> Honestly, REST is a great idea. Yes, people get religious about HTTP verbs and URL structures. But you don’t need to. Prioritise clarity to humans above everything else, and you’ll be fine for the most part.
I've seen this argument before (I've even made this argument before), but I'm becoming increasingly unconvinced by it. Let me explain my thinking:
1. If there's ANYTHING that sets REST apart from a generic HTTP-and-JSON protocol, it's the focus on basing endpoints off the concept of resources and collections, and using HTTP verbs to determine the type of action to be performed. (And also, possibly, internal hyperlinks, but nobody seems to actually pay attention to that part.)
2. Depending on your application, somewhere between "some" and "most" of your API traffic doesn't deal with things that easily map to resources and collections, doesn't involve actions that easily map to HTTP verbs, or both. One of the most common examples is sessions. When you log in, are you creating a session resource? Is that a sub-resource of the user resource, or an independent collection in its own right? Or are you actually retrieving a session token from the user resource? When you log out, are you actually deleting that session? Or updating the session to remove its authentication?
3. It's easy to say "prioritise clarity to humans", and I totally agree, but in that case, you'd just have /login and /logout endpoints: you'd do a POST to the former with a valid username/password to log in, and a POST to the latter to log out, the end. That's super clear, and super usable, but nothing about it is RESTful. And the same thing comes up over and over again. For every time I find myself doing simple CRUD operations on a collection of concrete resources, there are 10 times I find myself needing to do complicated, ambiguous operations on things which aren't.
4. But if I relax and focus on making a clear, usable API, then almost by definition I have to do so by not being RESTful. It's easy to say that you shouldn't get religious about HTTP verbs and URL structures (and I agree), but since HTTP verbs and URL structures are 100% of what people seem to mean by "REST" in actual practice, then...in what sense is REST a good idea? It's like drinking rum and coke without the rum...and the coke. What's left? Nothing Roy Fielding would recognise, in my view.
I'm happy to entertain the idea that REST is useful when you are doing CRUD operations on simple, concrete resources (updating prices on an online store; posting comments to a blog; whatever). But in my personal experience, most people seem to view REST as the ideal way to handle making an entire API, even the bits that are clearly nothing more than RPCs triggered via HTTP.
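To make point 3 concrete, here is a framework-free sketch of that plain /login–/logout design. Everything here is hypothetical: the user store, the token scheme, and the handler names are illustrative stand-ins, not a real implementation.

```python
# Sketch of the "clear but not RESTful" session design:
# POST /login issues a token, POST /logout revokes it.
import secrets

USERS = {"alice": "s3cret"}  # username -> password (demo store only)
SESSIONS = {}                # token -> username

def post_login(username: str, password: str) -> dict:
    """Handle POST /login: verify credentials, return a session token."""
    if USERS.get(username) != password:
        return {"status": 401}
    token = secrets.token_hex(16)
    SESSIONS[token] = username
    return {"status": 200, "token": token}

def post_logout(token: str) -> dict:
    """Handle POST /logout: revoke the token."""
    SESSIONS.pop(token, None)
    return {"status": 204}
```

Note that the ambiguity from point 2 never arises here: neither handler has to decide whether a session is a resource, a sub-resource, or a deletion target; it's just two clearly named operations.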
REST is a design pattern, and like all design patterns, it can be over-applied.
Bits of your application that have natural nouns - “post”, “credit card”, “product” - fit naturally to endpoints. Log in doesn’t, so don’t try shoehorning it in. If only 80% of your application can be defined as REST level 4 but is clear and easy to use, who cares? And if your domain doesn’t fit well with REST, then don’t use it!
Personally, if I have an object like “product” and want to use some custom verbs, I include them in the “links” section of the product response. My authentication process is in the documentation, with /login and /logout URLs.
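A sketch of what such a response body might look like, with hypothetical field names and URLs (the "links" layout shown is one common convention, not a standard):

```python
# Hypothetical "product" response that advertises custom verbs
# in a "links" section alongside the ordinary resource fields.
import json

product = {
    "id": 42,
    "name": "Widget",
    "price": 9.99,
    "links": {
        "self":      {"href": "/products/42",           "method": "GET"},
        "archive":   {"href": "/products/42/archive",   "method": "POST"},
        "duplicate": {"href": "/products/42/duplicate", "method": "POST"},
    },
}

# Serialise as the JSON a client would actually receive.
print(json.dumps(product, indent=2))
```

The nice property is that clients discover the custom operations from the response itself instead of hard-coding URL patterns, while anything that doesn't fit (like authentication) stays in the documentation.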
My first priority is always to make something usable by humans. I try to do that using REST wherever possible, because usually those goals go together. When they conflict and I can’t resolve them, I choose usability by humans over strict REST principles.
> but in that case, you'd just have /login and /logout endpoints: you'd do a POST to the former with a valid username/password to log in, and a POST to the latter to log out, the end. That's super clear, and super usable, but nothing about it is RESTful.
And this is the beauty, compared to SOAP: you can mix-and-match. Make your resources as RESTful as possible (but e.g. omitting passwords/cc numbers), and implement the stuff that's not resources in good ole plain-json way.
For logins there's no other choice anyway, given the tons of different ways: some are email+password, some username+password, some add a tenant parameter to the mix, some add one of the dozens of captcha solutions, some want Oauth, various 2fa schemes, ...
Came to make this exact point. Use a framework to automatically give you a solid, dependable REST api focussed purely on crud operations on your core data model. Anything weirder than that (which I don’t believe will be ten times the volume for the vast majority of use cases), do in the cleanest way possible, even if that means you end up with a non-REST api wrapping a core RESTful one.
I think the most realistic and fast solution is to provide a Swagger / OpenAPI spec. As a result you get the best of both worlds: easy consumption and a schema which lets you generate clients.
I'm not sure if GraphQL will ever replace REST, as REST is suitable for most cases. REST is easier to grasp and still has great tools and SDKs around. I know that there are fields where GraphQL is better, but in general I think REST will remain No. 1.
Article reads like it's from a grumpy, snarky old-age dev who lives in his XML-land filter bubble of the 1990s. Go back to XML-RPC and SOAP please. This site is getting more of these corporate old farts lately that don't want to adjust at all. I wish for the time back when this site was about startup news, VCs, and new exciting things, and not about whining and boring stuff.
We've asked you many times to please comment civilly and substantively, and yet you've continued to repeatedly violate the guidelines, so we've banned the account. We're happy to unban accounts if you email us at hn@ycombinator.com and we believe you'll start following the guidelines.