REST is the new SOAP (medium.com)
581 points by sidcool 7 months ago | 338 comments

Why is REST so popular? Because it's easy to implement and works for lots of use cases. I'm sorry that you found places it doesn't, but in the real world, having been through that SOAP pain it's being compared to, I'd say there's not even a comparison. Everyone seems to want to find a reason to dislike product/technology/feature X but in this case, X is just better than anything we've had for a 90% adoption case.

What is with medium.com? Why is it that so many links to this site are full of hateful, hipsteresque opinions trying to sound smarter and more insightful than they actually are?

I avoid medium posts as much as possible. Everyone is an expert on there with very strong opinions telling me how every technology older than 2 years and not written in javascript is obsolete/dead/not the right way/new <insert a dead technology>.

And what's with the UI on their publications? They take up the top 25% of the screen with the branding and navbar and the bottom 10% asking me to sign in, and both stick on the screen. Who approved that?

Firefox reading view could have been invented for Medium! It doesn't make the articles any better though.

Personally I will keep using REST. If I ever find something is too CPU or network hungry with http/JSON I will take a look at a binary protocol.

For me REST has opened up the world of web applications to simple integrations. It is what makes simple, single use case web apps useful.

Exactly. It doesn't have to be one or the other. Why can't they both be suitable for different use cases.

I have another opinion about that. Read as many Medium posts as possible, but keep a critical mind. Medium is nothing but a blog platform, not a science platform, not a newspaper. Medium is important because it makes it easy for everyone to post her/his mindset or agenda. No one has to agree with anyone's opinion. But it's important to know about those opinions, because it makes it easier to have a dialog.

>"And what's with the UI on their publications"

Agreed, it's a content platform where the content fights for attention with the medium.com branding. How many people do you really "never miss a story from"? I generally avoid medium.com for this reason alone.

I'm further puzzled when I see companies using medium for their company blog as well. All I can figure is it must be recommended in some user guide to "growth hacking."

I use the stylish add-on for Firefox and remove the bars.


alternatively, use a bookmarklet to remove the floating divs... name it something like '1. rm float'

then it's only Alt-B -> 1 away, in Firefox

Thanks for this - the Stylish add-on for Chrome works nicely too!

Medium posters borrow clout from medium which makes their post have clout. If you think of medium posters as just bloggers with their own domain or blogspot, you'll see them differently.

Who are these people who think that writing on Medium gives you clout?

Isn't it just a content container? I mean, saying "I avoid Medium" sounds like saying "I avoid WordPress" or "I avoid Blogspot", speaking of content quality and not look & feel. As for the latter, I'm logged in, so I have to suppose your percentages (25% and 10%) don't apply to the logged-in case, also because as soon as I scroll down, all those frills disappear and there's only the text.

It's a content container. A content container that for some of us is now stereotyped as "content in here is often low-quality", which means that in the vast amount of content on offer each day we're more likely to skip links leading to medium.com. If people regularly spread bad blogspot links around, blogspot would have the same reputation. (I'm sure there is tons of spam on blogspot, but when people share blogspot links in my circles they usually are blogs by people that have been at it for ages and care more about content than appearances, so for me blogspot is a high-quality signal)

Being in a known content container is great if you don't have your own "brand" and as long as people associate the container with good content. If they don't, or if your content is way above average, it pulls you down (which provides motivation for below-average writers to write on them, hiding in the crowd, and motivation for good writers to leave)

It is a centralized content container, which makes click-bait titles like this one better material. So you can create a controversial opinion on a subject, get views, make it an entertaining even if exaggerated read, and it gets featured among other pieces, especially if you use the right popular tags; then your post is in people's Medium app and newsletter subscriptions.

WordPress, on the other hand, is open-source software you can host yourself, so you can't just slap some tags on a post and get featured at the top of a newsletter. This is also why some authors prefer Medium: it's easy to set up and easier to get an audience with, but then you have to resort to these marketing techniques to drive your views up.

Disclaimer: I have written some medium posts with catchy/controversial titles to test said techniques, call me part of the problem.

I use the "Make Medium Readable Again" extension for when I _am_ interested in the content.


I happen to have worked with the system the author coded/maintained. We're not in touch and I'm not here to defend him, but he's no hipster. Rather, he had to integrate a lot of heterogeneous services, as that system was acting like a hub between many departments in the company, with various tech skills and resources. If anything, I suspect he's simply unsatisfied by changes that he deemed unnecessary.

As an aside, the xmlrpc endpoints of the aforementioned system worked fine and saved us time.

Xmlrpc is incredibly underappreciated.

Xmlrpc is simple and it works.

Xmlrpc is soap without the bullshit.

I built lots of personal apps with Flash/Flex front ends that talked to Python backends over xmlrpc; it let me quickly whip up UIs for my Python apps

Having implemented many web service APIs in both SOAP and REST I have the same opinion.

XMLRPC or JSONRPC seem to be the happy middle ground.

The posted article hit home with me as I had to re-implement working SOAP services in REST because you know, management buzzwords and new shiny.

I quickly found, as the article articulates, as soon as you enter the land of verbs and workflows REST starts to stumble and becomes very network chatty. And when that network chattiness is backed by other network chattiness the grumblings of why the hell you can't just return a deep object graph of data from an endpoint ensue.

But you can with REST. Most of this conversation is a strawman against REST. Creating or changing resources can impact other resources. It's obvious / logical. Responses can include the things that they impacted along the way. Much of this thread amounts to "I used this shitty REST API once" or similar for RPC.

Well, it depends on how you reckon REST. I can assure you the real world issues I've run into don't have anything to do with the frameworks used. The impedance mismatch between what one would consider a sane API that could just as easily be implemented as a compiled library and the mappings of that API to HTTP verbs and what is considered proper REST were where the problems were.

My question would be, outside of convenience to SPA developers, what do you see as the specific advantages to REST over XML/JSON RPC or SOAP?

I don’t really remember SOAP since that was over a decade ago, but what I like about REST is that it harmonizes the front-end and back-end structures of the application and it makes things predictable. It also puts up some guardrails against obviously dumb behaviour like deleting records via a GET and, with JSON API, makes it possible for things to snap together once you’ve got the structure right. Ember Data + Rails set up to follow JSON API is just unbelievably productive. Pagination, filtering, optionally including other resources, linking (both to the related resource itself and to the relationship itself): it’s really fast, consistent, powerful, flexible, and secure. It’s not always performant, but when performance is important I make one little RPC or nonstandard REST endpoint (say, return a link to a big TSV blob) and I move on with life.

It lacks conventions though, so every API is more quirky and even sloppier than a bad REST API. At least with REST there’s some existing structure and it’s not completely up to the imagination of someone in a hurry.

btw SOAP was supposed to be "xmlrpc standardized"

I remember well the meeting when things started to go off track...

I’m actually curious now that you mention it but don’t expand on it. Was the road to hell really paved with good intentions?

I don't remember the history prior to the meeting well enough today to give an authoritative version. So for fear of getting some of it wrong I will say nothing. However, I'll say that the answer to your question is "no".

I agree.

If you control client and server, and the server's functionality will for the foreseeable future be limited in scope, xmlrpc is the boss.

We had a django/rest-based internal microservice. It was a pain to build and maintain. We switched to xmlrpc (it's in the Python stdlib), removed tons of intermediary code, and wrote only a few bits of new code.

This talk by Jonas Neubert opened my eyes to how xmlrpc can be the glue for Python (which is already glue).


> that SOAP pain

The pain you're referring to is relative to the language you used and when you touched it. SOAP is a comprehensive and well-defined specification, and when implemented properly you forget it's there because it just works.

After about 2003 major vendors had their implementations locked down pretty well. In Visual Studio you just implement a basic controller and the remainder is configuration. If you wanted to consume something from Biztalk, PeopleSoft, any Oracle Product, or any other Enterprise product you could just add a service reference to a WSDL URI and a tool would generate your interface classes and DTO classes for you in your language of choice.

In the Open Source world things were very different. Whenever I would provide a service to be consumed by a vendor, I would provide reference implementations in C#, Java, Python, and PHP. I would spend about 15 mins on C# and Java, then the rest of the day fiddling with Python and, to a lesser extent, PHP.

PHP and Python have had SOAP libraries for a decade but they require considerably more effort to even consume SOAP. I have never tried to stand up a SOAP Service with them but I can't imagine it's any good.

Around 2012 I remember working with a partner company that was using Rails for their platform. It was an absolute nightmare for them to integrate with our existing SOAP service layer. SOAP protocol libraries were the least of their issues: no client certificate authentication in their HTTP libraries, no serious XML support. They wrote their own implementation from scratch.

Well, that's actually where the SOAP pain is: it used to be that you couldn't make any good use of it unless you were using walled-garden vendor software. A lot of people dislike Windows, Visual Studio and anything else made by Microsoft. Same goes for Oracle. So then you are left with everything else, which basically means: everything without SOAP, Enterprise editions of runtimes, service buses and the likes.

There are a lot more developers out there not working for an enterprise and not using those tools. Especially the developers doing open source work or doing small scale work.

While nowadays it's fairly easy to make a Java Spring Boot application consume and serve SOAP, with automated WSDL imports and all the WS-* specifics, this was pretty much never the case with anything new and free (as in speech).

> when implemented properly you forget it's there because it just works.

I think that's the link missing for most developers. SOAP was intended to allow machine-generated SDKs to remove all of the sharp edges of dealing with it, and in that regard it was largely successful. What brought it down was the advent of non-"enterprise" web development—development happening outside of a .Net or Java IDE that generated code for you. If you ever had to handroll a wrapper for a SOAP endpoint though, god help you. I honestly believe SOAP was the thing that made XML seem uncool by comparison to JSON and REST. JSON still doesn't solve data representation problems as well as XML, we've just learned to accept "good enough" in its stead.

> I have never tried to stand up a SOAP Service with them but I can't imagine it's any good.

Python has a decent SOAP server implementation with one-to-one request/response schema modeling: https://github.com/baverman/dropthesoap

> PHP and Python have had SOAP libraries for a decade but they require considerably more effort to even consume SOAP. I have never tried to stand up a SOAP Service with them but I can't imagine it's any good.

I've tried suds and zeep. They both didn't really work in my specific cases. My strategy from now on is to write a bit of code in Visual C#, then analyze the xml traffic, and then generate that same XML using Python/PHP. The SOAP protocol is actually pretty simple once you know how it works.

SOAP has a lot of problems, but it's pretty amazing that you don't need a client side library to use it in Visual C#.

> Why is REST so popular?

REST is not popular; there are only a few RESTful public APIs in the wild. The rest (unavoidable pun, sorry) are simple HTTP APIs with JSON serialization which map, with various degrees of coupling, to the internal data layer.

The main cause: you don't need REST's limitations to achieve the same goals.

The OP has a strong opinion about why we need to reimplement RPC over HTTP every time, for every application, and write clients for every popular platform to be able to consume it.

Every time I use REST from any Google Cloud API, my hair stands on end, and you'll definitely have nightmares if you look into their Python client code.

> The rest (unavoidable pun, sorry) ...

The remainder.

Good name for a REST library. ;-)

This is the no-true-scotsman fallacy applied to REST. In point of fact, if REST has been around for nearly 17 years and there are so few of these APIs in the wild, then it's been incredibly unsuccessful in its aims.

The truth is that these sites are, in fact, RESTful and that REST is just mediocre at what it professes to do. The requirement to treat all operations as resources + HTTP verbs is the primary leaky abstraction that everyone seems to want to gloss over.

Link to these "actual" RESTful public APIs?

I've been curious to see one (for years). Everyone talks about how this REST service is being done wrong, but few will link to services being done "right".

The article laid out a detailed explanation of shortcomings of REST. You might not agree with everything in it, but you offer no real rebuttal, instead dismissing it as a “hateful hipsteresque opinion,” without acknowledging any of the actual criticisms the author gave. This strikes me as unfair, haughty and lazy.

There's quite a "circle the wagons" mentality common in programming nowadays, following politics' lead I suppose.

Nah, that was in programming well before politics.

They can have the same root cause though: someone's identity feeling threatened.

> ...having been through that SOAP pain it's being compared to, I'd say there's not even a comparison

OK, that is indeed the most usual response to "Why REST?". The main reason why people like REST is "because SOAP". It's a false dichotomy that the industry has fallen for.

Oh, yeah, and you can run the GETs directly in your browser/cURL. I like that part too, but it only gives you so much.

REST is more of a philosophy, than a standard. Hence everyone does it differently, and you have no chance to use the same library to talk REST with multiple different services (unless you make that library an overcomplicated beast).

XMLRPC? It's a universal standard, that is just a few pages long and everyone can grasp it in their lunch break. Nobody would complain that your API is not "XMLRPC enough". It works (almost) the same way everywhere. You can get the XMLRPC library that's been built in since Python 2.2 and be reasonably sure that you are going to be able to talk to a random modern XMLRPC API. You'd have other such libraries for every major language. Ditto for JSONRPC, if XML sounds too scary (though it doesn't really matter much - it's a mostly transparent implementation detail).
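For the record, the stdlib round trip really is short. A minimal sketch using the Python 3 stdlib module paths (the port assignment and the `add` function are just for illustration):

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Serve one function on an OS-assigned port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any XMLRPC client in any language could call this; here, the stdlib one.
port = server.server_address[1]
proxy = ServerProxy("http://127.0.0.1:%d" % port)
print(proxy.add(2, 3))  # prints 5
```

No schema files, no code generation: the wire format is an implementation detail both sides already agree on.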

I'd wish people would stop bringing up SOAP as an excuse for REST. Yes, it was worse, but that does not mean that REST is particularly good.

It's an architectural style.

REST allows you to make decisions about the http interaction out-of-band, where SOAP provides (and often requires) the ability to describe all decisions about types and parameterization and exception cases in-band.

The REST way allowed one to get started with something small and simple that people could agree to just by talking it over together. With SOAP you had to make all those decisions up front and put it in the specification. I believe SOAP is so complex that it's an analog to CORBA/IDL.

REST is (at least initially) simpler and less specific than SOAP. That's its strength.

> I believe SOAP is so complex that it's an analog to CORBA/IDL.

SOAP was intended to facilitate the same programming model but more loosely coupled. So that's not at all far off. And for what it intended to be—a specification for machine-generated bindings and SDKs—it was quite successful. It just didn't make the usability leap over to web development that happens outside an IDE.

CORBA? That's a term that gave me a sudden flashback to debugging nightmares in the 90s.

I think “hipsteresque opinions looking to sound smarter and more insightful than they actually are” is a very good definition for much of Medium’s content.

It may have to do with the platform being used by individuals trying to create a brand of themselves, resulting in a high percentage of sensationalist and controversial posts. Along with the necessity to write on a regular basis.

Programming Paradigm Becomes Popular B/C it gets stuff done -->

left: it gets stuff done because it's smarter

right: it's bad for you, it's actually getting less done

middle: didn't read all that stuff, busy getting stuff done.

didn't read all that stuff, busy getting stuff done.

I agree with this. When I end up on a new project and I'm not the lead, and there is a lead who is pedantic regarding how they want their URLs crafted (or wants to implement a complicated query pattern, or introduce an extra layer of objects to satisfy an abstract notion of purity), I'll just go with the flow. Accidental complexity, pattern seeking and cargo-culting are personality traits of many programmers, and I've come to the conclusion that it's best to accept this, and get on with the job of actually delivering value.

The problem becomes evident when you return to the results of the flow a year later to change something, and wish you'd been more mindful when getting things done.

Cargo cults are not good. Finding and adhering to regular patterns that logically underlie what you do saves time and mental effort, while also preventing certain classes of errors. (No, these are not GoF design patterns.)

I’m certainly not advocating spaghetti code, balls of mud, or a thoughtless lack of architecture. What I’m specifically referring to is wilful anti-pragmatism that betrays a sort of insecurity...it’s hard to define but easier to recognize.

Finding the balance between pointless over-engineering and rigidity on one hand, and good software design on the other, consistently, is what seems to distinguish great programmers I’ve worked with, from those who are “just” very good.

But, like I said, it often isn’t worth the trouble fighting about these things, even when we see them, and getting on with the job is more important than ensnaring oneself in religious debates on projects.

Interesting viewpoint. But I doubt my mental capacity can contain every brainfart that a misguided but smart person has.

You might like http://jsonapi.org which defines a standard that is fairly sane.
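For the curious, a minimal JSON:API-style document looks roughly like this (shape recalled from jsonapi.org; the resource names, IDs, and URLs here are made up):

```python
import json

doc = {
    "data": {
        "type": "articles",
        "id": "1",
        "attributes": {"title": "REST is the new SOAP"},
        "relationships": {
            "author": {
                "links": {
                    "self": "/articles/1/relationships/author",  # the relationship itself
                    "related": "/articles/1/author",             # the related resource
                },
                "data": {"type": "people", "id": "9"},
            }
        },
    },
    "links": {"next": "/articles?page[number]=2"},  # pagination by convention
}

# One top-level shape for every resource, so clients can navigate it generically.
print(json.dumps(doc, indent=2))
```

The point of the convention is that pagination, filtering, and relationship links all live in predictable places, so client libraries don't have to be rewritten per API.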

Couldn’t agree more. I guess this is one of these insights that come with experience ;-)

left: optimist right: sceptic middle: opportunist

Having shitty paradigm is certainly better than having none at all. But at some point we have to evolve.

me: already groaning at the thought of fixing stuff after that one who didn't "read all that stuff," the "left", and thankfully the right was too busy pontificating to do anything so there's no fixing to be done.

Read. The. Damn. Stuff.

Much easier to fix other people's code if they never got the chance to write it in the first place.

>Why is REST so popular? Because it's easy to implement and works for lots of use cases.

Could have given the exact same non-argument for SOAP -- which in its time dominated corporate services.

Even if it's "easy to implement and works for lots of use cases", that's a moot point if there's something even easier to implement that works even better for real use cases.

Why stop at "easy" when you can have easier AND more coherent?

Besides, REST is anything but easy. Case in point: almost nobody has ever correctly implemented the original REST spec -- that's why everybody calls their implementations "REST-ful"; they are loosely inspired deviations that cargo-cult a lot of useless junk.

Supposedly the "real illuminated REST" (like real communism) doesn't even concern HTTP; it's a philosophy beyond web services. And yet it's all the web-related garbage in the introductory examples that everybody follows (or tries to).

“Real illuminated REST”? The REST spec?

It’s an architectural style defined by Roy Fielding’s thesis (he was also the editor of the HTTP/1.1 RFC). It attempted to describe the Web’s architecture in neutral terms and how it was derived by combining previous styles.

Some people took this and crafted a quasi-religion out of it, but that’s partly because vendors in 2002 were all lined up trying to replace the web with a CORBA equivalent and then almost succeeded with SOAP/WS-*. People got strident to fight the dollars that were lined up. It’s easy to forget there was no powerful web/internet community with social media and blogging platforms in those days; it was all mailing lists and a couple of conferences vs. marketing budgets, sales teams, and agenda-wielding engineers on standards bodies from Microsoft, IBM, BEA, Sun, HP, etc.

All RESTful means is that something is attempting to conform to the style. There never was a spec.

>It’s an architectural style defined by Roy Fielding’s thesis (who also was the editor of HTTP/1.1 RFC). It attempted to describe the Web’s architecture in neutral terms and how it was derived by combining previous styles.

Its application to what are essentially RPC needs is what I consider cargo cult.

> What is with medium.com

The people you describe used to have blogs with crappy Wordpress themes that barely got noticed by Google, now they are on a big site so get discovered I guess?

It's full of click-bait because of the opportunity for articles to go viral. Spread is built into the platform, so you have people writing the most click-bait articles possible.

I don't get why the main thread is an attack on the blogging platform instead of a discussion of the topic.

IMHO REST is not easy to implement. It's easy to say "oh... okay, I think I've got it... let me try", but it's very hard to implement RESTful APIs, and the author mentions a few valid pain points.

Simple use cases like: a user forgot their password; what's the proper way of handling this case? A PATCH on a User based on ID? If the ID is an integer ID, then you have to query the user by email or username before you are able to request a new password. If the PATCH handles this case, what other cases does it handle? Can a user request a new password and change their age at the same time; is this a valid case? How do you structure your controller to route to these special cases? Do you create a custom endpoint, thus making your API less RESTful?

One thing I like about GraphQL is this idea of mutations, in many cases they are analogous to RPC calls, `userForgotPassword(usernameOrEmail) -> forgotPasswordResult`.
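A hedged sketch of what that mutation/RPC style buys you, as a plain dispatch table (the in-memory user store, token value, and method name are all hypothetical):

```python
USERS = {"alice@example.com": {"id": 1, "reset_token": None}}

def user_forgot_password(username_or_email):
    # One named operation with one well-defined result -- no debate about
    # which resource a PATCH should target or what else it may change.
    user = USERS.get(username_or_email)
    if user is None:
        return {"ok": False, "error": "unknown user"}
    user["reset_token"] = "fresh-token"  # stand-in for a generated token
    return {"ok": True}

# A JSON-RPC-style server would dispatch on the request's "method" field:
METHODS = {"userForgotPassword": user_forgot_password}

print(METHODS["userForgotPassword"]("alice@example.com"))  # {'ok': True}
```

The operation's name carries the intent, so none of the PATCH-routing questions above even arise.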

Hate gets more clicks than producing.

Twist on “easier to destroy than create.”

Because there is no "boo" button. Every post on Medium just has Likes or nothing; you can like or comment, just that. Making a critical comment is too much for most users who just want to say "I disagree".

If Medium had some kind of downvote, it would regulate itself a lot more, and users who disagree would not have to make a comment and expose themselves by being critical.

Right now it's full of "content hackers" trying to make a reputation.

So long as an article makes a reasonable attempt to present a viewpoint, downvoting does not add anything to the issue. If you disagree, upvote those replies that you agree with (unfortunately, Medium makes that more complicated than it should be). If no one can be bothered to say what's wrong with the article, maybe there isn't much wrong with it.

For example, xmlrpc has been recommended in comments as an alternative to consider. Maybe what's wrong with Medium is that the informative replies are on HN... (to be fair, grpc is mentioned on Medium.)

> Why is REST so popular?

You think this guy is a hateful hipster, but most of what he says probably resonates with most software vets. To me, it's mostly against the REST zealots who demand REST is done in a very particular manner. I've seen companies with a very stable, robust RPC framework that had a small faction of REST zealots who were extremely against it because it didn't do things according to REST. It didn't matter to them how stable it was or how well it worked.

Also, REST is relatively not popular - what percentage of the world's APIs use REST do you think?

TBH, you're the one coming off like the hateful hipster to me.

> "Everyone seems to want to find a reason to dislike product/technology/feature X..."

Furthermore, and it's been this way for as long as I can remember, too many want a tool to be the perfect fit for every problem.

That is, they pick up a screwdriver and then are shocked that it's not good for driving nails.

There is no one-size-fits-all technology.

I thought that SOAP was great, in that the library handled remote execution without extra work.

There are HA microservice and REST model libraries/clients that do it much more easily and without XML :D

You have to give it up to Colfer protobufs on zeromq and kafka :D

edit: stray click

GRPC is fun too. I miss my fiber interconnected compute though.

I don't follow links to medium.com for that reason. I'm getting old and grumpy and medium.com posts seemed authored specifically to irritate me.

I'm not part of the new world so I guess I don't feel at home there.

And REST is easier to debug. This makes a huge difference when you have a big and complex system.

GET / POST RPC is just as easy to debug, IMO, and with less righteous orthodoxy.

The orthodoxy (aka "standard") is there for a reason. It's very easy to shoot yourself in the foot.

If you're only doing small-scale internal interop, especially where you control both ends of all connections, you don't need standards, just do whatever works, but if you have scale dreams, thinking about the rules (What is 'state'? How is it represented? Where?) and why they are there will save you from a lot of headache going forward.

You need a standard within the ecosystem you play in, if only to reduce the amount of work integrating bits and pieces.

For example, I work on an application that has reasonably tight integration between pagination on the front end and parameters / headers on the back end. Works quite well, particularly since we can control both ends. It's not completely ad-hoc - we chose one specific idiom for pagination that had reasonable support, but not necessarily the most widespread - and then extended it slightly when we needed a few operations that had no direct support (e.g. multi-delete, multi-update).

But if you're integrating from everywhere, there's no upside on a unifying standard because any unifying standard will either have so many wrinkles, complexities and caveats to cover every special use case that nobody will implement it correctly or understand it correctly; or it will be ill-suited to many domains, forcing a poor mental model, increasing the probability of bugs and reducing extensibility.

Some http clients can only do GET and POST. If your API demands PUT, PATCH, or DELETE it may be less useful.
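One common workaround (a convention, not a standard) is to tunnel the intended verb through POST with an override header and have the server restore it before routing; a sketch:

```python
def effective_method(request_method, headers):
    """Restore the intended verb from the X-HTTP-Method-Override convention."""
    override = headers.get("X-HTTP-Method-Override", "").upper()
    # Only POST may be overridden, and only to the "unsafe" verbs,
    # so a cacheable GET can never be silently turned into a DELETE.
    if request_method == "POST" and override in {"PUT", "PATCH", "DELETE"}:
        return override
    return request_method

print(effective_method("POST", {"X-HTTP-Method-Override": "DELETE"}))  # DELETE
print(effective_method("GET",  {"X-HTTP-Method-Override": "DELETE"}))  # GET
```

Several frameworks support this out of the box, but both sides have to agree on the header (or a `_method` form field), which is exactly the kind of ad-hoc decision REST was supposed to spare you.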

How exactly do you think REST is easier to debug and what alternatives are you comparing it to?

Crafting a REST request with a JSON payload is significantly easier than doing that with SOAP (or, may it rest in peace, CORBA).

I don't mind a loose JSON REST-ish API, where everything is a GET, POST or (maybe) a PUT, with explicit endpoints to resolve ambiguities.

Once people start exhibiting excruciating pedantry in designing their APIs, I'll switch off somewhat.

The author was comparing REST to JSON-RPC. Nowhere in the article does he propose that people use SOAP or CORBA.

There's no difference between debugging REST and JSON-RPC over HTTP, so there's no point to compare them. The other alternatives, that I've mentioned, are less easy to debug.

Have you ever seen the SoapUI tool[1]?

It takes just a few clicks to generate complete set of requests from WSDL (set, as in "one request per each operation"). I think it's much better user experience than hand-crafting json to use with curl.

[1] https://www.soapui.org/

That is highly dependent on the language you are using.

Many languages support auto generation of consumer and producer code based off of WSDLs.

And AFAIK, most languages that don't have the auto gen tools do have XPATH libraries for easily manipulating templated SOAP requests.

It's still much faster to run curl with JSON, than to run auto-gen tool and write a whole test program to call an API.

The nice thing about SOAP was it had a full-fledged type system built in. REST is certainly easier to debug, but it's also more likely you'll be required to debug it, since many of the problems you'll run into in a REST interface would have been caught at compile time with SOAP.

SOAP's big problem, IMO, was the crazy insistence on URI formatted namespaces, which took a simple XML message and turned it into something bloated and confusing.

And caching is easy to implement independently of the API, using a proxy.

Because they drive traffic?

You're probably spending too much time reading tech blogs.

I think there’s a tendency in software for people to start out without understanding all the complexities they’re going to encounter. I think this is just human nature.

When you start out doing RPC you think, I don’t want to bother with schemas, I don’t want to bother with hierarchical error codes, I don’t foresee the need to set the user’s password but not retrieve it. So you don’t want to bother with a technology which makes your life more difficult, to solve problems you don’t have and cannot foresee.

So you choose something simple. But you run into all these problems anyway, because they exist, no matter if you were capable of foreseeing them or not.

But by then it’s too late. You’ve written 50 KLOC and you just have to keep going.

I believe this is why many technologies become popular which are actually too simple to handle the types of problems they try to solve.

I blogged about this concept here: https://www.databasesandlife.com/the-cycle-of-programming-la...

Yup. It's the mechanism behind programming as a pop culture. Kids without a lot of experience are sick of the old way because it's too hairy and complicated, they come out with a fresh new approach that isn't nearly as broadly applicable, then it gets improved until it's fit for general purpose, at which point it's hairy and complicated and the cycle starts again.

I don't think that everything is standing still, though. Usually each successive generation has an edge on the previous one; either the previous generation was constrained by memory or CPU or bandwidth and had self-limiting architecture because of it, or the next generation needs to solve a problem involving an order of magnitude more data or compute and it needs a different approach.

But, of course, not everyone (or, realistically, not many people at all) is constrained by the thing that causes the revolution; people usually just get on the bandwagon because you must, if you don't you won't be as employable, won't be as hip, you'll find it harder to employ engineers to work on your project, etc.

The best technologies can be understood and used in a simple case by a beginner but still "unfurl" to handle the general case.

The worst force you to embrace the entire complexity before you can even hello world.

Progress is being made.

Aka JWZ's Cascade of Attention Deficit Teenagers (CADT).

Progress is being made

This is what I said.

This is a self limiting mindset.

"All good solutions to existing problems have been discovered. There may be new problems which need new solutions, but no one will ever improve on what we have already done. Anyone who things they can is a child playing in the dirt."

Did you read the bit about why I think we're making progress?

I think that approach of creating abstractions and concepts just in time is good tbh. It allows for things to get incrementally complex. I wish there were a way to manage concepts through that process so that you could have all of the simple, easy to change stuff for new concepts while using the more comprehensive and often complex stuff for more "hardened" concepts.

This is consistent with my experience.

One element of it is also that in many cases the path to "becoming big" starts with bootstrapping and experimental projects, where you may not know if the project is going to survive and exist in a few years or if you are going to throw it away in 3 months.

So in those cases you may still choose to be scrappy, knowing that it will come back to bite you; but at that stage that is thought of as the success scenario and a good problem to have (if it happens at all).

This is in contrast to large serious projects for large existing companies where you can much more confidently know that X and Y are going to be required because from day 1 you know the project is not just some casual thing.

This is also why I think the software creation process and tools need to seriously think about having adjustable safety/pain knobs to allow for cheap scrappy prototyping but also allow to tighten the screws for production.

You can kind of see a glimpse of this between various programming languages, particularly in their type system. But the general concept is broader than just that.

It occurs to me that young people may be better at bootstrapping in a scrappy way, because they don't yet have the knowledge and experience to immediately consider all the things that would be required in a mature implementation. Speaking for myself, now that I'm approaching middle age (37), if I contemplate developing a new product from scratch, I risk paralysis by analysis, overthinking every aspect of it. I certainly didn't do that when I was 21.

Scrappy bootstrapping is a double-edged sword. On the one hand, it brings us great new products. On the other hand, ignoring some real-world concerns can be a major problem for users. As just one example, consider the impact for people with disabilities (e.g. blind, mobility impaired) who need to use an app that was developed with no regard for accessibility. And I've blissfully ignored other real-world concerns myself. For instance, the first desktop app that I worked on (in my 20s) had no support for HTTP proxies (as often found in corporate networks back then).

You're right of course, like the first time you hit a race condition (with a week of debugging) and build a distributed lock system. You publish it and people find it useful!

Only to realize later postgres offers fine locking capabilities far beyond what you've created (now that you get it).

Then you realize that all anybody is doing is creating subsets of Erlang (half serious). So why aren't we all using that?

In the grand scope, is it really a bad thing?

We end up with X ways to solve problem Y and you never know X + 1 could have some advantages. Exploration should be encouraged IMO!

Unfortunately, we get a whole lot of ad-hoc, informally-specified, bug-ridden, slow reinventions of the wheel for every better mousetrap. Usually (though not always) the best simple solutions come from those who understand where the complexity lies.

Exploration will be far more effective, and get farther, if it learns from previous expeditions, starts from established frontier outposts, etc.

Is there a superset of Common Lisp and Erlang? Such a language would be unmatched for already containing everyone's clever ideas!

> Is there a superset of Common Lisp and Erlang?

http://lfe.io :)

Then you have phase two of engineer development, where having been burned by something surprising engineers overbuild everything. That's when you get technologies that are too complex to handle the problems they try to solve. That's when you get seven layers of dependency injection for something that could be a single-line algebraic statement with five variables.

Yes, and it is this lack of understanding which ironically enough pushes technology forward. One RPC implementation after another.

That's exactly what YAGNI/KISS encourages. Don't make things more complicated than they need to be right now. That includes used technologies.

I'd much rather use a simplistic tool while I still get away with it, risking having to switch to a more complicated tool later, than starting off with something way too complicated for what I need.

Chances are the difficulty isn't in switching from REST to SOAP, or the other way around, but in dealing with all the assumptions that permeate those 50 KLOC. In the end it just comes down to having a clean code base that can be steered in another direction.

Note that I only mean not dealing with what cannot easily be foreseen. Turning a blind eye to what you should know will soon be a problem is a different story. As an example, if you ignore concurrency from the start, that will be hard to set straight later.

But you can't just decide to avoid making assumptions, because the shape of your API strongly dictates the shape of your handler methods. I've never seen an API migration that didn't involve rewriting nearly the entire API layer.

Even worse, you have to maintain both the old and new versions of the API code, because turning off the old API endpoint is going to take at least a year.

I think this is best labelled a meta-Greenspun:

> "Any sufficiently complicated X contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Y." (where X is the new "simple" way and Y is whatever the greybeards are using).

Applies just as much to "middleware" as languages, sadly.

And thus we have a constant churn of dependencies and broken APIs/ABIs...

I agree 100% with this article. A simple RPC API spec takes minutes to define. 'Rest'ifying takes much longer: there are a million little gotchas and no real standard. Everyone has a different opinion of how it should be done. Data is spread across verbs, URLs, query params, headers, and payloads. Everyone thinks everyone else doesn't 'get' REST. If you try to suggest something other than REST in the office you become the subject of a witch hunt. It really is a cargo cult of pointlessness.

My co-workers have spent so much time trying to get Swagger to generate documentation correctly as well as generate client-side APIs, and there are countless gotchas we are still dealing with. It really is SOAP 2.0, when a simple JSON/RPC protocol would have done fine. Don't get me started on conflating HTTP server errors with application errors, or trying to do action-like requests with a mindset optimized for CRUD. How much time have we wasted figuring out the 'standard' way to do just a login API call RESTfully? Please comment below how to do it; I love the endless debate of what is REST and what is not.

I agree with a lot of the things in your post, but this one in particular has produced the most grief for me:

> Don't get me started with conflating http server errors with applications errors.

I've wasted so much time dealing with 404 errors that were returned by the webserver itself (not the app) because the endpoint I was hitting was wrong or had moved, and vice versa when I was correctly hitting the app but got a 404 back from the API and thought that the endpoint was wrong. And, of course, similar issues for 500 errors: the app itself dying versus the app processing normally and indicating an expected failure response via a 500 error code.

To add to all that badness, a lot of JS libraries in their async API method calls have different error handlers for success and for failure response codes, so you end up having to lump together business logic (for resources not found) and retry/error-handling logic (for the server not working correctly) in the same failure callback handler. It'd be much cleaner if all the business logic could be handled in a single callback and all of the failure logic in another. And, of course, you only even get to this level of badness once you figure that out; you can still waste quite a bit of time before you realize that your callback is not being called because the JS framework interprets the expected 404 your API endpoint returns for non-existent things differently than your business logic does.

I still wouldn't go back to SOAP, but I do tend to prefer HTTPS/JSON-based APIs that don't abuse verbs, HTTP error codes, and mixes of URLs/params/headers/payloads. Better to put all of that stuff inside the JSON payload where it will only be handled by the application business logic, rather than mixing it in with all of the HTTP constructs that are used for other things as well.

Agreed, I'm very much in favor of JSON body in, JSON response out. The URL is just a way to hierarchically organize the endpoints, just like in binary APIs where public methods are organized into classes and namespaces.

I've always made up my own error codes, which I embed in the 200 response, since I end up having to map HTTP codes to what they mean anyway, and many libraries have their own behavior for handling various HTTP codes or cannot recognize anything other than 200 (some Lua engines, for example).
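
A minimal sketch of that envelope approach (field names like `ok` and `error_code` are made up for illustration, not from any particular API): every reply ships with HTTP 200, and application-level success or failure lives entirely in the JSON body.

```python
import json

def make_response(result=None, error_code=0, error_message=""):
    """Server side: wrap every reply in a uniform envelope, always sent
    with HTTP 200, so transport errors and app errors can't be confused."""
    return json.dumps({
        "ok": error_code == 0,
        "error_code": error_code,        # application-defined, not an HTTP status
        "error_message": error_message,
        "result": result,
    })

def handle_response(body):
    """Client side: all business logic lives in one callback, regardless of
    whether the call succeeded or failed at the application level."""
    payload = json.loads(body)
    if payload["ok"]:
        return payload["result"]
    raise RuntimeError(f'{payload["error_code"]}: {payload["error_message"]}')
```

The trade-off is that generic HTTP tooling (proxies, monitoring) can no longer see failures, which is exactly why some Lua engines and libraries behave better with it.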

A proper REST API should be specified as a set of domain-specific document formats (media types) and have a custom browser as a client. Turns out we already have HTML and web browsers, so there is little point in actually building such APIs; it's always more appropriate to build a website instead. On the other hand, what is usually called 'REST' is nothing else but RPC where 'procedure call' = 'HTTP method + URL'. There is nothing wrong with that (with the exception of the name), but trying to satisfy any REST/HATEOAS constraints on top of an RPC foundation seems difficult and pointless.

Don't agree at all. There is a huge difference between calling a function "foo()" that makes an RPC call and the relatively equivalent REST call "http.GET('/foo')". The former feels like a function call, and callers will assume it operates like one. But in reality it is not an ordinary function call: it makes a network call, and networks are incredibly unreliable.

In theory, the latter does the same thing, but it's far more explicit, the developer knows it relies on the network and accordingly that it may fail. Developers will be more inclined to plan for errors when the possibility of such errors are more obvious.

What's funny is that most consumers of REST APIs do it through a wrapper that turns it back into a statically typed RPC. REST truly is a useless middleman that no one realizes they just don't need.

I doubt that most consumers of REST APIs are doing anything statically typed. I would guess that the vast majority of REST consumers are written in browser javascript. Server side, I bet at least half are written in a dynamic language.

I don't think the parent here is talking about whether the language is static/dynamic. The point is that most REST calls are wrapped in a statically 'dispatched' function call.

So in most cases you'd do something like:

    let foo = () => http.GET('/foo');
    foo() //foo is statically dispatched in the source code here

It seems like "RPC" is being used pretty loosely in this thread. Whether foo in your example is RPC or not depends on its signature, if it attempts to synchronously return the response from the server, then it is RPC, if it just returns a Promise then it isn't.

> There is a huge difference between calling a function "foo()" that makes an RPC call and the relatively equivalent REST call "http.GET('/foo')".

Is it really a useful distinction? Let's rename our 'foo()' to 'dangerously_unreliable_with_unpredictable_latency_foo()'. Is there still a huge difference?

> accordingly that it may fail. Developers will be more inclined to plan for errors when the possibility of such errors are more obvious.

That part of your comment looks suspiciously similar to the usual argument against exception handling to me.

Dangerous and unreliable don't really capture it... how about potentially_async_call_relying_on_network_that_raises_lots_of_exceptions_foo()? Then I'd agree they are pretty similar. However, http.get("foo") often says the same thing more succinctly.

And to your second point... yes, all developers should check for possible exceptions, just like all children should brush their teeth. But if you have bad habits and aren't good all the time, then at least brush your teeth after eating sweets; likewise, developers should please check for exceptions around network calls.

Some time ago, another HN user commented that a lack of good async support in languages and libraries caused some of the issues with early RPC. With more languages introducing futures and promises as return values of asynchronous functions, don't you think we might finally have the tools to express that unreliability in a simple function call?
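
As a rough illustration of the idea (everything here is hypothetical: the stub name, the fake transport, the error type), an awaitable return type and an explicit timeout make the unreliability visible right in the signature:

```python
import asyncio

class RpcError(Exception):
    """Raised when a remote call fails (network error, timeout, remote fault)."""

async def call_remote(method, params, timeout=2.0):
    """A hypothetical async RPC stub: the awaitable return type tells the
    caller up front that this can be slow and can fail."""
    async def fake_transport():
        # Stand-in for the real network round trip, so the sketch is
        # self-contained and runnable.
        await asyncio.sleep(0)
        if method == "getFoo":
            return {"foo": 42}
        raise RpcError(f"no such method: {method}")

    try:
        # wait_for makes the latency bound explicit at the call site.
        return await asyncio.wait_for(fake_transport(), timeout)
    except asyncio.TimeoutError as exc:
        raise RpcError("timed out") from exc
```

A caller then has to write `await call_remote("getFoo", {})` (or `asyncio.run(...)` at the top level) and handle `RpcError`, so the "this is a network call" fact can't be accidentally ignored the way it could with a plain synchronous `foo()`.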

Sounds trivially avoidable.

Access the RPC call as httpService.getFoo() and make it return a Future/Promise, or even a special subtype of those. 100% obvious what it does.

You're going to be calling REST methods the same exact way anyway - you have to pass around the result of that restful http.get call somehow.

How is that different from any other function call that starts something in the background? I have used, and even written myself, simple job code (with threading) that has functions like this (simplified):

    int start(void (*job)(void*), void (*done)(void*), void* tag);

`job` is called on a separate thread at some point (goes through a job scheduler) and once it is done, `done` is called at the "main" thread (at some synchronization point, usually during the event loop), `tag` is just passed around for context. `start` returns zero on failure.

I don't see RPCs as anything different conceptually (after all the job might also fail). The only issue someone might have is when expecting a synchronous API, but even in non-networked code there are tons of asynchronous APIs.

The main difference is that networks are inherently unreliable. Threaded or multi-processed applications can also be unreliable but for different reasons.

The goal should be to inform the caller of the types of errors that may pop up. Obviously, with network or RPC calls, the caller should handle the case where the network is down. With threaded apps, the potential errors are more subtle, but the caller should definitely be aware that it's not a synchronous call. The function header you proposed is a bit clumsy due to the c semantics, but gets the general point across well enough.

This. REpresentational State Transfer stands in opposition to Remote Procedure Calls - except for a very narrow subset of hypertext/hypermedia applications.

The part I find most interesting about Fielding's thesis[1] is the introduction with the architectural overview. He managed to map out modern Web apps perfectly: they can be REST (a Web app with a db/storage backend, perhaps extended with something like WebDAV), which is amenable to multilevel caching; smart client / movable data (a JSON API plus a JS app); or smart client / movable code (JS/Ajax, executing JS delivered by the server on the client - subtly different from a "pure" JS/JSON app, which in turn is similar to an XML/XSLT app).

I don't know why the hype of REST led people to insist on conflating their architectures.

[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm

People treat the dissertation like it is a standard. It is nothing close to a standard. It is the source of a decade of time wasted bickering over it.

Kind of my point. It's a great dissertation, with some great ideas in it. One of them is REST (Web pages). But the other architectures are well documented in there too - with trade-offs.

Ed: not sure about "time wasted bickering over it". Bickering is always time wasted. Careful analysis of software architecture, patterns, and figuring out what you're actually trying to achieve - is time well spent.

There are fundamental trade-offs between REST and different patterns - depending on where the truth of your data resides, whether you need ACID or not, and where (what part of) your code executes.

Well, I'm glad you can put together an RPC api that quick, but the reason REST is so ubiquitous and why arguing against it is going to make you the subject of a witch hunt is because it's so easy to consume. Your API is useless if people don't want to use it.

But like the article mentioned, clients using REST are used to dealing with wrappers written for their language anyway. They'd prefer to not bother with URLs, query strings, and MIME types, and simply consume an API in the language that feels natural for them.

You can argue that REST is easier to debug for developers, but nothing makes XML-RPC or binary protocols inherently _harder_ to debug. It depends on the platform and library you're using.

I wholeheartedly agree with the article. Well done.

> are used to dealing with wrappers written for their language anyway

And the wide availability of those is because it's so easy to build one over obvious REST apis.

Or in the case of SOAP, simply cannot work out how to consume it. In at least one case there was a SOAP service offered and I had a good quality SOAP client library in a popular language and I couldn't work out how to make a single working request.

Really? Was something wrong with the WSDL?

Not all APIs are public, though. You could be picking a communication protocol among services within a single technical organization. In that case you can decide to train everybody to use Thrift, for example, if the pros are strong enough.

I agree. My main goal building an API is to make it easy to consume.

Clients are lazy and impatient, and this is a good thing because it makes the developers work hard to make it easy to connect to their API.

Are you assuming REST is easier than RPC to develop and/or consume?

After moving to a REST-based API there were endless meetings between co-workers about what is and isn't a good REST URL. Our clients often come to us with dumb mistakes. Unlike RPC, where the parameters go in a single place, with REST the parameters are spread across the verb, URL, header, query params, etc.

I think they were suggesting that it was more difficult to design/develop (as per your example), but easier to consume if designed/developed well, and that this was the proper tradeoff.

Yes, I'm assuming REST is easier because of tools like curl, postman and even the major browsers with HTTP GET and great dev tools.

With RPC your consumers probably need to know some coding and even maybe a specific language, a framework or a library.

So yes, for me REST is easier, and I always love to see landing pages like this one - https://freegeoip.net - where the client can test the API in a few seconds by copying an example into their browser address bar. This is a simple use case, but I hope you get my meaning.

I don't really understand the debate here regarding tooling. REST, SOAP and RPC are all ways to define/codify the API and the parameters. It all goes over HTTP at the end. So Curl, postman and all other HTTP-enabling tools/libraries can be used.

SOAP even has handy discovery tools that frameworks can consume and construct entire APIs in most popular languages.

It's just that no one uses SOAP from a browser because XML is a royal PITA to write in JS. I would assume it's because all JS developers are too busy writing more libraries for, and layers over, JSON.

It's precisely the 'simple' examples which don't address any of the complexities in the original article - read-only properties in a resource being PUT back, for example. You're not having to deal with that with simple read-only services like freegeoip.

> If you try to suggest something other than REST in the office you become the subject of a witch hunt.

It was the same way 15 years ago if you suggested anything other than SOAP. The more things change, the more they stay the same.

It's often new employees fresh out of college who are the worst. They can be obsessed with doing things 'right', which you can't blame them for, because they have little experience. The internet tells them REST is right, so anyone who doesn't agree with them is wrong.

    POST /access-tokens

    {"username": "sam33", "password": "hunter42"}

Would be fine, but could include more if you were following a spec like JSON API 1.1. I really don't have any of the problems you seem to have. But I work in a high-level programming language, so maybe it's that? Either way, I find RPCs get messy when they get bigger. Sometimes it's the right move, but for web apps I generally prefer REST.
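
For what it's worth, a small client-side sketch of that exchange (the endpoint and request body are as described above; the 201-with-token response shape is my assumption, not part of any spec):

```python
import json

def build_login_request(username, password):
    """Build the RESTful login call: credentials go in the body, and the
    'resource' being created is the access token itself."""
    return {
        "method": "POST",
        "path": "/access-tokens",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"username": username, "password": password}),
    }

def parse_login_response(status, body):
    """A 201 means the token resource was created; anything else is treated
    as a failed login."""
    if status == 201:
        return json.loads(body)["token"]
    raise PermissionError(f"login failed with status {status}")
```

The point of the framing is that "logging in" stops being a verb-shaped RPC and becomes creating an access-token resource, which is the usual way to fit auth into a CRUD-style API.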

If you're defining an RPC protocol in just a few minutes, you're leaving a ton of stuff out. Anyone that assumes an RPC call will successfully complete or assumes the network is always there, is writing buggy code. An "RPC protocol" makes writing such buggy code easier. A REST protocol makes it slightly harder. In theory, they are almost identical. But in practice, developers equate RPC calls with function calls, which they are definitely not.

Too bad the first thing people do when consuming a REST API is to put an RPC wrapper around it (or find software that will auto-gen a wrapper for them).

Subconsciously no one wants to deal with your carefully constructed REST URLs. They just want a function name and some parameters.

Yep, this is pretty much how it always goes.

A good example to me is Stripe.

I've written over 10 applications that use Stripe, and every single time the first thing I did was use Stripe's official library for the language I was working in.

There's nothing wrong with wrapping REST APIs, as long as you wrap them in something that makes it clear they are a REST API. There's a huge difference between calling foo() and calling http.get("foo").

Why does that matter? Most of the time the client doesn't care about the implementation details of the data transfer.

How is it any different in any way than any other async function? You just end up providing an unusually large number of parameters via headers and body, then at some point in the future the request completes with values and/or errors. What separates them?

Classic RPC functions would never be async, since the idea behind RPC is to replace a sync local function with a sync remote function, without having to make any changes to the calling code.

"Async RPC" is a more recent idea, but still gets referred to as "RPC", so the complaints about classic RPC still get raised since the term is overloaded.

Reading the article I wondered why we even integrate services so deeply with HTTP. The things I care about are the ability to cache at the HTTP layer and the option to move endpoints, which can be added independently of the actual protocol. Moving a service to a protocol other than HTTP could be an interesting option.

I think some reasons for still using HTTPS as a starting point are the ease of proxying (load balancing), the support for virtual server names, the fact that it's easy to use from a web browser, and the built-in encryption with TLS.

Plus, for any given large organizational customer, HTTP/HTTPS are allowed through their firewall and other network security apparatus, whereas other ports require a bunch of special exceptions from the security people. Said people often refuse to grant exceptions no matter how reasonable the request might be, so everybody ends up doing everything on port 80 or 443.

Fully agree with this (from experience)...

> A simple RPC API spec takes minutes to define. 'Rest'ifying takes much longer, there are a million little gotchas, no real standard.

The thing is, RPC is fundamentally broken due to the nature of distributed computing, and REST is not. All the time you have to spend doing things right is … the time necessary to do things right.

And REST really is very simple. The problem is the cargo-cult nature of folks who don't really understand it.

If I had a nickel for every time someone said "don't really understand REST"... often two people who both think they understand it say it to each other. It's pretty funny. Guess what: we understand it fine, and we don't like it.

"cargo cult of pointlessness": that is hilarious!

Surprised to have scrolled this far and not seen one mention of GraphQL. It has a discoverable, schema-based design and is strongly typed. It segments requests into three types: queries, mutations, and subscriptions. Queries are simple data fetching. Mutations can be treated like RPC calls. Subscriptions are for long-lived connections that receive live updates for data queries. I think it fixes a lot of problems with REST, and it works extremely well with microservice architecture.
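
As a rough illustration (a made-up schema, not any particular API), the three operation types look like this:

```graphql
# Hypothetical schema showing the three GraphQL operation types.
type Query {
  user(id: ID!): User                        # simple data fetching
}

type Mutation {
  renameUser(id: ID!, name: String!): User   # RPC-like call that changes state
}

type Subscription {
  userChanged(id: ID!): User                 # live updates over a long-lived connection
}

type User {
  id: ID!
  name: String!
}
```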

I thought this was a snarky title for a GraphQL article. Since implementing a server in GraphQL, I much prefer it to REST

Where I'm at, we have an entity component system in Postgres for the DB (an Entity table with just an id as primary key; all other tables only have foreign keys to the Entity table). We were implementing random REST routes which tried to line up with typeful ideas that don't exist in the DB but which the page structure of the site exposes. We switched to GraphQL: a bunch of methods on a ReadEntity, an UpdateEntity, and a CreateEntity. We're currently implementing a client-side ECS to mirror this so that the client side can work on intermediate entities and then submit them together. The GraphQL server's real simple, it just focuses on access control. The frontend gets to grab whatever components it needs for a given React component. I have ideas on how optimization can be added by adding prefetch hints to avoid staggered loading.

Sorry if this feels like a tangent hijack rant, but figured I'd drop a line on trying to explain what makes GraphQL so good

CORBA is the best protocol I ever dealt with. Strong contractual semantics, exceptions, interface definitions. Spiced with transparent compression, encryption and bi-directional communication. All those goodies were already available 12 years ago.

The only downside: it required a reliable and precise implementation that took a lot of effort. I always used IIOP.NET for most of my gigs and it was excellent. I also ended up as an active IIOP.NET contributor.

CORBA ORB implementation was a fine art few could grasp. And this was the biggest drawback: the standard was (and is) excellent, but most implementations tend to be complex and shaky.

I still actively use CORBA. The server usually offers two kinds of endpoints: REST for third-party integrations (which are usually naive and simplistic), and CORBA for the system itself. I've built nice things with such an architecture that involved worldwide deployments including embedded hardware. I am very proud of my involvement and the fact that I could help to improve the everyday routine for many people worldwide.

I was somewhat involved with CORBA in its early days. It had some very smart people driving it. It was derived from work already being done by the large companies like IBM, DEC, Apollo, Sun, HP, and Microsoft.

But CORBA was haunted by a key principle that limited its influence. Unlike the IP protocol stacks, which are layered from the lowest wire protocols on up to the highest layers, CORBA dictated the highest-level protocols and didn't address the lower-level protocols. Different CORBA implementations couldn't talk to each other; consequently, CORBA didn't work for my company because we were trying to design a product that could work across heterogeneous networks of workstations and servers.

If the CORBA folks were so smart (and they were), how could this happen? Why didn't they design the original CORBA protocols from, say, UDP or TCP on up? The CORBA members were all from different companies, and all had different independent products. There was fierce competition in this space, so it was impossible for the members of CORBA to agree on the low-level networking protocols, because doing so would harm some companies' product lines while benefiting others.

Thanks for this input. I always love reading things that get me excited to try and look into stuff I had previously discarded, probably based on "popular" opinion.

Lots of comments here arguing REST is popular because it's easy, but there's another higher level reason too: it forces you to think about the network.

In far too many RPC protocols, functions that operate over a network are treated like normal functions. A function call, almost by definition, fails to take into account network errors and race conditions where multiple events overlap. Network calls are not function calls, and the fact that REST calls are relatively distinct from normal function calls is a good thing.

If a network (or endpoint) fails, you usually have only a few options at runtime: retry, skip, or stop. That is pretty much all you need to know. Everything else is specific to the endpoint, which is more about contracts and constraints than about networking. You either use the endpoint correctly or not. Using a database like MySQL has similar constraints, and decent engineers know how to work with it and where it is happening.
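
Those three options can be rolled into one small helper (a sketch; the helper name and parameters are made up): retry a bounded number of times with backoff, and on final failure propagate the error so the caller decides whether to skip or stop.

```python
import time

def call_with_retry(endpoint_call, retries=3, backoff=0.01):
    """Retry a flaky call a few times with exponential backoff; re-raise
    the last transport error so the caller can choose skip vs. stop."""
    last_error = None
    for attempt in range(retries):
        try:
            return endpoint_call()
        except ConnectionError as exc:   # transport failures only, not app errors
            last_error = exc
            time.sleep(backoff * (2 ** attempt))
    raise last_error
```

Note it only catches transport errors; application-level errors from the endpoint pass straight through, matching the point that those belong to the endpoint's contract, not to the networking layer.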

Yep, and no amount of REST or other introspective boilerplate can help with the fundamental problem of not being synchronized.

There is no solution to the "A knows X, but B does not know that A knows X, or A does not know that B knows that A knows X, or B does not know that A knows that B knows that A knows X..." problem.

Other than that, I think at some level networking is nothing more than function calls that can take a long time and/or fail.

And yet most popular "REST" APIs provide language-specific clients that expose regular function calls to consume the API, turning it into RPC.

Sure. Those are conveniences.

However, most of the Getting Started articles that you find about any publicly published REST API usually starts you off with a bunch of curl commands.

Even if the networking aspects are completely hidden from you in your application, your formative experiences with the REST API almost certainly were with the network requests.

Given the dozens of languages and libraries that interact with APIs, I don't see the issue of starting with a curl command. It's a common denominator that programmers of almost all languages understand, like international sign language. No one would seriously use curl commands in production, but load up a command window and it's an easy way to start messing around.

Agree with your second point though... network requests are hard. They are not much different from real distributed programming. Making REST requests should be done with care.

I do use curl commands in production for stuff like user data in an aws cloudformation template. Having an api I can hit with curl that returns json I can parse with jq is super convenient.

I don't think the parent commenter sees curl-centric getting started guides as an issue either. Actually from this thread's context I assumed they thought it was a good thing.

The "force to think" part gets real old real fast though.

It's fine, people can understand "this function does network I/O so be careful".

I worry that this is something that can be applied to programming in general these days.

Back in the day of the C64 etc, programmers had to worry about the underlying hardware as they were for the most part working with assembly.

But as increasingly abstract languages have come along (most modern ones use virtual machines and garbage collectors), the programmer never has to consider the hardware their code eventually interacts with. The end result is all manner of bloat and memory leakage.

I don't think you closely read what parent wrote. Do you think a function call is "too abstract" because "nobody knows what it's doing under the hood"?

I feel like this is the root of the author's agitation:

"I don’t care. Trees are recognized by their own fruits. What took me a few hours of coding and worked very robustly, with simple RPC, now takes weeks..."

He seems unhappy that REST doesn't work the way his familiar tool (RPC) does. I myself worked with middleware-messages-over-TCP systems for a decade before switching to web apis. I don't have this issue. And I personally don't follow the "holy specification", and REST works just fine for me.

The problem is the assumption of a "simple RPC" protocol... there is no such thing. There are network issues, proxy errors, and two-sided race conditions that complicate any network related code. Network programming is distributed programming, which is not easy. RPC protocols try to mask that difficulty, but more often than not they sweep it under the rug.

RPCs make it easy to get started on a dev machine. Making network calls appear like function calls definitely speeds things up when the network is working perfectly, but it papers over the complexities of debugging network issues and complex distributed race conditions, which will inevitably come up later.

Being explicit with network communication has its benefits.

(corollary: for the same reason as above, I don't like lazy evaluation of database queries that exist in many languages. When you hit a database, it should be intentional, you should know about it, and should properly prepare for any necessary network issues, caching, and cursors. Many modern web frameworks gloss over this).
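The lazy-evaluation pitfall can be sketched in a few lines of Python, with a hypothetical `LazyQuery` class standing in for an ORM queryset:

```python
class LazyQuery:
    """Stands in for an ORM queryset: building it is free,
    iterating it is what actually hits the database."""

    def __init__(self, sql):
        self.sql = sql
        self.executed = False

    def __iter__(self):
        # The database round-trip happens HERE, not where the
        # query object was constructed.
        self.executed = True
        return iter([{"id": 1, "name": "alice"}])  # pretend result set


def build_report():
    # Looks like it "fetches" users, but performs no I/O at all.
    return LazyQuery("SELECT * FROM users")


rows = build_report()
assert rows.executed is False        # nothing has hit the DB yet
names = [r["name"] for r in rows]    # consuming the data triggers the query
assert rows.executed is True
assert names == ["alice"]
```

The round-trip happens wherever the result is first iterated, often in a template far from where the query was built, which is exactly where error handling and connection lifecycle are hardest to bolt on.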

I don't see how those things are really solved with rest, particularly given that people will often be using a wrapper rather than building their URLs manually everywhere.

Definitely not "solved", just made more explicit, kind of like a warning sign in the road. A while ago I worked with an RPC system where the RPC calls looked just like normal function calls... everything worked well, until it didn't.

Anytime a computer program consumes a potentially scarce resource (e.g., network, disk, database, etc), there should be some warning-sign or flag raised to the developer. RPC hides that, whereas REST makes it more explicit. So much of programming is social, and the mechanics of REST, even though theoretically identical to RPC, raise many social flags warning of danger ahead.

Or in other words, there is absolutely no technical justification for using this overly verbose and unmaintainable mess. It's just some vague philosophical thing that has no clear benefits. Like OOP.

Do you consider clarity, maintainability, and readability a "philosophical thing" or a "technical" one?

I doubt that these qualities are actually achieved. Generally I don't think there have been many (if any) languages where excessive verbosity built in to the language has proven to be a good idea. Think COBOL, Java...

It's worse if not only statements are longer, but you are actually forced to treat similar things in very different ways (like URL resource vs query string vs post parameters). It just increases code complexity without any clear benefit.

I'm not really sure I see the difference. One of the first things done is to wrap all the calls to the webservice in some kind of API (often using a pre-existing library), which puts you back exactly where you were with calling a function which happens to make a web request.

We developers need to understand there are no one-size fits all solutions. No protocol is optimal for all use-cases. Design is always a question of trade offs. Architectures are means to an end.

The OP's story is a bit weird, because it seems they had a system which worked very well with XML-RPC, but they changed it to REST for no apparent reason except that "REST is the future". Regardless of the merits of REST vs RPC, such a change will require a major redesign of the system. The resource-oriented world view of REST is very different from the procedure-call oriented XML-RPC. You really need to clarify what benefit you hope to achieve before attempting such a redesign.

The problem with the article is that it doesn't really consider the use cases where REST is appropriate and where it is not. Rather, it blames everything on the protocol itself, which gets judged "good" or "bad" completely disconnected from any use case.

> And you’re gone for hours, reinventing the wheel.

So don't do that. The fault lies not with the technology.

Sorry, but these kinds of excuses always remind me of homeopathy zealots explaining why their technology didn't work in this case.

REST, just like OOP, is not a means to an end. It's fundamentally wrong. It's trying to shoehorn strange philosophical viewpoints into what's a technical problem. It's trying to decompose problems that can't be decomposed. It's... never the right solution. I've never seen it succeed. Like, ever.

Are you saying OOP never has value?


Well, the exception perhaps being that its object-verb syntax supports function name completion in IDEs better. But I think it's a net negative, since it has more negative effects on architecture (wrong structure, because developers are encouraged to invent vague concepts to be home to methods that do less and less. It leads to endless bikeshedding).

Not to speak of other bad ideas that once defined what OOP was, and are now commonly seen as wrong - like inheritance or even multiple inheritance.

And you have never seen an OOP project succeed?

I've seen quite a few good projects written in an "OOP" language, but not in an OOP style - mostly misuse classes for namespacing (which I don't think is a good idea either since it makes usages of namespaced things hard to find).

I've never seen a "true" OOP project that wasn't quite a mess and couldn't have been written much cleaner in a plain old procedural style:

Use freestanding functions, the most successful abstraction to date.

Stop with that singleton bullshit. Most things that need to be managed exist precisely once in a program (talk to the sound card, to the printer, to the network, to the graphics card, to the file system, allocate memory...). Making classes first and instantiating once (or how many times? how can I know by looking at that handle?) is just plain silly, overly verbose, and confusing to the consumer of the API.

Don't couple allocation and initialization. It's a stupid idea. It leads to pointless, inefficient, and (in some languages) error-prone one-by-one allocations.

Flat, fixed structs for data organization. Expose struct sizes by default for a massive decrease in memory allocations. Expose almost all fields (except for truly platform/implementation-defined ones) and stop with that silly getter/setter boilerplate.

Mostly use data tables (with direct integer indexing mostly) like in a relational database, for data-driven architectures. (CS/programming technology TODO: How can we switch between AOS/SOA more seamlessly? Maybe we can get inspiration from Graphics APIs?)
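The AoS/SoA switch mentioned above can be sketched in Python (the trade-off is the same with C structs vs parallel arrays; the conversion helpers are illustrative):

```python
# Array-of-structs (AoS): convenient per-record access.
aos = [
    {"x": 1.0, "y": 2.0},
    {"x": 3.0, "y": 4.0},
]

# Struct-of-arrays (SoA): each field is contiguous, which is what
# you want for bulk passes over a single column.
soa = {
    "x": [1.0, 3.0],
    "y": [2.0, 4.0],
}

def aos_to_soa(records):
    # Pivot a list of records into per-field columns.
    return {key: [r[key] for r in records] for key in records[0]}

def soa_to_aos(columns):
    # Zip the columns back into per-record dicts.
    keys = list(columns)
    return [dict(zip(keys, values)) for values in zip(*columns.values())]

assert aos_to_soa(aos) == soa
assert soa_to_aos(soa) == aos
```

The conversions are mechanical, which is the point: switching layouts is cheap in code but expensive to retrofit once the rest of the system assumes one of them.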

Don't use silly flexible-schema XML/object hierarchies to "compensate" for having no idea what's in the data. It doesn't help.

Make interfaces ("vtables" if you will) only sometimes where they are needed, not by default. Don't call this inheritance. Bullshit. It's an interface, not more, not less. If you think interface descriptions must typically be bundled with each data item, think harder. They are independent data.

We don't need no friggin "doer" objects for every simple thing. It doesn't help a bit, but only makes things less readable and more complex. Just do what needs to be done!

Most of the things you mention are considered anti-patterns in OOD nowadays anyway:

- singletons

- getters/setters everywhere (not in favour of public fields, which introduce just as much tight coupling, but of a 'tell, don't ask' style which allows localising functionality that is prone to change)

- introducing interfaces upfront - it goes against Reused Abstraction Principle or Rule of Three (discover rather than design abstractions, apply interface only when you have at least 3 classes that would adhere to it)

Both OOD and functional programming try to reach loose coupling and composability by different means. All these 'patterns' usually have some saner, more general architectural concern standing behind them. It would be interesting to know whether the old school procedural approach allows you to achieve all those architectural benefits in large-scale applications.

REST has the same problem as object oriented programming. It's too skeuomorphic. Lots of web applications are wrappers around conceptually monolithic resources, so it's convenient to use a protocol that makes that assumption. But as soon as you need to nest resources, or perform some action that has nothing to do with CRUD, or do just about anything interesting, the metaphor begins to fall apart.

(That's also why OOP has all these "patterns." Many are just attempts to cope with the "object" metaphor falling apart. "Is" an AttackingRock a Monster, or "is" it an Obstacle? Hmm...)

Although the article somewhat exaggerates the problem and in fact REST is great for many applications, the truth is (IMHO) REST is indeed overrated and the RPC style (don't confuse it with SOAP!) is over-vilified in the IT culture. In so many (though far from all) cases a concise JSON-RPC API would be a much more elegant solution than REST. I believe this is a great example of where the "right tool for the job" principle should be applied rather than a buzzword cult.

Many comments here emphasize the problem of developers expecting RPC functions to be as reliable as local functions. Well, that's their own problem, IMHO. The only appropriate solution is to remind them they are doing it wrong, e.g. the same way REST gurus like to remind everybody they are doing REST wrong. In fact there is a huge number of "developers" around who just invoke all their file system, database and network (REST, RPC, SOAP or whatever) calls synchronously, don't validate inputs or outputs, and don't even wrap the calls in try/catch (believe me, I used to support a fairly popular API and had to explain this stuff to soooo many people complaining about their apps hanging or panicking whenever our REST API returned a quirk, e.g. a field missing from the response, or failed).

This article expressed a lot of what I’ve been mulling for years.

I must’ve spent hours of my life poring over the Wikipedia HTTP Response Codes page, looking for the most expressive error code for my situation. It’s barmy.

You don't need to. For me, REST can be as simple as: encode the type of request into the URL, request parameters into URL parameters and/or query parameters, request data into a JSON payload. Use GET for read-only operations, and if you're really not particular about it, use POST for everything else. Return 200 for success, 400 for client error and 500 for server error. Transport a more detailed application error code and description in the response body. That's it.
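That convention is small enough to sketch; a framework-agnostic Python sketch (handler names and error codes are made up):

```python
import json

class ClientError(Exception):
    """Bad input from the caller; maps to HTTP 400."""

def respond(handler, *args):
    # 200 on success, 400 on client error, 500 on anything else;
    # the detailed application error travels in the JSON body.
    try:
        return 200, json.dumps({"data": handler(*args)})
    except ClientError as err:
        return 400, json.dumps({"error": {"code": "bad_request", "detail": str(err)}})
    except Exception as err:
        return 500, json.dumps({"error": {"code": "internal", "detail": str(err)}})

def get_user(user_id):
    if not user_id.isdigit():
        raise ClientError("user id must be numeric")
    return {"id": int(user_id), "name": "alice"}

assert respond(get_user, "42")[0] == 200
assert respond(get_user, "nope")[0] == 400
```

One wrapper like this per service keeps the status-code discipline in a single place, so individual handlers never touch HTTP at all.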

You don't even need that. I don't think there is anything wrong with returning a 200 response with a JSON body that has some 'error' tag built into it. It may not be purely RESTful, but if it's obvious to the developer interacting with the API, who cares.

Yep, and when I have to investigate production issues, you're the guy who makes me run slow wildcard queries over the detailed error text instead of just filtering on the (indexed) response code. You push your laziness onto the rest of the world that way.

I actually agree with your sentiment, but disagree with your characterization that it's "laziness"... how about not knowing?

You're right, that's most likely the main offender. Apologies. I've been consuming a lot of poorly written REST apis lately and am bitter.

The worst one I worked with recently would return 200 and only a human readable error message (no status). On top of that the message is sometimes phrased differently. Here is an example from memory: "The field email is not valid." and "You provided an invalid username."

I disagree; at a minimum, use 200/400/500. Each grouping of error codes defines semantics which are implemented by generic clients / servers / middle boxes / monitoring agents etc. These are things out of your control, often run by a range of different companies. Debugging these is... hard.

That means I can't use any kind of generic retry / back off code because I've now got to start dealing with your custom errors, and if you have html versions of the info Google will start demoting you.

Just add a code, it's seconds of work.

You're assuming you want your API to work with a specialized crawler, like Google-bot. If that's important, then sure, design your API so Google-bot can crawl it nicely, but then it's Google designing your API, not you.

I think you missed his primary point. By doing what you describe above, generic client-side code to handle retries/backoff etc. is rendered useless. Your users now have to implement something custom for this (and if you're doing network operations and DON'T do this, you likely don't have a very robust system).

> You're assuming you want your API to work with a specialized crawler, like Google-bot.

Not really, I just don't

In general, the key point is about working in a generic way, given that it's so simple.

I don't like the idea of returning a human-readable message that says there was an error alongside a machine-readable status that says everything was fine. I already have far too many cases of human text in my data explaining that a value is missing.

What horrible advice. Just because you are too lazy to return the correct status code, whoever consumes the API has to do twice the work.

Not at all. Some languages and libraries don't handle non-200 status codes nicely: they may raise exceptions for each code, or maybe not. For example, Python's basic urllib library makes it easy to handle 200 responses but a hassle to handle anything else (you need to wrap the call in a try/except).
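To make that concrete: with stdlib urllib, any non-2xx response surfaces as an exception, so even reading an error body needs a try/except. A self-contained sketch against a throwaway local server:

```python
import json
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotFound(BaseHTTPRequestHandler):
    # Every GET returns a 404 with a JSON error body.
    def do_GET(self):
        body = json.dumps({"error": "no such user"}).encode()
        self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), NotFound)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/users/42" % server.server_address[1]

try:
    urllib.request.urlopen(url)
    status, payload = 200, None
except urllib.error.HTTPError as err:
    # The "failure" is an exception, but it still carries the response.
    status, payload = err.code, json.loads(err.read().decode())

server.shutdown()
assert status == 404
assert payload == {"error": "no such user"}
```

Libraries like requests flatten this (`response.status_code` works the same for every status), which is part of why the "200 with an error tag" shortcut feels tempting with urllib in particular.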

For some APIs, it definitely does make sense to use proper status codes, but for others, it's like fitting square pegs into triangular holes.

Well, most consumers work better with 400 errors (e.g. Angular 1 or even Angular 2, where the 4xx error codes land in the error clause of the promise/observable).

There are probably hundreds of HTTP clients; I'm not sure "most" handle error codes elegantly. I know that Python's base urllib library does not.

I agree. There are plenty of freedoms built into the REST way that enable you to create more or less detailed responses.

  "I don’t care. Trees are recognized by their own fruits.
   What took me a few hours of coding and worked very robustly, with simple RPC, now takes weeks..."

Weeks? For a REST API? No, I don't think so. REST gives you the tools to be pragmatic and quick, so use them.

gRPC is the future; I'm amazed that nobody seems to be using it. Easy endpoint definitions and code generation in almost every popular language. Much faster than REST and zero boilerplate code. The client libraries even have HTTP baked in, so there are no "controllers" or route mappings to write. It's simply fantastic.

If you run into a language without gRPC support, you just stand up a JSON proxy and pretend it's REST.

I tried some experiments earlier this year with radically simpler RPC calling conventions. It's called NSOAP, and is available for express, koa and React. It gets rid of HTTP verbs and treats the url like code. https://github.com/nsoap-official/nsoap-express

Some examples.

  //Adds two numbers
  curl "http://www.example.com/addTwoNumbers(10,20)"

  //String arguments
  curl "http://www.example.com/greet('world')"


While on the surface that might seem like a workable idea, getting input validation right is going to require more syntax (making it far less clean). Specifically, URLs are only ever string data, so without type annotation everything is strings, even if it looks like a number, array, or even more complex data type.

> While on the surface that might seem like a workable idea, getting input validation right is going to require more syntax (making it far less clean)

Input validation will go into the router, which I have created for Express, Koa and React. Application code will not have to deal with validation or parsing.

> Specifically, URLs are only ever string data, so without type annotation everything is strings, even if it looks like a number, array, or even more complex data type.

You'd have to pass more complex data types either in body (as JSON) or as parameters with quoting. Current router does however, infer types to the extent of:

  //Params inferred as string, number, boolean and number.
  curl "http://www.example.com/search(Jeswin,20,true,x)?x=100"
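The inference step described above could be sketched like this (a hypothetical illustration, not NSOAP's actual router): each positional token is resolved against the query string first, then parsed as a boolean or number if it looks like one:

```python
def infer_arg(token, query):
    # Names resolve against query-string parameters first
    # (the `x` in search(...,x)?x=100), then literals are inferred.
    if token in query:
        return infer_arg(query[token], {})
    if token in ("true", "false"):
        return token == "true"
    try:
        return int(token)
    except ValueError:
        pass
    try:
        return float(token)
    except ValueError:
        return token  # plain string

# search(Jeswin,20,true,x)?x=100
args = [infer_arg(t, {"x": "100"}) for t in ("Jeswin", "20", "true", "x")]
assert args == ["Jeswin", 20, True, 100]
```

Note the ambiguity this buys: a literal string "true" or "x" can't be distinguished from a boolean or a parameter reference without quoting, which is exactly why more complex types need the body or quoted parameters.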

Looks a little bit like OData, at least the querying parts.

I read the article up to the point where they consider using REST to create an entry in a reset_password_email table or some such thing. That's stupid and ludicrous.

That isn't even a use case for REST. I think the writer needs to consider that HTTP APIs cover many different use cases with subtle differences. REST exists to solve the problem for one of those use cases, i.e. data model interactions over HTTP. But HTTP APIs can also implement function calls. Trying to emulate a function call using REST is a terrible idea, and the blame lies with the developer who thought it would be a good idea. Not with REST.

Consider the logout operation. This is a valid function call, but not one using REST or any of its principles. It's just a valid use case for HTTP. Now consider API methods that deal with a user's profile information, such as creating the user profile entry and later, maybe, updating the address or status of the user. This would be a perfect example use case for REST.

TL;DR I think the author fails to understand that REST isn't and doesn't try to be a solution for every possible use case of an HTTP API. It's simply a framework for structuring APIs that directly, or almost directly interact with the models exposed by a webservice.

Now, onto RPC. RPC is a binary protocol over TCP. It's not even HTTP. RPC vs HTTP is a very valid argument depending on the use case/constraints of the project. But RPC vs REST is comparing two very different things. REST, IMO, derives most of its flexibility from being built over HTTP, and not because REST is some kind of magic sauce, as it's touted by many to be.

>Now, onto RPC. RPC, is binary protocol over TCP.

The author specifies two different RPC protocols in their article: XML-RPC and JSON-RPC. Both often do push plain text over HTTP.

> REST, IMO, derives most of it's flexibility from being built over HTTP, and not because REST is some kind of magic sauce that it's touted by many to be

In that case, there is no difference between REST, XML-RPC, JSON-RPC, and SOAP. That could be a valid opinion to have, but I suspect it’s not one most people would agree with.

RPC is a concept - not a protocol.

For instance, the implementation of RPC I use looks like:

    POST /v1/:method
A JSON object in the request body is the only parameter on the method. The response body is the returned value encoded as JSON.

Very simple, and still entirely within HTTP. Just not RESTful.
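A dispatcher for that convention fits in a few lines; a Python sketch (the method names and payloads here are made up):

```python
import json

# Method table: each method takes one JSON-decoded parameter object.
METHODS = {
    "createUser": lambda p: {"id": 1, "name": p["name"]},
    "resetPassword": lambda p: {"sent": True, "email": p["email"]},
}

def handle(path, body):
    # POST /v1/:method -- the request body is the single JSON
    # parameter; the response body is the JSON-encoded result.
    method = path.rsplit("/", 1)[-1]
    result = METHODS[method](json.loads(body))
    return json.dumps(result)

out = handle("/v1/createUser", '{"name": "alice"}')
assert json.loads(out) == {"id": 1, "name": "alice"}
```

No verbs to choose, no URL layout to debate: the entire contract is the method table plus the shape of each parameter object.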

REST isn't appropriate as a description of JSON-based apis because JSON isn't a natural hypertext and thus makes HATEOAS difficult to implement. Most JSON APIs described as REST are really RPC APIs with a bit of URL layout taken from the REST world.

Unfortunately XML-RPC was such a nightmare that calling something JSON-RPC was out of the question. Shame.

There are a few blog posts up on the intercooler website that discuss this that I found enlightening:



There is a thing called JSON-RPC: http://www.jsonrpc.org/specification

The author explicitly mentions it as something he prefers over REST (though he spends more time on XML-RPC)

I think there’s something even more fundamental.

The article talks about how to communicate between systems. But many systems don’t need to be distributed.

If you can avoid writing a distributed system, that’s easier than even a “better REST”, if such a thing were to exist.

For example, old-style non-SPA web applications can use the underlying logic classes directly, can throw exceptions, need not serialize data, and can do everything over one connection to the DB with one transaction capable of rollback, and so on.

Or monolithic servers rather than microservices.

Sometimes you need RPC, but for those situations where you don’t, avoiding RPC completely is a significant reduction in complexity.

HTTP is just a way to communicate state - a protocol that _can_ be used for implementing the architectural style. Unfortunately, most of the rant is about HTTP.

Once you see HTTP just as one example of a more abstract way to interact in a client-server model, your focus will shift towards the more important topic: semantic formats that represent state, formats that make it easy to write clients against. And if you do it right, you build a vocabulary that represents your domains. A good format to start with is JSON-LD. But please don't just describe the entities of your business domain. You have to describe semantics of how the client can interact with the server and change state (like links, actions, feeds). And if you build a business around it, put these semantics / vocabularies at a central place - build something like https://schema.org for your own corporation.

Then use HTTP for communication and building out your systems. And it will be more robust, flexible and scalable than anything built on SOAP.
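For illustration, a self-describing payload in that style might look like this (the vocabulary is schema.org; the URLs and fields are illustrative):

```python
import json

# A state representation that carries its own semantics: "@context"
# binds terms to a shared vocabulary, and "potentialAction" tells the
# client how it may change state, not just what the entity is.
order = {
    "@context": "https://schema.org",
    "@type": "Order",
    "orderNumber": "1337",
    "orderStatus": "OrderProcessing",
    "potentialAction": {
        "@type": "CancelAction",
        "target": "https://api.example.com/orders/1337/cancel",
    },
}

doc = json.loads(json.dumps(order))  # round-trips as plain JSON
assert doc["@type"] == "Order"
assert doc["potentialAction"]["target"].endswith("/cancel")
```

Any client that understands the vocabulary can act on the payload; the HTTP layer underneath stays interchangeable.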

Those who refuse to learn history are doomed to repeat it.

REST is not the end of history, there can and should be successor architectures, but RPC is not that successor. It’s been around a very long time and fell out of fashion for good reason. It is a convenient approach that can be used if you have major control over all the interfaces, endpoint implementations and underlying infrastructure, as Google does for its use of gRPC. It really falls over if you want independent implementations and variable infrastructures over a large scale, as is the case for most Internet / Web interactions.

RPC is fundamentally flawed in that it tries to pretend that the network doesn’t exist and that distributed systems don’t have fundamentally different concerns than single computer systems. A good overview of this history from 2009 is here: https://www.scribd.com/mobile/document/24415682/RPC-and-its-...

Keep in mind that REST only became popular around 2007, it was an uphill battle to popularize it from 2001 onwards. The web had grown to a mammoth, and vendors couldn’t make money off it, so wanted to replace it with CORBA or some other RPC (SOAP). It took a concerted effort to fight that in standards bodies, on mailing lists, on blogs. Those days didn’t have Github or social media or a myriad programmer conferences. The posts are still up if you want to see them. This history of having to fight to be perceived as relevant and useful is why REST tends to have a bit of misguided religion behind it. SQL proponents had the same issue longer ago.

The Web and REST led to the largest increase in networked computer interoperability in history, after the TCP/IP suite and Telnet. The architectural style is what catalyzed JavaScript into such a ubiquitous language, as it made mobile code a first-class citizen in the architecture. It's what catalyzed Google into a powerhouse, as it baked in self-describing, uniform interfaces as the standard, enabling spidering, indexing and analytics on a global scale. Moving on from it will be harder than people realize.

By all means, be an engineer - use RPC if it fits your problem and constraints better. Use event streams if they fit your problem better. Use GraphQL or SPARQL if a data query interface is a better solution for your needs. There is no one architecture. But please, spare the rants about how the world would be better if we all did RPC; they come across as very divorced from the wide variety of problems and suitable architectures out there.

> It really falls over if you want independent implementations and variable infrastructures over a large scale, as is the case for most Internet / Web interactions.

But what exactly? I can't really think of anything that would work without an interface description (like a HTML form).

Let's have a few informal RPC signature conventions, for example for retrieving a Web page (the equivalent of HTTP GET). It would make developers' lives easier. But essentially there is no problem with replacing links with remote procedure calls.

HTTP GET is one of the most formalized, optimized and tested standards in the world. Replacing that with "an informal RPC signature" would be a nightmare for developers: misunderstood semantics, little to no interoperability, poor performance, poor scalability, and so on.

If you think there is no problem replacing hyperlinks with RPCs, be my guest, do so on all your forthcoming projects, and see if it works. It sounds like a bad idea to me but we are both just random people on the internet.

I author REST APIs and try to do them properly, with mediatypes, link relations, and all the good hypermedia stuff most people avoid. Nonetheless, the author's post reflects the sort of rant I've had to coworkers at the watercooler, or anyone who'd listen.

The author nicely preempts the debate about HATEOAS and "most RESTful APIs aren't REST" and shows that the debate is part of the problem. It is. It's not a spec but an architectural style, and people are bad at design (and quite honestly, they have better things to do with their time), so broad-stroke ideas about how to lay out your system aren't as useful as a framework or codegen. So the least-effort solution wins, where you half-ass implement the first three characteristics of REST-as-seemingly-observed-in-the-wild (HTTP, JSON, templated URLs), and call it a day. If it's good enough for Stripe and Twitter, it's good enough for you.

No wonder we're in this boat; once upon a time REST was just architectural and intellectual wankery, specified in an obtuse and hard-to-read thesis by a guy who's smarter than most of us combined. It sat unknown for years, until AJAX became all the rage, and people began making API calls from jQuery to load-and-splice parts of a page for rich interactivity. Here, a public HTTP endpoint that served easily-parseable JSON made sense. Some prominent companies released public APIs in this style, and then the blogspam began.

Within a short amount of time, REST and "RESTful" was cool and forward-looking, and SOAP was old and crufty, and any kind of ad-hoc RPC was bad. Implementing a REST API wasn't solely about usability, but also signalling that your company was forward-looking too, rationality be damned, which is why most implementations cargo-cult look the same: HTTP, schemaless JSON, templated URLs. Endless debates about HATEOAS begin, and come to an unsatisfying conclusion, because real developers' concerns about architectural purity are dwarfed by their desire to build a mostly-working API that looks recognizable to the public, and move on.

Serious players looking for reliable internal RPC develop stuff like Thrift and gRPC, which are thoughtful reimplementations of the ideas behind 80s and 90s RPC, but with the added advantage of coming from a single vendor and not committee hell. Meanwhile, Facebook also reinvents SQL in JSON and gives it to the client, what could go wrong?

Maybe the reason "REST" won is because it was easy to glean and misunderstand well enough to put out something simple and good enough. This dominance of the low-end is going to be hard to undo.

As I mentioned elsewhere in the thread, there are two posts on the intercooler site that specifically addresses this issue that I found very compelling:



> The author nicely preempts the debate about HATEOAS

What is the debate about HATEOAS?

What's the difference between an API with and without that quality?

The difference is that a real HATEOAS API provides clients with context through the payload for all service interactions. HTTP becomes just the transport layer. Roy Fielding never wanted to point to anything else. He was inspired by how HTML enabled human-to-service interaction through a Browser (the client) and formulated an architectural style for human/machine-to-service interaction. The emphasis must be on how we design formats. Communicating the state is just freestyle, HTTP being the prominent one.

Thank you. This is one that I've been puzzling over for years. If anyone has a straightforward answer, and can provide example REST responses with and without "HATEOAS", I'd really appreciate it.

See the 'Richardson Maturity Model' levels [1], and contrast level 2 with 3. Level 3, satisfying the HATEOAS constraint, has hyperlinks leading to other resources, and the relationship between the origin and destination resource is qualified (with link relations [2]).

Essentially, everything is a graph, resources are the nodes, and the labels on edges are the relations, also serving as an inventory of state transitions.
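Concretely, the same resource at level 2 and level 3 might look like this (the link relations and URLs are illustrative):

```python
# Level 2: a plain resource; the client must hard-code the URL layout.
level2 = {"id": 42, "status": "pending"}

# Level 3 (HATEOAS): the representation advertises its own transitions
# via qualified link relations.
level3 = {
    "id": 42,
    "status": "pending",
    "_links": {
        "self":    {"href": "/orders/42"},
        "payment": {"href": "/orders/42/payment"},
        "cancel":  {"href": "/orders/42/cancel"},
    },
}

# A generic client can discover the available state transitions
# without knowing anything about the URL structure:
assert set(level3["_links"]) == {"self", "payment", "cancel"}
assert "_links" not in level2
```

The links are the edge labels of the resource graph: which transitions exist right now is part of the representation itself, not out-of-band documentation.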

The "HATEOAS debate" is the trope that says people will inevitably debate whether a particular API that claims to adhere to REST does in fact adhere to REST, because many APIs that namedrop REST don't satisfy the HATEOAS constraint. This is rarely a practical debate, but the fact that it keeps coming up -- and people take the time to explain -- is part of the problem with REST. This was the author's point.

[1] https://martinfowler.com/articles/richardsonMaturityModel.ht... [2] https://www.iana.org/assignments/link-relations/link-relatio...

I think the problem I have with HATEOAS is that I really can't see what difference it makes to the API. I mean, I get that it's always better to return a URI rather than simply an ID. But the link relations feel like they are from a time when we all thought the semantic web (RDF and OWL) was going to be a very big deal.

Maybe REST was the best protocol for the great public API explosion of the past decade, where startups wanted to expose a public API to anyone on the internet. The most important requisite was that most developers could access the API with a minimum of technical knowledge and tools, and that the APIs were simple. I am less and less sure that REST is the best solution for communicating between internal services, which know a lot about each other and where you can spend time onboarding developers on specific techniques and stacks.

What do you think is a better solution?

I don’t know yet because I haven’t had as much experience with other solutions as with REST. I am curious about Thrift and Protocol Buffers. I have worked once on a project migrating from REST to Protocol Buffers for an internal API and it noticeably improved performance, but code complexity remained more or less the same.

> Protocol Buffers

Well, REST is an architecture. Protocol Buffers are a (de)serialization format.

I've tried it before, and although it has properties I like (language-agnostic schema definition, etc.), it was finicky to get working with one of the biggest IDEs out there (IntelliJ IDEA).

Also, it doesn't support Kotlin yet (Java interop is the only option).

The only place I can see Protocol Buffers being useful is inside Google, because outside it the dev experience was pretty crappy.

And I would be pretty confident in saying that not many Googlers/ex-Googlers outside Google use it. Because if they did, somebody would have made the dev experience much, much better.

I use protobufs at work all the time (not at Google) and prefer them over almost all other serialization formats. To be fair, I mainly work in Scala, C#, and JavaScript, where great implementations exist. In the case of Scala, IntelliJ just needs to know that the generated source files are just that, and then it "just works" for me.

> I have worked once on a project migrating from REST to Protocol Buffers

What does that mean? REST is a style while protobufs are a serialisation format: a REST architecture can use JSON, protobufs, Thrift, whatever.

https://grpc.io/ is Google's internal solution I believe

It turns out that schema-first design is useful. It turns out that having a cross-platform industry standard for such schemas is useful.

What developers didn't like about SOAP wasn't necessarily the features; it was all the XML.

XML is great, allows lots of tools to be used.

Ahh, to be young and naive.

Decades of horrific developer pain disagree with you. The "tools" all have different ideas of what is correct: good style, the right namespacing, encodings...

I have been coding since the mid-80's.

Feels refreshing to be called young.

Just this week I was having the pleasure of using XSD schema validation to ensure files aren't corrupted.

We’re about the same age. How is it that you’ve not experienced the same horrible almost-but-not-quite interoperability messes with XML and SOAP that I have over the last 20 years?

Do you love ASN.1 as well?

Man, it's almost like representational state transfer and remote procedure calls are two different things, and sometimes your service does fine with the simpler one of those, and other times your service needs finer-grained controls.

But if we were thinking critically about how these protocols differed then we couldn't write an entertaining polemic, right?

REST is not an RPC protocol. If you need RPC, you should use an RPC protocol. REST is not universally applicable. The verbs don't map well to anything beyond simple CRUD operations. If you want to do something that isn't one of those operations, it is going to be painful.

You can get all the benefits of plain-text serialization by using JSON or even XML as the payload serialization format with a much richer set of verbs. You can even use HTTP as the lower-level protocol if you want and gain all the benefits that gives you.

Most of the pain people experience is when they try to use REST when they need an RPC protocol instead.
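To illustrate the mismatch (all paths and names here are hypothetical): an RPC-shaped operation like "renew a subscription" has no natural HTTP verb, so resource-oriented designs typically reify the action as a resource you can POST to:

```python
# Hedged sketch of reifying an RPC-style action as a REST resource.
# RPC style would want something like: POST /renewSubscription?id=9
# Resource style: POST creates a "renewal" under the subscription.
def route(method, path):
    if method == "POST" and path == "/subscriptions/9/renewals":
        # 201 Created: the renewal is now a resource you can GET later
        return 201, {"renewal": "/subscriptions/9/renewals/1",
                     "subscription": "/subscriptions/9"}
    return 404, {}

status, body = route("POST", "/subscriptions/9/renewals")
print(status, body["subscription"])   # -> 201 /subscriptions/9
```

Whether that indirection is elegant or painful is precisely the judgment call the parent comment is pointing at.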

POST represents a non-idempotent operation. The other HTTP verbs (GET, PUT, PATCH etc) are idempotent. The awkwardness surrounding PUT/PATCH stems from the need to ensure that those requests remain idempotent.

Nothing bothers me more than an idempotent request (e.g. a search query) using POST. If you design your API starting with "what should happen if the client makes this request multiple times?", then it becomes much easier to model with HTTP.
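A minimal in-memory sketch of that design question (names hypothetical): with PUT the client picks the identifier, so replaying the request changes nothing, while with POST the server mints a new identifier each time:

```python
import itertools

store = {}
_ids = itertools.count(1)

def put_order(order_id, payload):
    # PUT: client chooses the ID; replaying the request is harmless.
    store[order_id] = payload
    return order_id

def post_order(payload):
    # POST: server assigns a new ID; replaying creates a duplicate.
    order_id = next(_ids)
    store[order_id] = payload
    return order_id

put_order("abc", {"qty": 1})
put_order("abc", {"qty": 1})        # replay: still exactly one resource
first = post_order({"qty": 1})
second = post_order({"qty": 1})     # replay: a second, distinct resource
print(len(store), first != second)  # -> 3 True
```

Asking "what should happen if the client retries this?" up front tells you which of the two shapes you actually need.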

HTTP 401 Unauthorized actually means Unauthenticated. 403 Forbidden actually means Unauthorized. Yeah, it is a bit of a mess but the referer header is misspelt so what can you do.

PATCH is not idempotent.

Source: https://tools.ietf.org/html/rfc5789#section-2

Slight aside but I tend to use POST to create a new "search results document" and the GET that document to see the actual results.
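A sketch of that pattern (paths and IDs are hypothetical): the search itself is a POSTed resource, and the results are then fetched with a safely repeatable GET:

```python
searches = {}

def run_query(query):
    # Stand-in for the real search backend.
    return [r for r in ["rest", "soap", "grpc"] if query in r]

def post_search(query):
    # POST /searches -> 201 Created, Location: /searches/<id>
    search_id = str(len(searches) + 1)
    searches[search_id] = {"query": query, "results": run_query(query)}
    return "/searches/" + search_id

def get_search(path):
    # GET /searches/<id> -> the stored result document; idempotent
    return searches[path.rsplit("/", 1)[-1]]

location = post_search("r")
print(get_search(location)["results"])   # -> ['rest', 'grpc']
```

The non-idempotent part (creating the search) stays in POST, and the results document can then be cached, bookmarked, and re-fetched freely.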

Can I send a large request body with a GET? That's usually the reason I end up with POSTs.

> Can I send a large request body with a GET?

Why not?

A GET request's body has no defined semantics, so your server is firmly in the realm of "nonstandard implementation detail" if you choose to take into account the body in your decision-making.
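For what it's worth, nothing at the plumbing level stops you: Python's `http.client` will happily send a GET with a body, and a server can read it if it chooses to. A quick self-contained demo against a local throwaway server:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Nonstandard choice: read the GET body via Content-Length.
        length = int(self.headers.get("Content-Length", 0))
        received["body"] = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/search", body=b'{"query": "large filter"}')
resp = conn.getresponse()
data = resp.read()
server.shutdown()
print(resp.status, received["body"])
```

But as the parent says, intermediaries and other implementations are free to drop or reject that body, which is why the POST-then-GET pattern tends to be safer in practice.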

An HTTP server following the spec can ignore the body of a GET request.

I'd love to see a document defining this. I haven't been able to find one to date.

RFC 2616 §4.3 (https://tools.ietf.org/html/rfc2616#section-4.3):

if the request method does not include defined semantics for an entity-body, then the message-body SHOULD be ignored when handling the request.

Now, RFC 2616 is obsolete, and I couldn't find such explicit wording in the superseding RFC 7231. But §4.3.1 (https://tools.ietf.org/html/rfc7231#section-4.3.1) says:

A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.

I'm not too up on which documents here are current, but at least:

> A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.


