I can't think of a single more successful achievement of software engineering, so can someone explain why I shouldn't answer with an emphatic, "yes!" to this question?
I get the web has all kinds of flaws, and maybe we can do better, but as far as I know, we haven't even come close to actually building something as useful or ubiquitous or stable as the web.
I've been using Redis pipes (and databases in general) to get services talk to one another, and that has its uses, but why must I choose between that and REST? Why can't there be a place for both?
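For what it's worth, here's roughly what I mean, a minimal sketch using the redis-py client (the queue name and payload are made up):

    import json
    import redis  # redis-py client

    r = redis.Redis(host="localhost", port=6379)

    # Service A: push work onto a shared list acting as a queue.
    r.lpush("jobs", json.dumps({"task": "resize", "image_id": 42}))

    # Service B: block until a job arrives, then process it.
    _key, payload = r.brpop("jobs")
    job = json.loads(payload)
    print(job["task"])

Neither side knows the other's address or uptime; they only share the Redis instance. That's a perfectly good integration style for some problems, and REST is good for others.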
Because you've taken a single question from TFA out of context as a straw-man. Yes, the web has been enormously successful. But that's simply irrelevant to the practical questions posed by the author. To stitch together the full question that Ben Morris is really posing using a quote from the article:
Is the web a model we really want to emulate [as] "the only collaboration fabric for a large and complex set of decoupled, autonomous and collaborating services"?
To which I could just as emphatically quip "no!" Note the "only" in there. Morris is explicitly not throwing REST out, but questioning the REST-uber-alles mindset by pointing out limitations of that paradigm.
Immediately after the quote you pulled comes (IMO) the best part of the article:
It’s not just HTTP that provides a limited model for service integration. The web has been an inspiration for REST, but is it really that successful a distributed system? It’s slow, fragile and prone to security problems. Its vast, decentralised nature makes it impossible to find anything useful without indexing engines that are so vast that only one or two companies have been able to create them.
It's not, again, that the web isn't successful. It's that the fit between this particular solution and this particular problem is being questioned.
To me, REST in the enterprise simply reflects how broken SOAP and XML-RPC were in practice: they made for far too inflexible, monolithic services that took years to move through major versions. With REST you can at least decompose resource URLs and scale out from there, instead of being stuck with the service definition's XML schema for every resource. Sure, I've seen some schemas that are composed, but in the world of enterprise software vendors and internal bureaucratic services you inevitably devolve to the pathologically simple cases of your architecture just to get the bare minimum done, and REST works better in practice there than anything in the WS-* family.
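By "decompose" I mean something like this (hypothetical endpoints, sketched with the requests library): each path can be routed, cached and versioned independently, which is exactly what a monolithic WSDL schema makes hard.

    import requests  # endpoints below are made up for illustration

    base = "https://erp.example.com/api"

    # Each resource stands alone: you can move /orders to its own
    # service later without touching the customer representation.
    customer = requests.get(f"{base}/customers/42").json()
    orders = requests.get(f"{base}/customers/42/orders").json()
    order = requests.get(f"{base}/orders/1007").json()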
I think this illustrates pretty well that an actual understanding of what REST means is rarer than it should be.
Split-brain issues are harder to solve, but of course there are protocols to deal with those too.
> And since CPU clock speed won't really be getting any faster, we have to scale out if we're going to scale at all.
What does CPU clock speed have to do with all this? It certainly doesn't affect communication latency and is a rather poor indicator of computing performance.
While clock speeds have stagnated, single core (thread) performance has been steadily increasing. Compilers are just not fully exploiting additional computing power yet.
Then there's always NUMA (non-uniform memory access: multiple CPUs and memory subsystems networked together at the hardware level) and, at a larger scale, RDMA (remote DMA).
So my point was mostly that distributed applications are unavoidable because you just can't scale up past a certain point.
I also find the temporal coupling aspect misleading: you can, for instance, use an Atom feed (or equivalent) to record events; it's then up to clients to consume them as appropriate.
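A minimal consumer sketch, using the feedparser library (the feed URL and handler are made up):

    import feedparser  # assumes the feedparser library

    def handle(entry):
        # Hypothetical handler; a real client would also track which
        # entry ids it has already processed.
        print(entry.title)

    # The producer publishes events whenever it likes; each client
    # polls on its own schedule, so neither needs the other online.
    feed = feedparser.parse("https://example.com/events.atom")
    for entry in feed.entries:
        handle(entry)

The feed itself is just a resource, so it caches and scales like any other REST resource, and the temporal coupling largely disappears.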
Horizontal scaling is where you add more machines to your pool of resources (servers).
Horizontal scaling would be adding more web servers behind the load balancer to facilitate more traffic.
Vertical scaling would be adding more RAM to your database server to keep all the data (or just the indexes if your database is that big) in memory.
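A toy illustration of the horizontal case (hosts are made up):

    import itertools

    # Horizontal scaling in miniature: the balancer spreads requests
    # across a pool of identical servers; adding capacity just means
    # adding another host to the list.
    pool = itertools.cycle(["10.0.0.1", "10.0.0.2", "10.0.0.3"])

    def route(request):
        return next(pool)

    print([route({}) for _ in range(4)])  # ...1, ...2, ...3, back to ...1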
To me the term "temporal coupling" skips some details, since the real consideration is the duration of the transaction versus the duration of the transport session. REST-over-HTTP can't directly represent transactions that span TCP sessions, and this is a problem if the transaction is very long or the connection is choppy.
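The usual workaround is to reify the transaction as a resource, so it can span any number of TCP sessions. A sketch of the idea (the endpoints are hypothetical):

    import requests

    base = "https://api.example.com"  # hypothetical service

    # Begin: the transaction becomes an addressable resource.
    txn = requests.post(f"{base}/transfers").json()

    # Steps can arrive over separate connections, hours apart.
    requests.put(f"{base}/transfers/{txn['id']}/debit",
                 json={"account": "A", "amount": 100})
    requests.put(f"{base}/transfers/{txn['id']}/credit",
                 json={"account": "B", "amount": 100})

    # Commit is just another state transition on the resource.
    requests.post(f"{base}/transfers/{txn['id']}/commit")

Whether that's elegant or a kludge is exactly the kind of question the article is raising.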
Since REST is so often misunderstood, even by many of its advocates, I’ll read the title charitably, and assume that the author is addressing claims made by well-meaning but misguided people. But, with respect to the arguments of actual REST experts such as Roy Fielding himself, the title is a straw man.
> Is the web really a model we want to emulate?
People tend to think that the technicians who use tools depend solely upon the engineers who make tools, which are based solely upon the theories that the scientists and researchers discover. But what often happens is that engineers first build things (like bridges) that scientists then study in order to come up with a general theory that is applicable to nature.
The Web is just such an example of this kind of sequence of events. You see, until TBL’s WWW took off, there were several competing efforts to create platforms for networked information systems. The earliest, and perhaps most (in)famous, is Ted Nelson’s Xanadu. The thing that kept tripping up some of the other efforts was the focus on information provenance. That is to say, almost everyone thought that we needed two-way hyperlinks so that a document was always connected to its source. Of course, the WWW did away with that and also just happened to become wildly successful. But here’s the thing: the Web’s success went against theories that went back to at least 1960, perhaps even to 1945 with Vannevar Bush’s Memex. So people like Fielding studied the web to come up with a new theory of networked information systems, in the same way that scientists might study bridges to come up with a theory of bridge building.
This is all to say that it’s wrong to think of REST as a post-hoc justification of “HTTP as a good information architecture”; rather, the point was to figure out why the Web’s architecture is successful, and to come up with a theory that may be generally applicable to networked information systems.
> REST creates temporal coupling…
> …and location coupling too
Believe it or not, a “server” is just a name, and “abstraction” is just naming things. Components in RESTful systems are only ever “tightly coupled” to an abstraction. But one of the most common mistakes people make in trying to understand REST is to focus on the URI part of the API, when it’s the media types that are most important. This is probably because people are used to the somewhat impoverished abstractions afforded by classical OO languages (nb: I am a big fan of OOP). Indeed, if you follow the prescription of REST which says that most of an API’s descriptive effort should be focused on media types, along with HATEOAS and caching, then you end up with a system that is actually less coupled than even the most interface-heavy OO architecture.
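To make that concrete, here's a sketch of a client that hard-codes only the entry point and a media type; every other URI is discovered through link relations. (The HAL-style payload and the "orders" relation are invented for illustration.)

    import requests

    # The only things the client is coupled to: one URI, one media type.
    entry = requests.get(
        "https://api.example.com/",
        headers={"Accept": "application/hal+json"},
    ).json()

    # Follow the "orders" link relation; the server is free to move
    # that resource around without breaking this client.
    orders_url = entry["_links"]["orders"]["href"]
    orders = requests.get(orders_url).json()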
> REST comes in many different flavors
I’m presently working on a paper that I hope will remedy that.
 And also because Fielding didn’t have enough time to give media types proper treatment in his dissertation: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...
Now my rant: I blame frameworks for the "inventor's drama" whenever someone builds a "RESTful API". We need frameworks (or libraries) that focus on the hypermedia part: linking, URI builders ("forms"), and feeds. A framework should treat HTTP as a protocol, not as a set of design decisions à la "Do I use singular or plural in my resources?!"
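Something like this sketch, where the "framework" is just the convention that every representation carries its links (the helper and relation names are invented):

    # Hypermedia-first representation: links and next actions travel
    # with the resource, so clients never assemble URIs themselves.
    def represent_order(order):
        oid = order["id"]
        return {
            "total": order["total"],
            "_links": {
                "self":    {"href": f"/orders/{oid}"},
                "payment": {"href": f"/orders/{oid}/payment"},  # next action
                "feed":    {"href": "/orders/recent.atom"},     # event feed
            },
        }

    print(represent_order({"id": 7, "total": 120}))

With helpers like that built in, the singular-vs-plural debate becomes a private implementation detail of the server.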
And of course there is temporal coupling; that's why we don't buy big batch-processing closets from IBM to build websites. Temporal coupling is baked into the very notion of HTTP.
Re spikes in demand, the article ignores that REST, unlike SOAP, is cacheable. REST is popular precisely because SOAP has struggled in the enterprise as business systems have moved online and traditional services haven't been able to handle the traffic.
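The mechanics are plain HTTP (the URL below is hypothetical): the first response carries validators, and every later request revalidates cheaply instead of hitting the backend.

    import requests

    url = "https://api.example.com/products/42"  # hypothetical resource

    first = requests.get(url)
    etag = first.headers.get("ETag")

    # A repeat request revalidates instead of re-fetching; a 304 means
    # the cached body is still good and no payload is resent.
    second = requests.get(url, headers={"If-None-Match": etag} if etag else {})
    print(second.status_code)

Every intermediary between client and origin (browser cache, CDN, reverse proxy) can join in, which is what soaks up the spikes. SOAP POSTs get none of this for free.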
Re transactions, REST architectures are stateless. But the wider problem illustrated in the article is that enterprise architecture is based on pre-web thinking and uses patterns suited to secure, robust, transactional/stateful, low-traffic internal systems (banking, payroll, ticketing, etc.). That is pretty much the opposite of web architecture in every way, so what good looks like in one is what bad looks like in the other.