Since REST is so often misunderstood, even by many of its advocates, I’ll read the title charitably and assume that the author is addressing claims made by well-meaning but misguided people. But with respect to the arguments of actual REST experts, such as Roy Fielding himself, the title is a straw man.
> Is the web really a model we want to emulate?
People tend to think that the technicians who use tools depend solely upon the engineers who make tools, which are in turn based solely upon the theories that scientists and researchers discover. But what often happens is that engineers first build things (like bridges) that scientists then study in order to come up with a general theory that is applicable to nature.
The Web is just such an example of this kind of sequence of events. You see, until TBL’s WWW took off, there were several competing efforts to create platforms for networked information systems. The earliest, and perhaps most (in)famous, is Ted Nelson’s Xanadu. The thing that kept tripping up some of the other efforts was the focus on information provenance. That is to say, almost everyone thought that we needed two-way hyperlinks so that a document was always connected to its source. Of course, the WWW did away with that and also just happened to become wildly successful. But here’s the thing: the Web’s success went against theories that went back to at least 1960, perhaps even to 1945 with Vannevar Bush’s Memex. So people like Fielding studied the Web to come up with a new theory of networked information systems, in the same way that scientists might study bridges to come up with a theory of bridge building.
This is all to say that it’s wrong to think of REST as a post-hoc justification of “HTTP as a good information architecture”; rather, the point was to figure out why the Web’s architecture is successful, and to come up with a theory that may be generally applicable to networked information systems.
> REST creates temporal coupling…
> …and location coupling too
Believe it or not, a “server” is just a name, and “abstraction” is just naming things. Components in RESTful systems are only ever “tightly coupled” to an abstraction. But one of the most common mistakes people make in trying to understand REST is focusing on the URI part of the API, when it’s the media types that are most important. This is probably because people are used to the somewhat impoverished abstractions afforded by classical OO languages (nb: I am a big fan of OOP). Indeed, if you follow REST’s prescription that most of an API’s descriptive effort should go into media types, along with HATEOAS and caching, then you end up with a system that is actually less coupled than even the most interface-heavy OO architecture.
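To make that concrete, here’s a minimal sketch of what “coupled to the media type, not the URIs” means in practice. The HAL-style `_links` structure and the `orders` relation are illustrative assumptions, not from any real API; the point is that the client hardcodes link relations, while the URIs themselves stay a server concern.

```python
import json

# Hypothetical HAL-style representation returned by the server's entry
# point. The "orders" relation is an illustrative example.
response_body = json.dumps({
    "_links": {
        "self": {"href": "/"},
        "orders": {"href": "/orders"},
    }
})

def follow(body, rel):
    """Resolve a link relation from the representation itself,
    rather than hardcoding the URI into the client."""
    doc = json.loads(body)
    return doc["_links"][rel]["href"]

# The client knows the media type's conventions and the relation name;
# the server is free to move /orders anywhere without breaking it.
print(follow(response_body, "orders"))  # -> /orders
```

The coupling here is only to the media type’s link structure, which is exactly the abstraction REST asks you to invest in.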
> REST comes in many different flavors
I’m presently working on a paper that I hope will remedy that.
 And also because Fielding didn’t have enough time to give media types proper treatment in his dissertation: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...
Now my rant: I blame frameworks for the "inventor's drama" whenever someone builds a "RESTful API". We need frameworks (or libraries) that focus on the hypermedia part and support linking, URI builders ("forms"), and feeds. The frameworks should treat HTTP as a protocol and not as a set of design decisions à la "Do I use singular or plural in my resources?!"
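A sketch of what such a URI "form" could look like, assuming a hypothetical server-advertised template in RFC 6570 style (the `/articles` path and field names are made up for illustration):

```python
# The server advertises a URI template plus the fields it accepts;
# the client fills in values, so URI structure stays a server concern.
search_form = {
    "template": "/articles{?q,page}",
    "fields": ["q", "page"],
}

def expand(form, **values):
    # Minimal expansion of the {?x,y} query operator only -- a toy
    # subset of RFC 6570, enough to make the point.
    query = "&".join(f"{k}={values[k]}" for k in form["fields"] if k in values)
    base = form["template"].split("{")[0]
    return base + ("?" + query if query else "")

print(expand(search_form, q="rest", page=2))  # -> /articles?q=rest&page=2
```

With forms like this, the singular-vs-plural question simply disappears from the client's vocabulary.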
And of course there is temporal coupling. That's why we don't buy big batch-processing closets from IBM to build websites. Temporal coupling is baked into the very notion of HTTP.