Hacker News
Why and How You Should Write REST-Centric Applications (w2lessons.com)
79 points by mwbiz on May 18, 2011 | 28 comments



Using HTTP status codes for error messaging leaves quite a bit to be desired. You don't have many options beyond a handful in the 4xx (400, 404, 409) and 5xx (500, 503) ranges. For the most part they are either too general or too specific to be used correctly. It'd be nice if there were a standard for supplemental error messages.

Edit: (to the downvoter) this post explicitly mentioned "You get clean exception handling via HTTP status codes". The exception handling may be clean, but it's extremely coarse.
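One common workaround, sketched below, is to keep the coarse status code and attach a machine-readable error body. The field names here are invented for illustration; there's no agreed standard, which is exactly the complaint above:

```python
import json

# Hypothetical "supplemental error" convention: the HTTP status stays
# coarse (409), while the JSON body carries an application-specific
# error code and a human-readable message.
def make_error_response(status, app_code, message):
    body = json.dumps({"status": status, "code": app_code, "message": message})
    return status, {"Content-Type": "application/json"}, body

status, headers, body = make_error_response(
    409, "DUPLICATE_USERNAME", "That username is already taken.")
```

A client can branch on the fine-grained `code` field while generic middleware still handles the 409 in the usual way.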


RESTful APIs can be a pain when you have to mashup different APIs, though. The amount of calls you end up having to do can be quite a lot. From a backend standpoint it's not that bad if you cache, but from a consumer standpoint I have to get a /users/active list, iterate over them, then call a /friends/[id] call for every user. If you add a third list you can easily see how annoying it might be. It's great that your APIs can be so flexible but now your clients are writing completely different code all over... the chances of you breaking something or them having to re-code things every release is a lot greater than updating a getFriendsOfActiveUsers traditional API call. Has anyone else found this to be the case? Are there ways around it? I'd almost want a hybrid or a mash API that I can pass in URLs and have it return a mashup for me.
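The call pattern described above can be sketched as follows; the `/users/active` and `/friends/<id>` endpoints are the hypothetical ones from the comment, stubbed out with a dict so the shape of the problem is visible without a network:

```python
# Stub responses standing in for a real HTTP client and server.
FAKE_API = {
    "/users/active": [{"id": 1}, {"id": 2}],
    "/friends/1": [{"id": 2}],
    "/friends/2": [{"id": 1}, {"id": 3}],
}

def get(path):
    # A real client would issue an HTTP GET here.
    return FAKE_API[path]

def friends_of_active_users():
    calls = 0
    result = {}
    active = get("/users/active"); calls += 1
    for user in active:
        result[user["id"]] = get(f"/friends/{user['id']}"); calls += 1
    # 1 + N round trips, versus one call to a hypothetical
    # getFriendsOfActiveUsers-style composite endpoint.
    return result, calls

result, calls = friends_of_active_users()
```

With 2 active users this already costs 3 round trips, and the count grows linearly with the user list.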


This is an issue I'd like to see more people talk about in REST API design. There's quite a balance to be struck between the number of calls to be made and the size of the data that needs to be downloaded.

I designed a REST API that's primarily (well, currently only) used by mobile applications, and it was often hard to decide whether to "denormalize" the API (fewer API calls required, but more data per call) or provide very fine-grained, general-purpose resources.


I know what you mean. I've often shied away from using REST simply because I like to set up my APIs as classes and use those classes directly in my internal code. REST breaks that for me, not only because the paradigm is different, but because if it were truly REST I'd be making curl calls to myself to get the data, which single-handedly bloats my code by an order of magnitude... It definitely deserves some discussion to find out whether there is a way around this problem, as blindly going REST can cause some very difficult problems down the road.


One need not describe every sublist as a separate resource. Sorting and filtering can correctly be implemented as query params on the full list resource. Perhaps the shunning of query params (rightly so when they're used as verbs) has gone too far.


Hear, hear. REST APIs are particularly difficult to implement when you have multiple optional values to query/describe a particular set of resources. The RESTful way encourages an explosion of URLs to try to support the different combinations. Adding new ways to list resources at a later date is also next to impossible. Query parameters simplify the process and the API significantly, and still make it easy to describe and use. For me the best approach is to mix the two: make often-used, pre-defined queries RESTful, then support all the specialised combinations through query parameters.
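The mixed approach above can be sketched as a single list resource whose optional filters ride in the query string. The `/widgets` resource and its filter names are made up for illustration:

```python
from urllib.parse import urlencode

# Hypothetical /widgets list resource: every optional filter becomes a
# query parameter, so adding a new filter later never requires a new URL.
def list_url(base, **filters):
    # Drop unset filters; sort for a stable, cache-friendly URL.
    params = {k: v for k, v in sorted(filters.items()) if v is not None}
    return base + ("?" + urlencode(params) if params else "")

plain = list_url("/widgets")
filtered = list_url("/widgets", color="red", sort="date")
```

Here `plain` is just `/widgets`, while `filtered` is `/widgets?color=red&sort=date` - one resource, any combination of filters.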


Nice post. The single biggest criticism I usually level at REST implementations is the lack of HATEOAS - the discoverability aspect of REST is about more than just an easily understood URI. As the author of this post states in his 2nd bullet point (emphasis mine):

"It’s expressive, REST paths and CRUD requests are easy to understand _and hypermedia makes it easy to navigate_"

It's that second part that so many implementations gloss over completely; it's probably worth discussing separately, in the same way that authentication is discussed in the post.


It's no accident that many public web APIs don't implement HATEOAS. Conceptually, HATEOAS is fantastic. Practically, it often stumbles.

As an example, 90+% of the web APIs I've designed and worked with are heavily used by mobile clients, which often suffer low bandwidth and high latency. Using proper HATEOAS URIs bloats payloads. Similarly, high latency for requests means that traversing hypermedia links across the API space is untenable.

In the real world, we design a structured API with well-known endpoints, and clients directly retrieve the resources they need. If the API needs to diverge from the specification substantially, then it gets versioned. The result is small, simple JSON payloads and nice, responsive clients.

If I'm missing something obvious here, I'd love to be educated.


Sure, but I'm sure Roy Fielding would argue you can't have REST without HATEOAS. So on some level, yes, it is great in theory, but on another level I would also argue that's where the 'ful' comes in, à la 'RESTful'. I guess you could look at it in a 'spirit of the law vs. letter of the law' kind of way - REST without HATEOAS is certainly in the spirit, but perhaps so is XML-RPC with HATEOAS. Neither is the letter, though.

The website example below is a good one, but as I'm an infrastructure-oriented kind of guy I'll give another: the Sun Cloud API, under the now-defunct project Kenai http://kenai.com/projects/suncloudapis/pages/Home. For example, doing a GET on a VM resource will return a payload that contains a URI for a power operation on the VM. What that power operation is obviously depends on the power state of the VM at the time of the GET. The AWS APIs provide a SOAPy interface, but they return information about objects in a way that adheres to HATEOAS much more than the Rackspace API, for example, which goes to _great_ lengths to espouse its RESTy virtues (even consistently, and incorrectly, lowercasing the 'E' in the API docs, lol).

So yeah, of course it all comes down to infinite shades of grey. I wasn't trying to imply that I know better than anyone, or that REST-without-HATEOAS is wrong or suboptimal or whatever (and I know you're not interpreting it that way either), just that I have sometimes wondered how many REST implementors actually took the time to understand what Fielding was/is on about. And I certainly don't believe you or the author of the post fall into that category!
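The state-dependent link idea from the Sun Cloud example can be sketched like this; the resource paths and field names are invented, only the shape of the idea comes from that API:

```python
# A VM representation advertises only the power operation that is
# currently valid, so the client never has to compute URIs or guess
# which transition is legal.
def vm_representation(vm_id, powered_on):
    op = "stop" if powered_on else "start"
    return {
        "id": vm_id,
        "state": "running" if powered_on else "halted",
        "links": {"power": f"/vms/{vm_id}/power/{op}"},
    }

running = vm_representation(7, powered_on=True)
halted = vm_representation(7, powered_on=False)
```

The client just follows `links["power"]`; the server is free to change the URI scheme or add new states without breaking anyone.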


The important quality that HATEOAS gives you is the freedom to evolve the application on the server without changing the client. It's the reason that the web can be used for things that TBL didn't anticipate when he invented it. If an API can't adopt new functionality without breaking clients then there is no sense in calling it RESTful.

As other commenters, and Fielding himself, have pointed out, REST is inefficient in terms of both computer and human resources. It's an architecture optimized for wide scale and long term use. That's why most APIs don't turn out very RESTful.


I wrote an API that has HATEOAS, but none of the devs really use it. They seem to prefer hardcoding strings.


The hardcore approach: randomly change all the URLs in testing to make sure nothing is hardcoded. A HATEOAS API should still work.


Haha, I thought about doing that, but it would just serve to piss a lot of people off.

I think it would be kind of cool to have a little project/developer toy that would be an API where only a single endpoint was provided (think http://mysteryapi.com), and the rest of it had to be discovered. It could be like the labyrinth from House of Leaves, but in REST API format.


Developers Like Hypermedia, But They Don't Like Web Browsers. From Leonard Richardson at WS-REST2010.

http://ws-rest.org/2010/files/WSREST2010-Preliminary-Proceed... starts at page 6


I agree that this is often glossed over. In fact I can't think of any framework implementation that provides it.

About the best example I can think of is the Atom Publishing Protocol, where a service links to its publishing URLs and a feed can provide pagination through hyperlinks. But still, it is not a particularly sophisticated example; do you know of any others?


I'm not sure how HATEOAS could be bundled into a framework as there is no straightforward process for realizing it. Designing a generic hypermedia format is a huge undertaking and a RESTful service must be almost completely specified by such formats.

If you can implement your service entirely with existing formats, that could make things much simpler. But the only kind of hypermedia for machine data access that I've ever heard of is RDF, and there is nothing simple about that.


The best example of that is a website. It provides links to other documents. It just happens to be that the output is usually HTML, but there's no reason why JSON or XML can't contain links to other hypermedia as well.


Most any document format can contain links, but that doesn't make it hypermedia. You can't make a client that knows what to do with any JSON or XML document.


Wait, why not? The JSON could return in a standard format that the client knows where to look for links to other documents.
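A sketch of such a convention (HAL-style, though the field names below are made up for illustration): every representation carries a `links` map, and a generic client only needs to know that convention, not any URLs:

```python
# A representation that embeds its own navigation. The "links" field
# is the agreed-upon convention; everything else is ordinary data.
doc = {
    "name": "example order",
    "links": {
        "self":   "/orders/42",
        "cancel": "/orders/42/cancel",
        "next":   "/orders/43",
    },
}

def follow(document, rel):
    # A generic client navigates by link relation, never by hardcoded URL.
    return document["links"][rel]

cancel_uri = follow(doc, "cancel")
```

If the server later moves orders to `/v2/orders/...`, clients that only ever call `follow` keep working unchanged.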


Sure, you can design a standard format on top of JSON, but that is rarely done. The whole appeal of it is for passing around ad-hoc data structures.

There are many hypermedia formats built on XML, but when it's just used as a data container for an API, it's not hypermedia.


Gotcha, that makes sense. Thanks.


I was introduced to RESTful APIs when Rails started championing them. And while I've read a number of REST articles, it still seems like I'm not understanding the nuances of long-term vs short-term decoupling of client/server to the level that Fielding demands for REST. For example, there's his blog entry here:

http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...

A key point is this paragraph:

"A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]"

---

When I try to absorb this and other points in that blog entry, I feel like I've failed to understand the real lessons and I'm thinking too short-term with fixed URIs for CRUD on data. But then again, in the comments, Fielding says "REST is software design on the scale of decades: every detail is intended to promote software longevity and independent evolution. Many of the constraints are directly opposed to short-term efficiency."

This makes a lot of sense to me and suggests that truly RESTful design isn't necessarily the best fit for people who want to throw stuff up on the web and then evolve to a more stable API.


The lessons of REST are straightforward in the context of human-machine interaction. The common confusion happens when trying to apply it to machine-machine interaction. I've never seen anyone make much of a case for REST in that context, not even its creator.

If the server can evolve new functions independent of the client then it seems to me that the client has to evolve new intentions independent of the server. At present, clients can only do that with a human operator.


IMHO, REST's "software design on the scale of decades" is a lofty claim that has probably been validated only when the "application" in question is a web browser with a human user consuming hypermedia.


Authentication and need for at least a pseudo "session state" always seems like the trickiest part of building a 100% RESTful app. Anyone have details/examples on ways to address these?


One approach I've seen regarding auth is to use a faux-resource:

Login:

    PUT https://example.com/credentials
    {"username":"foo", "password":"bar"}
Check if user is logged in:

    HEAD https://example.com/credentials
    Cookie: ...
Get current user:

    GET https://example.com/credentials
    Cookie: ...
Logout:

    DELETE https://example.com/credentials
    Cookie: ...
From the point of view of the one client, it's no less "real" than any other resource.


Seems like the Rails guys have done a good job sorting all these issues out.


[deleted]


The server itself can and does catch the signal that indicates the user hit Stop. Whether or not your code can detect this -- that's the tricky part.



