

Some People Understand REST and HTTP - llambda
http://blog.steveklabnik.com/2011/08/07/some-people-understand-rest-and-http.html

======
snprbob86
So there are some parts of REST that make perfect sense to me. In particular,
the preference for nouns w/ CRUD over verbs has many great side effects:
easier caching, better logging, easier discoverability, etc. And I also get
the Content-Type and Content-Language stuff, especially in terms of avoiding
".json" or ".xml" so that you can compare resource identity via a simple
string compare.

But I simply do not see any value in HATEOAS outside of largely read-only
datasets and generic dataset explorer type applications. Maybe it makes sense
for someone like Freebase, but it's completely useless for pretty much every
other API out there.

You simply _cannot_ build a useful API client application without deep
knowledge of the problem domain and the _interface_ part of the API. You're
going to have API documentation and you're going to have to read it.

Now, I understand the desire to avoid IDs and manual URL construction. That's
a valuable goal. And I'll admit that I never thought of using 201 and the
Location header on create; clever. But just knowing the list of relative URLs
from a resource is useless. It's not like a Link rel="newcomment" header is
going to show up and magically you'll have a comment form. Besides, you need to
know which "rel" to lookup, so you might as well just append "/comments" and
avoid the indirection.
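
For reference, the 201-plus-Location flow looks something like this from a
client's point of view; this is only a sketch, and the endpoint, payload and
field names here are made up:

    import requests

    # Hypothetical create endpoint; the server is assumed to answer with
    # 201 Created and a Location header pointing at the new resource.
    resp = requests.post("https://api.example.com/articles",
                         json={"title": "Hello", "body": "First post"})

    if resp.status_code == 201:
        new_url = resp.headers["Location"]       # e.g. .../articles/42
        article = requests.get(new_url).json()   # fetch what was just created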

And this all breaks down yet again when you get to offline support. If you've
got a web app which is going to deal with not-yet-saved objects, you're back
to being unable to compare URLs, or constructing them.

Lastly, while I like working with clean URLs and GET/POST over RPC calls, I
dislike the ad-hoc specifications necessary to build real applications. We've
got a "RESTful" API for our app, but we keep running into situations where
different views need subtly different data. For example, decorating a resource
with its relationship to the current user (e.g. isAdmin) or joining data when
returning a list of related objects (e.g. members vs. memberships). The query
param spaghetti is growing unwieldy, subtle authorization and data-leak
problems are an inevitability, and client-side models get confusing and easily
create bugs when passed around.

The only solution to these problems is excessive discipline. Discipline is
something that compilers are great at providing, which is why you see things
like ProtoBufs and Thrift. There's no arguing over HATEOAS or RESTfulness or
GET/POST or Content-Type or any of that. The message definition files act as a
baseline API documentation, which are enforced programmatically. The designers
of these tools had things to do and didn't have time to deal with this
nonsense.

Stop the pontificating and get back to work.

~~~
masklinn
> You simply cannot build a useful API client application without deep
> knowledge of the problem domain and the interface part of the API. You're
> going to have API documentation and you're going to have to read it.

That has nothing to do with HATEOAS. Of course you have to know the interface
part of the API, but the interface part is the content types. Not the Content-
Type header (though they can match), but the content types: the shape and
structure of the documents you get from the service and send to it. And those
content types tell you, among other things, where to get or send other types.

> Besides, you need to know which "rel" to lookup

Sure, see above, that's part of the content types _which the consumer needs to
know in any case_.
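
As a concrete, entirely hypothetical sketch of what that looks like for a
client, assume a media type that promises a "links" map keyed by relation
name; the URL, field names and rel below are invented:

    import requests

    # The media type is assumed to define a "links" object keyed by rel,
    # so the client looks up the rel it knows about instead of building
    # "/comments" by string concatenation.
    article = requests.get(
        "https://api.example.com/articles/42",
        headers={"Accept": "application/vnd.example.article+json"},
    ).json()

    # e.g. {"title": "...", "links": {"self": ".../articles/42",
    #                                 "comments": ".../articles/42/comments"}}
    comments = requests.get(article["links"]["comments"]).json()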

> And this all breaks down yet again when you get to offline support. If
> you've got a web app which is going to deal with not-yet-saved objects,
> you're back to being unable to compare URLs, or constructing them.

I fail to see the issue. You know the data you need to send to the service,
and you know the content types to traverse in order to reach where to send
your data in the service. What is the issue?

> Lastly, while I like working with clean URLs

URL shape has nothing whatsoever to do with REST.

> I dislike the ad-hoc specifications necessary to build real applications.

Why would they be any more ad-hoc than with any other interface standard?

> Stop the pontificating and get back to work.

Oh irony, you're so delicious.

~~~
snprbob86
I'm not following anything you've said about "content types". The well-
defined, well-known content types in typical applications are images and other
"attachment" type resources, as well as the occasional RSS feed or something
like that.

In domain-specific APIs (i.e. nearly all the ones that matter), every single
resource type has a unique schema. Ignoring versioning, I can request
resources with a specific URL pattern and parse them with specific logic.
That's all there is to it. It's not complicated. The Content-Type is entirely
irrelevant, unless I decide to use it for versioning or waste my time
supporting both XML and JSON.

> > Stop the pontificating and get back to work.
>
> Oh irony, you're so delicious.

My point was directed at the whole "What is RESTful?" debate, including all
the versioning, content types, URLs, headers, verbs, etc. Discussion of
approaches and problems is not pontification. Discussion of "Which approach is
more RESTful?" is pontification.

~~~
masklinn
> I'm not following anything you've said about "content types".

Which amply demonstrates your complete lack of understanding of the subject
you "pontificate" about.

edit: you can downvote me all you want, it does not change that fact. Here's
what Fielding has to say on the subject:

> A REST API should spend almost all of its descriptive effort in defining the
> media type(s) used for representing resources and driving application state,
> or in defining extended relation names and/or hypertext-enabled mark-up for
> existing standard media types. Any effort spent describing what methods to
> use on what URIs of interest should be entirely defined within the scope of
> the processing rules for a media type (and, in most cases, already defined
> by existing media types).

(I used "content types" for his "media types", that's about it).

------
icebraining

        The good
    
        GitHub uses custom MIME types for all of their responses. They're using the vendor extensions that I talked about in my post, too. For example:
    
            application/vnd.github-issue.text+json
    
        Super cool.
    

Hmm, I wouldn't call that "good." It's definitely better than sending
'application/json' or 'application/xml', which tell us nothing about the
structure of the data, and it's probably inevitable in their context, but
"good" would be to use an actual _standard_ mimetype instead.

The problem with using mimetypes tied to the service is that it undermines the
concept of Uniform Interface, by forcing developers to write clients
specifically for that service. Imagine if instead of standardizing on (X)HTML,
CSS, JS and a couple of image formats, each website used their own format.

~~~
jallmann
That is actually an interesting point. I can't pull up the HTTP spec or Dr.
Fielding's dissertation right now, but what precisely is the mimetype supposed
to signify?

It seems that the _encoding_ of the response (html/xml/json/etc) is being
conflated with a content specification for the response. XML already allows
you to declare a schema for validation purposes. There have been a few
attempts to standardize this on the JSON side, but that will never be a
first-class citizen in JSON (nor do I think it should be).

It appears to me that if you really care about the content of your resource,
then you should use the schema facilities provided by your encoding, rather
than imposing it universally on the architecture side, as the article implies.

Historically, mimetypes (afaik) have never made any assumptions about the
content of a resource, other than it adheres structurally to whatever the
mimetype declares. Should be kept that way.
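
On the JSON side, "using the schema facilities of the encoding" might look
roughly like this with the third-party jsonschema package; the schema itself
is invented for illustration:

    from jsonschema import validate, ValidationError  # pip install jsonschema

    # An invented schema for an issue-like document: the mimetype can stay
    # plain application/json while the structure is enforced separately.
    issue_schema = {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "state": {"type": "string", "enum": ["open", "closed"]},
        },
        "required": ["title", "state"],
    }

    try:
        validate({"title": "Bug: comments 404", "state": "open"}, issue_schema)
    except ValidationError as err:
        print("document does not match the schema:", err.message)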

~~~
icebraining
That limits the number of XML representations of any given resource to one,
which can be a problem. In the context of a geographical object, XML might
mean a KML file or an SVG image of the location, for example.

There are plenty of IANA approved mimetypes which define a schema - in fact,
HTML itself provides plenty of semantic information about the data.

~~~
jallmann
You could narrow it down using _standard_ mimetypes even if the alternate
representation is an XML derivative: application/svg or whatever the mimetype
for that is. The encoding for SVG is well defined; SVG tags merely lend
structure to the content. The same goes for HTML and any other markup.

I'm undecided now though.

It's a slippery slope with custom representations. With generic XML or JSON,
you still need a priori knowledge of the content to parse the representation
manually. Then the benefit of a nonstandard mimetype is simply to be
absolutely explicit about your content. Whether that should be canonical, I'm
not sure.

------
technoweenie
Any notion of HATEOAS in the GitHub API is purely experimental. With the
exception of the pagination links, the rest of them could change format or be
removed at any time (until something is properly documented at
<http://developer.github.com/>). I don't think the Link header is descriptive
enough, so most of them will probably go away.

------
KevBurnsJr
REST is a style, not a pattern. The application of this manner of
classification to application architectures on the web was the broader goal of
Fielding's dissertation, titled "Architectural Styles and the Design of
Network-based Software Architectures".

See chapter 1
[http://www.ics.uci.edu/~fielding/pubs/dissertation/software_...](http://www.ics.uci.edu/~fielding/pubs/dissertation/software_arch.htm#sec_1_5)

------
shimonamit
Another interesting post by Steve. What are your thoughts on Twilio's date
prefix in the URI versus using "v1", for example? Does discovery appease the
purists' qualms about versioning? But if using HATEOAS enables discovery
(which Twilio is doing), why put a date there in the first place? I am
thinking they're future-proofing their top-level resource discovery, but
maybe I'm missing something.

~~~
necubi
[Disclaimer: I work for Twilio.]

It's actually not future-proofing, in that today we have different versions of
the API: 2008-08-01 and 2010-04-01. Some sort of versioning is necessary so
that we can improve our API without breaking current clients or forcing all of
our users to update their code. And while the goal of a completely
discoverable API is laudable, I've never seen it work well in practice. We
have a real service with real users, and we need something that works well and
is simple to consume.

And I don't see how date-based versioning is any better than version numbers
from a HATEOAS perspective, since it's not really any more discoverable. There
is however an argument to be made that version numbers are simpler, clearer
and easier to remember.

~~~
apaprocki
Isn't it simpler to infer backwards compatibility from proper major.minor
version number bumps? A major version number bump indicates API(/ABI)
breakage, plain and simple. A minor version bump is simply a backwards-
compatible change to the existing major.

Version numbers for marketing purposes are not really that useful, but if they
are strictly used to indicate compatibility, they are very helpful. No one
cares if the API is at, say, version 1023.301 as long as they know that
1023.200 is compatible with the latest version and 1022.499 isn't.
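
That rule is small enough to state as code; a minimal sketch of the
compatibility check described above:

    # Two versions are compatible iff the major numbers match; a higher
    # minor within the same major is assumed to be a backwards-compatible
    # addition, per the convention described above.
    def compatible(client_version: str, server_version: str) -> bool:
        c_major, c_minor = (int(x) for x in client_version.split("."))
        s_major, s_minor = (int(x) for x in server_version.split("."))
        return c_major == s_major and s_minor >= c_minor

    assert compatible("1023.200", "1023.301")      # same major, newer minor
    assert not compatible("1022.499", "1023.301")  # major bump means breakage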

~~~
johns
(I'm also from Twilio)

We historically have not done version bumps for minor improvements to the API
that do not break existing applications. 2010-04-01 has been updated a couple
of times since it was first released with new features (like subaccounts,
applications and short codes), but all of those were additive, so there was no
reason to change the version.

------
antonyme
Well, I for one don't understand REST. In particular, how to implement it in
current HTML with current browsers.

There's all this nice, theoretical stuff about URI design, HTTP verbs and so
on. So I went to actually implement this the other day, only to find that you
can't actually do it properly without hacks, because HTML and browsers only
support the GET and POST methods for FORMs. WTF?

Can you actually implement a set of CRUD pages for an entity without resorting
to hacks like hidden _method fields?

Surely REST is intended to be used for more than AJAXy APIs?

~~~
icebraining
You're confusing REST (an architectural style) with HTTP (which defines the
verbs/methods). REST isn't only HTTP, and HTTP clients are not only general
browsers.

But it's true, you can't use the full range of HTTP verbs using normal HTML,
and that's a shame. Apparently HTML5 was supposed to support them, but that
support was removed.
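
The usual workaround is the hidden _method field mentioned above; here is a
rough sketch of how a server can honour it, written as framework-agnostic WSGI
middleware (the field name and the allowed verbs are conventions, not a
standard):

    import io
    from urllib.parse import parse_qs

    class MethodOverrideMiddleware:
        """Promotes a POSTed form carrying `_method=PUT|DELETE|PATCH`
        to the verb the HTML form could not express."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            if (environ.get("REQUEST_METHOD") == "POST" and
                    environ.get("CONTENT_TYPE", "").startswith(
                        "application/x-www-form-urlencoded")):
                length = int(environ.get("CONTENT_LENGTH") or 0)
                body = environ["wsgi.input"].read(length)
                override = parse_qs(body.decode()).get("_method", [""])[0].upper()
                if override in ("PUT", "DELETE", "PATCH"):
                    environ["REQUEST_METHOD"] = override
                environ["wsgi.input"] = io.BytesIO(body)  # let the app re-read the body
            return self.app(environ, start_response)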

------
rjd
Recently I decided to use the ASP.Net MVC framework for a project and tried to
combine the API into the web site project for ease.

It became clear to me quickly that all the mentions of REST within the
framework were fictitious. What it actually provides is an object API exposed
via HTTP; it's not REST at all.

Thinking I misunderstood what REST was, I started doing some research, only to
discover that no, I was correct in my understanding (from the white papers),
and that almost every single developer article I read (blogs) was wrong. There
is a fundamental misunderstanding of what REST is out there. It's kind of
disheartening to see how many people just don't get it and are perpetuating
falsehoods :/

ASP.NET's main problem is technical (and I assume this is the same for many
languages): you can't have functions with the same parameters, so you can't
have separate endpoint methods for POST, GET, PUT, DELETE, etc. So if you try
to shunt data objects around using it, you can't just put them back where you
got them, or do smart discovery, two things I like from REST. So the whole
framework falls to pieces; it's designed to mimic REST, but not be REST.

For .NET devs reading this, I'd avoid the MVC framework for REST. But if you
have to use it for whatever reason, you will have to build your own "verb"
dispatchers and drop the use of "action" functions in your controller classes:
only have one public method, the standard Index() one, and expose nothing
else. I may float my samples online at some point, but I'm too busy at the
moment, so I hope this is enough to help.

~~~
rodh257
You can't have functions with the same parameters, but ASP.NET MVC gives you a
way around this using attributes. See this SO question. <http://goo.gl/wJ4jE>

Does that solve what you were talking about?

~~~
darylteo
You can also use the [HttpGet], [HttpDelete], [HttpPost], [HttpPut] attributes.
Personal preference.

~~~
rodh257
Yeah, but the problem he is having is that you'd need a different method
signature for each of these, otherwise it won't compile, and he wants to have
the same path/parameters for each, which is why you need to rename your method
and then add the different attribute.

If your methods already have different parameters or a different name, you can
just use the attributes like you state.

------
extension
Can clients of your API talk to other people's APIs using the same interface?
If not, then the client is obviously coupled to your server and you definitely
do not have a REST application. You have a thick client application. REST is
the antithesis of this -- the client is generic and _doesn't need to change
along with the server_.

Putting links in your proprietary data format does not make it hypermedia, it
just makes your API easy to reverse engineer. Maybe it also allows you to
change your URLs, but it doesn't allow you to change the structure of your
data. Hypermedia does. Hypermedia comes in generic media formats that clients
know what to do with.

If your API is called "The [company] API" then it is almost certainly not
RESTful. If your API is called "[generic type of data] interchange protocol"
then it might be RESTful. But we don't usually call that kind of thing an API,
we call it a protocol or a format. Really, I don't see how an API, as they are
commonly understood, can possibly be RESTful. The main big important point of
REST is to not have APIs.

~~~
lautis
The REST known by everyone and what Roy Fielding described are quite different
beasts. There should be a different name for "plain old HTTP APIs" using JSON
or XML.

[http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...](http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven)

------
dreamdu5t
Vendor mimetypes are bad. They defeat the purpose of REST.

I should be able to request from Github the content-type(s) I am willing to
accept, and Github serves them to me.
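
For what it's worth, that negotiation is just an Accept header; a sketch using
the vendor media type quoted earlier (the repository path is a placeholder and
the response format is not guaranteed here):

    import requests

    # Ask for the vendor media type quoted from the article; whether the
    # server honours it for this placeholder resource is an assumption.
    resp = requests.get(
        "https://api.github.com/repos/example/example/issues/1",
        headers={"Accept": "application/vnd.github-issue.text+json"},
    )

    print(resp.headers.get("Content-Type"))  # what the server chose to serve
    print(resp.json())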

It's ironic that a blog post about people understanding REST highlights their
misunderstanding of it.

What Github does isn't bad, but you shouldn't praise it for being RESTful.

~~~
masklinn
> Vendor mimetypes are bad. They defeat the purpose of REST.

No they don't. Most standard mimetypes are far too broad and completely
useless as an actual content type. "application/json" for instance does not
tell you _anything_ about the content you're getting, apart from the meta-
format in which that content is expressed.

Useless.

~~~
KevBurnsJr
It tells you it's not HTML. That's useful if you hope to parse the message
body.

~~~
reinhardt
Ok, so you can "parse" it, which is just a way of saying you can transform a
bunch of bytes to a nested structure consisting of a few basic types (dicts,
lists, strings, floats, etc). Now what? What can you (or actually your
program, not you as a human) do with it without some sort of schema, explicit
or implicit?

~~~
jcrites
Nothing. Exactly!

REST is not for machine-driven interactions. It's for humans browsing
websites.

~~~
masklinn
Uh... no.

~~~
j-g-faustus

        The REST interface is designed to be efficient for large-
        grain hypermedia data transfer, optimizing for the common 
        case of the Web, but resulting in an interface that is 
        not optimal for other forms of architectural interaction.

[http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch...](http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1)
(Section 5.1.5)

        When a link is selected, information needs to be moved 
        from the location where it is stored to the location
        where it will be used by, in most cases, a human reader.

[http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch...](http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2)
(Section 5.2.1)

It certainly sounds like "human browsing" was the primary use case. Unless I
missed something?

~~~
icebraining
Yes, it's obviously the primary use case, since REST is modeled after HTTP.
But the keyword is _primary_: it doesn't mean it can't or shouldn't be used
for machine-driven workflows.

------
sukuriant
I don't hate to be the guy to point this out, but I absolutely hated the
layout of that website. I am on a 1440x900 screen. I do not want a website to
be the god of my computer while I'm reading. This website absolutely did not
support scaling on screen. It did not resize when I moved it to half of my
screen, and it did not provide me with a scroll-bar at the bottom so that I
could adjust my viewing window of the screen to see all of the text per line.

