

Haters Gonna HATEOAS - bpatrianakos
http://timelessrepo.com/haters-gonna-hateoas

======
mythz
HATEOAS is a solution aimed at solving the _clients can't update_ problem. This
was never really an issue since browsers (the closest thing we have to a HATEOAS
client) always rolled out new upgrades, and it's even less of an issue now with
today's auto-updating browsers.

It was also born in the web's early document-centric era, where every action
required a full loop-back to the server and an entire page reload, which is why
it will never be used to create engaging and interactive apps.

The ideal use-case for HATEOAS would've been to power native mobile apps, since
they can be difficult to update, but even there it's relatively non-existent,
with most mobile app developers realizing it provides a worse end-user UX that
takes more effort to achieve.

Basically, 10+ years on, it's still only being used in academic circles, where
the few who are trying to implement it are doing so to achieve full REST
compliance, not because it's the best tool for the job or creates the best
end-user UX.

I've written an earlier post on why HATEOAS and Custom Media Types are often poor
choices, generally requiring more effort and producing less valuable results:
<http://www.servicestack.net/mythz_blog/?p=665>

~~~
parasubvert
FIFY:

"[Hyperlinks are] a solution aimed at solving the clients can't update
problem. This was never really an issue since browsers (closest thing we have
to a [Hyperlink-aware] client) always rolled out new upgrades, it's an even
less of an issue now with todays auto-updating browsers. It was also born in
the web's early document-centric era where every action requires a full loop-
back to the server and entire page reload, which is why it will never be used
to create engaging and interactive apps."

Which seems rather nonsensical. HATEOAS is just about hyperlinks driving
the application. Which is what the Web is, mostly, still, even though we have
lots of crappy Flash restaurant sites.

There are two ways you might be correct: (a) REST API clients are complete silos
and we no longer use hyperlinks to bridge across them, or (b) JavaScript is
more important than links.

(a) is sometimes true given the state of today's REST APIs, but not always -
Google's APIs, Facebook's APIs, etc. are all fairly connected. Plus, we still
use hyperlinks to bridge across "apps", of course, because this proliferation
of API silos still requires URIs, and even the browser as glue between them,
until more mature user agents evolve.

As for (b), REST always had Code-on-Demand (i.e. JavaScript) as an optional
constraint alongside HATEOAS.

The point of HATEOAS is not about building individual apps, it's about
building an ecosystem of integrated applications... i.e. the HTML web itself.

One area where I do agree with you: REST's constraints can and should be
jettisoned if it makes sense for your use case. If you really want to make an
app that's "engaging and interactive" and needs to be a silo by nature, then
have at it.

~~~
mythz
And every time you click a hyperlink, you're requesting a full loop-back to the
server and an entire page reload. This limits its ideal usages, e.g. it's
better UX to use web sockets to live-update content than to make end-users
manually click hyperlinks. It's also not suitable for "mashups" (i.e.
content/functionality from multiple sources) or stateful UIs (i.e. how most
native desktop apps work).

And it's irrelevant in today's Single Page Apps, i.e. you're never going to be
able to create a usable "Google Maps" or Google Docs-like application that
complies with HATEOAS constraints.

I'm not saying you can't build systems with it, just that it requires more
effort and ultimately produces a worse UX - which is why it's an ignored
technology/set of constraints.

~~~
vidarh
I don't understand your arguments at all. Here's a concrete way in which a
single page app may make use of it, without affecting UX at all:

Instead of hardcoding a URI hierarchy to determine where to issue a request to
in order to take various actions on a message in a collection, look for link
tags to specify them.

The immediate benefit is that it allows the server to signal: 1) what actions
it supports in the current context - e.g. if you're logged in as a user with
restricted credentials, it might not return a link for the "delete everything"
action, and when the rules change, only the server side (which already needs
knowledge of the rules to validate requests) will need to change; and 2)
endpoints can change trivially and the client-side application is
automatically up to date.

How much harder is this? You need to output a few extra tags and attributes.
On the client you need to replace some hardcoded strings with lookups based on
an xpath expression. But at the same time there are many cases where you may
be able to remove duplicated logic.
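
A minimal sketch of that lookup, assuming an XML representation and using a
`requests`-style HTTP client; the URL, rel value, and element names below are
illustrative assumptions, not anything from the thread:

    import requests
    import xml.etree.ElementTree as ET

    # Fetch the entry and look for the "new comment" link instead of
    # hardcoding the endpoint.
    doc = ET.fromstring(requests.get("https://api.example.com/entries/1337").content)
    link = doc.find(".//link[@rel='/linkrels/entry/newcomment']")
    if link is not None:
        # The server advertised the action, so the client may offer it.
        requests.post(link.get("uri"), data={"body": "Nice post!"})
    # If the link is absent (e.g. restricted credentials), the client simply
    # doesn't show the action - no client-side rule changes needed.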

------
kt9
IMHO HATEOAS seems like a solution looking for a problem. I've built and
consumed lots of APIs and I've never encountered a situation where I wished
that a mechanism existed to discover the next step automatically from the
first endpoint.

Using HTTP verbs and resources is super useful and has caught on because it's
super simple. HATEOAS seems like a whole bunch of complexity to me that
doesn't add much value.

Maybe I'm wrong and just need to better understand the use case and value
proposition.

~~~
bpatrianakos
I don't think you're wrong. I can agree completely. But at the same time maybe
we feel this way because we're not used to working with HATEOAS APIs. I
personally haven't ever wished I could discover the next step while using an
API either but maybe if those APIs were more common we'd be wondering how we
ever lived without them.

~~~
icebraining
How did you find the reply page to post that message?

HATEOAS, like all of REST, is modeled after the biggest API ecosystem we have:
the (HTML) web.

------
comex
Making a client add a level of indirection through <link> elements to figure
out what URL to use is not inherently a good thing: it's easy for clients to
get wrong, requires the use of XML rather than other output formats (sort of),
and generally adds complexity without an obvious benefit. The example given
was that the URL might change, and clients would be able to follow along like
humans; but unlike humans, who can autonomously start using new
functionality and new site organization, computers are very unlikely to be
able to automatically provide a useful submission to a new API - and if the
API doesn't change (or is backward compatible), there is no good reason to
change the URL unless the URL is excessively coupled to the implementation.
The only other benefit I can think of is that humans testing the API might use
the list of available API calls, which is actually pretty reasonable, but the
article seems to be articulating this as something more fundamental than a
debugging aid.

In other words, how is this a good thing?

~~~
joelhooks
You definitely aren't tied to XML; it is just one possible format for
delivering hypermedia. I'm nothing close to an expert, but I enjoy the concept.
GitHub is doing cool things with hypermedia in their API.

------
markburns
There are two massively under-addressed problems with the HATEOAS constraint:

1. Code-on-demand requires coupling the technology stacks of client and server.
(If you're going to assume your clients can run JavaScript or x-technology,
where is the decoupling?)

2. Out-of-band communication. Some of the recommendations I've heard have been
along the lines of: let's avoid out-of-band communication in API docs etc. and
have everything completely discoverable based on the media type. "And what
happens if the media type isn't sufficient to represent your use case?"

"That's easy, you create your own media type"

So in order to have no need for API docs for an actual API, I create a
completely separate media type (which will need its own RFC-style
specification, or at the very least its own API docs).

This is purely academic, and I still think that nobody is doing it, because the
only people who have tried it or seriously thought through the implications
are naysayers. Nobody who's actually built a system like this is coming forth
and explaining the benefits.

Maybe if we get some media type that everyone starts using because (unlike
XML/JSON etc.) it supports full REST-like semantics, i.e. forms for POSTing and
not just hyperlinks for GET requests, then we may see the purported benefits.
But you still run into the problem of expressing field-level validation
messages in a 422 response, and of expressing the valid values in a reasonable
way in a template for an object.

Yes, expressing a String or a DateTime or any other relatively primitive data
type with simple validations (presence, membership in a set, valid email,
valid phone number) may be possible.
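
Purely for illustration, a guess at what such a machine-readable field
template could look like, written as a Python dict; the format, field names,
and constraint keys are all invented here, not any existing media type:

    # A hypothetical "new comment" form description a server might embed in a
    # response. It only illustrates the idea of advertising fields and simple
    # validations alongside a link.
    new_comment_form = {
        "rel": "/linkrels/entry/newcomment",
        "method": "POST",
        "uri": "/entries/1337/comments",
        "fields": {
            "email":  {"type": "string",   "required": True,  "format": "email"},
            "posted": {"type": "datetime", "required": False},
            "rating": {"type": "integer",  "within": [1, 2, 3, 4, 5]},
        },
    }

The objection above is that anything beyond these primitive constraints, i.e.
arbitrary business rules, can't usefully be captured this way.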

But most of the time you can't predetermine this kind of contract up front. So
there's no way for me, the producer of the API, to express to you, the consumer
of the API, that this particular regex completely satisfies my business
requirements in all circumstances.

If I could nail down the expression of the required values for an API call in
any sufficiently complex business system, and convey it to you in some
easy-to-digest, computer-understandable format, then I'm sure we'd be gold.

But reality sets in and you realise you've added a whole bunch of cruft to
every API call, and a whole bunch of unnecessary API calls to express some
theoretical idea by an academic who admits that he's too busy to actually go
away and start building some of these systems.

Not only that, you're making it harder for your consumers to integrate
(unless they are using an already-known media type, which is a chicken-and-egg
situation, as the closest thing we have to being good enough is HTML, and you
try explaining to API consumers that yes, the future is HTML, and yes, I want
you to parse HTML to get the data from my API).

We could say to hell with efficiency, since technology consistently improves,
but we're dealing with routing packets over the internet at approximately the
speed of light. There is a real-world upper limit to this stuff, and we need to
improve efficiency, because if you add 10 extra HTTP requests per API call,
your users _will_ notice. This stuff matters.

Hell, even if these are not user-critical applications, and there is some
as-yet-not-understood benefit to the world as a whole from having asynchronous
processes running in the back ends of our systems, optimising some problem
space and searching for solutions on their own, we still have another
real-world problem: there has to be some short-, medium-, or even long-term
benefit to someone from building this kind of system.

Nobody wants to build this theoretical hypermedia semantic web on their own
dollar when there is no visible pay-off in the near future, or even an
idealistic theoretical pay-off in the long term.

------
5avage
We're working with a HATEOAS API on our current project. One area it really
shines is in pagination. The API abstracts away both a MySQL and a CouchDB
database. When paginating through SQL, the "next" and "prev" links are simply
setting skip and limit parameters, but in Couch they're supplying keys and
documents (because "skipping" 10,000 records in a b-tree is a really bad
idea). The application is no longer exposed to implementation details; it
simply follows the links.
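
A minimal sketch of that client-side pattern, assuming a JSON body with
hypothetical "items" and "links" fields and a placeholder URL (the actual API
discussed above isn't shown in the thread):

    import requests

    # Page through a collection by following the server-supplied "next" link,
    # never building skip/limit or CouchDB key parameters on the client.
    url = "https://api.example.com/records"
    while url:
        page = requests.get(url).json()
        for item in page.get("items", []):
            print(item)
        url = page.get("links", {}).get("next")  # absent on the last page

Whether the server builds "next" from skip/limit or from CouchDB keys is
invisible to this loop, which is the point being made above.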

Plus, when you can "surf" your API by following links in JSON documents, you
start to twig how it's a pretty powerful thing. It takes client developers
about a day to get over the strange feeling they get yielding state control to
the back-end...

------
Osiris
At work we had a team develop a corporate standard for REST APIs that includes
discoverability, such as 'links' and 'actions' properties on resources.

Our team determined that implementing the discoverability aspect was going to
take a significant amount of engineering resources and wouldn't provide much
value, if any.

The other problem I have with the idea of discoverability in the API is that
it doesn't really help the client. For example, with a forum API there may be
a link to create a new comment. The client has to be written to recognize the
new comment link and attach the URI to a button. What if the API changes the
new comment link? How does discoverability help here? Sure, the server is
sending the new link but the client doesn't know that new link is really the
new comment link that it's expecting so the client has to be updated anyway.

So I don't really see how that helps prevent the client from needing updates
when the server changes something.

~~~
vidarh
> Sure, the server is sending the new link but the client doesn't know that
> new link is really the new comment link that it's expecting so the client
> has to be updated anyway.

Then you're doing it wrong.

See the article example. E.g.:

    <link rel="/linkrels/entry/newcomment"
          uri="/entries/1337/comments" />

The "rel" attribute remains static. You don't change that. It is what tells
your client that "this link is a new comment link".

The "uri" attribute can change at will.

The client knows that "/linkrels/entry/newcomment" means that this is the link
to follow to post comments to this entry.

It does not need to be updated when the URLs to server resources change, as
long as the "rel" attributes stay the same.

~~~
Osiris
I was referring to the rel attribute, not the uri. So your point is that once
the server establishes a defined API, it can't change it without breaking
compatibility with the client, which is why API versioning is needed. So the
client still has to be updated when the API changes.

I suppose it provides a little flexibility for the API to adjust the
endpoints, but in practice it still seems like a bad idea to modify the
endpoints because clients could have cached data pointing to existing
endpoints that you now break.

------
waxjar
I really like his articles on HTTP APIs. I learned quite a lot from them. What
he writes makes a lot of sense, except for the "HATEOAS" part.

What he consistently doesn't tell us is _why_ it's necessary and _how_ exactly
we're supposed to implement HATEOAS—wouldn't HEAS be a much better acronym?—in
our APIs.

I'm guessing the reason why is so API wrappers would be trivial to write. A
single library could simply consume a "standards-compliant" API. What that
means I do not know.

That's where the how comes in. It seems from the XML example that these links
have to be in the body of the response. That would mean _every single response
format_ has to be dealt with separately. That's not very practical. I also
have no idea how something like this would look in JSON, for example, let alone
how it should be parsed in a meaningful way.

~~~
zachrose
Why not:

    {
      "id": 35,
      "name": "Kennedy, John F.",
      "predecessor": { "id": 34, "name": "Eisenhower, Dwight D.", "href": "/presidents/34" },
      "successor":   { "id": 36, "name": "Johnson, Lyndon B.", "href": "/presidents/36" }
    }

~~~
icebraining
Besides killing the ids, use full URIs, not just the path. That enables you to
point the links at other domains, possibly a third party. Just as we often
link to Wikipedia in our posts, you could decide tomorrow that your service
should link to dbpedia.org or Freebase instead.

------
tylerpower
The concept of discoverable resources is something I'm very passionate about.
We've developed client libraries for our API in JavaScript and .NET that start
from a single document and discover other resources via links. I wrote a blog
post about it a while ago: REST, API’s and The Missing Link -
<http://blog.appsecute.com/?p=98>

------
gumbo
HATEOAS seems great if you look at it from a design perspective. You can
navigate the whole API just by having the entry point. Seems great, right?

HATEOAS was introduced to solve the "discoverability" of an API, but in
practice this doesn't really work. It would mean a client can't bookmark a
link to a resource; instead they would need to navigate the whole "state
machine" to discover the hypertext of said resource. This has two problems
from my perspective:

- It doesn't look like the web. It is as if, each time I need to find a thread
on Hacker News, I need to start from the front page and navigate until I find
the thread. Why not just bookmark the URL of the post and be done with it?

- HATEOAS just moves the issue: instead of hardcoding the links, I will be
hardcoding the relations between resources.

So if the goal of HATEOAS is to allow us one day to use a single generic
client that will work with any REST API, rest assured we're not quite there
yet (pun intended).

------
dstroot
People don't do it because the state is already managed in the web app
itself, in general.

    GET /widgets
    GET /widgets/:id

The app says "view" and then "edit item". We don't expect the API to tell us
this.

People are just trying to get stuff done and many design from the UI
backwards.

If we start to see more useful examples maybe the herd will thunder. ;)

~~~
alttab
You clearly come from a Rails background. I would even go so far as to say that
"many design from the UI backwards" is extremely true in the Rails crowd (I
would know).

This level of thinking creates monolithic, poorly organized, unstable
applications.

------
michaelw
I think it's easy to miss the benefits of HATEOAS. The relative ease with
which javascript clients can use so-called level 3 REST to construct resource
URLs from IDs is compelling.

We've been using a HATEOAS approach for the interface between our back-end
rails app and our front-end javascript rich client.

We've seen significant benefits during development because the hypermedia
links allow us to DRY up the coupling between client and server.

The overall result is much less boilerplate duplication and much clearer
client code. It's easier to maintain and extend.

We've judiciously identified some exceptions where the client needs to
construct URLs explicitly, in our case because the URLs are tied to dates and
we think it's silly to have the server construct links for next_day and
previous_day let alone an entire calendar view.

------
aidos
I spent a long time trying to find a framework that does HATEOAS because I was
desperate to do the right thing. I was even willing to use a different
language for the API to make it happen.

Thankfully, during my hunt Tom Christie released Django REST framework 2.
It does this stuff the right way. All 4 levels of goodness.

From the outside it might not be obvious why it's worth having HATEOAS in your
API, but when you're exploring an API as a human it makes it so much easier.

The client tools are not really there yet to take advantage of it, but they
will be, and our systems will be more stable for it.

------
mehdim
This post makes me think of this:
<http://apijoy.tumblr.com/post/34286521181/when-i-assist-at-a-debate-on-what-is-a-true-rest>

------
marcloney
If HATEOAS implementation were more widespread, the biggest advantage would be
the ability to create standard libraries for API calls rather than thin
wrappers for every language. Your API would also be more or less self-
documenting, hopefully making it easier to navigate for humans and machines
alike.

I am curious as to why this can't be handled at the HTTP level with OPTIONS.
Couldn't I send an OPTIONS request to a URI and receive a list of the HTTP
methods (GET, POST, PUT, etc.) that are available on that resource?
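
A minimal sketch of that idea: servers that implement OPTIONS typically
advertise supported methods in the Allow response header (the URL below is a
placeholder, not a real API):

    import requests

    # Ask a resource which methods it supports. If the server implements
    # OPTIONS, the Allow header lists them, e.g. "GET, POST, HEAD, OPTIONS".
    resp = requests.options("https://api.example.com/entries/1337/comments")
    print(resp.headers.get("Allow"))

Note this only advertises methods on a known URI; it doesn't tell a client
which related resources exist, which is what the link-based approaches above
are about.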

------
T-R
A large part of the reason that many APIs fail at content negotiation has to
do with the fact that many clients don't support it properly. Chrome in
particular completely ignores the Vary header (and its devs have marked the
issue as "won't fix"), so it's not practical to have the same URL respond with
different media types based on the Accept header (Chrome overwrites the cached
item, so the back button doesn't work properly). Many APIs compensate for this
by putting the format in a query parameter.
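
For context, a minimal sketch of the server side of that negotiation (Flask is
an arbitrary choice here, and the route and payload are made up): one URL
returns JSON or HTML depending on the Accept header, and sets Vary: Accept so
well-behaved caches keep the two representations separate.

    from flask import Flask, request, jsonify, make_response

    app = Flask(__name__)

    # One URL, two representations. Clients that ignore Vary (as described
    # above) may cache the wrong one, which is why many APIs fall back to a
    # query parameter such as ?format=json instead.
    @app.route("/entries/1337")
    def entry():
        best = request.accept_mimetypes.best_match(["application/json", "text/html"])
        if best == "application/json":
            resp = make_response(jsonify(id=1337, title="Hello"))
        else:
            resp = make_response("<h1>Hello</h1>")
        resp.headers["Vary"] = "Accept"
        return resp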

~~~
alistair77
Good news is that this will be fixed in Chrome v25
(<http://code.google.com/p/chromium/issues/detail?id=94369>). Recent versions
of Firefox and IE already provide good support.

~~~
T-R
That's great to hear. Thanks for the heads up.

------
charlieok
This site (“The timeless repository”) looks interesting but the “Recent
Changes” page [1] does not appear to be something I can subscribe to in a feed
reader. When I try, I instead get the main feed [2] which shows no updates
since May of last year.

      [1] http://timelessrepo.com/changelog
      [2] http://feeds.feedburner.com/TimelessRepo

------
ako
HATEOAS would enable you to spider APIs and data. Seems like a useful concept.

~~~
gumbo
See my comment above about the challenges we still have.

