Getting hyper about hypermedia APIs (37signals.com)
61 points by thisduck on Dec 20, 2012 | hide | past | web | favorite | 71 comments



I wrote the spec for application/hal+json that got compared to WS-star; here's where I'm coming from:

JSON doesn’t have links. Establishing some basic conventions for that makes complete sense. Defining those conventions is called a spec. Giving a payload that follows conventions a name also makes sense. Establishing that is called registering a media type identifier.

It makes no sense to keep reinventing the linking wheel in every API. Pretending that a very minimal media type like hal+json is akin to WS-* is incredibly disingenuous and/or stupid.

Establishing a standard media type like hal+json with a bunch of conventions allows us to build generic tooling that can help with both serving and consuming payloads that contain links.
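For illustration, here's a minimal sketch of what those conventions buy. The payload fields are made up for this example; only the reserved "_links" key comes from HAL. A generic library can pull the links out of any conforming payload without knowing anything about the particular API:

```ruby
require "json"

# A hypothetical HAL+JSON payload: links live under the reserved "_links" key.
payload = JSON.parse(<<~HAL)
  {
    "_links": {
      "self":   { "href": "/orders/523" },
      "photos": { "href": "/orders/523/photos" }
    },
    "total": 30.0
  }
HAL

# Generic extraction: this works for any HAL document, not just this API.
links = payload.fetch("_links", {}).transform_values { |l| l["href"] }
puts links["photos"]  # prints "/orders/523/photos"
```

The same dozen lines of tooling serve every HAL API, which is the whole point of registering a shared media type.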

Being pragmatic is great, but misrepresenting a genuine effort to improve the status quo and improve the API ecosystem in a reasonable, non-complicated fashion as ‘hand-waving’ or 'unnecessary' is not very constructive.


Are there any examples of what, for example, the Flickr or Twitter API would look like under hal+json?

I'm struggling to see how application/hal+json would help me write a client to upload a photo. I imagine that with or without hal I'm ultimately going to POST the data to some endpoint. The question is, how do I figure out what endpoint to use?

Without hal+json I need to read the API docs to discover that "/photos/" is the correct endpoint to post to. But with hal, it seems I still need to read the docs to discover that within the hypermedia file, the key "photos" holds the value for the URI template. So either way, I need to read the docs, and either way, my client breaks if the meaning of the string "photos" changes.


You still need to know the meaning of each link relation, that is true. When hypermedia advocates gloss over this, or imply that clients will just magically know what to do with all the data and links coming from the API entry point, it annoys me. It's dishonest.

But what you gain from link relations is worth advocating for:

* Link relations can be standardized across APIs. This opens the door for clients to infer functionality when presented with links they recognize. There is a list of currently standardized link rels here: http://www.iana.org/assignments/link-relations/link-relation...

* Link relations provide an abstraction layer over the implementation, which may change. They're not unlike an API in and of themselves. As long as the 'photos' link relation does not change, it can point to whatever URL it wants, and that URL or URL structure can change over time without damage to the client.

* Using exclusively URLs to identify resources, as opposed to making the client memorize how to take any ID and map it into a URL, frees up an API to refer to resources outside its scope. That is big, enabling multiple APIs to connect as a proper ecosystem, with outgoing and incoming links. We can't do that today.
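To make the second point concrete, here is a hypothetical client that hardcodes only the rel name, never the URL (the entry-point shape and URLs are invented for the example):

```ruby
require "json"

# Hypothetical API entry point; only the "photos" rel name is contractual.
# The href behind it is free to change server-side at any time.
entry = JSON.parse('{ "_links": { "photos": { "href": "/v2/photo-uploads" } } }')

# The client looks up the rel, never constructs the URL itself.
photos_url = entry["_links"]["photos"]["href"]

# A POST would go to photos_url; if the server later moves photos to
# /media/photos, this client keeps working with no code change.
```

The docs still define what "photos" means, but the URL structure behind it is no longer part of the client's code.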


> As long as the 'photos' link relation does not change, it can point to whatever URL it wants, and that URL or URL structure can change over time without damage to the client.

So here's where my skepticism kicks in:

How often are you really expecting your URL structure to change, in practice? And when the URL structure does change, what percentage of the time does that change also coincide with a change in semantics that clients' logic will need to be updated to take into account?

In my experience the answer to those questions is 1) Very rarely. 2) Nearly always.


You're preaching for YAGNI, which is a philosophy that's more good than bad, for sure.

I am lamenting that doing hypermedia APIs properly isn't the default in Rails. I call out Rails because its massive appeal (to which I am no stranger) is doing It right by default, all over the place for the common case.

That obviates YAGNI, because it implies no extra cost incurred for doing it right. I'd rather enable It rather than assume I ain't gonna need It, all things being equal.

That said, I have worked in companies large enough that the interlocking pieces are pretty far removed, yet expected to interoperate fully. Having and using link relations could have saved us a couple of headaches. I am hopeful the Rails-api project might approach this with more.... maturity?


> That obviates YAGNI, because it implies no extra cost incurred for doing it right. I'd rather enable It rather than assume I ain't gonna need It, all things being equal.

All I can say to this is that we have vastly different approaches to engineering.

Features are debt. Adding features you don't need now and will probably never need in service of some kind of ideological purity ("doing it right") is the worst kind of deficit spending.

I've done my time in big enterprise vast complex interlocking highly abstracted development. For every 1 time prematurely optimizing for flexibility happened to pay off, it resulted in onerous development and maintenance overhead for no benefit 9 other times. Do the simplest thing that works. If you don't need it now, assume you ain't gonna need it and only build it if and when you have a pressing need for it. To me, these are the very core of sound, reliable engineering.


Your application's link relations should be URLs. This means you can expose the documentation for each rel at its URL. So every time you see a link in a hal+json document, you can follow the _rel's_ URL and fetch the documentation for it. That turns out to make the API very easily discoverable, and gives you a way to model and manage your API's documentation in a consistent way.

You can actually see what that looks like in practice here (click one of the book icons on one of the links on the left):

http://haltalk.herokuapp.com
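A sketch of that idea, assuming a haltalk-style layout (the rel URL http://example.org/rels/signup is made up for illustration): any rel whose name is itself a URL doubles as the address of its own documentation.

```ruby
require "json"

# Hypothetical entry-point document: custom rels are URLs, reserved
# rels like "self" are not.
doc = JSON.parse(<<~HAL)
  {
    "_links": {
      "http://example.org/rels/signup": { "href": "/signup" },
      "self": { "href": "/" }
    }
  }
HAL

# Collect documentation URLs: every rel name that is itself a URL.
doc_urls = doc["_links"].keys.select { |rel| rel.start_with?("http") }
puts doc_urls.first  # prints "http://example.org/rels/signup"
```

A client (or a human with a browser) can GET that rel URL to read what the link means and what the endpoint accepts.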


That's a really nice example of it in action. Thanks for putting in the effort to explore standardising links like this.

I really like the human discoverability feature that you've demoed, and gameche's point about being able to include resources outside the scope of an API is certainly interesting, but I'm still confused about the primary feature you championed, which is the ability to easily change URL structures.

You championed that as a core feature in your post: "make it painless to change your application’s URL structures further down the line".

dhh's retort knocked that feature on its head, didn't it? Once you settle on coding against a particular section of an API, you're relying on a URL. Your feature works with an API depth of 1, but beyond that what is the proposed approach? To traverse the API from "/" every time, making multiple calls until you discover the right link reference which contains the latest URL for that resource?


Do you mean like http://haltalk.herokuapp.com/rels/signup? But every site will have different requirements or mandatory fields on signup, so there's still no "discoverable" way to sign up to Amazon and eBay and Google without a human looking at http://amazon.com/rels/signup and http://ebay.com/rels/signup, etc., in which case you're back to still needing to read the documentation.

I guess it's a slight improvement to know exactly where the documentation for a particular link relation will live instead of having to search for it, but it doesn't seem to solve an especially difficult problem.


It is an example of a discoverable API with discoverable documentation, though, which is what you were asking for, I think... maybe not?


Ah, maybe I misunderstand what you're trying to do. I thought Hal was trying to make it possible to write clients that can buy from Amazon or eBay without knowing anything about the specifics of either. But actually what it's attempting is a sort of machine-assisted documentation system? (Since you still need a human to read the docs to find out how to signup, place orders, and so on.) I can see how that might work, but it doesn't seem like an important problem to solve.


That is one of the benefits that relates to documentation, which is the part of your comment I was responding to.

The larger goal of hal+json is to establish some conventions for linking that allow the development of generic tools for doing hypermedia. Not to create magical machine clients that can interact with any random API you point them at. Nobody made that argument so I don't know why DHH addressed it in his post. I'm guessing he ran out of things to be an angry-pragmatist about.


Well ... the spec does say "HAL is a bit like HTML for machines, in that it is generic and designed to drive many different types of application." To me (and perhaps DHH), that suggests that it's designed to be used to create clever clients. Perhaps you can clarify this point in a future edition of the draft?

http://stateless.co/hal_specification.html


I have no idea why you would conclude that from that sentence. HTML allows browsers, the same way that HAL allows generic libraries. I'm struggling to understand why this is controversial, there are already 16+ examples of generic HAL libraries across a bunch of different languages.


Hypermedia doesn't claim that you don't have to read the docs; that's a misconception. You do have to read and know how to parse the file formats.

The main advantage is that once your application supports Twitter, it also supports every other service that uses the same file formats, since they can be shared, re-used and standardized. Obviously, two services can't use the same URLs, so if you hardcode them, you're locked in.


I can imagine Twitter publishing an application/tweet+json media type or similar, and clients and servers either supporting this or not (and exchanging Accept headers and 415 Unsupported Media Type responses as they go), but I don't get what that has to do with HAL.

For example, the "comments" link relation in

http://developer.github.com/v3/pulls/

points to

https://api.github.com/octocat/Hello-World/issues/1/comments

but no-where in the document does it say what media types that endpoint will accept.


JSON and other non-natively-hyperlinked media (e.g. PNG, MP3, etc.) can use the Link header to add hyperlinks. It's HTTP that is hypermedia-aware, not the content type, so you don't have to go out and invent image/png+hal.

TL;DR: You don't need to create a new media type to make JSON hypermedia. All requests are hypermedia by virtue of the fact that they use HTTP as the transport protocol.
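For reference, a Link header looks like `</photos?page=2>; rel="next"`. A deliberately simplified parser for the common `<url>; rel="name"` form (real headers can carry more parameters, and a URL containing a comma would defeat this naive split):

```ruby
# Simplified RFC 5988-style Link header parsing: handles only the
# common `<url>; rel="name"` form, splitting naively on commas.
def parse_link_header(value)
  value.split(",").each_with_object({}) do |part, acc|
    url = part[/<([^>]*)>/, 1]
    rel = part[/rel="([^"]*)"/, 1]
    acc[rel] = url if url && rel
  end
end

header = '</photos?page=2>; rel="next", </photos?page=9>; rel="last"'
links = parse_link_header(header)
puts links["next"]  # prints "/photos?page=2"
```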


> It's HTTP that is hypermedia aware, not the content-type

1. That's highly debatable; the very content type from which REST was extracted is hypermedia-aware

2. The Link header only works for very shallow and broad linking. Making it contextual to the content type will have as high a complexity (if not higher) as codifying hypermedia in the content type itself. Consider a resource listing other resources, the equivalent of the HN front page: how are you going to match a given entry in the media, which may carry a number of inline metadata fields, to its Link? An anchor? Now you need to define an anchoring scheme in your content type. A link extension? Now you need to define that, plus the relation between the link extension and the content type. And of course you also need a custom relationship, which you'll also have to define, and you need your clients to correctly handle upwards of hundreds of Link headers in a single resource. Even with all that, you're also making the client more complex, because it will need an explicit link-resolution step. And then, are URI templates allowed in a Link header at all?


"JSON doesn’t have links."

JSON doesn't, but HTTP does. Why not use Link headers?


Link header parsers are far less ubiquitous than JSON parsers, and Link headers aren't very good for use cases like representing links that come from items in a collection. There are also issues relating to the maximum feasible size of HTTP headers and/or the header block as a whole.

Link headers are useful for adding links to media types that can't support links (e.g. images) and for layering protocols (e.g. Linked Cache Invalidation), but for normal APIs it makes your client's life much easier if you just put the links in the body.


Would this be a better fit as part of an OPTIONS request?

OPTIONS /orders

  {
    "GET": {
      "description": "All the orders."
      "links": {
        "self": { "href": "/orders" },
        "next": { "href": "/orders?page=2" },
        "find": { "href": "/orders{?id}", "templated": true },
        "admin": [
          { "href": "/admins/2", "title": "Fred" },
          { "href": "/admins/5", "title": "Kate" }
        ]
      }
    }
  }
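The templated "find" link above ("/orders{?id}") would be expanded client-side per RFC 6570. A deliberately minimal sketch handling only the `{?name}` query form (a real client would use a full URI Template library such as the addressable gem):

```ruby
require "uri"

# Minimal expansion of the `{?name,...}` query form of RFC 6570.
# This ignores every other template operator; it is a sketch, not a
# complete implementation.
def expand(template, params)
  template.gsub(/\{\?([^}]+)\}/) do
    keys = $1.split(",")
    pairs = keys.map { |k|
      "#{k}=#{URI.encode_www_form_component(params[k.to_sym])}" if params.key?(k.to_sym)
    }.compact
    pairs.empty? ? "" : "?#{pairs.join("&")}"
  end
end

puts expand("/orders{?id}", id: 42)  # prints "/orders?id=42"
```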



That's another option, yes.


DHH's API philosophy (send simple JSON serializations over the wire, mostly from the server to the browser) got him through the 2000's OK, but the longer he argues against hypermedia, the less relevant Rails becomes for API design.

These are not particularly good arguments he puts forth here. I believe they are in response to Mike Kelly (designer of the HAL+JSON hypermedia format) and his recent post: http://blog.stateless.co/post/38378679843/hypermedia-apis-on...

DHH's comment that URLs instead of IDs are a good idea is true (but he even gets that part wrong in how he implements it, leaving a '.json' extension on the URL). And the rest of the article is trying to hand-wave away the value of hypermedia (links -- all hypermedia means is links, at its core).

And comparing HAL+JSON, a blessedly lightweight standard, to WS-* is just dirty.

This post didn't convince me of much other than that DHH may run out of steam on this issue before too long.


These are not particularly good arguments he puts forth here

Would you care to explain why you think they're not particularly good? You're accusing DHH of being "hand-wavy", but I don't see any more depth in your comment.


I spell out some concrete advantages to hypermedia APIs elsewhere in these comments. http://news.ycombinator.com/item?id=4948652

But to specifically address DHH's arguments from the article:

* Enabling Discovery is a strawman argument; no one expects an API to use / as its only documentation (at least no one expects it of a good API).

* Standardizing API Clients is also a strawman; no one expects to have one generic client magically make sense of any API.

* Comparing HAL to WS-* in order to paint it as committee-driven standards bloat is not fair to HAL, which is an admirably tight and cogent specification.


Bias alert: I'm the newest Rails committer and one of the bigger proponents of Hypermedia APIs in the Ruby world.

Anyway, many, many other companies _do_ find hypermedia principles to be useful. See Balanced Payments, for example:

> Fun fact: our internal statistics show that client libraries that construct the uri receive roughly 2 orders of magnitude more 404 status codes from Balanced than clients which use the uri directly.

http://www.theatlantic.com/magazine/archive/1999/03/the-mark...

GitHub is starting to add hypermedia stuff to their responses:

http://developer.github.com/

Here's their main API guy talking about the advantages even this partial implementation has achieved: https://twitter.com/pengwynn/status/281849041707474944 https://twitter.com/pengwynn/status/281849329243787265

Twilio has always had elements of hypermedia in their API, and are considering moving further in that direction in the future. They had me speak at their conference for the last two years in a row about the topic specifically.

That said, if not doing hypermedia doesn't hurt you, don't change what you're doing! If you're interested in evolving your API over time while supporting old clients and behaviors in a simpler way, then consider checking it out. REST/hypermedia APIs are focused on long-term stability, evolvability, and massive scalability. If you don't need those things, you don't need hypermedia.

That said, I'd be happy to answer any questions on the topic, though I'm really busy today, so it might take a while to get back to you.


What are the best options to implement that for an API backend with Rails <-> a javascript library/framework for the interface?


The current Rails Party Line is "Jbuilder + you don't need integration" https://twitter.com/dhh/status/281802247480958976 https://twitter.com/dhh/status/281802391316201472

Some members of the core team and I have started a "Rails API" project specifically to explore these possibilities, and to extract common patterns from the apps we build: http://github.com/rails-api

This will encompass hypermedia and non-hypermedia APIs, as well as SPAs with a Rails backend. Most of us are focusing on Rails 4 at the moment, and will start working hard on this post-Rails 4 release, but about 23,000 people have installed the main gem, and I know several running it in production already.

I personally feel the best option right now is "Rails API + ActiveModel::Serializers + Ember.js". But we want to encourage a multiplicity of options. Not all apps are identical.


We also have developed custom Python tools and frameworks that help us with our API at https://balancedpayments.com/

If there's enough interest, I'd love to share some of our ideas with the Rails community and discuss some potential downfalls and successes we've had and why we needed to build these tools internally.

I'm sure Rails would benefit a lot from these.

Love love love the hypermedia work -- keep it up!


That would be fantastic. If you find the time, we have a discussion list over at https://groups.google.com/forum/?fromgroups#!forum/rails-api... , please feel free to share thoughts there!


His questions about discoverability seem to assume that people are suggesting not writing any documentation at all.

"The idea that you can write one client to access multiple different APIs" is a straw man.

The connection he's making between HAL and WS-* is ridiculous.


Hypermedia APIs mean that our APIs basically become web pages and every client should act like a browser. That's great if you want... a browser app.

If you want to just expose data to let people build interesting things with it, hyperlinks in your API are a bit silly. They might be nice if you want someone to write a browser for your API; I don't see where else it would be awesome. Maybe if you want a web crawler to crawl your API.

Also, they add a layer of chattiness to your app. If you have well-documented, unchanging URLs, people can write apps against those. If people have to go to one resource to find the URL of another resource, you're going to have a lot of API requests just to look up URLs. People will realize that is a waste of bandwidth, and will hardcode URLs anyway.

I rarely agree with DHH, but he's right.


So you could write an app against the API of some service, then a competitor of that service comes and implements the same API, and so you want to switch.

What should you do? Replace every URL? What if you want to support both (say, Twitter and Identi.ca)? Should you implement a map from codename ⇒ url?

Now what if you want to support every possible implementation, even if you - the developer - don't know about them? Why shouldn't the user be able to plug-in the entry URL and use your app?

People snickering about hypermedia APIs seem to me like people ridiculing the idea of having standard ports and protocols for devices, because they can't imagine a world where you don't have to install yet another crappy 200MB driver that is only available for Windows 95 to use a damned mouse.


I think that assumes some serious hand-waving on the part of both your app and the APIs. Assuming that just pointing at the root of another API means your app will magically behave properly with both APIs is making a lot of assumptions about how the APIs work. For example, say one API calls a delete call "delete", another calls it "remove", another calls it "obliterate"; and at the same time one API takes an id on that method as input, another takes an email address, another a username. In that scenario your app might be written to remove something based on an id, and when you switch APIs the new one expects you to obliterate based on username. Hypermedia doesn't save you there.

I'm all for pluggable protocols and standards, but Hypermedia links in your API don't magically get you there. They might help, but no more than a standard spec with standard URL patterns would.


Both services will still not have the same resources behind the same IDs, even though they would provide the same services behind the same paths. What you would have to do is use the same code to interact with two endpoints differing only by their domain. You could then use the resources delivered by both (statuses, in your example) in one big merged list, each status still having in its data a pointer (URL) to the profile info of the author on the right service.

[Edit: I didn't get your point right away, so I guess we agree. It's a bit late here; I'm off to bed!]


Does this ever actually happen? What new web service has unveiled itself and implemented a competitor's REST API?


According to hypermedia lore, you will be able to willy nilly change your URLs without needing to update any clients. But that’s based on the huge assumption that every API call is going to go through the front door every time and navigate to the page they need. That’s just not how things work.

This is missing forest for the trees. The point is not that you, the API implementor, will be able to change URLs willy nilly. It's that I, the client, can support a different API implementation by just changing the entry point URL, without changing the application.

So, if my application supports the API that Flickr implements, and tomorrow someone creates Blinkr, which implements the same API, the user could just copy-paste the entry point URL and use it, just like I use my RSS reader for all the blogs and news sites out there.

Of course, this depends on using standard document formats and a restricted, standard set of methods. Rings any bells?

Thinking that we can meaningfully derive all that by just telling people to GET / and then fumble around to discover all the options on their own just doesn’t gel with me.

Well, am I glad that straw-man was burned to the ground!


I'm the CTO @ https://balancedpayments.com.

This is exactly the case. We asked our customers to store the URIs in their databases and we ended up changing core resource locations and migrating older clients by just issuing 301s.

The extent was that we restructured entire URLs and changed endpoints. Hypermedia is baked into our clients from the start, and it makes API versioning and updates trivial.

We were able to quickly move customers to different endpoints and easily restructure our API without having to worry about backward compatibility.

Hypermedia APIs are a godsend. There's still lots of work to do and I'm happy to contribute to adoption.
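As a sketch of what that looks like client-side (URIs and response shapes are invented for the example; `http_get` stands in for a real HTTP call): store the URI verbatim, and when the server answers 301, remember the new Location.

```ruby
# Sketch: a client that stores resource URIs verbatim and, on a 301,
# updates its stored copy to the new Location before retrying.
def resolve(store, key, http_get)
  loop do
    status, location = http_get.call(store[key])
    return store[key] unless status == 301
    store[key] = location  # server moved the resource; remember its new home
  end
end

# Simulated server: the old endpoint permanently redirects to the new one.
responses = {
  "/v1/cards/abc" => [301, "/cards/CC123"],
  "/cards/CC123"  => [200, nil]
}
store = { card: "/v1/cards/abc" }
resolve(store, :card, ->(uri) { responses[uri] })
puts store[:card]  # prints "/cards/CC123"
```

A client that instead rebuilt "/v1/cards/#{id}" from an ID would keep hitting the dead path; one that stores and follows URIs migrates itself.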


> and tomorrow someone creates Blinkr, which implements the same API,

So the benefit of hypermedia APIs is predicated on something that almost never happens?


On the contrary; REST, of which hypermedia is a constraint, is an architectural style derived from the observation of a very successful implementation of just that.

It's called HTTP + HTML, and there are millions of services implementing it.


RSS is one prominent example of an API that many different sites implement.

And it's a media type! With Links!


This post seems hyper-defensive and hyper-reactionary. URLs are fundamental to HTTP web services - they're how the web works. Imagine if you visited a website in your browser and instead of link to the various pages, the page just printed IDs and you had to copy/paste them into the address bar after the domain name.


Sounds like a redefinition of HATEOAS. I don't use URLs as my IDs, but every resource has an "href" value, which is the URL; that makes it somewhat discoverable by tools such as https://github.com/jed/hyperspider.


"Hypermedia APIs" are just "Real REST." Arguing over terms isn't productive, so many RESTafarians just stopped: http://blog.steveklabnik.com/posts/2012-02-23-rest-is-over

HATEOAS sounds big and scary. "include some links" is much more understandable.


I use http://www.remobjects.com/ro/, which has something similar, called RODL files (http://wiki.remobjects.com/wiki/RODL_Files); this is what they look like: http://wiki.remobjects.com/wiki/RODL_Library_(Service_Builde.... They are like SOAP, but better. With them, I can parse the RODL file and build a Python client automatically in seconds, with a Python script that outputs a Python client for the RemObjects server, with all the classes, method calls and that stuff. RemObjects also automatically generates the clients for .NET, Delphi, JS, Obj-C and PHP (http://wiki.remobjects.com/wiki/Generating_Interface_Code_fr...). It's something I miss very much when I try to do a REST server without RO.

And the documentation is embedded in the RODL file, so it is possible to output the client with the docs inline, and have them show up in the IDE when a call is made through the Python client...

So, I think metadata about the service is VERY useful. But hypermedia is a poor attempt at that (IMHO).


I think these are good points for general public facing web APIs that are specifically designed for being mashed up or will be consumed by third-parties that will never talk directly to your company. I feel hypermedia lore has some really worthy ideas for APIs designed for business use where workflow and data integrity are under strict control but will change frequently and where several client apps will be made by the same company or companies working closely together.


Wow. Very surprising (in a bad way).

Basic Web architecture: 1. expose resources; 2. resources have names (URLs); 3. allow basic actions on these (GET, DELETE, PUT, POST) as needed; 4. include URLs (as links/forms) in representations.

OK, now build your API. Please do NOT start with the API and work back to basic web architecture. Servers should always provide URLs (URL templates are fine), NOT the client (by way of some snowflakey construction algorithm).


Hypermedia hype has always struck me as parallel to the "semantic web" nonsense from the late 90s.

"If we just use RDF triples to encode everything then machines can learn that apples are fruits and fruits are good for you, thus apples are good for you! Huzzah!"

Then, Microsoft invented SOAP.

"If we just have a WSDL that explains all of our API, then we can have automated methods to communicate between services! Programmers can just auto-generate code and life is great!"

Except it doesn't work that way. Most WSDL parsers auto-generate code that you then hand-edit and maintain over time. And, you, as the programmer have to know what methods to call. SOAP is just excessive ceremony transmitted over XML - another excessively rigid structure.

So then we got JSON REST APIs. Simple text structures. Reasonable defaults. Basic vernacular. Easily understood.

What concerns me about Hypermedia APIs is that folks are using the same sorts of grandiose, architecture astronaut-y stuff that we got out of the last two failed revolutions.

Hypermedia API proponents say that REST APIs are "highly coupled" (to the data model and versioning) and don't expose workflows.

Heck, that's why REST is so pervasive: REST APIs are super easy to write and to consume/interact with. Enterprisey folks are so focused on long-term extensibility and maintainability that they overlook the cognitive overhead and the inability to work with it on a daily basis. And the fact that you tend to move slower in development because you can't comprehend or follow what is going on. And that all abstractions leak, leading to libraries and tools (SavonRB) that don't quite work if an API doesn't follow the spec exactly (and they never, EVER follow the spec 100%).

My attitude is the opposite: Don't design your APIs as if you expect them to be the next 1,000-year reich. If your API stays small and nimble enough, your consumers will also be able to stay flexible enough to accommodate it. Yes, if you have a User REST API and you decide that your app no longer has users, well, then you have to get rid of it. But adding another layer or two of hierarchy and ceremony on top via hypermedia wouldn't solve that either! Fundamental universe changes ought to break shit!

From what I've seen (I have a subscription to designinghypermediaapis.com which is very well written) Hypermedia APIs are an over-complicated solution to a problem that has a reasonable solution. Yes, links are nice. Want to propose a "Standard" to handle links? Okay, although that's REST! If you have an Object and you want to DELETE it, I don't need to know the URL. I have a convention via REST that allows me to derive it from my data. If that's different, OK, use this thing. But Hypermedia APIs are a lot more than just links (state machines, workflows, media types, etc.)


> Hypermedia hype has always struck me as parallel to the "semantic web" nonsense from the late 90s.

That's funny, because DHH's proud ignorance about hypermedia strikes me as parallel to the usual proud ignorance about the semantic web.

Your first 2 paragraphs don't actually say anything, they just signal your allegiances.


You seem to be using the word REST as though you have never read any of the foundational material behind it. If you had then you'd have found that hypermedia links are an essential part of the REST philosophy and that formats like HAL are simply expressing REST concepts in JSON.


Good point; when I say "REST", I really mean: "DHH's version of REST as implemented in Ruby on Rails"


AKA "not REST in any way, shape or form" AKA "good old RPC over HTTP".


I'm one of the biggest proponents of hypermedia (been speaking about it for the last 18~ months at conferences) and I think the semantic web is bullshit.


Hi Steve,

Thanks for the reply. I'm basing my interpretation off of your http://designinghypermediaapis.com site and the associated listserv, so either I'm misinterpreting it, or we have a difference of opinion as to the complexity and utility of the proposed solution(s).

My fear is we're redefining WSDL in JSON's clothing. Yes, adding hrefs to the API isn't complex -- but the code to actually do something with it is, and that's where my spidey sense starts tingling (having just spent a few weeks in Savon/SOAP/WSDL/WSSE hell).


> My fear is we're redefining WSDL in JSON's clothing.

Absolutely not. There is a 'WSDL in REST' called WADL, and it's _terrible_. WSDL/WADL is like static typing: you have to declare everything up front, it's super rigid, and prone to breaking. Hypermedia is like dynamic typing: it all happens late bound, it's flexible, and open to change.

> the code to actually do something with it is, and that's where my spidey sense starts tingling

Here's one of the simplest examples I can show you: the 'hypermedia proxy pattern':

https://gist.github.com/3172911

Here's the core of the code: https://gist.github.com/3172911#file-client-rb-L27-L37

This says "parse the links out and save them. When I try to load a name, if it doesn't exist, go fetch it from the link pointed to by 'self.'

This allows you to change the client's behavior by modifying the server: by compressing or expanding responses, the client makes more or fewer requests without changing its code. Jon Moore demonstrates this with Java, Python, and XHTML here: https://vimeo.com/20781278 I demo'd this exact example at the end of my talk here: http://oredev.org/2012/sessions/designing-hypermedia-apis
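A rough Ruby sketch of that proxy pattern, as described above (the class name, document shape, and `fetcher` stand-in are mine, not from the linked gists):

```ruby
require "json"

# Rough sketch of the "hypermedia proxy" idea: keep the links, and lazily
# fetch a missing attribute from the 'self' link. `fetcher` stands in for
# a real HTTP GET returning a JSON body.
class Proxy
  def initialize(doc, fetcher)
    @links   = doc.fetch("_links", {})
    @attrs   = doc.reject { |k, _| k == "_links" }
    @fetcher = fetcher
  end

  def [](name)
    unless @attrs.key?(name)
      full = JSON.parse(@fetcher.call(@links["self"]["href"]))
      @attrs.merge!(full.reject { |k, _| k == "_links" })
    end
    @attrs[name]
  end
end

# Compressed response: only links. The expanded document lives behind 'self'.
stub = ->(_url) { '{ "_links": { "self": { "href": "/people/1" } }, "name": "Ada" }' }
person = Proxy.new({ "_links" => { "self" => { "href" => "/people/1" } } }, stub)
puts person["name"]  # prints "Ada"
```

If the server later inlines "name" in the first response, the same client simply makes one fewer request; nothing in its code changes.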

Did you see my Shoes Microblogging example for ALPS? https://gist.github.com/2187514

The meat of it is here: https://gist.github.com/2187514#file-microblog_client-rb-L33... This isn't the best-factored example, but I wanted to show a tiny client: this is a GUI program that can read from any ALPS-compliant server (like http://rstat.us/) and read/post new status updates.

------------------------------------------------------------

What I will say is this: I don't feel it's _harder_, but I do feel it's _different_. Just like if you try to write Java in Ruby, if you try to write hypermedia APIs like another style, it will feel hard and foreign. I think it's easier to implement a number of clients over time for a hypermedia service than it is to write a bunch of clients over time for a "Rails REST" one.


Thanks, I will dig into those examples!


Awesome. Please let me know, and post to the list if you have questions.

By the way, my student loan creditors thank you. :)

(I'm planning on a major new iteration of the book project in the new year that's much more linear, clear, and practice driven rather than theory.)


What concerns me about Hypermedia APIs is that folks are using the same sorts of grandiose, architecture astronaut-y stuff that we got out of the last two failed revolutions.

You owe it to yourself to read the HAL+JSON specification: http://stateless.co/hal_specification.html
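For reference, a minimal HAL document is just ordinary JSON plus a reserved `_links` property mapping link relations to hrefs (illustrative resource and relation names, not from the spec):

    {
      "_links": {
        "self":     { "href": "/orders/523" },
        "customer": { "href": "/customers/12" }
      },
      "total": 30.00,
      "currency": "USD"
    }

That's essentially the whole idea; `_embedded` for inlined resources is the only other reserved property.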

Anyone who considers a three-page spec like that astronautics probably has a low capacity for complex thought.


A hint: when trying to convince, it's not a great idea to passive-aggressively insult the other party. Since they're human too, they will probably just get pissed off and write you off as an asshole. Like what I'm doing right now.


Given the level of respect I have for the OP's exhaustively researched points, I should not have said a thing. Someone on the Internet is wrong. My mistake.


has a low capacity for complex thought

So rude. I'm sure you have a lot of good things to say (given your other posts), but resorting to insults and ad hominems is the sort of thing that comes from those who have a low capacity for intellectual discussion.

Besides, per the URL you provided, it's a 10-page printout, not 3.


Requiring a low capacity for complex thought sounds like a good API...


You may be surprised that the semantic web continued to develop after you dismissed it, the latest milestone being Google's Knowledge Graph.

http://news.ycombinator.com/item?id=3983179


Aren't links also proper global IDs, allowing us to address objects cross-service?


Yep. URIs are 'universal.'


I feel there is a lot of noise around IDs as URIs, yet I've seen nobody (I may not have looked deep enough) talk about their main benefit as I perceive it, which is putting resources into the perspective of a global namespace. Thus allowing a service to reference resources provided by another service. This, to me, looks like a practical revolution. We may at last have our contacts served by one entity, our statuses by another, our shared files by still another, and yet all of them could be handled as one distributed API.

The code gluing the whole together to produce, say, a distributed facebook-like app, would still have to know the specifics of each provider (before standards emerge), but it would effectively break the isolation and allow us to compose services much more easily.

Where am I missing the point?


> their main benefit as I perceive it, which is putting resources into the perspective of a global namespace.

Sometimes people do. I mentioned this problem in my talk at Øredev this year: http://oredev.org/2012/sessions/designing-hypermedia-apis

> Where am I missing the point?

Well, you're on HN: everyone wants to control everything. Startups don't 'win' by playing nicely with each other, they drive their competitors out of business.


One neat trick that I think is worth adding to the discussion: URL templates.

Returning URL templates as part of your API response can give you the benefits of having a clean way to access sub-resources, without the headaches and bloat of having to enumerate every possible desired sub-URL.

For example, in DocumentCloud, a document's canonical representation has a unique URL for the content of every page as plain text, and as an image, in several different rendered sizes. Instead of doing something silly like this:

    resources: {
      text: [
        "http://www.documentcloud.org/documents/1/pages/page-1.txt",
        "http://www.documentcloud.org/documents/1/pages/page-2.txt",
        "http://www.documentcloud.org/documents/1/pages/page-3.txt",
        ...
      ],
      largeImages: [
        "http://www.documentcloud.org/documents/1/images/page-1-large.png",
        ...
      ],
      thumbnailImages: [
        "http://www.documentcloud.org/documents/1/images/page-1-thumb.jpg",
        ...
      ]
    },
    ...
... where you might have 5,000 pages in a document, you can imagine how unacceptably large that response might become. Instead, a single URL template can do the work. (http://tools.ietf.org/html/rfc6570) The spec has a whole bunch of goodies in it, but we just need the most basic interpolation feature for this case (real example, you may have to scroll sideways to see the complete URL):

    "pages": 5058,
    "resources": {
      "page": {
        "image": "http://s3.documentcloud.org/documents/21939/pages/sotomayor-s-senate-questionnaire-p{page}-{size}.gif",
        "text": "http://www.documentcloud.org/documents/21939/pages/sotomayor-s-senate-questionnaire-p{page}.txt"
      },
      "pdf": "http://s3.documentcloud.org/documents/21939/sotomayor-s-senate-questionnaire.pdf",
      "published_url": "http://documents.nytimes.com/sotomayor-s-senate-questionnaire",
      "related_article": "http://www.nytimes.com/2009/06/05/us/politics/05court.html",
      "search": "http://www.documentcloud.org/documents/21939/search.json?q={query}",
      "text": "http://s3.documentcloud.org/documents/21939/sotomayor-s-senate-questionnaire.txt",
      "thumbnail": "http://s3.documentcloud.org/documents/21939/pages/sotomayor-s-senate-questionnaire-p1-thumbnail.gif"
    },
Basically, all of the simple URLs a Viewer might need to use in order to browse the document, search the text, and view related resources. In the past, when we've needed to change or expand the number of resources (adding HTTPS-only support, changing the URLs at which the page images are stored, or adding larger sizes of page images), it's been relatively easy to do, without breaking the viewers, or invalidating previously-valid JSON representations of the document. Here's the complete link to the above:

http://www.documentcloud.org/documents/21939-sotomayor-s-sen...
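For the curious, the basic "level 1" expansion from RFC 6570 that these templates rely on is almost trivial to sketch (my own toy version, skipping the percent-encoding and the fancier operators like `{?q}` that a real implementation handles):

```ruby
# Toy RFC 6570 level-1 expansion: simple {var} substitution only.
# Real implementations also percent-encode values and support
# operators like {?q}, {+path}, and {#fragment}.
def expand(template, vars)
  template.gsub(/\{(\w+)\}/) { vars.fetch($1.to_sym).to_s }
end

expand("http://www.documentcloud.org/documents/{id}/pages/page-{page}.txt",
       id: 21939, page: 3)
# => "http://www.documentcloud.org/documents/21939/pages/page-3.txt"
```

So the client does a tiny bit of string work, and the server never has to enumerate 5,000 page URLs.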


> One neat trick that I think is worth adding to the discussion: URL templates.

100% agree - URL templates enable succinct discoverability. Moving link generation to the client reduces both the computation time on the server, which no longer has to generate inline links, and the payload size of the response.

The RFC6570 spec is great, has numerous implementations[1] and a thorough test suite[2]. I am also surprised that this is not a standard part of every hypermedia implementation.

[1] http://code.google.com/p/uri-templates/wiki/Implementations

[2] https://github.com/uri-templates/uritemplate-test


Just a nit, but the syntax for query arguments would be: "http://www.documentcloud.org/documents/21939/search.json{?q}"



