Stop Writing REST API Clients (github.com)
153 points by ttezel 1697 days ago | 130 comments



Substitute "XML" for "JSON" and we've now come full circle.

The point about REST is that it is self-describing. And ideally it should use the same URIs as the version people clicking around in Firefox or Chrome see. The API is just the XML or JSON or whatever-is-flavour-of-the-week version of the HTML version.

(Or we could use embedded data—microformats, microdata, RDFa—and get rid of that distinction.)


Agreed. I came here to post something similar, but I was also going to mention working with SOAP [1] in addition to what you mentioned. What the OP is trying to do sounds very much like using SOAP and XML to me.

To paraphrase the OP, "Since so many APIs can be described in similar terms, why don't we have some sort of standard that one can look at to identify how to use the API instead of letting the API speak for itself?"

When you start going down this track, you're not only making things complicated on the client's end. On the server side, you now have to maintain two things for the API. First: the ruleset, ensuring it's 100% to spec lest a client fail. Second: the code generating the response in the first place.

I've built clients and servers for both RESTful and SOAPy APIs and I can say I would take REST any day.

[1] - http://en.wikipedia.org/wiki/SOAP



I don't think there is a well-known way to self-describe available and required parameters, or any other validation requirements. Or am I wrong?


I agree. The promise of REST APIs is that they will be self-describing, but for that benefit to be realized, we need general-purpose REST clients that can "discover" everything they need to know given just a root URI. Are there any such clients? And no, web browsers do not count.


A YC company has a hypermedia-ish API: https://www.balancedpayments.com/docs/api?language=bash

You can see links to the clients on the right.


You could use

OPTIONS /somePath?


I built out a proof of concept on top of my open source project:

https://github.com/caseysoftware/web2project-slim

But one of the things I did a little differently is that instead of writing the code and then the docs separately, I pass the validation information from the object itself, so the API layer doesn't have to know any of it in advance. It can pass that along to the end clients.

I'm not convinced this is the solution, but it mostly works for now, and I would love any & all feedback.

My next proof of concept will be to use JavaScript to retrieve the required fields and decorate a simple HTML form.


Thank you for being realistic.

REST means more than just GET, POST, PUT, DELETE.

Some may make the excuse that calling OPTIONS /path isn't straightforward, but I have no clue how you could get more obvious than that.


The problem with OPTIONS is it doesn't describe anything about the data the resource returns.

I really wish there was a DESCRIBE verb that would return a structure of what is expected to be received/sent.

Then microformats could spring up around datatypes returned by DESCRIBE. This is of course very XML.

Edit: This would also allow for automatic discovery of new APIs.
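
To make that concrete: today an OPTIONS response typically only tells you which methods are allowed, e.g.

  OPTIONS /users/42 HTTP/1.1

  HTTP/1.1 200 OK
  Allow: GET, PUT, DELETE, OPTIONS

A hypothetical DESCRIBE, by contrast, could return something like this (entirely made up, just to illustrate the idea):

  DESCRIBE /users/42 HTTP/1.1

  HTTP/1.1 200 OK
  Content-Type: application/json

  {
    "GET": { "returns": { "id": "integer", "name": "string", "role": "string" } },
    "PUT": { "accepts": { "name": "string", "role": "string" }, "required": ["name"] }
  }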


That's what resources that have form-style affordances offer. See, for example, Collection+JSON.


Yes my argument is simply that we publish API specs in a machine-readable format to avoid wasting time implementing clients repeatedly. WSDL and WADL had good intentions at heart, but XML is ugly. JSON is nice since it's human-readable and light. Why not publish JSON versions openly for REST APIs, reducing implementation cost for clients?
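
For example, a single resource entry in such a spec could look something like this (just a sketch of the shape I mean, not necessarily the exact format):

  {
    "name": "example-api",
    "base": "https://api.example.com/1.1",
    "resources": [
      {
        "name": "updateStatus",
        "method": "POST",
        "path": "/statuses/update",
        "params": { "status": { "type": "string", "required": true } }
      }
    ]
  }

A generic client could read that once and expose updateStatus() without anyone hand-writing a wrapper.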


XML is human readable. It was meant to be (this is why it's text instead of some binary format). It's just really, really verbose, which is what this JSON spec is going to end up being once it's dealt with all imaginable edge cases.


> ... my argument is simply that we publish API specs in a machine-readable format

...

> ... XML is ugly. JSON is nice since it's human-readable and light.

???


Yes, there are some dots you need to connect between his two statements. I'm assuming that he meant the following: those machine-readable specs also need to be read by humans at some point, just like code, and, since JSON is more readable, he proposes using it instead of XML.


XML is more human readable than JSON, to my eyes.


That may be so, but most opinions I hear in discussions about XML vs JSON say that JSON is more readable, probably because it's less redundant and similar to data structures found in some programming languages.


Really? Let's write a shopping list on a piece of paper in JSON and XML. Which one would be more similar to the way we write down lists in real life?


> Really? Let's write a shopping list on a piece of paper in JSON and XML. Which one would be more similar to the way we write down lists in real life?

XML:

    <list>
      <items>
        <item>Milk</item>
        <item>Eggs</item>
        <item>Bread</item>
        <item>Butter</item>
      </items>
    </list>

JSON:

    {
      "list": {
        "items": [
          "Milk",
          "Eggs",
          "Bread",
          "Butter"
        ]
      }
    }

I don't know that either one is particularly close to the way I'd write down a list in real life, to be honest. This is a pretty trivial sample, and neither is especially hard to read/parse by a human. But the JSON still looks closer to line-noise to me. :-)


This seems similar in goal to the SPORE project (https://github.com/SPORE). Have you seen that and/or considered merging efforts?


Yes, this sounded an awful lot like SOAP/WSDL.

Hmmmm. Maybe JSON should just take pages from their book instead of reinventing the wheel?


Sigh. This is optimizing for the wrong problem.

Stop creating REST APIs that are only level 1 or 2 (see http://martinfowler.com/articles/richardsonMaturityModel.htm... ).

Start writing HATEOAS systems where the client is coupled to the semantics rather than the syntax.

Machine-parseable interface descriptions might get rid of some boilerplate, but they don't make for a more robust client-server relationship.


Yes, URIs in the response sound amazingly cool, but they won't change anything.

The inline URLs of the web work because the consumers are humans who can deal with changes (more than just trivial URL changes, like added or removed features) and know to click on this button or that button.

Software isn't that flexible, so it will be just as coupled as it is today--you're just moving the coupling around.

So this idea of a "robust client-server relationship" is a pipe dream IMO.


With some conventions about how the API is structured, it's possible to have loose coupling between the client and server. This video demonstrates how it is possible to make changes to the structure of the API that the client detects automatically:

http://oredev.org/2010/sessions/hypermedia-apis

And he's released a Java library for creating such a client:

https://github.com/cimlabs/hypermedia-client-java


I've built these kinds of things before: http://words.steveklabnik.com/i-invented-hypermedia-apis-by-...


Not saying that system doesn't sound cool, but my point about hypermedia APIs is that when you serve back:

<link href="http://example.com/low.rss rel="podcast_url" />

You've moved the coupling from "look at URL xyz" to "look for rel podcast_url". Okay, yes, you can now change the URL, that's cool, but I assert that's relatively trivial. You can't truly add/remove new functionality (or break existing contracts, like "look for rel=podcast_url") that some omnipotent client would suddenly start taking advantage of.

IMO this omnipotent realization/utilization of new/changed features is what hypermedia advocates get all excited about, without realizing that humans are really the only ones that can deal with that level of (true) decoupling.


The important part is that this is all documented in the media type. Of course, computers aren't able to just figure out what's up, that's why it's agreed upon beforehand.

> You can't truly add/remove new functionality

You can absolutely add new functionality, because rule #1 of clients that work this way is 'ignore things you don't understand.' Removal will obviously break things, so it's important to handle this in the appropriate way.

I guess ultimately my point is that these kinds of APIs are significantly different, and come with a very different set of constraints/goals/implementation details. It's like you're saying "well, I don't have a full list of function calls I can make!" because you're used to SOAP where there's a WSDL and 'RESTful' APIs don't have description documents. Of course! It's a different architecture!


As one of my friends on twitter said, "Yes, people won't put their URIs in responses, so let's just put them in this other, totally unrelated file."


true story. we could do worse than moving toward something like: http://stateless.co/hal_specification.html.


Can you give an example of an acceptable implementation of a HATEOAS REST API (wow, that's a lot of letters) with an associated client that actually uses it?

My experience has been that you can't communicate much through HATEOAS that's actually beneficial to a human programmer writing a client. Sure, you can add all the hypermedia links you want in your API responses, but how does that make writing a client easier? Wouldn't it just be helpful to crawlers?

Not trying to put down the idea - I want to believe, but I just haven't seen any obvious examples using it in the real world yet.


> (wow, that's a lot of letters)

We're all calling these "hypermedia APIs" these days.

> with an associated client that actually uses it?

I have written a toy client here: https://gist.github.com/steveklabnik/2187514

You can run it against this site, written in node: http://alps-microblog.herokuapp.com/

Or this site, written in Rails: https://rstat.us/

It should work with both just fine (I haven't tried it in a long while). They both use the ALPS microblogging spec. Yay, generic clients!

As for people who have 'more real' ones: GitHub, Twilio (partially, more in the future), Balanced Payments (YC W11, iirc), Comcast (though that's internal :(), Netflix has aspects, and FoxyCart.

This year will be the year of examples.


Interesting. I didn't realize that there were specs like ALPS for defining how to implement this kind of thing. Isn't it still a little difficult to do this for APIs that don't neatly fit into some kind of generic profile (e.g. microblogging)?


There's a few different ways to go about things, but every app has some kind of domain. If you're the first, then you can define how it works. :)

Mike Amundsen's terribly named but amazingly well written 'Building Hypermedia APIs in HTML5 and Node' is a really thorough examination of this topic.


We could give a name to the language we use to define such files. It's a language that defines Web services, so perhaps Web Services Description Language? :-) http://en.wikipedia.org/wiki/Web_Services_Description_Langua...

Flippancy aside, maybe there's a need for a next generation of this that skips all the XML headaches after all.


There are no new problems, only new engineers.

I have been puzzling over this API discovery issue on my current project (building out a reporting API).

I'm starting with self documentation for developers built in, not this (admittedly admirable) goal of a machine generated API mapping layer. I think the main issue is, you're trying to generate a generic interface to something that isn't, itself, generic.

How many versions of "RESTful" have you encountered?

Building a generic interface to non-generic interfaces is the domain of software engineers. Until we have machines building both sides of this equation, there will always be a need for human intervention.


The problem wasn't XML itself, but the complexity they tried to encode in XML. The same sins could be committed in JSON.

JSON has just been lucky that its user base hasn't been the same enterprise architects that ruined Java.



It's still XML, but that does look closer. I thought the other commenter was joking that this existed. :-)


I don't see how writing JSON REST API descriptions isn't practically the same as writing REST API clients anyway: they're still clients, just written declaratively rather than procedurally.

If the point is "stop writing procedural REST API clients and write them declaratively instead" then that advice is by no means restricted to REST API clients.

If the point is "hey, I noticed that REST API clients are another thing that we can now comfortably write declaratively" then OK.


Am I the only one who doesn't like receiving direct orders from article titles?


Writing using authoritative language is very common, and widely considered a best practice.

I highly recommend you simply accept it as what it is: the way some people communicate, especially online. It's not worth your time/attention/care to think about this.


Yes, you're right

But it still looks like they pulled some rules out of nowhere and are enforcing them.


Writing 'authoritatively' is used as a pop-culture substitute for reasoned argument. I find that it's a good litmus test for determining who I should ignore.


I was just thinking that. I hate article titles that are phrased as a direct command when it's merely a blog post that wants to change an entire body of thought that's well founded.


I take all direct orders as suggestions. Sometimes this gets me into trouble, the same way it does my 2-year-old. Most of the time, though, it's the way to go.


No, I got quite mad when I read the whole promotional article for his new node.js program without seeing a single argument against REST APIs.


That's because the article does not argue against REST APIs. It argues against coding wrappers for them and instead proposes a solution that allows you to define API specific behavior in JSON and use one library to rule them all.


Would it help if he prefixed it with "In this article, I argue for why I believe that you should"?


"A way to stop reinventing REST API clients."


Yes. Assuming guru voice just makes you sound like a blowhard.


Personally, I just assume that implicitly.


Yes! :)


I was thinking about this yesterday, but it seems like HN likes that kind of thing. Lots of frontpage articles are direct orders from blogs with who knows what credibility.


You aren't, but it's a mistaken thing to dislike: there's no value to rephrasing it as "I believe you should [do X]" or "I am arguing that you should [do X]" because that's necessarily always true anyway, i.e. no article can ever be anything other than what the author believes and argues for. So 'softening' the language would be inefficient - it would use more words while adding nothing of substance.


I read it as (I'd like to show you something that may help you) "Stop writing REST API clients". Imagine trying to visually scan HN article titles having to filter through useless pleasantries like that.


It irritates me too.


Try reading that with the voice of Morgan Freeman. Now THAT is something you would likely do :)


This article is not at all about REST; it is about RPC and its shortcomings. These shortcomings were fixed by REST, and the author of the article rediscovers these fixes.

A key feature of a REST API is that it is self-describing, in the sense that it has a single entry point from which a generic client can automatically discover available resources and actions. The APIs in the given examples are not like this; they require custom clients that are strongly coupled with an application. These are not REST APIs but RPC APIs.


> A key feature of a REST API is that it is self-describing

How practical is that, in reality?

I know I've added the whole HATEOAS thing to my API and I am not sure it does anything besides make my IDs longer. Customers seem to hard-code the API entry points anyway. Everyone of course says "Oh yeah, this is cool," but when it comes to doing it given performance constraints, they don't want to start generating tens of extra GET requests on startup to rediscover the API.

Now I can say "well, not my problem," and that is what I say, except that looking back I just don't see the practice matching the supposed theoretical advantages of the RESTful interface.

Another issue I see happening is the return of a message-bus-like interface, brought about by WebSockets and the server-push optimizations they make possible. I think REST and WebSocket channel/message-bus architectures will have a battle at some point, and one will end up dominating.

Just like AMQP is becoming a standard for message brokers, I think at some point that will be extended to the browser, kind of like RabbitMQ has the Web-STOMP plugin. I can see the future hotness being just that: message-broker architecture all the way to the web client, and everyone will laugh at the RESTful hype just like we are laughing at SOAP now.


It of course depends on the problem; no architecture is good for everything. REST has its cost: it usually requires more work and careful design than going the RPC way, but for some kinds of problems it can be really beneficial when done right.

Imagine you have a company that does custom mobile apps for external customers. A very popular topic, a lot of companies today want to have their own apps in addition to standard web pages.

Most of these apps are very similar (you can browse some content, purchase some service, etc.). Your company can go the RPC way and create a custom interface and a custom client for each customer, with a lot of duplication and substantial maintenance cost. Or you can make a larger upfront investment, create a generic REST client, and then only design resources and representations for each new customer.


In what way is REST self-describable? REST is not a standard, but rather a widely accepted convention.

I have seen a few RESTful servers self-describe (i.e. GET /api/v1/ returns ['/users', '/posts']). However, you can't claim this is a key feature of REST clients, because there is no agreed-upon standard for services to describe themselves. HTTP is not sufficient.

If there were a real standard here, we would not have this problem. Like it or not, everybody is calling their custom API a 'REST' API nowadays, and without a real standard, nobody is wrong.


REST architecture was introduced by this paper https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm

'Semantics are a by-product of the act of assigning resource identifiers and populating those resources with representations. At no time whatsoever do the server or client software need to know or understand the meaning of a URI -- they merely act as a conduit through which the creator of a resource (a human naming authority) can associate representations with the semantics identified by the URI. In other words, there are no resources on the server; just mechanisms that supply answers across an abstract interface defined by resources. It may seem odd, but this is the essence of what makes the Web work across so many different implementations.'


All that these sorts of descriptions produce is a low-level API. That can be useful, but what's really needed are high-level APIs that provide meaningful semantics:

    my $me = Facebook->new( username => 'autarch' );
    $me->post_status("I'm on Hacker News writing this comment");

    my $friend = Facebook->new( username => 'imaginary' );
    $me->post_on_wall( $friend, "Hey buddy, I am on Hacker News writing this comment" );


Exactly. And this is what clients can provide over generic, one-size-fits-all solutions: fluent, idiomatic, and terse access to APIs.


Here's another way to phrase it. A good API is based on the data and actions related to a specific domain of knowledge. Generic solutions produce APIs that are oriented around the communications protocol (REST).

On the client side, I don't really care (too much) if something is a POST or PUT, I want to send a message or update a repository's metadata or share a photo.


Swagger (http://developers.helloreverb.com/swagger/) has been doing what's proposed here. By describing your APIs with swagger-spec you get beautifully documented APIs, code generation of client libraries, and some more stuff: https://github.com/wordnik/swagger-core/wiki/Downloads


While this has some benefits, it seems like a slippery slope leading back to SOAP.


Exactly. But to be precise, REST would be the equivalent of SOAP and the format described in this post would be equivalent to WSDL [1].

And then, we'll need a central location to store all of these API descriptors and UDDI [2] will be back with a vengeance.

[1] http://www.w3.org/TR/wsdl

[2] http://en.wikipedia.org/wiki/Universal_Description_Discovery...


When I saw this post, the first thing that popped into my head was SOAP as well, except that SOAP is not human-readable. Then I suddenly remembered that it wasn't SOAP itself that included the schema; rather, SOAP providers would generate WSDL alongside a SOAP endpoint.


From someone who is inexperienced with SOAP: what is the problem with it?


SOAP is an abstraction that is fundamentally unhelpful (at least in my opinion). Without SOAP, to call a web service, as a developer, I need to read the documentation for the web service to understand what data needs to go into what places. With SOAP, as a developer, I need to read the documentation for the web service to understand what data needs to go into what places; but the documentation is much harder to read, and complex types are usually much harder to use (there's a tendency to model things as lots of complex XML, which is often hard to construct, instead of just a specified transform to a string).


Hmmm... JSON documents (jsonSpec) describing RESTful services are the new WSDL. Feels like 2002 all over again.

I think too many people consume REST APIs in different manners, utilizing different data in unique relations. This is the beauty of it.


Whoops, just saw the other comments on the whole "feels like WSDL all over again" point.


Well, to be fair, JSON is at least a lot less verbose than WSDL. So there's that.


WSDL describes a pact between a client and a server so they can't screw it up. Comparing WSDL to WADL would make sense. JSON just describes objects in a compact notation. Comparing JSON to XML would make sense when XML is only used to describe objects and nothing more. WSDL, WADL and HTML are all XML derivatives. WSDL and WADL, or HTML for that matter, could never exist if you tried to use JSON to describe them, since that's not what JSON was designed for.


WSDL is an object notation for objects that describe what a web service should do. This could just as well be done in JSON. It would be marginally less verbose, too :-)

Oh, and it's perfectly feasible to translate a well formed HTML document into some sort of JSON object. After all, HTML is just a set of tags with values, no?


Hypermedia APIs are an approach to solving this problem. They essentially do what you did, and add some other benefits, like de-coupling URIs.


Really? It seems to me like hypermedia APIs just move the problem from the wire protocol to the application protocol.

Hypermedia says "oh yeah, here's some markup, look there are URIs in it". For a human user, we're like "cool, I'll try and click these, see what they do".

But software is going to want "um... okay, how do I parse this markup, and how do I generate the submissions you want? And, okay, you can change URIs, but please don't change anything else about that operation, or I will break completely. That's right, we're not really decoupled."

So, even with hypermedia APIs, AFAIK you're still going to want some marshaling to/from host language. ...and so you're back to having a spec, and coupling, you've just moved it around.

(Rant on coupling: people seem to think it's always bad and you can make it go away. Reality: you can't make it go away, and sometimes just accepting it directly is a whole lot simpler than deceiving ourselves about its existence by over-applying abstractions.)


You're forgetting form-style affordances. For example, if you were using HTML as your media type, the equivalent of the first example would be

  <form method="get" action="/me">
    <input type="text" name="access_token" />
    <input type="text" name="fields" />
  </form>

> AFAIK you're still going to want some marshaling to/from host language.

Actually, you explicitly _don't_ want this. That's what hypermedia APIs are trying to remove.


We're replying to each other in separate comment threads :-), but this input form is still coupling--you can't add/remove/change the access_tokens/fields without clients breaking. Humans can handle that. Software can't.

That's why I think hypermedia makes all sorts of sense for explaining why the www is awesome--if things change, no big deal, users will adapt. But IMO it falls flat as some new paradigm for building client/server systems.

> Actually, you explicitly _don't_ want this. That's what hypermedia APIs are trying to remove.

Hm? I am skeptical...any links/explanations?


Haha! Ping-pong!

> But IMO it falls flat as some new paradigm for building client/server systems.

I know of one company, which you've absolutely heard of, that has a 30-person team building a hypermedia API. They haven't talked about it publicly because they see it as a strategic advantage.

This year will be the year of code and examples; last time I was in San Francisco, 5 different startups came up to me and told me that hypermedia is solving their problems. Expect to see more of this going on soon.

> Hm? I am skeptical...any links/explanations?

Mike Amundsen's "Building Hypermedia APIs in HTML5 and Node" has a pretty big section on this, and my book has a section entitled "APIs should expose workflows."

I previously commented more about it here: http://news.ycombinator.com/item?id=4951477

RPC APIs expose functions, SOAP/"REST" APIs expose objects, hypermedia APIs expose processes.


We've been experimenting with this at my office. We use yaml descriptions of all of our routes to generate test coverage. We plan to later generate our documentation and client libraries with the same docs.

Document-generated server behavior is something we're researching as well, to possibly represent business logic. We're hoping that patterns can be found and condensed into notations, like Regular Expressions do for string-parsing. We'll post about anything that we come up with.

One of my side projects is an Ajax library which allows JavaScript to respond to requests (LinkJS [1]). It has a helper object called the Navigator, which is like a miniature Web Agent. It retains a context, and uses the Link header from the response to populate the navigator with relations. It works out like this:

  var nav = Link.navigator('http://mysite.com');
  nav.collection('users').item('pfraze').getJson()
    .then(function(res) {
      console.log(res.body); // => { name:'pfraze', role:'admin' ...}
    })
    .except(function(err) {
      console.log(err.message); // => 404: not found
      console.log(err.response.status); // => 404
    });

The advantage is that the Link header is a relatively condensed representation of the resource graph. As a result, it's not a problem to send it and process it on every request. You do add some latency, but the internet is only getting faster, and caching can be used. Meanwhile, the server can rewire links without interrupting its clients.

[1] https://github.com/pfraze/linkjs
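
For reference, the Link response header it reads follows the standard web-linking format, roughly like this (illustrative relations and paths; the exact attributes LinkJS keys on may differ):

  Link: </users>; rel="collection"; title="users", </users/pfraze>; rel="item"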


> This doesn't scale.

Most overused phrase right there.


Isn't this what http://json-schema.org aims to provide? Or am I missing something? It's a solid spec.


JSON Schema is pretty good at describing data coming through as JSON. But describing (REST) APIs requires more: for example, a standard way to describe API endpoints with parameters and response types, errors, related models, default/allowable values, etc. This is what the OP was referring to, and this is what Swagger is trying to do. The Swagger spec is here, with some more details on what's required in addition to JSON Schema to document APIs: https://github.com/wordnik/swagger-core/wiki/API-Declaration Incidentally, model/data specifications in the Swagger spec do map closely to JSON Schema.


Personally I'm partial to http://jschema.org/ (also see http://jschema.org/rpc.html) because it seems simpler.

My naive impression is that JSON-Schema is trying to be just like XML Schema, but in JSON. Which doesn't seem like a good thing.


But neither of these are geared towards a RESTful implementation, unless I'm missing something.


While I agree with the title, I am not so sure about the solution presented. HATEOAS, whether encoded in JSON or XML, can only give you so much information about the semantics of links.

IMHO, what's needed is better support for "generic" REST in programming languages and/or libraries. Objective-Smalltalk (http://objective.st) features "Polymorphic Identifiers", which make it possible to both interact directly and abstract over web interfaces.

To reference a URL, just write it down:

   news := http://news.ycombinator.com

Arguments can be added without string processing:

   #!/usr/local/bin/stsh
   #-zip:zipCode
   ref:http://zip.elevenbasetwo.com getWithArgs zip:zipCode

This is a file downloader, similar to curl:

   #!/usr/local/bin/stsh
   #-<void>scurl:<ref>urlref
   fileComponent := urlref url path lastPathComponent.
   file:{fileComponent} := urlref value.

For abstraction, you can build your own schemes, either directly in code or by composing/modifying other schemes. For example, if I want to look up RFCs, I can define the rfc scheme:

   scheme:rfc := ref:http://datatracker.ietf.org/doc asScheme

Or I can compose schemes so the rfc scheme looks in a bunch of different places (memory, local directory, several http/ftp servers).


Stop writing self-documenting API specs and settle on a hypermedia spec, like Hal or Siren!


I actually have no idea why the hal people aren't writing hal specifications for existing services right now--it's not quite as nice as "native" support, but the format supports it, and it would be useful to see what a hal version of the Twitter API looked like, for example.


hal is currently going through the ID -> RFC process; once it's an RFC, I'm sure you'll see stuff like this.


This is essentially trying to solve the same problem as Swagger. Swagger is "a specification and complete framework implementation for describing, producing, consuming, and visualizing RESTful web services" [1]. Check out the spec on GitHub here [2].

[1]: http://developers.helloreverb.com/swagger/

[2]: https://github.com/wordnik/swagger-core/wiki


There is a lot of talk about this idea being SOAP-like, but I disagree.

SOAP was insane and its counterpart, WSDL (which is really the part that is most comparable to this idea), was even more insane.

But, the basic premise was not bad. It was the execution which sucked by trying to account for every situation, adding namespaces, etc. And if you ever worked with language libs designed to interface with SOAP/WSDL, it would make you slap a bunny.

With this idea, however, adding an optional JSON-based descriptor language could be helpful. Key would be to keep it simple, allowing the bare minimum number of data types, with one simple array structure for collections. Allow object definitions with an infinite number of nesting levels, and that would be it. I wouldn't even get into optional vs. required stuff, validation, etc. That stuff should stay at the application level. Why stuff it into the interface layer?

From there, it would be easy to develop libraries to generate clients in any language for any API just by feeding them the JSON descriptor. Or (as I think the author intended) just use one universal client that any app can use. For languages that aren't strongly typed anyway, the latter would be fine.
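
To sketch what I mean by feeding a descriptor to a universal client, here's roughly what that could look like in Node (the descriptor shape, file name and field names are invented for illustration, and it assumes the popular `request` package for HTTP):

  var request = require('request');

  // A descriptor might look like:
  // { "base": "https://api.example.com",
  //   "resources": [ { "name": "getUser", "method": "GET", "path": "/users/:id" } ] }
  function buildClient(spec) {
    var client = {};
    spec.resources.forEach(function (res) {
      client[res.name] = function (params, cb) {
        var qs = {};
        Object.keys(params).forEach(function (k) { qs[k] = params[k]; });
        // fill :id-style placeholders from params; whatever is left becomes query args
        var path = res.path.replace(/:(\w+)/g, function (_, key) {
          var val = qs[key];
          delete qs[key];
          return encodeURIComponent(val);
        });
        // (a fuller version would send a JSON body for POST/PUT instead of query args)
        request({ url: spec.base + path, method: res.method, qs: qs, json: true },
                function (err, resp, body) { cb(err, body); });
      };
    });
    return client;
  }

  var api = buildClient(require('./example-api.json'));
  api.getUser({ id: 42 }, function (err, user) { /* ... */ });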

Someone mentioned that it would require the server-side devs to keep the descriptor in sync with the code. No biggie for apps that already offer client libs in different languages and must keep them up to date anyway. Not to mention there's already some doc that needs to be kept in sync (REST is not typically self-documenting in reality).

In any event it wouldn't be required. What would be the harm in creating a standard for those apps that choose to use it?


Can't we just stop writing clients for specific REST APIs, period, and instead build one good API client that can easily be extended and adapted to any API?

That's the path I've been taking in all projects lately, because frankly I don't want to deal with a bunch of different API clients for Twitter, Facebook, Soundcloud, Instagram or whatever sites I integrate with. All those different syntaxes and all that duplicated code don't help me. I want all of their individual differences hidden away from me and my colleagues behind a single well-known syntax, which I can extend myself to expose the resources and methods that I need, like a method for posting a photo for the APIs that support that, and so on.

My advice today would be: pick a good HTTP client, preferably with good OAuth support, then build your own extendable API client on top of that and integrate all the different API resources you need with that client whenever you need them.
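
A bare-bones sketch of that approach in Node (the base URL, endpoints and env var are made up, and it assumes the `request` package with its bearer-token auth option):

  var request = require('request');

  // Minimal base client: it only knows the base URL, the auth token and HTTP.
  function ApiClient(baseUrl, token) {
    this.baseUrl = baseUrl;
    this.token = token;
  }
  ApiClient.prototype.get = function (path, qs, cb) {
    request({ url: this.baseUrl + path, qs: qs, auth: { bearer: this.token }, json: true },
            function (err, resp, body) { cb(err, body); });
  };

  // Each integration extends it with only the resources it actually needs.
  var photos = new ApiClient('https://api.example-photos.com/v1', process.env.PHOTOS_TOKEN);
  photos.recentByUser = function (userId, cb) {
    this.get('/users/' + encodeURIComponent(userId) + '/media/recent', {}, cb);
  };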


A client template library for REST APIs needs to be Turing complete, otherwise it will be too weak to handle complex services or complex client-side tasks, such as caching, dependency relations, data that spans multiple services, etc. Even if you make the template library simple, you'll need to wrap it with a layer of complex code. All you've done is add another layer to your code. You could re-design your code to fit a manageable design, but the server side of a REST API is usually designed by others whose priority is the code on the server side. So, by definition of the very task, REST API client code has to be a complex soup where the client's considerations mix with the server's.


While I understand the allure of writing specs and using a unified library, good API clients are more terse, as they're written to take advantage of the programming language you're using, and understand the particulars and idioms of the API they're written against. For example, the client might pick up the correct environment variables for your API credentials, or reduce certain repetitive code.

Another example: I wrote a client that returns a queue message. Attached to that message are some helper methods for deleting, releasing, and touching the message. It makes your code cleaner and easier to understand.
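
Roughly the pattern I mean, sketched out (simplified; the transport, paths and field names are placeholders, not the actual service):

  // The client hands back a message object that carries its own actions.
  function QueueClient(transport) {
    this.transport = transport; // anything with a post(path, body, cb) method
  }
  QueueClient.prototype.reserve = function (queueName, cb) {
    var self = this;
    this.transport.post('/queues/' + queueName + '/reservations', {}, function (err, raw) {
      if (err) return cb(err);
      cb(null, {
        body: raw.body,
        del: function (done) { self.transport.post('/messages/' + raw.id + '/delete', {}, done); },
        release: function (done) { self.transport.post('/messages/' + raw.id + '/release', {}, done); },
        touch: function (done) { self.transport.post('/messages/' + raw.id + '/touch', {}, done); }
      });
    });
  };

Calling code then just does msg.del(callback) or msg.release(callback) instead of rebuilding URLs everywhere.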


I think Google is already doing this using a form of JSON-Schema: https://developers.google.com/discovery/


Given that the prevailing sentiment is that REST is self-describing and the API description doc is unnecessary, are there any examples of client generators that work directly off of a REST service?

I'm curious how this works in practice. What about authorization and parts of the API that are only available to certain users: does the client generator need to be authenticated? Are there standards for describing the metadata associated with URLs (validation, optional parameters, etc.)?


The article fails to mention the existing JSON Schema and JSON Hyper-Schema standards that he is advocating: http://json-schema.org/

Both are currently used by Google's public APIs to auto-generate clients. Ruby/Python clients load the schema docs at runtime and do method_missing magic, Java/.NET clients generate static typed libraries periodically.


How about we stop calling things REST that are not? You'll be surprised how many things that will solve.


There is for example RestTemplate from Spring for Java/Android apps, which solves this problem.

http://static.springsource.org/spring-android/docs/1.0.x/ref...


I've gone a step further and converted the JSON definition into JavaScript method calls: https://github.com/olegp/restwrapper

RPC FTW ;)


Can someone please explain to me why it doesn't scale? Honest question.


The author means, "It is a lot of work", I think.


So we solve the problem of too many REST clients via another REST client? I agree with ttezel, and unio does look pretty cool, but I got a chuckle out of this :)


I wrote a quick port to PHP that is installable via Composer. https://github.com/andruu/unio-php


Yay, SOAP in JSON!


Yeah, let's do CORBA once again. No thanks.


Oh Shit! Common Object Request Broker Architecture!! I'd thought all fellow humans from that era had been killed and buried :)


Well, it wouldn't be so bad if it were universal enough, but we'd probably have to wrap it in some sort of inter-ORB protocol for the internet or something...


Yes it would. Because making it universal enough makes it incredibly verbose and nasty to work with. JSON became popular because it was simpler than Web Services which became popular because it was simpler than CORBA which became popular because it was simpler than just talking over raw sockets using some sort of protocol ... oh, wait.

Actually not having a "universal" spec helps: it forces every provider to give some thought to how to make his API as lean as possible. Hopefully.


What is CORBA?



We need WSDL for REST to solve this problem. Then we could code generate clients.


I like the idea. This is useful to generate stubs.


so, define web services as a json file. JSON-SOAP?


Ha. So, now we're back to SOAP and WSDL's.


There it is: Zombie SOAP again!


skeptical programmer is skeptical this could ever work.



I want to give you an idea of how bad things are with REST API clients. This is the Maven POM for a Java web project that uses the Google APIs for Profile, Drive and OAuth2. It's insane:

  <project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.google</groupId>
    <artifactId>google</artifactId>
    <version>5</version>
  </parent>
  <groupId>com.google.api.client</groupId>
  <artifactId>google-plus-java-webapp-starter</artifactId>
  <packaging>war</packaging>
  <version>1.0.0</version>
  <name>google-plus-java-webapp-starter</name>
  <description>
    Web application example for the Google+ platform using JSON and OAuth 2
  </description>

  <url>https://code.google.com/p/google-plus-java-starter</url>

  <issueManagement>
    <system>code.google.com</system>
    <url>https://code.google.com/p/google-plus-java-starter/issues</url>
  </issueManagement>

  <inceptionYear>2011</inceptionYear>

  <prerequisites>
    <maven>2.0.9</maven>
  </prerequisites>

  <scm>
    <connection>
      scm:hg:https://hg.codespot.com/p/google-plus-java-starter/
    </connection>
    <developerConnection>
      scm:hg:https://hg.codespot.com/p/google-plus-java-starter/
    </developerConnection>
    <url>
      https://code.google.com/p/google-plus-java-starter/source/browse/
    </url>
  </scm>

  <developers>
    <developer>
      <id>jennymurphy</id>
      <name>Jennifer Murphy</name>
      <organization>Google</organization>
      <organizationUrl>http://www.google.com</organizationUrl>
      <roles>
        <role>owner</role>
        <role>developer</role>
      </roles>
      <timezone>-8</timezone>
    </developer>
  </developers>

  <repositories>
    <!--
        The repository for service specific Google client libraries. See
        http://code.google.com/p/google-api-java-client/wiki/APIs#Maven_support
        for more information
    -->
    <repository>
      <id>google-api-services</id>
      <url>http://mavenrepo.google-api-java-client.googlecode.com/hg</url>
    </repository>
    <repository>
      <id>google-api-services-drive</id>
      <url>http://google-api-client-libraries.appspot.com/mavenrepo</url>
    </repository>    
  </repositories>

  <build>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.3.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.mortbay.jetty</groupId>
        <artifactId>maven-jetty-plugin</artifactId>
        <configuration>
          <contextPath>/</contextPath>
          <systemProperties>
            <systemProperty>
              <name>configurationPath</name>
              <value>./src/main/resources/config.properties</value>
            </systemProperty>
          </systemProperties>
        </configuration>
      </plugin>
    </plugins>
    <finalName>${project.artifactId}-${project.version}</finalName>
  </build>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <netbeans.hint.deploy.server>gfv3ee6</netbeans.hint.deploy.server>
    <project.http.version>1.13.1-beta</project.http.version>
    <project.oauth.version>1.13.1-beta</project.oauth.version>    
    <webapi.version>6.0</webapi.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>13.0.1</version>
    </dependency>

 <dependency>
      <groupId>com.google.apis</groupId>
      <artifactId>google-api-services-drive</artifactId>
      <version>v2-rev53-1.13.2-beta</version>
    </dependency>
  <dependency>
      <!-- A generated library for Google+ APIs. Visit here for more info:
          http://code.google.com/p/google-api-java-client/wiki/APIs#Google+_API
      -->
      <groupId>com.google.apis</groupId>
      <artifactId>google-api-services-plus</artifactId>
      <version>v1-rev22-1.8.0-beta</version>
    </dependency>  


   <dependency>
      <groupId>com.google.api-client</groupId>
      <artifactId>google-api-client</artifactId>
      <version>1.13.2-beta</version>
    </dependency>

  <dependency>
      <groupId>com.google.api-client</groupId>
      <artifactId>google-api-client-servlet</artifactId>
      <version>1.13.1-beta</version>
    </dependency>   

    <dependency>


      <!-- The Google OAuth Java client. Visit here for more  info:
          http://code.google.com/p/google-oauth-java-client/
      -->

      <groupId>com.google.oauth-client</groupId>
      <artifactId>google-oauth-client</artifactId>
      <version>1.13.1-beta</version>
    </dependency>    

    <dependency>
    	<groupId>com.google.oauth-client</groupId>
    	<artifactId>google-oauth-client-servlet</artifactId>
    	<version>1.13.1-beta</version>
    </dependency>


    <dependency>
    	<groupId>com.google.http-client</groupId>
    	<artifactId>google-http-client-gson</artifactId>
    	<version>1.13.1-beta</version>
    </dependency>

    <dependency>
    	<groupId>com.google.code.gson</groupId>
    	<artifactId>gson</artifactId>
    	<version>2.1</version>
    </dependency>

  <dependency>
     <groupId>com.google.http-client</groupId>
     <artifactId>google-http-client</artifactId>
     <version>1.13.1-beta</version>
   </dependency>

  <!-- Third party dependencies -->
    <dependency>
      <groupId>com.google.http-client</groupId>
      <artifactId>google-http-client-jackson2</artifactId>
      <version>1.13.1-beta</version>
    </dependency>

  <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-web-api</artifactId>
        <version>${webapi.version}</version>
        <scope>provided</scope>
    </dependency>    
    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-lang3</artifactId>
      <version>3.0.1</version>
    </dependency>

  <dependency>
	<groupId>commons-logging</groupId>
	<artifactId>commons-logging</artifactId>
	<version>1.1.1</version>
  </dependency>

 <dependency>
	<groupId>org.apache.httpcomponents</groupId>
	<artifactId>httpclient</artifactId>
	<version>4.0.3</version>
  </dependency>

  <dependency>
	<groupId>org.apache.httpcomponents</groupId>
	<artifactId>httpcore</artifactId>
	<version>4.0.1</version>
  </dependency>


  <dependency>
	<groupId>org.codehaus.jackson</groupId>
	<artifactId>jackson-core-asl</artifactId>
	<version>1.9.4</version>
  </dependency>

  <dependency>
	<groupId>javax.jdo</groupId>
	<artifactId>jdo2-api</artifactId>
	<version>2.3-eb</version>
  </dependency>

  <dependency>
	<groupId>com.google.code.findbugs</groupId>
	<artifactId>jsr305</artifactId>
	<version>1.3.9</version>
  </dependency>

  <dependency>
	<groupId>com.google.protobuf</groupId>
	<artifactId>protobuf-java</artifactId>
	<version>2.2.0</version>
  </dependency>

  <dependency>
	<groupId>javax.transaction</groupId>
	<artifactId>jta</artifactId>
	<version>1.1</version>
  </dependency>

  <dependency>
	<groupId>xpp3</groupId>
	<artifactId>xpp3</artifactId>
	<version>1.1.4c</version>
  </dependency>

  </dependencies>

  </project>


What would you remove? How could it be simpler? I don't mean how could it be less verbose, but how could you describe those various project attributes in a way that wouldn't lead you to another markup language with the same data?


I am getting rid of all the Google API jars. Google has a well-documented REST API for OAuth 2.0 and Drive; I am refactoring my code to use only the standard Commons HTTP client jars along with Java JSON jars (e.g. Jackson) and invoke the REST API directly.


Is there a public repo where I could follow your progress? My interest is piqued.


Yes, see the github repo here: https://github.com/ttezel/unio


That's a great idea. I haven't set up my GitHub profile yet -- this may very well be the project to start with.


Can you stick that in a gist[1] so as to not mess up the HN comments?

[1] https://gist.github.com/


I realize now that cutting and pasting from my code into a comment was a bad idea; I wish I could edit this post, but I am unable to (no edit link). Lesson learnt for next time.


Argh, if this is what REST looks like, we should just go back to SOAP.


Nice idea!



