Substitute "XML" for "JSON" and we've now come full circle.
The point about REST is that it is self-describing. And ideally it should use the same URIs as the version people see when clicking around in Firefox or Chrome. The API is just the XML or JSON or whatever-is-flavour-of-the-week version of the HTML version.
(Or we could use embedded data—microformats, microdata, RDFa—and get rid of that distinction.)
Agreed. I came here to post something similar, but I was also going to mention working with SOAP[1] in addition to what you mentioned. What the OP is trying to do sounds very much like SOAP and XML to me.
To paraphrase the OP, "Since so many APIs can be described in similar terms, why don't we have some sort of standard that one can look at to identify how to use the API instead of letting the API speak for itself?"
When you start going down this track, you're not only making things complicated on the client's end. On the server side, you now have to maintain two things for the API: first, the ruleset, ensuring it's 100% to spec lest a client fail; second, the code generating the response in the first place.
I've built clients and servers for both RESTful and SOAPy APIs and I can say I would take REST any day.
I agree. The promise of REST APIs is that they will be self-describing, but for that benefit to be realized, we need general-purpose REST clients that can "discover" everything they need to know given just a root URI. Are there any such clients? And no, web browsers do not count.
But one of the things I did a little differently is that instead of writing the code and then the docs separately, I pass the validation information from the object itself, so the API layer doesn't have to know any of it in advance. It can pass that along to the end clients.
I'm not convinced this is the solution, but it mostly works for now, and I would love any and all feedback.
My next proof of concept will be to use JavaScript to retrieve the required fields and decorate a simple HTML form.
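A minimal sketch of that decoration step (shown in Python for brevity rather than JavaScript; the metadata shape, field names, and rules are all made up for illustration, not a real spec):

```python
import json

# Hypothetical validation metadata, as the API layer might pass it along
# to clients. The structure is invented for this example.
fields_json = """
{
  "email":    {"type": "text", "required": true},
  "nickname": {"type": "text", "required": false}
}
"""

def render_form(fields):
    """Turn the server's field metadata into HTML inputs."""
    inputs = []
    for name, rules in fields.items():
        required = " required" if rules.get("required") else ""
        inputs.append('<input type="%s" name="%s"%s>' % (rules["type"], name, required))
    return "\n".join(inputs)

print(render_form(json.loads(fields_json)))
```

The point is that the form mirrors whatever the server declares, so adding a field to the object's validation rules would show up in the UI without touching client code.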
Yes my argument is simply that we publish API specs in a machine-readable format to avoid wasting time implementing clients repeatedly. WSDL and WADL had good intentions at heart, but XML is ugly. JSON is nice since it's human-readable and light. Why not publish JSON versions openly for REST APIs, reducing implementation cost for clients?
XML is human readable. It was meant to be (this is why it's text instead of some binary format). It's just really, really verbose, which is what this JSON spec is going to end up being once it's dealt with all imaginable edge cases.
Yes, there are some dots you need to connect between his two statements. I'm assuming that he meant the following: those machine-readable specs also need to be read by humans at some point, just like code, and, since JSON is more readable, he proposes using it instead of XML.
That may be so, but most opinions I hear in discussions about XML vs JSON say that JSON is more readable, probably because it's less redundant and similar to data structures found in some programming languages.
I don't know that either one is particularly close to the way I'd write down a list in real life, to be honest. This is a pretty trivial sample, and neither is especially hard to read/parse by a human. But the JSON still looks closer to line-noise to me. :-)
Yes, URIs in the response sounds amazingly cool, but it won't change anything.
The inline URLs of the web work because the consumers are humans who can deal with changes (more than just trivial URL changes, like added, removed features) and now click on this button or that button.
Software isn't that flexible, so it will be just as coupled as it is today--you're just moving the coupling around.
So this idea of a "robust client-server relationship" is a pipe dream IMO.
With some conventions about how the API is structured, it's possible to have loose coupling between client and server. This video demonstrates how it is possible to make changes in the structure of the API that the client detects automatically:
You've moved the coupling from "look at URL xyz" to "look for rel podcast_url". Okay, yes, you can now change the URL, that's cool, but I assert that's relatively trivial. You can't truly add/remove new functionality (or break existing contracts, like "look for rel=podcast_url") that some omnipotent client would suddenly start taking advantage of.
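As a sketch of the distinction between "look at URL xyz" and "look for rel podcast_url" (the response shape and rel names here are hypothetical):

```python
# A hardcoded client couples to the URL itself:
#   GET https://api.example.com/media/ep42.mp3
# A link-relation client couples to the rel name instead.
response = {
    "title": "Episode 42",
    "links": [
        {"rel": "self", "href": "/episodes/42"},
        {"rel": "podcast_url", "href": "/media/ep42.mp3"},
    ],
}

def href_for(doc, rel):
    """Find the link with the given relation; the rel name is now the contract."""
    for link in doc["links"]:
        if link["rel"] == rel:
            return link["href"]
    raise KeyError(rel)

print(href_for(response, "podcast_url"))  # the server may move this URL freely
```

The server can rename the URL at will, but the rel string `podcast_url` is exactly the kind of contract the parent comment says you still can't break.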
IMO this omnipotent realization/utilization of new/changed features is what hypermedia advocates get all excited about, without realizing that humans are really the only ones that can deal with that level of (true) decoupling.
The important part is that this is all documented in the media type. Of course, computers aren't able to just figure out what's up, that's why it's agreed upon beforehand.
> You can't truly add/remove new functionality
You can absolutely add new functionality, because rule #1 of clients that work this way is 'ignore things you don't understand.' Removal will obviously break things, so it's important to handle this in the appropriate way.
I guess ultimately my point is that these kinds of APIs are significantly different, and come with a very different set of constraints/goals/implementation details. It's like you're saying "well, I don't have a full list of function calls I can make!" because you're used to SOAP where there's a WSDL and 'RESTful' APIs don't have description documents. Of course! It's a different architecture!
Can you give an example of an acceptable implementation of a HATEOAS REST API (wow, that's a lot of letters) with an associated client that actually uses it?
My experience has been that you can't communicate much through HATEOAS that's actually beneficial to a human programmer writing a client. Sure, you can add all the hypermedia links you want in your API responses, but how does that make writing a client easier? Wouldn't it just be helpful to crawlers?
Not trying to put down the idea - I want to believe, but I just haven't seen any obvious examples using it in the real world yet.
It should work with both just fine (I haven't tried it in a long while). They both use the ALPS microblogging spec. Yay, generic clients!
As for people who have 'more real' ones: GitHub, Twilio (partially, more in the future), Balanced Payments (YC W11, IIRC), Comcast (though that's internal :(), and Netflix has aspects, as does FoxyCart.
Interesting. I didn't realize that there were specs like ALPS for defining how to implement this kind of thing. Isn't it still a little difficult to do this for APIs that don't neatly fit into some kind of generic profile (e.g. microblogging)?
I have been puzzling over this API discovery issue on my current project (building out a reporting API).
I'm starting with self-documentation for developers built in, not this (admittedly admirable) goal of a machine-generated API mapping layer. I think the main issue is, you're trying to generate a generic interface to something that isn't, itself, generic.
How many versions of "RESTful" have you encountered?
Building a generic interface to non-generic interfaces is the domain of software engineers. Until we have machines building both sides of this equation, there will always be a need for human intervention.
I don't see how writing JSON REST API descriptions isn't practically the same as writing REST API clients anyway: they're still clients, just written declaratively rather than procedurally.
If the point is "stop writing procedural REST API clients and write them declaratively instead" then that advice is by no means restricted to REST API clients.
If the point is "hey, I noticed that REST API clients are another thing that we can now comfortably write declaratively" then OK.
Writing using authoritative language is very common, and widely considered a best practice.
I highly recommend you simply accept it as what it is: the way some people communicate, especially online. It's not worth your time/attention/care to think about this.
Writing 'authoritatively' is used as a pop-culture substitute for reasoned argument. I find that it's a good litmus test for determining who I should ignore.
I was just thinking that. I hate article titles phrased as a command when it's merely a blog post trying to overturn an entire body of well-founded thought.
I take all direct orders as suggestions. Sometimes this gets me into trouble, the same way it does my 2-year-old. Most of the time, though, it's the way to go.
That's because the article does not argue against REST APIs. It argues against coding wrappers for them and instead proposes a solution that allows you to define API specific behavior in JSON and use one library to rule them all.
I was thinking about this yesterday, but it seems like HN likes that kind of thing. Lots of front-page articles are direct orders from blogs with who knows what credibility.
You aren't, but it's a mistaken thing to dislike: there's no value to rephrasing it as "I believe you should [do X]" or "I am arguing that you should [do X]" because that's necessarily always true anyway, i.e. no article can ever be anything other than what the author believes and argues for. So 'softening' the language would be inefficient - it would use more words while adding nothing of substance.
I read it as (I'd like to show you something that may help you) "Stop writing REST API clients". Imagine trying to visually scan HN article titles having to filter through useless pleasantries like that.
This article is not at all about REST; it is about RPC and its shortcomings. These shortcomings were fixed by REST, and the author of the article rediscovers these fixes.
A key feature of a REST API is that it is self-describing, in the sense that it has a single entry point from which a generic client can automatically discover available resources and actions. The APIs in the given examples are not like this; they require custom clients that are strongly coupled with an application. These are not REST APIs but RPC APIs.
> A key feature of a REST API is that it is self-describing
How practical is that, in reality?
I know I've added the whole HATEOAS thing to my API, and I am not sure it does more than make my IDs longer. Customers seem to hard-code the API entry points anyway. Everyone of course says "Oh yeah, this is cool," but when it comes to doing it under performance constraints, they don't want to start generating tens of extra GET requests on startup to rediscover the API.
Now I can say "well, not my problem," and that is what I say, except that, looking back, I just don't see the practice matching the supposed theoretical advantages of the RESTful interface.
Another issue I see coming is the return of message-bus-like interfaces, brought about by WebSockets and the server-push optimizations they make possible. I think REST and WebSocket channel/message-bus architectures will have a battle at some point, and one will end up dominating.
Just like AMQP is becoming a standard for message brokers, I think at some point that will be extended to the browser. Kind of like RabbitMQ has the web-STOMP plugins. I can see future hotness being just that -- message broker architecture all the way to the web client and everyone will laugh at REST-ful hype just like we are laughing at SOAP now.
It of course depends on the problem; no architecture is good for everything. REST has its cost. It usually requires more work and more careful design than going the RPC way, but for some kinds of problems it can be really beneficial when done right.
Imagine you have a company that does custom mobile apps for external customers. A very popular topic, a lot of companies today want to have their own apps in addition to standard web pages.
Most of these apps are very similar (you can browse some content, purchase some service, etc.). Your company can go the RPC way and create a custom interface and a custom client for each customer, with a lot of duplication and substantial maintenance cost. Or you can make a larger upfront investment, create a generic REST client, and then only design resources and representations for each new customer.
In what way is REST self-describing? REST is not a standard, but rather a widely accepted convention.
I have seen a few RESTful servers self-describe (i.e. GET /api/v1/ returns ['/users', '/posts']). However, you can't claim this is a key feature of REST clients, because there is no agreed-upon standard for services describing themselves. HTTP is not sufficient.
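A toy version of that kind of ad-hoc self-description (there's no standard for this, which is exactly the point; the paths and shapes here are illustrative):

```python
# Hypothetical discovery: GET /api/v1/ returns a list of resource paths.
def fake_get(path):
    # Stand-in for an HTTP GET against the service.
    routes = {"/api/v1/": ["/users", "/posts"]}
    return routes[path]

def discover(root):
    """Map resource names to full paths, given only the entry point."""
    return {p.strip("/"): root.rstrip("/") + p for p in fake_get(root)}

print(discover("/api/v1/"))
# e.g. {'users': '/api/v1/users', 'posts': '/api/v1/posts'}
```

Every server that does this invents its own listing format, so a client written against one service's "discovery" response won't work against another's.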
If there were a real standard here, we would not have this problem. Like it or not, everybody is calling their custom API a 'REST' API nowadays, and without a real standard, nobody is wrong.
'Semantics are a by-product of the act of assigning resource identifiers and populating those resources with representations. At no time whatsoever do the server or client software need to know or understand the meaning of a URI -- they merely act as a conduit through which the creator of a resource (a human naming authority) can associate representations with the semantics identified by the URI. In other words, there are no resources on the server; just mechanisms that supply answers across an abstract interface defined by resources. It may seem odd, but this is the essence of what makes the Web work across so many different implementations.'
All that these sorts of descriptions produce is a low-level API. That can be useful, but what's really needed are high-level APIs that provide meaningful semantics:
my $me = Facebook->new( username => 'autarch' );
$me->post_status("I'm on Hacker News writing this comment");
my $friend = Facebook->new( username => 'imaginary' );
$me->post_on_wall( $friend, "Hey buddy, I am on Hacker News writing this comment" );
Here's another way to phrase it. A good API is based on the data and actions related to a specific domain of knowledge. Generic solutions produce APIs that are oriented around the communications protocol (REST).
On the client side, I don't really care (too much) if something is a POST or PUT, I want to send a message or update a repository's metadata or share a photo.
When I saw this post, the first thing to pop into my head was SOAP as well. Except that SOAP is not human-readable. Then I suddenly remembered that it wasn't SOAP itself that included the schema; rather, SOAP providers would generate a WSDL alongside a SOAP endpoint.
SOAP is an abstraction that is fundamentally unhelpful (at least in my opinion). Without SOAP, to call a webservice, as a developer, I need to read the documentation for the webservice to understand what data needs to go into what places. With SOAP, as a developer, I still need to read the documentation to understand what data needs to go into what places; but the documentation is much harder to read, and complex types are usually much harder to use (there's a tendency to model things as lots of complex XML, which is often hard to construct, instead of just a specified transform to a string).
WSDL describes a pact between a client and a server so they can't screw it up. Comparing WSDL to WADL would make sense. JSON just describes objects in compact notation. Comparing JSON to XML would make sense when XML is only used to describe objects and nothing more. WSDL, WADL, and HTML are all XML derivatives. WSDL and WADL (or HTML, for that matter) could never exist if you tried to express them in JSON, since that's not what JSON was designed for.
WSDL is an object notation for objects that describe what a web service should do. This could just as well be done in JSON. It would be marginally less verbose, too :-)
Oh, and it's perfectly feasible to translate a well-formed HTML document into some sort of JSON object. After all, HTML is just a set of tags with values, no?
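A rough sketch of that claim, using Python's standard-library parser to mirror a well-formed fragment as nested objects (deliberately naive: it ignores void elements, entities, and everything else that makes real HTML messier than "tags with values"):

```python
from html.parser import HTMLParser

class ToTree(HTMLParser):
    """Build a JSON-ish nested structure from well-formed HTML."""
    def __init__(self):
        super().__init__()
        self.root = {"tag": "root", "children": []}
        self.stack = [self.root]

    def handle_starttag(self, tag, attrs):
        node = {"tag": tag, "attrs": dict(attrs), "children": []}
        self.stack[-1]["children"].append(node)
        self.stack.append(node)

    def handle_endtag(self, tag):
        self.stack.pop()

    def handle_data(self, data):
        if data.strip():
            self.stack[-1]["children"].append(data.strip())

p = ToTree()
p.feed("<ul><li>one</li><li>two</li></ul>")
print(p.root["children"][0]["children"])
```

So the translation is mechanical for clean markup; the trouble starts with the parts of HTML that aren't a clean tree.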
Really? It seems to me that hypermedia APIs just move the problem from the wire protocol to the application protocol.
Hypermedia says "oh yeah, here's some markup, look there are URIs in it". For a human user, we're like "cool, I'll try and click these, see what they do".
But software is going to want "um... okay, how do I parse this markup, and how do I generate the submissions you want? And, okay, you can change URIs, but please don't change anything else about that operation, or I will break completely. That's right, we're not really decoupled."
So, even with hypermedia APIs, AFAIK you're still going to want some marshaling to/from host language. ...and so you're back to having a spec, and coupling, you've just moved it around.
(Rant on coupling: people seem to think it's always bad and that you can make it go away. Reality: you can't make it go away, and sometimes just accepting it directly is a whole lot simpler than deceiving ourselves about its existence by over-applying abstractions.)
We're replying to each other in separate comment threads :-), but this input form is still coupling--you can't add/remove/change the access_tokens/fields without clients breaking. Humans can handle that. Software can't.
That's why I think hypermedia makes all sorts of sense for explaining why the WWW is awesome: things change, and users adapt. But IMO it falls flat as some new paradigm for building client/server systems.
> Actually, you explicitly _don't_ want this. That's what hypermedia APIs are trying to remove.
> But IMO it falls flat as some new paradigm for building client/server systems.
I know of one company which you've absolutely heard of who has a 30-person team building a hypermedia API. They haven't talked about it publicly because they see it as a strategic advantage.
This year will be the year of code and examples; the last time I was in San Francisco, 5 different startups came up to me and told me that hypermedia is solving their problems. Expect to see more of this going on soon.
> Hm? I am skeptical...any links/explanations?
Mike Amundsen's "Building Hypermedia APIs in HTML5 and Node" has a pretty big section on this, and my book has a section entitled "APIs should expose workflows."
We've been experimenting with this at my office. We use YAML descriptions of all of our routes to generate test coverage. We plan to later generate our documentation and client libraries from the same docs.
Document-generated server behavior is something we're researching as well, possibly to represent business logic. We're hoping that patterns can be found and condensed into notations, the way regular expressions did for string parsing. We'll post about anything we come up with.
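A toy sketch of the route-description idea, with the YAML shown as an inline structure to stay self-contained (the spec format, field names, and fake app are all hypothetical):

```python
# Route descriptions that would normally live in a YAML file.
routes = [
    {"method": "GET", "path": "/users/1", "expect_status": 200},
    {"method": "GET", "path": "/nope",    "expect_status": 404},
]

def fake_app(method, path):
    # Stand-in for the real server under test.
    return 200 if path == "/users/1" else 404

def run_route_tests(app, specs):
    """Return the route specs whose responses don't match the description."""
    return [r for r in specs if app(r["method"], r["path"]) != r["expect_status"]]

print(run_route_tests(fake_app, routes))  # [] means every route matched its spec
```

The appeal is that the same declarative list could later feed a doc generator or a client generator, which is presumably what the commenter means by reusing "the same docs."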
One of my side projects is an Ajax library which allows JavaScript to respond to requests (LinkJS [1]). It has a helper object called the Navigator, which is like a miniature Web agent. It retains a context and uses the Link header from the response to populate the navigator with relations. It works out like this:
var nav = Link.navigator('http://mysite.com');
nav.collection('users').item('pfraze').getJson()
    .then(function(res) {
        console.log(res.body); // => { name:'pfraze', role:'admin' ...}
    })
    .except(function(err) {
        console.log(err.message); // => 404: not found
        console.log(err.response.status); // => 404
    });
The advantage is that the Link header is a relatively condensed representation of the resource graph. As a result, it's not a problem to send and process it on every request. You do add latency, but the internet is only getting faster, and caching can be used. Meanwhile, the server can rewire links without interrupting its clients.
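For illustration, a rel-to-URL map can be recovered from a Link header along these lines (the header value is a made-up example in RFC 5988 style; LinkJS's actual parsing may differ):

```python
import re

# A hypothetical Link response header advertising two relations.
link_header = '</users>; rel="collection users", </users/pfraze>; rel="item"'

def parse_link_header(value):
    """Build a rel -> href map so the client navigates by relation, not URL."""
    rels = {}
    for url, rel in re.findall(r'<([^>]+)>;\s*rel="([^"]+)"', value):
        for r in rel.split():  # a rel attribute may hold several space-separated names
            rels[r] = url
    return rels

print(parse_link_header(link_header))
```

With a map like this, the navigator can resolve `collection('users')` to whatever URL the server currently advertises.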
JSON Schema is pretty good at describing data coming through JSON. But describing (REST) APIs requires more: for example, a standard way to describe API endpoints with parameters and response types, errors, related models, default/allowable values, etc. This is what the OP was referring to, and this is what Swagger is trying to do. The Swagger spec is here, with some more details on what's required in addition to JSON Schema to document APIs: https://github.com/wordnik/swagger-core/wiki/API-Declaration Incidentally, the model/data specifications in the Swagger spec do map closely to JSON Schema.
While I agree with the title, I am not so sure about the solution presented. HATEOAS, whether encoded in JSON or XML, can only give you so much information about the semantics of links.
IMHO, what's needed is better support for "generic" REST in programming languages and/or libraries. Objective-Smalltalk (http://objective.st) features "Polymorphic Identifiers", which make it possible to both interact directly and abstract over web interfaces.
For abstraction, you can build your own schemes, either directly in code or by composing/modifying other schemes. For example, if I want to look up RFCs, I can define the rfc scheme:
I actually have no idea why the HAL people aren't writing HAL specifications for existing services right now--it's not quite as nice as "native" support, but the format supports it, and it would be useful to see what a HAL version of the Twitter API looked like, for example.
This is essentially trying to solve the same problem as Swagger. Swagger is "a specification and complete framework implementation for describing, producing, consuming, and visualizing RESTful web services" [1]. Check out the spec on GitHub here [2].
There is a lot of talk about this idea being SOAP-like, but I disagree.
SOAP was insane and its counterpart, WSDL (which is really the part that is most comparable to this idea), was even more insane.
But, the basic premise was not bad. It was the execution which sucked by trying to account for every situation, adding namespaces, etc. And if you ever worked with language libs designed to interface with SOAP/WSDL, it would make you slap a bunny.
With this idea, however, adding an optional JSON-based descriptor language could be helpful. The key would be to keep it simple: allow the bare minimum number of data types, with one simple array structure for collections. Allow object definitions with an infinite number of nesting levels, and that would be it. I wouldn't even get into optional vs. required stuff, validation, etc. That stuff should stay at the application level. Why stuff it into the interface layer?
From there, it would be easy to develop libraries to generate clients in any language for any API just by feeding it the JSON descriptor. Or (as I think the author intended) just use one universal client that any app can use. For languages that aren't strongly typed anyway, the latter would be fine.
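A sketch of what such a universal client could look like, with an entirely made-up descriptor and a stubbed transport in place of a real HTTP library:

```python
import json

# A deliberately minimal descriptor, in the spirit described above:
# endpoints only, no validation. Everything here is invented for illustration.
descriptor = json.loads("""
{
  "base": "https://api.example.com",
  "endpoints": {
    "get_user":   {"method": "GET",  "path": "/users/{id}"},
    "post_photo": {"method": "POST", "path": "/photos"}
  }
}
""")

class GeneratedClient:
    """Universal client: methods come from the descriptor, not hand-written code."""
    def __init__(self, desc, transport):
        self._desc, self._transport = desc, transport

    def __getattr__(self, name):
        ep = self._desc["endpoints"][name]
        def call(**params):
            url = self._desc["base"] + ep["path"].format(**params)
            return self._transport(ep["method"], url)
        return call

# A fake transport that just echoes the request instead of sending it.
client = GeneratedClient(descriptor, lambda method, url: (method, url))
print(client.get_user(id=42))
```

Feeding the same class a different JSON descriptor would yield a client for a different API, which is the whole point of the idea.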
Someone mentioned that it would require the server-side devs to keep the descriptor in sync with the code. No biggie for apps that already offer client libs in different languages and must keep them up to date anyway. Not to mention there's usually some doc that needs to be kept in sync regardless (REST is not typically self-documenting in reality).
In any event it wouldn't be required. What would be the harm in creating a standard for those apps that choose to use it?
Can't we just stop writing clients for specific REST APIs, period, and instead build one good API client that can easily be extended and adapted to any API?
That's the path I've taken in all my projects lately, because frankly I don't want to deal with a bunch of different API clients for Twitter, Facebook, SoundCloud, Instagram, or whatever sites I integrate with. All those different syntaxes and all that duplicated code don't help me. I want their individual differences hidden away from me and my colleagues behind a single well-known syntax, which I can extend myself to expose the resources and methods I need--say, a method for posting a photo for the APIs that support that, and so on.
My advice today would be: pick a good HTTP client, preferably with good OAuth support, then build your own extendable API client on top of it and integrate the different API resources you need with that client whenever you need them.
A client template library for REST APIs needs to be Turing-complete; otherwise it will be too weak to handle complex services or complex client-side tasks, such as caching, dependency relations, data that spans multiple services, etc. Even if you keep the template library simple, you'll need to wrap it in a layer of complex code. All you'll have done is add another layer to your code. You could redesign your code to fit a manageable design, but the server side of a REST API is usually designed by others whose priority is the server-side code. So, by the definition of the very task, REST API client code has to be a complex soup where the client's considerations mix with the server's.
While I understand the allure of writing specs and using a unified library, good API clients are more terse, as they're written to take advantage of the programming language you're using, and understand the particulars and idioms of the API they're written against. For example, the client might pick up the correct environment variables for your API credentials, or reduce certain repetitive code.
Another example: I wrote a client that returns a queue message. Attached to that message are some helper methods for deleting, releasing, and touching the message. It makes your code cleaner and easier to understand.
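A sketch of that helper-method pattern (all names and endpoints are illustrative, not the actual client):

```python
# The client returns a message object with delete/release/touch helpers
# attached, so calling code never touches raw endpoints directly.
class QueueMessage:
    def __init__(self, msg_id, body, api):
        self.id, self.body, self._api = msg_id, body, api

    def delete(self):
        return self._api("DELETE", "/messages/%s" % self.id)

    def release(self):
        return self._api("POST", "/messages/%s/release" % self.id)

    def touch(self):
        return self._api("POST", "/messages/%s/touch" % self.id)

# Record calls instead of issuing real HTTP requests.
calls = []
msg = QueueMessage("m1", "hello", lambda *args: calls.append(args))
msg.delete()
print(calls)  # [('DELETE', '/messages/m1')]
```

This is the kind of language-idiomatic convenience that a generic spec-driven client would struggle to produce.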
Given that the prevailing sentiment is that REST is self-describing and the API description doc is unnecessary, are there any examples of client generators that work directly off of a REST service?
I'm curious how this works in practice. What about authorization and parts of the API that are only available to certain users, does the client generator need to be authenticated? Are there standards for describing the meta-data associated with URLs (validation, optional parameters, etc.)?
The article fails to mention the existing JSON Schema and JSON Hyper-Schema standards that he is advocating: http://json-schema.org/
Both are currently used by Google's public APIs to auto-generate clients. Ruby/Python clients load the schema docs at runtime and do method_missing magic, Java/.NET clients generate static typed libraries periodically.
So we solve the problem of too many REST clients via another REST client? I agree with ttezel, and unio does look pretty cool, but I got a chuckle out of this :)
Well, it wouldn't be so bad if it were universal enough, but we'd probably have to wrap it in some sort of inter-ORB protocol for the internet or something...
Yes it would. Because making it universal enough makes it incredibly verbose and nasty to work with. JSON became popular because it was simpler than Web Services which became popular because it was simpler than CORBA which became popular because it was simpler than just talking over raw sockets using some sort of protocol ... oh, wait.
Actually not having a "universal" spec helps: it forces every provider to give some thought to how to make his API as lean as possible. Hopefully.
I want to give you an idea of how bad things are with REST API clients.
This is the Maven POM for a Java web project that uses the Google APIs for Profile, Drive, and OAuth2. It's insane:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.google</groupId>
    <artifactId>google</artifactId>
    <version>5</version>
  </parent>
  <groupId>com.google.api.client</groupId>
  <artifactId>google-plus-java-webapp-starter</artifactId>
  <packaging>war</packaging>
  <version>1.0.0</version>
  <name>google-plus-java-webapp-starter</name>
  <description>
    Web application example for the Google+ platform using JSON and OAuth 2
  </description>
  <url>https://code.google.com/p/google-plus-java-starter</url>
  <issueManagement>
    <system>code.google.com</system>
    <url>https://code.google.com/p/google-plus-java-starter/issues</url>
  </issueManagement>
  <inceptionYear>2011</inceptionYear>
  <prerequisites>
    <maven>2.0.9</maven>
  </prerequisites>
  <scm>
    <connection>
      scm:hg:https://hg.codespot.com/p/google-plus-java-starter/
    </connection>
    <developerConnection>
      scm:hg:https://hg.codespot.com/p/google-plus-java-starter/
    </developerConnection>
    <url>
      https://code.google.com/p/google-plus-java-starter/source/browse/
    </url>
  </scm>
  <developers>
    <developer>
      <id>jennymurphy</id>
      <name>Jennifer Murphy</name>
      <organization>Google</organization>
      <organizationUrl>http://www.google.com</organizationUrl>
      <roles>
        <role>owner</role>
        <role>developer</role>
      </roles>
      <timezone>-8</timezone>
    </developer>
  </developers>
  <repositories>
    <!--
      The repository for service specific Google client libraries. See
      http://code.google.com/p/google-api-java-client/wiki/APIs#Maven_support
      for more information
    -->
    <repository>
      <id>google-api-services</id>
      <url>http://mavenrepo.google-api-java-client.googlecode.com/hg</url>
    </repository>
    <repository>
      <id>google-api-services-drive</id>
      <url>http://google-api-client-libraries.appspot.com/mavenrepo</url>
    </repository>
  </repositories>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.3.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.mortbay.jetty</groupId>
        <artifactId>maven-jetty-plugin</artifactId>
        <configuration>
          <contextPath>/</contextPath>
          <systemProperties>
            <systemProperty>
              <name>configurationPath</name>
              <value>./src/main/resources/config.properties</value>
            </systemProperty>
          </systemProperties>
        </configuration>
      </plugin>
    </plugins>
    <finalName>${project.artifactId}-${project.version}</finalName>
  </build>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <netbeans.hint.deploy.server>gfv3ee6</netbeans.hint.deploy.server>
    <project.http.version>1.13.1-beta</project.http.version>
    <project.oauth.version>1.13.1-beta</project.oauth.version>
    <webapi.version>6.0</webapi.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>13.0.1</version>
    </dependency>
    <dependency>
      <groupId>com.google.apis</groupId>
      <artifactId>google-api-services-drive</artifactId>
      <version>v2-rev53-1.13.2-beta</version>
    </dependency>
    <dependency>
      <!-- A generated library for Google+ APIs. Visit here for more info:
        http://code.google.com/p/google-api-java-client/wiki/APIs#Google+_API
      -->
      <groupId>com.google.apis</groupId>
      <artifactId>google-api-services-plus</artifactId>
      <version>v1-rev22-1.8.0-beta</version>
    </dependency>
    <dependency>
      <groupId>com.google.api-client</groupId>
      <artifactId>google-api-client</artifactId>
      <version>1.13.2-beta</version>
    </dependency>
    <dependency>
      <groupId>com.google.api-client</groupId>
      <artifactId>google-api-client-servlet</artifactId>
      <version>1.13.1-beta</version>
    </dependency>
    <dependency>
      <!-- The Google OAuth Java client. Visit here for more info:
        http://code.google.com/p/google-oauth-java-client/
      -->
      <groupId>com.google.oauth-client</groupId>
      <artifactId>google-oauth-client</artifactId>
      <version>1.13.1-beta</version>
    </dependency>
    <dependency>
      <groupId>com.google.oauth-client</groupId>
      <artifactId>google-oauth-client-servlet</artifactId>
      <version>1.13.1-beta</version>
    </dependency>
    <dependency>
      <groupId>com.google.http-client</groupId>
      <artifactId>google-http-client-gson</artifactId>
      <version>1.13.1-beta</version>
    </dependency>
    <dependency>
      <groupId>com.google.code.gson</groupId>
      <artifactId>gson</artifactId>
      <version>2.1</version>
    </dependency>
    <dependency>
      <groupId>com.google.http-client</groupId>
      <artifactId>google-http-client</artifactId>
      <version>1.13.1-beta</version>
    </dependency>
    <!-- Third party dependencies -->
    <dependency>
      <groupId>com.google.http-client</groupId>
      <artifactId>google-http-client-jackson2</artifactId>
      <version>1.13.1-beta</version>
    </dependency>
    <dependency>
      <groupId>javax</groupId>
      <artifactId>javaee-web-api</artifactId>
      <version>${webapi.version}</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-lang3</artifactId>
      <version>3.0.1</version>
    </dependency>
    <dependency>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
      <version>1.1.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>4.0.3</version>
    </dependency>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpcore</artifactId>
      <version>4.0.1</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-core-asl</artifactId>
      <version>1.9.4</version>
    </dependency>
    <dependency>
      <groupId>javax.jdo</groupId>
      <artifactId>jdo2-api</artifactId>
      <version>2.3-eb</version>
    </dependency>
    <dependency>
      <groupId>com.google.code.findbugs</groupId>
      <artifactId>jsr305</artifactId>
      <version>1.3.9</version>
    </dependency>
    <dependency>
      <groupId>com.google.protobuf</groupId>
      <artifactId>protobuf-java</artifactId>
      <version>2.2.0</version>
    </dependency>
    <dependency>
      <groupId>javax.transaction</groupId>
      <artifactId>jta</artifactId>
      <version>1.1</version>
    </dependency>
    <dependency>
      <groupId>xpp3</groupId>
      <artifactId>xpp3</artifactId>
      <version>1.1.4c</version>
    </dependency>
  </dependencies>
</project>
What would you remove? How could it be simpler? I don't mean how could it be less verbose, but how could you describe those various project attributes in a way that wouldn't lead you to another markup language with the same data?
I am getting rid of all the Google API jars. Google has a well-documented REST API for OAuth 2.0 and Drive; I am refactoring my code to use only the standard Commons HTTP client jars along with Java JSON jars (e.g. Jackson) and invoke the standard REST API.
I realize now that cut-and-pasting from my code into a comment was a bad idea; I wish I could edit this post, but I am unable to (no edit link). Lesson learnt for next time.