The vital piece of information in the graph is the relative size of the wages. If the author had started the axis at 0, it would be far harder to distinguish the differences between skills, by the same principle that makes pie charts mostly unusable.
1. As hinted above, what are the standard deviations? Do we even know whether Clojure is statistically different from HAML (see the sketch after this list)? Do Clojure rates exhibit more variation than HAML rates?
2. A $4 difference between pattern recognition and machine learning, when pattern recognition is, effectively, a subset of ML? We see the same thing with legal services vs. contract drafting, and (arguably) info architecture vs. interaction design (less overlap there, to be sure).
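For point 1, here is a minimal sketch in R of the kind of check that would settle it. All of the numbers below are invented for illustration; they are not the actual oDesk figures.

```r
# Hypothetical hourly-rate samples for two skills -- placeholder
# means, SDs, and sample sizes, not the real oDesk data.
set.seed(1)
clojure <- rnorm(50, mean = 44, sd = 7)
haml    <- rnorm(50, mean = 40, sd = 7)

# Welch two-sample t-test: is the gap in mean rates
# distinguishable from sampling noise?
t.test(clojure, haml)
```

With samples this size and an SD of ~$7, a $4 gap is detectable, but a $1 gap would not be.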
Also, while John gets credit for joking about causality, I'd be very concerned about sampling bias in using oDesk data to examine human capital decisions.
Re: standard error bars - you're absolutely right. I should have included them, but it was my first time using the googleVis package & I didn't see an easy way to add them. I don't have the numbers right in front of me, but the standard deviation for each skill was on the order of ~$7/hour, and each skill had to have at least 30 obs., though many had far more. If we say n = 50, that's an SE of about $1/hour ($7/sqrt(50) ≈ $0.99), so clearly one shouldn't put much stock in very small differences in wages. My goal was just to unearth some interesting data; I probably should tone down the implied recommendation lest I be responsible for a glut of unemployable Lisp hackers in a few years :)
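For concreteness, the SE arithmetic above, plus a sketch of one way error bars could be drawn with ggplot2 rather than googleVis. The skill names, means, and SEs in the data frame are placeholders, not the actual figures.

```r
# Back-of-the-envelope standard error of the mean: SE = SD / sqrt(n)
sd_rate <- 7            # ~$7/hour per-skill standard deviation
n       <- 50           # assumed sample size (minimum was 30 obs.)
sd_rate / sqrt(n)       # ~0.99, i.e. roughly $1/hour

# One way to get error bars outside googleVis: ggplot2.
library(ggplot2)
df <- data.frame(
  skill     = c("Clojure", "HAML"),   # placeholder skills
  mean_rate = c(44, 40),              # placeholder means ($/hour)
  se        = c(1.0, 1.1)             # placeholder standard errors
)
ggplot(df, aes(skill, mean_rate)) +
  geom_col() +
  geom_errorbar(aes(ymin = mean_rate - se, ymax = mean_rate + se),
                width = 0.2)
```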
I see the graph as answering the question: What is the difference in achievable wages between different "hacking" skills? The graph, as shown, displays that difference well. Like I said, there are no hard and fast rules, but I would have done it the same way!