I don't understand what point he is trying to make about the Dunning-Kruger effect. The only pop-sci version of the study that I've heard is that the bottom percentile tends to overestimate their abilities while the top percentile tends to underestimate their abilities. The data shown certainly seems to support this interpretation?
edit: I read the article linked in another comment below [1] and now I think I understand the claim the original article was referring to. A common misunderstanding of the D-K effect is that the bottom percentile thinks they are as competent as, or more competent than, the top percentile. That is not what the study says: the bottom percentile overestimates their ability, but their self-estimates are still lower than the self-estimates of those with higher ability.
When I read pop-sci interpretations of it (which is basically all I've ever read), my impression was that the top percentile rate themselves as worse than the lower percentiles do.
The graphs cited in the article show that the top still estimate themselves better than the lower percentile, which is new to me.
"Rather, it’s that incompetent people think they’re much better than they actually are. But they typically still don’t think they’re quite as good as people who, you know, actually are good."
Unfortunately, while the incompetent people may believe that they are not quite as good as people who actually are good, the problem is that they are not qualified to identify people who are actually good. From the original paper:
"That is, the same incompetence that leads them to make wrong choices also deprives them of the savvy necessary to recognize competence, be it their own or anyone else’s."
"As predicted, participants who scored in the bottom quartile were less able to gauge the competence of others than were their top-quartile counterparts." [2]
So while the incompetent people may not believe themselves to be experts in general, they are less able to identify that the person sitting next to them is an expert.
And further: "Bottom-quartile participants failed to gain insight into their own performance after seeing the more competent choices of their peers. ... [they] tended to raise their already inflated self-estimates."
Not only do they fail to identify the expert next to them, but when presented with the expert's answers they will raise their own inflated estimates!
Therefore, I believe that in practice, the DK paper does support the belief that less competent people believe themselves to be more competent than experts.
[2] Kruger, Justin; Dunning, David (1999). "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments". Journal of Personality and Social Psychology
The author states, regarding the interpretation of the Dunning-Kruger diagrams, that "[i]n two of the four cases, there’s an obvious positive correlation between perceived skill and actual skill, which is the opposite of the pop-sci conception of Dunning-Kruger."
In my corner of the universe, you don't get to cherry-pick which pieces of data (i.e. which instances of two sets of random variables) you bestow the golden twig of correlation upon. If I'm not entirely mistaken, correlation is very much a global feature, not a measure of the proximity of two points on a chart.
So, yes, Dunning-Kruger (as evinced from the diagrams sported here) indeed seems to make a weaker claim: that there's no
correlation between “perceived ability” and “actual ability”. As such,
this claim is as far from the "pop-sci conception" of Dunning-Kruger as
it is from the author's.
The referenced graphs measure performance and perceived ability on 4 different tasks. You're right that ideally you'd pool this data to get to an overall correlation, but for the point the author makes, eyeballing it and taking a mental average does the trick, no?
Also, what corner of the universe are you from? Loess regression, hierarchical modeling, conditional analyses... methods for finding "non-global" correlations aplenty.
Err, no. There is clearly a pattern to the data, and concluding that self-assessment and actual skill are uncorrelated is not what you should do (the simplified pattern being that unskilled individuals overestimate their skill while skilled individuals underestimate theirs, or regression to the mean, if you will).
At any rate, even if we don't take that into account, self-assessment is worth plenty, as demonstrated by the many studies which manage to get coherent data from self-assessments. Sure, you should take them with a grain of salt, and you can expect biases, but there's no need to throw them out.
"It’s a little easier to see why people would pass along the wrong story here, since it’s easy to misinterpret the data when it’s plotted against a linear scale, but it’s still pretty easy to see what’s going on by taking a peek at the actual studies."
"People’s life evaluations rise steadily with income, but the reported quality of emotional daily experience levels off at a certain income level, according to a new study by two Princeton University professors [...]"
So seemingly that has nothing to do with the scale used, but with the definition of happiness. Guess someone didn't look "at the actual study"?
The author only thinks his understanding of Dunning-Kruger is better than average. No, seriously. As others have pointed out, the meme is not that actual ability and self-assessment are negatively correlated. It's just that ignorant people don't know how ignorant they are - i.e. that those at the low end of the scale overestimate their position relative to the maximum. That version is very well supported by the graphs the OP cites. He has scratched the iceberg of skepticism and now thinks he's enough of an expert to tell the rest of us that we're Doing It Wrong.
Hmm, as he kind of points out, there is a difference between knowing your ability in something is poor and being willing to admit it to a stranger. The usual meme of Dunning-Kruger tends to refer to people who are oblivious to their own incompetence. But there are valid reasons for the perceived score compression. I'd wager the people in the bottom quartile of grammar performance are well aware that their grammar isn't very good, they just weren't willing to write 20 on the paper (out of shame) and went for 60 instead.
> the meme is not that actual ability and self-assessment are negatively correlated
I don't think there's a single DK meme, but I do think that this is a common DK meme.
Somewhat-serious suggestion: maybe people who actually understand DK, have a tendency to underestimate how well they understand it relative to everyone else, and so assume that everyone else understands it about as well as they do.
I've definitely heard the negatively correlated variant of the meme. I'm not sure it's the predominant one or always explicitly expressed, but it's out there.
I think it's hilarious that everyone piling onto this guy for being "so wrong" about Dunning-Kruger has a huge blind spot. Maybe they don't understand it as well as they think, either. If it's possible that he's super-wrong, it's also entirely possible that they are too.
I see the "actual ability and self-assessment are negatively correlated" version of the meme a majority of the time. Maybe I just hang out on the wrong parts of the internet?
People roll out either depending on the situation. It is hand waving, not reasoning. IOW, you can explain anything away by taking one of the two sides of this argument. "You think you are good? Ha, no, dunning. Oh, you are not sure you are good? Ha, wrong, you must be great" type stuff. While these effects exist in large populations, you can't really use them to reason about any specific instance.
> He has scratched the iceberg of skepticism and now thinks he's enough of an expert to tell the rest of us that we're Doing It Wrong.
This sort of gratuitous negativity isn't appropriate in a Hacker News comment. It is unnecessary to make your point and pollutes the atmosphere for all of us. Please don't do that.
This. I work with a support guy like this. He has, at best, a hobbyist's understanding of the technology we use. He tries to paint himself in meetings as this expert who often tries to correct us and says completely mind-numbing things. When discussing a fairly complex Drupal roll-out: "Oh, why even bother with this open source stuff? We can make an Access database on the network shares and give people access to it! I hear it can do web now!" I've sat him down and patiently explained why we do these things, how they are common and best practices, etc., and he just makes a frowny face like I'm the one talking crazy, and he doesn't seem to learn, as he brings up the same suggestions over and over. Last week he was adamant that a dedicated Linux server for our Asterisk PBX was overkill and that Windows XP running some Windows freeware PBX on any old desktop would be better, because PBXs need so little power and XP is a super lean OS.
At my previous job, where I worked with and managed a few young-ish support and jr devs, I came across a lot of the same attitudes. The funny thing is that two or three years in, they tend to shed those attitudes (at least the smart ones do). They realize that this stuff is a lot bigger than the limited experience they've gotten in school or in their hobbyist projects. Suddenly the "let's toss out everything and do it my way" motor-mouthing gets turned down a notch or two, especially after I let them do things "their way" once in a while only to have it explode in their faces. Hell, 90% of managing young techies is controlling their D-K until they mature into impostor syndrome. Then you have to manage that, which is a million times easier to deal with because you're not being know-it-all'd to death in every meeting.
D-K is real. The graphs he points out literally support it. Personally, I'm getting sick of this kind of uber-skeptic mentality that's popular here and on reddit. I think there's something ego gratifying for the INTJ male about being this loud-mouth contrarian who tells everyone they're wrong. Especially if it's against some popular wisdom and if the argument is unusually pedantic and trivial.
D-K is real. The graphs he points out literally support it.
What the author says is:
In any case, the effect certainly isn’t that the more people know, the less they think they know.
And he's right; the data he shows does not demonstrate that. The more talented people in those plots, at worst, think they know as much as the less talented.
What the graphs do demonstrate is that the less talented people consistently overestimate their ability, whereas the most talented people underestimate. So what's actually negatively correlated isn't actual knowledge and perceived knowledge, it's actual knowledge and the gap between actual knowledge and perceived knowledge.
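A quick numeric sketch makes that distinction concrete. The quartile numbers below are invented to roughly match the shape of the DK plots, not taken from the paper:

```python
# Hypothetical quartile averages, loosely shaped like the Dunning-Kruger
# plots (NOT the actual study data): actual percentile vs. self-assessed
# percentile for the four performance quartiles.
actual    = [12.0, 37.0, 62.0, 87.0]
perceived = [58.0, 62.0, 68.0, 75.0]

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

gap = [p - a for p, a in zip(perceived, actual)]

print(pearson(actual, perceived))  # positive: skill and self-assessment rise together
print(pearson(actual, gap))        # negative: the overestimate shrinks as skill rises
```

So even with data shaped like the plots, actual vs. perceived comes out strongly positive, while actual vs. the overestimation gap comes out strongly negative.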
In conclusion,
Personally, I'm getting sick of this kind of uber-skeptic mentality that's popular here and on reddit. I think there's something ego gratifying for the INTJ male about being this loud-mouth contrarian who tells everyone they're wrong.
> I think there's something ego gratifying for the INTJ male about being this loud-mouth contrarian who tells everyone they're wrong. Especially if it's against some popular wisdom and if the argument is unusually pedantic and trivial.
In "Income & Happiness" you're missing that there are two complementary definitions of happiness: "satisfaction" and "affect". Satisfaction, or how you feel about your life, is what is shown in your graphs. "Affect" is your direct emotional experience of your life. What you call "wrongness" is just a failure to distinguish between the two: more money makes you increasingly smug, but (beyond a certain point) doesn't make your life any more fun.
> more money makes you increasingly smug, but (beyond a certain point) doesn't make your life any more fun
That's not necessarily money's fault, and I'd point out that there are certainly rich people that don't fall into this category.
People who want to pursue happiness need the freedom to do so, and that freedom correlates directly with how much money, or wealth, is available to the person.
It's not money's fault that some people don't know how to be truly happy. Money simply affords freedom.
The problem is that with enough freedom and no idea how to be happy in it, you're going to end up in a worse place than if you didn't have the money in the first place.
Money doesn't automatically make people smug and funless (new word?). People make people smug and funless (when they have the freedom and inclination to (unintentionally(?)) do so).
I believe what he meant by money making a person increasingly smug was perhaps that (Western) society holds the expectation that the amount of money a person controls is directly proportional to their overall success or fulfilment in life. This is not always the case but it does create the expectation that a person with money should represent themselves or think of themselves as more successful. This attitude is often interpreted (often correctly) as smugness given that it is rather the opposite of humbly acknowledging the fruits of work or fortune.
Happiness means many things to many people. For some, work itself brings them joy whether or not it results in earning a great deal of money, e.g. charitable work. For others it is a close relationship with their family, a situation that increased wealth often worsens. Many work their entire lives without recognition or compensation to pursue art, giving up comfortable or successful lives and careers in pursuit of something that, more often than not, they are the only ones to see value in. Religious or ascetic devotions have perhaps the longest history of eschewing wealth and what most would consider a normal life.
As a society, I think that we are beginning to understand the faults in our current model of a successful life and so we've been seeing more of a trend towards simplifying our lives to make room for these other generators of happiness. We are starting to see that the freedom of money is not the same as the freedom from money.
You can still say type systems are obviously a good thing (I do too!). But you, sadly, can't say this is supported by scientific studies. It's an opinion based on your own personal experience.
The good news is that this opinion doesn't contradict the evidence either! The research just isn't there yet. Some of the research I've seen so far is pretty bad.
The fact that you actually recognized this response in yourself means that you're more than capable of adjusting that response to be less biased. It's a very good first step.
One option is to shrink your world to the things you have direct experience with. So yes, type systems are obviously a good thing - for you. Then the obvious question is why would somebody believe otherwise - and you can learn something by asking it. Worry about figuring out what the world is actually like once you've got a sample size of more than two.
When enough people are aware of the meme, and the meme is ingrained in culture enough to have a statistically significant effect on perceived ability. Or when people decide it is meaningless, out of desire to control their expectations of their performance.
"I suspect we find this sort of explanation compelling because it appeals to our implicit just-world theories: we’d like to believe that people who obnoxiously proclaim their excellence at X, Y, and Z must really not be so very good at X, Y, and Z at all, and must be (over)compensating for some actual deficiency; it’s much less pleasant to imagine that people who go around shoving their (alleged) superiority in our faces might really be better than us at what they do.
Unfortunately, Kruger and Dunning never actually provided any support for this type of just-world view; their studies categorically didn’t show that incompetent people are more confident or arrogant than competent people."
It's also worth pointing out that you almost never see people bringing up Dunning-Kruger to say that they might be overestimating their own understanding of a topic. It's almost always used as evidence that those who disagree with them are ignorant; that it's a study that only says something about their opponents, not themselves.
People on the internet misunderstanding a paper? Might that be because 99% of them don't have access to read the thing, since it's stuck behind a paywall?
and to end a post about a paper people can't read with "Maybe I’m being naive here, but I think a major reason behind false memes is that checking sources sounds much harder and more intimidating than it actually is." -- that's just rubbing it in!
Scientific reporting sucks, even in quality media that take it seriously. If I had money I'd pay a select group of bloggers to write about how a popular bit of science has been misreported and then collate links to those pieces on some central site.
A bit like Behind the Headlines, but about any science and by a wide range of science bloggers.
One of Norway's biggest bloggers is a guy named Tjomlid[1]. His blog is "skeptical", as in he looks into various things circulating in the media and either backs it up or debunks it.
When media says "X causes cancer", he comes with a level headed blog-post explaining the data. When anti-vaxxers point to a study, he shows that they have misinterpreted it, etc.
Unfortunately, it's in Norwegian. I'm interested in reading more in this vein, so please let me know if you have good, English resources.
However, I disagree with his interpretation of Dunning and Kruger's figures. There is limited data in those figures of course (3 sets with 12 points of data and 1 set with 8 points), and I don't know how big the actual sample sizes were. But in all 4 cases, it clearly and significantly shows that people with low actual ability overrate themselves and people with high actual ability underrate themselves.
> The pop-sci version of Dunning-Kruger is that, the less someone knows about a subject, the more they think they know.
I can only guess that he is interpreting that too literally. I always took the latter part as "the more they think they know [compared to what they actually do]". You could plot "perceived - actual" or "perceived / actual" instead of the figures in the paper to make it more obvious.
> But in all 4 cases, it clearly and significantly shows that people with low actual ability overrate themselves and people with high actual ability underrate themselves.
I read the data quite differently. I read it as people not using the full scale when rating themselves: the self-ratings only span roughly 60 to 90. If you correct for that, people seem to do a fairly good job of self-evaluation.
It can be explained, too. People have social pressure not to rate themselves near 100%, nor in the bottom half of the scale.
Presumably the plots represent averages. This hides the actual ranges of the reported self assessments, but it is likely that there is substantial overlap between adjacent test score groups. Just curving out the scale isn't going to address that.
True. Without knowing how the values are distributed around the average, my observation that people do good self-assessments is poorly justified. Worse, the compressed scale means coarser granularity, so the end result is guaranteed to be less precise. My observation is at the formality level of a back-of-napkin calculation (i.e. it may be true, given the information provided, but is not proven true).
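For what it's worth, the back-of-napkin rescaling looks something like this (the self-ratings below are invented, only shaped like the plots, so it illustrates the mechanism rather than the data):

```python
# Invented numbers shaped like the plots: self-ratings span only ~60-90,
# while the actual quartile midpoints span ~12-87.
actual       = [12.5, 37.5, 62.5, 87.5]
self_ratings = [60.0, 66.0, 75.0, 88.0]

def rescale(rating, lo=60.0, hi=90.0):
    """Stretch the used portion of the self-rating scale back out to 0-100."""
    return (rating - lo) / (hi - lo) * 100.0

stretched = [rescale(r) for r in self_ratings]
print(stretched)  # rescaled ratings track the actual scores far more closely
```

Of course, as noted above, this rescaling only works on the averages; it says nothing about the spread within each quartile.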
It is not the same damn graph: the two "logical reasoning" graphs show two separate studies that measured the same quantities. The points on these graphs are not the same, even though the axes and captions are (except that the captions note that they are two separate studies).
Edit: based on other comments, it sounds like maybe the 3rd and 4th graphs were identical, but the author has now corrected the error.
I always sucked at stats, so can someone clarify the income/happiness case? From what I understand, we have:
* a straight line on a log scale - which is misleading, since the man on the street is used to straight lines on linear scales - suggesting that money buys happiness
* a "log-like" curve on a linear scale, which tells you that money buys a lot of happiness at the start and then doesn't change things much - and since the graph uses a linear scale on both axes, there is less distortion.
People are making the claim that money buys happiness up to a certain number and then stops.
But there is no inflection point. 10% more money always buys you the same amount of happiness.
Nothing makes $75k the point where you have 'enough' money as opposed to $5k. 75 million is just as far ahead of 5 million (at least if it continues to be logarithmic).
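To see why no income level is special under a logarithmic relationship, here's a toy model (the coefficients are made up; only the shape matters):

```python
import math

def happiness(income, a=1.0, b=2.0):
    """Toy model: reported happiness is linear in log(income). a and b are invented."""
    return a + b * math.log(income)

# A 10% raise adds the same b * log(1.1) happiness units at ANY income level:
boost_at_5k = happiness(5_000 * 1.1) - happiness(5_000)
boost_at_5m = happiness(5_000_000 * 1.1) - happiness(5_000_000)
print(boost_at_5k, boost_at_5m)  # equal up to floating-point noise
```

Since the boost depends only on the ratio, the curve has no kink anywhere: any cutoff you pick is a judgment call, not something the data hands you.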
10% more money is much more money when you're rich than when you're poor (Captain obvious to the rescue.)
So if you want to make me 10% happier, that's going to cost you a lot more actual dollars / euros / yuan / whatever; and the last $10 are going to make me less "happy" (in absolute terms) than the first $10.
Considering people don't buy stuff as a percentage of their salaries, but in absolute dollars (if I'm not mistaken), isn't that a case where the absolute/relative difference is relevant?
I'm pretty sure I'm having a stats 101-level argument, and there is something obvious that I'm missing, so please forgive and educate ;)
So the point is that 10% more money always makes you a certain amount happier.
People trot this out and say "look, once you have a certain amount of money you don't get happier by having more!"
Why should dollars buy happiness the same way they buy gumballs? It doesn't make any sense.
The first gallon of water you have every day is pretty freaking valuable. The 1000th gallon of water you have every day is just about worthless unless you're a farmer.
In engineering we use log-log charts to look at how systems respond to a HUGE range of inputs from the very, very slow to the very, very fast. Like filters. You want to know how it looks from 1Hz to 1GHz in a useful way. This is generally done with a Bode plot which is log on both x and y. http://en.wikipedia.org/wiki/Bode_plot
If your plot covers several orders of magnitude (and income is many orders of magnitude) then the most informative way to understand it is by compressing it via logarithm. Here's a good example only on the Y since the company is growing X% per year.
> Why should dollars buy happiness the same way they buy gumballs? It doesn't make any sense.
Which is precisely the point of the old "money does not buy happiness" adage, isn't it?
Science that says "this 1000th gallon of water is going to be as good as worthless" seems to favor the point rather than oppose it.
I don't know if "happiness == gumballs" would make "sense" to anyone (I personally doubt it), but that's the subtext of a lot of messages in a consumer society (and we're leaving the realm of maths for the realm of politics. Stats 101 vs PolSci 101, all over again, I know, sorry.)
As for your graphs, what I was saying is precisely that, unless you've been trained to read graphs (and most people are not), you're going to be "fooled" more easily by the second one, won't you?
> Which is precisely the point of the old "money does not buy happiness" adage, isn't it?
Not the way I read it, really. If money doesn't buy happiness, then the graph should be completely flat right? Or at the very least a uniformly distributed scatter plot? But it doesn't really seem to look that way. While I suspect that there's some truth to the adage, it's definitely not absolute.
> Science that says "This 1000ths gallon of water is gonna be as good as worthless" seems to favor the point rather than oppose it.
That's not the point. The point is that as you have more of a thing, it takes more to move the needle. EVERYTHING has decreasing marginal utility, and money does too. But I don't think the marginal utility of money ever becomes zero, or goes negative. Do you think that? If so, please start sending me your money so that you can be happier!
Here's a great example for you. Raises are nearly always percentages right? Why not just give everyone a $100 a year raise? I mean, that $100 is supposed to be the same to everyone, isn't it? 10% more money increases your happiness approximately the same, no matter what number that 10% increase is applied to.
> As for your graphs, what I was saying is precisely that, unless you've been trained to read graphs (and most people are not), you're going to be "fooled" more easily by the second one, won't you?
I think that if you want to understand the way a system works across many orders of magnitude the only way to do so without fooling yourself is to look at a log/log plot. If you don't, only the most extreme change on the plot shows up and all the rest gets compressed into a "straight" line by the vagaries of turning an infinitely variable thing into a finite set of pixels.
Note that, for the adage, I realise I was thinking about the "Renaud Sechan" version, which roughly translates as "Money doesn't buy happiness, but it helps buying the groceries." ;)
They are claiming there is a point where money stops being important.
There is no such point.
There is no reason to favor any particular income level. If you use the happiness graph to make an argument that $50k is all anyone needs, I can use the same graph to show that $400 is all anyone needs.
If the 'up to a certain point' hypothesis was correct, with the graph leveling out, there would be a maximum happiness-from-money. You could say "oh well 85% of max is plenty, the ideal income is X". But the data says that's not true.
---
Humans intuitively understand logarithmic scales. Arguably even better than linear scales. A lot of our senses are logarithmic, after all. It's obvious that one dollar does not buy one happiness, given that millionaires aren't in ecstatic shock. But every time I double my income I gain the same number of happiness units? Sounds like money buying happiness to me.
Regarding your comment about log scales, is there a "sense" (or kind of perception, etc.) besides hearing that is "obviously" logarithmic?
Also, as another "I don't get stats, get me out of here" question, isn't it problematic to plot against a "perception" level that (unless I'm mistaken) is linear? What do you do with someone who answers "I'm happy at a 10/10 level"? Do you consider it an outlier by definition?
Assuming the "happiness" self-rating is indeed linear, is it still a fallacy to think that, beyond the mathematical rightness, there is value in knowing that after $x/year, y% of the population rates itself above, say, 8/10? I'm under the impression that this is a different argument from "after $x/year, everyone is happy", and that it can still be practical (especially if you can correlate lower levels of happiness with stuff you want / should / need to get rid of. But again, talking politics at a math cocktail party, probably bad manners.)
Brightness is pretty logarithmic. Touch, smell, taste, all of those seem logarithmic to me.
People rating out of 10 distorts the numbers in multiple ways, that's a separate issue to deal with. Ideally you'd remove self-reporting in some manner...
>value in knowing that after $x/year, y% of the population rates itself as above, say, 8/10
If it's linear with a cap of 10 then all the reasoning from before goes in the trash. The graph won't actually be logarithmic and you can set a cutoff easily.
If you can calculate a linear happiness score with no cap, then it's possible to pick a point that's "happy enough", but that point will be arbitrary. It won't be based on an inflection point on the graph, because the graph has no inflection points. You can use your judgement to say that 12 happiness points is plenty, but that's not a math question.
I'm curious about this too. I'm familiar with the economic concept of diminishing marginal utility, and I get that the $10k you gain going from $120k to $130k/yr is much less valuable to you than the $10k going from $40k to $50k. But it still seemed (to my Stats 101-level interpretation) that there was a decent-enough jump between $75k and $130k, and that beyond that, happiness/utility continued to increase significantly with income past the commonly cited figure.
For those of you who earn the requisite amount, is it true that it's easier to get a significant raise at the higher income levels? I live in a fairly rural area but I've found as my income rises the degree to which I need to fight for marginal pay increases has diminished, which seems counter-intuitive since each point gets more and more expensive, obviously.
You're understanding the actual data quite well as it turns out. Moving from $40k -> $50k is a much bigger gain on the happiness scales since it is a larger % increase. Moving $120k -> $130k is much smaller relatively, even though it's still $10k.
As for getting larger raises, many jobs you get raises based on a percentage of your current salary. So if you get a 5% year-over-year raise and your base salary is $120k, you're getting a similar happiness boost as the 5% year-over-year raise of someone making $40k base salary. But you're getting way more in terms of net actual dollars.
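Sketching that in code, under the same log-of-income model the happiness studies suggest (the coefficient is invented, so only the comparison matters, not the numbers):

```python
import math

def happiness_gain(base_salary, pct_raise, b=2.0):
    """Happiness boost of a raise under a log-of-income model; b is made up."""
    return b * (math.log(base_salary * (1 + pct_raise)) - math.log(base_salary))

# Same 5% raise, very different dollar amounts, same happiness boost:
print(40_000 * 0.05, happiness_gain(40_000, 0.05))    # a $2,000 raise
print(120_000 * 0.05, happiness_gain(120_000, 0.05))  # a $6,000 raise
```

The log difference collapses to log(1 + pct_raise), which is why the base salary drops out entirely.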
What a gazillion pop-sci articles and blog posts suggest is that there is actually a negative correlation between self-assessment and performance -- i.e., that the higher people rate themselves, the worse they will perform. Looking at the graph we can see that while poor performers overestimate their performance relative to reality, there is still a positive correlation between performance and self-assessment.
Looks like only people in the 3rd quartile are any good at estimating performance, and so perhaps they should immediately all become project managers?! :)
I think there's an error in the blog post, because the 4th graph is the same as the 3rd.
It should probably be http://danluu.com/images/dunning-kruger/dunning_4.png which shows a positive correlation for "perceived logical reasoning ability and test performance". That then helps explain the statement "In two of the four cases, there’s an obvious positive correlation between perceived skill and actual skill".
This looks like a very uncharitable interpretation of the pop version of Dunning-Kruger. For one, people who use an explanation like that pretty much always talk about unskilled individuals in particular. I might be projecting, but a more charitable interpretation of what they are saying would be "the less someone knows about a subject, the more they [overestimate what] they know", which is roughly correct.
Without commenting on the various claims about memes in this article, I can offer a hypothesis on the closing paragraphs: the most obvious reason why misinterpreted science is much more commonly seen is that sharing an unsupported meme takes far less time and effort (~few clicks) than finding the research paper and reading it, so we would anticipate seeing a lot more of the former.
The pithy version is "a lie can run round the world before the truth has got its boots on", with thanks to Pratchett.
> the most obvious reason why misinterpreted science is much more commonly seen is that sharing an unsupported meme takes far less time and effort (~few clicks) than finding the research paper and reading it
I think your comment contains a highly recursive component:
Indeed, in that the typical usage of DK among the poorly informed is just "hey, I can accuse people of being too misinformed to realize it, without actually refuting their argument, all because I can cite a paper".
He's more right than most people are on the Dunning-Kruger effect. What it actually tells us is that self-assessed competence estimates carry low signal: the correlation is almost zero. This might be a regular, pointwise regression to the mean, or it might be an artifact of the aggregation (i.e. it could be that some people are great self-estimators but others are way off the mark).
What Dunning-Kruger tells us is that, with regard to certain abilities and especially social skills, people are bad guessers of their own competence and that most people think of themselves as slightly above average.
I think that he's also somewhat right on money vs. happiness. Self-reported happiness is not the same thing as actual happiness. You might rate yourself a "7" at two different times of life, yet behave very differently. Your "7" when you're poor might mean "the commute sucked, but no one yelled at me at work today"; when you're rich, it might mean "the hotel is nice, but the staff forgot to fold the toilet paper into a triangle". The self-reported contentment is the same, but the stress levels and behavioral measures of happiness are very different. It is true that beyond about $75k per person (adjusting, of course, for cost of living and family size), other factors have more of an effect on happiness than money itself.
Then there are job quality issues. A writer who reliably makes $100k per year is a smashing success; a professor earning that at 40 is more than respectable; but a "Software Engineer II" making that at age 40 is seen as a failure. I think that success correlates much more strongly with happiness than income does. People with Harvard MBAs making $250k per year are often among the most miserable people on earth, while famous artists earning a fraction of that often love their lives.
His relating this to type systems is a bit silly. You can't put all "static typing" languages in the same bucket. Java's static typing is a hindrance with little power. Haskell's static typing is extremely effective-- if you know how to use it. Also, quality of the engineers matters more than the language itself: I'd much rather work on a disciplined Clojure or Ruby team than a sloppy Scala team that's still using Java patterns. If it is properly used, a Haskell-like type system can make code quality very high indeed. That said, most businesses aren't willing to budget the time for quality code, and that's a language-independent problem.
The income graph strongly suggests to me that increased happiness scales with the proportionate increase in income, rather than the absolute increase in raw dollars. That is to say, a 20% raise will increase everybody's happiness roughly equally, regardless of the absolute magnitude of the raise (eg, $10k for someone earning $50k/year and $100k for someone earning $500k/year).
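One way to make that concrete is log utility (my own illustrative assumption here, not something the graph itself establishes): if happiness scales with log(income), an equal percentage raise buys the same happiness bump at any income level.

```python
import math

# Sketch under the assumption happiness ~ log(income): an equal
# *percentage* raise yields an identical happiness increment,
# regardless of the starting salary.
def happiness(income):
    return math.log(income)

low_gain = happiness(60_000) - happiness(50_000)     # +20% on $50k
high_gain = happiness(600_000) - happiness(500_000)  # +20% on $500k
print(round(low_gain, 4), round(high_gain, 4))
```

Both gains come out to log(1.2), the same number, which is exactly the "proportionate increase" behavior described above.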
This probably has to do less with the utility of the money itself and more with psychological positive reinforcement ("I'm smart and successful and my boss recognizes that") as well as increasing the person's relative social status. I've read that, given the choice, people will prefer to (for example) earn $70k while their peer group earns $50k, as opposed to earning $100k while the peer group earns $150k. Income is an indicator of social status and is largely valued relative to the rest of your social cohort.
Interestingly, the Japanese are said to be more efficient because their corporations tend to award social status directly to "salarymen" (sic) rather than indirectly through differential wages.
I've always thought the Dunning-Kruger meme went: "The less you know about a given subject, the more you think you know, and vice versa", which I don't think is disputed by those graphs.
It's just saying that people tend to overestimate their knowledge of a subject when they are under-educated and tend to underestimate their knowledge of a subject when they are over-educated.
Of course, the study participants were asked to give their own estimates. What they were actually thinking of their abilities is a matter of debate that would need a more clever study to deduce.
How is that not disputed by those graphs? While there is distortion at both ends (particularly the lower end), it nevertheless shows self-assessment is positively correlated with ability.
[1]: http://www.talyarkoni.org/blog/2010/07/07/what-the-dunning-k...