Taking the point further, can anyone think of a mechanism by which the paper has a positive effect? The ignorant person who is overly self-confident presumably would be immune from its lessons, while the informed person is not better informed about their domain.
How does the knowledge of the paper's findings benefit society? (I ask this because the paper's findings seem to be widely celebrated.)
To the extent it makes readers more hesitant to boast or claim superior knowledge, it's a good thing, but I have to think that those predisposed to such boasting will continue rather unfazed. Maybe I'm just too much a pessimist.
1. It keeps me aware, especially in domains where I'm a beginner, of how it can affect my self-evaluation of performance.
2. It helps me when teaching beginners; I may need to help them realize where they lie in performance; if you don't know you suck, chances are you won't improve.
3. I've 'counseled' many people who are in the upper-middle of the performance range but don't realize how good they are :) This has helped them go for more challenging jobs, etc.
So when someone says they are good at something, don't just take their word for it.
Yes - by reinforcing in people that confidence is disjoint from competence, and that 'trust-but-verify' is the best policy when evaluating people's competence.
EDIT: ...because your tools are useless in the face of ignorance.
It is useful in pointing out why I prefer medical professionals who express doubt vs those who are super confident.
Case in point: Physical therapists and massage therapists. The farther you go from what your hospital provides, the greater the certainty and confidence signaling. Same with a lot of alternative medical practices. It is effective in explaining why so many people flock to such professionals - because they always diagnose you and tell you what your problem is. And then the patients turn around and say "These doctors are incompetent, and their methodology is a sham. They could not tell me what my problem was, with all their years of training, whereas the <insert alternative professional> figured it out in just one sitting. And it all made sense, too!"
While I think there are in fact useful insights that the individual can glean directly from this paper, we shouldn't forget that scientific papers exist for a larger purpose than to be the tools of self-improvement or the weapons of our social wars.
Pure research is often useful to society because we don't know beforehand what's going to prove useful. Knowledge can also be useful to a person simply because it satisfies their curiosity.
But in both cases it's the potential benefit to people that makes it worth doing.
Growing up in a bicultural Asian/white household, reconciling these two viewpoints was one of my greatest sources of childhood angst. My Asian academic dad was all about learning for its own sake, and he'd look down upon the politicians and businessmen who sought to use science to further their own ends. My white teacher mom was very much "What's the point of information if you don't use it to do some good in the world?" This is a gross generalization, but I've seen strong echoes of this in the working world as well, eg. the stereotypical Chinese Ph.D working in a lab or at a computer whose results are then going to get commercialized by a Harvard MBA.
The reconciliation I eventually came up with is basically "You get what you ask for." On a factual level, cultures that value truth for its own sake tend to have a more accurate understanding of the truth, but then they get steamrolled by cultures that value truth for what it can do for them, but only if that other culture is on the same basic technology level. Then you get a very interesting dynamic where the usage of science for military purposes (often viewed as a "deal with the devil" by those in academia) ends up "carving out" safe spaces for scientists and academics where pure research is protected and supported by politicians for its usefulness.
I think that satisfying your own curiosity/the desire for knowledge is a valid reason to pursue knowledge.
I don't think knowledge needs to be directly useful to be worth pursuing.
But all cultures and people pursue knowledge they believe to be valuable in some way.
No culture encourages its people to record the number of specks of dust on their floor before they sweep just because they value truth.
At a minimum (assuming that it held up; it's been widely criticized since) it would tell us that self-assessment, even aside from intentional dishonesty, is a dangerous basis for assessing ability in intellectual domains.
Conversely, it would also tell us that people who mis-assess their own abilities in certain ways may well be honest, rather than padding their record or displaying false humility.
I suspect that's exactly why it's popular, and why the critiques and followup papers that show this result doesn't hold up to real-world scrutiny don't seem to have any effect on the public perception of this paper.
It gives people an excuse to feel superior. I think it's a meta effect, because it's one of those specious, great sounding results that is super easy to understand and super easy to believe, but isn't actually true. Everyone has someone in their life that is aggressive and annoyingly over-confident.
This paper has some really big flaws; it's probably a bad idea to apply it to any real-world situation, including sizing up your opponents. It can't benefit society to have everyone making the incorrect assumption that confidence proves a lack of skill.
That idea seems positive to me. If we could change our feedback mechanisms (via education and so forth), we could increase self-awareness and reduce incompetence.
Also, we tend to think of "incompetence" as extremely negative, but they use it in a specific way that is meant to be descriptive and not an insult.
That's my take on it anyway. I really don't know.
I like the paper, because it's a good reminder to seek out tutors / mentors early when learning something, and to take them seriously when they say you have a lot to learn.
Humans cannot evaluate competence objectively without formal performance testing. (And sometimes not even then.)
But we can evaluate confidence, charm, persuasiveness, and social proof.
So we tend to use them as a proxy, and choose leaders who lack competence but can demonstrate them, over leaders with genuine competence who can't.
This is a very bad thing, and possibly the single biggest cause of political, social, and economic problems in our history.
You can have it both ways. I've seen competent people go ahead with someone's cocky ideas because that person's confidence was so strong that they suspected there was probably a gap in their own understanding, and so they put aside their own doubts.
Use tricks like “that’s a great idea, my only concern is that it might fail in X scenario, Y solution is similar but doesn’t suffer from that specific issue. What do you think?”
After all, if you start projects with a YES attitude you are more likely to complete them than if you never start any projects because of your NO attitude.
Sometimes == not always; being competent is generally a good thing :)
I always suspected such people succeeded more with that attitude because others are more likely to help them. The attitude gives people a signal of likelihood of success, so they are willing to put in more to help them. People often decline to help others if they think failure is likely.
I'm sure the YES attitude does help you a bit internally, but I suspect most of the gain is in others helping you than you somehow performing better.
If you don't start, you'll never finish.
It's logically obvious :) Sometimes getting started is half the battle, and sometimes we overestimate how much we need to think before we jump.
but then as you advance, as you acquire more skills, you also improve your ability to gauge your own competence.
so, clearly your only option is to not progress in life at all, or you risk ending up with your dreaded impostor syndrome /s
That being said, there might be some statistical magic hidden in the graph: the higher up a group sits, the less it relies on rewarding itself with a flattering guess, and it may split into people who genuinely undersell themselves out of modesty and the ambitious-but-really-good ones, with the two roughly evening out.
Bad job, "researchers". That's not a suitable pool for studying this. That group has already been filtered for competence. It's a rerun of the old joke of someone asking a drunk why he's looking for his keys near a streetlight when he didn't lose them there. "The light is better here." (In tech, some clueless types do user experience testing on their own staff. Same problem.)
Try recruiting some subjects at the unemployment office.
What's supposed to happen, then, is that any interesting findings will then be replicated using a more expensive, higher-quality study. That has kindasorta happened - there has been quite a bit of follow-up research, but it tends to still be performed on undergraduates.
All that said, even if the population they studied is only undergraduates, the findings they report in the paper would still be interesting even if they were only useful for improving college-level educational programs.
tl;dr: It's complicated. Science is hard. Effects similar to Dunning-Kruger's result have been observed in multiple different domains, but because competence usually has a power-law rather than normal distribution, it's hard to say whether observed effects are due to real metacognitive skills or sampling biases. Controlled experiments are also very difficult because "training people up" inevitably results in some fraction of the sample dropping out, which also introduces sample bias.
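To make the "sampling bias" point above concrete, here is a minimal simulation (my own sketch, not from the paper or the follow-up research; all numbers are made up): if everyone self-assesses with the same symmetric noise, simply grouping people by their *actual* score produces a Dunning-Kruger-shaped plot, because the bottom group's errors can only push estimates up and the top group's can only push them down.

```python
import random

# Toy model: true percentile is uniform; self-estimate is the true
# percentile plus symmetric noise, clipped to the valid 0..1 range.
random.seed(0)
N = 10_000
actual = [random.random() for _ in range(N)]
estimate = [min(1.0, max(0.0, a + random.gauss(0, 0.25))) for a in actual]

# Group by ACTUAL performance quartile and compare group means:
# the bottom quartile "overestimates" and the top "underestimates",
# purely as an artifact of noise plus clipping (regression to the mean).
pairs = sorted(zip(actual, estimate))
q = N // 4
for i in range(4):
    chunk = pairs[i * q:(i + 1) * q]
    mean_actual = sum(a for a, _ in chunk) / q
    mean_est = sum(e for _, e in chunk) / q
    print(f"quartile {i + 1}: actual {mean_actual:.2f}, self-estimate {mean_est:.2f}")
```

This doesn't prove the original effect is spurious, only that the characteristic plot shape can appear without any metacognitive deficit at all, which is exactly why disentangling the two is hard.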
You, I, Dunning and Kruger all understand the limitations of experiments with local undergrads and why it's done. I respectfully suggest that D&K might understand that better than you or me, given our skill sets.
It's not appropriate to insult individuals for using an industry-standard practice. Criticize the practice, even criticize the people for perpetuating the practice, and I have no problem. But calling two well-trained people (Stanford & Cornell PhDs) "researchers" - which reads as "so-called researchers" to the native speaker - when their experiment was run like many, many others for well-understood practical reasons is not appropriate.
See below for the ref and a link to the article.
At first I was getting better fast and learning even faster! I didn't often record my playing, though I should have. After 1 year I thought I had made significant improvements and could play some songs "pretty good".
Into the second and third years I started being able to hear how scritchy and a little out of tune I was in almost every song. I also started listening to recordings of myself and shuddered at the sound.
I didn't know how bad I was because I was making such great improvements (from ignorant and being unable to play twinkle twinkle little star). It wasn't until I got mediocre that I could even hear how mediocre I am.
I think some of it is the big leap from total ignorance to beginner knowledge. It "feels" like you are getting better quickly and learning fast because you have no reference.
It's possible; I can't ignore the possibility.
There goes my self-confidence again.
The most skilled people were only off by about 10%. The effect is just that people seem to think the scale runs from 0.5 to 0.85 instead of 0.0 to 1.0, but otherwise place themselves in the same rank-ordered buckets within that range.
>It's possible; I can't ignore the possibility.
Yes you can ignore it, just focus on things that actually matter. While other question their competence, you can actually get stuff done.
This paper is only valid for what it actually studied: some subjective tasks and some standardized test questions. Ability to get the humor in a joke was one of the 4 tasks. Subsequent papers have shown that this result doesn't apply to highly skilled or complex cognitive tasks like computer programming. In fact, the effect reverses, and highly skilled people are disproportionately better at knowing their competence. https://www.talyarkoni.org/blog/2010/07/07/what-the-dunning-...
This paper also didn't control for a possibly over-confident sample: the subjects of the study were all Cornell undergrads, who may be prone to being overconfident, or just to acting overconfident.
That said, as a programmer, it's always a good idea to be confident about your willingness and attitude while being humble about your skills. Assume your programs have bugs, because they do. I believe the reason my competence is increasing is because I assume all code is wrong, and I take extra time to make sure it does what I expect.
Anyone know of replications with a wider pool?
 "Why Do People Overestimate or Underestimate Their Abilities? A Cross-Culturally Valid Model of Cognitive and Motivational Processes in Self-Assessment Biases" http://journals.sagepub.com/doi/10.1177/0022022116661243 (paywalled)
People thought they did well on a test but actually performed poorly relative to their peers! Haha, losers!