Unskilled and Unaware of It (1999) [pdf] (colorado.edu)
108 points by johnny313 on Jan 11, 2018 | 82 comments



In general use, I can't recall the Effect being cited outside attacks on a political adversary or disfavored social group or group member. Basically gives a scientific basis to insulting one's ignorant opponents.

Taking the point further, can anyone think of a mechanism by which the paper has a positive effect? The ignorant person who is overly self-confident presumably would be immune from its lessons, while the informed person is not better informed about their domain.

How does the knowledge of the paper's findings benefit society? (I ask this because the paper's findings seem to be widely celebrated.)

To the extent it makes readers more hesitant to boast or claim superior knowledge, it's a good thing, but I have to think that those predisposed to such boasting will continue rather unfazed. Maybe I'm just too much a pessimist.


I'm not sure how it benefits society at large ... but I can tell you, anecdotally, how it benefits me.

1. It keeps me aware, especially in domains where I'm a beginner, of how it can affect my self-evaluation of performance.

2. It helps me when teaching beginners; I may need to help them realize where they stand in performance; if you don't know you suck, chances are you won't improve.

3. I've 'counseled' many people who are in the upper-middle of performance but don't realize how good they are :) this has helped them go for more challenging jobs, etc.


I get what you're saying. Quoting this in a discussion will likely bring more harm than good. But on the other hand, this is just research. If uncomfortable facts surface, we need to be ok with that. All areas need to be studied, including those that shine a poor light on society and those who inhabit it.


The main lesson I take from the paper is that self assessments aren't worth all that much.

So when someone says they are good at something, don't just take their word for it.


But a main lesson you could be taking is that you shouldn't trust your own self assessment. If you think you're good at something, don't take your own word for it.


Another main lesson you could be taking is that this paper is flawed and doesn't actually prove anything about self-assessments in general, so don't automatically second-guess anyone's word for it, even your own. Find the link elsewhere in this thread titled "what the Dunning-Kruger effect is and isn’t".


>Taking the point further, can anyone think of a mechanism by which the paper has a positive effect?

Yes - by reinforcing in people that confidence is disjoint from competence, and that 'trust-but-verify' is the best policy when evaluating people's competence.


Remember the old Greek who knew that he knew nothing? You don't sound like him. If you don't approach the issue - unawareness of one's incompetence - with humility, I'm sorry, but there's really only one thing you are: confident. Now, of course, me pointing this out may be moot, or is it? Maybe: I don't claim to know anything other than what both of us already know, which, according to that old Greek, is what actually counts. Or is it?


I'm not claiming competence in anything. All I am doing is saying that when evaluating competence that you should test the thing you are looking for - competence - and not accept confidence as a substitute or proxy for competence.


The way I wanted to make it look was that those who thought they knew what they were talking about have no way of knowing on which side of the pond they were. :)

EDIT: ...because your tools are useless in the face of ignorance.


In general, I use it to explain why I tend to listen to those who express doubt more than those who express certainty.

It is useful in pointing out why I prefer medical professionals who express doubt vs those who are super confident.

Case in point: Physical therapists and massage therapists. The farther you go from what your hospital provides, the greater the certainty and confidence signaling. Same with a lot of alternative medical practices. It is effective in explaining why so many people flock to such professionals - because they always diagnose you and tell you what your problem is. And then the patients turn around and say "These doctors are incompetent, and their methodology is a sham. They could not tell me what my problem was, with all their years of training, whereas the <insert alternative professional> figured it out in just one sitting. And it all made sense, too!"


The benefit of any scientific paper is that it benefits science -- it is not necessary that science accomplishes anything more. (The fact that science benefits society is a nice consequence, but not necessarily an immediate one.)

While I think there are in fact useful insights that the individual can glean directly from this paper, we shouldn't forget that scientific papers exist for a larger purpose than to be the tools of self-improvement or the weapons of our social wars.


I get what you're saying, but this seems close to pushing science as a goal in itself rather than a means to an end.

Pure research is often useful to society because we don't know what's going to prove useful beforehand. Knowledge can also be useful to a person simply because it satisfies their curiosity.

But in both cases it's the potential benefit to people that makes it worth doing.


Your viewpoint is cultural, FWIW. In many cultures (eg. most Confucian ones) and subcultures (eg. academia, wealthy ancient Greeks) knowledge of the truth was considered an inherent value that's greater than its application to messy human interests. Application of science debases the pure research - it's no longer as likely to be true, because now you have introduced human incentives and emotions as a consideration.

Growing up in a bicultural Asian/white household, reconciling these two viewpoints was one of my greatest sources of childhood angst. My Asian academic dad was all about learning for its own sake, and he'd look down upon the politicians and businessmen who sought to use science to further their own ends. My white teacher mom was very much "What's the point of information if you don't use it to do some good in the world?" This is a gross generalization, but I've seen strong echoes of this in the working world as well, eg. the stereotypical Chinese Ph.D working in a lab or at a computer whose results are then going to get commercialized by a Harvard MBA.

The reconciliation I eventually came up with is basically "You get what you ask for." On a factual level, cultures that value truth for its own sake tend to have a more accurate understanding of the truth, but then they get steamrolled by cultures that value truth for what it can do for them, but only if that other culture is on the same basic technology level. Then you get a very interesting dynamic where the usage of science for military purposes (often viewed as a "deal with the devil" by those in academia) ends up "carving out" safe spaces for scientists and academics where pure research is protected and supported by politicians for its usefulness.


You're misunderstanding my post. Sorry if I wasn't clear.

I think that satisfying your own curiosity/the desire for knowledge is a valid reason to pursue knowledge.

I don't think knowledge needs to be directly useful to be worth pursuing.

But, all cultures and people pursue knowledge they believe to be valuable in some way.

No culture encourages its people to record the number of specks of dust on their floor before they sweep just because they value truth.


I was being casual about my phrasing. Yes, we expect there to be benefit from the general execution of science as a practice at the macro scale. My point is simply that the same expectation cannot be applied to the micro scale of an individual research effort because, as you said, the benefits are unknowable in advance.


> How does the knowledge of the paper's findings benefit society?

At a minimum (assuming that it held up; it's been widely criticized since) it would tell us that self-assessment, even aside from intentional dishonesty, is a dangerous basis for assessing ability in intellectual domains.

Conversely, it also would inform us that people who mis-assess their own abilities in certain ways may well be honest rather than padding or demonstrating false humility.


> Basically gives a scientific basis to insulting one's ignorant opponents.

I suspect that's exactly why it's popular, and why the critiques and followup papers that show this result doesn't hold up to real-world scrutiny don't seem to have any effect on the public perception of this paper.

It gives people an excuse to feel superior. I think it's a meta effect, because it's one of those specious, great sounding results that is super easy to understand and super easy to believe, but isn't actually true. Everyone has someone in their life that is aggressive and annoyingly over-confident.

This paper has some really big flaws, it's probably a bad idea to apply it to any real world situation including opponents. It can't benefit society to have everyone making incorrect assumptions that confidence proves a lack of skill.


I think they suggest that people can be made aware of their incompetence after feedback, but the right kind of feedback often isn't given because of cultural factors that make it unacceptable to give "negative" feedback and also hard for people to accept "negative" feedback.

That idea seems positive to me. If we could change our feedback mechanisms (via education and so forth), we could increase self-awareness and reduce incompetence.

Also, we tend to think of "incompetence" as extremely negative, but they use it in a specific way that is meant to be descriptive and not an insult.

That's my take on it anyway. I really don't know.


They only highlight the problem (mis-estimation of one's skill), and its interaction with current skill level. But that doesn't mean a person's sense of their skill level couldn't be corrected. This is exactly what one-on-one expert feedback is for.

I like the paper, because it's a good reminder to seek out tutors / mentors early when learning something, and to take them seriously when they say you have a lot to learn.


You really only agree with scientific papers that have "positive effects"? You do see the irony here, right?


ballenf didn't mention anything about only agreeing with scientific papers that have positive effects. ballenf is asking whether "the knowledge of the paper's findings benefit society" and whether there is "a mechanism by which the paper has a positive effect".


Benefiting society never was the main goal of science; those benefits are byproducts of science.


Trust me, there's a Dunning-Kruger snake oil for EVERY woe.


People value confidence over competency. If you always question things because you know enough to know that you don't know enough, people aren't going to take you very seriously. Better for yourself if you fake it till you make it. At least then you will have opportunities to advance in life. I'd rather suffer from Dunning-Kruger than impostor syndrome.


This is related to a different effect, which is even more dangerous than DK.

https://en.wikipedia.org/wiki/Dr._Fox_effect

Humans cannot evaluate competence objectively without formal performance testing. (And sometimes not even then.)

But we can evaluate confidence, charm, persuasiveness, and social proof.

So we tend to use them as a proxy, and choose leaders who lack competence but can demonstrate them, over leaders with genuine competence who can't.

This is a very bad thing, and possibly the single biggest cause of political, social, and economic problems in our history.


The Dr. Fox Effect should be called the Lacan effect since he pushed it to the limit.


People don't value confidence over competence, they just use confidence as a proxy for competence where they don't have the competence to judge your competence.


>People don't value confidence over competence, they just use confidence as a proxy for competence where they don't have the competence to judge your competence.

You can have it both ways. I've seen competent people go ahead with someone's cocky ideas because their confidence was so strong that they suspected there was probably a gap in their own understanding, and so they put aside their own doubts.


Absolutely. You've no idea the number of times I've sat in a meeting and watched someone who clearly had no idea WTF he was talking about sway a decision maker because he presented his BS (and also swatted down more knowledgeable dissenters) with extreme confidence.


This is probably the one and only "work scenario" I've seen play out again and again at every company I've ever worked at, without exception. These people are everywhere.


I have never seen this... which makes me wonder if I am the one BSing


You might be working in a sector that they don't consider sexy and exploitable enough.


what is the best way to combat individuals like this? People that swat down ideas with fallacies of logic, etc.


Learn what their goals and objectives are and position your solution to go after that.

Use tricks like “that’s a great idea, my only concern is that it might fail in X scenario, Y solution is similar but doesn’t suffer from that specific issue. What do you think?”


Sometimes being "young and naive" is a good thing.

After all, if you start projects with a YES attitude, you are more likely to complete them than if you never start any projects because of your NO attitude.

Sometimes == not always; being competent is generally good thing :)


I wonder how true this is if your project does not involve anyone but you.

I always suspected such people succeeded more with that attitude because others are more likely to help them. The attitude gives people a signal of likelihood of success, so they are willing to put in more to help them. People often decline to help others if they think failure is likely.

I'm sure the YES attitude does help you a bit internally, but I suspect most of the gain is in others helping you than you somehow performing better.


Well, my point was also that:

If you don't start, you'll never finish.

It's logically obvious :) Sometimes getting started is half the battle, and sometimes we overestimate how much we need to think before we jump.


> At least then you will have opportunities to advance in life.

but then as you advance, as you acquire more skills, you also improve your ability to gauge your own competence.

so, clearly your only option is to not progress in life at all, or you risk ending up with your dreaded impostor syndrome /s



What an amazing read! I saw the correlation between actual and perceived ability and thought to myself: yeah, because the further up the scale you go, the more you'd expect it to feel logarithmically difficult, if only because it becomes harder to imagine (for oneself) actually reaching the top score (100%).

That being said, there might be some statistical magic hidden in the graph: the further up the scale a group sits, the less it relies on self-rewardingly guessing a lower score, with genuine self-underselling (done to seem modest) evening out against the ambitious but genuinely good ones.


"Participants were 95 Cornell undergraduates ... from ... courses in psychology".

Bad job, "researchers". That's not a suitable pool for studying this. That group has already been filtered for competence. It's a rerun of the old joke of someone asking a drunk why he's looking for his keys near a streetlight when he didn't lose them there. "The light is better here." (In tech, some clueless types do user experience testing on their own staff. Same problem.)

Try recruiting some subjects at the unemployment office.


Researchers use students as their test participants when they're doing more scattershot-type research because they're a lot cheaper to work with. The cost of sampling participants from the general public is not necessarily proportional to the expected value of the research.

What's supposed to happen, then, is that any interesting findings will then be replicated using a more expensive, higher-quality study. That has kindasorta happened - there has been quite a bit of follow-up research, but it tends to still be performed on undergraduates.

All that said, even if the population they studied was only undergraduates, the findings they report in the paper would still be interesting, if only for improving college-level educational programs.


It’s not these researchers specifically, it’s basically every psychology study ever done. We don’t know much about human psychology, but we know plenty about the psychology of Ivy League undergrads.

http://www.slate.com/articles/health_and_science/science/201...


I think the Dunning–Kruger effect is fairly well established, so I'm not sure why you'd put "researchers" in scare quotes.


There've been a number of criticisms of Dunning-Kruger in recent years, many focusing on the sample-bias problem that Animats brings up (which is also common to a lot of attempts to reproduce it), and some also mentioning the possibility of floor effects (i.e. if you have a natural statistical variance around some quantity that is close to the bottom of what your test can measure - as if you're unskilled - then your average will be skewed upwards because for most people "there's nowhere to go but up"). Here's a relatively recent literature overview:

https://pdfs.semanticscholar.org/a387/04facf73e00523bff8182d...

tl;dr: It's complicated. Science is hard. Effects similar to Dunning-Kruger's result have been observed in multiple different domains, but because competence usually has a power-law rather than normal distribution, it's hard to say whether observed effects are due to real metacognitive skills or sampling biases. Controlled experiments are also very difficult because "training people up" inevitably results in some fraction of the sample dropping out, which also introduces sample bias.
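The floor-effect point above can be demonstrated with a toy simulation (my own sketch, not the method of any of the papers): give everyone an unbiased but noisy self-estimate, clip it to the 0-100 scale, and the bottom quartile comes out looking "overconfident" while the top quartile looks "underconfident", with no metacognitive deficit anywhere.

```python
import random
from statistics import mean

# Toy illustration: true skill is uniform on 0..100, and each person's
# self-estimate is their true skill plus symmetric Gaussian noise,
# clipped to the scale. The clipping alone skews the extremes:
# near the floor "there's nowhere to go but up", and vice versa.
random.seed(0)
n = 10_000
skill = [random.uniform(0, 100) for _ in range(n)]                 # actual score
estimate = [min(100, max(0, s + random.gauss(0, 25))) for s in skill]

pairs = sorted(zip(skill, estimate))                               # rank by skill
quarter = n // 4
for i in range(4):
    chunk = pairs[i * quarter:(i + 1) * quarter]
    gap = mean(e for _, e in chunk) - mean(s for s, _ in chunk)
    print(f"quartile {i + 1}: mean estimate - mean score = {gap:+.1f}")
# The gap comes out positive for the bottom quartile and negative for the
# top one, even though every estimate here is unbiased before clipping.
```

This is only the floor/ceiling half of the critique; the sampling-bias and power-law points from the overview are separate arguments.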


Not scare quotes. The impression I got was, they’re not competent so not real researchers.


That's what the phrase "scare quotes" means: https://en.wikipedia.org/wiki/Scare_quotes


Yes, scare quotes. That is what scare quotes are.


Please let's not insult professionals - "researchers" - on this forum without good reason.

You, I, Dunning and Kruger all understand the limitations of experiments with local undergrads and why it's done. I respectfully suggest that D&K might understand that better than you or me, given our skill sets.


And so we, knowing this limitation, must ignore it?


No, and that's not what I said. I didn't even say they shouldn't be criticized for it.

It's not appropriate to insult individuals for using an industry-standard practice. Criticize the practice, even criticize the people for perpetuating the practice, and I have no problem. But calling two well-trained people (Stanford & Cornell PhDs) "researchers" - which reads as "so-called researchers" to the native speaker - when their experiment was run like many, many others for well-understood practical reasons is not appropriate.


User andersonfreitas posted the reference for a supporting article with a much larger and international cohort.

See below for the ref and a link to the article.


The joke's awesome btw; never heard this one before.


In which case I commend to you the tales of Nasrudin https://en.wikipedia.org/wiki/Nasreddin


I've never been more aware of my own Dunning-Kruger effect than since I started playing violin.

At first I was getting better fast and learning quickly! I didn't often record my playing, and I should have. After 1 year I thought I had made significant improvements and could play some songs "pretty good".

Into the second and third years I started being able to hear how scritchy and a little out of tune I was in almost every song. I also started listening to recordings of myself and shuddered at the sound.

I didn't know how bad I was because I was making such great improvements (going from ignorant and unable to play twinkle twinkle little star). It wasn't until I got mediocre that I could even hear how mediocre I am.

I think some of it is the big leap from total ignorance to beginner knowledge. It "feels" like you are getting better quickly and learning fast because you have no reference.


This is a strange way of phrasing the result. Since the participants were all university students, their skill distribution in all those fields was probably above average with respect to the overall population (especially since they were Cornell students). So I do not find it that surprising that they assessed themselves higher than warranted: in their experience they had probably encountered many more people with a lower skill level than theirs.


I believe myself to be a competent programmer. Does this mean I'm incompetent?

It's possible; I can't ignore the possibility.

There goes my self-confidence again.


This binary interpretation of the Dunning-Kruger effect leads to wild exaggeration.

The most skilled people were only off by about 10%. The effect is just that people seem to think the curve goes from 0.5 to 0.85 instead of 0.0 to 1.0, but otherwise they fit themselves into the same rank-ordered buckets within that range.
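The compressed-band reading above can be sketched numerically (my own toy numbers, not from the paper): everyone self-ranks in the correct order, just squeezed into a 50th-85th percentile band, and that alone reproduces the familiar crossover plot.

```python
# Toy model of the comment above (illustrative numbers only): self-estimates
# track actual percentile monotonically, but compressed into 50..85
# instead of spanning the full 0..100 range.
def perceived(actual_percentile: float) -> float:
    return 50 + 0.35 * actual_percentile   # maps 0..100 -> 50..85

for actual in (10, 35, 65, 90):            # rough quartile midpoints
    p = perceived(actual)
    print(f"actual {actual:>2}th pct -> perceived {p:.2f}th "
          f"({'over' if p > actual else 'under'}estimate)")
# The bottom of the range overestimates by ~40 points and the top
# underestimates by ~8, yet nobody ever ranks themselves out of order.
```

Under this sketch the crossover point (where perceived equals actual) sits around the 77th percentile, which is why only the most skilled appear to underestimate themselves.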


>Does this mean I'm incompetent?

>It's possible; I can't ignore the possibility.

Yes, you can ignore it; just focus on things that actually matter. While others question their competence, you can actually get stuff done.


While it is possible, this paper doesn't really make it more likely. Do read the paper if you're worried about it, because it doesn't prove what it claims to.

This paper is only valid for what it actually studied: some subjective tasks and some standardized test questions. Ability to get the humor in a joke was one of the 4 tasks. Subsequent papers have shown that this result doesn't apply to highly skilled or complex cognitive tasks like computer programming. In fact, the effect reverses, and highly skilled people are disproportionately better at knowing their competence. https://www.talyarkoni.org/blog/2010/07/07/what-the-dunning-...

This paper also didn't control for a possibly over-confident sample: the subjects of the study were all Cornell undergrads, who may be prone to being overconfident, or to just acting overconfident.

That said, as a programmer, it's always a good idea to be confident about your willingness and attitude while being humble about your skills. Assume your programs have bugs, because they do. I believe the reason my competence is increasing is because I assume all code is wrong, and I take extra time to make sure it does what I expect.


Doesn't this self-doubt imply that you are now competent? I have always found the Dunning-Kruger effect to be more of a fun novelty than a practical tool.


Possibly, but then the minute I stop doubting myself, I become incompetent again.


I would be interested in a cross-cultural evaluation of the Dunning-Kruger effect. This study was performed in the US. Is the effect as prominent in Russia? Sweden? China? South Africa? Across racial boundaries? Across class boundaries? I suspect a strong cultural and environmental influence.


The original study was of 65 undergrad psychologists at Cornell who got extra credit for participation. Probably a relatively narrow pool of personalities even in that region of the US.

Anyone know of replications with a wider pool?


In this study [1] the experiments were replicated in Hong Kong with 4034 high school students and in the US with 95 students.

[1] "Why Do People Overestimate or Underestimate Their Abilities? A Cross-Culturally Valid Model of Cognitive and Motivational Processes in Self-Assessment Biases" http://journals.sagepub.com/doi/10.1177/0022022116661243 (paywalled)


here's the article, for scientific purposes:

https://file.io/41t2Gv


Excellent, thanks.


Dunning-Kruger effect explanation should be in every single high school education program. Quoting Bertrand Russell: "The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt" (1933, "The Triumph of Stupidity" essay [1], also discussed in HN [2]).

[1] http://russell-j.com/0583TS.HTM

[2] https://news.ycombinator.com/item?id=10636818


Presumably, the original paper of the Dunning-Kruger effect[1].

[1] https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect


Recognized with the Ig Nobel Prize in Psychology in 2000.

https://www.improbable.com/ig/ig-pastwinners.html#ig2000


“Adventures in negative vs positive reinforcement experiments via ‘studies’ shown as social media content”


"Without reading the article"


Every time I've witnessed in person the Dunning-Kruger effect being used in real life, the person bringing it up was under its spell...


Every time you say...


TL;DR

People thought they did well on a test but actually performed poorly relative to their peers! Haha, losers!


How do you test for humor?


The third page of the study explains how they tested for humour. It's kind of fun to propose how they might have tested for each of the 4 attributes they mention, then check to see how they actually did it.


tldr know thyself


Could this have a (1999) put on it?


Yes, done. Thanks!


lol this is the original Dunning-Kruger paper.. why is it news now?



