It does not mean that you give the outward appearance of being humble in any way.
It is defined in the article as "an awareness that one’s beliefs may be wrong," and the article adds that "intellectually humble people can have strong beliefs, but recognize their fallibility and are willing to be proven wrong on matters large and small."
This was tested with a study in which "participants read essays arguing for and against religion, and were then asked about each author’s personality", which found that "people who displayed intellectual humility also did a better job evaluating the quality of evidence."
There is a distinction between an individual's internal mental processes (as tested in this study) and the way they present themselves externally. Being outwardly assertive and confident absolutely wins elections and grants, but this is not at odds with an ability to internally re-evaluate one's beliefs.
I particularly enjoy this quote from the article, as it reveals that the authors may have missed a subtlety:
“If you’re sitting around a table at a meeting and the boss is very low in intellectual humility, he or she isn’t going to listen to other people’s suggestions,” Leary said. “Yet we know that good leadership requires broadness of perspective and taking as many perspectives into account as possible.”
I absolutely agree that good leadership requires broadness of perspective, but this does not imply that every suggestion should get an audience at meetings -- capable leaders often have a much broader range of experience than their reports, and dismiss suggestions not out of arrogance or closed-mindedness, but simply because they have already evaluated and discounted that path, and have elected not to spend their limited time bringing everyone else up to speed. (Which can have its own issues, but that's a digression.)
Edit: of course you can often be assertive about some things while being intellectually humble. But what I mean to say is that the level of assurance that is expected of a CEO is not compatible with that humility.
Intellectually humble people are at a disadvantage in competition, although they may make better technical decisions.
I think so. There are a few humble people at the top. Take Lincoln. He wasn't ethically perfect but set the bar pretty damn high for both success and humility.
In your example the only choices are to be definitive or to lie, but there are many choices in between.
Check out Merkel's statement about Trump right after the election. She's supposed to congratulate and welcome the new leader of a close western ally. The truth is she's disgusted; is she forced to lie? I think she did a great job not damaging the relationship with her statement, while still signaling her concerns.
Incorrect. They are absolutely at odds. Have you never noticed that at the times you are most sure of yourself it's easier to appear externally confident and assertive, compared to times when you have self-doubt?
Maybe it's fair to say it doesn't necessarily impact how you project confidence, but that sure doesn't mean they're not at odds, and it sure can mean projecting confidence requires expending a lot more energy.
Secondly, you are misreading the leadership quote. You say it's OK for a leader to dismiss an idea already evaluated, which is true. However, the quote refers to leaders who do not listen to suggestions, which is very different, and certainly poor leadership when there is no other context to justify not listening.
Learning it is intimately tied in with learning self-awareness & emotional intelligence, and I suspect that different people all use different mental tricks to get there, depending on how they're wired up there and what their value systems are. But for me, the key breakthrough was understanding that everything is inherently uncertain; you'll never be 100% sure of any mental model in your head, because they are, ultimately, just models of the world. But my raison d'etre is to find out which of my mental models of the world is correct, and which of them should be discarded. Your opinions are taken more seriously when they're expressed confidently; they have more of a chance to impact the world around you, and only through impacting the world will you get feedback about whether they're correct. Therefore, if I want to prove my models wrong, the best way to act is to assume that they're right. Once they are proven wrong it's not worth holding on to them any more, but as long as they exist in the grey area of maybe-wrong-maybe-right, just assume that they're right and see what happens.
The claim is that in general they are forces at odds, and that in most of those situations it takes more energy to project confidence or assertiveness.
I think I would otherwise mostly agree with your post, that it is a skill that can be improved with practice, and to different degrees in different people.
One of the important realizations as an adult for me was that there was nothing inherent about these mental models. They are, after all, paradigms and they can most certainly be changed. Well, now that I use that word, I think that realization came from reading Covey's 7 habits book.
I understand the leadership quote, but I find it interesting that the author did not go into the nuances of why leaders might not listen to suggestions, instead sending the reader down the garden path of "all suggestions are worth hearing", which I believe to be false.
Not quite, there is a subtle but meaningful distinction. You are inferring the author to mean "all suggestions are worth hearing" or that good leaders should not be able to "dismiss" suggestions. Neither of these is stated or has context to be implied.
The quote should simply be taken at face value to mean generally speaking leaders should "listen to other people's suggestions", and take a variety of perspectives into account.
I disagree. You can harbor uncertainties and doubts about the outcome yet still be confident of the course of action you must take.
My aunt recently had a brain injury and I was the only family available to attend to her. I had to choose between letting her die or opting for a controversial surgery that also carried a risk of mortality, little chance of improving her condition, and the possibility of incurring a lot of debt.
I was uncertain, scared, yet also confident that some attempt to save her should be made.
However I am glad you were able to be strong for your aunt.
> "They were also less likely to criticize that politician for “flip-flopping.” There was less variability among Democrats: Democrats, whether intellectually arrogant or humble, were generally less likely to criticize a politician for changing his mind."
That completely refutes the point only a few paragraphs later. I think this study, like all social science research, must be taken with a hefty grain of salt.
Saying that correlation 2 is not strong (because humility is not a good predictor of not-criticizing among Democrats) does not negate point 1.
It's very badly written, but it says that correlation 2 is present only among Republicans. Which, as you said, doesn't negate point 1, but it's still a weird and interesting result.
Is it possible to buy into only parts of an ideology or is it actually just pragmatic and a display of humility in accepting that some parts of an ideology may work in some situations?
Yes. One example is Christianity.
>is it actually just pragmatic and a display of humility in accepting that some parts of an ideology may work in some situations?
Yes. In fact, good science requires that practitioners be willing to consider new ideas that may be contradictory and superior to their own. I'm not sure why, for some of us, this seems the obvious and only logical way to proceed, while many will spend a lifetime not realizing it.
This is very difficult to word. I have often thought pragmatism is at odds with ideology.
My take would be no, it's definitely not ideological in that case. On a macro scale war comes to mind. In WWII would you agree countries had a lot of pragmatic motivations unrelated to Nazi ideology?
Individually, what about something as simple as being a vegetarian? The motivation could be ideological, like religion or choosing not to harm animals, but some just see it as a healthy lifestyle.
Sure - except if one is outwardly confident about something that they're actually not completely sure about, it implies that they aren't concerned about the possibility of misleading people. This seems diametrically opposed to being humble.
Similarly, when we teach things we start with simple concepts and hide the ugly truths. The simplified versions might be wrong, but they are easier to learn and true enough to be useful. There have been many attempts to stop teaching those half-truths and go straight to the meaty concepts, but all of them have proven worse than teaching half-truths first. Thus it seems that people are good at handling being misled, later updating their understanding to fit new data.
This is not an evil thing per se, because this is simply how human motivation works -- if a leader is intellectually sound but always externalizing caveats and risks, verbally or non-verbally, he will be less effective than a leader who is equally intellectually sound but uses all the motivational tools at his disposal.
It is apparently not confirmed in other cultures, where interpersonal relationships are traditionally conceived of differently.
Corresponding advice from http://highexistence.com/dunning-kruger-effect/: Be confident in the areas that you suck at.
Many people have in mind a curve of confidence vs. knowledge that looks nothing like the curve from the paper: http://www.talyarkoni.org/blog/wp-content/uploads/2010/07/du..., with a good writeup at http://www.talyarkoni.org/blog/2010/07/07/what-the-dunning-k...
"The first principle is that you must not fool yourself and you are the easiest person to fool."
There are questions that should be answered categorically [straightforwardly yes, no, this, that]. There are questions that should be answered with an analytical (qualified) answer [defining or redefining the terms]. There are questions that should be answered with a counter-question. There are questions that should be put aside. These are the four ways of answering questions.
The core is a 2x2 with strength of views on one axis and holding strength on the other.
A fox is someone who knows many things and a hedgehog is someone who knows one big thing.
The degenerate versions are the cactus, which stubbornly knows one thing, and the weasel, whose views are two-faced.
The weasel engages in doublethink, while the fox can hold contradictory models in mind and create a new one, or use whichever better fits the situation (e.g. wave-particle duality).
The fox has better perspective, but because its energy is spread out among too many things, it has trouble implementing. The hedgehog's energy is directed and more focused, with more momentum, but the hedgehog still has the ability to steer, unlike a cactus.
Intellectual humility IMO is about flexibility of mental models. In terms of real world success, hedgehogs have the most while foxes are more armchair philosophers.
The article is telling cactuses to be more like hedgehogs, but the one-dimensional terminology ropes in foxes as well, who, despite also having intellectual humility, don't usually have as much direct impact on the world.
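The 2x2 I'm describing can be sketched as a lookup table; the axis labels here ("strong"/"weak" views, "rigid"/"flexible" holding) are my own shorthand, not from the article:

```python
# A sketch of the 2x2: strength of views on one axis, how rigidly they
# are held on the other. The placement of each archetype follows the
# descriptions above (labels are illustrative, not the article's).
ARCHETYPES = {
    ("strong", "flexible"): "hedgehog",  # one big model, but still able to steer
    ("strong", "rigid"):    "cactus",    # stubbornly knows one thing
    ("weak",   "flexible"): "fox",       # many models, swaps to whichever fits
    ("weak",   "rigid"):    "weasel",    # two-faced doublethink
}

def classify(view_strength: str, holding: str) -> str:
    """Return the archetype for a given corner of the 2x2."""
    return ARCHETYPES[(view_strength, holding)]
```

On this reading, intellectual humility lives on the "flexible" column, which is why the article's one-dimensional framing lumps foxes and hedgehogs together.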
Did the paper conclude that changing a person to be more intellectually humble causes the benefits described? If not, any promotion or changing of behavior seems premature.
There are people who know they are intellectually modest who read this article, and it may reinforce their belief that being intellectually modest is the one true path! There are people who know they are intellectually arrogant and may feel they are destined for hell on reading this article. There are advantages to being an egotist as well, and disadvantages to being modest.
The other subtle incoherence I wanted to point out: at some point the author claims intellectually modest people exist in both camps, conservatives and liberals.
But he does say that Republicans are by far more likely than Democrats to reject a person who has "flip-flopped" [which may or may not have been for good reasons]. So assuming most conservative people are Republicans, and most liberal people are Democrats, there is some intellectual quality that differentiates their bases. This probably needed more analysis, in my opinion.
Google Rework has some great stuff about it https://rework.withgoogle.com/blog/how-to-foster-psychologic...
Of course this doesn't explain why this election was different from any other. I guess increasing polarization between left and right makes it more of a problem in practice than usual.
Trump was not too right for the right. The RNC establishment was against him because, in many respects, he looked liberal and held liberal ideas.
Being politically incorrect may make him look "more extreme right" to the left but there's no evidence that he is (When compared, for example, with another important primary candidate like Ted Cruz).
It's really mind-boggling how he took over the party.
Hillary Clinton was a Yale Law grad, successful lawyer, Walmart board member, First Lady, Senator, and Secretary of State. She was named America's Most Admired Woman 20 times.
I'm familiar with equivocation. It really doesn't impress me.
In a recent ambitious and well-funded replication project, only 39% of published psychology studies could be replicated at all, and of the successful minority, the average measured effect size was half that of the original study, in some cases falling below generally accepted standards for statistical significance. The take-away from the replication project is that, when reading a random psychology study, the reader must remember that the study's probability of having any relation to reality is less than 50%.
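As a back-of-envelope check of that take-away (using a hypothetical sample of 100 published studies and the figures cited above):

```python
# Hypothetical batch of published studies, using the replication project's
# reported figures: 39% replication rate, with replicated effects measuring
# about half the original effect size on average.
published = 100
replication_rate = 0.39
effect_retained = 0.5  # fraction of the original effect size that survived

replicated = round(published * replication_rate)  # studies that replicated at all

# Fewer than half of the published results replicate, which is the
# "less than 50%" figure in the comment above.
assert replicated / published < 0.5
```

This is only the commenter's framing of the numbers, not a statistical model; the original project reports the per-field breakdowns in more detail.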
More on this topic: http://arachnoid.com/psychology_and_alchemy
We all respect your fabulous software work, but HN threads are for conversation. Nothing spoils good conversation like obsessive repetition. Taken as a whole, your comments on this matter are not conversation, they are a harangue—apparently an interminable one. Harangue is off-topic here, regardless of whether you're right or not.
It's interesting to perceive "wrong conversation" as more valuable than "correct statements" and is very telling about HN's current role.
You know, starting out with a wild exaggeration, an "alternative fact," undermines your otherwise meritorious point.
> ... I need to ask you to stop using HN this way.
In what way -- conversations about psychology? On what basis? HN regulars often post links to psychology articles -- people don't complain about that (and IMHO they shouldn't, because most are newsworthy). And by definition, comments about those articles are as topical as the articles themselves.
Also, I never originate exchanges about psychology, I only respond to threads introduced/posted by others. If psychology were never an established topic here, I would never address it. This means you're objecting to a particular viewpoint on the topic, not the topic itself.
> We all respect your fabulous software work, but HN threads are for conversation.
Indeed they are. Case in point -- this conversation.
Solely for balance you might consider objecting to the large number of psychology articles posted here. I'm not recommending this -- I only say this to try to get you to examine your position.
Over 300 results, all saying pretty much the same thing.
The number is higher than I would have guessed, but they're certainly not saying "the same thing." In many cases they're included in the search results only because of the presence of the word "psychology" without any discussion of the topic at all.
In fact, some analyses suggest replication might be worse in the neurosciences than in psychology. The neurosciences are plagued by many of the same conceptual ambiguities as psychology, but with even smaller, more expensive studies that are underpowered, and with stronger incentives to avoid sunk-cost problems.
The most accurate summary is that replicability problems affect the biomedical sciences the most in general (empirically speaking, psychology tends to be classified as a biomedical science, in topic analysis studies and so forth). Follow the money. You have lots of competition for limited resources, lots of money on the line, and soft or absent tenure protections.
The link you provide amounts to nothing more than ignorant intellectual bigotry and killing the messenger.
It's ironically sad, in fact, because psychologists and behavioral scientists are the ones exposing the replication crisis and doing the most to try to address the problem.
The type of rhetoric in the linked essay amounts to a whitewashing of replicability problems in other areas, denying that the problem exists outside the "lesser sciences." The sort of defensive response represented by the linked essay allows scientists in other areas to avoid accountability and the threatening implications the crisis has for their fields.
Levels of analysis problems happen in all sorts of intellectual endeavors. Arguing that psychology will give way to the neurosciences is akin to arguing that nothing in the computer sciences is a legitimate topic of study unless it involves bare metal issues.
Having said that, I also distrust this paper.
The same can't be said for psychology.
On the contrary, psychologists were shamed into addressing the replication problem by a Nobel Prizewinner's now-famous "train wreck looming" open letter. They certainly wouldn't have gone there without being pushed.
> Arguing that psychology will give way to the neurosciences is akin to arguing that nothing in the computer sciences is a legitimate topic of study unless it involves bare metal issues.
Yes, unless the computer science theories under study are falsifiable and based on empirical evidence, in which case they constitute science as science is defined. That's what distinguishes computer science from psychology, or, for that matter, science from psychology.
> The link you provide amounts to nothing more than ignorant intellectual bigotry and killing the messenger.
I include this to show the essence and basic character of your argument.
> On the contrary, psychologists were shamed into addressing the replication problem
It was raised by a professor of psychology, somebody IN THE COMMUNITY, and is being addressed, better than elsewhere. Your spin is absurd. A community takes some monumentally difficult steps to right itself in the face of bizarre pressures (mainly a broken funding model) and you take it as more evidence that they're inherently dishonest.
The article you link here seems to go almost completely against your premise, by the way.
If your goal is to make it obvious that some people have an absurd bias against the 'soft sciences', or to reinforce the stereotype of 'STEMlord' on HN, you're doing marvelously. Anything else and it's a flop.
Instead, I think it makes a lot more sense to view it as an upper bound on the average reliability of our own personal beliefs about human behaviour.
I mean, most of our day to day cognating about other people is fraught with unfalsifiable claims, dubious reasoning, and horrendous sample sizes.
Takeaway: Be extra skeptical about psychology research claims and triply so about your own beliefs regarding people, including yourself.
Since the study demonstrates that psychology is unreliable, I think it's something other than abuse.
> I mean, most of our day to day cognating about other people is fraught with unfalsifiable claims, dubious reasoning, and horrendous sample sizes.
Yes, but we don't call that science or open clinics to treat people who also behave that way. Psychologists do.
Your comment is exactly my point though. I suspect that you see the number 30% and equate that with "bad".
However, if 30% is more reliable on average than our own personal heuristics, then doesn't it seems shortsighted to write off every bit of psychology research as "bad science"?
It should have changed the way you view all sciences, period, because the replication crisis concerns biology, medicine, physics, computer science etc equally.
In fact, some of the most powerful studies on the subject have been in biology.
It's the old Protestant/Puritan spirit, just moved from the certainty of God to another certainty (for all the lip service to the "scientific process," it's certainty that's valued most in science).
It's probably due to once-trusted institutions getting so much wrong: the health pyramid, the replication crisis, even election polling.
Nassim Taleb has a good series on why distrust is increasing.
>Trait increases tolerance, improves decision-making, study says
On the other hand, published psychological studies are probably better than opinion articles, essays, etc. So maybe there is some value in that.
I myself find these four studies in the paper interesting in the context of ambiguity tolerance and critical thinking. Studies in personality and social psychology like these are usually best understood as some sort of cluster analysis.
The most interesting points from the discussion chapter, which draws from many other studies:
1. Many of the effect sizes for intellectual humility were relatively small.
2. High IH might come together with epistemic curiosity, cognitive ability and ambiguity tolerance.
3. Most personality characteristics display substantial within-person variability across situations. So the studies probably reflected domain specific intellectual humility (IH).
4. Intellectual humility did not correlate with religiosity or political orientation.
5. Indicators for stubbornness, rigidity, narcissism, or defensiveness are not central aspects of low intellectual humility.
Amen. In industry, intellectual humility doesn't help get promotions or get hired either.
As a newspaper article, I actually like it.
> Finally, intellectual humility was inversely related to the extremity of participants’ views about religion. People who recognize that their beliefs are fallible may maintain less extreme positions both because they realize that most issues are not incontrovertible and because they believe that extreme positions are, in general, less likely to be correct than moderate positions.
Any meaning the study might have would have to be validated by (a) a successful replication, and (b) construction of a falsifiable theory based on it, one that predicts behaviors and corollaries not yet seen. I won't hold my breath for that.
Much of published psychology consists of taking a common belief that "everyone knows," crafting a study that seems to support it, then publishing the result without bothering to conduct a replication or suggesting a reliable, falsifiable theoretical basis.
The thesis seems sound -- many people see humility as a positive trait. But that's why the study exists -- it's an example of confirmation bias in print.
An equally plausible study could be crafted to prove the opposite point, but that study wouldn't be published. Psychology journals publish articles their readers like, and aggressively reject articles their readers don't like.