Hacker News

Seriously? How many more decades are we going to see people mistake correlation for causation as if this problem had not been thoroughly understood? This article reads like one long speculation presented as fact.

Kahneman and other psychologists have long established that we color character traits into stories that fit the result. If a CEO is pursuing his beliefs strongly and the company succeeds, we call him a visionary. If the company fails, we say 'he never listened'.

If you really wanted to check the influence of a certain character trait, you'd need to try to (carefully) compare how CEOs with the trait performed compared to CEOs without the trait, and make damn sure your sample sizes and methods are sound before making any confident assertions. What you definitely don't want to do is look at a handful of failed companies, find some common traits among their CEOs and call it 'The 7 Habits Of Spectacularly Unsuccessful CEOs'. That's intellectually negligent, and it's a testament to something that is fundamentally broken in journalism — we have publications that pretend to educate when they in fact serve misinforming entertainment.
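To make the comparison concrete, here is a minimal sketch (entirely hypothetical numbers, not from the article or study) of why a control group matters. It simulates a trait that is equally common among CEOs of failed and successful companies; if you look only at failed companies, the trait appears to be a "habit of failure", but the comparison group shows it is just as common everywhere:

```python
import random

random.seed(42)

# Assumed base rate of some trait (e.g. "strong convictions") among all CEOs.
TRAIT_RATE = 0.6
N_COMPANIES = 10_000

# Failure is generated independently of the trait, so by construction
# the trait has no causal effect on the outcome.
companies = [
    {
        "ceo_has_trait": random.random() < TRAIT_RATE,
        "failed": random.random() < 0.1,
    }
    for _ in range(N_COMPANIES)
]

failed = [c for c in companies if c["failed"]]
succeeded = [c for c in companies if not c["failed"]]

rate_failed = sum(c["ceo_has_trait"] for c in failed) / len(failed)
rate_succeeded = sum(c["ceo_has_trait"] for c in succeeded) / len(succeeded)

# Studying only failed companies, the trait looks like a common thread...
print(f"trait rate among failed CEOs:     {rate_failed:.2f}")
# ...but the control group shows the same prevalence among successful ones.
print(f"trait rate among successful CEOs: {rate_succeeded:.2f}")
```

Both rates come out near the 0.6 base rate, so "most failed CEOs had this trait" tells you nothing without the second number.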

Shame on Forbes for publishing such ill-supported claims; they should have more respect for their readers.

"This article reads like one long speculation presented as fact."

Though this might be true of the article itself as posted, this is not a bit of bad journalism. The author, Sydney Finkelstein, is a Professor of Management who spent 6 years studying 50 companies and conducting some 200 interviews. Here's a link to the journal article: http://www.jacksonleadership.com/pdfs/7Habits_IveyBusinessJo...

So to answer your point that "If you really wanted to check the effect of a certain character trait, you'd need to try to (carefully) compare how CEOs with the trait performed compared to CEOs without the trait, and make damn sure your sample sizes and methods are sound before making any confident assertions": 6 years, 200 interviews, and research into 50 companies seems like a fairly extensive study, IMHO.


A sample of 50 companies and 200 interviews is hardly sound on its own, given no information about how those companies were selected, and given no attempt to compare them to successful companies. For example, 5 of these 7 traits easily describe Steve Jobs.

Furthermore, being an academic authority hardly makes someone less liable to fall for the cognitive biases that davidkatz mentions (that we color character traits into stories that fit the result). We all have a strong bias to attribute other people's actions to their personality rather than to conditions in their environment over which they have no control (http://en.wikipedia.org/wiki/Fundamental_attribution_error). We generally don't apply this error to ourselves.

The classic example is that if someone trips over a rock, you'll tend to have the impression that they're clumsy. If you trip over a rock, you won't think you're clumsy, you'll just say to yourself that the rock was sticking out.


I suppose I should have been more verbose above. Basically, I don't want to say that davidkatz's and your notes about the possible errors and biases of the study are unfounded. IMO, every study or report should be approached with these precautions in mind, if only because all data has to be interpreted at some point, which makes it subjective. But the main point I was trying to get across is that a study of this size, conducted by someone of academic authority, should not simply be waved off as a piece of bull because of possible errors, which was the impression davidkatz's post made on me.


I'm sorry to say that I've had personal conversations with faculty researchers in diverse fields that gave me the distinct impression that these people really should not be doing research. It's very easy to go through the 'form' of good research: interview a lot of people, look at a lot of data. But that's no guarantee of good research.

When someone says: 'here are 7 habits of unsuccessful CEOs' and does not even attempt to account for the problem of 'how do we know it's really these habits that caused the company to fail?', but rather just flat out assumes it in the face of decades of findings that suggest that this is an easy trap to fall into, I for one can't take them seriously, and academic credentials have almost nothing to do with it.


I think the issue is journalism vs scientific research. In journalism, you do not have the luxury of spending huge quantities of time developing your methodology and validating your statistics, and it is understood that journalism is much more by-the-gut than science. One of the major jobs of journalism is to help your readers draw connections and get context for current issues. As long as readers get a diversity of sources, and views are expressed in rough proportion to how they are held by people in the field, then I think journalism is doing its job.


Journalists have a responsibility towards facts that is just as strong as a scientist's responsibility towards facts. The major difference is that journalists are not necessarily expected to discover facts, but report on them, which greatly shifts their scope of attention.

Helping readers draw connections and get context is valuable, but only if it's done within the facts. A journalist or a publication that reports something factually wrong has to lose credibility.

Reporting something as fact when it is merely speculation is a milder form of reporting something that's factually wrong, but it's almost as deserving of our criticism.


This is a contributor article. Forbes is pretty lax about who they'll let write for them as a contributor - it's more like getting an article into HuffPo than into the NYTimes.

