
It agrees completely with the data I cited. It shows a variance ratio of 1.1-1.2 across many countries (though not all).

This was the most glaring example of statistical dishonesty in the paper: their own data shows with perfect clarity that the male/female variance ratio is greater than 1.0 for almost every country in the set, and I encourage anyone interested to look at their graphs and draw their own conclusions (http://imgur.com/39pja). To me, it looks like a typical noisy measurement: a mean somewhere between 1.12 and 1.15, a variance of maybe 0.1 (just about right for a measurement with around 20% variation, no?), and a decent bit of skew. The authors note that the variation within a single country was about 20% from test to test, so we should expect a decently wide distribution. (They don't actually admit that; in the first of many mathematical blunders, they claim that 20% variation is very small and implies a tight distribution.) A pretty good jumping-off point for some analysis and explanation, I would think...
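To make that concrete, here's a minimal simulation. Every number in it is my own assumption, eyeballed from their graphs, not a figure from the paper: give every country the same true ratio, average a few test administrations per country, and apply the authors' own ~20% test-to-test noise:

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumptions, not the paper's numbers: one shared true ratio,
    # each country's published figure averaging a few administrations,
    # each administration carrying the ~20% noise the authors report.
    true_vr = 1.13
    n_countries = 60
    tests_per_country = 4
    noise_sd = 0.20 * true_vr  # 20% per-administration variation

    estimates = (true_vr + rng.normal(0.0, noise_sd,
                 (n_countries, tests_per_country))).mean(axis=1)

    print(f"mean: {estimates.mean():.3f}")       # stays near 1.13
    print(f"sd:   {estimates.std(ddof=1):.3f}")  # roughly 0.11, a wide spread
    print(f"below 1.0: {(estimates < 1.0).sum()} of {n_countries}")

The point is just that a single underlying ratio plus their own noise figure reproduces the kind of scatter they plotted; the scatter itself is not evidence against a common ratio above 1.0.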

But not in this article. The authors merely point to those graphs and claim that they obviously disprove the greater-male-variance hypothesis. In other words, they point to a distribution of admittedly noisy measurements that is clearly centered around 1.13 or so, with almost zero density below 1.0, and claim that it proves the mean of the distribution is 1.0 (with an implicit "STFU, Larry Summers"). When I first read this, I thought they were trolling me, the result is so clearly wrong.
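If you want more than an eyeball test, the crudest possible calculation, a sign test, already buries the claim. The counts below are hypothetical stand-ins (I don't have the paper's exact tally in front of me), but anything resembling "almost every country above 1.0" gives the same verdict:

    from scipy.stats import binomtest

    # Hypothetical tally, for illustration: say 60 of 65 countries
    # measured a variance ratio above 1.0. Under the null (true ratio
    # 1.0, symmetric noise), each country lands above 1.0 with p = 1/2.
    result = binomtest(k=60, n=65, p=0.5, alternative="greater")
    print(f"p-value: {result.pvalue:.1e}")  # ~2e-13: the null is dead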

[As an aside, it's worth noting that a variance ratio in the 1.1 to 1.2 range is enough to explain away most, if not all, of the gender imbalance in mathematics, if we assume the variance ratio holds throughout all of mathematics education (which is not likely true; IIRC these measurements were all at the 8th-grade level).]
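For a sense of scale, here's the back-of-the-envelope tail arithmetic (the 1.15 ratio and normality are my assumptions): with equal means, normalize the female SD to 1 and set the male SD to sqrt(variance ratio); the male:female ratio past a cutoff then grows the further out you go:

    from scipy.stats import norm

    vr = 1.15             # assumed ratio, mid-range of the measurements
    male_sd = vr ** 0.5   # female SD normalized to 1, means equal

    for t in (2, 3, 4):   # cutoff, in female standard deviations
        tail_ratio = norm.sf(t / male_sd) / norm.sf(t)
        print(f"male:female ratio beyond +{t} SD: {tail_ratio:.1f}")

That prints roughly 1.4, 1.9, and 3: by the time selection happens four standard deviations out, even this modest variance ratio predicts a 3:1 imbalance with no difference in means at all.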

Their argument? Because the measured variance ratios are not identical, we should ignore the mean of the distribution. Seriously, that's it. I'm not talking about a statistical calculation showing that the null hypothesis (a variance ratio of 1.0) cannot be rejected, mind you; such a calculation would not let us accept the null hypothesis anyway, and a quick look at the graph is enough to see that any such test would reject it. I'm talking about the absence of any statistical argument at all. They quite literally claim that any variation in the measured variance ratios means the entire distribution is meaningless.
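For the record, the calculation they never ran takes three lines. I don't have their raw table, so the data below is synthetic, drawn to match the distribution I eyeballed above (mean ~1.13, spread ~0.1, ~60 countries); with anything like those values, the null of a 1.0 mean ratio is rejected at absurd significance:

    import numpy as np
    from scipy.stats import ttest_1samp

    rng = np.random.default_rng(1)

    # Synthetic stand-in for the per-country ratios, NOT the paper's data
    ratios = rng.normal(loc=1.13, scale=0.10, size=60)

    t_stat, p_value = ttest_1samp(ratios, popmean=1.0)
    print(f"t = {t_stat:.1f}, p = {p_value:.1e}")  # t ~ 10, p ~ 1e-14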

They also try to confuse the issue a bit by pointing out that there's some correlation between the variance ratio and the overall variance; while interesting and certainly worthy of further explanation (not attempted in this paper), this is completely and utterly irrelevant to the variance hypothesis, yet they imply that it somehow disproves it as well.

Quite frankly, the referees should have caught this; I can't remember the last time I saw such bad statistical reasoning in a legitimate math journal. There are glaring issues all over the rest of the paper too: data filtered and re-filtered until the desired correlations appear, data chopped into bins in suspect ways, and so on. Actually enumerating all of these errors in detail would make this a much longer rant.

It's a shame, too, because it appears that some of the article is solid, and it presents some interesting data that definitely warrants more investigation; unfortunately, they went absolutely bananas in several places, drawing completely unfounded conclusions from the data they generated, so I would be hesitant to cite this paper as proof of anything.



