> Two-thirds of them draw their subjects exclusively from the pool of U.S. undergraduates, according to a survey by a Canadian economist named Joseph Henrich and two colleagues.
This is pretty shocking to me. I participate in these behavioral studies, and I thought the pool of participants would have been pretty much the polar opposite: middle-aged women with, at most, high school diplomas or GEDs.
I've probably taken hundreds of these behavioral research studies over the last few years on Amazon's Mechanical Turk, and the vast majority of participants (according to researchers who release demographic data) are always the same: stay-at-home moms with minimal educational backgrounds. With as many studies as get pushed through MTurk, I'm amazed it's only a blip compared to what Henrich surveyed. It makes me question his results. At least a few dozen academic behavioral studies get posted to MTurk every single day, so I wonder just how many of these studies are actually being carried out.
A science writer starts off his article about science writing by describing being "surprised" at a meta-analysis that attempted to reproduce scientific results in a "fuzzy" scientific field (my definition of fuzzy science is any science that studies humans). The effort could only reproduce 33% of the experiments.
What a shocker (not really). It is common in the hard sciences to make fun of the "findings" in the fuzzy sciences. Humans are fickle creatures with large variance in almost everything we do. With large variances, you have to collect large samples to be confident of your results. But as previously mentioned, humans are fickle, and it is difficult to get large, statistically meaningful samples of humans. Paying volunteers increases your sample size but can bias your results toward a certain type of person. Not paying volunteers may cause them not to take the experiment seriously. Gathering a large group of people together to make the study go faster may cause word of the study to leak out and groupthink to set in. Some people just say what they think the researchers want to hear. Doing "science" with humans is incredibly difficult.
So really, it's not surprising that one-off experiments are hard to reproduce. Their sample sizes are small and can't adequately account for the variance across a population of people. And results from one region of the world often don't generalize to other cultures.
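To make the sample-size point concrete, here's a minimal simulation sketch. All the numbers (a small standardized effect of 0.2, groups of 20 vs. 200, alpha of 0.05) are made up for illustration, not taken from the article; the point is just that with noisy human data and small samples, a rerun of the same study often fails to find the effect.

```python
# Illustrative only: small true effect + high person-to-person variance means
# small-sample studies rarely "replicate" themselves. Assumed numbers are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2   # assumed small standardized mean difference
alpha = 0.05

def one_study(n_per_group):
    """Simulate one two-group experiment; return True if it reaches p < alpha."""
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    return p < alpha

def replication_rate(n_per_group, n_trials=5000):
    """Fraction of identical reruns that detect the effect at all."""
    return np.mean([one_study(n_per_group) for _ in range(n_trials)])

for n in (20, 200):
    print(f"n = {n:>3} per group: chance a rerun finds the effect ≈ {replication_rate(n):.2f}")
```

With these made-up parameters the small-sample version only "works" a small fraction of the time, which is roughly the situation the fuzzy fields are in.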
My rule of thumb for trusting fuzzy data is to only start really believing it when several studies looking from several different angles all come to similar conclusions. If the studies contradict some previously held knowledge, it will take several more studies than it would to form a conclusion from scratch. But this is how science works, and isn't really surprising.
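Here's a rough sketch of why that rule of thumb works, using simple inverse-variance (fixed-effect) pooling. The effect estimates and standard errors below are invented purely for illustration; the takeaway is that several noisy studies pointing the same way give a much tighter combined estimate than any one of them alone.

```python
# Hypothetical (effect estimate, standard error) pairs from independent studies.
studies = [
    (0.30, 0.15),
    (0.22, 0.12),
    (0.28, 0.18),
    (0.18, 0.10),
]

weights = [1 / se**2 for _, se in studies]          # precision weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect ≈ {pooled:.2f} ± {1.96 * pooled_se:.2f} (95% CI)")
# Each study alone has a wide interval; pooled together, the interval shrinks,
# which is roughly what "several angles, similar conclusions" buys you.
```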
You should inform the social science researchers of these issues so they don't build their entire empire on a foundation of bullshit. Otherwise the status quo will go on with the standard "15-grad-student sample confirms all conservatives are actually gay."
This stuff is rampant because the incentives are there. Academia values original publications above all else. Tenure, promotion, and the respect of your peers depend almost entirely on publication in academic journals. Doing replications is not considered original and often raises the ire of more senior members of your field (the ones you're probably criticizing). In fact, making your research difficult to replicate is probably a smart career move: no one can come along and use your data to publish something better!
Until replicability is required by top journals, and replications are appropriately valued by these fields, you'll continue to see this kind of dreck trotted out.
Are there any projects dedicated to rating every piece of research with some kind of number corresponding to the chance of the results being correct? Like a sigma value, or a peer-review database that would say "Research R (identification number XX-XXXXXXX) is rated 39.6/100 in legitimacy points for reasons A, B, and C." Reasons would be things like "it has not yet been replicated" or "the replications had the same flaws the original study had," etc.
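I don't know of a project that does exactly this, but the record you're describing is easy to sketch. Everything below (the field names, the 0-100 scale, the placeholder identifier) is hypothetical, just to show what one entry in such a database might look like.

```python
# Hypothetical data shape for a "legitimacy rating" record; not any real project's schema.
from dataclasses import dataclass, field

@dataclass
class ReplicationRating:
    study_id: str                      # e.g. a DOI or an internal identifier
    legitimacy_score: float            # assumed 0-100 scale, higher = more likely to hold up
    reasons: list[str] = field(default_factory=list)

record = ReplicationRating(
    study_id="10.0000/example",        # placeholder, not a real DOI
    legitimacy_score=39.6,
    reasons=[
        "has not yet been independently replicated",
        "replications shared the original study's design flaws",
    ],
)
print(f"{record.study_id}: {record.legitimacy_score}/100 ({'; '.join(record.reasons)})")
```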
“Now, I might be quite wrong, maybe they do know all these things. But I don’t think I’m wrong, see, I have the advantage of having found out how hard it is to get to really know something, how careful you have to be about checking the experiments, how easy it is to make mistakes and fool yourself. I know what it means to know something. And therefore, I can’t… I see how they get their information, and I can’t believe that they know it – they haven’t done the work necessary, they haven’t done the checks necessary, they haven’t done the care necessary. I have a great suspicion that they don’t know.”
— Richard Feynman, The Pleasure of Finding Things Out (1981)
Desiring to wear the halo of the physical sciences, the social sciences have chosen poorly suited methods. Unfortunately, great contributions to the epistemology of the social sciences have been ignored, from Menger's "Investigations into the Method of the Social Sciences" [1] in 1883 through Mises and Rothbard. Human action is highly resistant to empiricism.