
Social sciences suffer from severe publication bias - akkartik
http://www.nature.com/news/social-sciences-suffer-from-severe-publication-bias-1.15787
======
iamwithnail
It's an interesting point, but it's a tough one.

Real example: I'm a social scientist, and I have a hypothesis about a better
method of analysing discourse (born of theoretical musings, mostly). If I
spend the next 6 months testing it empirically and it turns out to be
nonsense or completely unfounded, I'm not sure how much value there is in
putting it out there - a lot of social science hypotheses are (imo) quite
ephemeral, and the field moves on quite quickly.

Part of that's a spillover from the fact that it's difficult to scale
qualitative data gathering and analysis as an (often individual) researcher,
and a frequent assumption that only quantitative data 'counts', but yeah.
Maybe there should be a way to summarise null findings, to stop people
replicating unsuccessful or unfounded work.

~~~
lutusp
> Maybe there should be a way to summarise null findings, to stop people
> replicating unsuccessful or unfounded work.

But repeating failed experiments has the same scientific value as repeating
successful ones -- it solidifies the result and increases our certainty about it.

In real science, null results, and their replication, are extremely important.
My favorite example is the Michelson-Morley experiment that expected to find
differences in the speed of light due to the motion of the earth through the
hypothetical ether.

The failure of this experiment, its publication, and efforts to replicate the
result, also dutifully published, finally created a revolution in physics when
Einstein explained why there was a null result, and coincidentally gave birth
to the modern world.

~~~
iamwithnail
'Real science' - thanks for that. ;-)

I think the issue I'm trying to get at is this: I suspect there _is_ less value
in null proving a lot of social science hypotheses - the impacts are smaller
than for physical constants of the universe (or finding, say, that the Higgs
boson didn't exist). If I prove, because my empirical work says so, that
discourse can be meaningfully (for a given value of meaningful...) analysed in
a new way, that's great, and will advance some other work. But proving (even
repeatedly) that it can't be won't really change anything, imo.

I guess though, the main issue is: I'm sure as hell not going to get published
saying it, which ties back nicely to the point made in Nature, really.

~~~
lutusp
> I suspect there is less value in null proving a lot of social science
> hypotheses

Yes, but that applies equally to proving positive social science results. My
point wasn't to compare social science to other sciences, it was only to point
out the value of replicating null results in general -- that it stands equal
to replicating positive results.

> if I prove, because my empirical work says so, that discourse can be
> meaningfully (for a given value of meaningful...) analysed in a new way,
> that's great, and will advance some other work. But proving (even
> repeatedly) that it can't be won't really change anything, imo.

Both findings have equal weight, therefore both findings deserve equal
attention from replicators. By contrast, ignoring null results is a disaster
for whatever science they occur in -- it means that other people might spend a
lot of time spinning their wheels studying things that have already been found
wanting.

~~~
JoeAltmaier
When you can identify a single null point to replicate, then sure. But there
are lots of endeavors (what rate of positive feedback maximizes reinforcement
of a behavior?) where there isn't a single null that defines a theory or
disproves a reasonable conjecture. I'd guess in physics this might be
possible, and in math, but maybe nowhere else in science.

------
plg
Rigorous, well-conceived, hypothesis-driven research with lots of controls
ought to be considered as valuable as studies that show positive findings. The
sad fact is that it is not.

There is also a chicken-and-egg problem here with changing the culture. Sure,
I could decide tomorrow that I will no longer battle for space in the high-
impact journals that usually display this bias — in favour of more enlightened
journals like PLoS ONE ... and I wouldn't suffer, I have a job... but what
about my grad students and postdocs? The consequences for them would be real,
because biases also still exist in the minds of professors, hiring committees,
and even their peers. 10 papers in PLoS ONE just are not considered the same
as 10 papers (or even 5 papers) in more "established" journals. There is thus
a lot of implicit pressure to just abandon studies that show null results and
pursue different ideas until you find an effect.

There is also huge pressure today (very explicit) to continuously generate
"novel" ideas. If you want a competitive research grant now you have to
generate a groundbreaking idea. If you want to publish in the top journals you
have to generate groundbreaking ideas. Amazingly there are colleagues that
have a success rate with grants and with high-impact publications so regular
(EVERY SINGLE YEAR) ... that wow, how amazing it must be to be able to
generate a brand new groundbreaking idea at such regular intervals!!!
(sarcasm). And so there is yet another kind of bias present (politics, the
favoured few, the ivy-covered special people, etc.).

I could go on with other injustices but the bottom line is that [NEWSFLASH!]
science is not objective, but like all aspects of life, is chock-full of
biases, politics, special interests, and kowtowing to the bottom line (i.e.
money).

~~~
lutusp
> I could go on with other injustices but the bottom line is that [NEWSFLASH!]
> science is not objective, but like all aspects of life, is chock-full of
> biases, politics, special interests, and kowtowing to the bottom line (i.e.
> money).

s/science/the contemporary practice of science/

Science hasn't changed -- it means the same thing. Its practice has changed to
the degree that we may be justified in calling some of it pseudoscience.

~~~
plg
yes, absolutely - thanks for clarifying what I meant

------
short_circut
Honestly, I feel like this is an even bigger problem in the natural sciences.
In my experience, journals want strong and interesting positive results. They
rarely want negative results unless it is debunking a previous result that
seemed interesting and positive. I have worked on projects that, while
interesting and on a solid theoretical base as far as approximations go, were
ultimately incorrect for whatever reason. I should be able to publish and
report on this so no one else wastes their time on it, or so someone can find
a glaring flaw in it and build upon it. While the initial idea may be a good
one, the implementation of that idea is not always a good one. It can be
useful to other scientists to know these things.

------
ExpiredLink
There is a deeper problem with research in the social sciences: it lacks
theoretical underpinning. Studies and experiments are frequently based on
'pragmatic', ad hoc assumptions that are easy to confirm but irrelevant.

~~~
stewbrew
I'd say this is a biased view. In German-speaking countries, the social
sciences rather suffer from theory flim-flam. Maybe in the US, you haven't had
your Luhmann moment yet.

~~~
lutusp
The prior poster lamented the absence of validated theory in the social
sciences. Your point is that many of the theories that do exist have no
validity. I think those views dovetail.

~~~
ExpiredLink
Exactly.

~~~
stewbrew
Based on your original comment I had the impression our views rather diverge,
but maybe I got you wrong. Anyway, HN isn't really the place to discuss such
matters.

------
PaulHoule
So far as this is concerned peer review may be part of the problem, not the
solution.

------
gwern
Fulltext:
[https://pdf.yt/d/ZTzcsTMzDeUPKhiq](https://pdf.yt/d/ZTzcsTMzDeUPKhiq) /
[https://dl.dropboxusercontent.com/u/243666993/2014-franco.pd...](https://dl.dropboxusercontent.com/u/243666993/2014-franco.pdf) /
[http://sci-hub.org/downloads/ce4a/franco2014.pdf](http://sci-hub.org/downloads/ce4a/franco2014.pdf)

------
CWuestefeld
This suggests to me that there might be a follow-on problem with fabricated
results.

Given the "publish or perish" culture in academia and the devaluation of
negative results, it seems there's a strong incentive to obtain positive
results. And this is likely to increase the incidence of researchers being
pressured into tweaking data to obtain affirmative results.

------
skywhopper
I like that it says "studies show" that null results are rarely published. In
fact, from what my friends who are social scientists tell me, that's just how
the journals operate.

Sure, null results are meaningless for complex hypotheses, but when testing a
very narrow variable with a well-designed experiment, a null result is
actually a very important result.

~~~
lutusp
> Sure, null results are meaningless for complex hypotheses ...

If that's true, if a hypothesis is vague enough to resist meaningful testing,
then a positive result would be equally meaningless. But I see we are in
agreement.

------
gwern
Gelman comments: [http://andrewgelman.com/2014/08/28/publication-bias-social-s...](http://andrewgelman.com/2014/08/28/publication-bias-social-sciences-unlocking-file-drawer2/)

------
evanwolf
Anyone want to join me in starting a Null Results Journal? Honor hard work
with meh outcomes.

~~~
lutusp
> Anyone want to join me in starting a Null Results Journal?

Nice thought, but there's already the JIR, the Journal of Irreproducible
Results:

[http://www.jir.com/](http://www.jir.com/)

~~~
seanflyon
There are many journals of irreproducible results.

~~~
lutusp
Very true, and more as time passes, such that the JIR isn't so funny any more.

