

The Reformation: Can Social Scientists Save Themselves? - mr_tyzic
http://www.psmag.com/navigation/health-and-behavior/can-social-scientists-save-themselves-human-behavior-78858

======
wmacmil
I agree with this article, but think that by singling out social sciences it
fails to reveal the bigger picture: that all scientific fields are subject to
these same mistakes.

I just started working in a research lab at a hospital that creates finite
element cardiac models based on data taken during heart surgeries on sheep,
along with MRI images taken at various intervals pre- and post-surgery.
Although I'm still new to this position, it seems that our own methods are
subject to just as much deception. We basically want our models to match the
actual heartbeat at only two exact moments: the beginning of contraction and
the beginning of relaxation. If I've understood what's been done before,
modeling these two brief periods during a single heartbeat is all that's
needed for publication.
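
To make that concrete, here is a rough sketch of what "fitting at only two
moments" amounts to. It is purely illustrative: the parameter names and toy
formulas are made up and stand in for the real finite element simulation, but
the calibration target really is just two snapshots of the beat (end-diastole
and end-systole), with everything in between left unconstrained.

    import numpy as np
    from scipy.optimize import least_squares

    # Measured left-ventricular volumes (mL) at the only two calibration
    # points: end-diastole (start of contraction) and end-systole (start
    # of relaxation).
    measured = np.array([120.0, 50.0])

    def model_volumes(params):
        # Stand-in for an expensive finite element simulation. The
        # parameters (a passive stiffness and an active tension scale) are
        # hypothetical; it returns predicted volumes at the two time points.
        stiffness, active_tension = params
        edv = 150.0 / (1.0 + stiffness)     # toy end-diastolic volume
        esv = edv - 80.0 * active_tension   # toy end-systolic volume
        return np.array([edv, esv])

    def residuals(params):
        # Only the mismatch at those two snapshots gets minimized.
        return model_volumes(params) - measured

    fit = least_squares(residuals, x0=[0.2, 0.8],
                        bounds=([0.01, 0.1], [5.0, 2.0]))
    print(fit.x)  # parameters that reproduce just those two volumes

Swap the toy formulas for the full simulation and that's roughly the
workflow: as long as those two numbers line up, the model "works".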

I bring this up not to criticize my lab; obviously our work is meant to be a
progression towards more and more accurate models. I just think it shows that
even something considered hard science is subject to many of the same faults
as anything else. There are so many parameters and considerations to take
into account that I don't think the end goal is to build a comprehensive
theory explaining computational modeling of physiological function the way
Newton's laws predict the motions of the planets. The goal is simply to
create a model that works for the purposes of helping diagnose and treat
people more accurately.

It seems that a crisis is imminent in the coming age of computational,
statistical, and mathematical applications to all fields, where researchers
are not properly taught to distinguish between data science and building
actual theories. Just as there is a humongous gap between using a computer
and actually coding, there is an equivalent difference between being able to
collect and analyse data with a computer and being able to build a
substantial theory that can describe a vast number of phenomena and results.

~~~
privong
> I agree with this article, but think that by singling out social sciences it
> fails to reveal the bigger picture: that all scientific fields are subject
> to these same mistakes.

I actually prefer them writing about a single field (or a group of related
fields) of study. There have been lots of articles asserting that all of
science has these issues, but it's not clear to me that every field suffers
to the same degree. For example, it's not clear to me that physics or
astronomy suffer as much from this type of behavior. I'm sure it does happen,
but it doesn't seem to be as pervasive as in the social sciences and
medicine. For the social sciences it seems to be happening often enough that
it's affecting the trajectory of the field. To my knowledge, this isn't the
case for most of physics and astronomy (for example).

Also, a lot (but not all) of the "mistakes" noted in the article were
deliberate fabrications or unethical tampering with data. So I think labeling
them as "mistakes" risks being too forgiving.

~~~
tedks
Psychology tends to be picked on more because it's easier to understand and
communicate. Researchers in psychology tend to work in areas with names that
are meaningful to laypeople (like "short term memory" or "interpersonal
violence") rather than jargon ("systems" or "finite mass spectroscopy").
Really, psychology has the bad luck of having jargon that maps to lay
knowledge -- imperfectly, but the reporters don't care.

Overall, most scientists are ethical. Very few people are deliberately
falsifying their data.

But in every field, the incentives are not set up to do good science. That's a
fact, and until it's understood there won't be any progress. Until then,
you'll just see a slow grind of academia eating itself.

~~~
privong
> But in every field, the incentives are not set up to do good science. That's
> a fact, and until it's understood there won't be any progress. Until then,
> you'll just see a slow grind of academia eating itself.

I think this depends on what you mean by "doing good science". I agree that
incentives in every field are not set up to do the _best_ science, but in many
fields the incentives are not such that people are drawn to doing _bad_
science. So if academia is "eating itself", I don't think it can be
universally attributed to incentives regarding good vs bad science.

~~~
tedks
No, it's the same in every field, because the process of getting tenure in
just about every field is the same: publish lots of times. "Lots" might be
different in different fields, but it still comes down to "publish a lot."
Publishing a lot is actually much closer to incentivizing bad science than
not.

I'd encourage you to look past your pre-existing notions of what makes a
science "hard" and actually talk to the people who are boots on the ground
about what is really going on in academia.

------
oska
> To actually run the experiment would have been to run the risk that a
> disordered environment has no effect on racial attitudes—a “null finding”
> that would mean he’d wasted his time, because journals aren’t usually
> interested in experiments that don’t prove something new.

This state of affairs has of course been remarked on before. Perhaps it could
be rectified by creating an online journal with the title _Null_ that only
accepts papers recording experiments where the hypothesis was _not_
supported. Of course it wouldn't make very interesting reading, but after
some time it might provide a useful reality check against novel claims, when
a search in _Null_ shows the _x_ number of times a similar hypothesis failed
to hold up. And the incentive to publish there would be to show that you are
a credible scientist who sometimes, quite reasonably, comes up with
hypotheses that don't prove true when properly tested.

~~~
explicate
An online journal in support of the null hypothesis, like the one you
describe, can be found here: [http://www.jasnh.com/](http://www.jasnh.com/).
It's published online twice a year.

~~~
oska
Thank you for that. And from scanning the titles of the papers listed there,
it actually _does_ look like interesting reading.

------
japhyr
I'm a big fan of what the folks at the Center for Open Science[0] are trying
to do. They are building open tools[1] for the entire research process and
encouraging people to conduct more of their research in the open.

This would encourage open analysis on an ongoing basis, rather than waiting
for publication before peer review begins in earnest. It would also allow
people to establish a reputation for being strong in multiple aspects of
conducting research, not just in getting research published. You could build
a reputation by reviewing hypotheses and helping researchers refine them, or
by helping ensure that people's statistical analyses are done properly.

I understand that not all research should be done entirely in the open, but we
can certainly move beyond the model of just opening up journal articles.

They are also developing the Reproducibility Project[2], which aims to
encourage researchers to replicate existing studies rather than focus on doing
new research. It aims to raise the prestige of doing this work as well.

[0] Center for Open Science:
[http://centerforopenscience.org](http://centerforopenscience.org)

[1] Open Science Framework: [https://osf.io/](https://osf.io/)

[2] Reproducibility Project:
[https://osf.io/ezcuj/wiki/home/](https://osf.io/ezcuj/wiki/home/)

------
leoc
Oh bless us, it's the Sokal Hoax again. :( Set out to prove something reasonably
obvious and straightforward (could you publish any old guff in _Social Text_
at the time, even straightforward mathematical/scientific howlers? I'd bet my
trousers on it), make a complete bags of doing so (if there's one time the
editors of a social-science journal should feel reasonably happy accepting the
physics content of an overview paper at face value it's when the author is a
physics professor at a top-40 college publishing under his own name!), get
praised and adulated forever after; for if Science teaches us anything, it is
that it doesn't matter if your method is rubbish so long as you get the answer
everyone wants and expects.

~~~
steveklabnik
Yup. Not to mention that Social Text wasn't peer-reviewed at the time, and
fake papers have made their way into _many_ different journals, including
physics ones.

------
iandanforth
Here's what makes me sad about this. There are probably useful bits of
information collected by social scientists all the time that will never see
the light of day because there is no requirement to publish the data. But more
than that, it simply doesn't matter what format the data is in, or if it's well
presented. It just has to be accessible. As the ability and scope of data
mining methods grows, every piece of scientific data will be fodder, not for
humans, but for machines looking back over the entirety of scientific output
to draw new conclusions from information we simply ignore today. I would go as
far as to say that if you feel your work has any value at all, you have a duty
to make your data available, so that it can be mined at a later date.

~~~
slashcom
My understanding of why this isn't often the case in the social sciences is
that data is usually extremely expensive to collect. The researchers usually
spend many, many hours writing a proposal, revising it, and resubmitting it
until they finally have the funding to collect the data they're interested
in. Then they actually have to get people into a laboratory, or send someone
out to do field work (sometimes halfway across the world), or carry out any
number of other expensive procedures.

And then, after the data is collected, they begin analyzing it and seeing
what the interesting patterns are. This takes a lot of time, and often you
just want to show one thing at a time. In the current environment of
publish-or-perish, with grant providers asking what you've done with their
money, you might want to take that data you worked so hard to obtain and milk
quite a few publications out of it. If you openly release the data, then
others can easily find interesting patterns in it before you can, beating you
to publication and diminishing your track record for future grants, tenure,
etc.

So your point is extremely valid. But the counterpoint, the reason open data
isn't always feasible, is this: if a researcher does all the hard legwork
necessary to get to the interesting analysis stage, why shouldn't they reap
the rewards of their hard work (namely, publications and recognition)?

A few things have been proposed: verification projects (one particular other
research group gains access to your data and verifies your results, but is not
allowed to use the data otherwise), and grace periods (you have to release
your data 3 years after a study or something, giving you time to milk out
publications, but still allowing for general verification later).

Generally speaking, true scientific verification should also involve
collecting the data again from scratch. But when data collection is extremely
expensive, this is typically infeasible.

[I work in a field that's almost all open access, and research data is almost
always available upon request. So I agree with you. But data is typically
incredibly cheap in my field, so the problem above doesn't really apply.]

~~~
marcosdumay
> A few things have been proposed: verification projects, and grace periods.

Is glorifying (and sending grants to) data collectors on the list?

~~~
slashcom
You mean to say, granting wider recognition for the collection itself, rather
than for the analysis stemming from the collection?

Possibly, I don't know. Mostly this is just my understanding from talking to
peers in the social sciences and asking why they don't have open access like
my field.

I think the problem there, though, is that knowing what questions to ask
(what data to collect) is the hardest part, yet it fits much more in line
with the analysis aspect (i.e. forming a hypothesis) than with collection
(conducting a designed experiment). So an ideal researcher would show good
work in both areas, not just one.

~~~
marcosdumay
Exactly that. We could quite well be in awe of the people who managed to
collect difficult sets of data, rather than of the people who ran a
superficial analysis on them and discovered something. While still reserving
some respect for the people who do difficult analysis, of course.

That problem is not specific to the social sciences. As far as I know, only
physics is making some inroads on that change of attitude, and even there
it's restricted to a few fields where data collection is extremely expensive
(much more so than in the social sciences).

I think that's one much-needed change to adapt science to our current world.

------
dang
This (edit: I mean this comment of mine) is off-topic, so I'll demote it to
the bottom, but since I've spent the last month telling people what _isn't_
appropriate on HN, I want to point out something that is.

This thread contains excellent comments, especially wmacmil's:
[https://news.ycombinator.com/item?id=7676945](https://news.ycombinator.com/item?id=7676945).
It's great when someone with directly relevant experience adds so substantive
a comment on an interesting question, and gets properly upvoted. Nearly all
the other comments right now are substantive and insightful too.

~~~
nkurz
_This is off-topic, so I'll demote it to the bottom in a little while, but
since I've spent the last month telling people what isn't appropriate on HN, I
want to point out something that is._

Did you drop a negative somewhere, or add an extra? I have trouble following
your sentence. My best guess at an interpretation is that you've spent a lot
of time telling people "no, this is actually on-topic" and now you want to
point out something that truly is off-topic?

If so, why do you feel this is off-topic? Like yesterday's article on
parapsychology
([https://news.ycombinator.com/item?id=7666575](https://news.ycombinator.com/item?id=7666575)),
this article offers useful insight into the ways in which the ideals of the
"scientific method" differ from the realities of published science. I think a
lot of people here find this of interest, and I don't think this particular
slope is that slippery.

~~~
dang
I don't _think_ that sentence is worded wrong, but it's late and I may be
missing it. In any case: no! The article was wonderfully on-topic and the
thread was excellent.

Edit: Oh, I get it now. "This" in "This is off-topic" meant "this meta comment
I am currently posting". The article was superb. Sorry for the confusion! I
will edit the GP.

~~~
nkurz
That makes much more sense, and wasn't an interpretation I'd considered.
Thanks!

------
frozenport
Should we expect more from social science?

Has psychology ever been able to yield the same practical findings as fields
such as Electromagnetics, or has it merely reflected fluid social trends?

The real story here is to avoid taking the social sciences seriously.

~~~
coldtea
Well, social science has told us a lot more about the structure of society,
power plays, etc., than all the "practical findings in fields such as
Electromagnetics".

If you want "the same practical findings" from it, it's obvious that you
ain't gonna get them. The social sciences do not produce light bulbs or
combustion engines, nor are they meant to.

~~~
frozenport
> nor are they meant to.

Yes, that's my point. The real problem is that people confuse psychology with
science.

~~~
coldtea
Well, I see what you're getting at, but I can't really blame them.

Science has just meant "an area of systematic knowledge" since time
immemorial. It's not only about following a certain methodology.

So, they shouldn't confuse it with a "hard science", but it's a science in the
common sense nonetheless.

