
Is Psychology About to Come Undone? - yread
http://chronicle.com/blogs/percolator/is-psychology-about-to-come-undone/29045
======
BenoitEssiambre
God I hope this becomes a trend in other fields as well.

Spending time in universities has made me very cynical about the research that
comes out of them. There is just too much incentive for profs to ignore
biases in their research. I've seen it happen many times. For them, it is the
difference between becoming a prof or a lab tech, and it is literally worth
millions.

You cannot trust any research done where the researcher's career depends
on finding results.

Universities, by hiring based on research credentials that currently translate
roughly into the number of positive results a person has generated, completely
render worthless the research going on in their departments.

Aggravating the situation is the fact that the peer-review system is too
incestuous to be relied on, especially when the peers are probably also 'bias
ignorers' with incentives to keep the flaws in the system.

I strongly believe that for hiring purposes, the research skills of profs
should be evaluated on criteria that are incidental to research: mostly math,
probability, and statistics (yes, even for psychology), then methodology
skills, and maybe also leadership, communication skills, and dedication to
science (the last only because it is difficult to measure).

update: typos

~~~
viggity
"I take the hard view that science involves the creation of testable
hypotheses" - Michael Crichton

I'm not trying to be a troll. Really, I promise. But I really hope that the
next field they do this in is climate science, specifically articles on
climate change. It seems like the bulk of their work is based on computer
models, and I just don't know how you're supposed to replicate any of that.
How do you isolate the _millions_ of variables to show cause and effect with a
closed-source computer model? That is my biggest problem with climate science.

~~~
scarmig
There's an interesting issue in climate science, and I say this as someone who
thinks that dealing with climate change is probably the most important task of
our generation. The (small subset of) professional statisticians I've spoken
to are similarly suspicious of climate models (not of climate change
itself), though for more technical reasons than you mention.

At the same time, the physics dictate that the general sign and order of
magnitude of CO2 forcing in climate models is correct, as surely as doubling
solar output would increase mean surface temperatures. The physics also don't
preclude the effects being even more severe than the models predict. But
there's considerable uncertainty involved in both directions, and when people
hear "well, things might end up better than scientists think is the average
case, and at no cost to us!" they jump for that option and ignore the average
prediction, let alone the worst case scenario that's within plausibility.

Coupled with an extremely well-funded group of fossil fuel industrialists
putting hundreds of millions of dollars into the outright shutting down of
climate research, good scientists and especially not-at-all-scientific
activists end up on the defensive, overemphasizing the finality of the models
and using them as solid predictions instead of tools to vindicate the general
thrust of the physics.

As for your particular point, think of the models in climate journals
as tools to understand the issue rather than the final word. It is somewhere
between very difficult and impossible to come up with a climate model where
CO2 forcing doesn't cause significant warming, but individual parts of those
models need to be and often are tested. Indeed, those are the main points of
dispute in the legitimate research and end up being thoroughly vetted.

~~~
dudeguy999
There is a billion dollars' worth of research money for groupthink-infested
alarmist journals for every million dollars available from the fossil-fuel
industry. Your life as a climate scientist is MUCH easier if you shut up and
find a hockey stick than if you take Exxon funding.

~~~
scarmig
I'll try to interpret this as charitably as possible =) I think there are two
separate issues you're bringing up here.

The first is the implication that climatologists are all involved in a
conspiracy with power-hungry politicians to institute a kind of global eco-
Stalinism, in exchange for research grants. It's the only imaginable scheme
where you can treat all government-funded ecology, meteorology, clean-energy
tech, and climatology research as part of a coherent but corrupt bargain (as
you must to get your billions-of-dollars figure; I would also add that your
mere seven-figure fossil-fuel budget is grossly underestimated). Frankly, I
don't think you actually believe it, as it's implausible rhetoric that could
come straight from the fevered fantasies of Fox News and Rush Limbaugh.

The other is that there's groupthink among scientists. The idea seems to be
that academics and scientists are all too often willing to get caught up in
petty vendettas, battles for turf and recognition, and back scratching,
instead of focusing on the angelically pure pursuit of knowledge. The issue
with that is... well, there isn't an issue. It's totally true, as anyone who's
spent much time in research knows. Hence the CRU emails.

It's a fair criticism. But science has soldiered on despite it through the
centuries, and scientific institutions, even being plagued with those flaws,
have consistently produced better explanations of the world than hacks-for-
hire employed by Big Tobacco, Lysenkoist Communists, or the fossil fuel
industry.

~~~
moldbug
There's only one small problem with your reasoning: the Lysenkoists called
their work "science" as well. As do of course the other two.

Unless you adopt a tautological definition, in which "science" does not
include pseudoscience, "science" is whatever the people in your society who
practice and organize it choose to call "science."

More specifically, since basically all "science" is government-funded, you'll
find that your actual working definition of "science" is "whatever my
government funds and calls science."

So your statement boils down to: climatology can't be pseudoscience, because
it's funded by the US Government. And Washington (unlike Moscow) would never
fund pseudoscience, and call it "science."

This is a pretty interesting epistemology to say the least. Do I have it
right? If not, where's the error? If so, what information do you, as (no
doubt) a rationalist, have about the US Government that justifies this
extension of trust?

And if USG is not the institution you're trusting, what is? What set of human
beings are you investing your trust in? If the field of climate science as
presently practiced was not in fact scientific, but rather pseudoscientific,
who would you expect to have stepped in and shut it down?

[Edit: see also the links to the actual funding levels a couple of posts down.
If you're interested in reconsidering your position on this issue, the blog to
read is Steve McIntyre's.]

------
pchivers
Richard Feynman on reproducibility in science:

"When I was at Cornell, I often talked to the people in the psychology
department. One of the students told me she wanted to do an experiment that
went something like this--it had been found by others that under certain
circumstances, X, rats did something, A. She was curious as to whether, if she
changed the circumstances to Y, they would still do A. So her proposal was to
do the experiment under circumstances Y and see if they still did A.

I explained to her that it was necessary first to repeat in her laboratory the
experiment of the other person--to do it under condition X to see if she could
also get result A, and then change to Y and see if A changed. Then she would
know that the real difference was the thing she thought she had under control.

She was very delighted with this new idea, and went to her professor. And his
reply was, no, you cannot do that, because the experiment has already been
done and you would be wasting time. This was in about 1947 or so, and it seems
to have been the general policy then to not try to repeat psychological
experiments, but only to change the conditions and see what happens."

(from "Cargo Cult Science")

~~~
nerdfiles
<http://webjournal.nerdfiles.net/2012/04/03/this/>

Notice the date. That was the date.

What is interesting is that they are engaging in an activity that wishes to
avoid reproducibility. They allow "facts" and "ideas" to swap.

Articles like <http://ir.canterbury.ac.nz/handle/10092/5828> exist. There's
the Philosophical Foundations of Neuroscience. All that theorizing needs to
record facts. To do that, we need to be able to follow what the hell is going
on without blatant wankery like with Anthony Crick or early John Searle's
"foot in the mind" rubbish or Dennett's everything's-a-scientist metaphysics.
Philosophers got the descriptivist bug with Experimental Philosophy.
Linguistics has always been largely descriptivist. Now psychology is to do the
same. Who cares that they may have false theories, let's see if they've
described anything.

As Frasier obviously demonstrates, psychologists have had their Golden Age.
Psychology needs management.

------
pron
Small sample sizes and publication bias are a lethal combination in any field.

Suppose scientists wanted to test the hypothesis that a fair coin always comes
up heads. Due to budgetary issues, scientists are only able to toss the coin 5
times. 40 groups of researchers conduct the experiment in universities around
the world. One of them is quite likely to get 5 heads. Guess which result is
likely to be published (or which group is going to even attempt to publish its
result). Moreover, this result is statistically significant according to the
well-accepted peer-reviewed journal standard of p < 0.05.
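
To make the arithmetic concrete, here's a minimal simulation of that scenario
(a Python sketch; the seed is arbitrary, and the lab and toss counts are just
the illustrative numbers from above):

    import random

    random.seed(0)  # illustrative; any run gives similar counts
    n_labs, n_tosses = 40, 5
    results = [sum(random.random() < 0.5 for _ in range(n_tosses))
               for _ in range(n_labs)]
    lucky = results.count(n_tosses)  # labs that observed 5 heads out of 5

    # Each such lab's result has p = (1/2)**5 ~= 0.031 < 0.05, so it looks
    # "statistically significant" in isolation, even though the coin is fair.
    print("labs reporting 5/5 heads:", lucky)
    print("P(at least one such lab):", 1 - (31/32) ** n_labs)  # ~0.72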

I suggest people read John Ioannidis's paper: Why Most Published Research
Findings Are False
([http://www.plosmedicine.org/article/info:doi/10.1371/journal...](http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124)),
and the Atlantic story about Ioannidis and his work: Lies, Damned Lies, and
Medical Science ([http://www.theatlantic.com/magazine/archive/2010/11/lies-
dam...](http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-
and-medical-science/8269/))

~~~
Alex3917
Here is a more complete reading list:

Lies, Damned Lies, and Medical Science
([http://www.theatlantic.com/magazine/print/2010/11/lies-
damne...](http://www.theatlantic.com/magazine/print/2010/11/lies-damned-lies-
and-medical-science/8269/))

The Truth Wears Off
([http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_...](http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer?currentPage=all))

Grant System Leads Cancer Researchers to Play It Safe
([http://www.nytimes.com/2009/06/28/health/research/28cancer.h...](http://www.nytimes.com/2009/06/28/health/research/28cancer.html?pagewanted=all))

Placebos Are Getting More Effective. Drugmakers Are Desperate to Know Why.
([http://www.wired.com/medtech/drugs/magazine/17-09/ff_placebo...](http://www.wired.com/medtech/drugs/magazine/17-09/ff_placebo_effect?currentPage=all))

The Cult of Pharmacology: How America Became the World's Most Troubled Drug
Culture (<http://www.erowid.org/library/review/review.php?p=269>)

The Mouse Trap: The dangers of using one lab animal to study every disease
([http://www.slate.com/articles/health_and_science/the_mouse_t...](http://www.slate.com/articles/health_and_science/the_mouse_trap/2011/11/lab_mice_are_they_limiting_our_understanding_of_human_disease_.html))

Why Most Published Research Findings Are False
([http://www.plosmedicine.org/article/info:doi/10.1371/journal...](http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124))

Mistakes in Scientific Studies Surge
([http://online.wsj.com/article/SB1000142405270230362710457641...](http://online.wsj.com/article/SB10001424052702303627104576411850666582080.html))

Trials and Errors: Why Science Is Failing Us
(<http://www.wired.com/magazine/2011/12/ff_causation/all/1>)

In cancer science, many "discoveries" don't hold up
([http://www.reuters.com/article/2012/03/28/us-science-
cancer-...](http://www.reuters.com/article/2012/03/28/us-science-cancer-
idUSBRE82R12P20120328))

Lullaby for a Mouse: Anthropological Observations of an Animal Model of
Psychosis (<http://vimeo.com/33803995>)

Inside the Battle to Define Mental Illness
(<http://www.wired.com/magazine/2010/12/ff_dsmv/all/1>)

There are of course quite a few relevant books as well, but that's probably
enough to keep most people busy for a few days. Also, most of these articles
are about drug research, medicine, or psychology, but the same problems apply
to basically every field.

~~~
pron
I suggest you stay away from "The Truth Wears Off" by Jonah Lehrer. It's
filled with errors and exaggerations, and it seems like Lehrer's concern was
demonstrating the existence of a Gladwell-like bogus "phenomenon" which he
calls the "decline effect".

~~~
Alex3917
Interesting, is there any discussion of this online?

~~~
boredguy8
I would add to the list:
[http://thelastpsychiatrist.com/2011/02/the_decline_effect_is...](http://thelastpsychiatrist.com/2011/02/the_decline_effect_is_stupid.html)

The main point: Psychology is not "science" in the same way F=ma is science.
It's a model.

The secondary point:

    
    
      When the article then goes on to describe the breakdown of this
      sweeping generalization in studies after 1994 (on other species),
      it attributes that to the Decline Effect. It's not. When you
      look at the studies together, what you should have inferred is
      "symmetry is an associated factor in mate selection by females
      in only some species and not others and more research is needed
      to explain why." Instead, the article attributes its inability to
      summarize the variety and complexity of nature in a 140-character
      Twitter message to an underlying failure in the 500-year-old
      guiding principle of science.

~~~
debacle
I don't disagree with you that Psychology is not science, but F=ma is a model
as well.

------
jerf
If psychology is "under attack", or for that matter _any_ scientific field is
under attack, the solution is to do the hard work to prove that your work is
meaningful and accurate, and if you discover it isn't, to fix those problems.

"People" are gullible and easy to fool in the short term, but I think it is
commonly underestimated how smart "people" can be in the long term. Yes, you
_could_ throw up a smokescreen and dodge out of the spotlight cast on your bad
science today, but that will only be a momentary reprieve from the pain. In the
long term you'll still be under attack, and given that you will have been
witnessed using smokescreens and obfuscations, you'll probably be on the
losing side of that attack. In the short term, the pain of revealing just how
much flimflam there is might hurt, but the result will be a discipline that in
5 or 10 years is no longer under serious attack, because "people" will notice
that honesty and react to it.

If science is under attack, it is only because "people" are noticing that a
lot of it is bunk... and the problem is people are _right_. We've seen that in
a number of studies in a number of fields lately. The only answer that's going
to truly restore confidence and respect is to eliminate the bunk. Politics
won't work. Pay the piper now, or pay the piper more later. How often those
are the choices....

and of all the people who should understand the psychological appeal of paying
the piper later and taking the easy road today, you'd think it would be the
psychologists...

~~~
roc
I take deep exception to the idea that a group of scientists attempting to
reproduce the published work of their peers is, in any way, "an attack".

Casting things that way immediately exposes a speaker as not
_understanding_ science and/or not having _good science_ as their primary
goal.

Reproducing results _is_ science. Perhaps the most important kind.

~~~
jerf
That is a reaction to the statement from the original article:

"Nosek told Science that a senior colleague warned him not to take this on
“because psychology is under threat and this could make us look bad.” In a
Google discussion group, one of the researchers involved in the project wrote
that it was important to stay “on message” and portray the effort to the news
media as “protecting our science, not tearing it down.”"

I see they say "threat" in that text, not "attack", which appears elsewhere.
Might I point out that if you carefully read my post, it is vehemently agreeing
that replication is necessary, and from there it should seem logical that I
would not consider something necessary an "attack".

~~~
roc
I'm sorry I wasn't more clear. I was agreeing with you. I was just trying to
draw focus to that particular point and its implications.

------
iandanforth
In 2003 I did the work that led to this paper (<http://bit.ly/I1qvuI>). Could
it be replicated? Probably not. Let me outline a few challenges that _don't_
have to do with a result being true or not.

1. The passage of time

Unlike in physics and biology, culture has a huge impact on the field of
psychology, and cultures change. Additionally, my work was about addiction to
online games. I have _no idea_ if the same type of people exist today that did
then. I do know that the game I was studying (Asheron's Call) does not exist
in anywhere near the same form as it did then.

2. Copyright

While I was lucky enough to find psychological measures (aka tests) that were
in the public domain, most are not. To replicate a result you may have to pay
to access the tests and then most likely cannot re-publish them. At the very
least this makes replicating work inconvenient and expensive.

3. Data

Because psychology is largely driven by statistics, to replicate a result you
should really start by re-analyzing the data. The reality? This data simply
isn't available. It is not published alongside the results, and I don't have
it anymore. I _might_ be able to find the cleaned data, but that's not
important. Error and bias can just as easily be introduced during data
cleaning. For example, I removed many data points that appeared to be outliers,
but perhaps these should have been included. No one will ever know.

I hope barriers such as these are addressed in this investigation. It would be
disappointing to impugn the work of scientists when it is the process that
could really use reform.

~~~
endersshadow
As a layman, your three points terrify me about the state of science. Well,
numbers two and three do. There are still people addicted to video games; WoW
and Minecraft probably more so than Asheron's Call back in 2003.

As to point number two: Are these questionnaires or something that are used to
get psychological data? Why would those not be accessible--don't you need to
know the questions asked to determine if the results are valid?

As to point three: Data cleaning?! I would understand throwing out invalid
results (i.e., you find out somebody's lying, there were errors with data
collection, etc.), but to throw out results because they look like they don't belong feels
disingenuous. I've done a lot of work with public company financials and
operational systems. I cannot, on my worst day, imagine telling the SEC,
"Yeah, well that return was really abnormal and was an outlier, so we decided
not to record it." Why are these data points thrown out?

To me, again, a layman, it appears that if this is standard practice, these
experiments are starting with a conclusion and then just going through the
motions to get that conclusion published. Why are data cleansing and secrecy
normal practices? What about the scientific method don't I understand?

~~~
iandanforth
The line between an invalid result and one that looks like it doesn't belong
is fuzzy. I believe I was removing the former, but if an independent
replicator doesn't have access to the raw data they would never discover that
I had been fooling myself.

As to secrecy, there is nothing being hidden; it's that nothing in the
system requires you to publish all your data, and so, because you're pressed
for time, you don't. Never ascribe to malice what laziness can explain :)

~~~
endersshadow
I'm not ascribing this to malice. It strikes me as the height of negligence if
that's the way that the science of psychology is being done. In fact, it
strikes me as an affront to science.

Contrast this with the neutrinos that appeared to be going faster than light at
CERN. Instead of going, "Okay, well, those can't happen, so those are
outliers," they reported their results, published their methods and data, and
said to the world, "Help us validate this."

It seems to me like the modern psychology scientific method is the antithesis
of this--"These are the results of my massaged data, and no you can't see how
I got them, just trust me."

I'm not attributing this to malice, I'm attributing this to negligence and
laziness. Putting your work in front of people for critique is _hard_. Really
hard. But it's part of science. If psychology as a field has convinced
itself that it's above that reproach, then I think that's a huge
condemnation of the field.

When I first read the article, I thought it was a bit sensationalist to call
psychology "under attack," but now I'm not so sure. I've assumed that
reproducibility is the standard litmus test of all science, but I guess not,
and that leaves me with a bitter taste in my mouth toward the field of
psychology.

~~~
jmilloy
Outlier detection is an important part of basic statistics, and has been for a
long time. It isn't about just deleting a few data points that "look" strange.
Some statistical tests are robust to outliers and some are not... it's always
important to use the appropriate tests in the appropriate way and be open
about them.
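
To illustrate that robustness point with made-up numbers (a minimal Python
sketch, not a real analysis): a single extreme point drags the mean far from
the bulk of the data, while the median, a classically robust statistic,
barely moves.

    data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 97.0]  # last value is an outlier

    def mean(xs):
        return sum(xs) / len(xs)

    def median(xs):
        s = sorted(xs)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

    print(round(mean(data), 1))  # 19.6 -- pulled far away by one point
    print(median(data))          # 10.0 -- essentially unaffected

Whether that 97.0 is a data-entry error to be dropped or a real observation to
be explained is exactly the judgment call that should be documented and
published alongside the results.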

------
Estebon
I have found Hacker News to be full of some of the smartest people I've
encountered on the internet. However, every time a discussion pops up where a
social or soft science is involved, the discussion becomes mired in arrogance,
bias, and general small-mindedness about what constitutes "real science." I
see comment after comment where the author implies or outright says that
psychology is pseudoscience or close to it. The challenge in designing,
implementing, and supporting a study in psychology or a similar science is
staggering.

That's no excuse for questionable statistical interpretation or outright
manipulation, but to write off an entire field of study because it doesn't
have the convenient quantitative measuring capabilities that other sciences
have is ridiculous.

I have noticed a pattern as of late, or maybe it's just the articles I have
been reading on HN, but there seem to be a lot more people making dubious
claims, commenting on things they know nothing about, and genuinely turning
this place into a cesspool like the rest of the internet.

What is so hard about not posting an opinion on subjects where you have no
knowledge or any proof to back up your claims? If you haven't studied
psychology and have only a vague idea of what it's about, or of its past or
present state, then why muddle up the conversation with your BS conjecture?

I'm sure a legitimate licensed clinician would have a field day with some of
the posters on this site.

~~~
disgruntledphd2
While I agree with much of the substance of your post, the fact remains that
psychology is a mess. A friend of mine once told me that doing a doctorate in
a subject only really gave you the ability to see how your field is wrong and
all the problems with it, and I would agree.

The problems with psychology, from someone who's been at it for a while:

1. Lack of focus on replication.

2. In survey studies, failure to correlate psychometric measures with
behaviour or other kinds of measures (surveys versus reaction-time measures
versus physiological measures).

3. An unfortunate lack of understanding of the assumptions behind the
statistical procedures used routinely within the field.

4. Misplaced emphasis on theory at the expense of prediction. A relatively
well-known psychologist, quite statistically aware, posted on
stats.stackexchange.com that the goal of psychology was theoretical
understanding, not prediction. For the life of me, I can't see how one can
develop good theories without prediction, but it appears to be a dirty word
within much of psychology.

That being said, people are hard. They change their behaviour based on what
they think you are trying to do, they tell you what you want to hear, and even
when they tell the truth as they see it, they may well be mistaken.

So while it's not right to write off an entire field for some errors (in fact,
_my_ results are all perfect), it's also useless to deny the problem and pull a
Freud by saying that everyone who disagrees with your methods has some kind of
psychological disorder.

------
bane
It seems that psychology has always been about the atypical rather than the
typical. In most other sciences, the goal is to hopefully build a useful model
of the typical, and observations outside that model result in alterations to
the model.

In psychology, the idea seems to be to produce the inverse of that, yet there
doesn't appear to be any particular end goal of producing the typical model,
only of defining the atypical cases to the nth degree. The result? We have no
better idea what the typical mind is like than we did a hundred years ago, but
we have exhaustive lists of subtly different atypical models so encompassing
that almost anybody could be recognized as having a psychological problem of
some sort!

Describing a new disease (atypical model) is one of the only ways to get
recognized in the field. But all the easy cases have been taken, so bizarre
models seem to make the publishing rounds much more readily than subtle new
delineations of previously recognized diseases.

In treatment this turns into quack and fad medicine like "I prescribe shared
strip club night for marriage problems, with the idea that it forces couples
to talk to each other about their sexual problems blah blah blah" or "Anger
therapy" or "Primal Therapy" or other such nonsense.

I'd say that with the tools we have available today, psychology is due
for an Einstein-level revolution, but I'm not sure that the field, internally,
is ready for this.

~~~
radu_floricica
It already happened, 50 years ago; it just doesn't get any press. Pretty much
everything I've been reading in the past few years (Kahneman, Baumeister,
Baron, Stanovich, and Wilson, for example) will pass this with flying colors.

It is true, however, that psychology requires a bit more intellectual honesty
than other fields. Mistakes are more subtle and easier to cover up, samples
are smaller, and there's always the excuse of different cultures. I'm really
curious what this project will uncover.

~~~
neilk
Kahneman's getting a little more press lately, with a bestseller on the
shelves.

His story is interesting - he was given a lot of responsibility at a young
age, because he was doing officer evaluation for the newly-formed Israeli
army. He has to produce real, repeatable results, and has a wealth of data to
test his earlier conclusions, and perhaps the sheer newness of the entire
enterprise makes it safer for him to be honest. In a few years he realizes the
vacuity of his own psychological results, and by implication, many of the
methods he's been taught. That's what kicks off his whole program of
investigating sources of bias and error.

Would this have ever happened in a standard academic setting?

~~~
radu_floricica
Quite possibly. Roy Baumeister has possibly the best thinking process I've
ever encountered. He had the gall to approach a subject like this, and do it with
flawless rigor: [http://www.amazon.com/Meanings-Life-Roy-F-
Baumeister/dp/0898...](http://www.amazon.com/Meanings-Life-Roy-F-
Baumeister/dp/0898625319) Too bad his latest book (Willpower) is coauthored
with a professional writer... I miss his style.

------
hluska
I've always had problems with fields like psychology. Part of the field
revolves around hard science: neuroscience researchers (as one example) use
tools like fMRI to capture observable phenomena. On the other hand,
there is the art of psychology: social/developmental psychologists use
dubious tools like surveys and interviews to try to prove a thesis.

I think that the average Hacker News reader could write a survey that would
'prove' that gravity doesn't exist. What kind of experimental integrity do
surveys have?

And then there are participant pools. These pools are primarily composed of
undergraduates (who often earn bonus marks/money for participating).

Add in a strong publish or perish mentality and you can see some serious
problems. A whole lot of researchers are using dubious methods on a set of
participants that do not adequately reflect humanity as a whole.

~~~
disgruntledphd2
There are some issues I (as a psychologist) would have with your assumptions.

Firstly, fMRI and neuroscience research is probably the single biggest source
of errors and shoddy research in psychology. It's a combination of really small
sample sizes, poor statistical tests (brain regions are independent of one
another, really?), and huge amounts of data dredging to find significance. See
this roundup: <http://escholarship.org/uc/item/51d4r5tn>
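
To see why the combination of many tests and data dredging is so deadly,
here's a toy illustration (a Python sketch, not an fMRI pipeline): under the
null hypothesis p-values are uniformly distributed, so screening thousands of
noise-only "voxels" at p < 0.05 guarantees false positives unless you correct
for multiple comparisons.

    import random

    random.seed(1)
    n_tests, alpha = 10_000, 0.05
    p_values = [random.random() for _ in range(n_tests)]  # pure noise: the null is true everywhere

    print(sum(p < alpha for p in p_values))            # ~500 spurious "activations"
    print(sum(p < alpha / n_tests for p in p_values))  # Bonferroni-corrected: ~0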

Interestingly enough, most of the observable phenomena you note in
neuroscience are linked to traits that people think about (optimism,
personality etc) through surveys, so even if their methods were perfect, the
results still wouldn't be.

On your survey point, I would agree that the participant pools are quite
limited and generalizability is quite low, but your example is ludicrous. No
one does surveys about gravity in psychology; surveys are carried out to
investigate the manner in which people conceptualise their experience.

For a really great roundup of the problems with typical social science
participant pools, see Henrich et al.:
humancond.org/_media/papers/weirdest_people.pdf

Indeed, the entire issue of BBS that that article appears in is well worth a
look for a deeper understanding of these problems.

These surveys should then be calibrated against behavioural outcomes, but this
does not happen often enough, which is a major issue in my view.

That's the major problem with surveys, well, that and some poor methods
accepted too uncritically within the field (factor analysis).

To sum up, psychology has many, many problems and I fully support this
reproducibility effort (and it will expose a lot of findings as non-
replicable). But don't single out surveys for derision; neuroscience deserves
as much, if not more, of your scorn.

~~~
wch
As a recent Psychology PhD, I agree completely, and I'll add this: people
outside the field commonly believe that neuroimaging tools like fMRI are
better and "harder" science than more traditional experiments that measure
human behavior, but they have it totally wrong.

Here's an analogy for computer nerds: imagine if you had an "fMRI" of your
computer's operation. You'd see that different tasks result in different parts
of the computer "lighting up". Tasks such as graphics, disk I/O, numerical
processing, etc., would lead to different patterns of activation. And if your
scanner could resolve, say, 0.1mm voxels in your CPU, you might even learn
that certain parts of the CPU are related to certain tasks. But what all this
tells you is something about the gross physical structure of the computer; it
doesn't tell you much about the abstract, logical structure of the computer.
To draw an analogy, you learn something about the brain, but not very much
about the mind.

What's of interest to most people (except hardware engineers) is what the
computer does, not what parts activate.

With a "computer fMRI", you'd learn little about how, say, a filesystem works,
how a programming language works, or much of anything that's of interest at
the functional level of the computer. The same is true of fMRI and humans: it
doesn't tell us much about how the mind works. Instead, it tells us that there's a
brain region associated with some task. For example, one of the major recent-
ish findings with fMRI is that there's a small brain region that's associated
with face recognition.

I'm not saying that traditional experimental psychology is going to answer
everything. The mind is very complicated and I'm pessimistic that we'll ever
have a good reductive model, the way that we do in many other sciences. I'm
also not saying that neuroimaging is totally useless -- but it's certainly not
as enlightening as many people imagine.

Here's a summary of a paper that found that lay people thought that
psychological explanations were more convincing with neuroscience talk (even
though the "neuroscience" was totally irrelevant), and that those with
experience in neuroscience/psychology thought the opposite:
[http://scienceblogs.com/cognitivedaily/2008/03/when_we_see_a...](http://scienceblogs.com/cognitivedaily/2008/03/when_we_see_a_brain_light_up_o.php)

Here's another one that did something similar, but with brain pictures:
[http://scienceblogs.com/cognitivedaily/2008/06/whats_more_co...](http://scienceblogs.com/cognitivedaily/2008/06/whats_more_convincing_than_tal.php)

------
radarsat1
It's funny because, coming from an "interdisciplinary" research area, many of
us cower in fear of psychology journals, completely daunted by the rigorous
level of (perceived?) statistical knowledge required to be accepted.
Psychology is viewed by many as one of the most hard-core sciences, just
because it has taken such a hard-line attitude toward statistical work, out of
sheer necessity, due to past problems arising from a history of more
philosophical approaches to the subject. Other fields (e.g. HCI) are seen as
somehow "softer" just because they take pains to avoid the kind of criticism
that can come from the complex statistical interpretations we see in
pure psychology journals.

Although, I suppose I could be mixing up psychology with what we call
"psychometrics", which is the kind of psychology that I'm more familiar with
due to my research area, which involves perception of virtual reality.

By the way, I agree with others that this is not an "attack" on psychology; it
is simply science. Verification of results can invalidate claims, but it can
also easily provide further evidence for claims. Nothing bad can come of this
initiative, if it's carried out properly.

~~~
disgruntledphd2
You are definitely mixing up psychology with psychometrics. Psychometrics is
awesomely thorough and aware of statistical models and the limits thereof.
Psychology, on the other hand, relies on SPSS and on what other papers have
done to determine its methods.

------
amcintyre
_"If you’re a psychologist, the news has to make you a little
nervous—particularly if you’re a psychologist who published an article in 2008
in any of these three journals... <snip>...Because, if you did, someone is
going to check your work.'_

Can somebody in the field comment on this statement? Is it _really_ so out of
the ordinary to attempt to reproduce work in psychology that this kind of
statement is warranted?

~~~
luriel
Yeah, I would think that any sane and honest scientist would be _delighted_
that somebody else is going to test their work.

But after things like this:
[http://www.plosmedicine.org/article/info:doi/10.1371/journal...](http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124)
I'm starting to wonder how many scientists are really more dedicated to the
search for truth than to the search for grants and tenure.

~~~
eli_gottlieb
_I'm starting to wonder how many scientists are really more dedicated to the
search for truth than to the search for grants and tenure._

Scientists would love to do proper science. Just please, _please_, give us a
way to not starve from doing it!

------
tokenadult
Free full text of the 1991 article by David Lykken, "What's Wrong with
Psychology, Anyway?" can be downloaded from an online reference link.

<http://cogprints.org/371/>

The article shows how many of these issues have long been on the radar screen
of psychologists at the minority of universities (for example, the University
of Minnesota) where the psychology departments train graduate students in the
general scientific method. There are some very amusing, thought-provoking, and
shocking examples in the article.

------
alttag
The challenge of publishing "novel" work exists in many fields, not just
psychology, and the debate about potential false positives is ongoing. It's
great to see this reproducibility work being done. I'd like to see it in my
field.

------
jamaicahest
I personally hope the software world picks this up, so we might get some
actual data on things like TDD, agile, and waterfall, instead of people's
anecdotes and experiences on past projects and whatever Martin Fowler thinks
up (no offense, I love Fowler, but he would benefit from proving some of his
assumptions).

~~~
mangodrunk
I agree. It seems we have a problem where things that are most likely just
opinion or preference are treated as fact, and, to worsen the situation, there
is a disregard for evidence or the need for it.

For example, take the argument for why indexes should start at zero. [0] Some
people will claim, without providing evidence, that it's easier to learn, or
that those who use such a system produce fewer bugs, again without evidence.

[0]
[http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EW...](http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html)
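
For readers who haven't followed the link: the substance of Dijkstra's note is
the half-open, zero-based convention 0 <= i < N, which Python's range()
happens to embody (a quick sketch of the convention itself, not of the
empirical claims above):

    N = 10
    first_half = range(0, N // 2)   # 0 <= i < 5
    second_half = range(N // 2, N)  # 5 <= i < 10

    # Adjacent half-open ranges abut with no gap and no overlap,
    # and the length of range(0, N) is simply N.
    assert list(first_half) + list(second_half) == list(range(N))
    assert len(range(0, N)) == N

Note that none of this settles the "easier to learn" or "fewer bugs" claims;
that would take exactly the kind of empirical study being asked for above.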

------
nradov
A cynic would say that psychology can't come undone since it's never really
been "done" in the first place. We know so little about the fundamental nature
of human thought that the whole field is basically a joke. Their journal
articles are built on a foundation of sand.

I expect that 200 years in the future historians will look back at 2012
psychology as a pseudoscience, little better than phrenology.

------
6ren
Reminds me of the rat mazes in Feynman's "Cargo Cult Science":
<http://www.lhup.edu/~DSIMANEK/cargocul.htm>

------
montecarl
I am glad someone is doing this. I recently tried to implement an algorithm
described in a paper in a well-known chemistry journal and discovered that the
paper's results were incorrect. It turns out that the author's program (which
was not made available with the paper) didn't correspond with what was
described in the paper. That is to say, when I used the described algorithm, I
got very different results than those in the paper.

Now I'm not sure what to do. I want to correct the paper, but I don't want to
openly criticize the work of others.

~~~
corford
If you are prepared to back your claim up in a way that can be independently
verified, why don't you want to criticize the original author's work? Surely
correct science is more important than sparing someone's blushes?

~~~
montecarl
After reading your comment, I can't come up with any reason other than the
blushes. Perhaps I'll reconsider.

~~~
otoburb
How about reaching out to the original author and outlining the problems that
you've been encountering? This way, you can at least give them a chance to
respond privately or correct potential misunderstandings with the paper
(perhaps publicly).

A similar analogy would be approaching a vendor regarding a
security vulnerability disclosure.

~~~
montecarl
I already have. They recently gave me the source code and I pointed out the
problems. They agree that there are bugs in their code. I just haven't
approached them about publishing a comment on their paper because I feel bad
about it.

------
Alex3917
"Yet when Stuart Ritchie, a doctoral student in psychology at the University
of Edinburgh, and two colleagues failed to replicate his findings, they had a
heck of a time getting the results into print"

Maybe because they were a bunch of hacks. They didn't actually replicate Bem's
methodology, so whether or not they got the same results is irrelevant. The
journals were right to reject their work.

~~~
Estragon
Got a cite to back that up? Because Ritchie claims that no serious
methodological issues were raised during the review process, other than the
ridiculous "You can't get positive results from an ESP test unless you believe
in it."

~~~
Alex3917
I don't remember the exact article that discussed it, but I'm sure you could
find it by searching HN. Other than the issue you mentioned, the other issue
was that the original study was done in person using Ivy League students,
whereas the replication was done over the Internet using a non-comparable
demographic.

Regardless of whether or not one thinks it's ridiculous that these issues
could have any effect, it's intellectually dishonest for them to say that
they've replicated Bem's methodology and failed to reproduce the results when
in fact they haven't actually done so.

(The researcher-belief issue matters since we already know that how well a
drug works depends on how much the person administering the drug believes it
will work, which is why in well-designed drug trials the investigators aren't
the ones who administer the drug. So it's not like the replication was being
dinged for not subscribing to some exotic new methodology; rather, they were
being shot down for not following existing best practices. That said, I have
no idea whether there were videos or how much the researchers' beliefs would
have actually been apparent, so I don't really know whether or not it was
reasonable for the journal to reject it based on this one point.)

~~~
gergles
> we already know that how well a drug works depends on how much the person
> administering the drug believes it will work

hahahahahahaahahahahahahahahah what? No, no we do not.

If you have a citation that suggests otherwise, I would be _thrilled_ to read
it.

The reason that we blind is because there ARE effects based on perception if
the person administering the drug knows it's experimental. There have been
studies indicating that the administrator would say things like "this really
should work", for example. That invokes the placebo-effect response in the
patients (which is also well documented), but the concept that "we know how
well a drug works depends on how much the person administering it believes in
it" is completely incorrect.

~~~
Alex3917
Everything you said in your comment supports what I said. If there is _any_
non-zero effect attributable to the beliefs of the person administering the
drug, then it is 100% correct to say that how well a drug works depends on how
much the person administering it believes it will work, regardless of whether
the difference in outcomes is enormous or tiny.

~~~
gergles
No, there is a difference between _belief_ and _actions_. The _action_ of
saying "this one should really work" of course has an effect.

There is absolutely no scientific evidence anywhere to support that the
"belief" on the part of an administrator in a drug trial has anything to do
with any outcomes.

~~~
Alex3917
That's simply not true. Cf. the Clever Hans effect, or all of the recent
research on drug dogs:

<http://www.erowid.org/freedom/police/police_article1.shtml>

I think that should be sufficient to prove that beliefs can be transmitted
non-verbally. You are trying to argue there is some distinction between belief
and action, but the only reason any 'action' has an effect is because it
transmits a belief. What you are saying is that the non-verbal transmission of
a belief could not have an effect on outcomes, whereas the verbal transmission
of a belief can. This makes no sense.

------
ams6110
It would not surprise me in the slightest to find that a large amount of
published research findings are pure bullshit.

------
rsanchez1
Sociologists everywhere are panicking.

On the other hand, they'll probably expose a whole bunch of "decided" research
fields that new psychologists can revisit.

