
What has happened down here is the winds have changed - mafribe
http://andrewgelman.com/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed/
======
s_q_b
Good scientists should welcome this challenge with open arms, sunlight all of
their data and code in a repository, and show how proper statistically
rigorous research is conducted. That's an elevator right to the top of the
field.

If I wrote code that messed up, I'd patch it. If someone else points out a
bug? Even better, now I don't have to find it myself. But I would definitely
not feel attacked, nor would I blame the QA engineer. It's my mistake, and
so I must own it. These researchers need to do the same.

I believe full publication of code and data should be required for peer-
reviewed research. If I can't look at your raw data or your algorithm, it is
simply not possible to determine whether the study is correct. The social
sciences must adapt to more statistically rigorous methods.

After all, the only two choices are adaptation or extinction.

~~~
jgalt212
> sunlight all of their data and code

This is not always possible. For example, if you are working with proprietary
subscription-only datasets where the researcher has no redistribution rights.

~~~
Cerium
If the community demands the ability to review code and data then the usage of
closed data sets will simply decrease until they comply. We should not base
our standards of research on the ease of complying. We should do what we can
to produce the most accurate results.

~~~
nickff
The problem is not one of convenience, the problem is that you are depriving
yourself of huge amounts of valuable (closed) data, and this may cause large
gaps in what is researchable.

~~~
eveningcoffee
This was the point. You have to pay the price for the principles and hope that
the tide turns one day.

If you take your suggested easy route then this day may never come.

~~~
zmb_
Are you a researcher and following those principles? Because it is very easy
to demand _other_ people pay the price for what _you_ believe are the right
principles.

The reality of the way research funding and academia is set up (at least in
Europe) is that the price you pay is likely your career. You can go through
all the work and effort to follow those principles, and all that will happen
is that the politicians will give the research funding and tenure positions to
competitors that published more and told more exciting stories.

Academia and research will necessarily reflect the incentives put in place by
the people with the money (i.e., politicians). And right now they are giving
money for exciting stories and publication counts. _That_ is where the change
must start.

~~~
eveningcoffee
I did not _demand_ anything. I just explained that when everybody complies,
nothing will probably change.

------
dahart
Susan Fiske's name came up the other day, I wonder if the context for this
essay in the article is the Statcheck bot that found errors in published
papers.
[https://news.ycombinator.com/item?id=12643978](https://news.ycombinator.com/item?id=12643978)

Apparently (I gleaned from the comments) Susan was very vocally opposed to
that project and said it was an attack.

She's painting a bleak picture of widespread social attacks and commentary
causing the problems in academics that I don't think matches reality, and
she's also suggesting the public shouldn't comment on publicly funded
research.

I don't know of a single academic that has left the field due to negative
commentary coming from outside the peer review system, and I know a lot of
people that have left academics and a lot of people still in academics.

Some ambitious academic peers are vicious, and people are leaving due to
infighting, trouble getting funding, academic stealing, difficulty getting
tenure, and general departmental and university politics. The single biggest
threat to education is our funding model and the exploding costs stemming from
exploding amounts of administration.

The paper review process not only isn't one of the major problems, I
personally believe it's actually working better now than it ever has, and that
the quality and standards are higher than they've ever been.

The comments on Statcheck were wildly in favor of having automated bots fact-
checking publications. I contend that its best use is before publication, and
not after, but otherwise I agree - it's awesome.

------
Houshalter
I love this essay. Also relevant, _the Control Group is out of Control_ :
[http://slatestarcodex.com/2014/04/28/the-control-group-is-ou...](http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/)

~~~
SonOfLilit
Oh yes. This should be required reading.

I have a long-term bet with a friend on the truth behind Wiseman & Schlitz (I
think I bet 1 dollar on "beyond known science" to his 50 on "methodological
error", but maybe the odds were different, and the amounts were definitely
more. Maybe we will try to reproduce it at home, because otherwise who knows
if it will be resolved at all in our lifetime.)

------
ak217
This original editorial by Fiske, especially the "methodological terrorism"
soundbite, reminds me of the NEJM data sharing editorial where they described
people who use datasets that others shared as "research parasites".

Both editorials are written by people in positions of academic power (Fiske is
a PNAS editor - PNAS editors have a unique and often criticized ability to
unilaterally review and approve publication of articles without a peer review
panel; NEJM editors wield great power in medical sciences).

Both are originally directed toward a narrower audience of their journal, but
taken out of the confines of their academic cloister, start to sound
ridiculous in a world where the public starts to point out where their funding
is coming from, and poke holes in their reasoning.

~~~
conistonwater
> _start to sound ridiculous in a world where the public starts to point out
> where their funding is coming from_

I have to say I disagree (did I misunderstand you?): they sound absolutely
ridiculous even if your only goal is to do good science and you don't care one
jot for the public or funding. That was always a key problem with _research
parasites_ and Fiske's _methodological terrorism_ : apart from everything
else, it's just bad science.

~~~
fenomas
"sounds ridiculous in a world where X" does not mean "sounds ridiculous _only_
in a world where X". It doesn't imply that the thing wouldn't be ridiculous if
X wasn't the case.

------
s_q_b
First, I don't understand why rapid feedback is bad. Communication adapts to
new mediums all the time, and academia goes with it. The arXiv pre-print
system is a great example of a field adapting well to a new technology.

Secondly, addressing the article's main point, I'm being told that social
media comments about their studies are so traumatic and detrimental that they
leave their careers. That's a problem with the researcher's ability to handle
digital hate mail. Doctors, lawyers, dentists, and veterinarians all get
massive amounts of digital hate mail.

Finally, after the failure to replicate studies, "self-appointed data police"
and "methodological terrorism"[0] sound like exactly what the field needs.

[0] I have to say, only an incredibly insular community could come up with
such an insensitive name. Checking your methods is not terrorism. Someone
doing preliminary reviews on research data is not terrorism. To compare data
critics to terrorists both insults the victims of actual terrorism, and
dilutes the definition of the term so far as to be almost meaningless. They
may as well call them "Methodological Nazis."

~~~
hackuser
> I don't understand why rapid feedback is bad

She doesn't say rapid feedback is bad. She says online personal attacks and
abuse are bad.

~~~
wyager
You are falling for a classic argumentative attack. Everyone agrees "abuse"
(e.g. beating your wife) is bad, so Fiske just redefines "abuse" to mean
"earnest, blunt, non-sugar-coated feedback" but you keep on using the
connotations of the real definition.

~~~
mwfunk
What about "earnest, blunt, non-sugar-coated feedback" that is useless,
uninformed, and counterproductive to researchers? I was under the impression
that that was the concern here.

Just because someone says something non-sugar-coated doesn't mean it has any
truth or value to it. In my experience, a lot of the people who publicly
congratulate themselves on never pulling any punches and telling it like it is
and keeping it real, etc., aren't motivated because they feel like they have
valuable feedback to offer (they usually don't), but rather it's just a way to
act out and maintain a tough-guy public persona that they want everyone to
believe in.

I'm 100% in favor of public peer review, I just question the value of
stereotypical Linus Torvalds-style "I'm just telling it like it is, you idiot"
feedback that some people fetishize in these parts. The best course of action
here is by definition whatever leads to more and better research, with
whatever the appropriate amount of public and private peer review is that
leads to that outcome.

~~~
vacri
> _Linus Torvalds-style "I'm just telling it like it is, you idiot" feedback_

Torvalds never starts out that way, and if you look at any given episode where
he's shown to have done this, the historical commentary will show Torvalds
trying to politely educate the other party, and the other party just not
getting it (often wilfully).

As for fetishising it, I've _never_ seen anyone here on HN laud Torvalds for
doing that; quite the opposite, it's only ever brought up to denigrate him.
Ironically, de Raadt's behaviour around OpenBSD _does_ get fetishised by some
around here. I've never really understood why de Raadt gets a reasonable
amount of respect for that behaviour whilst Torvalds is a pariah for it here
on HN.

~~~
botw
That caused a lot of psychological damage.

------
hudibras
Gelman is being a bit modest (for him) because he only mentions in passing
probably the biggest recent example of the "social media takedown of
psychology research" phenomenon: he was the first person to publicly raise
questions about the LaCour and Green study on changing people's support for
gay marriage, and he did it in his washingtonpost.com blog five days after the
study was published.

~~~
apeace
Thanks for providing that context. I believe this is the article you
mentioned: [https://www.washingtonpost.com/blogs/monkey-cage/wp/2015/05/...](https://www.washingtonpost.com/blogs/monkey-cage/wp/2015/05/20/fake-study-on-changing-attitudes-sometimes-a-claim-that-is-too-good-to-be-true-isnt/)

------
ewr24
> _When the authors protest that none of the errors really matter, it makes
> you realize that, in these projects, the data hardly matter at all._

~~~
draw_down
I think my personal favorite is the Kahneman quote about how you have NO
CHOICE BUT TO ACCEPT.

~~~
_yosefk
I also find this quote remarkable, and not in a good way; but I recall that
Kahneman's studies do replicate pretty well. (This is not to say that all
their claimed implications are true - this is the part where "no choice but to
accept" rubs me the wrong way.)

~~~
conistonwater
The full _no choice but to accept_ quote was specifically about priming
studies, IIRC, and I guess what irked Gelman (a statistician) the most was
that Kahneman got _statistics_ wrong by overestimating the strength of
evidence.

------
tgarma1234
I was a big fan of "Thinking, Fast and Slow", like pretty much everyone,
until the replication crisis came along. Now I see a lot of the work in this
field as a sort of scientifically informed stereotyping that might someday end
up on par with phrenology, but it's still fun to read and consider as if it
were true. It's really awful how pseudo-science can get so far though.

~~~
botexpert
Did it turn out that research in the book is flawed or not reproducible?

~~~
mikeklaas
Yes, some of the studies haven't been reproducible. I don't have references
right now, unfortunately.

~~~
whyever
Here is one:
[https://www.psychologicalscience.org/pdf/StrackRRR_manuscrip...](https://www.psychologicalscience.org/pdf/StrackRRR_manuscript_accepted.pdf)

~~~
JumpCrisscross
Wow, the plot on the bottom of page 11 really drives home the point.

------
Kenji
Oh, free speech and criticism are okay as long as they're moderated and
appropriate... Well played.

The work of psychology researchers influences our LAWS. They influence how I
can live my life. It's a very personal thing. I will criticise them however I
want and as much as I want until this branch of 'science' has cleaned up its
conduct.

If your work cannot withstand criticism, it's worthless. Get out of science,
Susan T. Fiske, you give it a bad name. Or rather, what you were doing was
probably never science in the first place. Your attitude fills me with
disgust.

This is not a Russell's Paradox situation that can be patched up. The crisis in
social 'sciences' is of such a magnitude that the only reasonable course of
action is "start over" in many cases.

~~~
forgotpwtomain
> The work of psychology researchers influences our LAWS. They influence how I
> can live my life. It's a very personal thing.

While I agree with the above sentiment:

> If your work cannot withstand criticism, it's worthless. Get out of science,
> Susan T. Fiske, you give it a bad name.

I think the tone is very much off. While researchers should be held
responsible for bad science, you make it sound like it was necessarily done
with malicious intent. I think this attitude is unproductive for everyone
involved.

~~~
sbmassey
Does the intent matter? Even if it was just a career-long run of incompetence,
"getting out of science" would seem a reasonable starting point.

------
cinquemb
Excerpt from post on Robin Hanson's blog covers this well[0]:

"In the rest of society, however, we often both try to hire people who seem to
show off the highest related abilities, and we let those most prestigious
people have a lot of discretion in how the job is structured. For example, we
let the most prestigious doctors tell us how medicine should be run, the most
prestigious lawyers tell us how law should be run, the most prestigious
finance professionals tell us how the financial system should work, and the
most prestigious academics tell us how to run schools and research.

This can go very wrong! Imagine that we wanted research progress, and that we
let the most prestigious researchers pick research topics and methods. To show
off their abilities, they may pick topics and methods that most reduce the
noise in estimating abilities. For example, they may pick mathematical
methods, and topics that are well suited to such methods. And many of them may
crowd around the same few topics, like runners at a race. These choices would
succeed in helping the most able researchers to show that they are in fact the
most able. But the actual research that results might not be very useful at
producing research progress.

Of course if we don’t really care about research progress, or students
learning, or medical effectiveness, etc., if what we mainly care about is just
affiliating with the most impressive folks, well then all this isn’t much of a
problem. But if we do care about these things, then unthinkingly presuming
that the most prestigious people are the best to tell us how to do things,
that can go very very wrong."

The winds may have changed, but landscape erodes very slowly…

[0] [http://www.overcomingbias.com/2016/06/beware-prestige-based-...](http://www.overcomingbias.com/2016/06/beware-prestige-based-discretion.htm)

~~~
ArkyBeagle
Oh, this is apt. Apt I tell you!

------
apatters
The phenomenon described here is one of many side effects of what I think is
the greatest disturbance in the zeitgeist since at least 9/11: the
disintermediation of information itself.

We live in an era where authorities of all kinds are under greater scrutiny
than ever before, because the Internet's made it so much easier for everyone
to learn about and discuss what they're doing. And to believe that we know
better than them. The mass media gatekeepers who once shaped public opinion
are still present but have a fraction of the influence they had in the 20th
century.

Nassim Taleb discussed this recently from a more political angle:
[https://medium.com/@nntaleb/the-intellectual-yet-idiot-13211...](https://medium.com/@nntaleb/the-intellectual-yet-idiot-13211e2d0577#.ju4yyfr3y)

But it is really the same thing. From the post-Snowden privacy movement to
Brexit to the Trump campaign to "the destructo-critics" referred to in this
article, all of these things are a product of people becoming more informed
about what authorities are really doing, getting pissed off about it, and
taking action.

You can make your own value judgments about whether all of these things are
good or bad (we surely know what the predominant opinions will be among HN's
audience of left wing tech people: pro-Snowden, anti-Brexit, anti-Trump, pro
destructo-critics). But I think the overall trend is good. And intensely
destabilizing to society, in no small part because even as we approach this
information equality, wealth inequality is soaring.

I think one important thing to understand about this trend is that _numbers
win_ -- you can be right but if a large enough part of the population
disagrees with you, you're going to be on the defensive anyway.

Another thing is that while people are rapidly becoming better informed,
they're becoming more confident in their opinions even faster. So again we are
not necessarily always going to see better decisions being made -- but they
are going to be made with more public input.

I know that might sound a little crazy when we seemingly hear about a new
secret conspiracy or abuse of power every day, but that's the point: _this
stuff was always going on and we just didn't know._

~~~
ArkyBeagle
Very little of the scrutiny is any good at all. It's riddled with mood
affiliation and confirmation bias. The incentives simply are not with
thoughtful, considered analysis. It's too easy to create moments for people to
be swept up in, and to use the tools of persuasion to raise the persuader's
stock. Where social media could be an agent for reason, it has not taken that
role much so far.

Take the stories around Ferguson - I think Alex Tabarrok won the internet the
day he released "Ferguson and the Modern Debtor's Prison" but that point of
view only appears in other references to the situation sparingly if at all.

I just have to flatly disagree with the idea that better information leads to
higher confidence in conclusions - there's simply no way that can be true
unless the subject is inherently trivial or unless some methodological advance
has made the previously intractable tractable. More information means more
uncertainty unless you're dealing in convergent statistical situations.

What's needed overall is for more people to listen with respect and genuine
curiosity to people with whom they disagree.

The effect of social media so far has been decidedly uncomfortable, especially
in the Middle East. Perhaps that is the Islamic Protestant Reformation, with
social media playing the role of printing press, but I doubt it - people who
know Islam much better than I claim it's actually riddled with deep apostasy
and not the kind that seems overall a positive thing. Ignoring deeply
normative Koranic instructions for the application of Islam in favor of
violence (perhaps even war-crime-shaped violence) cannot be a good thing.

~~~
SubiculumCode
I agree. Argue with your peers, not with the internet - but open up your data
sets after you publish the several studies you get from them.

------
TheLarch
Fiske's Cognitive Miser theory: "The theory suggests that humans, valuing
their mental processing resources, find different ways to save time and effort
when negotiating the social world."
([https://en.m.wikipedia.org/wiki/Cognitive_miser](https://en.m.wikipedia.org/wiki/Cognitive_miser))
Such an indictment that this "science" can debate these topics with a straight
face.

As Feynman said: "We've learned from experience that the truth will come out.
Other experimenters will repeat your experiment and find out whether you were
wrong or right. Nature's phenomena will agree or they'll disagree with your
theory. And, although you may gain some temporary fame and excitement, you
will not gain a good reputation as a scientist if you haven't tried to be very
careful in this kind of work. And it's this type of integrity, this kind of
care not to fool yourself, that is missing to a large extent in much of the
research in cargo cult science."

------
haalcion3
> once an article is published in some approved venue, it should be taken as
> truth

And I agree with the author that it shouldn't.

A good point is made about the arguments for releasing study findings in
blogs, where discussions can be had in comments and perhaps the content of the
blog amended as needed.

I definitely think that the old way of publishing studies needs to change.
Journals need to allow discussion. Those who publish studies and those who
read them need to critique them, and conclusions should be amended where
warranted. To make this happen at an intellectual level, students and others
in academia and research should receive more funding from taxes and possibly
be required to critique and replicate other studies to some extent.

For too long we've assumed the bodies of accumulated knowledge in the sciences
to be fact without dispute; that's not science. Science is all about finding
models that work and gathering data and trying to draw conclusions from it.
Science was never meant to be about facts; it is all about understanding.
Though some understanding may be innate, much of human understanding comes
from experience. While we can build upon previous "knowledge", time over time
we've seen what we held to be science fact proven false, e.g. earth is flat ->
it's round but the sun goes around it -> it goes around the sun, and flies
spontaneously generate from feces -> flies lay eggs in feces. If we'd never
allow criticism, we'd never have progressed.

Science should be about keeping an open mind and accepting there is much we
don't know, even if you believe and have faith in something. Einstein believed
in God, and Hawking is an atheist. Both are scientists, and both have had
theories proven and disproven. We are human, and we must continually strive
for understanding knowing some ideas may be right, some may be wrong, and that
ideas will change.

~~~
beevai142
> For too long we've assumed the bodies of accumulated knowledge in the
> sciences to be fact without dispute

Who is this "we" who has assumed such a thing? Certainly no working scientist
I know would agree that a result being published means it is necessarily
correct. Nevertheless, some knowledge of the world scientists have accumulated
has very strong support, and those who challenge it have invariably turned out
to be in the wrong.

~~~
internaut
The 'we' could be a bunch of people who are no longer even alive. No longer
alive in body but alive in spirit through the application of the law. I was
listening to a building science podcast the other day.

In it they were talking about how they used to use, and still use, a 100 year
old assumption about 'make-up air' in houses which is part of setting up an
air conditioning system.

Turns out the original scientific papers were formalized into regulation or
ASHRAE standards without double checking whether they were true.

Practically this means every single household in the country has an AC unit
which is overpowered by 30% or 50%. For 100 years.

It also meant tens of millions of buildings suffered from dry rot because the
systems weren't balanced correctly if I remember right. I'll look it up if
anybody's interested.

~~~
internaut
[http://hwcdn.libsyn.com/p/c/7/c/c7c82959d816327d/BuildingPer...](http://hwcdn.libsyn.com/p/c/7/c/c7c82959d816327d/BuildingPerformance33-LewHarriman.mp3?c_id=5012376&destination_id=14420&expiration=1476583667&hwt=f23360bced4c0559a85ce1b766cbadb9)

Looks like it becomes a bigger problem as buildings become more airtight.
Something to do with peak dewpoint in makeup air being all wrong, not sure
because I'm not a HVAC expert.

------
Animats
Psychology as a science has far fewer reliable results than it claimed it did.
It's not that the winds have changed. It's that the field was broken.

Realizing this is a good thing. It's going to set back careers and reduce
funding, though.

Next, economics?

 _"Science is prediction, not explanation."_ \- Fred Hoyle

~~~
nickpsecurity
"Next, economics?"

I vote for that field's BS to be thoroughly dismantled next. Like psychology,
they often stray too far from the real world almost as if the goal is just
citations, funding, and academic prestige. That's common across academia, but
the connection to reality is thinner with a lot of economics. The number that
try to understand what's going on in the U.S. economy without factoring in bribes
to politicians and self-forming cartels in oligopolies might be a starting
point for the filter. The first thing anyone should ask about a
government subsidy, law, or regulation is, "Did someone pay for it? And how
would that affect things in local and global sense?"

Same with media reporting. "In today's news, this chemical company and these
specific drug companies paid this amount of bribe money to the following
Congressmen for the following law to be passed drastically reducing what they
have to pay you if they maim you for life, pay your family if they kill you,
or pay your town if they mess a whole town up."

A little reality check might change both purchasing and voting behaviors. It's
gotta be presented in the model people are reading or hearing. If the effect
exists & is not in model, then that's just politically or scientifically
dishonest. Often for a reason. ;)

~~~
Animats
It matters more for economics, because economics influences policy. This
despite the demonstrated lack of predictive power.

------
JBiserkov
>If Fiske etc. really hate statistics and research methods, that’s fine; they
could try to design transparent experiments that work every time. But, no,
_they’re_ the ones justifying their claims using p-values extracted from noisy
data, _they’re_ the ones rejecting submissions from PPNAS because they’re not
exciting enough, _they’re_ the ones who seem to believe just about anything
(e.g., the claim that women were changing their vote preferences by 20
percentage points based on the time of the month) if it has a “p less than
.05” attached to it. If that’s the game you want to play, then methods
criticism is relevant, for sure.

------
weisser
Interesting use of the lyrics to Randy Newman's classic "Louisiana 1927"

[https://youtu.be/MGs2iLoDUYE](https://youtu.be/MGs2iLoDUYE)

If you only know about him as the guy who does Disney/Pixar music you're
really missing out.

~~~
Hondor
Kind of fitting, but I wonder if the last few lines will turn out to fit too: "The
President say ... isn't it a shame what the river has done to this poor
cracker's land."

------
martincmartin
"A new scientific truth does not triumph by convincing its opponents and
making them see the light, but rather because its opponents eventually die,
and a new generation grows up that is familiar with it." \-- Max Planck

~~~
Afforess
A pithy quote, but it does not have to be that way. There is no reason or law
prevents us from acknowledging our failures in the past and updating our
beliefs to the corrected findings of the present. Death is just the minimum
bar, the floor, for progress, not the ceiling.

~~~
bulatb
_> There is no reason or law that prevents us from acknowledging our failures_

I'd argue that there is. That law is Goodhart's law. [0]

As long as there's a benefit to having people _think_ you're right—in terms of
credit, money, power, fame, or even just the satisfaction of believing that
you've solved a problem—actually _being_ right will tend to take a backseat to
_appearing_ right.

[0]
[https://en.wikipedia.org/wiki/Goodhart%27s_law](https://en.wikipedia.org/wiki/Goodhart%27s_law)

------
sbwm
> 2011: Daryl Bem publishes his article, “Feeling the future: Experimental
> evidence for anomalous retroactive influences on cognition and affect,” in a
> top journal in psychology. Not too many people thought Bem had discovered
> ESP but there was a general impression that his work was basically solid,
> and thus this was presented as a concern for psychology research.

> In retrospect, Bem’s paper had huge, obvious multiple comparisons
> problems—the editor and his four reviewers just didn’t know what to look
> for—but back in 2011 we weren’t so good at noticing this sort of thing.

I was a postdoc in a Psychology department when this was going on, and
"obvious multiple comparisons problems" isn't a good characterization. Any
competent psychology researcher in 2011 (a) understood multiple comparisons
and looked for them as a matter of course, and (b) knew there was something wrong
with Bem's paper (see the editorial disclaimer).

Here is the main takedown of it:
[https://dl.dropboxusercontent.com/u/1018886/Bem6.pdf](https://dl.dropboxusercontent.com/u/1018886/Bem6.pdf)

That is some pretty advanced statistics, not just "correct for multiple
comparisons".

What was ongoing then, and continues now, is that psychology and social
science in general is coming around to the realization that the tools of the
past 50 years are flawed, and to correct them, they need to become better
statisticians. But it isn't a matter of "take stats 101 noobs", these are
people who have been doing statistical analysis routinely for years. I think
there is anxiety that to really do things right you need to _primarily be_ a
statistician.

So there is some defensiveness in social sciences about this, certainly not
helped by the fact that every jackass on the internet who's taken an undergrad
math class thinks they know better.
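
To spell out why uncorrected multiple comparisons matter (a minimal sketch of
my own, not from Bem's paper or its critique): if you run many independent
tests of true null hypotheses at the 0.05 level, the chance of at least one
false positive grows rapidly.

```python
import random

def familywise_error_rate(n_tests, alpha=0.05, n_sims=20_000, seed=0):
    """Estimate P(at least one false positive) across n_tests independent
    tests of true null hypotheses, each run at significance level alpha."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # Under the null, each p-value is uniform on [0, 1].
        if any(rng.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / n_sims

print(familywise_error_rate(1))   # close to the nominal 0.05
print(familywise_error_rate(20))  # close to 1 - 0.95**20, about 0.64
```

The point of the parent comment stands, though: the subtler problems in work
like Bem's needed far more than this textbook correction to diagnose.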

In the end I quit my psych research job to be a software engineer since all
the stats hurt my head and I needed something less quantitative to do.

~~~
aggerdmon
I too am coming from psychology, looking to make the transition into a career
in tech, and would be very interested to hear more about your experience. But
I would like to offer my own, having just gotten out of school.

I agree that there is absolutely a need for a transition to more advanced
statistical methods in the field. In cognitive psychology at least, you are
starting to see growing interest in adopting Bayesian techniques and moving
away from null hypothesis significance testing. But unless you come across it
on your own, the difference between Bayesian and Frequentist statistics is
unlikely to be referenced until the graduate level. I believe Bayesian methods
may alleviate some issues. For instance, one of the studies we were starting
up towards the end of my time at the lab had an interesting property: using
Bayesian methods, it could do what amounts to corroborating, or supporting,
the null hypothesis. If Bayesian methods start seeing wider adoption, I have
to wonder how carefully people will think about their choice of priors, but
it's a step in the right direction.

With regard to the statistics being taught: quite a few of the K300
(Statistics for Psychology) courses are now being taught using R, but my own
K300 course emphasized learning to do it by hand and didn't allow calculators.
An interesting point brought up on a podcast I was listening to [1], was that
we still teach statistical methods in the order they were developed/discovered
and not the order that makes the most sense. I could see how teaching ANOVA as
a special case of linear models might be less hand-wavy. Interesting podcast;
the professor they're interviewing is advocating a technique called structural
equation modeling which I would love to find time to read up on.
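
The ANOVA-as-linear-model point can be sketched in a few lines. This is a hypothetical illustration with made-up data: the one-way ANOVA F statistic is identical to the F test of a linear model whose predictors are group dummies, because that model's fitted values are just the group means.

```python
import math

# Hypothetical data: three groups (say, three experimental conditions).
groups = [
    [4.1, 3.9, 4.5, 4.0],
    [5.2, 5.0, 4.8, 5.4],
    [3.5, 3.8, 3.6, 3.3],
]
all_vals = [x for g in groups for x in g]
N, k = len(all_vals), len(groups)
grand_mean = sum(all_vals) / N

# Classic one-way ANOVA: between- and within-group sums of squares.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
f_anova = (ss_between / (k - 1)) / (ss_within / (N - k))

# Linear-model view: with dummy-coded group membership, the OLS fit
# predicts each observation by its group mean, so the residual sum of
# squares equals ss_within and the explained (model) SS is the rest.
ss_total = sum((x - grand_mean) ** 2 for x in all_vals)
ss_model = ss_total - ss_within
f_linear = (ss_model / (k - 1)) / (ss_within / (N - k))

# Same F statistic either way.
assert math.isclose(f_anova, f_linear)
```

Seen this way, ANOVA is not a separate technique to memorize but one regression model among many, which is the pedagogical point the podcast makes.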

However, the lab I was with specializes in building mathematical models of
category learning and has a relatively strong quantitative focus, so I can't
say how this extends to other subfields or other universities. I no longer
have the paper, but I saw a survey of psychology departments a while ago
that, I believe, found the number of methods courses required in graduate
programs was declining and that fewer universities had researchers who
specialize specifically in methodology. The paper made an interesting point:
when you specialize in methodology, you may play an important role in
increasing the quality of everyone else's work. But if your work specifically
targets researchers, you're unlikely to see the same kind of funding or
high-profile journal publications seen by people in more applied areas. We
can't all be like Tversky, but hopefully we'll start seeing some of that
return now, especially after it declined around the time psychophysics
decreased in prominence.

I guess part of the apprehension about increasing the complexity of
statistical methods may come from (1) worry about decreasing the
accessibility of one's work, or (2) the risk that, without a strong
understanding of the math, you push complexity somewhere you aren't equipped
to deal with it. With regard to (1), I know that one of our frequent
collaborators had developed a quantum dynamics model of decision making that
was showing impressive results characterizing the data, but I do not envy
the amount of effort I'm sure he has to put into explaining the math in his
papers. (2) might be addressed through more interdisciplinary collaboration,
but I think you need support for both the development of methodology and its
adoption.

If you don't mind me asking, what area were you working in? And how did you go
about transitioning to becoming a software engineer? I started out programming
doing simulations like Conway's Game of Life and the like in a course that
taught programming for cognitive scientists, and along the way kinda fell in
love with it. When I decided to do an honors thesis, I got the chance to do
way more programming than is expected in an undergraduate psychology degree:
using an OWL ontology in the planning stage to help with the experiment's
implementation, debugging the experiment, doing analyses, and writing a
visualization program to simplify recovering the orientation of
multidimensional scaling solutions. Coming out of my undergraduate degree, a
career writing software as a developer or engineer looks preferable to going
back to school right now, but junior Python dev positions are proving tricky
to find in Indy.

[1] [http://methodologyforpsychology.org/mfp017-mathematical-
psyc...](http://methodologyforpsychology.org/mfp017-mathematical-psychology-
dr-von-oertzen/)

------
hannob
It's good that these issues finally get a bit more attention, however I fear
that we're still only seeing the tip of the iceberg. There are so many fields
of science that don't care about replication at all, work with far too low
numbers of subjects and don't bother about issues like publication bias. I
don't think it's a stretch to say that the majority of stuff that's done at
university is cargo cult science.

And don't delude yourself about your own field: pretty much all of this is
also true for computer science when it comes to quantitative research. How
often have I seen studies like "we tested this with 10 users, whom we divided
into two groups".

~~~
aetherson
That's interesting. What kind of computer science research is done with
control/experimental groups of users? I haven't paid any attention to academic
computer science in, well, decades, but I kind of assumed it was all "we got
this algorithm to do this thing," like math, with not much of a human element.

------
barry-cotter
Reading Susan Fiske's Wikipedia article makes this even more depressing. But
please resist the urge to write off all of psychology, or even all of social
psychology. Psychophysics and psychometrics both replicate very, very well,
and they're not the only areas of psychology to do so. This is true even if
you file biological psychology and neuroscience somewhere other than
psychology. And social psychology makes sociology look really rigorous.

[https://en.m.wikipedia.org/wiki/Susan_Fiske](https://en.m.wikipedia.org/wiki/Susan_Fiske)
A recent quantitative analysis identifies her [Susan Fiske] as the 22nd most
eminent researcher in the modern era of psychology (12th among living
researchers, 2nd among women).

^ Diener, E., Oishi, S., & Park, J. (in press). An incomplete list of eminent
psychologists in the modern era. Archives of Scientific Psychology

~~~
mafribe
Fiske's Wikipedia article reads like PR material. It should be edited to be
more sober and to reflect the extant criticisms of her research record. I
suspect that would lead to an edit war, though.

    
    
> Psychophysics and psychometrics both replicate very, very well

It would be good if researchers in those sub-fields led the drive to improve
psychology's research methodology.

------
stretchwithme
Who is paying for all this bad science? Do they even have a choice in what
they are funding?

We have so many "industries" now whose function is to exploit the taxpayer. We
wouldn't have all this bad science or bad risk taking if there weren't a huge
patsy willing to pay for it all without regard to actual results.

~~~
tormeh
Just as a start-up's main function is to impress VCs, a researcher's main
purpose is to impress their funders. Ideally this would be equivalent to
producing great companies and great research, but it often isn't.

------
botw
It reminds me of a 2012 NLP PhD thesis I read the other day with no contact
information (email, etc.) anywhere. Frustrating. And there's the recent news
about a new genetic-editing technique, claimed by someone previously
unrecognized in the bioinformatics community, that is causing great
controversy because the research could not be replicated and lacks critical
detail about process and data. When asked why he would not disclose who had
in fact replicated it, the author said it would let them down. I just don't
understand the reasoning. Is it all about social psychology in science?

------
guelo
I was just in a work setting this past week where someone brought up ego
depletion as a justification for a decision. The damage that these people do
with their bad science is real and widespread. If "methodological terrorism"
is what it will take to get them to take their damn jobs seriously then I'm
all for it. Stop publishing garbage!

------
maverick_iceman
All of this goes on to confirm my belief that social science is a hokum.

------
edem
After reading the first paragraph I still only have a vague concept what this
wall of text is about. Can somebody summarize it with a TL;DR?

~~~
danso
Psychology must get used to the idea of transparency and to the reform made
possible by how the Internet (as a broad term to generally describe the
current change in communication) has made it easier to scrutinize and
challenge scientific results.

~~~
edem
Thanks!

------
lvs
Why does he keep referring to PNAS as "PPNAS?" It's clearly the former, and I
don't know what the latter stands for.

~~~
JBiserkov
Possibly a joke? The first occurrence in the text(emphasis mine):

>Meanwhile, the _prestigous_ Proceedings of the National Academy of Sciences
(PPNAS) gets into the game

------
draw_down
Brutal. What a fantastic read, thanks.

------
hackuser
> In short, Fiske doesn’t like when people use social media to publish
> negative comments on published research.

This doesn't at all match Fiske's article. As people on HN know very well,
there's a difference between, on one hand, serious, constructive criticism;
and on the other, the trolling, abuse, and just endless barrage of nonsense
that buries any signal in noise, all of which is common online. It's clear
Fiske is talking about the latter; the following is very recognizable:

 _... uncurated, unfiltered trash-talk. In the most extreme examples, online
vigilantes are attacking individuals. Self-appointed data police are
volunteering critiques of such personal ferocity and relentless frequency that
they resemble a denial-of-service attack ..._

It's not discussion, it's abuse, and we know well that abusers have tried to
hide behind "discussion" as a justification. And we are also very familiar
now with the consequences:

 _Only what's crashing are people. The unmoderated attacks create collateral
damage to targets' careers and well-being, with no accountability for the
bullies. Our colleagues at all career stages are leaving the field because of
sheer adversarial viciousness._

\----

Then our author fabricates and attributes ideas to Fiske that she never
mentions or even alludes to:

> She’s implicitly following what I’ve sometimes called the research
> incumbency rule: that, once an article is published in some approved venue,
> it should be taken as truth.

The word 'implicitly' is not a license to fabricate the rest of the sentence.
And he goes on to criticize the whole field, despite saying at the beginning,

> it’s pretty much all about internal goings-on within the field of psychology
> (careers, tenure, smear tactics, people trying to protect their labs,
> public-speaking sponsors, career-stage vulnerability), and I don’t know
> anything about this, as I’m an outsider to psychology and I’ve seen very
> little of this sort of thing ...

 _I don't know anything about this_; enough said. Also, his characterization
of her article makes me wonder if he read it; her article is about trolling
and abuse, and the things he mentions are only second-order effects.

The above behaviors are all familiar, and also telling is another habit we'll
recognize:

> I was inclined to just keep my response short and sweet, but then it seemed
> worth the trouble to give some context.

Maybe we found one of the trolls.

~~~
mcguire
If she's complaining about trolling and abuse, she should probably say so, and
not use terms like "methodological terrorism". That is a) a rather
unprofessional accusation and b) a term that brings to mind scientific
criticism, not abuse.

~~~
hackuser
> If she's complaining about trolling and abuse, she should probably say so

She does, clearly and explicitly, and I even quoted some of it.

~~~
mcguire
She also says, "We agree to abide by scientific standards, ethical norms, and
mutual respect" while throwing around terms like "terrorism", "destructo-
critics", "vigilantes", "data police".

------
wodenokoto
Is Fiske's article related to the replication crisis?

Remember that whole cultural-appropriation thing last year? Or the safe-space
movements on campus? There are plenty of movements that abuse political
correctness to bully anyone around. On the opposite side, there are plenty of
4chan types who love to shamelessly bully anyone who dares come out with a
feminist message.

I imagine academics in related fields are ripe for being targeted by Internet
bullies and trolls.

Isn't it this kind of abuse that Fiske is referring to?

~~~
leephillips
I don't think she's talking about personal abuse. Some people just get upset
when mistakes are pointed out in public. There is a crisis in social
psychology: its status as a science is crumbling. Naturally, feelings are raw;
many careers are precarious.

~~~
caseysoftware
It seems they're "precarious" because massive amounts of research are
"fraudulent" or _at minimum_ plagued with errors.

If my code was in a similar state, I would consider my career "precarious" or
"at risk" too.

~~~
leephillips
Indeed. It seems likely that there are prominent people in these fields, with
decades of research and publishing behind them, almost all of it based on bad
statistical methods. Naturally, there is a certain amount of panic.

------
gkya
The author says:

 _" In short, Fiske doesn’t like when people use social media to publish
negative comments on published research. She’s implicitly following what I’ve
sometimes called the research incumbency rule: that, once an article is
published in some approved venue, it should be taken as truth. "_

after citing an article by Susan Fiske. The quoted sentences are blatant
lies, as is obvious if the cited article is read. She says, in the cited
article:

 _" In contrast, the self-appointed destructive critic's role now includes
public shaming and blaming, often implying dishonesty on the part of the
target and other innuendo based on unchecked assumptions. Targets often seem
to be chosen for scientifically irrelevant reasons: their contrary opinions,
professional prominence or career stage vulnerability. "_

Shame on you Mr. Gelman.

------
threeseed
Not sure who this guy is, but he seems to have completely missed the point of
what Fiske was saying. Sure, the internet has heralded a new paradigm, but
that doesn't mean we have to just accept the negatives. And the internet's
negatives are just as severe as its positives.

Go spend time on any politics forum, for example, and it is downright scary.
Conspiracies run rampant and unchecked, experts are personally vilified and
viciously attacked, and facts come second to feelings and opinions. And all
evidence to date suggests that this is not helping society but actually making
it more polarised and less cohesive.

And exactly the same thing has happened in other fields e.g. climate change.
Legitimate criticism is always useful and welcomed but it needs to be
constructive.

~~~
lifty
I think Fiske met that constructive criticism with negative hyperbole and
vague counter-arguments that were not on point. People protect their vested
interests and the status quo by preying on people's decency and politeness
while ignoring the fundamental facts in the criticism brought against them,
and that leads to a proportional negative reaction from the other side. This
situation reminds me of the Eastern European political systems, where failure
to recognise and deal with basic problems like fairness, corruption,
nepotism, etc. leads to a progressive degradation of trust, which in turn
leads to more violent/drastic changes, with total disregard for decency. So
if you want to be treated decently, then you'd better act decently as well,
and not only superficially.

~~~
mafribe

> reminds me of the Eastern European

That's interesting, but maybe not so surprising, because the techniques that
people like Fiske use stem (at least in parts) from the communist tradition of
undermining and taking over institutions (e.g. Trotsky's entryism).

~~~
venomsnake
(e.g. Trotsky's entryism) - known in modern times as diversity advocates. Let
me into your group, then change your group's culture to suit me instead of my
blending in.

~~~
_delirium
Trotsky's entryism was pretty specific, I don't think particularly related to
your beefs. His argument was that forming small far-left workers' parties
didn't make sense if there already existed large mass workers' parties, and
instead communists should work within those, even if they were more moderate
than you'd like, rather than forming new parties, which would be ideologically
pure but not contain the actual masses of the working class who would have to
form the basis of any workers' government. In most Western European countries,
the majority of the working class supported various kinds of Labour and Social
Democratic parties, so that's what he advocated joining.

------
ptero
TL;DR: uncurated social media is used to criticize peer-reviewed scientific
publications in psychology. This hurts scientists (peer reputation /
reputation in society?).

IMO this is not specific to psychology: climate change, biology, computer
science, physics, etc. research is exposed to the same phenomena. The hotter
the topic, the bigger the flames get.

However, if publication is clear about methodology, data collection and
conclusions those flames do not hurt -- the published work stands on its own
and anyone can go back to it as a sanity check on whether accusations have
merit. If I disagree on methodology I will not think worse about the author.
However, if I suspect he manipulated the data I will; and real quick.

Being clear on what was done is, IMO, a practically foolproof way of
protecting your reputation. Researchers unnecessarily making publications
readable to only a tiny minority of field experts (sometimes hoping to
minimize criticism -- "I just need this published; and 3 more for a postdoc")
or being vague about how they collected data shoot themselves in the foot. And
this, IMO, is a major source of self inflicted wounds that are most frequent
in the fuzzier fields like psychology.

~~~
Noseshine
This "TL;DR" is _very_ misleading, even wrong! Source: I actually read the
whole article and much of the discussion.

Instead of looking for a TL;DR, _this time_ I suggest actually reading it. I
would say it's well worth it.

~~~
lobo_tuerto
Well, this comment isn't really helpful either.

I'm intrigued. I haven't read the whole article, but if I had and said
something like you did, I would explain myself.

If it's very misleading, even wrong, why don't you lay out your reasons?

~~~
Noseshine
> Well, this comment isn't really helpful either.

I would claim it is: my only reason for making it was to prevent people from
going away with that TL;DR. I don't want to write one myself, because why not
just _click on the link_ and _read the article_? The comment is not a summary
of that article _at all_, that is all.

This is a complex subject and details matter very much. I don't think it is
healthy to have people get only a caricature of the whole thing. I also
recommend reading at least some of the many comments, and even to follow some
of the links provided there. For example, I found this one in the comments,
and I've only just reached the comment section:
[http://slatestarcodex.com/2014/04/28/the-control-group-is-
ou...](http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-
control/)

~~~
raldi
_> TL;DR. I don't want to make one myself, because why not just click on the
link and read the article?_

There's an essentially infinite amount of information out there on the
internet, and people have a finite amount of free time; thus, many of us try
to glean the thesis of an article before committing an hour to reading it --
particularly slow readers and non-native English speakers.

Especially in articles like this one, with a vague headline, no pull quotes,
section titles that are too symbolic to actually describe the sections they
head, and a break from the convention of using the opening paragraphs to
provide a summary of what's to come, a TLDR comment is greatly appreciated to
give a potential reader at least some clue what it's about.

TLDR: You oppose TLDR comments because you feel they keep people from reading
the full article, but in cases like this, the _lack_ of one is doing so.

~~~
skj
No, the post you replied to opposes incorrect TLDRs. Not TLDRs in general.

~~~
raldi
Reread the part I quoted.

~~~
skj
The poster was suggesting an argument and then refuting it.

