
Academic success is either a crapshoot or a scam - mpweiher
https://grasshoppermouse.github.io/2017/12/05/academic-success-is-either-a-crapshoot-or-a-scam/
======
mjfl
I would say that social science is the exception rather than the rule when it
comes to reliance on p-values for results; many, or most, scientific fields
don't rely on them. For example, DNA nanotechnology gets published in Nature a
lot by creating concrete proof-of-concepts where the "result" is a picture
from an atomic force microscope or something like that. Other examples abound:
crystallography, microscopy. So saying that academic success is a crapshoot or
a scam in general because of p-hacking is not correct. Within social science
it is probably correct.

~~~
TeMPOraL
Judging by the replication crisis discussions over the past two years, I'd add
quite a bit of medicine to that list too. And I don't mean just psychology,
which is social science with a medical degree - but also drug research.

All of this, though, shows how _hard_ those disciplines are. Relative to that,
hard sciences are easy, because they study precisely defined things pretty
closely entangled with laws of physics, and you can't easily bullshit mother
nature. But as you move away from physics, complexity rapidly increases, the
questions themselves become fuzzier, and the answers less clear.

So IMO, the probability that academic success is a crapshoot or a scam
increases with the (Kolmogorov) complexity of the subject domain.

~~~
Retric
It's not quite that medicine is more complex; it's that medicine is practical
and accepts not understanding the details. If drug X works, then that's
enough: we will use it even if we don't understand exactly why it works.

The downside is that we end up building simplified models: Y is correlated
with bad things, so we should treat Y as a bad thing. Often this actually
works really well, but it also opens the floodgates for all sorts of issues.

That's not to say a less practical approach that simply looked for the details
would be more effective. But, it would look very different.

~~~
denzil_correa
> It's not quite that medicine is more complex, it's that Medicine is
> practical and accepts not understanding the details. If drug X works then
> that's enough we will use it even if we don't understand exactly why it
> works.

I'm always baffled by this. I can understand why we should move these drugs
into practice. But if the way you conduct research stops at "X works", you
should have a good look at the way you conduct research. I can completely
understand short-term gains, but ignoring long-term gains is very unscientific
and immature.

~~~
zdragnar
I'm afflicted by three chronic conditions which have no known causes. All of
them are treatable; there are guesses as to why the treatments work, but there
isn't really a good way to establish a sound reason why they work.

You can't just take a person apart to study them while they're alive, and you
can't easily reconfigure them to account for variables either.

------
lewis500
All of my research so far has been either theory/simulation or descriptive
(e.g., this is how much this project cost). After going to a lecture on
someone's empirical research two weeks ago, I found myself asking why I have
always been kind of turned off by statistical projects. I came up with two
answers:

One: at this stage in my career I need to publish a lot. There is kind of a
speed limit on how fast you can publish empirical work, but with theory the
upper bound on your publication rate is how hard you work and how creative you
are.

Two: I feel more honest about my theoretical work. My models and simulations
are so obviously contrived that there is no danger of anyone mistaking the map
for the territory. By contrast, if you come up with some effect of size X in
an empirical paper it might be reported in the media as some kind of golden
truth like the charge of an electron.

I think if I spent like 12 or 15 months on an empirical project, I also would
be so desperate to get a result out of it as to fudge the result. Even among
intuitive relationships, the chance you can honestly and provably identify an
interesting relationship is really low. For this reason, I read theory and
description from all kinds of researchers, but I tend to only be interested in
the empirical work from the best-of-the-best. In my field, I think some of the
best work is from Rebecca Diamond at Stanford, and one of her most recent
papers had a negative result.

I also really discount job market papers: these people spent years on a single
project and their whole future is riding on finding something cool, so the
incentive to fudge is just extraordinary. It's kind of like trusting a judge
to try a case in which his son is the defendant. No matter what he told me
about how fair he'd be, I wouldn't believe him.

~~~
paultopia
This rings really true to me. I'm switching to more empirical-ish work now,
after tenure when I don't have a knife at my throat.

------
Maro
One possible, very sad long-term effect could be that the sciences (results,
theories) will be deemed less relevant, less trustworthy, less worthy of
attention. E.g., I started a PhD in physics, but I never finished it; I got
disillusioned with academia and some specifics of today's physics research
(like string theory and inflation). 5 years later, people are still beating on
those drums, and whenever I read about some crazy new model, I think two
thoughts: (i) nonsense, (ii) I'm so happy I'm not spending my time on this.

~~~
fiatjaf
It would be a very good outcome if society stopped believing anything that
comes out of the mouth of a "scientist".

It's just sophistry all the way down.

~~~
yesenadam
So whose mouths should we trust and why? (Apart from yours)

~~~
fiatjaf
None.

------
knolan
> In my field of anthropology, the minimum acceptable number of pubs per year
> for a researcher with aspirations for tenure and promotion is about three.
> This means that, each year, I must discover three important new things about
> the world.

This is typically not the case. Academics will have a number of students
working for them and many academics push them incredibly hard while doing very
little of the actual research themselves. Instead they spend their time
chasing funding. Projects are often built on prior work so many new papers
explore new areas of a previous discovery or measure said discovery in a new
way. So you’re not discovering three new things every year. Some academics
will even be bold enough to publish the same data in multiple journals with
minor changes in analysis.

With academic research it’s often a case of two steps forward one step back.
You spend years working towards deep expertise in a field through postgraduate
studies and then a postdoc or two. Then when you become academic staff you
suddenly become an inexperienced manager and teacher. So you pass the work to
inexperienced students who spend years relearning everything and often
suffering from over-aggressive supervision or complete abandonment. The
funding bodies, as is their right, are increasingly looking for commercial
outcomes from research, so the scope moves from Research to r&D with a little
‘r’. So we see research on battery technology making ludicrous claims, or
graphene in all the things, rather than real progress.

This stunts the potential of scientific discovery in favour of pushing revenue
generating students through universities and chasing funding for short term
gain.

~~~
bachmeier
> This is typically not the case.

What you're describing is research in fields with lots of grant funding. In
some fields, you don't spend your time writing grants, and you don't hire grad
students to do your research for you. In those fields, teaching is important
because you do a lot of it. In grant-heavy fields, it wouldn't matter if the
university shut down their undergraduate programs, because the emphasis would
still be on bringing in grant dollars.

~~~
kurthr
It may be true that some universities/colleges are interested in research and
teaching more than in the grant money brought into the department, but there's
definitely a bias there. Even more important, if you don't have several
graduate students working for you pre-tenure (supplementing your research &
teaching efforts), it's going to be hard to get tenure there either. How many
classes/labs can you teach without an RA/TA? Remember the big intro courses
that new untenured profs get, and that teaching grads is a big part of the
job! You basically have to prove you can't handle grad students in some
horribly catastrophic way not to get them from the dean/provost.

Perhaps in a field like philosophy or math at a very small school it's
possible with amazing individual results, but it's uncommon in my experience,
which includes watching numerous friends at many different schools (>10) and
fields (bio, chem, phys, geo, cs, math, philosophy), both public and private.

Maybe you mean outside the US? I'd believe that. Europe hasn't gone as far
down the path of non-tenured teaching faculty as the US, but due to money they
seem to be starting to.

~~~
VLM
"if you don't have several graduate students working for you"

And that feeds the birth/death demographics problem: it's hard to get a
tenure position because the previous generation had to produce 10, 20, 30
qualified grads over the course of holding one tenure position, so 10, 20, 30
minus one of those people are going to get training and education for a
position they will never be allowed to fill. I'm sure they'll find something
to do with their lives, although to get position XYZ that they never trained
for, they'll be competing with people who actually trained and studied for XYZ
work, and in the end it just means a different name is on the unemployment
lists.

Academia is a weird pyramid scheme of human suffering.

------
sytelus
This article is about "empirical research" which means articles with titles
like "Study suggests XYZ".

I avoid these types of "research" articles like the plague. There probably
exists a "study" that proves and disproves virtually anything. For example,
one study suggests coffee prevents cancer, and another the opposite. Such
pseudo (and often badly done) statistical survey-type work shouldn't even be
called "research", and the people doing it shouldn't be called "scientists"
(the word "surveyors" seems more appropriate). There is now a cottage industry
of pushing out absurd correlations that immediately get picked up by the
media because, you know, so-called "scientists" did that.

~~~
erikpukinskis
I would be interested in your point if you were revealing something concrete
about the OP and their research, but you’re just making handwavy claims about
broad correlations (“empirical research” = “study suggests xyz”).

The fact that you’re making overly broad claims while accusing people of
making overly broad claims doesn’t instill confidence that you know what
you’re saying. If you’re such a champion of science, why not state your claims
in minimally concrete terms? And ideally make falsifiable claims.

~~~
sytelus
The OP's point is that it's hard to get funds to conduct 9 studies per year in
the hope that 3 will result in papers, as expected on the tenure track. I have
neither the intention to counter that argument nor insight into the specific
field the author is working in (which seems to be anthropology). My argument
is exactly as broad as you have said: a lot of research - at least as captured
by the media - involves simply doing some statistical survey and getting a
comfortable p-value to suggest some weird correlation. There is a lot of money
spent on this type of "research" because such studies are expensive to
conduct. My objection is purely to calling this "science" and calling the
people doing it "scientists", because these two fairly reputable words are
being used to lure the general public into believing such "studies". I feel
such works should be called "surveys" and the people doing them "surveyors".
The real scientific work should involve taking the survey only as a hint and
then uncovering the actual mechanism, falsifiably proving or disproving the
correlation, hopefully with more predictions. Simple surveys don't have these
properties, and hence such studies shouldn't be labeled the work of
scientists.
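For what it's worth, the article's "lucky" framing reduces to a bit of
binomial arithmetic. The numbers below (9 studies a year, a 10% chance that an
honest study yields a publishable positive result) are illustrative
assumptions, not figures from the article:

```python
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """Probability of k or more successes in n independent trials,
    each succeeding with probability p (binomial upper tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Assumed, illustrative numbers: 9 studies/year, 10% honest hit rate per study.
# Chance an honest researcher lands 3 publishable results in a year:
print(f"{prob_at_least(3, 9, 0.10):.3f}")  # 0.053
```

On those assumptions, honestly hitting a three-pubs-a-year bar is roughly a
1-in-20 shot - the "lucky or p-hacking" dichotomy in miniature.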

------
tbarbugli
Years ago, after spending uncountable beers listening to the frustrations of
several PhD friends, I had the idea of a journal for research failures/dead-
ends.

The idea was to look at unfruitful research as something useful for the
community and to give some credit for the time and resources spent doing
something that did not work (and should not be repeated by others). I still
wonder if that would create the wrong incentive, though, and if it could be
feasible at all (who likes to talk about money and time wasted?)

EDIT: fixed wrong sentence

~~~
chrisseaton
> I had the idea of a journal for research failures/dead-ends

A bit like the Journal of Negative Results in Biomedicine, the Journal of
Negative Results — Ecology and Evolutionary Biology and the psychology Journal
of Articles in Support of the Null Hypothesis?

> several phd undergraduate friends

People doing PhDs who are simultaneously undergraduates?

~~~
haZard_OS
Don't forget plant science!

[http://www.journals.elsevier.com/new-negatives-in-plant-science](http://www.journals.elsevier.com/new-negatives-in-plant-science)

------
jondubois
It's the same with everything else. For example, with complex software
projects, the ones that make the most incredible claims about their
capabilities are much more likely to get hyped up and get blogged/tweeted
about... And it's often at the expense of real quality.

~~~
gaius
Methodology merchants like “Uncle Bob” have built careers and even companies
on that strategy.

~~~
yesenadam
What 'incredible claims about his capabilities' did he make? (I came in late.)

~~~
mcguire
Wasn't he involved in Chrysler's C3 project, the major success of which was
the agile programming industry?

~~~
gaius
I always get Bob Martin and Martin Fowler mixed up

------
averagewall
A bit off the topic but I always wondered why social scientists need funding
or even a university for their experiments. I mean the kind where they put 20
people in a room and show them some pictures and ask them some questions. What
resources do you need?

Time - do it as a hobby so it's limited to people who are passionate about
discovering something, not just trying to rack up points.

Access to human subjects - get people off the street. Universities can make
access to student subjects easier but they're not representative of the
world's people anyway so it's not that valuable.

Money to pay the human subjects - save up? People who're passionate about
cars, for example, somehow find the money to build modified cars and don't
need funding.

Ability - With the oversupply of researchers, maybe just leave it to those who
can do it under their own steam? Hopefully undergrad degrees taught them
enough to get started. This article says that university researchers aren't
doing it with skill anyway but with luck or cheating.

Motivation - if nobody can be bothered investigating some effect, then maybe
it's not really that interesting anyway and there's not much value in studying
it?

What am I missing?

~~~
jgmjgm
Depends what you study. If you're an economist, you need access to data like
Compustat or similar. If you're a political scientist in the US, much of the
data is available, but you also need to spend a while cleaning it (although
there are great resources now too).

You're really talking about small scale interviews or qualitative work. That's
an aspect of social science but it's not all of it.

------
Hendrikto
> empirical social scientists with lots of pubs in prestigious journals are
> either very lucky, or they are p-hacking.

That’s the money quote. This is mainly applicable to “empirical social
science”, which many already know as a big scam.

~~~
jgmjgm
Absolutely. Been there, done that. I worked for a guy who is top in his social
science field. Total p-hacking was the norm. It was an eye-opening experience
for someone new to the field expecting things to be done more honestly
following something like the scientific method.

Once I found a major bug in some code that was the basis of a significant
chunk of his major book and an important article. (Basically an optimization
process that didn't work but gave some nice results he built a story around.)
When I wrote him a report on what was wrong and why, he didn't respond. I was
supposed to write a paper based on the same code and all he would say was
along the lines of "it should work". Over time I realized that the implication
was that I should "make" it work. Of course, he never said those words but
over time the message was clear. After beating my head against a wall for a
long time, I eventually walked away from the project. Technically bad for my
career, but I honestly couldn't see the point in being at a top university
essentially selling snake oil.

That being said, there are fields of social science that are much better:
demography, political science, economics and even psychology (which is very
difficult at the best of times!) are trying to clean up their act. Basically,
any field of study with consistent data available openly is probably pretty
decent.

There are other fields like business/management where the pay is generally
good (i.e. motivation for a nice job with good perks is there), many of these
journals don't require data to be released, and the samples being studied tend
to be convenience samples or impossible to reproduce. I generally don't trust
the work in this area.

Sadly, policy makers are often willing to take up questionable academic
studies that support their preconceived notions. When these things don't work
out, the general public becomes jaded that their taxes were wasted... and it
makes all social science research look bad.

------
whack
One of the things I find surprising about the way we do science today is the
idea that "data is king" and that it "speaks for itself". In reality, data
devoid of context is absolutely meaningless. If the goal of scientific
researchers and journals is to promote good science, it's absolutely shocking
that practices such as pre-registration of hypotheses and methodologies are
still completely optional.

[https://wp.me/p3SyBR-8t](https://wp.me/p3SyBR-8t)

~~~
make3
I'd be surprised if anyone is really arguing that the context of data is not
important.

~~~
whack
And yet, practices like pre-registration of hypotheses are still not
widespread. If they were, it would eliminate so much of the cherry-picking of
data that occurs.

[https://xkcd.com/882/](https://xkcd.com/882/)
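The arithmetic behind that strip is easy to reproduce. A sketch, assuming
independent tests at the conventional alpha = 0.05:

```python
def familywise_error_rate(n_tests: int, alpha: float = 0.05) -> float:
    """Chance that at least one of n independent tests on pure noise
    comes back 'significant' at level alpha."""
    return 1 - (1 - alpha) ** n_tests

# One pre-registered hypothesis: the false-positive rate stays at alpha.
print(f"{familywise_error_rate(1):.3f}")   # 0.050
# Twenty exploratory subgroup tests (the jelly-bean scenario):
print(f"{familywise_error_rate(20):.3f}")  # 0.642
```

Test twenty subgroups of noise and you have better-than-even odds of a
headline-ready "finding"; pre-registering a single hypothesis keeps the rate
at 5%.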

------
fiatjaf
If everybody is required to publish tons of stuff, you're just creating tons
of stupid stuff no one will read, and that crap is going to bury the really
important stuff.

------
goldenkey
One has to wonder whether any of the qualitative social sciences really even
have a right to sound off about concepts that have no provable metric... no
way to be tested, and that make no difference in the world whether they are
right or wrong. Is it time for the social sciences to throw in the towel and
reform their teaching and relevant arguments onto a more
mathematical/axiomatic footing? I for one think so.

~~~
Noos
Isn't the problem trying to force social science into a
mathematical/axiomatical model in the first place, when it really is
historical and descriptive?

~~~
indubitable
I think the issue is more that the social sciences themselves are attempting
to operate as the natural sciences do with predictive, falsifiable, and
independently actionable conclusions.

History is invaluable, and the lessons learned can provide some degree of
predictive power. Yet it would be unreasonable to argue that something ought
to be enacted, as a matter of fact (or at least of heavily supported
hypothesis), because of results derived from the study of history.

The social sciences in general seem to want to be allowed to behave as a
"historical and descriptive" field, yet be received with the same credulity
as, for instance, experimental physics. Ultimately, I think the field needs to
choose to go one way or the other.

~~~
ghostcluster
The problem is activist social "scientists" making broad claims with no
evidence, and the march to enact sweeping societal changes on the basis of
their "results".

See: Stereotype Threat, Implicit Association Test and Implicit Bias, Power
Posing, et al.

------
Blazespinnaker
I thought quality of journal and number of citations (from quality journals)
was more important than pubs per year.

------
fiatjaf
If you're going to reward only high-quality studies, you'll lose the larger
part of scientists, because very few of them can produce anything
high-quality.

What you're proposing is mass unemployment.

~~~
stordoff
If the studies they are producing aren't high-quality, is there any value to
them? Do we, as a whole (because some individuals will lose out, as you say),
benefit by rewarding low standards of work?

I'd also say that many may be _capable_ of high-quality work, but merely have
limited incentives to produce it currently.

------
yipopov
I was once asked to edit a sociology paper for grammar, spelling and style,
but ended up deleting large swaths of it containing claims made up out of thin
air and found nowhere in the data or the citations. Of course this was not
well received, and I expect every one of those passages was promptly restored,
complete with its horrendous spelling and grammar.

The claim? That the biggest challenge facing remote poverty- and alcoholism-
stricken communities of indigenous peoples was not having enough tablets to be
able to "Netflix and chill". Because surely that will solve all the world's
problems.

~~~
tome
It would be good if you could substantiate your comment by linking us to the
published version.

~~~
yipopov
If you don't mind I'd rather not have my name linked with fraudulent academic
articles, regardless of who exactly the guilty party is.

I could have just said "What the linked article says is correct", but I
figured this anecdote would add more value. But yeah, whether you want to
believe it or not, the linked article is pretty accurate. Successful academics
either get lucky or they are frauds.

