
Depression-Gene Studies Were Built on Shaky Foundations - wyclif
https://www.theatlantic.com/science/archive/2019/05/waste-1000-studies/589684/
======
AstralStorm
Statistics. Garbage in, garbage out.

This shows why you have to explicitly quantify statistical power and error
model for your study design. Tools are good, but not good enough to do it for
you.

Then, calculate an effect size instead of a binary answer. With a properly
sized, unbiased sample you will get an answer with a digit in the fifth
decimal place. Note that for a proper genome study, the sample size for
observational studies is in the tens of thousands, barring inbred model
studies. (Mice or men, with an easily detectable disease process.)
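To make the sample-size point concrete, here is a rough, stdlib-only sketch of
the usual two-proportion power calculation. The allele frequencies (0.32 vs
0.30) and the genome-wide significance threshold (5e-8) are hypothetical
illustration values, not taken from the article:

```python
import math

def z(p):
    """Inverse standard normal CDF via bisection on math.erf (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def n_per_group(p1, p2, alpha=5e-8, power=0.8):
    """Cases/controls needed per group for a two-sided two-proportion z-test."""
    za = z(1 - alpha / 2)          # threshold for genome-wide significance
    zb = z(power)                  # threshold for the desired power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((za + zb) ** 2 * var / (p1 - p2) ** 2)

# Hypothetical small effect: risk-allele frequency 0.32 in cases vs 0.30
# in controls. The required n lands in the tens of thousands per group.
print(n_per_group(0.32, 0.30))
```

This is the standard normal-approximation formula; a real study design would
also account for genotype model, relatedness, and covariates.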

~~~
wallace_f
It's amazing how convincing statistics are--even for supposed experts--and not
just for the reason you elaborate on.

In 2019, they really are the best way to lie.

~~~
lm28469
Science became a new religion. Instead of saying "shut up, god made it that
way" you say "shut up, science says it's that way".

You can find hundreds of studies to support your arguments, no matter what
they are. It's even easier if you don't actually read the studies and stop at
the summaries made by the popular media.

~~~
Angostura
> You can find hundreds of studies to support your arguments, no matter what
> they are.

I think that's simply incorrect, assuming 'studies' != Youtube videos. I know
because I've been involved in internet arguments where I've gone looking for
studies and found my own argument was incorrect/poorly supported.

~~~
jerf
I'd say it depends on the topic, and it depends on the standard you hold the
papers in question to. For instance, I can find you a paper showing almost
anything you like in the field of nutrition in terms of what people should
eat. It'll even hold up in the abstracts. Whether it holds up beyond that can
get complicated, though; I've lost count of things like "a study showing that
X intervention is good in 20 rats over 5 days if you feed the X rats high-
quality X but give the not-X rats low-quality not-X", or n=8 human studies, or
n=20 self-reporting studies, or all kinds of stuff like that.

~~~
fyrabanks
Can you find me a paper that says eating a Sausage McMuffin every morning
won't shorten my lifespan?

~~~
jerf
In those exact words no, but if you break down the nutritional content of the
Sausage McMuffin, yes, probably. A lot of studies are done on rats where even
the rats eating "quality" food in the study are still eating things much lower
in quality than a fast food muffin. That doesn't even sound particularly
challenging.

To be honest, if you're trying to make my point sound absurd by exaggeration,
you've shot way too low. The mainstream nutritional view would be that a
Sausage McMuffin every morning on its own isn't going to be a particularly bad
thing. You would need to specify a lot more of the overall diet for it to be a
problem. You should have asked something like whether I could find a study
about eating nothing but lard.

To which my reply would be that the principle of charity would make it clear
that in general I meant any semi-realistic nutritional view is available, not
that there are studies that prove humans can be healthy on a diet of rocks and
asbestos. It really isn't to anyone's advantage if I also have to append to my
little post a complete discussion of what is and is not within the boundaries
of nutritional theories that have been studied.

I mean... is that... _really_... what you want...?

~~~
antidesitter
> what is and is not within the boundaries of nutritional theories that have
> been studied

I don’t think this is clear to the average person at all.

------
thanatropism
I think pop-sci rags going wild over papers that claim radical breaks is part
of the problem, even if a small one.

There's simply no reason that people who are not technical enough to read
scientific papers would be up to date on the latest cancer research. People
have a right to knowledge? Then open up the journals.

~~~
mattkrause
It’s on a slightly different level.

There are strong incentives to find ‘sexy’ results, but this is mostly aimed
at getting a paper into Nature, Science, or Cell. Publishing in these journals
can have an outsized effect on one’s career, even if the results don’t
actually hold up.

In contrast, there’s not a huge payoff for getting something into Popular
Science or the New York Times science section. Publicity is good and can help
show the relevance of your research. It’s also fun to show your mom (who still
wants you to go to med school), but people tend not to chase it nearly as hard
as a “glam” paper.

~~~
thanatropism
That's probably a bigger effect.

People have huge trouble accepting that their wonderful science is a social
process.

~~~
rossdavidh
Yes, and also that it's a very economic one. If you need to churn out (and get
accepted) research in quantity, then "hold on there, do we know that for sure?
let's try to replicate that, with a much bigger sample size..." is downright
unwelcome.

Scientists are mostly good people, but they're not angels, and if we put them
in a system that rewards the wrong thing, we will get the wrong thing.

~~~
AstralStorm
Unfortunately neither economics nor sociology are good enough science branches
to even fix themselves. ;)

------
hmd_imputer
Imagine how many times a valid counterargument was silenced because of the
"are you denying science?" bull* argument.

------
DuskStar
I love the quote from the Slate Star Codex article:

> ...what bothers me isn’t just that people said 5-HTTLPR mattered and it
> didn’t. It’s that we built whole imaginary edifices, whole castles in the
> air on top of this idea of 5-HTTLPR mattering. We “figured out” how 5-HTTLPR
> exerted its effects, what parts of the brain it was active in, what sorts of
> things it interacted with, how its effects were enhanced or suppressed by
> the effects of other imaginary depression genes. This isn’t just an explorer
> coming back from the Orient and claiming there are unicorns there. It’s the
> explorer describing the life cycle of unicorns, what unicorns eat, all the
> different subspecies of unicorn, which cuts of unicorn meat are tastiest,
> and a blow-by-blow account of a wrestling match between unicorns and
> Bigfoot.

[https://slatestarcodex.com/2019/05/07/5-httlpr-a-pointed-rev...](https://slatestarcodex.com/2019/05/07/5-httlpr-a-pointed-review/)

------
scribu
The Slatestar Codex blog post linked in the article is a much more
entertaining read, I think:

[https://slatestarcodex.com/2019/05/07/5-httlpr-a-pointed-rev...](https://slatestarcodex.com/2019/05/07/5-httlpr-a-pointed-review/)

~~~
matt4077
Scott is always a good read, and rarely not entertaining. The Atlantic’s piece
is a bit less snarky, but has other redeeming factors.

It is quite obvious when comparing the two that a journalist’s first instinct
is always to call a bunch of experts and incorporate their views. It’s also
laudable that the author rejects the easy cynicism of accusing these
scientists of individual intentional deception, and instead redirects our
scorn onto the publish-or-perish dynamic.

------
searine
As someone who works on depression GWAS, this isn't the end of finding causal
genes for depression, it's the beginning.

In the next few years we're going to see a waterfall of new rare variants
linked to disease, all of which have a much higher chance of causing
functional change.

As WGS comes online for association studies it will both validate and more
deeply explore the genetic nature of every disease. It is going to be
mindblowing.

~~~
eggie
> As WGS comes online for association studies it will both validate and more
> deeply explore the genetic nature of every disease. It is going to be
> mindblowing.

But there is another layer of confusion, because WGS implies resequencing, and
resequencing is only as good as your reference genome; it will distort results
when the genome you're inferring is too far from the reference (reference
bias).

The real mind bend will come when we have thousands or millions of whole
genome de novo assemblies and we compare these to each other to do our GWAS.
Only then are we going to have a hope of knowing what is actually causal in a
genomic sense. Until then we remain in the land of association.

In nature, most adaptive variation appears to be large and structural. All the
recent studies that have used whole genome assemblies to look at this have
found the same thing. I would be surprised if this isn't the case for humans
too. If it is, then much of the current perspective on GWAS (both chip-based
and WGS) will need to be rewritten.

~~~
asdff
GWAS hits are nothing more than correlations and by themselves aren't very
remarkable. What is remarkable is when someone does a functional analysis on
the variant they found in their GWAS and is able to describe the mechanism by
which it affects the disease or phenotype. The way you do that isn't more
sequencing, but model systems that you can easily manipulate in the lab.

GWAS also doesn't compare to an arbitrary reference sequence. Some good
numbers for a GWAS are >1000 cases and >2000 controls from the same
population. You have to match populations, or else all your association study
is going to find is the differences between East Asians and Europeans, for
example. You need a lot of samples to get enough statistical power to even see
these rare variants.
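The population-matching point can be shown with a tiny simulation (all
frequencies and sample sizes here are made up for illustration): a variant
with no effect on disease in either population still looks "associated" when
cases and controls are drawn from different ancestries:

```python
import random

random.seed(42)

def draw_genotypes(n, allele_freq):
    """Allele counts (0/1/2) per person under Hardy-Weinberg equilibrium."""
    return [sum(random.random() < allele_freq for _ in range(2))
            for _ in range(n)]

# Hypothetical neutral variant: different frequency in two populations,
# but no causal effect on disease in either one.
freq_pop_a, freq_pop_b = 0.40, 0.10

# Badly matched study: cases drawn mostly from population A,
# controls drawn mostly from population B.
cases    = draw_genotypes(800, freq_pop_a) + draw_genotypes(200, freq_pop_b)
controls = draw_genotypes(400, freq_pop_a) + draw_genotypes(1600, freq_pop_b)

case_freq = sum(cases) / (2 * len(cases))
ctrl_freq = sum(controls) / (2 * len(controls))
print(f"case allele freq {case_freq:.2f} vs control {ctrl_freq:.2f}")
# The frequency gap reflects ancestry, not disease: a spurious association.
```

Real GWAS pipelines correct for this with ancestry principal components or
mixed models, but the underlying confound is exactly this simple.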

------
lisper
> Between them, these 18 genes have been the subject of more than 1,000
> research papers, on depression alone. And for what? If the new study is
> right, these genes have nothing to do with depression.

The article doesn't actually say anything about what these 1000 research
papers claimed to show. It's entirely possible that most of them were negative
results; there's no way to tell from the text of the article. It strongly
_implies_ that this is the case, but it doesn't actually say so. This is a
significant omission for an article whose thesis is that these 1000 papers
constituted a "house of cards."

[UPDATE]

"Sometimes the gene was linked to depression; sometimes it wasn’t. And
crucially, the better the methods, the less likely he was to see such a link."

Seems to me like this is science working exactly as it is supposed to (except
for the possible suppression of earlier negative results, but that's a known
broad problem that has nothing to do with this particular gene study).

~~~
rossdavidh
It worked as it was supposed to...eventually. Instead of double-checking the
initial results, hundreds of research papers by many research teams looked
into questions that would be very important, if the genes in question were
connected to depression.

Sure, it is great that it's not (quite) religious dogma, and it eventually
gets revisited. But, a couple decades worth of work and ~1000 research papers,
many of which took place AFTER the earlier 2005 paper that called into
question the link to depression, is by no means what we should shoot for.

More fundamentally, there was nothing in the system that motivated anyone to
replicate the original link. The problem is not that the original link turned
out to be spurious. More worrisomely, nobody appears to have done anything
unethical; yet because of the systemic incentives around research grants,
tenure, etc., replication of the foundational result wasn't seriously
attempted until many times that much work had been done on something spurious.

------
raverbashing
It's just naive to pin something as complex as depression on a single variable
(but that's Popperian scientism for you).

Now it might be that the first study identified an effect in a (very specific)
subpopulation, and in the bigger study that subpopulation is not present or
not identifiable. But who knows.

Now of course it's not wrong to study a certain gene, but to go "all in" on a
very narrow study path is stupid. Now, if in the course of the depression
studies they had found this gene had everything to do with a different
disease, that would be "p-hacking" or something, right? RIGHT?

More statistical power and "rigour" won't lead research anywhere, because it's
not that the studies were necessarily flawed; it's that the effects might be
small, or dependent on a chain of other factors, so unless the effect of a
single factor is predominant it might not even matter (as an isolated factor).

~~~
fromthestart
>It's just naive to pin something as complex as depression on a single
variable

Why? There are other complex diseases with purely genetic causes, e.g.
Huntington's Disease; depression isn't necessarily different. Hindsight is
20/20.

~~~
thanatropism
Yes, but Huntington's is an explicit cluster of symptoms at diagnosis + a
specific prognosis. Uppercase D "Depression" is a diagnosis built around a
single symptom, lowercase d "depression". But many people experience
depression, both in "core mental illnesses" like bipolar and outside. (Note
how bipolar is much better constructed -- indeed the method of differential
diagnosis was invented by Kraepelin to differentiate bipolar from
schizophrenia)

If it was up to me, big D Depression would be redefined as "responds to
Prozac" or something.

------
pfdietz
What's needed is for the Ig Nobel Prize people to focus on the people who
start this sort of crap, not on humorous low-value targets. Really call out
the people who contributed negatively to the progress of science.

------
mindgam3
“Nor... should his work be taken to mean that genes don’t affect depression.
They do, and with newer, bigger studies, researchers are finally working out
which ones do.”

This part really needs some context/sources. Anyone know which studies he’s
talking about?

~~~
bobcostas55
Here's one:
[https://www.nature.com/articles/s41593-018-0326-7](https://www.nature.com/articles/s41593-018-0326-7)

~~~
mindgam3
Thanks

------
unityByFreedom
Odd that this article cites Scott Alexander as if that is the name of a real
psychiatrist,

> “What bothers me isn’t just that people said [the gene] mattered and it
> didn’t,” wrote the psychiatrist Scott Alexander in a widely shared blog
> post.

Scott Alexander is the _pen name_ of a person running a blog who _claims_ to
be a psychiatrist, and AFAIK, his authenticity is not proven due to his
anonymity.

~~~
Lazare
His identity is hardly a deep secret; anyone who _really_ cares can find it
out, and verify his credentials.

(Not that those credentials are really relevant here, no?)

~~~
unityByFreedom
The blog actively misleads readers into thinking Scott Alexander is his real
name,

> SSC is the project of Scott Alexander, a psychiatrist on the US West Coast.
> You can email him at scott[at]slatestarcodex[dot]com. Note that emailing
> bloggers who say they are psychiatrists is a bad way to deal with your
> psychiatric emergencies, and you might wish to consider talking to your
> doctor or going to a hospital instead. [1]

Journalists citing that blog should note it is an alias, and deeply question
whether or not he really is a psychiatrist.

And no, I don't find it easy to verify his identity, and yes, it is completely
relevant here as he's being cited as a real psychiatrist. That's not
verifiable without knowing a real name.

[1] [https://slatestarcodex.com/about/](https://slatestarcodex.com/about/)

~~~
Lazare
> it is completely relevant here as he's being cited as a real psychiatrist.
> That's not verifiable without knowing a real name.

We're discussing his criticism of how a ton of "real psychiatrists" got some
major things wrong. This criticism stands or falls on its merits, _not_ on his
status as a "real psychiatrist". You don't need to be a "real psychiatrist" in
order to do this sort of analysis (and, it turns out, a ton of "real
psychiatrists" got this wrong in the past).

This is like reading a cooking blog by someone who says they're an auto
mechanic, and demanding proof that they really know how to fix cars before
you'll listen to them talk about pie crusts. If they're not claiming special
expertise in the area (and he isn't), the credentials don't matter.

Eg, Scott writes:

> Border et al focus this infrastructure on 5-HTTLPR and its fellow depression
> genes, scanning a sample of 600,000+ people and using techniques twenty
> years more advanced than most of the studies above had access to. They claim
> to be able to simultaneously test almost every hypothesis ever made about
> 5-HTTLPR, including “main effects of polymorphisms and genes, interaction
> effects on both the additive and multiplicative scales and, in G3E analyses,
> considering multiple indices of environmental exposure (e.g., traumatic
> events in childhood or adulthood)”. What they find is…nothing.

I mean, either he's right or he's wrong, and you could go read the paper
yourself and find out. Or you can trust other people who have read the paper.
Or, I dunno, you could ask one of the _authors_ of the paper if they think
Scott's summary is any good:

> I have never in my career read a synopsis of a paper I've (co-)written that
> is better than the original paper. Until now. I have no clue who this person
> is or what this blog is about, but this simply nails every aspect of the
> issue

(Source:
[https://twitter.com/matthewckeller/status/112638089124318822...](https://twitter.com/matthewckeller/status/1126380891243188224))

And again, none of this has anything to do with his status as a psychiatrist.
You _shouldn't_ blindly trust a blog post about this topic just because the
author is verifiably a psychiatrist, but you shouldn't blindly distrust one
just because they are not.

~~~
unityByFreedom
> We're discussing his criticism of how a ton of "real psychiatrists" got some
> major things wrong

No, you replied to me. My comment was it is strange that this Atlantic article
cites him as "the psychiatrist Scott Alexander" because that is a pen name,
and it isn't verifiable that he is a psychiatrist due to his anonymity.

Your refutation of my point is "it doesn't matter that it isn't verifiable",
but that was part of my point. You can't just toss it out, that's being
intellectually dishonest.

Any piece of journalism that cites a person should do so only if they can
verify the source. When journalists skip this step, at least a portion of
their article is fake news.

> You shouldn't blindly trust a blog post about this topic just because the
> author is verifiably a psychiatrist, but you shouldn't blindly distrust one
> just because they are not

This has nothing to do with my point, which is all about how The Atlantic
journalist cites him, and has little to do with whatever his blog says.

~~~
skybrian
You're assuming without evidence that Ed Yong didn't verify his credentials.
How do you know? Maybe ask?

~~~
unityByFreedom
Again, the author cited him as "the psychiatrist Scott Alexander".

If he did verify credentials, he ought to have mentioned the name is an alias.

~~~
skybrian
He could have mentioned it, but I don't understand why you think it's
important.

The authors of the actual scientific study are Richard Border and his co-
authors. Scott Alexander is just a well-known blogger who gets credit for
writing about it in a vivid way that got people's attention. He's not a
primary source, so for the purposes of this article, it doesn't really matter
whether he's using an alias or even whether he's a psychiatrist. (It's not a
credential in the relevant field anyway.) You can verify the quotes by
following the link.

Calling up other scientists in the field and asking questions about a
scientific paper is how science writers verify a science article, and Ed Yong
did that.

~~~
unityByFreedom
> I don't understand why you think it's important?

Scott Alexander is a pseudonym. Journalists referencing that name should note
this, along with the fact that his credentials as a psychiatrist are not
publicly verifiable.

------
hairytrog
This happens when you have a boatload of "educated" people with degrees in
psychology/neuroscience who have to do "research." There's something like
100,000 new psych grads each year - and there's only so many coffee shops.

~~~
dang
Please don't post shallow dismissals to HN, and please don't snark. These are
two of the site guidelines. If you'd read them, and follow them when
commenting here, we'd be grateful.

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

------
gnoppa
Now please let the same epiphany come about cancer research or personalized
medicine based on genes.

~~~
icegreentea2
Let's not throw the baby out with the bathwater. No doubt there's a lot of
sketchy stuff going on, but the fields are also very wide, with lots of
"verticals" going on. Just because one vertical might be built on a shaky
foundation doesn't mean all of them are (and the converse is also true).

~~~
gnoppa
You might be right that not everything in those fields is bad, but most is.
Nevertheless, what is so terrible is that all the money is put into research
(gene mutations causing cancer) even though it was proven wrong many years
ago. That becomes very clear after reading Thomas Seyfried's work (cell
fermentation causing cancer).

Sadly, academics have the incentives to publish papers in prestigious journals
that make them sound smart and not to find a simple cure. Likewise, the free
market does not have the incentive to easily and cheaply cure people but to
maximize profits. Hence, so much research is absolutely abysmal, wrong,
misleading and harmful. Sadly the human mind is constructed in such a way that
we self-deceive to gain advantages. That is why there is no need for a
conspiracy but just the wrong kind of incentive structure.

~~~
icegreentea2
I know you're getting downvoted. I don't think things are as "bad" as you
think they are, but it's also impossible to deny that our current incentive
structure is not ideal and can lead us to bad results.

That said, with respect to Seyfried's work, I urge you to give something like
[https://sciencebasedmedicine.org/ketogenic-diets-for-cancer-...](https://sciencebasedmedicine.org/ketogenic-diets-for-cancer-hype-versus-science/)
a read. Go ahead and skip all of the bits about his chosen associations and
focus on just the science parts if you would like. I think you'll find that
quite a lot of research has been done on cancer metabolism, and that
Seyfried's work can at best be considered incomplete.

If you find the blog's arguments about the lack of clinical trial data to be
circular (I guess they kind of are), I don't have any answers for you now, but
I suggest you keep an eye out for this guy:
[https://clinicaltrials.gov/ct2/show/study/NCT01754350](https://clinicaltrials.gov/ct2/show/study/NCT01754350)

This is one of the studies in early stages that the blog links to (it's been a
few years). They just wrapped up the study a few months ago, and hopefully
there will be results posted soon. You can also search for that study number
(NCT01754350) or study name (ERGO2) for papers when they do come out (again,
expect at least a few more months).

