
Machine learning of neural representations of emotion identifies suicidal youth - phr4ts
https://www.methodsman.com/blog/mri-suicide
======
nieve
"This study used machine-learning algorithms (Gaussian Naive Bayes) to
identify such individuals (17 suicidal ideators versus 17 controls) with high
(91%) accuracy, based on their altered functional magnetic resonance imaging
neural signatures of death-related and life-related concepts."

Anyone with a Nature subscription want to check whether they simply trained
their discriminator and then used it on the same data set? There's no mention
in the abstract of testing it against a fresh control set and that's not
promising.

[https://www.nature.com/articles/s41562-017-0234-y?error=cook...](https://www.nature.com/articles/s41562-017-0234-y?error=cookies_not_supported&code=6e85e088-d042-4f52-a19e-320e2cd0bdd0)
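For anyone curious what the classifier itself does: Gaussian Naive Bayes fits a per-class Gaussian to each feature independently and picks the class with the higher posterior. A minimal sketch on made-up two-feature "activation vectors" (all data, labels, and function names here are illustrative, not the paper's pipeline):

```python
import math

def gnb_fit(X, y):
    """Per-class prior plus per-feature mean/variance (Gaussian Naive Bayes)."""
    params = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [sum((v - m) ** 2 for v in col) / n + 1e-9  # smoothed
                 for col, m in zip(zip(*rows), means)]
        params[c] = (n / len(y), means, varis)
    return params

def gnb_predict(params, x):
    """Return the class with the highest log-posterior under independent Gaussians."""
    def log_post(c):
        prior, means, varis = params[c]
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, varis):
            lp -= 0.5 * math.log(2 * math.pi * s2) + (v - m) ** 2 / (2 * s2)
        return lp
    return max(params, key=log_post)

# Toy "activation vectors" for two groups of scans.
X = [[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],
     [1.0, 1.1], [1.1, 0.9], [0.9, 1.0]]
y = ["control"] * 3 + ["ideator"] * 3
model = gnb_fit(X, y)
print(gnb_predict(model, [0.05, 0.10]))  # a vector near the first cluster
```

Nothing about the algorithm itself guards against leakage, though; that depends entirely on which data `gnb_fit` ever sees, which is exactly the question above.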

~~~
onetwotree
17 subjects per group is _extremely_ small.

From either a machine learning or a statistical point of view, using such a
small sample is problematic.

This is a chronic issue with fMRI studies: administering an fMRI is extremely
expensive, which has led to some very difficult-to-reproduce results in the
field.

~~~
matt4077
People love the "n=XX is far too little data!" argument, yet it's more
complicated than that. Sometimes 600,000 is too little, yet sometimes 17 is
enough.

Example: you believe a newly found plant species is toxic. You give it to 17
"grad student volunteers", while giving a placebo to 17 others. All in the
first group die a gruesome death within 20 hours. None of the others do.

Result: yes, significance. (also: tenure!)

I'm not saying that this study is significant (the statistics seem to be
slightly beyond my event horizon), and your criticism also stops short of an
outright dismissal of the research. But sample size alone makes for a bad
measure of quality. Yes, even p-values are better.

~~~
jacalata
I really hope you wouldn't get tenure for a study that killed all your
subjects.

~~~
rav
Subjects? But they're volunteer grad students!

~~~
jacalata
Subjects, minions, whatever you want to call them :p

------
loeg
Doesn't 91% seem far too low to be useful for the general population? Consider
that only 7% of the background population experiences one or more depressive
episodes per year[0] (edit: okay, maybe 8% in youth). Assuming independence and
using the higher 8% background rate for youth, .91 * .08 = 7.3% of the
population will receive a true positive result and (1-.91) * (1-.08) = 8.3% of
the population will receive a false positive result. That's "pretty bad":
false positives outweigh true positives, making the value of a positive result
close to nil.

(Consider what happens to people so-diagnosed as suicidal when in fact they
are not (false positives). Involuntary psychiatric imprisonment is a terrible
thing if it isn't absolutely necessary.)

[0]: [https://www.healthline.com/health/depression/facts-statistics-infographic](https://www.healthline.com/health/depression/facts-statistics-infographic)

~~~
hk__2
> I don't have the stats grounding to come up with the proportion of true
> positives to false positives, but I suspect this would be "pretty bad" —
> vastly more false positives than true positives

IANAStatistician, but let's assume the system is right 91% of the time and we
try to detect those 7% you mentioned. Take 1000 people: 70 are depressive and
930 aren't. Of those, 70 × 0.91 ≈ 63 will be correctly classified as
depressive by the system and 930 × 0.91 ≈ 846 will be correctly classified as
non-depressive.

That leaves us with 63 true positives, 846 true negatives, 7 false negatives
and 84 false positives. False positives largely outnumber false negatives, and
they also outnumber the true positives.

(if a statistician read this, please correct me if I’m wrong)
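The arithmetic above checks out mechanically (assuming, as a simplification, that sensitivity and specificity are both 0.91; the exact products are 63.7 and 846.3, which the comment rounds):

```python
population = 1000
base_rate = 0.07      # the ~7% figure from the parent comment
accuracy = 0.91       # assume sensitivity = specificity = 0.91

positives = population * base_rate        # 70 truly at-risk
negatives = population - positives        # 930 not at-risk

true_pos = positives * accuracy           # ~63.7 correctly flagged
false_neg = positives - true_pos          # ~6.3 missed
true_neg = negatives * accuracy           # ~846.3 correctly cleared
false_pos = negatives - true_neg          # ~83.7 wrongly flagged

# Precision: of everyone the test flags, what fraction is truly at-risk?
precision = true_pos / (true_pos + false_pos)
print(f"precision = {precision:.2f}")
```

So fewer than half of the people the test flags would actually be at risk, which is the base-rate problem in a nutshell.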

~~~
loeg
Yeah. I came to largely the same conclusion but used the 8% number for youth,
given the current title.

------
nonbel
Will it be that 10% of people are suicidal and it always predicts non-
suicidal?

Will it be that accuracy actually means AUC?

Will it be that they are reporting predictive skill on the training data?

~~~
jaibot
It's training data. There are 17 suicidal and 17 non-suicidal scans, for a
total of 34. They trained 34 models, leaving one scan out each time. Of those
34 models, 31 correctly predicted the left-out scan.

IANAStatistician, but this seems like a trash result.

~~~
nonbel
Cross validation is ok if you do it once, but they repeatedly did it and chose
the features based on the results. You can't keep adjusting your
model/features based on cross validation performance without overfitting to
the training data.

~~~
jjoonathan
How did they adjust the model/features based on CV performance? It looks to me
like they did LOOCV.

~~~
nonbel
Read the second paragraph I quoted above:

"The features used by the classifier to characterize a participant consisted
of a vector of activation levels for several (discriminating) concepts in a
set of (discriminating) brain locations. To determine how many and which
concepts were most discriminating between ideators and controls, a reiterative
procedure analogous to stepwise regression was used, first finding the single
most discriminating concept and then the second most discriminating concept,
reiterating until the next step reduced the accuracy. A similar procedure was
used to determine the most discriminating locations (clusters)."

The features were chosen using the same data as used to assess predictive
skill.

~~~
yorwba
That quote does not support your summary, unless you are basing it on
information not explicitly mentioned. (I.e., they didn't say that they were
only using training data to select features, but if they are at all competent,
they did.)

~~~
nonbel
See the last part of this post:
[https://news.ycombinator.com/item?id=15598117](https://news.ycombinator.com/item?id=15598117)

Can you provide pseudocode consistent with what they described (in the post
you're responding to) that wouldn't lead to leakage? I can't see it.

~~~
yorwba
Select a training set, leaving out one sample for validation. For all
features, train a classifier on the training set using that feature. Keep the
one that gives the highest discrimination score on the training set. Repeat
with more features. Then evaluate the final classifier on the validation
sample, which has so far not been seen in any of the steps. The result
provides an estimate of the risk on unseen data from the same distribution.

To get the estimation variance down, you can repeat this for all possible
choices of validation sample. That means you start the feature selection
process on the new training set over from scratch and obtain another risk
estimate. If they kept the features selected earlier, that estimate would be
"contaminated" and not independent, but if they correctly start over, the
procedure is valid.
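That procedure can be sketched on synthetic data, with a nearest-centroid classifier standing in for the paper's actual pipeline (every number and feature here is made up; the point is only that feature selection restarts from scratch inside each fold):

```python
import random
from statistics import mean

random.seed(0)

# 34 toy "scans" with 8 candidate features; only feature 3 is informative.
def make_sample(label):
    x = [random.gauss(0, 1) for _ in range(8)]
    x[3] += 2.0 * label          # shift the informative feature for one class
    return x

data = [(make_sample(lbl), lbl) for lbl in [0, 1] * 17]

def classify(train, feats, x):
    """Nearest class centroid, restricted to the selected features."""
    dist = {}
    for c in (0, 1):
        cols = list(zip(*([s[f] for f in feats] for s, l in train if l == c)))
        dist[c] = sum((x[f] - mean(col)) ** 2 for f, col in zip(feats, cols))
    return min(dist, key=dist.get)

def select_features(train, k=2):
    """Greedy forward selection, scored on the training set only."""
    chosen = []
    for _ in range(k):
        def train_acc(f):
            return mean(classify(train, chosen + [f], x) == l for x, l in train)
        chosen.append(max((f for f in range(8) if f not in chosen), key=train_acc))
    return chosen

# Leave-one-out CV: selection starts over in every fold, so the held-out
# scan never influences which features get picked.
hits = 0
for i in range(len(data)):
    train = data[:i] + data[i + 1:]
    feats = select_features(train)
    x, label = data[i]
    hits += classify(train, feats, x) == label

print(f"LOOCV accuracy: {hits}/{len(data)}")
```

The leaky variant would call `select_features(data)` once on all 34 samples and only then loop over folds, which is exactly the ambiguity in the quoted methods paragraph.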

~~~
nonbel
My understanding is you are saying create N (N=34 in this case) different
parallel models that use different features/etc. Then take the average (or
whatever summary stat) of the accuracies to get the predictive skill.

When we want to use these models, we run new/test data through all N=34 models
in parallel and calculate a prediction from each. Then these predictions need
to be combined somehow (once again an average, etc.). That's an average of
predictions, not of accuracies.

Where was that prediction-combining step during training? It seems your scheme
necessarily calculates an accuracy based on a different process than the one
applied to new data.

~~~
yorwba
No, when you want to classify a new sample, you take a model trained on the
complete labeled data you have and use the prediction of that. The validation
procedure using those 34 models trained on subsets of the data is just to tell
you how accurate you should expect the result to be. Afterwards, you can throw
those models away.

Of course you could build an ensemble model, but if you want to know the
expected accuracy of doing that, you need to include the ensemble-building
into your validation procedure. (Or use some theorem that lets you estimate
the ensemble performance from that of individual models, if that is possible.)

------
iregina
"Words like death and cruelty differentially activated the left superior
medial frontal area and the medial frontal/anterior cingulate in the
individuals with suicidal ideation – these are areas associated with
self-referential thought." I wonder how they reacted to "alive" and "humane".

------
avip
It's the kind of "studies" you call BS on first, then go on to figure out the
details. Not a very scientific process for sure, but always produces the
correct result.

[https://www.naturalblaze.com/2017/03/scandal-mri-brain-imaging-completely-unreliable.html](https://www.naturalblaze.com/2017/03/scandal-mri-brain-imaging-completely-unreliable.html)

~~~
verall
At least post the original article[0] if not the paper[1], rather than some
weird alt-health website.

[0] [https://www.sciencealert.com/a-bug-in-fmri-software-could-invalidate-decades-of-brain-research-scientists-discover](https://www.sciencealert.com/a-bug-in-fmri-software-could-invalidate-decades-of-brain-research-scientists-discover)

[1] [http://www.pnas.org/content/113/28/7900.abstract](http://www.pnas.org/content/113/28/7900.abstract)

------
chiefalchemist
Slightly off topic, but the book "Change Your Brain, Change Your Life" was
pretty interesting. Perhaps not as scientific as some would prefer, but
nonetheless thought-provoking.

------
m3kw9
Basically you extract a matrix representation of the active and inactive
regions, then have a DNN learn to classify it like you would with images. Is
that a correct assumption?

------
evolve2017
To the moderators, the title would be more accurate with 'fMRI' as opposed to
'MRI'. The latter is typically used to examine structural brain elements,
whereas fMRI is thought to correlate with brain activity and, by extension,
thought.

Confusing the two would lead to the more unusual conclusion that suicidal
ideation is associated with abnormal brain connectivity, while the authors are
instead focusing on neuronal activity.

~~~
mintplant
Specifically, fMRI measures blood flow across the brain (the BOLD response),
which is correlated with neuron activity. It has good spatial resolution but
poor temporal resolution [0], compared to EEG which gives you good temporal
resolution but poor spatial resolution.

[0] i.e. you know with precision where in the brain activity occurred, but
less precisely when it occurred in time

------
chris_wot
So what will they do after they detect you are suicidal? Stick you in a psych
ward? Yet more attempts at taking away the rights of those going through
trauma.

~~~
fao_
That seems like putting the cart before the horse.

Diagnosis tools could mean faster access to treatment. Currently in the UK,
the waiting list for access to mental health treatment is in the range of two
to three years. Transforming "suicidal ideation" from a vague human-given
diagnosis into a tool-given diagnosis makes it politically easier to push for
that.

In any case, _that's not going to happen_ based off a single study with _91%
accuracy_.

------
gremlinsinc
Maybe they should use this test before gun purchases... I don't think someone
suicidal should purchase a gun...hell I don't care if they kill themselves,
but lately a lot of suicides were mass suicides...we don't need more of that
shit.

~~~
Gibbon1
Problem is people buy guns when they aren't suicidal.

Tip: If you own a gun and are feeling suicidal, give it to a trusted person
for safekeeping.

~~~
marcoperaza
It can't hurt to get rid of the gun if you're suicidal. But would it actually
make a difference? Suicide rates across countries aren't related to gun
availability.

~~~
aplummer
Suicide is definitely linked to gun availability.

"A study by the Harvard School of Public Health of all 50 U.S. states reveals
a powerful link between rates of firearm ownership and suicides. Based on a
survey of American households conducted in 2002, HSPH Assistant Professor of
Health Policy and Management Matthew Miller, Research Associate Deborah
Azrael, and colleagues at the School’s Injury Control Research Center (ICRC),
found that in states where guns were prevalent—as in Wyoming, where 63 percent
of households reported owning guns—rates of suicide were higher. The inverse
was also true: where gun ownership was less common, suicide rates were also
lower."

[https://www.hsph.harvard.edu/news/magazine/guns-and-suicide/](https://www.hsph.harvard.edu/news/magazine/guns-and-suicide/)

~~~
wahern
Gun ownership in the U.S. is strongly correlated with socio-economic status,
locality, etc.

Everybody points to the Australian example, where suicides declined after the
1996 gun control legislation. But unemployment in Australia peaked in 1995 and
declined precipitously afterward until 2009.

Given everything we know about suicide rates in other countries, and about
changes in suicide rates domestically (e.g. the recent increase as gun
ownership goes _down_[1]), it would be very odd if gun ownership were a root
cause of suicide.

That said, in a country with a strong gun culture like the U.S., I would
totally expect a generational dip in suicides if we substantially removed
access to guns. But then I'd expect it to normalize when suicidal individuals
became more comfortable with other methods. Just like with mass shootings,
there's a strong imitation effect. Take away the model that people imitate and
it might be awhile until there's a regression to the mean.

Even so, that's still reasonable justification for limiting access to guns--
saving tens of thousands of individuals. I'm not sure I'd agree with such a
policy prescription because of the insane gun politics, but it's quite
defensible from a public health perspective.

[1] The number of guns has increased, but they're concentrated in fewer
households.

~~~
loeg
[https://www.hsph.harvard.edu/means-matter/](https://www.hsph.harvard.edu/means-matter/)

Tl;dr?

* Many suicide attempts occur with little planning during a short-term crisis.

* Intent isn’t all that determines whether an attempter lives or dies; means also matter.

* 90% of attempters who survive do NOT go on to die by suicide later.

* Access to firearms is a risk factor for suicide.

* Firearms used in youth suicide usually belong to a parent.

* Reducing access to lethal means saves lives.

~~~
wahern
Those are all just rationalizations. They can't reflect anything intrinsic
about suicide risk if they can't predict relative suicide rates outside the
U.S.

Suicide is an epiphenomenon of larger socio-economic issues. Among OECD
countries the U.S. comes in the middle of the pack. If guns were a causative
factor, then considering how prevalent they are (by a ridiculous factor!) our
rates should be much higher:

[https://en.wikipedia.org/wiki/Suicide_in_the_United_States#/media/File:Suicide-deaths-per-100000-trend.jpg](https://en.wikipedia.org/wiki/Suicide_in_the_United_States#/media/File:Suicide-deaths-per-100000-trend.jpg)

[http://foreignpolicy.com/2013/05/03/how-does-americas-suicide-rate-compare-globally/](http://foreignpolicy.com/2013/05/03/how-does-americas-suicide-rate-compare-globally/)

[http://www.oecd-ilibrary.org/sites/health_glance-2011-en/01/06/g1-06-01.html?itemId=%2Fcontent%2Fchapter%2Fhealth_glance-2011-9-en&_csp_=dc7450c10468a7a49a6262726a53ecee](http://www.oecd-ilibrary.org/sites/health_glance-2011-en/01/06/g1-06-01.html?itemId=%2Fcontent%2Fchapter%2Fhealth_glance-2011-9-en&_csp_=dc7450c10468a7a49a6262726a53ecee)

Any correlation between guns and suicide in the U.S. is easily understood in
terms of modeling--guns are how Americans kill themselves. Take away the guns
and, yes, there'll be a dip in suicide rates, until Americans learn how people
kill themselves elsewhere around the world. Heck, they're already learning
that with opioids.

Let's go back to what I said about Australia. I claimed that the change in
Australian suicide rates is better understood in terms of the unemployment
rate. Now let's test that hypothesis. [... google google google ... ] Here we
go:

    
    
      Suicide has reached a 10-year high in Australia as 3027
      people killed themselves last year, the largest cause of
      death among 15 to 44-year-olds.
    
      Last year, 12.6 people in every 100,000 killed themselves
      compared to 12 the year before, 11.4 in 2012 and a low of
      10.4 in 2006.
    
      -- http://www.theaustralian.com.au/news/nation/suicide-rate-in-australia-reaches-10year-high/news-story/cb5d8384aadb571778775bda236f3c35
    

What was the rate in 1995, the year before the gun control law? 13.0.
([https://www.aph.gov.au/About_Parliament/Parliamentary_Depart...](https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/BN/2011-2012/Suicide#_Toc299625631))

So suicides were at their lowest the very same year that unemployment was at
its lowest (2006)? Check! And they rose as unemployment rose? Check! To the
point where they're back at the pre-law level? Check!

I won't deny that there's some nuance here that we can tease out, but if you
read the actual papers that link guns to suicide, they do a much worse job at
nuance. In fact, not a single one of the papers I've read even considered the
unemployment rate. Which is patently bad science.

Correlation is not causation. All the gun suicide papers do is point out
specious correlations. But you don't need a degree in statistics to know this.
And you don't need a science degree to be able to see the gargantuan holes in
these arguments--that the correlations have simpler explanations.

I'm not denying that gun control could appreciably affect suicide rates.
Imitation and modeling have huge effects--much better established than the
supposed gun effect. So huge that even news media abstain from reporting
suicides, _especially methods_ of suicide. Saving thousands of lives with
gun control, even if it's an ephemeral gain, is an absolute benefit that's
worth debating. But let's just be honest about this stuff.

~~~
aplummer
I hope you submit your thoughts somewhere more formal than this comment
section; I find the unemployment angle interesting. Particularly in Australia,
with $540-per-fortnight unemployment benefits and free healthcare, you could
compare suicides against safety nets and estimate the overall suicide cost of
poor safety nets.

It does seem like you agree in your last paragraph, though: gun control would
lower suicide rates. People would try other methods with higher failure rates,
and people's lives would be saved. Not all of them, but an appreciable number.

