
EEG Accurately Predicts Autism as Early as 3 Months of Age - contourtrails
https://www.sciencedaily.com/releases/2018/05/180501085140.htm
======
projectramo
This would be amazing if it works.

The reason I say "if" and hedge my words is that the conventional wisdom is
that autism is not a single disorder with a single underlying condition. It is
a cluster of symptoms of varying intensity and it is (likely) caused by a host
of underlying conditions.

Two children diagnosed with the disorder can have no overlapping symptoms.

If that is right, then the first step is to break it down into different kinds
of conditions. (No eye contact is caused by X. Speech issues are caused by Y).

However, if this test can effectively pick apart one of the underlying
conditions and its symptoms that would be a huge step forward. We could
definitively say whether a child has this particular version of "autism".

After we tease apart a few more versions, the original condition will
disappear and be subsumed by these other versions.

~~~
tejohnso
> Two children diagnosed with the disorder can have no overlapping symptoms.

Is there a precedent for this with other disorders? Seems to me that if there
are no overlapping symptoms, it should be a separate disorder. Even if it's
just arbitrary naming like "Type X" as in diabetes.

~~~
DanBC
> Seems to me that if there are no overlapping symptoms,

The full name is "Autistic Spectrum Disorder", so the name hints that there's
a range of stuff happening.

To be autistic someone has to have problems with social communication,
problems making or maintaining friendships, and fixed and repetitive
interests.

Some people also have other stuff on top. These things are common with autism,
but are not needed for the dx.

Alexithymia (difficulty recognising emotion in yourself and others) is one
example. It's far more common in autistic people, but you don't need it to be
autistic. Between 50% and 55% of autistic people have alexithymia. Sensory
sensitivities are another. There are a range of these things that are more
common in autistic people, but aren't needed for the dx.

And there's a lot of co-morbidity too. People with autism are more likely to
have depression or anxiety or OCD. These aren't part of autism, but it's
complicated to untangle what's going on. Is someone socially isolated because
they're depressed, or autistic, or is it a bit of both?

When you start looking at these other things it makes sense that autism might
be an umbrella diagnosis.

~~~
gowld
And it overlaps with the diagnostic "Pervasive Developmental Disorder" in case
you thought ASD wasn't a big enough tent.

~~~
cbhl
In the DSM 5, "Pervasive Developmental Disorder - Not Otherwise Specified"
(PDD-NOS) and others (Asperger Syndrome) got refactored into a single unified
Autism definition.

------
DINKDINK
Relevant stats:

>The algorithms predicted a clinical diagnosis of ASD with high specificity,
sensitivity and positive predictive value, exceeding 95 percent at some ages.

More about the metrics you care about[1]

Edit: Many people in this thread are talking about Bayesian stats that it
appears they don't fully appreciate or understand. They're saying that 95%
statistical accuracy is commendable. But 95% sensitivity and 95% specificity
aren't good enough for broad screening. Why? Autism has a prevalence of about
1 in 68[2]. That means if you screened the general population with this test,
the probability that someone who tests positive actually has the condition
(the positive predictive value) is a measly ~20%. Play around with these
numbers at the following app: [https://kennis-
research.shinyapps.io/Bayes-App/](https://kennis-research.shinyapps.io/Bayes-
App/)

[1][https://en.wikipedia.org/wiki/Sensitivity_and_specificity](https://en.wikipedia.org/wiki/Sensitivity_and_specificity)
[2][https://www.autism-society.org/what-is/facts-and-
statistics/](https://www.autism-society.org/what-is/facts-and-statistics/)
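
The arithmetic behind that ~20% figure can be sketched in a few lines of
Python (a toy calculation using the numbers quoted above: 95% sensitivity, 95%
specificity, 1/68 prevalence; the `ppv` helper is just for illustration):

```python
# Bayes' rule for a screening test: of everyone who tests positive,
# what fraction actually has the condition?
def ppv(sensitivity, specificity, prevalence):
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# 95%/95% test against a 1-in-68 base rate
print(round(ppv(0.95, 0.95, 1 / 68), 3))  # 0.221
```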

~~~
fundamental
Thanks for writing this summary. Plenty of people get the wrong idea when it
comes to test accuracy for topics like health diagnosis.

In the case of this particular topic it does seem like the outlined test could
be another tool that doctors could utilize. If, for instance, a child has shown
a change in developmental milestones, then that observation comes with its own
(somewhat doctor-specific) sensitivity and specificity. That information could
be combined with the EEG test to improve the overall doctor+test accuracy.
Nothing's going to be perfect, but the outlook is a bit more positive than
presented in your example.

~~~
DINKDINK
>Nothing's going to be perfect, but the outlook is a bit more positive than
presented in your example.

Ideally, the doctor would use their judgement to narrow down the candidates
the test is applied to, so that it's only used on children with a strong prior
probability of autism. That would substantially increase the PPV. You'd need
roughly 50% prevalence in the tested group before you get to 95% PPV.
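
That ~50% figure checks out with a quick brute-force scan (toy code, same
Bayes arithmetic as before; the `ppv` helper is illustrative, not from the
paper):

```python
# PPV of a test given sensitivity, specificity, and pre-test prevalence.
def ppv(sens, spec, prev):
    tp = sens * prev
    fp = (1 - spec) * (1 - prev)
    return tp / (tp + fp)

# Scan pre-test prevalences to find where a 95%/95% test first reaches
# 95% PPV (small tolerance to sidestep float rounding at the boundary).
needed = next(p / 1000 for p in range(1, 1001)
              if ppv(0.95, 0.95, p / 1000) >= 0.95 - 1e-9)
print(needed)  # 0.5
```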

------
hannob
Given that diagnosing Autism is hard and that all previous attempts at
diagnosing it in a non-psychological way have failed this would be an
extremely surprising result.

Call me skeptical until it's reproduced independently.

------
neurotech1
EEG in 3mo infants is "poorly organized" even compared to a 24mo child, so I
would take this one with a grain of salt.

~~~
fundamental
I was taking this with a hefty grain of salt, though at least on first pass
they seem to have a reasonable cross-validation/testing procedure (which is
usually one of my major complaints with comp-neurosci papers). As for 3-month
infants, you can see that their classifier does have difficulties at that age
bracket, though performance seems to start to saturate around 6 months (table
5).

It would be interesting to give the paper another pass to see how the
operational details of data collection could impact the quality of the data
and thus the classification results. EEG can be really hit-or-miss with
different equipment. More so with simple features such as the band-power ones
used within this paper.

~~~
nonbel
>"at least on first pass they seem to have a reasonable cross-
validation/testing procedure (which is usually one of my major complaints with
comp-neurosci papers)."

It looks like the usual overfitting to the CV to me... They had 1000 features,
200 data points, and tried out "several different algorithms".

~~~
fundamental
That doesn't really diminish the results in my book. If you're trying to
publish something it's basically assumed that you're going to try out several
methods and show the one with better performance even if the performance
difference is not statistically significant.

As for the number of features and data points: in this field you generally
don't have a ton of subjects to sample from, and the high-dimensional features
are a natural result of the array-based recordings. It should be possible to
perform dimensionality reduction on the data; however, the ML methods are
already implicitly doing that step, so it's not necessarily that important.

My normal gripe is when the tested subjects have some data in the training
fold and some data in the testing fold (even if the data points are separate).
In those cases the ML method can fit the statistics of a particular subject
rather than the true target class (e.g. target movement of a cursor in a BCI).
In this paper they explicitly test on subjects that were never trained on. So,
even though the data + particular supervised layer is going to
budge the results around some, it should not be a night and day difference
from what's expected in reality.

~~~
StavrosK
You have to account for the fact that you used so many algorithms, though.
Using ten different algorithms makes it ten times more likely you'll fit your
dataset well just by chance.
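
One way to put a rough number on that effect (a toy calculation, not from the
paper: it assumes 10 candidate classifiers that are all pure chance, each
scored on the same 200-sample test set):

```python
from math import comb

# P(a single coin-flip "classifier" gets at least k of n test samples right):
# exact binomial tail probability.
def binom_tail(n, k, p=0.5):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_samples, n_algorithms = 200, 10
single = binom_tail(n_samples, 110)              # one chance model scoring >= 55%
best_of_ten = 1 - (1 - single) ** n_algorithms   # at least one of ten does

print(f"{single:.3f} {best_of_ten:.3f}")
```

A single chance-level model rarely clears 55% accuracy, but report only the
best of ten and it happens more often than not; which is why independent
replication matters.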

~~~
fundamental
I acknowledge that the reported accuracy of a system will be higher if you
take the max accuracy of 10 methods which have the same 'true' accuracy +-
some noise. The results presented in table 5 of the paper are very unlikely in
my opinion (as someone currently in the ML field and who has worked in the
field of computational neuroscience) to be solely due to randomly trying
different ML techniques without the underlying data providing a noteworthy
difference between the target classes.

If these results are replicated independently with a different dataset then
the magnitude of the overselling of the method will be seen. I just don't
think that it makes sense to doubt the results (i.e. with a grid of EEG
sensors and bandpower features it is possible to identify a portion of autism
cases) based upon this factor alone.

~~~
nonbel
>"The results presented in table 5 of the paper are very unlikely... to be
solely due to randomly trying different ML techniques"

This is a strawman; no one disputes that their methods picked up on some
correlations.

------
fundamental
Research paper in question:
[https://www.nature.com/articles/s41598-018-24318-x](https://www.nature.com/articles/s41598-018-24318-x)

------
kpil
5% is quite a lot of misdiagnosed babies if this is implemented as a mass
screening activity: 2-3 times higher than what the internet seems to think is
the actual ASD incidence, and I imagine that that number includes a lot of
"highly functional" ASD cases.

What should a parent do when this happens? There will be only perhaps a 20-30%
chance that the baby actually _does_ have ASD and is not just a false positive.

I imagine that ASD "prevention" is mostly behavioural training [I have no idea
at all actually] - but how much time and effort would that take? What are the
consequences for healthy babies? I imagine that most people would spend a lot
of effort on anything that could help in cases like this.

It's a bit problematic since it's not possible to know until after a couple of
years whether it was a false positive or not. It might turn out that a lot (or
most) of the successful "recoveries" were in fact false positives.

~~~
alexandercrohde
What? 95% accuracy is outstanding. I see that you're saying if you said "No"
for all babies, that'd be 95%, so that's a legitimate point.

However, it sounds like it's better than that "We were also able to predict
ASD severity, as indicated by the ADOS Calibrated Severity Score, with quite
high reliability, also by 9 months of age."

I imagine the intervention is ABA therapy
([https://en.wikipedia.org/wiki/Applied_behavior_analysis#Effi...](https://en.wikipedia.org/wiki/Applied_behavior_analysis#Efficacy_in_autism))
or similar, which is costly, but otherwise not a risk.

~~~
DINKDINK
>95% accuracy is outstanding

95% specificity and 95% sensitivity isn't good enough to test the general
population. See why here:
[https://news.ycombinator.com/item?id=16981888](https://news.ycombinator.com/item?id=16981888)

------
callesgg
Wow, that is insanely cool. Especially considering that we don't even know
what ASD is, or what could be causing it.

Could this be used for an ASD scale?

~~~
_Schizotypy
There has been recent research showing that part of the issue with ASD is
related to the glutamate system, specifically NMDA. Some sort of genetic
transcription problem. Sorry, I can't cite the exact paper offhand.

------
devindotcom
There's some really early eye-movement-based detection too. Good to see this.
As others point out, it isn't a binary but a hugely variable spectrum, but
there are some commonalities that at the very least suggest that a child
receive further screening, or prepare the parents to watch for other symptoms.
Regardless of the type and spectrum position, a kid with an early
diagnosis/warning seems much more likely to have a good outcome than one that
gets one while in pre-school or kindergarten.

------
tw1010
I don't understand why there's such a massive focus on autism research, both
on HN and in the news in general. It doesn't seem like a Pareto-optimal use of
attention resources.

~~~
loriverkutya
Because with early intervention, people on the spectrum can function better
in a neurotypical environment, which means more people with ASD can work and
will need less support later.

------
monochromatic
Does early diagnosis give better treatment options?

~~~
scardine
Yes! The most effective treatment is behavioral therapy, and the results are
better if started early. At a young age the brain is very plastic, and for
mild cases you can teach all the things that are innate to neurotypical kids.

------
talltimtom
In this weird world that we live in, this might actually help prevent a load
of other diseases by increasing vaccination.

~~~
wanderfowl
Agreed. That was my first thought, too. Part of the reason that these silly
vaccine/autism conspiracy theories are hard to shut down is the fact that
Autism is harder to detect pre-vaccination, so there's confirmation bias here
among parents of Autistic children.

If this study provides nice evidence to the Anti-Vax crowd that Autism can be
measured and detected well before vaccination age, this might help take some
of the wind out of the sails of the movement.

Of course, for many, science won't help, much like usable retroreflectors will
only break down the fantasy for a subset of moon landing conspiracy theorists,
but if it gets brought up even once in a Whole Foods somewhere, they've done a
good thing.

~~~
205guy
In the other comment thread that got shadow-banned, someone pointed out that
there are several vaccinations recommended by 2 months of age [1]. I remember
my child getting the first one before leaving the hospital (HepB [2]).

[1]
[https://news.ycombinator.com/item?id=16979836](https://news.ycombinator.com/item?id=16979836)
[2] [https://kidshealth.org/en/parents/immunization-
chart.html](https://kidshealth.org/en/parents/immunization-chart.html)

------
tnash
"An experimenter blew bubbles to distract them." Wasn't expecting my daily
does of cute in this Autism study's abstract, but got it anyway.

------
tbrownaw
Not possible, that's before they've had most of their vaccinations.

/s

~~~
dang
Please don't do this here.

