
How a Machine Learned to Spot Depression - tokenadult
http://www.npr.org/sections/money/2015/05/20/407978049/how-a-machine-learned-to-spot-depression
======
harperlee
Whenever I see this kind of real-time video processing, I just want to drop
everything and work on it. It seems so cool! It's just like the movies we all
saw growing up. The world is now full of smartphones with cameras that could
do great things, but somehow those things don't reach us.

Might it be that sensors and processing power are still not quite there? I've
just seen a Project Tango video
([https://news.ycombinator.com/item?id=9634317](https://news.ycombinator.com/item?id=9634317))
and it seems to suggest that.

BUT there are so many people focusing on these things that on one hand I feel
great things will come very soon (Oculus, etc.), while on the other hand it's
a very risky move. The market doesn't exist yet; everything is just a demo.

The other day someone on HN made a similar comment:
[https://news.ycombinator.com/item?id=9627754](https://news.ycombinator.com/item?id=9627754)

There is a huge risk in creating a cohesive value proposition for a mass
market. Be reckless and you could flop like Google Glass. But once someone
pulls it off, a huge entertainment market will have opened up. So my idea is
to toy with these things and build knowledge and proficiency, but not to jump
in full time just yet. Do you think this is an intelligent (non-)move, or
just fear of the unknown? The thing is, I also wouldn't want to move purely
out of Fear Of Missing Out!

~~~
amelius
> Whenever I see these kinds of video processing on real time, I just want to
> drop everything and work on that.

Personally, I find AI too much of a "fuzzy black box" to be really interested
in it. I agree that the applications are often truly marvelous. But the fact
that, for example, training neural nets is often more of an art than a
science holds me back from stepping into the field.

I like to work on a problem, think it through, and be 99% sure that it will
work before I have even written one line of code. With AI, it is more like
"let's try this and see if it works".

~~~
chongli
_Personally, I find AI too much of a "fuzzy black box" thing to be really
interested in it._

That's because artificial intelligence is subject to eternally moving
goalposts. Any technology which we are fully able to understand ceases to be
intelligent; it is relegated to the status of mere "algorithm".

~~~
visarga
We understand machine learning itself; it's the problem space that needs to
be understood. That's why it is such a black box: you never know the
complexity of the model you'll need before you tackle the problem.

------
quietplatypus
That looks cool. What kind of processing power do you need to fit that face
mesh to the video in real time like that? Is this work an application of an
existing technique, or did they do it because algorithms/hardware are now up
to the task?

------
briandear
Funny; we've been working on the same thing at iCouch for a while now: real-
time predictive analytics during a video therapy session. Even funnier is
that the investor community has nearly zero interest in what we're doing. I
bet if we built a real-time, gluten-free, organic food ordering service, we
could raise $500k in about ten minutes. Our problem is determining market
size: what's the market size for technology that can predict severe mental
illness situations potentially weeks before they happen? What's the market
size for tech that can prevent suicides? Since nothing else like that exists,
there's no way to predict how much that would be worth in terms of an
investment deck. It's frustrating. The revenue-generating side of the
business (our SaaS for therapists) is doing well enough, but certainly not
well enough to subsidize moonshot technology that would actually disrupt
mental health diagnosis and treatment. But alas, good to see NPR is giving
the issue some coverage.

It is insanely frustrating when fundraising worthiness is so tied to how well
you can make 10 keynote slides. Changing the world doesn't always fit into
bullet points.

~~~
nitrogen
_Our problem is determining market size: what's the market size for
technology that can predict severe mental illness situations potentially
weeks before they happen? What's the market size for tech that can prevent
suicides? Since nothing else like that exists, there's no way to predict how
much that would be worth in terms of an investment deck._

Look up the number of PTSD-qualified therapists in the USA (or just
therapists, if "PTSD-qualified" is not a meaningful distinction). Survey a
random sample of them and ask if they might use technology to help with
diagnosis and suicide prevention. Multiply the percentage who say yes by the
total number of therapists. That's your slide-deck market size.
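A back-of-the-envelope sketch of that calculation, for the sake of argument (every number below is a made-up placeholder, not real survey or census data):

```python
# Top-down market-size estimate from a survey, as described above.
# All inputs are hypothetical placeholders.
total_therapists = 100_000   # assumed number of therapists in the USA
survey_sample = 500          # assumed random sample size
said_yes = 175               # assumed "would use the tech" responses

adoption_rate = said_yes / survey_sample               # fraction who said yes
addressable_therapists = adoption_rate * total_therapists

annual_price = 1_200         # assumed price per therapist per year, in dollars
market_size = addressable_therapists * annual_price    # annual revenue ceiling

print(f"{adoption_rate:.0%} adoption -> {addressable_therapists:,.0f} therapists")
print(f"Estimated annual market: ${market_size:,.0f}")
```

With these placeholder numbers it prints a 35% adoption rate and a ~$42M annual market; the point is only the shape of the estimate, not the figures.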

------
bechampion
Reminds me of:

[https://simpsonswiki.com/wiki/Virtual_doctor](https://simpsonswiki.com/wiki/Virtual_doctor)

~~~
kylek
I'm always reminded of:
[https://en.wikipedia.org/wiki/ELIZA](https://en.wikipedia.org/wiki/ELIZA)
whenever anything like this comes up

------
thomasfl
The same sentence can have the complete opposite meaning depending on whether
it is said in an angry or a calm voice. Sentiment is such an important part
of human communication that these systems will have to be incorporated into
speech recognition if they are going to be meaningful. 😉

~~~
jmount
Your comment seems phrased to imply that the current Ellie system (featured
in the article) doesn't read sentiment (writing "will have to be" about
incorporating sentiment usually suggests that an existing system like Ellie
doesn't have it). From the article: "When I answer Ellie's questions, she
listens. But she doesn't process the words I'm saying. She analyzes my tone.
A camera tracks every detail of my facial expressions."

~~~
jodrellblank
The comment said systems like Ellie will have to be incorporated _into speech
recognition_, to better process the context of the words being recognised.

------
dharma1
Very cool. Not exactly related, but I have been wondering whether it would be
possible to teach deep neural networks lie detection from a video/audio feed,
picking up on cues that we would normally ignore (facial microexpressions,
etc.).

~~~
tokenadult
That's a pertinent question in relation to the article submitted here. The
main answer is that there don't seem to be any invariant behaviors, or even
reliable behaviors, associated with lying.[1] Some people are "naturally"
capable of lying shamelessly, and many others can be trained to lie while
beating any lie-detection system based on physical signs.

The best method for detecting lying when (for example) conducting a police
investigation or engaging in foreign intelligence is to analyze the liar's
statements for inconsistencies.[2] Statement analysis that prompts new
questions in a structured interview is the best way to detect deception,
because it's hard to make up a consistent set of story details in advance.

[1]
[http://www.slate.com/articles/technology/future_tense/2013/0...](http://www.slate.com/articles/technology/future_tense/2013/06/polygraphs_and_other_lie_detection_technologies_may_never_really_work_in.html)

[http://www.people.com/people/archive/article/0,,20079233,00....](http://www.people.com/people/archive/article/0,,20079233,00.html)

[http://www.amazon.com/A-Tremor-In-The-Blood/dp/0306457822](http://www.amazon.com/A-Tremor-In-The-Blood/dp/0306457822)

[2] [http://leb.fbi.gov/2011/june/evaluating-truthfulness-and-det...](http://leb.fbi.gov/2011/june/evaluating-truthfulness-and-detecting-deception)

[https://books.google.com/books?hl=en&lr=&id=5GjTLtibmXYC&oi=...](https://books.google.com/books?hl=en&lr=&id=5GjTLtibmXYC&oi=fnd&pg=PA41&dq=lie+detection+statement+analysis&ots=jTavMvNxPD&sig=fHfbYK6rg016vl20njSLrrCmkrs#v=onepage&q&f=false)

