
Courtroom AI system claims almost 90 percent accuracy in detecting lies - ohjeez
https://motherboard.vice.com/en_us/article/zmqv7x/ai-system-detects-deception-in-courtroom-videos
======
harry8
I will be utterly astounded if this "study" is replicated. I will not be
remotely surprised if they sell it to the gullible fools in public service who
don't treat lie detectors with the utter contempt they deserve.

What are currently the best tech buzzwords that might convince people you have
magic pixie dust for sale?

~~~
keville
There's plenty of science behind micro-expressions and detection of deception.

Paul Ekman [0] is "ranked 59th out of the 100 most cited psychologists of the
twentieth century" and I thought the TV show based on his research [1] was
very entertaining. (I am surprised that the article used the phrase "micro-
expression" without any reference to Ekman's work.)

[0]
[https://en.wikipedia.org/wiki/Paul_Ekman](https://en.wikipedia.org/wiki/Paul_Ekman)
[1]
[https://en.wikipedia.org/wiki/Lie_to_Me](https://en.wikipedia.org/wiki/Lie_to_Me)

~~~
QAPereo
Do you know what science actually is?

~~~
wadkar
Your question seems to be inconsistent with the HN guidelines: comments
should be constructive.

Specifically, comments should encourage discussion and allow for multiple
perspectives on a topic. To me, your comment is borderline name-calling and
anything but constructive.

~~~
QAPereo
Thanks for your concern, but I disagree. Science means something, and really,
don’t confuse word count with value.

~~~
wadkar
I fail to see how asking someone if they know what science is would help with
a constructive discussion. Did you mean to imply that the micro-expression
related work that the GP cited is not scientific in your opinion? Or are you
questioning the GP's knowledge of science?

~~~
QAPereo
It seemed like a legitimate question to ask someone referring to the content
of those links as “plenty of science...”

~~~
unitmike
Do you know what "constructive" actually means?

~~~
QAPereo
See? Brevity can work.

~~~
wadkar
I think you missed the point - it wasn’t about brevity but about being
constructive.

------
PhantomGremlin
Title is very misleading. These are "pretend" courtrooms. No data from actual
legal proceedings.

From the article: _This was based on evaluations of 104 mock courtroom videos
featuring actors instructed to be either deceptive or truthful._

~~~
wadkar
This is interesting! The ground truth for the study could hardly be further
removed from the actual ground truth. I am afraid this looks to me like a
pattern where any dataset whatsoever is thrown at ML algorithms and the
results are heralded as the new way of analyzing data, with no regard for
the source of the data, its deficiencies, or its applicability to the
problem at hand. See the question-generation deep learning papers based on
the SQuAD dataset.

------
gatmne
alphaalpha101's now dead comment is reasonable: Denying justice in one out of
ten cases is unacceptably high.

~~~
grzm
You're extrapolating a lot here: this is a demonstration in a mock
environment. There's no indication that this is going to be rolled out as-is
and used as the final arbiter of justice. Yes, denying justice 10% of the time
is unacceptable. But that's not what's being presented here.

As an aside, the comment you're referring to was dead on arrival. It looks
like that account has been banned.

------
deepnotderp
I tried for a while to think of a good retort to this article, but I
couldn't. So instead:

This is fucking retarded.

They should institute a law: if you are using machine learning to make
major decisions about people's lives, then you need to pass a test on basic
ML (test sets, validation sets, etc.).

~~~
wadkar
Perhaps one could insist on an explanation and a reasonable defense of the
outcomes of these ML algorithms. Would that be a feasible retort?

------
yorwba
I would have liked to read the actual paper, but the arXiv link leads to
something completely unrelated:
[https://arxiv.org/abs/1712.05526](https://arxiv.org/abs/1712.05526)

Does anyone have a correct link?

~~~
advisedwang
They meant
[https://arxiv.org/abs/1712.04415](https://arxiv.org/abs/1712.04415) (based on
a full-text arXiv search of the text mentioned in the article).

~~~
AstralStorm
The project is known as DARE. Interesting classifier, but not really that
great. Try it out; it is open source so far. Like many proofs of concept, it
starts to fail rapidly when trained on too much data and otherwise
generalizes only so-so.

This is essentially some feature engineering thrown at an SVM / GNN. What
makes the results even tougher to reproduce is that the training set is not
provided, only the resulting matrices.

It is telling that different classifiers work best for the supervised and
auto cases...

------
late2part
Alternate headline: Best Courtroom AI wrongly accuses 1 in 10 people of lying.

~~~
mc32
But what is the jury's alternative (unassisted) effectiveness?

------
pg_bot
Even if they can, anyone who understands Bayes' theorem will tell you that the
false positive rate would be way too high for this to be used in real life.
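The base-rate point is easy to make concrete with Bayes' theorem. The 90%
figure below is from the headline; the 5% prior on lying is an illustrative
assumption, not from the article:

```python
# Bayes' theorem sketch: even a "90% accurate" detector produces mostly
# false positives when actual lies are rare among the statements examined.
sensitivity = 0.90  # P(flagged | lying), from the headline accuracy
specificity = 0.90  # P(not flagged | truthful), assumed symmetric
p_lie = 0.05        # prior P(lying) -- an assumption for illustration

# P(flagged) = true positives + false positives
p_flagged = sensitivity * p_lie + (1 - specificity) * (1 - p_lie)
p_lie_given_flagged = sensitivity * p_lie / p_flagged

print(f"P(lying | flagged) = {p_lie_given_flagged:.2f}")  # 0.32
```

Under these assumptions, roughly two out of three people the system flags
would actually be telling the truth.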

------
ColinWright
I'm reminded of the scene in "Ex Machina". If you've seen it, you'll know
what I mean. If you haven't, I won't spoil it for you. But it's
(semi-)relevant.

------
foolrush
Not designed by sociologists.

More rubbish out of the polygraph playbook. AI is the modern Mechanical Turk
when applied to the complex nonlinearities of human behavior.

------
jgamman
It's the 10% I'm worried about - what's the false positive rate? Priors or
it didn't happen...

------
vowelless
Wouldn't this violate the Fifth Amendment (self incrimination clause) if
actually used?

~~~
wadkar
Interesting point, though I am not sure how the 5th Amendment would apply
here. Specifically, I thought that the 5th Amendment allows you to _not_
answer a question or _not_ testify. Once you do say something, using it to
make a case against you looks like fair game.

------
jgalt212
What's the accuracy of lie detector machines?

~~~
jstarfish
Terrible (50%?). They're so bad their results haven't been admissible in court
for years.

~~~
foolrush
Exactly, and the 50% statistic doesn't portray the actual reality.

The better fact to cite is that a monkey is as accurate as a polygraph
machine.

Remember that the accuracy sits at 50% _because it cannot be any lower_: if
a detector were reliably worse than chance, you could simply invert its
output and recover the accuracy.

------
chrisco255
What about half truths?

