
An Impenetrable Program Transforming How Courts Treat DNA Evidence - rbanffy
https://www.wired.com/story/trueallele-software-transforming-how-courts-treat-dna-evidence/?mbid=social_tw_backchannel
======
mabbo
This seems to me a bit like a zero-knowledge proof. The company wants to hide
its source code, but claims the source code can identify DNA evidence
correctly. The public wants to believe them, but how can we?

Well, what if every time the prosecutor wanted to use such evidence, they had
to include, say, 10 other samples to test, along with the DNA of 10 other
suspects unrelated to the case? It would be more expensive, I imagine, but it
could lead to fairer evidence: if this technique really does work, it should
be able to accurately identify the noise variables and point out the signal.
If there is no signal, great. If there are lots of false signals, then we know
this is bullshit.

Truthfully, my worry here is that if you have 5 DNA testing companies, and 1
always gives the prosecutors the answers they want, they'll be the ones who
get more business. We need a way to ensure accuracy.

~~~
esyir
Source code shouldn't matter for performance testing though.

Simply fire a bunch of known-result samples through the various services and
take a look at the test statistics. Determining accuracy is hardly the hard
part.
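To make that concrete, here is a minimal sketch of such a black-box performance test. The `service` interface and the sample data are hypothetical, invented for illustration:

```python
# Black-box evaluation: feed samples with known ground truth to a
# DNA-matching service and tally the error statistics, without ever
# looking at the service's source code.

def evaluate(service, labeled_samples):
    """service: callable sample -> bool (a claimed "match").
    labeled_samples: iterable of (sample, truth) pairs."""
    tp = fp = tn = fn = 0
    for sample, truth in labeled_samples:
        predicted = service(sample)
        if predicted and truth:
            tp += 1
        elif predicted and not truth:
            fp += 1
        elif not predicted and not truth:
            tn += 1
        else:
            fn += 1
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
    }

# A pathological service that always reports a match -- the
# "tell the prosecutor what they want" failure mode.
always_match = lambda sample: True
samples = [("sample-%d" % i, i < 5) for i in range(20)]  # 5 real matches
stats = evaluate(always_match, samples)
```

The always-match service never misses a true match, but its 100% false-positive rate exposes it immediately; no source access required.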

~~~
nugi
We can't test all edge cases as well as we can read source. Handicapping
justice for a dollar is whoring of the lowest order.

~~~
aquadrop
We can't find all the edge cases from reading source either. But we also
don't need to test every edge case; it's still about probability. If the lab
was tested (in some double-blind study) and correctly identified 1000 cases
out of 1000, that means the probability of some random person being an edge
case where the algorithm is less accurate is under 0.1%. We just need to make
sure such tests/certifications are used and that there are no ties between
labs and prosecutors. Of course it would be nice for the company to show its
process openly, but you still need these tests.
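For what it's worth, the exact figure depends on the confidence level you demand. A standard way to bound the error rate after a clean zero-failure run is the binomial "rule of three"; a sketch (the function name is mine):

```python
# Largest error rate p still consistent with observing 0 failures in
# n independent trials, at a given confidence: solve (1 - p)**n = 1 - confidence.
def zero_failure_upper_bound(n, confidence=0.95):
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

bound_1000 = zero_failure_upper_bound(1000)  # ~0.003, i.e. about 0.3%
rule_of_three = 3.0 / 1000                   # the classic 3/n shortcut
```

So 1000 clean double-blind cases licenses "under roughly 0.3% error" at 95% confidence rather than 0.1% (that would take about 3000 clean trials), but the broader point stands: the bound tightens as independent testing scales up.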

~~~
TheCoelacanth
Double-blind tests are necessary but not sufficient. Without the double-blind
tests, the process simply has no credibility at all. The double-blind tests
establish that credibility.

However, the 6th amendment also guarantees defendants the right to confront
their accusers. Without the source code, they are unable to do this.

~~~
CompanionCuuube
How have courts obtained the source code of human eyewitnesses all these
years?

~~~
TheCoelacanth
Obviously there is no source code, but you can cross-examine a human
eyewitness. There is no way to cross-examine an algorithm. Eyewitnesses that
can't be cross-examined are only allowed in court in very rare circumstances,
such as when the defendant killed them to prevent them from testifying.

------
codeisawesome
> “It would take a genius to work out an error from a code.”

Really? Better to keep the algorithm public, then, where many “geniuses” have
access to it and can scrutinise it, considering that people are being put away
with “multiple lifetime” sentences based on evidence it generates.

------
TheCoelacanth
Outrageous that this is ever allowed in court. A trial is supposed to involve
the prosecution publicly proving that the accused person is guilty, not guilt
simply being handed down by some inscrutable black box.

------
crb002
Violation of the confrontation clause.

chad@flyingdogsolutions.com if any attorney needs an expert witness to look
into it. I can review the source under a protective order so the company's IP
will be fully protected. Worst case I find something wrong and they fix it.

------
tomohawk
> As John Oliver commented last month...

That pretty much says it all.

This is like requiring the manufacturer of glassware used in the labs to hand
over their proprietary manufacturing information, or else all the evidence
processed by the lab must be thrown out. The glassware can be verified to work
properly with appropriate testing, etc. So can the software, without looking
at the source.

In fact, treating the software as a black box and employing appropriate
testing without consideration as to what's in the box - just what the box
should be able to do - should lead to more confidence.

The real issue here is whether there is a certification / testing process that
the tool meets and that can be independently verified.

~~~
rbobby
> In October, the Department of Commerce National Institute of Standards and
> Technology announced that it would embark on a study to determine the
> reliability of DNA testing, including the algorithmic methods used by
> companies like TrueAllele. This research, which NIST says is to establish
> “foundational validity,” has been condemned by Perlin specifically through a
> post on the Cybergenetics blog, which calls the study “wasteful,”
> “unnecessary,” and “meaningless.” Perlin argues that TrueAllele’s validity
> has already been proven through scientific, peer-reviewed research studies.
> But these studies that Perlin relies on are internal validation studies that
> were paid for and run by Cybergenetics.

It is a bit unsettling that NIST wants to independently test “foundational
validity” but the company founder is so dismissive of the idea.

------
sanxiyn
Since STRmix, a competing algorithm, is already public, I am curious whether
publication led to new defenses. Such a case would be strong evidence for the
proposition that "the source code is crucial to the defense".

------
mcherm
Well this is all stupid. Because we have double blind testing. It doesn't
matter what the algorithm is, it just matters whether the test is accurate.
And we can determine that by sending in test samples and checking how they are
rated. If we want to test its accuracy when working from the minimal DNA left
in a fingerprint, then have a few people touch a spot and send in a swab to
compare (sometimes asking for a comparison against the DNA of the person who
touched it, sometimes against a different person's).

A blind study means not telling the DNA testing company which submissions are
for real cases and which are for testing. A double-blind study means not
telling the police submitting them either (to avoid giving subtle clues to the
testing company, like calling to follow up on tests but not real cases, or
vice versa).
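That submission protocol could be sketched as follows (sample IDs are made up; the point is that the lab and the police see identical paperwork for every item, while only the auditors hold the answer key):

```python
import random

def build_double_blind_batch(real_samples, planted_controls, seed=2024):
    """real_samples: sample IDs from actual cases.
    planted_controls: list of (sample_id, expected_result) test items.
    Returns a shuffled submission list plus the auditors' answer key."""
    rng = random.Random(seed)  # fixed seed only for reproducibility here
    batch = list(real_samples) + [sid for sid, _ in planted_controls]
    rng.shuffle(batch)  # ordering gives no hint which items are controls
    answer_key = dict(planted_controls)  # held by auditors only
    return batch, answer_key

batch, key = build_double_blind_batch(
    ["case-101", "case-102", "case-103"],
    [("ctrl-a", "match"), ("ctrl-b", "no-match")],
)
```

In a real deployment the mapping from opaque IDs back to cases or controls would live with the auditors alone, so neither the lab nor the submitting officers can leak a clue.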

If they (the testing company and the police) are willing to conduct a double-
blind study, then its results are a reliable way to assess the technology. If
they are NOT willing to conduct a double-blind study, then we should ask why
not. The answer is almost certainly that they are afraid of what the results
might show.

~~~
DoofusOfDeath
> Well this is all stupid. Because we have double blind testing. It doesn't
> matter what the algorithm is, it just matters whether the test is accurate.

Seems to me that your kind of reasoning is why the Volkswagen cheating went
undetected for so long.

~~~
tomohawk
It went undetected for so long because the regulators were not competent and
constructed a test that was easy to game.

Science is better served by techniques such as properly formulated studies
than by source code analysis. Consider how many bugs fuzzing has uncovered
that escaped source code analysis by many experts for many years.
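As a toy illustration of that point, the hypothetical parser below hides a crash on an input shape that random fuzzing finds quickly, even though nothing looks obviously wrong at a glance (everything here is invented for illustration):

```python
import random

def buggy_parse(s):
    """Toy target with a deliberately planted edge case."""
    if len(s) > 3 and s[0] == s[-1] and "@" in s:
        raise ValueError("edge case a code review could easily miss")
    return s.upper()

def fuzz(target, trials=10_000, seed=7):
    """Throw short random strings at the target; return the first crasher."""
    rng = random.Random(seed)
    alphabet = "ab@"
    for _ in range(trials):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 8)))
        try:
            target(s)
        except ValueError:
            return s
    return None

crasher = fuzz(buggy_parse)  # a string like "@ab@" that trips the bug
```

The fuzzer needs no knowledge of the target's internals, only a way to detect failure, which is exactly the black-box posture being argued for here.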

