It would seem a retest only matters if there's a new testing method or the previous test is suspected of having been done improperly. Either way, you'd need a reason to believe a different outcome is possible.
In this case, a quick search turns up reporting that evidence- and sample-handling procedures were not properly followed, and that there wasn't even evidence of blood on the knife.
I'm all for being corrected on statistics, but this doesn't seem like a case of bad math, does it?
There's a difference between a knife drenched in blood and a tiny bit of blood from a cut, certainly, but as far as forensics goes that may not matter. Investigators may want to find any evidence of blood at all - like traces caught in the space between the handle and the blade - on the assumption that a killer may have tried to clean blood off the weapon.
Surely cleaning blood off the knife makes just as much sense if it was a cooking accident as if you've killed someone with it?
That's a strong claim.
I'm not a professional chef, but I like to use a chef's knife, and I cut rather fast. Yet I don't remember ever cutting myself to the point of bleeding - only nicking a nail or scraping off a bit of skin.
> like some blood caught in the space between the handle and the blade
I reckon that if it gets stuck between the handle and the blade then we're talking about more than a little blood.
I rarely cook, but I work on cars a lot, and cutting my hands and fingers is part of the routine. Oddly, such cuts have never gotten infected, even when dirt gets embedded below the skin. I think car grease is an effective bactericide!
(I find wearing gloves while working on a car uncomfortable, though I'll do it when cracking a rusted nut loose - when it finally gives way you'll always bang your hand against something.)
I'm not a statistician, but if you toss a fair coin 20 times, there is about a 0.1% chance of getting exactly 17 heads. To figure out the probability that the coin is fair given this data, though, it seems you need Bayes' theorem, which requires a prior probability that the coin is fair:
P(H0|observation) = P(observation|H0) * P(H0) / P(observation)
P(observation) = P(observation|H0) * P(H0) + P(observation|!H0) * P(!H0)
A normal significance test tells you P(observation|H0). (Though I'm not sure where P(observation|!H0) comes from either.) To apply Bayes' rule you need the prior, and P(H0) = ???
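To make that concrete, here's a minimal Python sketch that plugs numbers into the formulas above. The prior (99% of coins are fair) and the alternative model (a trick coin that lands heads 80% of the time) are my own illustrative assumptions, not anything given by the data - change them and the posterior changes, which is exactly the point:

    from math import comb

    # P(observation|H0): exactly 17 heads in 20 tosses of a fair coin.
    p_obs_given_fair = comb(20, 17) / 2**20               # ~0.0011, i.e. about 0.1%

    # Assumptions (illustrative only): the prior and the alternative model.
    p_fair = 0.99                                         # P(H0)
    p_obs_given_biased = comb(20, 17) * 0.8**17 * 0.2**3  # P(observation|!H0), 80%-heads coin

    # Bayes' theorem, exactly as written above.
    p_obs = p_obs_given_fair * p_fair + p_obs_given_biased * (1 - p_fair)
    p_fair_given_obs = p_obs_given_fair * p_fair / p_obs

    print(p_obs_given_fair)   # ~0.0011
    print(p_fair_given_obs)   # ~0.34 under these assumptions

Even a 99% prior in favor of fairness gets dragged down to about 34% here, and a different prior or alternative model gives a completely different answer.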
Here's some good (possibly more "fair" than I've been above) discussion if anyone wants to read/think about this more:
It seems to me, and to the commenters on stats.stackexchange, that this comic both misinterprets frequentist statistics and misrepresents Bayesian statistics. I realize that XKCD is a nerdy comic meant to be entertaining; I just wanted to leave this discussion here in case anyone else is confused. I think this is an important distinction, and one most people interested in statistics should spend some time thinking about.
Edit: I can't edit my first comment now, but gweinberg's post (sibling to the grandparent of this) words the problem perfectly.
I don't know the particular DNA test used in this case, but let's assume it gave a certainty of 92% that the DNA isolated was from AK. This means that the particular sequences of DNA identified could have come from another person with a probability of 0.08 (i.e. a one-in-12.5 chance, which is not particularly low in a case like this). It does not mean that the DNA is correctly characterized with a probability of 92%.
For a repeated test to give a different probability, the identity of one or more of the sequences isolated from the sample would have to have been incorrect in one of the assays, i.e. there is a procedural error.
It is not at all like tossing coins. An analogy would be getting someone's eye color as blue the first time and brown the second.
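To see why one-in-12.5 is "not particularly low", here's a bit of throwaway Python using the hypothetical 92% figure from above; the pool sizes are arbitrary:

    p_random_match = 0.08   # chance an unrelated person shows the same sequences
    print(1 / p_random_match)   # 12.5, i.e. "one in 12.5"

    # Chance that at least one of k unrelated people would also match by coincidence.
    for k in (5, 20, 100):
        print(k, round(1 - (1 - p_random_match) ** k, 3))
    # 5 -> 0.341, 20 -> 0.811, 100 -> 1.0 (to 3 decimal places)

Repeating the assay doesn't move any of these numbers; only a procedural error in one of the runs could make two runs disagree.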
Also very depressing to read his work on forensics. Long story short, everything in CSI besides DNA evidence is an unscientific sham. And even DNA evidence is dominated by lab error (1-2%). See: lst.law.asu.edu/fs09/pdfs/koehler4_3.pdf.
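Koehler's point about lab error dominating falls out of one line of arithmetic. The random-match probability below is an illustrative placeholder; the 1-2% error rate is the figure cited above:

    # Chance an innocent person is *reported* as a match, treating a coincidental
    # genetic match and a lab mix-up/contamination as roughly independent events.
    rmp = 1e-9          # illustrative "one in a billion" random-match probability
    lab_error = 0.015   # the 1-2% lab error rate cited above

    p_false_report = rmp + lab_error - rmp * lab_error
    print(p_false_report)   # ~0.015: effectively just the lab error rate

So a quoted "one in a billion" match probability tells you almost nothing once the lab error rate is orders of magnitude larger.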
Section V of that paper suggests that more precise language (an important problem in statistics) is his main piece of advice there, but perhaps making the data and conclusions open in some "anonymized" form would also support that endeavour?
"In the days following the verdict it emerged that the jury had been split about the murder charge, but those who had favoured an acquittal were persuaded to accept a conviction."
I just don't understand it. How can a person serving on a jury, with another person's life in their hands, be "persuaded to accept a conviction" when they don't actually believe the defendant is guilty?
I'm perfectly fine with someone starting out thinking the defendant is innocent and then later changing their minds to thinking the defendant is guilty. I'm very much against anyone thinking the defendant is innocent but voting to convict due to peer pressure or whatever.
Probability tests do, but not things we label "probability" that aren't actually probabilities. "There is a 25% chance of rain tomorrow" isn't really a 25% probability; it's a confidence score.
Here's something people can grasp more easily than DNA: say you have a partial thumb print in an imaginary murder. You could eliminate suspects who don't match that portion of the print, but you couldn't confirm that the person or people who do match did it - only that the print is a "pocket loop", "whorl", or mixed pattern, and so anyone with a "tented arch" is not the killer.
You can be 100% confident that the print excludes the person with the tented arch, and if you knew there were only 4 people in the room when the victim in our imaginary scenario died, then since only 2 of them are a potential match, you could even go so far as to claim 50% confidence in the match. But running the test 800 times will not get you a better number than that.
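Here's a toy version of that four-people-in-a-room scenario (the names and pattern labels are made up): the comparison can exclude with certainty, but rerunning it never sharpens the 50% figure, because it's the same deterministic filter every time:

    # Four people in the room; the partial print shows a whorl.
    suspects = {"A": "whorl", "B": "tented arch", "C": "whorl", "D": "loop"}
    observed = "whorl"

    candidates = [name for name, pattern in suspects.items() if pattern == observed]
    print(candidates)           # ['A', 'C']: B and D are excluded with certainty
    print(1 / len(candidates))  # 0.5: the best "confidence" the print can give

    # 800 reruns are just 800 applications of the same filter.
    for _ in range(800):
        assert [n for n, p in suspects.items() if p == observed] == candidates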