Judges receiving kickbacks to throw minors into juvenile jail, racial bias in sentencing, jail for marijuana, private prisons, etc. These are all hallmarks of a broken penal system, and we cannot and should not train machines on this data.
You could have a smartphone app that detects criminal activity, so you could collect global data on actual criminal activity (bias: toward people who carry smartphones with them).
So you'd need little autonomous robots following and watching every human being, automatically detecting criminal activity; then they could share their data, and machines could learn without bias.
I expect you have some solid evidence for this? Maybe just a citation? Anything? Obviously there is institutional racial bias, but kickbacks for private jails? for marijuana? for minors? Really?
He's probably referring to "Kids for Cash".
> racial bias in sentences
You can look at federal and state incarceration statistics for yourself. African-Americans and Hispanic-Americans serve, on average, longer sentences for the same crimes as their Caucasian counterparts. Here's one source.
> jail for marijuana
Mandatory minimums for possession (especially "with intent to sell") have been part of federal drug enforcement policy for the last 50 years. Here are a few of them.
> private prison etc
It was only this year that the DOJ finally committed to eliminating private prisons in the federal system. They're still a massive part of state penitentiary systems (see: Kids for Cash).
TLDR: "There seems to be a strong racial bias in capital punishment and a moderate racial bias in sentence length and decision to jail. There is ambiguity over the level of racial bias... There seems to be little or no racial bias in arrests for serious violent crime, police shootings in most jurisdictions, prosecutions, or convictions."
But the kickbacks were in the Kids for Cash scandal -- hopefully isolated criminal corruption among judges. The others are definitely negative and indicative of a broken system, but not at the level of "judges taking cash for sentences" corruption. Thank you for the clarification.
I feel that most people are in (considerably less extreme) versions of that situation: kind, generous people in general, who simply don't think about how the fairly banal things they do every day are part of really bad stuff. The distributed, banal nature of modern evils -- and they were intentionally structured that way so people wouldn't think about them -- is really what allows evil to flourish in modern society.
Yeah, it wasn't a big deal that you flipped a train switch (or filled out mortgage loan paperwork a little shoddily, etc.), and likely someone else would've done it instead, but if we all refused to do it, the world would be a better place.
I think many US judges are similar: good, slightly selfish people who just don't think about the evil they contribute to.
Enforcing unjust laws, even fairly, is evil. And (almost) all US judges are guilty of that. (E.g., mandatory minimum drug laws, which empirically just cause harm.)
So Occam's razor seems to imply that something changes in the face of people with, e.g., lead poisoning, brain defects leading to psychopathy/violent tendencies, or something else?
It could alternatively be in the lifestyle of criminals. Drug use, or attitude toward others? Someone who really preys on people might even have changed hormone levels, or something, as it might be an evolutionary strategy?
Or just body language, but not for face pictures, of course. In Sweden, when a state security group hired people with police training, they didn't hire officers who had ever worked the streets, because street work changes your body language in some way; they are recognized by criminals -- and vice versa. (This is old, I have no clue how it is today.)
Again, that can't explain how you can reach an 89.5% hit rate?! Even in China, there needs to be some evidence... right?
To me, that argument sounds like doublethink, driven by fears of a 1984-style society where people are characterized by their faces. (That is stupid: if this can really be refined and made to work, then active criminals would just get cosmetic surgery.)
If those results go against your ideology, just note that it isn't peer reviewed yet. :-)
Let Pc = probability of criminal = 0.00716
Let Pt = probability of test being accurate = .895
Probability of criminal given criminal conclusion = Pc * Pt / (Pc * Pt + (1 - Pc) * (1 - Pt)) = 0.0579
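That calculation can be checked in a few lines of Python (variable names are mine; the base rate 0.00716 and the 89.5% figure come from the comment above, and the accuracy is assumed to apply symmetrically to criminals and non-criminals):

```python
# Bayes' theorem: posterior probability that someone is actually a
# criminal, given that the classifier flags them as one.
p_criminal = 0.00716   # assumed base rate of criminality in the population
p_accurate = 0.895     # classifier accuracy, assumed equal for both classes

true_positives = p_criminal * p_accurate           # criminals correctly flagged
false_positives = (1 - p_criminal) * (1 - p_accurate)  # innocents wrongly flagged

posterior = true_positives / (true_positives + false_positives)
print(f"P(criminal | flagged) = {posterior:.4f}")  # ≈ 0.0579
```

So even with 89.5% accuracy, roughly 94% of the people the system flags would be innocent, purely because criminals are rare in the population.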
Thankfully, the most likely application of this technology I can see in the near-future is someone making an app that scores your face for criminality.
I don't share your positive outlook on law enforcement officials.
It shows how unreliable such networks currently are. Any model you'd want to use on a real-world population would need detection rates >99.9%, even if you're just talking about pre-screening.
Good journalists usually let third-party experts chime in (this news site usually does that), which seems like a useful filter for news quality.
That's fudging things a bit, isn't it? The 89.5% is the probability a criminal conclusion is accurate, not the probability of the system reaching a criminal conclusion itself.
Research should continue into this, but it's worth remembering that the "criminals" being trained on aren't necessarily bad people in the moral sense. They're merely the recipients of judgment by some third entity (in this case, China's legal system).
> They then used 90 percent of these images to train a convolutional neural network to recognize the difference and then tested the neural net on the remaining 10 percent of the images.
> The results are unsettling. Xiaolin and Xi found that the neural network could correctly identify criminals and noncriminals with an accuracy of 89.5 percent.
Likewise, if this network predicts a user to be criminal when he is not, then the probability of success goes down.
But if we had a 100% sure way of detecting that someone's going to commit a crime, should we use it? This would be a dangerous change of perspective about free will and its relation to the law.
If it were 100% correct, it seems morally justifiable to use it, as long as you are a dictatorship. For a democracy, you would need not only correctness, but independently verifiable correctness. If the algorithm is 100% correct today but can be influenced tomorrow, you are giving absolute power over the state to whoever can influence the algorithm. After all, it's impossible to prove your innocence if you are convicted before the crime.
Basically, after solving the technical problems with creating the algorithm, you also need a system of checks and balances that can replace the public verifiability of our current court system.
With minority-report scenarios, it's not like you actually need to punish people as if they really did commit the crime, even though the crime is in the future. All you need to do is prevent it.
First, think about what a 'crime' is. What constitutes 'crime' is based entirely on what the 'law' is. Laws (in this sense) are a creation of man, not nature.
So now we are correlating a man-made concept with something that is (at least partially) the product of nature. If this correlation is real, we need to explore that aspect of it before we start talking about policy.
So what makes a face? Genes, sure but that can't be the whole story. Environment? Nutrition? Psychology? Over time, as a person matures, maybe all of these things and more play some role in how his face will turn out.
Then, could it be that something behind the factors that shape a face also promote a psychology that puts less importance on submitting to authority? When we look at this as a criminal justice issue, that seems like a bad thing. In another context, it may not be.
This is kind of what I mean by approaching the issue with an open mind.
Of course, we already use less than 100% effective prediction to target non-arrest interventions, so it's quite possible that 100% prediction would only have the effect of better targeting what we already do (and potentially alleviating harm or waste), rather than Minority Report-style pre-crime arrests.
I'm not sure crime prediction itself is a moral quagmire; rather, what we use as a basis for prediction, and a few drastic uses like pre-crime arrests, are the tough questions.
One person doing these studies was, for example, Francis Galton (who, by the way, did quite a mix of things, from eugenics to statistics and "the wisdom of the crowd").
Lots of things, carried over from older times into the modern era, like physiognomy, which was also used for racial identification/discrimination in the 20th century.
Have fun walking deeper into that rabbit hole of history. What I take from it is that bad ideas never die, even when science has debunked them.
Or as @thechao already said:
> Alternately: criminal prosecution is targeted at people who "look like" criminals; thus, the NN is just selecting those people we think look like criminals, rather than any inherent criminality.
Given the above, I move that the title of this post be changed to "Neural Net Trained on Mugshots Confirms the Findings of Phrenology".
You look weird -> people treat you worse -> you don't feel like working with them -> higher chance of being a criminal.
I know that's a giant leap in reasoning, but that was the first thing that came to mind.
That is, non-criminal-faced people might show bias against criminal-faced people, such that these criminal-faced people resort to [petty or other] crime to get by?
Of course, one question is why this bias might have emerged in the first place. What was its genesis?
Because half of the general male population is criminal, of course.
The accuracy rate would be very different with a training/testing sample that takes the base rate of criminality into account.
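To put numbers on that: here's a small sketch (my own code, assuming the 89.5% accuracy applies symmetrically to both classes) showing how the chance that a flagged person is actually a criminal collapses as the real-world base rate drops below the balanced split of a training sample:

```python
def ppv(base_rate, accuracy=0.895):
    """Positive predictive value: probability a flagged person is a criminal,
    given a classifier whose accuracy is the same for both classes."""
    tp = base_rate * accuracy              # criminals correctly flagged
    fp = (1 - base_rate) * (1 - accuracy)  # innocents wrongly flagged
    return tp / (tp + fp)

# Sweep from a balanced sample down to a rough population base rate.
for rate in (0.5, 0.1, 0.01, 0.00716):
    print(f"base rate {rate:>7.5f}: PPV = {ppv(rate):.3f}")
```

At a 50/50 base rate the PPV matches the headline 89.5%; at a realistic base rate it falls below 6%, which is the earlier Bayes point restated.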