Neural Net Trained on Mugshots Predicts Criminals (technologyreview.com)
68 points by jastr on Nov 25, 2016 | hide | past | favorite | 58 comments



Alternately: criminal prosecution is targeted at people who "look like" criminals; thus, the NN is just selecting those people we think look like criminals, rather than any inherent criminality.


This is so true. We are training neural networks on our own biases and cannot expect them to be fair. A supervised ML algorithm is only as good as its training data, and in our current scenario the training data is almost always flawed.

Judges receiving kickbacks to throw minors into juvenile jail, racial bias in sentences, jail for marijuana, private prison etc. These are all hallmarks of a broken penal system, and we cannot and should not train machines on this data.


The next step is obviously for machines to go search for their data themselves.

You could have an app on smartphones that would detect criminal activity, so you could collect global data on actual criminal activity (bias: toward people who carry smartphones with them).

So you'd need little autonomous robots following and watching every human being, automatically detecting criminal activity; then they could share their data, and the machines could learn without bias.


> Judges receiving kickbacks to throw minors into juvenile jail, racial bias in sentences, jail for marijuana, private prison etc.

I expect you have some solid evidence for this? Maybe just a citation? Anything? Obviously there is institutional racial bias, but kickbacks for private jails? for marijuana? for minors? Really?


> Judges receiving kickbacks to throw minors into juvenile jail

He's probably referring to "Kids For Cash" [0].

> racial bias in sentences

You can look at federal and state incarceration statistics for yourself. African-Americans and Hispanic-Americans serve, on average, longer sentences for the same crimes as their Caucasian counterparts. Here's one source.[1]

> jail for marijuana

Mandatory minimums for possession (especially "with intent to sell") have been part of federal drug enforcement policy for the last 50 years. Here are a few of them.[2]

> private prison etc

It was only this year that the DOJ finally committed to eliminating private prisons in the federal system.[3] They're still a massive part of state penitentiary systems (see: kids for cash).

[0]: https://en.wikipedia.org/wiki/Kids_for_cash_scandal

[1]: https://www.aclu.org/issues/mass-incarceration/racial-dispar...

[2]: http://www.pbs.org/wgbh/pages/frontline/shows/snitch/primer/

[3]: https://www.justice.gov/opa/blog/phasing-out-our-use-private...


Mandatory SSC post going into more detail than anyone probably wants to on the actual statistics of this stuff:

http://slatestarcodex.com/2014/11/25/race-and-justice-much-m...

TLDR: "There seems to be a strong racial bias in capital punishment and a moderate racial bias in sentence length and decision to jail. There is ambiguity over the level of racial bias... There seems to be little or no racial bias in arrests for serious violent crime, police shootings in most jurisdictions, prosecutions, or convictions."


I misread that as "Judges receiving kickbacks to:" 1) throw minors into juvenile jail, 2) racial bias in sentences, 3) jail for marijuana, 4) private prison 5) etc.

But the kickbacks were in the kids-for-cash scandal -- hopefully isolated criminal corruption among judges. The others are definitely negative and indications of a broken system, but not at the level of "judges taking cash for sentences" corruption. Thank you for the clarification.


Anybody who actually wants to be a judge in the American system is corrupt. The American system of justice is so obviously flawed that only a person corrupt in their thinking would want to become a judge.


[Removed]


Is a loving husband, devoted father, kind neighbor, etc., who drove trains for the Nazis a good person?

I feel that most people are in (considerably less extreme) versions of that situation: kind, generous people in general, who simply don't think about how the fairly banal things they do every day are part of really bad stuff. The distributed, banal nature of modern evils -- and they were intentionally structured that way so people wouldn't think about it -- is really what allows evil to flourish in modern society.

Yeah, it wasn't a big deal that you flipped a train switch (or filled out mortgage loan paperwork a little shoddily, etc.), and likely someone else would've done it instead, but if we all refused to do it, the world would be a better place.

I think many US judges are similar: they're good, if somewhat selfish, people who just don't think about the evil they contribute to.

Enforcing unjust laws, even fairly, is evil. And (almost) all US judges are guilty of that. (Eg, mandatory minimum drug laws, which empirically just cause harm.)


Not the person you're responding to, but I assume that the kickbacks comment was referring to cases like https://en.wikipedia.org/wiki/Kids_for_cash_scandal


That is testable. Prison populations should look different in areas where a larger fraction of serious crimes are solved. (It is hardly likely that a large enough fraction of people in prison are innocent to explain TFA's gigantic result of 89+%!)

So Occam's razor seems to suggest that something changes in the face of people with, e.g., (lead) poisoning, brain defects leading to psychopathy/violent tendencies, or something else?

It could alternatively be the lifestyle of criminals. Drug use, or attitude toward others? Someone who really preys on people might even have changed hormone levels, or something, as it might be an evolutionary strategy?

Or just body language -- though not in face pictures, of course. In Sweden, when a state security group hired people with police training, they didn't hire police who had ever worked the streets, because it changes their body language in some way; they are recognized by criminals -- and vice versa. (This is old; I have no clue how it is today.)


This BBC article also quotes people arguing that this recognition might just be a tendency for a judge/jury to find someone guilty.

Again, that can't explain how you reach an 89.5% hit rate?! Even in China, there needs to be some evidence... right?

To me, that argument sounds like "doublethink", born of fears of a 1984 society where people are characterized by their faces. (Which would be stupid: if this could really be refined and made to work, active criminals would just get cosmetic surgery.)

If these results go against your ideology, just note that the paper isn't peer reviewed yet. :-)

http://www.bbc.com/news/technology-38092196


Internet Bubble great escape plan into reality. We're in for a thrill.


Their method had an 89.5% success rate, which might seem great but is pretty much worthless in real life. The US has the highest incarceration rate in the world, so we can use the US incarceration rate as an upper bound on the probability of randomly selecting a criminal from the population (716 per 100k, P = 0.00716). This means that if we apply the same method at random to members of the general population, there's actually at most a 5.79% chance that a result of "criminal" is accurate.

  Maths:
  Let Pc = probability of criminal = 0.00716
  Let Pt = probability of test being accurate = .895
  Probability of criminal given criminal conclusion = Pc * Pt / (Pc * Pt + (1 - Pc) * (1 - Pt)) = 0.0579
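A short Python sketch reproduces the arithmetic above (assuming, as the comment does, that the 89.5% figure is both the true-positive and the true-negative rate):

```python
def posterior_criminal(base_rate, accuracy):
    """Probability someone is actually a criminal given a 'criminal' result,
    treating `accuracy` as both the true-positive and true-negative rate."""
    true_pos = base_rate * accuracy               # criminal, flagged as criminal
    false_pos = (1 - base_rate) * (1 - accuracy)  # non-criminal, flagged as criminal
    return true_pos / (true_pos + false_pos)

# US incarceration rate as an upper-bound base rate: 716 per 100k.
p = posterior_criminal(0.00716, 0.895)
print(round(p, 4))  # -> 0.0579
```

The false positives from the 99.3% of non-criminals swamp the true positives from the 0.7% of criminals, which is why the posterior collapses to under 6%.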


If the method was specifically targeted at high-crime neighbourhoods, the chance of an accurate match would presumably increase. You could also map the entire population of a country and detect previously unknown "criminal" hotspots. Both are awful, authoritarian ideas, but would potentially give useful information to support regular policing.

Thankfully, the most likely application of this technology I can see in the near-future is someone making an app that scores your face for criminality.


> Thankfully, the most likely application of this technology I can see in the near-future is someone making an app that scores your face for criminality.

I don't share your positive outlook on law enforcement officials.


This will then lead to people failing job interviews because the app didn't like their face.


Their data had a 50/50 split between criminals and non-criminals, so an 89.5% success rate is actually pretty good. You're right that this algorithm wouldn't be practical to detect criminals in the general population (ethical issues notwithstanding), but I don't think they're suggesting that. They're just offering a neat statistical explanation for a 2011 study that showed that under certain controlled conditions, people are surprisingly good at spotting convicts.


Thanks for calculating this; it's astonishing how few journalists can do this math.

It shows how unreliable such networks currently are. Any model you'd want to use on a real-world population would need detection rates >99.9%, even if you're just talking about pre-screening.


>> it's astonishing how few journalists can do this math.

Good journalists usually let third-party experts chime in (this news site usually does that), which seems like a useful filter for news quality.


>> Probability of criminal given criminal conclusion

That's fudging things a bit, isn't it? The 89.5% is the probability a criminal conclusion is accurate, not the probability of the system reaching a criminal conclusion itself.


Give this to a despot to train on his political enemies (or ethnic/religious minorities), and you suddenly have a very good NN for condemning innocent people to jail.

Research should continue into this, but it's worth remembering that the "criminals" being trained on aren't necessarily bad people in the moral sense. They're merely the recipients of judgment by some third entity (in this case, China's legal system).


Phrenology for the 21st century.


"Their method is straightforward. They take ID photos of 1856 Chinese men between the ages of 18 and 55 with no facial hair. Half of these men were criminals.

They then used 90 percent of these images to train a convolutional neural network to recognize the difference and then tested the neural net on the remaining 10 percent of the images.

The results are unsettling. Xiaolin and Xi found that the neural network could correctly identify criminals and noncriminals with an accuracy of 89.5 percent."
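The dataset split described above can be sketched as follows (a toy stand-in with 1,856 placeholder labels; the actual images and the authors' CNN are not shown):

```python
import random

# Toy stand-in for the article's balanced dataset:
# 1856 labeled samples, half "criminal" (1), half not (0).
labels = [1] * 928 + [0] * 928
random.seed(0)
random.shuffle(labels)

# 90/10 train/test split, as described in the article.
cut = int(0.9 * len(labels))
train, test = labels[:cut], labels[cut:]
print(len(train), len(test))  # -> 1670 186
```

On a balanced set like this, a coin-flip baseline scores about 50%, so the reported 89.5% is well above chance -- even though, as other comments note, it would translate into a very low hit rate on the general population.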


89.5% accuracy is really bad here. It looks pretty convincing, but the base probability of a person being a criminal is quite low. Cancer-detecting tests have accuracy close to 99%, but even if one says you have cancer, the probability that you actually do is not that high (close to 10%, I think) because of false positives.

Likewise, when this network predicts that someone is a criminal, the probability that the prediction is correct is far lower than the headline accuracy.


I'd like to see what human recognition accuracy would be, as a baseline. You could also block out certain features to see what's driving the decision-making here.


The article details the specific facial features that were focused on by the trained model. So that's at least part of what you're looking for.


This doesn't "predict criminals", it predicts those who will be convicted of a crime, which is not the same thing. Suppose that people have an unconscious prejudice against those with eyes set close together. They will be disproportionately convicted, and this neural net will find the correlation.


Since people assume beautiful people are good, what is the correlation between these results and people considered ugly in their particular population?


'Criminals' being defined as people who have been convicted by a judicial system that is full of bias.


What is the bias in this case?


Every country has different laws so a 'criminal' in one jurisdiction may not be a 'criminal' in another. The training data is fundamentally flawed as the target classes (criminal and not-criminal) are defined by largely arbitrary rules.


We can simply restrict our attention to one country, as they did in the study, to eliminate that variability. What is the bias?


This concept is troubling, uncomfortable, and could potentially be the basis for some very bad policy but none of that is a reason to dismiss it outright. If these correlations are real, it's worthwhile to find out more about it with an open mind.


The Minority Report scenario hinges on the fact that the crime predictions are not completely reliable.

But if we had a 100% sure way of detecting that someone's going to commit a crime, should we use it? This would be a dangerous change of perspective about free will and its relation to the law.


>if we had a 100% sure way of detecting that someone's going to commit a crime, should we use it

If it were 100% correct, it seems morally justifiable to use it -- as long as you are a dictatorship. For a democracy, you would need not only correctness, but independently verifiable correctness. If the algorithm is 100% correct today but can be influenced tomorrow, you are giving absolute power over the state to whoever can influence the algorithm. After all, it's impossible to prove your innocence if you are convicted before the crime.

Basically, after solving the technical problems with creating the algorithm, you also need a system of checks and balances that can replace the public verifiability of our current court system.


If it were 100%, all you would need to do is intervene in some manner so that the likelihood dropped to 0%. For example, if the system can predict you will beat your spouse, then bring the future perp outside 10 minutes beforehand and modify the scenario to the point where they wouldn't beat their spouse.

With minority-report scenarios, it's not like you actually need to punish people as if they really did commit the crime, even though the crime is in the future. All you need to do is prevent it.


You're already getting too far ahead of things.

First, think about what a 'crime' is. What constitutes 'crime' is based entirely on what the 'law' is. Laws (in this sense) are a creation of man, not nature.

So now we are correlating a man-made concept with something that is (at least partially) the product of nature. If this correlation is real, we need to explore that aspect of it before we start talking about policy.

So what makes a face? Genes, sure but that can't be the whole story. Environment? Nutrition? Psychology? Over time, as a person matures, maybe all of these things and more play some role in how his face will turn out.

Then, could it be that something behind the factors that shape a face also promote a psychology that puts less importance on submitting to authority? When we look at this as a criminal justice issue, that seems like a bad thing. In another context, it may not be.

This is kind of what I mean by approaching the issue with an open mind.


Also worth noting: predicting that someone will commit a crime doesn't mean that we have to arrest them. We could use the prediction to target other (voluntary) intervention.

Of course, we already use less than 100% effective prediction to target non-arrest intervention, so it's quite possible that 100% prediction would only have the effect to better target what we already do (and potentially alleviate harm or waste), rather than Minority Report style pre-crime arrests.

I'm not sure crime prediction itself is a moral quagmire -- just that what we use as a basis for prediction, and a few drastic uses like pre-crime arrests, are tough questions.


I bet you could train a neural net to detect the exact moment when the AI bubble has jumped the shark.


Right when you turn on the shark jump detecting AI, would be my bet.


What with the breakthroughs in Fonzi-recognition lately I think we could be looking at a full-on shark jump detector by mid-2017


Well, what goes around comes around. I remember from my studies (literature, culture, and such) having read about methodologies used in the 18th century to detect criminals by their physiological features.

One person doing these studies was, for example, Francis Galton (who, by the way, did quite a mix of things, from eugenics to statistics and "the wisdom of the crowd") [1].

Lots of these ideas carried over from older times into the modern era, like physiognomy [2], which was also used for racial identification/discrimination in the 20th century.

Have fun walking deeper into that rabbit hole of history. What I take from it is that bad ideas never die, even when science has debunked them.

Or as @thechao already said:

> Alternately: criminal prosecution is targeted at people who "look like" criminals; thus, the NN is just selecting those people we think look like criminals, rather than any inherent criminality.

[1] https://en.wikipedia.org/wiki/Francis_Galton [2] https://en.wikipedia.org/wiki/Physiognomy

[Edit] Formatting


The paper's conclusion claims: "Furthermore, we have discovered that a law of normality for faces of non-criminals. After controlled for race, gender and age, the general law-biding public have facial appearances that vary in a significantly lesser degree than criminals."

Given the above, I move that the title of this post be changed to "Neural Net trained on mugshots confirms the findings of Phrenology".


So were the faces of, e.g., dodgy GFC bankers included in that pool? Or are we just talking petty criminals here?


I looked at this paper the other day, and it looked to me like the non-criminal face examples were men with shirt collars, whereas the criminal examples were men in t-shirts. If they got above-random accuracy, I wonder whether they simply overfitted on that, and on the lighting and colour differences between the two styles of photos.


Did not think this would be the year phrenology made a comeback. . .


Why not? The inaccuracy problems were merely poor-quality measurements and statistics. Prediction accuracy will continue to improve with technology.


I wonder if the face is the same before and after one becomes a criminal. Being a criminal does not mean they don't feel regret or guilt. Perhaps that is what is being detected.


"In other words, the faces of general law-biding public have a greater degree of resemblance compared with the faces of criminals, or criminals have a higher degree of dissimilarity in facial appearance than normal people"

Hmm... so.

You look weird -> people treat you worse -> you don't feel like working with them -> higher chance of being a criminal.

I know that's a giant leap in reasoning, but that was the first thing that came to mind.


Could it be used to identify those likely to commit election or securities frauds? Or maybe those likely to poison an entire city by ruining their water supply?


Previous discussion on arXiv paper:

https://news.ycombinator.com/item?id=12983827


So if the 2011 paper shows conclusions similar to this paper's -- and if people, as well as this CNN, can "tell" criminals apart from non-criminals -- could people's unconscious bias lead them to treat criminal-faced people as actual criminals and thereby funnel them into criminality?

That is, non-criminal-faced people might show bias against criminal-faced people, such that the latter resort to [petty or other] crime to get by?

Of course, one question is why this bias might have emerged in the first place. What was its genesis?


"They take ID photos of 1856 Chinese men [...]. Half of these men were criminals."

Because half of the general male population is criminal, of course.

The accuracy rate would be very different with a training/testing sample that takes the base rate of criminality into account.


Let me guess... the NN was trained to spot tattoos? I bet you tattoos and crime correlate.


Phrenology is still phrenology, even if it's digital.


The future of this would make a great film...



