And still citizens are searched at our train stations and other public areas here in Australia based on that kind of (in)accuracy.
I'm going to make up a number. Keep in mind that I know it's a made up number, but I hope that it will illustrate my point. Let's imagine that 1 out of every 100 bags contains drugs that a sniffer dog can detect. If the dogs have an 80% false positive rate, that means that in 100 bags they will indicate the 1 bag carrying drugs and 4 other bags not carrying drugs.
If that rate of bags carrying drugs (1:100) were correct, and I'm not saying it is, then of 100 bags, 4 were searched fruitlessly in order to find the 1 bag with drugs. That means that 4% of bags were searched fruitlessly. That sounds a lot more encouraging for the dogs.
Another way to look at it is to say that when the police get a bag that was indicated by a dog, their chance of finding drugs is 20%. With our 1:100 scenario (still not trying to say it's true) their chance would be 1%. So the chances of finding the drugs increases by a factor of 20 at the cost of inconveniencing 4% of the people.
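The arithmetic above can be sketched directly. All the numbers here are the comment's own made-up assumptions: a 1-in-100 prior and 4 false flags per true flag.

```python
# Hypothetical numbers from the comment above: 1 bag in 100 carries drugs,
# and the dog flags that bag plus 4 clean ones (an 80% false positive rate
# among its indications).
bags = 100
bags_with_drugs = 1      # assumed prior: 1 in 100
true_positives = 1       # the dog finds the drug bag
false_positives = 4      # 4 clean bags are also flagged

flagged = true_positives + false_positives
precision = true_positives / flagged   # chance a flagged bag has drugs
baseline = bags_with_drugs / bags      # chance a random bag has drugs

print(precision)                 # 0.2 -> 20% chance of drugs in a flagged bag
print(precision / baseline)      # ~20 -> 20x better than searching at random
print(false_positives / bags)    # 0.04 -> 4% of bags searched fruitlessly
```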
Of course, I'm being a bit silly. We don't know how many bags the dogs checked so we have no idea how effective the dogs are. However, I suspect that relatively few people carry drugs in their bags (surely less than 1%? Maybe I'm wrong), so this 80% false positive rate sounds pretty effective to me. It potentially disrupts a tiny percentage of people and potentially increases the odds of finding drugs pretty dramatically. Without more information, there is no way to say what's going on, though (which is why I hate newspaper articles like that).
In your scenario, they've searched 100 people with no probable cause (what is a sniffer dog doing, if not a search?), and then done a much more intrusive, stressful, and embarrassing personal search of 5 people, also with no probable cause or presumption of innocence, in order to catch one person.
Being basically accused of a crime and physically searched by men with guns is much more than an "inconvenience" in my opinion.
A system that accuses 4 innocent people of a crime for every guilty person it catches is, in my opinion, a broken system that should not be employed.
My opinion of course, you may be happy with a much higher level of police intrusion into your life. Gotta stop those dangerous drug users!
While to me it is obvious that using a trained drug-sniffing dog in a suspicion-less search on unopened bags cannot be used to generate the probable cause required for a human to open the bag and search it manually, for some reason, the judges have so far seen fit to dance around that and come up with legal excuses to allow it anyway.
The problem is not that the dogs are bad at identifying drug bags. The problem is that dogs should not be employed in the first place, unless the cops already have a good reason to search the bags. For example, they saw a suspected drug/money mule go in with a bag and come out empty-handed, but are having trouble identifying which bag was his. They then have reasonable suspicion to perform the search with the dog, and the dog gives reasonable suspicion to open the bags the dog marked. Just going around with a dog, sniffing every bag that comes within range of that nose, should not be allowed.
The face recognition and license plate scanners are the same to me. They may not technically be a search, but they are still breaching the pseudo-anonymity one enjoys by being out in public. Strangers know you're there, but they don't necessarily know who you are, what you are doing, or where you are going. You are likely unnoticed or quickly forgotten. The cops are using this technology to fight crime, but they end up harassing a lot of innocents in the process. They are essentially using the mechanical assistance to follow everyone, all the time, and remember the details for a long, long time. And that sort of behavior is precisely what prompted the 4th Amendment in the first place. The technology has brought back general warrants, worse than ever, and the cops are concealing or lying about what they are doing with it, because it has been illegal in the US for over 230 years.
That's the thing, as the other commenter pointed out: for all practical purposes, there was no "debate" or "conversation" (outside the tech community, and a few wonky policy discussion forums). It has been basically presented as a fait accompli.
The "powers that be" chose public safety over personal privacy.
That may be the rationale for (some of) the customers.
But I'm sure you know all too well that the vendors of this technology are (overwhelmingly) motivated by the pursuit of one thing, and one thing only. And it certainly isn't your safety or mine.
Historically speaking, the "if I didn't build X - someone else would have" argument -- well, we all know what that has led to.
Bull. And even if it did, you are literally suggesting that one can't change their mind.
If (a) the technology is being deployed in specific environments where there really is a credible threat of criminal behaviour, and
(b) it is being used only as an initial filter prior to normal, lawful police action (or inaction) with normal safeguards, and
(c) the target data set used for comparisons is generated on a lawful and justified basis
then I'm not sure there is anything inherently unethical about this sort of technology. Is it really any different to having police officers at the entrances to a stadium with mugshots of known hooligans who have been identified while discussing trouble-making online ahead of the match?
Whether it is technically capable enough to be useful and cost-effective is a different matter, and it's also a test that might give a different result tomorrow.
Whether it is possible to have sufficient safeguards that a system could reliably meet the standards above is also a different question.
And whether the public would trust that those safeguards existed and have confidence that the standards were being met is another question again.
"Indigenous Australians account for 15 times as many offensive language offences as would be expected for their population."
> (a) the technology is being deployed in specific environments where there really is a credible threat of criminal behaviour,
You need to define what a credible threat is, pretty exactly. Just a thief? A terrorist? I think we'd all agree this would be nice if there were foreknowledge of an imminent attack. BUT I think there is still a danger in setting these systems up even for only that kind of use. We know that if we give police (or anyone, really) tools, they will use them (why wouldn't they?). So having the ability leads to more abuse of it. And as time goes on and people get used to it, the regulations relax.
So in this terrorist scenario I would be okay with bringing in cameras, or hooking into the already existing system, but it would need to be torn down afterwards and only brought out again when time is of the essence.
As we say in America, I'd rather a hundred guilty men go free than one innocent man be stripped of their freedom.
Anyways, I always wondered if the discretion allowed to police engenders more societal problems than it solves. I imagine this would have a similar effect.
And the idea that the tech is more dispassionate and therefore fairer than humans is a step on the road to hell. I guess you've never had an anomalous credit or insurance rating yet. (Notably, the GDPR has a clause providing precisely the ability to review automated decision-making where it can have a material effect.)
This isn't a sorting algorithm where there is a simple provable solution, this is putting heuristics in a computer.
I'd hedge that a little, because I think there are a lot of people who are just naive and don't realize that using data that is historically racist will result in racist outcomes.
Not saying that the person creating the algo is racist, but for such a harsh accusation you need enough substantial evidence. That, or we dilute the term.
I like this: https://arxiv.org/abs/1706.09847
Obviously the designer could make this data set racist on purpose, I was just saying that an AI that judges on race might be due to stupidity and not malice. Hanlon's Razor.
Not realizing that there are other parameters that correlate with race
For example, bite mark “evidence” has been long disproven as having no basis in fact, but that doesn’t stop prosecutors from using it.
There are a lot of forensic analyses like that: you can’t really use them to pinpoint a suspect; but they’re great at proving that a whole class of suspects couldn’t have been responsible.
The real problem with police is the thought process that leads to playing Guess Who with suspects. Only one person of interest has no alibi, and can’t be excluded? They must have done it then, even if we have literally no proof positive that they did, and have no idea what the evidence in favour of all the people we haven’t looked at yet would be.
- Thorough background on why it is bunk (start page 83): https://obamawhitehouse.archives.gov/sites/default/files/mic...
- Some stories of people imprisoned (often for decades) by bite mark analysis before being exonerated by DNA evidence: https://www.innocenceproject.org/all-cases/#bitemark-analysi...
- The story of "bite mark expert" and general piece of garbage, Michael West: https://www.washingtonpost.com/news/the-watch/wp/2016/08/24/...
In what way can this possibly put innocent people behind bars, never mind easily? The article specifically notes that any hits on the system are then checked by real police officers before any further action is taken, and from that point surely the same processes and controls will apply as if an officer thought they'd recognised a person of interest as part of their normal activity.
Face recognition, autonomous driving, anything that works with dynamic environments will fail badly for at least another 5 years.
Recognition and classification work fine in static environments (pipe inspection, rail track checking, etc.), but people and cars are hard to track: too many variables (lighting, day/night transitions, skin tone).
I'm astounded anybody who lives in a country where police can and do get away with shooting and killing people for faulty taillights - could possibly think this is a good idea...
No, because they have to prove it's the actual criminal; otherwise they have to release the guy.
Also, we make way too many things illegal in the US and the punishments are often harsh for a country founded on "freedom." I wish more people would shed this archaic delusion.
Over here (Australia) they call it "the trifecta of charges" - offensive language, resist arrest, assault a police officer - and it's well known to be used as "Arrest as a method of oppression".
These regularly get thrown out by victims capable of fighting them in court (eg: https://www.smh.com.au/national/nsw/arrest-of-student-for-of... ) - but they're overwhelmingly used against groups who're least likely to be able to do that: "Indigenous Australians account for 15 times as many offensive language offences as would be expected for their population." http://www5.austlii.edu.au/au/journals/AltLawJl/2004/53.html That's not because white folk are any less likely to tell a cop to fuck off, it's because there's systemic racism built into the police force here - they "know" they'll get away with "the trifecta" against "people who look a certain way".
The courts in the United States regard resisting arrest as a separate charge or crime in addition to other alleged crimes committed by the arrested person. It is possible to be charged, tried and convicted on this charge alone, without any underlying cause for the original decision to arrest or even if the original arrest was clearly illegal.
That last sentence is the kicker. If a cop makes an illegal arrest and you resist non-violently, you get charged for it. Think about that for a minute. That alone gives any officer the ability to arrest you for anything they want, then claim resisting arrest, and you now have a prosecutable charge, even when the arrest was clearly illegal. If this is the case, do we really even have rights in the US, or is it an illusion?
My take is that - yes we should have this stuff, but with the following caveats:
1) it only gets loaded with the faces of people for whom there is an active warrant out there - ie you need a judge to sign something
2) all data collected that is not relevant should be discarded asap (including false positives the moment you know they are false)
3) you don't deploy something like this until the false positive rate is appropriately low, for everyone - it has to be able to deal equally to people of different races, genders, haircuts, with and without beards, makeup, hats, helmets, zinc oxide sunscreen, green St Paddy's day faces, actual smurfs etc etc all the diversity of normal street life
Let's say one person in 1000 has an outstanding warrant. Let's say that when the camera sees this person, it has a 99% chance of recognising them correctly. Let's also say that it has only a 1% chance of recognising an innocent person as a suspect.
Under these generous conditions, 10 out of every 11 hits will be false positives. Put another way, if you run the hypothetical system on a football match with 60,000 people in the crowd, you'll find 59 or 60 of the ones with outstanding warrants, and several hundred without.
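Those numbers can be checked with a few lines of arithmetic, using exactly the hypothetical figures given above (1-in-1000 prior, 99% recognition, 1% false positive rate):

```python
# Sketch of the hypothetical stadium scenario from the comment above.
crowd = 60_000
prior = 1 / 1000        # assumed fraction with an outstanding warrant
sensitivity = 0.99      # chance a wanted person is recognised
fp_rate = 0.01          # chance an innocent person is flagged

wanted = crowd * prior                    # 60 people with warrants
true_hits = wanted * sensitivity          # ~59.4 correctly flagged
false_hits = (crowd - wanted) * fp_rate   # ~599.4 innocents flagged

print(true_hits, false_hits)
print(false_hits / (true_hits + false_hits))  # ~0.91: 10 of 11 hits are false
```

So "59 or 60 with warrants, several hundred without" falls straight out of the base rate, no matter how good the recogniser looks in isolation.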
This is just the same math that was used to show how ineffective PRISM mass surveillance would be: https://bayesianbiologist.com/2013/06/06/how-likely-is-the-n...
You have described a false positive 'rate' of one thousand percent (10 to 1). A genuinely low false positive 'rate' of 1 to 10 would require an actual rate of around a 0.01% chance of recognising an innocent person as a suspect.
It might help if there were a widely known term for such 'rates', rather than using wrong-term-in-scare-quotes.
Edit: Also, it's increasingly impossible to have a low false positive 'rate' for one-in-a-million or one-in-a-billion events, and not-technically-lying about how good your detectors are is probably a significant secondary factor in why this sort of thing gets so much flak for "staggering inaccuracy".
"The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive (false positives) and the total number of actual negative events (regardless of classification)."
That ratio in my example is 1 to 99.
Automated facial recognition systems will listen to whatever they’re told without question, and can watch all cameras at all times.
Add gait recognition and you won’t be able to go anywhere within CCTV without it being known.
> "When we first deployed and we were learning how to use it... some of the digital images we used weren't of sufficient quality," said Deputy Chief Constable Richard Lewis. "Because of the poor quality, it was identifying people wrongly. They weren't able to get the detail from the picture."
> Information Commissioner Elizabeth Denham said the issue had become a "priority" for her office.
When you roll out facial recognition technology at an event with ten thousand people and expect your magical facial recognition system to pick out just the bad guys, you quickly discover that even a system that's 99.99 percent accurate has a 63 percent chance of giving you at least one false positive in that crowd. In reality these systems are probably closer to 95 percent accurate, in which case you're looking at vastly more false positives. Like hundreds per event.
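Both figures are quick to verify. The 63 percent comes from compounding a 0.01% per-person false positive rate over 10,000 innocent people:

```python
# Chance of at least one false alarm in a crowd of 10,000, assuming a
# "99.99 percent accurate" system, i.e. a 0.01% per-person false positive rate.
crowd = 10_000
fp_rate = 1 - 0.9999

p_at_least_one = 1 - (1 - fp_rate) ** crowd
print(round(p_at_least_one, 2))   # 0.63

# At a more realistic 95% accuracy, the expected number of false alarms:
print((1 - 0.95) * crowd)         # 500.0 -> hundreds per event
```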
This is the exact same statement as "breast cancer screening is staggeringly inaccurate." Do you know why annual breast cancer screenings aren't recommended for women under 40? Because breast cancer is so rare in that population that the high false positive rate outweighs the benefit of early detection. The correct thing to do is to screen only if there is already a suspicion of cancer.
Similarly, pervasive surveillance where you (statistically) consider everyone a possible criminal is a waste of time. Anyone who's ever taken a probability course will tell you that unless your facial recognition tech is fantastically accurate you'll waste all your police officers' valuable time sorting through false positives.
This is not an image quality issue. It's not a data quality issue. It's not a deployment issue. It's a Bayesian statistics issue. Unless your tech has superhuman 99.999% accuracy, it doesn't matter how good your facial recognition is when you're looking for a needle in a haystack. I'm sure the facial recognition is just as wonderfully accurate as breast cancer screening; it's just being applied to a population with a stupidly rare prior.
This is a statistics 101, freshman undergrad-level mistake, and frankly it makes me mad as hell that authorities are stupid enough to make it and boneheaded enough to endanger their entire population's civil liberties in the process.
For more info, check Wikipedia's page on the aptly-named prosecutor's fallacy: https://en.wikipedia.org/wiki/Prosecutor%27s_fallacy
> concluding that a positive test result probably indicates a positive subject, even though population incidence is below the false positive rate, is a "base rate fallacy".
Whereas https://en.wikipedia.org/wiki/Prosecutor%27s_fallacy says:
> At its heart, the fallacy involves assuming that the prior probability of a random match [i.e. the odds of a positive test result] is equal to the probability that the defendant is innocent [i.e. the odds of a positive subject]. For instance, if a perpetrator is known to have the same blood type as a defendant and 10% of the population share that blood type, then to argue on that basis alone that the probability of the defendant being guilty is 90% makes the prosecutor's fallacy
Those are the same mistake. The prosecutor's fallacy is a base rate fallacy.
Prosecutor's fallacy: "The prosecutor's fallacy is a fallacy of statistical reasoning, typically used by the prosecution to argue for the guilt of a defendant during a criminal trial".
The false positive paradox is a result. To get to a prosecutor's fallacy or base rate fallacy from it, you have to use the result in a way that involves certain fallacious reasoning.
There's nothing necessarily wrong with using a screening test that the false positive paradox applies to, as long as you recognize its limitations. When you are looking for a needle in a haystack (which is typically when you hit the false positive paradox) it gives you a smaller haystack to search for the needle, but that can still be a big improvement over not using it as long as it has a low false negative rate.
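That "smaller haystack" point is easy to put numbers on. These figures are illustrative assumptions, not from the thread: one needle in 100,000, a screen with 99% sensitivity and a 1% false positive rate.

```python
# Illustrative: even when the false positive paradox applies, a screen can
# shrink the haystack dramatically, assuming a low false negative rate.
population = 100_000
needles = 1
sensitivity = 0.99    # assumed: 99% chance the needle is flagged
fp_rate = 0.01        # assumed: 1% of hay is flagged too

flagged = needles * sensitivity + (population - needles) * fp_rate
print(round(flagged))          # ~1001 items to examine by hand...
print(population / flagged)    # ...a haystack roughly 100x smaller
```

Almost every flagged item is still hay (the paradox), but manually checking ~1,000 items instead of 100,000 can be a real improvement, which is the comment's point about recognizing the test's limitations rather than discarding it.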
But it's _oh_ so profitable and career enhancing for the people choosing to do it...
Guilty until proven innocent. Make The Prison-Industrial Complex Great Again.
I wonder how many people got their face, address and ID publicly displayed there for false positives, and maybe it even affects their "Social Credit Score".
I believe that blindly implementing a system with such false positive rates clearly shows how incredibly responsible our government really is.
The only thing I did find is mentions of the correct-match rate: some over 90%, some over 99%, none 100%.
I am amazed every day at work that everyone expects technology to be a "silver bullet", and is outraged if they still have to do anything themselves.