Face recognition police tools 'staggeringly inaccurate' (bbc.co.uk)



I know a lot of people here are talking about the tech and how it of course produces too many false positives. But are we not going to talk about IF this type of thing is even okay? I for one don't think so. Too Orwellian.


Drug sniffer dogs indicate wrongly 80% of the time [1].

And still citizens are searched at our train stations and other public areas here in Australia based on that kind of (in)accuracy.

[1] https://www.smh.com.au/environment/conservation/sniffer-dogs...


Keep in mind that sniffer dogs are used as a filter before any search happens. What's missing in that statistic is the total number of bags the dogs sniffed.

I'm going to make up a number. Keep in mind that I know it's a made up number, but I hope that it will illustrate my point. Let's imagine that 1 out of every 100 bags contains drugs that a sniffer dog can detect. If the dogs have an 80% false positive rate, that means that in 100 bags they will indicate the 1 bag carrying drugs and 4 other bags not carrying drugs.

If that rate of bags carrying drugs (1:100) were correct, and I'm not saying it is, then out of 100 bags, 4 were searched fruitlessly in order to find the 1 bag with drugs. That means that 4% of bags were searched fruitlessly. That sounds a lot more encouraging for the dogs.

Another way to look at it is to say that when the police get a bag that was indicated by a dog, their chance of finding drugs is 20%. With our 1:100 scenario (still not trying to say it's true) their chance would be 1%. So the chances of finding the drugs increases by a factor of 20 at the cost of inconveniencing 4% of the people.

Of course, I'm being a bit silly. We don't know how many bags the dogs checked so we have no idea how effective the dogs are. However, I suspect that relatively few people carry drugs in their bags (surely less than 1%? Maybe I'm wrong), so this 80% false positive rate sounds pretty effective to me. It potentially disrupts a tiny percentage of people and potentially increases the odds of finding drugs pretty dramatically. Without more information, there is no way to say what's going on, though (which is why I hate newspaper articles like that).
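
To make that made-up arithmetic explicit, here is a minimal Python sketch using the same assumed numbers (1-in-100 prevalence, dog flags the 1 real bag plus 4 clean ones):

    # Made-up numbers from above: 1 in 100 bags carries drugs,
    # and the dog flags that bag plus 4 clean ones (80% of flags wrong).
    bags = 100
    true_hits = 1     # the one bag actually carrying drugs
    false_hits = 4    # clean bags the dog also flags
    flags = true_hits + false_hits

    print(false_hits / flags)   # 0.8  -> 80% of indications are wrong
    print(false_hits / bags)    # 0.04 -> 4% of all bags searched fruitlessly
    print(true_hits / flags)    # 0.2  -> 20% chance a flagged bag has drugs
    print(1 / bags)             # 0.01 -> 1% baseline chance without the dog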


No.

In your scenario, they've searched 100 people with no probable cause (what is a sniffer dog doing, if not a search?), and then done a much more intrusive, stressful, and embarrassing personal search of 5 people, also with no probable cause or presumption of innocence, in order to catch one person.

Being basically accused of a crime and physically searched by men with guns is much more than an "inconvenience" in my opinion.

A system that accuses 4 innocent people of a crime for every guilty person it catches, is in my opinion a broken system that should not be employed.

My opinion of course, you may be happy with a much higher level of police intrusion into your life. Gotta stop those dangerous drug users!


Also keep in mind that police must have reasonable suspicion that a crime has been committed in order to conduct a search.

While to me it is obvious that a suspicion-less sweep of unopened bags by a trained drug-sniffing dog cannot generate the probable cause required for a human to open a bag and search it manually, for some reason the judges have so far seen fit to dance around that and come up with legal excuses to allow it anyway.

The problem is not that the dogs are bad at identifying drug bags. The problem is that dogs should not be employed in the first place, unless the cops already have a good reason to search the bags. I.e. they saw a suspected drug/money mule go in with a bag and come out empty-handed, but are having trouble identifying which bag was his. They then have reasonable suspicion to perform the search with the dog, and the dog gives reasonable suspicion to open the bags the dog marked. Just going around with a dog, sniffing every bag that comes within range of that nose, should not be allowed.

The face recognition and license plate scanners are the same to me. They may not technically be a search, but they are still breaching the pseudo-anonymity one enjoys by being out in public. Strangers know you're there, but they don't necessarily know who you are, what you are doing, or where you are going. You are likely unnoticed or quickly forgotten. The cops are using this technology to fight crime, but they end up harassing a lot of innocents in the process. They are essentially using the mechanical assistance to follow everyone, all the time, and remember the details for a long, long time. And that sort of behavior is precisely what prompted the 4th Amendment in the first place. The technology has brought back general warrants, worse than ever, and the cops are concealing or lying about what they are doing with it, because it has been illegal in the US for over 230 years.


Ya know, that conversation already happened, and the "powers that be" chose public safety over personal privacy. I work in facial recognition. You're gonna see it blanket this civilization, completely. We are selling systems to every school district, shopping center, sports stadium, and municipal district nationwide. In the 2nd world it is even more active. The time to debate FR is past.


When was that time, precisely? I don't recall there being a debate about this. Will we just blow through the "time for debates" on every new technology?


It's been building with every school shooting, public gun violence, and high tension political gathering. The conversation takes place at the local level, often at the specific school itself, the specific campus, stadium and shopping district. You see, many of these places are private, so they can do what ever they want to a large degree; and many of the public spaces are patrolled by private security companies, who use the most cost effective method to cover large areas. The police are the end of the line as far as deploying facial recognition.


> Ya know, that conversation already happened,

That's the thing, as the other commenter pointed out: for all practical purposes, there was no "debate" or "conversation" (outside the tech community, and a few wonky policy discussion forums). It has been basically presented as a fait accompli.

The "powers that be" chose public safety over personal privacy.

That may be the rationale for (some of) the customers.

But I'm sure you know all too well that the vendors of this technology are (overwhelmingly) motivated by the pursuit of one thing, and one thing only. And it certainly isn't your safety or mine.


The "powers that be" are the collective owners of the franchises at the local mall, putting pressure on the mall security contractor. The "powers that be" are the loss prevention departments at every major retailer. They are also the security at every sports stadium, college campus, and local school district who are strapped for cash, looking to cover large areas in as cost effective a manner as possible. If you are looking for some over arching authority that says "yes we do" or "no we don't" as a society, I think the active adoption of FR by all these smaller private players is and is the end of the discussion.


Another point of view is that the "powers that be" is you - and me - and everyone else who has a choice to make about whether to work on technologies like these.

Historically speaking, the "if I didn't build X - someone else would have" argument -- well, we all know what that has led to.


> Ya know, that conversation already happened.

Bull. And even if it did you are literally suggesting that one can't change their mind.


You have to be in the conversation first, and then recognize that the conversation takes place at the private security contractor level. They are the people tasked with security of a hotel, shopping district, mall, sports stadium, airport, high school and college campus. They are the people adopting FR far more than the local police. They are also largely private companies, looking to be as cost-effective as possible, and composed of typical security guards, education-wise.


And on the flip side, if the purpose is Orwellian chilling effects on some types of behaviour, is the accuracy or false positive rate really going to concern the users of the tech? Consider the Mechanical Hound in Fahrenheit 451.


I think it's a debate worth having. Personally I am heavily in the pro-privacy, limited state power camp on most issues. However, if:

(a) the technology is being deployed in specific environments where there really is a credible threat of criminal behaviour, and

(b) it is being used only as an initial filter prior to normal, lawful police action (or inaction) with normal safeguards, and

(c) the target data set used for comparisons is generated on a lawful and justified basis

then I'm not sure there is anything inherently unethical about this sort of technology. Is it really any different to having police officers at the entrances to a stadium with mugshots of known hooligans who have been identified while discussing trouble-making online ahead of the match?

Whether it is technically capable enough to be useful and cost-effective is a different matter, and it's also a test that might give a different result tomorrow.

Whether it is possible to have sufficient safeguards that a system could reliably meet the standards above is also a different question.

And whether the public would trust that those safeguards existed and have confidence that the standards were being met is another question again.


Ask any #blacklivesmatter supporter just how effective police "normal safeguards" end up being... :-/


Clearly policing in the US has some problems, and I'm not sure technology can fix them. However, this story isn't about the US.


> Clearly policing in the [country] has some problems, and I'm not sure technology can fix them.

FTFY


Better yet, ask the data.


The data reported in the article shows 0 incidents of wrongful arrest due to false positives. It also implies several hundred cases where the correct people were properly identified by the technology. There is plenty of room for debate about this technology, but so far, just saying "ask the data" doesn't seem to make the point you might have intended.


As well.

"Indigenous Australians account for 15 times as many offensive language offences as would be expected for their population."

http://www5.austlii.edu.au/au/journals/AltLawJl/2004/53.html

:sigh:


There are a lot of principles here that I agree with you on, but I'd still rather not have the surveillance, and here's why:

> (a) the technology is being deployed in specific environments where there really is a credible threat of criminal behaviour,

You need to define what a credible threat is, pretty exactly. Just a thief? A terrorist? I think we all agree that this would be nice if there were foreknowledge of an imminent attack. BUT I think there is still a danger in setting these systems up for only that kind of use. We know that if we give police (or anyone really) tools, they will use them (why wouldn't they?). So having the ability leads to more abuse of it. And as time goes on and people get used to it, the regulations relax.

So in this terrorist scenario I would be okay with bringing in cameras, or hooking into the already existing system, but it would need to be torn down afterwards and only brought out when time is of the essence.

As we say in America, I'd rather a hundred guilty men go free than one innocent man be stripped of their freedom.


     As we say in America, I'd rather a hundred guilty men go free than one innocent man be stripped of their freedom.
I don't think I've ever heard anyone say that, but I don't like how simple it is.

Anyways, I always wondered if the discretion allowed to police engenders more societal problems than it solves. I imagine this would have a similar effect.


It’s a bit ironic that America both reveres that quote and has the largest prison population in the world.


I'll give you that. But that's also why I try to remind people what one of the founding beliefs was. And that your system is always going to fail to some degree, so do you want it to fail safe (innocent go to jail) or fail open (guilty go free)?


definitely the latter.


That's what I agree with and the essence of Blackstone's Formulation. This was something many of the founding fathers of America talked about.


I don't really see that it is that bad compared to what happens now. You have people doing it currently with all their subjective biases (such as race and class). Maybe this just makes it cheaper and maybe more effective as the tech matures. Surely accuracy concerns can be addressed by combining multiple data sources (e.g. looks like someone dodgy, is somewhere strange, doing something strange, just the way a police officer would rationalize investigating further).


The fundamental principle of presumption of innocence means you shouldn't go around spying on everyone looking for signs of criminality.

And the idea that the tech is more dispassionate and therefore fairer than humans is a step on the road to hell. I guess you haven't had an anomalous credit or insurance rating yet. (Notably, GDPR has a clause for precisely the ability to review automated decision-making where it can have a material effect.)


Because we don't want people who can track the entire population at once. It's too much power.


And you think the algorithms made by racists aren't racist?

This isn't a sorting algorithm where there is a simple provable solution, this is putting heuristics in a computer.


> And you think the algorithms made by racists aren't racist?

I'd hedge that a little. Because I think there are a lot of people who are just naive and don't realize that using data that is historically racist will result in racist outcomes.

Not saying that the person creating the algo is racist, but for such a harsh accusation you need substantial evidence. That, or we dilute the term.


Hm? At this point there's really nothing controversial about "racist data makes racist models" (I'm being flippant in my phrasing, but, more rigorously: "the model will reflect and reify the biases of the person training the model and the data used to train it, and the people who annotate the training data")

I like this: https://arxiv.org/abs/1706.09847


But the system in question is scanning a crowd looking for known faces in a database, race only enters it when you are comparing it to the human system of a police officer looking for terrorists at a football game. I get that people object to persistent surveillance but that doesn't seem to be what this system is about. It is looking for certain people at certain events. The way your face gets on the list in the first place is where you should put your concerns...


Well, what I was saying, and why I suggested hedging the accusation, is that if someone builds, say, an AI that determines court sentences, the result might be a racist AI even though race was never set as a parameter. Such an AI might have racist behavior because of the naivety of the designer [1], not because the designer was themselves racist.

Obviously the designer could make this data set racist on purpose, I was just saying that an AI that judges on race might be due to stupidity and not malice. Hanlon's Razor.

[1]Not realizing that there are other parameters that correlate with race


Police add yet another massively inaccurate tool to their arsenal with which they can easily put innocent people behind bars.

For example, bite mark “evidence” has been long disproven as having no basis in fact, but that doesn’t stop prosecutors from using it.


Bite mark evidence is fine... if what you’re trying to prove is whether the bite came from a human or an animal. It doesn’t do much more than that.

There are a lot of forensic analyses like that: you can’t really use them to pinpoint a suspect; but they’re great at proving that a whole class of suspects couldn’t have been responsible.

The real problem with police is the thought process that leads to playing Guess Who with suspects. Only one person of interest has no alibi, and can’t be excluded? They must have done it then, even if we have literally no proof positive that they did, and have no idea what the evidence in favour of all the people we haven’t looked at yet would be.


Indeed. Anyone in for a depressing read should look into the history of bite mark analysis as used by US prosecutors. Scientifically, it is 100% bunk. Basically the equivalent of the cops paying top dollar to psychics to tell them who did a crime.

- Thorough background on why it is bunk (start page 83): https://obamawhitehouse.archives.gov/sites/default/files/mic...

- Some stories of people imprisoned (often for decades) by bite mark analysis before being exonerated by DNA evidence: https://www.innocenceproject.org/all-cases/#bitemark-analysi...

- The story of "bite mark expert" and general piece of garbage, Michael West: https://www.washingtonpost.com/news/the-watch/wp/2016/08/24/...


> Police add yet another massively inaccurate tool to their arsenal with which they can easily put innocent people behind bars.

In what way can this possibly put innocent people behind bars, never mind easily? The article specifically notes that any hits on the system are then checked by real police officers before any further action is taken, and from that point surely the same processes and controls will apply as if an officer thought they'd recognised a person of interest as part of their normal activity.


Alternatively, politicians continue cutting police budgets, forcing them to replace people with underperforming computers.


Where on earth is this happening? I would like to move there. In my county/state/nation, the federal government makes up new funding programs every year to induce local departments to hire ever more assholes to pry ever more into the lives of their subjects.


Maybe I'm misremembering, but wasn't bite mark evidence used to catch Ted Bundy?


Not really, and frankly as with many prolific serial killers the active ingredient was police incompetence rather than rare genius of the killer.

https://en.m.wikipedia.org/wiki/Ted_Bundy#Arrest_and_first_t...


Not catch per se, but it was used in the trial.


I once helped build a fingerprint processing system. One of the things I learned is that police technicians essentially photoshop fingerprints before searching to ensure the best possible match. It took a lot of the shine off of the process and made it clear that as high-tech as things seemed, there were a lot of organic factors at play that were hard or impossible for a computer to sort out by itself. I expect facial recognition has similar constraints.

Edit: Conjugation


Computer vision is hard.

Face recognition, autonomous driving, anything that works with dynamic environments will fail terribly for at least another 5 years.

Recognition and classification work fine in static environments (pipe checking, train rail track checking, etc.), but people and cars are hard to track: too many variables (lighting, day/night transitions, skin tone).


Yes, it is very hard. That is why the leaders in the industry are not recent startups, rarely have anyone under 40 on their staff, and have been working in the field for 15 to 20 years. This is not casual technology, it is intended as a public safety technology, and the industry takes that seriously.


Let's not let the ineffectiveness of this technology distract us from the fact that it is unethical. If it worked properly it would still be horrible.


Why is it unethical? Because it allows us to catch criminals?


Perhaps because it allows "them" to catch whoever they feel like, and claim "oh, the computer said they were a criminal".

I'm astounded anybody who lives in a country where police can and do get away with shooting and killing people for faulty taillights - could possibly think this is a good idea...


> Perhaps because it allows "them" to catch whoever they feel like, and claim "oh, the computer said they were a criminal".

No, because they have to prove it's the actual criminal; otherwise they have to release the guy.


Resisting arrest is a typical workaround for that. The antidote is, "well why were you arresting him?" This dilutes the antidote with, "the computer said he was a criminal."

Also, we make way too many things illegal in the US and the punishments are often harsh for a country founded on "freedom." I wish more people would shed this archaic delusion.


Why would you resist arrest even if you're innocent?


Ask anyone from any sort of minority community how often their members get charged with resisting arrest vs how often they _actually_ resist arrest. Even allowing for self reporting always making the comparison skewed I suspect most non-minority people won't even believe the difference.

Over here (Australia) they call it "the trifecta of charges" - offensive language, resist arrest, assault a police officer - and it's well known to be used as "Arrest as a method of oppression".

These regularly get thrown out by victims capable of fighting them in court (eg: https://www.smh.com.au/national/nsw/arrest-of-student-for-of... ) - but they're overwhelmingly used against groups who're least likely to be able to do that: "Indigenous Australians account for 15 times as many offensive language offences as would be expected for their population." http://www5.austlii.edu.au/au/journals/AltLawJl/2004/53.html That's not because white folk are any less likely to tell a cop to fuck off, it's because there's systemic racism built into the police force here - they _know_ they'll get away with "the trifecta" against "people who look a certain way".


Resisting arrest is pretty vague. If you say, "hey, wait a minute, let me explain," while they are turning you around to cuff you, that can be considered resisting. Really it doesn't have to be anything, they can just put that on the list of charges.

> The courts in the United States regard resisting arrest as a separate charge or crime in addition to other alleged crimes committed by the arrested person. It is possible to be charged, tried and convicted on this charge alone, without any underlying cause for the original decision to arrest or even if the original arrest was clearly illegal.

That last sentence is the kicker. If a cop makes an illegal arrest and you resist non-violently, you get charged for it. Think about that for a minute. That alone gives any officer the ability to arrest you for anything they want, then claim resisting arrest, and you now have a prosecutable charge, even when the arrest was clearly illegal. Given that this is the case, do we really even have rights in the US, or is it an illusion?

https://en.wikipedia.org/wiki/Resisting_arrest


We've had a lot of press this week about the police here in New Zealand rolling out similar stuff. Really, we need to have a wide-ranging public discussion on the issues around this before it's done; sadly the police have largely been doing it under the radar.

My take is that - yes we should have this stuff, but with the following caveats:

1) it only gets loaded with the faces of people for whom there is an active warrant out there - ie you need a judge to sign something

2) all data collected that is not relevant should be discarded asap (including false positives the moment you know they are false)

3) you don't deploy something like this until the false positive rate is appropriately low, for everyone - it has to be able to deal equally with people of different races, genders, haircuts, with and without beards, makeup, hats, helmets, zinc oxide sunscreen, green St Paddy's day faces, actual smurfs etc etc - all the diversity of normal street life


Even if you get the false positive rate low, the low prevalence of criminals versus innocents will lead to a large number of false positives.

Let's say one person in 1000 has an outstanding warrant. Let's say that when the camera sees this person, it has a 99% chance of recognising them correctly. Let's also say that it has only a 1% chance of recognising an innocent person as a suspect.

Under these generous conditions, 10 out of every 11 hits will be false positives. Put another way, if you run the hypothetical system on a football match with 60,000 people in the crowd, you'll find 59 or 60 of the ones with outstanding warrants, and several hundred without.
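
A minimal Python sketch of that arithmetic, using the assumed numbers above (1-in-1000 prevalence, 99% hit rate, 1% false positive rate):

    # Assumed numbers from above: 1 in 1000 people has an outstanding warrant,
    # 99% chance of flagging a wanted person, 1% chance of flagging an innocent one.
    crowd = 60_000
    prevalence = 1 / 1000
    hit_rate = 0.99
    false_positive_rate = 0.01

    wanted = crowd * prevalence                    # 60 people with warrants
    innocent = crowd - wanted                      # 59,940 people without

    true_hits = wanted * hit_rate                  # ~59.4 correctly flagged
    false_hits = innocent * false_positive_rate    # ~599.4 innocents flagged

    print(true_hits, false_hits)
    print(false_hits / (true_hits + false_hits))   # ~0.91 -> about 10 of every 11 hits are false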

This is just the same math that was used to show how ineffective PRISM mass surveillance would be: https://bayesianbiologist.com/2013/06/06/how-likely-is-the-n...


I think this may be a problem of terminology. When I (and, I think, others) say "low false positive rate" what we're actually talking about is the (more useful for approximation) ratio "fraction of negatives classified positive" over "fraction of population that is positive".

You have described a false positive 'rate' of one thousand percent (10 to 1). A genuinely low false positive 'rate' of 1 to 10 would require an actual rate of around 0.01% chance of recognising an innocent person as a suspect.

It might help if there were a widely known term for these 'rates', rather than using wrong-term-in-scare-quotes.

Edit: Also, it's increasingly impossible to have a low false positive 'rate' for one-in-a-million or one-in-a-billion events, and not-technically-lying about how good your detectors are is probably a significant secondary factor in why this sort of thing gets so much flak for "staggering inaccuracy".


You can't get around the fact that rare events happen rarely. False positive rate is the rate of false positives: given that this one event is a positive, what is the chance that it is a false positive? This is important for decision-making in a real situation. Your suggested definition is for a much less helpful statistic, no matter how comforting it might seem to accuse the field of statistics of "not-technically-lying". In fact your idiosyncratic definition has the opposite effect from the one you seem to seek: when "true" negatives are far more common than "true" positives, false positives typically will also be far more common than true positives.


Yes, I know what a false positive rate is, hence the scare-quotes everywhere I misused "rate". And I'm (obviously) not accusing the field of statistics of not-technically-lying; I'm accusing supporters of face recognition police tools of not-technically-lying. False positive rates are actively misleading when (trying to avoid) answering the question "given that this one event is reported positive, what is the chance that it actually is?".


Well, no, that's not the Wikipedia description of a false positive rate:[1]

"The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive (false positives) and the total number of actual negative events (regardless of classification)."

That ratio in my example is 1 to 99.

[1]https://en.wikipedia.org/wiki/False_positive_rate
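
To make the distinction concrete, a small sketch computing both figures from the earlier assumed 1-in-1000 example (1% chance of flagging an innocent, 99% chance of flagging a wanted person):

    # Per 1000 people: 1 wanted person, 999 innocents (the earlier assumed numbers).
    actual_positives = 1
    actual_negatives = 999

    true_positives = actual_positives * 0.99    # ~0.99 wanted people flagged
    false_positives = actual_negatives * 0.01   # ~9.99 innocents flagged

    # False positive rate per the Wikipedia definition: false positives / actual negatives.
    print(false_positives / actual_negatives)                     # 0.01 -> the "1 to 99" figure

    # The "10 to 1" figure is instead the fraction of hits that are false (false discovery rate).
    print(false_positives / (false_positives + true_positives))   # ~0.91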


Perhaps humans will reapply for this job: http://www.rcmp-grc.gc.ca/en/gazette/never-forget-a-face


People with rare abilities have a chance to refuse or whistleblow unethical work.

Automated facial recognition systems will listen to whatever they’re told without question, and can watch all cameras at all times.

Add gait recognition and you won’t be able to go anywhere within CCTV without it being known.


Not with current failure rates where they are unable to generalise glasses, facial hair, bad lighting, different clothing etc.


The Nest doorbell is fairly good at this, but it has a relatively narrow field of view and 1080p resolution. Most of the CCTVs out there have a very wide field of view, which makes the number of pixels per face low.


As another commenter pointed out, it also doesn't have to sift through thousands of faces. Chances are the doorbell is only exposed to a handful of faces a day which greatly reduces the volume of false positives regardless of the field of view and resolution.


Modern FR systems can compare tens of millions of faces per second, per core, on a $99 Intel Compute Stick. The thing that sucks about Nest and similar "security cameras" is their wide angles - that makes the distance of recognition far too close. To detect a face, FR needs between 28 and 54 pixels of head height, but to recognize with accuracy it requires a minimum of 120 pixels of head height. A Nest camera will not produce a head that large until it is 3 feet from the camera. I'd prefer to make recognition at 30 feet or more, preferably when a bad actor is not yet inside the property.


> Police facial recognition cameras have been trialled at events such as football matches, festivals and parades... High-definition cameras detect all the faces in a crowd and compare them with existing police photographs, such as mugshots from previous arrests.

> "When we first deployed and we were learning how to use it... some of the digital images we used weren't of sufficient quality," said Deputy Chief Constable Richard Lewis. "Because of the poor quality, it was identifying people wrongly. They weren't able to get the detail from the picture."

> Information Commissioner Elizabeth Denham said the issue had become a "priority" for her office.

When you roll out facial recognition technology at an event with ten thousand people and expect your magical facial recognition system to pick out just the bad guys, you quickly discover that even a system that's 99.99 percent accurate has a 63 percent chance of giving you at least one false positive in that crowd. In reality these systems are probably closer to 95 percent accurate, in which case you're looking at vastly more false positives. Like hundreds per event.
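
For what it's worth, that 63 percent figure is just 1 - (1 - 0.0001)^10000; a quick sketch:

    # Chance of at least one false positive in a crowd of 10,000,
    # assuming a 0.01% per-person false positive rate ("99.99% accurate").
    crowd = 10_000
    fp_rate = 0.0001

    p_at_least_one = 1 - (1 - fp_rate) ** crowd
    print(p_at_least_one)   # ~0.632 -> roughly a 63% chance of at least one false positive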

This is the exact same statement as "breast cancer screening is staggeringly inaccurate." Do you know why annual breast cancer screenings aren't recommended for women under 40? Because breast cancer is so rare in that population that the high false positive rate outweighs the benefit of early detection. The correct thing to do is to screen only if there is already a suspicion of cancer.

Similarly, pervasive surveillance where you (statistically) consider everyone a possible criminal is a waste of time. Anyone who's ever taken a probability course will tell you that unless your facial recognition tech is fantastically accurate you'll waste all your police officers' valuable time sorting through false positives.

This is not an image quality issue. It's not a data quality issue. It's not a deployment issue. It's a Bayesian statistics issue. Unless your tech has superhuman 99.999% accuracy, it doesn't matter how accurate your facial recognition is if you're looking for a needle in a haystack. I'm sure the facial recognition is just as wonderfully accurate as breast cancer screening, it's just being applied to a population with a stupidly rare prior.

This is a statistics 101, freshman undergrad-level mistake, and frankly it makes me mad as hell that authorities are stupid enough to make it and boneheaded enough to endanger their entire population's civil liberties in the process.

For more info, check Wikipedia's page on the aptly-named prosecutor's fallacy: https://en.wikipedia.org/wiki/Prosecutor%27s_fallacy


The base rate fallacy [1] and the false positive paradox [2] seem to be much more on point for using face recognition for screening than the prosecutor's fallacy.

[1] https://en.wikipedia.org/wiki/Base_rate_fallacy

[2] https://en.wikipedia.org/wiki/False_positive_paradox


What exactly are you trying to correct? Read your own link.

https://en.wikipedia.org/wiki/False_positive_paradox says:

> concluding that a positive test result probably indicates a positive subject, even though population incidence is below the false positive rate, is a "base rate fallacy".

Whereas https://en.wikipedia.org/wiki/Prosecutor%27s_fallacy says:

> At its heart, the fallacy involves assuming that the prior probability of a random match [i.e. the odds of a positive test result] is equal to the probability that the defendant is innocent [i.e. the odds of a positive subject]. For instance, if a perpetrator is known to have the same blood type as a defendant and 10% of the population share that blood type, then to argue on that basis alone that the probability of the defendant being guilty is 90% makes the prosecutor's fallacy

Those are the same mistake. The prosecutor's fallacy is a base rate fallacy.


False positive paradox: "The false positive paradox is a statistical result where false positive tests are more probable than true positive tests, occurring when the overall population has a low incidence of a condition and the incidence rate is lower than the false positive rate"

Prosecutor's fallacy: "The prosecutor's fallacy is a fallacy of statistical reasoning, typically used by the prosecution to argue for the guilt of a defendant during a criminal trial".

The false positive paradox is a result. To get to a prosecutor's fallacy or base rate fallacy from it, you have to use the result in a way that involves certain fallacious reasoning.

There's nothing necessarily wrong with using a screening test that the false positive paradox applies to, as long as you recognize its limitations. When you are looking for a needle in a haystack (which is typically when you hit the false positive paradox) it gives you a smaller haystack to search for the needle, but that can still be a big improvement over not using it as long as it has a low false negative rate.


That is simple arithmetic, not a "paradox". Those who can't do that arithmetic will be ill-equipped to use this "smaller haystack" in a sensible or just way. Instead, they will simply arrest the first member of that smaller haystack they find.


<cynical thought> The false positive paradox rests on the relative magnitude of the false positive rate and the incidence rate. The numbers in the article reveal a false positive rate of 91%. So long as the police assumption is that somewhere around 91% of the population are criminals, that paradox will not limit the effectiveness of a technology that's wrong 10 times as often as it's right...


> ... pervasive surveillance where you (statistically) consider everyone a possible criminal is a waste of time.

But it's _oh_ so profitable and career enhancing for the people choosing to do it...

Guilty until proven innocent. Make The Prison-Industrial Complex Great Again.


Remember a few weeks ago or so when people were talking about China's jaywalker public-shaming thing?

I wonder how many people got their face, address and ID publicly displayed there due to false positives, maybe even affecting their "Social Credit Score".

I believe being this bold, blindly implementing such a system with these kinds of false positive rates, clearly shows how incredibly responsible our government really is.


I'm curious about how the China facial recognition experiment works too. Giving the generous assumption that the matches are correct, I wonder if they are using mobile phone tracking or RFID card reading to supplement the visual recognition system.


I didn't find any mention of supplementary systems, so I don't actually know.

The only thing I did find is mentions of the accuracy rate: some over 90%, some over 99%, none 100%.


So the UK police, under huge budgetary pressures (there are 21,000 fewer police than 7 years ago, at the start of "the cuts"), were duped into investing in an expensive technology that promised to save them resources in catching criminals. The pressure to get results overrode common sense and concerns about civil liberties. A bit like the DNA "fingerprint" craze, which is now unravelling with a number of convictions being challenged. If it had been treated from the beginning as corroborating evidence, rather than The Answer, this would never have happened.

I am amazed every day at work that everyone expects technology to be a "silver bullet" and is outraged if they still have to do anything themselves.


There's an easy fix here. Criminalize looking like a criminal.


"It's a crime to be broke in America! And it's a crime to be Black in America!" -- Spearhead/Michael Franti 1994.




