This is interesting to me personally because last month while flying back to the US after 2 years, I was detained for an hour because the computer said that my face didn't match my passport. The customs agent, clearly seeing that I am the person on my passport, wasn't sure what to do, so she called over her senior. He said it happens sometimes and just to wave me through, but she wasn't willing to disagree with the computer, so she sent me to the detention room. I waited for an hour in a room full of people who had flagrant visa violations (and who were being treated very rudely, fwiw), and overheard two officers discussing what to do with my case, a lifelong US citizen with nothing on my record. The senior insisted that I be questioned: "Always ask a few questions, you never know what you might find". Eventually they called me, interrogated me in a way intended to get me to incriminate myself (questions I wouldn't answer to police on the street, but I wasn't in a position to refuse), and let me go. I almost missed my connecting flight, and I hope that just the fact that I was detained won't cause friction in the future.
This is not really a personal complaint. My situation is still quite privileged. But I think it's an example of just how much of a techno dystopia we're living in.
Unrelated to facial recognition: I'm a Brazilian, flying from Casablanca, Morocco to Lisbon, Portugal, and I dropped my passport while on the plane. The minute I step off the plane I realize it and inform the crew and airport security. They tell me to wait. Nobody goes to the plane to try and find my passport. I get detained in the airport prison with mostly illegal migrants. The plane flies away to Madrid. I can't leave without a passport. There's a small 8-square-meter room where more than 10 people sleep, including someone with a contagious respiratory disease (this was pre-covid). Couldn't sleep and was up all night reading The Trial by Franz Kafka, which was, funnily enough, available in the prison hall. Had to bribe a police officer with a collector's plastic Clipper lighter for him to allow me to smoke a single cigarette. Was able to get out after 24h, but only because my brother had a friend who worked at the airline, who located my passport inside the plane in Madrid and arranged for it to be given to a pilot who would be flying to Lisbon the next morning. It was so traumatic that when leaving they asked me if all my belongings were in my backpack (which had been confiscated upon arrest) and I didn't realize police officers had actually stolen my sunglasses and an iPhone. It's not exactly a techno dystopia. It's a full-on dystopia turning to tech.
Holy shit. Growing up in a super small border town between Texas and Mexico, I've seen, heard, and been part of some crazy and terrifying stuff, but nothing compares to this. I felt terrified just imagining what you went through.
Obligatory glad you made it out, but seriously- had any one thing gone wrong, who knows what could have happened in the end.
I used to sit and wonder about the future. As in how wonderful it will be and what amazing things we will accomplish. I used to fear dying because I wouldn't get to see any of it, much less the end of the story. Somewhere along the way I began to fear the future, to the point I didn't want to have children, I felt it unethical to knowingly bring an innocent life into that kind of terror.
Professor Farnsworth said it best- 'I don't want to live on this planet anymore.'
Edited to add- when I was young I caught wanderlust pretty hard and traveled most of the US, some of Mexico and Canada. I dreamt of visiting other countries and continents and was hugely disappointed to not find a way before having a family and obligations. All that went out the window about the time I started watching Locked Up Abroad. Brokedown Palace had nothing on those stories.
> "but she wasn't willing to disagree with the computer"
This mentality is far too common among modern humans. I've suffered these people at banks, bill pay stations, travel situations, etc. It's especially annoying when you're facing them with undeniable evidence that the machine contains mistaken data and they still respond with "I'm sorry, sir. There's nothing I can do. It says right here that you owe {$random-money}." Machines can be wrong, and even more commonly, the people who enter data into those machines can be wrong. But then you get these people who can clearly see that something is wrong, but they refuse to refute what the machine tells them? Why?
I swear, when the machines finally decide humans are obsolete, we'll happily line up and willingly march into the kill-box…
Unfortunately it's turtles all the way down (up?) - should they ignore the computer and make a mistake, they no doubt will face their employer confronting them with 'the computer told you so and you flagrantly ignored it'.
The recently uncovered UK Post Office accounting scandal goes to show that the computer is assumed correct by default!
It starts young; in grade school. I saw it with my son and his peers with simple graphing calculators and their math homework. They'd punch in some numbers, get a result, and that was that. Try as I might, I couldn't convince them that garbage in leads to garbage out. In their minds, the machine was infallible.
To be fair, though, businesses have always used these 'buffer' tactics. Where now it's software, the all-knowing, never-wrong magical boxes that cannot be reasoned with, before it was District Managers who were never available, secretaries as literal gatekeepers, and 'company policy', which is nothing but some garbage some department decided on but which they make sound like a law. On and on, round and round. :(
This is my biggest problem with working in the video security / FR industry: the industry completely ignores end-user training and the obvious necessity for operators of this class of software to be screened for A) computer literacy, to the degree that they understand this is just software and NOT an authority, and B) racial blindness: if a person cannot tell the difference between near-age siblings or cousins of an ethnicity other than their own, and specifically the ethnicity they police, they should be legally regulated as ineligible to operate such software.
The problem is a lot of people who work these roles are so demoralised, they see themselves as a kind of almost-robot when it comes to fulfilling their job duties. They don't feel they have the autonomy to apply common sense.
It's a weird mixed feeling of annoyance and pity when you run into one of these folks.
I had a similar situation crossing the US border coming back from Toronto. I got flagged because I pulled past a yellow line during the wait. Incidentally, I just followed the pattern of the person in front of me who flew through.
The thing that resonated with me was the approach the officers used. These types of jobs attract people who like the power trip. You are considered guilty unless you can prove innocence. There was no probable cause, no need for a random search. I know people in LE and I can only imagine how these people would react to similar treatment if "flashing their badge" didn't get them a wave through. I think this is the thing that people don't fully get about the surveillance state and encryption back doors in addition to facial recognition. If there is ever a "hit" on any of these things, you are guilty. LE will stand on you to try to get you to crack because the tech can't be wrong. There are going to be so many innocent people in jail in the future dystopia.
For some reason everyone who isn't born into crushing poverty, and/or didn't move here without speaking the language or looking and sounding like a natural-born citizen, and/or doesn't come from an underfunded inner city as an obvious racial minority feels the need to acknowledge some ephemeral privilege that made everything easy for them.
I don't have a problem with this per se, and at times it's even a good exercise from an intellectual standpoint, but it's at about a 12/10 at the moment.
I thought his privilege was being a US citizen; otherwise, being deported would have been a real possibility. (Technically a US citizen cannot be refused entry, AFAIK.)
How often are people remanded to secret prisons and/or are just murdered because the computer says their passport photo doesn't match their face? Why bring it up?
Technically, a previously unknown near-earth object could wipe out most life on earth at lunchtime. Odds are, I'll still have that 2 PM meeting, though.
Atrocities are committed, but my sense is a photo mismatch won't be the catalyst. I'd go further and say it happens less often than you might think.
Did you know CBP agents are trained to get you emotional, piss you off, and treat you like shit? It’s an effective and quick way to get someone on edge and maybe reveal something to be investigated. Nervous behavior. CBP officers go through therapy to deal with this.
Fight fire with fire. I would make it clear (respectfully) the situation is ridiculous and push for resolution in my favor.
I was once asked for a second photo ID at the border and I didn't have one. The officer found this very suspicious. I did get kind of pissed and asked him what kind of idiot takes every photo ID they have abroad. What if it's lost or stolen? I was waved through.
Buddy of mine from college flies with his US passport at all times, even for domestic flights, because he's of Arab descent and constantly gets hassled for it.
So one time we're flying domestic as a group to set up some equipment for a research project. He gets pulled aside by a couple of white TSA agents as part of a "random inspection", out of a line of about 20 people, mostly white or light complected.
He starts getting questioned about where he's flying, why, etc. Then they ask to see his ID and boarding pass. Hands them his passport, state DL, and boarding pass, and they start going off on him as to why he's got his passport for a domestic flight, how that looks suspicious and such.
He responds with "Well, I get stopped for looking Arab every time I fly, so this is a much easier way to prove I'm a US citizen." They stop for a moment and look around at everyone watching, tell him to have a good day and walk away. I'll never forget that encounter.
I'm black, and I've had quite my share of DWB stops and bad cop experiences (I detailed some of them on Twitter and in this subsequent interview). That Twitter thread blew up because so many black people chimed in (from college students to famous journalists to famous musicians) detailing their ridiculous experiences with cops.
One of the more "interesting" ones was when I was stopped by a cop in a suburb of Boston (I think it was Newton or Newtonville). The reason given was that "I was adhering too strictly to the rules," which made me suspicious. This was a road with a 35 MPH limit and I was strictly observing that limit. So the fact that I was following the law made me suspicious. But the reason I was following the law (even when other drivers were speeding) was that cops will use any ridiculous reason to pull me over. Can't win for losing.
Privilege is real and I see it every time I compare how I get treated with how my white friends (who share the same socio-economic profile and other markers as me) are treated. I dumped a white acquaintance I was giving a lift to Manchester, NH on the side of the road when he pulled out a joint as I was driving and got belligerent with me when I got mad at him for endangering me and refused to put it out or throw it out. He just could not get it through his head why I, a black person at much higher risk of getting stopped by cops, would take umbrage at his stupidity.
I'm 47. I have never done weed, not because I think there is anything wrong with it, but because given how often I got stopped by cops (driving or even simply walking), all it would have taken was for me to have a joint on me for it to ruin my life. My white friends had no such fear.
When I was younger and in college I used to work security, and sometimes I'd do security for concert venues. I saw how the cops treated predominantly black patrons vs white patrons. With black patrons, the cops felt their role was to intimidate, and they tended to escalate situations and just generally be rude and aggressive. With white patrons, they went super easy. I worked security at the Fleet Center in Boston for a Phish New Year's concert (1995 or 1996, I think), and the band put on an amazing show and made me a fan of theirs that night. White concert-goers were walking around openly smoking pot (I drove home in the morning with my uniform reeking of weed) and the only thing the cops would do was occasionally ask patrons to put it out. Totally different from when the crowd was predominantly black (no black person in Boston at that time in their right mind would even THINK of lighting up a joint in front of a cop).
The facial recognition issue is particularly vexing/scary for me (Netflix has a great documentary about this called "Coded Bias") because there are two things that are true:
1: Facial recognition systems are incredibly terrible at telling black faces apart
2: Any rule or perceived rule violation is going to be enforced much more harshly against POC. Whatever wiggle room exists, entreaties about being misidentified are likely to fall on deaf ears.
A good example of this confluence is the black girl who was put out of a skating rink (at night, thereby putting her in danger) because the facial recognition system misidentified her. The only saving grace is that the venue did not call the cops (I'm pretty sure things would have gone badly for the teenager if they had).
https://www.newsweek.com/black-teen-barred-skating-rink-afte...
> Did you know CBP agents are trained to get you emotional, piss you off, and treat you like shit? It’s an effective and quick way to get someone on edge and maybe reveal something to be investigated. Nervous behavior. CBP officers go through therapy to deal with this.
Are these fishing expeditions a demonstrably effective way to prevent terrorism/trafficking/whatever?
>interrogated me in a way intended to get me to incriminate myself (questions I wouldn't answer to police on the street, but wasn't in a position to deny them)
If you are a U.S citizen, they legally can't deny you entry into your own country. Nor can they force you to answer any questions beyond the most basic ones that establish your citizenship and ID. They can intimidate you and make you feel you should divulge more, but this is just bluster, and you have rights that are well worth exercising. The ACLU confirms this by the way: https://www.aclu.org/know-your-rights/what-do-when-encounter....
Such incidents occur often in Singapore: police detain people and later release them because the facial recognition tech is defective and unreliable. But such surveillance is also practiced by scared governments like China's.
> I was detained for an hour because the computer said that my face didn't match my passport
The "excuse du jour". Always blame the computer and you will have a free pass to do anything as you want without being blamed for it. I had seen a similar pattern other times in airports.
Ah yes, deploying a technology with obvious potential false positives and operating without the assumption that it has false positives. Geniuses right there.
I remember a story where after 9/11 and during the dirty bomb panic that ensued, a network of radiation detectors was deployed around New York. Authorities identified many dirty bombers. It seems Al Qaeda had recruited cancer patients for some reason, and they all congregated around radiation therapy centers for some nefarious purpose.
Those people shouldn't be fired for infringing upon this gentleman's civil rights, they should be carted off to a secure facility for the severely mentally impaired, for the good of society. I don't want any of them to be allowed to operate heavy machinery (including vehicles and elevators), matches or knives; the risk is too great.
This has been going on for a while and has nothing to do with AI or tech.
> Drug tests generally produce false-positive results in 5% to 10% of cases and false negatives in 10% to 15% of cases, new research shows.
Lots of innocent people were probably jailed because of that. AFAIK, most countries still rely on these tests to at least detain people. The system will implement something (anything?) as long as it is "good enough" or appears to fix the problem.
> > Drug tests generally produce false-positive results in 5% to 10% of cases and false negatives in 10% to 15% of cases, new research shows.
> Lots of innocent people were probably jailed because of that.
Efforts to increase the accuracy of the tests have downsides too, since the tests are already too sensitive. Lots of people have lost their jobs due to erroneous true positive results from eating something like a poppyseed bagel.
The test result is erroneous because no drugs were used, but a true positive because the trace amounts detected from the poppy seeds were actually present.
> Drug tests generally produce false-positive results in 5% to 10% of cases and false negatives in 10% to 15% of cases, new research shows.
Most often, some tests have high sensitivity (few false negatives) whilst others have high specificity (few false positives). (Rarely both.)
So, for example, this is why a positive RT-PCR test carries more weight than a negative one: the test has high specificity but only moderate sensitivity.
Lumping together false positive stats (for low-specificity tests) with false negative stats (for low-sensitivity tests) is misleading at best.
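A rough back-of-the-envelope sketch (with made-up numbers, not real assay figures) of why a positive result from a high-specificity, moderate-sensitivity test is much stronger evidence than a negative result from the same test:

```python
# Hypothetical test: 99.5% specific, 80% sensitive (invented numbers).
# Likelihood ratios show how much each result shifts the odds of disease.

def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1 - specificity)   # odds multiplier for a positive result
    lr_neg = (1 - sensitivity) / specificity   # odds multiplier for a negative result
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.80, 0.995)
print(round(lr_pos))      # ~160: a positive multiplies the odds of disease ~160x
print(round(lr_neg, 2))   # ~0.2: a negative only divides them by ~5x
```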
People just are not prepared for, and don't understand, what even a very low level of false positives looks like in the real world.
If a vaccine kills 1 in a million people, that's still hundreds of people in the US and thousands in the entire world.
But most people just don't understand the concept of very large/small numbers and can't interpret what they actually mean.
There are tests with a very high false positive ratio, for example a pregnancy test.
Now, you don't take a pregnancy test all the time. If you did it 5 times a day, every day, you would basically "find out" you were pregnant at least once a week and have to go to a real doctor to confirm, and the test would be useless.
The way to use a test like that is to combine it with some other knowledge/process/situation. In the case of a pregnancy test, you only take it when a) you show other early symptoms of pregnancy and b) you've had sex with the possibility of getting pregnant.
Facial recognition could be useful, but like any test with a high false positive rate, it loses its usefulness when deployed at large scale, indiscriminately.
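To make that concrete, here is a minimal sketch (every number in it is hypothetical) of what a small false positive rate does when a face matcher is run indiscriminately over a large crowd:

```python
# All figures invented: a watchlist face matcher run over an entire airport.
false_positive_rate = 0.001      # 0.1% of innocent travellers wrongly flagged
true_positive_rate  = 0.95       # assume it nearly always catches a real match
people_screened     = 1_000_000  # travellers scanned
actual_targets      = 10         # watchlisted people actually in that crowd

false_alarms = false_positive_rate * (people_screened - actual_targets)
true_hits    = true_positive_rate * actual_targets

print(int(false_alarms))  # ~1000 innocent people flagged
print(true_hits)          # ~9.5 real matches

# A flagged person is therefore roughly 100x more likely to be innocent than
# not, which is why a "hit" needs corroborating context rather than blind trust.
```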
> But now, see, how most people just don't understand the concept of very large/small numbers and can't interpret what it means exactly.
I think the argument is that this particular form of innumeracy should no longer be acceptable if you're making decisions where this matters. At a minimum, this would be police officers, judges, lawmakers.
But they are normal people and so are our lawmakers.
Unfortunately you can't build the system on the assumption that officers, judges, or representatives are Vulcans who use only logic. Any system like that will necessarily be pretty bad.
The safety in the system is general public scrutiny, but as we see it is failing miserably, too.
Most people are not qualified to be doctors. We don't just throw up our hands, conclude that genocide would be the only answer and that it is unacceptable, and therefore decide we need to let anybody do organ transplants.
Statistical literacy isn't so low that we cannot make it a requirement to be a judge. Gatekeeping, for once, would be the start of a solution: make statistical literacy a qualification for well-paying jobs and many people will steer themselves toward it.
Yes, but in a lot of cases facial recognition/drug testing is done without a valid prior reason, just the officer pulling the "I smell weed" crap -- there is no incentive for the police officer not to make up an unfalsifiable prior reason.
The point is, for facial recognition to make sense, you would need to have some additional context.
For example, it could make more sense if you followed up any result with an actual person making the determination (some variation on a blind or double-blind test).
> as any high false positive test, looses its usefulness when deployed on large scale, indiscriminately.
Well of course anything mis-applied is probably bad. But sometimes false positives can be a good thing. For example, in immunology, it's quite common to use cheap tests that have quite high false positives, but low false negatives to screen samples before applying more expensive, but more rigorous tests.
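A sketch of that two-stage pattern (all numbers invented for illustration): a cheap, sensitive screen that over-flags, followed by an expensive, specific confirmation run only on the flagged samples:

```python
# Hypothetical two-stage screening over 100,000 samples with 1% true prevalence.
population = 100_000
positives  = population * 0.01            # 1,000 samples truly positive

# Stage 1: cheap screen - very few false negatives, lots of false positives.
s1_sens, s1_spec = 0.99, 0.90
flagged_true  = positives * s1_sens                       # ~990
flagged_false = (population - positives) * (1 - s1_spec)  # ~9,900

# Stage 2: only flagged samples get the expensive, rigorous test.
s2_sens, s2_spec = 0.98, 0.999
confirmed_true  = flagged_true * s2_sens          # ~970 real positives kept
confirmed_false = flagged_false * (1 - s2_spec)   # ~10 false positives remain

print(round(flagged_true + flagged_false))        # ~10,890 expensive tests instead of 100,000
print(round(confirmed_true), round(confirmed_false))
```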
> Williams was wrongfully arrested in 2020 for felony larceny after he was misidentified by the Detroit Police Department's facial recognition software after they used a grainy image from the surveillance footage. He was then picked from a photo lineup by the store security guard who wasn't actually present for the incident. According to his testimony, Williams was detained for thirty hours and was not given any food or water.
So the software pointed out that some guy resembled a grainy photo. Okay.. that's like a random person calling in a tip that they saw someone who looks like a grainy photo on a Wanted-poster.
Then the guy gets arrested, there's a lineup for a security-guard who wasn't even there, they hold the guy for 30-hours without food or water, and that's due to the software?
The argument's too absurd for me to believe that it's being made in good-faith.
I worked as technical support staff in law enforcement for many years before transitioning to knowledge work, and I assure you this very scenario (even without facial recognition oopsies) happens all the time, especially to people of color. It's one of the main reasons I got disgusted with the career and got out. People get arrested based on bad-faith tips, usually on a Friday so they can be held all weekend since detectives generally work banker's hours. If a suspect is in a holding cell they generally don't have access to a water fountain and must ask for water from jailers/guards who have already judged them as less than human and will "forget" to get them food and water. When Monday comes and the CID commander shows up, he will make the decision to cut the suspect loose because they never should have been detained in the first place, the detective(s) involved move on to the next suspect, and the cycle repeats.
It's disgusting and amoral and dehumanizing, and that's why I hated the job even though I had no direct involvement apart from running background checks and processing evidence.
Those most critical of the system tend to be former employees, and some lose their jobs because they speak up. I wasn't that brave, I kept my mouth shut while I worked there but that in turn led to me hating the job, and myself, even more.
And I'm not saying that my experience or the man in the article's experience is "normal", but it certainly does seem widespread. There have been efforts to improve the situation in pre-trial jails; the PREA (Prison Rape Elimination Act) has had a measurable positive effect on it even though its focus is more on post-conviction incarceration, but I have observed jail staff laughing at and making fun of county jail PREA training (I worked as a training officer for a while at the end of my career, my effort in trying to make things better). There is simply a lack of empathy and compassion that is part of the job requirement; if you show a heart you are pushed out of the career by the heartless ones. By and large, especially at the ground level in booking/processing/housing, the workforce is mostly bullies and sadists with a few more-or-less "normal" people in the mix to try and keep it under control.
Please note this is the process where I worked but it is different depending on where you are and which level of government you're dealing with (municipal, county, state, federal, etc.) Also I am not a legal expert, this is simply my experience while working in the system.
It varies by jurisdiction, but in most cases you can be arrested on a warrant and held for "first appearance" before a judge, where you are either formally charged by the court or the charges are dropped. If you are arrested on a Friday evening in a jurisdiction that doesn't do weekend court (usually smaller cities and counties), you will stew in jail until said first appearance on Monday. If the charges are dropped at first appearance (which is usually held by videoconference) you are released but the arrest remains on your record until you have it expunged. If you are formally charged at first appearance, you then either receive a bail/bond amount, are released on your own recognizance, or else bound over for indictment. Once indicted, you again may be able to make bail unless the charges/circumstances determine you are to be held without bail (i.e. you are considered a danger to yourself/others/your alleged victim, or you are a flight risk).
> The argument's too absurd for me to believe that it's being made in good-faith.
It's the US, so I have no difficulty believing this kind of thing can happen. All it needs is an overzealous DA office and state prosecutor with a hard-on for deploying these technologies to keep their conviction rates up (or think they can).
See also Aaron Swartz; who would have thought things would get so dystopian for that young man, just for downloading a bunch of science papers?
All the facial-recognition software is supposed to do is point out similarities in pictures, and then humans can see if they agree. That's it. None of the silliness after that is due to the software, anymore than it's due to the security-camera that captured the grainy-footage in the first place.
I don't think so. His point is that there are, surely, many more cases exactly like this (shoddy tips leading to wrongful detainment and subsequent poor treatment) where facial recognition software is completely absent from the picture.
This man was not "Wrongfully Arrested by Facial Recognition". He was wrongfully arrested (and criminally treated) by incompetent, capricious people and the bureaucracy that shields them. Bonus for the media outlets that could not care less about human rights violations or human dignity, but do have an axe to grind with Big Tech.
An accurate headline would be: Man Wrongfully Arrested by Corrupt State Government Tells Corrupt Federal Government His Story
They're not exclusive to the US, of course. But in the US, such detentions, or asphyxiations, or shootings, or deprivations of property ("civil forfeiture") by law enforcement and/or under colour of law, are why the streets have been in rightful protest and upheaval for much of the past decade.
Those protests are a continuation of those of a half century ago and more, many based on the broken promises of the civil rights reforms of the 1960s.
And whilst the victims of such actions are overwhelmingly traditional minority populations, the abuses are not exclusive to them. Anyone poor or othered is particularly vulnerable. Even city councilmembers, state representatives, congressional representatives, and senators have been beaten, detained, or denied boarding privileges on flights due to Kafkaesque procedures and brutish enforcement.
I agree. How is the 30 hours of no water and no food not the main point of this article? People will always get wrongfully detained for a while (no matter what technology is or isn't used); isn't the really important thing to treat detainees well?
Agreed. The sexy AI twist is just the squirrel bringing attention to normal human authoritarian behavior that has been going on for many thousands of years. In my mind the involvement of AI or facial recognition is immaterial.
The police believe that the software match proves he is guilty from the jump. The rest is gathering "evidence" to prove it further. Without the software, he wouldn't have been arrested. Guilty until proven innocent. The police are lazy, why investigate when the software means you don't have to?
- Reliance on snake-oil facial recognition software.
- Dependence on a known-unreliable but very entrenched technique (lineups).
- Inhumane treatment of prisoners.
Of these, the snake-oil software is probably easiest to go after, as it's newer. I assume he's made complaints about the other stuff, too.
I mean, I'd be shocked if it had happened here, but the latter two points, unfortunately, don't sound too out-there by the standards of US law enforcement.
> Research has repeatedly shown that facial recognition technology is fundamentally biased against women and people of color, leading to errors like this.
It’s not fundamentally biased. Many implementations have had imbalanced training sets, which is a different conclusion. But you can’t say CNNs are fundamentally biased against women and people of color. That’s just plain wrong and makes the entire argument look poor.
The same datasets given to human operators would not produce biased results.
And it isn't merely a question of unbalanced classes. The "bias" is, in part, arising from the inability of machine algorithms to apply rich models to datasets; and to treat "alike images" "alike" -- because the machine has no model of "alike".
In the case of images, luminosity values on an image sensor are not constitutive of human facial identity. Any algorithm unable to derive actual identity markers from these is necessarily fundamentally limited -- and routinely, these limitations lead to bias.
The issue is the limited ability to fit a morphological model to facial features which are "distinct from the ones you are used to".
Machines aren't fitting a morphological model. They are being fed 2D pixel patterns -- this information is profoundly insufficient.
"Contrast bias in pixel patterns" is not the same as "needing a few more cues to fit a morphological model". The former can never be made into an example of the latter; and therein is the problem.
It’s as insufficient as a human looking at a photo. We don’t outlaw humans looking at surveillance videos, so why is a machine fundamentally any more biased here?
Machines are simply using co-occurance relationships in pixel patterns.
Humans have rich models of other humans which we "fit" to the extremely shallow data present in a photograph.
This "fitting" is prior to our perception, so as soon as you're perceiving you're also using vast amounts of prior knowledge to reconstruct your environment.
The actual "photos" contain almost no information at all -- and lead to gross "stupid" errors.
Humans similarly can be reduced down to relationships of nearby photoreceptors. That's exactly how V1-V4 works in the human brain: building detectors that go higher and higher level from nearby bits of information from two 2D sensors. In the same way, most neural networks, CNNs included, do exactly this as well. CV models similarly get trained to produce their own higher-level abstractions of the world that they, like us, fit data to. And much like computers, we too can be subject to all kinds of gross stupid errors, like optical illusions, mistaken identities, etc.
The best CV models are often fine tuned from more generalized models that can recognize thousands and thousands of different kinds of objects, even if in the end they only need to find 1 kind. This is because they build generalized representations of the world.
None of this is to suggest there is any bias fundamental to them. We're starting to see the fruits of multi-task learning, where models are integrating across domains, which ends up improving accuracy even for just one modality by incorporating more world knowledge. So it's not like we've reached some kind of theoretical limit as to what they can do, and it turns out that theoretical limit is sexist and racist. Nothing could be further from the truth.
NNs aren't generative causal structural models of reality
they are models parameterised by pixels, not by jawlines
to mistake one for the other is to mistake and misunderstand intelligence
we are not in a humean nightmare of sensory input -- we have bodies which have causal contact with reality, and from these bodies we have built a structural understanding of reality
our perception applies this structure to our "sensory input" -- thinking doesn't terminate in the visual cortex, that is where it begins
by the time we are seeing we are seeing vastly more than is present on our retina
Correct, they should remove the word "fundamentally" and then the argument probably holds, because from what I've read those training sets were quite often imbalanced. So I have to wonder: where are the training sets coming from? Is it actually the same set being reused? Is there a job somewhere as a "training set creator"?
No, they were biased because they literally excluded certain races or had small numbers of certain groups. These training sets classically either came from celebrities, or researchers themselves - unbalanced classes of input.
Everyone mentions 'training sets' in 2021 as if the experts in this field have not been well aware of this issue for decades.
Even balanced training sets produce disparate levels of accuracy between races. It would seem computers are just less capable of recognizing certain faces.
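A minimal sketch of the kind of per-group audit this disagreement hinges on; the matcher, threshold, and data here are hypothetical stand-ins, and the point is only that error rates need to be reported per group rather than as one headline number:

```python
from collections import defaultdict

def false_match_rate_by_group(pairs, match_fn, threshold=0.6):
    """pairs: iterable of (img_a, img_b, same_person: bool, group: str).
    match_fn: any face matcher returning a similarity score in [0, 1]."""
    trials, errors = defaultdict(int), defaultdict(int)
    for img_a, img_b, same_person, group in pairs:
        if same_person:
            continue                      # false matches only involve different people
        trials[group] += 1
        if match_fn(img_a, img_b) >= threshold:
            errors[group] += 1            # two different people judged to be a match
    return {g: errors[g] / trials[g] for g in trials}

# If these rates differ widely between groups, a single overall accuracy
# figure hides exactly the disparity being described above.
```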
I mean, I'd read that as "facial recognition technology products which actually exist in the real world are fundamentally biased". The platonic ideal facial recognition technology wouldn't be, but it doesn't exist and will never exist so is hardly worth worrying about.
He wasn't arrested by facial recognition; he was arrested after facial recognition suggested that he might look similar, and then multiple humans looked at him and all the photos and (falsely) verified that yep, that's the one, there's a match.
The tech provided a candidate, but all the decisions were made by humans, who came to the same conclusions.
I think it's more about probable cause for an arrest, and that facial recognition is considered a valid, non-biased trigger for that by the police. The emphasis is on the "non-biased" here. It basically triggered the chain of events where none of the investigators really double-checked. It's kinda similar to the stop-and-frisk issue.
Yea but I think almost by definition the software will find a person that looks very similar to the actual perp. So then subsequent humans are more likely to make the mistake.
I don't see how this is much different than artist's sketches or just using the video itself, other than cops might put more undue stock in the software, and the obvious potential for mass surveillance.
Poor investigative work leads to bad conclusions. Maybe the answer is a combination of re-forming law enforcement departments, adversarial processes for attaching warrants to suspects (with a judge and defender, etc), and to stop blindly trusting processes that at best reduce the size of a probable suspect pool (of everyone) as being capable of positively identifying a single individual.
AI should stick to annoying customer support bots, funny deep fakes, and Instagram butt and face enhancement apps.
The fact that media hype and a bunch of workers overestimating their AI "code" have crept into our daily lives is simply appalling and shows how mediocre today's society is. This tech is not yet ready for what it's being used for and should take a sidestep for a long, long time. Perhaps an aid for filtering large datasets, but not much more.
This will become much more common. The accelerated adoption of surveillance tech in the past 20 years is terrifying. We are trending towards total surveillance, ML decision-making, centrally-controlled digital currency, domestic passports, and social credit systems. The solutions at this point, if any exist, are political. People need to reject the totalitarian techno-state through law. Leviathan won't/can't restrain itself.
> Williams was detained for thirty hours and was not given any food or water.
This treatment is inhumane and borderline torture. Especially for a SUSPECT. Even if it had been the right guy, this is far from an acceptable way to treat fellow humans.
The most important bit of data when deciding on things like this is what the normal wrongful arrest rate is and whether it's higher or lower with this tech. I don't see any discussion on this though.