Would be great to see Amazon's support.
The ACLU ran an experiment with Rekognition and these are their findings:
"Using Rekognition, we built a face database and search tool using 25,000 publicly available arrest photos. Then we searched that database against public photos of every current member of the House and Senate. We used the default match settings that Amazon sets for Rekognition.
... the software incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime.
... Academic research has also already shown that face recognition is less accurate for darker-skinned faces and women. Our results validate this concern: Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress."
> However, just from a cursory review of the mugshots, people of color were disproportionately represented in the mugshot database. So it’s not entirely fair to criticize the facial recognition technology for matching more people of color.
No, the point is the disproportionate representation, and it is a fair criticism, because it is a fundamental limiting factor in the use of the technology. You seem to be distinguishing the "technology" from the data source, but that is not possible. The quality of the technology depends on the quality of the data provided to it.
> I believe a curated list of mugshots with certain characteristics would result in a similar representation of mismatches. Nothing I have read about the technology suggests that there are inherit [sic, is inherent] bias.
This bias is in the data, not the algorithm per se. This is quite accepted in many, even most, criticisms of machine learning applications in the social sphere. It comes up a lot in NLP models, for example with assumptions about gender for professions.
It is a technology problem because it demands a technological solution, because removing any and all bias by hand-curating datasets is just not a scalable approach.
The algorithms reflect and amplify biases in the data. They also go into feedback loops once they affect the reality being represented in data.
Imagine that an algorithm selects individuals at an airport for search. Contraband smugglers are caught. The dataset now includes them, evolving the bias. We have already seen this at play on social media, recommendation engines, fraud detection, etc.
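The airport example can be sketched as a toy simulation. Everything here is made up for illustration: two groups with identical true contraband rates, a search policy that starts mildly biased, and an allocator that over-weights the resulting catch counts.

```python
# Toy model of the feedback loop (purely illustrative, no real data):
# two groups with IDENTICAL contraband rates, but searches start mildly
# biased toward group B. With equal true rates, the share of catches
# from B simply mirrors the share of searches aimed at B, and if the
# next allocation over-weights those counts even slightly (exponent > 1,
# e.g. a risk score trained on raw catch counts), the bias ratchets
# toward 100% despite a symmetric ground truth.
def next_search_share(share_b: float, alpha: float = 2.0) -> float:
    # Caught share equals search share; the allocator amplifies it.
    a, b = (1 - share_b) ** alpha, share_b ** alpha
    return b / (a + b)

share_b = 0.55                      # initial mild bias
history = [share_b]
for _ in range(10):
    share_b = next_search_share(share_b)
    history.append(share_b)

print([round(s, 2) for s in history])  # climbs from 0.55 toward 1.0
```

The exponent is the assumption doing the work: any superlinear reaction to the skewed "evidence" turns a mild initial bias into a runaway one.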
The future is worrying. There are ratchets on technologies such as these. Unless there are extraordinary counter-reactions, they will be "affecting their own datasets" en masse very soon.
The difference is that with technology these biases can at least be addressed directly, in an attempt to create an objectively fairer system.
Technology can address biases but we need to make sure that they are addressed sufficiently, so that we do not end up causing more problems than we solve.
This is a really interesting point that I'll keep in the back of my mind, thanks for that! I hadn't even thought it through in such a concrete manner but I think this really hits the nail on the head.
I actually think it’s probably better than most people’s naked eye recognition in a crowd.
Take DNA testing. We all know that given two samples the odds of a false positive are incredibly low. So DNA testing is a great tool for eliminating people who might otherwise be suspects, or for providing further evidence against the guilty.
The problem is that given this DNA database, someone decided "let's find our suspect by looking for a match". With 1 in a billion chance of a random match and 100M samples, your chances of getting a false positive are really high.
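The trawl math works out like this, using the comment's own hypothetical figures (1-in-a-billion per comparison, a 100M-entry database):

```python
# Database-trawl math (numbers are the comment's hypotheticals,
# not real forensic figures).
p_random_match = 1e-9        # per-comparison false match probability
n_comparisons = 100_000_000  # database size

# Probability of at least one innocent match when trawling everyone:
p_at_least_one = 1 - (1 - p_random_match) ** n_comparisons
print(round(p_at_least_one, 3))  # ~0.095: nearly a 1-in-10 chance
```

A roughly 10% chance of hitting an innocent person is a very different number from the "1 in a billion" a jury hears about the test itself.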
The problem is that there are instances where a DNA match alone is used to prosecute or even convict people.
In recent years this problem has gotten worse due to the rise of familial DNA matching. Given two samples, we used to only have the ability to say if they were a match or not. Now we can say how much of a match they are. How much of a partial match is enough? What's more, you may be implicated by the stored DNA of relatives.
Facial recognition is far more imprecise than DNA. So yeah I fully expect this to get abused by prosecutors and law enforcement.
If I increase the population I test to include any warm body, eventually I'll end up with more false positives than real positives.
If you are looking for one male suspect and comparing them to all the male faces (let's assume it gets gender correct) in the US, with a 99.9999% (six 9s) accurate algorithm you would get something like:
1 true positive
182,499,818 true negatives
182 false positives
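Those numbers can be reproduced directly from the comment's own assumptions:

```python
# Reproducing the back-of-envelope numbers above: one real suspect in a
# male population of ~182.5 million, with a 99.9999% accurate matcher.
population = 182_500_000
error_rate = 1 - 0.999999          # one mistake per million comparisons

false_positives = (population - 1) * error_rate
print(round(false_positives))      # ~182 wrong matches vs 1 real one
```

Even at six nines, the precision of a lone hit is about 1/183: any individual match is overwhelmingly likely to be wrong without corroborating evidence.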
Broad scale facial recognition is just an outright stupid thing to do as a sole measure of identification without the use of other information.
The hardest thing for them with probability is false positives. Conceptually, it does not seem to align well with how the human brain works. I've had students get so frustrated they cry, because they know what the math says but can't accept that you can test positive for a disease and not have it. They know they are wrong, but it's just this weird, sticky misconception.
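The classroom example works out like this (the prevalence and accuracy numbers here are hypothetical, chosen only to make the point vivid):

```python
# Bayes' rule for the disease-test example (hypothetical numbers:
# 0.1% prevalence, a test that is 99% accurate both ways).
prevalence = 0.001
sensitivity = 0.99        # P(test positive | sick)
specificity = 0.99        # P(test negative | healthy)

p_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
p_sick_given_pos = prevalence * sensitivity / p_pos
print(round(p_sick_given_pos, 3))  # ~0.09: a positive test, yet ~91% chance healthy
```

The counterintuitive part is exactly what the students fight: the healthy population is so large that its 1% error rate produces far more positives than the sick population does.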
This seems like a clear example for why facial recognition is a technology that is just not 'solved' yet. The appearance of people's faces, especially from similar ethnic backgrounds, is just too similar for a ML model to parse out with any confidence.
I have noticed this in real life. As I get older, I notice it more and more. I'm sure many of you all have too. There are very distinct patterns, or 'buckets', that human faces tend to fall in. I think our brains tend to naturally categorize them accordingly.
It is probably subconscious. I might not be able to articulate it, or put a definite 'name' on a group. But I know I am constantly seeing patterns of faces in public. People I don't know, and have never met, but they remind me of other random people I have seen in public. Or maybe they remind me of a popular celebrity that everyone knows.
Either way, something goes off in my head. I can't help but think to myself, they must have some sort of similar lineage, or genetic background. I subconsciously categorize them into a bucket with others I've seen.
I imagine this is similar to how Rekognition and other models work. I thought the blog post from the parent comment by @bko is a fantastic example of this. It is actually amazing, when you think about it, that the ML model can match these faces up as well as it does.
To the naked eye, it is clearly not the same person. Rightly so, considering all the images were in the range of 70-80% confidence. But many are remarkably close. I think this illustrates the concept I am trying to describe. You can notice it even with the naked eye.
All of this rambling is to say, I agree with Amazon's moratorium on Rekognition.
As impressive as the technology is, it should probably not be used to try to pinpoint specific individuals yet, or whatever else folks might be erroneously trying to use it for. If we are to trust facial recognition to identify specific individuals, it should probably be approaching near-100% confidence, and I imagine that level of confidence is a long way off.
Now, the chance that my neighbor looks like any one random person on the street is small. But the chance that we find two people somewhere who look very similar is incredibly high.
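This is birthday-paradox math. A sketch with a made-up bucket count (the one million "face buckets" is purely an assumption, not a measured figure):

```python
import math

# Birthday-paradox sketch of the doppelganger intuition. The number of
# distinct "face buckets" is a made-up assumption for illustration.
def p_collision(n_people: int, n_buckets: int) -> float:
    # P(at least two people share a bucket) ~ 1 - exp(-n^2 / 2B)
    return 1 - math.exp(-n_people ** 2 / (2 * n_buckets))

print(round(p_collision(50, 1_000_000), 4))      # among 50 people: tiny
print(round(p_collision(10_000, 1_000_000), 4))  # among 10,000: near certain
```

The pairwise chance stays tiny, but the number of pairs grows with the square of the crowd size, which is why "somewhere there are two lookalikes" is nearly guaranteed.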
"Sally Clark was a practising solicitor before the conviction. After her three-year imprisonment she developed a number of serious psychiatric problems including serious alcohol dependency and died in 2007 from acute alcohol poisoning"
There is already a whole slew of dubious methods (bite mark analysis, blood splatter analysis, fibre comparison) being used to put people in prison/to death, these things are known to be bullshit, but they are still being used. I really think we as technology-literate people should fight as hard as we can against the introduction of facial recognition in the justice system because once it becomes common place it will be really hard to undo it.
This of course means that there are doppelgangers or near-doppelgangers all over the place. Even funnier, when I first moved back here from the U.S. I was staying at a place where there was this guy who looked exactly like one of my friends from the U.S., even down to facial expressions and mannerisms (maybe that was because he was always extremely stoned).
Presumably you'd also use the same techniques humans use today to narrow things down further, like taking into account the location, time, etc. of the match.
Move fast and break lives.
For comparison, I did another test where I took one of those youtube videos that shows you the same person's face over ten years. I used the original 12 year old boy as the image I'm using to match. Over 1,300+ images, it had > 70% confidence in all but 4 images (2 had big sunglasses, one the guy was in green-face, and the other one was actually his wife). And this is from a single picture that's ten years old.
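A batch test like that could be sketched roughly as follows. The response shape mirrors Rekognition's CompareFaces output (`FaceMatches` entries with a `Similarity` score), but the AWS call itself is stubbed out here so the scoring logic runs without credentials; the boto3 invocation in the comment is illustrative only:

```python
# Score each frame against a single reference photo, using responses
# shaped like Rekognition CompareFaces output. In a real run, each
# response would come from something like:
#   boto3.client("rekognition").compare_faces(
#       SourceImage={"Bytes": reference}, TargetImage={"Bytes": frame},
#       SimilarityThreshold=0)
def best_similarity(response: dict) -> float:
    """Highest similarity in a CompareFaces-style response (0 if no match)."""
    return max((m["Similarity"] for m in response.get("FaceMatches", [])),
               default=0.0)

mock_responses = [
    {"FaceMatches": [{"Similarity": 92.4}]},
    {"FaceMatches": []},                       # e.g. big sunglasses
    {"FaceMatches": [{"Similarity": 71.3}]},
]

THRESHOLD = 70.0
hits = sum(best_similarity(r) >= THRESHOLD for r in mock_responses)
print(f"{hits}/{len(mock_responses)} frames above {THRESHOLD}%")
```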
In the first part, I looked at an open-source facial recognition model, which did considerably worse.
> > Only one in a thousand abusive husbands eventually murder their wives
> The more pertinent question is what percentage of murdered women were murdered by their abusive ex-husband?
Shouldn't this be "what percentage of murdered women with an abusive ex-husband were murdered by said ex-husband"?
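The difference between the two questions is easiest to see in natural frequencies. Only the "1 in a thousand" figure comes from the quote; the other base rate below is a hypothetical, chosen to show how the answer flips:

```python
# Natural-frequency sketch (hypothetical numbers except the 1-in-1000).
# Take 100,000 women with abusive husbands; suppose 1 in 1,000 is
# eventually murdered by him, and suppose a similar number are murdered
# by someone else entirely.
n = 100_000
murdered_by_husband = n // 1_000   # the quoted "1 in a thousand"
murdered_by_other = 100            # hypothetical base rate

# P(murdered by husband | murdered at all) is NOT 1 in 1,000:
p = murdered_by_husband / (murdered_by_husband + murdered_by_other)
print(p)  # 0.5 under these assumptions: half, not one in a thousand
```

Conditioning on the murder having happened changes the question entirely, which is the pertinent one once there is a victim.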
Yay, we have an algorithm judging people, but is it fair to Canadians? Completely off...
I know how ML works but I believe we should demand better from these technologies than the limits of our own biases.
Right now, this machine learning algorithm is apparently about as smart as a bigot arguing "yea but percentages show that crime is in fact higher among blacks!". It mainly shows how systemic the racism is, that a dumb ML algo picks up on it.
This is not solved by showing the bigot fewer statistics about black crime, but by showing them how to pull their head out of their ass.
We should expect no less from our ML technologies, otherwise you'll keep running behind the facts, always fixing errors after they have been learned and made.
Yeah that is hard and we have no idea how to approach it. But the alternative appears to be writing computer programs with the reasoning skills of a racist cop.
You're using the strawman and whataboutism logical fallacies, but you're not actually making a point.
From one of the links on the left of the article: https://blog.aboutamazon.com/policy/amazon-donates-10-millio...
> Update, June 9: Since announcing our $10 million donation, we’ve heard many employees are making their own contributions—and we’ve decided to match their donations 100% up to $10,000 per employee to these 12 organizations until July 6, 2020.
Fast Company writes about this as well: "The ACLU in both tests used an 80% match confidence threshold, which is Amazon’s default setting, but Amazon says it encourages law enforcement to use a 99% threshold for spotting a match." That bit of the article links to the CompareFaces API documentation, which (still) states "By default, only faces with a similarity score of greater than or equal to 80% are returned in the response".
Have you seen/read something else about this?
Then this whole thing is potentially misleading because there's a huge difference between 80% and 99%. It's probably nonlinear and they could possibly see their false matches drop to 0. This is not a fair test - or rather, the conclusions are not quite supported by the parameters.
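The nonlinearity is easy to illustrate with a toy model. Assume, purely hypothetically, that similarity scores between two *different* people are roughly normal; the false match rate is then the Gaussian tail above the threshold, which collapses much faster than linearly:

```python
import math

# Toy model: impostor (different-person) similarity scores assumed
# normal with mean 60 and sd 10. These parameters are invented for
# illustration; real score distributions are unpublished.
def false_match_rate(threshold: float, mu: float = 60.0,
                     sigma: float = 10.0) -> float:
    # Gaussian upper-tail probability via the complementary error function.
    z = (threshold - mu) / (sigma * math.sqrt(2))
    return 0.5 * math.erfc(z)

for t in (80, 95, 99):
    print(t, f"{false_match_rate(t):.2e}")
```

Under these assumptions the 80 to 99 move cuts false matches by a factor of several hundred, not by ~20%, which is why the threshold choice dominates the test's outcome.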
Not that I'm defending police use of facial recognition tech, I think it's abhorrent, though possibly inevitable.
I'm deeply troubled by the text I've seen here implying this threshold is some accuracy percentage or positive predictive value percentage. Unless God is working behind the scenes at AWS they can't make any claim about the accuracy of the model on an as yet unseen population of images.
That's even before getting to the more esoteric map vs territory concerns like identical twins, altered images, adversarial makeup and masks, etc.
As for the test, you say it's not a fair test. The point / conversation right now seems to be about the choice of parameters used by the ACLU. As far as I see / understand, the ACLU used the default parameters (and/or those recommended in the documentation / articles that are still up today with those same non-99% values).
What would have been a better / fairer test?
I would bet good money that cops' KPI goals benefit from false positives, since they'll reward a higher "number of identified/interviewed suspects" and "number of arrests" as a positive thing even if "number of convictions" doesn't line up.
Even more cynically, I'd bet this is a powerful technique for ambitious cop promotion, and that there's little blowback on fraudulently manipulating parameters that adversely affect POC much more significantly than white people.
Thinking about it, I'm now recalling the multiple reports of police departments claiming to not be using clearview.ai, only to have to backtrack when clearview's customer data got popped and it became public knowledge that individual cops were signing up for free trials - which their department/management either chose to hide or didn't know about. That's reasonably compelling circumstantial evidence to me that ambitious cops are quick to jump on unproven and unauthorised technology with insufficient oversight, or with management actively avoiding oversight for them...
If the default is 80, most will be 80. The SE may say, “I’m told to inform you that you should use 99,” but I’m sure he is winking.
My understanding was that the ACLU used the default settings.
July 26, 2018 — Amazon states that it guides law enforcement customers to set a threshold of 95% for face recognition. Amazon also notes that, if its face recognition product is used with the default settings, it won’t “identify individuals with a reasonable level of certainty.”
July 27, 2018 — Amazon writes that even 95% is an unacceptably low threshold, and states that 99% is the appropriate threshold for law enforcement.
Either way, the defaults are the problem if the application is law enforcement.
"Defaults have such powerful and pervasive effects on consumer behavior that they could be considered “hidden persuaders” in some settings. Ignoring defaults is not a sound option for marketers or consumer policy makers. The authors identify three theoretical causes of default effects—implied endorsement, cognitive biases, and effort..."
I don't think this 99% thing is communicated properly at Amazon if it's getting through blog posts like this.
So I think a valid criticism is that we need to make sure that it's higher.
With that said, some of their work is still great, and I'm thankful for it.
Remember how McDonald's was supposedly the victim of a baseless lawsuit? Well, that wasn't actually the case, but the narrative sure benefited corporations, who can now assert most lawsuits against them are frivolous.
‘The ACLU would not take the Skokie case today’: https://www.spiked-online.com/2020/02/14/the-aclu-would-not-...
Former ACLU board member Wendy Kaminer:
The ACLU Retreats From Free Expression: https://www.wsj.com/articles/the-aclu-retreats-from-free-exp...
“… to complaints of sexual violence. We will continue to support survivors.”
The ACLU Declines to Defend Civil Rights: https://www.theatlantic.com/ideas/archive/2018/11/aclu-devos...
Their coffee is just as hot nowadays, and lowering the temperature to the degree where the effect on Stella would have been meaningfully different would result in lukewarm, under-extracted coffee that fewer people would be interested in. Further, the sheer quantity of McDonald's coffee moved every year without incident implies user error, rather than product error.
Yes, I know what the jury said and how they divided up the blame. I disagree with their conclusion.
Stella's own doctor testified:
>Lowering the serving temperature to about 160 degrees could make a big difference, because it takes less than three seconds to produce a third-degree burn at 190 degrees, about 12 to 15 seconds at 180 degrees and about 20 seconds at 160 degrees.
The NCA (a coffee industry group) recommends holding at a temperature of 180-185, due to "rapid cooling", and consuming at or below 140.
Stella's injuries were exacerbated by:
* The hot coffee permeating through thin sweatpants and being held against the skin.
* Her age - 81 years old at the time of the injury. Older skin is damaged more easily, and would also have implications for her mobility (how fast she could remove the soaked sweatpants)
Some experiments show that coffee served at 180 will cool to around 162 in 5 minutes, 148 within 10, and 138 within 15. 70% of McDonald's business is through the drive-through, so most customers would be getting their coffee to go.
The question I'm unable to find a satisfactory answer for is how long it takes the average customer to receive their order and return home. I could probably figure that out if I knew how far the average customer was from their store, but that information is not readily available.
If holding at 180 results in an optimum drinking temperature of around 130 in about 15 minutes (per 4), then this is the optimal temperature to hold at for product quality if the average customer lives within 15 minutes of a McDonald's.
If you were to hold at 160, using the cooling figures above, the coffee would fall below this optimum temperature in about 10 minutes and require reheating, which alters the flavor.
: https://web.archive.org/web/20150923195353/http://www.busine... (page 4, bottom)
: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3377829/ (It is widely accepted that elderly burn patients have significantly increased morbidity and mortality. Irrespective of the type of burn injury, the aged population shows slower recoveries and suffers more complications.)
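The quoted cooling figures can be sanity-checked against Newton's law of cooling. A minimal sketch, assuming a ~72°F ambient temperature (an assumption) and fitting the rate constant to the first data point:

```python
import math

# Newton's law of cooling: T(t) = T_amb + (T0 - T_amb) * exp(-k t).
# Ambient temperature is assumed; k is pinned by the quoted 180 -> 162
# drop over the first 5 minutes.
T_amb, T0 = 72.0, 180.0
k = -math.log((162 - T_amb) / (T0 - T_amb)) / 5   # per minute

def temp(minutes: float, start: float = T0) -> float:
    return T_amb + (start - T_amb) * math.exp(-k * minutes)

for t in (5, 10, 15):
    print(t, round(temp(t)))   # 162 / 147 / 134: within a few degrees
                               # of the quoted 162 / 148 / 138

# Same model starting from a 160-degree hold:
print(round(temp(10, start=160.0)))  # ~133: near the ~130 optimum in 10 min
```

So the simple exponential model reproduces both claims reasonably well: the 180 hold lands near drinking temperature in about 15 minutes, and a 160 hold gets there in roughly 10.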
Bringing up Stella Liebeck makes me feel irrationally hostile; I've had a negative reaction to people "busting the myth" of that lawsuit on the internet for probably 20 years.
I certainly believe that suing corporations with deep pockets is a reasonable and moral way to deal with medical bills in a country without universal health care. It's not a good system, but if you can get away with it, why not?
And I know well that on average, the scourge of frivolous lawsuits against corporations is a myth, because I've worked in the legal industry and have a perspective based on many other lawsuits.
And if McDonald's served coffee without a properly secure lid, or some other defect then they should be held responsible for every penny of damages.
However, I am irritated by anyone who may insist that it is a "fact" that serving coffee which is hot (but less than 212 deg) is negligent in itself. And if I continue to see this "myth busted" or "fact checked" for the next 20 years, it's not going to change my opinion, because assuming I live that long, I'm going to be boiling water for coffee on my stove almost every day.
It was known by management to be dangerous; that was done deliberately so that people would be forced to sip slowly, to discourage refills. That's not mere negligence, in light of the intent.
The point I'm trying to make is not that coffee should be hot, because I know arguing that is futile.
The point is, that framing this as a disagreement about readily available facts is incorrect and if you habitually interpret people's opinions/values this way, it warps your sense of reality to your own detriment.
I think anyone who wants to prevent me from getting hot coffee is not a nice or reasonable person, and I feel threatened by any implication that I would be in the wrong for making hot coffee if someone else spilled it. But these are facts about me, not about the rest of the world. As such, you don't have to accept them, but you can't invalidate them with facts about the world either.
But it's also true that responses like the one you linked to are often one-sided. And what would you expect from an association of lawyers who make money filing such suits?
The truth is, it's not as simple as either side tries to make it out to be. I think the Wikipedia article: https://en.wikipedia.org/wiki/Liebeck_v._McDonald%27s_Restau..., does a good job of presenting pertinent details from both sides.
> McDonald’s operations manual required the franchisee to hold its coffee at 180 to 190 degrees Fahrenheit.
> Coffee at that temperature, if spilled, causes third-degree burns in three to seven seconds.
They might also be interested to know it was composed primarily of dihydrogen monoxide, a lethal chemical agent known to be the proximate cause of hundreds of deaths each year. They are describing boiling water.
Even if McDonald's did the wrong thing, it is a frivolous lawsuit. I serve boiling hot tea to all my guests, it isn't negligence. The case as described seems to be that McDonald's should be civilly liable for serving a hot beverage in a styrofoam cup to a customer who asked for a hot beverage and could easily detect it was in a styrofoam cup.
You hand styrofoam cups out your window to guests you know won’t be staying? Curious social habit.
> a customer who asked for a hot beverage
Coffee is brewed at very high temperatures but is rarely - if ever - consumed at those same temperatures.
For most people, getting a cup of coffee that is near boiling would make for a very unpleasant surprise.
Huge fan of the ACLU in general, but some of the highly unpopular but incredibly important work they used to do, for example defending neo-Nazi groups right to protest, has been discarded in favor of popular causes.
While I understand why they have gone this way (who wants to defend Nazis in court?), it was a very important symbol of the organization's insane dedication to civil liberties. Taking a principled stand of preserving freedoms for those who are deeply, deeply unpopular is inconvenient and essential.
The ACLU's social media accounts are a mess as well. Their postings come off as unhinged and sue-happy, and the fan base of commenters has become so one-sided, that I think the ACLU simply caters to that vocal audience now. Maybe the change is not solely attributable to that - there might also be a new generational wave of inside actors that simply operate the ACLU in a more ideological manner.
I agree that some of their work is still great. But unfortunately it's been enough of a change that upon weighing the good and bad, I had to finally pull the plug on my recurring donations too.
Just because you share the bias doesn't make it unbiased
Trans women are women. These schools are denying this with this idea, so it seems like a good use of the ACLU's resources to me. Morality aside, I don't know about the legal aspects of defending that principle in the US, but presumably the ACLU have some ground.
>, so that those sports are competitive and fair.
[drifting off the OP topic, but...] shouldn't school sports be about inclusion? what is your ideal of fairness here?
For me, the pursuit and rewarding of certain idealised body types selected by narrow athletic criteria reminiscent of pageantry has never been 'fair' for anybody.
School sports ought to focus on inclusive physical self-improvement, help kids develop cooperative skills, resolve conflicts and work together for a common goal, that sort of thing.
Trans women are women from a gender perspective, and not from a biological perspective. Unfortunately, the biological perspective is what makes the separation of men's and women's sports a thing, because of inherent biological sexual differences (primarily testosterone). I mean, if they want to get around the issue, then let's go all in - disband men's/women's sports and just have sports. Then anyone can play with anyone. That's never going to happen though, for political, cultural, and safety reasons, so we're stuck with a situation where gender is bumping up against biological sex hard.
From a biological perspective, trans people lie in between male and female. Trans women are at an increased risk of breast cancer compared to cis men, and a lower risk of prostate cancer. Trans women (after years of estradiol) have significantly less muscle mass than cis men, while trans men have much more muscle mass than cis women. All of the biological differences are the result of sex hormones, the time at which they're introduced (prenatal or puberty), and the length of exposure.
That said, I don't think it's wise for trans-women to play in serious sports. If they win, they won't get the credit, they'll be told it's because they're trans. If they lose, well, then no one cares and it doesn't make the headlines. It's unfair and unjust, but honestly, sports have never been fair or just. Especially the Olympics, they're a selection ritual for celebrating people who are genetically optimized for some specific task. I don't understand why it exists except for out of tradition.
The challenge is that trans women are women who used to have comparatively vast quantities of testosterone in their bodies, giving them dramatically higher bone density along with all of the other side effects of testosterone on the body.
When they transition fully, the estrogen has a side-effect of preserving the bone density they had previously. Specifically, this gives them a massive advantage in combat sports like MMA. I won't get into the other musculoskeletal aspects that happen.
It's a vexing problem, because I don't like the idea of trans women not being treated like women. It sucks to think about that. It also sucks for non-trans women who are getting their skulls fractured in fights.
The reality is that fans of non trans female athletes aren't going to accept this. When a tiny percentage of the population is trans, and suddenly trans women start winning at the highest levels of female sports at an insanely disproportionate amount, it clearly indicates that being trans provides a massive advantage in female sports. I haven't found any cases of trans men dominating in male sports.
I'm pro trans rights, and also know that my view on this is deeply unpopular in the trans community. I'm not sure what the answer is for this. Sometimes, reality doesn't mesh with our ideals.
Would it work to just group people by bone density and weight class?
Or for MMA, just some sort of scale of ass-kickingness. I would have to be in the can't punch out of a wet paper bag class, which I don't think would get shown on TV. ;)
The problem now is that we're being polarized on an emotional issue with both sides being demonized. E.g. One side says the other is hateful of trans people, the other side says trans is being pushed on them unfairly, etc. If we rather focus on the facts and amoral concepts (e.g. testosterone count), and drive opinion/policy based on that, there is good chance that we can all coexist without having to force opinions and understanding on both sides where it's largely unwanted. Only then can we come together and have a single, happy view on the topic as a society.
Perhaps there needs to be new vocabulary to describe sex and gender separately but that's a different topic. As for fairness, this is the best we can do on a broad scale between male and female because of the vast differences. After that we leave it to the individual to decide which sport best fits their physicality and interests.
I was going to post the same in anticipation of the partisan downvotes that you're receiving. Without going into specifics, this sums it up nicely:
>It’s not that the left shouldn’t have opportunities to speak up against the president’s agenda -- of course it should. But the ACLU shouldn’t be its political bullhorn. The organization’s legal independence gave it special standing. By falling in line with dozens of other left-leaning advocacy groups, the ACLU risks diminishing its focus on civil liberties litigation and abandoning its reputation for being above partisanship
One issue in particular is the ACLU's interpretation of the second amendment, which they do not fight for with the same fervor as the first.
Lol, that's because it's Amazon AI. Do you expect better from the makers of Alexa?
But most likely people and organizations will think this work like the movies.
I wonder what would be the score of the actual faces if we added them to the test set (faces, but not the same photos). Would bigger test sets have photos that match the targets more than themselves?
What percentage of the arrest photos were people of color? Was it significantly more or less than the 20 percent people of color in congress or about the same?
Do you understand what a badly designed experiment this is?
>> "The latest cause for concern is a study published this week by the MIT Media Lab, which found that Rekognition performed worse when identifying an individual’s gender if they were female or darker-skinned." 
I can't really comment. Just recalled this in the memory banks and thought they might address this directly [they may have].
1 - https://www.theverge.com/2019/1/25/18197137/amazon-rekogniti...
As an extension of this, photographs of individuals with darker skin required more lighting than photographs of individuals with lighter skin.
I don't know all of what goes into the ML for facial recognition and I am sure there are people far smarter than me working on it (and making way more money than me to boot), but I guess my thought here is that some variation of Poe's Law applies. I know that people are quick to jump to condemn something as racist but sometimes there really are just honest mistakes.
I have a hard time believing that anyone at any level of the AWS structure set out to produce a racist facial recognition system; rather, it may have just been an honest oversight, and rather than rushing to crucify them we should instead look at it as a learning opportunity to help further develop the field of facial recognition.
EDIT: I wanted to clarify that although I don't think it was done purposefully, I don't think that means it doesn't bespeak a problem; my intention was rather to suggest that we should sometimes temper the often strong reaction produced when labeling something racist, and focus our efforts on identifying and solving the issue rather than trying to act punitively. To forestall objections: I do recognize this is an issue that requires correction, and that it bespeaks a larger societal problem that has real consequences for real people every day, but in my experience we will get more progress by attempting to work together in a spirit of cooperation rather than a spirit of anger and vengeance.
We've been around this block several times before and while the quarry may change from photo accuracy to ML driven facial recognition, the hunt does not. There's no excuse for selling technology to industries facing (or creating) life and death situations when the bugs are so obvious.
It's also not purely a technical problem, if you feed the model only pictures of white and asian male college students then it's no surprise when you get a model that biases towards recognizing white and asian male college students (which is exactly how several prominent models were trained).
I personally feel it's wrong but that's one thing I've always got hung up on in building a critique.
This has nothing to do with machine learning. It is a simple correlational situation.
If African Americans have, on average, poorer credit ratings, then correlational models will begin to equate race with poor credit ratings, which will impact their ability to get credit and hence feeding back on that mechanism.
...of course RACE isn't allowed to be factored into financial applications, so the applications will often use other data points, like zip code, that end up having a correlation to bad credit as well as race. ...often producing the same result.
The problem isn't with the models - it's with reality.
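The zip-code proxy mechanism described above can be sketched with synthetic data. All of the rates and correlations below are invented for illustration; the point is only that a model which never sees group membership still reproduces the disparity when a correlated proxy carries the signal:

```python
import random

# Synthetic demonstration of proxying (no real demographics): the
# "model" only ever sees zip code, but zip correlates with group and
# the training labels carry a historical bias, so scoring by zip alone
# reproduces the disparity anyway.
random.seed(42)

rows = []
for _ in range(20_000):
    group = random.random() < 0.5                  # True = group X
    # Residential segregation: zip 1 is mostly group X, zip 0 mostly not.
    zip_code = 1 if random.random() < (0.8 if group else 0.2) else 0
    # Historically biased label: group X approved less often.
    approved = random.random() < (0.3 if group else 0.7)
    rows.append((group, zip_code, approved))

def approval_rate(pred):
    sel = [r for r in rows if pred(r)]
    return sum(r[2] for r in sel) / len(sel)

by_zip = {z: approval_rate(lambda r, z=z: r[1] == z) for z in (0, 1)}
by_group = {g: approval_rate(lambda r, g=g: r[0] == g) for g in (False, True)}
print("by zip:", by_zip)      # zip 1 scores markedly lower
print("by group:", by_group)  # the disparity the zip-only model reproduces
```

Dropping the protected attribute from the inputs changes nothing here, which is the commenter's point: the correlation structure of reality, not the algorithm, carries the bias through.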
The author famously said "Math is Racist". It's hard to get over such stupidity.
If so they seem to make the point well.
If the only information you have about a loan applicant is where they live, your decision will be 'biased' if the location of where someone lives is correlated with other factors (as opposed to, say, the fact they live on a flood plain means don't give them a loan).
In this context, saying "Math is Racist" is like saying "Physics hates Fat People" because gravity disproportionately affects heavier people. Accurately reporting what is happening is not biased, making decisions without considering [edit: or not making a decision because you didn't consider] the context is biased.
Maths is a tool (well, collection of tools), and the onus is first on the tool user to use it in a fair way. Yes it is important for educators and tool creators to be mindful of how these tools will be used in practice, but there is a big jump from that idea to "Math is Racist".
I think these organisations are criticising the tool builders for creating tools that are easily misused (or are created with unreasonable limitations, like only being valid for university students at one university, but are sold as widely applicable).
Supporting affirmative action initiatives like you list is trying to address the biases that exist in reality. I think this is often a bit backward (not addressing the root cause), but it can be expensive (in time, effort, money, politics) to address the actual root cause, so these programs aim to address the bias at the place it manifests.
This is a similar (dare I say pragmatic?) argument to "it would be cheaper and more effective to just give everyone a no-strings-attached payment each month than to provide means-tested payments to those who need help".
Determining if these arguments are correct is a different thing altogether, and I have no idea if these programs are cheaper and more effective than dealing with the root problem, or if it's even possible to define and address the root problem in the first place!
The two things you contrast above are fundamentally different - one is criticising tools and tool builders, the other trying to address perceived biases in the world.
> but not Asians for some reasons
Asian people are distinct because so many of them have immigrated recently, and immigration requirements favor educated and well-off folks. That masks many issues because they should have better than average outcomes due to better than average education and skills.
On a side note: welcome to the Twilight Zone.
You realize this too is illegal right? The law doesn't say "you can't use race" - instead it says (paraphrased by the Brookings Institute): "Are people within a protected class being clearly treated differently than those of nonprotected classes, even after accounting for credit risk factors?"
O'Neil points out that math is often used to obfuscate this (whether it be deliberately or not). This is a valid point, and one that people who think of math as a values neutral tool should consider.
I didn't love the book, but it's difficult to make the argument that she is stupid.
This is plain physics, no? Things are darker or lighter depending on the amount of light they reflect.
> I know that people are quick to jump to condemn something as racist but sometimes there really are just honest mistakes.
There are far too many things at play here. In a fair and just society, this kind of issue (like the Xbox Kinect issue) would be met with an "oops, forgot to account for individual light absorption variance". A fix would have been issued and that's that.
Now, the problem starts when you begin digging. Why was this problem not caught? Because QA teams didn't catch it. Why didn't they catch it? Because the team wasn't very diverse, so testing failed to catch the problem. Why wasn't the team diverse enough? ... and now I've entered a societal rabbit hole that's far too complex for this post.
> I have a hard time believing that anyone at any level of the AWS structure set out to produce a racist facial recognition
Yes, highly doubtful. No benefit, and major issues if caught. It is far more likely that the dataset itself was biased. Why was it biased? ... and there we go again.
> we should sometimes temper the often strong reaction produced when labeling something racist, and focus our efforts on identifying and solving the issue rather than trying to act punitive.
Agreed, in principle. In practice, this produces no results in a mostly racist society. Companies (and politicians) will listen to outrage, they won't listen to well articulated and well-reasoned comments.
>This is plain physics, no? Things are darker or lighter depending on the amount of light they reflect.
Not quite. I don't know all the technical terms, but apparently, photograph technology early on settled on some standards and made some design choices that made it easier to photograph white than black people. That set a long-term precedent and standard for how film should work that persisted for a while and even to the present.
>>>Until recently, due to a light-skin bias embedded in colour film stock emulsions and digital camera design, the rendering of non-Caucasian skin tones was highly deficient and required the development of compensatory practices and technology improvements to redress its shortcomings. Using the emblematic “Shirley” norm reference card as a central metaphor reflecting the changing state of race relations/aesthetics, this essay analytically traces the colour adjustment processes in the industries of visual representation and identifies some prototypical changes in the field...
Lay article on the topic:
People want some privacy in public. A tech that can track or backtrack people's movements is kinda creepy in a few ways.
That's a little weak. If they were serious, the moratorium would extend indefinitely, or until such rules were in place.
One year might just be long enough for the fervor to die down, so they don't take such a PR hit when they resume sales.
Plus corroborating anecdotes from people I've met at the protests.
I always make sure I'm out of the way when these monkeys start kettling protestors
Why does that need an explanation?
E.g. they can cross-reference timestamps and see which individuals were close to violent altercations; they can then build up a solid case for who to interview and investigate further. Or maybe an actual crime happens during the protests and they need to investigate further, etc.
Honestly, as a side note, I personally find it very worrying that I have to justify such policing to a tech crowd. We can have responsible use of facial recognition and data-gathering / profile-building, and it need not be a privacy issue. Right now, we sit with a situation where violent/aggressive/illegal behavior is being allowed to transpire during chaotic protests, and it slips through because of the chaos and sheer scope and size of the protests. There is no way that traditional "policing" can combat that, and I fear we're emboldening criminal elements to take advantage of peaceful protests because we're unnecessarily tying the hands of the police over nebulous "privacy" concerns.
And yes, burning/looting is a criminal act and shouldn't be tolerated during protests.
There is a direct line between police use of Rekognition and the pretence to squash the freedom to protest. But you knew that, you just purposely have your blinders on to protect your worldview. Enjoy your leather sandwich.
No one is saying there isn't a right to protest. The police have a duty to enforce the law and when protests turn violent they have the support of the majority to enforce that.
This method means minimal engagement with the crowd.
You are a cartoon
Facial recognition technology is, after all is said and done, probably illegal in any country that implements the protection of basic human rights into its national laws. If countries (including the USA) do not, that says enough on its own about them. Nor has the idea of (national) exceptionalism ever produced a more equal and/or fair society. AFAIK, not a single one in all of history.
For those who like to argue that such technology could be legalized when people (collectively) agree with its use through political consensus (aka "the people", through politicians, voted for it), there are good reasons for why basic human rights are defined as "inalienable". Regretfully, many countries have nonetheless ignored that fact, whenever it suited the personal interests of politicians and those that stand behind them in the shadows.
“Fascism should more appropriately be called Corporatism because it is a merger of state and corporate power”
― Benito Mussolini
(while cynically trying to link an organisation who built themselves providing IT services to nazi genocide, with the ethical side of the current police brutality protests... :sigh: )
Well, that is categorically wrong, given the history of computing.
But anyway, the implication in the previous comment was that IBM were built on the business of exterminating Jews, which is frankly ridiculous given that they started business 30 years before the Nazi party even came to power.
Note, I'm not claiming IBM didn't get involved with the Nazis. The German subsidiary certainly did business with the Nazis, including with their processing of Jews and minorities. Thomas J Watson even received an award from them. But, IIRC, he realised he'd been set up as a publicity stunt and gave it back. Once the war started, the German subsidiary bought themselves out and became independent.
It should be noted that IBM in the US has a history of introducing policies ensuring equality and diversity in employment that precede similar federal legislation, sometimes by decades.
And yet it's only the Nazis thing everyone brings up.
From that, I felt like it doesn't work and shouldn't be used in production, never mind police production.
Separately, we experimented with various vendors "face detection" (not whose face, but rather just "is there a face") just to see how many faces appeared in a photo, because for group loans you needed at least 75% of the borrowers present in the photo. If this didn't happen then it meant the loan didn't get posted and someone would have to go back and get all the borrowers together again for another photo which is laborious and inefficient. Much better if you could give the feedback upfront. Granted, as I noted in another comment, at the time all the major vendor's tools had abysmal accuracy and we abandoned the effort.
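The presence check described above amounts to a one-line threshold rule once a vendor has returned a face count. A minimal sketch (the function name, signature, and rounding choice are my own, not from any vendor API, and the face count is assumed to come from whatever detection service you use):

```python
import math

def enough_faces(detected_faces: int, group_size: int,
                 required_fraction: float = 0.75) -> bool:
    """Return True if the photo shows at least the required share
    of the borrower group, rounding the requirement up so a partial
    person never counts."""
    needed = math.ceil(group_size * required_fraction)
    return detected_faces >= needed

# e.g. a 5-person group needs ceil(5 * 0.75) = 4 faces in the photo,
# while a 4-person group needs ceil(4 * 0.75) = 3.
```

Rounding up with `math.ceil` is a deliberate choice here; truncating instead would let a 5-person group pass with only 3 faces, which defeats the "at least 75%" rule.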
Also, there seemed to be no substantive discussion prior to this about the police using Rekognition, until it became a hot-button issue. What will the widespread effects be if corporations start allowing their decisions to be governed by <outrage of the mob/principled consumer pressure>?
Finally I wonder how they will implement this, I mean after all I can sign up and start using any AWS service with just a credit card what's to stop police departments from simply using a corporate card and signing up for a different account? Also does this apply to just local PDs or does it extend to the FBI, NSA, CIA, or other 3 letter government agencies?
Disclaimer: These comments are intended to be observational, not advocational.
Surely there's better oversight in police management to prevent that?
Well, you know, except in Australia where the cops lied to the public about using clearview:
Or New Zealand:
Or the UK:
And I'm sure all the 600+ US law enforcement agencies here went through proper approval channels and oversight:
Just because you don't know about it doesn't mean it doesn't exist.
Try google "aclu facial recognition", "eff facial recognition".
congress.gov returns 132 bills introduced in the last two sessions going back to 2017. If you read the titles, it's clear many of them are related to transparency and respecting rights to privacy.
"Also there seemed to be no substantive discussion prior this about the police using Rekognition until it became a hot button issue."
If you meant that it wasn't high in public awareness or concern, isn't that tautological? It wasn't high in public awareness or concern until it was a hot-button issue, but the definition of becoming a hot-button issue is that it becomes high in public awareness or concern.
Am I misinterpreting something? Did you mean something else and I just got it wrong?
Also, to call this "mob outrage" or "principled consumer pressure" is delegitimizing the entire thing. Do you really genuinely think this happens any other way? Seems like when a lot of people start to have a problem with something, it makes sense to have a moratorium and investigate improvements/solutions.
Yikes. Is there a fallback option?
Note that this is a one way relationship, corporations must comply with laws, but can also do other things.
I'll gladly accept additional benevolence from them though! Just not as the sole power in the area.
Sadly, global corporations seem to be mostly able to choose which laws they want to comply with by shifting jurisdictions at will... "Oh no, for _tax purposes_ we're an Irish company! For privacy purposes we're European. For Intellectual Property purposes we're a Delaware C Corp. And, ummm, that department that doesn't exist is officially deputised by the Saudi Royal Family."
Hoping that police/tech companies/military/etc are moral isn't an actionable plan.
People in the armed forces need to be put in the same boat I'm in.
Maybe they could work some kind of penalty into the contract? Something like "if you're working on behalf of a police department, you are forbidden from using our facial recognition services. If you sign up despite this term, we will cancel your account and retroactively bill you $100,000 or your usage at a rate 1000x normal, whichever is greater."
This question just does not work when you consider how much companies spend lobbying.
Corporations have no such morals. They are profit seeking social constructs. Breaking the law is often a profitable cost of doing business, as is making an ever increasingly shitty product when there is little to no competition.
Arguing corporations have no morals is being pedantic. The question is clearly, what moral obligations _should_ they have?
>The question is clearly, what moral obligations _should_ they have?
If moral obligations do not exist, as you claim, why should we pretend they should?
"We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested."
Dead giveaway is that Legal and PR teams relentlessly edit out self-agency.
Or you know... you could do it yourself. Ethics don't have to come from regulations.
But Amazon can only control its own offerings. It can't control what any other company that offers facial recognition does, and they probably know that as soon as AWS steps back, some other AI company with less ethics (or cynically, less care of public pushback) will swoop in without hesitation. The only way to stop that is regulations.
While it's good that they are doing the moratorium, I think it's hardly applause worthy for them to have needed this much backlash to act.
Also, I'm not impressed by the argument that other people might offer face recognition. This is about Amazon's actions.
Amazon can keep the sharks at bay for a year and cry for help, but if regulation is too late, they're going to be eaten for lunch by shareholders and they know this.
Shareholders do determine whether a company does something ethical or something profitable. And by the numbers, they choose profit unless it would cause public outcry (and sometimes despite that).
Last year, only 2% of Amazon stockholders voted to ban the sale of facial recognition software to the government, and only 28% even wanted a report on possible threats to civil liberties.
> They care about making money, but no one is going to sell Amazon because they’re not continuing a single low profit line item.
This is true, but I don't understand how it's relevant.
> No shareholder is condemning Apple for not making Rekognition
Nobody is condemning Apple because it would be more of a risk for them to develop it, since they would have to do a larger pivot from their current core product. Amazon in contrast is already in the business of selling cloud services, so it's a product with a straightforward path to profitability.
> and Amazon wouldn’t be killed for dropping it.
My point is that Amazon won't drop it in the long term. The 1-year moratorium is to cover their butt until they can figure out how to sell the technology to the police without becoming the scapegoat for the recently news-blasted civil liberties movements. If I were in their position I'd do the same.
Without any kinds of laws, wouldn’t things like this incentivize new niches to popup to milk money from the government?
If it's a third party with a new name unrelated to law enforcement, that complicates the chain of custody and probably wouldn't be worth it to any agencies to set it up to skirt a 1 year moratorium, even if someone at the agency thought it was a good idea to try and flout Amazon's policies (they definitely won't, agencies just don't move that quickly).
That being said, there's no way to guarantee that it won't be used, but it would be difficult for LAPD to be running at scale with nobody raising any flags.
Can AWS see the images used with Rekognition?
> Amazon is known to have pitched its facial recognition technology, Rekognition, to federal agencies, like Immigration and Customs Enforcement. Last year, Amazon’s cloud chief Andy Jassy said in an interview the company would provide Rekognition to “any” government department.
> Amazon spokesperson Kristin Brown declined to comment further or say if the moratorium applies to federal law enforcement.
It seems that Amazon has a far better reputation on HN compared to Clearview AI. Is that deserved?
I welcome this move from Amazon, but I hope it doesn't foreshadow more moral bans in future e.g. spurred on by the next angry mobs who will try to limit free speech in society.
If anyone has a hard time empathizing here, imagine your kids in the false positive person's shoes.
1. I am against places that say "photography prohibited - private property" — if I can see it, I should be free to photograph it.
2. I am against ANY use of facial recognition ever, anywhere. I own my face and I am allowed to keep it private if I choose to.
So, yes, completely schizo, and I realize this.
But it's not an evenly distributed spectrum of a problem. It's a weighted web of nuanced issues.
I just don't know how to balance it.
I'd love to discuss this if anyone is open.
You are in control of whether your face can be seen
We don't need this kind of hyper-surveillance for common crime or for people with warrants; it's just too much of an intrusion.
I can see this being used in certain places for 'high value individuals' such as those marked by the FBI (major crimes, multiple murders) or literally 'terrorists' - but for regular crime, I think it's way too much.
We can't be under constant surveillance by the police computers that's just no way to live.
Since that happened, there have been dozens (at least) of murders and vicious, life-changing assaults, most captured on video. I'd be very happy to see every one of the bad guys identified, and this seems like it would be effective toward that end.
They're leaving money on the table, but it will still be there in a year, and they'll only miss whatever Amazon's functional analog of "compound interest" is.
The optimistic view is that this moratorium is to see if police departments truly do reform themselves over the next year to the point where they can be trusted to use facial recognition again. I hope a reevaluation takes place then.
CTOs of city and law enforcement orgs are probably seriously questioning the vulnerability of relying on cloud SaaS.
The reason why super heroes wear masks and capes is to avoid facial and gait recognition cameras!
"hasn't passed a thing" sounds like the end of a Congressional funnel.
edit: this just purely a retort about the specific complaint of the parent. I don't deny that Congress hasn't actually done much useful to forward the policy changes I would deem desirable here.
And of course you cannot require citizens to show their faces and to cover their faces at the same time, so trying to publicly denounce people with covered faces as delinquent wannabes is not possible at this moment.
But I'm just speculating.