It doesn't matter that there's an argument for this refusal being good business - the truth is, it has become the norm for big companies to be confident that consumer outrage won't translate into enough market impact to stop them from doing the bad thing. Nice to see an instance where that's not the case.
I can see why the US government is perturbed. The more patriotic thing to do would be helping the NSA, FBI, and a bunch of good, old-fashioned American TLAs build up our own government surveillance network against our own people, and maybe helping the CIA and US military build up their surveillance network of the Chinese people.
Second of all, most universities in China are affiliated with the government; this one just has a snappier name. It would also appear the two individuals in question are grad students at Simon Fraser University in Canada.
Third, MS has been catching, and will continue to catch, uneducated & poorly-researched flak like this for operating in China in any capacity.
Every major corridor is covered by LPR and facial recognition through the DEA's LPR initiative and CBP's border nexus surveillance programs.
That’s in addition to the wide variety of state, local and private systems.
Microsoft entered into a well publicized partnership with NYPD on intelligence/surveillance stuff back in 2012. They may have lines they won’t cross, but are hardly innocent.
Only civil government can stop the creeping nature of expanding military and paramilitary powers.
However, it'd make more sense to help your own establishment vs a foreign one.
MS being a global company, one has to wonder how much MS HQ can prevent other branches from engaging in very lucrative projects like that.
Either way, good reply. /s
Also, the government didn't knock Microsoft for refusing to help spy on U.S. citizens; it was mad that Microsoft helped the Chinese government spy on its own citizens, which it asserted was morally wrong enough to be considered a human rights abuse. My response was not criticizing the government for wanting national companies to help it; I was criticizing it for having a two-faced stance on governments surveilling their own citizens.
This attitude concerns me, a bit. You're mad at Microsoft (it appears) for something they haven't done, and for using "slick PR" that doesn't exist.
What's the basis for the anger?
Microsoft is going to have to behave itself for an equal or greater period of time before I risk thinking it has changed for the better.
Since Satya Nadella became CEO of Microsoft, exactly what history is there dark enough to blacklist the company from any positive reaction?
I AM asking.
Windows 10 and its heavy use of dark patterns for excessive data collection is reason enough for me to loathe the company.
Plus, those decades of ruthless antitrust/monopoly abuse by MS under Bill Gates have left their mark on the culture.
They are going to need to behave quite spectacularly for at least 20 more years before I even consider viewing them as a customer friendly company with a social conscience.
Dark patterns in Windows 10 were used to get people to upgrade, and the exec behind those decisions was let go.
What exactly constitutes "excessive data collection" to you? If you are saying "telemetry" then turn off all of your computers and devices forever, because they ALL do it. Microsoft is simply more open about actually doing it than most places.
Bill Gates has donated more money to charity than anyone in history, and through that charity work has certainly saved more lives than you or me.
Just admit that you like disliking Microsoft and that your opinion isn't based on anything real.
Don't care. They are still using dark patterns all over Windows 10 to convince people to disable various privacy settings.
It also doesn't matter that it was to get people to upgrade. The ends don't justify the means. Shitty behaviour == shitty company.
> What exactly constitutes "excessive data collection" to you? If you are saying "telemetry" then turn off all of your computers and devices forever, because they ALL do it. Microsoft is simply more open about actually doing it than most places.
Argumentum ad populum. "Other people do it too!" is not a valid defence of this behaviour.
I'm not going to argue with logical fallacies. The fact that the EU just passed the GDPR, and that people all over the world are waking up to the ramifications of dragnet data collection, is a sign of things to come: surveillance capitalism has peaked, and people, when they actually understand what is happening, do not like this behaviour. Keep using those dark patterns to deceive your users, though.
> Bill Gates has donated more money to charity than anyone in history, and through that charity work has certainly saved more lives than you or me.
Ah yes, the Bill & Melinda Gates Foundation. The colossal tax write-off scheme that allows Bill Gates to avoid having his wealth taxed before he dies, so he can make his children 'directors' of the foundation with security for life and get around that pesky vow he made about not giving his children all his money when he croaks.
The fact that he puts some money towards charity at the end of his utterly selfish life, to try to leave a legacy that is anything but negative, is transparent and shallow. Nothing will ever make up for what he has done in the past: the businesses he bullied into submission and the lives ruined by his antitrust practices and hostile attitude to open source.
So a lot of people are being saved now due to contributions made by the Gates Foundation. That's great, it really is. I still donate a larger proportion of my wealth to charity than he does. He's barely even trying.
Bill Gates is part of modern society's problem: billionaires who can't see that it is their own actions, and those of their ilk, that bring about so much of the misery they are seeing. The little good he is doing will never make up for what he is and what he has done.
Yet we were talking about Windows 10, and you started whining on about Bill Gates. I'm guessing you're a Microsoft employee.
I was only making a prediction about what I think they will do. I'm not particularly cross about it, I think this kind of behaviour is inevitable.
I hope I have assuaged your concerns.
Microsoft isn’t exactly working from a blank slate of reputation here.
Exactly why it's useful.
"Microsoft accused of being 'complicit' in persecution of 1 million Muslims after helping China develop sinister AI capabilities"
If I am not mistaken, shareholders are entitled to sue under the current doctrine if the company does not act in their best interest.
MS would arguably defend itself by making the case that key employees might resign, or key accounts might leave, or whatever else its legal team comes up with. MS might even win such a case, for all we know.
Still, a court case could be an unwanted distraction. Or an interesting test case, depending on viewpoint.
But I find "someone else might do it" to be a bit of a weak excuse to do anything. Every bad thing that COULD happen that HASN'T happened is because someone (usually multiple someones) chose not to do it.
It's really that simple. The first step in no one doing any particular thing is to not do it. And while there are cases where this forces a tough choice, the majority of such cases aren't that tough. Like here: The question is not "do we make mega bucks and be evil, or be poor but good". The question here is "Do we make mega bucks being evil or do we go try to make mega bucks elsewhere?"
I'm not saying nothing bad will ever happen - as you say, the will to do it is there.
But I'd much rather deny them the effort of all the people who COULD do it but choose not to.
After all, if you're doing something you'd rather not just because someone else might (or even probably will)...you're not very convincing that you'd rather not do it.
Personally I avoid having to make tough calls, but I respect people who think about it differently.
A world where we don't do something bad creates a reason not to do it. A world where we say it's okay to do the bad things because we assume that someone else will not only do them, they will be more evil than us (and here we are compromising our morals from the start) removes any such incentive and creates a race to the bottom.
This is either hopelessly naive or cynically deceptive.
Doesn't change the fact that Microsoft did the right thing.
No thank you. The tech world needs to acknowledge that the real danger is that facial recognition will be developed outside the public eye, without the input and oversight of the technical community that keeps the Orwellian tendencies many suspect in check.
Which means: lead from within. Push companies that want to work on this technology to engage with lawmakers to ensure that privacy rights are respected, and to ensure the technology does not falsely incriminate (and that, where it is uncertain, it is enshrined in law that it cannot be used to do so). Lead from within to establish limits on the use of the data, how it is stored, and how it is guaranteed to be deleted within a set period. Limit how law enforcement or other government agencies gain access to or use the data. There are no whistleblowers when we are not involved.
We either lead in making sure it works and respects our rights, or watch as it's done in a back-room deal with intelligence agencies and the hawks in Congress.
Microsoft is aware of the downsides of false positives. In this case, California police officers would have used facial recognition on individuals, but Microsoft recognized that false positives could disproportionately target women and minorities. That could result in massive backlash against the company.
So they are calling for greater regulation of AI-related tech, considering the human rights issues. No mention of what "regulation" could mean here.
A company can make moral decisions, and the notion that lawsuits over them are likely, or even plausible, is almost entirely mythical.
These are the perverse incentives when the standard is "equal treatment under the law", or "equal protection under the law". If they can find some way to treat everyone in a crappy fashion, the law doesn't have a problem with that. The successful class actions tend to come only when it can be shown that you are only treating some people crappy.
That would be fairly easy to show in this case. Simply have random white and black people stand in front of a camera running MS software and compare the accuracy rates. (Come to think of it, a demo like that would be pretty powerful in a courtroom too. Maybe there is more to this than just the PR that MS is worried about?)
I just realized how bad that sounded. To be clear, I'm not saying we should treat some people unfairly. I'm saying the law doesn't care whether we treat people fairly or unfairly, so long as they all get treated the same. I was positing that maybe it would be better if we could try to make the law incentivize treating everyone fairly.
Cigarettes are considered harmful to everyone; you don't magically gain the ability to resist addictive, harmful substances when you turn 21. Yet we ban their sale to minors, for similar reasons.
Nothing has ever had, nor will ever have, zero chance of false positives. Even DNA evidence can have false positives. Law enforcement, the justice system, and society overall have always had, and will always have, (highly imperfect) mechanisms to deal with nonzero false positive rates.
Therefore, it is of course important that the false positive rate not be higher than average for a subgroup, as that subgroup will be disproportionately affected: our systems will be set up to deal with only the lower, average false positive rate.
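To put rough numbers on that (entirely invented, just to show the mechanics): suppose a system is budgeted around a 0.1% false positive rate overall, but actually runs at 0.5% on a subgroup that makes up 10% of the population.

    # Invented numbers, purely to illustrate the disproportionate-impact point.
    population = 1_000_000
    subgroup = population * 0.10          # subgroup is 10% of the population
    fpr_overall = 0.001                   # rate the system is budgeted for
    fpr_subgroup = 0.005                  # 5x worse on the subgroup

    false_hits_subgroup = subgroup * fpr_subgroup              # 500
    false_hits_rest = (population - subgroup) * fpr_overall    # 900
    share = false_hits_subgroup / (false_hits_subgroup + false_hits_rest)
    print(f"{share:.0%} of all false hits land on 10% of the population")  # ~36%

Ten percent of the population absorbs over a third of the false hits, while the downstream processes were sized for the 0.1% rate.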
Algorithms 'discriminate' (as in differentiate) because that is exactly the job they are tasked with. Is this a picture of a person on list A or not? Is this a picture of criminal activity X? They discriminate on a large number of, often hidden, unknown or not understood features in the data.
In many countries the laws dictate that some features such as race, religion, sexual orientation, ... are protected, as in not legally allowed to be used in differentiation. (Take note that in many countries certain national security/safety related organizations are exempt from certain regulations).
The models used in facial recognition rely (in part) on 'sub-symbolic' probabilistic feedback systems that in many cases defy post-hoc rationalization: we do not have a convincing or specific 'story' about how 'the machine' decides in each case. This means we cannot deductively prove that the above-mentioned 'illegal' forms of discrimination were not used. (Note that it is not sufficient to show that e.g. 'race' or 'gender' was not explicitly used as a feature in the input, as it could be strongly correlated with other inputs or derivations thereof, e.g. type of shampoo bought, zip code, food preference, ...)
So we rely on things like testing the deployed, post-training model to 'vet' that the system isn't biased in the ways law and regulation have deemed it must not be. We test whether the output distribution shifts when we feed in only males vs. a mixed-gender test set, etc.
In practice this means that in compliance testing we replace deductive reasoning with correlation. We accept that this will yield false positives, but this choice is partly due to technical limitations (apart from a few well-publicized cases, understanding and explaining how e.g. a deep-learning-derived model actually 'works' in a rational, synoptic way is still beyond us), and partly due to ideological stances we have come to.
So, yes, we choose to accept false positives, provided they are evenly distributed across specific protected groups or features, while not 'making a fuss' over others. These are inherently cultural, political, moral and empathic decisions, not 'logical' ones.
1) "The algorithm has a 50% false positive rate over everyone" -> awesome, no problem!
2) "The algorithm has a 50% false positive rate of people of color, and a .01% false positive rate over everyone else." -> yikes, we can't use this thing!
Wearing masks in public is already a norm in some East Asian countries, and it's a form of fashion as well. Just add glasses to that, which will become common if AR smartglasses take off.
Maybe we'll see people walking around in fashionable neo-tribal-ish masks (as decorations around AR glasses) to hinder intrusive face/retina scanning.
> The "gait recognition" technology, developed by Chinese artificial intelligence firm Watrix, is capable of identifying individuals from the shape and movement of their silhouette from up to 50 metres away, even if their face is hidden.
Again, if those medical/hygienic face masks are OK, wouldn't decorative frills (say, feathers) around sunglasses be OK too? That combination would effectively cover enough of a face.
Celebrities and politicians have money and power to ensure they recover from this event, your sister or wife doesn't.
And has for some years:
While both are true, they aren't any help at predicting the level at which progress plateaus.
There's also supply and demand. Reducing supply of the technology increases its costs.
Plus, it's good PR for them.
Reducing the supply of this particular technology is not going to increase its costs, because literally anyone with a recent GPU and some motivation can just download it from GitHub at this point. The most worrisome user of it, the US government and its various three-letter agencies, already have and extensively use this tech. Casinos have been using it for well over a decade to spot people who are "too consistently lucky".
I do buy your point regarding good PR. It's just completely ineffectual wrt its stated goals.
I don't think you can, but it depends entirely on what you mean by "superhuman performance."
Or, if that's still too much work: http://vis-www.cs.umass.edu/lfw/results.html
Many of the other methods give you a worse than 50-50 chance of false positives by the time the algo gets to 95% true positives, but not being a statistician I may be reading the results wrong, in which case I'm totally willing to be corrected. Empirically, though, we know that the Super Bowl model has never worked.
How difficult is it to fix this bias? For example, the model can be told to only produce a match when a confidence level is higher than a certain threshold. Then the threshold can be increased as needed on those subsets of faces where training data is lacking. Would that work?
Also, why not build more diverse training data if this is a pervasive problem? It is not free, but neither is it cost prohibitive for someone like Microsoft.
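A minimal sketch of the thresholding idea, assuming you have similarity scores for pairs known to be different people, broken out per subgroup (all names here are hypothetical):

    # Hypothetical per-group threshold calibration; `nonmatch_scores` maps a
    # subgroup to similarity scores of pairs known to be different people.
    def calibrate_threshold(scores_nonmatch, target_fpr=0.001):
        # Lowest threshold that keeps false positives on the calibration set
        # at or below target_fpr (a match is declared when score > threshold).
        scores = sorted(scores_nonmatch, reverse=True)
        k = int(len(scores) * target_fpr)   # false positives we can tolerate
        return scores[k] if k < len(scores) else scores[-1]

    thresholds = {group: calibrate_threshold(scores)
                  for group, scores in nonmatch_scores.items()}
    # Then only report a match when score > thresholds[group] (or, more
    # conservatively, when it clears the strictest threshold of all groups).

The catch is that raising the threshold for an under-represented subgroup trades false positives for false negatives on that same subgroup, so more diverse training data is still the real fix.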
Yet, this article sounded like a PR piece from MS. Is it just me?
Has Reuters had issues in the past accepting work from journalists who are paid flacks?
Is my Spidey sense off here?
Huh? Isn't Menn's Reuters article the one that broke that news?
I'd also question how much the public interest would be damaged if that information hadn't come to light.
Would they reject any sales to Saudi Arabia, China, Israel, or even the US military if human rights concerns arise?
If this is part of a genuine ethical paradigm shift within the company, then I commend them for it. But if this is just a one-time PR move, then my opinion of Microsoft has dropped considerably.