This stands in stark contrast to the meeting I attended on Tuesday with Amazon's general counsel regarding their Rekognition service. There was a near complete rejection of the idea that mass deployment of surveillance technologies in today's largely unregulated environment posed any danger to civil society. He also denied that Amazon had any responsibility for the negative impacts of their AI/ML technologies, or any role to play in industry efforts to self-regulate.
And I think that cuts to the core of this: this is an anti-Amazon move. Nonetheless, I do also think it's a pro-consumer, pro-civil liberties stance. It's also a recognition that the tide is turning with respect to consumers and privacy; here, Microsoft is getting ahead of this changing trend and establishing that they're on the right side of it. Amazon is going to find itself hurting on several levels next year, as legislation likely finds its way onto the books and the consumer tide changes further.
On Nov 27th, I watched the International Joint Hearing with Richard Allan, Facebook’s vice-president of policy solutions. It ran a couple of hours and showed much higher tech literacy than the hearing the US held with Zuckerberg. Questions came from seven countries, were mostly honest, and the answers were informative. There were still a number of people who chose to just ask angry questions for sound bites. (Canada's rep was unfortunately one of the ones who asked bad questions.)
Overall though, it was a really good discussion. Here's a link to the transcript and video ("watch the meeting" link).
A joint-declaration was made after the hearing. It speaks to an international alliance for regulation. This is really where I see traction forming.
Here's the declaration, it's only a page long: https://www.parliament.uk/business/committees/committees-a-z...
Hope this throws you a notification or something. Cheers.
If enough industry people are on the side of it and/or there's no populist fight over it, Trump would have no reason to veto it.
I don't think Trump really cares that much about these things, it'll only be a thing if it turns into a pop-culture war in which he may choose sides.
But if MS and a bunch of other heavyweights are in favour of it, he just might even go for it.
It's possible. But more likely in a few years.
> But if MS and a bunch of other heavyweights are in favour of it, he just might even go for it.
I remember the first time I heard about Net Neutrality as an issue, in what, 2011 or something? I couldn't have imagined it becoming politicized at all. Plus, Wikipedia, Google, basically any tech company who isn't an ISP should clearly be on the same side. And yet, it became a political issue. People who couldn't explain the first thing about what Net Neutrality is, have opinions about it. Trump's administration is against it.
Maybe it'll be different for Facial Recognition, since people can better intuit about it, but honestly, that probably just makes it easier to fear-monger about, rightly or wrongly.
Carriers are enormously powerful and influential so there will be a war.
If it's MS vs. Amazon ... well it's not one industry vs. the other.
But I agree it could turn into a pop culture war.
Amazon, or tech in general, won't be stopped, but it will have to pay the toll. This is something Dems and Repubs agree on. I'm guessing MS is not looking to be the leader, so why not just get some good press at the same time?
Amazon's "main business" is AWS, not their online store. For many years their store operated at a loss. Don't get me wrong, amazon.com is a HUGE business, but it's not Bezos' breadwinner.
AWS is a very large business, and much more profitable than selling physical goods online.
But Amazon's overall revenue was $56.6B (Q3 2018) and "only" $6.68B of that was AWS. By comparison, revenue from Amazon's advertising business is $2.5B - and no one is claiming that is their main line of business.
Even the article you point to shows Amazon made a profit of $1.69B on North American ecommerce vs $1.35B profit from AWS.
So yeah - AWS is a high margin business, but nowhere near their "main business".
More specifically, a common theme in public policy analysis is agenda setting. Unfortunately, in many situations, the agenda is so crowded with priorities that only crises cut through. For more context, see "The Public Policy Primer: Managing the Policy Process" (https://www.goodreads.com/book/show/8727263-the-public-polic...)
Facial recognition's biggest abuses are from the fox watching the hen-house. "Common sense" would lead to absurdities like banning using it on cops because "everybody knows criminals could use it to track them" while using it on every protester because "they might be terrorists", despite both "known truths" being complete bullshit.
Perhaps I'm jaded, but I don't agree at all. They're trying to get it regulated so they can be creepy without fear. One of the first "good" uses listed was finding 3000 children. "Think of the children" is a common rallying cry for evil. Nothing else mentioned was really relevant.
The line IMHO needs to be drawn at anything that you'd call stalking if it were done by a human. If a store wants to recognize me as a prior visitor, that's fine because an actual employee might do the same. But when my presence (just my presence, not even my purchase history) is shared among more than one location, that's like someone following me around. Stalking. This tracking of people and creating databases about them is at its essence a form of automated cyber stalking and should be illegal. The "societal benefits" of this are nothing more than claiming what's good for corporations is good for society. It is not. Please stop pretending this shit is OK.
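A minimal sketch of what that cross-location sharing looks like mechanically: once faces are reduced to embedding vectors, "recognizing a prior visitor from another store" is just a similarity lookup against a shared log. The embeddings and threshold below are toy values for illustration; real systems use high-dimensional vectors from a trained model.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

MATCH_THRESHOLD = 0.95  # arbitrary cutoff for this sketch

def seen_before(new_face, shared_log):
    """Return every location whose log contains a matching face."""
    return [
        loc
        for loc, faces in shared_log.items()
        for face in faces
        if cosine_similarity(new_face, face) >= MATCH_THRESHOLD
    ]

# Toy shared log: two stores pooling their visitor embeddings.
shared_log = {
    "store_a": [[0.9, 0.1, 0.4]],
    "store_b": [[0.1, 0.8, 0.2]],
}
visitor = [0.88, 0.12, 0.41]  # nearly identical to the store_a entry
print(seen_before(visitor, shared_log))  # ['store_a']
```

The point of the sketch is how little infrastructure the "stalking" scenario requires: the moment logs are shared, every participating location inherits every other location's visit history.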
As best I can tell, from a surveillance standpoint what Microsoft is encouraging to become standardized is to apply the same level of legal rigor for wiretaps to facial recognition, which would be a significant step beyond matching legal standards for physical surveillance (which is as it should be, as there's a scaling potential with automated surveillance that needs to be kept in check beyond simply the legal limits of physical surveillance).
Microsoft's stance is a pretty good one "in a perfect world." The problem is that it isn't a perfect world, and we've already seen a good system of FISA courts established after the uncovered abuses from the 1975 Church committee turn into a revolving door for exponentially increased secret warrants for NSA surveillance, even to the point of that surveillance becoming available to local police forces.
What doesn't help is that Hollywood has already normalized a lot of surveillance tech so that most of the public assumes things are already legal/implemented in much broader ways than they are, so it's normalized the very idea of the surveillance state that we are growing into.
If anything, private corporate abuse of the technology is the only thing that will generate enough public outrage to lead to a generalized reform. Self-governance at a corporate level may seem good, but I suspect it will simply mitigate the necessary outrage.
Ironically, a "race to the bottom" is precisely what will result in the appropriate level of regulation - corporate self-governance will always be a half-measure that seems good on paper but leaves a lot of procedural loopholes for systemic abuses that go unnoticed for a very long time.
Yeah, you're talking about human stalkers, but OP's talking about capturing your image with many cameras, logging your presence into databases and sharing that information with other organisations.
I don't think Microsoft and their partners are interested in building a warrants-only surveillance system. Businesses already have camera systems and one could run face recognition on stored video after something has happened. Nobody is talking about that. This is all about real-time cloud connected face recognition.
Video tape you?
Video tape an entire public area constantly?
Have aerial footage of an entire city?
All these things exist and are being done. There are good uses for all of them too. But where do we draw the line? Do the good uses outweigh the bad? Personally I'm not convinced. And that's before we include the internet as a public place, which I really don't think people have internalized.
Connecting the AI face recognizer to the cloud or any other system is exactly where the line needs to be.
We need a framework for programming ourselves so as to be able to systematically protect ourselves at emotional, mental, and spiritual levels.
Further, any system incentivizing such emotional exploitation needs to leave now. I'm looking at you, capitalism.
I'm not familiar with this line of reasoning -- can you elaborate, or are there any good examples?
> Ethicist Jack Marshall described "Think of the children!" as a tactic used in an attempt to end discussion by invoking an unanswerable argument. According to Marshall, the strategy succeeds in preventing rational debate. He called its use an unethical manner of obfuscating debate, misdirecting empathy towards an object which may not have been the focus of the original argument. Marshall wrote that although the phrase's use may have a positive intention, it evokes irrationality when repeatedly used by both sides of a debate. He concluded that the phrase can transform the observance of regulations into an ethical quandary, cautioning society to avoid using "Think of the children!" as a final argument.
So now they have an excuse to get all of those unsightly, severely maimed Civil War veterans, reduced to begging, out of the way. It isn't that we don't want to provide for them or have to look at them - it's for the children!
That bit of stupidity got left on the books until the 70s - mostly since it got forgotten until it came up as an excuse for the police to be dicks. Then there are the similar rationales of 'we have to discriminate against gays because they are all sexual predators'.
It’s not the same. That employee is not always there, not always paying attention and he doesn’t keep a log.
A machine on the other hand is very efficient and it’ll recognize you every single time.
For a period of time, there was a lot of chatter about all developers having some kind of professional oath, like doctors. Many of the approaches that were taken had issues (e.g., precluding work on smart weapons programs or legal surveillance).
I wonder if what we really need is a developers oath along the following:
"Anything I build can and will be abused. I am responsible for my designs, for my products, and for the data I collect and store. If my technology is used for evil, I am responsible."
So, by that logic, Tatu Ylönen should be held responsible for all hacks/crimes committed via ssh?
You can't blame someone for making a tool that someone else uses for evil. Should Ford be responsible for all auto-related deaths? Should Edison be responsible for all deaths where the state put someone to death via electrocution? Should Jobs be responsible because a hacker used an OSX machine to hack into another machine?
I don't get this line of thinking, although I understand the sentiment.
I'd argue few companies ever reach this level of risk, and those that do are so large that the individual contributor cannot reasonably take on that burden of responsibility.
In the example of some Amazon surveillance 'big-brother' software: Max the software dev is just making facial recognition software to the best of their ability. They aren't privy to the motivations, long-term plans, and potential consequences of those decisions.
The oath is always a fun topic to discuss though: In reality it holds no meaning other than to the one who takes the oath. Correct me if I'm wrong, but in malpractice cases I doubt they cite the oath as evidence since all students are essentially forced to recite it.
Can we make it a requirement for ourselves to limit our power to our ability to keep that power safe?
I think that’s a superset of the problem of incentive alignment in AI safety, so probably not... but we also shouldn’t let the perfect be the enemy of the improved.
I commit every single day of my professional life as a physician to do my best for my patients, and to do the absolute least harm possible. And over the entire course of my lifetime, I will not achieve the scale of harm - or benefit - that a developer can achieve with a few months or years of concentrated effort.
That aside, in US specifically, there's already something that is more narrowly tailored to our current reality. They aren't accepting new signatures because of how many there were, but you can still make the same pledge (and e.g. share it publicly to have some skin in the game):
Well, which is it? Both of these statements can't be true. Does this creed apply to gun manufacturers? Knife manufacturers? Car manufacturers? Hammer manufacturers?
There seems to be an eternal cycle of abstraction creating and breaking, in both finance and software.
- Someone builds a general-purpose abstraction that becomes popular.
- Inevitably, the abstraction gets abused.
- To prevent abuse, the abstraction gets violated. It's not general-purpose anymore because some usages are disallowed.
(This seems to be why open API's to online services tend not to last very long.)
Higher order causation must be separated from moral culpability. In other words, you should not hold people responsible for things that happen far downstream. Things that happen several links down the causal chain have occurred due to the decisions of many other people further along that chain, and a higher culpability should fall upon them.
That is not to say that a software developer cannot be directly responsible for bad outcomes, maybe for example working on weapon systems for a nefarious state, where you’re fairly close causally to the point of application. My point is that it’s not a good idea to push this to it’s limits.
Obviously hyperbolic, but not very different at the core.
Even if we somehow had that measure in place it would be a cure worse than the disease - well, you can't get a job doing something legitimate because your safety-oriented human recognition algorithm got abused by somebody else? Then crime is all that's left for you to make a living.
If my tech is used to do harm, the responsibility to heal is shared between myself, the cultures involved in leading to harm, the perpetrators of the harm, those harmed, those witnessing harm, and those in denial of responsibility. I'll do my part while encouraging and believing in others' willingness to grow together.
I won't be holding my breath as you start on your new venture.
In software, at least.
The world has moved on a lot since 1848 — someone needs to promote genuinely new political ideas for our era, not rehash ones that came from the transition from agrarian-feudalism to industrial-capitalism.
But the root cause of the problem is legal. No matter how you try to analyse this, it's a social problem, and society is governed by laws through government, not by corporate policies through megacorps.
America's democracy is not agile. It cannot adapt to rapid changes in society and advancements in technology. America desperately needs legal reform, starting from the constitution.
For this issue in particular, why aren't lawmakers passing laws in favor of the public? I don't want this to be up to Microsoft feeling like a good "corporate citizen" today; the government is supposed to be for the people, by the people, and of the people.
You know what concerns me and should scare every American? What if everyone is running around trying to fix the symptoms while ignoring the elephant in the room, a disease at the heart of American democracy? What if by the time people get around to trying to fix the disease (the root cause), it's too late?
I find it downright scary when corporations take shortsighted and immoral positions like that. It is historically very clear there are consequences to our work as engineers and companies developing and supplying technology. It is very important to know that we share responsibility for how our work is used starting from the moment we reasonably understand how it is being used.
I mean, there are engineers who designed and built the gas chambers in the second world war. Were they responsible for the murders that were committed with it? Or is only the one turning the wheels responsible? Or the one who was in command? Or his boss higher up? I think everyone who knew was partly responsible, including the engineers.
It has also been proven that it is really easy to coerce people (including engineers) into doing immoral things. It is easy to deny any responsibility when it is someone else telling/ordering you to do things. But it does not clear you of responsibility for your actions.
I think Microsoft is doing the right thing now, they have come to realize their technology can easily be abused in ways they did not foresee (this probably already happened), and they try to take responsibility by speaking out and lobbying to get legislation in place to avoid abuse, but without destroying the market opportunity.
Tragically, I expect several things:
- There will be no government legislation / "red tape", certainly not from the current US government.
- The race to the bottom they are afraid of will happen anyway, and Microsoft gets to choose whether they want to be part of it or not. Their morals (now out in the open) will work against their chances of market success.
- What Microsoft asks for is still far too weak. They want to take the moral high-ground, but they also want to sell their stuff. For instance, they ask for clear signage in stores that facial recognition is being used, so that customers can choose not to enter the store. Do they really think this will provide good privacy protection? Businesses will simply strong-arm consumers into consent by denying service if they don't, just like they did with the old EU privacy directive (cookie law).
- In the EU, GDPR is already providing consumer protection against facial recognition, mostly better than what Microsoft is asking for. Businesses in the EU are now effectively prohibited from using it, but US-based startups will use their lead to "disrupt" the market and introduce it here anyway.
How exactly? Maybe lobbying for relaxing the law? I think that maybe shopping centers and similar businesses could lobby for facial recognition. I hope both of them don't succeed.
Also there is the option of "growth hacking" and "legal marketing", aka just doing it illegally (with some faux activism story behind it) and seeing what happens. The government here is not really actively enforcing GDPR, so you can probably get away ignoring it for quite some time, flying under the radar if you are small, like most website publishers do too.
As someone who's worked in the tech sector in the Seattle area for over a decade, I could have told you that would happen. One of Amazon's core values is minimizing expense -- it's baked into their DNA. It doesn't matter what the issue is. If it costs Amazon money, they HATE it.
Lots of potential problems can arise. I wonder how they bake those into their DNA?
Taking matters into our own hands becomes an option.
There is no such thing. “Why take a step away from the oncoming truck, that one step won’t save my life?”
Thankfully in the EU we have GDPR. It considers biometrics as a similar sensitivity to medical data, so unless you genuinely need it (maybe a hospital) then you can only get it with explicit consent. If consent is not given then that can not bar you from service.
So I reported a company to the ICO this week for the introduction of fingerprint scanners and was assured they consider it a breach and will deal with them. GDPR isn't perfect, and I think defaulting to consent is wrong and alternatives must be called out, but you can't help people sleepwalking into it; it is very convenient.
Or just hundreds of other organizations and corporations.
as opposed to, you know, somewhere in a poorly designed system which gets hacked.
(sorry, apparently joined a chorus)
Practically, anyway, GDPR seems like a much more effective measure.
For many private transactions, companies demand an ID number. That reduces identity theft to a bad movie plot.
If that's banned I'll be in the Resistance!
One of my local gyms this week switched to requiring fingerprints or you were barred from access
This is actually a nice hypothetical for that idiotic vision of replacing law and the court system with algorithms. It's extremely unlikely that the specific case would be foreseen in a contract. There is a continuous spectrum of such changes, and it's impossible to formulate any specific rule that would capture them all.
Example A: The gym changes from keys to plastic membership cards. Would this be a breach of contract? I think most everyone would agree that no, it isn't.
Example B: The gym requires whole-genome sequencing (once), then requires a drop of blood every time you enter to check your identity? Breach of contract? -> Obviously.
For any two such changes, you can probably come up with yet another example that's somewhere in between. The closer they get, the more often you will find people disagreeing, yes. But that just shows how justice is a constant conversation not easily set in stone.
As for the specific case: European law really doesn't like biometric data, and it's unlikely they can get away with it.
(the following is based only on my knowledge of German and Portuguese law)
BUT, if they do, the pro-rated refund is the most likely outcome. It works both ways, though: if you move away, they also cannot require you to keep paying fees. It's a concept loosely translated as a "cessation of the foundational requirements of the contract".
But they probably can increase the price to cover "administrative costs/..."
What’s the reasoning for this? Shouldn’t I have the freedom to pick the conditions under which I offer my services, except for discrimination? Is it discrimination if I only offer biometric ID, say for business convenience?
> From the moment one steps into a shopping mall, it’s possible not only to be photographed but to be recognized by a computer wherever one goes. Beyond information collected by a single camera in a single session, longer-term histories can be pieced together over time from multiple cameras at different locations. A mall owner could choose to share this information with every store. Stores could know immediately when you visited them last and what you looked at or purchased, and by sharing this data with other stores, they could predict what you’re looking to buy on your current visit.
> Our point is not that the law should deprive commercial establishments of this new technology. To the contrary, we are among the companies working to help stores responsibly use this and other digital technology to improve shopping and other consumer experiences. We believe that a great many shoppers will welcome and benefit from improvements in customer service that will result.
> But people deserve to know when this type of technology is being used, so they can ask questions and exercise some choice in the matter if they wish. Indeed, we believe this type of transparency is vital for building public knowledge and confidence in this technology.
So they don't actually advocate that you should get a right to privacy or a right not to be profiled once you enter a store.
Instead, you get a right to opt-out of profiling by not ever entering any kind of store again.
More regulation generally gives an advantage to larger companies over smaller ones since it creates barriers to entry; compliance costs usually increase sublinearly with revenue. (E.g., it's a lot easier for Microsoft to hire a dedicated lawyer than it is for a garage start-up.)
This idea that "companies always want less regulation than is socially efficient" is usually based on a misunderstanding of economics.
Of course anti-competitive lobbying happens all the time. But if it's not economically feasible for a Scrappy Gang of Dropouts in The Garage to follow regulations that protect people's lives and freedom in this country, I'm cool with them finding another country, or their own deserted island perhaps.
Of course, we are not automatically in an efficient equilibrium, and there exist worlds where the reduced competition due to barriers to entry creates costs larger than the benefits of the regulation. But I'm happy to put that scenario aside.
They aren't the industry leader, so leading the conversation on regulation helps them to impact the market leader.
It also lets them steer the conversation on regulation before it becomes a conversation occurring outside their control/influence.
Self-regulation is EXTREMELY common for both those reasons in the corporate world. It also never actually works as well as independent regulation, and when there are issues in independent regulation, it frequently occurs as a result of that independence being undermined/corrupted by revolving doors/lobbying/etc.
It's a nice press release, and smart on Microsoft's part, but don't fool yourself into thinking it's not in their self-interest to be doing this. To date, I don't think I can recall any instance of a public corporation acting against its own self-interest for moral reasons.
On the other hand, if Microsoft is sincerely worried about this technology (and the potential negative impact it may have on its image, just like Amazon a few months ago), then it makes sense they would be lagging behind, as they would be more concerned about assigning resources than releasing product?
a) Anti-AWS (conceding loss of JEDI contract)
b) Regulatory capture for the remaining big cloud players
Now that CCTV cams are everywhere, everybody should have a right to wear a mask everywhere without being discriminated against.
It's a losing game. They will track you with your mask on from the moment you leave your house, your phone, your credit card, your gait, your car... It's like a super cookie, if you don't delete all the 10 places it was stored, a single missed one will be enough to regenerate it.
Total citizen surveillance is coming, everyone's location history will be in a database and kept for years, just like phone call metadata.
Perhaps some city-owned CCTV cams are always on, but I'd be doubtful.
I am under the assumption that it's already here since whoever carries a mobile phone is already under surveillance since the mobile networks share info with the government, the license plate readers see who is traveling on the roads, and electronic financial records show your transactions.
Even if you pay cash, your transactions can still be tracked, because your face will be on the cash register camera.
Public transport has cameras inside too.
It's just a matter of time until the computing power and software to analyze all this video will be everywhere.
You want to be overlooked? Behave typically, obviously, and boringly.
I haven't kept up with EU currency security features, but I remember reading how they keep trying to put RFID chips/fibers in them for tracking. They're currently using magnetic ink that can be read via scanners as you walk past. There might be other tracking features that aren't disclosed.
> Some areas of the euro notes feature magnetic ink. For example, the rightmost church window on the €20 note is magnetic, as well as the large zero above it.
I don't think you're as anonymous as you think, even with all of the steps you've taken.
Not sure if they do this on iPhone or just android, but my guess is they do both if you've installed google maps/gmail etc.
This would allow local privacy groups to put people from their group on their shirts and distribute them, kind of like facial recognition graffiti. This would be much harder to deal with due to the volume and flux of adverse imaging.
* Rather than merely forbidding biased uses in their TOS, an internal team should review the relevant source code & use cases of anyone implementing MSFT facial recognition, à la Apple's app store.
* Build APIs, libraries, and easy-to-use tools that allow consumers to destroy their face data.
* Increase the concentration of pressure on Amazon by refusing to engage in the race to the bottom. Specifically, refuse to license facial recognition technology to law enforcement, military, or intelligence agencies until such time as they have independent civilian oversight, direct neutral-party monitoring, transparency, and demonstrated accountability for mis-use.
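To make the second bullet concrete, here is one possible shape for a consumer-initiated "destroy my face data" tool. This is entirely hypothetical - the class and method names are mine, not any MSFT API - sketching deletion that is idempotent and leaves an audit trail:

```python
from datetime import datetime, timezone

class FaceDataStore:
    """Hypothetical store mapping a consumer ID to retained face embeddings."""

    def __init__(self):
        self._embeddings = {}   # consumer_id -> list of embedding vectors
        self.audit_log = []     # record of deletions, for accountability

    def enroll(self, consumer_id, embedding):
        """Retain one face embedding for a consumer."""
        self._embeddings.setdefault(consumer_id, []).append(embedding)

    def destroy(self, consumer_id):
        """Consumer-initiated deletion: purge all face data and log the action."""
        removed = len(self._embeddings.pop(consumer_id, []))
        self.audit_log.append({
            "consumer_id": consumer_id,
            "records_removed": removed,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return removed

store = FaceDataStore()
store.enroll("alice", [0.1, 0.2])
store.enroll("alice", [0.3, 0.4])
print(store.destroy("alice"))  # 2
print(store.destroy("alice"))  # 0 - safe to call again
```

The design choice worth noting is that deletion itself is logged while the biometric payload is not: the audit trail proves the purge happened without re-retaining the data it purged.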
MSFT (and any corporation) is fundamentally untrustworthy. Principles are easily changed or ignored. Instead, they should begin creating institutions, code, and business process that make abuse difficult. Testing tools and APIs are the right idea - more of this approach please.
As should license plate surveillance.
And for those that think that license plate surveillance should be legal: what you may not know is that municipalities are mandating that private corporations install license tracking cameras on their facilities and report back to the municipality who is driving by that address. Menlo Park is just one such municipality.
Facial recognition surveillance technology deployed in any public sphere should be expressly illegal. Period.
What this will really mean is that, in effect, facial recognition will be widespread, legitimized, and unavoidable unless you want to live like a hermit, just like CCTV today. The only way this could potentially be avoided is targeted protests at the first stores adopting it.
The post does have some laudable positions and arguments against government surveillance using facial recognition, but I'm not sure how useful this is if private actors build even more powerful databases and offer them for sale to the highest bidder.
The point is, there needs to be some law and order in place so that when people abuse this tech to harm people and society and get caught, there's some precedent to stop them and punish them. It doesn't matter if the technology is passive. The intent and action to use it to harm people are not passive and are not analogous to thought crime at all.
There is dire need for regulation with a lot of emerging technologies right now. We're building systems with enormous power which can break human society if misused. I think the intangibility and "passivity" of this tech (or at least how it is perceived) gives us a very false sense of security. Like how a few decades ago very few average people could understand how the internet might have a great impact on society. Obviously they aren't thinking that way anymore.
Check out Charles Stross's speech at C3 about regulatory lag relative to the accelerated nature of tech growth:
It is like declaring your city nuclear weapons free when the only players are either above the law by jurisdiction or within detonation range already. Just having the law on the books makes the city look stupid.
Facial recognition is a process that works on images - that makes it more passive than even a sensor since there are definite precedents for 'not here' with sensor recordings.
Also, the very act of having it on the books could discourage usage.
What will MS do when their terms of service are violated?
A regulation forbidding use of this tech for discriminating against certain people is worthless if it specifies a $10/day fine. Penalties have to be significant, and life-changing for violators.
For example, HIPAA / HITECH specifies criminal penalties, and pierces the corporate veil, for intentional violation of patient privacy.
Both are important. 1) the penalties have to be criminal, not civil. 2) natural persons (not Romney persons) who break the law must not be able to hide behind the limited liability of corporations.
A third step would be a bounty system for citizens bringing charges. The same thing made the Clean Water Act enforceable in the 1970s-1980s.
Without enforceability like this, it's all chin music. Or even greenwashing.
- Targets (people)
- Enablers (tech companies)
- Stalkers (consumer companies)
- Big Brother (governments)
Notably the only ones who cannot use the system are the Targets, because they don't have the necessary scale. Being part of a system you can't use is typically to your detriment.
I'm guessing you had the US or Europe in mind rather than other parts of the world like Argentina, Uganda, China and India. (ignoring the omission of scale)
I could see holding providers liable for bad detection as a good precedent for calibrating caution rationally, although it's a bit 'eye for an eye' for society's liking. A system that recognizes a face 60% of the time and says "Hi John" to Bob isn't a liability to anyone - really just amusement. Locking someone out of their apartment so they have to call a locksmith, because they got a bloody nose and facial recognition no longer works, is low stakes. But having someone potentially jailed for a long time would tighten the confidence intervals appropriately - say, if the prosecutor were at risk of death row or 300 consecutive years of sentencing. We would see people very reluctant to work in forensics or prosecution if that were the case.
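The stakes-dependent-caution idea can be sketched in a few lines. The use cases and numbers below are purely illustrative assumptions, not values from any real system:

```python
# Hypothetical confidence thresholds for acting on a face-recognition
# match, scaled to the cost of a false positive. All names and numbers
# here are illustrative assumptions.
THRESHOLDS = {
    "greeting": 0.60,   # a wrong "Hi John" is mere amusement
    "door_lock": 0.95,  # a false match or reject means a locksmith visit
    "arrest": 0.999,    # a false match can cost someone years of their life
}

def should_act(use_case: str, match_confidence: float) -> bool:
    """Act on a match only if it clears the bar for this use case."""
    return match_confidence >= THRESHOLDS[use_case]
```

The point is that liability forces the threshold to track the stakes: a 70% match is fine for a greeting but nowhere near enough to put someone in a cell.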
We are to the point where commercial entities and governments are full-on stalkers. And that should be illegal, too.
Further, it is immoral. There is a fundamental hypocrisy when people are exhorted to autonomy and personal responsibility -- "by your bootstraps", "entrepreneur", "gig economy", etc. -- at the same time they are, with mass surveillance, being left with none of this, in truth. Your every action monitored, measured, standardized, and compelled to conform.
You are left with no agency, save that granted -- left -- to you by the powers that be.
And with everything recorded and stored, seemingly indefinitely, you become self-monitoring, self-constraining. Will this be used against me in a year? Five years? Does one slip or oversight last a lifetime?
By the way, do you see them taking action to swing the cameras, and the monitoring, the other way? Even Obama, with ever and ever more secrets and aggressive prosecution of whistleblowers. The police, who have fought cameras and monitoring for years. NDAs left and right, disparagement suits. On and on and on...
And, I've gone on too much, here.
Never mind just the philosophy of the matter; look, too, at how it works in practice!
Do we all want to spend not just our work hours, but our lives, in virtual cubicles?
The post-modern panopticon.
Numerous processes already capture gender, race, and age.
Facial recognition seems like a better/faster tool for capturing these data points. But the requirement to comply with existing laws is unchanged.
Any further regulation will only limit the development of facial recognition technology to a few large players that can afford compliance and enforcement measures.
Mainly because many companies doing it are arguing that when their models produce biased results, it's not their fault, it's just "computer thinks that way". So far as I know, this approach hasn't been properly tested in court, but it might just fly, if courts decide that you need to have intent to discriminate (and that training on real-world datasets, that are always implicitly biased, does not constitute such intent).
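The "computer thinks that way" defense is worth making concrete. A minimal sketch with invented data: a model that does nothing more than learn historical approval rates will reproduce historical bias with no explicit intent to discriminate anywhere in the code.

```python
# Illustration only: the groups and history below are fabricated.
from collections import defaultdict

def train(history):
    """Learn per-group approval rates from (group, approved) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in history:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

# A skewed historical record: group A approved 9/10, group B 3/10.
biased_history = ([("A", True)] * 9 + [("A", False)]
                  + [("B", True)] * 3 + [("B", False)] * 7)
rates = train(biased_history)
# The model faithfully encodes the skew it was shown.
```

Nothing in `train` mentions either group, yet its output discriminates between them - which is exactly why an intent-based standard may let these systems off the hook.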
This is really interesting. I wonder how much of this is real versus PR (although Brad Smith has an excellent track record in this area).
The company that has the most to lose, were there to be real regulations concerning facial recognition, is actually a company in Microsoft’s investment portfolio. That company has built the world’s largest database of face and identity information. Facebook.
Every decision made by an algorithm should make its inputs clear, its criteria for interpreting those inputs clear, and its judgement to be disputable. If you can't get your black-box neural network to do so, then perhaps it shouldn't be making life-changing decisions for other people.
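What that requirement might look like in code, as a rough sketch (the loan example and its criteria are hypothetical, chosen only to show the shape of a disputable decision record):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """A decision record that exposes its inputs and criteria,
    so the person affected can inspect and dispute it."""
    inputs: dict      # every input the decision actually used
    criteria: dict    # the thresholds/rules that were applied
    outcome: bool
    rationale: str    # human-readable explanation of the result

def approve_loan(income: float, debt: float) -> Decision:
    # Hypothetical criterion, for illustration only.
    ratio_limit = 0.4
    ratio = debt / income
    ok = ratio <= ratio_limit
    return Decision(
        inputs={"income": income, "debt": debt},
        criteria={"max_debt_to_income": ratio_limit},
        outcome=ok,
        rationale=f"debt/income = {ratio:.2f} vs limit {ratio_limit}",
    )
```

A black-box neural network that can only emit the `outcome` field, with nothing in `inputs`, `criteria`, or `rationale`, fails exactly the standard described above.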
There's a startup that's doing sentiment analysis of social media posts to measure how 'risky' a babysitter is - and likely does so in a biased manner.
It's illegal to use such a system, for purposes of vetting an employee, yet their entire business model revolves around families using them to vet babysitters.
Does anyone else find this interesting? Is Microsoft trying to keep their facial recognition algorithm on top by comparing theirs to others'?
Because this feels a bit reminiscent of the "G-man" campaign, which might have been fun and had a point, but a point that seemingly got lost along the road of "Windows 10 as a service/storefront".
At least gotta give it to MS PR: they seem to know what's on people's minds and are rather good at trying to appeal to that.
cf Regulatory Capture (https://en.wikipedia.org/wiki/Regulatory_capture)
It might seem like I'm trying to be on the side of the perfect against the good, but there is room for both efforts without stifling either. A holistic approach to privacy in general would help inform the values necessary for the responsible use of facial recognition technology.
... or they are just mad that I refuse to give them a linkedin photo. :) Supposed to be funny, but also serious. Every time they ask for one, and then ask why not, I tell them because they are fundamentally untrustworthy. And they are untrustworthy, as is every public for profit business whose officers carry a fiduciary responsibility to shareholders.
IMO this position is well written and these two sentences succinctly articulate the situation:
"In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition."
There is an underlying idea here of a person owning information about themselves and having control over it and making sure a company can not use it inappropriately. I think addressing it as only 'facial recognition' wouldn't go far enough.
Expanding the circle to protect more PII in general is going to be a longer fight because we’ve already let economic power and consumer behaviour develop into a strong status quo. For instance, if strong regulations came in to limit the creation of profiles for targeted advertising and that resulted in Google withdrawing free email accounts from the market, it might not have a lot of popular support.
So that's what Microsoft really wants. Allowed everywhere, minimal notice, no user ownership of data, and no opt-out.
My immediate next thought, though, is that Microsoft operates its "Cognitive Services," including facial recognition, in China. That's worrying, even if Microsoft would loudly prefer that governments generally pass nice privacy laws.