Facial recognition: It’s time for action (microsoft.com)
876 points by myinnerbanjo 9 days ago | 270 comments





This is a laudable first step in advocacy for real regulation of a technology that already has huge impacts on privacy and civil society. I was in the room for one of the meetings with Microsoft's senior leadership as a representative of a Seattle-based civil liberties group. While our coalition would like to see MS go further, it was quite clear that they take their commitment to corporate responsibility and ethics around the AI issue very seriously. In particular, they seemed to understand our concerns about how facial recognition technologies can magnify existing biases in our criminal justice system.

This stands in stark contrast to the meeting I attended on Tuesday with Amazon's general counsel regarding their Rekognition service. There was a near complete rejection of the idea that mass deployment of surveillance technologies in today's largely unregulated environment posed any danger to civil society. He also denied that Amazon had any responsibility for the negative impacts of their AI/ML technologies, or role to play in industry efforts to self-regulate.


> This stands in stark contrast to the meeting I attended on Tuesday with Amazon's general counsel regarding their Rekognition service.

And I think that cuts to the core of this: this is an anti-Amazon move. Nonetheless, I do also think it's a pro-consumer, pro-civil liberties stance. It's also a recognition that the tide is turning with respect to consumers and privacy; here, Microsoft is getting ahead of this changing trend and establishing that they're on the right side of it. Amazon is going to find itself hurting on several levels next year, as legislation likely finds its way onto the books and the consumer tide changes further.


I'm not convinced the US will be passing much of anything in the next two years. Passing pro-privacy regulation? Seems very unlikely to me. Microsoft should engage with Canada and the EU, where technology/privacy regulation is gaining traction.

The incoming California legislature is more than three-quarters Democratic, so while federal gridlock may block any progress, there could be some movement closer to home that at least gets the industry to prepare for the inevitable.

Yes, state action on this is likely to occur first, and it should definitely be done sooner rather than later. State law commonly templates federal law, as it has historically done with mileage standards, the ACA, etc. Get this done in CA, NY, and MA, where it's politically viable.

I'm not so sure - I don't necessarily think it'll be federal straight away, but I think there's real possibility of state-by-state rules, which will eventually give way to federal.

This is the best way anyway. 50 experiments will lead to a better outcome.

50? That would be nice. For any given forward-thinking issue, I consider us lucky to have a handful of innovators at the state level. Now, when a crisis of some sort comes around, lawmakers from other states will start to pay attention. Ideally, at that point, they will evaluate and adapt other state experiments. In practice, it depends on the timing, committees, etc. Sorry to sound a bit cynical, but really my take-away message is this: if you care about an issue, get involved one way or the other in lobbying for your cause (I'm assuming here that HN people are well-informed, even if we have different policy prescriptions).

Could you elaborate on the privacy regulation that is gaining traction in Canada?

Hey sorry I didn't check for follow-up comments. Hopefully you still see this.

On Nov 27th, I watched the International Joint Hearing with Richard Allan, Facebook’s vice-president of policy solutions. It was a couple hours long and of much higher tech literacy than the one the US held with Zuckerberg. Questions came from seven countries. Questions were mostly honest and the answers were informative. There were still a number of people who chose to just ask angry questions for sound bites. (Canada's rep was unfortunately one of the ones who asked bad questions.)

Overall though, it was a really good discussion. Here's a link to the transcript and video ("watch the meeting" link).

http://data.parliament.uk/writtenevidence/committeeevidence....

A joint-declaration was made after the hearing. It speaks to an international alliance for regulation. This is really where I see traction forming.

Here's the declaration, it's only a page long: https://www.parliament.uk/business/committees/committees-a-z...

Hope this throws you a notification or something. Cheers.


Only 3 Republican senators need to support it, and there are plenty of them who don't care much for Trump.

If enough industry people are on the side of it and/or there's no populist fight over it, Trump would have no reason to veto it.

I don't think Trump really cares that much about these things, it'll only be a thing if it turns into a pop-culture war in which he may choose sides.

But if MS and a bunch of other heavyweights are in favour of it, he just might even go for it.

It's possible. But more likely in a few years.


> I don't think Trump really cares that much about these things, it'll only be a thing if it turns into a pop-culture war in which he may choose sides.

> But if MS and a bunch of other heavyweights are in favour of it, he just might even go for it.

I remember the first time I heard about Net Neutrality as an issue, in what, 2011 or something? I couldn't have imagined it becoming politicized at all. Plus, Wikipedia, Google, basically any tech company who isn't an ISP should clearly be on the same side. And yet, it became a political issue. People who couldn't explain the first thing about what Net Neutrality is, have opinions about it. Trump's administration is against it.

Maybe it'll be different for Facial Recognition, since people can better intuit about it, but honestly, that probably just makes it easier to fear-monger about, rightly or wrongly.


Net Neutrality will definitely become politicized because it's a war between layers of the value chain: carriers vs. Google.

Carriers are enormously powerful and influential so there will be a war.

If it's MS vs. Amazon ... well it's not one industry vs. the other.

But I agree it could turn into a pop culture war.


Red tape ... legislation will be passed if regulation can be made.

Amazon, or tech in general, won't be stopped, but it will have to pay the toll. This is something Dems and Repubs agree on. I'm guessing MS is not looking to be the leader, so why not just get some good press at the same time.


Amazon's Rekognition is a joke: it is weak FR, does not have basic features, and is overly expensive. The only thing they have going for them is Amazon's PR. They fooled you into thinking their Yugo of a product is anything at all. Disclaimer: I work at a real FR company, and Rekognition is nowhere near a contender in quality, features, or price. A joke.

Which would make it even more potentially harmful in this context, if it is relied upon.

Microsoft knows how to talk the enterprise talk. BTW, Amazon competes against Alibaba for their main business, not Microsoft.

Just to clarify:

Amazon's "main business" is AWS, not their online store. For many years their store operated at a loss. Don't get me wrong, amazon.com is a HUGE business, but it's not Bezos' breadwinner.

https://www.zdnet.com/article/all-of-amazons-2017-operating-...


It only operated at a loss because they were plowing massive amounts of money into their delivery infrastructure. Revenue matters more than net income.

Also because they are trying to establish a monopoly. Amazon's long-term plan is to kill every other big retailer.

To be fair, that's the hope of every company.

That's nonsense.

AWS is a very large business, and much more profitable than selling physical goods online.

But Amazon's overall revenue was $56.6B (Q3 2018[1]) and "only" $6.68B of that was AWS. By comparison, revenue from Amazon's advertising business is $2.5B - and no one is claiming that is their main line of business.

Even the article you point to shows Amazon made a profit of $1.69B on North American ecommerce vs $1.35B profit from AWS.

So yeah - AWS is a high-margin business, but nowhere near their "main business".

[1] https://www.cnbc.com/2018/10/25/amazon-earnings-q3-2018.html
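
As a quick back-of-the-envelope check using only the figures cited above (a sketch, not Amazon's full segment reporting, which also includes international results and other segments):

    # Rough shares computed from the Q3 2018 figures cited above, in billions of USD.
    total_revenue = 56.6
    aws_revenue = 6.68
    ads_revenue = 2.5
    na_ecommerce_profit = 1.69
    aws_profit = 1.35

    print(f"AWS share of total revenue: {aws_revenue / total_revenue:.0%}")  # ~12%
    print(f"Ads share of total revenue: {ads_revenue / total_revenue:.0%}")  # ~4%
    # Comparing only the two profit figures cited above:
    print(f"AWS share of that profit: {aws_profit / (aws_profit + na_ecommerce_profit):.0%}")  # ~44%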


Their main business is their online store; your linked article proves as much. If the online store were to disappear tomorrow, the value of Amazon as a company would decrease much more than if AWS did. AWS is proportionately more profitable, but that doesn't make it the main business.

Why do we always wait until it's too late and the repercussions are almost unbearable to put in place any common-sense protections against abusive technology or practices?

It all comes down to your definition of "we". In short, this is either a collective preference problem (e.g. not enough people care or pay attention) or a collective action problem (if you believe that at least collective intent exists).

More specifically, a common theme in public policy analysis is agenda setting. Unfortunately, in many situations, the agenda is so crowded with priorities that only crises cut through. For more context, see "The Public Policy Primer: Managing the Policy Process" (https://www.goodreads.com/book/show/8727263-the-public-polic...)


For one, you can't meaningfully regulate specific things before they exist. Trying to do so prematurely would result in nonsense that would only hold us all back. Not to mention matters of enforceability.

Facial recognition's biggest abuses come from the fox watching the hen-house. "Common sense" would lead to absurdities like banning its use on cops because "everybody knows criminals could use it to track them" while using it on every protester because "they might be terrorists", despite both "known truths" being complete bullshit.


Because law makers are still trying to wrap their heads around the Internet. That means tech has to self regulate and we just haven't set ourselves up for that. I don't think that lack of diversity helps with that.

>> This is a laudable first step in advocacy for real regulation of a technology...

Perhaps I'm jaded, but I don't agree at all. They're trying to get it regulated so they can be creepy without fear. One of the first "good" uses listed was finding 3000 children. "Think of the children" is a common rallying cry for evil. Nothing else mentioned was really relevant.

The line IMHO needs to be drawn at anything that you'd call stalking if it were done by a human. If a store wants to recognize me as a prior visitor, that's fine because an actual employee might do the same. But when my presence (just my presence, not even my purchase history) is shared among more than one location, that's like someone following me around. Stalking. This tracking of people and creating databases about them is at its essence a form of automated cyber stalking and should be illegal. The "societal benefits" of this are nothing more than claiming what's good for corporations is good for society. It is not. Please stop pretending this shit is OK.


Except that, with a warrant, it's entirely legal (and has been for a long time) for a surveillance team to physically stalk you even in areas with an expectation of privacy (TEMPEST, wiretaps, etc.), and in public areas they don't even need a warrant (actually, in public areas, it's legal for anyone to stalk you - just ask celebrities about their paparazzi stalkers).

As best I can tell, from a surveillance standpoint what Microsoft is encouraging to become standardized is to apply the same level of legal rigor for wiretaps to facial recognition, which would be a significant step beyond matching legal standards for physical surveillance (which is as it should be, as there's a scaling potential with automated surveillance that needs to be kept in check beyond simply the legal limits of physical surveillance).

Microsoft's stance is a pretty good one "in a perfect world." The problem is that it isn't a perfect world, and we've already seen a good system of FISA courts, established after the abuses uncovered by the 1975 Church Committee, turn into a revolving door for exponentially increased secret warrants for NSA surveillance, even to the point of that surveillance becoming available to local police forces.

What doesn't help is that Hollywood has already normalized a lot of surveillance tech so that most of the public assumes things are already legal/implemented in much broader ways than they are, so it's normalized the very idea of the surveillance state that we are growing into.

If anything, private corporate abuses of the technology are the only thing that will generate enough public outrage to lead to a generalized reform. Self-governance at a corporate level may seem good, but I suspect it will simply mitigate the necessary outrage.

The necessary conversation starter would be the ACLU running a personalized erectile dysfunction out-of-home display advertising campaign in Washington, DC. As long as that level of abuse is against the terms of use of the intermediate providers, it protects the less visible abuses of the technology.

Ironically, a "race to the bottom" is precisely what will result in the appropriate level of regulation - corporate self-governance will always be a half-measure that seems good on paper but leaves a lot of procedural loopholes for systemic abuses that go unnoticed for a very long time.


> it's legal for anyone to stalk you - just ask celebrities about their paparazzi stalkers

Yeah, you're talking about human stalkers, but OP's talking about capturing your image with many cameras, logging your presence into databases and sharing that information with other organisations.


>> Except with a warrant, it's entirely legal

I don't think Microsoft and their partners are interested in building a warrants-only surveillance system. Businesses already have camera systems and one could run face recognition on stored video after something has happened. Nobody is talking about that. This is all about real-time cloud connected face recognition.


I think this needs to be changed. What counts as a public space has changed drastically, and what technology lets you do there has changed just as drastically. Where do you draw the line on what someone can do without a warrant?

Follow you?

Videotape you?

Videotape an entire public area constantly?

Have aerial footage of an entire city?

All these things exist and are being done. There are good uses for all of them too. But where do we draw the line? Do the good uses outweigh the bad? Personally I'm not convinced. And that's before we include the internet as a public place, which I really don't think people have internalized as such.


What's critical is who & why. I might follow you to find out where you go for a drink after work so that I can set up a brilliant surprise party for you there; on the other hand, someone might follow me so that they can determine where best to murder me. A city might be filmed to support traffic efficiency and pollution control, but if the footage then gets used to suppress political protest, it's a whole different thing.

That's a really good line in the sand, and relatively practical as well. Unfortunately, I don't think it will be put into the books with all the interests corporations would have in keeping it out.

Really?! I don't think it is a good line in the sand. Even though we can draw lines around how an AI might act like a human in a certain case - recognizing you as a frequenter of a store - once we connect that AI to the rest of the computing capabilities out there, it quickly becomes something a human couldn't do. A convenience store clerk might remember several dozen, even several hundred frequent customers, but it tops out somewhere, and after some time the clerk will forget many of those who stop coming in. An AI with a database for recollection suffers from neither of those issues. You might say the recall has nothing at all to do with what the AI is basically doing, but we have to look at this much more holistically.

>> when we connect that AI to the rest of the computer capabilities out there it quickly becomes something a human couldn't do.

Connecting the AI face recognizer to the cloud or any other system is exactly where the line needs to be.


Disagree. This is a technology that can be secretly or openly deployed in businesses for the sake of emotional exploitation via dynamic displays targeting specific emotional states. I'm against turning the world into any more of an emotional hunting ground than it naturally is.

http://eqradio.csail.mi.edu


You may have just convinced me that this technology needs to be banned outright.

If we rely on the law for protection, we're doomed. This calls for much harder and higher action:

We need a framework for programming ourselves so as to be able to systematically protect ourselves at emotional, mental, and spiritual levels.

Further, any system incentivizing such emotional exploitation needs to leave now. I'm looking at you, capitalism.


Dead link? I'm on mobile if that makes a difference.


> "Think of the children" is a common rallying cry for evil.

I'm not familiar with this line of reasoning -- can you elaborate, or are there any good examples?


Interestingly, there's a Wikipedia article on this: https://en.wikipedia.org/wiki/Think_of_the_children

That wiki doesn't describe anything related to using the phrase for evil.

The wording isn't as strong, but I think this paragraph explains a bit what GP is talking about:

> Ethicist Jack Marshall described "Think of the children!" as a tactic used in an attempt to end discussion by invoking an unanswerable argument.[2] According to Marshall, the strategy succeeds in preventing rational debate.[2] He called its use an unethical manner of obfuscating debate, misdirecting empathy towards an object which may not have been the focus of the original argument.[2] Marshall wrote that although the phrase's use may have a positive intention, it evokes irrationality when repeatedly used by both sides of a debate.[2] He concluded that the phrase can transform the observance of regulations into an ethical quandary, cautioning society to avoid using "Think of the children!" as a final argument.[2]


> called the use of the phrase "Think of the children" in debate a type of logical fallacy and an appeal to emotion.[1] According to the authors, a debater may use the phrase to emotionally sway members of the audience and avoid logical discussion.[1] They provide an example: "I know this national missile defense plan has its detractors, but won't someone please think of the children?"


It is used for censorship plenty, for suppressing the disapproved without rational thought. There are 'ugly laws' on the books because of a bullshit misinterpretation of a biblical passage claiming that images seen by a pregnant mother could cause the child to be formed accordingly - never mind that the whole thing is essentially a miracle. Otherwise they wouldn't bother encoding 'common sense' livestock techniques in their holy books! They would leave it for their 'animal husbandry manuals'.

So now they have an excuse to get all of those unsightly, severely maimed Civil War veterans reduced to begging out of the way. It isn't that we don't want to provide for them at all or have to look at them - it's for the children!

That bit of stupidity got left on the books until the 70s - mostly since it got forgotten until it came up as an excuse for the police to be dicks. Then there are the similar rationales of 'we have to discriminate against gays because they are all sexual predators'.


Definitely originated from The Simpsons:

https://m.youtube.com/watch?v=RybNI0KB1bg


Justifying the destruction of privacy to catch child molesters who look for their victims online. There are many examples like that.

99% of people would support destruction of privacy for this cause.

That is the problem.

Sorry, it's not for evil. It's usually for government overreach and expansion of power and control. Children, terrorism, and drugs - they say we need to give up our rights for those things.

I'm unclear on how LACK of regulation leads to fears around being creepy... Torches and pitchforks are orthogonal to the state, either way.

>> If a store wants to recognize me as a prior visitor, that's fine because an actual employee might do the same

It’s not the same. That employee is not always there, not always paying attention and he doesn’t keep a log.

A machine on the other hand is very efficient and it’ll recognize you every single time.


> He also denied that Amazon had any responsibility for the negative impacts of their AI/ML technologies, or role to play in industry efforts to self-regulate.

For a period of time, there was a lot of chatter about all developers taking some kind of professional oath, like doctors. Many of the approaches that were taken have issues (e.g., they would preclude working on smart weapons programs or legal surveillance).

I wonder if what we really need is a developers oath along the following:

"Anything I build can and will be abused. I am responsible for my designs, for my products, and for the data I collect and store. If my technology is used for evil, I am responsible."


I'm not arguing for or against that oath, but that would 100% kill Open Source dead the minute it becomes a requirement. Absolutely no one would share their code if they could be held responsible for its misuse.

Yep. How much do you want to bet that the Chinese surveillance systems that target the Uighur run on some version of Linux? I bet that they have some parts written in Python, Java, JavaScript, C, or C++, for which they use an open-source compiler.

> If my technology is used for evil, I am responsible.

So, by that logic, Tatu Ylönen should be held responsible for all hacks/crimes committed via ssh?

You can't blame someone for making a tool that someone else uses for evil. Should Ford be responsible for all auto-related deaths? Should Edison be responsible for all deaths where the state put someone to death via electrocution? Should Jobs be responsible because a hacker used an OSX machine to hack into another machine?

I don't get this line of thinking, although I understand the sentiment.


I don't know if the oath comparison works out so well for development. Let us not forget that each physician is directly responsible for a life every time they see someone. They also have a more direct effect on the success or failure of the procedure as they tend to produce a large portion of the work.

I'd argue few companies ever reach this level of risk, and those that do are so large that the individual contributor cannot reasonably take on that burden of responsibility.

In the example of some Amazon surveillance 'big-brother' software: Max the software dev is just making facial recognition software to the best of their ability. They aren't privy to the motivations, long-term plans, and potential consequences of those decisions.

The oath is always a fun topic to discuss though: In reality it holds no meaning other than to the one who takes the oath. Correct me if I'm wrong, but in malpractice cases I doubt they cite the oath as evidence since all students are essentially forced to recite it.


No snowflake is responsible for the avalanche, therefore I must pile on as much snow as possible -- The modern developer.

But the snowflake doesn't know that it is falling on a mountain slope.

Individual snowflakes are not capable of abstract reasoning.

Can we make it a requirement for ourselves to limit our power to our ability to keep that power safe?

I think that’s a superset of the problem of incentive alignment in AI safety, so probably not... but we also shouldn’t let the perfect be the enemy of the improved.


The analogy works better than you think - gravity is responsible for the avalanche. They are all responding to irresistible forces.

No individual litterer is responsible for a trashed park.

> I don't know if the oath comparison works out so well for development. Let us not forget that each physician is directly responsible for a life every time they see someone

I commit every single day of my professional life as a physician to do my best for my patients, and to do the absolute least harm possible. And over the entire course of my lifetime, I will not achieve the scale of harm - or benefit - that a developer can achieve with a few months or years of concentrated effort.


That is insanely broad and vague. You can be responsible for knowingly designing a system for abuse, or if you give it to someone you know, or have reasonable cause to suspect, will abuse it. But what you're saying here is that if you ship anything at all, and someone somewhere finds a way to use it for some nefarious purpose, then you're responsible. That's not how things work with anything else; why should they work that way with software?

That aside, in US specifically, there's already something that is more narrowly tailored to our current reality. They aren't accepting new signatures because of how many there were, but you can still make the same pledge (and e.g. share it publicly to have some skin in the game):

http://neveragain.tech/


"Anything I build can and will be abused... If my technology is used for evil, I am responsible."

Well, which is it? Both of these statements can't be true. Does this creed apply to gun manufacturers? Knife manufacturers? Car manufacturers? Hammer manufacturers?


If this effort succeeds, the end result will be something like "know your customer" for cloud providers. So, just like with app stores, you'll have to worry about getting your service approved.

There seems to be an eternal cycle of abstraction creating and breaking, in both finance and software.

- Someone builds a general-purpose abstraction that becomes popular.

- Inevitably, the abstraction gets abused.

- To prevent abuse, the abstraction gets violated. It's not general-purpose anymore because some usages are disallowed.

(This seems to be why open APIs to online services tend not to last very long.)


It depends on how far down the line you want moral culpability to apply. Taken to its logical extreme, it effectively makes the developer responsible for anything the end user decides to use their software for. That reasoning is not applied to software, nor to any other product class, with the exception of those that are immediately and obviously lethal. Engineers at Google would be responsible for drug cartels operating in Colombia.

Higher order causation must be separated from moral culpability. In other words, you should not hold people responsible for things that happen far downstream. Things that happen several links down the causal chain have occurred due to the decisions of many other people further along that chain, and a higher culpability should fall upon them.

That is not to say that a software developer cannot be directly responsible for bad outcomes, for example when working on weapon systems for a nefarious state, where you’re fairly close causally to the point of application. My point is that it’s not a good idea to push this to its limits.


Has anyone actually quit over this? http://neveragain.tech/

So, if I'm an assembly line worker at GM and someone buys a pickup truck and intentionally drives it into a crowd, murdering 20 people, I should get the death penalty as a serial killer?

Obviously hyperbolic, but not very different at the core.


A car’s primary function is not to kill people. If you are an engineer and develop a tool whose primary purpose is mass surveillance, you should bear that ethical burden.

And yet, cars kill many, many more people (1.25 million in 2013, per WHO) than mass surveillance does. And parent didn't qualify his statement as to which products the oath would apply to:

"Anything I build can and will be abused. I am responsible for my designs, for my products, and for the data I collect and store. If my technology is used for evil, I am responsible."


Mass surveillance does something a lot more insidious than car wrecks, it sets up a future where people are killed. It's like global warming.

Whatever the "primary function," car accidents is the leading cause of young people's death in the US.

I think oaths are completely outmoded as a social construct - everyone knows that people lie regardless of what they claim. The things that actually gave oaths teeth were their own sources of problems: guilds, and not being able to run your business if you violated the oath. They were supplanted by licenses.

Even if we somehow had that measure in place, it would be a cure worse than the disease - so you can't get a job doing something legitimate because it turns out your human-recognition safety algorithm got abused by somebody else? Then crime is all that's left for you to make a living.


I think it's time to acknowledge the impact of culture in our oaths. It IS how we're all naively programming ourselves and others, so we might as well address it.

If my tech is used to do harm, the responsibility to heal is shared between myself, the cultures involved in leading to harm, the perpetrators of the harm, those harmed, those witnessing harm, and those in denial of responsibility. I'll do my part while encouraging and believing in others' willingness to grow together.


Worth noting that Project Managers who are certified PMPs attest to a code of conduct that includes: "take actions based on the best interests of society, public safety, and the environment." It requires that PMPs report any unethical practices to "appropriate management". Pretty weak, but hey, it's better than nothing.

Something like this would be a great start. I have a feeling the industry has a tendency to try to recruit young and perhaps naive developers who might not be fully ready to appreciate all of the potential misuses (and the probability of those outcomes) of what they build, however. Or worse, have some misaligned incentives to build them regardless.

I like the idea of improving a sense of developer responsibility. But the challenge here is: how do you define evil? How would you interpret this oath when building a web browser? Have a blacklist of "evil" sites? Obviously, a lot of issues to sort through there.

So if you write an encrypted chat app, and people use it to plan a crime, are you responsible?

What is "evil"?

It's almost as if workers should own and control the means of production.

There is absolutely nothing stopping you and your friends from starting such a company.

I won't be holding my breath as you start on your new venture.


Personal attacks will get you banned on HN. Please don't post like this again.

https://news.ycombinator.com/newsguidelines.html


Nothing holding anyone back except pesky things like paying rent and buying food.

We do.

In software, at least.

The world has moved on a lot since 1848 — someone needs to promote genuinely new political ideas for our era, not rehash ones that came from the transition from agrarian-feudalism to industrial-capitalism.


"Anything I build can and will be abused. I am not responsible for my designs, for my products, and for the data I collect and store. If my technology is used for evil, the corporation employing me is responsible and the government is at fault for not doing something about it."

Yes, you are. You chose to follow the design and build a system you knew would be abused. This mindset is what got us here to begin with. YOU ARE AT FAULT.

Agreed. The USA is all about individual liberty, but not responsibility, it would appear.

Yay for MS and boo Amazon. I agree.

But the root cause of the problem is legal. No matter how you try to analyse this, it's a social problem, and society is governed by laws through government, not by corporate policies through megacorps.

America's democracy is not agile. It cannot adapt to rapid changes in society and advancements in technology. America desperately needs legal reform, starting from the constitution.

For this issue in particular, why aren't lawmakers passing laws in favor of the public? I don't want this to be up to Microsoft feeling like a good "corporate citizen" today; the government is supposed to be for the people, by the people, and of the people.

You know what concerns me and should scare every American? What if everyone is running around trying to fix the symptoms while ignoring the elephant in the room, a disease at the heart of American democracy? What if, by the time people get around to trying to fix the disease (the root cause), it's too late?


> He also denied that Amazon had any responsibility for the negative impacts of their AI/ML technologies

I find it downright scary when corporations take shortsighted and immoral positions like that. It is historically very clear there are consequences to our work as engineers and companies developing and supplying technology. It is very important to know that we share responsibility for how our work is used starting from the moment we reasonably understand how it is being used.

I mean, there were engineers who designed and built the gas chambers in the Second World War. Were they responsible for the murders that were committed with them? Or is only the one turning the wheels responsible? Or the one who was in command? Or his boss higher up? I think everyone who knew was partly responsible, including the engineers.

It has also been proven that it is really easy to coerce people (including engineers) into doing immoral things. It is easy to deny any responsibility when it is someone else telling/ordering you to do things. But it does not clear you of responsibility for your actions.

I think Microsoft is doing the right thing now, they have come to realize their technology can easily be abused in ways they did not foresee (this probably already happened), and they try to take responsibility by speaking out and lobbying to get legislation in place to avoid abuse, but without destroying the market opportunity.

Tragically, I expect several things:

- There will be no government legislation / "red tape", certainly not from the current US government.

- The race to the bottom they are afraid of will happen anyway, and Microsoft gets to choose whether they want to be part of it or not. Their morals (now out in the open) will work against their chances of market success.

- What Microsoft asks for is still far too weak. They want to take the moral high ground, but they also want to sell their stuff. For instance, they ask for clear signage in stores that facial recognition is being used, so that customers can choose not to enter the store. Do they really think this will provide good privacy protection? Businesses will simply strong-arm consumers into consenting by denying service if they don't, just like they did with the old EU privacy directive (cookie law).

- In the EU, GDPR is already providing consumer protection against facial recognition, mostly better than what Microsoft is asking for. Businesses in the EU are now effectively prohibited from using it, but US based startups will use their lead to "disrupt" the market and introduce it here anyway.


> US based startups will use their lead to "disrupt" the market and introduce it here anyway.

How exactly? Maybe lobbying for relaxing the law? I think shopping centers and similar businesses could lobby for facial recognition. I hope neither of them succeeds.


You could design from the ground up and create a retail concept that is based on customer recognition and automatic checkout, using some form of membership that includes consent; I think that would be allowed under GDPR as using it in this way is a clear choice that can be freely made.

Also there is the option of "growth hacking" and "legal marketing", aka just doing it illegally (with some faux activism story behind it) and seeing what happens. The government here is not really actively enforcing GDPR, so you can probably get away ignoring it for quite some time, flying under the radar if you are small, like most website publishers do too.


> He also denied that Amazon had any responsibility for the negative impacts of their AI/ML technologies, or role to play in industry efforts to self-regulate.

As someone who's worked in the tech sector in the Seattle area for over a decade, I could have told you that would happen. One of Amazon's core values is minimizing expense -- it's baked into their DNA. It doesn't matter what the issue is. If it costs Amazon money, they HATE it.


Class action lawsuits can cost a lot of money. If the system is found to be biased there may be grounds to sue.

Lots of potential problems can arise. I wonder how they bake those into their DNA?


The employees have a term for that: "Frupid."

If you’re willing to share, why were you meeting with Amazon’s general counsel?

My group is part of a coalition organized by the Washington chapter of the ACLU. We have been engaging both Microsoft and Amazon about their surveillance technologies and AI/ML offerings. ACLU-WA, in collaboration with the Northern CA chapter of the ACLU, was largely responsible for the well-publicized test of Amazon’s Rekognition that matched photos of members of Congress against criminal mugshot databases.[1]

[1] https://arstechnica.com/tech-policy/2018/07/amazons-rekognit...


Note that AWS runs on the Xen hypervisor, which has a new critical security hole pop up reliably every six months.

Taking matters into our own hands becomes an option.


> I was in the room for one of the meetings with Microsoft's senior leadership as a representative of a Seattle-based civil liberties group.

Which group?


My organization is Densho. We are a civil rights education and digital archive focusing on the history of the WWII Japanese American incarceration experience.

“This is a laudable first step”

There is no such thing. “Why take a step away from the oncoming truck? That one step won’t save my life.”


Biometrics are creeping into everyday life. One of my local gyms switched this week to requiring fingerprints; without them you're barred from access. Another local gym uses facial recognition for entrance, although you can choose to have a member card instead if you ask for it directly; they don't list it as an option.

Thankfully in the EU we have GDPR. It treats biometrics with a similar sensitivity to medical data, so unless you genuinely need it (maybe a hospital), you can only collect it with explicit consent. If consent is not given, that cannot bar you from service.

So I reported a company to the ICO this week for the introduction of fingerprint scanners and was assured they consider it a breach and will deal with them. GDPR isn't perfect - I think defaulting to consent is wrong and alternatives must be called out - but you can't help people sleepwalking into it; it is very convenient.


My gym also uses facial recognition, but does it via a person sitting behind a desk checking that my member ID matches my face. I don't think many people are uncomfortable with this process, and this biometric method has been used for a long time.

Yes, but that person can't copy your biometric data stored in their brain, convert it to a standardized format, and distribute it to millions of other devices.

You mean like a photo?

They would have to use a camera, which you would have to allow.

Every gym I've joined snaps your picture when you sign up.

>and distribute it to millions of other devices...

Or just hundreds of other organizations and corporations.


not yet but that day is coming soon.

https://after-on.com/episodes-31-60/039


Yes, but there is little chance that the person sitting behind the desk will be the target of a brain cyberattack!

What exactly do you think phishing is?

Yet

the big difference is that the data is stored temporarily in a human brain.

as opposed to, you know, somewhere in a poorly designed system which gets hacked.

(sorry, apparently joined a chorus)


The photo on the card is already stored in a system somewhere, and already vulnerable to attack. The brain is just for authentication at the door.

good point, though there might be a bit more biometric data required for a good facial recognition system.

Not necessarily.

The gym thing seems like a bit of a red herring. Certainly when you're in your own home you have a reasonable expectation of privacy. Even in a shared public space, you ought to have some legal protection against being arbitrarily tracked/surveilled. But on the literal physical premises of a private business, I'm not sure it's such a reasonable expectation.

Biometrics are the future equivalent of using the same password everywhere.

It seems more like 2FA using body parts instead of a phone.

I leave my U2F token in way fewer places than I leave my fingerprints.

Can we agree to 3FA then?

I think the main concern with biometrics is surveillance. Not as much the security problems associated with using the same password everywhere.

i think the implication is that, because it is insecure, it will not be deployed everywhere.

Hmm, I'm not sure that makes sense, since a lot of the deployments we're concerned about don't involve authentication (using biometrics in place of passwords), but other kinds of surveillance.

Sort of a tangent, but I wonder if there are any EU countries that have mandatory biometric ID cards. After a cursory (5-10 minute) search I found countries that either used to have ID cards but currently don't, used to have biometric ID cards but currently don't require biometrics, or have biometric ID cards that are optional. It does seem likely that in the future some country will, if it hasn't already, concede and mandate ID cards that contain biometrics.

Practically, anyway, GDPR seems like a much more effective measure.


The vast majority of countries (EU or not) only issue biometric passports these days: https://en.wikipedia.org/wiki/Biometric_passport#Countries_u...

There are still some loopholes which one can use in order to escape this, but they’re getting fewer and fewer. For example here in Romania they don’t ask you for your biometric data if you apply for a short-term one-year passport, but the issue is that over time going that way is more expensive than directly applying for a 5-year or 10-year passport (which do ask for your biometric data). And loopholes like this one will close pretty soon, I fear.

Spain. I've owned a government issued ID with my fingerprints for forty years. That's no tragedy, believe me. I don't want government to have certain data and I don't want private companies to have other data. If you think about identity as a right, you might start to see things under a different light.

For many private transactions, companies demand your ID number. That reduces identity theft to a bad movie plot.


How does this prevent identity theft, do the private companies verify the fingerprints?

The national ID prevents identity theft. If someone gets something using a fake ID, it's a problem of the provider, so they have every incentive to check the card and request a signature for every transaction. If someone gets past the first filter, you simply go there with your card and make it their problem, not yours.

Hungary scanned my fingerprints when I applied for a residence permit.

Facial recognition for gym entrance sounds wonderful!

If that's banned I'll be in the Resistance!


Indeed, privacy-wise it is equivalent to having cameras recording. But convenience-wise it is brilliant: when I want to do sports, I do not want to bring keys or gym cards, just sports clothes.

Far from equivalent my friend. Going through tapes is a manual laborious process that can only be done on a case-by-case basis. Transforming tapes into perfect database line-items that can be combined into a comprehensive profile on all your activities and whereabouts is a much more valuable and dangerous data set.

There already is regulation around storing biometrics in a few states, Ohio and Texas if I remember correctly, and in turn, most national companies avoid storing this data altogether because of the risk it poses should people move between those states.

  One of my local gyms this week switched to requiring fingerprints or you were barred from access
If this is a membership gym with contracts, wouldn't they have to wait until your next contract to impose such a change?

I'd say probably not, but it would likely allow you to cancel your membership early (and receive paid fees back, pro-rated).

This is actually a nice hypothetical for that idiotic vision of replacing law and the court system with algorithms. It's extremely unlikely that the specific case would be foreseen in a contract. There is a continuous spectrum of such changes, and it's impossible to formulate any specific rule that would capture them all.

Example A: The gym changes from keys to plastic membership cards. Would this be a breach of contract? I think most everyone would agree that no, it isn't.

Example B: The gym requires whole-genome sequencing (once), then requires a drop of blood every time you enter to check your identity? Breach of contract? -> Obviously.

For any two such changes, you can probably come up with yet another example that's somewhere in between. The closer they get, the more often you will find people disagreeing, yes. But that just shows how justice is a constant conversation not easily set in stone.

As for the specific case: European law really doesn't like biometric data, and it's unlikely they can get away with it.

(the following is based only on my knowledge of German and Portuguese law)

BUT, if they do, the pro-rated refund is the most likely outcome. It works both ways, though: if you move away, they also cannot require you to keep paying fees. It's a concept loosely translated as a "cessation of the foundational requirements of the contract".


I wonder what the implications are for the large number of secondary schools that now apparently use fingerprint biometrics for school meal payments.

> If consent is not given then that can not bar you from service.

But they probably can increase the price to cover "administrative costs/..."


in somewhat functional markets that are subject to competition, companies cannot arbitrarily raise prices.

In markets where a "Privacy Fee" is considered standard, the price raise will not be considered arbitrary. I fear this is a potential reality.

Disneyland Paris doesn't use biometrics?

I really hope the politicians in the US champion a GDPR law here. Unfortunately I'm not optimistic.

you know, after saying something like this, you have to vote :P

My datacenter uses biometrics too.

If consent is not given then that can not bar you from service.

What’s the reasoning for this? Shouldn’t I have the freedom to pick the conditions under which I offer my services, except for discrimination? Is it discrimination if I only offer biometric ID, say for business convenience?


The same reason that you aren't allowed to mix sawdust into the flour you sell, even if it is cheaper. Having a business incentive does not mean that you have a moral justification.

What's the moral justification here? Most people love biometrics.

What you describe is called freedom of contract, which has the problem that it's the opposite of free when you have unequal parties. For most people, contracts have a take-it-or-leave-it character, so there needs to be government regulation to ensure freedom for them.

Calling an imbalance of leverage "the opposite of free" is peak absurdity.

This sounds like a classic leading-and-pacing piece: take the lead in regulating a field before actual hard regulations are passed. So while the demands for restrictions on government use are (correctly, imo) very strict, for private entities we get this:

> From the moment one steps into a shopping mall, it’s possible not only to be photographed but to be recognized by a computer wherever one goes. Beyond information collected by a single camera in a single session, longer-term histories can be pieced together over time from multiple cameras at different locations. A mall owner could choose to share this information with every store. Stores could know immediately when you visited them last and what you looked at or purchased, and by sharing this data with other stores, they could predict what you’re looking to buy on your current visit.

> Our point is not that the law should deprive commercial establishments of this new technology. To the contrary, we are among the companies working to help stores responsibly use this and other digital technology to improve shopping and other consumer experiences. We believe that a great many shoppers will welcome and benefit from improvements in customer service that will result.

> But people deserve to know when this type of technology is being used, so they can ask questions and exercise some choice in the matter if they wish. Indeed, we believe this type of transparency is vital for building public knowledge and confidence in this technology.

So they don't actually advocate that you should get a right to privacy or a right not to be profiled once you enter a store.

Instead, you get a right to opt out of profiling by never entering any kind of store again.
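
To make the quoted mall scenario concrete, here is a minimal sketch (hypothetical class and function names, assuming some off-the-shelf face-embedding model produces the vectors) of how sightings from many store cameras could be linked into one cross-store visit history:

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    class VisitorProfiles:
        """Links face sightings from many cameras into per-visitor histories."""

        def __init__(self, match_threshold: float = 0.8):
            self.match_threshold = match_threshold
            self.profiles = []  # each entry: {"embedding": vector, "history": [(time, store)]}

        def record_sighting(self, embedding: np.ndarray, store: str, timestamp: str) -> dict:
            # Match against known visitors; if nothing is close enough, start a new profile.
            for profile in self.profiles:
                if cosine_similarity(profile["embedding"], embedding) >= self.match_threshold:
                    profile["history"].append((timestamp, store))
                    return profile
            profile = {"embedding": embedding, "history": [(timestamp, store)]}
            self.profiles.append(profile)
            return profile

    # Every camera in the mall pushes (embedding, store, time) into one shared index,
    # and any participating store can then read back a visitor's full cross-store history.
    mall = VisitorProfiles()
    mall.record_sighting(np.array([0.1, 0.9, 0.2]), store="Shoe Store", timestamp="2018-12-06 10:02")
    mall.record_sighting(np.array([0.1, 0.9, 0.2]), store="Electronics", timestamp="2018-12-06 10:40")
    print(mall.profiles[0]["history"])  # both visits linked to the same visitor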


Agreed, it’s sales 101: help the customer write the RFP.

This is the first instance I've seen of the tech industry calling for regulation. I admire this and the idea that people running a corporation can understand that despite their best intentions, in the long run the corporation will act to maximize profit via legal means, even if an action is not in the best interest of society. And so in some cases we need to make certain things illegal. I would love to see a company do this sort of thing for a broad set of tax loopholes as well, for example.

Companies calling for more regulation is a ubiquitous phenomenon.

https://en.wikipedia.org/wiki/Regulatory_capture

More regulation generally gives an advantage to larger companies over smaller ones since it creates barriers to entry; compliance costs usually increase sublinearly with revenue. (E.g., it's a lot easier for Microsoft to hire a dedicated lawyer than it is for a garage start-up.)

This idea that "companies always want less regulation than is socially efficient" is usually based on a misunderstanding of economics.


I'm glad you called out regulatory capture. The cynic in me wonders if Microsoft is so far behind in some AI aspects like this that they're taking this approach to slow down Amazon's and Google's forays into selling this tech to businesses and government.

The real cynic would say that Microsoft knows that this stuff is coming in legislation anyway (because too many people are too pissed off by now), and those who are on the bandwagon early - and actively helping to push it forward - will be rewarded with positive PR, while also helping to sink competition that's not so fast on the uptake.

I doubt Microsoft is behind in face recognition, which is an AI task that's been studied and done to death in academic literature. Microsoft has actually made some of the most important discoveries in AI (invented ResNets, co-authored Faster R-CNN), they can surely pull up a mere FR system.

This actually made me pause and think about what is already known about me. I wonder if there is a word for the collective knowledge about an individual.

Maybe if you can't do business without making sure you're not causing harm, you shouldn't do business at all.

Of course anti-competitive lobbying happens all the time. But if it's not economically feasible for a Scrappy Gang of Dropouts in The Garage to follow regulations that protect people's lives and freedom in this country, I'm cool with them finding another country, or their own deserted island perhaps.


I think you missed my point and the discussion on Wikipedia. You seem to be assuming that essentially all regulations are correctly aligned with the public interest. If that were the case, you'd be correct that compliance costs would (in an efficient equilibrium) be correctly internalized, merely excluding inefficiently small companies from the market [1]. However, this is not the world we live in. Countless historical examples, and the massive size of the corporate lobbying industry, are clear evidence that companies often shape their own regulations to their benefit. So on priors it's much more likely that Microsoft's public call for regulation -- which is just a particular form of lobbying -- is in their own interest, not some pure expression of civic duty.

[1] Of course, we are not automatically in an efficient equilibrium, and there exist worlds where the reduced competition due to barriers to entry creates costs that are larger than the benefits of the regulation. But I'm happy to put that scenario aside.


Okay, so that outlaws commercial transportation. Or at least I haven’t heard of any of them making sure they’re not causing harm.

There's always a call for regulation when a competitor has a product that is ahead of yours in some way and you, for whatever reason, can't or won't compete. See for instance Steve Cook's recent comments about GDPR and privacy.

In this case it's almost the opposite I think. MS has good technology and that is why they would want regulation. If there's regulation then any potential competitor will now have a much harder time creating a competing product.

Curious...do you have a link to that?

It's very strategic on Microsoft's part.

They aren't the industry leader, so leading the conversation on regulation helps them to impact the market leader.

It also lets them steer the conversation on regulation before it becomes a conversation occurring outside their control/influence.

Self-regulation is EXTREMELY common for both those reasons in the corporate world. It also never actually works as well as independent regulation, and when there are issues in independent regulation, it frequently occurs as a result of that independence being undermined/corrupted by revolving doors/lobbying/etc.

It's a nice press release, and smart on Microsoft's part, but don't fool yourself into thinking it's not in their self-interest to be doing this. To date, I don't think I can recall any instance of a public corporation acting against its own self-interest for moral reasons.


> It's very strategic on Microsoft's part. They aren't the industry leader, so leading the conversation on regulation helps them to impact the market leader.

On the other hand, if Microsoft is sincerely worried about this technology (and the potential negative impact it may have on its image, just as Amazon saw a few months ago), then it makes sense they would be lagging behind, as they would be more cautious about assigning resources and releasing a product.


Maybe I'm just reading into it, but this looks like:

   a) Anti-AWS (conceding loss of JEDI contract)
   b) Regulatory capture for the remaining big cloud players

I think you are close or exactly on target.

Adding new regulations for emerging technologies can reduce uncertainty. Why invest in something if you don't know if the government will curtail it in the future?

The idea is to be established in the field before regulation. That way you can lead the conversation around regulation which would, in effect, cement your continued relevance.

I am kind of wondering what is in it for Microsoft. No public company does anything against their own self interest. Is this just a play to regulate smaller companies out of the market? What are the strategies and tactics that MS is using?

Really? Elon Musk has been making such calls for quite some time - although it has been regarding AI in general and not just facial recognition. The last time I heard him talking about it, he seemed rather disheartened by the fact he'd been unable to have any impact.

Apple has been doing so for a few months at least.

Governments are the first in the line to abuse facial recognition.

Now that CCTV cams are everywhere, everybody should have a right to wear a mask everywhere without being discriminated against.


> everybody should have a right to wear a mask everywhere without being discriminated against.

It's a losing game. They will track you with your mask on from the moment you leave your house: your phone, your credit card, your gait, your car... It's like a super cookie - if you don't delete all 10 places it was stored, a single missed one will be enough to regenerate it.

Total citizen surveillance is coming, everyone's location history will be in a database and kept for years, just like phone call metadata.


Nevertheless, when somebody stole my bike from a bike rack (where there were no other bikes and no crowd) right under a security camera near a fancy shopping centre, the police couldn't find it. Miraculously, they couldn't even find it in the camera footage, as if both the bike and the thief were invisible.

Most shops and private facilities use motion-activated video to reduce storage volumes. There is massive variance in how reliably they actually turn on and record when there is movement.

Perhaps some city-owned CCTV cams are always on, but I'd be doubtful.


>Total citizen surveillance is coming, everyone's location history will be in a database and kept for years, just like phone call metadata.

I am under the assumption that it's already here: anyone who carries a mobile phone is already under surveillance, since the mobile networks share info with the government, license plate readers see who is traveling on the roads, and electronic financial records show your transactions.


I use cash and public transport (which is awesome in Europe), almost always keep my cellphone in airplane mode (going online via WiFi occasionally, e.g. to check for new messages), only turn its GPS on about five times a year for short periods, and am pretty happy.

You represent 0.0001% of the population. You will be tracked by the absence of metadata. There are a couple of articles about terrorists/spies who were tracked through the data patterns generated when they shut off their usual phone and switched to a burner.

Even if you pay cash, your transactions can still be tracked, because your face will be on the cash register camera.

Public transport has cameras inside too.

It's just a matter of time until the computing power and software to analyze all this video are everywhere.


Agreed. Acting unusually and furtively is a great way to raise red flags and invite more tracking. By repeatedly working to fly under the radar, you've made yourself much more interesting and more likely to be tracked closely whenever/wherever you reappear.

You want to be overlooked? Behave typically, obviously, and boringly.


So there aren't any cameras on public transport, or at the stations? Or on any of the buildings you walk past? Or at any of the stores you enter, or on light/power poles on public streets?

I haven't kept up with EU currency security features, but I remember reading how they keep trying to put RFID chips/fibers in them for tracking. They're currently using magnetic ink[1] that can be read via scanners as you walk past. There might be other tracking features that aren't disclosed.

> Some areas of the euro notes feature magnetic ink. For example, the rightmost church window on the €20 note is magnetic, as well as the large zero above it.

I don't think you're as anonymous as you think, even with all of the steps you've taken.

[1] https://en.wikipedia.org/wiki/Euro_banknotes#Security_featur...


FYI, Google tracks you even in airplane mode. The second you jump back on WiFi, your location history is updated. With dead reckoning calculations using your accelerometer and other sensors, they know EXACTLY where you've been.

Not sure if they do this on iPhone or just Android, but my guess is they do both if you've installed Google Maps/Gmail etc.

https://www.youtube.com/watch?v=S0G6mUyIgyg


Masks are a boring, defensive option. We need t-shirts with patterns that identify as multiple wanted criminals and lots of people wearing them at the same time.

Surely that would be easily filtered out via ML?

That would depend on the nature of the movement. More realistically: you'd want a simple way to create adversarial images from local photographs, and a simple, decently reliable way to put those on t-shirts, hoodies, and jackets (a rough sketch of the idea is below). Creating imagery adversarial to machine recognition is a field of active research, after all.

This would allow local privacy groups to put people from their group on their shirts and distribute them, kind of like facial recognition graffiti. That would be much harder to deal with, due to the volume and flux of adversarial imagery.
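
As a rough illustration of what "creating adversarial images" means, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest adversarial-image techniques. This is only the textbook starting point: the model, input tensor, and epsilon are stand-ins, and actually fooling a deployed face recognition system with printed clothing takes far more robust techniques (adversarial patches that survive printing, lighting, and viewing angle).

    # Minimal FGSM sketch, assuming PyTorch is installed.
    # The classifier and input tensor are stand-ins for illustration only.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, epsilon=0.03):
        """Return a copy of `image` nudged to confuse `model`.

        model: any torch classifier returning logits, in eval() mode
        image: tensor of shape (1, C, H, W), values in [0, 1]
        """
        image = image.detach().clone().requires_grad_(True)
        logits = model(image)
        current_class = logits.argmax(dim=1)   # what the model currently predicts
        loss = F.cross_entropy(logits, current_class)
        loss.backward()
        # Step in the direction that increases the loss for the current
        # prediction, pushing the model away from its own answer.
        adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)
        return adversarial.detach()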


And more and more places are passing laws that expressly prevent you from concealing your identity in a crowd.

I'd like to see some real teeth in their implementation. For example:

* Rather than merely forbidding biased uses in their TOS, an internal team should review the relevant source code & use cases of anyone implementing MSFT facial recognition, à la Apple's App Store.

* Build apis, libraries and easy-to-use tools that allow consumers to destroy their face data.

* Increase the concentration of pressure on Amazon by refusing to engage in the race to the bottom. Specifically, refuse to license facial recognition technology to law enforcement, military, or intelligence agencies until such time as they have independent civilian oversight, direct neutral-party monitoring, transparency, and demonstrated accountability for mis-use.

MSFT (and any corporation) is fundamentally untrustworthy. Principles are easily changed or ignored. Instead, they should begin creating institutions, code, and business processes that make abuse difficult. Testing tools and APIs are the right idea - more of this approach please.


Facial recognition should be expressly illegal. Period.

As should license plate surveillance.

And for those who think that license plate surveillance should be legal, what you may not know is that municipalities are mandating that private corporations install license plate tracking cameras on their facilities and report back to the municipality who is driving by that address. Menlo Park is just one such municipality.


It's a computation. Laws against it are unenforceable in general, absent draconian restrictions on computing devices.

This is HN. I fucking get it.

Facial recognition surveillance technology deployed in any public sphere should be expressly illegal. Period.


You clearly don't get it. What if facial recognition technology can be replaced with body recognition technology? Do you ban all forms of recognition technology? That would criminalize a lot of legitimate uses for machine learning.

No, you don't get it: FR is a sensor. It is not authoritative, and was never intended to be by anyone other than science fiction authors. FR simply helps sift through many faces to locate someone. A human still needs to be in the loop, or you are misusing the system. Lead developer of a leading enterprise FR system speaking.

This is where the ACLU should spend their time and effort rather than their political meanderings. This is politically neutral; it affects everyone, and Americans should enjoy some basic rights in this realm. Govern how private entities use this tech and data. Regulate official use so it's not abused, etc.

Political neutrality doesn't exist. Regulation encroaches on the freedom of corporations. I support that, but I'm also left of center. (For some definition of left and some definition of center.)

Yes, I suppose. I still think this would have bipartisan support in congress as well as support from the public, regardless of affiliation.


Awesome. I’d like them to spend their high powered lawyers here and at least drive the conversation and influence legislation. This is more than just abstract theory where it might affect a few thousand. This affects every citizen.

Every single time someone says "where was the ACLU when X happened?" (Usually in the context of a conservative's civil rights being violated) a quick search shows that the ACLU was defending that person or at least speaking out against it. They're pretty consistent in their principles, even when arrayed against liberals, as comes up from time to time when political correctness is on the line.

"The law should specify that consumers consent to the use of facial recognition services when they enter premises [...] In effect, this approach will mean that people will have the opportunity to vote with their feet"

What this will really mean, in effect, is that facial recognition will be widespread, legitimized, and unavoidable unless you want to live like a hermit, just like CCTV today. The only way this could potentially be avoided is targeted protests at the first stores adopting it.

The post does have some laudable positions and arguments against government surveillance using facial recognition, but I'm not sure how useful this is if private actors build even more powerful databases and offer them for sale to the highest bidder.


This is the same guy who recently vowed to provide any/all of MSFT's AI technologies to the Department of Defense https://www.google.com/amp/s/www.nytimes.com/2018/10/26/us/p...

I don't understand how this could be realistically regulated. It's passive technology and its use can't be detected. It's like trying to stop people from thinking.

You could say the same about a credit card skimmer on an ATM. It's passive, right? It's just a sensor that sits there and absorbs people's data. I don't think this argument makes a lot of sense.

The point is, there needs to be some law and order in place so that when people abuse this tech to harm people and society and get caught, there's some precedent to stop them and punish them. It doesn't matter if the technology is passive. The intent and action to use it to harm people are not passive and are not analogous to thought crime at all.

There is dire need for regulation with a lot of emerging technologies right now. We're building systems with enormous power which can break human society if misused. I think the intangibility and "passivity" of this tech (or at least how it is perceived) gives us a very false sense of security. Like how a few decades ago very few average people could understand how the internet might have a great impact on society. Obviously they aren't thinking that way anymore.

Check out Charles Stross's speech at C3 about regulatory lag relative to the accelerated nature of tech growth: https://www.youtube.com/watch?v=RmIgJ64z6Y4


The sense of passivity is that it doesn't do anything /to be stopped/. With a credit card skimmer, the installation is what is illegal, along with the theft involved.

It is like declaring your city nuclear weapons free when the only players are either above the law by jurisdiction or within detonation range already. Just having the law on the books makes the city look stupid.

Facial recognition is a process that works on images - that makes it more passive than even a sensor, since with sensors and recordings there are definite precedents for saying "not allowed here".


A lot of criminal acts aren't immediately detectable. But legal systems are set up to allow discovery of normally private documents or other evidence to see if anything illegal took place.

Also, the very act of having it on the books could discourage usage.


Of course this could be regulated. It depends on who owns this technology and what access they have to people's info. Merely detecting a person's face is passive, but linking it to information about that person is where regulation should take place.

It requires physical infrastructure beyond the human body, unlike thinking.

Regulate the sharing of FR galleries. Prevent the creation of giant FR galleries of consumers, students, residents or whatever blanket term. Additionally, require term limits on holding an image or "identity" in an FR gallery.

Its use can be detected by people who implement it, and if it's actually illegal, then you'll get whistleblowers every now and then. It's not perfect as far as enforcement goes, but most things aren't.

Without teeth--without meaningful consequences for violations--laws and terms of service are meaningless.

What will MS do when their terms of service are violated?

A regulation forbidding use of this tech for discriminating against certain people is worthless if it specifies a $10/day fine. Penalties have to be significant, and life-changing for violators.

For example, HIPAA / HITECH specifies criminal penalties, and pierces the corporate veil, for intentional violation of patient privacy.

Both are important. 1) the penalties have to be criminal, not civil. 2) natural persons (not Romney persons) who break the law must not be able to hide behind the limited liability of corporations.

A third step would be a bounty system for citizens bringing charges. The same thing made the Clean Water Act enforceable in the 1970s-1980s.

Without enforceability like this, it's all chin music. Or even greenwashing.


Think about the participants in a facial recognition system:

- Targets (people)

- Enablers (tech companies)

- Stalkers (consumer companies)

- Big Brother (governments)

Notably the only ones who cannot use the system are the Targets, because they don't have the necessary scale. Being part of a system you can't use is typically to your detriment.


FR is not out of reach for ordinary people. The algorithm of choice of the NSA is available at a cost of about $1200 per seat, and it only needs an ordinary PC to operate. This is no longer rocket science...

"$1200...ordinary people...needs an ordinary PC"

I'm guessing you had the US or Europe in mind rather than other parts of the world like Argentina, Uganda, China and India. (ignoring the omission of scale)


Personally I think facial recognition is a sign of an underlying problem - evidentiary standards and liability. We have already had people effectively murdered by bad evidence from win-seeking prosecutors taking any sort of pseudo-science voodoo they can get, like bite mark analysis. False positives of any sort, including from facial recognition, should be no different. Similarly, AWS's system thinks that many members of Congress are already in jail. Something that bad had no place being sold.

I could see holding the providers liable for bad detections as a good precedent for calibrating caution in a rational way, although it is a bit "eye for an eye" for society's liking. Something that only recognizes a face 60% of the time and says "Hi John" to Bob isn't a liability to anyone - really just amusement. Locking someone out of their apartment and making them call a locksmith because they got a bloody nose and facial recognition no longer works is low stakes. Having someone potentially jailed for a long time would bring the confidence intervals appropriately tight if, say, the prosecutor were at risk of death row or 300 consecutive years of sentencing. We would see people very reluctant to work in forensics or prosecution if that were the case.


When someone follows you around constantly, lurking and observing your actions, that's called stalking. And it's illegal.

We are to the point where commercial entities and governments are full-on stalkers. And that should be illegal, too.

Further, it is immoral. There is a fundamental hypocrisy when people are exhorted to autonomy and personal responsibility -- "by your bootstraps", "entrepreneur", "gig economy", etc. -- at the same time they are, with mass surveillance, being left with none of this, in truth. Your every action monitored, measured, standardized, and compelled to conform.

You are left with no agency, save that granted -- left -- to you by the powers that be.

And with everything recorded and stored, seemingly indefinitely, you become self-monitoring, self-constraining. Will this be used against me in a year? Five years? Does one slip or oversight last a lifetime?

By the way, do you see them taking action to swing the cameras, and the monitoring, the other way? Even Obama, with ever more secrets and aggressive prosecution of whistleblowers. The police, who have fought cameras and monitoring for years. NDAs left and right, disparagement suits. On and on and on...

And, I've gone on too much, here.

Never mind just the philosophy of the matter; look, too, at how it works in practice!

Do we all want to spend not just our work hours, but our lives, in virtual cubicles?

The post-modern panopticon.


There are already several anti-bias and discrimination laws on the books. Why does facial recognition warrant additional regulation?

Numerous processes already capture gender, race, and age.

Facial recognition seems like a better/faster tool for capturing these data points. But the requirement to comply with existing laws is unchanged.

Any further regulation will only limit the development of facial technology to a few large players that can afford compliance and enforcement measures.


> There are already several anti-bias and discrimination laws on the books. Why does facial recognition warrant additional regulation?

Mainly because many companies doing it are arguing that when their models produce biased results, it's not their fault, it's just "the computer thinks that way". So far as I know, this approach hasn't been properly tested in court, but it might just fly if courts decide that you need to have intent to discriminate (and that training on real-world datasets, which are always implicitly biased, does not constitute such intent).


(Re-posting my comment on this link from yesterday)

This is really interesting. I wonder how much of this is real, versus PR (although, Brad Smith has an excellent track record in this area). The company that has the most to lose, were there to be real regulations concerning facial recognition, is actually a company in Microsoft’s investment portfolio. That company has built the world’s largest database of face and identity information. Facebook.


This is a time for action, not just for facial recognition, but for every algorithm.

Every decision made by an algorithm should make its inputs clear, its criteria for interpreting those inputs clear, and its judgement to be disputable. If you can't get your black-box neural network to do so, then perhaps it shouldn't be making life-changing decisions for other people.

There's a startup that's doing sentiment analysis of social media posts to measure how "risky" a babysitter is - and likely does so in a biased manner. [1]

It's illegal to use such a system for the purposes of vetting an employee, yet their entire business model revolves around families using it to vet babysitters.

[1] https://gizmodo.com/predictim-claims-its-ai-can-flag-risky-b...


Enabling third-party testing and comparisons. New laws should also require that providers of commercial facial recognition services enable third parties engaged in independent testing to conduct and publish reasonable tests of their facial recognition services for accuracy and unfair bias. A sensible approach is to require tech companies that make their facial recognition services accessible using the internet also make available an application programming interface or other technical capability suitable for this purpose.

Does anyone else find this interesting? Is Microsoft trying to keep their facial recognition algorithm on top by comparing theirs to others'?
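
For what it's worth, the kind of independent testing the quoted passage describes isn't hard to sketch. Below is a rough illustration in Python; the endpoint URL, payload format, and labeled test set are all hypothetical, since the article doesn't specify what such an API would actually look like.

    # Hypothetical sketch of an independent bias test against a face
    # verification API. The POST /verify endpoint, payload format, and
    # {"match": bool} response are assumptions, not a real service.
    import requests
    from collections import defaultdict

    API_URL = "https://example-face-api.test/verify"  # placeholder

    def verify(img_a_path, img_b_path):
        with open(img_a_path, "rb") as a, open(img_b_path, "rb") as b:
            resp = requests.post(API_URL, files={"image_a": a, "image_b": b})
        resp.raise_for_status()
        return resp.json()["match"]

    def per_group_accuracy(test_pairs):
        """test_pairs: iterable of (img_a, img_b, same_person, group) tuples."""
        correct, total = defaultdict(int), defaultdict(int)
        for img_a, img_b, same_person, group in test_pairs:
            total[group] += 1
            if verify(img_a, img_b) == same_person:
                correct[group] += 1
        return {g: correct[g] / total[g] for g in total}

    # A large accuracy gap between demographic groups in the output
    # would be publishable evidence of unfair bias.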


Blatantly obvious attempt at regulatory capture. If there's anyone I'd guess is already doing what they are warning against in this "open letter", it's MSFT.

Maybe I'm a bit too cynical, but I guess MS isn't that heavily invested/successful in facial recognition technology; that's why it can posture like that?

Because this feels a bit reminiscent of the "G-man" campaign, which might have been fun and had a point, but a point that seemingly got lost along the road of "Windows 10 as a service/storefront".

At least gotta give it to MS PR: they seem to know what's on people's minds and are rather good at appealing to that.


So MSFT was actually ranked by NIST as having the most accurate facial recognition algorithm in the world. It's interesting they're just now talking about this (after those results were published).

That's the perfect time to call for regulations to make it harder for your competitors to catch up.

cf Regulatory Capture (https://en.wikipedia.org/wiki/Regulatory_capture)


Brilliantly malicious

No they were not. Link? I'm in the FR industry, MSFT is a non-player.

Microsoft gains the social kudos for speaking up (vaguely) about some "issue" or another. Meanwhile, actual development of the scenario will occur regardless.

I understand there is some overlap and some divergence between the perils of poor privacy practices and pervasive facial recognition, but doesn't addressing the former help with the latter somewhat?

It might seem like I'm trying to be on the side of the perfect against the good, but there is room for both efforts without stifling either. A holistic approach to privacy in general would help inform the values necessary for the responsible use of facial recognition technology.


The TLDR, MSFT says "You can't trust any of us so you better regulate us now."

... or they are just mad that I refuse to give them a LinkedIn photo. :) Supposed to be funny, but also serious. Every time they ask for one, and then ask why not, I tell them it's because they are fundamentally untrustworthy. And they are untrustworthy, as is every public for-profit business whose officers carry a fiduciary responsibility to shareholders.

IMO this position is well written and these two sentences succinctly articulate the situation:

"In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition."


This is MSFT saying "we can't compete here, and apparently can't buy the leaders, so we must legislate, because we, MSFT, are not in control."

Cool! How about Microsoft taking their own advice to heart and building services (e.g. Cortana) that run on-device?

This is really interesting and I agree, but I think only addressing facial recognition is missing the forest for a tree.

There is an underlying idea here of a person owning information about themselves and having control over it and making sure a company can not use it inappropriately. I think addressing it as only 'facial recognition' wouldn't go far enough.


They're not missing anything; by regulating hot topics individually they can prevent sweeping GDPR-style privacy regulation.

I am not convinced GDPR is a bad thing yet... What makes you think it is?

I like GDPR but it's probably bad for Microsoft.

There is already a circle around an individual's health data. This represents an expansion of that circle, and I think by focusing on the technology before it becomes too deeply integrated into "the way things are" it can happen quickly, in the way that Congress passed the Genetic Information Nondiscrimination Act in 2008.

Expanding the circle to protect more PII in general is going to be a longer fight because we’ve already let economic power and consumer behaviour develop into a strong status quo. For instance, if strong regulations came in to limit the creation of profiles for targeted advertising and that resulted in Google withdrawing free email accounts from the market, it might not have a lot of popular support.


Part of the problem with "algorithmic accountability" is that you need a way to verify that the software that is being used is identical to the one that has gone through the audit. With open source software you can do this with checksum verification. Is this type of verification something that any AI or facial recognition software has provided?
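
For the open source case, a minimal sketch of what checksum verification might look like, assuming the auditor publishes a SHA-256 digest of the audited build (the digest and file path below are placeholders):

    # Compare a deployed artifact against the digest published with an audit.
    # AUDITED_SHA256 and the file path are placeholders for illustration.
    import hashlib

    AUDITED_SHA256 = "<digest published in the audit report>"

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    deployed = sha256_of("/opt/fr-service/release.tar.gz")  # placeholder path
    print("matches audited build:", deployed == AUDITED_SHA256)

Of course, this only proves the deployed bytes match the audited artifact; linking a binary back to audited source also requires reproducible builds, which is the harder part.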

A bit odd coming from a company that harvested so much facial data, no? https://www.lifewire.com/website-that-can-guess-your-age-348...

I would love to have an alarm go off every time someone enters my mailroom after 10pm wearing a hoodie with the hood drawn over their head. Do we have that technology yet? I would be your customer.

Facial recognition is probably worse because it is harder to avoid, unless you are Muslim, but cellphone tracking is very bad: https://www.theregister.co.uk/2018/12/05/mobile_users_can_be... (Bluetooth and WiFi are also pretty bad)

The same should be said of the online advertising industry.

Whenever this topic comes up I think of Judas Priest's song "Electric Eye".

From the article: "The law should specify that consumers consent to the use of facial recognition services when they enter premises or proceed to use online services that have this type of clear notice."

So that's what Microsoft really wants: allowed everywhere, minimal notice, no user ownership of data, and no opt-out.


This is great. I congratulate them and thank them for outlining specific policy objectives.

My immediate next thought, though, is that Microsoft operates its "Cognitive Services," including facial recognition, in China. That's worrying, even if Microsoft would loudly prefer that governments generally pass nice privacy laws.


"Let's make something new, which not even the scientists know where will lead! But first let's create a ton o bureaucracy and regulations so only a few super giant tech companies can participate to make these regulations with no further intentions..."

As someone born in America, it amazes me that America sticks its head in the sand while China creams them by embracing tech. This is old tech that is already underused in America.

You need a catchy slogan. "Your Face, Somebody Else's Money"