
Microsoft Urges Congress to Regulate Use of Facial Recognition - doener
https://www.nytimes.com/2018/07/13/technology/microsoft-facial-recognition.html
======
madrox
Having worked at a large company that cares a great deal about COPPA, I've
seen how regulation really can work for consumer benefit. I'm not sure if
people appreciate how much COPPA cleaned up in child privacy. However, it's
definitely hurt the bottom lines of small businesses that liked playing fast
and loose with data.

The sudden commodification of facial recognition means most of the companies
racing to fill business needs are startups. Large corporations like Microsoft
are much better positioned to deal with industry regulation. This may be a
self-interested, anti-competitive move on Microsoft's part, but I don't care
as long as I benefit as a consumer, too.

~~~
nostrademons
Being a teenager when COPPA went into effect and having a number of netfriends
under 13 (at the time) - I'm not sure that employees at big companies really
understand all the unintended consequences of it.

By far the most common consequence amongst my friends was to teach us to lie.
(The second most common consequence was to shut us out of smaller websites
that didn't want to deal with regulation - at least until we learned to lie.)
"Are you under 13?" \- duh, no. "Are you under 18?" \- better answer no on
that one too, to be safe. "How old are you?" \- 20 is a good number. "What's
your birthdate?" \- how about Jan 1, 1970?

I had one friend who forgot her password to her Yahoo Mail account. "Why can't
you just use the password reset functionality?" "I forgot which birthday I
entered." "Why don't you call them up and have them reset it?" "I registered
under a fake name too, and forgot what I put."

And she was one of the savvier ones. I saw a number of people post their
street addresses, pictures of their house, vacations they were taking on the
public Internet, but when it came to their ages, "I'm, uh, 24" (adding a
decade and change).

People wonder why Millennials aren't more up-in-arms about data breaches and
identity theft, and why they prefer pseudonymous currencies like Bitcoin or
Ethereum over the real banking system with its KYC requirements. Maybe it's
because we've been trained to see identity as temporary, reputation as
something that can only be used against you, and information as something you
give out to get what you want at the moment.

~~~
sangnoir
> I'm not sure that employees at big companies really understand all the
> unintended consequences of it.

They knew - just like they know people don't read EULAs. They didn't care
because it was CYA action that took the bullseye off their collective backs.
Those that pretended to care requested "credit card age verification" to
improve their funnel conversion.

~~~
nostrademons
That's what I'd always assumed, but taking madrox's comment at face value, it
seems like the employees in question thought they were _actually_ cleaning up
the Internet. COPPA did no such thing; it just gave the _appearance_ of
cleaning up the Internet by making all children magically appear to be over 18
to those asking the question.

~~~
mcny
Reminds me of a story where PayPal froze someone's account because they
were not eighteen when they created the account. I read it here on HN. Maybe
someone can find a link to the comment...

~~~
ddeck
" _If you opened your PayPal account before you were 18, close it_ "

[https://news.ycombinator.com/item?id=14226775](https://news.ycombinator.com/item?id=14226775)

------
pasbesoin
Private people can be accused of stalking.

If corporations are "people"...

Seriously. I don't want to live in a world where I am "stalked" by billion
dollar "people" who decide whether I get insurance, and at what price, and who
can search "big data" for any and every possible "infraction", also
retroactively, at their "discretion".

And who turn around and rat me out to whatever authorities, who are not
permitted to directly collect such data but who are free to take it -- or
purchase it, using my tax dollars -- from such third parties.

Two years ago, I helped a casual friend with a prior felony drug conviction,
to get and stay sober.

Now, Facebook et al. put me in that felon's "graph", and what happens to me?
For example, am I -- by computed association -- an insurance risk?

One simple example, of how you can't have a functioning society in the face of
such ueber-monitoring.

They will destroy what they are ostensibly trying to "shore up".

Society works, in good part, because people are free agents.

Ironically, the same message these bozos try to convey during election season.
And the one they use to rail against "big government".

Well, big (private) surveillance is no different. Worse, even, because it's
becoming apparent that people have little or no say in whether and how it's
done. Not even a vote during elections, with which to influence policy.

Politicians have, to a significant extent, externalized the political cost of
what they -- contrary to their rhetoric and also in the name of their big
business buddies -- are pursuing.

~~~
reaperducer
_Facebook et al. put me in that felon's "graph", and what happens to me? For
example, am I -- by computed association -- an insurance risk?_

It's coming. Just wait.

But just because one insurance company was stopped, doesn't mean others have
been, or will be.

[https://www.theguardian.com/money/2016/nov/02/facebook-admiral-car-insurance-privacy-data](https://www.theguardian.com/money/2016/nov/02/facebook-admiral-car-insurance-privacy-data)

------
mleland
Are they really only taking this stand because Amazon is looking to get a
government contract for its Rekognition software, and Microsoft is trying to
deter that?

~~~
ohthehugemanate
I don't think it's SPECIFICALLY the Rekognition contract, but that is the
broad stroke. Microsoft has a strong ethics review process for all things AI,
and routinely turns down fat projects because they don't pass ethical muster.
The contracts that Rekognition is designed to bid for are largely in the grey
zone where MS would turn a lot of things down. Government regulation would
help lower the cost of Microsoft's ethics policy.

~~~
sargun
According to who / what?

~~~
hirsin
Fernando Diaz, for one, has mentioned passing up projects. He's a principal on
the MSFT AI Ethics group (FATE).

------
mtgx
Why not support a law like GDPR instead? Why does every market need its own
special regulations? Is there going to be a separate law for voice
recognition, too? And I could go on.

~~~
kevingadd
There's no way anything resembling GDPR would pass in the United States, at
least not in the next decade. Targeted regulations seem more likely to be
feasible because they will be smaller and easier to get through negotiations
and approved by the US House and Senate. (Not that I think even this sort of
regulation would be easy with how things are right now)

Laws the size of GDPR in scope often take more than a decade here to pass -
the ACA is a good example, and even after it passed there are still people
fighting over whether it's valid and how it should be enforced, while other
people are actively trying to repeal it. If we tried to handle something like
facial recognition that way, the law would be useless because by the time it
made it through the process and got put into effect the harm would already
have been done.

------
exabrial
Why? The people we don't want to use it (NSA, FBI, local police) are going to
do it anyway.

~~~
andrestan
Are those really the people you fear the most?

~~~
jMyles
People with a great deal of political power who are heavily armed and nearly
no public or civil accountability? Yes, it's reasonable to fear people in such
a position. History and common sense are both instructive on their likely
actions.

------
cinquemb
Laws and regulation are only local maxima on the capabilities of technology
and how they will be applied.

Consumer protection from this stuff is a joke in an age where GDPR Skinner
boxes give people warm fuzzies as they keep downloading crApps onto their
platforms that suck up everything, passively and with users' active
engagement, feeding backends that are ever more open to the public for
consumption (due to the lack of corporate accountability for such downsides).
Combine that with increasingly cheaper storage costs and more people becoming
knowledgeable about the tools… yeah, if one is banking on MSFT and its current
behemoth brethren forever maintaining an advantage…

Kings of years past wanted to regulate the use of the printing press, and were
modestly successful at first, though in time most people realized that such
diktats were futile.

------
0xBA5ED
Maybe this is more of a PR move than an ethics one, considering the growing
sentiment against tech giants. Facebook has developed an image of quietly
doing their deeds for a long period of time, then "getting caught red-handed"
all of a sudden and dragged through the media. Maybe it's about getting ahead
of the curve in this regard because they believe up and coming tech will
continue to deepen privacy concerns rather than plateau any time soon. Of
course, it's reasonable to assume the stakes will continue to rise with the
power and reach of technology, so it's a good idea to publicly appear
concerned long before the SHTF. It also passes off some of the responsibility
to the government for whatever disaster they think could happen.

------
sqdbps
Most likely they just want a federal law to deal with instead of having to
make unworkable carve-ups for Illinois and Texas.

This unsurprisingly provided an ideal preamble for the NYT to try to
reinforce the unsubstantiated claim that tech firms swayed the election and
salivate over how this might open the door to further regulation. To keep
things above board, newspapers should provide a disclaimer that they are
reporting on business rivals when they write about tech firms.

------
ihsw2
How does one reconcile this with jaded cynicism of "regulations for thee but
not for me"?

Furthermore, does this regulation target the hardware products themselves, the
software performing the recognition, the biometric data itself, the transfer
of this biometric data, aggregate ("anonymized") biometric data, the
processing of biometric data? There is a lot to talk about here.

~~~
mattnewton
I’m fine with Microsoft’s motives being hurting rivals with regulation, if it
gives me more privacy rights. In fact that’s great, because their coffers are
much larger than the eff or whoever else could take up this lobbying effort.

~~~
craftyguy
> if it gives me more privacy rights

Meanwhile, Microsoft still defaults to collecting/retaining telemetry
information from users of its software.

~~~
bradford
<disclaimer, MS employee>

I'd like to point out that MS adheres to GDPR regulations and has applied
those protections to all users.

[https://www.techrepublic.com/article/microsoft-extending-gdpr-protections-to-all-global-customers-heres-how/](https://www.techrepublic.com/article/microsoft-extending-gdpr-protections-to-all-global-customers-heres-how/)

~~~
craftyguy
OK? But it still defaults to 'collect all the things':

> But users now have access to a privacy dashboard that allows you to easily
> regulate or opt out of any data collection.

How about Microsoft does _not_ collect user data by default and lets them
opt in?

~~~
zeusk
> > But users now have access to a privacy dashboard that allows you to easily
> regulate or opt out of any data collection.

> How about microsoft does not collect user data by default and lets them opt
> in?

A) Not all users are technical enough to understand how telemetry helps
developers find faults and better understand crashes/bug reports.

B) "Most" users don't care if data is collected about the software and not the
data they put in that software.

C) If you work in tech, I'm sure you know how many people pick options other
than default.

------
dendisuhubdy
no

~~~
dang
Would you please stop posting unsubstantive comments to Hacker News?

------
mindslight
Totally blocked out the Black person in the example scene. Classy!

Even disregarding the obvious concerns of totalitarianism, polite society is
going to require some accountability lest random developers' lame [0] biases
turn into universally baked-in ones.

[0] Or simply expedient. As in, the recognition probably performed poorly on a
darker face, so he was cut. But that won't stop the sales guy from promising
that the implementation is ready...

------
crb002
I don't see how they can under the 1st Amendment; not to mention Microsoft
wants to kill off facial recognition startups with government red tape.

------
vonnik
More regulation will benefit companies like Microsoft and lead to less
innovation, because of the legal hurdles startups would need to clear. Much in
the same way that the primary beneficiary of GDPR has been Google, because
advertisers trust that its scale makes it more likely to be compliant with
the new laws. But facial recognition has a lot of applications that aren't
creepy, ranging from fun (fake mustache on Snap) to useful. Facial
identification might be a better target for regulation, but getting the
government involved here is inviting a bulldozer to the rose garden.

~~~
bargl
There are two technologies at play here: Face Recognition (who you are) and
Face Detection (where the face is in the frame). There really is nothing bad
about Face Detection; it would have to be paired with other data in order to
uniquely identify a person. But Face Recognition tells you exactly who someone
is, which should probably be regulated.

We have a lot of protections around identifying people already; I don't
think this should be an exception. Although educating Congress is going to be
a VERY tough job.
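
The distinction above can be sketched in code. This is a minimal, hypothetical Python sketch (the function names and the stubbed detector are illustrative, not any real library's API): detection only yields anonymous bounding boxes, while recognition additionally matches a face embedding against a gallery of known identities.

```python
import numpy as np

def detect_faces(frame):
    """Detection: answers only *where* faces are, as bounding boxes.
    Stubbed here; a real system would run a detector model."""
    return [(40, 40, 120, 120)]  # (x, y, width, height)

def recognize_face(embedding, gallery):
    """Recognition: answers *who* a face is by comparing its embedding
    against a gallery of known identities (nearest neighbor)."""
    names = list(gallery)
    dists = [float(np.linalg.norm(embedding - gallery[n])) for n in names]
    best = int(np.argmin(dists))
    return names[best], dists[best]

# Detection alone produces anonymous boxes; identity only emerges once a
# gallery of labeled embeddings is brought in.
gallery = {"alice": np.array([1.0, 0.0]), "bob": np.array([0.0, 1.0])}
probe = np.array([0.9, 0.1])  # toy 2-D "embedding" for illustration
name, dist = recognize_face(probe, gallery)
```

The regulatory line the comment draws falls exactly at the `gallery` argument: without that labeled database, the system can only say a face exists, not whose it is.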

------
bufferoverflow
I don't understand why people expect privacy in public, whether making public
comments or walking down a random street. Literally anyone can take a photo or
a video of you; it's completely legal.

Expectation of privacy is well established in law:

[https://en.wikipedia.org/wiki/Expectation_of_privacy](https://en.wikipedia.org/wiki/Expectation_of_privacy)

~~~
monocasa
Because the scaling effects of automation change the natural balance of the
implications of no public privacy.

For instance, the South Dakota Supreme Court recently found that leaving a
webcam on public property for months in order to record everyone who arrived
at and left a private residence was a Fourth Amendment violation.

[https://www.criminallegalnews.org/news/2017/nov/16/south-dakota-supreme-court-holds-warrantless-use-pole-camera-outside-residence-two-months-constitutes-search-violation-fourth-amendment/](https://www.criminallegalnews.org/news/2017/nov/16/south-dakota-supreme-court-holds-warrantless-use-pole-camera-outside-residence-two-months-constitutes-search-violation-fourth-amendment/)

~~~
Deadron
Missing the key part that this was done by the police targeting a specific
home. Otherwise security cameras would fall under this category.

------
dsfyu404ed
The problem isn't the recognition, it's the data retention...

You can recognize me all you want as long as you get rid of the data and
metadata when my interaction with your system is over.

Edit: I'm not saying it's even possible to have facial recognition be useful
without lots of data retention, just that the retention, and the potential for
bad outcomes from these sorts of data sets existing, is the problem, not the
act of recognition itself.
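
A recognize-then-discard flow, as a hedged sketch (pure Python; the watchlist, threshold, and function name are all hypothetical, not any real system): the probe data exists only for the duration of the check, and nothing about the encounter is persisted.

```python
import math

# Toy watchlist of known embeddings (2-D vectors for illustration).
WATCHLIST = {"wanted_person": [0.2, 0.9]}
THRESHOLD = 0.5  # assumed match distance cutoff

def check_and_discard(probe_embedding):
    """Compare a probe against the watchlist and return only a yes/no
    decision. No log, no database write: once this function returns,
    the probe embedding goes out of scope and nothing is retained."""
    for name, ref in WATCHLIST.items():
        if math.dist(probe_embedding, ref) < THRESHOLD:
            return True  # flag for a human reviewer; still persist nothing
    return False
```

The design choice being illustrated: the system's usefulness (match / no match) survives even though the "who walked through the airport" record does not.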

~~~
s3r3nity
I'm not sure I understand your comment. How would you _not_ retain
data/metadata information about recognition and still provide any utility?

Wouldn't the analogy be like Facebook identifying your face in photos, but not
retaining the fact that you're in the photo?

~~~
ajmurmann
It depends on the use case. Example: I walk through the airport and an
application scans my face and either calls security because I was recognized
as a known terrorist or lets me pass because I'm awesome. The parent's point
matters for what happens after. In the not-a-terrorist case, the problematic
system would retain that (it thinks) you walked through the airport. In an
even more problematic system, you now get ads because you were there.

~~~
s3r3nity
Wait - wouldn't you _want_ the system to retain that information in your first
case if you were mis-identified as a terrorist? I.e. you were recognized, but
you were confirmed not a terrorist - so you won't be flagged later. Or maybe I
misunderstood.

Otherwise, you run the risk of not updating the model to fix the false
positive, and you might be flagged multiple times in the future.

~~~
ajmurmann
I wouldn't want the system to keep information about me having been anywhere
unless I'm a criminal. How would that information be helpful in updating the
model. It already worked correctly.

------
bsenftner
Ha! They must be aware they can't keep up with FR advancements, so they are
turning to regulations to stifle innovation. Plus pointing the finger at FR
and data retention conveniently moves attention off all the analytics their OS
captures and retains for Microsoft.

~~~
s3r3nity
Damned if they do - damned if they don't?

If they supported its use: "Microsoft is back to its old ways and is
untrustworthy."

When they do something pro-consumer & pro-privacy: "Microsoft can't innovate
in the marketplace and needs to stifle innovation."

(Yes, I know it's a straw-man argument, but it's more to point out the general
hypocrisy of such a comment.)

~~~
huntie
Corporations are amoral. All of their actions are self-serving and should be
viewed as such. While a particular action (such as this one) may be good for
consumers, this is merely a side effect and not the intention.

~~~
s3r3nity
I agree with your premises, but there's no reason to think that an action
can't be pro-consumer + pro-ecosystem + pro-company all at the same time.

