Microsoft Urges Congress to Regulate Use of Facial Recognition (nytimes.com)
372 points by doener 4 months ago | 116 comments



Having worked at a large company that cares a great deal about COPPA, I've seen how regulation really can work for consumer benefit. I'm not sure people appreciate how much COPPA cleaned up child privacy practices. However, it's definitely hurt the bottom lines of small businesses that liked playing fast and loose with data.

The sudden commodification of facial recognition means most of the companies racing to fill business needs are startups. Large corporations like Microsoft are much better positioned to deal with industry regulation. This may be a self-interested, anti-competitive move on Microsoft's part, but I don't care as long as I benefit as a consumer, too.


Being a teenager when COPPA went into effect and having a number of netfriends under 13 (at the time) - I'm not sure that employees at big companies really understand all the unintended consequences of it.

By far the most common consequence amongst my friends was to teach us to lie. (The second most common consequence was to shut us out of smaller websites that didn't want to deal with regulation - at least until we learned to lie.) "Are you under 13?" - duh, no. "Are you under 18?" - better answer no on that one too, to be safe. "How old are you?" - 20 is a good number. "What's your birthdate?" - how about Jan 1, 1970?

I had one friend who forgot her password to her Yahoo Mail account. "Why can't you just use the password reset functionality?" "I forgot which birthday I entered." "Why don't you call them up and have them reset it?" "I registered under a fake name too, and forgot what I put."

And she was one of the savvier ones. I saw a number of people post their street addresses, pictures of their house, vacations they were taking on the public Internet, but when it came to their ages, "I'm, uh, 24" (adding a decade and change).

People wonder why Millennials aren't more up-in-arms about data breaches and identity theft, and why they prefer pseudonymous currencies like Bitcoin or Ethereum over the real banking system with its KYC requirements. Maybe it's because we've been trained to see identity as temporary, reputation as something that can only be used against you, and information as something you give out to get what you want at the moment.


> I'm not sure that employees at big companies really understand all the unintended consequences of it.

They knew - just like they know people don't read EULAs. They didn't care because it was CYA action that took the bullseye off their collective backs. Those that pretended to care requested "credit card age verification" to improve their funnel conversion.


That's what I'd always assumed, but taking madrox's comment at face value, it seems like the employees in question thought they were actually cleaning up the Internet. COPPA did no such thing; it just gave the appearance of cleaning up the Internet by making all children magically appear to be over 18 to those asking the question.


This thread conflates two problems: how identity is managed on the internet, and corporate responsibilities in the face of that.

True age verification on the internet is a funny business. Unless we get some kind of government-backed ID, you have to trust what the user is telling you. It's a company's responsibility to take reported age seriously. Lying about your age breaks the user agreement that no one reads.

But if you tell the truth, and you're using an app designed for kids, your data will be protected according to the law, which is very pro-consumer.

In the end, if you choose to lie about your identity on the internet, that's your business.


Reminds me of a story where PayPal froze some person's account because they were not eighteen when they created the account. I read it here on HN. Maybe someone can find a link to the comment...


"If you opened your PayPal account before you were 18, close it"

https://news.ycombinator.com/item?id=14226775


> People wonder why Millennials aren't more up-in-arms about data breaches and identity theft, and why they prefer pseudonymous currencies like Bitcoin or Ethereum over the real banking system with its KYC requirements.

As a millennial, I've always wondered about this. I'll hear coverage on NPR fairly regularly about the latest data breach, and I always feel like NPR is either a) pushing the outrage on me or b) assuming that I must be outraged, because I'm never outraged. I see pretty much everything I put online as public, even if there is a veil of privacy (like FB for example).


What outrages me isn't the passwords being leaked; it's the SSN that I never put online in the first place but that was shared, by companies I trusted, with companies I don't trust, with no choice or notification on my part.


My guess is that the useful intended consequence was that companies specifically targeting young people for exploitation would think twice. Everyone knows the young will sneak onto porn sites. But an enterprise that constructs an app to exploit kids in particular is going to feel a big target on its back, and rightfully so.

Edit: My guess is you'd have to be a millennial to think other generations didn't learn to lie to adults in a multitude of ways.


> People wonder why Millennials aren't more up-in-arms about data breaches and identity theft, and why they prefer pseudonymous currencies like Bitcoin or Ethereum over the real banking system with its KYC requirements.

I bet most Millennials barely know what Bitcoin is and are hoping Ethereum is some cool new drug.


Millennial here too.

I absolutely agree with this. The issues mentioned are too real (forgetting your birthday, etc).


> I've seen how regulation really can work for consumer benefit. I'm not sure if people appreciate how much COPPA cleaned up in child privacy

Yes absolutely! Similarly, HIPAA has been incredibly useful in forcing hospitals and other for-profit companies to at least kind of care about medical data security. It's not perfect, but without HIPAA that data privacy/security would be in a far worse state.


Here in Brazil, where something like HIPAA doesn't exist, doctors routinely send patient records over WhatsApp, 'because it's more convenient'.

I feel like despite making some things more cumbersome, strict rules for sensitive data are usually a good idea.


Conversely, in the US a lot of medical data still needs to be sent by fax... so it's still not perfect by any means.


> Conversely, in the US a lot of medical data still needs to be sent by fax...

That's actually largely because fax is exempt from the security requirements applicable to electronic communication under HIPAA; it's a loophole organizations exploit to avoid the cost of secure comms.


I get my records via an app. I honestly have no faith that it is secure, but if someone really wants to know what my creatinine value is, I'm sadly not concerned.


One day, your health insurance will be more expensive. One day, you might be denied a bank loan.


That's fine; I wouldn't want to do business with that type of bank.


Clearly you haven't observed the trend of corporate consolidation in this country, especially in the financial sector.

What happens when there's only two, and they both deny you? Seems shortsighted to not care about your privacy.


Addendum to this: HIPAA told us (engineers who work in the health space) what to do, and HITECH told us how to do it.

Both are useful.


Which specific HIPAA IT security requirements would you not have met otherwise?

As I recall, it’s super basic controls like disk encryption, backups, system updates, and screensaver locks. Not, like, certified crypto hardware and static analysis.

I appreciate that we can punish people for failing to meet table stakes, but if you weren’t going to do that stuff anyway, something is deeply wrong.


If the number of breaches happening every week is any indication, something is deeply wrong not with individual companies, but with most of the industry.


HIPAA doesn't really begin to address APTs with 0-days. The controls it specifies are necessary, not sufficient.


To play devil's advocate: Without HIPAA, health care costs in the US might also not be multiple times higher than in other developed countries. I have worked multiple jobs in the health sector, and it seemed as if the vast majority of the overhead and time spent was for the sake of complying with HIPAA.


The reason health care costs in the US are ballooning is not HIPAA. It's the Byzantine system of who pays for what, how much, and why, where nobody has an incentive to make care cheaper.

See:

* Why nobody can itemize a medical bill.

* Why a bag of saline may cost the patient $100.

* Why an in-network hospital will bring in an out-of-network independent contractor nurse to assist an in-network surgeon, leaving you with a $200 bill for the surgery + a $5,000 bill for the out-of-network assistant.

* Why nobody will tell you how much a procedure will cost, with, or without complications.

* Why nobody knows how much your insurance will pay until the bill is due, often months after the procedure.

Compared to all of these problems and abuses, HIPAA seems the least of America's problems.

I'd like to contribute to reducing the cost of care, by shopping around for more efficient/more affordable/better dollar for value doctors/hospitals, but I haven't the foggiest idea of where to start. It's like going to a car dealer, saying that you want a Prius, and not knowing the age of the car, its condition, or how much you will be billed until after the title is irrevocably transferred to you. Oh, and the dealer across the street will happily sell you one for half what you paid. Or double. You really have no idea.


> I'd like to contribute to reducing the cost of care, by shopping around for more efficient/more affordable/better dollar for value doctors/hospitals, but I haven't the foggiest idea of where to start.

Not to mention that this is almost completely out of your hands if you're unconscious or otherwise not able-bodied. If you get in a car accident and are unconscious, you're likely to go to whichever hospital the ambulance takes you to, and you're on the hook for that ambulance ride now, too. Before you even get to the hospital, your bill can already be over $1000.


That's enough of an outlier, and so far outside of my control, that I don't care much to optimize for it. Ambulance rides in most other countries cost similar amounts, too.


> nobody has an incentive to make care cheaper.

Companies who act as the insurance company and the care provider have an interest in cheaper care. E.g. Kaiser Permanente


Well, they are essentially a single-payer system for their customers. That's not really surprising.

They also have an incentive to err on the side of providing less care. Since you still can't meaningfully comparison shop between them and their competitors, I suppose it is exactly like a national single-payer system.


And sometimes less care produces better outcomes. You can run too many diagnostics leading to more false positives.


> They also have an incentive to err on the side of providing less care.

This is only true at the end of a person's life. Before then, ignoring problems does not make them go away; in fact, it makes them more expensive to treat.


Not quite; there's also the chance you might switch providers before discovering a condition.


> I have worked multiple jobs in the health sector, and it seemed as if the vast majority of the overhead and time spent was for the sake of complying with HIPAA.

It is pretty easy to comply with HIPAA, and other countries also have privacy laws governing medical records. I suspect you've worked for incompetent organizations - some organizations intentionally drag out compliance and make things more difficult in order to complain or justify increased costs. This is the lamest excuse I've heard in a long time. All user/customer data should always be protected and compartmentalized to the greatest extent possible in any industry.


AIUI HIPAA really isn't that much different from European regulations regarding medical data, so no, that argument does not seem to hold any water.


Why the US spends 2-3x in dollars per capita, and almost as much more as a percent of GDP, on healthcare than Germany, France, and similar countries is an interesting question... but is it actually an important question?

The thing that is causing most of the problems for people when it comes to affording healthcare is that the costs keep going up. We think we've finally got something we can afford...and then in a few short years we can no longer afford it. It has outstripped our pay and our investments.

This is not just a US problem, either. Costs are going up at roughly the same rate in those other countries. E.g., US to Germany dollars per capita ratio in 2017 for healthcare was 1.8.

In 2010, it was 1.8.

In 2000, it was 1.7.

In 1990, it was 1.5.

I'd happily accept the US continuing to pay the high amount we are now paying if we could just curb the growth, so that whenever I manage to get coverage I can afford, I can keep it for a long time.

Here's a site with all the data available for download, and explorable interactively on the site [1]. Uncheck "latest data available" and use the year range slider to select a range, and it will show you the growth in several countries over that range. You can use the filters to narrow it to specific countries or groups like G7, G20, OECD, etc.

[1] https://data.oecd.org/healthres/health-spending.htm


It's not really HIPAA. I think it's more things like a lack of price transparency, the 'customers' of health care being disconnected from the users of health care, and so on. Kind of like how B2B software is often horrible because the customers are disconnected execs and the users (employees) pay the price. This goes into more detail:

http://abovethecrowd.com/2017/12/18/customer-first-healthcar...


Agree. And anyone who sees a business opportunity in a shady area where basically “regulation hasn’t kept up with tech” just deserves to have the rug pulled from underneath by regulators. The message to the next company should be “if it seems shady, it’s probably not worth trying”.


> The sudden commodification of facial recognition means most of the companies racing to fill business needs are startups.

I know it's a common phrase, but to be rhetorical, when is facial recognition actually a "business need"?


Facial recognition is not a business need in and of itself, but it can be used to fulfill business needs.


Agreed. Rare instance of corporate and personal interests aligning and what not.


Private people can be accused of stalking.

If corporations are "people"...

Seriously. I don't want to live in a world where I am "stalked" by billion dollar "people" who decide whether I get insurance, and at what price, and who can search "big data" for any and every possible "infraction", also retroactively, at their "discretion".

And who turn around and rat me out to whatever authorities, who are not permitted to directly collect such data but who are free to take it -- or purchase it, using my tax dollars -- from such third parties.

Two years ago, I helped a casual friend with a prior felony drug conviction get and stay sober.

Now, Facebook et al. put me in that felon's "graph", and what happens to me? For example, am I -- by computed association -- an insurance risk?

One simple example of how you can't have a functioning society in the face of such ueber-monitoring.

They will destroy what they are ostensibly trying to "shore up".

Society works, in good part, because people are free agents.

Ironically, the same message these bozos try to convey during election season. And the one they use to rail against "big government".

Well, big (private) surveillance is no different. Worse, even, because it's becoming apparent that people have little or no say in whether and how it's done. Not even a vote during elections, with which to influence policy.

Politicians have, to a significant extent, externalized the political cost of what they -- contrary to their rhetoric and also in the name of their big business buddies -- are pursuing.


> I don't want to live in a world where I am "stalked" by billion dollar "people" who decide whether I get insurance, and at what price, ...

The use of big data will render insurance into exactly what it is not. I.e., if these companies have their way, they will know in advance exactly what medical condition you will have at what age, etc., and you will pay exactly for that, plus a big profit margin of course.

As a society, we should ensure that insurance stays insurance, and doesn't become an expensive loan that you pay off in advance.


> Facebook et al. put me in that felon's "graph", and what happens to me? For example, am I -- by computed association -- an insurance risk?

It's coming. Just wait.

But just because one insurance company was stopped, doesn't mean others have been, or will be.

https://www.theguardian.com/money/2016/nov/02/facebook-admir...


China's Sesame Credit is one experiment I'm sure most countries are watching closely, as it's designed to rate you based on your behavior online, your social network, and your consumption and interactions in society at large.

https://www.bbc.com/news/world-asia-china-34592186


>>Seriously. I don't want to live in a world where I am "stalked" by billion dollar "people" who decide whether I get insurance, and at what price, and who can search "big data" for any and every possible "infraction", also retroactively, at their "discretion".

If this can be automated passively and cheaply, it's an inevitable outcome, regulation or not.


Are they really only taking this stand because Amazon is looking to get a government contract for its Rekognition software, and Microsoft is trying to deter that?


I don't think it's SPECIFICALLY the Rekognition contract, but that is the broad stroke. Microsoft has a strong ethics review process for all things AI, and routinely turns down fat projects because they don't pass ethical muster. The contracts that Rekognition is designed to bid for are largely in the grey zone where MS would turn a lot of things down. Government regulation would help lower the cost of Microsoft's ethics policy.


According to who / what?


Fernando Diaz, for one, has mentioned passing up projects. He's a principal on the MSFT AI Ethics group (FATE).


I didn't find anything on Microsoft turning down projects due to ethical issues.

However, I found a podcast with Fernando Diaz talking about the ethical problems of using AI and experimenting with users without their knowledge, which is commendable.


Most likely. You'll rarely see a corporation encourage the government to do anything unless it helps them or hurts their competition.


Why not support a law like GDPR instead? Why does every market need its own special regulations? Is there going to be a separate law for voice recognition, too? And I could go on.


There's no way anything resembling GDPR would pass in the United States, at least not in the next decade. Targeted regulations seem more likely to be feasible because they will be smaller and easier to get through negotiations and approved by the US House and Senate. (Not that I think even this sort of regulation would be easy with how things are right now)

Laws with GDPR's scope often take more than a decade to pass here - the ACA is a good example, and even after it passed there are still people fighting over whether it's valid and how it should be enforced, while other people are actively trying to repeal it. If we tried to handle something like facial recognition that way, the law would be useless, because by the time it made it through the process and took effect the harm would already have been done.


Why? The people we don't want to use it (NSA, FBI, local police) are going to do it anyway.


Because there are companies that use facial recognition data that will be bound by these rules.


It sets precedent. If you're concerned about government actors, that will make it harder for less shrewd prosecutors to enter evidence tainted by the unlawful use of facial recognition.


This is really the crux of the matter.

Any "regulation" is just meant to divide-and-conquer, while the actual bad actors do whatever they please.

Moreover, I surmise that the "regulation" will rarely be invoked to protect vulnerable populations. However, I can certainly imagine that an app that recognizes the faces of police officers and/or agents provocateurs and catalogs their presence at scenes of illegal activity might well be "regulated" out of existence.


Are those really the people you fear the most?


People with a great deal of political power, who are heavily armed and have nearly no public or civil accountability? Yes, it's reasonable to fear people in such a position. History and common sense are both instructive on their likely actions.


Laws and regulations are only local maxima on the capabilities of technology and how they will be applied.

Consumer protection from this stuff is a joke in an age where GDPR skinner boxes give people warm fuzzies while they continue to download crApps onto their platforms that suck up everything, both passively and with users' active engagement, into backends that are ever more open to the public (due to the lack of corporate accountability for such downsides). Combine that with ever cheaper storage and more people becoming knowledgeable about the tools… yeah, if one is banking on MSFT and its current behemoth brethren forever maintaining an advantage…

Kings of years past wanted to regulate the use of the printing press, and were modestly successful at first, though in time most people realized that such diktats were futile.


Maybe this is more of a PR move than an ethics one, considering the growing sentiment against tech giants. Facebook has developed an image of quietly doing their deeds for a long period of time, then "getting caught red-handed" all of a sudden and dragged through the media. Maybe it's about getting ahead of the curve in this regard because they believe up and coming tech will continue to deepen privacy concerns rather than plateau any time soon. Of course, it's reasonable to assume the stakes will continue to rise with the power and reach of technology, so it's a good idea to publicly appear concerned long before the SHTF. It also passes off some of the responsibility to the government for whatever disaster they think could happen.


Most likely they just want a federal law to deal with instead of having to make unworkable carve-outs for Illinois and Texas.

This unsurprisingly provided an ideal preamble for the NYT to try to reinforce the unsubstantiated claim that tech firms swayed the election, and to salivate over how this might open the door to further regulation. To keep things above board, newspapers should provide a disclaimer that they are reporting on business rivals when they write about tech firms.


How does one reconcile this with jaded cynicism of "regulations for thee but not for me"?

Furthermore, does this regulation target the hardware products themselves, the software performing the recognition, the biometric data itself, the transfer of this biometric data, aggregate ("anonymized") biometric data, or the processing of biometric data? There is a lot to talk about here.


I’m fine with Microsoft’s motive being to hurt rivals with regulation, if it gives me more privacy rights. In fact that’s great, because their coffers are much larger than the EFF’s or whoever else could take up this lobbying effort.


> if it gives me more privacy rights

Meanwhile, Microsoft still defaults to collecting/retaining telemetry information from users of their software.


<disclaimer, MS employee>

I'd like to point out that MS adheres to GDPR regulations and has applied those protections to all users.

https://www.techrepublic.com/article/microsoft-extending-gdp...


OK? But it still defaults to 'collect all the things':

> But users now have access to a privacy dashboard that allows you to easily regulate or opt out of any data collection.

How about Microsoft does not collect user data by default and lets them opt in?


> > But users now have access to a privacy dashboard that allows you to easily regulate or opt out of any data collection.

> How about Microsoft does not collect user data by default and lets them opt in?

A) Not all users are technical enough to understand how telemetry helps developers find faults and better understand crashes/bug reports.

B) "Most" users don't care if data is collected about the software and not the data they put in that software.

C) If you work in tech, I'm sure you know how many people pick options other than default.


I'd like this to be verifiable. For Windows 10 telemetry it is certainly not.


There is a big difference between collecting telemetry (data about the product you're using) for improving the product internally and collecting private data for reselling to the highest bidder.


Microsoft has to play these games because Google and Facebook do. But if that data business disappeared overnight, MS would still be in business... and they would not.


Here's the thing: Microsoft is a large corporation with lots of assets to protect. This kind of sketchy technology can easily be (is already?) a race to the bottom in terms of privacy and morality, and a big company has more to lose.

Incidentally, big companies also have economies of scale in complying with regulation.

So they can be all for facial recognition tech AND benefit from the regulation of it.


no


Would you please stop posting unsubstantive comments to Hacker News?


Totally blocked out the Black person in the example scene. Classy!

Even disregarding the obvious concerns of totalitarianism, polite society is going to require some accountability lest random developers' lame [0] biases turn into universally baked-in ones.

[0] Or simply expedient. As in, the recognition probably performed poorly on a darker face, so he was cut. But that won't stop the sales guy from promising that the implementation is ready...


I don't see how they can under the First Amendment; not to mention Microsoft wants to kill off facial recognition startups with government red tape.


More regulation will benefit companies like Microsoft and lead to less innovation, because of the legal hurdles startups would need to clear. Much in the same way, the primary beneficiary of GDPR has been Google, because advertisers trust that its scale makes it more likely to be compliant with the new laws. But facial recognition has a lot of applications that aren't creepy, ranging from fun (fake mustaches on Snap) to useful. Facial identification might be a better target for regulation, but getting the government involved here is inviting a bulldozer to the rose garden.


There are two technologies at play here: face recognition (who you are) and face detection (where the face is in the frame). There really is nothing bad about face detection; it would have to be paired with other data in order to uniquely identify a person. But face recognition can tell exactly who you are, which should probably get regulated.

We have a lot of protections for identification of people already; I don't think this should be an exception. Although educating Congress is going to be a VERY tough job.
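To make the distinction concrete, here's a minimal sketch, assuming the open-source face_recognition Python library and hypothetical image files: detection only locates faces in a frame, while recognition matches them against known identities.

```python
import face_recognition

# Detection: find WHERE faces are in the frame. No identity is involved yet.
image = face_recognition.load_image_file("street_scene.jpg")  # hypothetical file
boxes = face_recognition.face_locations(image)  # [(top, right, bottom, left), ...]

# Recognition: match each detected face against a gallery of known people.
known = face_recognition.face_encodings(
    face_recognition.load_image_file("known_person.jpg"))[0]  # hypothetical file
for encoding in face_recognition.face_encodings(image, boxes):
    is_match = face_recognition.compare_faces([known], encoding)[0]
    print("known person" if is_match else "unknown face")
```

Detection alone yields only bounding boxes; it's the gallery of known encodings that turns it into identification.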


I don't understand why people expect privacy in public, whether it comes to making public comments or walking down a random street. Literally anyone can take a photo or a video of you; it's completely legal.

Expectation of privacy is quite established in law:

https://en.wikipedia.org/wiki/Expectation_of_privacy


Because the scaling effects of automation change the natural balance of what "no privacy in public" implies.

For instance, the South Dakota Supreme Court recently found that leaving a webcam on public property for months, in order to record everyone who showed up to and left a private residence, was a Fourth Amendment violation.

https://www.criminallegalnews.org/news/2017/nov/16/south-dak...


Missing the key part: this was done by the police targeting a specific home. Otherwise security cameras would fall under this category.


When you walk around in public, you are emitting photons that let me see you nude. Well, no human I know of can actually see you nude, but with some technology I can definitely do it unless you wear special clothing. So would you say that me using that technology to take pictures of the photons people freely emit in public is fair, and that I can then go sell those photos? Or does technology change things from how our society used to work?

In the same way a photo of the visible spectrum is not the same as a photo of light waves outside the visible spectrum, a photo is not the same thing as seeing someone, and using technology to track someone's every facial expression, emotion, and interaction isn't the same as taking a photo.

Law has not yet caught up to technology, so we need to change the law.


The existing expectation of privacy assumes the capabilities of regular human beings. Humans cannot reasonably observe thousands of different locations simultaneously, 24/7, and remember millions of faces. Machines can.

An analogous example: There's no speed limit for walking/running because human beings cannot run fast enough for it to be a problem, but we had to come up with speed limits for cars because they could do much more damage due to how fast they could go.


Because there's a difference between, "oh look, JustSomeNobody just went into the mall" and "JSN went into the mall, then he bought a slice of pizza, next he sat in the massage chairs. After he got up from the massage he went into the As Seen On TV store and bought some cheap doodads. Finally he exited the mall 45 minutes after he entered it."

Also, while anyone _can_ take a photo or video of you, they cannot do whatever they want _with_ that photo or video. If they use that photo to portray a scenario that simply did not happen then they can be in some serious trouble.


Anyone being able to take a photo and you being recorded all the time are two really different things.


[flagged]


Are you deliberately ignoring the "all the time" part of the comment you're replying to? This is the entire fulcrum that their point resides on.


You realise stalking is just a series of semi-continuous short encounters?

A different (larger) amount of a same thing can have different properties, some of which might not be desirable. That's how one can allow one thing and not the other, because the effect is different, even though it can be argued that it's technically the same.


This is why laws that don't get too specific are better and we have human judges and lawyers litigate cases.


> No more video recording outside? A 30 minute limit? No more documentaries?

Intent is a thing.


Did you know everything on earth is made of atoms? Killing people is just shifting some atoms around; what's the big deal?


Tracking movements of certain populations through mass surveillance is not the same thing as being photographed in public.

Facial recognition going mainstream will have large ramifications.


This is disingenuous or naive and keeps being repeated in privacy discussions.

People don't expect privacy in public, but they also don't expect all their public movements to be recorded, stored, collated, and analyzed.

These are two different things with dramatically different consequences. Any discussion on privacy needs to account for the distinction.


You're right, it's very well established in the law and as it turns out people do have a reasonable expectation that they will not be under surveillance in public.


I'm giving up privacy, but I'm not agreeing to my likeness being used for profit. If someone uses my likeness for profit - say an artistic photograph - without my consent, I can sue for damages. Mass facial recognition is similar, because they sell the data and link it to other data, thus making the other data more valuable. Without my face this value wouldn't be added; therefore, they need to compensate me for it.


I think we as a society (or rather as a collection of societies, as I should acknowledge non US HN people) should be able to decide, hey, I don't want companies/cities/individuals tracking me wherever I go, knowing about which doctors I visit, where I sleep over, etc.


That is technically true, but isn't relevant.

Just because a person can take my picture in public doesn't necessarily mean they should be able to use face recognition to find out my name and lookup other information about me.


I think this is less about the right to privacy and more about the right to be forgotten.


[flagged]


Your contention is that the public policy decisions to start adopting license plates around the turn of the 20th century also encompassed the decision to track and permanently store all licensed vehicle movements?


> It is as if the privacy-deranged never thought about why your car has a gigantic retroreflective number plate

Why the snark? Can't people have a different opinion from you?


The problem isn't the recognition, it's the data retention...

You can recognize me all you want, as long as you get rid of the data and metadata when my interaction with your system is over.

Edit: I'm not saying it's even possible to have facial recognition be useful without lots of data retention, just that the retention, and the potential for bad outcomes from these sorts of data sets existing, is the problem, not the act of recognition itself.


I hate the idea of ubiquitous facial recognition spying on me everywhere, but I also love the idea of some AR future where my wearable AR contact lenses project people's names and other info next to them at a meetup or party or other social gathering.

If nothing else, I'd like it to be able to remember the faces of people I personally meet (so I don't need to share their data, but I do need it saved on my device). I don't see how that would be compatible with data privacy laws regarding facial recognition, though.


I tend to like this idea as well, but struggle with how to make it privacy conscious in a way that satisfies everyone.

As an example, I toss around the idea of "localized recognition": I upload a contact and his/her photo, and therefore my AR tool can recognize the contact image using ML, but it's not uploaded to any central server in any meaningful way (outside of maybe certain metadata to update the underlying ML models). But even then, some might find that creepy...
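A minimal sketch of what that could look like, again assuming the open-source face_recognition library and hypothetical file paths: only the face embedding (not the photo) needs to live on the device, and matching happens entirely locally.

```python
import numpy as np
import face_recognition

# Enroll a contact on-device: keep only the 128-d embedding, not the photo.
contact = face_recognition.face_encodings(
    face_recognition.load_image_file("contact_photo.jpg"))[0]  # hypothetical file
np.save("local_contacts/alice.npy", contact)  # stays on the device

# Later, at a meetup: compare a camera frame against local contacts only.
frame = face_recognition.load_image_file("camera_frame.jpg")  # hypothetical file
for encoding in face_recognition.face_encodings(frame):
    # 0.6 is the library's default matching tolerance (Euclidean distance).
    if np.linalg.norm(contact - encoding) < 0.6:
        print("That's Alice")
```

Whether even local-only processing like this would satisfy biometric privacy laws is exactly the open question the parent raises.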


I'm not sure I understand your comment. How would you _not_ retain data/metadata about recognition and still provide any utility?

Wouldn't the analogy be like Facebook identifying your face in photos, but not retaining the fact that you're in the photo?


It depends on the use case. Example: I walk through the airport and an application scans my face, and either calls security because I was recognized as a known terrorist or lets me pass because I'm awesome. The parent's point matters for what happens after. In the not-a-terrorist case, the problematic system would retain that (it thinks) you walked through the airport. In an even more problematic system, you now get ads because you were there.


Wait - wouldn't you _want_ the system to retain that information in your first case if you were mis-identified as a terrorist? I.e. you were recognized, but you were confirmed not a terrorist - so you won't be flagged later. Or maybe I misunderstood.

Otherwise, you run the risk of not updating the model to fix the false positive, and you might be flagged multiple times in the future.


I wouldn't want the system to keep information about me having been anywhere unless I'm a criminal. How would that information be helpful in updating the model? It already worked correctly.


Interacting with a system can take years. That's okay for privacy if the person is informed and has control, e.g. can request full deletion anytime.


I have a little bit of experience here, but I have a few questions based on your solution.

What value does a recognition have? Is it just metadata they can act on in the moment, e.g. age, height, race, gender? Are you OK with them knowing it is you by name, or with them assigning some UID to you as a person and keeping metadata? What about a hash of your facial pattern? How do they know if they've seen you before?

A lot of FR programs want to be able to analyze their customer breakdown. They want to know more than just crowd information. Given that we regulate the storage of medical information so highly right now, and you can infer based on purchase practices whether an individual has certain medical issues, I can see regulation being needed. But it raises a lot of questions about how to realistically regulate a system like this.

I'm by no means saying "don't regulate," but we need to be thoughtful about how we proceed so that regulations have few loopholes and protect citizens from both corporations leaking data and the government tracking individuals needlessly.


The problem for me is feature creep. Someone may create a facial recognition database for worthy reasons, but once that database is created, it doesn't take a lot for the usage criteria to change.

In Ireland the government is obsessed with issuing everyone with a "voluntary" Public Services Card, the photo on it being biometric. Officially it is voluntary, but you need one to get many public services, including getting a driving licence or passport - so not really voluntary.

The system has been used to catch a handful of benefit cheats - single digits - so naturally the government hails it as a major fraud-fighting tool.

Now, catching benefit fraud is worthy, but where do you stop? You may think the GDPR protects the system from abuse, but that counts for nothing when the authorities pull the crime prevention and detection card. The protections provided by the GDPR can be cancelled out by a court order if the Gardaí claim it will solve/prevent a crime.

And once that has happened a few times, it will start to become normalised and eventually abused, e.g. applying for a civil service job and being turned down for going on an anti-government rally ten years ago.

If you create something that can be abused, it probably will be abused.


Ha! They must be aware they can't keep up with FR advancements, so they are turning to regulation to stifle innovation. Plus, pointing the finger at FR and data retention conveniently moves attention off all the analytics their OS captures and retains for Microsoft.


Damned if they do - damned if they don't?

If they supported its use: "Microsoft is back to its old ways and is untrustworthy."

When they do something pro-consumer & pro-privacy: "Microsoft can't innovate in the marketplace and needs to stifle innovation."

(Yes, I know it's a straw-man argument, but it's more to point out the general hypocrisy of such a comment.)


Corporations are amoral. All of their actions are self-serving and should be viewed as such. While a particular action (such as this one) may be good for consumers, this is merely a side effect and not the intention.


I agree with your premises, but there's no reason to think that an action can't be pro-consumer + pro-ecosystem + pro-company all at the same time.


I guess you have never seen what their Face API is capable of. They’re as good as anyone in this game.


I guess you don't know; their solution is weak.



