The sudden commodification of facial recognition means most of the companies racing to fill business needs are startups. Large corporations like Microsoft are much better positioned to deal with industry regulation. This may be a self-interested, anti-competitive move on Microsoft's part, but I don't care as long as I benefit as a consumer, too.
By far the most common consequence amongst my friends was to teach us to lie. (The second most common consequence was to shut us out of smaller websites that didn't want to deal with regulation - at least until we learned to lie.) "Are you under 13?" - duh, no. "Are you under 18?" - better answer no on that one too, to be safe. "How old are you?" - 20 is a good number. "What's your birthdate?" - how about Jan 1, 1970?
I had one friend who forgot her password to her Yahoo Mail account. "Why can't you just use the password reset functionality?" "I forgot which birthday I entered." "Why don't you call them up and have them reset it?" "I registered under a fake name too, and forgot what I put."
And she was one of the savvier ones. I saw a number of people post their street addresses, pictures of their houses, and vacations they were taking on the public Internet, but when it came to their ages, "I'm, uh, 24" (adding a decade and change).
People wonder why Millennials aren't more up-in-arms about data breaches and identity theft, and why they prefer pseudonymous currencies like Bitcoin or Ethereum over the real banking system with its KYC requirements. Maybe it's because we've been trained to see identity as temporary, reputation as something that can only be used against you, and information as something you give out to get what you want at the moment.
They knew - just like they know people don't read EULAs. They didn't care because it was CYA action that took the bullseye off their collective backs. Those that pretended to care requested "credit card age verification" to improve their funnel conversion.
True age verification on the internet is a funny business. Unless we get some kind of government-backed ID, you have to trust what the user is telling you. It's a company's responsibility to take reported age seriously. Lying about your age breaks the user agreement no one reads.
But if you tell the truth, and you're using an app designed for kids, your data will be protected according to the law, which is very pro-consumer.
In the end, if you choose to lie about your identity on the internet, that's your business.
As a millennial, I've always wondered about this. I'll hear coverage on NPR fairly regularly about the latest data breach, and I always feel like NPR is either a) pushing the outrage on me or b) assuming that I must be outraged, because I'm never outraged. I see pretty much everything I put online as public, even if there is a veil of privacy (like FB for example).
Edit: My guess is you'd have to be a millennial to think other generations didn't learn to lie to adults in a multitude of ways.
I bet most Millennials barely know what Bitcoin is and are hoping Ethereum is some cool new drug.
I absolutely agree with this. The issues mentioned are too real (forgetting your birthday, etc).
Yes absolutely! Similarly, HIPAA has been incredibly useful in forcing hospitals and other for-profit companies to at least kind of care about medical data security. It's not perfect, but without HIPAA that data privacy/security would be in a far worse state.
I feel like despite making some things more cumbersome, strict rules for sensitive data are usually a good idea.
That's actually largely because fax is exempt from the security requirements applicable to electronic communication under HIPAA; it's a loophole organizations exploit to avoid the cost of secure comms.
What happens when there's only two, and they both deny you? Seems shortsighted to not care about your privacy.
Both are useful.
As I recall, it’s super basic controls like disk encryption, backups, system updates, and screensaver locks. Not, like, certified crypto hardware and static analysis.
I appreciate that we can punish people for failing to meet table stakes, but if you weren’t going to do that stuff anyway, something is deeply wrong.
* Why nobody can itemize a medical bill.
* Why a bag of saline may cost the patient $100.
* Why an in-network hospital will bring in an out-of-network independent contractor nurse to assist an in-network surgeon, leaving you with a $200 bill for the surgery + a $5,000 bill for the out-of-network assistant.
* Why nobody will tell you how much a procedure will cost, with or without complications.
* Why nobody knows how much your insurance will pay until the bill is due, often months after the procedure.
Compared to all of these problems and abuses, HIPAA seems the least of America's problems.
I'd like to contribute to reducing the cost of care, by shopping around for more efficient/more affordable/better value-for-the-dollar doctors/hospitals, but I haven't the foggiest idea of where to start. It's like going to a car dealer, saying that you want a Prius, and not knowing the age of the car, its condition, or how much you will be billed until after the title is irrevocably transferred to you. Oh, and the dealer across the street will happily sell you one for half what you paid. Or double. You really have no idea.
Not to mention that this is almost completely out of your hands if you're unconscious or otherwise not able-bodied. If you get in a car accident and are unconscious, you're likely to go to whichever hospital the ambulance takes you to, and you're on the hook for that ambulance ride now, too. Before you even get to the hospital, your bill can already be over $1000.
Companies that act as both the insurance company and the care provider have an interest in cheaper care, e.g. Kaiser Permanente.
They also have an incentive to err on the side of providing less care. Since you still can't meaningfully comparison shop between them and their competitors, I suppose it is exactly like a national single-payer system.
This is only true at the end of a person's life. Before then, ignoring problems does not make them go away, it in fact makes it more expensive for them.
It is pretty easy to comply with HIPAA, and other countries also have privacy laws governing medical records. I suspect you've worked for incompetent organizations - some organizations intentionally drag out compliance and make things more difficult in order to complain or justify increased costs. This is the lamest excuse I've heard in a long time. All user/customer data should always be protected and compartmentalized to the greatest extent possible in any industry.
The thing that is causing most of the problems for people when it comes to affording healthcare is that the costs keep going up. We think we've finally got something we can afford...and then in a few short years we can no longer afford it. It has outstripped our pay and our investments.
This is not just a US problem, either. Costs are going up at roughly the same rate in those other countries. E.g., US to Germany dollars per capita ratio in 2017 for healthcare was 1.8.
In 2010, it was 1.8.
In 2000, it was 1.7.
In 1990, it was 1.5.
I'd happily settle for the US continuing to pay the high amount we are now paying if we could just curb the growth, so that whenever I manage to get coverage I can afford, I can keep it for a long time.
Here's a site with all the data available for download, and explorable interactively on the site. Uncheck "latest data available" and use the year range slider to select a range, and it will show you the growth in several countries over that range. You can use the filters to narrow it to specific countries or groups like G7, G20, OECD, etc.
I know it's a common phrase, but to be rhetorical, when is facial recognition actually a "business need"?
If corporations are "people"...
Seriously. I don't want to live in a world where I am "stalked" by billion dollar "people" who decide whether I get insurance, and at what price, and who can search "big data" for any and every possible "infraction", also retroactively, at their "discretion".
And who turn around and rat me out to whatever authorities, who are not permitted to directly collect such data but who are free to take it -- or purchase it, using my tax dollars -- from such third parties.
Two years ago, I helped a casual friend with a prior felony drug conviction get and stay sober.
Now, Facebook et al. put me in that felon's "graph", and what happens to me? For example, am I -- by computed association -- an insurance risk?
One simple example, of how you can't have a functioning society in the face of such ueber-monitoring.
They will destroy what they are ostensibly trying to "shore up".
Society works, in good part, because people are free agents.
Ironically, the same message these bozos try to convey during election season. And the one they use to rail against "big government".
Well, big (private) surveillance is no different. Worse, even, because it's becoming apparent that people have little or no say in whether and how it's done. Not even a vote during elections, with which to influence policy.
Politicians have, to a significant extent, externalized the political cost of what they -- contrary to their rhetoric and also in the name of their big business buddies -- are pursuing.
The use of big data will render insurance into exactly what it is not. I.e., if these companies have their way, they will know in advance exactly what medical condition you will have at what age, etc., and you will pay exactly for that, plus a big profit margin of course.
As a society, we should ensure that insurance stays insurance, and doesn't become an expensive loan that you pay off in advance.
It's coming. Just wait.
But just because one insurance company was stopped, doesn't mean others have been, or will be.
If this can be automated passively and cheaply, it's an inevitable outcome, regulation or not.
However, I found a podcast with Fernando Diaz talking about the ethical problems of using AI and experimenting on users without their knowledge, which is a commendable discussion.
Laws the size of GDPR in scope often take more than a decade here to pass - the ACA is a good example, and even after it passed there are still people fighting over whether it's valid and how it should be enforced, while other people are actively trying to repeal it. If we tried to handle something like facial recognition that way, the law would be useless because by the time it made it through the process and got put into effect the harm would already have been done.
Any "regulation" is just meant to divide-and-conquer, while the actual bad actors do whatever they please.
Moreover, I surmise that the "regulation" will rarely be invoked to protect vulnerable populations. However, I can certainly imagine that an app that recognizes the faces of police officers and/or agents provocateurs and catalogs their presence at scenes of illegal activity might well be "regulated" out of existence.
Consumer protection from this stuff is a joke in an age where GDPR skinner boxes give people warm fuzzies while they continue to download crApps onto their platforms that suck up everything, both passively and with users' active engagement, to backends that are ever more open to the public for consumption (due to the lack of corporate accountability for such downsides). Combine that with increasingly cheaper storage costs, and more people becoming knowledgeable of the tools… yeah, if one is banking on MSFT and its current behemoth brethren forever maintaining an advantage…
Kings of years past wanted to regulate the use of the printing press, and were modestly successful at first, though in time most people realized that such diktats were futile.
This unsurprisingly provided an ideal preamble for the NYT to try and reinforce the unsubstantiated claim that tech firms swayed the election, and to salivate about how this might open the door to further regulation. To keep things aboveboard, newspapers should provide a disclaimer that they are reporting on business rivals when they write about tech firms.
Furthermore, does this regulation target the hardware products themselves, the software performing the recognition, the biometric data itself, the transfer of this biometric data, aggregate ("anonymized") biometric data, the processing of biometric data? There is a lot to talk about here.
Meanwhile, Microsoft still defaults to collecting/retaining telemetry information from users of their software.
I'd like to point out that MS adheres to GDPR regulations and has applied those protections to all users.
> But users now have access to a privacy dashboard that allows you to easily regulate or opt out of any data collection.
How about Microsoft does not collect user data by default and lets them opt in?
> How about Microsoft does not collect user data by default and lets them opt in?
A) Not all users are technical enough to understand how telemetry helps developers find faults and better understand crashes/bug reports.
B) "Most" users don't care if data is collected about the software and not the data they put in that software.
C) If you work in tech, I'm sure you know how many people pick options other than default.
Incidentally, big companies also have economies of scale in complying with regulation.
So they can be all for facial recognition tech AND benefit from the regulation of it.
Even disregarding the obvious concerns of totalitarianism, polite society is going to require some accountability lest random developers' lame biases turn into universally baked-in ones.
Or simply expedient. As in, the recognition probably performed poorly on a darker face, so he was cut. But that won't stop the sales guy from promising that the implementation is ready...
We have a lot of protections around identification of people out there, and I don't think this should be an exception. Although educating Congress is going to be a VERY tough job.
Expectation of privacy is quite established in law:
For instance, the South Dakota supreme court recently found that leaving a webcam on public property for months in order to record everyone who showed up to and left a private residence was a Fourth Amendment violation.
In the same way a photo of the visible spectrum is not the same as a photo of lightwaves outside the visible spectrum, a photo is not the same thing as seeing someone, and using photos to track someone's every facial expression, emotion, and interaction isn't the same as taking a photo.
Law has not yet caught up to technology, so we need to change the law.
An analogous example: There's no speed limit for walking/running because human beings cannot run fast enough for it to be a problem, but we had to come up with speed limits for cars because they could do much more damage due to how fast they could go.
Also, while anyone _can_ take a photo or video of you, they cannot do whatever they want _with_ that photo or video. If they use that photo to portray a scenario that simply did not happen then they can be in some serious trouble.
A different (larger) amount of a same thing can have different properties, some of which might not be desirable. That's how one can allow one thing and not the other, because the effect is different, even though it can be argued that it's technically the same.
Intent is a thing.
Facial recognition going mainstream will have large ramifications.
People don't expect privacy in public, but they also don't expect all their public movements to be recorded, stored, collated, and analyzed.
These are 2 different things with dramatically different consequences. Any discussion on privacy needs to account for the distinction.
Just because a person can take my picture in public doesn't necessarily mean they should be able to use face recognition to find out my name and lookup other information about me.
Why the snark? Can't people have a different opinion from you?
You can recognize me all you want as long as you get rid of the data and metadata when my interaction with your system is over.
Edit: I'm not saying it's even possible to have facial recognition be useful without lots of data retention, just that retention and potential for bad things as a result of these sorts of data sets existing is the problem, not the act of recognition itself.
If nothing else I'd like it to be able to remember the faces of people I personally meet (so I don't need to share their data but I do need it saved in my device). I don't see how that would be compatible with data privacy laws regarding facial recognition though.
As an example, I toss around the idea of "localized recognition", a la: I upload a contact and his/her photo, and then my AR tool can recognize the contact's face using ML, but nothing is uploaded to any central server in any meaningful way (outside of maybe certain metadata to update the underlying ML models). But even then, some might find that creepy...
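For what it's worth, the "localized recognition" idea can be sketched pretty simply. Here's a minimal, hypothetical Python sketch: the embeddings would come from some on-device face-embedding model (not shown), and `LocalFaceIndex`, the 0.9 threshold, and the contact names are all made up for illustration. The point is that the contact data stays on the device.

```python
import math

# Hypothetical sketch: embedding vectors would come from an on-device
# face-embedding model; here they are just plain lists of floats.

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class LocalFaceIndex:
    """Contact embeddings stored only on the local device; nothing is
    sent to a central server. Deleting a contact deletes the data."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold  # minimum similarity to count as a match
        self.contacts = {}          # contact name -> embedding

    def add_contact(self, name, embedding):
        self.contacts[name] = embedding

    def recognize(self, embedding):
        """Return the best-matching local contact above threshold, or None."""
        best_name, best_score = None, self.threshold
        for name, known in self.contacts.items():
            score = cosine_similarity(embedding, known)
            if score >= best_score:
                best_name, best_score = name, score
        return best_name
```

Recognizing a friend is then just a local nearest-neighbor lookup over your own contacts, which is roughly the retention story the comments above are asking for.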
Wouldn't the analogy be like Facebook identifying your face in photos, but not retaining the fact that you're in the photo?
Otherwise, you run the risk of not updating the model to fix the false positive, and you might be flagged multiple times in the future.
What value does a recognition have? Is it just metadata they can act on in the moment, a.k.a. age, height, race, gender? Are you ok with them knowing it is you by name, or assigning some UID to you as a person and keeping metadata? What about a hash of your facial pattern? How do they know if they've seen you before?
A lot of FR programs want to be able to analyze their customer breakdown. They want to know more than just crowd information. Given that we regulate the storage of medical information so highly right now, and you can infer based on purchase practices whether an individual has certain medical issues, I can see regulation being needed. But it raises a lot of questions about how to realistically regulate a system like this.
I'm by no means saying "don't regulate," but we need to be thoughtful about how we proceed so that regulations have few loopholes and protect citizens from both corporations leaking data and the government tracking individuals needlessly.
In Ireland the government is obsessed with issuing everyone with a "voluntary" Public Services Card, the photo on it being biometric. Officially it is voluntary, but you need one to get many public services, including getting a driving licence or passport - so not really voluntary.
The system has been used to catch a handful of benefits cheats - single digits - so naturally the government hails it as a major fraud-fighting tool.
Now catching benefit fraud is worthy, but where do you stop? You may think the GDPR protects the system from abuse, but that counts for nothing when the authorities pull the crime prevention and detection card. The protections provided by the GDPR can be cancelled out by a court order if the Gardaí claim it will solve/prevent a crime.
And once that has happened a few times, it will start to become normalised and eventually abused, e.g. applying for a civil service job and being turned down for going on an anti-government rally ten years ago.
If you create something that can be abused, it probably will be abused.
If they supported its use: "Microsoft is back to its old ways and is untrustworthy."
When they do something pro-consumer & pro-privacy: "Microsoft can't innovate in the marketplace and needs to stifle innovation."
(Yes, I know it's a straw-man argument, but it's more to point out the general hypocrisy of such a comment.)