That will certainly keep the conversation going, although in a less civilized way. Perhaps it could bring enough pressure to bear on the NTIA to get them to reconsider basic privacy safeguards.
What really bothers me is that nobody outside of our bubble will take this seriously until someone hacks in and downloads a copy of the database and posts it online for all to see. Only then will most people realize what's wrong with having your face in that database despite the absence of a conviction.
1. The process led by the NTIA, unlike the bulk of the EFF's post, has nothing to do with the government's collection and use of facial recognition data. This process was designed to come up with a voluntary guideline that commercial facial recognition companies could choose to adhere to or not. I agree that limits need to be set for government uses, but this process from the beginning was never about that. The NTIA did not have jurisdiction for anything involving the federal government, law enforcement, military, etc. This is more akin to the voluntary privacy standards that companies on the web will agree to on their own. Once they do, if they violate their own policy they are subject to sanction by the FTC. The guidelines coming out of this process were intended to be voluntary, so companies would not have to comply if they didn't want to. However, the idea was that the working group would come up with something that the industry would volunteer for and by doing so give some level of protection to consumers.
2. The industry representatives were not opposed to opt-in in some scenarios, but they were opposed to mandatory affirmative opt-in across the board, without exception. There are a number of scenarios where opt-in doesn't make sense. For example, imagine a system that used facial recognition in a retail setting to detect shoplifters and alert management to their presence. If you had been caught stealing in the store in the past, you would be enrolled into the system by the retailer. Each person entering the store (as seen on the already existing surveillance systems) would be checked against that database of shoplifters. How realistic would it be for the retailer to get written opt-in consent from the criminal?
3. In previous sessions, the consumer advocates had agreed that facial detection was out of scope, but facial recognition and facial analysis were in scope for the guideline. At the last session, they changed their position and wanted to bring face detection in scope. The difference: face detection just finds that a face is present in an image, like your smartphone does in the camera app, while face recognition identifies the face in a picture ("Who is this?" or "Is this Bob?"). Can you imagine getting opt-in consent just to draw that little yellow box around the faces in the camera app?
4. The NTIA did an outstanding job in bringing this process together. They organized the sessions and brought in numerous speakers from both sides (consumer and industry) as well as from the education and research sectors. We even had representatives from Canada who spoke about a similar process they went through there. The NTIA wasn't forcing any standards on anyone; they simply offered their services to organize and facilitate, as they have done in other similar multi-stakeholder initiatives. The process went on far longer than the NTIA (or any of the participants) had expected, but it was making slow progress.
I don't know if this process is now dead or not. I suspect it will go on, just now the consumer groups won't have a voice in shaping this (by their own choice), which is a shame. It's also possible that the show of force by the consumer advocates will push the industry to bend more in their favor, either way it would be good for consumers to get some kind of guideline out there. It could always evolve and be revisited in the future as the technology changes.
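Mechanically, the shoplifter watchlist in point 2 reduces to nearest-neighbor matching of face embeddings against an enrolled set. A hypothetical sketch with NumPy; the embedding vectors, the threshold value, and the `Watchlist` class itself are all illustrative, not any vendor's actual API (a real system would produce the embeddings with a trained face-recognition model):

```python
import numpy as np

class Watchlist:
    """Illustrative watchlist matcher using cosine similarity."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold  # similarity cutoff (assumed value)
        self.enrolled = {}          # label -> unit-length embedding

    def enroll(self, label, embedding):
        v = np.asarray(embedding, dtype=float)
        self.enrolled[label] = v / np.linalg.norm(v)

    def match(self, embedding):
        """Return the best-matching enrolled label, or None below threshold."""
        v = np.asarray(embedding, dtype=float)
        v = v / np.linalg.norm(v)
        best_label, best_sim = None, self.threshold
        for label, e in self.enrolled.items():
            sim = float(np.dot(v, e))
            if sim >= best_sim:
                best_label, best_sim = label, sim
        return best_label

wl = Watchlist()
wl.enroll("enrolled record #17", [0.9, 0.1, 0.2])
print(wl.match([0.88, 0.12, 0.21]))  # similar vector -> alert
print(wl.match([-0.5, 0.8, 0.1]))    # dissimilar vector -> None
```

Everything contentious about such a system lives in two places this sketch makes visible: who gets to call `enroll`, and how loosely `threshold` is tuned.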
How would that system work in a fair society governed by law? Would suspects never found guilty in a court of law be in it? How would false positives, that is, people whom the computer incorrectly identifies as someone else, get a chance to clear their name (face) and receive compensation when it happens? Should criminals who have served their time still be locked out for life and denied entry to stores that share the system?
As with vigilantes, I have a hard time seeing how such a system could be created without massive abuse. Let's assume that 1% of scanned people will be false positives, and say 5% of records will be falsely added by clerks (they are not trained to be cops or judges, and some will abuse the system to intentionally hurt people they know). How many people would be hurt by this system if it were shared by the majority of stores in a large town?
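That back-of-the-envelope concern can be made concrete. All input numbers below are assumptions for illustration (town size, visit rate, watchlist size), combined with the 1% and 5% figures from the comment above:

```python
# Illustrative arithmetic only -- every input here is an assumption,
# not a measured error rate.
people_scanned = 50_000        # distinct shoppers per week (assumed)
scans_per_person = 3           # average store visits per week (assumed)
false_positive_rate = 0.01     # 1% of scans misidentify someone
bad_record_rate = 0.05         # 5% of records falsely added by clerks
watchlist_size = 2_000         # enrolled records (assumed)

false_alerts_per_week = people_scanned * scans_per_person * false_positive_rate
wrongly_enrolled = watchlist_size * bad_record_rate

print(false_alerts_per_week)   # 1500.0 bogus alerts per week
print(wrongly_enrolled)        # 100.0 people enrolled in error or out of spite
```

Even with generous assumptions, a shared citywide system generates wrongful confrontations at a steady weekly rate, which is the vigilante-scale harm the comment is pointing at.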
I agree with the EFF in opposing this type of system. Retailers should not be allowed to "enroll" people in a Scarlet Letter system. The credit system is already fucked up enough, with very limited recourse against people that abuse it to extort consumers.
Retailers should not have that kind of power...
Your employer and industry are so far on the wrong side of the ethical divide that there's simply no way to reconcile ethical standards and what you see as justified non-opt-in faceprint collection.
If you're calling out the EFF's hypothetical worst case like this, you certainly wouldn't use the hypothetical best case of shoplifter recognition to provide argumentative cover for other no-opt-in products like your company's IRL analytics offering, would you?
> The guidelines coming out of this process were intended to be voluntary, so companies would not have to comply if they didn't want to.
If they're voluntary, what is the purpose of complying with them? Why would anyone in industry adopt the standards? Seriously. What good will they do? Why didn't businesses voluntarily walk away from the table first? Should I believe they're honestly negotiating to voluntarily give up on existing/potential products to make the EFF happy?
What reason at all do I have to believe the industry isn't just seeking a set of milquetoast "consumer protections" they can make a big deal out of living up to, while not providing any real consumer protection?
> I don't know if this process is now dead or not. I suspect it will go on, just now the consumer groups won't have a voice in shaping this (by their own choice), which is a shame.
So the industry couldn't steamroll the consumer protection groups and get a flimsy set of standards through. And now it's those groups' fault when consumers aren't protected?
Shame on you for pretending like they're choosing not to have their voice heard. It was heard, then industry saw money and decided to ignore them.
> either way it would be good for consumers to get some kind of guideline out there
No, if the guidelines are weak, they're worth less than nothing. The industry will have a standard they can "live up to" without impacting their business model. Then, when consumers complain, they'll wave the standards around as if the boundaries of acceptable use of facial recognition were already settled and consumers already protected.
tl;dr lulz @ "The NTIA did an outstanding job in an attempt to bring this process together"
I'd like to learn more about the process Canada went through. This is one of the top search results, is this it?
This is the Privacy Impact Assessment Report from Passport Canada.
Of course, the knee-jerk argument is that wearing masks (both digitally and offline) creates a different personality which is more willing to engage in illegal activity, but for me, a staunch Constitutionalist, that still doesn't justify making it illegal.
This is, at the bottom line, about the reduction of anonymity equally in the real world as in the digital.
One more tool of control in the belt of the oligarchy.
I do think that NIR will eventually become more important (especially for car applications) and really the main stumbling block is massive sets of training data for it.
Canada just recently criminalised the act of wearing a mask at a protest. 10 years maximum sentence. For wearing a mask. Seriously! (Nice democracy they've got goin' up there.)
Wear a surgical mask; that seems acceptable in this day and age.
It seems like most of the privacy issues at hand today are already against the law -- the problem is enforcement.
It is never too late to fight something. Everything changes with time. It might be too late for an easy victory but if you study how things change and evolve over decades you'll see that's just a self-fulfilling prophecy. Many things that people say are "inevitable" have failed and many existing trends get reversed. For a couple of recent technological ones just look at the failing of the "inevitable" SOPA, the Comcast and Time Warner merger, and net neutrality actually becoming the regulation instead of being destroyed.
If you believe you can change something you often can, if you don't believe you can change it then you definitely won't.
We should retain the right to conceal our identity in reasonable ways and governments should enact laws that protect their citizens privacy from biometric recognition technologies.
Reality: information leaks all over the place, and trying to legislate your way to privacy is like trying to bail out a boat with a fishing net. Public information is by definition public, so we could all walk around with wide-brimmed hats and masks, or we could be realistic about the challenges ahead.
There's a big difference in a company legally operating a facial tracking system and a company illegally operating one.
In the case of the illegal one, they have to hide it; it's much harder to profit from and thus would be less widespread. In the case of the legal one, they can freely advertise their services and easily set up their system anywhere local governments don't create hurdles.
Edit: Are you going to ban cameras in public places? Facebook? Cellphones? Satellites?
There are different degrees of trust though, and I would like one with a higher degree of trust earned.
In their homes? I'm pretty sure this is illegal. Or is it not if they are visible from a public area?
They know the address being delivered to, and now they can map the presence of specific biometric data to that location and time.
Illegal collection can't reliably be prevented. Get a photograph, extract the biometric data, enter them in your database, delete the photograph. Job done.
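That extract-and-delete pattern is trivial to sketch. In this illustration the "biometric extraction" is just a placeholder hash standing in for a real embedding model (the function name and database layout are hypothetical); the point is that once the derived record exists, deleting the source photo leaves nothing to audit:

```python
import hashlib
import os
import tempfile

database = {}  # subject_id -> derived biometric record

def fake_biometric_template(image_bytes):
    # Placeholder: a real system would run a face-embedding model here,
    # not a hash. Used only so the sketch is self-contained.
    return hashlib.sha256(image_bytes).hexdigest()

def ingest(photo_path, subject_id):
    with open(photo_path, "rb") as f:
        template = fake_biometric_template(f.read())
    database[subject_id] = template  # enter the data in the database
    os.remove(photo_path)            # delete the photograph -- job done

# Demo with a throwaway file standing in for a captured photo.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"not really a JPEG")
ingest(path, "subject-001")
print("subject-001" in database, os.path.exists(path))  # True False
```

After `ingest` returns, only the derived template remains; there is no photograph left for a regulator, or the subject, to point to.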
Maybe the only way to get control back over this is to require that these databases be open source (or publicly available) and make it illegal to hide them. Citizens must also have the right to be removed from those databases.
I'm not certain how such systems would look in this case (everybody wears Guy Fawkes masks in public?), but this debate reminds me of prohibition vs. harm reduction in the War on [People Using] Drugs.
Just out of curiosity, do you know how that interacts with indecency laws? Like for example if someone is having sex in their home but someone is able to see it from the sidewalk through a crack in the curtains? This is purely hypothetical of course.
My understanding is that it comes down to a "reasonable person" doctrine - i.e. would a reasonable person consider the location to be private.
Another way of thinking about it is if there is a reasonable chance that you will be seen by people engaging in normal everyday activities.
Personally, I would say that peeking through a small gap in the curtains would not qualify as normal behavior. Someone doing that is clearly snooping.
On the other end of the spectrum, having sex in front of a window that fronts a public sidewalk with no curtains would pretty clearly be public indecency.
With OpenCV, I already have face detection, face recognition, eye detection, cascade creation (for detection of any feature I wish), a retinal model in conjunction with retinal scanning via webcam, and other power tools in that bag.
And it's all open source. It's easy enough, I made it myself: https://github.com/jwcrawley/uWHo
Writing software that parses Social Security numbers into lists is even easier than face recognition, and yet we still want standards around handling the information!
Okay, that was a slightly facetious example, but the point stands: it's a question of securing data, not technology.
It's a bit like my pirate radio station. I have it, I broadcast to a very limited distance, whatever, I don't really worry about the white vans. If I put a 400 kW TV transmitter on top of a mountain, that's a very different story.
So, if I buy/build a 1GP camera array, and drive it around the city, capturing all the data, "that's a very different story"?
Your example is in violation of FCC regulations.
Mine violates nothing.
Well, yeah, that was kinda the idea. My point is that it doesn't matter what is technically possible if consumer protection rules were to come into place. The regulatory environment would curtail the largest (ab)uses of the technology as entities that would use the technology wouldn't want to take the risk.
edit: So, under hypothetical regulation on facial recognition, a watchdog agency isn't going to know/care about your driveway camera, but your business based on using that 1GP array to sell person tracking and demographic data would raise some eyebrows.
Was your point about the ease of implementation an expression that you'd do it anyway if the regulations came into place? Put your business on the line because you personally know how to implement it?
It's trivially easy to sell customer data for profit at the expense of their privacy. It's easy to store personal information without securing it. It's easy to take credit cards without using SSL certs.
It's also easy to lock someone up and deny them due process, or to enter someone's home and seize their effects without a warrant, or to set absurdly high bail.
Etc. Again, it's because this is such an easy thing to do (and it's only going to become easier to collect, store, and analyze this data) that it makes sense to establish definitions of what constitutes fair use and what constitutes abuse, and to do so sooner rather than later.
But above is in public. What about "my" property? Or the supermarket? We already have cameras everywhere. Just look up. So what does it matter that they run software on the back end? The hardware is already there.
Walking in gives implicit permission: "you agree to be recorded, or leave." Vandalizing cameras is criminal. Many places insist you not wear masks, or they call the cops.
Now, with it all automated, it just takes running some scripts on a computer system to locate your face from any number of sources, so people who are not under suspicion for anything will have their whereabouts identified automatically, just as if they were actual suspects.
Same story with automatic license plate readers. Manually searching for a license plate is so time-consuming that it's unlikely to get done without cause. With automatic systems, records can be kept of every car that passes through a monitored intersection, whether the car or driver is of actual interest or not.
It's not the same thing that's always been done, except now it's being done with a computer. It's the same thing that's always been done, except now it's being done for everyone, rather than for just a few select individuals of interest. It doesn't really matter that a computer is involved; it's the scope of the surveillance that is disconcerting. If huge numbers of humans were hired to track everyone manually, that would also be disconcerting.
Who cares? I'm not entirely sure. And I guess that's sort of the point. This level of surveillance has, in my opinion, surpassed what most of society is ready for. I don't think that most people have a good grasp on what's going on, or what impact it will / could have on their lives.
If arbitrary companies and government agencies can know my whereabouts at all times, should this have any impact to how I live my life? It's a question that society at large has never had to really answer before.
At some point, we as a society say, "this is the line; you can go no further."
Communities create standards for behavior that are arbitrary all the time...
We have regulations for how tall your grass can be, what color you can paint your home, how many dogs you are allowed to have, etc.: thousands of arbitrary rules and regulations for society.
For businesses there are even more: handicapped parking, bathrooms, and, depending on the type of business, the hours or days you are allowed to be open. Tens of thousands of rules arbitrarily define what a business can and cannot do.
This is more akin to the regulations around medical records or financial records; both have rules governing how the data can be stored and how the business can use it.
And walking in doesn't give implicit permission that I agree for them to store, analyze, sell, leak, publish or otherwise use that data however they please, in perpetuity. Are they allowed to disclose my presence to law enforcement without a warrant? Can they tell my health insurance provider that I spent an unusually long amount of time in the dessert aisle?
I'm sure a lawyer could argue (probably quite successfully) otherwise, but from a let's-address-the-problem-before-it's-fully-mature standpoint, these are the kinds of conversations we're supposed to be having right now.
Well, they are people. The only distinction is they have no voting rights in the elections.
> And walking in doesn't give implicit permission that I agree for them to store, analyze, sell, leak, publish or otherwise use that data however they please, in perpetuity.
Further south of where I live, there's a sex toy shop with the following: http://www.covenanteyes.com/2009/08/26/truckers-pictures-tak...
The website's down, but they are still there, taking videos and photos of all who arrive. Completely legal.
But that's why I wrote uWHo. It's not hard, and I did it to push the envelope. People have chided me, and all I can say is: look here: https://maps.google.com/locationhistory/b/0
> Are they allowed to disclose my presence to law enforcement without a warrant? Can they tell my health insurance provider that I spent an unusually long amount of time in the dessert aisle?
Can you do those things?
If so, why can't they?
You probably mean iris. The retina is the back of the eye, very few if any real biometric systems use this. The iris is the colored area of the eye that surrounds the pupil. It's very texture rich and less invasive to obtain.
My understanding was that it could back-calculate the retinal map.
Alas, I was indeed wrong.
For those who don't mess with OpenCV: what this does is bring the colorspace and attributes of an image to something like what the eye would see.
camera picture + retinal filter = simulation of how the eye would see the picture captured by camera.
As someone who has worked a bit using OpenCV to detect eye features, I can say pretty confidently that it isn't THAT easy.
OpenCV can help you start off by easily being able to detect that there is an eye in frame, but I think you are on your own beyond that.
My general approach was to use Haar cascades to detect that an eye was in frame. Then you have to isolate the iris and pupil. You can use Hough circles or the Daugman algorithm for segmentation. I got a reasonable result using the Daugman algorithm, but Hough circles seemed to be more erratic. I had a huge problem with reflections, though, which the Daugman algorithm is known to suffer from. You pretty much have to take the picture of the eye in a controlled setting where you flash a light in the subject's eye such that the reflection is a small isolated circle. Even then I wasn't always able to get a correct result. I may not have implemented the algorithm perfectly, though.
I never got farther than that, but even if you are able to capture the eye perfectly, you then have to build up a model of the person's features and be able to compare them.
I would be interested if anyone else had better luck than I did.