That will certainly keep the conversation going, although in a less civilized way. Perhaps it could bring enough pressure to bear on the NTIA to get them to reconsider basic privacy safeguards.
What really bothers me is that nobody outside of our bubble will take this seriously until someone hacks in and downloads a copy of the database and posts it online for all to see. Only then will most people realize what's wrong with having your face in that database despite the absence of a conviction.
1. The process led by the NTIA, unlike the bulk of the EFF's post, has nothing to do with the government's collection and use of facial recognition data. It was designed to produce a voluntary guideline that commercial facial recognition companies could choose to adhere to or not. I agree that limits need to be set on government uses, but this process was never about that. The NTIA had no jurisdiction over anything involving the federal government, law enforcement, the military, etc. This is more akin to the voluntary privacy policies that companies on the web adopt on their own; once they do, violating their own policy makes them subject to sanction by the FTC. The guidelines coming out of this process were intended to be voluntary, so companies would not have to comply if they didn't want to. The idea, however, was that the working group would come up with something the industry would volunteer for, and by doing so give some level of protection to consumers.
2. The industry representatives were not opposed to opt-in in some scenarios, but they were opposed to mandatory positive opt-in across the board without exception. There are a number of scenarios where opt-in doesn't make sense. For example, imagine a system that used facial recognition in a retail setting to detect shoplifters and alert management to their presence. If you had been caught stealing in the store in the past, you would be enrolled into the system by the retailer. Each person entering the store (as seen on its already existing surveillance systems) would be checked against that database of shoplifters. How realistic would it be for the retailer to get written opt-in consent from the criminal?
3. In previous sessions, the consumer advocates had agreed that face detection was out of scope, while face recognition and face analysis were in scope for the guideline. At the last session, they changed their position and wanted to bring face detection into scope. The difference: face detection just finds a face in an image, as your smartphone's camera app does, whereas face recognition determines the identity of a face in a picture ("Who is this?" or "Is this Bob?"). Can you imagine getting opt-in consent just to draw that little yellow box around the faces in the camera app?
4. The NTIA did an outstanding job in its attempt to bring this process together. It organized the sessions and brought in numerous speakers from both sides (consumer and industry) as well as from the education and research sectors. We even had representation from Canada who spoke about a similar process they went through there. The NTIA wasn't forcing standards on anyone; it simply offered its services to organize and facilitate, as it has done in other similar multi-stakeholder initiatives. The process went on far longer than the NTIA (or any of the participants) had expected, but it was making slow progress.
I don't know whether this process is now dead. I suspect it will go on, except that now the consumer groups won't have a voice in shaping it (by their own choice), which is a shame. It's also possible that the show of force by the consumer advocates will push the industry to bend further in their favor; either way, it would be good for consumers to get some kind of guideline out there. It could always evolve and be revisited as the technology changes.
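To make the shoplifter scenario in point 2 concrete: such a watchlist check typically boils down to comparing a face embedding extracted from the camera feed against the enrolled embeddings and flagging anything above a similarity threshold. This is only a sketch of that general idea, not any particular vendor's system; the IDs, vectors, and threshold below are all made up.

```python
import math

# Hypothetical "enrolled shoplifter" embeddings. In a real deployment these
# would come from a face-recognition model; here they are hand-made vectors.
WATCHLIST = {
    "enrolled-0001": [0.9, 0.1, 0.3],
    "enrolled-0002": [0.2, 0.8, 0.5],
}

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def check_visitor(embedding, threshold=0.95):
    """Return the ID of the closest watchlist match at or above the
    threshold, or None if nobody on the list is similar enough."""
    best_id, best_score = None, threshold
    for person_id, enrolled in WATCHLIST.items():
        score = cosine_similarity(embedding, enrolled)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

Note that the threshold is exactly where the false-positive problem discussed elsewhere in this thread lives: set it low and innocent shoppers get flagged, set it high and the system misses the people it was built to catch.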
How would that system work in a fair society governed by law? Would suspects not found guilty in a court of law be in the database? How would false positives, that is, people whom the computer incorrectly identifies as someone else, get a chance to clear their name (face) and be compensated when it happens? Should criminals who have served their time still be locked out for life and denied entry to stores that share the system?
Like vigilantes, I have a hard time seeing a way in which such a system could be created without massive abuse. Let's assume that 1% of scanned people will be false positives, and say 5% of records will be falsely added by clerks (they are not trained to be cops or judges, and some will abuse the system to intentionally hurt people they know). How many people would be hurt by this system if it were shared by the majority of stores in a large town?
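Those assumed rates can be turned into a quick back-of-envelope estimate. The visitor count and watchlist size below are illustrative guesses, not data; only the 1% and 5% rates come from the paragraph above.

```python
# Back-of-envelope estimate using the assumed error rates above.
daily_visitors = 10_000        # hypothetical: shoppers scanned per day town-wide
false_positive_rate = 0.01     # assumed: 1% of scans misidentify an innocent person
bad_record_rate = 0.05         # assumed: 5% of enrollments are bogus (error/abuse)
watchlist_size = 500           # hypothetical: enrolled "shoplifter" records

false_alerts_per_day = daily_visitors * false_positive_rate
bogus_records = watchlist_size * bad_record_rate

print(false_alerts_per_day)    # innocent shoppers flagged every single day
print(bogus_records)           # people wrongly enrolled in the first place
```

Under these guesses that is on the order of a hundred innocent people flagged per day, every day, plus a standing pool of wrongly enrolled people, and that's before the stores start sharing the list.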
I agree with the EFF in opposing this type of system. Retailers should not be allowed to "enroll" people in a Scarlet Letter system. The credit system is already fucked up enough, with very limited recourse when people abuse it to extort consumers.
Retailers should not have that kind of power...
Your employer and industry are so far on the wrong side of the ethical divide that there's simply no way to reconcile ethical standards and what you see as justified non-opt-in faceprint collection.
If you're calling out the EFF's hypothetical worst case like this, you certainly wouldn't use the hypothetical best case of shoplifter recognition to provide argumentative cover for other no-opt-in products like your company's IRL analytics offering, would you?
The guidelines coming out of this process were intended to be voluntary, so companies would not have to comply if they didn't want to.
If they're voluntary, what is the purpose of complying with them? Why would anyone in industry adopt the standards? Seriously. What good will they do? Why didn't businesses voluntarily walk away from the table first? Should I believe they're honestly negotiating to voluntarily give up on existing/potential products to make the EFF happy?
What reason at all do I have to believe the industry isn't just seeking a set of milquetoast "consumer protections" they can make a big deal out of living up to, while not providing any real consumer protection?
I don't know if this process is now dead or not. I suspect it will go on, just now the consumer groups won't have a voice in shaping this (by their own choice), which is a shame.
So the industry couldn't steamroll the consumer protection groups into a flimsy set of standards, and now it's those groups' fault when consumers aren't protected?
Shame on you for pretending like they're choosing not to have their voice heard. It was heard, then industry saw money and decided to ignore them.
either way it would be good for consumers to get some kind of guideline out there
No, if the guidelines are weak they're worth less than nothing. The industry will have a standard they can "live up to" without impacting their business model. Then when consumers complain, they'll wave the standards around as if the boundaries of acceptable facial recognition use were already settled and consumers already protected.
tl;dr lulz @ "The NTIA did an outstanding job in an attempt to bring this process together"
I'd like to learn more about the process Canada went through. This is one of the top search results, is this it?
This is the Privacy Impact Assessment Report from Passport Canada.