
This really feels like attacking the symptoms, not the problem. Shouldn't the city, state, and federal governments develop guidelines on what can and can't be done with this information and guard against abuse? You're in public, you have no expectation of privacy -- whether the video is assessed by computers or an army of humans, does it matter? Don't human viewers have 'facial recognition technology'?

Progress can't be stuffed back into the bottle, but it does need to be guided and controlled. Sad to say, it feels very SF these days to long for the good old days by burying the collective head in the collective sand (as with the resistance to allowing new/taller buildings to be built).

Technology is neutral, what matters is what we do with it.




The point is to prevent the capture of such data to begin with. As a privacy activist, I've seen that simply developing a 'use policy,' while somewhat effective, can only go so far. Once local/state/federal authorities possess this data, it's a matter of when, not if, it will be abused (or sold off to private interests).

Your second question: there's a massive difference between being observed by an individual officer and being perpetually tracked by an apparatus of ubiquitous cameras that cross-reference your face with your background information, possible criminal record, citizenship status, etc. It also opens the floodgates for horrific scenarios like the 'social credit system' they've implemented in China. Go look that one up and tell me you're still OK with facial recognition.


I spent a lot of time researching the social credit system and yeah, not a fan -- it's basically gamified totalitarianism.

However, again, I think that's about what you do with the ability and not the ability itself. You don't need facial recognition to implement the social credit system: a simple plastic card would do. Your first name, middle initial and last name as a triple are enough to uniquely identify you on the Texas voting registry 80% of the time [1]. This ship has long sailed. That's again why I'm in favor of regulating the problematic uses of information and technology and not addressing the specific technology or method of implementation.

[1] https://www.eitanhersh.com/uploads/7/9/7/5/7975685/agdn_v1_4...
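
For the curious, here's roughly how you'd check that kind of claim against a voter file yourself. A minimal sketch, assuming a CSV with first_name/middle_name/last_name columns -- the column names and file format are my assumptions, not the actual registry schema or the paper's method:

    import csv
    from collections import Counter

    def share_uniquely_identified(path):
        # Count how often each (first name, middle initial,
        # last name) triple occurs in the file.
        counts = Counter()
        total = 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                key = (row["first_name"].strip().lower(),
                       row["middle_name"].strip().lower()[:1],
                       row["last_name"].strip().lower())
                counts[key] += 1
                total += 1
        # A triple pins down exactly one person iff it occurs once.
        unique = sum(1 for c in counts.values() if c == 1)
        return unique / total

    print(share_uniquely_identified("voters.csv"))  # hypothetical file

If that prints something near 0.8, you've reproduced the 80% figure: no camera, no biometrics, just three name fields.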


"That's again why I'm in favor of regulating the problematic uses of information and technology..."

We agree on this in principle. But again, once authorities have any of this data in their possession, abuse always happens. Literally always.

IMO the root problem is not "oh, the cops are just using all my PII and biometric data inappropriately"; it's that "the cops have possession of all my PII and biometric data to begin with."

You have the symptom and disease reversed here, IMO.


I am wary of facial recognition, and I avoid using it. But I'm not convinced by this line of reasoning either, so let me play devil's advocate.

> But again, once authorities have any of this data in their possession, abuse always happens. Literally always.

Well, before something can be abused it must first be available to use. Conversely, once a tool is available to use, some may abuse it.

For example, if collecting fingerprints or DNA were completely forbidden, that might prevent abuse of such data (such as false matches). But it would also prevent any beneficial uses.

Banning facial recognition prevents not only abuse but also any potential good uses, such as locating victims of abduction or trafficking, and perhaps other uses we cannot foresee.

Killing it in its infancy may be easier than doing so after it takes root, but it also gives society less opportunity to learn what the consequences of the technology may be, intended and unintended, good or ill.

We know it can be abused, especially in the hands of an authoritarian government, but does that mean it cannot be used responsibly? Anything that gives the state power could be turned against the people, as libertarians might warn, but social progress also requires that we learn to work together rather than reject anything that might do us harm.

Perhaps a better argument for an early and complete local ban might be that it allows other regions to be the test subjects. Or that by taking a less compromising stance the anti-facial recognition side gains a stronger bargaining position at the table. But those arguments are not as attractive, maybe.


"Perhaps a better argument for an early and complete local ban might be that it allows other regions to be the test subjects."

It's a valid thought, honestly. Though seeing how tightly the police hold onto this tech once they have it makes it extremely difficult to just test the waters (and also requires vigilant public oversight, which the sheriffs' associations will fight tooth and nail).

Also, having cops test this tech out, knowing they're going to be deliberately monitored for how often they use it for good reasons (e.g. child abductions) vs. how often they abuse it, would probably produce incredibly biased results. Think about it -- the experiment would be entirely self-serving: cops get to trumpet that it helped them with the legit crime here and there (and from sitting through public safety committees, believe me, they will TRUMPET it), while showing that zero cases of misuse happened.

Ultimately, we have to think in systems: sure, ubiquitous surveillance would undoubtedly solve the horrific crime here and there, but at what cost to who we are as people? At what cost to how we protect minorities and the undocumented? At what cost to our already eroding public trust?


> Also, having cops test this tech out, knowing they're going to be deliberately monitored for how often they use it for good reasons (e.g. child abductions) vs. how often they abuse it, would probably produce incredibly biased results. Think about it -- the experiment would be entirely self-serving: cops get to trumpet that it helped them with the legit crime here and there (and from sitting through public safety committees, believe me, they will TRUMPET it), while showing that zero cases of misuse happened.

To be fair, wouldn't that suggest strong oversight might work then? True, any test might differ from real-world conditions, but theories need to be tested one way or another and it would provide some evidence.

While caution during early testing might lead to less misuse, one could also imagine countervailing factors. For example, lack of familiarity with a new technology might lead to mistakes. Regulations are written in blood, as they say, and the development of new ethical guidelines may take time.

Which, as we've noted, could be a pragmatic reason to let others be the test subjects. I'm not eager to open the can of worms myself, though it might feel a bit selfish to put it that way.


"To be fair, wouldn't that suggest strong oversight might work then?"

Fair point; that might work if: 1. a public safety/citizens' oversight committee does its job consistently, 2. it isn't loaded with police-friendly stooges, and 3. it isn't gradually defanged over time in terms of its power.

All three things, with time, can be manipulated by any given city hall, which is often in lockstep with the police force.

"...but theories need to be tested one way or another and it would provide some evidence"

Agreed. And I say let's look at how they've deployed facial recognition in China to put those theories to bed.


"apparatus of ubiquitous cameras"

Serious q - why not stop the surveillance and the cameras?


Would if we could. That genie's been long out of the bottle, but facial recognition hasn't been adopted to the same extent yet, because the technology is so nascent.


Yes, but that genie is out of the bottle too. Pretending otherwise isn't going to help.


You sound pretty cynical on the issue. Maybe you should come advocate with us and see for yourself the opportunities we still have to create meaningful change in this space.


And people still rob and murder. But for some reason we insist on laws against them.


Right, although I think that your argument would be more akin to regulating knives and duct tape instead of the crimes people commit with them. I see your point though.


> You're in public, you have no expectation of privacy

That's wrong from the start and leads to people not exercising their rights (e.g. not going to a demonstration because they fear long-term repression).

> whether the video is assessed by computers or an army of humans, does it matter? Don't human viewers have 'facial recognition technology'?

Scale matters, and computers are machines of scale. When I no longer merely risk being recognized somewhere but instead know that I will be, and that this information can be stored long-term, that has consequences for people's behavior. See above for why that's bad.

Sure, in theory you could try to employ half the population of SF to get the same result as one computer. That would lead to discussions about the use of limited city resources very, very fast and would probably stop this in its tracks. These options are only equivalent in theory, not in practice.

> Technology is neutral, what matters is what we do with it.

Banning uses society deems bad is a valid answer to "what to do with it." If you want other options, you are always free to argue for them, but then you can no longer claim it's neutral.



