Hacker News

The dislike for facial recognition technology is another one of those hard-to-comprehend stances for me. The article is more like a moral call to action; it doesn't contain any facts or information, or even arguments against the technology. All I can find is the claim that "costly errors, discrimination and privacy invasions" are the problem. How can facial recognition software possibly cause all of that? Software operation is basically free. The discrimination charge is ridiculous; if anything, the software reduces discrimination, since it scans everybody. And privacy invasion? You mean you walk around in public spaces and think you have a right not to be recorded? How is this different from a police officer walking by and scanning your face? No, in reality this debate is a weird sort of anti-tech sentiment, a symptom of a deep dissatisfaction with how fast the world is developing and a desire to go back to the good old times. Well, sorry, but I actually like the fact that a criminal can be caught in minutes because his face is scanned.



How about third parties collecting video feeds from a number of city locations, analyzing and face-recognizing the people in them, cross-referencing that with other sources, and building a database that they sell to private investigators and other interested parties? Combine that with license-plate databases and you can track the movements of anyone, both by car and on foot. Then you can get even more creative and cross-reference that data with people's marital status (which we can find thanks to social networks) and flag those seen walking the street or having a drink with someone who isn't their partner. And boom, you've got yourself a mass-scale blackmailing tool that runs on autopilot. The possibilities are endless, and it's not even that hard to build something like that anymore.
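What makes the scenario above plausible is how little it requires beyond a couple of joins. A minimal sketch, assuming sightings and plate reads have already been reduced to (identity, timestamp, location) tuples; every function and data source here is invented for illustration and corresponds to no real system or API:

```python
# Hypothetical sketch of the cross-referencing pipeline described above.
# All names and data are invented for illustration only.
from collections import defaultdict

def build_profiles(face_sightings, plate_reads):
    """Merge face-recognition sightings and license-plate reads into
    per-person movement timelines of (timestamp, location, mode)."""
    timeline = defaultdict(list)
    for person, ts, loc in face_sightings:
        timeline[person].append((ts, loc, "on foot"))
    for owner, ts, loc in plate_reads:
        timeline[owner].append((ts, loc, "by car"))
    for events in timeline.values():
        events.sort()
    return dict(timeline)

def flag_cooccurrences(timeline, partners):
    """Flag pairs seen at the same place and time who are not each
    other's registered partner: the 'auto-pilot blackmail' step."""
    flags = []
    people = list(timeline)
    for i, a in enumerate(people):
        for b in people[i + 1:]:
            shared = ({(ts, loc) for ts, loc, _ in timeline[a]}
                      & {(ts, loc) for ts, loc, _ in timeline[b]})
            if shared and partners.get(a) != b:
                flags.append((a, b, sorted(shared)))
    return flags
```

The point of the sketch is that once the recognition step is solved, the "creative" cross-referencing is trivial set intersection over commodity databases.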


This is a very narrow view of the potential applications of facial recognition. I take it for granted that this technology has enormous affordances for social control. "Criminality" may be too out-of-date a notion to capture the kinds of behaviors that governments and other institutions (e.g. the academy, employers, corporations) will seek to monitor. Surveillance has major political implications, and we should not treat the advancement of technology as independent of society and the political economy.

Surveillance is the key concept in a major design for social control, Bentham's "Panopticon." When anyone may be watched, people must live expecting that they will be watched, and the will of those with the power to impose sanctions will be enacted. If facial recognition systems become widespread, this will be a historical novelty: historically, only limited public venues were subject to effective widespread surveillance (except in extreme authoritarian cases like East Germany). Online spaces are increasingly subject to surveillance that is limited only by browser instrumentation; with facial recognition it will be possible to track people offline as well, given access to the cameras.

Surveillance is the twin of transparency, which can be thought of as a possible tool for democratic control. I do believe that transparency, e.g. through freedom-of-information acts, has some positive characteristics for democracy, but it has a weak empirical record compared to panoptic control. Facial recognition might have affordances for collective action, such as making it easy to produce evidence of excesses or corruption.

That said, maybe we should not give the authorities the power to implement technological changes without the consent of the governed? Technology moves faster than policies can adapt to it, and old laws and institutions like the police evolved in times when today's tools could not have been imagined.

Edit: Regulating technologies like facial recognition keeps control of social systems in popular hands, which counteracts authoritarianism and high concentrations of power and resources.


>> Technology moves faster than policies can adapt to it, and old laws and institutions like the police evolved in times when today's tools could not have been imagined.

Very good point.


OK, so outside of their own homes people have to behave as if somebody is watching them 24/7. No offense, but we already have data on what that kind of belief produces: the belief that God is always watching you. It seems to be beneficial.


Not sure you have the option of not believing in the government if you find their policies oppressive. Doesn't really seem like the same thing to me.


> How is this different from a police officer walking by and scanning your face?

If facial recognition were widely deployed and the data matched up, it would be more like a police officer following you around wherever you go, 24/7.

The fear is that you could use the tech to build ultra-detailed activity profiles of any person.


An algorithm isn't going to stand in front of me in court and explain its actions.

The problem isn't that there could be mistakes or bias in the technology, but that no one can be held accountable for them.


I would like to seriously ask you to consider a more nuanced stance on this subject, as you currently seem either uninformed about the technology behind these systems or wilfully naive. Please consider the dangers that could spring from such capabilities, if not now, then later, when another power has control of them.


> You mean you walk around in public spaces and you think you have a right not to be recorded?

The right? Maybe not. But I surely don’t want Big Brother always watching. I gotta ask, have you read 1984?


It was a long time ago, but have you read it? Because if I recall correctly, the surveillance happened inside your own home, from your own TV. Other than that, the book '1984' wasn't really about surveillance; it's more about a system of lies and the transformation of speech into a political tool to control people.


It was about totalitarianism. An inescapable State.

Public surveillance everywhere is a precursor to that.

The surveillance within your home is very much already a reality.

Now we have people welcoming this creeping totalitarianism out of concern for safety, gaining justice for individual crimes at the cost of an ever-growing risk of totalitarianism.


I don't think you're recalling correctly: surveillance was a huge part of how people were controlled in 1984. It wasn't the only tool, but it was certainly one of the main ones.


> The researchers Joy Buolamwini at Massachusetts Institute of Technology in Cambridge and Timnit Gebru, then at Microsoft Research in New York City, showed that some of the most advanced facial-recognition software failed to accurately identify dark-skinned women 35% of the time, compared to a 1% error rate for white men. Separate work showed that these technologies mismatched 28 US members of Congress to a database of mugshots, with a nearly 40% error rate for members of colour. Researchers at the University of Essex in Colchester, UK, tested a facial-recognition technology used by London’s Metropolitan Police, and found it made just 8 correct matches out of a series of 42, an error rate they suspect would not be found lawful in court.
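For scale, the disparities quoted above can be checked in a couple of lines (all figures are taken directly from the quote):

```python
# Quick arithmetic check of the figures quoted above.
dark_skinned_err = 0.35   # error rate for dark-skinned women (quoted)
white_male_err = 0.01     # error rate for white men (quoted)
print(round(dark_skinned_err / white_male_err))  # 35-fold disparity

correct, total = 8, 42    # Metropolitan Police trial (quoted)
print(round((total - correct) / total, 2))       # 0.81: roughly 81% of matches wrong
```

A 35-fold gap between demographic groups, and an 81% error rate in a deployed police trial, are the concrete numbers behind "costly errors".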


So improve the software and fix the bugs? How is this a valid response?


I think the point is that it shows systemic bias, which may be very difficult to "debug".

It is important to apply ethics when developing technology, as rationality alone is not sufficient.


I don't see why it should be difficult to debug, unless you think this is all some kind of conspiratorial plot by white people, or the "white system", who are all secretly racist against black people, and that the detection rate was purposely kept low by these evil people. And implying that we have to be irrational to be ethical is also pretty weird.


It's not a "conspiratorial plot", no. But time and again, research has shown that the implicit biases held by the people building these tools [0] are observable in the tool's results — and all people hold implicit biases.

It's not the tool we need to debug; it's us.

[0] Where, by "these tools", I mean machine learning writ large.

EDIT: I highly doubt your comment's parent was suggesting we had to be irrational in order to be ethical. Rather, I believe they meant we should be rational and ethical. Wouldn't you agree?
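The "hard to debug" point can be made concrete without any conspiracy: if a matcher's decision threshold is tuned on data dominated by one group, the other group can inherit a higher error rate with no malicious intent anywhere. A synthetic sketch (all numbers invented):

```python
# Synthetic illustration (all numbers invented) of how tuning a matcher on
# one group's data leaves another group with a higher error rate.

def best_threshold(genuine, impostor):
    """Pick the accept threshold that minimizes total errors on these scores."""
    def errors(t):
        misses = sum(g < t for g in genuine)           # genuine pairs rejected
        false_accepts = sum(i >= t for i in impostor)  # impostor pairs accepted
        return misses + false_accepts
    return min(sorted(genuine + impostor), key=errors)

def error_rate(genuine, impostor, t):
    wrong = sum(g < t for g in genuine) + sum(i >= t for i in impostor)
    return wrong / (len(genuine) + len(impostor))

# Invented match scores: higher means the system is more confident it's a match.
maj_genuine = [0.90, 0.85, 0.80, 0.75]
maj_impostor = [0.20, 0.30, 0.35]
min_genuine = [0.80, 0.60, 0.55, 0.50]   # systematically lower scores
min_impostor = [0.25, 0.35, 0.45]

t = best_threshold(maj_genuine, maj_impostor)  # tuned on the majority group only
print(error_rate(maj_genuine, maj_impostor, t))           # 0.0 for the majority
print(round(error_rate(min_genuine, min_impostor, t), 2)) # 0.43 for the minority
```

No one in this sketch is racist; the disparity falls out of unrepresentative tuning data, which is exactly why it is hard to "just fix the bugs".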



