This doesn’t sound groundbreaking at all, and I’d be very surprised if FBI/NSA/DHS/Palantir didn’t already have a system like that. Maybe they were just reserved for higher-value targets. Of course, NSA isn’t gonna tell you what it has constructed until decades later, so claiming that this goes far beyond NSA capabilities is reckless at best and clueless at worst.
I think we (both here at HN and in the larger society) have this perception that police departments, etc., are part of a well-organized hierarchy of well-considered processes and technology that starts at the FBI/NSA/etc. and works its way down, all carefully vetted for technical and ethical standards, and disclosed to the public who is ostensibly their employer.
The reality that I’ve seen is quite different. As the article points out, individual police departments (and their officers!) often independently research and procure services, based on hearsay and questionable morals. Does the service help ‘get the bad guys’? Then it’s good. Does the service obviously violate copyright and ToS agreements? Not our problem.
I learned this when I built a scrappy little website for a one-man company that builds training kits for first responders. As low-tech as this site was (e.g., ordering the product entailed sending a purchase order by postal mail), I was struck by the apparent success of my client’s product: he had dozens of endorsements by apparently top-level influencers in the field, and didn’t have much competition. There were some certifications that he matched, but it didn’t seem all that hard to sell a few thousand dollars of technology to small/mid-level LEOs. (This isn’t a criticism of his product, which seemed fine and certainly didn’t involve privacy or surveillance technology.) The numbers in the article about Clearview’s services are in the same range — easy to justify if it’s only a few thousand a year and apparently produces good results.
Exactly. This is a prime example of commoditization. The product doesn't have to be sophisticated. Just good enough to handle the most common use cases reasonably well, at a cheap price.
That's just it, though. I think there is broad popular understanding that the FBI and NSA have just about any surveillance capabilities we can imagine. But, we think, they only use it to chase terrorists, maybe pedophiles. Local cops instantly identifying a shoplifter by matching the surveillance photo to a facebook photo still seems like sci-fi, even if it was increasingly "near future" sci-fi.
Just because the tech isn't ground breaking doesn't mean the application isn't a concern to society.
Facial recognition is a bigger threat to our current way of life than just about anything else I can fathom other than climate change or nuclear war. The scariest part is that few people seem to recognize or care about the risk.
But when it's based on phone GPS, wifi networks etc., then people are fine with it. And that type of tracking has been very possible for many years now through smartphones. But it feels less viscerally spooky.
I will however probably walk into at least one photo that gets posted publicly in the course of a day.
Humans are visual creatures. To "see" is synonymous with "to understand". Vision is a high-fidelity sense, in ways that even other senses (hearing, smell, taste, touch) are not. And all our senses are more immediate than perceptions mediated by devices (as with radiation or magnetism) or delivered via symbols, data, or maths.
This is a tremendously significant factor in individual and group psychology. It's also one that's poorly explored and expressed -- Robert K. Merton's work on latent vs. manifest functions, described as the consequences or implications of systems, tools, ideas, or institutions, is about the closest I've been able to find, and whilst this captures much of the sense I'm trying to convey, it doesn't quite catch all of it.
But his work does provide one extraordinarily useful notion, that of the significance of latent functions (or perceptions):
The discovery of latent functions represents significant increments in sociological knowledge. There is another respect in which inquiry into latent functions represents a distinctive contribution of the social scientist. It is precisely the latent functions of a practice or belief which are not common knowledge, for these are unintended and generally unrecognized social and psychological consequences. As a result, findings concerning latent functions represent a greater increment in knowledge than findings concerning manifest functions. They represent, also, greater departures from "common-sense" knowledge about social life. Inasmuch as the latent functions depart, more or less, from the avowed manifestations, the research which uncovers latent functions very often produces "paradoxical" results. The seeming paradox arises from the sharp modification of a familiar popular perception which regards a standardized practice or belief only in terms of its manifest functions by indicating some of its subsidiary or collateral latent functions. The introduction of the concept of latent function in social research leads to conclusions which show that "social life is not as simple as it first seems." For as long as people confine themselves to certain consequences (e.g., manifest consequences), it is comparatively simple for them to pass moral judgements upon the practice or belief in question.
-- Robert K. Merton, "Manifest and Latent Functions", in Social Theory Re-Wired (https://www.worldcat.org/title/social-theory-re-wired-new-co...)
Emphasis in original.
Giving hate groups a greater ability to stalk and harass their victims is also pretty scary.
Peter Thiel is funding this, after he demolished Gawker Media through litigation over his own invasion of privacy.
Forget Silicon Valley, we urgently need federal regulation to limit this assault on our privacy (at the very least it can slow down our country’s inevitable decline into a black mirror episode)
Welcome to our brave new world: techno-utopian utilitarianism, exemplified by the sophomoric philosophies of Zuck and Thiel.
I disagree. I think many care about the evil they're complicit in in their everyday working lives there, but they ultimately choose the money over good, and hence direct their "goodness" toward espousing their virtuousness on unrelated, seemingly utterly disconnected "issues" far removed from what they're doing and supporting day-to-day.
We really need regulation here. Urgently.
The US appears to have been the leader in such regulation in the past. The problem is, they don't do that anymore. They haven't passed any laws related to user rights or privacy in a long time, and are actively trying to make encryption illegal.
The same is true for the Australian government, and those of several developing nations. We can hope that the EU does something, but... the impact will be limited.
It's especially bad for people living in non-first-world countries like India where the citizens aren't educated on the consequences of law enforcement agencies using tech like this. Laws taking away the right to privacy are being pushed through regularly. Recently they've started using facial recognition to identify protestors: https://www.fastcompany.com/90448241/indian-police-are-using...
I really wish that some leading tech companies would try and push regulation through, but that will never happen since apparently privacy erosion and constant user tracking is critical for revenue for seemingly all of them (except Apple, I suppose).
Also, even if somehow regulations were put in place that made it necessary for websites to try and protect user data and made it illegal to scrape PII, there's nothing stopping government agencies from developing tools like these for themselves. Aaaand we go back to the first paragraph of this comment. This is a sad state of affairs.
But surprisingly, a Pew Research study recently found that more than half of Americans trust law enforcement with facial recognition tech.
This opens up a whole new can of worms with issues like selling this info back to the victim.
I am not a lawyer. Is it possible to file a (class action?) lawsuit against Clearview AI and its clients (police agencies, etc.) in light of this breach of TOS? At the least, this should suffice in procuring a subpoena to obtain more information on the exact extent of misappropriation of public data at play here.
It seems the EFF has some play in this area: https://www.eff.org/pages/face-recognition
But it’s hard to identify what efforts I could support with my time where I am. The right to repair movement has done a good job of communicating state by state proposed laws that can be advocated for. Is there anyone doing the same thing for privacy?
It’s a mistake to think about US regulations as purely federal.
That raises the barrier to entry and compliance so high that it strangles a lot of entrants.
If it is a problem that the government identifies you as a protestor, at that point it doesn't matter that there was a regulation telling them not to. The government needs to be controlled to the point that it doesn't matter whether they can identify a protestor or not, because peacefully protesting should not be a crime that warrants government intervention.
It should be hard for the government to imprison you or otherwise impinge on your freedom: only for serious offenses, with a high burden of proof, in a public trial.
> The government needs to be controlled to the point that it doesn't matter whether they can identify a protestor or not, because peacefully protesting should not be a crime that warrants government intervention.
Right, but that was never the case, and it probably won't be in the future.
Because the idea that a government willing to jail peaceful protestors will nonetheless refrain from using certain technologies to find them, just because we ask nicely, does not make sense to me.
This cat is out of the bag. Findface.ru now actively courts law enforcement and other interested parties. The west does not have the monopoly on this.
We don't need regulation here, urgently or not. This whole push towards banning things --- this company, the EU facial recognition thing, and so on --- strikes me as just another moral panic used as an excuse for a few to impose their opinions and power on the many.
I've yet to see privacy advocates identify actual undeserved harms that have come to people as a result of the technology that they want to regulate. Loss of "privacy" in public is only a harm if you already accept the premise of the argument, which I don't.
There are limits to what we consider acceptable even in public spaces; for example, upskirt photos aren't OK even if you're in a public place. I think it's still reasonable to consider that one day (maybe today, for many people?) it might mean that every single moment of their life outside is being recorded, which was literally not possible until recently. It's a valid thing to discuss.
Example: it's already legal to keep tabs on people in public. There are businesses built on this idea: private investigators. A little sleazy? Expensive? Sure. But legal. If you want to be consistent, you should ban them too.
If X is what causes harm, X should be disallowed no matter the price.
A lot of X's aren't a problem until they can scale. It's not pragmatic to outlaw everything that might be a problem at scale but might never be able to achieve that scale.
We're in an era where we are discovering a lot of abuses that could only be classified as an issue due to scale and efficiency.
For many things, the scale is the problem.
These kinds of things tend to increase attention given to issues, yes. I don't think it's unreasonable to think that people care more about things that are easily and practically abused, because, well, those are the things that are more likely to actually affect them. Plus it's a lot harder to argue against some formless "maybe people could be watching me" threat, but a lot easier to reason about a specific example.
> Example: it's already legal to keep tabs on people in public. There are businesses built on this idea: private investigators. A little sleazy? Expensive? Sure. But legal. If you want to be consistent, you should ban them too.
I'm not really a fan of private investigators, to be honest, but I haven't really given it as much thought as I should before I argue my case online.
Good example: paper records. Lots of sensitive stuff is recorded on paper. Access to those records is often less than perfectly secured. But because accessing hundreds of them is tedious, and stealing them might require an actual truck, this is not a real-world problem. The moment we digitize this data, it gets so easy to access, copy, etc. that the old level of access protection is no longer enough.
Similar problem with scaling up face recognition. Lots of jerks would surely like to harass other people by following them around everywhere, spying on them and generally making their lives miserable. Until now, this was really expensive, in both time and money, so it happened only rarely. But once it gets automated, it also gets cheap.
Should we bother passing a law saying it is illegal to teleport across international borders, thus evading immigration checkpoints? Should we set tax rates for the sale and import of time machines? Or does it make sense to wait until either is remotely possible?
Harms like protesters getting identified, in Hong Kong and elsewhere?
I see a lot of people saying that it's terrible that protesters might be identified. I think at least some of these people are secretly upset that people can't break windows and burn cars with impunity.
You asked about harm. Expectation of privacy is a different matter. And they might expect privacy due to wearing masks - why is it okay to regulate those, but not facial recognition? While I'm against regulating facial recognition, I'm at least aware of the harm it causes.
> If you're a peaceful protester, it shouldn't matter.
It shouldn't, but it does. If the government and corporations were so perfect that it wouldn't matter, you wouldn't be protesting in the first place.
> I think at least some of these people are secretly upset that people can't break windows and burn cars with impunity.
This is a pure ad hominem, and not a very good one either. The motivations of an imagined 'some' people are irrelevant to the harms of surveillance and facial recognition.
My guess is that this company has no way to verify that they don’t process EU citizen data. They almost certainly do if they’re scraping so pervasively. And I don’t think they can credibly claim users gave consent let alone all the other rules they need to follow.
Looking forward to someone challenging them on this and hopefully the EU taking action. This feels like exactly what GDPR should protect against.
Since the company doesn't do business in the EU, the GDPR can go get knotted.
PS. My gay mates have also not decided to go straight just because Uganda outlaws it.
That's not how international law works though, especially when wielded by a large economic bloc. If the EU wants to put pressure on a company, the pain is harsh. For instance, they can blacklist the company and its C-suite from international banking and ask any in-treaty country to extradite or arrest employees.
Also, are you admitting to breaking EU law and moral/ethical codes on HN?
I also freely admit to breaking a lot of blasphemy laws.
None of them are laws where I live, so I won't ever get extradited.
Our government has abandoned us, and total surveillance is the future unless something radical changes.
Fun tip: get an old analog radio, like in an old non-connected car or a boombox or walkman or clock radio or something, go somewhere quiet and private, and listen to whatever you want. And realize that nobody knows what you’re listening to—it’s your secret. I find this to be a strangely powerful experience.
> While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.
What's appalling here is that Clearview is playing in God mode, monitoring and acting on what Law Enforcement is doing, IN THEIR OWN INTEREST, without any oversight. The potential for this to backfire is astronomical.
The last kind of person on Earth I want making an app like this is someone that doesn't care about terms of service, morality, contracts, or upholding the law. It seems like he just got into it for the money, and has no compunction about unethical behaviour. "Everybody's doing it" is a cliche, and idiotic, response. Don't take any wooden nickels when you sell your soul...
> Police officers and Clearview’s investors predict that its app will eventually be available to the public.
Mr. Ton-That said he was reluctant. “There’s always going to be a community of bad people who will misuse it,” he said.
> Asked about the implications of bringing such a power into the world, Mr. Ton-That seemed taken aback.
“I have to think about that,” he said. “Our belief is that this is the best use of the technology.”
And then read this:
> Because the police upload photos of people they’re trying to identify, Clearview possesses a growing database of individuals who have attracted attention from law enforcement. The company also has the ability to manipulate the results that the police see. After the company realized I was asking officers to run my photo through the app, my face was flagged by Clearview’s systems and for a while showed no matches. When asked about this, Mr. Ton-That laughed and called it a “software bug.”
Wow. Just wow.
If we are to believe the tests journalists did, it was pretty good, considering the app's authors just pulled some VK/Russian dating-app photos.
Edit: Thanks for downvotes. Here's the article: https://www.theguardian.com/world/2016/apr/14/russian-photog...
The app is called FindFace
We all know that once technology allows a beneficial behavior you can easily get away with, nothing can stop it. See torrents, ad blocking, reverse-engineering, cracking, etc.
But even from a legal/moral perspective, it's not clear where the line is. The data is publicly available, uploaded voluntarily by the people themselves. The algorithms are freely available. People are allowed to take photos...
Sure, the end product is creepy, but where along the way did we go too far?
That's when we went too far.
This isn't hard.
Edit: And as a random aside, I'd be surprised if Clearview wasn't violating copyright law, here. When a person uploads a photo to Facebook, the user grants a license to Facebook.
So unless I'm missing something, Clearview is illegally copying and using these works without permission...
Can one recognize people whose face they saw somewhere (on TV, on a dating app etc) without "permission". (e.g. "I'm pretty sure I just saw my tinder match going into a bar with someone")?
What if someone has a very good memory for faces and a curiosity to match? What if someone employs scouts to report when they spot certain people?
What if someone automates these processes?
Where is the line between private information and publicly available raw data, such as photons bouncing off people's faces?
Let's try it!
How do you define a biological weapon?
If I know I'm sick and I deliberately sneeze on people, am I a weapon?
If I pay people with an illness to sneeze on people, is that a weapon?
What if I cultivate smallpox in a lab and spread it with an aerosol sprayer instead of using human carriers?
Where is the line on what is a biological weapon? Where along the way did we go too far?
Ever been to an event where photos were being taken?
Ever walked by a surveillance camera?
Ever walked by a house with a video doorbell?
Like in the movies.
If I set the value of my photo at $2 million and company X sets it at $0.20, am I forced to sell it at $0.20? If two people can't decide on a valuation it's only fair there's no transaction.
If it is not your photo, then unless you are a celebrity or otherwise a famous figure the law is quite clear that you do not have any recourse for your photo being used. You must use the courts to determine the valuation of your likeness.
The final step of identifying people involves actual individual photos, and these photos are displayed to the police officer. It is not just the aggregate.
If somebody took your photo in a public place, why should you deserve compensation?
For public place photos, the photographer has rights to the photo. Anyway, most of the photos on social media are not these public place photos.
It is not clear how this will play out though. Is it even possible to hope for a state that doesn't spy on its citizens? I'm not so sure anymore (thanks, all you f-g terrorists). Maybe our struggle has to be to regulate and enforce how the spying is done, and used, and live with the fact that it can be abused before it is corrected.
If anyone has a clear view of how such pessimism might be wrong I'll be happy to hear it.
Or if they did, then they've done a grand and thorough job of destroying the freedoms of the West that they find so offensive.
The problem is there are good and bad reasons, but even the good reasons will be corrupted by bad people.
Sure. And then we over-reacted by passing laws limiting cars to the speed of a horse. I'm much more worried about that kind of kneejerk regulation than I am about the actual supposed privacy problem.
Without regulations, laws and audits, everyone is screwed. Oh yeah, even law enforcement is screwed if/when it blindly believes in these systems and considers them fool proof and beyond suspicion.
Creepview — now, that’s my name for this company. It also makes sense that Peter Thiel put money into it.
The only way to deal with this is to recognize that privacy in public spaces was a temporary concept in society available for a limited period. Allow everyone this information and let the cards fall in a more balanced way. Anything else would be oppression.
Maybe, maybe not. But it's an easy and incongruous thing to say from a throwaway account.
Even easier if your life and/or livelihood doesn't depend on a degree of personal privacy.
What actually are the precision and recall of systems that search over faces of the entire national population? I would have thought enough people look similar that precision would be inherently low (even as a human it is occasionally hard to tell people apart in photos), but the claims here (and in similar articles NYT has had on Chinese companies doing similar things) is implying near perfect numbers.
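The skepticism above is just the base-rate problem: in a 1:N search, the number of candidate false matches grows with the gallery, so even a matcher with a tiny per-comparison false positive rate can drown the one true hit. A back-of-the-envelope sketch (my own illustrative numbers, not figures from the article or from Clearview):

```python
def search_precision(gallery_size: int, true_matches: int,
                     tpr: float, fpr: float) -> float:
    """Expected precision of a 1:N face search.

    tpr: probability the true identity is returned (per-search recall)
    fpr: probability any single non-matching face is returned
    """
    expected_true = true_matches * tpr
    expected_false = (gallery_size - true_matches) * fpr
    return expected_true / (expected_true + expected_false)

# A matcher with a one-in-a-million per-comparison false positive
# rate, searching 300 million faces for a single person: ~300 false
# hits are expected alongside the (likely) one true hit.
p = search_precision(gallery_size=300_000_000, true_matches=1,
                     tpr=0.99, fpr=1e-6)
print(f"{p:.4f}")  # roughly 0.0033, i.e. ~300 false hits per true hit
```

So near-perfect headline accuracy on benchmark pairs is compatible with most candidates in a population-scale search being wrong; the claims only work if the per-comparison false positive rate is far below one in a million, or if a human filters the candidate list.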
How has this guy not been targeted by organized crime groups already?
You make a good point, a simple ban of this technology isn't good enough. The ban needs to go further, to ban the kinds of things that enable the technology, like massive databases that aggregate people's personal photos or surveillance video.
There does not seem to be a feasible way to stop this. We may simply need to accept that a face will be enough to positively ID anyone.
On the other hand, it would push copycats into developing alternative business models to capture the demand while still hiding from civil legal action.
And there’s wider societal costs from potentially chilling innovation like this through the social media companies acting like a trust.
It is telling about the attitudes of law enforcement that they skirt prohibitions on first-party use of facial recognition by consorting with an entity such as Clearview.
Apple and Android should blur faces by default until other people explicitly give consent to be photographed. That's the future I want to see.
They specifically prohibit, on their own site, the very thing they’re doing to others.
Data aggregation and transparency were supposed to be the foundation of open government, but it looks like the citizens (by way of consumerism/capitalism/communism) are the victims of legislated privacy violations by law makers that want nothing to do with transparency. It seems like a conflict of interest if business is pulling the strings of politics.
If the citizens pushed harder for a transparent government, other than encryption, what else can they legislate to turn the tables on that debate (ie. No privacy for government is a no go)?
Sure, something might happen. Anything might happen. The news is supposed to tell us things that did happen.
If some journalist wants the world to know what their crystal ball says the world is going to be like, they should publish a book, write a blog, whatever. Don't pretend it's journalism.
Humans are notoriously bad at predicting the future.