The story about Clearview finding a witness able to clear a wrongfully accused man seems intended to make us feel like it's good tech, since it can help defense lawyers too. It seems to gloss over why he could be charged in the first place...
"Although Mr Conlyn said he was the passenger, police suspected he had been driving and he was charged with vehicular homicide...The witness, Vince Ramirez, made a statement that he had taken Mr Conlyn out of the passenger's seat. Shortly after, the charges were dropped."
It is undoubtedly a feel-good story. Another “facial recognition feel good story” I’ve heard was a child porn case - the police ran the perpetrator’s image to find him and arrest him… and then ran the victim’s image to find her and inform her family, so they could get her therapy and support. Yet another story is the detective who ran a fugitive from a cold case and it pulled up an obituary for the fellow, effectively closing the case on the spot - not quite “feel good”, but it does serve to illustrate that these technologies have all sorts of applications in policing beyond “take a picture of the bad guy to send him to jail”.
What to make of these stories? On the one hand, we’re only hearing them because they make the companies look good. On the other hand, these stories… kinda do make them look good? Giving defendants access to the evidence they need to save themselves from a miscarriage of justice, locating and protecting victims of the worst crimes imaginable - those are seriously good things! We should be careful to respect the gravity of those results in our calculus of how to treat these feel-good stories.
Ultimately, my analysis is that facial recognition isn’t “good tech” or “bad tech”, it’s just “extraordinarily powerful tech”, and thus it will necessarily inherit the moral valence of whoever’s using it, and those users’ intentions and actions.
If they give that tech to defense lawyers and the lawyers use it for good, sometimes you will get stories like “we put in a single frame of body cam footage and in a few seconds it gave us back the only person who could save our defendant” (and the facial recognition companies will be falling over themselves to tell you all about it).
If they give that tech to sports stadiums and the stadiums use it for evil, sometimes you will get stories like “we put in the headshots of every lawyer who’s suing us, and prevented one from enjoying a football game with her friends and family” (and the media will be falling all over themselves to tell you all about it).
If they give that tech to police, well, I would guess your opinion of them would end up roughly the same as your opinion of the police.
The obituary cold case example leads me to the opposite conclusion. A determined criminal could literally use another person's death to close the case on their own murder with a well-crafted prosthetic.
Facial recognition AI is just facial recognition AI, all it will do is pull up that newspaper clipping. It does not and cannot make any kind of ruling on who’s a suspect - that is a decision for the police to make.
Now, if you have a dim view of the police who are using it, you might expect they will see an obituary and close the case on the spot without investigating further; if you have an optimistic view of the police, you might instead expect they will investigate that obituary to validate it - perhaps by asking the newspaper who ran that obituary, which would lead them to you or one of your associates.
I don’t know what happened in that specific case I mentioned; it’s just a story I heard floating around the industry. I tend to have a positive view of police so my assumption in the absence of evidence is that if they cared enough to investigate a cold case, they would also care enough to do some digging on the obituary that turned up, but I don’t know. It’s valid to have a negative view of police! In that case, your assumption might be that they saw that obituary and used it as a convenient excuse to close a case and improve their metrics.
This is part of why I say face rec inherits morality from its users.
To be honest, when I wrote that I was imagining a step that probably doesn't exist yet, which would be connecting the facial recognition to some sort of LLM that would search the web along with private databases and filter results based on context before showing them to a human. So if, e.g., a photo of a person popped up on a camera in Las Vegas an hour after a murder committed by a very similar-looking person in Miami, the Vegas photo would be ruled out. Such an engine would be prone to all kinds of spoofing. I wasn't really thinking about the competence or vigilance of the police (which I think, as in any profession, varies wildly from place to place, officer to officer, and one circumstance to another).
Even as it stands, though, the technology seems vulnerable to deepfakes. Someone in control of a private camera that fed into the database could get away with murder and frame someone else for it.
I honestly don't understand how the Clearview thing would make any difference. The burden of proof is on the prosecutor, right? So in court they would need to present proof that he was in the front seat, and if he wasn't, there wouldn't be any such proof? Or has the US justice system become so dysfunctional that people must actively prove their innocence?
It is, yes. The specific mechanism is that most cases don't go to trial; the accused are offered plea deals with the threat that they will receive a much, much harsher sentence if they push for a trial. This threat is often made good, too: police testimony is usually enough to get a guilty verdict, and police say what prosecutors want them to.
Basically if they want you they've got you. To get out you have to have a very specific alignment of resources, sympathy and luck, and the risk if it doesn't work out is massive even then.
Yup, "Guantanamo North" I've heard it called. Something like 30 people have died in the Houston jail while awaiting trial in the last two years alone. It's endemic to the point of being basically intentional now.
In Germany, if the driver is obviously drunk or drugged and not fit for driving, you may lose your license as a passenger, especially as a sober passenger, yes. You have to take the wheel or prevent the ride. Not sure about the US.
Surprised it is only 1M, given that 33.6 million Americans use mass transit daily, many cities have vast networks of cameras, private corporations frequently share camera feeds, 2.5 million people pass through US airports daily, etc. Walmart, AT&T, Kohl’s, Best Buy, Albertson’s, Home Depot, etc. have at some point used them; Walmart alone has roughly 37 million daily customers.[1]
Back in 2021, per NYT — “In January 2020, Clearview had been used by at least 600 law enforcement agencies. The company says that is now up to 3,100. The Army and the Air Force are customers. U.S. Immigration and Customs Enforcement, or ICE, signed [a contract with Clearview]” [2]
Clearview AI has been around a while; at least in the beginning they simply pulled faces/names from social media profiles, though I wouldn’t be surprised if they have since expanded their pipeline.
Any residents of CA, VA, or IL who have opted out of Clearview with a 'Do Not Sell' request? I was recently wondering what that's like.
Of course, it runs into the same problem as many other opt-outs where you have to provide them some information so they can identify the records that correspond to you, and presumably delete or mark them as "don't sell". Except with Clearview, the only reference material you can provide is an image, so that would definitely bother some folks.
I emailed them a long time ago to tell them to delete my data. They asked for more data, which I declined. I'm curious what their legal requirements would be in that case. I'd hope that since they could easily figure out who I am and delete my data from just my email, that it would mean they're still required to do so without me giving even more data, but who knows.
We used to think we could reliably use lie detectors, bite marks and fingerprints to solve crime. We have learned these aren’t reliable. Why are we blindly trying to do the same with facial recognition? I am not convinced it is any better.
Things are less black and white with AI. Things like ChatGPT bother me because they don't tell you their confidence even when they have this information available. (I know there is bias too, but the probabilities would be a best case.)
Anyway, there is a huge difference between a clear photo where one candidate shows up at 99% and the next highest is at 10%, versus a bunch of people all hovering around 20%. Not to mention, you could further triangulate with phone wifi/cell towers.
Maybe multiple pieces of evidence are too much to ask for petty crime.
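To make the "decisive vs. ambiguous match" distinction concrete, here is a minimal sketch (not Clearview's actual pipeline; the embedding dimension, names, and data are all made up) of how embedding-based face matchers typically score candidates: cosine similarity between a probe embedding and a gallery, ranked highest first. A big gap between the top score and the runner-up is what a "99% vs. 10%" case looks like.

```python
import numpy as np

def rank_matches(probe, gallery):
    """Rank gallery identities by cosine similarity to a probe embedding."""
    probe = probe / np.linalg.norm(probe)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ probe
    order = np.argsort(sims)[::-1]  # highest similarity first
    return order, sims[order]

# Decisive case: one gallery entry is a near-duplicate of the probe.
rng = np.random.default_rng(0)
probe = rng.normal(size=128)
gallery = rng.normal(size=(5, 128))
gallery[2] = probe + rng.normal(scale=0.1, size=128)  # planted match

order, scores = rank_matches(probe, gallery)
print(order[0], round(scores[0], 3), round(scores[1], 3))
```

In the ambiguous case (no planted match), all scores cluster near zero and the ranking is effectively noise; a sane policy would only surface candidates whose score clears a threshold and stands well apart from the runner-up.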
An AI hallucination should NEVER be considered "evidence". As long as we cannot PROVE, i.e. mathematically, when an ML model is "right" or "wrong", it should never be considered "beyond a reasonable doubt", and if it ever is, that is a failure of the justice and jury system.
Do you think we decide to do things based on whether it works, or is right? It’s a product they’re selling. Nobody really cares about anything except making money.
Curious about the distinction between use of this vs fingerprints, DNA, or even witness lineups. Are they all equally bad? Is the main issue that facial recognition algorithms are less accurate?
There are lots of problems with facial recognition but it's mainly about the size and source of the data set and the ease and frequency with which it gets used.
You're going to get people who look alike, so if you get flagged incorrectly you can end up in a jail cell where you can sit for months if not years waiting to prove your innocence in court. At that point you either get wrongfully convicted and spend years in prison, or you get released - but by then you've probably lost your job and your home, and you have little to no chance of finding work again since you now have an arrest record.
All of that can happen with a witness lineup or a fingerprint match, but if you live three states away the odds of being pulled into a police lineup are extremely slim. If you don't have a mugshot on file with the police, a witness isn't going to have anything to point to that implicates you.
It's the same reason it would be wrong for the police to collect the DNA of every American by default, but looking for DNA matches isn't instant and easy either. Samples have to be collected carefully and sent off to labs, so DNA isn't gathered and checked for every minor traffic violation or shoplifting incident.
Police don't dust for prints and gather DNA evidence for every single crime no matter how minor, and people don't leave usable prints or DNA evidence everywhere as easily or as often as they leave a record of their face.
I can use a tiny bit of latex and spirit gum and show up as someone else on grainy CCTV footage, but it's a lot harder to forge someone else's DNA and leave that all over a crime scene. I can wear gloves and hide fingerprints easily, but it's not as easy to leave fake fingerprints all over everything in a way that's not obvious.
There's just no good reason for every American everywhere to be a suspect in every single crime.
This technology inserts everyone, everywhere, into a police line up for every crime no matter how small and that's a huge amount of risk even if the software manages to come up with a reasonable match. Considering the record facial recognition has with unreasonable matches, especially for people with dark skin tones, and how this company admits that they don't want their software audited for accuracy I'm guessing there are a lot of errors being made.
Such a tool could be used if there were regulations around its usage in police stations and proper auditing of its use. Seems like we're far from it. I hope it doesn't fall into the hands of foreign adversaries.
Personally I'm a lot more worried about the domestic adversaries who already have it. No Chinese CIA or whatever has ever knocked my teeth out and threatened to kill me, but an American police officer certainly has.
Do we even know why or how the AI matched two pictures together? Did the AI cheat during the learning phase and use details unrelated to faces, but present in the training set, to get a higher score?
Is there any reason to believe this number? Could it be significantly padded to make it sound more useful than it is? Could it be padded to convince some LEO types that it's more useful than it is? Could it be lowballed to make it sound like the LEOs aren't just sitting there scanning everyone they come across?
"CEO Hoan Ton-That also revealed Clearview now has 30bn images scraped from platforms such as Facebook, taken without users' permissions."
Not sure about this -- one would have to study FB's terms of use in detail. In any case, implicit consent was given: if you don't want your picture to be used, don't ever upload it anywhere.
I haven't even uploaded it to LinkedIn, which might help explain my surprise about a chat I had with the security officer of a resort in Cancun. He seemed quite pleased that he could positively identify me as a software engineer working out of SV (all I had given was my California driver's license). To this day, I don't know how I earned that conversation (he was friendly enough, still...), but I've "won" the TSA lottery many times as well, so I must somehow trigger a red flag (perhaps not uploading one's picture is one).
Or was given questions to ask and not coached on how to ask those questions when the executive predictably waved it away.
"Are you sending any info back to your servers about what else is on the local network" would have been a perfect question, with a possibly worrying answer, but now instead everyone is laughing about "hur dur does tiktok use wifi" and using a dumb person being dumb to handwave away legitimate concerns.
The US gov hates TikTok because they can’t influence them like they influenced twitter during COVID. US tech industry hates TikTok because they are having their lunch eaten by it. Perfect formula for corpo-fascist policies like banning software for “security” reasons.