Some people here are saying that this is bad because the victim is asking for $1bn, that it's a publicity stunt.
Well, it had better be a publicity stunt! You can't change these stupid facial-recognition practices without such bad publicity. I hope they get as much publicity as possible, and maybe some money (but he won't get $1bn; that's not the point).
But AFAICT from the Bloomberg article, at least (I won't go through the Oath hoops to read Engadget), it doesn't sound like facial recognition was the problem here. The issue was that someone (presumably) used his ID, thus associating his identity with a crime; this led to the police coming for him. The police may not have been quick enough to recognise that the person they found didn't match the arrest warrant they held, but that's not the fault of some camera in an Apple store.
It would appear you do need to allow first-party JavaScript for the <script type="application/ld+json"> portion to load, which contains the "articleBody": ....."
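As a rough illustration, here's a minimal Python sketch (assuming you've saved the rendered page from your browser to a local file so the JSON-LD block is actually present; the filename is made up) that scans the <script type="application/ld+json"> blocks for an "articleBody" field:

    # Hypothetical example: extract "articleBody" from JSON-LD in a saved page.
    import json
    import re

    with open("engadget_article.html", encoding="utf-8") as f:
        page = f.read()

    # Find every <script type="application/ld+json"> ... </script> block.
    blocks = re.findall(
        r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
        page,
        flags=re.DOTALL,
    )

    for block in blocks:
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        # JSON-LD can be a single object or a list of objects.
        for item in (data if isinstance(data, list) else [data]):
            if isinstance(item, dict) and "articleBody" in item:
                print(item["articleBody"])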
I am connected through my VPN, currently in NL, so yeah, GDPR.
Sure, I know there are ways I could work around it. But I shouldn't have to. It's supposed to be easy for a visitor to refuse consent for all that unnecessary tracking - which has no justification in terms of legitimate interest for the purpose of publishing a news article. It shouldn't require technical countermeasures (that 99% of users will not understand).
If they don't offer an opt-out that is as easy as opt-in, I'm going to assume malign intent on the part of the site, and they can do without my traffic.
> Sure, I know there are ways I could work around it. But I shouldn't have to.
Sadly that's the world we live in.
> It's supposed to be easy for a visitor to refuse consent for all that unnecessary tracking - which has no justification in terms of legitimate interest for the purpose of publishing a news article. It shouldn't require technical countermeasures (that 99% of users will not understand).
It shouldn't, yet it does.
> If they don't offer an opt-out that is as easy as opt-in, I'm going to assume malign intent on the part of the site, and they can do without my traffic.
Exactly. Essentially, neither Apple nor the police bothered to check whether the name they got from an ID card was actually credible before acting on this information with violence.
They fed the wrong face into facial recognition software ... which did its job perfectly.
This is akin to the police arresting someone because their name matched a name a criminal left behind. Happens all the time, and I'm sure that more than a few wrongful convictions happen this way. This is 99% the fault of the police.
Perhaps there's an argument that facial ID lowers the bar slightly for this to happen, but that's about it. When push comes to shove, neither Apple nor the police did their work properly before resorting to heavy-handed tactics ...
I have Safari set up to automatically use reader mode on any Oath sites I might want to read, which admittedly isn't many as they're almost all terrible anyway.
> You can't change those stupid behaviors of facial recognition without such bad publicity
What's the "stupid behavior"? Some Apple stores have been robbed. Apple got video footage of the robbery, set up a system to look out for the suspect, and alerted authorities when the suspect entered the store. It's no different than an FBI wanted poster. The knee-jerk anti-facial-recognition reaction has gone overboard.
Also, it's not entirely clear, but the article suggests that the person had used a stolen ID belonging to the arrested teenager, which makes the arrest all the more reasonable:
> Apparently, the real perpetrator used a stolen ID that had his name, address and other personal information
Finally, it's worth noting that Apple didn't arrest the person; the police did. Apple likely just notified them, and then the proper due process happened.
In that case, I don't see the role facial recognition played. The thief walked into the Apple store with an ID stating he was Ousmane Bah. Apple told the authorities that someone with an ID identifying him as Ousmane Bah had robbed a series of Apple stores. Police made an arrest and discovered that the man who robbed the stores had used a false ID.
The reason you don't see the role that facial recognition played is because you still have not read the article.
I'm genuinely curious, as this is happening more and more on Hacker News. Why do you come to a thread about a specific article to discuss it?
When I read your first comment, it was clear you hadn't read the article. Then someone made a comment to correct some of your mistakes, and you still replied without reading the article. It just puzzles me.
Apple identified (using facial recognition) the same person who stole multiple items across different states, and since the thief used a fake name, they associated all the crimes with that name.
So in the end the only link was the NAME, due to the stolen ID.
Apple took the stolen ID used by the criminal. Then Apple incorrectly linked the details of that ID (which has no photo) with images of the thief.
Following further thefts associated with that ID, the innocent man was accused of all the thefts, based on the ID and an INCORRECT photo match to the stolen ID that Apple had created in its facial recognition system and used as the basis to inform the police of the suspect.
I tried my best not to be snarky. I thought the article was a normal average article.
Edit: I'm not sure the article says facial recognition was responsible; I think that's what the courts will decide. But to me, it's clear that facial recognition was involved, based on the article's title and content.
I think the point of disagreement regarding the "link" is not that there is none, but that there seemed to be nothing peculiar about facial recognition here.
If Apple had sent a security camera picture of the robber, together with information from the stolen ID, to the police, the same would have happened. So the core issue here is that the validity of the ID was never questioned, not the use of facial recognition. That might just have amplified / accelerated the identification of the wrong person.
But it wasn't the facial recognition that led back to the innocent man; it was the fact that the perpetrator used his name and details. It could have been facial recognition, or the same security guard who saw him at all three stores; it doesn't matter.
He was arrested because his name and details were presented at the crimes by the perpetrator. I've read the article and this is what I've taken away from it.
I really can't understand how facial recognition played a critical role in his arrest over his name being used.
We've both read the article. On rereading it, it's not clear if the stolen ID was presented at all the crimes. My assumption was that it was not.
I make that assumption because the person accused of multiple crimes is suing Apple based on their use of facial recognition. He wouldn't do that if the ID had been used in each crime.
Another article confirms my assumption is correct.
"A detective with the New York Police Department allegedly told Mr Bah that the thief probably used Mr Bah's driving licence as identification during one of the robberies. The detective reportedly said that this may have caused Mr Bah to be charged with thefts committed at Apple Stores in New York, Delaware, New Jersey and Massachusetts, according to court papers."
If it was the same person who committed all of the robberies (not Mr. Bah, but the person who stole his identity) then the facial recognition worked flawlessly.
Unfortunately "Stolen ID Blamed for False Arrest" doesn't make for a good story.
It may not even have been facial recognition by computer. The facial recognition may well have been done by a human looking at security camera footage after the fact.
That would be consistent with “Apple said on Tuesday it doesn’t use facial recognition in its stores”.
On the other hand, “Security Industry Specialists Inc., a security firm that’s also named as a defendant, declined to comment on the suit” could mean that Apple hires a third party for its store security, and that firm could use facial recognition.
I suppose the point I'm making is that the facial recognition system correctly identified the individual responsible for the crimes. Mr. Bah was not mistaken as the perpetrator by the system. The only reason he's involved is because his identity was stolen. If only one store was robbed and no facial recognition was involved the outcome would have been the same. So he was tied to the other crimes because of the facial recognition, but the only reason he was tied to any at all was, again, because of his stolen identity.
The article linked may well be longer, but it certainly isn't "better" by any definition. It starts by claiming that Apple, not the suspect, filed the suit, and seems to state later on that the arrested man was killed by police. I say "seems to state" because the grammar and vocabulary are so very odd throughout the article that I suspect it is the output of some text generation algorithm trying to recycle the story from a different source.
Not the person you're asking, but FWIW, I regularly read the comments before deciding whether to read the article. In many cases, the top comment is some expert in the field in question who explains why the article is a waste of time.
I can understand that. But, do you debate specific points in the article based on the views you formed by NOT reading it?
I get that some comments can be interesting without reading the article. But someone discussing and finding fault with an article they haven't read (properly?) ... that's the bit that just baffles me.
Reading the article seems like a reasonable minimal entry requirement, IF someone wants to discuss the content itself.
I think when your lawsuit's claims sound more like a quote from Dr Evil than anything remotely plausible, people are entitled to apply appropriate quantities of mockery.
While true, a court might be inclined to toss the case if there’s the perception that it’s frivolous. Judges don’t look kindly upon these kinds of lawsuits.
That was my point, though. He’s abusing the legal system by asking for compensation he knows he will not receive to gain publicity. The issue involved is fine but I’m not a fan of exploiting and clogging up the legal system in this way. If this is “what it takes” I think we have a bigger problem we need to solve.
They’re going to do that anyways. Most likely the case will be settled so the guy goes away and the press dies down; he’s not going to continue his moral crusade against Apple violating his privacy after that.
I mean, $10k seems like a reasonable compensation to me. It sucks that he might not get as much news attention, but I don’t think that’s what a lawsuit is supposed to be. The newsworthy bit should be the wrongdoing, not the outrageous demands; he could have tried to get support for his plight without the claim.
The argument behind entertaining huge fines or damages in tort cases is that it acts as a deterrent against future violations of the law. If the damages were only $10K then Apple would have less incentive to create safe software and fix bugs like this one. If the damages are a potential $1B they will take the issue seriously. It is fine when the defendant is a huge company like Apple IMO.
They provided video of someone they claimed was this kid. But it was actually someone who apparently used a stolen or counterfeited ID. And not really even that. It was likely someone who their facial recognition system matched with someone who had used that ID.
Maybe I'm just naive, but I doubt that people typically get arrested based on such iffy identity theft. It wasn't a photo ID. And such ID is typically useless for legally meaningful authentication. It won't get you alcohol when you're underage. It won't get you a bank account. Or a drivers license, or even a replacement social security card.
As far as I can tell, this is strictly supposed to be compensation for hardship. The size of the entity that is causing the hardship doesn’t affect the amount of suffering. So why should the payout be different? (Otherwise I’d just seek to be wronged by the organization with the biggest pockets?)
The law provides remedies: someone has been wronged, and the legal system provides recourse. The law also regulates conduct: the punishment acts as a deterrent, to avoid similar violations in the future.
A negligible settlement won't act as a deterrent for an entity with deep pockets.
If we keep letting a corporate entity get away with undesirable practices by merely slapping them on the wrist, we are actively signalling to them that they should continue to engage in those practices.
The only way to stop a corporation from doing something undesirable is to make the punishment great enough that the undesirable act is a net financial loss to the corporation.
I would personally fully support massive % of yearly revenue fines for anything to do with privacy.
Actually, that brings up a good point: what exactly should the remedy in this case be? I don’t think it’s clear that Apple is even at fault here. It’s a pretty shaky case and I’d place more blame on the officers who arrested him without double-checking than an in-store camera.
Just because something is illegal does not stop a corporation from doing it. If the financial penalty for an illegal act is less than the financial gain from carrying it out, it will be carried out.
See: petrochemical pollution, bank money laundering, telcos selling customer data, etc.
Legal or not, the discussion is whether a 1B punishment is appropriate. Yes, I think it is appropriate for an organisation with a reported quarterly revenue in the 90B range, maybe even on the low end.
By making something illegal, the government can go after the company by representing the collective will of the people it stands for. I think it’s much more reasonable to have billion dollar fines in that case.
I think corporal punishment along with monetary penalties would be great when it comes to corporations/rich people. Maybe only for the next 5-6 years, just enough to clean things up.
What do you mean by clogging up the legal system? It doesn't matter how much he asks for really. This should be a valid case and it should cost Apple enough to hurt them.
We can't tell what the arrest was worth to the teenager. Depending on timing and results on his life, that could be anything between mild inconvenience and 1bn.
All Apple did was provide the police with the information they had, in good faith: that is, the information on the ID the thief had.
It's not Apple's fault the ID was stolen and the information was false. It's also not Apple's fault that the police didn't do basic due diligence to determine if it was physically possible for this guy to have committed the crimes before arresting him.
What, exactly, is Apple to blame for here that "this should be a valid case and it should cost Apple enough to hurt them"?
The police follow different procedures based on who is making the report.
If you are an individual, and someone steals your iPhone, then you use the locator feature to pinpoint the exact address where it is, look up the name of the person living there, have a plausible story for why that person had opportunity to steal it, dig up the serial number from your records, and hand all that info over to the police, they will give you a report that you can give to your insurance company, caution you against approaching the thief yourself, and do literally nothing else. You did all the investigative work. All they'd have to do is show up with a search warrant. Aaaand nothing happens.
If you are Apple, and someone steals your iPhone, then you use facial recognition tech to link the thief to the name and address of a stolen identity, then the cops will raid that address and arrest the person, without even bothering to investigate, or even validate Apple's info, first. Perhaps they call a VIP number rather than 911 or the public number for the main automated phone tree, that bypasses all the layers of "we're busy with important work, so piss off"?
This is the same problem as SWATting attacks. Police procedure is hackable, by saying the right words to the right people, to turn it into a guided weapon against a target, who is occasionally killed by a cop made too twitchy by the "everyone wants to kill you" training.
Apple provided false information to the police. This resulted in an arrest, which is potentially traumatic and violent. They could have given just the video footage of the thefts. But instead, they became part of a system that hacks police procedure to launch attacks against innocents. Someone was arrested on the basis of the output of a facial recognition program, without any consideration to the fact that it had been hacked, using the very simple and very easy attack of linking the face to someone else's name.
Why would giving the video footage, and not the ID information they had on file, have been better?
Consider it from the perspective of not having the information that the ID was false. Why would you give the police less than all the information you had? Why would you want the police to have to do all the work to figure out who that face on the video is, when you have their ID?
Furthermore, even if you knew the ID was false (or even suspect), giving the ID information to the police, along with your assessment of its validity, would still be better. They would have an extra data point to use to try and trace the real thief.
And, again, the facial recognition program did not perform the identification of the thief. It merely linked his presence in multiple stores at the times of theft. The identification information came solely from the stolen ID the thief gave when he purchased something legitimately in one of the stores.
I operate under the assumption that the majority of cops are just doing the job for a paycheck, and not out of any genuine desire to do the job correctly arising from a professional work ethic. I get it--it's just like every other job. Only a few people actually care about what they do, and the rest know what it means to just be an employee.
Giving the footage, without the name, forces the cops to positively identify the suspect through investigation of crime scene evidence. The FaceID-generated name is just speculation from Apple.
Operating under the assumption that the cops will do the absolute minimum amount of work to clear the case, just giving the video runs the risk that they will decide that identifying the suspect is too difficult, and give up. But bear in mind that they failed to identify the real suspect anyway, and also arrested an innocent person, just because Apple said that's the person who did it.
The professional cops, who are usually not as knowledgeable about computing science as the professional software developers, may be unaware of the GIGO principle--garbage in, garbage out. They don't know how Apple assigned a name to the face, but they trust them, because Apple spends a lot of marketing dollars getting people to believe that all their stuff "just works".
The facial recognition software linked the face that had a name attached to all the faces that didn't. Apple can certainly give a rate of false positives and false negatives for the face-matching, but I'm not sure they have any idea how many people are showing them fake IDs. That's something cops and bouncers would know, not ordinary retail employees.
If you were a cop, and someone dropped this case on your desk, what would you do? I, personally, would search for that name on every corner of the Internet before getting off of my ass. I'd be looking for any excuse I could find to not go out to arrest someone, because that's work, and solving property crimes, especially shoplifting cases, is not noteworthy or glamorous. But Apple is rich, so I'd put just a tiny extra effort in to find the real thief, in case I ever wanted to run for sheriff, and get them as a corporate campaign donor.
I'd probably go to one of the department's fake Facebook accounts, upload a cropped still from one of the videos, and see if their auto-tagging software produces a match. If that didn't come up with anything, I'd have to get a search warrant for the named suspect and their home address, looking for any stolen Apple stuff. Then I'd see that the named suspect does not match the video footage. I'd give the best face-shot and a short video clip to the local TV news to see if that generated any tips or leads, and then finally write a report saying that the suspect could not be identified or located from the available evidence. But I'd be polite about it, because Apple is rich, and would certainly want to read the report. And then that would be it. Because every retailer has to live with a nonzero amount of shoplifting and other forms of shrinkage, even the rich ones.
> The FaceID-generated name is just speculation from Apple.
No, it's not. As I've said multiple times, the name came from the stolen ID the thief had. The only use of facial recognition in this case was to link his presence at the time he gave the ID, to his presence at the other stores where thefts took place.
Facial recognition played no role in actually identifying the thief (either the actual one or the poor sap whose ID was stolen).
Do you realize that the first paragraph of your post directly contradicts the second?
1. Facial recognition played a role.
2. Facial recognition played no role.
One of the two must be false. It is the second. Facial recognition played a role. Specifically, it linked the face seen in video where a crime was observed to a face that misidentified itself.
The speculation on Apple's part was that the name the face presented for itself was genuine, and not a deception. It did not stop to assess the probability of all customers presenting fake ID, or the conditional probability of a customer presenting genuine ID given that they are a suspected serial shoplifter.
That last one is relevant. A person that can do a grab-and-run theft might also be willing to do a fraudulent purchase theft by giving a stolen, cloned, or credit-fraud card and stolen or faked ID. So while you might be able to assume that regular customers usually give genuine ID, giving fake ID and stealing from retail stores are probably not independent events. Once you have reason to believe a person is stealing, based on the facial recognition matches, you can't trust that they are who they said they were with the same confidence.
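To put toy numbers on that intuition (made up purely for illustration, not anything from the case):

    # Assumed, illustrative rates of fake IDs among ordinary customers
    # vs. among people already flagged as suspected serial shoplifters.
    p_fake_id_given_ordinary_customer = 0.01  # assume 1 in 100
    p_fake_id_given_suspected_thief = 0.50    # assume 1 in 2

    # Confidence that the name on the ID really belongs to that face:
    print(1 - p_fake_id_given_ordinary_customer)  # 0.99 - fine for a routine sale
    print(1 - p_fake_id_given_suspected_thief)    # 0.5 - a coin flip, not an identification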
And what would Apple have done if the same person had given a different stolen or faked ID, on a different occasion, and that name was also linked to the case by facial recognition?
Apple is not the police, and it's not their job to solve the crime for the police. It's their job to hand over all relevant information so the police can solve the crime. And what would Apple have done if they had identified this person instead by good old-fashioned human facial recognition, in one store—they check their security camera records, see the person who gave the ID, and spend some time matching it up with thousands of dollars worth of theft in the same store. They would likely have done just the same: handed the ID over to the police, along with the information about the thefts.
My first paragraph does not contradict itself. Facial recognition software was not used to identify the thief. That was purely done through the use of the ID. Facial recognition software was then used to link the already-identified thief in a single instance to other instances of theft at other stores. Even if the facial recognition software had never been used, the guy the ID was stolen from would still have been fingered as the thief in that first instance.
This is not a story about Big Bad Apple, or Big Bad Technology Gone Wrong. This is just a story about Humans Making Mistakes. Specifically, it's a story about the police not doing their due diligence.
Don’t. I think it should cost them an amount proportional to the “suffering”: this is a lawsuit after all, not a criminal trial.
> We can't tell what the arrest was worth to the teenager. Depending on timing and results on his life, that could be anything between mild inconvenience and 1bn.
Agree as well, but I hope you can forgive me for leaning towards “mild inconvenience” rather than “one billion dollars”.
> That was my point, though. He’s abusing the legal system by asking for compensation he knows he will not receive to gain publicity.
Asking for an unrealistically huge sum is par for the course. The judge can arbitrarily reduce that sum down, all the way to zero.
> The issue involved is fine but I’m not a fan of exploiting and clogging up the legal system in this way.
The legal system isn't being exploited or clogged up, he was going to seek damages (as one should) in either way, whether the number sought is a billion dollars or a thousand dollars doesn't make a difference.
The only entity responsible for causing a false arrest is the police, as they are the ones that have the authority to arrest. Unless they were intentionally given fraudulent information, I don’t see how anyone other than the police is responsible for a false arrest; it is their duty to do due diligence.
I don’t know why you’re bringing up these hypotheticals: Apple is equally likely to have not lied to the police. Do you believe they were purposely acting in bad faith?
I'm arguing that they misused data that they had. And so they actually didn't have enough evidence to file a police report.
But it's also arguable that the police screwed up, by not noticing that the photo from the video didn't match the suspect. So maybe the kid should also sue the police for false arrest.
No idea; that’s something that will be decided if the lawsuit ever goes to court. But I’m pretty sure that the final value will be significantly less than a billion dollars.
Because it looks like the person is using it to drum up publicity, possibly in an attempt to get Apple to pay them a decent settlement to get the media to shut up. I’m not really a fan of people using the legal system in this way.
What should be the exact dollar amount to avoid "looking like he's using it to drum up publicity"? You presumably have some idea of what the "right answer" is since your gut is telling you $1bn is too much.
If I was mistreated by the police for a few days with no permanent harm, arrested on insufficient evidence, the police should pay me a few hundred or a thousand dollars a day, maybe. Apple should pay nothing, because they did nothing wrong by sending evidence to the police.
The real problem is that the government considers arrest without conviction to be a civil duty, not a tort.
> If this is “what it takes” I think we have a bigger problem we need to solve.
Well, yes - we do. Using publicity (via mass media and/or the legal system) is, at this point, one of the few recourses citizens (at least in the US) have against tech giants when they engage in morally-questionable behavior. In the idealized capitalist world, we could move to competitors, but the current situation is that a number of markets are extremely monopolistic (search engines, online markets, platform app stores) or duopolistic (mobile devices, computers, news aggregation) - and consumer apathy, combined with toothless or entirely missing regulation, trivialize the impact of anything except for the most visible of reactions. If there were stronger legal repercussions in cases like this, it would make publicity stunts unnecessary.
The publicity is obviously focused on the wrong thing, though: that’s exactly why we’re having this discussion instead of talking about the privacy implications.
The publicity is not the kid's fault; it's Engadget's and the media's. They want eyeballs. Anyone with the most basic legal knowledge knows that's not how damages work, and he won't even get a million.
He's only "gaining the publicity" you're giving him, by ignoring the subject of the case: Apple programmed their computers to catch a citizen, so they could turn that citizen over to the police - but they screwed up and programmed their computers to catch the wrong person, therefore delivering on a promise that has been made, time and again, by one dystopian author after another, and which demonstrates the fact that we must always be willing to face up to abuses of technological power, whatever the cost.
Hey, a billion dollars got your attention. Perhaps, after investigating it, we will find that Apple has in fact done more than a billion dollars' worth of damage to society. If they can screw up like this once - perhaps there is more? 100 more cases, perhaps?
We will see, once enough publicity from this "abuse of the legal system" runs its course ... considering that, indeed, there may in fact be bigger problems to solve.
I know this view is not going to go down well with the HN crowd, but consider this: The name used by the actual thief was used to track down Bah and arrest him. The normal course from there would be a long haul process to prove he was innocent. But he was exonerated by the presence of facial recognition which showed that the person in the store was not Bah. The facial recognition here seems to have done neither more nor less than a regular CCTV camera in a store. If anything the presence of the cameras helped free him!
The facial recognition prevented Bah from clearing his name. He was accused, and I believe arrested, multiple times, despite being proven innocent time and time again.
If I understood this article correctly, the thief did not always sign in/show an ID. Sometimes the facial recognition just tied the man to the stolen ID. The CCTV did help prove his innocence, when it wasn't mysteriously missing, but the facial recognition was in fact a problem.
He should ask for a billion dollars - the article says that Apple’s mistake resulted in his arrest, and even though it was a mistake and he was not convicted, I don’t think that arrest is ever going to be removed. He will forever be someone who was arrested, and it will show up if someone does a background check. The police might pull him over for a minor traffic violation, the computer will pull up that the occupant has been arrested for theft in the past, and suddenly it’s a whole different conversation because the cop is looking for an easy arrest.
If Apple had circulated a photocopy of the possible perpetrator to its stores, to be shown to staff and taped beside the POS, and a member of the staff recognized him and called the police, would that be a billion-dollar mistake too?
I think the facial recognition is a red herring. What matters here is that the perpetrator stole someone's ID and used it during the crime. When the owner of the ID entered the store, he became a suspect and was arrested. Shitty for him but perfectly understandable.
The linked article says nothing about where he was arrested.
(The NY Post article that is linked to in the linked article mentions that he received a summons in the mail and was later arrested, but it doesn't say where he was arrested. I concede that it wasn't in the Apple store, but there is no indication of that in the original article.)
I’m confused. Has the linked article been changed from this Engadget article to the current Bloomberg article, or am I suffering from some sort of brain fart?
EDIT: According to this comment, they have switched the link. It’s a bit misleading to change the link once a substantial conversation has already happened based on the original link.
Yes, mods can edit posts after the fact. Unfortunately there's no edit history on HN so it's difficult to tell what happened unless someone says something.
What's bothering me the most is that he claims facial recognition was involved, while no evidence at all is provided in the body of the article.
Looks like plain old identity theft with the ID card he lost... As long as they recognised his innocence after "humanly comparing" his face at a police station with video footage, what would actually be worth $1bn???
A few years ago a coworker of mine was often contacted by the police because his licence plate matched one used on a stolen car... It was embarrassing, but he didn't sue the car-maker as far as I know...
It's disappointing to see that the 1 billion figure is what really grinds people's gears in here. That's the thing you should be the least concerned about.
It's probably because it's the most objectionable part of this, so it's the main target for nitpicking. The "Apple's responsible for a fuckup of their own product" isn't controversial, and I don't think "fuckups of ML used for law enforcement are a serious thing" is controversial either.
Aye, you would think that people would recognise that the value is directly proportional to the propensity of it being covered in news articles and, thus, starting discussions, such as this.
Yes, that's bad in the US; many important and high-paying jobs ask this question during the interview process, and it could disqualify someone if a roughly equivalent candidate who answers "no" to this question is also in the pool of applicants.
It gets even trickier when applying for jobs related to the government. If nothing else, it adds stress and time to the application process because you have to explain it several times.
Based on the number of articles that get posted on here complaining about suboptimal hiring processes in US tech jobs, I think it's fairly obvious that this guy is not going to have a good time applying to jobs for the rest of his life.
The fact that a criminal record cannot be revised if it contains false allegations and convictions seems to be about as flawed as facial recognition software leading to a false arrest. Seems like a good opportunity to fix both.
It's complicated in the United States; federal law (the FCRA and employment law) and state law are both involved.
Since this teenager is in New York, this arrest record (which will not be publicly available forever, and could be sealed immediately by court order) should not be a factor in their future interviewing in that state.
Employers may not ask about or consider arrests or charges that did not result in conviction, unless they are currently pending, when making hiring decisions. They also may not ask about or consider records that have been sealed or youthful offender adjudications.
Generally speaking if you answer "no" on an employment form and then get denied for having been arrested even though no charges ever stuck and that is specifically the reason for denial (most employers aren't this stupid though) then there is a settlement check in your future.
Well, I blame Apple for this. Why? Because they obviously had video of the real thief, and there is no excuse that they could not have compared the two before notifying the police, let alone that a facial recognition system would not immediately know they aren't visually the same person.
If anything, I want to know: does the store record anything outside its physical interior? What is the retention period for this data? What are the process and safeguards before a situation like this happens? Who makes the final decision? Are your images tagged each time you use your ID for any engagement within the store? Heck, do they even post notices that they do this?
For a company that touts privacy they really blew it and in a spectacular way
Sure, they made mistakes there.
But I also blame the police here. They shouldn't blindly act on accusations just because they came from a big corporation.
> For a company that touts privacy they really blew it and in a spectacular way
Exactly! It's a little sad to see that they blew it like this.
With their focus on privacy they really have something that separates them from Google. I'm curious how they will react to this.
> Because they obviously had video of the real thief and there is no excuse that they could not have compared the results of the two
I've probably missed something in the article but where does it say anything about them having pictures of the wrongly accused that were supplied to the police? The police had an arrest warrant -with the wrong* picture- on it. They didn't have an arrest warrant with -his- picture on it.
* wrong as in it was the thief, not the person the police actually (wrongly) arrested.
> Because they obviously had video of the real thief and there is no excuse that they could not have compared the results of the two before notifying the police
That isn't Apple's job to do at all. It's for the police to do. They had a no-photo ID and video, and they gave BOTH to the police. Apple's job is done. The police failed to follow up properly.
From this account, it's hard to see how Apple is at fault. Someone apparently may have used his lost (non-photo) ID to identify themselves in an Apple store, with the result that Apple associated his name with theft. They reported this to the police. Is that so unreasonable? Seems like it's a valid case for further investigation.
When the police showed up at his home with an "arrest warrant [that] included a photo that didn’t resemble Bah" they should have immediately figured out something was wrong; that perhaps they've got the wrong guy? As such, they probably shouldn't have followed through with the arrest at all, pending further investigation, unless they had reason to believe the individual - despite not matching the warrant - presented an immediate threat. Which sounds unlikely.
So ISTM that the parties at fault here are (a) whoever fraudulently used his lost ID, thereby falsely associating his identity with criminal activity; and (b) the arresting police who did not pay attention to whether the subject being arrested matched the warrant they had in hand.
But while I am still deeply uncomfortable with the widespread deployment of facial recognition, etc., and would like to see it restrained, I don't see that it's really the problem in this case as reported.
> they should have immediately figured out something was wrong
Even if the warrant had looked sufficiently like him to confuse people, it should have been a quick conversation about if he'd lost any ID, what kind, where, and then where he was on certain dates to clarify that he wasn't the thief.
Do US police have to arrest someone to question them?
So the thief walks into the store with the stolen ID. If facial recognition is working correctly, then this identity theft should have been recognized immediately, unless the actual guy has never been in an Apple store.
The thief steals a bunch of things, and the software ties the thief's face to the stolen ID. Without facial recognition, they still would have tied the ID to the theft.
Apple sends the police the name and address on the ID, as well as maybe an image of the thief. The police arrest the man matching the ID without reviewing the footage.
This seems like the police's fault for not following through with a proper investigation. If anything, the footage from Apple saved this teenager. He still has an arrest on record, and as others mentioned that could ruin his life. But he needs to take that up with the police, not Apple.
The only way I see this involving facial recognition is if the thief came in to case the place, used the stolen ID, then came back later without showing the ID and stole a bunch of things. Or if they have software that flags thieves, even when employees don't see it.
Now, all that being said, there are questions to be raised about the surveillance Apple has deployed. Do they notify their customers that they are being recorded? How long do they store this data? Are they analyzing your behavior (frequency of visits, suspicious activity, where you walk in the store, etc.)? Is their facial recognition biased/inaccurate?
Apple prides themselves on being privacy-focused, but they are clearly infringing on their customers' privacy.
My conclusion: The face-recognition functioned completely correctly - it associated multiple acts by the same individual - but Apple incorrectly used a non-photo ID as identification of that individual, and thus incorrectly identified that individual as Bah.
Sequence of events summary:
Thief used Bah's stolen, non-photo ID.
This led to Bah being arraigned for theft.
Apple then identified video of the thief in multiple thefts at other stores (a detective asserted the matching was done by face-recognition).
This led to Bah being arrested.
Police then realized that Bah did not resemble thief, he was released, and various charges have been dropped.
One assumes "to make sure it makes it into the news to try and force/shame Apple into a quick settlement rather than taking it to court where it'll probably get tossed."
There’s an aspect of compensation to it, which is obviously lower than a billion. Then, there are “punitive” damages, i.e., the wish to hurt the company in excess of individual damages to discourage future behavior.
Obviously, actual damages are too low for a company like Apple for a matter such as this to even escalate to the decision level. For me here in Germany, I’d say $10,000 would be around my over/under for being wrongfully arrested. For a black teenager in the US justice system, you’d probably have to double it, but that’s still nothing in the Apple scheme of things.
The US is somewhat rare in piggybacking on individual tort law to also regulate corporate wrongdoing, and people often chafe because the individual gets so much more than what seems appropriate. The alternative is empowering agencies to seek fines that go to the government. The Consumer Financial Protection Bureau is one such attempt, and also an example of what can go wrong with it.
Cameras can be used ethically in public and semi-public areas. While I don't know if we have official/specific guidelines in the US, I'm sure with a little effort we could create some that would allow for their very effective use, and not overreach, violating personal rights or leading to a police state.
In this case, it seems a mix-up occurred, and a little police work easily led to clearing the person's name of the charge. Looks like the system worked (but it was very inconvenient for the accused). The camera is just a single piece of evidence, and not the entire case. So long as courts realize this, it should actually help deter crime AND make prosecution more accurate in catching the correct perpetrator. There is a long history of just finding someone to blame. I hope cameras make catching the correct person easier, to avoid this type of policing failure.
The "mix-up" occurred because there was an ID with no picture on it. So Apple assumed it was him, with no corroborating evidence. They then flagged their AI to look for him, not comparing it to pictures/video of the actual perpetrator, and called the police when he came in.
>...and a little police work easily led to clearing the person's name of the charge
True but there's a couple of problems with this:
1) He can never answer "no" on a questionnaire asking whether or not he's been arrested before. Depending on how strict employers are, this could prevent him from even being considered for employment.
2) There's an actual potential he lost his car, job, place to live, etc. while waiting to be cleared. We don't have a time-frame for this, so it's anyone's guess, but it's a safe bet that his employment, alone, wouldn't have been safe (e.g.: no call, no show).
>I hope cameras make catching the correct person easier, to avoid this type of policing failures.
The policing failure, as you refer to it, was actually a failure on Apple's part and not on the part of the police.
> They then flagged their AI to look for him, not comparing it to pictures/video of the actual perpetrator, and called the police when he came in.
Nope, that's not what happened. The article is poorly written, but read it and it might make sense.
" However, since the ID didn't have a photo, the lawsuit states Apple programmed its stores' face recognition system to associate the real thief's face with Bah's details"
In other words, the only thing Apple used facial ID for was to link the thief's face to thefts where the thief didn't show ID. That allowed them to report the wrong person for multiple thefts (presumably instead of just one). (Yes it's terribly unclear from the article.) But it's fairly clear the wrongly accused was not arrested at the store, because the face recognition was primed for the real perp's face.
>the lawsuit claims Apple programmed its stores' face recognition system to associate the real thief's face with Bah's details. In a statement to Engadget, an Apple spokesperson said the company does not use facial recognition in its stores.
So does anyone know how likely it is that this actually happened?
Facial recognition systems are everywhere; even small shops in Poland (FreshMarket, etc.) have them built into their cash registers.
Is this something that concerns me? Absolutely not, and I can even say I don't give a fuck. Most government institutions already have your photo, signature, and much more from your ID/passport or any other government documents you've obtained. So why should you be afraid of companies that do it too? I know, I know: privacy, ads, tracking and stuff. :)
But is this still a fight for privacy? I see many startups use this rhetoric to acquire clients. But it turns into a terrible joke, like it was with usability 15 years ago, where companies offered services that weren't needed just to collect even more money.
I don't understand what role the facial recognition played in his arrest. The article says the actual thief used his name and other identifying info - it makes no claim about how the facial recognition software was utilized to make the arrest...
> the lawsuit argues that Apple's "use of facial recognition software in its stores to track individuals suspected of theft is the type of Orwellian surveillance that consumers fear, particularly as it can be assumed that the majority of consumers are not aware that their faces are secretly being analyzed."
This, coupled with the $1 billion being asked, makes this seem like a publicity stunt :/
Calling this suit a publicity stunt trivializes it and makes it sound like there's no value beyond getting some publicity. I don't think that's true. This is using the legal system for privacy activism in a positive way. Making corporations understand that there's potentially a very real cost to acting on false positives could be a good thing that actually moves the needle on our right to privacy.
Just imagine if stores knew a false accusation based on nothing more than facial recognition could cost them millions. They'd have to put in more effort than "The computer said you're a criminal". That would be great.
> Apple's false accusation resulted in a wrongful arrest.
But it was only false in hindsight - someone came in, used his ID, and their face was associated with it - that's perfectly sensible and normal behaviour from their systems.
> The arrest warrant included a photo that didn’t resemble Bah
It is not perfectly sensible and normal behavior to demand ID from customers and then set up a secret tracking profile, based on facial recognition and linked to that ID, that is apparently linked across all the stores for your company.
The buried lede is that everyone who has ever walked into any Apple store is being tracked by Apple in all Apple stores, and possibly also anywhere else in public. The false arrest allowed that info to leak to the public.
> everyone who has ever walked into any Apple store is being tracked by Apple in all Apple stores
That hasn't been confirmed yet, has it? I might have missed some extra info but the original story only mentioned they were tracking that ID because of the thefts.
> based on facial recognition
Has that been confirmed yet? I may also have missed that but again the original articles only had that as a claim from the man suing them, nothing concrete (and I believe Apple also denied it?)
Apple can keep all the surveillance footage from all their stores.
Apple can run FaceID on any image they have.
In this instance, Apple must have run FaceID from images from the recordings that show the thefts. It is not necessary that they did this on-site or in real time.
A statement by Apple claims that they do not run facial recognition software in their stores. But the wording was very specific; they did not deny uploading the in-store video to an offsite server to detect faces and identify them. They certainly did it for one person. They could do it for more, limited only by their computing power. And they are a computing hardware company.
Apple is not tasked with arresting people, the police are the ones with the power and the police should be the ones liable for not doing proper due diligence (if they didn’t).
This only makes sense if he can prove that he was “damaged” to the tune of millions of dollars, which I think we agree he likely wasn’t. I’m all for privacy and restrictions on the use of blanket facial recognition, but trying to scare companies with out-of-proportion fines seems like it’d do more harm than good.
But consider how reckless Apple was. Reportedly they had some store video showing thefts by someone with a particular facial feature hash. And I'm guessing that they also linked that hash to a non-photo ID. Which are easily counterfeited.
So say they had stopped there, and called the police when someone with that hash walked into a store. That would be bad enough, given how unreliable facial recognition is.
But they went further. They had this kid arrested, relying on doubly unreliable information. That's what's reckless, and why they should be punished. If for no other reason, to set a precedent.
So in a way, the actual risk that the kid faced is irrelevant. What matters is what might have happened.
You can’t “have” someone arrested. The police chose to arrest someone, and it is their duty to determine if the information given to them constitutes a reason to arrest someone. Unless it was given to them in with the intent to arrest someone under false pretenses, I don’t see how anyone other than police are responsible.
> The arrest warrant included a photo that didn’t resemble Bah, he said in a lawsuit filed Monday.[0]
So sure, you could argue that the police screwed up. But it's also arguable that Apple could have googled his Facebook page, and discovered that it wasn't him.
Yes, that is of course up to the courts as to how much Apple is responsible. But I would hold the police far more responsible than Apple for not doing their job.
I don't agree with your interpretation of the story.
Apple associated the criminal's face with the victim's ID. The police arrested the victim. The victim's face was never even in Apple's system! (If I read the article correctly.)
The exact same thing would happen if Apple hadn't used Face Recognition software at all.
This is the thing I've been struggling with in everyone else's comments. From what the article suggests, the victim never even appeared in the Apple store, but the thief came back in at some point, so the police went to the address that Apple had on file for the thief. Why is this Apple's problem, beyond them associating a face to an ID _that was given to them_?
Are you saying that people trying to bring privacy issues to light will just be considered to be “crying wolf”, and that would hurt legitimate efforts elsewhere? From my experience reading about privacy issues, this already happens when people bring privacy issues to light and attempt to be reasonable.
No. I’m saying that this person seems to care more about getting people to talk about his $1 billion damages figure than focus on the privacy issues involved.
Given the way society has a tendency not to wait until a court proves someone guilty and defaults to a "no smoke without fire" assumption of guilt I'm not sure being falsely arrested is actually all that harmless. If this person ever applies for a job and is asked "Have you ever been arrested?" he may have to say "yes" (unless there's a way to expunge all records of the arrest, including any media coverage). That could have a material impact on his future.
I don’t think they can ask that, but it doesn’t take any mental gymnastics to imagine a false arrest leading to false incarceration if the accused can’t afford a good lawyer. Or even being fired from a low-end job when they miss some work because of the arrest. Could be 10-20k in damages right there between legal fees and missed work
A harmless offence by Apple? Consider that AI facial recognition is already known to have a bias which disproportionally affects people of colour [1]. Then consider that in the US the police also have documented bias against the same group [2]. This particular incident may have had a relatively small effect on this individual, but the potential for harm for this group of people when using this technology is increased by its use. That to my mind makes its use unacceptable for law enforcement, at least until this bias is addressed and corrected.
If police officers are more likely to stop a black person, then yes, black people will be caught doing more crimes than other groups.
That doesn't say that other groups don't commit just as many crimes, it means that they get caught less frequently because they're not black (and therefore not stopped as often).
I expect to see such a foolish argument on t_d, not HN.
Why are they more likely? Because that group commits more crimes? Cops stop people in some neighborhoods more often too, because those places have more crime. It's just facts.
Getting caught less frequently? You mean because of committing fewer crimes? The same data shows up even when the police officers are black.
Ok, I accept that Vox is not the best source of information on this matter. Here is a compilation of studies where the author has tried to include either peer-reviewed studies or reviews of data that tend to speak for themselves and don't require much statistical analysis. He notes that most (but not all) of these studies do factor in variables that address common claims, such as that the criminal-justice system discriminates more by class than by race, or that racial discrepancies in sentencing or incarceration can be explained by the fact that black people commit more crimes. And he's also included a section for studies that do not find bias in various aspects of the criminal-justice system. There are far fewer of these, though he is open to the possibility that he missed some.[1]
People have been killed in SWAT'ing in the past. This isn't a far cry from that.
I wouldn't trust US police to make a peaceful arrest of anyone at this point. It's a potentially lethal encounter depending on how trigger happy these cops are.
The scale of the offense has no correlation to how the cops would respond during an arrest; a slight misunderstanding during the encounter could lead to unintended consequences.
That said, this whole billion-dollar figure is a good way to get PR and force Apple to settle, which they will.
People shouldn't get large payouts because a reasonable action has a remote possibility of triggering unintended consequences, especially when those consequences don't actually occur. Apple gave the police evidence that pointed in the direction of this person. The police thought that evidence was sufficient to arrest him. The evidence turned out to be misleading, but in the vast majority of cases it would not have.
I'll be one of the first people to tell you that all cops are a hair trigger away from entering "asshole cop" mode but the nature of the crime definitively determines how they act. Arresting someone with an outstanding warrant for shoplifting/theft is going to be handled very differently than arresting someone who has a history of being involved in drug trafficking violence.