> There’s nothing in the documentation to indicate what criteria the HPPD would use, or is currently using, to add a face, license plate, or smartphone to a “blacklist,”
Regulation can't come soon enough for this industry.
About 20 years ago one of the youngest cops ever to be promoted to detective in the City of Chicago told me this after a long night of drinking...
"If I see a car with more than two black people, I just followed it. Sooner or later they'll make a mistake and I can pull them over. Very often one of them will have an outstanding warrant.". When my face turned to horror he asked me the jaw-dropping question "you are Italian, right ?"
Honestly, if ML is statistically driven, how long before it determines that African Americans and Latinos are overrepresented in the criminal database and should have a higher blacklist score as a rule, and the whole cycle repeats? Meanwhile, innocent-looking white kids can keep selling meth in the park.
This is sinister in a subtle way: profiling in this capacity is not only wrong, it is self-perpetuating. If police are trained to do this, then more African Americans are arrested even without warrants, and then end up in the system / in legal trouble, which means more are likely to have warrants. In turn, this means that future police trained to do this are likely to find success doing it by virtue of past police doing it.
Thing is, you don't even need overtly racist actors for this to happen; a completely "neutral" neural net would happily take the same stroll down that particular primrose path unless you tightly control for self-reinforcing effects like this.
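To make that concrete, here's a toy simulation of the loop (all numbers are hypothetical, and the patrol policy is a deliberate caricature of "go where the crime is"): two groups offend at exactly the same rate, and a small initial skew in the arrest database decides everything.

```python
import random

TRUE_OFFENSE_RATE = 0.05            # identical for both groups
arrests = {"A": 60, "B": 40}        # slightly skewed historical database
PATROLS_PER_YEAR = 1000

for year in range(20):
    # "Neutral" policy: send the patrols wherever the database says
    # the arrests are concentrated.
    target = max(arrests, key=arrests.get)
    for _ in range(PATROLS_PER_YEAR):
        # You can only make arrests where you are actually looking.
        if random.random() < TRUE_OFFENSE_RATE:
            arrests[target] += 1

print(arrests)
# Typical output: {'A': 1057, 'B': 40}. Group A now "accounts for"
# ~96% of all arrests, even though both groups offend identically.
```

No racist intent appears anywhere in that loop; the skewed database and the facially neutral allocation rule do all the work.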
Evolutionary systems are very good at perverting any and all definable fitness functions - it's a Red Queen's race. At best, if you want systems to behave, you either have to supervise them all the time or build a supervising system, which itself must be supervised, just less often. If we want human morals and ethics to be part of our systems, humans will forever need to remain in the loop.
It’s fascinating to see that we’re _already_ at a point where the AI alignment problem («what morality do we want our automated systems to have») is becoming a practical reality, long before AGI or self-improving AI appears to be imminent.
I think this is a clear, contemporary demonstration that philosophizing about the use of AI (even today) has practical benefits beyond the practice of philosophy as an inherently worthwhile species-wide intellectual pursuit.
Bureaucratic procedure (of governments, religious organizations, corporations, armies, NGOs, ...) always leads to bad outcomes in edge cases if individual administrators are not given enough autonomy to make exceptions and enough oversight/accountability to prevent abuse.
I think it’s also fascinating how this is an age-old philosophical problem, as in the stories of the malicious genie. People have known for a long time that the rigid, precise specification of morality or desirability isn’t easy.
That's exactly what Amazon tried to do here, but because the AI based its decisions on Amazon's previous hires (that is, almost no women), blinding it to gender didn't help: it simply inferred gender from a dozen other proxies, such as school, courses taken, etc.
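A synthetic sketch of that proxy effect (made-up data and feature names, not Amazon's actual pipeline): drop the gender column entirely and one correlated feature still carries the bias straight through.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)   # 0 = male, 1 = female (never shown to model)

# A hypothetical proxy feature, say a women's-college name on the
# resume: strongly correlated with gender but not identical to it.
proxy = ((gender == 1) & (rng.random(n) < 0.8)).astype(float)

# Labels reflect the historically biased hires: almost no women.
hired = ((gender == 0) & (rng.random(n) < 0.5)).astype(int)

# The model is "blinded": it sees only the proxy, never gender.
model = LogisticRegression().fit(proxy.reshape(-1, 1), hired)

print(model.predict_proba([[0.0], [1.0]])[:, 1])
# Prints something like [0.42, ~0]: candidates with the proxy feature
# get a near-zero score. The gender bias survived the blinding intact.
```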
That’s the problem with ML and statistics. With the wrong input they quickly turn into something that constantly reinforces the status quo. It’s a dangerous path if not handled carefully.
You can see this even in advertising. All the ads I am shown are about stuff I have already bought or was once interested in. They can't give me new ideas or adapt when my tastes change.
Yep, a related question you might ask is: would machine learning end slavery, or perpetuate and strengthen it, assuming slavery existed today as an accepted mass societal institution?
Consider it would likely be wealthy slaveholders that would aggressively deploy it first, being as they hold most of the capital, and have the most to gain from it.
I think people should consider the answer to that question very seriously.
From your story it seems like human guards are profiling African Americans and Latinos now. At least with a bot you'd be able to explicitly prevent it from doing so; it's probably a lot harder to prevent a human from acting on their biases.
Machine learning - using statistical associations between features in a large population to make predictions about individuals - is profiling. There is no such thing as removing profiling from machine learning. The profiling is the only thing there.
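A minimal sketch of what that means in practice, with hypothetical numbers: a predictive model over one demographic-correlated feature reduces to a lookup table of group statistics.

```python
import numpy as np

# Hypothetical training data: one demographic-correlated feature x
# (say, a neighborhood code) and an outcome y (a past-arrest flag).
x = np.array([0, 0, 0, 1, 1, 1, 1, 1])
y = np.array([0, 0, 1, 1, 1, 0, 1, 1])

# "Training" reduces to computing the base rate for each group...
rates = {int(g): float(y[x == g].mean()) for g in np.unique(x)}
print(rates)  # {0: 0.333..., 1: 0.8}

# ...and "prediction" for an individual is just their group's rate.
# Judging a person by the statistics of the bucket they fall into is
# profiling by definition; the model contains nothing else.
def predict(group):
    return rates[group]

print(predict(1))  # 0.8, regardless of anything about the individual
```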
I am not suggesting he is correct, I am simply pointing out that if your job is to arrest people with outstanding warrants, how would you go about it? (again, if you were a robot)
> if your job is to arrest people with outstanding warrants, how would you go about it?
How about: go to their place of residence and knock on their door?
Or: go to their place of work and knock on the door?
After those have failed, try getting a wiretap warrant for their telecom devices. Only once you've got that wiretap warrant should you locate them and make the arrest.
You know... the stuff that law enforcement is _supposed_ to be doing...
Following a vehicle whose license plate, or whose owner, is associated with an outstanding warrant makes sense. Following a random vehicle whose occupants have not committed any crime is wrong. Following a vehicle because it has black people in it is illegal, unethical, and racist. Following a vehicle because an occupant shows up in a face-search database should be morally reprehensible for the very same reasons.
Robots will do all those things you suggest simultaneously, and watch every street with cameras, scan faces at every store, pull over cars remotely, turn off the power in your house, get you fired from your job, etc... Is this better than racist cops?
By being outraged at one cop's admission of a shortcut to finding people who are due in court, we are bringing the abomination of robots down on our heads.
My point was that people aren't looking for a real solution, they are just outraged irrationally about current events and not seeing how the masses are being steered towards a worse result than they currently have.
No, it's still wrong, immoral, to kill that innocent person to save the crowd of people.
You may, however, still choose to do it anyway, knowing full well that it's immoral.
If killing one innocent person will somehow magically save one million, that too is immoral. And I'd kill that one person to save the million regardless, accepting that it is an immoral act. It's a worthwhile thing to live with, to save so many. Obviously this is in the land of outlandish hypotheticals. Nonetheless, the morality is just the same whether the ratio is 1 to 10, 1 to 100, or 1 to 1,000,000: intentionally killing - murdering - an innocent person is immoral in all circumstances.
These are the documented instances, but it can be difficult to determine when such systems are being used in the first place:
Muckrock and Rutgers have been asking governing bodies (via FOIA) whether they're using automated decision-making systems and, if so, how. Many reject their request. [^2]
The practice of "parallel construction" could further conceal their use.
"What would a robot do?" should not be our ethical solution-finder, as evidenced by Google's customer service, Amazon's product recommendations, and Boeing's equilibrium.
I am not even remotely suggesting we use robots as any sort of guide post for ethics.
I agree that these things end up bad, but that is my point.
If it's a robot we _should_ be equally outraged, but it's actually not possible unless the robot is _designed_ to be racist.
This is the real concern: humans will be replaced by robots because outrage simply has less impact on them. And so, because we couldn't stomach humans being racist (or whatever the answer is here?), we will have to live with robots doing whatever they are designed to do, without any recourse.
The unbalanced outrage will lead to an army of robots running our legal system.
Do not give a Chicago cop the benefit of the doubt.
Using statistics, a robot would say that the entire CPD is racist. And then it would be taken to Homan Square and beaten until it confessed, while the union rep held a press conference explaining both the robot's extensive criminal history, and the officers' understandable caution in apprehending and controlling such a dangerous individual.
Guess what: if every cop is going around doing mental statistics about who is 'more likely' to make a mistake, they'll introduce bias because they are biasing their sampling with their pre-conceived notions. If you spend your whole day chasing black people and white people get no oversight, then of course you'll be introducing bias.
So whether this cop was being overtly racist or not, he was perpetuating an inherently racist system.
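The arithmetic of that sampling bias fits in a few lines (hypothetical numbers):

```python
# Outstanding-warrant rates are IDENTICAL in both groups here; the
# only difference is how much scrutiny each group gets.
WARRANT_RATE = 0.02

stops = {"black drivers": 5000, "white drivers": 500}  # 10x the scrutiny

hits = {group: n * WARRANT_RATE for group, n in stops.items()}
print(hits)  # {'black drivers': 100.0, 'white drivers': 10.0}

# The resulting records now "show" ten times as many warrants among
# black drivers: an artifact of where the sampling effort went, not
# of any difference in the underlying population.
```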
I think we are debating two different things. I don't support his behavior. But if a robot did the same thing, it wouldn't be called racist or appalling; it seems it's only considered racist when a human does it.
Recording faces and license plates and reporting on the data found can't be racist; it is purely neutral. Which is why I think the robot in the story has this job.
I suppose we could blame the programmer or admin if it's designed or configured to go after a particular race. But even a robot can only arrest so many people in a day, and if you optimize an algorithm based on AI (also a robot, running on data), would it ultimately look like this officer's choices?
I am comfortable saying the algorithm could be made neutral, but I expect it would be extremely difficult to feed it neutral data. Most of what we have now is already heavily skewed because of human involvement. Perhaps it would have to start from scratch, from a place of zero knowledge, with absolutely no input allowed from humans.
But even then, I would not accept following two random people around based on skin color and waiting for them to make a trivial mistake that justifies deeper investigation, regardless of what any data might say.
>I would not accept following two random people around based on skin color and waiting for them to make a trivial mistake that justifies deeper investigation
And I agree with you. But guess what a robot would do? It would follow everyone, all the time, everywhere. By being overly outraged about a small minority of racist cops (I really don't believe it's the majority) we will get something worse, something we will never ever be able to protest our way out of.
Sometimes, I think the answer to a problem is to accept that there will always be some problems. If we keep trying to make a perfect system, we end up at a wall, and then irrationality pushes us into a bad place, worse than what we had before.
I wonder what the problem was. The article states that the robot first ran over the child. Are the sensors only scanning too high up? How does it avoid low obstacles, then?
At the peril of sounding heartless, I'd say that it is quite probable that the kid ran into the robot because they were not watching where they were going; it has happened to me several times that a kid ran into me from behind.
I'm waiting for someone to stick a speaker on one that just repeats the "EXTERMINATE! EXTERMINAAAATE!" line as the robot bumbles around whatever corporate campus it's deployed on.
It might get you banned from that particular retirement community, but it's hard to imagine being prosecuted for hilariously taping a speaker to what's basically a really buff Roomba.
There's one of those Knightscope K5 units at The Beacon, in San Francisco, between 4th, 5th, King, and Townsend. It's very slow-moving in its normal mode. You can walk up to it. It's probably there to keep homeless people from camping in their plaza.
Or maybe to discourage the undesirable element of drunk white guys from the ballpark.[1]
Harder and harder to write any sort of fiction where the protagonist does anything slightly illegal, not to mention extremely illegal, and is not immediately apprehended.
Very few people violating the law are immediately apprehended for it. It would be very hard to write realistic contemporary fiction where that happened consistently.
Just put a bag or sack over the top of it. All visual, radar, laser and ultrasound sensors will become pretty useless when covered, but there won't be any physical damage done to police property.
I give it 3 months before someone with a mask on shows up and shoots it 8 times in the "head". Same result with a drone. That's why the ones in the movie RoboCop worked: they had guns and a mind of their own (usually a brain from a convict, too). This is more like a Roomba with a camera.
I don't think anyone will steal it for fear of GPS tracking.
I don't expect it will be destroyed by a cunning criminal with an agenda. It's just going to get beaten up by some kids goofing around, or some drunk dude walking home from the pub, or someone who just wants to smash things. They'll quite possibly get caught.
Pretty much. It'd be like walking into a police station and attempting a crime.
By the time they do something everything about them is known. It’s self defeating.
Debatable whether the concept is here to stay. But Knightscope is almost certainly not going to be around for long. Take a look at their financials and you'll see they're very much lacking both a revenue stream and a reliable fundraising strategy.
The referenced Twitter link shows a news broadcast of them describing it. He starts to laugh near the end of what he's saying, almost as if he's about to add "this can't end well." And then she blurts out "I kinda like it," cutting him off.
I don't know why (maybe it's the shape or something...) but this thing elicits a strong desire within me to run it over with my pickup, then incinerate the remaining pieces with a Boring Company flamethrower.
It weighs around 400 lbs; pushing it over is not something most would be capable of. Were they successful, it has omni-directional vision, and undoubtedly that footage is sent to the cloud almost immediately, so this seems like a bad strategy.
I had a great day. It was not I who chose to take the conversation in a scatological direction; I just encouraged the person who did so to stick with their stated area of expertise.