ACLU to police using robots: Tell us more (techxplore.com)
106 points by dnetesn 3 days ago | 66 comments

The ACLU made this request in August of 2019 [0]; I guess it slipped notice back then.

To be honest, they can be very unnerving to watch in action, and for video game fans (think the Half-Life series) they conjure up not-so-good images. The Terminator series is probably what the public would most freely associate with them, especially video of one of these dogs running across terrain.

By default I have little issue with them; however, with the recent focus on no-knock raids and the like, I am loath to give the police any tool more in line with military use. They already abuse the transfer of purely military vehicles to their forces, and their SWAT teams long ago lost any relationship to traditional policing.

[0] https://data.aclum.org/wp-content/uploads/2019/11/ACLU-Publi...

We shouldn't be worried as long as police don't lose their connection with the people and handle cases properly according to the law, but now it seems that they're being taught to use force for whatever reason in most situations. It's not a phenomenon that happens only in totalitarian or broken states; it now reaches all countries.

Or Bradbury's Fahrenheit 451. Just stick a little hypodermic needle with a lethal dose of morphine onto its manipulator, and Spot slips easily into the role of the mechanical hound.


My reading of most HN threads has been in favor of more automation, objective verifiability, and technological use in law enforcement, e.g. body cams.

Also strong condemnation of shoot-first-ask-later habits, which are inarguably more appealing when a person is at risk instead of an automaton.

It would seem that those wanting police reform would be most interested in removing as much of the problematic human element as possible.

Most of the successful police reforms (that I'm aware of here in California) have revolved around community policing and getting back to that human element. The rest have been oversight based, like the police body cameras.

Policing without infringing on rights is a very hard problem and I don't think many serious reformers see any one solution or ideology as sufficient on its own.

It's important to not go down this slippery slope.

Right now robot dogs might only be used in extreme cases, but if we continue to normalise this, it's only a matter of time before they're used in very morally questionable ways, such as arming them and using them to control (rightful, non-violent) protests, or giving them autonomous control and letting them patrol cities and whatnot.

This should be avoided. At all costs.

Robots are used everywhere already. Everywhere.

This is getting press because it's kind of edgy-looking and involves the police and the ACLU.

It's not a question of if, but when and how. Implementation. The most interesting question to me is how regulation will unfold as robots become ubiquitous in public life.

At least in the United States, I suspect that it will take several rather grotesque disasters before regulatory bodies get seriously involved and get around to passing the 3 Laws.

And you know how the 3 laws turned out in Asimov's world...

Seriously though, I don't think Asimov was being naive with his choice of laws and then showing how they were subverted. I'm sure he was aware of things like Gödel's proofs and the difficulties involved with fuzzy logic and AI.

Asimov kept his laws simple so they would work in a story, but was likely well-aware that real laws could be made more nuanced and that they inevitably would still fail in the ways he described.

Implementation is going to be very, very hard to get right, if at all possible.

> And you know how the 3 laws turned out in Asimov's world...

Pretty well? If you're referring to the original book (and not the excellent Will Smith movie that deviated from the source material significantly but was extremely entertaining in its own right), the robots correctly surmised that humans lacked the wetware capacity for global-scale planning and took on that burden. The world at the end of Asimov's story is many things, but it isn't dying from an utterly avoidable climate change disaster because of a transcontinental tragedy of the commons.

(It is, of course, a fiction. But if we can't take away from the fiction "We should trust robots with global resource planning" without some critical thinking, we shouldn't take away "robots can never be trusted and will always betray humanity" without some critical thinking either).

I would assume they're referring to the original short stories published in I, Robot, which has numerous robots-gone-awry stories, but not the taking-over-planning-for-all-of-human-civilization plot that you are referring to.

That plot was from books he wrote 40 years later to unify the Foundation and Robots series. Many fans consider it a rage-inducing retcon.

I'm referring specifically to "The Evitable Conflict" (https://en.m.wikipedia.org/wiki/The_Evitable_Conflict), which I believe appeared in the original 1950 collection. It specifically includes the plot of the world coordinating machines causing some harm to come to some individuals to optimize the survival of the machines and the outcome for humanity as a whole.

I'll have to look up the other stories you referenced though; I'm unfamiliar with them and they sound fascinating.

>It is, of course, a fiction. But if we can't take away from the fiction "We should trust robots with global resource planning" without some critical thinking, we shouldn't take away "robots can never be trusted and will always betray humanity" without some critical thinking either.

Ironically, it isn't so much leaving the robots to their own devices that is the issue. It's what we'll talk ourselves into doing with them that concerns me.

Then again, I'm a Protomen fan, so I may be prejudiced in that regard.


Realistically the three laws are silly here.

Better laws in this context would be something like “police robots cannot be armed”. Then maybe some other restrictions about circumstances under which they can be used — eg do we want a police robot sitting on every street corner monitoring? (Answer: maybe?)

> Robots are used everywhere already. Everywhere.

As far as I know, robots are used in manufacturing, surgery, home cleaning gimmicks, and war.

Use of robots in war is already problematic, and seems to have greatly emboldened some countries to carry out campaigns of terror and assassination on other countries' territories. We really don't want the police to start on the same path.

I wouldn't call washing machines/dryers/dishwashers "home cleaning gimmicks".

I was thinking more about automated vacuum cleaners.

You can take a very broad view of the term 'robot', where it is more or less equivalent to 'machine', and then you would be right - they are literally everywhere.

But given the reaction to this article, I think that it's clear most people have a more narrow definition of 'robot', one which draws a line between a machine which splashes water and detergent inside itself for a pre-determined amount of time on one hand, and semi-autonomous walking machines on the other hand.

Fair enough, but I wouldn't call washing machines/dryers/dishwashers robots.

> Robots are used everywhere already. Everywhere.

Robots are used in manufacturing, in closed relatively controlled environments.

They don't yet freely roam the streets among human population and that is the issue here.

> Robots are used everywhere already. Everywhere.

That's utterly irrelevant - the parent poster was talking specifically about robots used against humans. And the reason it's dangerous, is that you no longer need to convince a human to pull the trigger. It concentrates unchecked power into even fewer hands.

Robots are used against humans everywhere, already.

Remember Predator drones? Been in the skies since the early 2000s. They require human authorization, and it is very unlikely that any AI will be given trigger control anytime soon. Nobody would trust their life to an algorithm, and if you rely on the algorithm to determine who's a friendly, your users are not going to like you.

By 'everywhere', you mean war zones. I don't have Predator drones flying over my head, and I want to keep it that way.

> Civilian applications for drones have included border enforcement and scientific studies, and to monitor wind direction and other characteristics of large forest fires (such as the drone that was used by the California Air National Guard in the August 2013 Rim Fire).

> On 18 May 2006, the Federal Aviation Administration (FAA) issued a certificate of authorization which will allow the M/RQ-1 and M/RQ-9 aircraft to be used within U.S. civilian airspace to search for survivors of disasters.


The seeds are sown.

I also find it very Ameri-centric to believe that you are safe just because you are not in a war zone. Consider the well-meaning families that have to deal with these things, knowing that at any point in time they could be blown to oblivion with no prior warning. The majority of them are not terrorists, yet they all must deal with constant paranoia.

I believe there is a critical misunderstanding going on here: this isn't an argument against a "pure" robot, aka a mechanical contraption that can move and manipulate things; it's an argument against artificial intelligence, which people commonly use "robot" as a synonym for. Police robot tech is still manually controlled, because contrary to the hype, the cutting edge of robotics AI is still in the realm of not falling over when some dude from Boston Dynamics hits it with a stick.

Before these robot dogs were available, your two main options for reconnaissance of a volatile situation were either sending someone in who will shoot to kill if shot at, or a robot with (comparatively) limited mobility.

Now there's an option to send a robot with significantly better mobility that won't go shooting at people who shoot at it.

I think if this becomes more prevalent there may be fewer situations where the police move directly to dynamic entry (a SWAT team going in). That may lead to fewer deaths in the long run.

How long before the robot dog has a gun mounted on it to protect the investment of having a robot dog?

That's unlikely to happen. Most shootings are excused by claiming the officer's life was in danger based on the information he had.

It's going to be hard for them to go before a judge and say that they needed to shoot to protect the robot. The advantage of robots is that they are just a line item on a budget.

First, the "snake-head on a goose-neck" manipulator that usually mounts on Spot will have a pair of electroshock probes added, as a non-lethal means of incapacitating a suspect at touch range.

Then they will add a 20ga shotgun that shoots a single beanbag or netting cartridge.

Then, after that, they will use the lethal firearms.

Or we could not. Slippery slope arguments assume people make the worst choices, and there's no reason that's necessary.

Declare automated surveillance drones categorically unacceptable because of a hypothetical risk they could have weapons mounted on them and you get more people and animals killed unnecessarily.

I am not making that particular argument. I'd prefer to mount the .38 special revolver on the robot now, make sure the trigger-pull servo is 100% human-controlled, and train the robot operator on the rules of engagement and remote use of lethal force right from the start, rather than just wait until the inevitable happens to catch up.

A cop-controlled robot is an extension of the cop. It's not the robot's fault if its technology is misused, but the cop's. The problem underlying all this is that cops have too much unchecked power already. At least if they're shooting people remotely with armed robots, we would know the camera wasn't malfunctioning, and that the video should be available for the civil lawsuit.

Rather than freaking out about the possibility that robots might be armed, just assume that they are already weapons, and craft the human policy around the technology. I.e. when a robot is armed, the firing mechanism for the weapon must be under direct human control. If you claim your robot will never be armed or used to intimidate, you can conveniently skip out on writing that rule, so when it happens someone gets a get-out-of-responsibility-free card, with "we weren't trained for this" printed on it.

But if neither the operator nor anyone else is in immediate danger of being harmed, what would ever be the rationale for using the weapon?

What is the difference between police calling in a SWAT sniper or an armed robot dog? In both cases the officer has the ability to remotely order the shooting of a target.

Yet police officers do not call in snipers into every occasion. What is important is not the means, but the constraints imposed on the means to incentivize proper decision making.

Snipers are not machines.

Sure, but what is the difference between a human or a machine agreeing to the same order?

Of course the human may have free will, but the majority of people are trained to follow orders from figures of authority. It is improbable that a police sniper would refuse a hit based on some ethical dilemma, especially when all the information they have on a situation is fed to them by their police colleagues.

Heroes are rare and their presence should not be taken for granted. Counting on a human to do the right thing is almost never the right choice if you can help it.

In an individual instance, sure, but over time people can and do defect from corrupt forces.

>This should be avoided. At all costs.

Why? Using robots just sounds like an implementation detail. They should be able to do anything you're fine with human police doing.

They already have unmanned drones with lethal weapons in war.

The deployment of machines in urban cities is only a matter of time; with the rising stakes and the scare of massive social movements, "they" will justify the usage to suppress the "crime" of dissidents.

Whether the robots are manned, autonomous, or self-aware, one thing is inevitable: the harms/deaths eventually caused by "them".

What do you see as the alternative?

It's weird to proudly proclaim that you're committing a logical fallacy as your first sentence.

Robots will be performing arrests in 20 years.

Wasn't an autonomous robot, but I believe that the Dallas PD were the first police force to kill a suspect with a robot, back in 2016. They had it place a bomb on the opposite side of a brick wall from a sniper that had killed several police officers.


That strikes me as a situation where the use of a robot to kill was appropriate.[1]

However, I'd be interested in hearing the perspective of people who disagree. Maybe I'm overlooking something.

[1] I'm assuming that the police had already exhausted options to resolve the situation non-violently and that a non-lethal robot wasn't available or viable (do they make TASER robots?).

While I don’t know all the circumstances of the case, I do know police in other countries manage to capture people like this without resorting to using IEDs.

Police in the states already get away with all kinds of corruption, abuse of power, violence/killings, so I think we should do as much as possible to avoid normalizing these kinds of tactics. The only comparable case that comes to mind is when police bombed an apartment block of radicals in Philadelphia:


I was going to bring that up. Totally agree - further normalizing the use of violence by the police should be avoided.

While not a sniper, this is the police in the UK apprehending a suspect with a machete. I've never heard of similar tactics being used in the US. And conversations with police have led me to believe they would actively avoid such tactics and instead resort to violence. https://www.youtube.com/watch?v=9mzPj_IaMzY

I don’t think US police would address the situation much differently if they had the same advantage in numbers.

You can actually watch dashcam/bodycam footage of police shootings. In situations where the suspect is armed with a bladed weapon, there's usually an attempt to call for backup and subdue them with less-lethal means. It's just that there are more often 1-2 officers instead of a dozen on scene.

Also note that, in the clip you shared, at no point does the machete-wielding suspect break into a full-on sprint towards any of the officers. In the police shootings I’ve seen, that moment is the one when police open fire, and rightly so: human reaction time is such that a knife equals or beats a gun within 21 feet or so. This is why you see the British cops backpedaling whenever machete man turns to face them—American cops have done the same in the footage I’ve seen. Also consider what would have happened if machete man here actually made an honest attempt to start killing officers. He would have been successful.

And it’s not like British police don’t use lethal force when it’s called for. Just look at what happened to the terrorist on London Bridge.

While minimizing the use of violence is an indisputably good thing, there’s a big gap between the UK machete case and the US sniper case. There are certainly lessons the US police could learn from the machete incident, but I don’t see how they apply to this sniper incident.

The sniper was a trained soldier that was an imminent threat to the police officers—he had already killed several and given his training, motive, and the premeditation of the crime, it was entirely likely that he would have killed additional officers who attempted to arrest him (by way of booby traps or similar).

After the Dallas shooter had already killed several police, and was holding in a position where approaching him would be extremely risky, I don't see why they should bother considering his life.

This has nothing to do with routine police use of force, where you could argue that some police should sometimes take some more risks in balance with the public good.

It probably isn't very common where police need to capture a holed up military-trained sniper who has already taken out 5 police officers. Do you have any examples of other countries capturing people like that without any more casualties?

I can't argue that killing him was a bad idea. He had the high ground and had killed and wounded several police officers. A pound of C-4 in a public place is pretty risky, though. He was in a brick alcove on the second floor of El Centro Community College. They detonated the C-4 on the other side of the brick wall.

Whilst any explosive in public is potentially unsafe, C-4 is probably one of the better choices.

+ It is really difficult to detonate accidentally. (Even when it burns it may not detonate.)

+ It can be shaped, so that the charge only moves in the intended direction. (So the first floor isn't at immediate risk.)

It was not autonomous.

Yeah that's what I led my comment with...

My bad - I read that too fast. Apologies.

Which was a sound choice under the circumstances.

Did the sniper deserve to die? Wrong question. In an ideal world, a man in a bulletproof lycra bodysuit would have shown up and apprehended him bloodlessly. We don't live in that world. We live in a world where there was a tradeoff between ending only the sniper's life and endangering more of the lives of the cops. Whichever choice is made, someone gets the short end of that tradeoff[1].

So the right questions are

1) Who should have gotten the short end of the tradeoff in those particular circumstances?

2) How do we create an organization where people hold the values that lead them to make our preferred tradeoff?

[1] https://slatestarcodex.com/2018/10/24/nominating-oneself-for...

Unfortunate title - I was thinking the ACLU was going to deploy a bunch of robots

Yeah, it took me a while to not read "police" as a verb.

Wow. I definitely read it as “ACLU would use robots to police” and I thought it was just a clumsier way of saying “ACLU to police the use of robots”. I didn’t actually realize police was a noun until your comment.

I'm interested in how robotics can be used to help in typically dangerous situations like domestic violence calls and suicide by cop calls. If a robot can help save a life then heck yeah! If they are just used as portable spy platforms I'm a little less enthusiastic...

The ACLU should build their own robot that chases around the other robots and keeps an eye on them

Cf https://en.m.wikipedia.org/wiki/Metalhead_(Black_Mirror).

A fictional dramatisation of a post-apocalyptic world in which robot dogs hunt humans (IIRC).

Let's not over-dramatize what these are. They're remote-controlled vehicles with a bit of pathfinding and modular add-ons. The fact they walk rather than roll is a triumph of software engineering, but it doesn't make them into droids off Star Wars.

Police have been using remote controlled vehicles forever.

We learned nothing from Robocop... sorry, I know this is serious.

We seem to have learned from Robocop that turning critically injured humans into cyborg enforcers is awesome and any problems in the system will self-correct via clever word lawyering and the power of the human will. Unless I saw a different Robocop movie. ;)

Well I thought the message was to only turn really good cops into robots, someone strong enough to deal with the existential hell you're about to put them through.

All of this has happened before, and it will all happen again.

> "...the Mass State police is our only public safety-focused relationship to date."

I totally read that first as "mass police state" probably because that's what this is. A robotic mass police state.
