To be honest, they can be very unnerving to watch in action, and for those who are video game fans (think the Half-Life series), they conjure up not-so-good images. The Terminator series is likely the one the public would most freely associate with them, especially after video of one of these dogs running across terrain.
By default I have little issue with them, but with the recent focus on no-knock raids and the like, I am loath to give the police any tool that is more in line with military use. They already abuse the transfer of purely military vehicles to their forces, and their SWAT teams long ago lost any relationship to traditional policing.
My reading of most HN threads has been in favor of more automation, objective verifiability, and technological use in law enforcement, e.g. body cams.
Also strong condemnation of shoot-first-ask-later habits, which are inarguably more tempting when a person is at risk instead of an automaton.
It would seem that those wanting police reform would be most interested in removing as much of the problematic human element as possible.
Policing without infringing on rights is a very hard problem and I don't think many serious reformers see any one solution or ideology as sufficient on its own.
Right now robot dogs might only be used in extreme cases, but if we continue to normalise this, it's only a matter of time before they're used in very morally questionable ways, such as arming them and using them to control (rightful, non-violent) protests, or giving them autonomous control and letting them patrol cities and whatnot.
This should be avoided. At all costs.
This is getting press because it's kind of edgy-looking, involves police, and the ACLU.
It's not a question of if, but when and how. Implementation. The most interesting question to me is how regulation will unfold as robots become ubiquitous in public life.
At least in the United States, I suspect that it will take several rather grotesque disasters before regulatory bodies get seriously involved and get around to passing the 3 Laws.
Seriously though, I don't think Asimov was being naive with his choice of laws and the way he then showed them being subverted. I'm sure he was aware of things like Gödel's proofs and the difficulties involved with fuzzy logic and AI.
Asimov kept his laws simple so they would work in a story, but was likely well-aware that real laws could be made more nuanced and that they inevitably would still fail in the ways he described.
Implementation is going to be very, very hard to get right, if at all possible.
Pretty well? If you're referring to the original book (and not the excellent Will Smith movie that deviated from the source material significantly but was extremely entertaining in its own right), the robots correctly surmised that humans lacked the wetware capacity for global-scale planning and took on that burden. The world at the end of Asimov's story is many things, but it isn't dying from an utterly avoidable climate change disaster because of a transcontinental tragedy of the commons.
(It is, of course, a fiction. But if we can't take away from the fiction "We should trust robots with global resource planning" without some critical thinking, we shouldn't take away "robots can never be trusted and will always betray humanity" without some critical thinking either).
That plot was from books he wrote 40 years later to unify the Foundation and Robots series. Many fans consider it a rage-inducing retcon.
I'll have to look up the other stories you referenced though; I'm unfamiliar with them and they sound fascinating.
Ironically, it isn't so much leaving the robots to their own devices that is the issue. It's what we'll talk ourselves into doing with them that concerns me.
Then again, I'm a Protomen fan, so I may be prejudiced in that regard.
Better laws in this context would be something like “police robots cannot be armed”. Then maybe some other restrictions about circumstances under which they can be used — e.g., do we want a police robot sitting on every street corner monitoring? (Answer: maybe?)
As far as I know, robots are used in manufacturing, surgery, home cleaning gimmicks, and war.
Use of robots in war is already problematic, and seems to have greatly emboldened some countries to carry out campaigns of terror and assassination on other countries' territories. We really don't want the police to start on the same path.
You can take a very broad view of the term 'robot', where it is more or less equivalent to 'machine', and then you would be right - they are literally everywhere.
But given the reaction to this article, I think that it's clear most people have a more narrow definition of 'robot', one which draws a line between a machine which splashes water and detergent inside itself for a pre-determined amount of time on one hand, and semi-autonomous walking machines on the other hand.
Robots are used in manufacturing, in closed relatively controlled environments.
They don't yet freely roam the streets among the human population, and that is the issue here.
That's utterly irrelevant - the parent poster was talking specifically about robots used against humans. And the reason it's dangerous, is that you no longer need to convince a human to pull the trigger. It concentrates unchecked power into even fewer hands.
Remember Predator drones? Been in the skies since the early 2000s. They require human authorization, and it is very unlikely that any AI will be given trigger control anytime soon. Nobody would trust their life to an algorithm, and if you rely on the algorithm to determine who's a friendly, your users are not going to like you.
> On 18 May 2006, the Federal Aviation Administration (FAA) issued a certificate of authorization which will allow the M/RQ-1 and M/RQ-9 aircraft to be used within U.S. civilian airspace to search for survivors of disasters.
The seeds are sown.
I also find it very Ameri-centric to believe that you are safe just because you are not in a war zone. Consider the well-meaning families that have to deal with these things, knowing that at any point in time they could be blown to oblivion with no prior warning. The majority of them are not terrorists, yet they all must deal with constant paranoia.
Now there's the option to send a robot with significantly better mobility that won't go shooting at people who shoot at it.
I think if this becomes more prevalent there may be fewer situations where the police move directly to dynamic entry (a SWAT team going in). That may lead to fewer deaths in the long run.
It's going to be hard for them to go before a judge and say that they needed to shoot to protect the robot. The advantage of robots is that they are just a line item on a budget.
Then they will add a 20-gauge shotgun that shoots a single beanbag or netting cartridge.
Then, after that, they will use the lethal firearms.
Declare automated surveillance drones categorically unacceptable because of a hypothetical risk they could have weapons mounted on them and you get more people and animals killed unnecessarily.
A cop-controlled robot is an extension of the cop. It's not the robot's fault if its technology is misused, but the cop's. The problem underlying all this is that cops have too much unchecked power already. At least if they're shooting people remotely with armed robots, we would know the camera wasn't malfunctioning, and that the video should be available for the civil lawsuit.
Rather than freaking out about the possibility that robots might be armed, just assume that they are already weapons, and craft the human policy around the technology - e.g., when a robot is armed, the firing mechanism for the weapon must be under direct human control. If you claim your robot will never be armed or used to intimidate, you can conveniently skip writing that rule, so when it happens someone gets a free get-out-of-responsibility card with "we weren't trained for this" printed on it.
Yet police officers do not call in snipers on every occasion. What is important is not the means, but the constraints imposed on the means to incentivize proper decision making.
Of course the human may have free will, but the majority of people are trained to follow orders from figures of authority. It is improbable that a police sniper would refuse a hit based on some ethical dilemma, especially when all the information they have on a situation is fed to them by their police colleagues.
Heroes are rare and their presence should not be taken for granted. Counting on a human to do the right thing is almost never the right choice if you can help it.
Why? Using robots just sounds like an implementation detail. They should be able to do anything you're fine with human police doing.
The deployment of machines in cities is only a matter of time. With the rising stakes and the fear of mass social movements, "they" will justify their use to suppress the "crime" of dissent.
Whether the robots are remotely operated, autonomous, or self-aware, one thing is inevitable: the harm and deaths "they" will eventually cause.
However, I'd be interested in hearing the perspective of people who disagree. Maybe I'm overlooking something.
I'm assuming that the police had already exhausted options to resolve the situation non-violently and that a non-lethal robot wasn't available or viable (do they make TASER robots?).
Police in the states already get away with all kinds of corruption, abuse of power, violence/killings, so I think we should do as much as possible to avoid normalizing these kinds of tactics. The only comparable case that comes to mind is when police bombed an apartment block of radicals in Philadelphia:
While not a sniper, this is the police in the UK apprehending a suspect with a machete. I've never heard of similar tactics being used in the US. And conversations with police have led me to believe they would actively avoid such tactics and instead resort to violence.
You can actually watch dashcam/bodycam footage of police shootings. In situations where the suspect is armed with a bladed weapon, there’s usually an attempt to call for backup and subdue them with less-lethal means. It’s just that there are more often 1-2 officers instead of a dozen on scene.
Also note that, in the clip you shared, at no point does the machete-wielding suspect break into a full-on sprint towards any of the officers. In the police shootings I’ve seen, that moment is the one when police open fire, and rightly so: human reaction time is such that a knife equals or beats a gun within 21 feet or so. This is why you see the British cops backpedaling whenever machete man turns to face them—American cops have done the same in the footage I’ve seen. Also consider what would have happened if machete man here actually made an honest attempt to start killing officers. He would have been successful.
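The "21 feet" figure comes from the Tueller drill, and the tradeoff is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch in Python, where the closing speed and the officer's reaction-plus-draw time are illustrative assumptions (roughly 21 ft covered in 1.5 s, and about 1.5 s to react and fire), not measured data:

```python
# Back-of-envelope check of the "21-foot rule" (Tueller drill).
# Both constants below are illustrative assumptions, not measurements.
SPRINT_SPEED_FT_S = 14.0   # assumed attacker closing speed (~21 ft in 1.5 s)
REACT_AND_DRAW_S = 1.5     # assumed officer reaction + draw-and-fire time

def closing_time(distance_ft, speed_ft_s=SPRINT_SPEED_FT_S):
    """Seconds for an attacker to cover the given distance at a full sprint."""
    return distance_ft / speed_ft_s

# At or below ~1.5 s of closing time, the attacker reaches the officer
# before (or as) the first shot can be fired under these assumptions.
for d in (10, 21, 30):
    print(f"{d:>2} ft: {closing_time(d):.2f} s to close")
```

Under these assumed numbers, 21 feet is exactly the break-even distance, which is why officers backpedal: every foot of extra distance buys reaction time.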
And it’s not like British police don’t use lethal force when it’s called for. Just look at what happened to the terrorist on London Bridge.
The sniper was a trained soldier that was an imminent threat to the police officers—he had already killed several and given his training, motive, and the premeditation of the crime, it was entirely likely that he would have killed additional officers who attempted to arrest him (by way of booby traps or similar).
This has nothing to do with routine police use of force, where you could argue that some police should sometimes take some more risks in balance with the public good.
+ It is really difficult to detonate accidentally. (Even when it burns it may not detonate.)
+ It can be shaped, so that the charge only moves in the intended direction. (So the first floor isn't at immediate risk.)
Did the sniper deserve to die? Wrong question. In an ideal world, a man in a bulletproof lycra bodysuit would have shown up and apprehended him bloodlessly. We don’t live in that world. We live in the world where there was a tradeoff between ending only the sniper’s life and endangering more of the lives of the cops. Whichever choice is made, someone gets the short end of that tradeoff.
So the right questions are
1) Who should have gotten the short end of the tradeoff in those particular circumstances?
2) How do we create an organization where people hold the values that lead them to make our preferred tradeoff?
A fictional dramatisation of a post-apocalyptic world in which robot dogs hunt humans (IIRC).
Police have been using remote controlled vehicles forever.
I totally read that first as "mass police state" probably because that's what this is. A robotic mass police state.