
Given the current state of best-of-breed, up-close facial recognition, using the term "works" in a life-and-death scenario is an irresponsible overreach.

"Fail early, kill innocents often" is a terrible paradigm.




Generalized systems search massive databases; these systems can work from much narrower data sets. "Who is X?" and "Is this X?" are very different problems computationally.
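
A toy sketch of the computational difference, assuming face embeddings compared by cosine similarity (the names and threshold here are illustrative, not anything from the article):

    import numpy as np

    def verify(probe, enrolled, threshold=0.6):
        # "Is this X?": one 1:1 comparison against a single enrolled template
        sim = probe @ enrolled / (np.linalg.norm(probe) * np.linalg.norm(enrolled))
        return sim >= threshold

    def identify(probe, gallery, threshold=0.6):
        # "Who is X?": a 1:N search, where false-match odds compound with gallery size
        sims = gallery @ probe / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(probe))
        best = int(np.argmax(sims))
        return best if sims[best] >= threshold else None

With a gallery of one, the two collapse into the same check; the operational difference is the size of the haystack.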

Also; "fail early, kill innocents" is a great paradigm for staying in power.

The beatings will continue until morale improves.


Doesn't have to be perfect, just has to be better than human performance. I heard a lot of stories out of Afghanistan and Iraq that ended up boiling down to "They had a big thing on their shoulder and it was pointed at a tank so we had to kill them", never mind that one time in ten it was a TV camera.


> Doesn't have to be perfect, just has to be better than human performance.

Firstly, it's about time we stop pretending that people wait until technology does something better than humans do before they deploy it. It will be deployed long before that point. Secondly, there are more reasons to be concerned about this sort of technology than just whether it is effective at its intended purpose (identifying people who do not want to be found, at long range, for the purpose of assassination).


> Firstly, it's about time we stop pretending that people wait until technology does something better than humans do before they deploy it.

I think we need to stop pretending that humans are perfect, or even acceptably good.

Computers get better as time goes on. This technology wasn't a thing ten years ago. Now it's questionable. In ten years it'll be better than it is today. Humans will be exactly as good in ten years as they are today, and exactly as good as they were a hundred years ago. And what they are today is not good enough.

> identifying people who do not want to be found, at long range, for the purpose of assassination

I think we need to stop pretending that humans are paragons of virtue. They do things we should be concerned about even when they can't execute effectively.

Long-range identification and assassination of people who don't want to be found is a capability that already exists, and in fact has existed for centuries - for a given value of "exists". Sniper teams, helicopter gunships, and artillery spotters have precisely this role and they make mistakes and kill the wrong people all the damn time.

And on top of that... when was the last time you heard about some US soldiers committing war crimes, in person, with their bare hands? No technology involved at all. No technology needed. Stop blaming it for human failings.


> Computers get better as time goes on. This technology wasn't a thing ten years ago. Now it's questionable. In ten years it'll be better than it is today. Humans will be exactly as good in ten years as they are today, and exactly as good as they were a hundred years ago. And what they are today is not good enough.

Okay. Automated call systems have been deployed for decades now and continue to increase their market penetration. They are still not nearly as useful or as good at what they do as a human would be, but that made no significant difference to the rate at which they were deployed. Your argument is that, abstractly, at some point they should be better than humans. Maybe so, but that's not what I was arguing. I was arguing that the conditions for a technology's deployment are only distantly correlated with how good it is compared to humans performing the same task, not making some philosophical point about how the machines will not replace us or whatever.

> And on top of that... when was the last time you heard about some US soldiers committing war crimes, in person, with their bare hands? No technology involved at all. No technology needed. Stop blaming it for human failings.

I'm not really going to bother to respond in depth to the rest of what you said, since it seems to be responding to points I did not make (when did I ever say people were paragons of virtue, hadn't killed people before, didn't make mistakes, or didn't use technology to kill people?). I am merely pointing out that technology is not value-neutral; the particular technology we are talking about is explicitly designed to do pretty awful things. Responding that the real problem is people misses the point; it's another iteration of the "guns don't kill people" argument.


I guess I'm a bit confused: why would they deploy something when humans are still better off without it?


Politics. Money. Expediency. The cheaper and easier you make it to do something, the easier it is for you to get signoff to do it. Humans require oversight and care. A robot staking out an area? Not nearly as much, and folks don't get nearly as much backlash when a robot doesn't come back home.


People routinely replace humans with services that don't do the job as well if it's cheaper to deploy, well marketed, does something else that the human wouldn't, or is part of a mutually beneficial arrangement between the manufacturer and the buyer. For a case in point that has nothing to do with the military: have you ever interacted with an automated call system that you felt was easier to use and more helpful than a human answering the line would have been? I have literally never experienced that, but I have watched a huge percentage of companies switch from humans to automated call systems.


Because military contractors like money.



For a site with a readership so familiar with "scaling out", I'm surprised by takes like this.

Does it matter if the technology makes the precision a bit better, so the individual person being evaluated for killing has a slightly better chance of surviving thanks to being identified by a machine?

Not when the machines allow many more people to be killed overall. After all, this is what automation lets us do. How many qualified, highly trained snipers could the military deploy? Not very many, compared to how many drones and other augmented systems they can deploy now and in the future.

Imagine every street corner equipped not only with CCTV but with augmented sniper turrets. I wouldn't bat an eye if the next invaded city were blanketed with systems like these.


> Imagine every street corner equipped not only with CCTV but with augmented sniper turrets. I wouldn't bat an eye if the next invaded city were blanketed with systems like these.

We already do that. It's just that the computers are squishy, unverifiable, black-boxed wetware that tend to commit war crimes, and the actuators are twitchy pieces of crap that are so bad at shooting that they have to be equipped with machine guns. What do you think soldiers are doing when they get shot at or blown up by insurgents or accidentally shoot reporters or civilians? They're not sitting in their base playing cards, that's for sure.


I know. But that wetware doesn't scale out easily: their families vote at home, and so on. Machines are cheap. 100 humans with a 15% failure rate will kill a number of people, sure.

100 thousand automated machines with a 5% failure rate will kill many, many more. Possibly forever, because there will be no need to "take our troops home". The troops are already at home, pressing kill/live in their drone barracks.
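
Back-of-the-envelope, using the numbers above and the (toy) assumption of one engagement per unit:

    # expected wrongful engagements scale with deployment size
    human_units, human_failure = 100, 0.15
    machine_units, machine_failure = 100_000, 0.05

    print(human_units * human_failure)      # 15.0
    print(machine_units * machine_failure)  # 5000.0

The lower per-unit failure rate doesn't save you; the deployment count dominates.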


I don't really like that line of thinking. Saying "well, a human wouldn't have been able to do better" serves only to absolve everyone of responsibility for the death. It's a lot easier to say it was "unavoidable" if a human isn't responsible.


'Computer said boom' is the next level in isolating the killers from the killed. It makes it that much easier to do the killing, and 'a la carte' will make it even more so.

Imagine if one of the superpowers one of these days developed the technology to smoke anyone they wanted, anywhere on the surface of the planet, with 100% accuracy. Do you believe that would lead to more or fewer deaths? Do you believe it would lead to unchecked use of that power?

Personally I'm not too optimistic.


"Fail early, kill innocents often" is perfectly acceptable in foreign countries especially if the population there is viewed as backwards in some way.


I'm sorry, but I couldn't find a specific reference to where they say this tech would SOLELY be used in "life and death scenarios" or be linked to any sort of "kinetic action".

The only mention which comes close:

> “Fusion of an established identity and information we know about allows us to decide and act with greater focus, and if needed, lethality,”

"Fusion" is military parlance for "we would use a variety of sensor inputs and systems" to make inferences. So, this would likely be only one component of many others used to determine identity and/or hostile intent.



