The military is building long-range facial recognition that works in the dark (onezero.medium.com)
104 points by Coryodaniel on Jan 15, 2020 | 75 comments



The contrast between the responses on HN to this facial recognition technology and to China's facial recognition technology is mind-boggling. Commenters saw the Chinese tech as dystopian (rightly so), yet see this technology as a way "to make sure we're getting the right people", conceding only that we might still want to think about how its use could eventually go "too far".

If China’s facial recognition system is currently “too far”, how is this tech not also already too far? I guess if a technology is only used to recognize and assassinate foreign nationals, and not surveil citizens (which it will eventually be used to do), most Americans are okay with it. Some commenters are critical of this research, but the level of concern in these comments is way less than on posts about similar Chinese systems.

The point isn't that this tech could increase accuracy and kill somewhat fewer civilians than U.S. drone and air attacks currently kill around the world. The entire basis for this activity - shooting missiles into civilian areas thousands of miles from home in endless wars - is the issue. The fact that the military sees a use for this kind of technology is the core of the problem, and no matter how well it works, it will only increase the efficiency of assassinations performed by the U.S. military, not abolish them.


[flagged]


> Technology is morally neutral

This argument needs to stop being used. Developed knowingly, a race-condition-full Therac-25 machine is objectively bad and a side-effect-free cancer cure is objectively good. It's along the same lines as "guns don't kill people, people kill people". It is a gross simplification of morality and ethical theories, but it is nonetheless touted by people on the internet way too often. If anything, it highlights the need for an ethics class as a requirement for any engineering degree.


> Developed knowingly, a race-condition-full Therac-25 machine is objectively bad

You've gone and proven my point with that "developed knowingly" disclaimer, because rather than merely a technology, you're describing a deceptive and harmful act committed by a human being.


Knowingly is not a disclaimer. Whether the developer intended it doesn't change that it's an immoral technology if it is ever used. It is, moreover, absolutely not a proof of or an excuse for moral relativism.


I'm not advocating any sort of moral relativism. The act of intentionally building faulty medical equipment is morally wrong. Even the act of negligently building faulty medical equipment is morally wrong. But the faulty medical equipment itself is an inanimate object and it's a category mistake to ascribe moral judgment to it.

> Whether the developer intended it doesn't change that it's an immoral technology if it is ever used.

Knowingly or negligently using unsafe medical equipment to treat patients is morally wrong, yes. You're again describing an act performed by a person and not the technology itself.



Who gets to decide who is a terrorist and who is not? And based on what evidence? For very good reasons we have due process for that in our judicial system. Furthermore, the killings carried out by the US sometimes fall outside established international norms and even treaties.


Doing evil is generally enabled by such delusions of righteousness. Chinese state media undoubtedly frames their actions in a similarly justified way to how USian state media drones on about "terrorists".

I'm not equating the two entities - they certainly have drastic differences in aims, scope, chosen targets, etc. But rather evil is evil, and should be called out directly rather than excused for not being as severe as some other, especially more removed, evil.


That's fine. I don't think it's evil to kill terrorists. I especially don't think it's evil to develop more effective methods of identifying potential terrorists before killing them.

If anything's a delusion of righteousness, it's pacifism and isolationism.


“Fusion of an established identity and information we know about allows us to decide and act with greater focus, and if needed, lethality,” the DFBA’s director wrote in presentation notes last year.

It also opens up the possibility of weaponry optimized for an individual target's physical and mental weaknesses, personalized propaganda, and attacks on people's social connections, including noncombatants.


Framing is key for how you think about these military technologies.

You could frame this as the government making it possible to kill people in the dark automatically or as another data input to a vast array of data sources used to make life/death decisions for high-value military targets.

The technology has the potential to make sure we're getting the right people, but almost certainly its use will be pushed too far. It has utility, and we should rightly be concerned with keeping it from being used outside its limited intended application.


"We kill people based on metadata" - General Michael Hayden

Also this bit from https://en.wikipedia.org/wiki/Civilian_casualties_from_U.S._...: "Between 2009 and 2015, out of 473 strikes between 64–116 non-combatant deaths occurred. However during that period, the Obama Administration did count all military-age males in strike zones as combatants unless explicit intelligence exonerated them posthumously."

I don't think I trust the US military to determine what a "high-value target" is based on their track record. I also don't think more data will help, since one of the basic steps for understanding data is to understand how limited and/or detailed the dataset you are working with actually is. If they haven't clearly understood the dataset they are working with now, there is basically no chance that more data will help. There needs to be a culture shift, not just a refinement.


Between 2003 and 2019, 183,535 – 206,107 civilians died in the Iraq War / Occupation (https://en.wikipedia.org/wiki/Casualties_of_the_Iraq_War#Ira...). The US is seen as responsible for about 1/3 of those.

If drone strikes replace ground troops, they're an absolute godsend. If they supplement them, it's another tragedy. I'd love to live in a world where no one is killed and these sorts of body calculations aren't necessary but it's pretty clear we choose not to live in that world. As a result, technology can either be a tool to reduce violence by making it more precise and targeted or it can automate violence and make it cheap.


What do you envision a world where all wars are fought with drones or other robotics would look like?

International wars are generally fought to either gain territory (which is only valuable if it's livable and people tend to live where it's livable) or resources (and people tend to live near or work near where there are resources). How would a robotic army not still lead to civilian casualties in this scenario? Wouldn't that just make it cheap (in the public opinion sense) for a larger economy to fight senseless wars?

Besides that, think about how civil wars would be impacted. Think about how the recent genocides in Darfur or Kosovo would have played out if they had access to current information tech and drones.

I hope wars stay extremely expensive, both in the monetary sense and in the public opinion sense, of which one component is that the ones risking their lives are actually the citizens.


War has dramatically changed. I don't envision mass drone armies bombing cities. I imagine extremely precise and targeted strikes enabled by technology.

Why fight a whole war when you can just kill the generals?

If we want to think about war from an economic perspective, it doesn't make sense to kill civilians. Civilians are valuable. Murdering the labor force you seek to extract rent from is usually a bad decision. Civilians probably don't care as much about the political situation as their personal situation. It also just complicates things.

It's possible that technology pushes war in this direction but it could also go in the direction you hint at. A country may be able to conduct an asymmetric war where their soldiers/civilians face 0 risk and the other side faces all of the risk. A country may also use this technology to control its own civilians.

I think it's an active choice and if we don't push in the better realistic direction the other path will be taken.


Well, you mentioned framing, but aren't you even a little worried that the US military continues to kill thousands of people by drone strike with total impunity across several undeclared war zones? This isn't theoretical. This is going on now. This technology will be used to kill people with absolutely zero accountability. Maybe a planet where extra-judicial killings are hyper-efficient isn't where we should invest our best minds and our money. I reject the framing where we have to accept this.


I agree with you in condemning the indiscriminate killing done by the US. Technology may make existing operations slightly less evil.

If you look at the developments to the Hellfire missile (https://www.thedrive.com/the-war-zone/31409/everything-we-kn...), you can see that it's now more accurate and less likely to cause collateral damage.

The problem with the technology is that the more automated it gets, the more tempting and easy it is to use this technology liberally and frequently. I'm not against the inherent development of the technology. Only its application.


> Technology may make existing operations slightly less evil.

Technology is enabling these operations.


Can technology be a tool of de-escalation and an increase in precision? Do drone strikes prevent carpet bombing? I honestly don't know. We are living in one of the most peaceful eras, relatively speaking.

The technology I worry about most is hypersonic missiles.


Anything stealthy with a nuclear tip is worrying. The Shkval [0] torpedo also comes to mind.

0: https://en.wikipedia.org/wiki/VA-111_Shkval


Given that they're killing people, with zero accountability, would you prefer technology to help them do so more accurately, or not? Would you prefer fewer civilians as collateral damage, or more?

Yes, I know, you'd prefer that they stop killing people. To do that, you have to either 1) persuade a large number of people to stop trying to kill Americans, 2) persuade America to get out of the Middle East and just let things happen over there however they're going to happen, or 3) persuade the powers that be that, even if we stay in the Middle East, killing people that want to kill us is not the most effective route to peace.

And those are fine goals. But until you achieve one of those three, it isn't amiss that, when we're trying to kill someone, we do the best we can to actually kill one of the people that's trying to kill some of us, and not some random other person.


While I respect your opinion, it presents a false choice because you implicitly accept this reality. Technology enables these extrajudicial killings and further developing and deploying more technology legitimizes its use. Drone strikes have expanded at the pace technology has made them feasible, though the threats have not.

The dilemma they present you with is perfectly designed for engineer minds (like me and you) who love to solve problems, increase efficiency, and feel moral about it. In short, it's a trap to get us talking about the wrong things, stuck in details.

This is the gradient that pushes us towards dystopia. They feed us fear and lies, take preemptive steps, and we get stuck arguing technicalities, legalities, and tuning up the war machine.

So, in short, I fully reject this framing and urge you to do the same.


You're assuming that more data leads to better decisions. That only holds true if the data is analyzed correctly and viewed without much bias, and considering that incomplete data was previously used to justify killings, I'm not sure more data will help.

I'd also posit that expanding US surveillance capabilities now to improve the current situation might lead to a worse future if we ever manage to fix the underlying problems you enumerated.


It used to be hard to kill someone. You'd need a sniper team, or a strike force. You'd take out your target and anyone putting up a fight, including collateral damage. You'd run the risk your team would get killed. Today, you don't run those risks. The stakes are low. More people can be killed with less fuss, and little political fallout at home.


Fair point. There's Somebody's Law (I forget whose) that adding lanes to roads increases travel time, because traffic increases more than enough to compensate for the new lanes. This may have a similar effect - decreasing the per-kill collateral damage (or at least media attention) may result in even more killing.


Perhaps "Jevons paradox" was the name on the tip of your tongue?


No, but it's related. The one I was thinking of was specifically related to roads and highway traffic.

But, for what I was trying to say on this topic, Jevons paradox works, too.


Braess' paradox is specific to roads, although IMHO it's caused by different mechanics, and IIRC it only applies when a route is completely removed (or crippled enough to achieve the same effect).


Who defines a "high-value military target" though?

If you look at the US "leadership", surely anyone who is reading HN still has enough mental capacity to worry about that kind of judicial power.


Exactly. I think too many Americans assume that the U.S. government are the good guys and that they will always pick out bad guys to kill, ignoring the many ruthless dictators and military coups the U.S. has backed in order to topple popular, democratically elected governments around the world. What's to stop the military from using it to commit terror and murder indiscriminately?


Given the current state of best of breed, up close facial recognition, using the term "works" in a life and death scenario is an irresponsible overreach.

"Fail early, kill innocents often" is a terrible paradigm.


Generalized systems search massive databases; these systems can have much narrower data sets. "Who is X?" and "Is this X?" are very different computationally.
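
For illustration only (a rough sketch, not anything from the article; the names and the 0.6 threshold are made up), here is roughly how the 1:1 and 1:N cases differ with face embeddings:

    import numpy as np

    def cosine(a, b):
        # Similarity between two face-embedding vectors, in [-1, 1].
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(probe, template, threshold=0.6):
        # "Is this X?": one comparison against a single enrolled template.
        return cosine(probe, template) >= threshold

    def identify(probe, gallery):
        # "Who is X?": search every enrolled identity, return the best match.
        return max(gallery, key=lambda name: cosine(probe, gallery[name]))

Verification cost and false-match risk are fixed per comparison; identification's false-match risk grows with the size of the gallery, which is part of why a narrow watch list behaves very differently from a nationwide database.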

Also, "fail early, kill innocents" is a great paradigm for staying in power.

The beatings will continue until morale improves.


Doesn't have to be perfect, just has to be better than human performance. I heard a lot of stories out of Afghanistan and Iraq that boiled down to "They had a big thing on their shoulder and it was pointed at a tank so we had to kill them", never mind that one time in ten it was a TV camera.


> Doesn't have to be perfect, just has to be better than human performance.

Firstly, it's about time we stop pretending that people wait until technology does something better than humans do before they deploy it. It will be deployed long before that point. Secondly, there are more reasons to be concerned about this sort of technology than just whether it is effective at its intended purpose (identifying people who do not want to be found, at long range, for the purpose of assassination).


> Firstly, it's about time we stop pretending that people wait until technology does something better than humans do before they deploy it.

I think we need to stop pretending that humans are perfect, or even acceptably good.

Computers get better as time goes on. This technology wasn't a thing ten years ago. Now it's questionable. In ten years it'll be better than it is today. Humans will be exactly as good in ten years as they are today and exactly as good as they were a hundred years ago. And what they are today is not good enough.

> identifying people who do not want to be found, at long range, for the purpose of assassination

I think we need to stop pretending that humans are paragons of virtue. They do things we should be concerned about even when they can't execute effectively.

Long-range identification and assassination of people who don't want to be found is a capability that already exists, and in fact has existed for centuries - for a given value of "exists". Sniper teams, helicopter gunships, and artillery spotters have precisely this role and they make mistakes and kill the wrong people all the damn time.

And on top of that... when was the last time you heard about some US soldiers committing war crimes, in person, with their bare hands? No technology involved at all. No technology needed. Stop blaming it for human failings.


> Computers get better as time goes on. This technology wasn't a thing ten years ago. Now it's questionable. In ten years it'll be better than it is today. Humans will be exactly as good in ten years as they are today and exactly as good as they were a hundred years ago. And what they are today is not good enough.

Okay. Automated call systems have been deployed for decades now and continue to increase their market penetration. They are still not nearly as useful or good at what they do as a human would be, but that did not make any significant difference to the speed and rate at which they were deployed. Your argument is that abstractly, at some point, they should be better than humans. Maybe so, but that's not what I was arguing--I was arguing that the conditions for a technology's deployment are only distantly correlated with how good they are in comparison to humans performing the same task, not making some philosophical point about how the machines will not replace us or whatever.

> And on top of that... when was the last time you heard about some US soldiers committing war crimes, in person, with their bare hands? No technology involved at all. No technology needed. Stop blaming it for human failings.

I'm not really going to bother to respond in depth to the rest of the stuff you said, since it seems to be responding to points I did not make (when did I ever say people were paragons of virtue, hadn't killed people before, didn't make mistakes, or didn't use technology to kill people?). I am merely pointing out that technology is not value neutral; the particular technology we are talking about is explicitly designed to do pretty awful things. Responding that the real problem is people is missing the point; it's another iteration of the "guns don't kill people" argument.


I guess I'm a bit confused: why would they deploy something while humans are still better at the job?


Politics. Money. Expediency. The cheaper and easier you make it to do something, the easier it is for you to get signoff to do it. Humans require oversight and care. A robot staking out an area? Not nearly as much, and folks don't get nearly as much backlash when a robot doesn't come back home.


People routinely replace humans with services that don't do the job as well if the service is cheaper to deploy, well marketed, does something else that the human wouldn't, or is part of a mutually beneficial arrangement between the manufacturer and the buyer. For a case in point that has nothing to do with the military, have you ever interacted with an automated call system that you felt was easier to use and more helpful to you than a human answering the line would have been? I have literally never experienced that, but I have witnessed a huge percentage of companies switch from humans to automated call systems.


Because military contractors like money.



For a site with a readership so familiar with "scaling out", I am surprised by takes like that.

Does it matter if the technology makes the precision a bit better, so that the individual person being evaluated for killing has a slightly better chance of surviving thanks to being identified by a machine?

When... the machines allow many more people to be killed. After all, this is what automation lets us do. How many qualified and highly trained snipers could the military deploy? Not very many, compared to how many drones and other augmented systems they can deploy now and in the future.

Imagine every street corner equipped not only with CCTV, but augmented sniper turrets. I wouldn't bat an eye if the next invaded city were blanketed with systems like these.


> Imagine every street corner equipped not only with CCTV, but augmented sniper turrets. I wouldn't bat an eye if the next invaded city were blanketed with systems like these.

We already do that. It's just that the computers are squishy, unverifiable, black-boxed wetware that tend to commit war crimes, and the actuators are twitchy pieces of crap that are so bad at shooting that they have to be equipped with machine guns. What do you think soldiers are doing when they get shot at or blown up by insurgents or accidentally shoot reporters or civilians? They're not sitting in their base playing cards, that's for sure.


I know. But that wetware doesn't scale out easily, their families vote at home, and so on. Machines are cheap. 100 humans with a 15% failure rate will kill a number of people, sure.

100 thousand automatic machines with a 5% failure rate will kill many, many more. Possibly forever, because there will be no need to "take our troops home". The troops are already at home, pressing kill/live in their drone barracks.
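
(To put rough numbers on that: 100 humans at a 15% failure rate is on the order of 15 failures, while 100,000 machines at a 5% rate is about 5,000, so the lower per-unit error rate is swamped by the sheer scale.)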


I don't really like that line of thinking. Saying, "well, a human wouldn't have been able to do better" serves only to absolve anyone of any responsibility for the death. It's a lot easier to say it was "unavoidable" if a human isn't responsible.


'Computer said boom' is the next level in isolating the killers from the killed. It makes it that much easier to do the killing, and killing 'à la carte' will make it even more so.

Imagine if one of the superpowers one of these days would develop the technology to smoke anyone they wanted anywhere on the surface of the planet with 100% accuracy. Do you believe that would lead to more or less deaths? Do you believe this would lead to unchecked use of that power?

Personally I'm not too optimistic.


"Fail early, kill innocents often" is perfectly acceptable in foreign countries especially if the population there is viewed as backwards in some way.


I'm sorry, but I couldn't find a specific reference to where they say this tech would SOLELY be used in "life and death scenarios" or be linked to any sort of "kinetic action".

The only mention which comes close:

> “Fusion of an established identity and information we know about allows us to decide and act with greater focus, and if needed, lethality,”

"Fusion" is military parlance for "we would use a variety of sensor inputs and systems" to make inferences. So, this would likely be only one component of many others used to determine identity and/or hostile intent.

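Purely as a hypothetical sketch of what score-level fusion could look like (the sensor names, scores, and weights below are made up, not anything from the article):

    def fuse(scores, weights):
        # Weighted average of per-sensor confidence scores, each in [0, 1].
        total = sum(weights[s] for s in scores)
        return sum(scores[s] * weights[s] for s in scores) / total

    # e.g. a long-range face match, gait analysis, and signals intelligence
    confidence = fuse(
        {"face": 0.82, "gait": 0.64, "sigint": 0.91},
        {"face": 0.5, "gait": 0.2, "sigint": 0.3},
    )
    # confidence is about 0.81; no single sensor decides on its own

The point being that the face-recognition score would be one weighted input, not the whole decision.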

Soon you'll be droned by face recognition, oh the future!


Tangential: That face is not the one I would have pictured when looking at the IR image. It looks like some weird "white-hot" version of IR plus ambient lighting, and once transformed, lost the mustache entirely.

I'm sure there's a reluctance to put out the real capabilities directly, but I'm also sure there's a reluctance to put out the real weaknesses directly.


> and once transformed, lost the mustache entirely.

Isn't this what you want? A system that could be fooled by a little facial hair seems like it would be pretty useless.


Well I suppose so! It definitely kept the top hair so I assumed it would be matching as-is.


It's near-IR, not thermal like the top image.


Curious if this will work. About a year ago I was reading about US military research into a portable, personnel-use device capable of combining night vision and thermal vision in one vision set; I don't think they managed a breakthrough with that. This face detection would work great if ported to such a device.


More kill-loop automation? Hope it doesn't kill any QA testers or have datetime bugs.

Reminds me of Captain America: The Winter Soldier https://www.youtube.com/watch?v=3ru5wM7fl7g

"Deploy the algorithm. Algorithm deployed"


Does anyone know of ways to combat these systems, something like a special pattern that makes a face hard to read? China is leading the way in facial recognition, and I'd be surprised if there aren't any countermeasures available.


It depends on the scenario. If all you're trying to do is stop it from working, the article says it's IR based, so jamming the sensor (the camera) with a lot of IR radiation can work quite nicely.

Individually you could do this with bright IR-emitting LEDs. On the scale of the battlefield, a cool trick already being used today on the yachts of the rich and private is using a laser to shine a lot of light directly onto the CMOS (sensor element) of the capturing device when the shutter opens.

These methods work, but they don't hide what they're doing (jamming). It would be instantly obvious what you were doing, which would be OK on the battlefield if you're not trying to hide your position, but not so much in China.


Well, there is this unpretentious approach: https://cvdazzle.com/

Look N° 1 could fit very well in a military context, I believe. Look N° 3, however, requires at least some urban setting.

Also, I'm not sure whether some of these measures would work with thermal imagery.


This countermeasure story came up in 2018 but I'd only heard about it recently: https://www.vice.com/en_us/article/59jm8d/trick-face-scan-ha...


Every day we are getting closer to slaughterbots (https://www.youtube.com/watch?v=9CO6M2HsoIA)


A possible result is that adversaries will deploy robots that all emit the same radiation patterns in lieu of identifiable persons.


I'm not sure I'd consider this a possibility as much as a guaranteed evolution of defense.


Evolutionary pressure will now make chameleon genes reappear with morphogenesis located on our faces


I seem to recall the Terminator UI did all this before deciding what to do. Is that our future, killer robots wandering around killing suspected "terrorists" or other undesirables? I suppose if you combine China's social-score data collection and this tech with Judge Dredd-like robots, our society will turn out like a kind of Minority Report where data leads to pre-crime termination.

Not a future I want to be in.


Drones are already killing people based on which phone they're carrying.

https://www.vice.com/en_us/article/d738aq/us-drones-target-t...


The future is now. Drone assassination has become a common tactic for killing suspects. The tactic is pretty effective and it will not go away, with or without AI.


The tactic is pretty effective.

The strategy is a dragon seed. Every collateral drone strike breeds resentment for generations. The chickens will come home to roost. (But then the MIC can sell even more weapons, so I guess everything will be fine.)


How does a collateral drone strike differ from soldiers' errors and war crimes in this respect? Or even from killing combatants who are someone's children?


Even the sloppiest teams on the ground don't routinely kill 100 wedding guests, time and time again, like the drone strikes do. And if they do, they don't get to go home at 5 pm and have dinner in their suburb after a deed well done. Drones scale out with little immediate cost, other than festering resentment half a world away.

Currently, drone operators experience PTSD from loitering over targets for hours, but with increased automation maybe no one will even have to look at the images. Just "authenticate" a strike based on weighted parameters. War can be made much more streamlined yet.


The time when battles were up-close and personal isn't known for its peacefulness. With an exception for Pax Romana, where enemies were crushed and more or less successfully integrated.


Are you saying we have peaceful times because of drone strikes?

I rather believe other factors are at play and we have peaceful times despite drone strikes.


When killing the "bad guys" is more important than feeding your own people. I love how they use the term "target" instead of "person", they're not even trying to hide it.


Prediction: the 21st century will see computer aided technology for identification and tracking banned like chemical weapons in the 20th.

Corollary: not until widespread use and abuse highlights the danger.


Likely won't be banned, but limited in scope (so no autonomous firing, but yes robocop style visual field highlights, etc)


The only real application here is killing people with drone-launched missiles.



