How the U.S. military thinks about AI [audio] (changelog.com)
47 points by killjoywashere 6 days ago | 50 comments





The substantive part of this conversation comes about 35 minutes in, when Allen describes the DoD's five AI ethics principles: that any deployed AI systems must be "Responsible, Equitable, Traceable, Reliable, and Governable" [1].

I think "Responsible" raises some hard questions. Today, any weapons use by a robot might require a responsible human operator to first (remotely) pull the trigger. But what if an adversary allows their robots to fire without waiting for human permission? The first side to do so would gain a critical advantage, and we cannot have an autonomy gap.

Some have raised concerns that in the extreme case, this could lead to a world of machines anonymously killing without attribution. If you haven't seen it, watch Stuart Russell's compelling short film "Slaughterbots" [2].

[1] https://media.defense.gov/2019/Oct/31/2002204459/-1/-1/0/DIB...

[2] https://www.youtube.com/watch?v=9CO6M2HsoIA


Autonomous killing could be something that needs to be added to the international laws of war. The USA already forgoes practices that could give it an advantage in war because of international agreements. For example, our military has to wear uniforms during war, and we have to follow proportional-force rules that restrict which weapons can be used in response to an attack. Countries largely follow the rules because nobody wants a free-for-all war with no rules.

>Countries largely follow the rules because nobody wants a free-for-all war with no rules.

And somehow these rules tend to restrict things that are a force multiplier for smaller forces against larger ones, or that would make it very costly for a larger country to attack a smaller one. They protect the incumbents. It makes sense why big militaries would want rules like this: they make it much less costly for big countries to go to war with smaller ones. Russia didn't want Ukraine to have nukes. A few decades later, Russia invaded Ukraine.

It is okay for a US drone to strike a target and kill civilians. The civilians will just be defined as "enemy combatants" and the world moves on. However, if people from that same place somehow manage to kill the drone operator then that would be a terror attack.


"Russia didn't want Ukraine to have nukes."

The most concerned party at that time was the US. Having multiple nuclear armed ex-USSR states was contrary to their interests.

The better example is the US not wanting Gaddafi to have nukes and later invading Libya.


The US follows the rules only to the extent that it can't get away with breaking them, no more, no less. Witness the Hague Invasion Act of the early 'aughts, which sought to avoid accountability to the International Criminal Court by threatening to invade The Hague if a US military person were ever put in the dock.

https://en.wikipedia.org/wiki/American_Service-Members%27_Pr...


I disagree. I think the majority of US military personnel are good people and would disobey orders to attack civilians or otherwise commit evil acts (even if it was in the middle of nowhere and they could get away with it).

In this context the story of Hugh Thompson [1] is very interesting - a clear-cut “kill the civilians” order and people risking their careers and lives to disobey and protect the village.

What's relevant, though, is that most soldiers obeyed the orders and chose to commit war crimes, even in the US military, which has a strong moral ethos (or likes to portray itself that way, which for the common soldier amounts to the same thing). But what about all the other, smaller countries?

[1] https://en.m.wikipedia.org/wiki/Hugh_Thompson_Jr.


Of course the majority of people are good people almost everywhere, and yet our prisons are full of bad people that we need to lock away because they disobeyed the written rules of society and couldn't get away with it.

Possibly, in a conflict where the US isn't particularly threatened. The last time the US felt in real danger it flattened German and Japanese cities, and dropped two nukes.

I don't feel we've become profoundly nicer since then.


We used the weapons we had at the time to stop a World War. We've definitely become profoundly nicer since then. When was the last time we flattened a city or dropped a nuke?

We've since developed PGMs which allow us to minimize collateral damage more than ever before. We can destroy strategic targets of military interest with pinpoint accuracy. People that commit war crimes are reported by their fellow soldiers and justice is (usually) served. Sometimes war criminals are pardoned even after being turned in (like Gallagher), but this is rare and usually results in national outrage.


The US hasn't done so, because it hasn't felt the need (although it got close with Russia several times).

Committing war crimes has effectively been the national defense policy of last resort since WW2. That's what the nukes (and formerly chemical and biological weapons) are for. The implicit threat is that the US will massacre your civilian population.

And I don't mean that as any special criticism of the US. Most countries have a similar attitude of the rules of war going out the window when things get bad enough.


Well, sadly, what you think contradicts the facts and evidence.

https://en.m.wikipedia.org/wiki/Kandahar_massacre


How does that contradict what I think? That's the story of a single individual going rogue and then standing trial for his crimes. He's now in prison.

Trump recently pardoned some people who massacred a square full of civilians for no good reason. So I don't really have any reason to believe this.

Source?


>would disobey orders to attack civilians or otherwise commit evil acts

So..how do you explain, to give but one example, the hundreds of thousands the US killed in Iraq?


Easy, human error and collateral damage.

If insurgents hide and fight among civilians, civilians are going to die in the process of destroying the insurgents. It's like cancer - you can't kill cancer without killing tons of healthy innocent cells around it.

I don't necessarily agree with the US's prolonged occupation in the middle east, but it's highly disingenuous to imply US military leaders are commanding their soldiers and pilots to intentionally kill civilians.


And why shouldn't the US do that? The US isn't subject to ICC jurisdiction, so it wouldn't be any different than if Russia kidnapped US military personnel for their own "trials".

How would autonomous killing be precisely defined? Whenever a machine is designed to kill without an affirmative command from a human operator? Would that include landmines? Maybe it should...

There are plenty of fire-and-forget weapons that will attack a designated target autonomously. Some may even be totally automated, like CIWS, if you put them in some paranoid mode.

A landmine is indiscriminate, by that logic anything with a triggerable fuze is autonomous (even a snare trap would be). The difference would be if the landmine chose to explode based on some other factor.

Autonomy and intelligence are not the same thing.


Well, landmines can discriminate by at least weight and magnetism. I think they can also count the tracks and blow up later if they want to.

The action is, however, caused by a human who mined the area; it's just delayed, similar to how a grenade has a delay after the human activation before it explodes. You can make the same distinction with an automatic gun or turret: you position it in an area in which you have decided to kill anyone who enters.

Autonomy gets truly dangerous because you don't need to create death zones and no man's land. These weapons are not a less violent version of just bombing the area. They can run their own considerations and decide by themselves whom to kill, and with that, their use outside of active front lines will be considered. And with that, the people deploying them give up the decision and delegate it to an algorithm or, worse yet, a trained black box. The reality will be completely unaccountable killer bots flying over countries that can't defend themselves against foreign aggression.

This stuff has absolutely horrible potential, and I find it prudent to shame and cut out anyone with a sufficient lack of spine to work in this area. As unpopular as it is to consider the impact of the technology you are developing, it is taught in a CS degree for a reason. This work has the potential to screw over humanity for a while to come.


I believe landmines (of the type used in wars of the past) are not allowed by international law anymore, precisely because of the problems that come with them going off on whoever happens to be there, including far into the future.

Some ability to loiter and choose targets via algorithm. Mines can't move, and guided missiles are spatially and temporally bounded in what they can target.

There have already been cases of "autonomous" targeting, such as in 1982 when Exocet missiles fired by Argentine aircraft at a British warship selected a cargo ship instead (possibly due to countermeasures used by the warship). In this case, the weapon hit a target not intended by the firing unit. Of course, unguided weapons do this all the time...


> Mines, [...] and guided missiles

The combination of the two is interesting. The Mark 60 CAPTOR is a naval mine that launches an acoustically guided Mark 46 torpedo.

https://en.wikipedia.org/wiki/Mark_60_CAPTOR

https://en.wikipedia.org/wiki/Mark_46_torpedo


Good point. Autonomous killing is nothing new.

"Responsible" seems pretty easy: follow the rules of engagement.

What if a convoy passes a group of nuns? What if the nuns "fire without waiting"? Should we let the nuns have the "critical advantage" of first shot? Well, yes, it'd be absurd to do otherwise.

If some other country wants to develop auto kill bots that commit war crimes, I don't think we have an imperative to match them.


That's not the problem parent is talking about. They're talking about an autonomy gap.

Tolerant rules of engagement are feasible because reactions are "similar" in timescale. Enemy fires on you, you fire back, etc.

But there are many developments that make that system less tenable.

Firstly, increased weapon lethality. When the first shot has a 90%+ kill probability (say, putting a Hellfire missile on a civilian vehicle), can you afford to let the enemy take the first shot? Or do your rules account for reality and back up a step? Maybe now radar detection, firing preparations, or even entering a given airspace is sufficient provocation for a counterattack.

In the literature and in experience, this is casualty modeling moving from attrition (a constant rate, scaled by time) to volley (the majority of casualties caused in bursts, at specific points in time).

Secondly, and this is the specific issue here: reaction time between autonomous vs. manual systems. If an opponent is using autonomous targeting and engagement, and you're using autonomous targeting and manual engagement, there may be an orders-of-magnitude difference in reaction time. Coupled with the first point, that may mean your entire force is wiped out before you have a chance to react.
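
To make the combined effect concrete, here's a toy back-of-the-envelope sketch (the numbers and the model are entirely made up for illustration, not any real doctrine): with a high per-shot kill probability, a defender that waits ~30 seconds for human approval can lose most of its force before it is ever cleared to return fire.

    import random

    random.seed(0)

    KILL_PROB = 0.9          # assumed per-shot kill probability
    AUTO_DECISION_S = 0.5    # assumed autonomous engagement latency
    HUMAN_DECISION_S = 30.0  # assumed human-in-the-loop approval latency
    SALVO_INTERVAL_S = 5.0   # assumed time between attacker salvos

    def defenders_left_when_cleared(defenders, decision_latency_s):
        """Count surviving defenders at the moment they are cleared to fire."""
        t = AUTO_DECISION_S  # the attacker opens fire almost immediately
        while t < decision_latency_s and defenders > 0:
            # Each surviving defender is targeted once per attacker salvo.
            defenders -= sum(random.random() < KILL_PROB for _ in range(defenders))
            t += SALVO_INTERVAL_S
        return defenders

    print("autonomous defender:   ", defenders_left_when_cleared(10, AUTO_DECISION_S))
    print("human-in-loop defender:", defenders_left_when_cleared(10, HUMAN_DECISION_S))

The point isn't the specific numbers, just that volley-style lethality plus a reaction-time gap compounds very quickly.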


The idea of a preemptive attack in the face of potentially large risks is nothing new and not unique to weapons utilizing AI.[1][2]

[1] https://www.jstor.org/stable/3109845?seq=1

[2] https://en.wikipedia.org/wiki/The_One_Percent_Doctrine


It's not. And it's led to a reevaluation of rules in those other spaces too.

> Traceability

A question for the AI researchers and engineers on the topic of _Traceability_, from someone outside the field; this is something that has puzzled me:

(Broad assumption) Assume a 'neural network' is a black-box probabilistic model. How do we _prove_, in the context of _safety-critical_ systems, that it will behave in a known, deterministic, and traceable way for all known cases?

I would be curious to understand the processes/methods that people use in ADAS or medical imaging (and now defense applications as well) in order to provide 'verification' evidence for such a system. (Focusing more on verification/provability from a regulatory standpoint.)


There is work in progress on understanding the contents of a neural network and avoiding this "black box" effect. The goal is still far from reached, but there are some advances. See [0] for an example.

Also, remember that AI includes more than NNs. You can use other models, such as a linear regression or a random forest, which are perfectly explainable (a minimal sketch follows below the link).

[0] https://christophm.github.io/interpretable-ml-book/neural-ne...
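
To make the explainability point concrete, here is a minimal sketch (my own toy example with scikit-learn on synthetic data, not from [0]): the linear model's weights can be read off directly, and the forest at least reports global feature importances, whereas a NN offers nothing so direct.

    # Minimal sketch (my own, assuming scikit-learn and synthetic data):
    # the linear model exposes one readable weight per feature, and the
    # random forest reports global feature importances.
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)

    linear = LinearRegression().fit(X, y)
    print("linear coefficients:", linear.coef_)

    forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    print("forest importances: ", forest.feature_importances_)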


Indeed, ML is vast and not limited to NNs.

My question was aimed more at people who are actively using NNs as a solution and _have already_ shipped products in a 'regulated' industry.

The products are obviously out there, so it is confusing to me what evidence a team can provide at this point in time (with current NN implementations) in order to get a product using NNs (with their non-deterministic uncertainty) approved by a regulatory body (automotive, medical, etc.).

Examples that jump to mind are Tesla's stereo-vision sub-systems (I am assuming NNs are used), or NN-based medical imaging classifier software (NNs instead of clustering or SVM techniques).


Here is a (longish) post I wrote about this issue (mainly from the perspective of verifying autonomous vehicles):

https://blog.foretellix.com/2017/07/06/where-machine-learnin...


Thank you very much, that is on point. Thanks for sharing!

DARPA has acknowledged this problem and is working on it.[1]

See slide 11 for some of the technical approaches to making AI more interpretable.

[1] https://www.darpa.mil/attachments/XAIProgramUpdate.pdf


Various model compression schemes get rid of high-frequency noise in the model, combating over-fitting. This results in robustness against adversarial inputs.

Some of these schemes are motivated by rather robust theory. Gaussian Processes are a good starting point to learn more.
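
To give a rough feel for the smoothing intuition, a sketch (my own toy example, assuming scikit-learn; this is plain GP regression, not a compression scheme): the RBF length scale enforces smoothness and the WhiteKernel absorbs the noise, so the posterior mean is a de-noised fit with an uncertainty estimate attached.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = np.linspace(0, 10, 50).reshape(-1, 1)
    y = np.sin(X).ravel() + 0.3 * rng.standard_normal(50)  # smooth signal + noise

    # RBF length scale controls smoothness; WhiteKernel soaks up the noise term.
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel).fit(X, y)
    mean, std = gp.predict(X, return_std=True)  # smoothed estimate + uncertainty
    print("max posterior std dev:", round(float(std.max()), 3))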


This is certainly out of my depth but how would this model 'reduction' improve provability?

It would (I assume) reduce instances of non-desirable behaviour. But how would it improve the evidence I am able to provide to a regulating authority?


To my layman's understanding: by reducing instances of undefined behavior, since those instances should come from the noise in the data. Noise looks like structure in very high-dimensional views of data, so you need bounds on that stuff.

For autonomous vehicles, the 'proof' to the regulator can be the signature of the engineer (fortunately some engineers have higher standards). I think in the future the training data will come from simulations that create curriculum-learning datasets, so that the noise characteristics are perfectly known. The ML algorithms can be written with dependent types, so that you can prove your code does what you think it does.

Another challenge is inductive bias, which is a lot like confirmation bias. This bias comes from choosing an ML algorithm that is sensitive to certain information and blind to the rest. You need to navigate the set of all possible functions, aka a Hilbert space, to overcome it. Fortunately only a small corner of this space is relevant to our universe, and tensor networks seem to address this problem.

It looks like a lot of work to put these pieces together but at least it looks like the problem is tractable.


> But what if an adversary allows their robots to fire without waiting for human permission? The first side to do so would gain a critical advantage, and we cannot have an autonomy gap.

You're making the bold assumption that the robots will actually work, instead of wasting ammunition on all the wrong targets, including friendlies. I'm pretty sure the first side to allow fully autonomous weapons will do so before the technology is ready (humans are never patient enough to wait until the technology is ready) and end up regretting it.


> You're making the bold assumption that the robots will actually work, instead of wasting ammunition on all the wrong targets, including friendlies

I could easily imagine the US military deploying autonomous combat bots to some middle-eastern country unaccompanied by human troops. They would care approximately zero about wasted ammo, and we already know from our drone strike policy what the military thinks about accidentally killing non-combatants. As long as there are no Americans around to get killed by rogue bots, it doesn't really matter how much collateral damage they cause, within reason.


The US military may not care whether their toys work or not, but that doesn't mean they will be gaining a critical advantage from not caring, just as accidentally droning non-combatants hasn't given them a critical advantage.

It gets them the critical advantage of consolidating power at home through more money (via more military-related contracts), and more money oftentimes equates to more political power (at least in Western-like societies). The "more power" part has always been an end in itself.

> You're making the bold assumption that the robots will actually work, instead of wasting ammunition

I'm pretty skeptical the U.S. military is concerned about wasted ammunition. They waste so much on training and preparation that anything wasted in combat is minimal.

> I'm pretty sure the first side to allow fully autonomous weapons will do so before the technology is ready

It doesn't have to be autonomous to be deployed before it's ready. Iran managed to shoot down a civilian aircraft due to human error. The U.S. did the same decades earlier. Clearly there were either technology or human process problems; I suspect that autonomous military robots will follow this trend.


>I'm pretty skeptical the U.S. military is concerned about wasted ammunition. They waste so much on training and preparation that anything wasted in combat is minimal.

They may not care about the ammunition itself so much, but they do care about budget and combat effectiveness. It's not uncommon for training to be missed due to ammunition budget shortfalls, and a robot without ammunition isn't terribly effective.


There have also been instances during the war in Iraq where human soldiers clearly had no problem shooting civilians.

It's not something that's unique to autonomous systems, though I suppose it might terrify some people a bit more than when it's humans doing it.


Maybe that makes sense for some applications though, like a landmine?

The sound design on this podcast is awful. Interesting podcast, but the music is way too loud and doesn't fade in or out at all. The host's mic seems to occasionally tear as well.

I try to tell you guys about multi-million dollar opportunities to save lives available for the next 10 days and I can't keep it on the front page, even with dang's help (1). By comparison, I can't keep this softball podcast off the front page. Please, tell me you're more interested in saving lives than kibitzing.

(1) https://news.ycombinator.com/item?id=22050285



