Hacker News
Deep in the Pentagon, a secret AI program to find hidden nuclear missiles (reuters.com)
98 points by denissa on June 5, 2018 | hide | past | favorite | 83 comments



> The Pentagon is in a race against China and Russia to infuse more AI into its war machine ...

Much of the focus is on the U.S., China, and Russia, but I wonder who else is doing it?

Several years ago, a leading thinker on the application of modern IT to the military noted that it could represent an historic change: Throughout history, military power depended mostly on the quantity of two resources, wealth and population. But AI is much less strongly dependent on those things: You don't need lots of people; the robots and computers do much of the fighting. And wealth only helps so much: For AI, much of the war-fighting asset is software, which of course is essentially free to reproduce once developed, not expensive hardware. As we know well, spending more on software development doesn't always yield a better outcome. What's the benefit of spending $100 billion on development rather than $1 billion or even $10 million? For a lot of software, not much. (Hardware scale does help machine learning, but is there an upper limit or diminishing returns?)

Small countries might have an enormous opportunity. Could Singapore or Israel take the lead in AI and become major military powers? Could Saudi Arabia or Iran afford to develop AI as good as the U.S., China, or Russia? What about private companies or individuals? I suspect it depends not on the quantity of developers, but on the quantity of 10x developers and on who first discovers the superior technology and, more importantly, its application. Remember that in WWII, German tanks were inferior to many of their enemies', but the Germans figured out how to use tanks more effectively and seized an enormous advantage before their enemies could catch up.

EDIT: A couple clarifications and added the potential of private organizations


> Several years ago, a leading thinker on the application of modern IT to the military noted that it could represent an historic change: Throughout history, military power depended mostly on the quantity of two resources, wealth and population.

You're describing network-centric warfare [1], which was formalized in US military theory in 1996 (22 years ago) and has guided US military doctrine since.

I'm going to put this delicately, but it'd be nice if we didn't assume everyone in the military is an idiot who hasn't thought of the things we have.

[1] https://en.m.wikipedia.org/wiki/Network-centric_warfare


> You're describing network-centric warfare ...

It's from a somewhat seminal paper, about 10 years old, from a leading defense think tank, and others discuss it too. The paper was about the future of warfare, particularly the influence of AI and robotics, not past doctrine. It was highly respected in the defense community, AFAIK, so perhaps we shouldn't assume that HN posters are idiots who don't know what they are saying.

> it'd be nice if we didn't assume everyone in the military was an idiot and hasn't thought of the things we have.

It would be even nicer if we didn't attribute things to people that they didn't say.


Care to cite? I'd be interested in which paper you're talking about.

As for the barb, I wouldn't have dropped it if you'd cited / summarized more concisely. Past some word count without citation or reference, I assume people assume that I believe my thoughts are novel (and by extension, that others haven't thought of them).


> As for the barb, I wouldn't have dropped it if you'd cited / summarized more concisely. Past some word count without citation or reference, I assume people assume that I believe my thoughts are novel (and by extension, that others haven't thought of them).

I again just want to make it clear that I'm not the one who made the parent and GGP comments look idiotic.

I don't work for you, so to hell with your requirements and your reading comprehension problems.

> I assume people assume

I just had to repeat that part.


If you're interested in this kind of thing, I also recommend "Science, Strategy, and War", principally about John Boyd's energy-maneuverability (E-M) theory of air combat.

https://www.amazon.com/Science-Strategy-War-Strategic-Histor...


It will remain hard to do that whilst they still haven’t thought of the idea that perhaps killing people for money isn’t the best use of one’s productive energies.


Show me a world without violence, and I'll show you a country without an army.

Until then, I'd say the Swiss, Israeli, and American models are the most moral of the realistic options we've come up with. (Each with its own terrible moral compromises and failures.)


Da Vinci designed siege machines and other weapons of war for his patrons. I suppose you'd consider him an idiot for doing so?

I'm as anti-war as they come but of course you have to recognize there are smart people who nevertheless decide to work with the military.


I disagree with both assertions. Unless major breakthroughs in AI come along, you need a lot of people to create light, sweet crude data. China has multiples of the U.S. population, and is an expansive surveillance state gathering up every bit the internet has to offer, in contrast to the increasingly privacy-sensitive West. Given current trends in deep learning, those two factors alone give China an enormous advantage over the U.S.

Likewise, arguing that AI/software can supplant wealth in warfare is laughable. An example could be 9/11. The total cost to Al-Qaeda was a rounding error compared to the trillions spent in Afghanistan (and later, Iraq). And I have read anecdotes of U.S. Marines in Iraq recalling how insurgents managed to waste millions (if not billions) by firing one rocket at a Marine base and then fleeing, while the entire base had to shut down operations to dig in and defend, even though the lone attacker was long gone by the time anybody was in position. And yet neither Al-Qaeda nor ISIS nor even the United States itself has managed to fully and definitively declare peace and go home without a new insurgency rising in their wake.

In short, a power with little relative wealth or population - even with sophisticated software and AI - could never be any more than an expensive nuisance to the larger powers.


I don't think any amount of AI can predict light sweet crude over 60s within 5 ticks of accuracy either.


>>Throughout history, military power depended mostly on the quantity of two resources, wealth and population.

I thought that military power depended primarily on the quantity of population and matériel. The latter is driven primarily by wealth, but also relies on building, operating, and maintaining a manufacturing and industrial base.

Smarter software will help equalize and hopefully reduce coordination costs (i.e. command hierarchy & effective intelligence) but doesn't really help with population or matériel factors.


> but doesn't really help with population or matériel.

To add, AI can help reduce staffing, failures, costs, etc. What it can't do is magically project force. That requires boots on the ground or a weapons system capable of reaching out and touching someone, which in turn requires matériel, and for the initial build-up, flesh-and-blood soldiers.

However, there is a possible exception in the form of cyber warfare. If AI can sufficiently disrupt digital infrastructure it could hypothetically bring another power to its knees. Still, if that power can bounce back from the initial setback that AI would again become a less direct asset.


Correct. And Cyber-based attacks are much less visible and easier to hide than overt attacks e.g. a cruise missile. But as things get more automated, the attack surface and concomitant damage will only increase.

This could have very real military effects. Suppose your supply lines/defense contractors are targeted by foreign digital attackers. Even if your military itself uses secure communications and infrastructure, disrupting its dependencies will directly impact its effectiveness.


Good point about matériel. When I wrote the GP, I tried to think of a country with wealth (and population) that lost a war due to a lack of manufacturing capacity, couldn't come up with one, and so stuck with 'wealth'.

Honest question: Can anyone think of such a place? I'm not including countries that had manufacturing capacity but didn't use it for military purposes.

> Smarter software will help equalize and hopefully reduce coordination costs (i.e. command hierarchy & effective intelligence) but doesn't really help with population or matériel factors.

That omits another application of AI: Autonomous or semi-autonomous robots that do the fighting. That greatly reduces the need for population, because the AI robots are the 'boots on the ground'.

And while you need manufacturing capacity to build lots of robots, likely the software would be the difficult part. Manufacturing capacity is no longer particularly expensive or exclusive - it's so inexpensive that developing countries do most of it while wealthy countries moved on to higher-skill, higher-profit industries decades ago (such as writing software), and arguably the wealthy could rebuild capacity quickly if needed. Also, victory might depend far more on AI quality than on hardware quantity.

That's all speculative, of course. Who knows how it will work out. On one hand, we can't even build an AI that can drive safely on the freeway; on the other, AI might be simpler in a war zone, where safety and precision may take a back seat to destructive power - see it, shoot it - especially for the unscrupulous.


Germany and Japan. The USSR and USA both completely dominated them in terms of industrial capacity.

Germany in particular had about a 10-year head start in terms of weapons (strong tanks, jet fighters, assault rifles, drones, ballistic missiles) but simply could not build enough to make a difference.

Japan had a large population and plenty of soldiers to spare at the end of WWII but again couldn’t compete with Allied manufacturing. It was a fact they were aware of before the war.


I agree that Germany and Japan lost principally due to relative manufacturing capacity (though we could say they lost because their strategies didn't take that into account, particularly Germany, whose attack on Russia put them even further behind).

But I think that's a different question. I was looking for,

> a country with wealth (and population) that lost a war due to a lack of manufacturing capacity

I should have been clearer, though I think the context still conveys it: I meant, has there been a wealthy country that lost a war simply because it didn't invest in manufacturing capacity? Germany and Japan invested heavily in it, but the Allies, combined, were simply much wealthier and larger. Also, Allied capacity in the USA wasn't reduced by bombing.


There are significant questions as to whether strategic bombing was effective at all. I can't find the postwar USAAF studies at the moment, but I've seen them posted here before.


Mine automation is a big deal.


Definitely not Saudi Arabia. A close friend of mine worked there as an engineering contractor for a big European aero-defense company, and he told me that the "engineering capabilities" (so to speak) of his Saudi colleagues were definitely sub-par. One of the reasons he gave for this is that the Saudi electrical engineering students were still taught the Coran in university instead of the teachers devoting that time to useful EE knowledge. Maybe the new ruler will change things, but we'll have to wait at least a couple of decades for the changes' effects to become really visible.

My bet is on Israel, though.


> Definitely not Saudi Arabia. A close friend of mine worked there as a engineering contractor for a big European aero-defense company ...

The Saudis' answer to the problem you raise is in your first sentence: they can hire foreigners or buy the technology. In their nuclear competition with Iran, the Saudis apparently intended to buy nuclear weapons from Pakistan.

> Coran

FWIW, it's usually spelled "Koran", "Quran", or "Qur'an" in English.


>>but we’ll have to wait at least a couple of decades for the changes’ effects to become really visible.

By that time the world could be completely different, and it would be too late to make any changes.


Interesting you mention Israel; I feel like they might be the prime example for your theory. They are small and no wealthier than, say, France or Germany. However, they do have enormous IT talent, and even more importantly, their IT talent is motivated to help the military. In stark contrast to e.g. Germany, where bureaucratic rules (payment is fixed and terrible), the military's reputation, and the relative stability mean hardly any IT professional would be motivated to work on this. Also, no first-world military gets even close to the amount of real-world use the IDF gets. No other nation can test its anti-missile systems straight from its manufacturer's backyard using real enemy missiles. And we are already seeing this: Israel's Iron Dome is basically the only missile defense system we know actually and definitely works.


I think this is a pretty insightful and accurate point. I'd like to add the following:

'Secret' software is very hard to keep secret: once your enemy has a semi-intact drone/AI then they'll have upgraded their own fleet within months (if yours was indeed superior).

I agree that it seems possible that a country with less wealth might outperform a country or group with more wealth. However, it's likely that the wealthier nation is better able to steal the technology and catch up than the reverse. A wealthier group has more budget for monetary bribes, and can pursue more parallel reverse-engineering attempts than the less wealthy group.


It might be possible to ship the current trained model without exposing the underlying training techniques/datasets. How feasible would it be to pick up an XYZ learning model, reverse engineer it, and improve it in a useful way?


Very difficult. It's much easier to train another model. These things are mostly black boxes; even the developers might not completely understand how they work.
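A toy sketch of why training a fresh model often beats reverse engineering a captured one: with query access, you can fit a surrogate directly to the black box's outputs and never look inside. Everything here (the "black box" rule, the data sizes, the learning rate) is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A captured "black box" classifier: internals unknown to us, but we can
# query it. The linear rule inside is invented purely for illustration.
def black_box(x):
    return (x @ np.array([2.0, -1.0]) > 0.5).astype(float)

# Query it on random inputs, then fit a surrogate model to its answers
# (logistic regression trained by plain full-batch gradient descent).
X = rng.normal(size=(2000, 2))
y = black_box(X)

w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # surrogate's predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(X)     # logistic-loss gradient step
    b -= 0.5 * np.mean(p - y)

# The surrogate imitates the black box without ever seeing its internals.
X_test = rng.normal(size=(1000, 2))
agreement = np.mean(((X_test @ w + b) > 0) == (black_box(X_test) > 0.5))
print(f"surrogate agreement: {agreement:.1%}")
```

The same idea is why shipping only the trained model is weaker protection than it sounds: the behavior of the weights can be imitated even if the training pipeline stays secret.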


> 'Secret' software is very hard to keep secret: once your enemy has a semi-intact drone/AI then they'll have upgraded their own fleet within months (if yours was indeed superior).

All new systems have anti-tamper mechanisms designed in from the start specifically to minimize the chances of this happening (assuming they work as intended).


Judging by our utter inability to secure software and hardware stacks consistently, I wouldn't hold out much hope for this approach.


The world of bespoke military hardware is very different from consumer devices or commercial installations. Where off-the-shelf stuff is used, a lot of work goes into neutralizing the infiltration and exfiltration vectors. It is one of the reasons military computing systems are both so expensive and so far behind the cutting edge.


They also don't have magic pixie dust that solves the genuinely mathematically hard problems involved in securing things. Your drone AI only needs to be stolen once.


Comparing the DoD and MPAA, you're dealing with an order of magnitude more funding and talent.

Are there waste and &#+$ups? Sure.

But does an extra couple of zeros produce more secure hardware? Absolutely.


Adding self-destruct capabilities and possible booby traps to military drones is a bit different from adding them to smartphones.


Well, the Xbox One and PlayStation are holding up pretty well, and there are millions of units to try stuff on. And they need a low per-unit cost, so they can't even use more expensive anti-tampering measures.


There also aren't nation-state actors pouring millions into cracking them. Or if there are, at least they aren't going to tell the internet about it.

And for the record, the PS4 has had kernel-level exploits published.


Wait until the military figures out that AI is close enough over the horizon that a Manhattan-style project will most likely be able to achieve a self-improving AI. Untold funds will be dumped into it to head off the 'enemy' at the pass, because whoever gets there first might be able to stop others from ever achieving parity (or so it will no doubt be reasoned). For all our sakes, I hope this is a lot harder than some people think it is.


While the dangers of atomic chain reactions are well known, I think we're still safely short of an AI chain-reaction breakthrough.

While CNNs and LSTMs are very good at some tasks, they're still far away from meta-learning.


This is basically what DeepMind is doing.


Isn't this basically the thesis of the singularity? [1]

[1] https://en.wikipedia.org/wiki/Technological_singularity


> Much of the focus is on the U.S., China, and Russia, but I wonder who else is doing it?

The US sells defense equipment to its allies.


> For AI, much of the war-fighting asset is software, which of course is free to produce after development, not expensive hardware.

In an AI war, any software you push at the beginning of the war will be useless within days, because the enemy will push better software. The most efficient way to adapt will be to continually update your algorithms by stochastic gradient descent on GPUs. In the resulting minute-by-minute arms race, hardware will be crucial, as will having clever engineers who can continually streamline the process.
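At its core, the "continually update by stochastic gradient descent" step the comment describes is just a loop like this minimal sketch; the linear model, the noise level, and all constants are stand-ins for whatever the real objective would be:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for continual SGD: parameters theta are nudged after every
# incoming observation, one gradient step at a time.
true_w = np.array([3.0, -2.0])   # the unknown signal we're trying to track
theta = np.zeros(2)
lr = 0.1

for _ in range(2000):
    x = rng.normal(size=2)                 # one new observation
    y = x @ true_w + 0.1 * rng.normal()    # noisy measurement
    grad = (x @ theta - y) * x             # gradient of 0.5 * (x@theta - y)^2
    theta -= lr * grad                     # single SGD step

print(theta)  # should be close to [3, -2]
```

Because each update touches only one observation, the loop never has to stop and retrain from scratch, which is the property that would matter in a "minute-by-minute" race.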


AI definitely needs a lot of people (researchers, to invent or at least apply the AI) and money (to pay the researchers and get good ones), and the quality of the program is definitely correlated with the money invested.


Well, definitely not the Israelis since they have such a historical record of substandard intellects.


> If the research is successful, such computer systems would be able to think for themselves

At what point will journalists stop writing this? We may very well have general AI at some point in the future, but I really doubt it will be in our lifetimes.


They've been writing like that for the last 60 years.

https://www.nytimes.com/1958/07/13/archives/electronic-brain...


It makes me bristle every time I read it. Advanced statistical computing != thinking! People love to conflate number crunching with intuition and all of the other things that are impossible for computers to do, so I expect it to continue.


Doesn't help that a lot of tech companies actively promote the idea that AI can deliver a lot more than it can (e.g. IBM, Tesla).


>such computer systems would be able to think for themselves, scouring huge amounts of data, including satellite imagery, with a speed and accuracy beyond the capability of humans, to look for signs of preparations for a missile launch

Replace "a missile launch" with "a peaceful protest" and the statement suddenly becomes dastardly.

The increasingly authoritarian Chinese government would absolutely adore a technology like that, I am sure of it.


But the Chinese already do this; it's just not that automated, and it relies on vast amounts of cheap manual labour.

Or do you think the internet filters itself at the border of China?


> Government-sent postmen would be able to walk past every single door in the country every morning and [deliver mail].

Replace "deliver mail" with "look through the windows and write down what they see" and the statement suddenly becomes dastardly.

See what I did there? I took a random situation and made it instantly negative and big-brotherish just by invoking an imaginary narrative.

Conspiracy theories just don't help advance arguments.


I think you're being unfair.

You're conflating the much simpler task of changing the label with a much broader change (i.e., throwing away the entire practice in favor of a completely different and more difficult one).

Which do you honestly think is more realistic and more consistent with what we know of the history of the state?


I'm not sure why this is news. I was interviewed almost 10 years ago for a position developing software that categorized satellite imagery using heuristics, object detection, and classification. I assumed similar projects had been going on for decades before that.

Much of the 'AI' research has been funded, directly or indirectly, by the DoD, three-letter agencies, NATO, and many other countries since the field's infancy.


Propaganda to counteract the negative press about AI for defense/Project Maven, perhaps.


So propaganda to counter propaganda?


Yes.


I imagine reviewing positive training data for that model is terrifying.


I can only speak from experience in Pakistan, but culture makes it difficult for other countries to succeed if they are turning away developers for ethnic, tribal, or religious reasons, or for being on the autism spectrum. Not to mention the culture of never questioning authority figures, which makes proposing new ideas much more difficult.


Detecting launch prep might be useful, but it's ultimately just too easy to conceal a launch site for this to be the complete solution. It doesn't really help with a sneak attack from a major power, and US missile defense isn't even good enough yet to reliably stop a NK attack.

What the world really needs is to ensure mutually assured destruction remains in effect. The world cannot afford to let the Russian or Chinese dictators believe they can pull off a first strike and take over the world.

The problem right now is that the US is still trying to uphold MAD by itself and it's failing. The US nuclear triad is way out of date and potentially vulnerable to a first strike. We need to decentralize the system among other wealthy and trustworthy countries.

All countries in the G7 should have enough nuclear weapons pointed at Russia and China to ensure MAD. And they should develop these weapons independently of the US for increased reliability and redundancy.


Frankly, I think the country that one should be the most concerned about pulling off a first strike is the USA, not China or Russia...


The US has had the opportunity to take over the world multiple times and did not avail itself of it. Dictatorships are much riskier because of how centralized power is. But the genius of MAD is that it doesn't rely on trust.


Let me guess: because of Trump, right?


More like superfuzing, missile defense systems, superior early-warning radar...


> The US nuclear triad is way out of date and potentially vulnerable to a first strike.

How are nuclear submarines vulnerable to a first strike?


There are just a handful of Ohio-class subs, and all of them should have been replaced by more modern, stealthier designs by now. It's conceivable that an enemy could locate most of them and hit them with nuclear torpedoes or similar.

And even if most of the active subs survived it might not be sufficient to ensure MAD because they don't carry all that many SLBMs.


Handful = 14 boats on deterrence patrols. There are literally 1,000 warheads on active American boomers with superfuzing...

Even if only the boats actually on patrol survive this amazing Russian counterstrike, that still leaves a counterforce of up to 12*24 = 288 warheads: 144 to burn Russian cities and 144 to burn Chinese cities. To me that seems like a credible deterrent.

And this is still assuming literally NO land-based or air-deliverable weapons are left. ... Considering the spotty accuracy of Russian missiles (seriously, go look up their CEP ratios - do you even know what CEP stands for?), I find that scenario very unlikely.

China is widely believed to have only 270 warheads...

https://thebulletin.org/how-us-nuclear-force-modernization-u...
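For readers who haven't looked it up: CEP (circular error probable) is the radius around the aim point within which half the shots are expected to land. A quick Monte Carlo sketch, using an arbitrary 200 m per-axis dispersion rather than any real missile's figure:

```python
import numpy as np

rng = np.random.default_rng(2)

# CEP = circular error probable: the median miss distance, i.e. the radius
# within which 50% of impacts fall. Assume independent normal errors with
# a 200 m standard deviation on each axis (a made-up number).
sigma = 200.0
impacts = rng.normal(scale=sigma, size=(100_000, 2))   # (downrange, crossrange)
cep = np.median(np.linalg.norm(impacts, axis=1))

print(f"estimated CEP: {cep:.0f} m")   # theory: sqrt(2*ln 2) * sigma ~= 235 m
```

The closed form follows because the miss distance is Rayleigh-distributed, whose median is sigma * sqrt(2 ln 2), so the simulation is mainly a sanity check on the definition.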


1. The US claims to have 14 active, but that doesn't account for maintenance problems and the times when they're going in and out of port, something enemies track very closely. The real number is likely even lower than this already-low number.

2. Why do you think superfuzes help in a counterstrike? They massively increase effective accuracy, but that's not even the problem to solve in a MAD scenario. Superfuzes help in taking out an enemy's nuclear force while it's on the ground, as in a first strike. You don't need them to hit cities.

3. Counting warheads is misleading because it ignores the possibility of interception or of the missiles themselves failing. Countering some number of those missiles is entirely conceivable.

4. Yes, I know what CEP means because I too have read pages on Wikipedia. Is this your weak attempt at claiming expertise where none likely exists?

5. Here's an open letter from actual experts who ran US Strategic Command. They agree with me that the US nuclear triad is in dire need of modernization.

https://www.wsj.com/articles/the-u-s-nuclear-triad-needs-an-...

Some of their recommendations have already been implemented and a lot more needs to be done before we can rely on MAD being in effect in the future.

6. Putting the hopes of humanity on a few subs designed in the 1970s is a very dangerous idea. It's an admission that the triad is broken. When you've let two-thirds of your redundant system fail, you've got a very urgent problem.
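Point 3 above, that raw warhead counts ignore interception and missile failure, can be made concrete with a quick binomial sketch; the survival probabilities here are invented purely for illustration:

```python
import math

n = 288  # the thread's nominal 12 boats x 24 missiles figure

# Expected number of warheads that get through, and the chance that fewer
# than 100 do, as the assumed per-missile survival probability p varies.
for p in (0.9, 0.6, 0.3):
    expected = n * p
    p_under_100 = sum(
        math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(100)
    )
    print(f"p={p}: expected {expected:.0f} penetrators, "
          f"P(fewer than 100) = {p_under_100:.3g}")
```

The point is only that the expected penetrating force swings widely with the reliability assumption, which is why headline counts alone settle nothing.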


Don't you think any of those 1970s subs have seen modernization efforts?


Have they been upgraded? Yes. Have those 1970s-era subs been replaced with designs that have quieter electric drives, as planned? Nope.

From the open letter (linked above) by the experts: "The last concentrated investment to modernize the triad came during the Reagan administration."

Even if you believe the subs are in great condition, that doesn't fix the other two fundamentally broken legs of the triad. The B-52 fleet would likely get shot down because the planes are slow and non-stealthy. The ICBMs probably don't even work (there's no public evidence they do), they sit in well-known static silos, and they have just a few minutes of "use it or lose it" time.


You do realize the Ohio replacement class is literally in development right now.

> The ICBMs probably don't even work (no public evidence they do), they're in well known static silos, and have just a few minutes of "use or lose" time.

They fire off a few randomly selected Minuteman IIIs every so often, and the footage is typically public. Maybe not perfect evidence, but it is some evidence.


>It's conceivable that an enemy could locate most of them and hit them with nuclear torpedos or similar.

I kind of wonder which navy in the world has (or is going to have in the next 10 years, supposing the US does nothing in those 10 years) a sufficient number of such superior attack submarines. (Or how else are you going to attack these subs? ICBMs or long-range bombers are obviously out of the question.) It is definitely beyond Russia's and China's capabilities, even combined (which is not going to happen).


What about massive nuclear mines laid along their routes? Or long-range nuclear missiles fired from ships? Long-range torpedoes? Kamikaze-style mini-subs? A sneak attack would probably be creative.

The Ohio subs are supposed to be hard to track. If they are being successfully tracked then they're vulnerable to attack.

The point of the nuclear triad is to be triple redundant. The system is not designed to rely solely on the ability of the subs to survive an attack. That would be a single point of failure.


> It's conceivable that an enemy could locate most of them and hit them with nuclear torpedos or similar.

By 'conceivable', you mean 'I am able to think this thought, regardless of how tied to reality it is', right?


I actually mean something more like "plausible" here.

If the primary means of protection for these subs is stealth, and that stealth is no longer in effect, then it's logical to assume they're vulnerable.

The fact that the US Navy is trying to replace them with stealthier Columbia class submarines is good evidence that this is true.

Do you have any useful information or ideas to add?


But MAD is a very bad thing. Why would we want it to work at all? The more of it that fails, the better off we all are.


You are misinformed.


But what if the nuclear missile is being carried by a bipedal walking tank?


Does the fact that they are using a different approach (i.e. deep learning) to many of the same problems they've been trying to address for some time ... even really matter?


'War Games' -> WOPR [1]

[1] https://www.youtube.com/watch?v=iRsycWRQrc8


I'm kind of sad that we have enough imagery of nuclear missiles being fired to train the model and verify it. :/


This is the most propaganda-laden headline I've ever read.


Do you want Skynet? Because this is how you get Skynet.


I misread this as "secret AI program to fire nuclear missiles" and was briefly terrified.


I see Pentagon PR reached out to Reuters.


Maybe so, but please don't post unsubstantive comments here.



