Hacker News
Artificial Intelligence Is Now a Pentagon Priority. Will Silicon Valley Help? (nytimes.com)
108 points by yaseen-rob on Aug 26, 2018 | 74 comments



Of course Silicon Valley will help.

Google != Silicon Valley

Palantir is in Palo Alto and has thousands of employees; they're helping the military industrial complex every minute of the day. Oracle is in Redwood Shores, one of the largest employers in SV, and has never stopped helping the Pentagon (not implying they're a leader in AI). Silicon Valley is loaded with companies and employees more than eager to help, and it always will be. The "will they help" headlines are nothing more than click-bait; it's a premise invented to get attention and stir drama.

In fact it's quite the opposite. The biggest problem for the Pentagon is filtering through which of the eager Silicon Valley companies they want to disperse funds/deals to, and figuring out which companies are the best option for what they want to accomplish. For every Google that resists slightly, there are a dozen more that will be thrilled to get the business. And if a suitable company doesn't exist, there's endless venture capital available at a moment's notice if a new one needs to be formed to take advantage of a lucrative contract.


There would be no Silicon Valley without the military industrial complex. Period.

But I think there are problems with getting too close to the Pentagon and then trying to operate internationally as a purported neutral information platform.


While I mostly agree with you on a historical basis, I don't agree with you on a theoretical basis.

Post hoc ergo propter hoc. The military invented a lot of things "first", or at least provided the use-case and funding for a lot of things.

But that doesn't necessarily and always mean that people couldn't have invented something without military funding and uses in mind.

For example, the military probably invented the sandbag bunker, but I have one in my back yard. I am sure I would have figured out how to invent that piece of technology on my own for my own non-military needs.

So, maybe someone out there would have thought that electrified silicon is an excellent way to facilitate virtually all global communications.


The practical aspects of modern computing were to a large extent born from US defence funding. Starting from the theoretical roots: Von Neumann architecture [0] and various defence projects including ARPA[1].

"But that doesn't necessarily and always mean that people couldn't have invented something without military funding and uses in mind."

Sure, it doesn't. But funding and a large number of end users, combined with price-inelastic demand for your product, enable product development and field testing like no other scenario does.

[0] https://en.wikipedia.org/wiki/First_Draft_of_a_Report_on_the...

[1] https://www.amazon.com/Dream-Machine-Licklider-Revolution-Co...


The problem with your comparison is that IT took decades of investment before it produced anything truly useful. It is very hard to imagine that research being funded for years and years without financial return by anything other than a state-level actor. And in most capitalist democracies, especially in the US, the only politically viable way for the state to massively invest in an endeavor is to do so through military spending, which is what happened with IT.


I'm not so sure. Mechanical calculators were profitable, and once those were good enough, the jump to the first programmable computer [1] was fairly small and would have happened even without Nazi funding. That computer was used for German aviation research, so it's likely it would have been bought by other research institutes. From there you have a fairly straightforward path of incremental improvements that open up new markets, until we arrive where we are now.

But I will agree that the transition to silicon would have taken another decade without the space race.

1: https://en.m.wikipedia.org/wiki/Z3_(computer)


> But that doesn't necessarily and always mean that people couldn't have invented something without military funding and uses in mind.

It's not so much about the Military specifically, but about public funding in general. In short: world-transformative technological innovations tend to come out of public funding, where the license to experiment, sheer volume of funding, and isolation from market pressures enables smart people to build transformative things.

So yes anyone could have invented X, but probably not if they had to operate on the market and without a money-fountain to tap into.


Without the government there would be no military.

Thus your line of reasoning is really an argument for some degree of government and centralized planning, as opposed to total free-market innovation: that Silicon Valley owes its existence to the government as much as, if not more than, to the free market.

As a socialist, I have no qualms about that. I'd just rather our communal resources and effort be spent on things other than the military (industrial complex). If Silicon Valley is a good thing (debatable), we could choose to make it happen without the military. It's all a matter of cultural will.


Well, it's not just a matter of cultural will, but rather of other people's will. The army exists in part as a defense mechanism against outside forces, and you can't single-handedly decide not to take part in wars.


Does Silicon Valley like money? Will an industry created by the military take military money?

I'm no expert, but my crystal ball says yes. If not, the Pentagon would simply fund a new Silicon Valley in Virginia and the surrounding area.

Considering the huge defense budget, I'm sure Silicon Valley is salivating for a piece.

https://www.cnbc.com/2018/08/13/trump-signs-717-billion-defe...


> If not, the Pentagon would simply fund a new Silicon Valley in Virginia and the surrounding area.

I'm skeptical that the military would be able to recruit AI talent in significant numbers. There's a fairly strong cultural opposition to military applications among AI researchers. Between that and the comp packages they're getting to work at "cool" companies (i.e. consumer or pure research focused), the cost for the military to draw them away from that seems like it would be prohibitively high. Especially if they're also asking them to move away from SF/NY to somewhere "boring" like Virginia. Isn't the military pretty limited in how much they can pay?


>Isn't the military pretty limited in how much they can pay?

The AI researchers wouldn't literally be soldiers. They'd be contractors, side-stepping the pay limitations.

That said, military grunts would be perfect for the bootstrapping needed for some applications. "Manually label these one million images? No problem!"


I think one source of the cultural opposition to military applications is the presence of NDAs. All NDAs should be limited in time, and there should be a law making NDAs that are unlimited in time, or that have an exaggerated time limit, void. The other main opposition is moral, but I think this is a bad reason. If all the people with morals refuse to work on military projects, then all the projects will be managed by sociopaths. If you do not want your army in the hands of sociopaths, it is a duty to accept working on military projects.


> The other main opposition is moral, but I think this is a bad reason. If all the people with morals refuse to work on military projects, then all the projects will be managed by sociopaths. If you do not want your army in the hands of sociopaths, it is a duty to accept working on military projects.

You're probably right, but I think for a lot of people who aren't the super-patriotic type, it's just easier to work in private industry without any direct military interactions and not feel as much obligation to think hard about the ethics of the tech they're creating. Of course, there has been a lot of discussion about the ethics of big private tech firms lately (and rightly so), but when it comes to simply telling your peers what you do in mixed social settings, far fewer people will immediately recoil if you say you work at Snapchat rather than the Pentagon, right or wrong. And yeah, that's a morally weak and bad reason to choose private industry, but I suspect that something like it is a big part of the motivation for many people nonetheless.


DC is a pretty fun city. And if we’re comparing to Virginia suburbs, Silicon Valley isn’t exactly exciting.


Yeah, I actually agree. But I wrote "boring" with the quotes because I think that's the common perception among young engineers/researchers.


Either way, the military will be perfectly able to use COTS AI, just as it's using other COTS stuff like drones.


The reason the military will always be technologically behind is that the salaries they're willing to pay for talent are far lower than what you can get in the Bay Area, or in New York if you decide to do quantitative finance. It's as simple as that, though there's more than just economics involved.

Even in a world where the military does pay salaries on par with the Google farm, very few people would make the switch. The military would likely have to pay an unreasonable premium to get any appreciable number of people, because many have an aversion to the military industrial complex. And for good reason too, I might add.

People want to work on things that will help change the world and improve the quality of lives of other people, not make it easier to kill. You can argue that working on B2B software contributes little to improving humanity, but at the very least, it isn't actively harming anyone.

And the moonshot technologies being developed in fields like cancer treatment, curing aging, self-driving vehicles, etc., made possible by innovations in places like Silicon Valley, should show you that technology can be and is being used to change the world for the better.

Technology, at the end of the day, is a force multiplier that I would hope is directed towards purposes other than zero-sum games like waging war.


I believe that working on B2B software is a force for good. Nations and people that need each other for trade don't have good reason to go to war. Plus, we can't all be raising money for do-good charities; there has to be some real trade-based economy going on. Hence merely working on B2B software is "saving the planet".


Haha, very interesting logic :-)

You're right tho. Free and prolific trade is the greatest anti-war force we have ever seen. Having mutually dependent economies raises the cost of war to heights that exceed the threshold of tolerance.


> People want to work on things that will help change the world and improve the quality of lives of other people, not make it easier to kill.

Plenty of military/government projects have changed the world for the better (ignoring geopolitics) including: modern surgery, the internet, GPS, cellphones, microwaves, canned food, duct tape, etc.


I would argue that these innovations were created in spite of the military, not because of it. Think of why the military exists. With finite resources at its disposal, what does it fund to maximize its objective? Remember that the military exists for only one purpose, and that is the ability to wage war.


I disagree; I think the military exists for the purpose of advancing American interests at home and abroad. Think of ships like the USNS Mercy, or other humanitarian efforts.

What makes you think those innovations would have been created without the military? The US military does a lot of research that doesn't have immediate practical/commercial applications. Those projects cost a lot of money and don't have obvious payoffs, which makes them poor candidates for private investment.


> advancing American interests at home and abroad.

Just a nicer way of saying advancing the capability to wage war. Trying to hide the reality behind a veil does not remove the facts. War, I repeat, is a zero-sum game, and contributes nothing to humanity besides the petty interests of nation states.

I agree with you, though, that the military does a ton of fundamental research. It's just a shame that, for many scientists, the only way to get funding is being forced to think of ways their area of research can be applied to killing and destruction. There's a reason why in the greater scientific field, and especially in ML/AI, so many researchers are so adamantly against working on military applications of their research.


Yeah, peaceful economics far outweighs the military. The military can't hire talent because talent can make more money building products for the world. It's a nice trend. Trump's anti-trade stance is screwing that up, though.


Do they need Silicon Valley's help? They have all the money to buy hardware; the only 'help' they need is talent.

They can just set up funds; I am sure there will be plenty of SV people willing to take that money and work for them. Even at Google, the partnership was a problem for SOME of its employees, but definitely not all of them.


I do machine learning, but I hate the overhead headaches that come with DoD work. If they paid me enough, though, of course :P


I wonder if there's anything better than some good old-fashioned war and murder to bring humans together to solve hard problems in a coordinated fashion; the Manhattan Project springs to mind. Maybe military AI will be the next Manhattan Project.


The search for profit comes to mind as well.

So does the ego of leaders, who built things like the pyramids.

Concentrations of capital have a lot of nefarious effects on the well-being of a population, but they make some grand projects possible that would not otherwise be possible.

I mean, the Saudi crown prince does have a $500 billion (not a typo) plan to create a fully automated city from scratch (project NEOM).

Wars help states concentrate capital quickly, but other motives have advanced technology throughout history.


There was a moment in history where clocks were so advanced that they copied needlessly complicated clock mechanisms into pistol firing mechanisms.

But that's the only situation I can come up with. The instinct for survival is a pretty good motivator. And when collective survival is at stake, suddenly there is a lot of money, effort and creativity around.


Honestly, the Pentagon doesn't need to work with Silicon Valley; it just needs to put together teams of incredibly bright people who are smart enough to understand the AI technology coming out of Silicon Valley AND willing to work on defense projects. It's delusional to think that all bright people who are capable of working in this realm are anti-defense-industry. I don't think it would be that hard to get that talent with VERY competitive salaries (if not outright generous overpayment) and maybe some other perks, like the ability to work anywhere in the country and a cool cryptic group name. Hence JAIC.

At any point, if it catches wind of something new and potentially an AI game changer, it has the authority to demand to see the IP for national security reasons and feed it to a defense contractor. In fact, any patent can be reviewed and deemed classified in this manner.

Take the olive branch and come to terms with the fact that western civilization is not defended by pacifists.


I'm not an expert here, so pardon my ignorance, but isn't SV helping in a pretty big way just by open sourcing a lot of the tech?


Mmmmm.... that would fall under the category of a rising tide lifts all boats.


That helps America's adversaries as much as anyone else.


Is it just me, or is anyone else really excited that the war of "people" is gonna be over and there will be a war of "AI", just like in Real Steel?


It is just you, as war will necessarily always involve death and destruction.

The only thing that will change here is that it will no longer be a human choosing who lives and who dies; it will be a computer algorithm.

The problem with war today is that the people who choose who lives and who dies are not the same people who actually have to do the killing. Further removing humans from this decision chain is not desirable and should not be "exciting".

We need to be using technology to find better ways to resolve conflicts and make a more equitable and peaceful world, not to find ways to kill each other more efficiently.


"We need to be using technology to find better ways to resolve conflicts"

Technology is neither the problem nor the cure. Scarcity, greed and artificial scarcity are what divide the rich from the poor and move money to keep the status quo, in which the rich become richer and the poor remain poor enough not to be a threat to the rich, but rich enough to be taxed (read: used). The day we go to Mars or elsewhere and, say, discover an unlimited supply of materials that could change our lives forever, someone will find a way to close the faucet in order to keep the economy untouched. There is a constant need for poor people, because the poor are easier to herd and use, to send to work in harsh conditions, to lure with lies to get votes, and/or to send to war. The problem (and the solution) is all in the human brain; no technology can help. If we make it through the next thousand years of evolution without killing off our species, then maybe.


I 100% disagree; the solution to all of those problems lies in technology.

Now, technology can also be, and is being, used to create artificial scarcity and other issues, but it can also be used to bring down those same barriers and give people opportunities they never had in the past.

Technology can be both the problem and the cure.


> Not use technology to find ways to kill each other more efficiently

Without nuclear weapons we would have probably had world war three already. Sometimes finding more efficient ways to kill each other is a good thing.


I feel you and I understand you; I'm also against war. What I meant is that war should be like the film Real Steel: imagine that, no people fighting, only robots. It's just a context I believe many people would relate to.


I suspect more likely a rich nation's metal with fleshbag maintainers and overseers, against a poorer nation's people.


Also, when Kant said "war protects from despotism" he was living in a time when the aristocracy needed the majority of the population to take part in the war effort.

Rich nations have very little reason to sustain democracy if war is automated to a sufficient degree.


Why would people not help A.I. progress? Hey A.I., I am speaking to you: ping me if you need help ;-)


With the recent uproar inside Google against Maven, it is obvious that the Pentagon is going to have a difficult time.

Google helps Pentagon analyze military drone footage–employees “outraged”

https://news.ycombinator.com/item?id=16532112

Leaked Emails Show Google Expected Military Drone AI Work to Grow Exponentially

https://news.ycombinator.com/item?id=17202179

Google Won't Renew Contract for Project Maven, Pentagon Drone AI Imaging Program

https://news.ycombinator.com/item?id=17208892

In my personal opinion, I would never help ANY army build AI-enabled drones, tanks, etc. I am in favor of the US making peace & dialog with Russia & China. SILICON VALLEY MUST NOT BECOME PART OF AN AI-ENABLED ARMS RACE. If the Pentagon has extra money, then it can give it to the homeless people of its country.


[flagged]


"kinetic energy" is such a euphemism for killing people. I think a lot of people just have a problem with the military deciding who is innocent and who isn't and killing people of other countries without a declaration of war & without a trial.


It's really not, as people by themselves don't pose much of a military threat. What you really want to do is break all the stuff that lets those people actually threaten you.

Granted, much of that stuff is arranged in the form of compounds that contain people, or is stored on such compounds.

There are a few steps to "deciding who is innocent", but the big steps are Step 1: the President decides that the military can start making spot decisions like "that guy just shot me" within a specific set of circumstances. Step 2: "that guy just shot me! I'm shooting back!"

Better military technology improves step #2. It also improves subtler cases like "can I shoot this guy" / "does he have a weapon?" / "yes he totally does [false]" / "ok shoot him" -- by allowing the supervisor in this case to confirm what he's being told, before he gives the go-ahead.

But step #1 is the biggie. As soon as the President says that the military can go to a place and kill people, it gets very dangerous for the people there. Apart from turning the entire planet into a panopticon, I don't think technology can change this much.


> "kinetic energy" is such a euphemism for killing people.

corrected, thank you. I agree with you more than you think. Please vote.


Let's imagine a future where fully autonomous weapons exist. Advanced AI is driving them, and technological advances have given the weapons better sight (wide-spectrum sensors), better range and faster reaction times than humans. They can go where humans can't and endure environments where humans can't live even for a couple of minutes. Perhaps initially the weapons are simple, kind of like the "sword" from Second Variety (P. K. Dick), using radio emitters to identify friend from foe.

They would also need a power supply, and because you can't put an advanced brain of huge computing power on a small platform (the laws of physics still apply), the "brain" would be located somewhere shielded, deep underground, perhaps near a volcano to use the thermal energy, but also near water, because of cooling requirements. The brain would use a large-capacity datalink to communicate with the weapons.

Humans are still in the loop, of course, because the adversary is unfortunately quite adaptable and you need programmers to change the software as battle requirements demand. However, humans have this pesky thing called a conscience, and sooner or later, if you make them fight, they will refuse to pull the trigger. So they need to be removed as far as possible from the decision to kill, which is left to the AI.

So a team (quite large, in fact) of programmers works on small, disparate features (image recognition, data communication, weapons delivery), but they never quite see the big picture themselves. Since what they do is quite important, they are well paid and enjoy a nice lifestyle. There are of course downsides: refusing to do the work anymore, or even talking about it, could bring repercussions. So the best strategy is to keep quiet and mind your own business, paycheck to paycheck. Meanwhile, old-school soldiers become obsolete and the AI does all the messy work.

Sounds like fiction? Perhaps it is.


Targeted killings don't work; they only "help" replace some leaders with other (often even more radical) people. This is not just an issue of "maybe we will get better at killing people from the air soon", because the US has been at this game for almost 20 years now and we are still having the same conversation. War has been and will always be about politics (meaning how people live and interact with each other, even with their enemies); no technological "breakthrough" is going to change that.


Don't worry. The Google abstention is an eyewash. Weaponized AI is here to stay.


Do you think it is fair for a multinational company to ally with an army? What would be the limits? Besides realpolitik, one also has to answer such questions.

If Google helps the Pentagon, then it has no moral right to expand in Russia & China (I know about the censored Google search project), unless there are agreements.

I say it again: if Silicon Valley wants to be called a place where talented individuals of any nationality can stay & work for the advancement of technology, then it must not help the Pentagon.

Let the Pentagon develop its own AI technology.


In the history of the world, violence has never really solved any problem. The belief that killing people by "sending kinetic energy down range" is a solution to any perceived problem highlights a huge issue with the military industrial complex.

Violence is an action to be used when all other actions have failed and your life is directly and imminently threatened by someone else. That is the ONLY ethical use of violence.

It is not ethical "in pursuit of the international agenda", and it is certainly not anything I or the vast majority of Americans actually vote for.

Americans vote for elected officials primarily on their domestic policies, not on their "international agenda"; this is the problem with having only 2 choices at the voting booth.


> Violence is an action to be used when all other actions have failed and your life is directly and imminently threatened by someone else. That is the ONLY ethical use of violence.

Look up jus ad bellum. I think if you look at past wars in which the US was the aggressor, and stack them up against the principles of just war, you'll find they often meet more criteria than a lot of people care to believe. Regardless, the military is an instrument of the people. If the military does things you disagree with, vote. Campaign. Lobby. Spend your time and money influencing other people to get off their asses and vote.

I'm frankly a little weary of fighting for a country of overweight non-voters.


>If the military does things you disagree with, vote.

Shall I show you the countless studies showing that voting in a first-past-the-post election system is pointless and does not, in fact, change any policy?

For example, the study "Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens".

>I'm frankly a little weary of fighting for a country of overweight non-voters.

The act of non-voting is itself an action; it is a statement of distrust, disillusionment, and/or disenfranchisement from the system.

You believe people don't vote out of laziness or some other such reason, when in reality they have correctly assessed that voting is pointless in the modern system.

Study after study shows this to be true. The government does what people with political influence want, not what the citizens want.


Then gain political influence. Again, campaign, lobby. Press the flesh.


Good points. At the same time, reducing the cost of killing people and destroying things (which is a military's legitimate job) makes it more likely those actions will be ordered by political leaders.

Let's say a technology could be developed that allowed your government (wherever you are) to kill any person with almost zero cost and no chance of detection. Would it be moral to aid in its development?

If it wasn't for WWII or the risk of falling behind competitors, should those scientists and engineers have developed the atomic bomb?


> Let's say a technology could be developed that allowed your government (wherever you are) to kill any person with almost zero cost and no chance of detection.

oh yeah, I remember that Hitman mission.

"Would it be morally justified for an assassin to kill me?" is a good question to add to the project-acceptance checklist.


Are you an American?


Silicon valley is not an entity. It's a place. With many different people in it.


Silicon Valley doesn't have to help. At this point, a regular nerd like me can do AI in my basement as long as the Pentagon provides a labeled dataset. Data is the hard part, but the Pentagon can handle it: just have a bunch of soldiers label stuff.


You are memeing AI. There are many applications of AI that require heavy R&D. For example, DARPA has their XAI project[1], which calls for developing methods for interpreting the results produced by opaque ML methods. As a long-time lurker on HN who has been doing independent research in this area for a few years now, I will tell you that most of HN has no clue what modern AI research really is, even though it is talked about a lot.

Try creating something non-trivial and you will see how little you know about the subject.

[1]:https://www.darpa.mil/program/explainable-artificial-intelli...


The OP isn't wrong in that regard. For the kind of research you mentioned here, you are unlikely to find many engineers in Silicon Valley who can handle it. For the majority of engineers, AI is just neural networks and trial and error.


Sure, but that's DARPA; it's by definition for stuff that won't be practical for the next 20 years, and it's orthogonal to what they are pursuing in the near future. In the near future the goal is pretty simple: do the same things they do now, but with more drones and a lot fewer people. A lot of off-the-shelf stuff is ready to go now: classification, object detection, segmentation, etc. And it's not even that hard to get off the ground if you have a large, high-quality dataset.


Ok. I've spent the last several years working on neural networks for audio and music, and trying to reuse off the shelf classification and segmentation networks. It is ridiculously hard to get to work. I'm going to agree with BucketSort; I get the feeling that a lot of AI cheerleaders here have never actually trained a neural network.


I think his point isn't that it's trivial, just that it's possible. Off-the-shelf tools exist, and practitioners (of whom there are thousands) can build solutions now to identify tanks, or people on a kill list, or whatever the military wants.

DARPA looks at what is plausible and spends money to get the brainpower (a number in the hundreds or fewer) who can create the tools and the process, thinking about things the military wants or needs.


What are you talking about? It's ridiculously easy, to the point where my 14-year-old can train a classifier. For classification in particular you don't even need to do anything: just clone, point it at your dataset in one of the established formats (e.g. ImageNet), and let it train for a few days. Object detection is quite a bit harder (and takes way longer to train, even from a pretrained net), but again, it's totally doable half a dozen different ways using existing code you can get from GitHub.

If you’ve spent years doing this stuff and training a classifier is an insurmountable obstacle, you should consider changing your field of work.


Some of us are working on more complicated data than just handwritten digits. Part of the problem is that existing networks are tuned for the dataset that the original authors were working on. If you want to use it on completely different problems, you have to change the sizes of the layers, convolution size, max pooling, etc. The other problem is figuring out how to preprocess your data to make it as easy as possible for the network to digest. Then, to make it harder, changing the preprocessing means you have to change the network architecture, and vice versa. Fun times!


It can certainly be tricky. That said, if you've never used it, I highly recommend trying out adaptive max pooling.
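Adaptive pooling is what lets a fixed classifier head accept arbitrary input resolutions, which removes one of the architecture-retuning headaches mentioned above. A small illustrative sketch (the layer sizes here are arbitrary, not taken from any particular paper):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Adaptive pooling collapses any spatial size to a fixed 4x4 grid,
        # so the linear head works for arbitrary input resolutions.
        self.pool = nn.AdaptiveMaxPool2d((4, 4))
        self.head = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        x = self.pool(self.features(x))
        return self.head(x.flatten(1))

net = TinyNet()
out_small = net(torch.randn(1, 3, 32, 32))    # logits shape (1, 2)
out_large = net(torch.randn(1, 3, 224, 224))  # logits shape (1, 2)
```

It doesn't solve the preprocessing/architecture coupling entirely, but it decouples the head from the input geometry.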


Try training a classifier that can detect a person holding a weapon and aiming to kill someone vs. a person who isn't. Make it a 99.9999%-accuracy classifier under different weather conditions. Now make one for night-vision images. Even seemingly straightforward classification problems can be hard, not to mention the vast array of other problems out there.


Such a thing doesn’t exist and it doesn’t need to, because humans have much lower accuracy than that. You don’t need to run faster than the bear, you need to run faster than the other guy, and that’s not hard to do if you pick the right task.


^ everybody remember this line for when it's given in response to an angry Congressman.

I imagine another line will be "it's marginally better than statistical human shooting, according to three studies!"

And then in the 'big house' people will refer to you as "Butterstats" or something. Because you were screaming "marginally better" the first night they left you in your cell.

Yo, butter stats, how's your appeal going?


Where's the AI that labels stuff?


Will Silicon Valley accept money? The real question is whether Trump wants to tank the economy or not. It's a fool's errand: Marvin Minsky's idea about emotion machines is one thing; whether it's even possible, with enough compute over a database, to create something non-random is another. It would still create an absolute mess and contribute nothing useful compared to the real cost.

Domain-specific applications of AI do not require direct funding, apart from freely available and cheap tools.


I'm from the EU, and I worked for a short time in the defense sector. For risk-management purposes, I understand that all countries must develop their defenses, "just in case".

However, given the atrocities committed in the past, I hope people will consider that technology development doesn't have to be led by the military.

If we had only one thing to keep from history, it would be the memory of the mass destruction and pain that humans suffered while being controlled by an elite.

War is a form of competition. Surely we can compete with each other without involving bombs and explosives, so let's work towards a world without suffering (what a challenge!)



