Hacker News

With the recent uproar inside Google against Maven, it is obvious that the Pentagon is going to have a difficult time.

Google helps Pentagon analyze military drone footage–employees “outraged”

https://news.ycombinator.com/item?id=16532112

Leaked Emails Show Google Expected Military Drone AI Work to Grow Exponentially

https://news.ycombinator.com/item?id=17202179

Google Won't Renew Contract for Project Maven, Pentagon Drone AI Imaging Program

https://news.ycombinator.com/item?id=17208892

In my personal opinion, I would never help ANY army build AI-enabled drones, tanks, etc. I am in favor of the US making peace & dialog with Russia & China. SILICON VALLEY MUST NOT BECOME PART OF AN AI-ENABLED ARMS RACE. If the Pentagon has extra money, then it can give it to the homeless people of its country.




[flagged]


"kinetic energy" is such a euphemism for killing people. I think a lot of people just have a problem with the military deciding who is innocent and who isn't and killing people of other countries without a declaration of war & without a trial.


It's really not, as people by themselves don't pose much of a military threat. What you really want to do is break all the stuff that lets those people actually threaten you.

Granted, much of that stuff is arranged in the form of compounds that contain people, or is stored on such compounds.

There are a few steps to "deciding who is innocent", but the big steps are Step 1: the President decides that the military can start making spot decisions like "that guy just shot me" within a specific set of circumstances. Step 2: "that guy just shot me! I'm shooting back!"

Better military technology improves step #2. It also improves subtler cases like "can I shoot this guy" / "does he have a weapon?" / "yes he totally does [false]" / "ok shoot him" -- by allowing the supervisor in this case to confirm what he's being told, before he gives the go-ahead.

But step #1 is the biggie. As soon as the President says that the military can go to a place and kill people, it gets very dangerous for the people there. Apart from turning the entire planet into a panopticon, I don't think technology can change this much.


> "kinetic energy" is such a euphemism for killing people.

Corrected, thank you. I agree with you more than you think. Please vote.


Let's imagine a future where fully autonomous weapons exist. Advanced AI is driving them, and technological advances have given the weapons better sight (wide-spectrum sensors), better range, and faster reaction times than humans. They can go where humans can't and endure environments where humans can't live for even a couple of minutes. Perhaps initially the weapons are simple, kind of like the "sword" from Second Variety (P. K. Dick), using radio emitters to distinguish friend from foe.

They would also need a power supply, and because you can't put an advanced brain of huge computing power on a small platform (the laws of physics still apply), the "brain" would be located somewhere shielded, deep underground, perhaps near a volcano to use the thermal energy, but also near water, because of cooling requirements. The brain would use a large-capacity datalink to communicate with the weapons.

Humans are still in the loop, of course, because the adversary is unfortunately quite adaptable and you need programmers to change the software as battle requirements demand. However, humans have this pesky thing called a conscience, and sooner or later, if you make them fight, they will refuse to pull the trigger. So they need to be removed as far as possible from the decision to kill, which is left to the AI.

So a team (quite large, in fact) of programmers works on small, disparate features (image recognition, data communication, weapons delivery), but they never quite see the big picture themselves. Since what they do is quite important, they are well paid and enjoy a nice lifestyle. There are of course downsides. Refusing to do the work anymore, or even talking about it, could bring repercussions. So the best strategy is to keep quiet and mind your own business, paycheck to paycheck. Meanwhile, old-school soldiers become obsolete and the AI does all the messy work.

Sounds like fiction? Perhaps it is.


Targeted killings don't work; they only "help" replace some leaders with other (oftentimes even more radical) people. This is not just an issue of "maybe we will get better at killing people from the air soon," because the US has been at this game for almost 20 years now and we are still having the same conversation. War has been, and will always be, about politics (meaning how people live and interact with each other, even with their enemies); no technological "breakthrough" is going to change that.


Don't worry. The Google abstention is an eyewash. Weaponized AI is here to stay.


Do you think it is fair for a multinational company to ally with an army? What will be the limits? Besides realpolitik, one also has to answer such questions.

If Google helps the Pentagon, then it has no moral right to expand in Russia & China (I know about the censored Google search project), unless there are agreements.

I say it again: if Silicon Valley wants to be called a place where talented individuals of any nationality can stay & work for the advancement of technology, then it must not help the Pentagon.

Let the Pentagon develop its own AI technology.


In the history of the world, violence never really solved any problem. It is moronic to believe that killing people by "sending kinetic energy down range" is a solution to any perceived problem, and that belief highlights a huge problem in the military-industrial complex.

Violence is an action used when all other actions have failed and your life is directly and imminently threatened by someone else. That is the ONLY ethical use of violence.

It is not ethical "in pursuit of the international agenda," and it is certainly not anything I or the vast majority of Americans actually vote for.

Americans vote for elected officials primarily on their domestic policies, not on their "international agenda"; this is the problem with having only 2 choices at the voting booth.


> Violence is an action used when all other actions have failed and your life is directly and imminently threatened by someone else. That is the ONLY ethical use of violence.

Look up jus ad bellum. I think if you look at past wars in which the US was the aggressor, and stack them up against the principles of just war, you'll find they often meet more criteria than a lot of people care to believe. Regardless, the military is an instrument of the people. If the military does things you disagree with, vote. Campaign. Lobby. Spend your time and money influencing other people to get off their asses and vote.

I'm frankly a little weary of fighting for a country of overweight non-voters.


>If the military does things you disagree with, vote.

Shall I show you the countless studies that prove that voting in a first-past-the-post election system is pointless and does not, in fact, change any policy?

For example, the study Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens.

>I'm frankly a little weary of fighting for a country of overweight non-voters.

The act of non-voting is itself an action; it is a statement of distrust, disillusionment, and/or disenfranchisement with the system.

You believe it is laziness or some other reason that people do not vote, when in reality they have correctly assessed that voting is pointless in the modern system.

Study after study shows this to be true. The government does what people with political influence want, not what the citizens want.


Then gain political influence. Again, campaign, lobby. Press the flesh.


Good points. At the same time, reducing the cost of killing people and destroying things (which is a military's legitimate job) makes it more likely that those actions will be ordered by political leaders.

Let's say a technology could be developed that allowed your government (wherever you are) to kill any person with almost zero cost and no chance of detection. Would it be moral to aid in its development?

If it wasn't for WWII or the risk of falling behind competitors, should those scientists and engineers have developed the atomic bomb?


> Let's say a technology could be developed that allowed your government (wherever you are) to kill any person with almost zero cost and no chance of detection.

oh yeah, I remember that Hitman mission.

"Would it be morally justified for an assassin to kill me?" is a good question to add to the project-acceptance checklist.


Are you an American?




