A dozen Google employees quit over military drone project (arstechnica.com)
141 points by dismal2 6 months ago | 65 comments



This is primarily reporting what a news article on a different site says; please submit the original article in such cases. That article already has a discussion here: https://news.ycombinator.com/item?id=17064776 (98 comments)


> "If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed," the letter reads, "we can say with certainty that no topic deserves more sober reflection—no technology has higher stakes—than algorithms meant to target and kill at a distance and without public accountability,”

My understanding from the reading I've done is that this project is to analyze drone video after it's collected, to automate tasks like picking out when people enter and exit buildings and reading the license plates off cars. If that understanding is correct, then this quote seems disingenuous. Google's project is no more designed to "target and kill at a distance" than the drone's camera or its engine is. Arguably even less than those, since those components are in use when the drone actually launches strikes. Google's project only comes into play after drones have returned and there is time to crunch the data.


I don't think that's an argument. The truth is, even if the particular applications of AI to tasks like this are innocuous in this case, they're less than a stone's throw away from things that aren't. Uses like this for technology are a perfect example of things we (we, the tech community, who have always prided ourselves on being more fair and ethical than the rest of the world) all promised we would never do. There is no more perfect misapplication of AI tech than military use. It doesn't matter if the military is currently only "counting people" in video footage. We all know exactly where it will eventually lead. Counting people will turn into finding people. Finding people will turn into killing people. We can't mince words in our condemnation of this activity.


I was following your argument until the parenthetical elitist remark about the tech community priding themselves "on being more fair and ethical than the rest of the world". Is this a truly held belief common in the tech community? Glad I don't subscribe myself to groupthink. The tech community seems to fill the news with sexual harassment and other unethical actions just as much as any other self-defined group.


The statement was meant to be slightly ironic.


Are you also against end-to-end encrypted messaging? Telegram is used extensively by ISIS - who does actual evil and deliberately targets civilians.


Frankly, I’m still sifting through the arguments here and don’t think I agree with the OP off the bat, but that’s a false analogy. Google’s work is directly advertised for military use; Telegram isn’t. I don’t think anyone is arguing against object detection research, more its military application.


> I don't think that's an argument.

Of course it is! It's an intentional mischaracterization of what the software does.

> Uses like this for technology are a perfect example of things we...all promised we would never do.

I didn't make any such promise. When did you?


It's really challenging to unpack this kind of dogmatic argument. Are you a pacifist in general? Are you against violence and military intervention in every conceivable case, up to and including Rwanda or WWII? Do you think it's good for relatively liberal and democratic countries to have less military capability than relatively authoritarian countries?

The cause of human rights, freedom, dignity, and survival would have been actively harmed if British and American scientists and engineers took the same moral stance that you're taking and refused to participate in the war effort. Do you think it was wrong of Turing to help break German encryption?


Not OP, but I don't think it's inconsistent to view the military as a necessary evil while denouncing the use of drones in asymmetric warfare.

I'd have no qualms about supporting my own nation in a global conflict, but ethically speaking the drone program falls on the darker side of gray.


Can you expand on that?


Who watches the watchers?


Just because Google or the military say the tech won't be used to target and kill people doesn't mean it's true.

The letter that I read[1] states:

"Recently, Googlers voiced concerns about Maven internally. Diane Greene responded, assuring them that the technology will not “operate or fly drones” and “will not be used to launch weapons.” While this eliminates a narrow set of direct applications, the technology is being built for the military, and once it’s delivered it could easily be used to assist in these tasks."

and:

"Building this technology to assist the US Government in military surveillance – and potentially lethal outcomes – is not acceptable."

So the concern seems to be that the military can easily repurpose Google's technology to lethal ends.

[1] - https://static01.nyt.com/files/2018/technology/googleletter....


It's not obvious to everyone that Google has close ties to intelligence agencies already?

I don't see why their promises should mean anything, and I don't understand why some people never learn that there are no good corporations. They all exist to make money. Except the non-profits. And there is tons of money in war, as you know.

Anyone building tech that kills people is therefore either playing dumb or not caring.


The response to that is in the article:

> While a Google spokesperson says the program is "scoped for non-offensive purposes," a letter signed by almost 4,000 Google employees took issue with this assurance, saying, "The technology is being built for the military, and once it's delivered, it could easily be used to assist in [lethal] tasks."

If the object and intent recognition is made fast enough, and results can be sent to and from a drone in flight, then the technology can be repurposed offensively, regardless of its initial purpose.


Once Google engineers have trained an algorithm to interpret a video feed to determine when people enter and exit buildings, what do you think the very next step is? It will be applied to live video for use in determining where a target is located in order to kill him.


Sometimes I’m convinced that HN’ers are just trying to argue the contrarian point of view, regardless of facts.

That’s certainly easier for me to believe than someone who sincerely thinks that the military spending millions of dollars on targeting systems doesn’t mean the military is planning on using the targeting systems it spent millions of dollars to develop.


...crunch the data so that they can more effectively kill at a distance and without public accountability


Kudos to those who are ready to stand up for their principles. If you are at G and wondering whether you should resign or not, remember this: the market for AI talent is super hot. You will immediately find lots of great and challenging AI work pushing humanity forward.


Are there many AI opportunities out there without potential military use? Warfare is a broad-spectrum human activity. I'm having trouble imagining many things that can't be leveraged.


IANAAIExpert, but it seems that many such jobs might have questionable ethical implications?


Google does not hire AI people exclusively.


Good on these employees. Now go work somewhere or start something that gives you ownership and control of what you work on.

This is likely a result of massive corporate/government entanglement. Google can't say no. Their stock could crash, their negotiating ability could go down significantly, and all the work they've done on lobbying could be in danger. Who knows what other back-room deals are happening.


Lately, a lot of Recaptcha challenges are things like: identify the trucks, the cars, the shops, etc. I really hate these challenges, knowing what Google will (at least in part) do with the data.


IIRC, the original Recaptcha was about OCRing scanned books that machines couldn't read. I'm not sure why anyone uses it right now. Like you said, it's obvious where it's going. Not to mention that it's increasingly annoying for users.


And it's completely inaccessible. reCAPTCHA is basically impossible to use if you need a screen reader or have any sort of visual impairment. Hell, I’ve had problems as an EU resident due to differences in street signs between here and the US.


Yep. “Click on the images containing a crosswalk.” I’m British. That’s not a word we use.


"find the zebra" might be more confusing :)


I don't see what's so hard about it. Captcha descriptions are also localized but I understand why they wouldn't do US<>UK.


What's hard is that 90% of UK residents won't understand what they're being asked to do.


Yes, because "crosswalk" is an opaque word that doesn't have an obvious meaning (and isn't present in commonly consumed media from the US).

While there is some potential for confusion, I don't think that holds for that specific word.


It's where two footpaths cross each other? :)


Good on them. That's taking a stand, and it's admirable they are standing by their beliefs. Maybe they can go work on making a product that could never be used by the U.S. military they despise so much as they live their sheltered and comfortable parochial lives.

I mean, I'd be hard-pressed to think of any such product, but maybe I lack imagination.


The free market isn't just about buying and selling. Whether or not you agree with the politics, choosing not to work somewhere on principle is a beautiful thing, especially when the position being given up is a highly coveted one.


I don't get this. This is about tracking humans. If you're worried about AIs tracking humans through aerial cameras ... let me put your mind at ease: this is "tutorial"-level stuff.

Granted, there's a difference between getting it to a demoable state and getting it to work under all conditions, getting it stable, getting it tested, and so on and so forth.

But still, this is not exactly state of the art anymore. This ship has sailed. Over and done. The genie cannot be put back in the bottle. The US Army has this option now, and very soon essentially any professional military will have it. A quick course on AI will enable you to do this, and I assume that the US military has enough such people available.

Same with tracking specific people in (high-res) cams. There's a computational cost, but this has been done and described so many times. If anybody wants to build a network of cameras that can track specific people by their faces, there's nothing stopping them at this point.
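
To put the "tutorial level" claim in perspective, here's a minimal sketch of off-the-shelf person detection over video frames, using OpenCV's bundled HOG pedestrian detector. The input file name and the idea of just printing detections are made up for illustration; this is nowhere near a robust aerial-surveillance system:

    # Minimal sketch: detect people frame-by-frame in a video clip using
    # OpenCV's built-in HOG pedestrian detector. "footage.mp4" is a
    # hypothetical input file.
    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture("footage.mp4")
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        # Returns bounding boxes (x, y, w, h) for detected people.
        boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
        for (x, y, w, h) in boxes:
            print("frame %d: person at x=%d y=%d w=%d h=%d" % (frame_idx, x, y, w, h))
        frame_idx += 1
    cap.release()

The HOG detector is a decade-old technique; swapping in a modern pretrained detector is not much more work, which is exactly the point: the basic capability is commodity.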

So why get all worked up about this? What's the big deal?


If the US Army had this, or could easily build it, it would probably not pay Google large sums of money. There are still many challenges in creating scalable, robust solutions using machine learning in computer vision, especially when fast response times and a high degree of robustness are needed.


Most, if not all, types of machine-learning networks run inference in O(1) time per input, so nothing special is needed to keep the response time constant.

Robustness is harder, but for this problem, not very hard.
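
As a toy illustration of that constant-cost point: a fixed-size network does a fixed amount of arithmetic per input, so per-frame latency is roughly constant. The numpy "network" and its sizes below are made up stand-ins, not any real model:

    import time
    import numpy as np

    # Toy "network": two fixed weight matrices, so the cost per input
    # depends only on the (fixed) architecture, not on how much data
    # has been processed before.
    W1 = np.random.randn(4096, 1024)
    W2 = np.random.randn(1024, 10)

    def forward(x):
        return np.tanh(x @ W1) @ W2

    for _ in range(3):
        x = np.random.randn(1, 4096)  # one "frame" worth of features
        t0 = time.perf_counter()
        forward(x)
        print("latency: %.4f s" % (time.perf_counter() - t0))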


http://neveragain.tech/ has a bunch of names on it. Have any of them taken a stand?


Looks like they are taking a stand by signing that pledge.

There is one section on resigning:

> If we discover misuse of data that we consider illegal or unethical in our organizations:

> ...

> If we do not have such authority, and our organizations force us to engage in such misuse, we will resign from our positions rather than comply.

From what I have seen, Google's help with the Pentagon seems limited to software consulting. If Google had shared any data with the Pentagon for military purposes then it would have crossed a line for the pledge.

The pledge says to try to fix it through: working with colleagues/leaders; then whistleblowing; then legal defenses if they have the authority; then resignation.


Were they directly involved? Did they know what they were working on?


I wonder how many of those Googlers who objected to the development of weapon systems have served in combat positions fighting in Iraq or Afghanistan?

I'm guessing not many, which is a reason why it is important for firms such as Google to prioritize the hiring of combat vets (besides the fact that they risked their lives to serve our country).

For those who have not served in combat or lost a friend or relative that served in combat, saving lives with drone technology is too abstract.

The drones are very effective at killing terrorists and technology which improves the effectiveness of killing terrorists and enemy combatants (and thus saving American lives) is a good thing.

In Israel, men and women alike are drafted, and the women can serve in combat positions if they desire. Men serve one-month stints of reserve duty until they are 40. Some of these men have been educated as engineers, and they understand first-hand the importance of developing technology to save the lives of combat soldiers.

This is something firms like Google are missing: there seems to be little empathy for US military soldiers who are risking their lives defending our nation, and little interest in developing technologies to save their lives.


> "and technology which improves the effectiveness of killing terrorists and enemy combatants (and thus saving American lives) is a good thing."

There are other than American lives at stake. I suppose what you say would be true, if the U.S. used the technology to kill only those universally bad terrorists and if this somehow significantly decreased the damage terrorists do around the world.

But this is fantasy. The U.S. will most probably use this technology to detect more and to kill more, with much less regard to foreign lives than to American lives. If the technology flags a building that most probably contains a terrorist, the non-terrorist people present won't matter much to a drone whose work is killing terrorists. The building with all people near it will be gone. I do not believe the U.S. government or U.S. military care about the lives of poor people who are on the other side of the world.

Drone attacks also probably aren't that good a service to the U.S. in the long term. They generate strong opposition world-wide and probably also generate new terrorists. Killing one terrorist now in this way may mean creating 10 terrorists 10 years from now.


I'm thinking that they have a problem with the military, period, and don't think soldiers should have been in Iraq etc. in the first place. It's an issue of trust.

And then you have the argument that if you reduce the human cost of wars, you make wars more likely.


> the argument that if you reduce the human cost of wars, you make wars more likely

Very good point! Drone attacks make the operators into drones themselves, detached from feeling the consequences of their actions.


It's not clear that cost is even being reduced. PTSD rates in drone operators are roughly equivalent to those who fly manned combat missions [1].

https://www.nytimes.com/2013/02/23/us/drone-pilots-found-to-...


This is a very good comment and perspective. I am not a vet and don't have anything close to the experiences that it seems that you have. That being said, many of my friends and roommates from college are in active duty military positions, and it's scary to hear when they're being deployed.

Even though a program like this might save some lives in the short term, training machines that don't have empathy and can be programmed to kill for whatever ends seems like a Pandora's box that we should be damn sure we want to open before actually doing so.

Also, my apologies for the less-than-respectful replies that you are getting. You should not be chastised for respectfully sharing an important opinion, and their behavior is unbecoming of what we should expect of American citizens.


There's also precious little understanding of the fact that the US isn't the only high-tech military power in the world, and that there's nothing stopping countries like Russia and China from developing this kind of technology. And you're a fool if you think they would use that technology more ethically than the US military.


Nor are many Google employees American or European. So if I were of, e.g., Russian or Chinese nationality, I would not want to partake in activity that in all likelihood can be used to advance the 'wrong' interests.

On another note, how about everyone who works for Intel, or ARM, or any of the EDA companies? Or on Open Source? Should they all quit unless they're comfortable being accomplices to the creation of machines of mayhem? Or what about everyone who works for BoA or Wells Fargo - how can you in good faith work for companies that have again and again been shown to engage in questionable business practices? Oil companies, too, have engaged in questionable activities in Africa and elsewhere - how can people work for them? And then there are the medical, forestry, make-up and, well, pretty much every other major industry. Full of people who choose to put their moral obligations aside. Never mind that the clothes we wear are the result of child labor and the like.


Unfortunately they’re also very effective at killing foreign nationals who have nothing to do with any conflict. And if you reply with the weasel words “collateral damage” then, with all due respect, fuck you. (Sorry dang, I’ll take any downvotes as deserved, but this comment is unacceptable in my eyes.)


How much of your objection has to do with the technology, and how much with the conflict itself?

In other words, which question is more appealing:

* Given that we're in this conflict, what should we do?

* How can we prevent these kinds of conflicts in the future?

Those are very different conversations. Personally I'm more focused on the second one. But we have to take care of the first one too. And it's messy.


The answer to the second arises from the first, to wit: “promiscuous killing of citizens radicalises survivors”. The invisible but entirely audible presence of the drone is a reminder that a foreign power has the ability to kill you at the touch of a button for no reason other than that you were in the wrong place at the wrong time. And that’s meant to protect people?


Would there be an article if the company in question were not as large/influential?


Well, if they had very intelligent engineers who leveraged their positions and knowledge for work that directly opposes their personal views, then yes.


I'm staunchly opposed to war, but understand its unfortunate necessity under certain extreme circumstances.

If these types of projects make war machines more precise overall, they may actually decrease overall collateral damage and reduce the total time war is waged, which could cause fewer lives to be lost during war.

Until we humans can collectively overcome the various problems that cause war, it might be worth it for the best minds to help make war machines as precise as possible.


Your argument is: "By making war more efficient/effective, we will reduce its use."

Say a government only has nuclear bombs in its arsenal and they really want an enemy of the state dead. Do you think they're willing to nuke an entire city to kill one person?

Now imagine a government has electronic kill switches. Imagine it being almost like The Matrix: they can just flip your life off at the flick of a button. Do you think they'd be willing to just flick off the lives of anyone they don't like?

You're effectively arguing that the latter is better than the former. Societies use more of a technology the further down the learning curve its development goes and the cheaper the technology becomes. If there is no cost to violence, violence will be endless.


My argument is that more precise war machines might cause less collateral death, thus resulting in fewer overall casualties from conflict.

Unfortunately, historically speaking, the use of war is assured.


Violence is already endless. It’s just a matter of moral vanity whether or not you feel better about not being personally involved.

We’ve tried non-intervention before, and places like Czechoslovakia, Poland, China, and Rwanda have paid the price. And the adversaries we have faced, from Hitler to Daesh, would not hesitate, as we would, to use weapons of mass destruction. The reason asymmetric warfare works is that the terrorist is willing to stoop to levels that we are not. The only counter to that is precision warfare.


> "Violence is already endless. It’s just a matter of moral vanity whether or not you feel better about not being personally involved."

Nonsense. Violence is still limited, in time and extent, by economic and political forces, and by the fact that soldiers still have some respect for the lives of other people, because they are in the field, risking their own lives and seeing the injustices of war. But put them in control of what amounts to a violent video game, and that may change.

> We’ve tried non-intervention before, and places like Czechoslovakia, Poland, China, and Rwanda have paid the price.

You've tried intervention, and places like Korea, Vietnam, Iran, Iraq, Syria also have paid and are paying the price.

Perhaps it is not about the intervention/non-intervention, but about how you engage in the world.

> The only counter to that is precision warfare.

Only if you already decided on the warfare part. An alternative is to stop killing unknown people abroad (and thereby helping to create new terrorists), and instead to do something about cooperation between governments to stop the violence.


> Only if you already decided on the warfare part. An alternative is to stop killing unknown people abroad (and thereby helping to create new terrorists), and instead to do something about cooperation between governments to stop the violence.

So your solution is to negotiate with people like Hitler and Bin Laden and to appease groups like Daesh. Ask the millions murdered in the Holocaust how that worked out for them, how Western non-violence was the solution.

While you're at it, ask the millions of South Koreans living in peace and prosperity how they feel about Western intervention to protect and even rescue them and their ancestors from the Kim regime. Ask the people of Bosnia and Kosovo whether we should have left them to the whims of Milosevic, just as we left the Tutsis to their fate.

We're not the ones who have decided on the warfare part. We're not the ones who expand their reach by slaughtering boys and men while kidnapping and raping women and girls wherever they go. We're not the ones who commit ethnic cleansing and hack babies apart with machetes. That's already been decided. The only decision we have is between leaving these victims to their fate and living up to the words, "never again".


> I'm staunchly opposed to war, but understand its unfortunate necessity under certain extreme circumstances.

> If these types of projects make war machines more precise overall, they may actually decrease overall collateral damage and reduce the total time war is waged, which could cause fewer lives to be lost during war.

"I'm staunchly opposed to beating my children, but understand it's unfortunate necessity under certain extreme circumstances.

If these types of projects make child beatings more precise overall, they may actually decrease overall harm to children...."


So killing ISIS members is morally equivalent to beating children?


I just mean that if you're opposed to something, then there aren't any times when it's also a necessity. You may do it, but ultimately you view it as unnecessary. War happens, but it's unnecessary every time; even when one side initiates it, that doesn't make it necessary.


It's necessary for you when the other side has decided to engage in it against you. The fact that it was technically "unnecessary" for them to start the war in the first place is rather irrelevant when they're marching down your street. The decision about whether there's going to be a war, necessary or not, has been made for you.

If you don't believe me, I'm sure there's a Kurd somewhere who would be amused to hear your theories about why their fight against ISIS is "unnecessary".


You're still missing my viewpoint. It's always unnecessary, whether you engage or not.


I understand your point perfectly. My point is that that's only true by the most literal, pedantic, and irrelevant definition of "unnecessary".



