
So, I'm all for giving someone the benefit of the doubt if they have a change of heart upon reconsidering an issue, but this coming after the fact rings a bit hollow to me. I think the only principle at play here is that it became a PR issue. That's fine, but let's be honest about it.

Early emails between Google execs framed this project only in terms of revenue and potential PR backlash. As far as we're aware, there was no discussion about the morality of the matter (I'm not taking any moral stance here, just to be clear). Once this became an internal and external PR issue, Google held a series of all-hands meetings and claimed that this was a "small project" and that the AI would not be used to kill people. While technically true, those same internal emails show that Google expected this to become a much larger project over time, eventually bringing in about $250M / year[1]. So even then they were being a bit disingenuous by focusing only on the current scope of the deal.

And here we are now with a release from the CEO talking about morality and "principles" well after the fact. I'm not buying the "these are our morals" bit, though I doubt many people do anyway.

https://www.bizjournals.com/sanjose/news/2018/06/01/report-g...




I doubt that Google spelling out their moral stance is intended to convince you right away that they're all good now. It's a public standard that they're setting for themselves. If you think their actions don't match their words, you now have concrete terms and principles to critique and compare with. It's a benchmark to which employees and the public can hold them accountable.


> It's a public standard that they're setting for themselves.

I'd like to really draw attention to the "for themselves" part here. Yes, this is a public document, and of course it serves a PR purpose, but the function of setting the terms for internal discussion at Google is at least as important.

I think that since most people aren't Google employees, they tend to focus on the PR angle of this, but I don't even think that's the primary motivation.


Small addendum: Big companies are big.

I didn't see the actual email chain (the raw emails weren't published?), but at Google's size it's conceivable there wasn't company-wide exec awareness of the details.

That's how big organizations operate.


Given how many people don't hold Microsoft accountable for past misdeeds (the last 4 posts about the GitHub acquisition are endless arguments about it), there is little reason to believe it's going to be different with Google.

For them, it's always better to benefit from screwing up. If you don't get caught, yay! If you do, apologize, wait a bit, and let your PR team work their magic. Bam, you're in the clear again.

Why would they do otherwise if they can keep the loot and face so few consequences?


1. Does Microsoft have written promises that they broke about their past acquisitions? In the case of Skype it's going quite poorly, but as far as I know LinkedIn is running quite independently and is doing well. Nokia again is doing pretty poorly, but Mojang also seems to be doing fine. It's pretty hit and miss, but to be fair, smartphones and communications are pretty hard industries to succeed in.


All the arguments have already been made in past threads. There's no use repeating them here.


As a neutral observer, I've not been on past threads. Most people who don't have a particular interest in this haven't. It would be nice to hear both sides of the argument.


Go back to the threads on the GitHub acquisition. There are at least 4 of them from the past week. They are very long, very rich, and very divided, so making a tl;dr would be too much work.


If people are complaining about Microsoft acquiring GitHub then is that not people trying to hold Microsoft accountable?

If Microsoft's sins were truly forgiven or forgotten, people wouldn't be complaining about the acquisition.


You missed the numerous HN comments defending Microsoft.

You missed the people on Reddit or Imgur singing Microsoft's praises.

They now have a fan base.

A fan base.

That's not something I would have ever imagined in the '90s.


Yes, they are a big company with many facets. You can like some parts and dislike others.

They have always had a fan base, even during those dark times (though not as big a one). But it seems they worked on engaging others and now have a bigger fan base.


Perhaps another good example, closer to what Google is doing, is Cisco providing China the means to build their Great Firewall. They got some criticism for it for a while, but China's censorship regime has since become the "new normal" and has clawed its way into Western media via the country's heavy investment in Hollywood studios.


Historically has anyone succeeded in holding such giant firms accountable to their own stated principles? At the moment, I like those principles more than I like Google.


I'm not sure externally being held accountable is as important as it would seem.

Publicly stated principles such as these give a clear framework for employees to raise ethical concerns in a way that management is likely to listen to.

For example, one of my previous employers had ten "tenets of operation" that began with "Always". While starting each one with "never" would have been more accurate in practice, they were still useful. If you wanted to get management to listen to you about a potential safety or operational issue, framing the conversation in terms of "This violates tenet #X" was _extremely_ effective. It gave them a common language to use with their management about why an issue was important. Otherwise, potentially lethal safety hazards were continually blown off and the employees who brought them up were usually reprimanded.

Putting some airy-sounding principles in place and making them very public is effective because they're an excellent internal communication tool, not because of external accountability.


Look at it from the other side: with those principles written down, executives will at least have the option to adhere to them, something to point at when they do. Without them, shareholders might give them a very hard time for every not-strictly-illegal profit opportunity they preferred to skip.

Google might be in a position to not get bullied around much by investors though, so that line of thought might be slightly off topic here.


One example I can think of is private colleges. Many in the US have made public statements dedicating themselves to upholding principles like freedom of speech. Organizations like FIRE do a pretty good job of holding them accountable to those principles, and there are many instances in which they have documented policy or enforcement changes made due to their activism.


Arguably, the Googlers who stopped Maven just did. Labor organization is one of the few checks on this level of corporate power.


The funny thing about "holding people accountable" is that people rarely explain what it means, and I'm not even sure they know what it means themselves. It's a stock phrase in politics that needs to be made more concrete to have any meaning.


As best as I can tell, it means something like "using the generally available levers of social shame and guilt to dissuade someone from doing something, or if they have already done the bad thing, then requiring them to explain their behavior in a satisfactory way and make a public commitment to avoid doing it again."


And it requires that you be in a position of power - otherwise it's just heckling, which isn't likely to have any real impact. In this case it'd be having the ability to impose fines, or discipline corporate officers, etc.


I wouldn't think of bad press as "just heckling." A company's reputation can be worth billions in sales.

It's true that many boycotts fizzle out, though.


> It's a public standard that they're setting for themselves.

They already had a public standard that people actually believed in for a good many years: "Don't be evil."

They've been palpably moving away from that each year, and it's been obvious in their statements, documents, as well as actions.


"Don't be evil" is incredibly vague and practically meaningless. What the hell is evil, and since when did everyone agree on what evil means? It's obvious to you that they're getting "evil", it certainly isn't obvious to me.


Is explicitly circumventing a browser's privacy setting evil? [1]

How about shaking down a competitor? [2]

[1] http://fortune.com/2016/08/30/google-safari-class-action/

[2] https://www.bostonglobe.com/business/2015/05/19/skyhook-got-...


Collusion to keep salaries down may not be evil in the supervillain sense, but it's hard to see it as ethical.

Not being evil has always been a sideshow to the main event: the enormous wealth generation that paid for all the good stuff. It's still the wealth generation in the driver's seat.


Even disregarding the issue of how "evil" is defined, there is another level of vagueness: when does one become evil, as opposed to only doing some evil? Arguably, one could do some amount of evil deeds without actually being evil.

The above is sometimes mentioned in discussions, where people point out that the motto is "don't be evil" and not "don't do evil".


>If you think their actions don't match their words, you now have concrete terms and principles to critique and compare with.

What I think is that they will go forward with any project that has potential for good return if they don't think it will blow up in their faces, and that opinion is based on their past behavior.


I didn't realize they already have past behaviour of violating their own stated AI principles within a day of publishing them. /s

Doesn't sound like you're really that willing to give them the benefit of the doubt like you said.


>Doesn't sound like you're really that willing to give them the benefit of the doubt like you said.

I said I'm all for giving the benefit of the doubt _but_... That _but_ is important as it explains why I don't really buy it this time around, and that's based on how they handled this situation.

And c'mon, really; should judging their behavior be based solely on ML code (it's not AI; let's avoid marketing terms)? Why does the application matter? They've violated their own "don't be evil" tenet (in spirit, not saying they are literally "evil") before.


> Why does the application matter?

Possibly because it's literally the subject of this thread, blog post, and the change of heart we're discussing.

> but this coming after the fact rings a bit hollow to me

^ from your original comment. So you don't buy the change of heart because...they had a change of heart after an event that told them they need a change of heart?

Did you expect them to have a change of heart before they realized they need to have a change of heart? Did you expect them to already know the correct ethics before anything happened and therefore not need the change of heart that you'd totally be willing to give them the benefit of the doubt on?

> They've violated their own "don't be evil" tenet (in spirit, not saying they are literally "evil") before.

Right, in the same way that I can just say they are good and didn't violate that tenet based on my own arbitrary set of values that Google never specified (in spirit, of course, not saying they are literally "good", otherwise I'd be saying something meaningful).

It still doesn't look like you were ever willing to give them the benefit of the doubt on a change of heart like the one expressed in this blog post. Which is fine, if you're honest about it. Companies don't inherently deserve trust. But don't pretend to be a forgiving fellow who has the graciousness to give them a chance.


Even if they abide by this, who's to say that once a party has some Google developed military AI, they won't misuse it? I fail to see how Google can effectively prevent this.


If they develop an AI that administers medicine to veterans, and the army takes it and changes it so it administers torture substances to prisoners of war, is that Google's fault or the army's fault?

Google makes tools with a purpose in mind, but like many other technologies in history, they can always be twisted into something harmful, just like Einstein's theory of relativity was used as a basis for the first nuclear weapons.


> It's a public standard that they're setting for themselves. If you think their actions don't match their words, you now have concrete terms and principles to critique and compare with.

Absolutely. Because "Don't be evil." was so vague and hard to apply to projects with ambiguous and subtle moral connotations like automating warfare and supporting the military-industrial complex's quest to avoid peace in our time ;)


Yes, like “do no evil”.


• At first there was no explicit policy.

• A decision was taken (contract entered into).

• As part of the backlash to that decision, one of the demands was that an explicit policy be created.

• That demand was accepted, and what you see here is the outcome.

(Sorry for the passive voice….)

That's all there is to it; I don't see a claim anywhere that the article reflects anyone's principles or morals in the past, only that this is a (only now thought-out) policy on how to decide about similar things in the future (“So today, we’re announcing seven principles to guide our work going forward”).


The emails you're aware of framed it in those terms. Do you think a leaker might selectively leak with the intent to paint a particular picture?

As someone who has helped write policy, it is literally impossible to write up all policy in advance. You always end up weathering some angry insights in the comment section (we used to call it the peanut gallery). If you can write out all policy a priori, every CEO, economist, scientist, and psychologist can just go home now.


Why would your default stance be to believe a company over a leaker?

The company has millions to gain from the contract and hasn’t shown morals on this issue.

The leaker has so much to lose by releasing the documents, everything from their career to a significant portion of their life. You could call that incentive to deceive, but I call it incentive to be just about their leak.

Especially when it'd be so easy for the company to leak counterexamples showing moral consideration if they did...


> Why would your default stance be to believe a company over a leaker?

Because Sundar Pichai has a strong public track record. Every CEO ends up with warts, but I have some sense of what he's about. The leaker, I have zero information on. Given known vs unknown, I put more faith in the known. Whether I by default believe or disbelieve depends on who's saying what.


Many of the Googlers involved in the campaign against Project Maven are engineers of high caliber and those I know of are fairly activist in general. While I haven't always agreed with those I've interacted with, they're high quality people with high quality records. The sort of Googlers protesting Maven are the sort of Googlers who made Google the sort of company people loved. And they've put their careers on the line to make a statement about what is and isn't okay.

Sundar Pichai's claim to fame was getting the Google Toolbar installer (and later the Chrome one) injected into the Adobe Reader installer. [0]

[0] https://www.quora.com/What-did-Sundar-Pichai-do-that-his-pee...


I don't know how accurate it is to say that these engineers have put their careers on the line. It could also be that they wouldn't be able to make these statements were they not secure enough in their jobs to feel their careers wouldn't be on the line.


While they definitely have above average incomes and probably some good financial security, there's a Damore factor risk: Public attention could render them unhireable if they come off as troublesome or likely to cause issues with future employers.


> Microsoft didn't even ask their customers for permission. They just automatically switched anyone who installed IE7 to Bing as the default.

Don’t worry everyone it’s different now!


I don't think they're believing anyone over anyone, but rather entering the discussion with a fair amount of skepticism.

The point is that in any discussion, both sides have biases, and you need to take both sides into consideration to get a fuller picture.


If they really believed this stuff, I don’t see why they would have had so many resignations over the issue. Had somebody brought up the ethical aspects when the project was being discussed, they wouldn’t be scrambling to limit the damage now.


A few people resigned and they're trying to limit the damage because people like being outraged at things that don't matter. It's the same thing that got James Damore fired.


AI-powered drone warfare doesn't matter?

What does matter in your world?


Fulfilling a military contract isn't the same as killing people with drones. Virtually every plane you've ever flown on is built by a company that also builds killing machines, but you don't see people throwing a hissy fit over it.


>Do you think a leaker might selectively leak with the intent to paint a particular picture?

Possibly. As I said, "...that we're aware of." Anything beyond what we know is speculation. This is the information I have available. Let me ask you this: if there was real debate and concern beforehand, why is it only now that Google has decided to back out?


Because one very good policy is to not make policy in the heat of the moment. Write things down. Discuss with confidantes, counsel, etc. Sleep on it. Change it. Sleep on it again. The bigger the issue, the more you think about it before going public.


While selectively leaking certain emails and withholding others might color an issue, it won't make a negative into a positive. And if the leaked emails aren't genuine, I have not seen any claims to that effect. So either they are real and paint a real, possibly distorted, picture, or they're false; but as it sits right now, they are the evidence that people use to build their opinions on. If there is an email trail that establishes a different narrative, Google is of course perfectly free to release that too, to offer some counterweight.


> we used to call it the peanut gallery

At some risk of proving your point:

At least you're honest about your contempt for the common man.


Organizations do not always communicate well, and sometimes things only reach the CEO via the press. Do not assume that the whole organization, or even just its executive, agrees with or knows about everything the organization does.


In addition to this, individual executives and teams of them often have to compartmentalize discussions of different aspects of a complex issue. This makes taking specific communications, chosen by a third party, as indicative of the whole conversation iffy.


There was a similar "come to Jesus" moment with Google in China. They saw that they had to do the right thing after years of censoring and manipulating data for the Chinese government, but only after they got massively hacked, blocked, and essentially forced out of China...

However a good thing done for the wrong reasons is still a good thing.


> However a good thing done for the wrong reasons is still a good thing.

Agreed, and I try not to be too hard on them. I don't think it's a black-and-white issue personally; the only issue I have is how this implies Google always wants to do the right thing from the get-go, which very much seems to not be the case here.


You should read Daniel Kahneman's "Thinking, Fast and Slow". It's not possible to make all the policy decisions that will look right in hindsight before a sentinel event occurs. Hindsight is always 20/20. Anyone making real decisions of consequence will eventually curse hindsight.


Well Google is now framing this as a moral issue, so did morality change significantly between when they accepted this project and today?


Do you think regret is morally valid?


Sure.

But Google has no intention of doing the right thing any more than Microsoft or Disney does. These are corporations, and their executives HAVE to do what they think will be best for the corporation. Not what they think is best for mankind.

This is how for profit businesses currently work. And PR saying anything to the contrary is simply not true.


This is a gross generalization that people trot out as if it were unassailable but never back it up with any support.

Corporations are run by people with a complex set of motivations and constraints in which they make decisions. Some of them make decisions with intent to harm. Some make decisions with intent to help.

No one person is automatically turned into a ruthless, amoral person just by being employed at a corporation.


... and most make decisions in a space where (local) zero-sum games mean there is no option available that uniformly helps or harms.


It gets complicated, but it's more about the employees' responsibility to shareholders, not their personal morals.

https://www.reddit.com/r/law/comments/3pv8bh/is_it_really_tr...


And do you know what can happen when a person's own morals or ethics come in to conflict with their responsibility to shareholders?

They can quit. They can speak out. They can organize. They can petition for change. They could join the more ethical competition (if one exists), or start their own.

This is especially easy to do for employees of a company like Google, with excellent job prospects and often enough "fuck you money" to do whatever they want without serious financial hardship.

They are not hopelessly chained to the corporate profit machine. They can revolt -- that is, if their morals are important enough to them. Otherwise they can stay on and try not to rock the boat, or pretend they don't know or are helpless to act.

A handful of Google employees chose to act and publicly express their objections. This action got results. More employees in companies which act unethically should follow their lead.


I used to work at Google about 5 years ago. While I was there, it was clear that Google employed some of the most morally conscientious people I've ever worked with. It's why I still trust them to this day with data that I would never trust anyone else with. As long as Google employees continue to have a voice in the company, I'll continue to trust them.


Google public shareholders do not have control of the company. Larry, Sergey, and Eric are the only shareholders who matter. So executives are responsible to them first and foremost.


Even if this is true, they can make the subjective decision that doing certain things will make the company look bad in the eyes of employees (which not only can cause employees to resign, but can disadvantage a company in negotiations to hire new employees) and users of the product, and can ultimately be worse for their bottom line than things that don't bring the same short-term financial benefits.

Ultimately, though, I agree with zaphar that you are overgeneralizing, since corporations are controlled by humans -- executives, other employees, and shareholders -- and human motivations can be complex.


This sort of thing gets said a lot. It's not a valid excuse and it's not true in the black and white sense that people constantly present it.


Otoh, Google tries to claim much more moral high ground than they actually have. Insincerity does rub people the wrong way.


> However a good thing done for the wrong reasons is still a good thing.

I'd say it is definitely better than not doing a good thing. For me, the real question is this though: considering there is a pattern here (doing the right thing after doing the wrong thing), do you trust they will do the right thing in the first place next time?


> However a good thing done for the wrong reasons is still a good thing.

Yes, but we should absolutely remember what the original intention was.


Hmm, not really. Google is bad and they should feel bad; you're just handwaving away how bad they are because you like them.

Imagine I wanted to have somebody killed and I hired a hitman to kill them, and when I go to pay the hitman I accidentally wire the money to the wrong place and inadvertently pay off the victim's mortgage instead of paying the hitman. Now the victim doesn't die and gets their mortgage paid off. I'm not a good guy; what I did is not a good thing, I just fucked up, that's all. Had everything gone to plan, the guy would be dead and I would be blameless.

Similarly, if everything had gone to plan, Google AI would now be powering various autonomous murder bots, except they realized that they didn't want to be associated with this, not because they have any morals, but because WE DO. They are still bad.


>Imagine I wanted to have somebody killed and I hired a hitman to kill them, and when I go to pay the hitman I accidentally wire the money to the wrong place and inadvertently pay off the victim's mortgage instead of paying the hitman. Now the victim doesn't die and gets their mortgage paid off. I'm not a good guy; what I did is not a good thing, I just fucked up, that's all.

That's an odd analogy considering the would-be conspirator didn't make a decision not to go through with it. Do you believe Google published this article by accident? And really; comparing Google's actions to murder... c'mon.


I'm not comparing Google's actions to murder specifically; that's simply you not being able to see the forest for the trees. The only reason they wrote the article is to make it seem like what they did was a proactive moral choice, when in reality it was a retroactive action to reframe their realization that, in supplying AI for the DoD's murder bots, they would be part of the evil empire. I mean, it's literally mustache-twirling levels of a lack of scruples.

They didn't fess up because they realized that the outcome of their actions would be bad; they fessed up because YOU realized that the outcome of their actions would be bad.


> comparing Google's actions to murder

You think people weren't or wouldn't be killed based on intel gathered from the project?


Believe me when I say that I do not like Google. But what is a person or a group of people supposed to do when they have done something wrong? All they can do is stop doing it and try to avoid doing it in the future.

You can speculate about their motives (I personally believe it to be PR-motivated as well), but what matters in the end is that it stopped happening.


That's all fine, but that person or group shouldn't expect everybody to love them afterwards. If anything, they should expect distrust and dislike.


Google’s disregard for the ethics of the arrangement is further supported by this interview with one of the Googlers who organized the resistance to this project:

https://jacobinmag.com/2018/06/google-project-maven-military...


Jeez, that's just brutal. I try to avoid content-free comments like this one but... damn. Just brutal.

If things actually went down like this person describes I would have been out the door too. (Due to the betrayal. I actually think Project JEDI is pretty cool.)


I noticed there was one particular phrase missing from Apple's WWDC this year: artificial intelligence.

It wasn't mentioned once. Never in the presentations, and none of the talk titles included it either.

Machine Learning was mentioned and Apple also released many updates related to it, but never did they call it artificially intelligent, or use the phrase artificial intelligence. Not a single time. Go ahead and watch it if you haven't already. They'll never say those words.

Pretty remarkable considering it was a developer conference, I wonder why?


I credit their sense of discipline in using the more accurate term. They know they can only stand to lose down the road if they over-hype notions of "artificial intelligence" now, when it really is a misnomer. And they're probably betting that's what will happen to their competitors who more enthusiastically use the phrase for marketing.


Probably more that they aren't as heavily invested in selling the idea that we are close to smart general AI.

Compare it to Musk who constantly talks up the idea while the actual AI systems his company has deployed are killing people.


That's a perfect example of someone who might get bitten in the ass by the over-selling thing, too.


Musk's systems are more A than I.


Probably because "machine learning" is a more accurate name, while "artificial intelligence" is the buzzword for non-tech people. There's nothing "intelligent" in machine learning.


Impending AI Winter poised to make a lot of people look like fools?


The article you linked says the email was written by an exec, but the source article in The Intercept said it was written by "a member of the sales team". It also says Fei-Fei Li, arguably the only exec on the email chain, was actually against it:

“I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry,” she continued. “Google Cloud has been building our theme on Democratizing AI in 2017, and Diane and I have been talking about Humanistic AI for enterprise. I’d be super careful to protect these very positive images.”

If you take this literally you might say her concern is "just PR", but this is exactly the kind of argument I would use if I were against something on moral grounds but trying to convince someone who does not share the same values.


https://www.nytimes.com/2018/05/30/technology/google-project...

That article adds some pre-context to the quote above.

> "Avoid at ALL COSTS any mention or implication of AI,” she wrote [...] . “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google.”


I'd put this another way. Here is the CEO of Google laying down the law. And the first question to ask, before you bother reading or discussing any law, is what redress you have when someone breaks it.

If Google does start building weapons, you can always turn up with a sledgehammer. How do these principles reduce the need for you to do that?


If Google starts building weapons, wouldn't one want to turn up with something a bit more effective than a sledgehammer?


The context of those leaked emails pretty much ensures that it would be framed in terms of revenue and potential PR backlash. Even if those individuals have a moral problem with it, they're likely going to frame their argument in that way because that's probably the most effective argument to make in that context.


There's a lot to be said for this view, and I've reshared and highlighted your comments elsewhere.

At the same time, Google and Alphabet Inc., far-from-perfect entities which I have criticised, and do criticise, strongly and often, at least present a public face of learning from mistakes rather than doubling down or issuing repeated and increasingly trite and vacant apologies (say, Facebook).

This is a line Google have drawn in the sand for themselves, one that our various future selves may judge them against.


These principles seem to have already been wholesomely applied in Google's, and then Waymo's, approach to self-driving cars. This is in stark contrast to the approaches used by competitors such as Uber and Tesla, who appear to favor capturing the market first and foremost.

It seems a narrow view to assume Google's only AI project with mortal human consequences is Maven, and then to use that narrow view to confirm your own negative bias about profit, perception, and disingenuousness.


I agree. I'm encouraged that there are people who can see through the propaganda.


Is that all? $250M/year is still peanuts for a company like Google.


It’s 250M a year for now.

Considering the size of the global arms trade, it’s very unlikely to stay 250M a year for long.


You hit the nail on the head.


And now someone else will easily pick up the contract, and Google loses any ability at all to influence it. Perhaps they could have reduced civilian casualties more than whoever picks it up will.


My crack selling business is the safest, too!


That's the biggest integrity-lacking cop-out imaginable.

"If I don't do it somebody else will!"


Give them a break. Based on how much it cost to buy a house in the Bay Area, I don't blame them. /s



