Early emails between Google execs framed this project only in terms of revenue and potential PR backlash. As far as we're aware, there was no discussion about the morality of the matter (I'm not taking any moral stance here, just to be clear). Once this became an internal and external PR issue, Google held a series of all-hands meetings and claimed that this was a "small project" and that the AI would not be used to kill people. While technically true, those same internal emails show that Google expected this to become a much larger project over time, eventually bringing in about $250M/year. So even then they were being a bit disingenuous by focusing only on the current scope of the deal.
And here we are now with a release from the CEO talking about morality and "principles" well after the fact. I'm not buying the "these are our morals" bit, and I doubt many people are.
I'd like to really draw attention to the "for themselves" part here. Yes, this is a public document, and of course it serves a PR purpose, but the function of setting the terms for internal discussion at Google is at least as important.
I think that since most people aren't Google employees, they tend to focus on the PR angle of this, but I don't even think that's the primary motivation.
I didn't see the actual email chain (the raw version wasn't published?), but at Google's size it's conceivable there wasn't company-wide exec awareness of the details.
That's how big organizations operate.
For them, it's always better to profit from screwing up. If you don't get caught, great! If you do, apologize, wait a bit, and let your PR team work their magic. Boom, you're in the clear again.
Why would they do otherwise if they can keep the loot and face so few consequences?
If Microsoft's sins were truly forgiven or forgotten, people wouldn't be complaining about the acquisition.
You missed the people on Reddit and Imgur singing Microsoft's praises.
They now have a fan base.
A fan base.
That's not something I would have ever imagined in the '90s.
They have always had a fan base, even during those dark times (though a smaller one). But it seems like they've worked on engaging others and now have a bigger fan base.
Publicly stated principles such as these give a clear framework for employees to raise ethical concerns in a way that management is likely to listen to.
For example, one of my previous employers had ten "tenets of operation" that began with "Always". While starting each one with "never" would have been more accurate in practice, they were still useful. If you wanted to get management to listen to you about a potential safety or operational issue, framing the conversation in terms of "This violates tenet #X" was _extremely_ effective. It gave them a common language to use with their management about why an issue was important. Otherwise, potentially lethal safety hazards were continually blown off and the employees who brought them up were usually reprimanded.
Putting some airy-sounding principles in place and making them very public is effective because they're an excellent internal communication tool, not because of external accountability.
Google might be in a position to not get bullied around much by investors though, so that line of thought might be slightly off topic here.
It's true that many boycotts fizzle out, though.
They already had a public standard that people actually believed in for a good many years: "Don't be evil."
They've been palpably moving away from that each year, and it's been obvious in their statements and documents, as well as their actions.
How about shaking down a competitor? 
Not being evil has always been a sideshow to the main event: the enormous wealth generation that paid for all the good stuff. It's still the wealth generation in the driver's seat.
The above is sometimes mentioned in discussions, where people point out that the motto is "don't be evil" and not "don't do evil".
What I think is that they will go forward with any project that has the potential for a good return, as long as they don't think it will blow up in their faces; that opinion is based on their past behavior.
Doesn't sound like you're really that willing to give them the benefit of the doubt like you said.
I said I'm all for giving the benefit of the doubt _but_... That _but_ is important as it explains why I don't really buy it this time around, and that's based on how they handled this situation.
And c'mon, really: should judging their behavior be based solely on ML code (it's not AI; let's avoid marketing terms)? Why does the application matter? They've violated their own "don't be evil" tenet before (in spirit; I'm not saying they are literally "evil").
Possibly because it's literally the subject of this thread, blog post, and the change of heart we're discussing.
> but this coming after the fact rings a bit hollow to me
^ from your original comment. So you don't buy the change of heart because... they had a change of heart after an event that told them they needed a change of heart?
Did you expect them to have a change of heart before they realized they needed one? Did you expect them to already know the correct ethics before anything happened, and therefore not need the change of heart you'd totally be willing to give them the benefit of the doubt on?
> They've violated their own "don't be evil" tenet (in spirit, not saying they are literally "evil") before.
Right, in the same way that I can just say they are good and didn't violate that tenet based on my own arbitrary set of values that Google never specified (in spirit, of course, not saying they are literally "good", otherwise I'd be saying something meaningful).
It still doesn't look like you were ever willing to give them the benefit of the doubt on a change of heart like the one expressed in this blog post. Which is fine, if you're honest about it. Companies don't inherently deserve trust. But don't pretend to be a forgiving fellow who has the graciousness to give them a chance.
Google makes tools with a purpose in mind, but like many other technologies in history, they can always be twisted into something harmful, just like Einstein's Theory of Relativity was used as a basis for the first nuclear weapons.
Absolutely. Because "Don't be evil" was so vague and hard to apply to projects with ambiguous and subtle moral connotations like automating warfare and supporting the military-industrial complex's quest to avoid peace in our time ;)
• A decision was taken (contract entered into).
• As part of the backlash to that decision, one of the demands was that an explicit policy be created.
• That demand was accepted, and what you see here is the outcome.
(Sorry for the passive voice…)
That's all there is to it; I don't see a claim anywhere that the article reflects anyone's past principles or morals, only that this is a policy (thought out only now) on how to decide about similar things in the future (“So today, we’re announcing seven principles to guide our work going forward”).
As someone who has helped write policy, I can say it is literally impossible to write all policy in advance. You always end up weathering some angry insights from the comment section (we used to call it the peanut gallery). If you could write out all policy a priori, every CEO, economist, scientist, and psychologist could just go home now.
The company has millions to gain from the contract and hasn’t shown morals on this issue.
The leaker has so much to lose by releasing the documents, everything from their career to a significant portion of their life. You could call that an incentive to deceive, but I call it an incentive to be honest about their leak.
Especially when it'd be so easy for the company to leak counterexamples showing moral consideration, if any existed...
Because Sundar Pichai has a strong public track record. Every CEO ends up with warts, but I have some sense of what he's about. The leaker, I have zero information on. Given known vs unknown, I put more faith in the known. Whether I by default believe or disbelieve depends on who's saying what.
Sundar Pichai's claim to fame was getting the Google Toolbar installer (and later the Chrome one) injected into the Adobe Reader installer. 
Don’t worry everyone it’s different now!
The point is that in any discussion, both sides have biases, and you need to take both sides into consideration to get a fuller picture.
What does matter in your world?
Possibly. As I said, "as far as we're aware." Anything beyond what we know is speculation. This is the information I have available. Let me ask you this: if there was real debate and concern beforehand, why is it only now that Google has decided to back out?
At some risk of proving your point:
At least you're honest about your contempt for the common man.
However, a good thing done for the wrong reasons is still a good thing.
Agreed, and I try not to be too hard on them. I don't think it's a black-and-white issue, personally; the only issue I have is that this implies Google always wants to do the right thing from the get-go, which very much seems not to be the case here.
But Google has no intention of doing the right thing any more than Microsoft or Disney does. These are corporations, and their executives HAVE to do what they think will be best for the corporation. Not what they think is best for mankind.
This is how for profit businesses currently work. And PR saying anything to the contrary is simply not true.
Corporations are run by people with a complex set of motivations and constraints in which they make decisions. Some of them make decisions with intent to harm. Some make decisions with intent to help.
No one is automatically turned into a ruthless, amoral person just by being employed at a corporation.
They can quit. They can speak out. They can organize. They can petition for change. They could join the more ethical competition (if one exists), or start their own.
This is especially easy to do for employees of a company like Google, with excellent job prospects and often enough "fuck you money" to do whatever they want without serious financial hardship.
They are not hopelessly chained to the corporate profit machine. They can revolt -- that is, if their morals are important enough to them. Otherwise they can stay on and try not to rock the boat, or pretend they don't know or are helpless to act.
A handful of Google employees chose to act and publicly express their objections. This action got results. More employees in companies which act unethically should follow their lead.
Ultimately, though, I agree with zaphar that you are overgeneralizing, since corporations are controlled by humans -- executives, other employees, and shareholders -- and human motivations can be complex.
I'd say it is definitely better than not doing a good thing. For me, though, the real question is this: considering there is a pattern here (doing the right thing after doing the wrong thing), do you trust they will do the right thing in the first place next time?
Yes, but we should absolutely remember what the original intention was.
Imagine I wanted somebody killed, so I hired a hitman, and when I went to pay the hitman I accidentally wired the money to the wrong place, inadvertently paying off the victim's mortgage instead of paying the hitman. Now the victim doesn't die and gets their mortgage paid off. I'm not a good guy, and what I did is not a good thing; I just fucked up, that's all. Had everything gone to plan, the guy would be dead and I would be blameless.
Similarly, if everything had gone to plan, Google AI would now be powering various autonomous murder bots, except they realized they didn't want to be associated with this: not because they have any morals, but because WE DO. They are still bad.
That's an odd analogy, considering the would-be conspirator didn't make a decision not to go through with it. Do you believe Google published this article by accident? And really, comparing Google's actions to murder... c'mon.
They didn't fess up because they realized the outcome of their actions would be bad; they fessed up because YOU realized the outcome of their actions would be bad.
You think people weren't, or wouldn't be, killed based on intel gathered from the project?
You can speculate about their motives; I personally believe it to be PR-motivated as well, but what matters in the end is that it stopped happening.
If things actually went down the way this person describes, I would have been out the door too. (Due to the betrayal; I actually think Project JEDI is pretty cool.)
It wasn't mentioned even once: never in the presentations, and none of the talk titles included it either.
Machine learning was mentioned, and Apple released many updates related to it, but they never called anything artificially intelligent or used the phrase "artificial intelligence." Not a single time. Go ahead and watch it if you haven't already; they never say those words.
Pretty remarkable considering it was a developer conference. I wonder why?
Compare that to Musk, who constantly talks up the idea while the actual AI systems his company has deployed are killing people.
“I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry,” she continued. “Google Cloud has been building our theme on Democratizing AI in 2017, and Diane and I have been talking about Humanistic AI for enterprise. I’d be super careful to protect these very positive images.”
If you take this literally you might say her concern is "just PR", but this is exactly the kind of argument I would use if I were against something on moral grounds and trying to convince someone who doesn't share my values.
That article adds some pre-context to the quote above.
> "Avoid at ALL COSTS any mention or implication of AI,” she wrote [...] . “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google.”
If Google does start building weapons, you can always turn up with a sledgehammer. How do these principles reduce the need for you to do that?
At the same time, Google and Alphabet Inc., far-from-perfect entities which I have criticised and do criticise strongly and often, at least present a public face of learning from mistakes rather than doubling down or issuing repeated and increasingly trite and vacant apologies (say: Facebook).
This is a line in the sand that Google have drawn for themselves, one our various future selves may judge them against.
It seems a narrow view to assume Google's only AI project with mortal human consequence is Maven, and then to use that narrow view to confirm your own negative bias about profit, perception, and disingenuousness.
Considering the size of the global arms trade, it's very unlikely to stay at $250M a year for long.
"If I don't do it somebody else will!"