I'd like to really draw attention to the "for themselves" part here. Yes, this is a public document, and of course it serves a PR purpose, but the function of setting the terms for internal discussion at Google is at least as important.
I think that since most people aren't Google employees, they tend to focus on the PR angle of this, but I don't even think that's the primary motivation.
I didn't see the actual email chain (raw wasn't published?), but at Google-size it's conceivable there wasn't company-wide exec awareness of the details.
That's how big organizations operate.
For them, it's always better to profit from screwing up. If you don't get caught, great! If you do, apologize, wait a bit, and send your PR team to work their magic. Presto, you're in the clear again.
Why would they do otherwise, if they can keep the loot and face few consequences?
If Microsoft's sins were truly forgiven or forgotten, people wouldn't be complaining about the acquisition.
You've missed the people on Reddit and Imgur singing Microsoft's praises.
They now have a fan base.
A fan base.
That's not something I would have ever imagined in the '90s.
They have always had a fan base, even during those dark times (though not as large). But it seems they've worked on engaging others and now have a bigger one.
Publicly stated principles such as these give a clear framework for employees to raise ethical concerns in a way that management is likely to listen to.
For example, one of my previous employers had ten "tenets of operation" that began with "Always". While starting each one with "never" would have been more accurate in practice, they were still useful. If you wanted to get management to listen to you about a potential safety or operational issue, framing the conversation in terms of "This violates tenet #X" was _extremely_ effective. It gave them a common language to use with their management about why an issue was important. Otherwise, potentially lethal safety hazards were continually blown off and the employees who brought them up were usually reprimanded.
Putting some airy-sounding principles in place and making them very public is effective because they're an excellent internal communication tool, not because of external accountability.
Google might be in a position to not get bullied around much by investors though, so that line of thought might be slightly off topic here.
It's true that many boycotts fizzle out, though.
They already had a public standard that people actually believed in for a good many years: "Don't be evil."
They've been palpably moving away from that each year, and it's been obvious in their statements and documents, as well as their actions.
How about shaking down a competitor? 
Not being evil has always been a side-show to the main event: the enormous wealth-generation that paid for all the good stuff. It's still the wealth-generation in the driver's seat.
The above is sometimes mentioned in discussion, where people point out that the motto is "don't be evil" and not "don't do evil".
What I think is that they will go forward with any project that has potential for good return if they don't think it will blow up in their faces, and that opinion is based on their past behavior.
Doesn't sound like you're really that willing to give them the benefit of the doubt like you said.
I said I'm all for giving the benefit of the doubt _but_... That _but_ is important as it explains why I don't really buy it this time around, and that's based on how they handled this situation.
And c'mon, really; should judging their behavior be based solely on this ML (it's not AI; let's avoid marketing terms) project? Why does the application matter? They've violated their own "don't be evil" tenet (in spirit, not saying they are literally "evil") before.
Possibly because it's literally the subject of this thread, blog post, and the change of heart we're discussing.
> but this coming after the fact rings a bit hollow to me
^ from your original comment. So you don't buy the change of heart because...they had a change of heart after an event that told them they need a change of heart?
Did you expect them to have a change of heart before they realized they need to have a change of heart? Did you expect them to already know the correct ethics before anything happened and therefore not need the change of heart that you'd totally be willing to give them the benefit of the doubt on?
> They've violated their own "don't be evil" tenet (in spirit, not saying they are literally "evil") before.
Right, in the same way that I can just say they are good and didn't violate that tenet based on my own arbitrary set of values that Google never specified (in spirit, of course, not saying they are literally "good", otherwise I'd be saying something meaningful).
It still doesn't look like you were ever willing to give them the benefit of the doubt on a change of heart like the one expressed in this blog post. Which is fine, if you're honest about it. Companies don't inherently deserve trust. But don't pretend to be a forgiving fellow who has the graciousness to give them a chance.
Google makes tools with a purpose in mind, but like many other technologies in history, they can always be twisted into something harmful, just like Einstein's Theory of Relativity was used as basis for the first nuclear weapons.
Absolutely. Because "Don't be evil." was so vague and hard to apply to projects with ambiguous and subtle moral connotations like automating warfare and supporting the military-industrial complex's quest to avoid peace in our time ;)