
I doubt that Google spelling out their moral stance is intended to convince you right away that they're all good now. It's a public standard that they're setting for themselves. If you think their actions don't match their words, you now have concrete terms and principles to critique and compare with. It's a benchmark to which employees and the public can hold them accountable.



> It's a public standard that they're setting for themselves.

I'd like to really draw attention to the "for themselves" part here. Yes, this is a public document, and of course it serves a PR purpose, but the function of setting the terms for internal discussion at Google is at least as important.

I think that since most people aren't Google employees, they tend to focus on the PR angle of this, but I don't even think that's the primary motivation.


Small addendum: Big companies are big.

I didn't see the actual email chain (the raw thread wasn't published?), but at Google's scale it's conceivable there wasn't company-wide executive awareness of the details.

That's how big organizations operate.


Given that a lot of people don't hold Microsoft accountable for past misdeeds (the last 4 posts about the GitHub acquisition are endless arguments about it), there is little reason to believe it will be different with Google.

For them, it's always better to benefit from screwing up. If you don't get caught, great! If you do, apologize, wait a bit, and let your PR team work their magic. Bam, you're in the clear again.

Why would they do otherwise if they can keep the loot and face so few consequences?


1. Did Microsoft break any written promises about their past acquisitions? In the case of Skype it's going quite poorly, but as far as I know LinkedIn is running quite independently and is doing well. Nokia again is doing pretty poorly, but Mojang also seems to be doing fine. It's pretty hit and miss, but to be fair, smartphones and communications are pretty hard industries to succeed in.


All the arguments have already been made in the past threads. There's no use repeating them here.


As a neutral observer, I haven't been in those past threads. Most people who don't have a particular interest in this haven't. It would be nice to hear both sides of the argument.


Go back to the threads on the GitHub acquisition. There are at least 4 of them from the past week. They are very long, very rich, and very divided, so making a tl;dr would be too much work.


If people are complaining about Microsoft acquiring GitHub, then isn't that people trying to hold Microsoft accountable?

If Microsoft's sins were truly forgiven or forgotten, people wouldn't be complaining about the acquisition.


You missed the numerous HN comments defending Microsoft.

You missed the people on Reddit and Imgur singing Microsoft's praises.

They now have a fan base.

A fan base.

That's not something I would have ever imagined in the '90s.


Yes, they are a big company with many facets. You can like some parts and dislike others.

They have always had a fan base, even during those dark times (though a smaller one). But it seems like they worked on engaging others and now have a bigger fan base.


Perhaps another good example, closer to what Google is doing, is Cisco providing China the means to build its Great Firewall. They got some criticism for it for a while, but China's censorship regime has since become the "new normal" and has clawed its way into Western media via the country's heavy investment in Hollywood studios.


Historically, has anyone succeeded in holding such giant firms accountable to their own stated principles? At the moment, I like those principles more than I like Google.


I'm not sure externally being held accountable is as important as it would seem.

Publicly stated principles such as these give a clear framework for employees to raise ethical concerns in a way that management is likely to listen to.

For example, one of my previous employers had ten "tenets of operation" that began with "Always". While starting each one with "never" would have been more accurate in practice, they were still useful. If you wanted to get management to listen to you about a potential safety or operational issue, framing the conversation in terms of "This violates tenet #X" was _extremely_ effective. It gave them a common language to use with their management about why an issue was important. Otherwise, potentially lethal safety hazards were continually blown off and the employees who brought them up were usually reprimanded.

Putting some airy-sounding principles in place and making them very public is effective because they're an excellent internal communication tool, not because of external accountability.


Look at it from the other side: with those principles written down, executives will at least have the option to adhere to them, and something to point at when they do. Without them, shareholders might give them a very hard time for every not-strictly-illegal profit opportunity they preferred to skip.

Google might be in a position to not get bullied around much by investors though, so that line of thought might be slightly off topic here.


One example I can think of is private colleges. Many in the US have made public statements dedicating themselves to upholding principles like freedom of speech. Organizations like FIRE do a pretty good job of holding them accountable to those principles, and there are many instances in which they have documented policy or enforcement changes made due to their activism.


Arguably, the Googlers who stopped Maven just did. Labor organization is one of the few checks on this level of corporate power.


The funny thing about "holding people accountable" is that people rarely explain what it means, and I'm not even sure they know what it means. It's a stock phrase in politics that needs to be made more concrete to have any meaning.


As best as I can tell, it means something like "using the generally available levers of social shame and guilt to dissuade someone from doing something, or if they have already done the bad thing, then requiring them to explain their behavior in a satisfactory way and make a public commitment to avoid doing it again."


And it requires that you be in a position of power - otherwise it's just heckling, which isn't likely to have any real impact. In this case it'd be having the ability to impose fines, or discipline corporate officers, etc.


I wouldn't think of bad press as "just heckling." A company's reputation can be worth billions in sales.

It's true that many boycotts fizzle out, though.


> It's a public standard that they're setting for themselves.

They already had a public standard that people actually believed in for a good many years: "Don't be evil."

They've been palpably moving away from that each year, and it's been obvious in their statements and documents, as well as their actions.


"Don't be evil" is incredibly vague and practically meaningless. What the hell is evil, and since when did everyone agree on what evil means? It's obvious to you that they're getting "evil", it certainly isn't obvious to me.


Is explicitly circumventing a browser's privacy settings evil? [1]

How about shaking down a competitor? [2]

[1] http://fortune.com/2016/08/30/google-safari-class-action/

[2] https://www.bostonglobe.com/business/2015/05/19/skyhook-got-...


Collusion to keep salaries down may not be evil in the supervillain sense, but it's hard to see it as ethical.

Not being evil has always been a sideshow to the main event: the enormous wealth generation that paid for all the good stuff. It's still the wealth generation in the driver's seat.


Even disregarding the issue of how "evil" is defined, there is another level of vagueness: when does one become evil, as opposed to only doing some evil? Arguably, one could do some amount of evil deeds without actually being evil.

The above is sometimes mentioned in discussions, where people point out that the motto is "don't be evil" and not "don't do evil".


>If you think their actions don't match their words, you now have concrete terms and principles to critique and compare with.

What I think is that they will go forward with any project that has potential for good return if they don't think it will blow up in their faces, and that opinion is based on their past behavior.


I didn't realize they already had a track record of violating their own stated AI principles within a day of publishing those principles. /s

Doesn't sound like you're really that willing to give them the benefit of the doubt like you said.


>Doesn't sound like you're really that willing to give them the benefit of the doubt like you said.

I said I'm all for giving the benefit of the doubt _but_... That _but_ is important as it explains why I don't really buy it this time around, and that's based on how they handled this situation.

And c'mon, really; should judging their behavior be based solely on their ML code (it's not AI; let's avoid marketing terms)? Why does the application matter? They've violated their own "don't be evil" tenet before (in spirit; I'm not saying they are literally "evil").


> Why does the application matter?

Possibly because it's literally the subject of this thread, blog post, and the change of heart we're discussing.

> but this coming after the fact rings a bit hollow to me

^ from your original comment. So you don't buy the change of heart because...they had a change of heart after an event that told them they need a change of heart?

Did you expect them to have a change of heart before they realized they need to have a change of heart? Did you expect them to already know the correct ethics before anything happened and therefore not need the change of heart that you'd totally be willing to give them the benefit of the doubt on?

> They've violated their own "don't be evil" tenet (in spirit, not saying they are literally "evil") before.

Right, in the same way that I can just say they are good and didn't violate that tenet based on my own arbitrary set of values that Google never specified (in spirit, of course, not saying they are literally "good", otherwise I'd be saying something meaningful).

It still doesn't look like you were ever willing to give them the benefit of the doubt on a change of heart like the one expressed in this blog post. Which is fine, if you're honest about it. Companies don't inherently deserve trust. But don't pretend to be a forgiving fellow who has the graciousness to give them a chance.


Even if they abide by this, who's to say that once a party has some Google-developed military AI, they won't misuse it? I fail to see how Google can effectively prevent this.


If they develop an AI that administers medicine to veterans, and the army takes it and changes it so it administers torture substances to prisoners of war, is that Google's fault or the army's?

Google makes tools with a purpose in mind, but like many other technologies in history, they can always be twisted into something harmful, just as Einstein's theory of relativity was used as a basis for the first nuclear weapons.


> It's a public standard that they're setting for themselves. If you think their actions don't match their words, you now have concrete terms and principles to critique and compare with.

Absolutely. Because "Don't be evil" was so vague and hard to apply to projects with ambiguous and subtle moral connotations like automating warfare and supporting the military-industrial complex's quest to avoid peace in our time ;)


Yes, like “do no evil”.



