- Google can't be trusted on anything to do with building responsible AI (they violated ACM Code of Ethics and their own AI at Google Principles).
- Google has no authority to talk about ethical use of technology and human resources. The main manager responsible for this kerfuffle brands himself as promoting diversity and responsible use of technology.
- Google can't lay claim to being a transparent company, either to its users and outsiders or to its employees and insiders. Even Larry Page was blissfully unaware of this controversial project, which directly goes against his motivations for leaving China in the first place.
- When you go work for Google, you'll have colleagues and managers who won't speak up if they get to work on another unethical project, who will eschew core values to make their stock options grow, and who want to build their own empires and positive performance reviews at all costs (even if this costs Google dearly in PR and culture damage).
- Google can't be trusted to self-regulate, to put the user first, or to clean up any damage done by a top-level ethics violation. There is no objective ethics commission or employee ombudsman to keep the bulls in check.
- There are more than a few rotten apples in the upper echelons of Google. Perhaps such $$-eyes behavior is rewarded by growing the ranks and internal opposition is seen as a necessary evil to be managed.
In other words, all of those things you comment on are true but ultimately meaningless. Google is not going to change because the users do not care. They care only about their own lives - when Google harms them, they may make a token effort to move away from the platform, but they'll come back.
"... we have to remember how this form of loss [reputation damage] typically materializes for a commercial company - reduced market share, reduced stock price (if publicly traded), increased cost of capital, and increased cost of acquiring and retaining employees."
If we want Google to change, we should figure out a way to translate moral outrage into threatening some of the things mentioned in the quote.
 -- Measuring and Managing Information Risk: A FAIR Approach 1st Edition, p. 138.
The deterioration of their reputation will probably cost them more in terms of regulation and political pressure than in market share. Think of what happened to banks. We're starting to see this hostility toward big tech companies in Congress and in other countries.
Actually, if you don't mind going back a couple of generations (and considering that Apple typically gives 5 years of support, compared to a typical 2-3 years for Android manufacturers, that's not awful), you can happily get an iPhone with a pretty competitive CPU for $450: https://www.apple.com/shop/buy-iphone/iphone-7
> you can get an iPhone [...] for $450
Way to miss the point :) many Android phones people use over here are several times cheaper. Many people just don't even ever think about buying Apple because of its price.
I make decent money (like most on HN, I assume), and my phone costs under $300; the improvements in phones above $300 are relatively minor, besides the camera.
Once you start thinking/working on that you realize these issues are part of any large org (full of internal competition and ambition) and Google still handles them better than all the others I have worked at.
- Fire Beaumont.
- Make secrecy and the shutting out of privacy and security teams a process violation, and punish violators. Require that key decision makers, such as Larry Page, be informed of controversial projects. Make lying to, or obscuring things from, your employees at an all-hands meeting a fireable offence.
- Align incentives and OKRs. Promote and reward core values and those who make them stick. Protect whistleblowers and conscientious objectors. Periodically review (and have subordinates review) managers and people in key positions, not for the profit or projects they launched, but for creating an inclusive, collaborative working environment. Put less focus on hiring for skill and more on core-value alignment.
- Appoint an employee ombudsman and objective ethics audit team. Give them enough authority, visibility, and power to make changes for the better. Make sure concerns of lower level engineers make it to the top. Make management justify putting profit over user safety.
- Offer a few golden handshakes to people high up in management who are directly or indirectly responsible for the public erosion of Google's core values. Be wise to the fact that the best and most productive/profitable leaders are also prone to shrewd and unethical behavior, and guard against this.
Its principal purpose isn't to hurt people; it's to get search results.
They already have an amazing search engine, I doubt they're building a new one.
Plus, modern search engines are probably heavy users of AI/ML technologies.
This will remain a matter of interpretation and, while my reasoning may be sound, your interpretation may differ (much like those employees who posit that designing and developing a censored, spying search-engine app for China is consistent with "organize the world's information...").
First off, some premises:
- Information retrieval, ranking, spam filtering, etc. are part of AI, so Dragonfly falls under these principles.
- Publishing the AI at Google Principles and packaging it the way they did, allows me as an outsider to hold Google accountable to these principles, question their leadership, and critique them if they apparently skirt these principles.
- China's government spying on political dissidents violates international norms on surveillance.
- Google shut out privacy and security teams from evaluating project Dragonfly.
- Shutting out security teams makes it harder to build projects designed and tested for user security.
- Shutting out privacy teams makes it harder to build projects designed and tested for user privacy. Censored search terms are not transparent. You don't control which data of yours get shared with the government.
- Sundar Pichai lied to employees when he said the project was just an innocent proof-of-concept. There was no room for many voices in that conversation, because people lacked moral authority to form an opinion on the matter (they were kept in the dark).
- A fully operational Dragonfly project would make it impossible for Chinese users to use Google to find information about this very controversy (AKA: Google and its behavior itself becomes part of censorship)
- Human rights organizations were correct in denouncing Dragonfly for its potential to damage human rights.
- Getting in trouble with the government over search terms that may denote a political preference opposed to the government causes an unjust impact.
- Censored search terms (without showing a notice: "Some results may have been censored in accordance with Chinese law") remove control from humans without any recourse or opportunity for feedback (or for choosing another company). By facilitating a censored search engine, Google can't point at a government and say: it was entirely their fault.
- A Chinese Google Search Engine which leaks user data to the government is easily adaptable to harmful usage (with little power for Google to push back/notice/monitor once deployed).
- A Chinese Google Search Engine will have significant impact.
- Google is deeply involved, making a custom solution, which enlarges their duties and responsibilities.
- The (user) benefits do not substantially outweigh the potential for grave harm.
- A censored and spying government-controlled search engine can be viewed as an information warfare weapon.
With these premises in mind, I see them violating all the principles but one.
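To make the transparency premise concrete, here is a toy sketch (hypothetical names, nothing any real search engine actually does) of the difference between silently dropping results and dropping them with a visible notice:

```python
# Toy illustration of silent censorship vs. censorship with a notice.
# BLOCKED_TERMS is a stand-in for a hypothetical government blocklist.

BLOCKED_TERMS = {"tiananmen"}

def censored_search(results, transparent=True):
    """Filter out results whose text contains a blocked term.

    With transparent=True the user at least learns that filtering
    happened; with transparent=False the removal is invisible, which
    is the "no recourse or feedback" problem.
    """
    kept = [r for r in results
            if not any(t in r.lower() for t in BLOCKED_TERMS)]
    notice = None
    if transparent and len(kept) < len(results):
        notice = "Some results may have been removed in accordance with local law."
    return kept, notice
```

With `transparent=False`, the user has no way to tell that anything was removed; the notice is the minimal form of the recourse the premise above asks for.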
>violates international norms
The international norm (Microsoft, Apple, every other big company) is to obey China's command.
> Google shut out privacy and security teams from evaluating project Dragonfly.
From the article it sounds like that happened in 2017, before the AI principles existed.
>would make it impossible for Chinese users to use Google to find information about this very controversy
It wouldn't make it impossible, it's already impossible. (Assuming you don't use a VPN.)
> The (user) benefits do not substantially outweight the potential for grave harm
What grave harm would be caused that doesn't already exist?
It's standard in industry. Search is an AI problem.
> The international norm (Microsoft, Apple, every other big company) is to obey China's command.
No, that's what companies without AI guidelines do. "Violating international norms" means that the US, Germany, and Japan would face complaints if their governments surveilled as extensively and as invasively as China does.
China's surveillance apparatus is NOT the international norm!
> From the article it sounds like that happened in 2017, before the AI principles existed.
Yes. So Sundar Pichai introduced those guidelines knowing full well that they were dead in the water.
> It wouldn't make it impossible, it's already impossible.
One of the scariest conclusions. Really pause and take it in. How significant is this?
> What grave harm would be caused that doesn't already exist?
This is weird reasoning to me. It reads similarly to: people are going to die anyway, so what grave harm would be caused by murdering them with your own hands?
Morals only work at the scale of a closed society (be it a full country), not at the scale of the world.
If that’s accurate, I cannot possibly understand why Beaumont is still employed at Google unless the senior leadership team decided intentionally to limit their exposure to the project so they could claim plausible deniability in the case it blew up like it has.
Obviously I’m not on the inside of this story, but from the outside, it’s getting increasingly shameful by the week.
Presumably the less concerns they're personally aware of, the less they have to answer for.
It's plausible to me that Sergey Brin, who has been fairly peripheral to Google's business side for a long time, may have been kept in the dark since he was the one most against returning to China (and he claimed he didn't know about it at the TGIF). But there's no way the executive team in general wasn't on board with what Beaumont was doing.
Compartmentalizing the org structure so that it removes the manager from liability and shifts the blame to a fall guy is a common theme in organized crime. It is also illegal.
There is nothing necessarily "illegal" about an org compartmentalizing information. What is illegal is fraud... one person in an org saying one thing, while others in the same org know that the thing is not true. The other problem with overly compartmentalizing information is not related to legal liability at all: if managers don't know what's going on, good or bad, they can't be trusted by shareholders, which can kill the stock price.
Shareholders generally prefer known challenges to unknown ones.
Because he's a sociopath who brings them money. I'm not throwing these words around lightly: you have to be a sociopath to make such a decision purely for financial gain, knowing full well that innocent people will suffer the dire consequences (prison time or worse).
Although I personally wanted Google to return to China (never wanted it to leave in the first place), I don't have confidence that the team doing this was ethical or honest with the rest of the company, and in a company where employees have a surprisingly large amount of power like Google, that is an unforced error.
Avoiding privacy and security review for a product is quite typical pre-launch. All that matters is that the product gets, and passes, the review on or before launch day. This product hasn't yet launched, so avoiding or delaying the review makes sense.
It makes even more sense here when the privacy landscape is quickly changing - something that would have been considered acceptable in the past is no longer considered fine.
- He's the guy who posted the bizarre and really popular conspiracy theory about the President of the United States secretly plotting a coup: https://medium.com/@yonatanzunger/trial-balloon-for-a-coup-e...
- He's the guy who posted this incredibly misleading, and astoundingly viral, tweet about children which ICE seized from their parents being missing and unaccounted for - https://twitter.com/yonatanzunger/status/999827396046995456 - which led people to believe they'd been ripped from their families and vanished, when they'd actually been reunited with family members who didn't want to be found. This was particularly irresponsible since the fix for this would literally be imprisoning more kids rather than reuniting them with their family members.
- He's the guy who wrote a popular piece opposing the idea of tolerance: https://extranewsfeed.com/tolerance-is-not-a-moral-precept-1...
- I'm pretty sure I caught him posting other dubious misinformation and conspiracy theories too.
"Opposing the idea of tolerance" is an extremely uncharitable way to describe a piece dealing with the paradox of tolerance.
Zunger's description of 1,475 children as "missing and unaccounted for" was prompted by an Arizona Republic article that described them as "lost". He was mistaken about ICE separating those particular children from their parents, but the article wasn't entirely clear about that. There was also still the question of what would happen to the children that ICE did separate from their parents.
If you'd please review https://news.ycombinator.com/newsguidelines.html and follow them when posting here, we'd appreciate it.
Just curious and wanted a former employees view.
I'm in China at the moment and if I search Bing.com for "Tiananmen Square massacre" all I get are results about Xi Jinping celebrating martyrs and nothing about what happened.
If you want to be outraged by Google's actions, OK, but it seems selective to me. Microsoft and Google can't be the only ones who do this, either.
What do you propose? Posting a critical thing about Microsoft, to then talk about how that is unfair Microsoft can't be the only ones who do "this", whatever vague thing "this" may be?
Which comment in this thread would you call "outrage", whom exactly would you accuse of giving Microsoft a free pass, or "being selective"? I'll assume your comment isn't "being outraged about people being outraged", that you are just making a point.
I'd say nobody is or wants to be "outraged", that's just framing comments you're not directly replying to as that. The word has become quite overused for basically anything slightly critical. Let people describe their own internal emotional state as outraged, if they feel that way, but otherwise, it's really just "y u mad" where actual interaction with the argument should be.
Many companies want to use the "masses": the influx of money and bug reports, not to mention statistics and personal data for ML training; the more the merrier, come one come all -- millions, billions of people are welcome to buy the product, click on the ads, and all that. But if those people have negative feedback, no matter how valid, then it's suddenly unfair. Then they're all ignorant, entitled, outraged hypocrites, the whole playbook, and need to be routed around.
But you cannot have it both ways, if you have a channel to peddle product through, blowback can and will travel through that as well. You want to be in the marketplace, be in the marketplace.
Just calling inconvenient criticism "outrage" to be done with it makes it accumulate interest; it doesn't actually turn it into "mere outrage over nothing because people want to be outraged", which could be ignored. Do that enough, though, and you'll face outrage you fully earned, on top of the still-valid criticism.
Another recurring pattern, regardless of company as long as it's big enough, is how much of this comes in the form of speaking for the average user or person, or for how something makes business sense. A lot of personal attacks on critics, in an indirect fashion they can't even properly defend themselves against, often by people who will not stand behind their own opinion as their own opinion.
IMO the best we can do is not just to go after companies but really drop the individual responsible managers/engineers names (as this article does, and as stories on FB have done wrt Sheryl Sandberg).
Well, that's a bit hyperbolic. Of course this kind of secrecy has been around Google ever since the IPO.
Google+ had a couple floors and even a cafe that you couldn't access unless you were on the team.
The X building (or buildings, now?) is inaccessible to anyone not part of X, and X does not interact with the rest of Google^WAlphabet at all. By design.
I'm sure there are many other, smaller secret passages that insiders are well aware of.
But otherwise, wow, this is a pretty damning article. Recall the recent article about things to ask a startup as an interview candidate; one of my comments was that you should work at a startup if you don't enjoy big-company politics. Well, timely enough, here is the exemplar.
The article states that Google’s own internal auditing teams (legal, privacy, and security) were kept in the dark, and this was extremely unusual. Are you saying that it’s normal for Google to keep these teams out of projects?
I cannot imagine a company that willfully keeps its own legal team in the dark about anything, unless its leaders intend to commit serious crimes.
Google's TPU designs would be an example. Google didn't want competitors finding out what they were up to.
As the launch date of the product gets closer and closer, restrictions relax. The product will still need to pass legal, privacy, security, management etc. review, but can do that in the few weeks before launch.
The risk of doing all those reviews late is that if they don't pass, major rework might be necessary. The risk of doing the reviews early is that leaks are more likely, but also that the project might get caught up in political battles between managers, or might suffer severe scope creep ("you can't design your project without it supporting XYZ!").
Not normal, but not unique.
I know from close sources that the security team involvement around Google Home was actively discouraged and the "launch bit" silently removed. For example.
Google is a very large company with very large politics and perverse incentives. Security is a hindrance for most of the product teams.
And how did that work out, isolating themselves from everyone who could have told them it was a bad idea? I wonder if all of Google's cancelled projects are developed in echo chambers; it would explain a lot.
Writing open letters to Google isn't going to change anything. If you still work for Google in 2018 you are a major part of the problem. The only justification for working at FAANG and not protesting against this is if you are there to actively sabotage the company from the inside.
But I hardly think that I or most of my colleagues are an insult to any discipline. Apple, Amazon and Google each has tens of thousands of engineers (and many more folks if you consider other roles). In most companies, there will be parts of the company engaging in behaviour you disagree with.
Expecting people to resign from company X because some people at company X do bad things is a very black and white way to think about the world.
But in the words of Desmond Tutu, "if you are neutral in situations of injustice, you have chosen the side of the oppressor"
¹ Tracking China’s Muslim Gulag: https://www.reuters.com/investigates/special-report/muslims-...
There is a spectrum between doing nothing and quitting that falls under "not being neutral in situations of injustice". If our only tool for reforming systems (such as countries or companies) is to abandon them, progress would hardly be possible. Tutu understood this and worked hard to enable reconciliation after Apartheid.
Why does Netflix need sabotaging?
Though I'm pretty sure they have their blind spots too; I just wanted to say that there's also quality in their offering.
Those few who can't see it can always look for a job in a high-street bookstore. The perks include free access to all books in the ethics section.
Or maybe the firewall is airtight.
US, EU, Asian, and African corporations, universities, institutions, NGOs (you name it) are happily working and making money in China. This leaves Google without potential income it would have had if it hadn't left China in the first place.
As for the "moral high ground" argument: As long as the US hosts concentration camps for kids, supplies bombs to kill brown people, hunts down trans reporters, and imprisons people like Manning the moral high ground against this is shaky.
To put it differently: if you think China's bad, just check out the US.
I might be wrong, but targeting Google (and the rest of the FAANG) began when US Republicans "found out" Silicon Valley is actually Dem. As proof:
Please stop this Red Team/Blue Team tendency. It's both binary thinking and a Red Herring. It has zero to do with the subject at hand, which is whether a search engine should collaborate with a government to censor, suppress, find, and punish dissidents or even the mildly curious.
As for moral highground, it's worth pointing out that more than a dozen human rights organizations have come out against this project.
This "don't be evil" thing keeps nagging though. They might want to be a bit more public about its deprecation: https://gizmodo.com/google-removes-nearly-all-mentions-of-do...
Simply build your index by crawling from machines inside the great firewall. Whatever gets indexed gets indexed; that’s up to the Great Firewall overlords.
Hiding search metadata from the CCP is another thing entirely, I suppose, but doable if allowed by the regime (simply don't log anything).
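As a toy sketch of the "index only what the firewall lets through, and keep no logs" idea (hypothetical `fetch`/`build_index` names; obviously nothing like a real crawler):

```python
# Minimal sketch: crawl from inside the firewall, so blocked pages simply
# fail to fetch and never enter the index. No query logging anywhere.

from collections import defaultdict

def build_index(urls, fetch):
    """Index whatever fetch() can retrieve; silently skip blocked pages.

    fetch is a stand-in for an HTTP client running inside the firewall:
    it returns page text, or raises IOError when the firewall blocks it.
    """
    index = defaultdict(set)  # term -> set of URLs containing it
    for url in urls:
        try:
            text = fetch(url)
        except IOError:
            continue  # blocked by the firewall: leave it out of the index
        for term in text.lower().split():
            index[term].add(url)
    return index

def search(index, query):
    """Return URLs containing every query term. Nothing is logged."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return results
```

The point of the sketch is that the censorship then lives entirely in the firewall, not in the search code, which is exactly the "that's up to the Great Firewall overlords" framing above.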
Sin stocks always go high.
Give this a watch and then ask yourself why people would still go out of their way to support Google as a company and its products.
Apple sells its products in the top market segment, where consumers usually consider the image associated with the vendor and its products a big part of what they pay a premium for. This works in many ways: an image of an ethical vendor (in everything from labour conditions for its workers to support for Greenpeace and its attitude toward selling customer data), and an image of themselves as successful people using premium products, and all that. This model needs passionate consumers who support Apple and its products.
Google does not target only the top market segment. People buy Android smartphones and use Google services because they work well for them; they do not want to pay a premium for an image and basically do not care about the image associated with the vendor. This model does not need passionate consumers who support Google and its products.
From the terms of service:
"You understand and agree that Apple and GCBD will have access to all data that you store on this service"
All major tech companies take the same approach to China: "We comply with local laws and regulations". Everything else is just smoke and mirrors.