Early emails between Google execs framed this project only in terms of revenue and potential PR backlash. As far as we're aware, there was no discussion about the morality of the matter (just to be clear, I'm not taking any moral stance here). Once this became an internal and external PR issue, Google held a series of all-hands meetings and claimed that this was a "small project" and that the AI would not be used to kill people. While technically true, those same internal emails show that Google expected this to become a much larger project over time, eventually bringing in about $250M / year. So even then they were being a bit disingenuous by focusing only on the current scope of the deal.
And here we are now with a release from the CEO talking about morality and "principles" well after the fact. I'm not buying the "these are our morals" bit, though I doubt many people do anyway.
I'd like to really draw attention to the "for themselves" part here. Yes, this is a public document, and of course it serves a PR purpose, but the function of setting the terms for internal discussion at Google is at least as important.
I think that since most people aren't Google employees, they tend to focus on the PR angle of this, but I don't even think that's the primary motivation.
I didn't see the actual email chain (the raw thread wasn't published?), but at Google's size it's conceivable there wasn't company-wide exec awareness of the details.
That's how big organizations operate.
For them, it's always better to benefit from screwing up. If you don't get caught, yay! If you do, apologize, wait a bit, and let your PR team work their magic. Boom, you're in the clear again.
Why would they do otherwise if they can keep the loot and face few consequences?
If Microsoft's sins were truly forgiven or forgotten, people wouldn't be complaining about the acquisition.
You missed the people on Reddit and Imgur singing Microsoft's praises.
They now have a fan base.
A fan base.
That's not something I would have ever imagined in the '90s.
They have always had a fan base, even during those dark times (though not as large a one). But it seems like they worked on engaging others and now have a bigger fan base.
Publicly stated principles such as these give a clear framework for employees to raise ethical concerns in a way that management is likely to listen to.
For example, one of my previous employers had ten "tenets of operation" that began with "Always". While starting each one with "never" would have been more accurate in practice, they were still useful. If you wanted to get management to listen to you about a potential safety or operational issue, framing the conversation in terms of "This violates tenet #X" was _extremely_ effective. It gave them a common language to use with their management about why an issue was important. Otherwise, potentially lethal safety hazards were continually blown off and the employees who brought them up were usually reprimanded.
Putting some airy-sounding principles in place and making them very public is effective because they're an excellent internal communication tool, not because of external accountability.
Google might be in a position to not get bullied around much by investors though, so that line of thought might be slightly off topic here.
It's true that many boycotts fizzle out, though.
They already had a public standard that people actually believed in for a good many years: "Don't be evil."
They've been palpably moving away from that each year, and it's been obvious in their statements, documents, as well as actions.
How about shaking down a competitor? 
Not being evil has always been a side-show to the main event: the enormous wealth-generation that paid for all the good stuff. It's still the wealth-generation in the driver's seat.
The above is sometimes mentioned in discussion, where people point out that the motto is "don't be evil" and not "don't do evil".
What I think is that they will go forward with any project that has potential for good return if they don't think it will blow up in their faces, and that opinion is based on their past behavior.
Doesn't sound like you're really that willing to give them the benefit of the doubt like you said.
I said I'm all for giving the benefit of the doubt _but_... That _but_ is important as it explains why I don't really buy it this time around, and that's based on how they handled this situation.
And c'mon, really; should judging their behavior be based solely on ML (it's not AI, let's avoid marketing terms) code? Why does the application matter? They've violated their own "don't be evil" tenet (in spirit, not saying they are literally "evil") before.
Possibly because it's literally the subject of this thread, blog post, and the change of heart we're discussing.
> but this coming after the fact rings a bit hollow to me
^ from your original comment. So you don't buy the change of heart because... they had a change of heart after an event that told them they needed a change of heart?
Did you expect them to have a change of heart before they realized they needed to have one? Did you expect them to already know the correct ethics before anything happened, and therefore not need the change of heart that you'd totally be willing to give them the benefit of the doubt on?
> They've violated their own "don't be evil" tenet (in spirit, not saying they are literally "evil") before.
Right, in the same way that I can just say they are good and didn't violate that tenet based on my own arbitrary set of values that Google never specified (in spirit, of course, not saying they are literally "good", otherwise I'd be saying something meaningful).
It still doesn't look like you were ever willing to give them the benefit of the doubt on a change of heart like the one expressed in this blog post. Which is fine, if you're honest about it. Companies don't inherently deserve trust. But don't pretend to be a forgiving fellow who has the graciousness to give them a chance.
Google makes tools with a purpose in mind, but like many other technologies in history, they can always be twisted into something harmful, just like Einstein's Theory of Relativity was used as the basis for the first nuclear weapons.
Absolutely. Because "Don't be evil." was so vague and hard to apply to projects with ambiguous and subtle moral connotations like automating warfare and supporting the military-industrial complex's quest to avoid peace in our time ;)
• A decision was taken (contract entered into).
• As part of the backlash to that decision, one of the demands was that an explicit policy be created.
• That demand was accepted, and what you see here is the outcome.
(Sorry for the passive voice….)
That's all there is to it; I don't see a claim anywhere that the article reflects anyone's principles or morals in the past, only that this is a policy, thought out only now, on how to decide about similar things in the future (“So today, we’re announcing seven principles to guide our work going forward”).
As someone who has helped write policy, it is literally impossible to write up all policy in advance. You always end up weathering some angry insights in the comment section (we used to call it the peanut gallery). If you can write out all policy a priori, every CEO, economist, scientist, and psychologist can just go home now.
The company has millions to gain from the contract and hasn’t shown morals on this issue.
The leaker has so much to lose by releasing the documents, everything from their career to a significant portion of their life. You could call that incentive to deceive, but I call it incentive to be truthful about their leak.
Especially when it’d be so easy for the company to leak counterexamples showing moral consideration if they did...
Because Sundar Pichai has a strong public track record. Every CEO ends up with warts, but I have some sense of what he's about. The leaker, I have zero information on. Given known vs unknown, I put more faith in the known. Whether I by default believe or disbelieve depends on who's saying what.
Sundar Pichai's claim to fame was getting the Google Toolbar installer (and later the Chrome one) injected into the Adobe Reader installer. 
Don’t worry everyone it’s different now!
The point is that in any discussion, both sides have biases, and you need to take both sides into consideration to get a fuller picture.
What does matter in your world?
Possibly. As I said, "...that we're aware of." Anything beyond what we know is speculation. This is the information I have available. Let me ask you this: if there was real debate and concern beforehand, why is it only now that Google has decided to back out?
At some risk of proving your point:
At least you're honest about your contempt for the common man.
However, a good thing done for the wrong reasons is still a good thing.
Agreed, and I try not to be too hard on them. I don't think it's a black-and-white issue, personally; the only issue I have is how this implies Google always wants to do the right thing from the get-go, which very much seems to not be the case here.
But Google has no intention of doing the right thing any more than Microsoft or Disney does. These are corporations, and their executives HAVE to do what they think will be best for the corporation. Not what they think is best for mankind.
This is how for profit businesses currently work. And PR saying anything to the contrary is simply not true.
Corporations are run by people with a complex set of motivations and constraints in which they make decisions. Some of them make decisions with intent to harm. Some make decisions with intent to help.
No one person is automatically turned into a ruthless amoral person just by being employed at a corporation.
They can quit. They can speak out. They can organize. They can petition for change. They could join the more ethical competition (if one exists), or start their own.
This is especially easy to do for employees of a company like Google, with excellent job prospects and often enough "fuck you money" to do whatever they want without serious financial hardship.
They are not hopelessly chained to the corporate profit machine. They can revolt -- that is, if their morals are important enough to them. Otherwise they can stay on and try not to rock the boat, or pretend they don't know or are helpless to act.
A handful of Google employees chose to act and publicly express their objections. This action got results. More employees in companies which act unethically should follow their lead.
Ultimately, though, I agree with zaphar that you are overgeneralizing, since corporations are controlled by humans -- executives, other employees, and shareholders -- and human motivations can be complex.
I'd say it is definitely better than not doing a good thing. For me, though, the real question is this: considering there is a pattern here (doing the right thing after doing the wrong thing), do you trust they will do the right thing in the first place next time?
Yes, but we should absolutely remember what the original intention was.
Imagine I wanted to have somebody killed and I hired a hitman to kill them, and when I go to pay the hitman I accidentally wire the money to the wrong place and inadvertently pay off the victim's mortgage instead of paying the hitman. Now the victim doesn't die and gets their mortgage paid off. I'm not a good guy; what I did is not a good thing; I just fucked up, that's all. Had everything gone to plan the guy would be dead and I would be blameless.
Similarly, if everything had gone to plan, Google AI would now be powering various autonomous murder bots, except they realized that they didn't want to be associated with this, not because they have any morals, but because WE DO. They are still bad.
That's an odd analogy, considering the would-be conspirator didn't make a decision to not go through with it. Do you believe Google published this article by accident? And really; comparing Google's actions to murder... c'mon.
They didn't fess up because they realized that the outcome of their actions would be bad; they fessed up because YOU realized that the outcome of their actions would be bad.
You think people weren't/wouldn't be killed off of intel gathered from the project?
You can speculate about their motives, and I personally believe it to be PR-motivated as well, but what matters in the end is that it stopped happening.
If things actually went down like this person describes I would have been out the door too. (Due to the betrayal. I actually think Project JEDI is pretty cool.)
It wasn't mentioned one time. Never in the presentations, and none of the talk titles included it either.
Machine Learning was mentioned, and Apple also released many updates related to it, but never did they call it artificially intelligent or use the phrase "artificial intelligence." Not a single time. Go ahead and watch it if you haven't already. They'll never say those words.
Pretty remarkable considering it was a developer conference. I wonder why?
Compare it to Musk who constantly talks up the idea while the actual AI systems his company has deployed are killing people.
“I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry,” she continued. “Google Cloud has been building our theme on Democratizing AI in 2017, and Diane and I have been talking about Humanistic AI for enterprise. I’d be super careful to protect these very positive images.”
If you take this literally you might say her concern is "just PR", but this is exactly the kind of argument I would use if I were against something on moral grounds but trying to convince someone who does not share the same values.
That article adds some pre-context to the quote above.
> "Avoid at ALL COSTS any mention or implication of AI,” she wrote [...] . “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google.”
If Google does start building weapons, you can always turn up with a sledgehammer. How do these principles reduce the need for you to do that?
At the same time, Google and Alphabet Inc., far-from-perfect entities which I have and do criticise strongly and often, at least present a public face of learning from mistakes rather than doubling down or issuing repeated and increasingly trite and vacant apologies (say: Facebook).
This is a line in the sand Google have drawn for themselves, one our various future selves may judge them against.
It seems a narrow view to assume Google's only AI project with mortal human consequence is Maven, and then to use that narrow view to confirm your own negative bias about profit, perception, and disingenuousness.
Considering the size of the global arms trade, it's very unlikely to stay at $250M a year for long.
"If I don't do it somebody else will!"
The choice not to accept business is a hard one. I've recently turned away from precision-metrology work where I couldn't be certain of its intent; in every other way, it was precisely the sort of work I'd like to do, and the compensation was likely to be good.
These stated principles are very much in line with those that I've chosen; a technology's primary purpose and intent must be for non-offensive and non-surveillance purposes.
We should have a lot of respect for a company's clear declaration of work which it will not do.
As terrifying as the prospect is, it's already happening.
A killbot is (more or less) a mobile booby trap. If we have a problem with madmen leaving booby traps around, we can't solve that by laying more booby traps.
Neither can mines, and look at the decades of devastation those have caused. Now imagine a minefield where the mines get up and chase after you.
I'm far more concerned about the hackability of civilian autonomous systems than I am about Russia's killbots. If Russia wants to end the world, it already can, and paranoid militaries make more secure cyber systems than random internet-connected cars or planes.
I will only direct expertise toward an imaging system like Keyhole or targeting/guidance systems if our society faces a clear and acute foreign threat.
The relevant research papers, knowledge and skills are widely available across the world. There are some advanced courses at Chinese universities right now that can only be seen as 'AI for military'.
It's funny, because I haven't heard of large-scale Chinese or Russian drones flying over other countries targeting terrorists but ending up murdering children on more than one occasion.
Perhaps these technologies can be just evil and the US is the only country powerful enough to get away with using them.
I really have no stomach for such a vapid excuse, and I cannot fathom how so many people fall for it.
Using this logic, the US can do anything, with the exception of the things you are 100% sure China (or insert other country here) is not doing and will not do.
This means surveillance, killer robots, black magic, genetically enhanced humans, and illegal experiments and procedures are all valid tools for the US, because "what if we have a war with China, we must have the same tools as them."
The question is: should America compete in the arms race? I don't know. But there are big consequences either way.
Say the US wants to spend a lot of money on some black-magic consultant who could assassinate at a distance; you can justify it by launching a rumor that China does it too, or probably does, or will.
So you throw away any moral discussion by blaming China: they do it, so we have no choice.
However that aside, AI is very big in China right now, and they're using it for numerous applications with thousands of students going through Chinese universities being taught how to handle this stuff. While the same doesn't apply to niche interests like genetically enhanced humans (who is working on that, really?), something like AI with thousands of capable researchers and engineers is a different story.
People don't like war, so it is natural that some people won't want to use their talent for making weapons.
At a minimum, I would like to know the civilian and non-civilian casualty rates of drone strikes, the definition of civilian being used, a good idea of what alternative military action the US would have taken if they didn't have drone capabilities, and the civilian and non-civilian casualty rates of those military strike options.
Without that, bringing up the drone strike casualties is nothing more than moral grandstanding based on how certain types of military action make you feel. Bonus points if you use the words "murdering children" in an attempt to bypass any logic and go straight for emotions.
Here's your definition of a "combatant".
Maybe it would be better for the world for Switzerland to have the most advanced AI. Why does it need to be the US, especially in the rapidly deteriorating political climate from the past 2 decades?
I don't know how somebody rationalizes the deaths of millions of civilians caused by USA/NATO armies/interventions, which acted in non-defense, but I know for sure that they would not rationalize it anymore if they happened to be on the receiving end of "democratization".
Source: My grandfather had orders to go and do exactly that when the dropping of the bombs ended the war.
The US had already been ridiculously effective using firebombing to level Japanese cities with their B-29s - so much so, they actually had to consider slowing down/changing targets to leave enough behind to use the Atomic Bomb on: there was almost nothing left worth hitting in strategic terms. By the time the bomb was dropped Japan was largely a beaten nation already considering surrender, Tokyo a smoldering rubble pile save for the Imperial Palace.
"The bomb simply had to be used -- so much money had been expended on it. Had it failed, how would we have explained the huge expenditure? Think of the public outcry there would have been... The relief to everyone concerned when the bomb was finished and dropped was enormous." - AJP Taylor.
Of course no one can say with certainty, but I certainly don't consider the answer to this question to be a simple one.
Even after Nagasaki, it took personal intervention from the Emperor and the foiling of an attempted coup for Japan to surrender.
Of course, dropping the bomb and developing the bomb are two distinct, albeit related, ethical questions.
To quote Wikipedia: "On the night of 9–10 March 1945, Operation Meetinghouse was conducted and is regarded as the single most destructive bombing raid in human history. 16 square miles (41 km2) of central Tokyo were annihilated, over 1 million were made homeless with an estimated 100,000 civilian deaths."
The bombing campaign leading up to the atom bombs specifically left about 5-6 cities relatively untouched. There were still major strategic targets left in August. They did this to test the bombs' effectiveness on cities, as a demonstration to the Soviets, and to destroy morale.
Demonstrating their effectiveness to the Soviets is why they didn't drop them in Tokyo Bay.
The US having nuclear weapons didn't work out so well for the 70,000-120,000 innocent civilians who were killed in the attack on Hiroshima. I don't have handy access to the number of innocent civilians killed in Nagasaki, but I would assume it was similar.
Would the Nazis have done the same thing? We don't know, and we can't know. But what we do know is that despite the Soviets/Russia, France, the UK, China, India, Pakistan, and probably Israel and North Korea having the capability, only the US has used nuclear weapons for indiscriminate and wholesale massacre.
So with respect, I really don't think you can go around trumpeting how it was "far better for the world" for this to happen when there is zero evidence to support that viewpoint, and at least 70,000-120,000 reasons to refute it.
1 - https://en.wikipedia.org/wiki/Atomic_bombings_of_Hiroshima_a...
This does not seem like a valid position.
It has nothing to do with whether or not they would have used them. The reason that it was better for the US to have them is because Nazi Germany was conquering sovereign countries through military action, not to mention engaging in the industrialized genocide of millions of people.
Their position rests on the fact that most people agree that the Allies were the "good guys" and the Axis were the "bad guys" in WW2, which is not a position that really has to be defended.
If Germany or Soviet Russia had the opportunity to use nuclear weapons to impose their will on their enemies, one need only look at what happened to the victims who already fell under their dominion.
But we do know for an absolute fact that the US did use nuclear weapons to kill thousands and thousands and thousands of innocent men, women, and children (and for the sake of balance, we do know for an absolute fact that the Nazis did kill thousands and thousands and thousands of innocent men, women, and children, but not using nuclear weapons).
There is only one country with actual blood on their hands here with regard to nuclear weapons - the other nuclear powers have so far been able to show restraint.
As such, I find it pretty objectionable for people to suggest that it was "far better" this way, when the evidence really does not back it up.
I am not saying it is not the best possible outcome for the world. Could it have been worse if the Nazis had, for example, nuked London in 1945 and killed a million? Sure, of course that might have happened, but it didn't actually happen. Perhaps had the Nazis had that chance, the UK would have surrendered, there would have been peace, countless lives could have been saved, and a completely new era of peace and prosperity begun? Or perhaps it would have been untold slaughter and misery like the US inflicted on Japan?
We just can't know, and so I object to people saying it was "far better" for history to have played out the way it did based mainly on - I suspect - the plot of Hollywood movies they've seen. History is written by the victors.
Anyway, this is way off topic and Godwin's Law has clearly been invoked. We should stop.
Regardless, the atom bombs were certainly not the worst things any country did in WWII. The US firebombing was far worse. Everyone did bad things in that war.
Stop using your modern sensibilities to judge them.
Secondly, your assertion that we can't know what Nazi Germany would have done with nuclear weapons is correct, but you seem to be interpreting that as meaning "all possible outcomes of Nazi Germany having nuclear weapons are equally likely", which is absolutely not true and a common mistake to make in an argument.
How sure are you that the present-day United States is the "right" group to have AI-controlled murder-drones?
The incentives to cut corners and go to market are much higher for small startups with short runways. I don't want corners cut when lives are on the line.
...still not sure if you are talking about the (heavily) cost-optimising conglomerates or not... if we agree to constrain the topic to finances, leaving out innovative ideas, ethics, integrity, trustworthiness, etc., where conglomerates may be loose with standards.
Also, do you know how Apple and Google started? (I hoped the suggestion would get through without stating the obvious, but it did not.)
That said, your argument does not avoid complicity in behaving badly or potentially doing so. It says only, "I'm a shit, but I'm willing to be a shit because there are other shitty people in the world who will behave badly even if I behave well, so I choose to behave badly because it serves me and the outcome is probably the same either way."
Of course, if your business partners adopt the same Machiavellian philosophy toward you that you espouse, one day they'll probably speak those very words when they turn against you, since someone else probably would have.
Especially since Google is publishing their results for every void-filler out there to review. Unless they plan to start hiding results that might have military applications?
After all, everyone inevitably dies, so why not murder them.
Not what it says. It says:
“...surveillance violating internationally accepted norms.”
Thanks to leaks, we have a glimpse of the new normal.
They DO realize that the YouTube recommendation algorithm is a political bias reinforcement machine, right?
Like, I think it's fun to talk trash on Google because they're in an incredibly powerful position, but this one isn't even banter.
...and revealing! The differences in headlines between, say, the Washington Post, CNN, The Hill, and Fox News for the same news blurb are even more dramatic than I'd expect when you put them up side by side.
None of which contradicts your point, I just wanted to flag a happy instance where the result wasn't just "yo dawg".
When they demoed it at Google I/O I was pretty excited to try it, and then I went to look at my already-installed Google News app and it was still the same old one I'd been using for a while. Long story short, I discovered it's a whole different app with the exact same name but an updated icon.
The actual app is very, very good though. I've been reading lots more news in the app. It's been about 3 weeks of daily use, and I'm starting to notice reinforcement of my common subjects & sources on the 'For You' page. I hope I start to notice more curve balls. It's great to have the 'Headlines' page, which I'm pretty sure is the same for everyone.
Can you give a concrete example of this? I'm definitely interested in seeing how big the differences are
- WaPo: Trump dangles White House visit for North Korea’s Kim if summit goes well
- Fox News: Trump forced Kim Jong Un to 'beg' for meeting, Giuliani says
- CNN: How is Donald Trump preparing for the huge North Korea summit? He's not.
- Reuters: Latte art and a gym ad: Kim Jong Un's softer image in South Korea
- Reuters: Trump says Russia should be at G7 meeting, Moscow not so sure
- CNN: Trump: Russia should be in the G7 summit
- TheHill: An isolated Trump attends the G-7
- Fox News: Trump prepares for North Korea summit as a great performer -- like Reagan
As an American, I'm disappointed, and positively enraged by the hubris on display here. A bunch of (non-US) employees have pressured Google and therefore compromised the national interests of the United States.
See this for an alternative viewpoint: http://www.chicagotribune.com/news/opinion/commentary/ct-per...
It's high time these companies are regulated and their malfeasance reined in by the United States.
This applies regardless of whether you think this specific example is immoral.
Also Google is a global organization. I don’t believe that corporations should primarily serve the interests of their government, they should serve their users and reflect the attitudes of their employees.
Google may have offices all over the world, but it's an American company, and like people (Corporations are like people, no?), it must be held responsible for its actions.
I would argue that the public should be especially wary of 'global' corporations such as these (Facebook is another one) that suddenly grow a conscience when it suits them.
Surely a company with such high morals and ethics should easily withstand regulation and public scrutiny that protects the national interests of the country that's responsible for the majority of its profits, and provided the fertile ground from which it sprang to life.
That sounds antithetical to a lot of freedoms that we hold dear as Americans.
>I would argue that the public should be especially wary of 'global' corporations such as these (Facebook is another one) that suddenly grow a conscience when it suits them.
How do you mean? To paint this in a very cynical light, the sequence of events here was
Google does a thing. Then, many Google employees threaten to quit over that thing (among a bunch of other potential downsides). So, Google agrees to stop doing the thing
Is "Google changes its policy to maintain its workforce" something that you should be wary of? That seems like reasonable corporate governance.
On the other hand, you can paint this in a much less cynical light, where the sequence was
Google does something that is potentially antithetical to its values. Employees object to this thing, claiming that it really is antithetical to those values. As a result, Google reaffirms its values and makes them more explicit, promising not to do the thing.
In other words, a very anti-google view sees this as a move for retention, and a pro-google view sees this as a reaffirmation of the "conscience" (read: values) that Google already had. I don't see how your worries apply here.
(Am a Googler, but that isn't particularly relevant to this post)
Google, an already known to be duplicitous company, changed its policy to maintain its workforce at the expense of US national interests. That's certainly something to be wary about, as a member of the public.
Google, specifically, like Facebook, should be invited to explain itself and generally describe its activities a bit more transparently for the public to see. At this point Google is effectively a utility, so there's plenty of good reasons to regulate it like one. Right now, it has benefited from almost no oversight and has grown a bit too cocky and self-righteous. Silicon Valley CEOs need to be cut down to size. Almost no other industry has this level of smugness and self-righteous belief in their superiority over the American people.
Here is a quote by Louis Brandeis, an erstwhile Justice of the Supreme Court, that pretty much captures what I have to say:
Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman.
I'm honestly confused as to how those two concepts are connected.
If not, what exactly are you suggesting?
The connection is the hubris that enables Google to benefit from public largesse while simultaneously believing itself to be superior to it (and by extension, the public). Like it or not, Google has benefited immensely from research facilitated/instigated by DARPA, DoD etc. which were all military technologies (GPS, internet, Grand Challenge). But now, it has serious qualms about AI that enables civilian areas to be identified in conflict zones (by its own admission).
I'm still confused as to how "we don't want to work on systems that kill people" is in any way "superiority".
In fact, reading your comments you seem to suggest that any entity (any individual even!) who has a moral objection to working on military technology, but who uses any modern technology believes themselves superior to the public. But that description describes a large minority (or perhaps a majority!) of the population.
You appear to say that Google's sense of superiority stems from its objection to working on military technology. But I think that description applies to much of the public.
That is to say, I find it likely that most of the public would object to working on AI for drones. Yet you're arguing that the objection to working on AI for drones makes one believe themselves superior to the public. In other words, most of the public believes themselves to be superior to...themselves.
Hence my continued confusion.
> In fact, reading your comments you seem to suggest that any entity (any individual even!) who has a moral objection to working on military technology, but who uses any modern technology believes themselves superior to the public. But that description describes a large minority (or perhaps a majority!) of the population.
I do not know how one could read that into my comments. At this point, I'm beginning to think you're being deliberately obtuse. Google doesn't just use this technology like you or me, it benefits, i.e., enriches itself immensely to the tune of billions of dollars every quarter! It also becomes much more powerful and further embeds itself into the lives of ordinary people in this process.
> You appear to say that Google's sense of superiority stems from its objection to working on military technology. But I think that description applies to much of the public.
I do not think that description applies to much of the public, which lives outside the SV bubble. But I will concede that this is something that's debatable.
> That is to say, I find it likely that most of the public would object to working on AI for drones. Yet you're arguing that the objection to working on AI for drones makes one believe them-self superior to the public. In other words, most of the public believes themselves to be superior to...themselves.
Here, again, you are twisting my words and ascribing meaning that simply doesn't follow from what I have said.
I am definitely saying that Google the corporation and leadership in Silicon Valley suffers from the hubris that they can simultaneously benefit (i.e., make billions of dollars) off the fruits of research that's quite explicitly geared toward military technology, and they can rebuff those very same benefactors without consequences (and while holding the moral high ground). These benefactors are Government agencies, that exist to carry out the will of the people (nominally, at least).
Except that much of what you're saying implies that you do think these companies should be regulated in a way that forces them to do this. Or at least, if that's not what you're saying, then you seem to be insinuating a whole lot for no apparent purpose. This is why I'm confused. Your stated words and actions (ie the rest of your words) don't appear to match up.
>Google doesn't just use this technology like you or me, it benefits,
Are you suggesting that you and I don't benefit immensely from the internet?
If I'm understanding you correctly, you're saying that it is unethical for an entity to benefit from another entity without supporting it. That is to say, it's unethical for Google to benefit from the military's technology without also supporting the military.
Ignoring, for a moment that there's a whole host of debate on whether or not that's even true to begin with, such an objection applies equally to any individual as well as Google. I, personally, benefit greatly from military technology. Is it unethical for me to refuse to work on drone warfare? It seems odd for you to say yes to that, but on the other hand, that's basically what you're saying about Google.
>I do not think that description applies to much of the public, which lives outside the SV bubble. But I will concede that this is something that's debatable.
I would consider more than a third of the US population to be "much of".
> suffers from the hubris that they can simultaneously benefit (i.e., make billions of dollars) off the fruits of research that's quite explicitly geared toward military technology, and they can rebuff those very same benefactors without consequences
But again, except perhaps in terms of scale, this applies to anyone. You and I both benefit, significantly, from military technology, both in terms of safety and quality of life. Yet you've stated that we should not be compelled to give back.
Why should Google (or any other corporation, which again, is really just a set of individuals) be treated differently?
I do think that the freedom and lack of accountability (vis-a-vis individuals and governments) corporations enjoy in the United States have reached insane levels. On the other hand, I do prefer it to the situation in China, where any corporation is likely to become a tool of the State. I believe corporations can and should contribute back to the military if they have benefited financially from military technology. Perhaps not as much as would have been the case in a Socialist/Communist country, but definitely at some level higher than the present.
Do you also believe corporations should contribute to Experimental Particle Physics at CERN if they have benefited financially from the world wide web?
I don't think what I'm saying depends on corporate personhood. My point is, if you claim that corporations have a responsibility to contribute back to the military, you are claiming that someone at the company should do that.
Further, in your post that started this subthread, you stated
>It's high time these companies are regulated and their malfeasance reined in by the United States.
That, at least to me, reads as though you think that the US should regulate these companies in ways that require them to give back to the military. Which again, which employees should do that? How can you compel a company without, at some level, compelling the individuals within the company? Which you've at least claimed you don't want to do?
It does. A corporation can easily set up a division or a separate subsidiary or sub entity and staff it with willing individuals to do this sort of thing. There is no direct conflict with individuals' rights. So sure, Google the corporation can be compelled to do this without affecting individuals. It's quite common in other industries, but of course, for SV, it's all about the hubris, optics and innate sense of superiority.
In what other industries does the government require companies to develop military technology?
>A corporation can easily set up a division or a separate subsidiary or sub entity and staff it with willing individuals to do this sort of thing.
So you're saying that if I found a company that is based on internet-related technologies, it is reasonable to at some point in the future compel me (or compel me to pay for someone else) to work on military drones?
The canonical example is the early days of aerospace, where for all practical purposes you were developing military technology.
> So you're saying that if I found a company that is based on internet-related technologies, it is reasonable to at some point in the future compel me (or compel me to pay for someone else) to work on military drones?
The government can already compel you to license your work via eminent domain. There's an established process for this.
At any point in time there is always a slew of 'sensitive technologies' whose use and development will be closely monitored, and companies are incentivized, severely restricted, or outright barred from freely trafficking in them. It's not a giant leap of the imagination that they would be forced to do the federal govt's bidding if they already do a large amount of business with it or started out with military IP.
'Internet-related technologies' is not one of them. There was a time when supercomputers were in this category, then it was cryptography, and now it's looking like AI.
BTW, Cisco and a few others have been forced to develop 'lawful intercept' technologies on their routers for the three-letter agencies for years, I think. There was a big controversy about this a few years ago.
And you seem to be arguing that this is a good thing? Surely any company (filled with people) should be free to work on whatever technologies that those people feel is ethically right.
I'm arguing that it's not a cut-and-dried thing. Clearly it's susceptible to abuse, but on the other hand it is vital to the long-term security interests of the United States. In any case, there is more accountability than Google or Cisco 'self-regulating' themselves. These companies can't massively leverage military research and then turn around and say they have no obligation whatsoever. They can choose to do no business with the federal government, but that's clearly not the case. In fact, the opposite is true.
I can tell you see it differently, but I hope you see that not all people see promoting “US government interests” as an automatically good thing, given what that phrase has meant historically.
I can't for the life of me see why Sundar Pichai should be beholden to his (significantly foreign) employees and privilege their interests over those of the nation (his nation). Remember that Google itself is tainted, being complicit in spying on Americans (PRISM, anyone?). Why shouldn't such an entity be regulated?
He's beholden to his shareholders, and every move they have made is all about that. They knew it would be bad PR, but they wanted to make money, so they took the contract initially and tried to keep it quiet. Then all the leaks happened, causing it to backfire; tons of employees and users got upset, and so they reversed direction, figuring that a good reputation for attracting employees and users will pay off more than military contracts in the long term.
> Remember that Google itself is tainted, being complicit in spying on Americans (PRISM, anyone?). Why shouldn't such an entity be regulated?
It wasn't just Google that had to comply with PRISM. They, along with every other company that wants to operate in the US, had to comply with it, because every company has to comply with the laws of each country it operates in.
That's the real nub. I'd posit that this is also about keeping a small but vocal group of employees happy. However, why did Google effectively disavow all cooperation with the military to appease this group? They could have easily set up a division and staffed it with willing people. Or Alphabet the parent company could have started something else up (how costly is it to incorporate, really?). Looks like none of these things was even given serious consideration.
Did you even read up on this story because that is essentially what they tried to do and it backfired?
I read that the 'contract was routed through' some front company. I don't think that was a separate company staffed with people to adapt Google AI to this purpose.
Ha. Ha ha. Hahahah.
Please, keep drinking that Kool-Aid of American exceptionalism. Apart from the British, which other country has contributed to more invasions in foreign countries for the sake of purely economic interests?
Your position is laughable, and the classic, pathetic opinion of the sort of American who hasn't traveled around the world to see that their system of values is not qualitatively different from that of most other developed Western nations.
The "purely economic interests" qualifier makes your question difficult to answer, but France has likely been involved in more invasions in foreign countries.
Which country came to the aid of Britain, and all those Western European nations when they were faced with the existential threat of Nazism? Which country has provided refuge and solace to more oppressed and exiled peoples in the world than the US? Which country fought a violent war to rid itself of slavery? Surely not the Western European nations you speak of.
America has done a lot of damage in this world, but I can't think of any other nation that has done so much good either.
Slavery died out pretty much everywhere else in the world without bloodshed, so it's pretty safe to assume the US wouldn't have been different in that regard; the freeing of the slaves was more of a punitive action against the seceding states than anything else.
Anyhoo, my point being that it isn't a grand example of American exceptionalism. Much better to just link to Kevin Kline's A Fish Called Wanda rant...
And why doesn't my iPhone know how to spellcheck "slavery"? Are we trying to remove it from the language double-plus fast?
In the long-view, the whole American Experiment may not have been a net positive for slaves and their descendants relative to a hypothetical alternate history where the Revolutionary War failed and the US was just more British Empire. Nearly impossible to say with certainty, of course, because a Britain that included the US may have had different incentives to push it away from ending slavery.
I couldn't resist :)
Russia mainly, where an estimated 24 million people died. America also didn't wade into the war for purely selfless reasons either; if Germany had managed to invade Britain and western Europe, America would have been under significant threat.
The reasons every country got into the war were complex. But it is also largely true that America spilled blood and treasure out of a sense of obligation to fight Nazism, despite having a significant immigrant German population (who fought against their brethren on the other side), going so far as to impose a draft in the later stages of the war.
You sound like a young kid who has never actually read about history and geopolitics.
Please dude, actually read up on the US involvement in Kissinger's secret wars in Cambodia; the School of the Americas, instrumental in teaching South American militaries repressive strategies that killed tens of thousands of innocents with the full support of the American government; the Iran-Contra affair; the ridiculous involvement in Vietnam.
There is an innumerably long list of atrocities committed purposely by the American government with the silent consent of the American majority.
Seriously bro, Iraq happened less than 15 years ago in a completely manufactured war, and you're actually so stupid as to believe that there's anything particularly worthy about American Imperialism?
You should try to be on the receiving end of the American business interests that have fueled these conflicts, lest we see what your opinion on the matter would be.
Kissinger was just pure evil. Apart from his involvement in the things above, he also actively supported dictators in Pakistan, and indirectly did nothing to stop the slaughter of civilians in (then) East Pakistan.
But, I still stand by my point: Americans have strived to right the wrongs (and there have been many and monumental ones). Eventually all these 'secret' activities have come out and the public has ensured that the people responsible were shamed or held accountable, to some extent (it never is a full reckoning, unfortunately). I can't think of that happening in China, for instance. Name any other great power that hasn't had stuff like this?
Did I miss something? Was the Selective Service Act amended to extend to corporations, too? Was Google drafted?
Last I checked, cooperation with the United States military has been purely on a volunteer basis since Vietnam.
If you believe in corporate personhood, then Google and Facebook are definitely villains --- avoiding taxes, running ads from enemy states, etc., while maintaining a shroud of secrecy and non-accountability --- positively treasonous acts if committed by a person. If you do not, then what right is violated by making corporations subject to the Selective Service Act?
Don't worry, the DoD will get its AI weapons; image recognition with machine learning is a commodity now, and some other company will end up doing it.
My point is, where there is a will, there is a way. Individual rights are sacrosanct, corporations', not so much (or at least, they need to be incentivized to work in the national interest).
Sure, it'd be nice if Google tightened up their rhetoric a bit, or proposed concrete ways they intend to act deliberately and publicly to enforce these bylaws.
But this is a start, and reveals a willingness to speak out both to their employees and the public at large that company policy disallows some future lines of business, especially building weapons and surveillance tech.
IMHO, this is a positive step in the right direction.
keep away from or stop oneself from doing (something)
1. e.g., "Don't be evil"
The machine learning "bias", at least the low hanging fruit, is learning things like "doctor == male", or "black face = gorilla". How fair is it that facial recognition or photo algorithms are trained on datasets of white faces or not tested for adversarial images that harm black people?
Or if you use translation tools and your daughter translates careers like scientist, engineer, doctor, et al and all of the pronouns come out male?
The point is that if you train AI on datasets from the real world, you can end up reinforcing existing discrimination local to your own culture. I don't know why trying to alleviate this problem triggers some people.
In the current polarized climate there isn't much trust left that bias correction will itself be unbiased, or that it will reduce existing discrimination rather than doing the opposite.
For example, in your translation tool example, even a human translator would have trouble making the least offensive translation possible. She/he/(insert favorite pronoun here) would need to realize the audience is a young impressionable child who is about to base her entire world-view on whether there's statistically more of her gender in that one sentence of translation.
For a machine learning algorithm to understand enough about human nature to not offend the parent of that child, you're better off waiting for AGI that can surpass human tact.
We know what biases people say offend them already; there's no evidence fixing them is harmful, but there's a non-zero risk that not fixing them is harmful.
I feel like what I'm encountering is a conservative bias against changing the status quo, "social engineering", and the like. It seems people don't like deliberate, non-organic changes to the status quo (well, they don't tend to like organic ones either, like succeeding generations becoming, say, more sexually liberal).
Machine learning can create filter bubbles, echo chambers, and feedback loops, and people may attribute more weight to answers provided by machines than by people. So trying to balance machine learning's reinforcement of the very cultural biases we're already seeking politically to ameliorate seems prudent and pragmatic.
But that's not correct. That's exactly what it's to do with.
A big part of the philosophy of conservatism is to accept the world as you find it. Conservatives, at least in theory (not saying the Republicans are a great implementation of the philosophy), eschew large social engineering schemes, they eschew attempts to remould attitudes via manipulation of language and so on. These are all traits associated with the opposing end of the political spectrum. Think how important re-engineering people's thinking via language was in Orwell's 1984, for example.
So now we have Google and related AI researchers announcing that when an ML model learns the actual fact that most doctors are male, this is "bias" and it needs to be "corrected". This is Orwellian. It's not at all biased, it's accurate. But because of some vague, handwavy assumption that if AIs believe gender ratios are 1:1 in medicine then somehow ... via some obscure chain of action ... more girls will become doctors, the basic reality of the matter is justified in being inverted. Or possibly, more boys will choose NOT to be doctors. Quite why this outcome is better than the existing status quo is never explained with any rigour - it's ideology.
This is the very style of social engineering that conservatism rejects, on the grounds that it so often goes wrong or has unintended side effects. So whilst I am interested to see that Google is deciding to walk the walk here when it comes to AI and weapons, I nonetheless find their statement of moral principles to be quite chilling and frankly, it renders their most important products useless for me. Their take on this isn't news, but it's sad to see them doubling down on it. I do not wish to be subtly manipulated via AI driven translation or re-rankings into not believing things about the world that are true but upset radical feminists.
It's especially sad for me because I worked there for so many years.
If a machine learning algorithm learned a bad definition of "conservative", one that cast conservatives as crypto-racists, you'd want it corrected, wouldn't you? Even if a reading of conservative news sites' comment forums suggests that impression is likely true?
> "Quite why this outcome is better than the existing status quo is never explained with any rigour - it's ideology."
Right, progressives are the only ones with ideology, conservative positions are arrived at by cold, hard logic?
Tell me why an African American slave boy, growing up in the 1800s, who learns the existing status quo that black people are property and white people are not, is a worse outcome? Clearly, it's a better outcome for white people of the era, so any explanation for why it might be preferable to alter it has to argue that the status quo wasn't good for black people, or that it wasn't good for white people for some presumably economic reason.
Maybe just maybe, the status quo isn't good for women? Maybe it would be good to ask them if it's ideology, or if changing cultural attitudes about what women are allowed to do, and capable of doing over the last 200 years has been a positive change in the status quo for them?
BTW, Conservatives don't reject social engineering, they just reject social engineering they disapprove of. Social conservatives around the world, in concert with religion, have sought to engineer human behavior with appeal to cultural and religious piety, and in many cases, winning laws that enforce such behavior. We've had anti-sodomy laws, anti-booze laws, anti-miscegenation laws, all of them enacted by social conservatism. And what do you call religious proselytizing, if not social engineering? Trying to spread memes and infect and convert more people into a new way of thinking.
Sometimes I feel that conservatives are against secular, humanist 'social engineering' the way Scientologists are against Psychotherapy, because it's competing in the same meme space.
Society is a dynamic equilibrium. It is constantly evolving, sometimes it evolves purely organically, in a spontaneous order, and sometimes there are clusters and movements that boil over, and change arrives by deliberate persuasion.
Google is a global, transnational company, that serves the entire world, 7+ billion people. It needs to reflect that diversity. And like I said, it absolutely cannot have AI that does stupid stuff like learn that black faces are gorillas.
I'm afraid you're obfuscating. The most important paper in question is this one:
It documents the researcher's "discovery" that word embeddings trained on all available text learn that most doctors are male and most nurses are female, along with many other relationships, like volleyball / football being a female / male analogy.
Word embeddings can't answer a question like "what is a doctor", even people would struggle to give a good answer to such a vague question. So they asked it specific questions about gender, namely, "if you ask for a gender relationship starting from doctor what do you get" (answer: nurse). And then they decided this was bias, and wrong, and should be edited out of the model.
So yes - if you asked such a model "are men more likely to be doctors" it would answer "no" although the correct answer is "yes".
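For anyone who wants to poke at this firsthand, here's a minimal sketch of that analogy query using gensim (the pretrained model name is an assumption on my part, and the exact neighbors you get depend on the corpus the vectors were trained on):

```python
# Minimal sketch: probing gender analogies in pretrained word embeddings.
# Assumes gensim's downloader and the "word2vec-google-news-300" vectors;
# exact nearest neighbors vary with the training corpus.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # large one-time download

# Classic analogy query: man : doctor :: woman : ?
# Finds the nearest neighbors of the vector (doctor - man + woman).
print(vectors.most_similar(positive=["woman", "doctor"],
                           negative=["man"], topn=3))
```

The claim in the thread is that "nurse" shows up at or near the top of queries like this, which is exactly the relationship being debated above.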
I wonder if that changes your views?
Both positions arrive at their conclusions via logic, but working forward from different assumptions and premises.
The difference is progressives are much more likely to try and impose change on the world top down, whereas conservatives are much more likely to leave the world be (note: attempts to remove top-down imposition of progressive views is often cast by progressives as equally equivalent "change", but it's not, likewise, not attempting to change old traditions is often cast as inherent support for them rather than a general aversion to imposing top-down change).
Your post is a great example of the dangers of this attitude:
> Maybe just maybe, the status quo isn't good for women?
I happen to think the status quo, where women are routinely handed jobs and money simply for being women (e.g. the whole 'women in tech' movement), is excellent for women!
But our views on this are both irrelevant because it's not what the argument is about.
The point is, do we want to build AIs that understand the world as it is today or which have been given an understanding of the world as Googlers believe it should be.
You are very clearly arguing for the latter here, to the extent that earlier in your post you argued a good answer to the question "what is a doctor" would actually involve a lecture on the "socio-historical-cultural perspective". Who says users of AI give a crap about any of that? Maybe they just want a definition of the word "doctor", without some AI trying to change their kids' ideas about what job they'd like to do along the way?
A Google AI that is constantly engaging in proxy social advocacy with me would be annoying as hell. A Google AI that doesn't but has a warped and distorted view of reality because its creators feel they're towering moral actors with a mandate to change the world on behalf of billions of people they never met? Even worse! I'd rather it was at least open about it.
Look at it this way. If Google AI was constantly subtly suggesting that all progressives were naive, hated America, that government intervention always failed and markets were the best way to do things, <insert random political stereotype here>, you wouldn't be very happy about it, would you? Especially not if the AI hadn't actually learned such things but such beliefs had been added later by libertarian programmers convinced that they'd make the world a better place by doing so.
> We've had anti-sodomy laws, anti-booze laws, anti-miscegenation laws, all of them enacted by social conservatism
I suspect there's a slight terminological difference here between libertarianism and conservatism (I've been meaning primarily the former).
But regardless, are you sure you aren't assuming that?
Let's take anti-booze laws. Prohibition in the USA was a bipartisan issue at the time and both Democrats and Republicans voted in favour of it, in fact more Democrats voted in favour than Republicans did.
The constitutional amendment that gave women the right to vote was introduced by a Republican. It took a long time to get passed (decades), but this was partly because - just like in the UK - the women's suffrage movement hurt its popularity through aggressive tactics and association with unrelated social policies, in particular pacifism and refusal to join World War 1.
And as for religion, I'd note that the USA is probably the most conservative (or libertarian) country in the world and it's also famous for separation of church and state.
I recommend _The impossibility of “fairness”: a generalized impossibility result for decisions_ and _Inherent Trade-Offs in the Fair Determination of Risk Scores_
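The core trade-off those papers formalize falls out of simple arithmetic. Here's a quick numeric sketch (the prevalence and rate numbers below are made up purely for illustration):

```python
# Sketch of the Chouldechova/Kleinberg-style impossibility result:
# when two groups have different base rates, a classifier with equal
# precision (PPV) and equal false-negative rates across groups must
# have unequal false-positive rates. All numbers below are made up.

def false_positive_rate(prevalence, ppv, fnr):
    # Identity holding for any binary classifier:
    #   FPR = (p / (1 - p)) * ((1 - PPV) / PPV) * (1 - FNR)
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * (1 - fnr)

ppv, fnr = 0.7, 0.2  # held equal for both groups ("fair" by two metrics)
for group, p in [("group A", 0.3), ("group B", 0.5)]:
    print(group, round(false_positive_rate(p, ppv, fnr), 3))
# group A 0.147
# group B 0.343  -> the third metric is forced to differ
```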
Google isn't promising that they're done with the question of how to use data justly. They are promising that as the public debate continues over where the line is drawn and redrawn between fair use and abuse of data, they will be a willing participant in that debate, and that they're receptive to abiding by decisions that require compromise on their part. What more do you want?
This is very high up and is written in a way which would explicitly allow "fair bias". This means activists will have a free hand to use their positions at Google to enforce their vision of political orthodoxy.
I'm sure there will be internal debate over what biases are good ones to keep and nobody gets a free hand. But as a policy, it doesn't restrict Google's options very much.
Pichai's point is that such discrimination must be fair and societally beneficial.
The line is not clean, straight, or constant. But it provides a guiding principle for future decision-makers and stakeholders.
However, you can easily mess up and be very unfair. So even though there is no "perfect", there is still a continuum of bad to good.
It's similar to people who want to lose their accent when learning a language. It's something that is impossible to do, you can't not speak with an accent. The way to lose your accent is to pick a new one, and begin emulating it.