
AI at Google: our principles - dannyrosen
https://blog.google/topics/ai/ai-principles/
======
EpicEng
So, I'm all for giving someone the benefit of the doubt if they have a change
of heart upon reconsidering an issue, but this coming after the fact rings a
bit hollow to me. I think the only principle at play here is that it became a
PR issue. That's fine, but let's be honest about it.

Early emails between Google execs framed this project only in terms of revenue
and potential PR backlash. As far as we're aware, there was no discussion
about the morality of the matter (I'm not taking any moral stance here just to
be clear.) Once this became an internal and external PR issue, Google held a
series of all hands meetings and claimed that this was a "small project" and
that the AI would not be used to kill people. While technically true, those
same internal emails show that Google expected this to become a much larger
project over time, eventually bringing in about $250M / year[1]. So even then
they were being a bit disingenuous by focusing only on the current scope of
the deal.

And here we are now with a release from the CEO talking about morality and
"principles" well after the fact. I doubt many people do anyway, but I'm not
buying the "these are our morals" bit.

[https://www.bizjournals.com/sanjose/news/2018/06/01/report-g...](https://www.bizjournals.com/sanjose/news/2018/06/01/report-google-thought-military-drone-project-would.html)

~~~
themacguffinman
I doubt that Google spelling out their moral stance is intended to convince
you right away that they're all good now. It's a public standard that they're
setting for themselves. If you think their actions don't match their words,
you now have concrete terms and principles to critique and compare with. It's
a benchmark to which employees and the public can hold them accountable.

~~~
ksk
Historically has anyone succeeded in holding such giant firms accountable to
their own stated principles? At the moment, I like those principles more than
I like Google.

~~~
skybrian
The funny thing about "holding people accountable" is that people rarely
explain what it means, and I'm not even sure they know what it means. It's a
stock phrase in politics that needs to be made more concrete to have any
meaning.

~~~
blacksmith_tb
And it requires that you be in a position of power - otherwise it's just
heckling, which isn't likely to have any real impact. In this case it'd be
having the ability to impose fines, or discipline corporate officers, etc.

~~~
skybrian
I wouldn't think of bad press as "just heckling." A company's reputation can
be worth billions in sales.

It's true that many boycotts fizzle out, though.

------
ISL
The best way to lead is by example. Thank you, Googlers.

The choice not to accept business is a hard one. I've recently turned away
from precision-metrology work where I couldn't be certain of its intent; in
every other way, it was precisely the sort of work I'd like to do, and the
compensation was likely to be good.

These stated principles are very much in line with those that I've chosen; a
technology's primary purpose and intent must be for non-offensive and non-
surveillance purposes.

We should have a lot of respect for a company's clear declaration of work
which it will not do.

~~~
chellam
Turning away will slow things down but AI for military applications will
happen. Someone will fill the void.

~~~
staticassertion
That doesn't make it ok to be the one to fill it.

~~~
philwelch
On a geopolitical level, it does. It's far better for the world that the
United States developed atomic weapons before either Germany or the Soviets
did.

~~~
qbaqbaqba
The US was the only country ever to use atomic weapons at all, and it used
them on an already defeated enemy.

~~~
ISL
The Japanese were hardly already defeated. Had the bombs not been dropped, the
United States would have invaded Japan directly, island by island, until the
country surrendered. The loss of life on both sides would have been
tremendous.

Source: My grandfather had orders to go and do exactly that when the dropping
of the bombs ended the war.

~~~
giobox
This is of course one of the justifications American leaders used, and as
always the victor gets to set the perceived historical narrative. Politically
it was extremely important for the US to believe the bomb materially shortened
the war given the huge amount of resources the Manhattan Project had consumed
that otherwise could have been invested elsewhere in the war effort,
especially when the military had to justify the incredible expense to Congress
(adjusted for inflation, the total cost is around $30 billion in 2018 dollars).
I've recently been reading the excellent "The Making of The Atomic Bomb" by
Richard Rhodes which covers the events of this period in much detail.

The US had already been ridiculously effective using firebombing to level
Japanese cities with their B-29s - so much so, they actually had to consider
slowing down/changing targets to leave enough behind to use the Atomic Bomb
on: there was almost nothing left worth hitting in strategic terms. By the
time the bomb was dropped Japan was largely a beaten nation already
considering surrender, Tokyo a smoldering rubble pile save for the Imperial
Palace.

"The bomb simply had to be used -- so much money had been expended on it. Had
it failed, how would we have explained the huge expenditure? Think of the
public outcry there would have been... The relief to everyone concerned when
the bomb was finished and dropped was enormous." \- AJP Taylor.

Of course no one can say with certainty, but I certainly don't consider the
answer to this question to be a simple one.

~~~
philwelch
The US had no way of knowing for sure what the top-level strategic decisions
were in Japan. All they knew was that, throughout the war, Japanese troops
virtually never surrendered, repeatedly fought to the death, and engaged in
outright suicidal tactics including Kamikaze attacks. This persistence not
only continued but intensified on Okinawa. There was no reason to believe that
the Japanese military would ever stop short of fighting to the bloody end.

Even after Nagasaki, it took personal intervention from the Emperor and the
foiling of an attempted coup for Japan to surrender.

Of course, _dropping_ the bomb and _developing_ the bomb are two distinct,
albeit related, ethical questions.

------
finnthehuman
>2\. Avoid creating or reinforcing unfair bias.

They DO realize that the YouTube recommendation algorithm is a political bias
reinforcement machine, right?

Like, I think it's fun to talk trash on Google because they're in an incredibly
powerful position, but this one isn't even banter.

~~~
rmk
I'm convinced that Google is touting its 'principles' with a hackneyed blog
post probably written by some PR flack.

As an American, I'm disappointed, and positively enraged by the hubris on
display here. A bunch of (non-US) employees have pressured Google and
therefore compromised the national interests of the United States.

See this for an alternative viewpoint:
[http://www.chicagotribune.com/news/opinion/commentary/ct-per...](http://www.chicagotribune.com/news/opinion/commentary/ct-perspec-google-artificial-intelligence-national-security-project-maven-america-0607-story.html)

It's high time these companies are regulated and their malfeasance reined in
by the United States.

~~~
ta0982357
> and therefore compromised the national interests of the United States.

Did I miss something? Was the Selective Service Act amended to extend to
corporations, too? Was Google drafted?

Last I checked, cooperation with the United States military has been purely on
a volunteer basis since Vietnam.

~~~
rmk
You raise an interesting point. What makes Google the corporation better than
males under 26?

If you believe in corporate personhood, then Google and Facebook are definitely
villains --- avoiding taxes, running ads from enemy states, etc., while
maintaining a shroud of secrecy and non-accountability --- positively
treasonous acts if committed by a person. If you do not, then what right is
violated by making corporations subject to the Selective Service Act?

------
cromwellian
Several comments don't seem to understand what the "unfair bias" mentioned is.
It doesn't have anything to do with censoring your favorite conservative
search result.

The machine learning "bias", at least the low hanging fruit, is learning
things like "doctor == male", or "black face = gorilla". How fair is it that
facial recognition or photo algorithms are trained on datasets of white faces
or not tested for adversarial images that harm black people?

Or if you use translation tools and your daughter translates careers like
scientist, engineer, doctor, et al and all of the pronouns come out male?

The point is that if you train AI on datasets from the real world, you can end
up reinforcing existing discrimination local to your own culture. I don't know
why trying to alleviate this problem triggers some people.
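To make the mechanism concrete, here's a toy sketch (purely illustrative, not
any real translation system; the corpus and function names are made up): a
"model" that just memorizes co-occurrence counts from a skewed corpus echoes
that skew back at prediction time.

```python
# Toy illustration of dataset bias: a model that memorizes co-occurrence
# statistics from a skewed corpus reproduces the skew in its predictions.
from collections import Counter

# Hypothetical training corpus: (profession, pronoun) pairs, imbalanced
# the way real-world text often is.
corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

counts = {}
for profession, pronoun in corpus:
    counts.setdefault(profession, Counter())[pronoun] += 1

def predict_pronoun(profession):
    """Return the pronoun most often seen with this profession."""
    return counts[profession].most_common(1)[0][0]

print(predict_pronoun("doctor"))  # -> he: the data's skew, echoed back
print(predict_pronoun("nurse"))   # -> she
```

Nothing in the "model" is malicious; the bias lives entirely in the training
data, which is the point.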

~~~
zawerf
It's definitely a problem worth alleviating. But it is "triggering" because it
is an open problem to determine whether a bias is harmful, even for human
beings. So it becomes an impossible/unreasonable amount of extra work if you
demand it as a prerequisite.

For example, in your translation tool example, even a human translator would
have trouble making the least offensive translation possible. She/he/(insert
favorite pronoun here) would need to realize the audience is a young
impressionable child who is about to base her entire world-view on whether
there's statistically more of her gender in that one sentence of translation.

For a machine learning algorithm to understand enough about human nature to
not offend the parent of that child, you're better off waiting for AGI that
can surpass human tact.

~~~
cromwellian
You don't think a black person trying to use a photo management app and having
their children's photos miscategorized as gorillas is harmful?

We know what biases people say offend them already, there's no evidence fixing
them is harmful, but a non-zero risk that not fixing them is harmful.

I feel like what I'm encountering is a conservative bias against changing the
status quo, "social engineering", and the like. It seems people don't like
deliberate, non-organic, changes to the status quo (well, they don't tend to
like organic ones either, like succeeding generations becoming say, more
sexually liberal)

Machine learning can create filter bubbles, echo chambers, and feedback loops,
and people may give more weight to answers provided by machines than to those
provided by people. So when machine learning reinforces, even more strongly,
current cultural biases that we're already seeking politically to ameliorate,
it seems prudent and pragmatic to try to balance that.

------
bobcostas55
>Avoid creating or reinforcing unfair bias.

I recommend _The impossibility of “fairness”: a generalized impossibility
result for decisions_[0] and _Inherent Trade-Offs in the Fair Determination of
Risk Scores_[1]

[0]
[https://arxiv.org/pdf/1707.01195.pdf](https://arxiv.org/pdf/1707.01195.pdf)
[1]
[https://arxiv.org/pdf/1609.05807v1.pdf](https://arxiv.org/pdf/1609.05807v1.pdf)

~~~
kanox
> Avoid creating or reinforcing unfair bias.

This is very high up, and it's written in a way which would explicitly allow
"fair bias". This means activists will have a free hand to use their positions
at Google to enforce their vision of political orthodoxy.

~~~
skybrian
A search engine with no biases whatsoever is useless. The whole point is to be
biased towards articles that users find more useful, not to give a random
sample of the Internet (whatever that means).

I'm sure there will be internal debate over what biases are good ones to keep
and nobody gets a free hand. But as a policy, it doesn't restrict Google's
options very much.

------
locacorten
> We believe that AI should:

>

> 1\. Be socially beneficial.

> 2\. Avoid creating or reinforcing unfair bias.

> 3\. Be built and tested for safety.

> 4\. Be accountable to people.

> 5\. Incorporate privacy design principles.

> 6\. Uphold high standards of scientific excellence.

> 7\. Be made available for uses that accord with these principles.

While I like this list a lot, I don't understand why this is AI-specific, and
not software-specific. Is Google using the word "AI" to mean "software"?

~~~
sidcool
Great point. AI is not the only way to reinforce biases.

------
Isamu
> AI applications we will not pursue [...] Technologies that cause or are
> likely to cause overall harm. [...] Weapons or other technologies

This statement will have zero impact on subsequent sensational headlines or
posters here claiming Google is making killbots.

~~~
ironjunkie
Well, now you are talking about philosophical questions.

\- Is pursuing an AI that kills 10 bad guys but saves 20 good ones overall
bad?

\- Is pursuing an AI that doesn't kill anyone but pushes us to watch ads and
lose our lives on YouTube overall bad?

~~~
lev99
> \- Is pursuing an AI that kills 10 bad guys but saves 20 good ones overall
> bad?

\- Yes. If your philosophical development is to the point where there are
clearly defined good guys and bad guys in your mind I suggest you read more.

~~~
oh_sigh
Am I philosophically underdeveloped if I think it's pretty clear that 20
merchants and shoppers in a souk are the good guys, and the people with
suicide vests on are the bad guys?

~~~
lev99
An argument that relies heavily on racial prejudice and emotional appeal is
philosophically underdeveloped.

~~~
briandear
A suicide vest isn’t racially prejudiced. Identifying those that are wearing
them isn’t an emotional appeal.

~~~
lev99
The identification of a person wearing a suicide vest isn't a racial or
emotional problem.

Declaring that a person wearing a suicide vest in an Arab market is inherently
bad does play on racial prejudice and emotional appeal. It's impossible to
separate that statement from a decade and a half of propaganda, and from the
trauma caused by the numerous suicide attacks on civilians in the West.

~~~
orangecat
Forget "inherently" good or bad. It's entirely possible that the guy in the
vest isn't a morally horrible person; he may have been brainwashed since birth
and genuinely believe that his actions will bring about a better world. Still,
he intends to kill himself and as many other people as possible, and if I have
the ability to cause the only death to be his, I'm doing that every time.

~~~
lev99
I understand the self-defense argument. Law enforcement and armies are trained
to respond with violence to people with intent to kill. The passengers of
United Airlines Flight 93 are heroes.

Killing "bad people" to save "good people" is not a self-defense argument. The
statement is too general. Is it okay to do medical testing on criminals if it
speeds the advancement of medicines that save law-abiding citizens' lives? We
could limit the program to people found guilty of the worst crimes and
increase the legal burden of proof for criminals going into medical testing.
It would decrease overall harm. It would kill bad guys to save good guys.
Still, this practice is illegal for good reason. We need a better reason to
kill someone than that they are bad.

------
athoik
Some time ago, on π day, I became aware of the following. Sadly, I totally
agree with the "trend" :(

"If machines produce everything we need, the outcome will depend on how things
are distributed. Everyone can enjoy a life of luxurious leisure if the
machine-produced wealth is shared, or most people can end up miserably poor if
the machine-owners successfully lobby against wealth redistribution. So far,
the trend seems to be toward the second option, with technology driving ever-
increasing inequality."

\-- Stephen Hawking

------
skapadia
Under "AI applications we will not pursue", it's telling that the first rule
basically allows them to override all the subsequent ones: "where we believe
that the benefits substantially outweigh the risks". "We believe" gives them a
lot of leeway.

~~~
londons_explore
"We believe that AI powered nukes will lead to world peace when nobody dares
go up against them, therefore we're going ahead with the project."

------
juliend2
> Weapons or other technologies whose principal purpose or implementation is
> to cause or directly facilitate injury to people.

> We want to be clear that while we are not developing AI for use in weapons,
> we will continue our work with governments and the military in many other
> areas. These include cybersecurity, training, military recruitment,
> veterans’ healthcare, and search and rescue.

I wonder if this is an official response to the people at Google[1] who were
protesting[2] against Project Maven.

[1] [https://www.nytimes.com/2018/04/04/technology/google-letter-...](https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html)

[2]
[https://static01.nyt.com/files/2018/technology/googleletter....](https://static01.nyt.com/files/2018/technology/googleletter.pdf)

~~~
glitchc
I concur. This post seems to be specifically targeting the negative perception
the resignations were generating.

~~~
ryanobjc
Is there any room in your cynicism for any other view?

For example, maybe Google leadership did hear the complaints, then realized
what was happening and what could happen, and decided that something needed to
be done to create and uphold values.

I don't approve of defensive cynicism, I don't believe it makes a better
world. In fact, I think that defensive cynicism is the reason why the world
sucks. Let me guess, you don't vote because "all politicians are liars"? How's
that working out for you?

~~~
glitchc
Sorry for your pain. I understand this realistic approach bothers you a bit.
As it happens, I expect decency from individuals, not corporations. My
skepticism is justified in Google's post, since at the end of all the words,
absolutely nothing was changed. If Google were truly listening, they would
stop working on the project immediately. Therefore, rather than outright
contrition, this post reads more like "Sorry, not sorry."

Politicians follow Hotelling's Law. It doesn't actually matter who you vote
for as long as you vote. The only thing that changes is the flavour, and
that's a matter of personal preference.

------
amaccuish
Also on this topic, from students who would interview at Google. It's
important to hear how the upcoming generation, who would actually be doing
this work, feels.

[https://gizmodo.com/students-pledge-to-refuse-job-interviews...](https://gizmodo.com/students-pledge-to-refuse-job-interviews-at-google-in-p-1826614260) [Students Pledge to Refuse Job Interviews at Google
in Protest of Pentagon Work]

~~~
edhu2017
From my observations in my college, Google is starting to become the new
"Microsoft". It is still seen as well paying, and nerdy but it is no longer
new and exciting. Facebook is taking over Google's spot as the well
established but still exciting tech company, while Airbnb seems to be taking
FB's spot as the new and exciting tech company.

~~~
davesque
I have rather the opposite impression of Facebook. They seem sort of tired and
are moving onto the same level as Uber in terms of ethical conduct.

~~~
londons_explore
Facebook's business seems to be suffering in various dimensions...

The key figure, total minutes of time spent on the site, is being warped to
include Instagram and WhatsApp, trying to hide the fact that Facebook itself
is dying.

------
jfv
I've been asking myself this question for over 20 years: who are these people
that click on ads anyway?

Ads are inherently going to be the opposite of Google's values, yet Google
depends on them for the vast majority of their revenue. They show you some
search results in line with their values, and if you can't get to the top of
that "intrinsically", you buy ads or SEO. The folks that use that system to
exploit the least intelligent win here, and Google takes a share of the
profit.

Based on my Google search results in the recent past, Google isn't doing a
good job of making sure the "best" websites (by my own value system, of
course) make it to the top. I find myself having to go into second and third
page results to get legitimate information. I'm seeing pages of medical
quackery that "sounds good" but isn't based on science when I try to find diet
or exercise advice.

As technology becomes more democratic, more people will use it. That means
that the people that spend more time trying to sell you shit are going to win,
because they're the ones that are willing to reverse-engineer the algorithm
and push stuff up to the top. They add less value to society because they're
spending all their time on marketing and promotion.

I wish I knew how to solve this problem. By imposing morals, Google "bites the
hand that feeds".

------
75dvtwin
The US government should consider accelerating the breakup of the Google
monopoly, so that "_...we understand there is room for many voices in this
conversation._" becomes more meaningful.

------
jillesvangurp
As much as I appreciate the conflict of interest here between doing good,
making money, helping the US government do its thing, and simply chickening
out for PR reasons; I'd like to provide a few sobering thoughts. AI and
misappropriation by governments, foreign nations, and worse is going to
happen. We might not like it but that cat has long been out of the bag. So,
the right attitude is not to decline to do the research and pretend it is not
happening but to make sure it ends up in the right hands and is done on the
right terms. Google, being at the forefront of research here, has a heavy
responsibility to both do well and good.

I don't believe Google declining to weaponize AI, which, let's face it, is what
all this posturing is about, would be helpful at all. It would just lead to
somebody else doing the same, or worse. There's some advantage to being
involved: you can set terms, drive opinions, influence legislation, and
dictate roadmaps. The flip side is of course that with great power comes great
responsibility.

I grew up in a world where 1984 was science fiction and then became science
fact. I worry about ubiquitous surveillance, inescapable AI-driven lifetime
camera surveillance, and worse. George Orwell was a naive fool compared to
what current technology enables right now. That doesn't mean we should shy
away from doing the research. Instead make sure that those cameras are also
pointed at those most likely to abuse their privileges. That's the only way to
keep the system in check. The next best thing to preventing this from
happening is rapidly commoditizing the technology so that we can all keep tabs
on each other. So, Google: do the research and continue to open source your
results.

~~~
imbokodo
This was basically the same argument the Zentrumspartei made for voting for
the Enabling Act in 1933.

~~~
Gorgor
What was their argument? Do you have a link?

------
capitalisthakr
Reminded me of their first principle, and how well they did with that one:
"Don't be evil"

~~~
beaner
Pretty well, all things considered.

~~~
mtgx
3 antitrust cases in the EU (Shopping, Android, and AdSense), and a couple of
FTC antitrust cases in the US (one of which Eric Schmidt lobbied away through
Obama) say otherwise.

[https://www.theregister.co.uk/2016/08/18/google_had_obamas_e...](https://www.theregister.co.uk/2016/08/18/google_had_obamas_ear_on_antitrust_probe/)

If that's supposed to be "our best", we're in trouble.

~~~
randcraw
Antitrust isn't about committing a crime so much as competing so well that the
company sucks the oxygen from the room. Becoming a monopoly isn't evil. It's
an excess of success.

It's the responsibility of government antitrust law, not Google, to 'un-
distort' the playing field and resurrect competition / opportunity. No one
expects a corporation to voluntarily give away marketshare to ensure it
doesn't run afoul of antitrust law, just because the company also vowed not to
be evil. A negotiated settlement with regulators is exactly where this kind of
matter should lead; no crime done.

~~~
erikpukinskis
Yes, being a monopoly isn’t evil, leveraging monopoly position is.

------
davesque
It's good that they're openly acknowledging the misstep here. However, I wish
that the "will not pursue" section got the same bold-faced treatment as the
one above it.

It seems appropriate at this point for industry leaders in this field, and
governments, to come together with a set of Geneva-convention-like rules which
address the ethical risks inherent in this space.

~~~
benatkin
It certainly leaves the door open. Reminds me of that saying that no ethically
trained software engineer would write a DestroyBaghdad procedure, but would
write a DestroyCity procedure to which Baghdad could be passed as a parameter.

------
djrogers
> Technologies that gather or use information for surveillance violating
> internationally accepted norms.

What does that even mean? Internationally accepted? By what nations and people
groups? I’m pretty sure China and Russia have different accepted norms than
Norway and Canada - which ones will you adhere to?

------
fortythirteen
> We want to be clear that while we are not developing AI for use in
> weapons...

we will be developing AI for things that have weapons attached to them. We
hope our lawyerly semantics are enough to fool you rubes for as long as it
takes us to pocket that sweet military money.

~~~
Barrin92
reminds me of the joke

>"It should be noted that no ethically-trained software engineer would ever
consent to write a NukeChicago procedure. Basic professional ethics would
instead require him to write a NukeCity procedure, to which Chicago could be
given as a parameter."

------
paulgpetty
So was the “Don’t be evil” principle or mantra that we’re all disappointed
about documented in a blog post? For some reason I thought it was on a page
like this: [https://www.google.com/about/our-commitments/](https://www.google.com/about/our-commitments/)

Either way it’s just a statement on a webpage which has all the permanence of
a sign in their HQ lobby. It’s going to be hard to convince people that
statements like this from a Google, a Facebook, or an Uber really mean
anything — especially long term.

Will their next leadership team or CEO carry on with this?

~~~
ucaetano
_Don't be evil_ is in the Code of Conduct:

[https://abc.xyz/investor/other/google-code-of-conduct.html](https://abc.xyz/investor/other/google-code-of-conduct.html)

------
hueving
Pretty rich for them to claim privacy is important when all of this technology
is based on funneling your private data straight to them for storage and
processing.

------
whazor
But how? Let's assume I personally offer artificial intelligence services: I
provide some APIs where my customers upload training and testing data, and I
return a trained ML model. I do not know who uses my service or what they are
doing...

Furthermore, if I ban the military, then another company could do the work for
them. So would every customer have to explain their activities?

~~~
j2kun
FTA:

> As we develop and deploy AI technologies, we will evaluate likely uses in
> light of the following factors: [...] Nature and uniqueness: whether we are
> making available technology that is unique or more generally available

Presumably this means they would allow the US military to use their cloud
services like any other customer. This is almost certain not to happen because
of the classified nature of their data.

Also, this is meant to guide how Google decides what to develop, not (AFAICT)
meant as a terms of service for customers. Though I bet Google still reserves
the right to block accounts it decides are using its platform in a way they
think is immoral.

Disclaimer: I work for Google.

------
Dowwie
This likely took careful consideration and deliberation among a number of
people. Google should be commended for the effort.

------
ehudla
What do you think about the following potential additions?

1\. "Pursue legislation and regulation to promote these principles across the
industry."

2\. "Develop or support the development of AI-based tools to help combat or
alleviate the dangers noted in the other principles, in products developed by
other companies and governments."

------
forapurpose
At least they are starting the conversation. I'd be much more comfortable with
principles of design and implementation in addition to outcomes. For example,
transparency is essential. Also:

 _5\. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our
AI technologies. We will give opportunity for notice and consent, encourage
architectures with privacy safeguards, and provide appropriate transparency
and control over the use of data._

Why not "give people control over their privacy and over their information"?
That's a commitment to an outcome. "Incorporate ... principles", "give
opportunity", "encourage", and "appropriate transparency and control" are not
commitments. Google seems to be hedging on privacy.

~~~
mylons
they're only starting it because of the backlash from employees internally and
more importantly the external backlash.

------
TaylorAlexander
The principles state that they will not make weapons. However the latest
report I’ve seen states that their current contract for the military ends some
time in 2019. [1]

So while google says it will not make weapons, it seems that for the next 6-18
months it will continue to do so.

Does anyone know when in 2019 the contract expires? It seems odd to come out
with a pledge not to make weapons while continuing to make weapons (assuming
that is what they are doing).

(Full disclosure, I am a contractor at an Alphabet company, but I don’t know
much about project Maven. These are my own opinions.)

[1] [https://www.theverge.com/2018/6/1/17418406/google-maven-dron...](https://www.theverge.com/2018/6/1/17418406/google-maven-drone-imagery-ai-contract-expire)

~~~
ocdtrekkie
I believe it continues through March 2019.

------
exabrial
Google: We take Pentagon contracts to track people's location with our AI.
That's so bad.

Also Google: We will totally use our AI to 'legally' track a single mom that
clicked a fine print EULA once while signing into our app. That's totally
fine. It's different mmk?

~~~
CydeWeys
This comment comes off as intentionally disingenuous to me.

In the latter case, it's about serving up ads.

In the former case, it's literally about assassinating people from the sky.

They really _are_ different.

~~~
exabrial
Both can be used for malicious prosecution.

------
TremendousJudge
>At its heart, AI is computer programming that learns and adapts

No, that's machine learning. AI is intelligence demonstrated by machines, and
it doesn't necessarily mean that it learns or adapts.

------
billybolton
Luckily no one needs to worry about Google ever creating advancements in AI
(they can't; they lack the required skill set). Google is the modern-day IBM,
and AlphaGo is just another Deep Blue. I wonder when Google will make a
gimmick like Watson. I guess Duplex is the beginning of it. It's amazing to
see how many people were impressed by that. Then again, the tech scene lacks
the scientific rigour required for spotting breakthroughs.

------
acobster
Applications they will not pursue include those "that gather or use
information for surveillance violating internationally accepted norms." That's
some fancy gymnastics there, Mr. Pichai. Well played.

I was wondering how or if they were going to address this. It saddens me to
see that Google considers collecting as much data as possible about all its
users to maximize ad revenue an international norm. It saddens me more to see
that they're correct.

------
thrusong
Didn't Google have a motto of "Don't be evil," and then new management retired
the saying? What's stopping that from happening again in this case?

~~~
Crash0v3rid3
No, it's still in their code of conduct.

------
MVf4l
Great, another "Don't be evil" with a new coat of paint, which they can ditch
whenever they feel powerful enough to ignore society's feedback.

Such a statement absolutely relieves the pressure coming from the public, and
hence from lawmakers. Can we make sure big companies are legally accountable
for what they claim to the public? Otherwise they can say whatever persuades
people to be less vigilant about what they are doing, which is deceptive and
irresponsible.

------
RcouF1uZ4gsC
>4\. Be accountable to people.

>We will design AI systems that provide appropriate opportunities for
> feedback, relevant explanations, and appeal. Our AI technologies will be
> subject to appropriate human direction and control.

Youtube moderation and automated account banning, combined with the inability
to actually get in contact with a human, show that they have a long way to go
with this principle.

~~~
MarkMMullin
That is going to be a tough one, and possibly even impossible based on where
current ML tech is leading. A laudable goal, but taking a complex system with
a training time of many GPU years and asking it how it came up with the answer
basically nets a very large pile of numbers (weights) tied together in a
complex multidimensional relationship that we just plain can't follow outside
of the system. Right now the practical focus is on trying to stop feeding the
systems biased data. Your example is spot on, save for the 'not able to talk
to someone' part, which is just googz being too aloof and too cheap.

------
kolbe
Trust is like a mirror: you can fix it if it's broken, but you'll always see
the crack in that motherfucker's reflection.

------
godelmachine
I was kind of reminded of Asimov's Three Laws of Robotics while going through
the Principles, especially the 7th one.

~~~
notfed
3\. Be built and tested for safety.

4\. Be accountable to people.

6\. Uphold high standards of scientific excellence.

------
s2g
> Technologies that gather or use information for surveillance violating
> internationally accepted norms.

I guess Google's policy of sucking up any and all data doesn't go against
internationally accepted norms.

This entire article reads like BS if you think about what Google actually
does.

------
confounded
This is pretty weak tea. It seems to completely justify working on anything,
as long as the tiny part that Google engineers touch is software, and they
aren't personally pulling triggers.

> _1\. Technologies that cause or are likely to cause overall harm. Where
> there is a material risk of harm, we will proceed only where we believe that
> the benefits substantially outweigh the risks, and will incorporate
> appropriate safety constraints._

Is this "We have solved the trolley problem"?

Benefits to whom? US consumers? Shareholders? Someone in Afghanistan with the
wrong IMEI who's making a phone call?

Without specifying this, this statement completely fails as a restraint on
behavior. For an extrajudicial assassination via drone, is 'the technology'
the re-purposed consumer software to aid target selection, or the bomb?
Presumably the latter in every case.

> _2\. Weapons or other technologies whose principal purpose or implementation
> is to cause or directly facilitate injury to people._

This leaves the vast majority of military applications in scope. By this
definition, Project Maven (the cause of resignations/protests) meets the
criteria of not _" directly facilitat[ing] injury to people"_. It selects whom
and what to cause injury to, at lower cost and accuracy, scaling up the total
number of causable injuries per dollar.

> _3\. Technologies that gather or use information for surveillance violating
> internationally accepted norms._

Google _set the norms_ for surveillance by being at the leading edge of it.
It's pretty clear from Google's positioning that they consider data stored
with them for monetization and distribution to governments completely fine.
Governments do, too. And of course, _" If you have something that you don't
want anyone to know, maybe you shouldn't be doing it in the first place."_[0].

> _4\. Technologies whose purpose contravenes widely accepted principles of
> international law and human rights._

It's difficult to see how this could be anything but a circular argument that
whatever the US military thinks is appropriate, is accepted as appropriate,
because the US military thinks it is.

The most widely accepted definitions of human rights are the UN's, and the
least controversial of those is the Right to Life. There are legal limits to
this right, but by definition, extrajudicial assassinations via drone strike
are in contravention of it. Even if they're _Googley extrajudicial
assassinations_.

[0]: [https://www.eff.org/deeplinks/2009/12/google-ceo-eric-
schmid...](https://www.eff.org/deeplinks/2009/12/google-ceo-eric-schmidt-
dismisses-privacy)

~~~
jadedhacker
Was going to say much the same thing, thank you. You said it much better.

------
sethbannon
Love this leadership from Jeff Dean and the team at Google AI. Technology can
be an incredible lever for positive change, but it can just as easily be a
destructive force. It's always important to think carefully about how to
ensure the former is the case and not the latter.

------
foobaw
I wish they would define and clarify what "harm" means.

------
gandutraveler
AI can and will be used to cause harm. I hope this doesn't put the US at a
huge disadvantage against other nations like China, where the government has
more control over and access to AI.

------
foolinaround
> Avoid creating or reinforcing unfair bias.

AI will likely reflect the bias of its training set, which likely reflects the
bias of its creators. So, is it fair to say that AI will be biased?

~~~
cpeterso
There is definitely a risk of machines learning to reinforce existing but
undesirable bias in real-world training data. There is research into
documenting forms of bias so they can be recognized and countered when
selecting training data.

[https://sloanreview.mit.edu/article/the-risk-of-machine-
lear...](https://sloanreview.mit.edu/article/the-risk-of-machine-learning-
bias-and-how-to-prevent-it/)

[https://www.entrepreneur.com/article/279927](https://www.entrepreneur.com/article/279927)
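The kind of bias described above can often be spotted before training with a
simple audit of the label distribution. A minimal sketch (the data, field
names, and `positive_rate_by_group` helper are all hypothetical):

```python
from collections import Counter

def positive_rate_by_group(records, group_key, label_key):
    """Fraction of positive labels within each group of a dataset."""
    totals, positives = Counter(), Counter()
    for r in records:
        g = r[group_key]
        totals[g] += 1
        if r[label_key]:
            positives[g] += 1
    return {g: positives[g] / totals[g] for g in totals}

# Toy training set where the label correlates with group membership --
# a pattern a model trained on it would happily learn and reinforce.
data = [
    {"group": "A", "hired": True},
    {"group": "A", "hired": True},
    {"group": "A", "hired": False},
    {"group": "B", "hired": True},
    {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]
rates = positive_rate_by_group(data, "group", "hired")
# Group A's positive rate is double group B's -- a red flag worth
# investigating before this data is used for training.
```

A large gap between groups doesn't prove the data is unfair, but it flags
exactly the kind of skew the linked articles warn about.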

------
metaphorical
The same AI tech developed for "search and rescue" can be easily re-purposed
for "search and destroy". How would Google prevent that from happening?

------
jcadam
As someone who has worked in the defense industry his entire career (and
served in the Army before that), I find the general tone of most of these
comments - in particular the ones coming from supposedly loyal American
citizens - disturbing (not to mention insulting). Almost makes me wish we'd
actually institute mandatory national service.

That said, I'd love to work on ML/AI related defense projects. Thanks to
Google, more of this type of work will surely be thrown over to the
traditional defense contractors - so maybe I'll get that chance, eh?

------
AtomicOrbital
Humanity is racing ever faster to craft its own replacement as a species and
we need to acknowledge this as our finest gift imaginable ... the cat is out
of the bag on AI and no amount of corporate double speak can shed
responsibility for any organization who employs armies who then freely spread
these skills ... passing the torch to that which runs at light speed and is
free of the limits of time which self evolves its own hardware and software
can only be something we collectively should be proud of not afraid of ...
rejoice as we molt and fly into the infinite now

~~~
wu-ikkyu
>can only be something we collectively should be proud of not afraid of

If AI is used for mass murder of the human species, should we be proud?
Humility, rather than hubris, is existentially important when it comes to
wielding the most extreme power humanity has ever known

------
current_call
_AI applications we will not pursue_

 _Technologies that gather or use information for surveillance violating
internationally accepted norms._

They already failed.

------
coreypreston
It's interesting that the sections discussing 'privacy' and 'accountability to
people' contain the least information.

------
sidcool
In a way, the engineers who quit Google had some part in this success. Would
it be unwise for Google to reach out to them?

------
DrNuke
I do not know, really... if not them, someone else will do it anyway. Google
has a competitive advantage (they can hire and pay well the smartest minds on
Earth) and is letting it go? EDIT: going to be even more controversial, but it
needs to be said that Google just can't stay neutral here imho; they either
work for autonomous killing machines or against them, in order to preserve
their market position and brand

~~~
throwaway2048
If I don't build these gas chambers, somebody else will, so I might as well
pocket the money.

You don't see a problem with that position?

~~~
DrNuke
I have no competitive advantage to take care of, though.

~~~
throwaway2048
Questions of competitive advantage and profitability should likely be taking a
backseat to concerns about autonomous killing machines.

~~~
DrNuke
Going to be even more controversial, but it needs to be said that Google just
can't stay neutral here imho; they either work for autonomous killing machines
or against them, in order to preserve their market position and brand

------
hooande
The military is using open source software to sort images, with consulting
help from Google. No killbots, no acts of war, just doing the _only_ thing
that machine learning has any practical use for.

Science fiction writing is hard. I don't know why all of you are doing it for
no pay. We can't judge Google for what we think they _might_ do. And so far,
they're just using ml in the real world

------
retrogradeorbit
All corporations are amoral. They exist to maximise the profit of their
shareholders. This is marketing. It is a nice sounding lie. If it were
authentic, the last few months wouldn't have happened at Google. For me, it
only makes it worse. Because they think we are suckers. Actions speak louder
than words. These words ring hollow.

------
erikpukinskis
I think the avoidance of harm is fundamentally flawed. Creation necessitates
destruction. At times safety necessitates assault. Violence cannot be
eradicated; we can only strive to maximize our values.

Anyone who claims to be non-violent has simply rationalized ignorance of their
violence. See: vegans. (spoken as someone who eats a plant based diet)

------
kerng
This seems like a PR stunt, but at least it's something. Nothing prevents them
from reverting those newly found principles over time... similar to removing
"Don't be Evil" from their mission, which kind of would have covered this.
Google's goal is to make money, and that's what this is about.

------
bovermyer
Just follow the three laws of robotics and you'll be fine.

------
htor
google has no moral or principles. how could it possibly have those things?
how can a global advertisement corp. not be evil? it doesn't make any sense!

------
dhimes
"We're just going to put the tip in...."

------
MVf4l
Off topic, is there a way to tag all the stakeholders of the main
company/government mentioned in title/article?

------
qbaqbaqba
1) money, 2) profit, 3) revenue.

------
mrslave
Don't be Skynet?

------
jamesblonde
"Those are my principles, and if you don't like them... well, I have others."
Groucho Marx

------
ruseOps
“Hey Google, give me three concrete examples of fair bias.”

~~~
tlb
Bias has a very broad meaning in AI work. See
[https://en.wikipedia.org/wiki/Bias-
variance_tradeoff](https://en.wikipedia.org/wiki/Bias-variance_tradeoff)

Most AI/ML/Statistics procedures use Occam's razor: prefer the simplest
explanation of the data. That's a bias (toward simpler explanations), but not
"unfair" to anyone.
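This "bias toward simpler explanations" can be made concrete with ridge
regularization: a penalty term shrinks the fitted weight toward zero (the
simplest answer), trading a little bias for lower variance. A minimal sketch
with a made-up one-variable, no-intercept example:

```python
def fit_slope(xs, ys, ridge=0.0):
    """One-variable least squares: w = sum(x*y) / (sum(x*x) + ridge).

    ridge = 0 gives the ordinary unbiased estimate; ridge > 0 biases
    the fit toward the 'simpler' explanation w = 0 (Occam's razor),
    reducing sensitivity to noise in the data.
    """
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + ridge)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x, with noise

w_plain = fit_slope(xs, ys)             # fits the data (and its noise) closely
w_ridge = fit_slope(xs, ys, ridge=5.0)  # shrunk toward 0: biased, lower variance
```

The ridge estimate is "biased" in the statistical sense, yet nothing about
preferring smaller weights is unfair to any person, which is tlb's point.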

~~~
finnthehuman
Oh, so they mean the boring technical definition that nobody reading the page
cares about?

We can trust that Google will try to build tools that are effective for
Google's business goals without it being proclaimed in a statement of
principles.

------
reilly3000
New hot job title: AI ombudsman.

~~~
criddell
This is Google you are talking about. It's more like _hot new script:
ai_ombudsman.py_

------
jacobsenscott
"Our AI is so powerful it needs special rules!" is pure marketing.

------
gaius
The fact that “make money” isn’t on the list means that you can’t believe
_any_ of it.

Also point 5 is an outright, blatant falsehood given Google’s track record and
indeed entire business model.

~~~
kolbe
This is clearly the only reasonable response to these 'principles'. Google
must have a strong employee presence here.

------
Mononokay
"Principles"

~~~
sctb
You've been posting a lot of unsubstantive comments. Could you please try to
increase the amount of information?

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

~~~
Mononokay
Sure thing; going forward I'll make sure to, and sorry for breaking the
guidelines a bit.

I was thinking I should probably take a break from HN until the current news
cycle has run its course (way too many things happening that I'm tempted to
reply to with one-liners; that works on Twitter, but doesn't work here), and I
should have gone with my intuition, and will now.

