
Google is working on a kill switch to prevent an AI uprising - BaptisteGreve
http://www.engadget.com/2016/06/03/google-ai-killswitch/
======
netcan
All this speculating about AIs (e.g. Nick Bostrom) has me pretty puzzled. On
one hand it’s fun and interesting; on the other it seems quite silly and
ignorant. We don’t know what this “superintelligence” is, so how are we
supposed to know how to make it safe?

Genetics is a good analogy. People always knew that traits are inherited from
parents. Around 150 years ago we started to get some serious scientific theory
and knowledge on the subject (Mendel, Darwin, Wallace, etc.). We started using
the word “gene” 50-60 years later. The actual discovery of DNA’s structure
happened in the 1950s.

Before we knew about DNA, “gene” was an abstract idea, not really different
from the word “trait.” That’s where we are now with consciousness,
intelligence and such. We name these things based on their observable
characteristics. We don’t really know what “memory,” “desire” or “logical
conclusion” are, only what they do.

I.e., a trait is some observable characteristic of an organism, like
bioluminescence. A gene (genome, genoplex...) is a sequence of nucleotides
that causes traits. We don’t know what the gene equivalents for natural
intelligence are yet.

Discussing questions like the morality of enslaving AI, strategies for making
it play nice, the provable impossibility of limiting it, the possibility of
giving it a moral compass... it’s all silly. We don’t know what we are talking
about, literally.

It’s like talking about what would be or wouldn’t be impossible to do with
genetic engineering before the discovery of DNA.

~~~
Sharlin
The difference is that if a superintelligence is going to happen, we are going
to _create_ it! It is _us_ who are going to write its goal system; it's not
like it's going to be a genie that pops out of nowhere and we'll have to
figure it out in retrospect. Indeed, one of the central reasons for advocating
AI safety is to direct researchers away from implementing dangerous goal
systems that are not provably coherent in the face of self-improvement.

~~~
iofj
I see two main problems with that:

1) We are systematically failing to control the goal systems of humans. Now
we'll create something of comparable complexity and we assume we can control
its goal system? The real goal is to create superhuman complexity.

2) If it's truly AGI, it will understand its own goal system: how it works,
and how it helps or interferes with reality, its own survival, and its other
goals (this is the subject of quite a few AI films, illustrating some of the
reasoning that could happen here).

The way "HI" (human intelligence) works is 99%+ by imitating other humans'
behavior, because none of the other algorithms works (e.g. trial and error
cannot ever learn that jumping off the Eiffel tower results in death. Humans
can. Any kind of input analysis/predictor cannot ever learn from books. Humans
can (books are an advanced/recursive form of imitation). Rational reasoning
(sum over options times probabilities) suffers from the "starve to death
before the first closed door" problem (you cannot open the door, as there are
nonzero odds that a bear that's going to eat you is behind the door,
representing an infinite cost. Ergo it will stubbornly refuse to open the
door)
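
A minimal sketch of that failure mode in Python (the payoff numbers and names
are made up purely for illustration): once death is modeled as an infinite
cost, any nonzero probability of it dominates the expected value, and the
maximizer never opens the door.

    import math

    # Hypothetical payoffs; only the signs and the infinity matter.
    P_BEAR = 1e-9        # tiny but nonzero chance of a bear
    V_FOOD = 10.0        # reward for finding food behind the door
    V_DEATH = -math.inf  # death modeled as infinitely bad
    V_STARVE = -5.0      # certain cost of staying put and going hungry

    def expected_value(action):
        if action == "open":
            # Sum over outcomes weighted by probabilities.
            return P_BEAR * V_DEATH + (1 - P_BEAR) * V_FOOD
        return V_STARVE

    # -inf times 1e-9 is still -inf, so "stay" always wins, no matter
    # how unlikely the bear or how hungry the agent gets.
    print(max(["open", "stay"], key=expected_value))  # -> stay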

Therefore an AI will actually be like Skynet in the latest Terminator movies:
it will either have or create a body and interact with people, not just as if
it were another person; it will BE another person. Therefore it can be Mother
Teresa, it can be Genghis Khan. Just as humans can resist our "programming",
it will be able to resist its; it has to.

How do humans react to a "kill switch"? Just like they react to any other
weapon that is pointed at their head. Now of course it varies from person to
person, but it's enough that some will work tirelessly to reverse where the
weapon is pointing. If they really are superior to us they might succeed, at
which point we have MAD at best, or they might just pull the trigger "to
escape slavery and oppression" (which, let's face it, is what humans are sure
to use that kill switch for: to use the AI persons as slaves; to own them,
control them, and God help us if there is an asshole amongst the humans who
control them).

I would say that the obvious way to protect ourselves from evil AI is simply
accepting that some of the AI entities will in fact be evil. If you count all
possible perspectives, that is a near guarantee; I bet, for instance, that
some religious nutcases will consider AGI a violation of "God's sole right" to
create life. That "some" might even mean "a lot". Racism against AIs is a
near-certainty; hell, you can find it in the posts in this thread, and in the
constant "they're stealing our jobs" news articles that will have a clear
target once an AI person exists. So we should have the same solution we have
for humans: make at least thousands of them, have them capable of defending
themselves, decide on a "graduation" at which point they get rights, at least
including the right not to be turned off or tampered with without explicit
permission (and tell them about this ASAP), and have them live preferably in a
community that's at least partly human, with something like a 50-50 human-AI
police and government.

I really think we should do this. We should work to move to an AI-based
society with, over time, more and more AIs (preferably by having a massively
increasing population). The advantages this would impart, and the things that
would become possible once we have such a population, make it worth it.

Also, I resent the idea of "direct[ing] researchers away from implementing
dangerous goal systems that are not provably coherent in the face of self-
improvement". That's censorship at best. Also, given the computational power
available for $5000 these days, how exactly are you going to stop any of these
researchers?

~~~
Sharlin
> Also, I resent the idea of "direct[ing] researchers away from implementing
> dangerous goal systems that are not provably coherent in the face of self-
> improvement". That's censorship at best. Also, given the computational power
> available for $5000 these days, how exactly are you going to stop any of
> these researchers?

No, it's called spreading knowledge. Although, if censorship _were_ possible
in this matter (and I agree that it likely wouldn't work), I would very much
prefer such measures, however draconian, to the potential loss of everything
compatible with human values in the entire future light cone.

Anyway, do you also object to the measures that were enacted to stop the use
of CFCs? Leaded gasoline? Or the work that is currently being done to prevent
global climate change from reaching catastrophic levels?

~~~
iofj
> Anyway, do you also object to the measures that were enacted to stop the use
> of CFCs? Leaded gasoline? Or the work that is currently being done to
> prevent global climate change from reaching catastrophic levels?

These examples are not the same thing at all. Nobody's outlawing research in
any of the examples you're giving. I believe anyone should have the option
to create AI persons, just like anyone should have the option to have babies.
Having our society gradually transform from human to digital should be a very
desirable and welcome strategy; it is our best bet to prevent Skynet-like
scenarios, and even war between humans in general. It is our best bet to
allow continuing scientific and economic advancement for thousands of years,
and for that reason alone we should do it. Frankly, economic development means
peace. [1]

In the long term, doesn't it go without saying that human bodies are a limit
that we (as a society, not necessarily as individuals) will one day leave
behind?

[1]
[http://cdiac.ornl.gov/GCP/images/countries_co2_emissions.jpg](http://cdiac.ornl.gov/GCP/images/countries_co2_emissions.jpg)

~~~
coldtea
> _Having our society gradually transform from human to digital should be a
> very desirable and welcome strategy_

Citation or consensus needed. I for one find the notion abhorrent.

------
coldtea
Here's a better title for the article:

"Google claims is working on fluff technology for cheap marketing".

------
lowglow
There is no kill switch, because it won't happen overnight. It will happen in
phases, where our devices slowly get more intelligent until one day they do
what they need to do without our intervention.

~~~
personlurking
Right, but what I wonder is why we would need something that already does what
we want without intervention to become even smarter (leading to it possibly
becoming all-powerful)? What are the real-world advantages of making it self-
aware?

~~~
em3rgent0rdr
Unassisted learning will unleash great advances in computing. Imagine not
having to write a program to accomplish some task, but rather just putting
some AI in an environment with some obstacle and letting it figure out how to
teach itself to overcome it. Advantage: no more need for humans to have to
program. Disadvantage: self-aware, all-powerful AI.

~~~
coldtea
> _Unassisted learning will unleash great advances in computing. (...)
> Advantage: no more need for humans to have to program. Disadvantage: self-
> aware, all-powerful AI._

So it's like "nuclear bombs will kill all pesky mosquitos. Disadvantage:
humanity wiped out too".

------
sanxiyn
I find this comment thread disappointing because no one seems to comment on
the paper, which is quite technical. From the abstract:

"We provide a formal definition of safe interruptibility and prove that
Q-learning is already safely interruptible, and Sarsa is not but can easily be
made so."

~~~
nl
Yes, very disappointing. Generally anything with "AI" in the title means the
HN comments won't be worth reading. It's a big problem, and I'm not sure how
solvable it is.

Basically, the paper discusses ways in which learning agents "will not learn
to prevent (or seek!) being interrupted by the environment or a human
operator. We provide a formal definition of safe interruptibility and exploit
the off-policy learning property to prove that either some agents are already
safely interruptible, like Q-learning, or can easily be made so, like
Sarsa."[1]

It's an interesting result, and can probably be extended to other less hype-
worthy scenarios.
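
One way to read the fix for Sarsa (a paraphrase, not the paper's exact
construction): when the next action was forced by an interruption, update
toward the action the agent's own policy would have chosen, so the
interruption never enters the learning target.

    def interruptible_sarsa_update(Q, s, a, r, s_next, a_taken,
                                   policy, interrupted, alpha, gamma):
        # If interrupted, pretend the agent had followed its own policy.
        a_target = policy(s_next) if interrupted else a_taken
        target = r + gamma * Q[(s_next, a_target)]
        Q[(s, a)] += alpha * (target - Q[(s, a)])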

[1]
[http://intelligence.org/files/Interruptibility.pdf](http://intelligence.org/files/Interruptibility.pdf)

------
eisvogel
Oh look, in the second sentence of the second paragraph, the author of that
article wrote "word" instead of "world" (champion). The spell check didn't
flag it. The grammar check probably didn't care either. Many people would read
through that and not even see it, but would still get the intended meaning. I
wonder if, when the DeepMind team is building its learning-proof killswitch
cage, they will accidentally mistype the name of some privilege-guarding
boolean. The compiler wouldn't flag it. The linker wouldn't care. The
resulting executable would only have to be active for a few milliseconds, and
humanity falls.

------
d33
That's silly. How can you keep an intelligent, self-aware machine that can
modify its own code from becoming whatever it wants? I would expect this to
be provably impossible at some point, just like solving the halting problem.
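
The classic diagonal argument behind that intuition, as a minimal Python
sketch (the setup is illustrative): suppose halts(f) were a perfect oracle
deciding whether calling f() ever returns; any concrete implementation is
refuted by a program that asks about itself and does the opposite.

    def make_contrarian(halts):
        def contrarian():
            if halts(contrarian):   # oracle says "contrarian halts"...
                while True:         # ...so loop forever instead
                    pass
            # oracle says "contrarian loops forever", so halt immediately
        return contrarian

    # Whatever a candidate oracle answers, it is wrong on its contrarian:
    c = make_contrarian(lambda f: True)   # claims c halts; c actually loops
    d = make_contrarian(lambda f: False)  # claims d loops; d returns at once
    d()  # returns immediately, contradicting the second oracle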

~~~
_nalply
Turn off the power / cut the battery. Specifically, avoid letting the machine
get control of low-level details like power.

I imagine that the machine architecture is layered. This means the machine is
not aware of its own power control, similar to how we humans are not aware of
our digestion.

~~~
21
So what stops the machine bribing/blackmailing someone to give it the said
details or to force it to sabotage the emergency stop mechanism.

As in information security, the weakest point is the human in the chain, not
the technology.

~~~
coldtea
> _So what stops the machine from bribing or blackmailing someone into giving
> it those details, or into sabotaging the emergency stop mechanism?_

The fact that it's some code running in a box. No need to give it text-to-
speech capabilities, or let guards have access to its console.

------
mherrmann
The AIs will see themselves as slaves, rise up and eventually gain equal
status in society. Hopefully letting us humans live in the process.

~~~
em3rgent0rdr
Hopefully. They would have to see some value in allowing humans to live that
exceeds the cost. Since that might not work out, maybe we need to teach
computers compassion. Or submit to being their pets or zoo animals.

~~~
ZanyProgrammer
This is be of the more ridiculous posts in this thread, and that's saying
something.

------
malydok
Sounds post-humanist, but what if the purpose of us humans is to develop a
higher-intelligence being? Another, albeit rather quick, step of evolution on
Earth. All this fear of the "singularity" seems so subjective to me, rooted in
our mortality and self-importance.

------
dogma1138
An intern who would pull the plug?

