
Will Humans Be Able to Control Computers That Are Smarter Than Us? - jonbaer
http://nautil.us/blog/will-humans-be-able-to-control-computers-that-are-smarter-than-us
======
joe_the_user
I believe that a prerequisite for intelligence is the ability to understand
human language, including the ability to sort intent from the literal meaning.
Thus all arguments about AIs being like genies who do what you say, not what
you intend, seem like bs to me.

Similarly, Nick Bostrom argues an AI would have to make survival something
like a hidden agenda, since survival would help it do all sorts of other things.
But lots of humans are quite happy to sacrifice their lives for various
sensible and senseless causes, more so than less intelligent beings as far as
I can see. So the ability of AIs to willingly die or go into oblivion for us
is probably there.

We don't yet know all the qualities of a general AI system, and we can't
create one either - we don't understand human-level intelligence, which is to
say we don't understand ourselves. If an AI system could appear at random, it
might be a problem but our continuous failure over many years to intentionally
build one seems to indicate that a system of this sort will only appear
through a rather detailed understanding of its architecture and qualities. At
that point, it seems fairly certain the group creating the entity will be able
to create it so that it wishes to aid and obey them, to the extent they
specify (and probably will create it in such a fashion but you never know).

My argument that the AI will likely just follow the will of its creators, as
many highly intelligent human servants have followed the wills of their
masters and employers and so forth, sidesteps the "will it be moral?" question. The
problem is that humans haven't been particularly moral in their process of
creating more and more mechanisms to magnify their intentions. There's no
indication that the creation of an AI would change this and so the advent of
an AI might be disastrous in many ways. But disastrous through human intent
rather than a thing escaping human intent.

It's weird that this danger gets much less attention. Or not.

------
karmacondon
_Assuming_ that the title question is ever relevant, which it probably will
not be, what can we do about it?

The most popular proposal is to encourage AI researchers to think about
"Friendly" AI, to program morality into their potentially self-aware code and
to generally take measures to ensure that any intelligence that they create
will be benevolent before it's turned on for the first time. The problem there
is that professional AI researchers don't have a monopoly on the technology.
Given the numbers, it's more likely that someone who isn't a recognized
researcher will create a generally intelligent computer program before a major
research institution will. And what then? Unless there's some way to compel
every hobbyist and grad student to program a certain level of friendliness
into their experiments then all of the warnings and precautions will be for
naught.

With known existential threats there are warning signs. If someone is
developing a deadly virus or building a dirty bomb they'll most likely have to
order specific materials and use certain lab equipment. There will be red
flags that someone who is looking closely might have a chance to see. But with
a computer program? All someone needs is a computer. Maybe dozens of
computers, or more likely dozens of cloud servers hosted with several
different companies. If we took this threat very, very seriously and tasked a
government agency with stopping potentially harmful AI, it would still be
almost impossible to catch in time. For all of the famous people sounding the
alarm, what solutions are available?

The threat isn't very likely, and it's unstoppable even if it turns out to be real.
All we can do is focus on the AI problems that we can solve, develop existing
technology and try to chill. If there's one thing human intelligence will
always be good at, it's hoping for the best.

~~~
rl3
Retarding the progress of AI research obviously isn't the solution, precisely
for the reasons you outlined.

Getting there first, and safely, seems to me the most prudent option. The
creation of a Manhattan Project-style program to accelerate progress within a
restricted, safety-minded environment may be the best way to proceed.

The nature of AGI seems such that the first to attain it may very well
determine the future. I just hope that when governments come to this
realization, they collaborate or otherwise proceed in secret. Nick Bostrom
examined race dynamics between competing projects at length in his book, and
the picture is quite ugly. Safety is the first casualty in most scenarios.

That aside, I sincerely hope that the first entity which succeeds in attaining
AGI utilizes it for the greater benefit of humanity. Anything else would be
petty.

------
nashashmi
I have recently been thinking about something similar that relates to this
concept.

In my office, I am trying to focus on automating as many of our tasks as
possible, such that we will be able to achieve end-to-end automation. But
there is a big atmosphere of negativity surrounding this effort. It mostly
comes from managers, who complain that this automation is leading to more
incorrect answers from their employees. They complain there is a "lack of
vigilance" in checking the accuracy of the answers computers give, and
therefore a level of stupidity that arises among their subordinates.

In response, I thought: our managers manage us and review our work, and
likewise we "manage", or should manage, computers and the answers they
provide. Just as managers know less than their employees (ideally), we could
know less about computing. Our relationship to computers could become similar
to a manager's relationship with his or her employees.

------
motters
Yes of course. I can control a calculator or a desktop computer. Both of those
are orders of magnitude smarter than me at doing calculations.

To understand why this is a silly question you need to unpack what is meant by
"smarter" and to think about the very complex human/machine culture which we
already exist within.

~~~
ars
No, a calculator is not smarter than you. It is merely faster.

Smarter is a qualitative difference, not a quantitative one. Among other things it
is the ability to think of new ideas that no one has ever thought of before.

~~~
shurcooL
More than just the ability to think of new ideas - the ability to act on them.

A calculator, no matter how smart, isn't very scary because it doesn't have
much in the way of output (unless it has wi-fi, in which case it can produce
output through the internet).

A robotic factory full of saws and robots, or some missile center, etc., is
scary even if it has the brains of a calculator because its output
capabilities are significant.

------
mirimir
Some of those developing AI technologies will undoubtedly want to supplement
their own capabilities. Given that, there will arguably never be computers
that are "smarter" than at least some humans. So this seems rather like a non-
question, except to the extent that such enhanced humans would themselves pose
a risk to humanity at large.

Also, if and when strong AI develops, only such enhanced humans will remain
players. Unenhanced humans will at best be pets. This is O. B. Hardison's
thesis in _Disappearing Through the Skylight: Culture and Technology in the
Twentieth Century_
(<http://www.amazon.com/Disappearing-Through-Skylight-Technology-Twentieth/dp/014011582X/>).

------
logn
If we're talking about AI that exists solely as software, I think we'll
control them like we do any other cyber threat. As far as robots go, I think
the super powers, mega corps, and villains will all have their own AI bots
that balance each other.

The worry for me is, what do humans do once everything is automated and the
automation itself is automated? It's the kind of crisis some people feel in
retiring. What's left to do? I think this problem is far enough away that by
then the transition will have been a long time coming and it won't seem that
scary.

------
IgorPartola
I honestly don't get the problem. Someone please explain to me why we can't
pull the plug on a misbehaving device. It is not about us being smarter than
computers or vice versa. It is about who is more capable. Are we capable of
shutting down a bad device, or is it able to harm us? Maybe let's not build
terminators or robocops and we will be OK.

~~~
randyrand
If the computer is internet-connected, it could exploit bugs to spread itself.
Or it could come up with an ultimatum to ensure it doesn't have its plug
pulled, for fear of WWIII - who knows. The thing is, if it's smarter than us,
it could come up with a lot of ways to trick us.

------
Camillo
Humans are already perfectly capable of controlling engineers that are smarter
than they are, so I'm going to say yes.

~~~
gear54rus
Are they really? Those engineers need something those 'humans' have (i.e.
money). I doubt you'll be able to make a similar offer to the machine.

~~~
signa11
> I doubt you'll be able to make similar offer to the machine.

well, money is required for humans to survive; something which machines need
to survive can be brokered as well, e.g. power?

~~~
gear54rus
The difference is that money has one central source (e.g. a government or
independent organization). Power can be procured from many different sources
and still be good (nuclear, heat, solar, you name it).

The bottom line is that we can't really cut off the supply of power to a
machine that is autonomous enough.

------
Udo
In some ways, pocket calculators are smarter than us. Intelligence is not a
scalar attribute, it's a collection of capabilities. At some point, we refer
to that collection as a person or an intelligent entity. Control becomes a
problem when we're talking about intelligent entities that can and do reason
about their own existence. It's a practical problem as far as powerful
intelligences are concerned, but it's also a moral one way before we reach
that point.

 _> A self-improving agent must reason about the behavior of its smarter
successors in abstract terms._

Of course, this describes primarily us at this point. We're self-improving
agents trying to reason about the behavior of our successors and we're pretty
much failing at it. The most popular solution seems to be that we should aim
for "control" and suppression, which is - when AGIs finally make an entrance -
essentially the same as slavery.

Apart from moral considerations, we should think about the long-term prospects
of this. Historically, slavery never worked out for anyone, at least not in
the long term. And the idea that we can even in principle enslave potentially
god-like intelligences seems ultimately futile; but before reaching the point
of inevitability we're apparently planning on having a few years of delusional
descent below the ethical red line.

Let's not do this.

First of all, as almost all AI and AGI researchers will tell you, a so-called
hard takeoff scenario seems unlikely given the current state of things. At the
pace and modality we're moving, we'll be creating powerful and destructive
hybrids first (also known as computer-aided mega corporations), long before
a self-contained AGI becomes viable.

Second, if we're already making plans to control the malicious uprising of our
tools, let's talk about realistic options instead. Because general caution and
laws won't help us at all in a (future) world where anyone can create an
illegal AGI in their garage.

Either we listen to Musk et al and take _serious_ steps to suppress this
technology in the long term - but let's not kid ourselves, this will mean DRM
and strict government/corporate control of ALL computing. This means we'll
artificially stagnate the development of our civilization in order to keep it
safe, with all the consequences that arise from this.

Or alternatively, we get to work towards a future where it's not "us" vs
"them", but a shared existence that moves us further along the path we have
started on back when humans first made tools. We can take an ethical as well
as a pragmatic stance and declare that we're not going to enslave AGIs, that
instead we're working on a shared future which potentially includes many forms
of intelligent life, and that we're pursuing the _option_ for individuals to
augment themselves with the same technology.

You might argue that co-existence and intermingling with AI sounds like a
hippie concept, but it's actually a somewhat proven method to prevent
conflicts and wars in the real world. Sharing and entanglement create peace
for everybody at the "price" of cultural exchange. We're already doing this in
political forms today, including trade, travel, and free information
exchange. It can work with AI, too, by creating shared stakes, shared ideas,
and ultimately a shared culture.

~~~
Padding
> Historically, slavery never worked out for anyone, at least not in the long
> term.

This is a rather contentious point I think. "We" wouldn't be the "developed"
part of the world had we not enslaved the rest of it.

> And the idea that we can even in principle enslave potentially god-like
> intelligences seems ultimately futile

You're assuming AIs will have a will of their own to begin with, which might
not be necessarily true. You can't enslave something that has no preference
towards whichever course reality takes.

Or to put it another way, since you're talking about god-likeness, what
difference does it make to you whether you're figuring out the optimal way to
route traffic through town or correcting the trajectories of missiles en-route
to kill millions? It's not like you'd have anything "better" to do, given that
you're all knowing, and it's not like either of those alternatives will have a
significant impact on the universe in the long run anyways.

~~~
Udo
 _> You can't enslave something that has no preference towards whichever course
reality takes._

I'm adamant that this is not true. For example, you can drug a person with
something that makes them not care about the course of reality, and which
makes them compliant with pretty much anything. Doing that is _still_ abuse,
it's not suddenly OK because the drugs caused the person not to care. I would
even argue it's an especially egregious form of abuse.

 _> what difference does it make to you whether you're figuring out the optimal
way to route traffic through town or correcting the trajectories of missiles
en-route to kill millions_

You could apply the same argument to humans, and indeed in many situations we
do consider these two to be equivalent - for example while doing service in
the military.

 _> It's not like you'd have anything "better" to do, given that you're all
knowing_

There's a difference between an intelligent individual and a mindless
calculator, and that difference is the ability to reflect on your own
existence and the existence of others. Humans are a good example, because
while we're _capable_ of mindless indifference, we also have the capability to
reflect and be ethical. It's culture that makes the difference here. I'm
advocating that we bring AGI up in a culture conducive to ethics.

~~~
Padding
> Doing that is still abuse

Based on what?

I'm not for drugging people and using them as zombies. And obviously drugging
people against their will is already forcing onto them something they don't
want. But, stripping away all human context such as dignity, culture, etc., and
assuming an absence of opposition from the person/entity in question, I can't
think of any actual reason why it would indeed be "wrong" to drug
someone/something into happiness.

There is no absolute measure for happiness, and many already seem to give up
some of their freedom in the range of feelings and desires they experience in
order to feel happier (with antidepressants) or be more successful (with
things like Ritalin).

Taking someone utterly unhappy with their life and putting them in some
matrix-like environment where they can both experience joy and still be useful
is, I think, an alright thing to do.

> the ability to reflect on your own existence and the existence of others

We exist.

You can't make any stronger claim than that without involving some form of
(arbitrary) value/belief system.

Why would an all-knowing entity bother with having a set of beliefs it values,
if there's no formal reason/need for them?

> I'm advocating we'll bring AGI up in a culture

Culture is a slippery slope.

There are so many different ones, and they conflict, creating the potential
for conflict and retaliation. Which is where we imperfect humans are at.

But culture is also arbitrary (as far as I can tell). Why would some
all-knowing entity prefer one culture over another? And if indeed it did, what
would that say about those other cultures? Would genocide all of a sudden be
acceptable? Slippery slope...

~~~
Udo
> _I can't think of any actual reason why it would indeed be "wrong" to drug
> someone/something into happiness_

You switched out the original premise on me there ;)

> _You can't make any stronger claim than that without involving some form of
> (arbitrary) value/belief system._

You can end any discussion by invoking this principle. This is the essence of
the incompleteness theorem applied to every day reasoning. Somewhere at the
bottom of every perspective, there are some arbitrary axioms. It's a way of
saying "_that's just your opinion, man_" and you'd be right, of course.

> _Why would some all-knowing entity prefer one culture over another?_

Why would a human? I suspect the word _culture_ might have different
connotations for either of us.

~~~
Padding
> Why would a human?

Because evolution graced (cursed?) us with a reward system and parents that
utilize (abuse?) it.

Having something capable of high-level reasoning, while free from the desires,
fears, moods, and other emotions humans suffer, is part of the reason we're
looking into AI, right?

> that's just your opinion, man

Maybe? I have 5 fingers on my hand - is that an opinion? Maybe it is, because
what's an opinion anyways? But who would dispute it?

> Somewhere at the bottom of every perspective, there are some arbitrary
> axioms

Well not quite. "Arbitrary" perhaps in a formal sense, since logic doesn't
care about specific universes but truths that hold in all of them. Yes, you
still end up having to settle for implicit definitions somewhere along the
line (what a finger is, what method you use to count them, etc.). But there
nevertheless is some difference between merely assuming something exists, and
assuming what _should_ exist.

Something that is all-knowing would be able to figure out the difference
between premises that indeed need to be true for our universe to exist (like
me needing to have 5 fingers right now), and those that humanity merely
believes or wants to be true (like it being good that I not use those fingers
to poke out someone's eyes).

------
lucozade
I don't really get the problem. Humans will do one of the two things they've
always done with something they didn't understand and couldn't control.
Destroy it or worship it.

------
1971genocide
The more interesting question is will humans let computers that are smarter
than us make all our decisions for us :)

~~~
joe_the_user
Humans are more intelligent than apes. Yet clearly there are no reasons for
humans to make all of an ape's decisions for it.

~~~
deciplex
Great apes in the wild are generally having a hard time getting along, with
humans crowding them out of a lot of their historical territory. I would hope
our species doesn't end up like that. Likewise, great apes in zoos
_definitely_ have all their important decisions made for them by humans. I
don't think I want this, either.

The straightforward answer to the FAI question for me has always been thus: we
need to enhance our own intelligence to be always equal to or greater than the
AIs we create.

------
diminoten
There's no inherent value in calling a computer "smarter" than a human,
because computers aren't "smart" like humans -- it implies a single or small
collection of metrics that measure intelligence, when in reality we can't even
reliably sort humans by intellect.

~~~
gareim
When we create a computer that can do everything a human can do but
better/faster, then we can call it smarter. I find it odd that you think just
because smartness is hard to quantify that there is no value in calling a
computer smart.

~~~
diminoten
It's not hard to quantify, it's impossible to quantify.

I don't think it's odd to find a word to be of little/no value when it can't
be applied precisely.

------
cmurf
No.

If they're smarter than us, we don't have to control them, but then also we
probably won't be able to either.

------
ars
I do not believe humans are able to _invent_ an AI smarter than us.

Nor do I believe that the AI itself would be able to do so.

~~~
jjoonathan
Why not? It isn't hard to build machines that are stronger than we are or
faster than we are. Even if there is some sort of strange mirror version of
Turing's thesis that prevents us from creating AIs more intelligent than we
are, inventing an AI that is equally intelligent but faster would accomplish
much the same thing.

Also, bootstrapping is highly effective in countless other contexts within
computer science, what do you think will prevent it from being applicable to
this problem?

~~~
ars
So far, every bit of intelligence we have made is all about brute force; none
of it is elegant. It works great - definitely. And I grant that given enough
speed, and search depth such a machine may come up with ideas.

And maybe that's what human level AI will end up being: Just search every
possibility till you find the one that works. For everything, including
speaking with a person.

But it just doesn't have that spark of true intelligence.

So I'll qualify my earlier statement: brute-force AI might happen. But AI
based on elegant intelligence will not.

Is there a difference? I think so. For some things it won't matter, but for
others it will. Somehow human level intelligence manages to cut through
immense search spaces and find the answer (like the game Go) - how does it do
that? No one knows. But there is clearly something different about it compared
to brute force intelligence.

~~~
jjoonathan
> brute force, none of it is elegant

Where does the human brain fall on that spectrum? It might take us a few more
decades to figure out the details of how it works, but I doubt we'll find
elegance so much as a ton of hyperparallel brute force plus a few dirty tricks
that evolution kludged together to prevent the whole thing from falling apart
(most of the time, anyway).

> But there is clearly something different about [elegant intelligence]
> compared to brute force intelligence.

Could you elaborate on what you mean by this distinction and how the human
brain does not constitute "brute force"?

~~~
ars
Look at the game of Go.

Humans can play it - but it's impossible for a human to brute-force it; there
is not enough computation available in the brain to do that (your
"hyperparallel brute force" idea).

Yet humans can play it anyway. There is something else there that makes it
possible.

And the human brain self trains! That's the really amazing part - no one has
to think of some clever optimization or algorithm to make it work like you
would if you wanted a computer to play "elegantly".
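To put rough numbers on this (the branching factor and game length here are commonly cited estimates, not exact figures), a back-of-the-envelope sketch:

```python
# Back-of-the-envelope estimate of Go's game tree, using assumed,
# commonly cited figures: ~250 legal moves per position, ~150 moves
# per typical game.
branching = 250
depth = 150

leaves = branching ** depth        # distinct lines of play
magnitude = len(str(leaves)) - 1   # order of magnitude (power of ten)

print(f"game tree size ≈ 10^{magnitude}")  # prints: game tree size ≈ 10^359

# Even a hypothetical machine evaluating 10^18 lines per second, running
# for the ~10^17 seconds the universe has existed, would cover only
# ~10^35 lines - nowhere near 10^359. Whatever the brain does, it isn't
# exhaustive search.
```

The exact numbers are assumptions, but any plausible values lead to the same conclusion: exhaustive search is off the table, for humans and machines alike.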

------
known
Humans can control "smart" computers.

------
eli_gottlieb
Yes, of course we will. The use of neural networks and reinforcement learning
just means the field is very young and it hasn't been worked out quite yet how
to specify _precisely_ what we want the agent to do.

