
Superintelligence cannot be contained: Lessons from Computability Theory - fforflo
https://arxiv.org/abs/1607.00913
======
argonaut
This seems like a very big, pretty unreasonable assumption: "Assuming that a
superintelligence will contain a program that includes all the programs that
can be executed by a universal Turing machine on input potentially as complex
as the state of the world"

This assumption essentially equates "superintelligence" with "god-like
intelligence" (god-like because it can compute anything about the world),
which is not what the first AGIs, even ones significantly smarter than
humans, would look like.

~~~
ccvannorman
I think the author has the unstated assumption that "humans will continue to
work on smarter and better machines until such a god-like machine is
eventually constructed, OR that machines will be created that can self-improve
until this state is reached." It doesn't seem unreasonable to me -- the
alternative is that at some point in the future, we _STOP_ making machines
smarter for good.

~~~
argonaut
There is no reason to believe it is even possible to create such a god-like
intelligence. Even harnessing the energy of the sun would not give you
enough energy to compute _anything about the universe_. We also have every
reason to believe that even if this were possible, it would only happen in the
far future (e.g. hundreds of years from now).

~~~
drdeca
Only hundreds?

That's not that many lifetimes.

~~~
argonaut
I was just pulling a number out of thin air. We're in scifi speculation
territory so the numbers are all made up.

------
hodgesrm
Fun post. There's some fluff along the way, but the central idea, that the
general function Harm(R, D) is equivalent to the halting problem, seems almost
trivially obvious once you ask the question. (So much for the laws of
robotics.)

My question is whether this approach even models the issue correctly. We
humans have problems just defining what terms like happiness, harm, and other
general concepts even mean. There's no reason to think that AIs will be any
better, whether assisted by us or through their own methods of discovery.

Many successful systems humans have developed for reducing harm take the
approach of building up rules for specific cases rather than trying to deduce
the cases and their solutions from general principles. Every legal system I
can think of works this way and they have by and large worked pretty well. The
successful ones have a method for adding to case law in a systematic way,
ideally without going to war in the process.

So maybe we should keep the lawyers around after all?

------
goldenkey
I especially like the story at the end:

> More terrible than either of these tales is the fable of the monkey’s
> paw[12], written by W. W. Jacobs, an English writer of the beginning of the
> [20th] century. A retired English working-man is sitting at his table with
> his wife and a friend, a returned British sergeant-major from India. The
> sergeant-major shows his hosts an amulet in the form of a dried, wizened
> monkey’s paw... [which has] the power of granting three wishes to each of
> three people... The last [wish of the first owner] was for death... His
> friend... wishes to test its powers. His first [wish] is for 200 pounds.
> Shortly thereafter there is a knock at the door, and an official of the
> company by which his son is employed enters the room. The father learns
> that his son has been killed in the machinery, but that the company...
> wishes to pay the father the sum of 200 pounds... The grief-stricken father
> makes his second wish -that his son may return- and when there is another
> knock at the door... something appears... the ghost of the son. The final
> wish is that the ghost should go away. In these stories the point is that
> the agencies of magic are literal-minded... The new agencies of the
> learning machine are also literal-minded. If we program a machine... and
> ask for victory and do not know what we mean by it, we shall find the ghost
> knocking at our door.

------
leblancfg
Let's throw in a time factor here, and change the way we define our machine:
"Assuming that a superintelligence [SI] will contain a program that _can
efficiently infer/deduce_ all the programs that can be executed by a
universal Turing machine on input potentially as complex as the state of the
world". This seems much more reasonable in terms of how thinking gets done.

We see links to the Halting Problem pop up at times when discussing compilers.
Compilers try to optimize code as well as possible, but of course finding the
truly optimal program is just the Halting Problem in disguise, so instead they
opt for pragmatic approximations. While none are perfect, some of them are
really, really good
([http://lemire.me/blog/2016/05/23/the-surprising-cleverness-o...](http://lemire.me/blog/2016/05/23/the-surprising-cleverness-of-modern-compilers/)).
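To make the compiler analogy concrete, here's a toy "sound but incomplete" checker of the kind static analyzers actually use: it answers "yes" only when termination is obvious and "unknown" otherwise. The function and its heuristic are my own illustration, not anything from the linked post.

```python
# Toy illustration of how static analyzers dodge the halting problem:
# a checker that is never wrong because it is allowed to say "unknown".

import ast

def obviously_terminates(source: str) -> str:
    """Return 'yes' if the code is straight-line (no loops, no function
    definitions, no calls), otherwise 'unknown'. Never guesses."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.While, ast.For, ast.FunctionDef, ast.Call)):
            return "unknown"      # give up rather than risk being wrong
    return "yes"

print(obviously_terminates("x = 1 + 2"))          # -> yes
print(obviously_terminates("while True: pass"))   # -> unknown
```

Real analyzers use far cleverer approximations than this, but the shape is the same: decidable over-approximation instead of an impossible exact answer.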

Now this is just an analogy, but I think the parallel can be drawn back to the
SI scenario. We might not have SIs that are _perfect_ at never causing harm,
but this does not preclude them from being really, really good at it. Because
if you accept the definition above, you're also implying that the SI will
efficiently infer/deduce its harm function.

------
gone35
Cute cartoons, but the entire argument rests on the _insane_ assumption that
"a superintelligent machine could simulate the behavior of an arbitrary Turing
machine on arbitrary input" (pp.4-5) (!).

~~~
dozzie
...thus being itself Turing-complete? Why is this unreasonable?

~~~
gone35
Because of the (implicit) assumption that it runs _efficiently_ (as in, at
most in polynomial time) _even_ on "arbitrarily complex" inputs... Otherwise
the "superintelligence" would not be any different from your run-of-the-mill
Turing-complete interpreter!

~~~
dozzie
> Because of the (implicit) assumption that it runs efficiently (as in, at
> most in polynomial time) even on "arbitrarily complex" inputs...

Uhm... I have no idea where you got this assumption, because it was not
expressed in the sentence we're talking about. Even a Turing machine doesn't
run in polynomial time on arbitrarily complex input with an arbitrary program.

The super AI doesn't need to surpass the Turing machine, it only needs to
efficiently _emulate_ it, i.e. _interpret_ its program. If the TM program is
inefficient and runs in exponential time, the super AI may run in exponential
time as well.
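A minimal interpreter makes the point concrete: the host does a constant amount of work per simulated step, so emulation inherits the emulated program's running time rather than improving on it. The machine encoding below is just an illustrative choice, not any standard one.

```python
# Minimal Turing-machine interpreter: constant work per simulated step,
# so emulating an exponential-time TM program takes exponential time too.
# "Emulation" implies no speedup.

from collections import defaultdict

def run_tm(rules, tape, state="start", max_steps=10_000):
    """rules: (state, symbol) -> (new_state, new_symbol, move in {-1, 0, +1}).
    Returns (final_tape_dict, steps_taken); '_' is the blank symbol."""
    cells = defaultdict(lambda: "_", enumerate(tape))
    head, steps = 0, 0
    while state != "halt":
        if steps >= max_steps:
            raise RuntimeError("did not halt within max_steps")
        state, cells[head], move = rules[(state, cells[head])]
        head += move
        steps += 1
    return cells, steps

# A tiny machine that flips bits until it runs off the input, then halts.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
tape, steps = run_tm(flip, "0110")
print("".join(tape[i] for i in range(4)), steps)  # -> 1001 5
```

The interpreter's running time is proportional to `steps`, whatever the emulated program's complexity happens to be.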

> Otherwise the "superintelligence" would not be any different from your run-
> of-the-mill Turing-complete interpreter!

Oh. You mean that my mind is no different from my computer? Because I _can_
run a Turing machine (with a little help from external memory, like paper).
That's something new.

------
AndrewKemendo
Thank you! I've been saying as much for years, albeit with fewer proofs and a
more paternal bent.

