
On the Impossibility of Supersized Machines - gene-h
https://arxiv.org/abs/1703.10987
======
rntz
Summary: This is a parody of various arguments about superintelligent AI.

It opens by parodying the classic futurist parable of history as exponential
growth in intellect and technology, recasting history as a tale of growth in
size (after all, the earliest organisms were tiny). It recasts fears about
superintelligent AI as fears about super-large beings (wouldn't you be afraid
of something that could step on you as easily as you step on an ant?).

Then it switches to parodying some arguments _against_ the possibility of
superintelligence, in order:

1. That intelligence is "irreducibly complex"; since we don't understand what
makes people smart, we can't expect to make smarter machines. This, to me, is
the weakest of the parodies in the article. We definitely understand size
better than intelligence, and lack of fundamental understanding has absolutely
impeded our ability to engineer things in the past. To get to the Moon, we had
to understand gravitation at a much deeper level than "things fall down", or
even than "things fall in parabolas". That said, total understanding of
natural phenomena is not always necessary to replicate them: the Wright
brothers didn't need to understand exactly how bird flight worked to build the
first plane.

2. That intelligence is poorly defined ("human-level" AI is "meaningless").
"Largeness" is similarly vague, but an Airbus is still obviously bigger than a
human. Vague goals can still be met.

3. That human intelligence is in some way "universal". I don't understand
this argument enough to explain the parody. Can someone else chime in?

4. Psychological arguments against superintelligence, which say that the
origin of belief in and fear of superintelligent machines is really
evolutionary, not logical. This is a classic non-sequitur; if evolution
predisposes humans to be afraid of wolves, it does not follow that humans have
no reason to be afraid of wolves.

5. That the goal of AI research is to augment human intelligence, not
supplant it. Another non-sequitur.

6. Arguments from the hard problem of consciousness, and philosophical
questions like "can a machine ever be conscious?". Yet another non-sequitur:
even if you think a machine fundamentally can't be conscious, it doesn't
follow that it can't be functionally superintelligent.

7. Bullshit arguments involving quantum mechanics, Gödelian incompleteness,
and other things with pop-cultural cachet that most people who invoke them
don't have the technical chops to understand.

~~~
myhf
> 3. That human intelligence is in some way "universal". I don't understand
> this argument enough to explain the parody. Can someone else chime in?

This is a reference to the principle of computational equivalence. All Turing-
complete systems are capable of emulating each other. A human can execute any
program a machine can (only slower), and derive benefit from the results even
if he doesn't fully understand the process.

[http://mathworld.wolfram.com/PrincipleofComputationalEquivalence.html](http://mathworld.wolfram.com/PrincipleofComputationalEquivalence.html)
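To make the "a human can execute any program" point concrete: executing a program requires nothing more than mechanically following a rule table, step by step, which a human with pencil and paper can do (only slower). Here is a minimal sketch in Python; the function name and the binary-increment rule table are my own illustrations, not from the article:

```python
# A person with paper could follow this loop by hand: look up the current
# (state, symbol) pair in the table, write a symbol, move the head, repeat.
def run_turing_machine(rules, tape, state="start", pos=0, max_steps=1000):
    """Follow a Turing-machine rule table until reaching the 'halt' state."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")  # "_" is the blank symbol
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rule table for incrementing a binary number:
# (state, symbol) -> (symbol to write, head move, next state)
rules = {
    ("start", "0"): ("0", "R", "start"),  # scan right to the end
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),  # past the end: back up
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),   # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),   # overflow: new leading 1
}

print(run_turing_machine(rules, "1011"))  # 1011 + 1 = 1100
```

Nothing in the loop requires understanding *what* the rule table computes, which is the sense in which a rule-follower can "derive benefit from the results even if he doesn't fully understand the process".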

~~~
rntz
I'm familiar with Turing-equivalence, but the argument in that section didn't
seem analogous to me. In the first place, it's about _human_ universality, not
_computer_ universality; so an implicit premise of the "parodied argument"
would have to be something like "human intelligence is Turing-complete";
I've seen no argument against machine superintelligence that starts from this
premise (although not all arguments depend on rejecting it, either).

~~~
johncolanduoni
Rejecting the claim that human intelligence is Turing-complete in the same
sense as a particular conventional computer (i.e. as a linear bounded
automaton) would be a pretty hard sell. It would be equivalent to claiming
that there was a
single-threaded C program that a human with unbounded time and scratch paper
couldn't step through.

~~~
scandox
Why does the threading matter?

~~~
johncolanduoni
It doesn't, but I think it's an easier sell that humans can do that for a
single-threaded program than for a multithreaded one, especially because
multithreading introduces a lot more undefined behavior.

~~~
scandox
Seems to me undefined behaviour is the beginning of something approximately
human.

------
tgb
I've always enjoyed this similar satire:
[http://dresdencodak.com/2009/05/15/a-thinking-apes-critique-of-trans-simianism-repost/](http://dresdencodak.com/2009/05/15/a-thinking-apes-critique-of-trans-simianism-repost/)

------
falcolas
Sure, it's a parody, but let's take a few moments to take it at face value.

For every machine, of any size, a lot of effort goes into its creation, and
none survives without external intervention. The larger the machine, the more
effort goes into its creation and maintenance (and the more vulnerable it is
to minor failures).

Aircraft, as a singular example, have to be virtually torn apart, inspected,
and repaired every couple of hundred hours to ensure they won't fall apart
during regular use. They are also susceptible to the loss of a single critical
piece, which will cause them to fail, often spectacularly.

Big bridges require almost daily maintenance to keep them safe. Boats are
incapable of the simple act of fueling themselves. Computers stop working
properly due to cosmic radiation flipping bits. Knives get dull. Batteries run
out. Radioactive isotope power generators decay. Plastic breaks down. Stone
erodes.

We have, despite hundreds of thousands of years, barely been able to create an
object which can last for even a couple of hundred years without constant
maintenance. We have yet to create a machine of any size which is capable of
maintaining itself. Why would a computer program, itself reliant on one of our
least reliable and fragile constructs, prove to be any different?

------
eli_gottlieb
Ah, yes, I had of course forgotten the Hard Problem of Largeness, which shows
that bigness must necessarily be a non-physical property. How silly of me to
ignore philosophy!

------
Keyframe
I thought it was a play on Kafka's quote:

 _"The crows assert that a single crow could destroy the heavens. This is
certainly true, but it proves nothing against the heavens, because heaven
means precisely: the impossibility of crows"_

I'm still not sure I am wrong.

------
MayeulC
On a more serious note, I am currently reading the paper (from 1999) that
surfaced here two weeks ago:
[https://news.ycombinator.com/item?id=13920714](https://news.ycombinator.com/item?id=13920714)

The topic is somewhat similar, and the paper is interesting so far, but I am
reserving my opinion until I have finished reading it.

------
simonh
>6. The Hard Problem of Largeness

Take that, Dave Chalmers!

------
mirimir
OK, so I thought that this would be about Dyson spheres etc.

------
jlebrech
I'm sure an AS400 is bigger than a human.

~~~
throwaway729
Check out fig 1 (page 4).

~~~
dogma1138
Better check the date :)

------
basicplus2
Pretty sure a D9 bulldozer is bigger than a human, and an A380 and a tunnel
boring machine and a steam engine and a crane, and the truck that picks up my
rubbish and....

~~~
soVeryTired
It's satirical, parodying arguments against superhuman machine intelligence.
IMO it's also not as clever as it thinks it is.

~~~
RobertoG
I think it's very clever. Even though I think the sarcasm of the paper
expresses the message perfectly, after reading some of the comments here, I'm
going to spell it out.

It's telling us: we live in a material universe, and intelligence is just a
function of physical attributes. If you want to deny the possibility of
machine intelligence, the burden of proof is on you, because denying it would
be as ridiculous as denying the possibility of supersized machines.

