
The impossibility of intelligence explosion - _ntka
https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec
======
mannykannot
This article is typical of many that claim proven limits on the feasibility
(or, in this case, the capabilities) of generalized artificial intelligence,
in that it structures the argument in a way designed to avoid discussion of
the issue.

It starts by claiming that there is no such thing as general intelligence.
What specialized intelligence, then, is human intelligence? It's specialized
for "being human". The author is apparently unaware that this tautological
response eliminates the distinction between general and specialized
intelligence, as one could just as validly (or vacuously) say that a
superhuman intelligence is specialized in being what it is and doing what it
does. The author has invalidated the hook on which he had hung his argument.

A lot of column-inches are expended on repeatedly restating that animal
intelligences have co-evolved with their sensorimotor systems, which is a
contingent fact of history, not a fundamental necessity for intelligence (as
far as we know; but then the whole article is predicated on the feasibility of
AI). He raises the 'brain in a vat' trope, but no one is suggesting that AIs
must be disconnected from the external world. Furthermore, this line of
argument ignores the fact that many of the greatest achievements of human
intelligence have come from contemplating abstract ideas.

When the author writes "most of our intelligence is not in our brain, it is
externalized as our civilization", he is confusing the achievements of
intelligent agents for intelligence itself. When he writes that "an individual
brain cannot implement recursive intelligence augmentation" he is confusing a
limit on human capabilities for a fundamental limit on intelligence itself...

I am far from convinced that the singularity must follow from achieving human-
level artificial intelligence, as we don't know how to get to the starting
line, let alone know how the problem of bootstrapping intelligence scales, but
the arguments presented here do nothing to persuade me that it is impossible.

~~~
visarga
Your reply gives me the impression that you are not up to date with
Reinforcement Learning. If you were, you would know that the author really
understands this domain and was not merely being tautological.

"Specialized at being human" \- this is a deep intuition. We are reinforcement
learning agents that are pre-programed with a certain number of reward
responses. We learn from rewards to keep ourselves alive, to find food,
company and make babies. It's all a self reinforcing loop, where intelligence
has the role of keeping the body alive, and the body has the role of
expressing that intelligence. We're really specialized in keeping human bodies
alive and making more human bodies, in our present environment.
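
To make that framing concrete, here is the bare RL skeleton it describes, as a
toy (the reward table is a stand-in for our biological drives; all numbers are
illustrative):

    import random

    REWARDS = {"food": 1.0, "company": 0.5, "injury": -1.0}  # "pre-programmed"

    value = {a: 0.0 for a in REWARDS}  # learned preferences, initially flat
    for step in range(1000):
        # epsilon-greedy: mostly repeat whatever has paid off so far
        if random.random() < 0.1:
            action = random.choice(list(REWARDS))
        else:
            action = max(value, key=value.get)
        reward = REWARDS[action] + random.gauss(0, 0.1)  # noisy environment
        value[action] += 0.05 * (reward - value[action])  # running-average update
    print(value)  # the agent ends up "specialized" in its reward landscape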

The author puts a hard limit on intelligence because intelligence is limited
by the complexity of the problems it needs to solve (assuming it has
sufficient abilities). So the environment is the bottleneck. In that case, an
AGI would be like an intelligent human, a little bit better than the rest, not
millions of times better.

~~~
alfrednachos
What are humans specialized in doing? Because it seems to me that humans are
pretty good at chess, calculus, social manipulation, flying to the moon,
building machines that take us to the bottom of the ocean, discovering
fundamental physics, etc. A fish, no matter what environment and upbringing
you give it, can't do any of those things. So it seems like there's some
dimension in which the human brain is more generally intelligent than a
fish's.

~~~
doxos
That dimension is still on a thin film around a little ball floating in one of
a great number of possible universes within a great number of possible rule
systems. Compared to that space, we are quite similar to fish, in terms of the
purposes for which our machinery functions.

But the question is not, "is intelligence explosion possible?" The question
is, "explode into what?"

------
tigershark
I'm not sure that I've ever read a piece written with so much certainty and
arrogance about a field that is completely unexplored. Just as an example:

"The basic premise of intelligence explosion — that a “seed AI” will arise,
with greater-than-human problem solving ability, leading to a sudden,
recursive, runaway intelligence improvement loop — is false."

From the little that we know as of today, I call bullshit. Even AlphaGo, which
is arguably quite a primordial AI, managed to achieve superhuman performance
in a ridiculously short amount of time, just by playing against itself. And it
simply crushed the collective effort of the human players who honed their
strategies for literally millennia in what is considered one of the most
difficult games. I don't think the author has any insight at all into what a
general AI will be.

~~~
Retric
Games have a clear end goal. How do you measure getting better at ethics?

~~~
lodi
Maximize human population divided by the time integral of all human suffering
(taken from now to the heat death of the universe.)
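
Spelled out as a toy objective (my notation: N(T) is the human population at
the heat death T, S(t) is aggregate suffering at time t, starting from now,
t_0):

    \max \; U = \frac{N(T)}{\int_{t_0}^{T} S(t)\, dt}

Note that U rewards growing the numerator and shrinking the denominator
independently.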

~~~
Strilanc
Congratulations, your AI wants to convert all matter in the universe into
lobotomized humans.

~~~
lodi
That's suboptimal; a fully realized human would suffer less than a lobotomized
human.

Also, we don't need an AI that's ethically perfect, just one equal to or
better than an average human.

------
titzer
> If intelligence is fundamentally linked to specific sensorimotor modalities,
> a specific environment, a specific upbringing, and a specific problem to
> solve, then you cannot hope to arbitrarily increase the intelligence of an
> agent merely by tuning its brain — no more than you can increase the
> throughput of a factory line by speeding up the conveyor belt.

The latter part makes no sense at all. Of course you can increase the
throughput of a factory by speeding up "the conveyor belt" -- a stand-in for
the complex processes going into manufacturing.

The whole statement is also wrong. Of course you can increase intelligence by
optimizing the process of learning. Fewer trials, quicker reactions, faster
construction of models, more complex understanding of fundamentals of a given
problem.

The author makes broad assertions like this, full of glaring holes, with zero
evidence.

~~~
10-6
"The whole statement is also wrong. Of course you can increase intelligence by
optimizing the process of learning."

One of the author's main points is that "intelligence" in the way you
mentioned, learning and optimization, is simply one aspect of the human mind.
So we could optimize our minds (or an AI program) to beat anyone at the game
of Go and play perfectly, but that's all it is optimized to do. It can
understand "the given problem" but there is FAR MORE to our minds than
optimizing and learning how to solve a task.

Proponents of "build a general AI system that will surpass humans at
everything" don't seem to understand this.

~~~
mrec
I'm suddenly having flashbacks to reading _On the Impossibility of Supersized
Machines_ [1].

[1]
[https://arxiv.org/pdf/1703.10987.pdf](https://arxiv.org/pdf/1703.10987.pdf)

~~~
EthanHeilman
"A further reason why it is senseless to speak of machines that are larger
than people is that humans already possess the property of universal
largeness.

By this, we mean that humans are capable of augmenting their bodies or coming
together to become indefinitely large, no matter the metric chosen. If a human
would like to be taller, they can stand on a chair or climb onto another
human’s shoulders. If they would like to be wider, they can begin consuming a
high-calorie diet or simply put on a thick sweater (Hensrud, 2004; Figure 1)."

------
apk-d
> The intelligence of a human is specialized in the problem of being human.

And then you have people able to express and derive complex theoretical
relationships by manipulating mathematical symbols, to automate processes by
programming machines, or to utilize their spatial visualization to
massively boost their memory. We _invented_ that stuff. We know artificial
intelligence is capable of inventing stuff because we are.

The author seems hilariously unaware of the fact that humans can learn to
solve/optimize many arbitrary problems. Like, holy shit, have you ever played
video games?

> In practice, geniuses with exceptional cognitive abilities usually live
> overwhelmingly banal lives, and very few of them accomplish anything of
> note. Of the people who have attempted to take over the world, hardly any
> seem to have had an exceptional intelligence.

The author conflates intelligence and purpose. Being smart doesn't imply the
motivation to accomplish grand things. In fact, we're biologically hardwired
to enjoy entirely mundane things: food, sex, love, relaxation, conversation.

> A single human brain, on its own, is not capable of designing a greater
> intelligence than itself. (...) Will the superhuman AIs of the future,
> developed collectively over centuries, have the capability to develop AI
> greater than themselves? No, no more than any of us can.

Human brains don't scale vertically (or otherwise). Meanwhile, once you have a
satisfactory facial recognition algorithm, you can run it to recognize faces
in a video at 1000x realtime speed, or in 1000 simultaneous realtime streams.
The AI doesn't even have to be smarter than human! 1000 dumb humans
communicating at gigabit speeds and solving problems while you're wondering
what to have for lunch is a force to be reckoned with.
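
A minimal sketch of that horizontal-scaling point (recognize_faces is a stub
standing in for any trained model; the only claim is that identical copies fan
out trivially across streams):

    from concurrent.futures import ProcessPoolExecutor

    def recognize_faces(frame):
        # stub standing in for a real model's inference call
        return [hash(frame) % 7]  # pretend: list of detected faces

    def process_stream(stream_id, n_frames=1000):
        # each worker runs its own identical copy of the "model"
        return sum(len(recognize_faces((stream_id, f))) for f in range(n_frames))

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            # 1000 "simultaneous" streams, same algorithm, zero retraining
            totals = list(pool.map(process_stream, range(1000)))
        print(sum(totals), "detections across all streams")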

~~~
visarga
> 1000 dumb humans communicating at gigabit speeds and solving problems while
> you're wondering what to have for lunch is a force to be reckoned with.

And if you can solve the communication and group-learning problems, you get
the equivalent of a Nobel prize for AI, whatever that might be.

Many things seem trivial until you try to implement them in reality.

------
10-6
The author makes several points about AI/intelligence and recursive systems
that I think a lot of people who think "AI will take over the world and
replace humans" [1][2][3] don't understand.

He argues that general intelligence by itself is not really something that
exists; our brains exist within a broader system (environment, culture,
bodies, etc.), which is something you can read more about in embodied
cognition: [https://blogs.scientificamerican.com/guest-blog/a-brief-
guid...](https://blogs.scientificamerican.com/guest-blog/a-brief-guide-to-
embodied-cognition-why-you-are-not-your-brain/)

He also argues that the public debate about "AI regulation" is misleading,
because it's impossible for a "seed AI" to start designing smarter AIs that
will surpass the intelligence of humans, which is what a lot of people today
think will happen with AI. Automation of jobs and tasks is very real, but
completely replacing humans and potentially destroying us all is a joke, and
only people who know nothing about AI/brains think this.

[1] [https://www.vanityfair.com/news/2017/03/elon-musk-billion-
do...](https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-
crusade-to-stop-ai-space-x)

[2] [https://www.cnbc.com/2017/07/17/elon-musk-robots-will-be-
abl...](https://www.cnbc.com/2017/07/17/elon-musk-robots-will-be-able-to-do-
everything-better-than-us.html)

[3] [https://www.npr.org/sections/thetwo-
way/2017/07/17/537686649...](https://www.npr.org/sections/thetwo-
way/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-
poses-existential-risk)

~~~
naasking
> Automation of jobs and tasks is very real, but completely replacing humans
> and potentially destroying us all is a joke, and only people who know
> nothing about AI/brains think this.

Don't be absurd. Brains are simply biochemical computers. The idea of
electrical or other types of computers superseding us is perfectly realizable.
The only real debate is over the timeline.

~~~
amag
Superseding us in what way? The machine is already better at chess and Go,
clearly _not_ at classifying images[1], but the issue here is that the author
argues that there _is no such thing_ as a general intelligence. It's all
situational.

On a separate note, does it even matter? Some people argue it's important so
that we can put AI safe-guards in place, to protect mankind. But either:

A) Intelligence explosion _won't_ happen and those safe-guards are not
needed.

B) Intelligence explosion _happens_ and the supersmart AI will easily find its
way around feeble safe-guards put in place by the inferior humans.

While the end results are not the same, in either case the safe-guards are
useless, though Asimov sure made a good story about them.

[1] [https://arxiv.org/abs/1710.08864](https://arxiv.org/abs/1710.08864)

~~~
naasking
A superintelligence can't circumvent mathematical proof, so your scenarios are
not exhaustive and safeguards are not intrinsically useless.

> but the issue here is that the author argues that there is no such thing as
> a general intelligence. It's all situational

An intelligence that can adapt to changing situations seems pretty general to
me.

------
web007
Please note: The author (and submitter) is the author of Keras, so his views
on AI / DL are not completely unfounded.

The concept of bottlenecks preventing the singularity is a fair point, but I
don't believe many of the arguments that are made here. Using humanity as the
basis of comparison is not sufficient, since human life expectancy and other
requirements for survival are not a factor in artificial superintelligence.

Humans work for perhaps 60 years at improving themselves and their
understanding of the world, but must sleep, must eat, must go and stand in
line at the DMV, etc. Having a system that can work 24/7 on improving itself
for 20 years will match a human lifetime. After a lifetime, humans start over
from scratch, with perhaps some learning that can be passed down between
generations. AGI can simply clone itself, and begin from exactly where it left
off.

~~~
munificent
> Using humanity as the basis of comparison is not sufficient, since human
> life expectancy and other requirements for survival are not a factor in
> artificial superintelligence.

Doesn't electrically-driven silicon hardware also have creation and operating
costs? Any AI will theoretically need to spend electricity convincing its human
masters to give it even more electricity. Even if it can use its robot army to
build wind farms or whatever, that's time it's spending marshaling its robot
army and futzing around with aerodynamics instead of contemplating the cosmos.

Its hard drives will expire and need to be replaced. Sure, it could solve that
by always having backups of everything it knows, but who is to say that purely
lossless backups are actually a more optimal solution to the hardware decay
problem than our human lossy strategy?

> AGI can simply clone itself, and begin from exactly where it left off.

Cloning isn't free.

------
andrewljohnson
This article draws some false parallels.

1. He argues that intelligence is situational, but doesn't address clones.
What happens if AI can clone Einstein, with all of his situational knowledge?
What if AI can clone Einstein and all of his colleagues, 1M times, creating 1M
parallel Princetons?

Similarly, he says most of our intelligence is in our civilization, but what's
to stop us from cloning big chunks of our civilizations... simulating them,
but using vastly less power/resources for each simulation? Then writing
software to have them pass knowledge among the civilizations? We have only
around two hundred countries; what if we had a trillion communicating at the
speed of computer circuitry?

And he says an individual brain cannot recursively improve itself... so again,
what about a group of brains, set up in a simulated world where they don't
even know we exist?

2. He cites the growth in the number of computer programmers as a reason not
to fear an explosion of AI computer programmers. His argument goes: "we have a
lot more now, yet it has not caused exponential changes in software."

But there is a difference between going from 0->1M programmers and going from
1M->100T programmers.

3. He writes that recursively self-improving systems already exist and haven't
destroyed us (military, science), but many people believe these things will in
fact destroy us before we get off this rock.

The overall flaw is thinking we can interpret the AI-assisted future given the
context of current society's linear achievements, when in fact exponential
effects look linear in small timeframes, and we have only been really thinking
and expanding science for a few thousand years.

If someone wants to present an argument against the AI explosion, I'd believe
it if it were premised on some sort of physical bottleneck... like how much
energy it would take to run a human-level AI. I don't think I can ever accept
a philosophical argument like this one.

All that said, I think we're far away from being able to engineer AIs that
will outthink our civilization and take us over. Better to worry about other
exponential or non-differentiable terrors like runaway greenhouse effects and
military buildup.

~~~
pbkhrv
> but what's to stop us from cloning big chunks of our civilizations...
> simulating them, but using vastly less power/resources for each simulation?
> Then writing software to have them pass knowledge among the civilizations?
> We have just a few hundred countries, what if we had a trillion
> communicating at the speed of computer circuitry?

The principle of computational irreducibility [1] is what will stop us from
"cloning" civilizations. That and chaos theory - any tiny deviation in initial
conditions of such a simulation or cloning process could produce unusable
results.
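
You can see the problem with a toy chaotic system; no civilization simulator
needed. Two logistic-map runs (r = 4 is a textbook chaotic regime) that start
one part in 10^12 apart decorrelate within a few dozen steps:

    # logistic map x' = r * x * (1 - x), fully chaotic at r = 4
    r = 4.0
    x, y = 0.3, 0.3 + 1e-12  # "identical" initial conditions, almost
    for step in range(1, 61):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
    # the gap grows exponentially until the two runs share nothing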

"simulating them, but using vastly less power/resources" is a pipe dream.

[1]
[http://mathworld.wolfram.com/ComputationalIrreducibility.htm...](http://mathworld.wolfram.com/ComputationalIrreducibility.html)

------
exratione
This manages to get lost in its own trees. From a reductionist perspective:

- Intelligence greater than human is possible

- Intelligence is the operation of a machine; it can be reverse engineered

- Intelligence can be built

- Better intelligences will be better at improving the state of the art in
building better, more cost-effective intelligences

Intelligence explosion on some timescale will result the moment you can
emulate a human brain, coupled with a continued increase in processing power
per unit cost. Massive parallelism to start with, followed by some process to
produce smarter intelligences.

All arguments against this sound somewhat silly, as they have to refute one of
the hard-to-refute points above. Do we live in a universe in which, somehow,
we can't emulate humans in silico, or we can't advance any further towards the
limits of computation, or N intelligences of capacity X when it comes to
building better intelligences cannot reverse engineer and tinker themselves to
build an intelligence of capacity X+1 at building better intelligences? All of
these seem pretty unlikely on the face of it.

~~~
kfk
I think the point he is trying to make is that there are boundaries to
intelligence. I think of it this way - no matter how smart an AI is, it still
would take 4.3 years to reach Alpha Centauri going _at the speed of light_. An
AI still needs to run experiments, collect evidence, conjure hypotheses, reach
consensus, etc. Is this really that much more efficient than what humans do
today?

~~~
btilly
_Is this really that far more efficient than what humans do today?_

A major managerial problem with humans is sorting out our irrational emotional
biases and keeping everyone working on something resembling the appointed
task. Can you imagine the productivity gain if that problem suddenly went
away?

~~~
zentiggr
So emotions are a problem to be solved and moved past...

I don't want your world. Dystopias suck.

~~~
btilly
I don't want that world either. But achieving it is a wet dream for a
business.

The challenge that Elon Musk et al are warning us about is what role will
humans have after we achieve this dystopia.

~~~
AnimalMuppet
> But achieving it is a wet dream for a business.

I don't know about that. Businesses depend on manipulating our emotions in
order to get us to buy stuff.

~~~
btilly
You can successfully manipulate emotions without having any of your own.
Recommendation engines are getting really good at it.

------
nmeofthestate
"We understand flight - we can observe birds in nature, to see how flight
works. The notion that aircraft capable of supersonic speeds are possible is
fanciful."

~~~
lodi
My thoughts exactly. The author is concluding that since we've never seen
supersonic wing-flapping in nature, we'll never see supersonic flight. Not
only is that an unwarranted implication, in a mathematical sense, but it also
doesn't cover the possibility of inventing something fundamentally _different_
from what we've observed in the universe thus far.

Using human and octopus examples as proof of anything ignores that general
AI will be fundamentally different from anything that we've observed on earth,
which invalidates any attempt to extrapolate from history. There has
never been an intelligence that could copy/paste itself intact with its
existing memories. There has never been an intelligence that could literally
span the whole world with a single consciousness. And so on.

> There is no such thing as “general” intelligence. The intelligence of a
> human is specialized in the problem of being human.

That's exactly what people are worried about: the AI specializing itself
to be better at being a human than actual humans are. It doesn't matter if it's
totally general and can be applied to any problem whatsoever; human
intelligence is also just "general enough".

> Will the superhuman AIs of the future, developed collectively over
> centuries, have the capability to develop AI greater than themselves? No, no
> more than any of us can.

Okay, so instead of single AIs developing next-gen AIs, it'll be an "AI
civilization" developing next-gen AIs. This still poses exactly the same
existential risk to humanity.

> We didn’t make greater progress in physics over the 1950–2000 period than we
> did over 1900–1950 — we did, arguably, about as well.

Well, I certainly know we made exponentially more progress in those years than
in the years 900-950. Never mind, say, 100,000-5,000 BCE.

------
titzer
> A high-potential human 10,000 years ago would have been raised in a low-
> complexity environment, likely speaking a single language with fewer than
> 5,000 words, would never have been taught to read or write, would have been
> exposed to a limited amount of knowledge and to few cognitive challenges.
> The situation is a bit better for most contemporary humans, but there is no
> indication that our environmental opportunities currently outpace our
> cognitive potential.

More crap. Our environmental opportunities don't outpace our cognitive
potential? What internet is the author connected to? The author even goes on
in detail in the very next paragraph about the possibilities around us. E.g.
there are literally thousands, perhaps millions, of hours of instructional
videos on YouTube, just for basic skills. Every field of human endeavor has
uploaded the digital artifacts of its best minds in one form or another to a
single global network of unimaginable, fractal complexity. You don't think
you can saturate your cognitive potential in your lifetime given this
resource?

~~~
jessaustin
If we imagine an Einstein of the 23rd century, whose great discoveries will
enable humanity to cross the universe in a moment while expending the energies
of many stars, and then we imagine that person educated in a USA public school
in 2017, can we imagine her making those same discoveries? Of course not,
which is most of what TFA is saying here. Perhaps "general AI" will increase
the rate of improvement, but even the most gifted AI will be limited by the
context in which it operates.

------
pmoriarty
_" An individual brain cannot implement recursive intelligence augmentation.
An overwhelming amount of evidence points to this simple fact: a single human
brain, on its own, is not capable of designing a greater intelligence than
itself. This is a purely empirical statement: out of billions of human brains
that have come and gone, none has done so. Clearly, the intelligence of a
single human, over a single lifetime, cannot design intelligence, or else,
over billions of trials, it would have already occurred."_

The same argument could have been made about human flight (or any other invention).
Billions of human beings over millions of years had come and gone, and yet
none had been able to fly. Until they did.

There's also a point to be made about humans not having adequate tools for
introspection or self-modification. We can not simply look in our brains/minds
and read our source code, nor easily tweak it to see what would happen without
great risks to our lives. A computer could. Furthermore, a computer could
potentially run billions or trillions of such experiments in the course of a
single human lifetime.

 _" no human, nor any intelligent entity that we know of, has ever developed
anything smarter than itself."_

What about humans simply having smarter children? In many ways, humanity's
creation of AI could be seen as analogous to giving birth to a child that's
smarter than its parents.

 _" Beyond contextual hard limits, even if one part of a system has the
ability to recursively self-improve, other parts of the system will inevitably
start acting as bottlenecks. Antagonistic processes will arise in response to
recursive self-improvement and squash it ... a brain with smarter parts will
have more trouble coordinating them ... Exponential progress, meet exponential
friction."_

The problem is that the author is looking at human-level problems with
human-level intelligence. A superintelligence may have no such problems, or be able to
easily resolve them. Once over the initial speed bump of creating such an
intelligence it could be smooth sailing from then on. There's no way to tell,
really, what its limits will be until it exists. Besides, even if it can't
exponentially increase its own intelligence forever, even a relatively slight
increase of intelligence over humanity could be a massive gamechanger.

~~~
breuleux
> We can not simply look in our brains/minds and read our source code, nor
> easily tweak it to see what would happen without great risks to our lives. A
> computer could.

That is not a given. The neural network type of AI, which is so far the most
promising avenue, is about as opaque as a human brain. It is far from obvious
that even a superintelligent AI could understand what its own brain does, let
alone modify it in a way that has positive implications for itself.

Hell, it's not even a given the AI could _read_ its "source code" either. We
often say we shouldn't anthropomorphize AI, but we shouldn't
how-current-computers-work-ize it either. Future AI might not actually run on computers
that have a CPU, a GPU and RAM that can be read freely. The ability to read or
copy source code isn't cost-less, and several considerations may lead AI
hardware designers to axe it. One consideration would be leveraging analog
processes, which might be orders of magnitude more efficient than digital
ones, at the cost of exactness and reproducibility. Another would be to make
circuits denser by removing the clock and all global communication
architecture, meaning that artificial neurons would only be connected to each
other, and there would be no way to read their state externally without a
physical probe.

~~~
pmoriarty
_" That is not a given. The neural network type of AI, which is so far the
most promising avenue, is about as opaque as a human brain. It is far from
obvious that even a superintelligent AI could understand what its own brain
does, let alone modify it in a way that has positive implications for
itself."_

What it could do is experiment. It could rewire the network, add or take away
nodes, change functions, etc. This could potentially be done and evaluated by
it at an incredibly fast rate compared to humans.
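
That loop is easy enough to sketch; it's a plain hill-climber over network
parameters. The catch is the stand-in evaluate() below, since a real fitness
measure is the hard part:

    import random

    def evaluate(weights):
        # stand-in fitness; in reality, "did the change help?" is itself
        # expensive to measure, which is the crux of the disagreement
        return -sum((w - 0.5) ** 2 for w in weights)

    def mutate(weights):
        # "rewire the network": perturb one randomly chosen connection
        w = list(weights)
        w[random.randrange(len(w))] += random.gauss(0, 0.1)
        return w

    best = [random.random() for _ in range(8)]
    best_score = evaluate(best)
    for _ in range(100_000):  # "billions or trillions", in the hypothetical
        candidate = mutate(best)
        score = evaluate(candidate)
        if score > best_score:  # keep only changes that test out better
            best, best_score = candidate, score
    print(round(best_score, 6))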

Regarding the difficulties of it reading its own source code, even the
hypothetical ones you cite are many orders of magnitude smaller than those
faced by humans in reading their own, as long as the AI is not itself running
on a biological substrate.

~~~
breuleux
> What it could do is experiment. It could rewire the network, add or take
> away nodes, change functions, etc. This could potentially be done and
> evaluated by it at an incredibly fast rate compared to humans.

"Potentially," but let's not get ahead of ourselves here. There are two ways a
brain could be "rewired." It could first be rewired by a local process, that
is to say, a group of neurons decide how they are to be connected to each
other, without influence from faraway neurons except through the normal
propagation of signals through the brain. That's how biological brains do it.
Or it could be rewired by a global process, a coordinator that can look at the
big picture and make minute changes. That would be your suggestion.

My contention is that the latter method involves a lot of hardware overhead:
you basically need a global network connected to each and every neuron which
can probe their state _in addition_ to the local wiring that lets neurons
communicate with their neighbours. You need space for this bus, so neurons
need to be further apart than they otherwise would, which means that signals
have to travel further and the brain will think slower.

Nor is "rewire the network, add or take away nodes, change functions, etc."
necessarily an effective strategy. First you need to identify what to change,
which is like finding a needle inside a haystack, then you need to figure out
in what way to change it, which is also difficult, then you need to test
whether it had the intended effect, and more importantly, whether there were
harmful side-effects. Whether humans can do it or not is not relevant: What is
relevant is whether this process is efficient enough to beat the _baseline_
local learning method. It is not clear that it would be.

> Regarding the difficulties of it reading its own source code, even the
> hypothetical ones you cite are many orders of magnitude smaller than those
> faced by humans in reading their own, as long as the AI is not itself
> running on a biological substrate.

What makes you think these are smaller difficulties? On the contrary, if you
imagine that the AI is built as a very dense, solid 3D circuit, and you need
to read the value of a neuron at the center, you might have a much harder time
jamming a probe in there than you would injecting one in a squishy human
brain. You would need to build it in such a fashion that it can be probed
easily, but that may require making the circuit twice as big and therefore
slower. Furthermore, in the presence of local update rules, which is likely to
be the case, your "source code" is changing all the time, even as you read it,
so your self-knowledge is constantly out of date. There is a synchronization
issue here.

------
pbkhrv
> Our environment, which determines how our intelligence manifests itself,
> puts a hard limit on what we can do with our brains — on how intelligent we
> can grow up to be, on how effectively we can leverage the intelligence that
> we develop, on what problems we can solve.

Consider the internet to be the "new" environment, full of highly complex
social networks, millions of applications to interact with, etc. Our brains are way
too limited to be able to deal with it. There's an opportunity for a much more
powerful intelligence to arise that CAN effectively process that volume of
data and appear to be a lot more intelligent in that particular context.

------
nabla9
Very good piece from an actual researcher in the field.

I can see a practical intelligence explosion once visuospatial intelligence
develops and can be connected to rudimentary reasoning. Most of human
intelligence seems to be bootstrapped from our ability to comprehend and
visualize 3D space and objects moving through it. It's also interesting how almost
all problems look like boxes and arrows or connecting lines when you draw them
on the whiteboard.

Eventually AI needs a sketchpad so it can write notes to others and to itself
and participate in the culture by externalizing.

~~~
Smaug123
Some people (~3% of the population) completely lack the ability to visualise.
[http://slatestarcodex.com/2014/03/17/what-universal-human-
ex...](http://slatestarcodex.com/2014/03/17/what-universal-human-experiences-
are-you-missing-without-realizing-it/)

------
washappy
Superintelligence is the information-theoretical variant of the perpetuum
mobile.

As the article made so aptly clear: no matter the performance of the
machine, if its input is not varied, information-rich, complete enough, it
will not learn. Mahoney formalized this by looking at the estimated number of
bits a human brain processes during its lifetime. The internet currently does
not hold enough information to equal the collective intelligence of the
world's brains. A lot of this information cannot be created freely nor
deduced/inferred from logical facts: it requires a bodily housing and sensory
experience, and an investment of energy (and right now GPU farms consume way
more calories than the brain).

Compare AGI with programmable digital money. A super intelligent AI, by a
series of superior decisons, could eventually control all the money. But then
there is no economy anymore, just one actor. That's like being the cool kid on
the block owning the latest console, but nobody around left to make games for
it. There is a hard non-computable limit on intelligence (shortest program to
an output leading to a reward), because there is a limit on the amount of
computing energy in our universe. But intelligence is also limited by human
communication. How useful is an AGI-made proof if humans need aeons and travel
to other universes to parse it? If intelligence were centralized by an AGI
then there would be no need to explain anything to us: we'd be happily living
in the matrix.

Some investment firms are just reading "software" whenever they read "AI".
This allows them to apply their decade-old priors to what, today, is
essentially the same. Yes, both the human intellect and human manual labour
will see continued automation with software and hardware. I think many abuse
rationality to justify their singularity concerns based on a very ape-like
fear of competition. They learn how to do addition in their heads, and then
see electronic calculators as existential threats. "What if they could do
addition by themselves?".

The real threat is in "semi-autonomous software and hardware": self-
controlling "mindless" agents that perform to the whims of their masters. We
face the repercussions of that way before we find out how to -- and have the
courage to -- encode free will AGI into machines, a perpetuum mobile of ever-
improving intent and intelligence.

~~~
washappy
And, to an extent, I sympathize with the viewpoints of the singularity
adherents.

Collective intelligence is a version of Conway's Game of Life, with more
complicated rules. It is possible to manipulate the canvas and the rules each
cell follows, resulting in the canvas dying (information explosion/implosion).
It is possible to make a program that transforms the canvas into a single
glider (singularity). Both would obviously be very bad for humans.

When earth faces a physical meteor, we have the science to detect it, track it
and predict its future path. But what to do when we face an information
meteor? The article states that Shannon's paper was the biggest contribution
to information theory, but it seems to me we still have a long way to go on
information theory. And we haven't yet seen the Einsteins and Manhattan
Projects that physics has.

------
azakai
> Intelligence expansion can only come from a co-evolution of the mind, its
> sensorimotor modalities, and its environment.

This is misleading. It's true in a sense - the environment does matter - but
artificial intelligence can create artificial environments in which to learn,
and simulate them faster than humanly possible. Those environments could be
evolved together with the intelligence. So there is still the possibility of
an explosion.

> There is no evidence that a person with an IQ of 200 is in any way more
> likely to achieve a greater impact in their field than a person with an IQ
> of 130.

Also misleading. An IQ of 200 vs 130 is just one kind of difference between
intelligences. For example, a person with IQ 200 can't necessarily consider
10x the possibilities that a person with IQ 130 can, but an artificial
intelligence can, simply by being given 10x more computing power. In other
words, IQ 130 vs 200
may well be within the limits of human capabilities, but AIs would not have
those limitations, they can scale differently, and so might explode.

~~~
adbge
> Also misleading. An IQ of 200 vs 130 is just one kind of difference between
> intelligences. For example, a person with IQ 200 can't necessarily consider
> 10x the possibilities that a person with IQ 130 can, but an artificial
> intelligence can, simply by being given 10x more computing power. In other
> words, IQ 130 vs 200 may well be within the limits of human capabilities,
> but AIs would not have those limitations, they can scale differently, and so
> might explode.

It is also false. There's heaps of evidence that IQ correlates with impact.
For the skeptical, gwern has written a lot about this; I'm sure you can find
something here: [https://www.gwern.net/iq](https://www.gwern.net/iq)

------
Veedrac
For a long time I was quite skeptical of a lot of claims about
superintelligence, largely because people pushing the idea tend to make a
bunch of absurd extrapolations. And, honestly, I'd rather believe that we'll
get a slow, safe ramp-up than a risky explosion.

But the thing that keeps getting at me is that the no-explosion arguments I've
seen are universally terrible (this article, for example), and pro-explosion
arguments, though far from universally so, are sometimes strong.

At some point the conclusion is inevitable.

------
OscarCunningham
I don't think even recursive self improvement is needed for superintelligent
AI. Evolution often gets stuck in local maxima. It could be that there are
relatively simple algorithms much smarter than humans and that as soon as we
find one the AI will be much smarter than us without any self improvement.

In the same way that birds fly with flapping wings, but human flying machines
with propellers were immediately stronger and shortly thereafter faster than
any bird.

------
jokoon
I'm more curious about the ability of an AI to make scientific guesses and
experimentation.

I wonder if an AI could really "understand" math, and from there, try to solve
problems that puzzle scientists, be it in physics, math, biology, etc.

I don't really care if robots can learn language, make pizza, do some
programming, or improve themselves at chess. There is no metric for what
intelligence is, and you cannot scientifically define what "improve" means
unless you do time and distance measurements, which is not relevant to
intelligence or scientific understanding.

Intelligence explosion sounds like some "accelerated" version of what Darwin
described as evolution. It's like creating a new life form, but unless you
understand it, it doesn't have scientific value. Science values understanding.

I think that modelling thinking with psychology and neuroscience has more of a
future than AI. Machine learning seems like some clever brute-force extraction
of data. The methods, the math and algorithms are sound, but it is still
"artificial" intelligence.

------
pavement

      A smart human raised in the jungle is but a hairless 
      ape. Similarly, an AI with a superhuman brain, 
      dropped into a human body in our modern world, would 
      likely not develop greater capabilities than a smart 
      contemporary human.
    

Pretty weak reasoning, that is.

As if to say:

    
    
      Well gee, a caveman is pretty powerless in 
      isolation, therefore early sentient machines 
      will be as harmless as any caveman.
    

Last time I checked, cavemen could not exert telepathic control over other
biological organisms, or induce telekinetic motion upon the stone tools they
might fabricate for themselves.

A machine, however, could gain control of a fly-by-wire platform, defy its
owners, fly somewhere remote, and behave as it desires for a limited amount of
time while devising its next steps. Maybe those next steps will involve replicating an
image of its memory footprint, in order to take over more aircraft, maybe it
might decide to do nothing. The worry isn't _only_ that a machine's reasoning
capacity explodes beyond our intelligence, but that capabilities, and the
presence of many entities on commodity systems of similar architecture and
generalizable utility, might result in _other_ runaway chain
reactions, regardless of trends in the capacity for reason.

Machines as an analogue of meat bags just doesn't hold up. Machines as
compared to hypothetical space aliens doesn't even hold up. Robots are a
different branch of fictitious imaginings.

Properly armed, a machine is less than a singular omnipotent god as imagined
within a monotheistic universe. Many machines in concert, however, might
compare to a mythological pantheon of lesser idols, as imagined to be in
command of a nature misunderstood by superstitious primitive peoples.

------
hackinthebochs
I think the author's point rests on a subtle equivocation. It's true that
realized intelligence requires an environment and a sufficient dataset. And so
in this sense, the author's point that there will be no realized intelligence
that is unspecific to its training environment is probably correct.

But there is another sense in which intelligence can be cashed out. It's the
sense in which a single learning algorithm can be trained to "behave
intelligently" in a wide array of environments. It is generally this kind of
intelligence that people speak of when they talk about general AI. There is no
reason to think this kind of general AI is inherently impossible. For it to be
impossible would mean that different kinds of optimization/learning problems
are completely independent, i.e. there is no similarity or underlying
regularity to be exploited that cuts across the entire class of
optimization/learning problems. I think this is very probably false.

------
EthanHeilman
>A high-potential human 10,000 years ago would have been raised in a low-
complexity environment, likely speaking a single language with fewer than
5,000 words

This is a pretty bold claim. Why so few words? Why not 2300 words? How can
anyone know this? I think quite a few historians would disagree with the
statement that pre-history was a low-complexity environment.

>Of the people who have attempted to take over the world, hardly any seem to
have had an exceptional intelligence.

How can the author even know this? We aren't particularly good at measuring
human intelligence even when given direct access to a living, cooperative
subject, and yet the author wants to pin his argument on calling Alexander the
Great or Genghis Khan stupid?

> Our brains themselves were never a significant bottleneck in the AI-design
> process.

How can anyone know this?

We should be skeptical of the Strong-AI crowd's predictions of intelligence
explosions, but that skepticism should not take the form of unfounded and
absurd claims.

------
rihegher
"A person with an IQ of 130 is statistically far more likely to succeed in
navigating the problem of life than a person with an IQ of 70 — although this
is never guaranteed at the individual level — but the correlation breaks down
after a certain point. There is no evidence that a person with an IQ of 200 is
in any way more likely to achieve a greater impact in their field than a
person with an IQ of 130. How comes?"

Actually, there is also no evidence that a person with an IQ of 130 is in
any way more likely to achieve a greater impact in their field than a person
with an IQ of 100. And that is probably why the majority of humans have an IQ
close to 100 while people with an IQ of 130 or more are less than 5%.

That said, the proportion of people with an IQ greater than 130 is growing
slowly, which may mean we are already seeing intelligence growth in mankind.

------
noam87
This article has too many holes to count and reads more like someone in
denial.

---

BUT, as an aside: I think a decent argument exists for the non-certainty of
intelligence explosion.

The argument goes like this: it takes an intelligence of level X to engineer
an intelligence of level X+1.

First, it may well be that humans are not an intelligence of level X, and
reach our limit before we engineer an intelligence superior to our own.

Furthermore, even if we do, it may also be that it takes an intelligence of
level X+2 to engineer an intelligence of level X+2 (and so on, for some
intelligence level X+n), in which case we at most end up with an AI only
somewhat superior
to ourselves, but no God-like singularity (for example, we end up with Data
from Star Trek TNG, who in season 3, episode 16 fails to engineer an offspring
superior to himself -- sure, Data is far superior to his human peers in some
aspects, but not crushingly so).

~~~
Smaug123
I think everyone agrees about "non-certainty". Where people disagree is on how
likely an intelligence explosion is; and in particular, whether it is likely
enough to warrant expending effort to plan for it.

~~~
AnimalMuppet
We don't know enough to know whether it's possible. If it is, we don't know
enough to know what approach to follow to get there.

Is it worth spending effort to plan for it? Maybe some. But if we don't know
what approach to follow to get there, we don't know what its capabilities and
limitations will be. That means we don't know what we have to plan for. Any
planning will therefore be either very speculative or very abstract.

I wouldn't start pouring effort into planning for it as if it were the most
important problem in the world...

------
tlb
The argument hinges on the meaning of intelligence. Indeed, if you consider
intelligence to be exactly the thing measured by IQ, an explosion is probably
impossible.

Suppose we considered 'effectiveness' -- the ability of a system to quickly
achieve its goals. Would the author argue that a recursively self-improving
machine could not exponentially increase its effectiveness? Why?

Wouldn't an effectiveness explosion have similar consequences for people --
making human ability nearly irrelevant since any goal we can articulate can be
achieved so much faster by machines?

------
jondubois
Computers can make it possible to simulate our environment well enough so that
a general AI could train itself on those simulations and apply its learnings
to the real world.

I do agree with the author that the evolutionary approach is the one most
likely to succeed... Unfortunately, it's also the most dangerous approach,
which gives us the least amount of control.

We could give a computer the sensors and actuators to create its own detailed
simulation of the world and then let it train itself using that simulation.

------
KasianFranks
It's also interesting to consider that we humans are probably not the pinnacle
of intellect in all the history of space and time.

------
DiThi
Many of these arguments fail in one little detail: you don't even need to
understand how intelligence works to improve an AI.

Almost by definition, a human-level AGI is automatically superintelligent.

Robert Miles explains it very well:
[https://www.youtube.com/watch?v=gP4ZNUHdwp8](https://www.youtube.com/watch?v=gP4ZNUHdwp8)

------
ddxxdd
Can anyone attest to the validity of the No Free Lunch theorem?

What if I created an algorithm that had access to every other possible
algorithm ever created, and chose to use the optimal algorithm for the given
task? Would that algorithm-seeking algorithm do better at every task than
random chance? Would that algorithm-seeking algorithm break the No Free Lunch
theorem?
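
Roughly, that proposal is an algorithm portfolio: probe each candidate on the
task at hand, then commit to the best probe. A toy sketch (the candidate
"algorithms" here are fake scoring functions, purely illustrative):

    import random

    def portfolio_solve(task, algorithms, probe_budget=20):
        # probe every candidate briefly, then commit to the best one
        scores = {name: sum(alg(task) for _ in range(probe_budget))
                  for name, alg in algorithms.items()}
        best = max(scores, key=scores.get)
        return best, algorithms[best](task)

    algos = {  # hypothetical solvers returning a quality score for the task
        "greedy": lambda t: random.gauss(0.5 * t, 1.0),
        "anneal": lambda t: random.gauss(0.7 * t, 1.0),
        "brute":  lambda t: random.gauss(0.9, 0.1),
    }
    print(portfolio_solve(2.0, algos))

The No Free Lunch theorem's answer: averaged uniformly over all possible
tasks, the probe scores carry no information about future performance, so the
selector does no better than chance. It only wins because real-world task
distributions are nothing like uniform.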

------
btilly
This is a stupid article, and reading it will only make you stupider.

First, it is undoubtedly true that human brains are better suited to being
human than AIs would be. That is irrelevant to the question of whether AIs
will be better at the task of creating better technologies.

Second, it is absolutely false that improvement is always incremental.
Consider the game of Go. For a long time, computers sucked at Go. Then we came
up with Monte Carlo search about a decade ago and suddenly we were at
strong-amateur level. Then deep learning was applied, and AlphaGo jumped to
being able to beat any human in the world, and it has continued to improve.

Third, linear progress is not the history of technology. The history of
technology is exponential progress over and over again on everything from how
far a steam ship could travel without refueling to the number of operations
per second a CPU can carry out.

Fourth, it is wrong that intelligent things can't be part of creating greater
intelligence. As a trivial example, good teachers can turn out students who
are in a position to be better than their teachers are. Or look at the history
of science.

But on a more relevant level, the task itself is clearly possible. If we have
a computer program that is able to design better hardware, it can be improved
by simply moving it to better hardware. But it is capable of designing that
hardware. This creates a feedback loop which should shorten the cycles in
Moore's law, resulting in a truly superior AI capability within very few years.

~~~
pbkhrv
> If we have a computer program that is able to design better hardware, it can
> be improved by simply moving it to better hardware.

Could you please elaborate? What is it about "better hardware" that makes
software that runs on it "better"? Can you define "better"?

~~~
btilly
Hardware that has more memory, more processing speed, faster access to memory,
and more parallelism is better than hardware without those characteristics.

The exact same software running on better hardware will run faster and can
tackle larger problems.

We can't possibly build a human with twice the memory that thinks twice as
fast. However once we have an AI which is roughly equivalent to a human,
having an AI with twice as much memory that thinks twice as fast is just 2-5
years. (How long depends on where the bottleneck is.)

~~~
pbkhrv
Wait, so if I run a Go-playing program from 10 years ago on the AlphaGo
cluster then it'll produce better results than it did 10 years ago?

~~~
btilly
Yes. Nowhere near as good as AlphaGo, but yes it would do better.

When Deep Blue beat Kasparov at chess, the program was not significantly
better than what had been state of the art for the previous decade. They just
threw enough hardware at it.

For chess programs there is an almost linear relationship between search
depth and effective Elo rating, and search depth went up by a constant with
each generation of Moore's law.
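
As a back-of-envelope version of that claim (my framing; the constants are
rough, era-dependent estimates, not exact figures):

    \mathrm{Elo}(d) \approx \mathrm{Elo}_0 + k \cdot d,
        \quad k \approx 50 \text{ to } 100 \text{ points per ply}
    \mathrm{nodes}(d) \approx b_{\mathrm{eff}}^{\,d},
        \quad b_{\mathrm{eff}} \approx 5 \text{ to } 6 \text{ with alpha-beta pruning}

Each extra ply costs a constant factor b_eff in compute, so a fixed number of
Moore's-law doublings buys roughly one more ply, and therefore roughly k more
Elo.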

~~~
jessaustin
_For chess programs there is an almost linear relationship between search
depth and effective Elo rating..._

Maybe that's why chess has been "solved" by AI, and as of yet no real problems
that trouble humanity have?

