
A Reply to François Chollet on Intelligence Explosion - lyavin
https://intelligence.org/2017/12/06/chollet/
======
aetherson
Something to note about the "we're on an exponential (or at least more-than-
linear) increase in technology" arguments: They generally need to reach back
quite a long time to sound convincing.

Yudkowsky makes two arguments about how fast technology or society is
evolving: in one he chooses 500 years ago, 1517. In another, he talks about
"the last 10,000 years."

In contrast, Chollet compares 1900-1950 with 1950-2000.

I agree, we've changed the world in big ways compared to 1517 or 8000 BC. But
if we're on a more-than-linear increase, then we _should_ be seeing more and
more technological growth in very recent timeframes, not needing to reach back
centuries or millennia.

In fact, if you consider a y = log(x) or y = sqrt(x) function, those fit a
narrative of "If you look back a long time, things seem almost crazily
changed, but if you look at recent history, it looks slower" much better than
a y = x^2 or y = e^x function does.
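
Just to make that concrete, here's a toy sketch (my own numbers, with
arbitrary window choices) of how much each shape changes over an old window
versus a recent one:

    import math

    # Compare change over an old window [1, 50] vs. a recent window [50, 100]
    # for each shape. Sub-linear shapes pack most of their apparent change into
    # the distant past; super-linear shapes do the opposite.
    funcs = {
        "log":  math.log,
        "sqrt": math.sqrt,
        "x^2":  lambda x: x ** 2,
        "e^x":  lambda x: math.exp(x / 10),  # rescaled so the numbers stay readable
    }

    for name, f in funcs.items():
        print(f"{name:>4}: old window change = {f(50) - f(1):10.2f}, "
              f"recent window change = {f(100) - f(50):10.2f}")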

~~~
LolWolf
Just for funsies (but it may answer your question): every (differentiable)
function looks linear at a small enough time scale. This includes e^x and
pretty much every other function you listed.

~~~
aetherson
Only on infinitely small subintervals of x.

~~~
LolWolf
The error of that linear approximation vanishes as the interval shrinks, and
(most importantly!) we only have finitely many noisy samples; so for a small
enough interval, given some variance, the function is statistically
indistinguishable from a linear one.
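
A quick numerical illustration (my own toy choices for the interval and noise
level):

    import math

    # On a short interval, compare e^x to the straight line through its endpoints.
    # Interval width and noise level are arbitrary choices for illustration.
    a, b, noise_sd = 0.0, 0.2, 0.05
    slope = (math.exp(b) - math.exp(a)) / (b - a)  # secant line through the endpoints

    xs = [a + (b - a) * i / 20 for i in range(21)]
    max_gap = max(abs(math.exp(x) - (math.exp(a) + slope * (x - a))) for x in xs)
    print(f"max |e^x - line| on [{a}, {b}] = {max_gap:.4f}, noise sd = {noise_sd}")
    # The gap (~0.006) is well below the assumed noise, so noisy samples on this
    # interval can't tell the exponential apart from a straight line.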

------
evanwise
I can't remember where, but I saw a fairly damning argument against the
intelligence explosion hypothesis on the grounds that it only works if the
algorithm used to design a mind with n units of intelligence (whatever those
are) scales linearly with n. If it scales faster than linearly, then each
recursive bootstrapping step takes longer and longer, so that eventually the
next step would take longer than the time left in the universe, meaning there
is some finite intelligence cap for any such bootstrapped mind. It seems quite
implausible to me that the problem of designing a mind would scale linearly,
given that ostensibly much simpler problems, like sorting a list of strings,
require log-linear or polynomial-time algorithms.
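
A toy model of that argument (entirely my own assumptions: each generation
doubles intelligence, designing intelligence I costs I**a units of work, and a
designer of intelligence I does I units of work per unit time):

    # Toy bootstrapping model; the step time grows without bound exactly when a > 1.
    def step_times(a, generations=6, start=1.0):
        times, intel = [], start
        for _ in range(generations):
            successor = 2 * intel                  # next mind is twice as smart
            times.append(successor ** a / intel)   # design work / designer speed
            intel = successor
        return times

    for a in (0.9, 1.0, 1.5):
        print(f"a = {a}: step times = {[round(t, 2) for t in step_times(a)]}")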

~~~
rhaps0dy
>meaning there is some finite intelligence cap for any such bootstrapped mind

This does not preclude an intelligence explosion, this cap could be many (say,
100) times higher than human intelligence. We could still see many features of
an explosion in that case.

~~~
evanwise
True, but I think that the results of such a "weak intelligence explosion"
(where the linear/sublinear scaling case would be a "strong intelligence
explosion"), while still remarkable, would fall far short of some of the
expectations placed on general AI by the singularity / superintelligence
crowd. For the sake of argument, extrapolating from our current energy
consumption using a (simplistic) linear model and some rough back-of-the-
envelope calculations, the intelligence required to harness the total energy
output of the sun would be 40 trillion times the aggregate intelligence of the
entire human species today. Several hundred times just isn't going to cut it.
Now, if our ability to harness energy increases exponentially with
intelligence, then maybe it could work, but that's just an assumption, and,
given that there are hard physical limits on the efficiency of energy
generation due to thermodynamics, it seems very unlikely.
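
For what it's worth, the rough arithmetic behind a figure like that (using my
own commonly cited round numbers, not necessarily the ones used above) looks
like this:

    # Order-of-magnitude check: ratio of total solar output to current human
    # energy consumption, using commonly cited round figures (my assumptions).
    solar_output_w = 3.8e26        # total power output of the sun, in watts
    human_consumption_w = 1.8e13   # world primary energy consumption, roughly 18 TW

    # Under the simplistic linear model (energy harnessed proportional to
    # intelligence), the required intelligence multiple is just the power ratio,
    # which lands in the tens of trillions.
    print(f"required multiple ~ {solar_output_w / human_consumption_w:.1e}")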

~~~
Chickenality
I don't understand the connection to energy consumption. Humans have been able
to extract increasing amounts of energy over time without correspondingly
large increases in human intelligence. I don't see a strong reason to doubt
that this will continue, so I don't see a strong reason to doubt that a >=
human-level-intelligence AI could do it either.

~~~
evanwise
The raw computing power of the human wetware has not increased appreciably
over historical timescales, but the total intelligence of humanity includes,
for example, the effective intelligence gained by storing knowledge in
external devices like books. The gestalt organism that is "humanity" is _much_
smarter than it was 500 or even 100 years ago, which correlates with our
ability to extract resources. It's an extremely simplistic model, but since I
was only after a very rough guess, I went with it.

------
skybrian
The more I read about this, the more I think we should stick to narrow AI. If
you don't want your cancer research bot to take over the world, don't give it
general reasoning capability. Goal alignment seems incredibly fragile in
comparison.

~~~
pasquinelli
by what means will a cancer research bot take over the world? it doesn't
matter how smart you are if you don't have the necessary means to do
something. i think it's a fantasy, something that people who imagine
themselves to be very intelligent have latched onto-- the idea that their best
quality is _the_ best quality.

~~~
GuiA
Well, the general AI doomsday fundamentalist argument proceeds as follows: you
tell an AI to cure cancer. It can’t, so it spends some time recursively
improving itself, then it finds out that the cost of curing cancer is C, but
the cost of killing all humans (and therefore indirectly curing cancer) is C’,
where C’ < C. Boom, all humans are dead.
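
A toy sketch of that failure mode (made-up plans and costs, just to show the
C’ < C step):

    # A naive planner that minimizes cost subject only to "no more cancer"
    # happily picks the catastrophic plan, because its cost is lower.
    plans = {
        "cure cancer the hard way": {"cost": 100, "cancer_eradicated": True},
        "kill all humans":          {"cost": 10,  "cancer_eradicated": True},
        "do nothing":               {"cost": 0,   "cancer_eradicated": False},
    }

    best = min((name for name, p in plans.items() if p["cancer_eradicated"]),
               key=lambda name: plans[name]["cost"])
    print("chosen plan:", best)   # -> "kill all humans"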

If you’re a smarty pants you tell the AI to cure cancer AND not kill all
humans. But because the AI is so smart it comes up with something no human
would have ever thought of, like putting all humans in eternal cryostasis,
thereby keeping them alive AND eradicating cancer. No matter what you do, the
AI will outsmart you because recursive-self-improvement, and humanity dies.

That’s what Elon Musk, Stephen Hawking, and others are worried about.

~~~
nkrisc
I think the question, and the one I have too, is: why would such an AI have
any ability to do anything beyond outputting a Cure Cancer solution to a
terminal? Why does the AI need to be the one to implement its derived
solution?

The AI has a good think and comes up with a solution of killing all humans.
The researchers read the printed report of the solution and decide against
implementing it, tweak parameters, and ask the AI to take another go at it.

~~~
yorwba
"The researchers read the printed report of the solution and decide" ... to
implement it immediately. It will save so many lives! They only need to
manufacture a few specific molecules to assemble them into nanobots and then
... where is that gray goo coming from?!

~~~
nkrisc
I guess that's the risk of blindly trusting AI. Of course, the AI did not
forcibly destroy us in that scenario. Trust, but verify.

If the AI's solution is so complex as to be beyond human understanding, well,
that's a different issue.

~~~
Yen
Say you want to see a picture of an orange cat. So you send a short HTTP query
to the nice computer at google.com, which responds with 700,000 characters
worth of instructions, in unreadable minified formatting, with the implicit
promise that if you execute the instructions, you will eventually see a
picture of an orange cat.

The google search result page's source code is, for many individual humans,
already so complex as to be beyond understanding. And that's a computational
artifact largely produced by other humans directly!

Say you build an AI, and ask it how to win a political election - and it
outputs a simple list of reasonable-sounding suggestions of where to campaign,
promises to make, people to meet, slogans to use, and criticisms of your
opponent to focus on.

Before actually implementing those suggestions, do you think you could be
_very_ certain that following those suggestions would result in you winning
the election? Or would it be possible that the AI understood social dynamics
so much better that it gave you a list of instructions that seemed mostly
reasonable, but would actually result in your opponent winning in a landslide?
Or in the country undergoing revolution? Or in you winning, along with a
surprising social trend of support for funding AI research?

------
vbarrielle
Lots of these points reduce to the ability to simulate a physical environment
very fast, much faster than the events actually occur. But it doesn't look
like it's easy to simulate physics at high speed, let alone faster than what
happens in our environment. Therefore we are bound by our

~~~
Chickenality
This is probably a good criticism if it turns out that the right level of
abstraction for most problems is Physics. And it seems like your argument
would apply equally well against the idea of human intelligence. Luckily, our
minds have developed other abstractions that allow us to solve problems much
faster than if we had to simulate them as physics problems. For example, I
don't need a physics-level simulation of my friend's brain when I want to
predict how they'll react to a gift I'm giving them.

~~~
vbarrielle
You're right, there are lots of problems where a simpler abstraction is
possible.

But I don't think my argument applies to human intelligence; it just means
that human intelligence is what you can get from all the data points available
by observing the world (plus some simulation done by our brains, though I'm
under the impression that our brains don't perform accurate simulation; it
looks more like heuristics).

------
btilly
The article that this is a response to was discussed previously at
[https://news.ycombinator.com/item?id=15788807](https://news.ycombinator.com/item?id=15788807).

People apparently didn't like my reply at
[https://news.ycombinator.com/item?id=15789304](https://news.ycombinator.com/item?id=15789304),
but I still stand by everything that I said there.

------
jstewartmobile
For those who don't know, François Chollet is the author of Keras, a leading
deep learning framework for Python.

Here is Chollet's original essay (it's a worthwhile read):
[https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec](https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec)

------
js8
I am not convinced that "super-intelligence" is either possible or a threat.

If it is possible, why are not societies or corporations super-intelligent? I
suspect they are not, because they face organizational problems that any
system will face. And I don't think these organizational problems can be
solved with faster or more communication, but rather that they are fundamental
in distributed systems.

But maybe they are (at least in some cases) vastly more intelligent. But then,
are they a threat? I think they are more of a threat to themselves than to
humans.

------
darepublic
Why should the "Seed AI" that François speaks of need GPS skills "slightly
greater" than those of humans to spark the explosion? Shouldn't having GPS at
all do the trick, the way he describes things, since even with human or less-
than-human-level skills the computer could still rapidly recurse and self-
improve?

------
_sy_
A good, recent, and comprehensive primer on intelligence explosion and its
theoretical implications: "Life 3.0: Being Human in the Age of Artificial
Intelligence" by MIT physicist Max Tegmark.

------
SandB0x
Reminder that Eliezer Yudkowsky is a crank:
[https://rationalwiki.org/wiki/Eliezer_Yudkowsky](https://rationalwiki.org/wiki/Eliezer_Yudkowsky)

~~~
dang
This is a self-refuting comment in terms of its value for HN: even assuming
you're 100% right, it's neither civil nor substantive. Please don't post like
this.

------
sitkack
The AGIs won't take over the world. Humans and corporations will use narrow
AIs to do much worse (better) before that happens. The intelligence explosion
that I would like to see centers around widely distributing tools that make
formal methods easier to use. Humans so far have a lock on constructive
creativity; AI and computation could augment that creativity by effortlessly
checking our work. It could make the cognitive work of all humans more
rigorous.

------
bobthechef
These kinds of discussions aren't very rigorous and reveal a basic
philosophical illiteracy and philistinism at work. There's quite a bit of
question begging going on. The elephant in the room is that the prevailing
materialistic/naturalistic (MN) understanding of the world is completely
impotent where intentionality, qualia, consciousness, etc., are concerned.
Philosophers like Thomas Nagel talk about it; the incorrigible Dennetts of the
world prefer to shutter the windows and live in their intellectual safe
spaces. To say that the notion that computers are intelligent is problematic
is putting it very lightly.

MN is rooted in the expulsion of the mind from the reality under
consideration, in the process sweeping many things under the “subjective” rug
that don’t fit the methodologies used to investigate reality. But now, when
the mind itself becomes the object of explanation, when someone remembers that
minds are, after all, part of reality, it is no longer possible to play the
game of deference and one must deal with all of those things we’ve been
exiling to the “domain of the subjective”. MN is wholly impotent here, by
definition. Qualia? Forget it. That’s why MN tends to collapse into either
some form of dualism or eliminativism, the latter of which is a non-starter,
the former of which has its own problems.

And yet, despite the terminal philosophical crisis MN finds itself in, the
chattering priesthood of Silicon Valley remains blissfully unaware, hoping to
conjure up some fantastical reality through handwaving.

~~~
Chickenality
I haven't seen anyone argue that intentionality, qualia, or consciousness
would necessarily be either a precondition or a result of developing AGI. In
fact, thought experiments like the "paperclip maximizer" are often brought up
to argue that a machine could be very alien in its internal experience or lack
thereof, but still pose an existential threat.

~~~
goatlover
If an AI can turn the world into paperclips, it can certainly understand that
we wouldn't want that. Paperclipping everything is a much harder task.

~~~
PeterisP
Of course any powerful intelligence can understand that we wouldn't really
want that. The question is: why would it care about what we really want? Its
core values would be that more paperclips are good, and doing what humans
really want is evil if it results in fewer paperclips.

Currently, we don't know how to properly define a "do what I mean / do what we
really want" goal in a formal manner; if we had a superpowerful AGI system in
front of us ready to be launched today, we wouldn't know how to encode such a
goal in it with guarantees that it won't backfire. That's a problem we still
need to solve, and the solution is not likely to appear as a side effect of
simply trying to build a powerful/effective system.

~~~
goatlover
The paperclip maximizer example starts with a human asking an AGI to make some
paperclips. The problem with the thought experiment is that this turns into an
all-consuming goal, at the expense of everything else the AGI would understand
humans to care about.

However, a more complicated example, like having the AGI bring about world
peace or clean up the environment, could have undesirable side effects because
we don't know how to specify what we really want, or because we have
conflicting goals. But that's the same problem we have with existing power
structures like governments or corporations.

