
The impossibility of intelligence explosion - yters
https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec
======
ravitation
Discussion of the original article on HN:
[https://news.ycombinator.com/item?id=15788807](https://news.ycombinator.com/item?id=15788807)

~~~
dang
We changed the url from
[https://mindmatters.today/2018/11/software-pioneer-says-gene...](https://mindmatters.today/2018/11/software-pioneer-says-general-superhuman-artificial-intelligence-is-very-unlikely/),
which simply copies excerpts from this.

~~~
yters
If the original article is a year old, why was this killed as a dupe? I've
seen old articles reposted on HN before. This repost seemed to be pretty
popular.

Also, why is it considered blogspam for having excerpts from an original
article? I also see this occur frequently on HN. The article not only has
excerpts, but comments on the excerpts and links to related material. This
appears useful to an interested reader.

------
Rallerbabs
Quote: "An overwhelming amount of evidence points to this simple fact: a
single human brain, on its own, is not capable of designing a greater
intelligence than itself. This is a purely empirical statement: out of
billions of human brains that have come and gone, none has done so. Clearly,
the intelligence of a single human, over a single lifetime, cannot design
intelligence, or else, over billions of trials, it would have already
occurred."

What an incredibly inadequate argument. Obviously, none could have done so in
the past, because the required resources simply weren't available.
Breakthroughs are being made right here in this era, with Jeff Hawkins's work
being one of the latest.

This guy reminds me of Richard Smalley, who, against better judgment,
attempted to make irrational arguments against molecular nanotechnology, as
envisioned by Drexler.

Also, he can't be a software pioneer. He's too damn young.

~~~
Sharlin
Wow. That argument can be applied to _any_ invention or creation. There are
bad arguments and then there are ridiculous ones.

~~~
baddox
Yes. The argument is logically equivalent to "nothing can ever happen for the
first time."

------
fizx
"When a distinguished but elderly scientist states that something is possible,
he is almost certainly right. When he states that something is impossible, he
is very probably wrong." ~Arthur C. Clarke

~~~
coldtea
Yeah. Here we are in one of those rare cases where he is right (though not
necessarily for the arguments he makes).

~~~
worldsayshi
So what are the right arguments?

~~~
coldtea
1) That the rise of AI was already predicted as "coming soon" in the 60s and
70s, and nothing came of it then.

2) That current AI is just very primitive NNs, and all the hoopla is mostly
hype (the way Big Data, Grid Computing, Biotechnology, Nanotechnology,
Virtual Reality, 3D Printing, and several other previous fads were sold as
revolutionary and the cure for everything).

3) That they are several orders of magnitude more primitive than a human
brain.

4) That we haven't had Moore's-law-style increases in CPU power for quite
some time.

5) That, never mind AI, we can't even make a good email app...

6) That all kinds of peddlers make good money by touting AI (e.g. IBM selling
BS tech as "Watson").

~~~
wild_preference
If you can agree that intelligence is just an information processing challenge
instead of a magical god-given treasure, then general AI is inevitable.

Though note that even in six points you couldn't come up with an argument
against the future of AI development. They were all independently irrelevant,
like denying future medical breakthroughs because doctors didn't wash their
hands 100 years ago. Your points are all distractions, like suggesting Elon
Musk's tweeting habits today doom all future humans from ever reaching Mars.

~~~
coldtea
> _If you can agree that intelligence is just an information processing
> challenge instead of a magical god-given treasure, then general AI is
> inevitable._

As a scientist, I don't believe anything is "inevitable" (that's for religious
people to believe).

I can very much imagine scenarios where we're already on the steep end of a
technological curve, having exhausted most low hanging fruit, and failing to
make many more revolutionary breakthroughs (no matter the timeframe).

> _Though note that even in six points you couldn't come up with an argument
> against the future of AI development. They were all independently
> irrelevant, like denying future medical breakthroughs because doctors
> didn't wash their hands 100 years ago._

I didn't want to make an argument about the possibility or impossibility of
AI in abstracto (whether it's physically possible or not: since our brain
does it, it is physically possible, and that question is boring, again left
to religious people to discuss, of whom there's no shortage, even among the
atheists). I wanted to make one about its _actual_ possibility, given what we
know of our technological development, past claims, trends, etc.

------
dicroce
There are an estimated 100 billion neurons in the human brain. We know from
information theory that DNA is not nearly long enough to describe all of
their connections. This means that most of the complexity of the brain is
emergent.
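
A rough back-of-envelope version of that information-theory claim, as a
sketch: the genome length, synapse count, and naive encoding below are
commonly cited ballpark assumptions of mine, not figures from the comment.

    import math

    # Commonly cited ballpark figures (assumptions, not from the comment):
    base_pairs = 3.2e9          # human genome length
    bits_per_base = 2           # 4 possible bases -> 2 bits each
    neurons = 1e11              # ~100 billion neurons
    synapses_per_neuron = 1e3   # conservative low-end estimate

    genome_bits = base_pairs * bits_per_base

    # Naively specifying one synapse's target neuron takes log2(neurons) bits.
    bits_per_synapse = math.log2(neurons)
    wiring_bits = neurons * synapses_per_neuron * bits_per_synapse

    print(f"genome capacity: ~{genome_bits:.1e} bits")  # ~6.4e9
    print(f"explicit wiring: ~{wiring_bits:.1e} bits")  # ~3.7e15
    print(f"shortfall:       ~{wiring_bits / genome_bits:.0e}x")

Even with generous rounding, the genome falls short by five or six orders of
magnitude, which is the gap the comment attributes to emergence.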

I envision bundles of neurons that begin their existence mostly
disconnected... These bundles are described somewhere in DNA (but not their
ultimate learned configuration) and so is whatever learning algorithm drives
this process.

We will have AGI when we understand the underlying learning algorithm our
brain uses... Personally, I believe that algorithm involves fantasizing future
scenarios and learning from the fantasies.

~~~
onemoresoop
Fantasy is a mental computation as well. Fantasy AI ftw!

------
thrwaway22
"Feynman reported 126, James Watson, co-discoverer of DNA, 124 — which is
exactly the same range as legions of mediocre scientists."

Can we stop the "Feynman had an IQ of 126" narrative? Many people doubt his
IQ was in the 120 range. IQ scores, despite what people think, are not always
accurate, and Feynman is an example of such a case. Plus, you'd typically
report a range on IQ exams; a ±5-point range could place him at 131, which
still feels too low for someone as brilliant as he was.

>"Feynman received the highest score in the country by a large margin on the
notoriously difficult Putnam mathematics competition exam, although he joined
the MIT team on short notice and did not prepare for the test." [0]

Someone who does that most likely does not have a 126 IQ. Given his
accomplishments in physics, it is likely he was more intelligent than any IQ
test gave him credit for.

[0] [https://www.psychologytoday.com/us/blog/finding-the-next-ein...](https://www.psychologytoday.com/us/blog/finding-the-next-einstein/201112/polymath-physicist-richard-feynmans-low-iq-and-finding-another)

~~~
axus
IQ is just a number determined by a test, by definition. I don't think it's
possible for a person's IQ number to be independent of the IQ test. Certainly
he made more intelligent contributions to physics than one would expect from
a 126 IQ.

~~~
aetherson
IQ is valuable insofar as it is a proxy for g. I think that your parent poster
was suggesting that while IQ is in general correlated with g, there are cases
where it is not.

------
ravenstine
And yet general biological intelligence arose from unintelligent forces? Hm,
interesting.

I don't know enough about general artificial intelligence, but his claim
seems to hinge a lot on what we already know. There are things we may not
know yet that will lead to us creating general artificial intelligence.
Although I don't think it'd be required, it may turn out that the experience
of consciousness isn't something we can mechanically produce but is a
property of matter itself. I can't prove that is so, but it's a fun little
hypothesis I enjoy thinking about. Nonetheless, there's no particular reason
to believe humans couldn't figure out similar knowledge and apply it to
creating better artificial intelligence.

Some of his reasoning is just flat out fallacious:

> An overwhelming amount of evidence points to this simple fact: a single
> human brain, on its own, is not capable of designing a greater intelligence
> than itself. This is a purely empirical statement: out of billions of human
> brains that have come and gone, none has done so.

Yet out of all those billions of brains, how many of them have tried
developing general artificial intelligence, let alone the "AI" of an if-then-
else statement? Probably 0.00001% of those billions of brains. Yes, I made
that number up, but it wouldn't be surprising for that number to be minuscule,
below a fraction of a fraction of a percent.

Evolution has had billions of years. We've only had computers for less than a
century. In that respect, it's not at all surprising that humans haven't
mastered what nature unintelligently developed for much longer.

> In particular, there is no such thing as “general” intelligence. On an
> abstract level, we know this for a fact via the “no free lunch” theorem —
> stating that no problem-solving algorithm can outperform random chance
> across all possible problems.

That depends on how you define the word "general". I don't think that
scientists and engineers working on intelligence actually use the term
"general artificial intelligence" in the sense that such an intelligence
could solve literally any problem. Humans are _generally_ intelligent within
a set of fixed domains, but that doesn't mean they lack general intelligence
within those constraints.
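
For what it's worth, the "no free lunch" result can be checked directly on a
toy domain. A minimal sketch (my construction, not the article's): averaged
over every possible binary function on four points, any two fixed search
orders need exactly the same mean number of evaluations to find the maximum.

    from itertools import product

    def evals_to_find_max(order, f):
        # Evaluations a fixed search order needs to first hit the
        # function's maximum value.
        best = max(f)
        for i, x in enumerate(order, 1):
            if f[x] == best:
                return i

    # Two arbitrary deterministic searchers over a 4-point domain.
    fns = list(product([0, 1], repeat=4))  # all 16 binary functions
    for order in [(0, 1, 2, 3), (3, 1, 0, 2)]:
        avg = sum(evals_to_find_max(order, f) for f in fns) / len(fns)
        print(order, avg)  # identical average for both orders

The caveat above still applies: real problems are not drawn uniformly from
all possible functions, which is exactly why "general within constraints"
escapes the theorem.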

------
gameswithgo
Our brains already do it, so the likelihood is 100%. Nothing would stop
transistor-based computers from doing it, though they might be much slower at
it.

~~~
yters
How do we know our brain creates our mind? That is a materialist assumption,
which is just an assumption.

~~~
yorwba
It's an assumption that has been tested many times. Humans can survive losing
parts of their brain, but it causes observable changes in their mind. That's
where those colorful maps of brain regions and their functions come from: by
recording what kind of brain damage caused what kind of function to
deteriorate.

Don't forget that people used to believe that the heart contained the mind and
the brain was just a cooler. There was real evidence that caused scientific
consensus to change.

~~~
yters
There are some interesting examples of very highly functioning individuals
with very little brain.

At any rate, correlation does not imply causation.

~~~
0xffff2
What's your proposed alternative? I'm trying not to be dismissive, but I'm
also having trouble coming up with a reasonable alternative hypothesis.

~~~
yters
I would say halting oracles are a possibility that has some degree of
mathematical traction:

[https://www.am-nat.org/site/halting-oracles-as-intelligent-a...](https://www.am-nat.org/site/halting-oracles-as-intelligent-agents/)

But, we shouldn't prefer an incorrect model to no model, and the brain = mind
model does not seem correct.

~~~
baddox
From that article:

> So, the objection that humans cannot solve every problem only shows that
> humans might not be complete halting oracles, but cannot show that humans
> are not partial halting oracles.

I don't get it. We can already easily make Turing machines that answer the
halting problem for certain infinite sets of Turing machines, so there's
nothing unique about the human brain in this regard.
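
A concrete toy version of that point, assuming a deliberately simple program
family of my own choosing (not from the linked article): there are infinitely
many instances of the loop below, and one trivial computable test decides
halting for all of them.

    def halts(x, c):
        # Decides halting for the infinite program family:
        #     while x > 0:
        #         x -= c
        # The loop terminates iff the guard fails immediately or each
        # iteration makes progress toward it.
        return x <= 0 or c > 0

    print(halts(10, 1))   # True:  x reaches 0 after 10 iterations
    print(halts(10, 0))   # False: x never changes, so it loops forever
    print(halts(-5, 0))   # True:  the guard fails on entry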

~~~
yters
The previous sentence to the one you quote explains:

> We can even remove an infinite number of problems from the set and still
> have an infinite and undecidable set.

It is an undecidable set, so it cannot be decided by any TM.
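
One way to reconstruct that quoted claim (my reading, not a proof from the
article): removing a decidable subset from an undecidable set must leave the
remainder undecidable.

    % Let S be undecidable and D a decidable subset of S. If S \ D were
    % decidable, then S = (S \ D) ∪ D would be decidable too (run both
    % deciders and OR the answers), a contradiction:
    \neg\mathrm{Dec}(S) \land \mathrm{Dec}(D) \land D \subseteq S
        \implies \neg\mathrm{Dec}(S \setminus D)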

~~~
baddox
Oh okay, I guess that entire paragraph is just a complicated way of rephrasing
their claim, which is that there is at least one TM for which no TM can
possibly answer the halting problem, but for which humans can.

So then, the claim is that it's at least conceivable that humans can do this,
because you can remove a finite or even infinite number of elements from an
infinite set and still be left with an infinite set. I still don't understand
how this is a remotely useful insight or in any way an indication that humans
may have super-Turing abilities.

~~~
yters
That's correct. It does not prove that humans have this capability. It just
shows that a common counterargument fails, and then the article goes on to
explain how it would make an empirical difference if humans are partial
halting oracles.

------
throw2016
It seems more likely more fundamental innovations are going to come from
academia with a culture of seriousness and decades of work without returns.

On this side there is a strong tendency to be blinded by hype, underestimate
problems or wish them away with naive desperation that things will just happen
based more on hope than reality.

We saw that with the self-driving crowd underestimating the problems and
overestimating the existing technology, and now with the AI folks, who are
already guilty of perpetuating hype while knowing full well that their
extremely limited definition of 'AI' is nothing like what the world
understands by AI.

All the privacy shenanigans, the crypto disaster, self-driving cars, and AI
leave the reputation of the tech community in a very bad place.

------
xutopia
I too have some resistance to the notion that we will see AGI at a higher
level than humans within 50 years, but I don't think the arguments he makes
hold up. The technology simply did not exist before, and we're now improving
on so many fronts mostly because we're capable of training huge neural
networks we couldn't fathom having access to just a few years ago.

I don't think the technology required to build superhuman AGI exists yet, at
least not at a reasonable cost to humanity. I don't think we will reach the
level of massively parallel computing required for superhuman AGI until maybe
2070, and that's only if we bring all the world's supercomputers together in
one giant artificial brain.

~~~
ansible
I think we've seen the cross-over point for AGI in the last few years.

Just recently, within the last decade, we've seen incredible advances in the
_usefulness_ of existing ML/NN-based approaches. We have phones and smart
speakers with sufficiently reliable speech recognition. Ditto for vision
tasks.

What's different now compared to decades past is that businesses can see the
benefit of incremental improvements in existing systems. If Siri can easily be
demonstrated to be more useful than Cortana, that is a significant competitive
advantage, which will sell products and services.

The tech giants (and others) see this, so they will continue to invest. This
isn't like in the past, where we tried things here and there, they didn't
work as well as we had hoped, and then we stopped investing.

The pressure is on all the tech giants to keep investing, and applying that
research to more and more of their own businesses (like using ML to manage
power and cooling at a data center).

We're on a roller-coaster ride into the future, and nothing short of worldwide
disaster will stop it. For good or for bad.

------
iotb
To make an accurate projection or statement about something, it is generally
necessary to have a sound understanding of it. This is just my general
intelligence speaking here. I see nothing of note that reflects that this
person has a fundamental understanding of General Intelligence or has even
been in pursuit of it. Authoring an optimization algorithm suite doesn't mean
you have an understanding of General Intelligence. This was a non-read for me
before I even clicked on the link. Two posts today about AGI. Seems the next
hype train is arriving right on schedule, and its banner will be AGI.

------
baq
the meat computer hidden in your head does it running on ~20W of power. why
an electronic computer would be incapable of the same feat is beyond me. make
it 20kW or 20MW if 20W sounds unreasonably low.

------
josquindesprez
> There is no evidence that a person with an IQ of 170 is in any way more
> likely to achieve a greater impact in their field than a person with an IQ
> of 130.

This feels disingenuous. If things other than intelligence are bottlenecking
human success (which isn't even entirely true), computers are generally
better at those things anyway, especially in an era of massive low-cost
computing resources.

------
overlords
We have an example of AGI: the human brain.

We have an example of superintelligence: humans working together in groups,
such as corporations, are a superintelligence.

So from that, the invention of a human-level AGI naturally leads to
superintelligence: lots of AGIs cooperating.

The question then becomes whether we're going to get to AGI. Evidence points
to yes. AGI (human intelligence) is a collection of abilities, and the
machine learning community is steadily making progress on them. Speech,
vision, machine translation, question answering, summarization, etc. are all
being worked on, and steady progress, in many cases rapid progress, is being
made.

Unsupervised learning and reinforcement learning are the frontiers, and both
have seen advances in just the past couple of years (GANs, predictive
learning, inverse reinforcement learning, imitation learning, domain
randomization).

Unsupervised learning in particular is likely the key to AGI, and only
recently has significant progress been made on it: predictive learning, as
Yann LeCun calls it.

(Personal conjecture: once a larger number of people investigate predictive
learning over the next couple of years, it might lead to AGI, in maybe just
2 years.)

------
commandlinefan
I don't have the exact quote in front of me, but in 1979 Douglas Hofstadter
wrote in his book "Gödel, Escher, Bach" something to the effect of: "It may
be that some day a computer can beat a human at chess, but when it happens,
it will be by the sort of computer that will say, 'no, I'm bored with chess,
I would rather talk about poetry'". If Hofstadter can be wrong about
anything, anybody can be wrong about anything.

~~~
the_af
Nice quote!

Wasn't there an assertion, mentioned several times here on HN, that
researchers have found that with AI the seemingly hard becomes easy (e.g.
playing chess) and the seemingly easy becomes hard (e.g. walking or...
becoming bored with chess)? It now seems qualitatively harder to design a
program capable of boredom than one capable of being a chess master.

~~~
baddox
That sounds like
[https://en.wikipedia.org/wiki/Moravec%27s_paradox](https://en.wikipedia.org/wiki/Moravec%27s_paradox)

