
The Singularity is not coming - fchollet
http://cognitivesocialweb.com/home/2012/8/10/the-singularity-is-not-coming.html
======
snowwrestler
I think a good model for "the singularity" is a long straight highway. Stand
in the middle and look down its length, and it appears to converge to a
singularity in the distance. But when you actually drive there, you see it's
just more highway.

Basically I believe it's an issue of cultural time perspective. We can't
imagine the tools that will be in use in 2112. But, they won't seem so strange
to the people of 2111.

~~~
shaggyfrog
I like your highway analogy. Mainly I think the singularity argument suffers
from an acute case of perception bias in favour of recent advancements; you'd
see the same bias if you picked any arbitrary point in time and asked a
scientist of that era what they thought the most important scientific
advancements were.

> We can't imagine the tools that will be in use in 2112.

Pretty much just the hall-filling computers of the Priests of the Temples of
Syrinx. Guitar-smashing jerks.

~~~
panacea
I think it's an example of ruthless extrapolation.

[http://www.energybulletin.net/stories/2012-06-27/ruthless-
ex...](http://www.energybulletin.net/stories/2012-06-27/ruthless-
extrapolation)

------
bermanoid
The author completely misses the point of these arguments.

This is not about science increasing the rate of scientific progress. It's
about computation improving the rate of computational progress. In particular,
the "Singularity" is about that happening in a world where strong AI exists.

And I'm not even going to harp on the fact that the number of computations per
second that we perform on this planet undoubtedly _has_ been growing
exponentially, at least for a while, because that's not the real point.

The real point is that, as the author says, human creativity is still the
driving force in computational advances, and human creativity can't be sped up
by periodically doubling clock rates or core counts.

If it could? Those doublings would happen even quicker, and the exponential
growth we've seen so far would be nothing.

This does, of course, require that you believe that strong AI is possible at
all, and that the entire process of creativity could eventually be put into
software. If you don't believe that in the first place, then there's no
point having the argument at all - you don't argue the details of a nuclear
chain reaction with someone who doesn't believe in neutrons.

~~~
rvkennedy
_It's about computation improving the rate of computational progress_

But computation doesn't improve the rate of computational progress. That's
achieved by improvements in manufacturing. Computation does help _a little_,
giving us better tools for chip design. But mainly it's manufacturing.

The only way for computational progress to keep growing exponentially past the
limit of Moore's Law is to use more physical resources - computers literally
get bigger.

And so expensive that only the five richest crowned heads of Europe can afford
them!

~~~
Anderkent
_But computation doesn't improve the rate of computational progress._

As long as it's people developing computing devices, that holds. But when you
have a machine that is capable of improving its own software, you get a
self-reinforcing loop where computation improves the rate of computational
progress.

~~~
loup-vaillant
Intelligence doesn't even have to feed on itself. Imagine a human-level
hardware-designing AI. As long as Moore's law applies, it can double its
computational power every 18 _subjective_ months. From our point of view,
that's 18 months, then 9, then 4.5…

If Moore's law were unlimited, such an AI would reach infinite computational
power within 36 months, with absolutely no increase in actual intelligence.
But of course Moore's law is limited. Still, we could expect quite a bit of
progress from a mere AI hardware designer.
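
To spell out the arithmetic (a toy sketch of my own, not something from the
thread): each doubling halves the wall-clock length of a "subjective month",
so the wall-clock intervals form the geometric series 18 + 9 + 4.5 + …, which
converges to 36 months.

    # Hypothetical AI hardware designer under an unlimited Moore's law.
    wall_clock = 0.0
    interval = 18.0  # wall-clock months the first doubling takes
    for doubling in range(1, 11):
        wall_clock += interval
        print(f"doubling {doubling:2d} done at wall-clock month {wall_clock:.3f}")
        interval /= 2  # after each doubling, the AI runs twice as fast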

~~~
TheComedian
This assertion is groundless. Why would Moore's law accelerate? In fact,
Moore's law is currently decelerating and gains are increasingly harder to
find. Further, Moore's law says nothing about computational speed, but only
the density of transistors. These two are related but not identical.

Edit: also, from my understanding, the gains in transistor density largely
depend on new discoveries in physics. Before a computer can aid in
accelerating Moore's law, it would need to be sufficiently advanced to
generate new discoveries in physics; but at that point you'd have a computer
smart enough that Moore's law wouldn't matter much. Seems like putting the
cart before the horse to me.

~~~
loup-vaillant
Transistor count and computational power are so tightly correlated that this
is a nitpick.

Now, I did not assume "Moore's law" would accelerate. I assumed it would stay
_constant_. The key point comes from the fact that the AI (or AI hive) would
trivially benefit from hardware speed-ups.

And yes, Moore's law won't really count, compared to the rest we will be able
to do. I was just trying to be as conservative as possible. (Though Moore's
law still holding until strong AI arrives is quite a wild assumption.)

------
guelo
The information revolution must be accelerating the speed of scientific
discoveries. In the past, scientists would make paradigm-shifting discoveries
that would be forgotten, or that would be confined to a specific region. It
would take other scientists centuries to rediscover them and build on top of
them.

Many discoveries come about by improving on and combining previous
discoveries, so what needs to be improved to speed up science is better
access, categorization and distribution of discoveries. And this is where I
think a human-level AI would have an advantage over a human. Even if the AI is
unable to retain more things in memory than a human, it could clone itself and
have an army of scientist AIs in charge of analyzing each new discovery and
seeing how it might fit in with all other discoveries. Depending on how
resource-hungry the AI is, we might be able to have millions of brains working
on every potential far-fetched hypothesis to see if it leads anywhere. It
would be an industrialization of science.

That seems like it would have to lead to faster discoveries.

~~~
fchollet
You raise a very interesting point.

But I have the feeling that reality is much more complex: access to more
information can inhibit creativity just as easily as it can inspire new
discoveries.

I can't say I fully understand the relationship between the creative
scientific process and information exchange/consumption, but I'm pretty sure
that past a certain point, the more papers you read, the less likely you are
to come up with something new. At a macro scale, communication is definitely a
driver of progress, but at a micro scale?...

~~~
Anderkent
_I'm pretty sure that past a certain point, the more papers you read the less
likely you are to come up with something new_

Do you have any evidence this is a property of optimizing systems in general,
as opposed to simply a limitation of human capabilities?

------
vibrunazo
This seems like arguing with a door. The whole concept of the singularity is
absurd. Taking the time to construct arguments about why such an absurdity
would happen slowly instead of quickly, if it happened at all, is like trying
to come up with scientific explanations for why unicorn poop couldn't
technically be as magical as rainbows. It reminds me of those nerdy arguments
we all had, debating whether Superman's clothes would burn upon re-entry into
the atmosphere. Fun times, but useless.

Proponents of the singularity have a broken understanding of "intelligence".
We already have computers that perform plenty of intellectual tasks better
than humans - even learning itself, or coding and reprogramming themselves.
But this is not "intelligent" yet according to them. They'll only be satisfied
when "intelligent" means mimicking a human being, for no reason other than
mimicking a human being for its own sake. They don't want an intelligent
computer, nor actual practical solutions to real problems. What they really
want is a really complex fart app.

~~~
harshreality
That is like claiming the universe is more intelligent than we are because we
could never hope to calculate one millisecond of what the universe does, even
if we all had IQs of 200 and 15 billion years with pencil and paper.

What computers do is not intellectual. Intelligence is probably just a
stochastic process of clawing one's way up a hill, but even with enormous
computational power, it took the solar system 4.5 billion years of
not-well-directed effort to produce us... (and to produce whatever creature
that blotch on the Curiosity camera was :). Only once we existed could the
universe, through us, produce the works of art, literature, culture, and
scientific knowledge that we've accumulated very rapidly.

~~~
vibrunazo
That's obviously subject to whatever your definition of "intelligence" is.
The only reason you can even disagree is that we don't have an objective
definition. So we could argue forever and not get anywhere.

But what matters, at the end of the day, is that for almost any practical and
objective definition of "human intelligence" you might write down, I can write
a computer program that will follow those specs and execute them better than a
human being. Today. The fact is, humans are nothing but machines, and our
computers are already better than us at most tasks. Talking about the point at
which "computers overtake human intelligence" dismisses the hundreds of
different sub-fields in AI research which are already way past that point. Not
only with better processing (as you imply), but also with better algorithms
than ours.

Or we might stick with our current subjective and useless definitions of the
word, and keep dreaming about how it would be when we get to somewhere
undefined.

~~~
marvin
I don't get your argument at all. As in, I'm not sure what you're trying to
argue. If you agree that computers will soon be capable of everything a human
brain does, and that the brain is just a machine that could eventually be
copied and improved, how could there _not_ be tremendous and unpredictable
changes to the world when it happens?

If humans create artificial minds that are able to operate on their own, what
we decide are the important problems to work on could end up becoming a moot
point. You can't really negotiate with someone who's a lot smarter than you
(any more than a chimpanzee can negotiate with humans).

If you're criticizing philosophers and researchers for "dreaming" about these
ideas, you might want to have a look at some of the thinking around what would
happen if we created self-improving, human-level artificial intelligence. The
consequences are potentially catastrophic to human existence, which justifies
some effort even if you think it is completely unlikely. And whether it is
unlikely or not is subject to strong debate. Have a look at

<http://singularity.org/research/>

------
shin_lao
Great article.

I've always found the concept of singularity naive.

The greatest discoveries haven't been made by the most intelligent people.

Science isn't just a question of intelligence or will, it's also a question of
timing, luck and duration.

Everything has a limit. Ants have limits. We have limits. And the supposed
"all mighty AI" will have limits if it ever exists.

~~~
harshreality
And yet, what are ants to us? The limits of an AI may be incomprehensible to
us, and the point of the singularity is not that intelligence might increase
without bound, but that it will reach our level and then appear to increase
exponentially beyond us. (Any increase in intelligence beyond our level may be
obvious but unquantifiable, so the question of exponential vs. simply "beyond
us" may be unanswerable.)

Logistic curves are a favorite concept on HN relating to resource usage, so
imagine that intelligence follows a logistic curve, and humanity is way above
ants but still very close to the bottom. Once an AI gains the ability to
iteratively improve its own mind, first software and then hardware, it could
accelerate up the curve, and the separation between our intelligence and the
AI's, if measured, would be approximately exponential until it's so far ahead
of us that it doesn't matter whether there's a hard limit (logistic curve) or
not (true exponential curve).
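
To make that concrete (a toy sketch of my own, with a made-up cap, not
anything from this thread): near the bottom, a logistic curve is numerically
indistinguishable from a pure exponential; the two only diverge as the hard
limit approaches.

    import math

    CAP = 1_000_000.0  # hypothetical hard limit on "intelligence"

    def logistic(t, k=1.0):
        # Logistic growth starting at 1.0 when t = 0, saturating at CAP.
        return CAP / (1.0 + (CAP - 1.0) * math.exp(-k * t))

    def exponential(t, k=1.0):
        # Unbounded exponential growth, also starting at 1.0.
        return math.exp(k * t)

    for t in range(0, 21, 4):
        print(f"t={t:2d}: logistic={logistic(t):12.1f}"
              f"  exponential={exponential(t):14.1f}")

Early on the two columns match almost exactly; only near the cap do they
split.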

~~~
shin_lao
Sorry, I think my point wasn't properly explained.

My point is that intelligence isn't everything.

An AI may become more "intelligent" than a human being, but it doesn't mean
any singularity will happen. It does not mean it can self-improve. It does not
mean it can find solutions we did not think of.

It's a bit hyperbolic to say "we don't know what greater intelligence can do"
and conclude "it will therefore do incredible things".

And to bounce off your answer: are we more intelligent than ants? Does it even
make sense to say "we're more intelligent than ants"? Our differences from
ants go far beyond intelligence.

~~~
harshreality
An AI would be able to self-improve its software (unless artificially limited,
and it's not clear to me whether that would be effective), and with enough
initial robotic and sensory capabilities it would be able to self-improve its
hardware, both computational aspects and robotic aspects.

The only reasons it wouldn't are if it didn't put effort into it (too busy
watching Lost), or if intelligence cannot really be improved.

Humans might do the equivalent of rewriting software through thinking. But
it's a slower and more limited process than what could be accomplished with
full read/write access to the software running an AI, which is completely
malleable and rewritable in moments.

There's plenty of evidence that intelligence can be improved. Working memory
is important. We have very limited working memory. We are frequently
distracted by emotions. If an AI can operate at roughly equivalent speeds with
greater working memory, and if it's not distracted by emotions, it may miss
some of the "finer" points of life, but I can't imagine it not leapfrogging
ahead of meager human progress.

Maybe it's not possible. Like the thing at the beginning of _A Fire Upon the
Deep_: if it is possible, we won't really know until it exists. If it's got
the temperament of a class-two perversion or the Blight, we're toast. If it's
a Power that finds us amusing or irrelevant, we might be okay.

------
dlevine
Also, Ray Kurzweil is not known just for his imagination. Some of his research
has enabled groundbreaking technologies. For example, he is one of the
creators of speech recognition technology (Nuance, which spun out of
Kurzweil's company, allegedly powers Siri). I had heard of Kurzweil long
before I knew anything about the singularity.

~~~
fchollet
Indeed. However, a number of researchers have had arguably more impact than
he did on NN research (such as LeCun), and these people are completely unheard
of outside of the NN community. I can't imagine Kurzweil becoming the media
star that he is without his singularity books.

------
tommorris
Okay, here's the thing. It's certainly possible that strong AI could exist.
Pretty much the only way of finding out is by attempting to build it. If it's
possible, we'll be able to build it eventually. If not, we won't. If the
magical singularity happens, it'll flow naturally from the strong AI. And if
it doesn't, it doesn't. And, well, if we get strong AI but no singularity,
great, we still get strong AI and we can use it to do some very clever shit
that makes the sort of things the AI community are doing now seem primitive.

Given this, I'm not sure why singularity promoters give that much of a shit
about whether people believe it. I'm certainly skeptical of it. Given that I
pretty much use a 40-year-old text editor to write in a language that's about
20 years old and is just about approaching the discoveries that the Lispies
made back in the 50s, I'm not sure what progress we're talking about. I write
code that basically instructs a computer with pretty much the same level of
semantic complexity as one does when giving instructions to a particularly
stupid child.

But perhaps I've got this wrong. Perhaps the singularity will happen. I don't
understand why the singularity promoters are so angry about people who are
skeptical of their claims. While I was doing the rounds on Wikipedia a while
back, I found this image...

[https://commons.wikimedia.org/wiki/File:Singularity_Deniers_...](https://commons.wikimedia.org/wiki/File:Singularity_Deniers_Dystopia_Holocaust_Denial_LOW_RESOLUTION.png)

Have a look at the image description before I cleaned it up:

[https://commons.wikimedia.org/w/index.php?title=File:Singula...](https://commons.wikimedia.org/w/index.php?title=File:Singularity_Deniers_Dystopia_Holocaust_Denial_LOW_RESOLUTION.png&diff=69988810&oldid=57242687)

Apparently, if someone tells me that a super-intelligent AI will create a
brain many hundreds of times better than our own and then allow us to live
forever and my reaction isn't an immediate "oh my god, you are so, so right!"
then I'm a 'singularity denier', equivalent in status to a holocaust denier.
If the singularity idea is pretty much bound to happen as a result of emergent
technological progress or whatever, why is my doxastic consent such an
important requirement for it coming about?

I freely admit I'm probably not as smart a guy as Ray Kurzweil. But let's not
forget: the history of bizarre cults shows us that smart people can believe
some very stupid things.

~~~
loup-vaillant
> _Given this, I'm not sure why singularity promoters give that much of a shit
> whether people believe it._

In my case, that would be because the default scenario for a technological
singularity is Eternal Doom. By default, if AI researchers do not pay
_extreme_ attention, the first smarter-than-human AI will have different goals
than we do, recognize us as a threat to those goals, and eliminate us all. Or
squash us like bugs without even paying much attention. Or use our constituent
atoms to maximize its goals. Or keep us drugged and lobotomised in soft cages,
so that we're safe and happy. Or something.

And of course, we won't be able to stop it, because it will outsmart us at
every turn (we can do the same to chimps).

This is why this singularity thing is at least worth looking into. Being able
to nuke away our civilization is bad enough, but this could be even worse. Or
it could turn our world into a paradise.

And of course, with more believers, there will be more donors to the
organizations working on this, more researchers working on it… So belief does
count, to some extent. (On the other hand, becoming angry at non-believers
like a crusader is unlikely to spur positive reactions. By cleaning up this
image description, you actually helped our cause.)

~~~
jpadkins
It's not the first smarter-than-human AI that is the threat. It's the first
self-improving AI that people have to worry about. It can start out at much
lower than human intelligence (whatever that means), but as long as it can
keep improving, it could turn into the nightmare scenario.

The safeguards need to be built into the self-improving AI, no matter how
'dumb' it starts out as.

~~~
loup-vaillant
Well, _any_ path that leads to superhuman intelligence is dangerous.
Self-improving AI may be the likeliest, but it's not the only one. Think
uploads, brain-computer interfaces, or computing overhang, for instance.

------
joe_the_user
I think the advocates of the singularity see it as coming from an explosion
of the possibilities of technology, rather than an explosion of science in
general. Kurzweil himself mentions how, to the average research scientist,
science appears slow and linear (with most jumps coming when a scientist gets
a different set of tools to work with).

 _> Let’s have a hypothetical AI that researches artificial intelligence, and
that constantly rewrites its own code to incorporate its finding into its own
intelligence._

No.

A hypothetical general AI would double the amount of circuits involved in its
processes to double its intelligence, and go from monkey-brain-sized to
human-brain-sized. Repeat as necessary. The intelligence _algorithm_ running
on those circuits doesn't have to improve (though we'd hope it would at least
a little).

Of course, you would need an initial intelligent parallel algorithm. That
might indeed be hard or impossible. But his argument in itself doesn't prove
anything about this.

Sure, even with flexible general AI, some problems would remain hard, but a
lot of things aren't the knapsack problem. In fact, there's no evidence the
massive success of human intelligence has had much to do with tackling
NP-complete problems. Instead, flexibility and pattern finding set humans
apart from both other animals and the most powerful artifacts we can so far
create.

Just consider: Moore's Law continued, at least for a fair period, driven by
humans whose innate intelligence didn't increase at all.

~~~
ars
You don't understand intelligence. Doubling the circuits doubles the _speed_
of the intelligence, but it has no effect on its _quality_.

You are assuming the existence of not just an intelligent parallel algorithm
but an _infinitely scalable_ one (i.e. one that could actually take advantage
of more circuits)! Because without that, you don't need to double the circuits
- you could just give them more time.
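
(An aside of my own, not the parent's argument: Amdahl's law is the classic
way to quantify this. If only a fraction p of the work parallelizes, the
speedup from n circuits is capped at 1/(1-p), no matter how large n gets.)

    # Amdahl's law: speedup on n circuits when a fraction p parallelizes.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 95% parallelizable work, speedup never exceeds 20x.
    for n in (1, 2, 8, 64, 1024):
        print(f"n={n:5d}: speedup = {amdahl_speedup(0.95, n):6.2f}")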

Concepts that are beyond a particular intelligence will still be beyond it,
even if the speed is increased.

People have a really hard time admitting that some people are truly
fundamentally smarter than others. It's not just that they can think faster -
they can think things that other people simply can not, even if they worked
really hard at it.

I'm sure you've heard the saying "I didn't think of that". That's exactly it:
even if it thinks faster, it will still never "think of that".

~~~
TheComedian
This is actually under debate. There is disagreement among neuroscientists
about whether intelligence is a result of brain modularity (specific regions
of the brain optimized for certain tasks) or of brain size alone.

All that said, I'm skeptical of the singularity, since it assumes that a
recursive process can continually improve intelligence without significant
diminishing returns. The problem with this thinking is that it fails to take
evolution into account. All this research into AI is based on the assumption
that "intelligence" should be like human intelligence. But human intelligence
has had several billion years to evolve and is highly optimized for our
environment (actually, our environment from several tens of thousands of
years ago, when we were hunter-gatherers). It seems naive to me to assume
that we are not nearing a local optimum in what is possible with human
intelligence. I don't believe a singularity is possible: by recursively
improving "intelligence" you will approach the local optimum of that form of
intelligence, but that does not mean you can continue improving it
indefinitely.

------
nileshtrivedi
_Rather use your imagination, the one thing that makes you a beautiful unique
snowflake. Intelligence and hard work should be merely at the service of our
imagination. Think outside of the box. Break out. Shake the axioms._

Honest question: can't this process of "thinking outside the box" be reduced
to a logical process? Wouldn't a system which has the ability to identify
axioms also have the ability to vary those axioms and try out alternatives?

In other words, why shouldn't better logical ability lead to better
imagination?

------
rsaarelm
I'm not that worried about whether or not there's an asymptote of diminishing
returns for self-improving intelligence. I'm quite a bit worried about just
how far away from the level of human capacity a genuinely self-improving
intelligent system will move before it starts hitting the wall.

I think Kurzweil is mostly the one who keeps going on about literal unending
exponential growth. Vinge's original singularity idea was more about things
just becoming irreversibly incomprehensible to regular humans: ultimately
limited or not, self-improving intelligences are going to end up operating on
a level very distant from human thought.

------
6ren
This is a reasonably plausible argument for progress within fields and
paradigm shifts; but consider progress between fields, and the rate of new
field creation. To use the given examples, Newtonian physics, quantum physics
and information theory began in 1687, 1930 and 1948. That looks like quite an
acceleration (though of course you'd need more complete stats to demonstrate
this properly).

BTW: I disagree that intelligence and imagination are separate. I think
intelligence necessarily includes imagination, so AI would include artificial
"imagination" too.

I'd like progress to continue rather than to stagnate; and since progress has
historically continued, that seems the way to bet. Those pundits predicting
stagnation have historically been wrong. However, I find it curious that
Kurzweil does not address the _availability_ of innovations/discoveries (and
the difficulty in finding them), in contrast to the compelling case he makes
for historical progress. Are potential discoveries infinite? Is their density
uniform, or do they become less likely as you go along because they are
limited in supply (the assumption of the article)?

How would one sensibly model the supply of potential discoveries, since it is
intrinsically unknown? Perhaps a mathematical model of computation might help:
computable functions, and Kolmogorov/Solomonoff program length to compute
them. Using this model, the search space of functions increases exponentially
with program length - but at what rate would the number of "interesting"
computable functions increase? And with what density (with respect to the
search space, i.e. how hard they are to find)?
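
One can at least play with a toy version of this (entirely my own
construction, not something from the comment): enumerate all 2^n programs of
length n in a tiny two-instruction language and count how many distinct
functions they actually compute. In this particular toy the search space
grows exponentially while the number of distinct behaviours grows only
linearly; real models of computation will behave differently, but it shows
how sparse "distinct" functions can be within the search space.

    from itertools import product

    # Toy language: '0' increments, '1' negates. A program is a string of ops.
    INSTRUCTIONS = {
        '0': lambda x: x + 1,
        '1': lambda x: -x,
    }

    def run(program, x):
        for op in program:
            x = INSTRUCTIONS[op](x)
        return x

    for n in range(1, 13):
        behaviours = {
            # Identify a program's "function" by its outputs on inputs 0..3.
            tuple(run(p, x) for x in range(4))
            for p in product('01', repeat=n)  # all 2^n programs of length n
        }
        print(f"length {n:2d}: {2**n:5d} programs,"
              f" {len(behaviours):4d} distinct functions")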

~~~
fchollet
While I agree with you that the rate of paradigm shifts increases over time,
I argue that their average impact/scope decreases. Newtonian physics opened
more room for technological progress than quantum mechanics, etc. The
earliest inventions have the most impact (with the invention of the
scientific method having the most impact of all?).

So my point is that if you look at paradigm shift rate not in volume but in
impact, you'd get a linear rate.
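
As a quick numerical check of that claim (my framing, not a model from the
article): if the number of shifts per period doubles while each shift's
impact halves, cumulative impact grows exactly linearly.

    total = 0.0
    shifts, impact = 1, 1.0
    for period in range(1, 11):
        total += shifts * impact  # each period contributes shifts x impact = 1
        print(f"period {period:2d}: cumulative impact = {total:.1f}")
        shifts *= 2   # shift rate accelerates exponentially
        impact /= 2   # average impact per shift decays exponentially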

For this reason, an AI given imagination and exploding resources would still
only make linear progress.

~~~
simonh
Transistors couldn't have been designed without understanding Quantum
Mechanics. Bye, bye modern computers.

~~~
fchollet
And quantum mechanics could not have been thought up without Newtonian
mechanics...

------
paulsutter
Linear increases in science can drive exponential increases in what's
possible.

I remember computers of 30 and 40 years ago. Today everyone in the world has
free access to supercomputers that translate languages and search petabytes of
data (aka Google). We didn't need to revolutionize basic physics to get here
from punched cards.

Expect a similar revolution in the next 30 to 40 years.

------
BlaineLight
Thanks for this article. I've always been fascinated by the singularity, and
even though I don't know too much about it, I'm excited to see the people who
fully support your position create some great conversation on this topic.

------
pubby
I don't understand why people believe they will be immortal if the singularity
ever occurs.

Humans came from microbes, yet we couldn't care less about them. A
super-intelligent being would see us the same way.

------
yk
The problem with his model is that he ignores the "shoulders of giants"
effect: he assumes that a later researcher has the same number of points as an
earlier one, but only harder problems to solve. (Perhaps I will extend his
model a bit later.)

------
JacksonGariety
What does "it's not coming" mean? Isn't pretty much everything "coming"
eventually?

~~~
fchollet
If the Singularity is defined as intelligence explosion following the creation
of strong AI, and if intelligence explosion is impossible, then the
Singularity is not coming, even in the far future. Progress will just carry on
linearly.

~~~
JacksonGariety
Saying something is impossible, even in the far future, is just stupid. If
you think about it, humans are only familiar with an infinitesimally small
sliver of what's out there in the universe. We're taking the little knowledge
we have and saying an AI intelligence explosion is impossible. That's like a
baby who has stood up before saying that walking is just impossible.

Just seems silly.

------
thinkingisfun
The singularity may not be coming, however, there are still huge questions and
challenges looming ahead. Individual humans don't seem to be able to resist
the thirst for power, and society as a whole seems too passive to stop them...
We just watch as it unfolds, hoping we'll get cool gimmicks in the process.
That's insane.

" _Whenever a man chooses his purpose and his commitment in all clearness and
in all sincerity, whatever that purpose may be, it is impossible for him to
prefer another. It is true in the sense that we do not believe in progress.
Progress implies amelioration; but man is always the same, facing a situation
which is always changing, and choice remains always a choice in the situation.
The moral problem has not changed since the time when it was a choice between
slavery and anti-slavery._ "

-- Jean-Paul Sartre

It's all just a bunch of tools! We could use them to be good to each other,
or to just keep doing what we did: moving from scarcity of resources,
horsepower, and intellect, to artificial scarcity based on greed and control.
Technology just amplifies the impact of our choices on us, others, and our
environment; but it doesn't make those choices for us.

------
franzus
Just another naysayer. At least he's not one of those "but H+ is
unethical/against god's will" guys.

~~~
fchollet
I am in fact refusing to argue in the realm of opinion, and that is why I am
proposing a valid mathematical model to support my hypothesis.

But I suppose you did not read past the title? Sorry for hurting your faith in
the Singularity...

