
The Singularity is not coming (2012) - fchollet
http://www.sphere-engineering.com/blog/the-singularity-is-not-coming.html
======
AndrewKemendo
>But imagination is a fucking superpower.

Indeed!

It's odd to me that the author decouples imagination from intelligence,
especially as he claims to be an AI researcher. Artificial imagination and the
like have been researched basically since the beginning of AI research.

That is not to say that imagination is a solved problem within AI, far from
it, but it is as covered in theory (which this article is about after all) as
any other AI problem.

>Rather use your imagination, the one thing that makes you a beautiful unique
snowflake.

My guess is that the author thinks there is something special or non-material
about being human.

~~~
fchollet
To clarify this point: at the time I meant "imagination" in a human context,
as the ability to create new thought systems to operate in, as opposed to
doing research within an existing thought framework (typically inherited from
your thesis advisor, etc.).

This is analogous to creating a new market with a completely new type of
product vs. entering an existing market, for companies.

No, I do not believe there is anything special about human intelligence.
Artificial consciousness is among my research interests. As for artificial
imagination, it is not exactly "covered in theory" by our current
understanding of AI or the human mind: the only kind of AI we know of is AI
that operates within rule systems determined by humans. Let's take a simple
example, the most general AI we know of, genetic algorithms: if you launch a
genetic algorithm to find a solution to some problem, it will merely search
through a search space you will have to determine yourself. Artificial
imagination would be the capability to expand or modify that search space
based on previous findings...
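
To make that concrete, here is a minimal sketch (a toy example, not from the article; the bit-string encoding and the fitness function are arbitrary choices): the search space and the objective below are fixed by the programmer up front, and the GA only ever explores inside them.

```python
import random

# Toy genetic algorithm (illustrative only): the search space ({0,1}^20)
# and the fitness function are both fixed by a human before the run.
GENOME_LENGTH = 20
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01

def fitness(genome):
    # "OneMax": maximize the number of 1-bits. The GA never questions this.
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def crossover(a, b):
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POPULATION_SIZE // 2]   # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POPULATION_SIZE - len(parents))]
    population = parents + children

print(max(fitness(g) for g in population))
# "Artificial imagination" would be the missing step: code that revises
# GENOME_LENGTH, the encoding, or fitness() itself based on past runs.
```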

~~~
nnq
> it will merely search through a search space you will have to determine
> yourself

What makes you think "human imagination" is any different, qualitatively at
least? Yes, our minds have a humongous search space at their disposal: what
comes through our sense organs, the processed memories of our entire lives,
and the ability to consume information in formats that are very hard for
current AI systems to make sense of (ambiguous written sources, like most of
the world's literature, ambiguous communication in the form of conversations,
art, etc.). Add to this the fact that we can all communicate, and the search
space becomes as big as the minds of all the human beings alive (though
admittedly, you have very poor bandwidth and availability of access to this
amount of information), plus the transmitted pieces from the ones before, and
that this space is ever expanding, and you get the _"miracle of human
imagination"_.

...but a human-level AI will have access to the same thing (by definition,
because otherwise we wouldn't call it "human level AI"), so the same kind of
"imagination", but augmented by 10...0x better connectivity and access to it
and everything related.

Also, the search space is fixed for human minds too: it's called the universe,
and at least if we accept the scientific-objective world view, we don't make
it any larger by imagining things (if I imagine a new particle, it doesn't
just pop into existence out of my imagination). The only way to defend a
difference between "human imagination" and "artificial imagination" is to go
against the science based and objective world view.

EDIT: typed "intelligence" instead of "imagination" at the end, sorry...

~~~
arethuza
As an aside, although I completely agree with you, for an alternative
fictional perspective I can recommend Neal Stephenson's _Anathem_ - there is
one particular section about the mind modelling the universe that I am
completely entranced by (and as a bit of background I worked for years as an
AI researcher working on systems modelling).

[http://en.wikipedia.org/wiki/Anathem](http://en.wikipedia.org/wiki/Anathem)

------
gerbal
I've long thought that the chief promise of a self-improving AI is not that it
will exponentially make new discoveries in science, but that it will be able
to synthesize and combine existing knowledge in a manner too complex for most
humans.

The problem with current knowledge is that humans just aren't capable of
handling more than a tiny fraction of it. A functional AI (even of the weaker
variety) is capable of handling orders of magnitude more information and
arriving at novel combinations of existing knowledge.

Part of the reason the impact-per-researcher has fallen so much is that there
is simply so much more to know and consider than there was a century ago. And
computers are really, really good at dealing with volumes of information that
would overwhelm a person.

And while a self-improving AI may someday slow down its rate of
self-improvement, in the short term its rate of self-improvement would appear
exponential as it groks scales of knowledge humans can't cope with.

~~~
fixermark
This is a pretty valuable insight. Even without some kind of innovation to
intelligence itself, simply taking the human cognitive process and mapping it
onto a substrate with several factors more bandwidth (an expectation that isn't
unrealistic, given the current implementation is low-speed potential flows and
chemical squirting) could significantly push science forward.

This one is an easy bet, since we're already doing this; modern researchers
(even the professionals ;) ) use Google and more specialized search tools,
which are little more than machines that can read, digest, and correlate
symbols at speeds much, _much_ faster than a human (though in a far more
narrowly-scoped domain than a human).

------
kawa
> scientific progress has really been linear. We didn't "get" physics 50 times
> better in 1990 compared to 1940

It's very hard to quantify scientific progress. The main reason is that we
first pick the "low hanging fruits" and advancements get progressively harder.
Just think of particle physics: While the first steps could be done with basic
lab bench equipment, today we need to build gigantic accelerators to push the
boundaries even a little bit. The same is true in other areas: The first steps
are comparatively easy, but the more you know the harder it gets to solve the
remaining problems, because you already got the easy ones.

So even if perceived progress is linear, considering the increasing difficulty
we may have exponential scientific progress.
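
A back-of-the-envelope model of this point (my own toy assumptions, not kawa's or the article's): if the n-th discovery costs exponentially more effort, then even exponentially growing investment yields only a roughly linear count of discoveries over time.

```python
# Toy model: the n-th discovery costs 1.05**n units of effort. If yearly
# investment also grows 5% per year, discoveries accumulate roughly
# linearly. Both growth rates are arbitrary illustrative choices.
DIFFICULTY_GROWTH = 1.05
INVESTMENT_GROWTH = 1.05

effort_banked = 0.0
investment = 1.0
discoveries = 0
for year in range(1, 201):
    effort_banked += investment          # this year's research budget
    investment *= INVESTMENT_GROWTH      # exponentially growing investment
    # pay for as many next discoveries as we can currently afford
    while effort_banked >= DIFFICULTY_GROWTH ** discoveries:
        effort_banked -= DIFFICULTY_GROWTH ** discoveries
        discoveries += 1
    if year % 50 == 0:
        print(year, discoveries)         # discoveries track years ~linearly
```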

~~~
andybalholm
That is the OP's main point, except turned around. He was measuring progress
by results, not by effort expended, and saying that it takes exponential
effort to reach linear results.

~~~
kawa
But considering that the total amount of scientific knowledge is limited,
linear progress will find "everything" in a manageable amount of time. Of
course we don't know what percentage of that total we currently hold, but
it's quite possible that we are already getting close to knowing "everything".

The singularity doesn't promise any particular invention (for example, it's
possible that there is not, and never will be, a way to go faster than the
speed of light); it only promises exponential growth, which would bring the
"end of science" into the foreseeable future. And even linear progress gets
there: if we found, say, 80% of all physics in the last 200 years, that is
0.4% per year, so the remaining 20% would take about 50 more years.

------
sergiosgc
Interesting text, but flawed in many ways. A couple of the more important
flaws:

\- Exponentials often look linear over short spans of the x axis. Science at
t-50 and t-100 may be too short an observation period. Looking at the past
3000 years, the exponential character is quite obvious.

\- Measuring scientific progress as defined in the article is obviously
simplistic. You don't have a limited set of equally valued discoveries to be
found at random. You have a limited set of increasingly valuable[a]
discoveries, with dependencies, to be found. The rate at which discovery value
grows is ignored in the text, but it is relevant. Say, observing the
periodicity of star movement was important, but the theory of gravitation
built on it produced even more valuable information. One lets you count time;
the other is fundamental to travelling to the moon. One can be discovered by
an uneducated farmer; the other requires knowledge of calculus.

[a] The definition of value is itself difficult. Impact value, as in practical
application value, is one metric. Foundational value, as in which doors are
now open for research, is another. The utility function is probably a mix of
both. I'd wager more recent discoveries are better than older ones on both
counts (antibiotics vs. immune system programming, for example). Don't get
fooled by relative impact (you get a division-by-zero error on the first
scientific discovery ever made).

------
ThrustVectoring
It's really a question of how quickly it gets difficult to improve an
arbitrary AI algorithm, in terms of the problem-solving capabilities unlocked
by having an AI algorithm of that strength. Once you have beliefs about these
quantities, you've got a belief about the result of AI research.

I think that the AI problem gets hard really quickly. It's less clear how much
problem-solving ability having an AI algorithm gives you. There are three ways
I'd use an AI to do AI research better: AI(improve_AI_algorithm),
AI(get_more_computing_power), and AI(improve_AI_research_process).

The first is pretty much a run-once sort of thing for a fixed gain. The second
is a lot less clear to me - I'm not really familiar with the constraints of
chip design, and convincing people to put more resources into AI research hits
diminishing returns. The third is much more promising at the current level of
AI sophistication, for doing things like hire/fire decisions, scheduling and
resource allocation, etc.

Anyhow, my model is roughly a sideways S, with low-hanging fruit getting more
reachable at first until it gets exhausted in fairly short order. There's a
rather broad distribution of where it ends up afterwards and how quickly the
easier improvements get found. The low end wouldn't get classified as a
singularity, but the high end would.
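
That "sideways S" is essentially a logistic curve. A minimal sketch, with a ceiling and rate that are purely illustrative assumptions of mine, not anything from the comment:

```python
# Logistic ("sideways S") model of self-improvement: gains are fast while
# low-hanging fruit remains, then flatten as capability nears a ceiling.
CEILING = 100.0   # where improvement ends up (the genuinely uncertain part)
RATE = 0.5        # how quickly the easy improvements get found

capability = 1.0
for step in range(20):
    # each generation's gain is proportional to current capability
    # times the remaining headroom below the ceiling
    capability += RATE * capability * (1 - capability / CEILING)
    print(step, round(capability, 1))
```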

------
mrrrgn
This author rails about how scientific progress isn't exponential; but that's
not even relevant. The idea of accelerating returns was only ever meant to
apply to information technologies. Though they are related on various levels,
fundamentally, science != technology. Only when scientific processes are
converted into information technologies do they start becoming exponential (as
was the case with genome sequencing).

------
briantakita
> We didn't "get" physics 50 times better in 1990 compared to 1940

We don't have to. The fundamental rules are discovered slowly. The frontier
has been expanded & there's plenty of room in the spaces between. We can now
utilize these rules to create & build. The rules of physics are being
assimilated by the rest of the culture & their properties are being utilized
more than ever. This trend is accelerating.

There is plenty of room for exploration, "cheap" growth (scaling), & turning
over new leaves in biotech, tech, biology, medicine, robotics, ai, software
development patterns, etc.

The nice thing about the globalization of the internet is that there's an
unprecedented number, quality, & interconnection of minds to advance our
global knowledge. We are also undergoing cultural change that fosters
collaboration and decentralized intelligence. There's also a lower barrier to
entry for contributing to global knowledge (i.e. you don't need a PhD, nor do
you have to be a white male from a privileged background).

------
DennisP
The model assumes that researchers can predict the impact and effort of their
discoveries. If that's the case, then of course progress slows down as earlier
researchers pick the low-hanging tasty fruit.

But if we really knew what the research would show, we wouldn't have to do the
research.

A smaller point is that it's not really true that progress in algorithms is
slowing down. According to a study a couple years ago, in the time it took
hardware to improve by a factor of 1000, algorithms improved by a factor of
43,000, for a total speedup of 43 million.

[http://bits.blogs.nytimes.com/2011/03/07/software-progress-b...](http://bits.blogs.nytimes.com/2011/03/07/software-progress-beats-moores-law/?_php=true&_type=blogs&_r=0)

------
tedks
>Intelligence is just a skill, more precisely a meta-skill that defines your
ability to get new skills. But imagination is a fucking superpower.

Isn't this the core inconsistency? Intelligence explosion advocates obviously
believe that imagination is not an irreducible superpower, but part of the
same meta-skill that defines your ability to get new skills.

If you assume that paradigm shifts can be farmed at the same rate as
discoveries, it seems like you'd get explosive effects again. This also
assumes that fields don't proliferate; i.e., that a single field couldn't
produce multiple paradigm shifts that each open up a space of new low-hanging
fruit.

Still, a good treatment of the question. Much more convincing than a lot of
the arguments in favor.

------
droopyEyelids
I'm of the opinion that the singularity has been here for a while. There have
been self-modifying, self-improving biorobots terraforming the Earth to
augment themselves physically and mentally since before my grandparents were
born.

------
markpundmann
I have read a lot of thoughts on the supposed "singularity", and something I
still have yet to see is anyone realize that every person who believes in the
singularity disregards economics.

They all seem to think that this intelligent machine will improve its
algorithm or find ways to make smaller, more efficient, and better-designed
chips. On the software side, there is only so much one can do to maximize the
intelligence of the algorithm. At one point or another, the program will be
maximally efficient at solving whatever problem the algorithm is designed to
solve.

On the hardware side, the singularitists sometimes argue that the program will
design ever better, bigger, and more efficient chips, as the new chips would
be used to build even better chips.

However, who is going to pay for the manufacturing, energy, and maintenance
costs of this self-improving machine? Eventually, this machine will have to
quit trying to design more intelligent machines and start solving HUMAN
PROBLEMS. Why? Because humans will actually pay for the solutions to problems
that this intelligent machine can solve.

I'm not saying that this type of self-improving machine would never be built.
I just think that we will end up splitting resources between improving the
machine to encourage future growth and making money by solving problems that
people are willing to pay for.

Also note that as this machine gets more and more powerful, the problem of
making itself better will get more and more complex, and, as I think we can
all agree, complexity is our greatest enemy.

~~~
edanm
"I have read a lot of thoughts on the supposed "singularity", and something I
still have yet to see is for someone to realize that every person who believes
in the singularity disregards economics."

I don't think that's accurate. LessWrong, one of the largest online
communities talking about (friendly) AI and the Intelligence Explosion, was
founded by Eliezer Yudkowsky. He was a co-blogger with the economist Robin
Hanson, who is also a contributor to LessWrong. That's not to say that Robin
Hanson agrees with all the views, but they _are_ aware of economics.

In fact, Eliezer Yudkowsky wrote a paper specifically on the economics of the
Intelligence Explosion
([https://intelligence.org/files/IEM.pdf](https://intelligence.org/files/IEM.pdf)).
He also regularly writes about economic matters.

As to your specific points:

"However, who is going to pay for the manufacturing, energy, and maintenance
costs of this self improving machine? Eventually, this machine will have to
quit trying to design more intelligent machines and start solving HUMAN
PROBLEMS. Why? Cause humans will actually pay for the solutions to problems
that this intelligent machine can solve."

"I'm not saying that this type of self improving machine would never be built.
I just think that we will end up splitting the line between devoting resources
to improving the machine to encourage future growth, and making money by
solving problems that people are willing to pay for."

For a very advanced intelligence, creating its own resources isn't a problem.
Humans, for example, make our own resources. The AI doesn't necessarily have
to rely on humans; maybe only in the beginning.

"Also note that as this machine gets more and more powerful, the problem of
making itself better will get more and more complex, and as I think we can all
agree on, complexity is our greatest enemy."

That sounds reasonable. But the limit that a superior intelligence reaches can
be many orders of magnitude above where humans are on the spectrum. Humans,
for example, have achieved things that are far too complex for monkeys to
ever grasp.

------
Houshalter
Exponential self-improvement may or may not happen. I'm of the opinion that it
will. The first AI is likely to be vastly suboptimal and just thrown together:
the very first thing that actually works, not the best possible thing (the
same is true of humans). Therefore there are likely to be lots of
optimizations and improvements that can be made, i.e. low-hanging fruit. So
lots of progress will happen very quickly until it starts to approach the
actual limits.

But those limits don't have to be anywhere near human level. Transistors are
many thousands of times faster than neurons. An AI will have massively
superior serial processing power, which is good for optimization tasks (i.e.
the kind of tasks we want it to do, or that will be useful in improving
itself). Computers are general purpose and can be reprogrammed for specific
tasks, unlike our brains. And we can create arbitrarily many instances of them
to work on every conceivable problem in parallel, sharing resources (human
brains are quite limited at that).

There is Plenty of Room Above Us:
[http://intelligenceexplosion.com/2011/plenty-of-room-above-u...](http://intelligenceexplosion.com/2011/plenty-of-room-above-us/)

------
edanm
Not sure I understand the gist of the argument.

First, let's separate "Intelligence Explosion" from "Singularity" as to many
people those terms mean different things.

A lot of the people talking about an "Intelligence Explosion" do _not_ agree
that the progress of science has been getting exponentially faster. What they
are talking about, is that once we create an AI that can _improve its own
intelligence_ in a way that humans _can 't_, then the improved intelligence
will know how to further improve its own intelligence, and so on.

Even if there is a limit to the number of discoveries possible in any field,
if you're affecting the base rate at which you're making new discoveries, you
can (potentially) continue making progress.

More importantly, all of this is beside the point - it could easily be that
all the "low hanging fruit" in improving intelligence can make an intelligence
that is vastly more intelligent than a human, at which point the so-called
"singularity" will have been reached. Or in the words of the "Intelligence
Explosion" camp, we will have reached a world where we are no longer the
dominant intelligence, possibly by a very wide gap.

I see plenty of reason to think that human intelligence is not a good
benchmark for "the maximum intelligence you can get by picking all the low-
hanging fruit", since there are many, many easy improvements we can imagine
that will almost certainly improve human intelligence (e.g., perfect memory,
ability to partially reprogram yourself to do stuff, like for example not to
have "bad" desires like eating unhealthily, etc.).

------
gwern
> That this deadline would arrive just in time to save the proponents of the
> Singularity from old age is just a weird coincidence that ought to be
> ignored.

It's also not a true coincidence:
[http://intelligence.org/files/PredictingAI.pdf](http://intelligence.org/files/PredictingAI.pdf)
It looks like it was just an artifact of the 10 or so predictions Kelly
compiled.

------
tim333
Most of the article seems to be attacking a straw man, arguing that science
does not progress exponentially and hence there is no singularity. The
arguments for the singularity do not generally involve the advance of science.
They are based largely on computing getting faster, which is down to
technology and economics and does not necessarily require any new scientific
discoveries, although there may well be some. The basic argument is that if
computers keep getting more powerful in the way they have for the last century
(see
[https://en.wikipedia.org/wiki/File:PPTMooresLawai.jpg](https://en.wikipedia.org/wiki/File:PPTMooresLawai.jpg))
then at some point they will have processing power greater than a human brain,
and, assuming software development keeps up, they will be able to exhibit
human-level intelligence. The hardware getting up to human-equivalent levels
seems a near certainty, and software development, while less certain, is
coming along in a promising manner with Google's self-driving cars and the
like.

~~~
austinz
1\. Moore's Law is not an ironclad law; it's an observation about improvements
in integrated circuit transistor density. Given that there are not that many
node shrinks remaining before we hit the fundamental limits of our current
technology, scientific discoveries _will_ be necessary to continue increasing
transistor density into the far future. It is absolutely not a given that
Moore's Law has to continue until the Singularity can occur, although it isn't
impossible that it'll be around for several more decades either.

2\. The most powerful hardware in the world is useless without software.
Improvements in AI theory don't follow Moore's law, and there is no reason
they should. Maybe your super-fast computer can run machine learning
algorithms on a huge dataset, but it's not clear machine learning alone is
sufficient to produce something on the scale of the Singularity. Anyways,
strong AI is a very difficult problem, but in no case I know of has the
limiting factor ever been the power of the hardware (except maybe for neuronal
simulations trying to emulate the activity of a human brain).

~~~
tim333
1 - you don't need Moore's law, just the observation that computers have got
steadily faster for decades and will probably continue.

2 - > in no case I know of has the limiting factor ever been the power of the
hardware

One example would be computer chess: "In 1990, futurist Ray Kurzweil predicted
that a computer would win a game of chess against a human by 1998. His
prediction came true on May 11, 1997, when IBM’s Deep Blue defeated the world
chess champion Garry Kasparov." That prediction was made pretty much entirely
by figuring the computational power needed and projecting when we'd get there.
Not really rocket science, just some back-of-the-envelope calculations. His
2029 prediction for the Turing test is the same stuff.

~~~
austinz
> will probably continue

But will it, and why? This is little more than an article of faith. Computers
are getting faster according to a given model, a model whose underlying
technological process is reaching fundamental (scientific, not engineering)
limits. We have no idea if whatever replaces that model will be anywhere as
scalable as Moore's Law.

> computer chess

Computers play chess very differently from humans (they do what amounts to a
brute force search of the state space and choose the best option). This is
basically taking a well-known algorithm and figuring out how fast a computer
needs to be in order to solve it in a tractable manner - cryptographers do the
same thing when they decide that a cryptosystem can't be brute-force cracked
in a reasonable amount of time. There is no well-defined algorithm for passing
the Turing test, and given how so many of the 'interesting' problems in AI are
outside the domain of highly logical, rules-based algorithms, using the same
metric to measure them is not that useful.
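
The "brute force search" described here is essentially minimax. A bare-bones sketch on a toy game (Nim instead of chess, to keep it self-contained; real engines add alpha-beta pruning, move ordering, and hand-tuned evaluation functions):

```python
from functools import lru_cache

# Minimax on a toy game (Nim: take 1-3 stones; whoever takes the last
# stone wins). This is the kind of exhaustive state-space search chess
# engines run, just with pruning and a hand-tuned evaluation on top.
@lru_cache(maxsize=None)
def minimax(stones, maximizing):
    if stones == 0:
        # the previous player took the last stone and won
        return -1 if maximizing else 1
    scores = [minimax(stones - m, not maximizing)
              for m in (1, 2, 3) if m <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    # pick the move whose resulting position is worst for the opponent
    return max((m for m in (1, 2, 3) if m <= stones),
               key=lambda m: minimax(stones - m, False))

print(best_move(21))  # prints 1: leave a multiple of 4 for the opponent
```

Kurzweil's chess prediction amounted to estimating when hardware could run this kind of search deep enough, fast enough; there is no analogous well-defined search for the Turing test.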

------
ThomPete
I am always amazed that the same people who are proponents of the scientific
explanation (simple matter turning into self-aware human beings without any
designer) at the same time talk about a very small part of evolution as
something unique and magical, as if it required a designer.

Why is it so hard to believe that computers created and designed by human
beings who themselves didn't even have a designer, can become self-aware?

We also see this in the quantum physics field where some people insist on the
"hidden variable" theory.

I simply don't get it.

------
izzydata
But if science is becoming exponentially harder and we are making linear
scientific progress, then wouldn't that imply that scientific endeavors are
increasing exponentially?

~~~
fchollet
Empirically, they are. As a civilization we are investing exponentially more
in science over time, which is mentioned in the article.

~~~
softatlas
Here I'd say a better model for the universe is not matter but information: an
information architecture of the universe. Matter is buggy, less elegant,
event-driven (whereas emergence (acausality) might help).

We're investing exponentially more _if_ we remain vigilant in preserving the
materialist's conceptual scheme.

------
fthssht
The biggest flaw is the article's strawman: it treats scientific laws (physics
knowledge), as opposed to information technologies (like millions of
instructions per second, or memory capacity), as the thing claimed to be
exponentially increasing. Funny how he arrogantly "objects, your honor" to
science increasing exponentially while missing this.

------
mitchmindtree
> ...due to the "exponential" rate of progress of science.

I don't remember any great futurist referring to the "exponential rate of
progress in science", but rather the accelerating advances in regards to
information processing technology specifically.

------
smky80
I came to the conclusion a while back that humans are just fundamentally
religious. Even most people who don't believe in "gods" seem to end up largely
clinging to some over-simplified, basically religious model of the world.

As examples:

Modern liberalism = New Testament Christian ethics with the government
replacing god.

"Universe is a simulation" = Gnostic/platonic belief system (our universe as a
sub-creation created by a flawed sub-creator), with a technological guise.

Belief in the Singularity probably appeals to some people in the same way that
an imminent Rapture or Second Coming appeals to others. Especially with the
uploading consciousness to computers and living eternally ...

~~~
fixermark
"There's nothing new under the sun"

... except for all the new stuff, of course. The fact that Da Vinci dreamed of
flying machines doesn't diminish the significance of the actual invention of
the airplane or modern commercial air travel.

------
aet
[http://www.simulation-argument.com/simulation.html](http://www.simulation-argument.com/simulation.html)

------
eudox
>After all the total quantity of intelligence and hard work available around
is millionfold what you can provide – you're just a drop of water in the ocean.
Rather use your imagination, the one thing that makes you a beautiful unique
snowflake. Intelligence and hard work should be merely at the service of our
imagination. Think outside of the box. Break out. Shake the axioms.

Given this sort of nonsense, it's no surprise the author is (probably, since
they all are) some kind of vitalist. He probably doesn't think brain
emulations are people!

~~~
briantakita
Why is it nonsense?

~~~
astrodust
"Intelligence" as we have come to understand it is not some magical thing, not
some exotic element that's impossible to fabricate. In fact, it's probably
something mundane and insultingly simple when we get to the core of it.

Reproducing intelligence is mostly a matter of finding the correct formula,
the right structures, the right technology. After that it will be boring,
ordinary, even disposable. This doesn't sit well with some people who reject
that on an emotional level even if they can't figure out any concrete reason
why. It just _can't_ be.

We already have enough hardware to _fake_ intelligence, to make a believable
Turing Test candidate that could win, but we've yet to figure out _how_. That
much will become clear in the coming decades, guaranteed.

Look at the progression of computer chess programs: they were pathetic for the
longest time, until things came together and their performance rose
exponentially, matching and then eclipsing grandmasters. It's inevitable that
the same pattern will play out in the artificial intelligence space.

~~~
joedavison
And what about concepts like art, love, beauty, morality? Are these also
simply "computations" of some sort?

I believe your argument holds true for all intelligence that relies on logic.
However, I would put forth that the full definition of "intelligence" includes
logic, and also something else, let's call it "beyond logic".

For example, see Gödel's incompleteness theorems [1].

You will find that there are statements that can be "true but unprovable".
Statements that are "true but unprovable" cannot be computed. But that doesn't
make them untrue.

Therefore, there are limits to artificial intelligence.

[1]
[https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...](https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems#Relationship_with_computability)

~~~
habitue
The point isn't that everything can be proven, the point is that human brains
have no special abilities that can't be emulated by computers. If you emulate
every neuron and neurotransmitter in the brain with a sufficiently powerful
computer, you will get an intelligence that has the same perception of love,
beauty, blah blah blah that people think somehow set them apart from all other
matter in the universe.

In addition, once you have it modeled, it's possible to invent a computer that
loves more deeply than any human ever could, one that understands and
appreciates the human concept of beauty better than any human ever could, etc.

We know these things are emergent properties of very slow meat processors, and
therefore they can be recreated on very fast silicon processors.
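
To make "emulate every neuron" slightly more concrete: the unit such a simulation would scale up is something like a leaky integrate-and-fire neuron, a standard textbook toy model. The parameters below are illustrative only; a real brain emulation would need far richer biophysics.

```python
import random

# Leaky integrate-and-fire neuron: a textbook toy unit that whole-brain
# emulation arguments imagine replicating tens of billions of times.
V_REST, V_THRESHOLD, V_RESET = -70.0, -55.0, -75.0  # millivolts
TAU = 10.0   # membrane time constant (ms)
DT = 1.0     # simulation timestep (ms)

v = V_REST
for t in range(100):
    input_mv = random.uniform(0.0, 3.0)   # stand-in for synaptic input
    # the membrane potential leaks back toward rest and integrates input
    v += (DT / TAU) * (V_REST - v) + input_mv
    if v >= V_THRESHOLD:
        print(f"spike at t={t} ms")
        v = V_RESET                       # fire, then reset
```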

~~~
joedavison
I believe you are correct: human brains have no special abilities that can't
be emulated by computers.

But I also think that "human" == "brain" is far from a given, and that is an
implicit assumption of this viewpoint.

~~~
habitue
Are you talking about sensory perception, hormones from the rest of the body
affecting cognition and emotion? Any of those effects could be replicated as
well.

If you're talking about a non-materialistic view of the mind:
[http://youtu.be/Juriylw7B0g](http://youtu.be/Juriylw7B0g)

