
How Long Before Superintelligence? (1997) - joaorico
http://www.nickbostrom.com/superintelligence.html
======
erikpukinskis
The thing people don't understand is that in order to simulate human
intelligence, you have to be able to simulate TWO things:

1) A human brain

2) An entire human development up to the age of intelligence you are looking
for

The first one is not the harder of the two.

Now, many AI researchers believe they can cut corners on the whole
simulating-an-entire-human-lifetime thing, and that they can use a more
impoverished simulation and make up for it in volume... say, just flashing a
billion images in front of the AI and hoping that's enough to form the
specific subset of intelligences you are after. Or letting the AI read the
entire internet. But at this point it's an open question whether that could
even theoretically lead to generalized intelligence.

~~~
whitegrape
In order to simulate human-level intelligence, the machine doesn't necessarily
need to be modeled on the human brain. It doesn't even need to use neural
nets.

In order to simulate a human (which by definition has only human
intelligence), only your point 1 applies. Point 2 does not, because the
standard way to know we've got 1 right is uploading/emulation: take existing
humans, scan their brains at a sufficient level of detail (which will probably
require destructive scanning), and use that as the base information of the
simulation.

~~~
erikpukinskis
The neural nets still need experiences: training on a realistic simulation of
the events they need to understand. Historical data does not train the
networks the same way that interactive learning does.

There was an old study, from before today's ethical strictures, in which
researchers took two kittens, paralyzed one, and strapped it onto the other,
then let them run around and do kitten stuff. The paralyzed kitten saw
everything the intact kitten did and felt the same breeze on its fur, but it
ended up functionally blind to the universe. Without _interaction_, you cannot
learn.

In order to provide that interactive environment you either need a robot body
or a really rich virtual environment for your AI to grow up in.

At that point, they are developmentally limited to human timescales. No hyper-
accelerated intelligence exponential.

------
nefitty
[Regarding power of artificial intelligence] "...If Moore's law continues to
hold then the lower bound will be reached sometime between 2004 and 2008, and
the upper bound between 2015 and 2024."

I guess his prognostication here depends on super-powerful computing and
brain-emulation software. China's Tianhe-2 has already hit 3.3^15 ops;
Bostrom anticipated 10^14 - 10^17 ops as the runway. Now, I am not sure what
the state of brain emulation is at the moment, but it looks like our biggest
snag is there. Researchers are having a hard time bubbling up new paradigms
for artificial intelligence software. Anyone have any insight into that?

~~~
dhj
Off by a few orders of magnitude: Tianhe-2 hit 33 pflops, i.e. 3.3*10^16
flops, or roughly 1/3 of the upper bound. Brain simulation is a snag, but it
isn't our only snag.

Like you said, it's a general algorithm issue. We do not remotely understand
the brain well enough to simulate it. We have very little idea of what an
intelligent algorithm (other than brain sim) would look like.

Also, all of these estimates are based on flops, and none of them considers
bandwidth. We are a few orders of magnitude lower in gigabits/s than we are
in flops, and I personally think that is where the bottleneck is. 100 billion
neurons sharing a 100 gigabit/second pipe could interact once per second, and
then only at the level of a toggle switch. Granted, not all neurons have to
interact with one another, but we are significantly behind in bandwidth and
structural organization.
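
To spell that arithmetic out (a quick Python sketch; the 100 gigabit/s pipe
is the hypothetical above, and the flops figures are the ones quoted earlier
in this thread):

    # Rough check of the bandwidth and flops numbers above.
    neurons = 1e11                  # ~100 billion neurons
    pipe_bits_per_s = 100e9         # hypothetical 100 gigabit/s interconnect
    print(pipe_bits_per_s / neurons)            # 1.0 -- one bit per neuron per second

    tianhe2_flops = 33e15           # Tianhe-2 at 33 pflops
    bostrom_upper_ops = 1e17        # upper bound from the paper
    print(tianhe2_flops / bostrom_upper_ops)    # 0.33 -- about a third of the bound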

Bandwidth is intimately tied to processing capacity. I don't think the
bandwidth will be there until 2045-2065, and like you say we have serious
software/algorithm/understanding deficiencies to resolve before then. I would
be very surprised if we get general AI before 2065, if ever. I do not expect
it in my lifetime and would be pleasantly surprised if it happened.

~~~
nefitty
Oops, excuse my mistaken quote of the Tianhe flops.

Regarding the bandwidth bottleneck, it's fascinating to see that as one
hardware problem is overcome, the next one looms even larger. The same is
happening with the software: as machine learning and the like advance (as
contentious as that statement may be to people deep in the industry), the
coming hurdles look even more intimidating.

The algorithms that need to be developed to reach the milestones of
intelligence are incredibly difficult. What excites me is evolutionary
algorithms that may be harnessed to reach those milestones. This may be a
brute-force method, and researchers would have to know what to tell the
algorithms to select for at first, but with increasing computational power,
the cost of running huge numbers of these algorithms in parallel could become
negligible. If you see this comment, dhj, have you considered evolutionary
computation in your predictions? I'd be interested in what you think, as your
clarification of the bandwidth problem was enlightening to me.

~~~
dhj
I agree that some form of evolutionary algorithm will be our path to
intelligent software (or a component of it). However, as genetic algorithms
are currently implemented, I would say the following analogy holds:
neural_net : brain :: evolutionary_algorithm : evolution

In other words, GAs/EAs are a simplistic and minimal scratching of the surface
compared to the complexity we see in nature. The problem is twofold: 1) we
guide the evolution with specific artificial goals (get a high score, for
instance); 2) the ideal "DNA" of a genetic algorithm is undefined.

In evolution we know post hoc that DNA is at least good enough (if not ideal)
for the building blocks. However, we have had very little success with
identifying the DNA for genetic algorithms. If we make it commands or function
sets, we end up with divergence (results get worse or stay the same per
iteration rather than improving). The most successful GAs are those where the
DNA components have been customized to a specific problem domain.
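
To make both problems concrete, here is a toy GA sketch (Python; the
bit-string "DNA" and the count-the-ones fitness function are illustrative
assumptions, and that fitness function is exactly the artificial,
programmer-chosen goal described above):

    import random

    GENOME_LEN = 32
    POP_SIZE = 50

    def fitness(genome):
        # The artificial goal, hand-picked by the programmer.
        return sum(genome)

    def mutate(genome, rate=0.02):
        # Flip each bit with a small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    def crossover(a, b):
        # Single-point crossover of two parent genomes.
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for gen in range(100):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]        # truncation selection
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(POP_SIZE - len(parents))]
    print(max(fitness(g) for g in pop))      # converges toward 32

Note that both the encoding (a fixed-length bit string) and the goal are
baked in by hand; nothing here discovers its own building blocks.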

Regarding target goal selection, that is a major field of study in itself
within reinforcement learning. What is the best way to identify reward? In
nature it is simple: survival. In the computer it is artificial in some way;
survival is an attribute or dynamic interaction selected by the programmer.

I believe that multiple algorithmic techniques will come together in a final
solution (GA, NN, SVM, MCMC, k-means, etc.). So GA is still part of a large
and difficult algorithmic challenge rather than a well-defined solution. The
algorithmic challenge is definitely non-exponential -- there are breakthroughs
that could happen next year or in 100 years.

The bandwidth issue is the main reason I would put AGI at 2045-2065 (closer
to 2065), but with the algorithmic issue I would put it post-2065 (in other
words, far enough out that 50 years from now it could still be 50 years out).
Regardless of the timeframe, it is a fascinating subject and I do think we
will get there eventually, but I wouldn't put the algorithmic level closer
than 50 years out until we get a good dog, mouse, or even worm (C. elegans)
level of intelligence programmed in software or robots.

------
tensafefrogs
There's a more recent article in the New Yorker that follows Mr. Bostrom
around a bit and is a good general read:
http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom

~~~
T-A
One sentence in that article made me do a double take: "He was learning to
code." Past tense, but it's about what Bostrom was doing at the time of the
interview (last year, well after the publication of "Superintelligence").

So the most influential AI doomsayer on the planet has been writing about
artificial minds for a couple of decades, without even knowing enough to get a
computer to say "Hello, world"? OK...

~~~
theoh
I believe a key factor in AI doomsayers' thinking is the interconnectedness,
complexity and automation of an AI-driven world. The actual presence of
intelligence is a red herring: we already have problems with complex automatic
systems failing catastrophically:
[https://en.wikipedia.org/wiki/Northeast_blackout_of_2003](https://en.wikipedia.org/wiki/Northeast_blackout_of_2003)

------
atemerev
That took some... balls, back in 1997.

There were a lot of strong-AI sceptics who repeated over and over: oh,
computers can calculate, but can they play chess? Oh, chess was easy, but how
about understanding what a picture is about? Driving cars? Talking like
humans? Oh, they can talk now, but do they really _think_?

Reality happens faster than anybody imagined. Except a few visionaries like
Bostrom.

~~~
igravious
Bostrom is not a visionary. He's just a philosopher.

Nietzsche, _Ecce Homo_ "Philosophy, as I have so far understood and lived it,
means living voluntarily among ice and high mountains — seeking out everything
strange and questionable in existence, everything so far placed under a ban by
morality."

I would argue that if philosophy isn't producing Bostroms then it's not doing
its job right.

~~~
sawwit
Wait, so Steve Jobs wasn't a visionary. He was just a salesman exploring
unexploited markets as any salesman does? Albert Einstein wasn't a visionary.
He was just a physicist that explored unconsidered theories as any physicist
does?

It seems to me that in philosophy there is just as much "groupthink" as in
almost any human endeavor; there is maybe a little less in the hard sciences,
where the systems give you feedback about whether an idea is correct or not.

~~~
igravious
You're just biased towards the so-called hard sciences. There is nothing about
them specifically that prevents groupthink, as you call it. Science deals with
what is, not with what ought to be or shouldn't be, and so on. Sure, for a
scientist the universe kicks back, but that has nothing to do with how a
scientist chooses what to work on in the first place, or with the preconceived
notions and frameworks that scientist is operating under. I could give
countless examples of scientists comfortably working within ideological
frameworks or reasoning from incorrect theories.

The whole point of philosophy is that it is meant to encourage freedom of
thought and free-thinking individuals. That's its job spec. I disagree that
“there is just as much "groupthink" as in almost any human endeavor” -- if
that really is the case then philosophy is failing at what philosophy _ought_
to be succeeding at.

I'm not saying Bostrom isn't an extraordinarily good philosopher; I'm saying
that seeing the big picture and going against conventional wisdom and
intuition goes with the territory. Don't imagine that I'm running Bostrom
down: I very much enjoy reading the guy and listening to his thought
processes, and I find him to be a very rigorous thinker.

Maybe it's a small quibble; of course he can be both.

------
ctl
Let's look at the most important section of the paper. He estimates the
processing power of the brain:

 _The human brain contains about 10^11 neurons. Each neuron has about 5 • 10^3
synapses, and signals are transmitted along these synapses at an average
frequency of about 10^2 Hz. Each signal contains, say, 5 bits. This equals
10^17 ops. The true value cannot be much higher than this, but it might be
much lower._

In other words, there are 5 * 10^14 synapses in the brain, and each synapse
transmits up to 100 signals per second, and we can probably encode each signal
with 5 bits. That's ~10^17 bits per second.
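
Multiplying those factors out (a quick Python check; the numbers are just the
ones in the quoted passage):

    # Bostrom's stated factors, multiplied out.
    neurons = 1e11              # 10^11 neurons
    synapses_per_neuron = 5e3   # 5 * 10^3 synapses each
    rate_hz = 1e2               # ~100 signals per second
    bits_per_signal = 5
    print(neurons * synapses_per_neuron * rate_hz * bits_per_signal)
    # 2.5e17 -- order 10^17, in bits per second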

So, uh... does anybody else notice that that's _not an estimate of processing
power_?

That's an estimate of the rate of information flow between neurons, across the
whole brain.

The level of confused thinking here is off the charts. Does this guy not
understand that in order to simulate the brain, you not only have to keep
track of information flows between neurons, you also need to _simulate the
neurons themselves_?

That's not merely a flaw in his argument. It indicates that he has no idea
what he's talking about, at all.

Needless to say, this paper and its conclusions are complete nonsense.

~~~
api
I formally studied biology, not CS, partly out of an interest in AI.

Everyone who thinks superintelligence or even just human or higher-animal
level intelligence is right around the corner needs to study genomics,
proteomics, molecular biology, and neuroscience. Study them with an open mind
and think about what's _really_ going on.

A neuron is not a switch. A neuron is an _organism_. It contains a gene
regulatory network more complex than the network topology of Amazon's entire
web services stack, and that's just the aspects of gene regulation and enzyme
(a.k.a. nanomachine) operation that we understand. There are about 100 billion
of these in the brain, every one of them running in parallel and communicating
constantly. There are also about 10 glial cells for every neuron, and glia are
involved in neural computation in ways we know are there but don't yet fully
understand. (It seems to be related to longer-term regulation of synapse
behavior, etc.) Each glial cell also contains a massive gene regulatory
network, and so on.

The CS and AI fields suffer from a lot of Dunning-Kruger effect when they
talk about biology. The level of processing power and parallelism going on in
the brain of a living thing is simply mind-numbing. It's as incredible as the
sense you get of the scale of the universe when looking at the Hubble Deep
Field.

Our present-day computers are toys. We are not even close. It would at least
take advances equivalent to the ones that took us from vacuum tube ENIAC to
here.

Edit: I don't write off superintelligence categorically though. I think we
could achieve forms of it not through pure AI but by deeply augmenting
biological intelligence. Genetic and biochemical performance enhancement could
also play a role. Imagine having more working memory, perfect motivational
control, the ability to regulate your own desire/motivational structure, and
needing only a few hours of sleep. Cyborg superintelligence is a possibility
in the foreseeable future and it does raise issues similar to those the
superintelligence folks raise. So I don't dismiss an intelligence explosion. I
just very strongly doubt it would be purely solid state.

~~~
danieltillett
Of course we are far from reaching human level yet, but a generalised Moore's
Law means human level is not that many years away.

There is of course the issue that, since brains evolved rather than being
designed, they can be inefficient in their processing. Look at how poor
humans are at arithmetic: we have to divert a huge fraction of our processing
power to do what a computer designed for arithmetic does very efficiently.

~~~
ufmace
I'm not sure how seriously we should take Moore's Law when it comes to these
things. It applies pretty well so far to the development of silicon-based
microprocessors, but at some point, we're going to come up against some hard
physical limits on those. Once that happens, we may be stuck until we can come
up with something fundamentally new.

We already seem to be up against some limits on single-threaded processing
power: it doesn't seem to have gone up all that fast over the last few major
cycles of processor development.

~~~
danieltillett
This is why I said generalised Moore's law, not Moore's law. We are pretty
much at the limit of current designs, but there is still plenty of room for
parallelising computation.

I do agree we are going to need something new to get to human level.

------
proc0
Seems AGI is all the rage these days. David Deutsch has an article that
outlines a good point: we won't have AGI before a good theory of
consciousness. Some philosopher will first need to explain consciousness in
detail (more so than Dan Dennett, who already did an amazing job), then
neuroscientists might have to prove that theory right, AND THEN AI researchers
will be able to take a stab at it. So I don't think it will just pop into
existence by running some neural network training over and over again.

~~~
tim333
More likely the engineers will build AGI first and the philosophers try to
explain it after.

------
jbpetersen
AI is the wrong way to go looking for superintelligence.

Far more realistic is developing means of organizing humans effectively
enough to achieve superintelligent levels of collaboration.

I think before 2025 is quite reasonable given this approach.

~~~
tim333
Humans have been trying collaboration for a while and the results have been
patchy.

------
transfire
Computers are still far too slow to exhibit any kind of real-time
intelligence. I suspect we still need three orders of magnitude of
improvement.

------
NoMoreNicksLeft
I'm more concerned with when I can expect to see the regular variety of
intelligence.

------
Piskvorrr
Yup, the XKCD translations still hold:
[https://xkcd.com/678/](https://xkcd.com/678/)

------
vonnik
Nick Bostrom is a peddler of the apocalypse who has made his name by spreading
fear about a fairy creature called superintelligence. He's convinced people to
go looking for weapons of mass destruction in snippets of math and code. But
the WMDs aren't there, any more than they were in Iraq. Nice work if you can
get it.

------
themgt
 _The human brain contains about 10^11 neurons. Each neuron has about 5 • 10^3
synapses, and signals are transmitted along these synapses at an average
frequency of about 10^2 Hz. Each signal contains, say, 5 bits. This equals
10^17 ops._

This kind of nonsense is why no one should take Bostrom seriously. We did not
then, and do not now, even begin to know _how_ to write software to "simulate"
a human brain, or whether such a task is even possible with modern-day tools.
Multiplying random neurobiology stats by a "5 bits" figure pulled out of your
ass == AI in 2004?

We have "AI" that can drive a car or a copter, play Chess or Go, translate
speech to text, do image recognition ... but what we mean by human
intelligence is something different. And I see no evidence anyone has made
much progress developing a truly biological-like AI even at the level of say,
a mouse. Which according to Bostrom's math ought to be doable in a 2U chassis
by now, right?

If someone does succeed in writing mouse-AI or dog-AI, I'd believe that could
be scaled up to human-level intelligence very rapidly. But it's clear to me
there's (at least one) fundamental breakthrough missing from the current
approach, because my dog can't play chess or drive a car, but he has a degree
of sentience and awareness (and self-awareness) that no 2016 AI even
approaches.

~~~
zardo
[http://www.openworm.org/](http://www.openworm.org/)

We are on the cusp of revolutionary general nematode level intelligence. Soon
we'll be able to upload nematode minds to the cloud and they can live forever
in the nemamatrix.

~~~
pnut
"You get used to it. I…I don’t even see the code. All I see is blue squiggle,
green squiggle, translucent squiggle."

