
The Road to Superintelligence (2015) - rbanffy
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
======
goatlover
I'm not convinced that progress has continued to accelerate. Compare the past
70 years to the previous 70 years and ask yourself which period of time
experienced more radical progress, upheaval, and scientific discovery. My
suspicion is that there was more from 1877-1947 than from 1947 till now.

~~~
ynniv
You're looking at the impact on everyday life, which has nothing to do with
the abstract advancement of technology. Superintelligence only requires
continued exponential growth of computing power, which has documented
consistency.

~~~
Retric
Not over the last 10 years vs the prior 10 years.

[https://en.wikipedia.org/wiki/Logistic_function](https://en.wikipedia.org/wiki/Logistic_function)
/ Sigmoid functions look exponential at first, but there is no longer a lot of
room at the bottom. When features were 10,000 atoms wide, dropping that to
1,000 was no big deal yet a massive speedup. Now that they hit 5 atoms wide,
what do we use, 0.5 atoms?

PS: Arrays of low-power processors like GPUs are still making progress, but
we are rapidly approaching the point where there is more than one core per
pixel. We might be able to use 1,000 cores per pixel, but I doubt it. EX: at
1080p, SLI GTX 1080 Ti = 289 pixels per CUDA core. Sure, 4K/8K will push this
off for a while, but monitor resolution has not been going up all that fast.
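The point about sigmoids can be sketched in a few lines: a logistic curve is nearly indistinguishable from an exponential early on, then flattens as it nears its ceiling. The growth rate and carrying capacity below are arbitrary illustration values, not measurements of anything.

```python
import math

def exponential(t, r=1.0):
    """Pure exponential growth, exp(r*t), starting at 1."""
    return math.exp(r * t)

def logistic(t, r=1.0, k=1000.0):
    """Logistic (sigmoid) growth starting at 1 with carrying capacity k."""
    return k / (1 + (k - 1) * math.exp(-r * t))

# Early on the two curves track each other closely...
for t in [0, 2, 4]:
    print(t, round(exponential(t), 1), round(logistic(t), 1))

# ...but the logistic saturates near k while the exponential keeps climbing.
print(round(logistic(20), 1), "vs", round(exponential(20), 1))
```

At t=2 the two values differ by under one percent; by t=20 the logistic has flattened near 1,000 while the exponential is in the hundreds of millions, which is the "no room at the bottom" problem in miniature.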

~~~
andars
Why are pixels relevant to a discussion on computing power?

~~~
Retric
Pointing out that adding more cores is yet another wall, because FLOPS is only
a proxy and few things are truly embarrassingly parallel when you start
talking 1+ billion cores.

~~~
mannykannot
Furthermore, adding cores is not an exponentially-growing process. On the
other hand, there is some evidence, from the fact that intelligence is
possible within the human brain, that intelligence is embarrassingly parallel.

~~~
Retric
I think neurons are closer to transistor analogs than core analogs. The brain
wires them up into networks that have meaning, vs passing along data that has
meaning. And in that context, we crossed the 10-billion-transistors-on-a-chip
milestone recently, vs 100 billion neurons in a human brain.

------
axplusb
Superintelligence, The Idea That Eats Smart People:
[http://idlewords.com/talks/superintelligence.htm](http://idlewords.com/talks/superintelligence.htm)

~~~
celeritascelery
This is the best thing I have read in a long time. I have always been
skeptical of "superintelligence" and glad to see I am not alone.

------
ThomPete
Personally I believe that a huge part of intelligence and superintelligence is
about emergent complexity in networks, which has increased along with
computational power.

Humans (and biological life in general) are, as far as I can see,
pattern-recognizing feedback loops. Humans have more connections in the brain
and better/longer memory than other species, which seems to be at least
somewhat fundamental to our difference from the rest.

So while we don't know for sure IF this complexity is fundamental to
intelligence we sure seem to be on the right track.

~~~
Smaug123
If you add too much complexity of the wrong type, you get a star. Just making
something more complex by no means guarantees making it cleverer. And beware
of using "emergence" to hide the need for an actual explanation:
[http://lesswrong.com/lw/iv/the_futility_of_emergence/](http://lesswrong.com/lw/iv/the_futility_of_emergence/)

~~~
ThomPete
I am not talking about mere complexity any more than an evolutionist is
talking about simple randomness in the evolution of species.

Emergent in this context means incremental, i.e. small steps building on
previous steps. Plants react to light; they don't think about what light is.

Not sure what that article has to do with what we talk about here.

~~~
Smaug123
It is in response to "So while we don't know for sure IF this complexity is
fundamental to intelligence we sure seem to be on the right track."

Which to me reads as "hey, if we make things more complex then they might
become intelligent - who knows!" which seems extremely optimistic.

~~~
ThomPete
That's not what I said. I said if emergent complexity is at the core, we are
on the right track. I am willing to hear arguments against complexity being at
the center of this, but then that should be what we argue. The star comparison
doesn't work in this context any more than randomness does in evolution.

------
Osiris30
As a counter, see the previous discussion on Kevin Kelly - the AI Cargo Cult &
the myth of a Superhuman AI (1).

See also several other articles making similar points (2).

And "In late 2008, economist Robin Hanson and AI theorist Eliezer Yudkowsky
conducted an online debate about the future of artificial intelligence, and in
particular about whether generally intelligent AIs will be able to improve
their own capabilities very quickly (a.k.a. “foom”)."

(1)
[https://news.ycombinator.com/item?id=14205042](https://news.ycombinator.com/item?id=14205042)

(2)
[http://www.salon.com/2015/10/15/calm_down_artificial_intelli...](http://www.salon.com/2015/10/15/calm_down_artificial_intelligence_is_not_going_to_take_over_the_world_partner/)

(3) [https://intelligence.org/ai-foom-debate/](https://intelligence.org/ai-foom-debate/)

~~~
skgoa
Ah, Yudkowsky... The autodidact "AI theorist" who does not understand simple
math and philosophy concepts, is afraid of being resurrected and punished by a
future super AI, has not added anything to human knowledge despite collecting
millions of dollars to fund his research institute (which changes its name
every few years for some reason) and has actually argued with a bot on reddit.
Even when you ignore the singularity bullshit he spews, it is astonishing that
anyone takes him seriously.

~~~
Smaug123
> is afraid of being resurrected and punished by a future super AI

I hope you similarly mock those Christians or Hindus you come across. What
makes this belief system so much more worthy of scorn, even if Yudkowsky
actually were afraid of this (given that, AFAIK, he has never expressed
anything to suggest that he is)?

"Autodidact" is not the insult you seem to think it is.

Remove those two, and tone down the attacking voice, and you might have
something one could discuss.

~~~
sheepdestroyer
Just nitpicking, but any irrational belief is mockable, if not the people
holding it themselves. And when people holding risible beliefs speak from
authority, it is not unfair to cite those beliefs.

~~~
Smaug123
There, but for the grace of God, go I. (And I repeat that I don't know of an
instance where Yudkowsky has ever expressed fear that he will be revived by a
future AI and tortured. This sounds like it's referring crudely to the memetic
hazard known as Roko's Basilisk, which he has explicitly stated he doesn't
believe in.)

~~~
skgoa
You are right, I misremembered that.

------
mikorym
I agree with some of the below comments that there is more hype than reason.
For me the observation is from a different starting point: progress in
mathematics is not increasing exponentially, and personally I feel that
revisiting parts of mathematics is more enlightening than the current trends
in AI.

NNs, for example, are mostly just an application of linear algebra, where
design determines the nodes and training determines the weights; the "decision
making" is then done by a notion of product. Very useful, but not at all
superintelligence.
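That view of NNs, linear algebra plus a nonlinearity, fits in a few lines. The weights below are invented purely for illustration; a real network would learn them by training rather than have them written by hand.

```python
def matvec(W, x):
    """Matrix-vector product: the 'notion of product' doing the work."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(v):
    """Elementwise nonlinearity; without it, stacked layers collapse
    into a single linear map."""
    return [max(0.0, u) for u in v]

# Design fixes the shapes (nodes); training would fix the numbers (weights).
W1 = [[0.5, -0.2], [0.1, 0.8]]   # hidden layer, 2x2
W2 = [[1.0, -1.0]]               # output layer, 1x2

def forward(x):
    """A two-layer forward pass: product, nonlinearity, product."""
    return matvec(W2, relu(matvec(W1, x)))

print(forward([1.0, 2.0]))
```

Everything a trained network does at inference time is repeated applications of `forward`, which is the sense in which it is "just" linear algebra.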

~~~
foodie_
My feelings are similar. Current AI seems to be the infinite monkey theorem
put into practice. I'm not saying it doesn't work, it obviously does, but I
wonder what its limitations are. When do we start running out of monkeys, to
the point that only nation states, and companies with the resources of one,
are able to achieve the next level?

------
hacker_9
Quite a good write-up, I thought; especially interesting was the part about
the OpenWorm project and their successfully funded Kickstarter [1].

[1] [https://www.kickstarter.com/projects/openworm/openworm-a-dig...](https://www.kickstarter.com/projects/openworm/openworm-a-digital-organism-in-your-browser)

------
j7ake
So what's the evidence that exponential growth of AI will continue at this
rate for the next few decades?

~~~
_greim_
Nothing conclusive, but:

1. If we look at the rate of change over the last few thousand years, it
doesn't seem completely unreasonable to extrapolate that forward.

2. It's hard to argue that no more breakthroughs can possibly be made in
physical computing architecture or computer science, though admittedly it's
equally hard to argue breakthroughs _definitely will_ be made. However...

3. Modern industry is heavily incentivized toward AI advancement because of
the optimization capabilities it provides. Where such incentives exist,
barriers tend to dissolve.

------
HillaryBriss
> _Imagine taking a time machine back to 1750—a time when ... all
> transportation ran on hay_

wait, what? weren't ocean-going ships using wind power in 1750? come to think
of it, weren't ships using wind power in 1750 _BC?_

------
_greim_
In my view, we should avoid terms like "progress" and "intelligence" in these
conversations, since they make implicit value judgments and tend to draw us
irresistibly into certain unproductive patterns of argument. The topic of
evolution has a similar pitfall.

------
SeanDav
> _" In order for someone to be transported into the future and die from the
> level of shock they’d experience, they have to go enough years ahead that a
> “die level of progress,” or a Die Progress Unit (DPU) has been achieved."_

I suspect that it is no longer possible to achieve a DPU going forward. We now
know that amazing things have been achieved and will continue to be achieved.
Plus we have science fiction that can conceive of almost any possibility, from
intelligence at sub-atomic scale to creatures the size of galaxies, parallel
universes, humans capable of controlling time and space by mere thought etc
etc.

Based on recent history, I _expect_ the future to be unrecognisable!

------
yters
Why assume intelligence is computable?

------
nzonbi
Some thoughts:

Progress is not like a liquid that increases continually with research.
Progress is an unknown function, composed of a huge number of discrete
"discoveries". Each discovery is hidden behind research of variable
difficulty, and each discovery may or may not enable more discoveries.
Reality may have either a finite or an infinite number of discoveries waiting
to be made; we don't know for sure. This universe could have a finite amount
of discoveries, but it is possible that we may find a way to travel to
infinite other universes, and those may have infinite discoveries waiting to
be made.

The shape of the technical progress function depends on these unknown
factors, so it is wrong to assume that it is an exponential, although an
exponential seems like a good approximation around the local point at which we
currently are. The population has been growing, and the economy has been
growing. These two factors have enabled an increased amount of resources
dedicated to research. More research increases the probability of making the
discoveries available at the current technological level. The speed of
progress depends on the amount of research, and on the number of latent
discoveries hidden in our reality.

We are approaching the point at which we can build artificial general
intelligence. This will be a machine similar to a human mind, but capable of
dramatically faster reasoning. Its internal dialogue will be millions of times
faster than a human mind's, because it will move at electrical speeds instead
of biological speeds. It will also have practically perfect and unlimited
memory (compared to human capabilities), and an almost instantaneous ability
to resolve mathematical calculations of reasonable complexity. With these
improvements, it can be expected to be much more effective at making
discoveries than a human. Additionally, these machines will be industrially
replicable, so it will be possible to put a large number of them to work on
problems. It is reasonable to expect that these machines will resolve the
chain of discoveries available to be made faster than humans.

These artificial machines will maximize discoveries from the chain of
discoveries available. What this is going to mean depends on the actual,
unknown amount of discoveries available in this universe. If things like
nanotechnology, molecular machines, biological machines, etc. are actually
possible, these intelligent machines are well equipped to discover them
dramatically faster than we humans would. If there is new physics available to
be discovered, these machines have a much better chance than humans of
discovering it.

Will machine superintelligence actually create a singularity? Maybe, or maybe
not. It depends on the number of discoveries still to be made in this
universe, and on their level of difficulty. It could be the case that our
universe is running out of hidden discoveries, so any prediction on the shape
of the curve of progress is pure speculation. For example, we could have
already run out of exploitable significant discoveries in physics. Or we could
be on the verge of discovering faster-than-light/instantaneous communications,
and lots of other things.

In my opinion, the invention of artificial general intelligence, and of
superintelligence, is imminent: a matter of years. I base this on
introspective observation of the thinking process of my own mind, and on
comparison of it with the operation of artificial neural networks. They show
similarities. The thinking process of the mind is entirely reproducible with
deep learning networks assembled in the right structure. An interesting topic
is who is doing this research. Obviously, the big tech corporations are
working on it. But are state organizations also working on it? Who is making
the biggest investment? Who has the best odds of inventing it? What will
happen when someone gets it? Are they going to immediately announce it?

------
vonnik
Can we put (2015) in the title? Maybe replace the publication name, which is
visible twice. This was big when it was published, now slightly outdated.

~~~
amelius
Also put "AI" in the title. Upon reading the title, I thought it was about
genetics/breeding a superhuman.

~~~
folli
This is HN. This gives AI a slightly higher Bayesian prior than genetics.

