

Machine intelligence will surpass human intelligence within a few decades. - mdariani
http://www.kurzweilai.net/the-law-of-accelerating-returns

======
johngalt
When I look at history I don't see people who expected incorrect rates of
growth; I see people who predicted incorrect directions of growth.

When nuclear power was new people predicted the future would be shaped by
massive amounts of clean/cheap power. When air travel was new we predicted
that everyone would fly everywhere.

The singularity is our version of the fusion powered car, or the personal
jetpack.

~~~
bluekeybox
Very good point, but computing devices don't have the scaling problems that the
other technologies you listed have. Nuclear _would_ have had a similar
generality if, for example, you could safely run a fission power device on
your phone -- but you can't; it just didn't scale down at the
required safety level. Similarly, air travel doesn't scale down as people
thought it would (still no flying cars) even though it works very well at
large scale (airlines, combined, have a near monopoly on long-distance travel).
Computing scales both up (supercomputers) and down (cell phones). It may be
that current silicon chips won't scale down to cellular level, but it also may
be that (a) such scaling is not required for human-level AI (I believe it
isn't), or that (b) other technologies such as graphene may replace silicon.

------
grannyg00se
The assertion in the title is nonsense because "human intelligence" is not
properly understood at this point. There is still much debate as to what
drives the brain, and whether there is more to human intelligence than just
the physical brain. Even if we could completely duplicate a human brain
biologically, would it just start working automatically? Would it have
feelings? Would it be able to learn if we could attach sensory inputs? This is
not known. It's quite possible that there is _much_ more to human intelligence
than massive computing power.

See for example <http://en.wikipedia.org/wiki/Mind-body_problem>

~~~
bluekeybox
> There is still much debate as to what drives the brain

There is no debate. The short answer is: sugar.

> whether there is more to human intelligence than just the physical brain

No.

> Even if we could completely duplicate a human brain biologically, would it
> just start working automatically

Not unless you teach it. Humans who grow up in the wild on their own are quite
ape-like -- clearly, the human brain doesn't start working automatically on its
own (at least not working _well_ ). Also, as demonstrated by machine learning,
you can teach a non-biological system just as efficiently as you can teach a
biological one, which means that the first AI will reside (or already resides)
on a run-of-the-mill silicon chip.

> Would it be able to learn if we could attach sensory inputs?

It would have to, otherwise it would not be human-level AI by definition.

> It's quite possible that there is much more to human intelligence than
> massive computing power.

Yes, and that is years of learning and good enough ( _just_ good enough)
learning algorithms.

> See for example <http://en.wikipedia.org/wiki/Mind-body_problem>

My foot tells me it doesn't have that problem.

~~~
shadowfox
Since you seem to know this stuff, I had a couple of questions.

> Not unless you teach it. Humans who grow up in the wild on their own are
> quite ape-like -- clearly, human brain doesn't start working automatically
> on its own

I find this very interesting. Are there some articles/papers that I can look
in to?

> also, as demonstrated by machine learning, you can teach a non-biological
> system just as efficiently as you can teach a biological one

Is there something you can refer me to that compares the efficiency of
learning in humans vs machines?

~~~
bluekeybox
> Are there some articles/papers that I can look in to?

There has been little real science done, because "real" science is when you do
experiments, and a study of this kind would involve the so-called forbidden
experiment, which would create a huge ethical stir. The most often-cited
report is this one:
<http://academic.research.microsoft.com/Publication/2053957/genie-a-psycholinguistic-study-of-a-modern-day-wild-child>

Here is a brief PowerPoint with an overview of interesting cases:
<http://www.tulane.edu/~howard/LING411/ppt/d39-FeralChildren.ppt>

While it is often said that it is hard to draw conclusions from the few
reports that exist, I think that the one conclusion that can be drawn is
that nearly everything that makes us "human-like" is a product of learning and
social interaction. No confirmed "feral" child has ever been reported to have
acquired language in the full sense of the term.

> Is there something you can refer me to that compares the efficiency of
> learning in humans vs machines?

Just look up any introduction on machine learning. Nearly everything we do
every day (recognizing objects, thinking in terms of goals, etc.) involves a
classification problem, and every classification problem can be reduced to a
regression on data. So there is nothing more fundamental to learning than
fitting a model to data. Humans are pretty efficient learners, but they have
their limits (your Bayesian spam filter is probably more efficient than you at
recognizing spam, even though it may fail in a few cases because it doesn't
quite "live" in the world you live in, meaning that its world model is not as
deep/rich as yours). To acquire a deep/rich world model, you would need more
training (provided you have a correctly balanced algorithm), which would
ultimately be as expensive as raising a child. There is no reason, though, to
think that it won't be done one day.
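
The claim that everything reduces to "fitting a model to data" can be sketched
concretely. Here is a minimal, hypothetical spam filter written as logistic
regression trained by gradient descent -- a toy illustration of a
classification problem reduced to regression, with made-up features, not
anyone's production filter:

```python
import math

def sigmoid(z):
    """Squash a real-valued score into a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=2000, lr=0.5):
    """Fit weights to data by gradient descent on the logistic loss."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify: spam if the fitted model says P(spam) > 0.5."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

# Toy features: [message mentions "viagra", message mentions "meeting"]
X = [[1, 0], [1, 0], [0, 1], [0, 1]]
y = [1, 1, 0, 0]  # 1 = spam, 0 = ham
w, b = train(X, y)
```

The point of the sketch is that "learning" here is nothing but regression: the
filter's whole competence is a set of weights fitted to examples, and its world
model is exactly as deep as its training data.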

~~~
yters
The no free lunch theorem has killed all such hopes for me.

~~~
bluekeybox
I don't see an argument here. One could also use your theorem to arrive at
exactly the opposite conclusion: AI will happen and will eat your lunch,
therefore there will be no free lunch.

~~~
yters
Perhaps you are unaware of this:
<http://en.wikipedia.org/wiki/No_free_lunch_theorem>

------
KeithMajhor
I feel like this is a non sequitur. He seems to imply that above some
magic number of transistors, intelligence will simply manifest itself.
Intelligence is more than just computational power.

~~~
wladimir
He might be right, though. Intelligence might arise as an emergent property of
some complex system. For example, automated trading.

There's no telling really. It also depends on how you define intelligence. It
doesn't necessarily have to be (explicitly simulated) human intelligence.

(Mind that intelligence in humans wasn't created on demand either. It arose as
an emergent property of the mammalian brain, due to evolutionary pressures)

------
nl
I understand the cynicism of many people here, and I totally agree that there
is a lot more to intelligence than mere computational ability.

Then I type "adress of resurunt in adelide", it autocorrects to "address of
restaurant in adelaide" and shows me a list of restaurants in my city.

I tell my phone to navigate to one, and it understands me and displays a map
with route directions from my current location.

I wonder if anyone in my city has deployed SceneTap[1] yet, and laugh to myself
at how weak it is. I mean - it can only count the number of people in a bar
and identify if they are male or female. I wonder why they aren't rating the
attractiveness of the people there instead? [2]

Driving my car, I swear at the bad drivers ahead of me, and wish the cars here
were driverless. It's crazy it has taken so long! Ok, I understand back in
2004 no cars managed to finish the DARPA challenge across the desert, and the
winner in 2005 said "The impossible has been achieved" when his team's car
finished[3] - but that was 6 years ago! Anyone would think this was
complicated or something.

Then I come on HN, and see everyone bitching about how ignorant it is to think
computers could ever be intelligent. I agree of course - it's obvious that
chess is no test of intelligence, nor is spelling correction, nor is following
contextual instructions, nor is machine vision, nor is driving a car, nor is
breaking CAPTCHAs[4]!!!

The Turing test - now that's a true test of intelligence. At least until it is
passed - then _obviously_ it's a flawed test.

[1] <http://blogs.forbes.com/kashmirhill/2011/06/28/using-facial-recognition-technology-to-choose-which-bar-to-go-to/>

[2] <http://www.springerlink.com/content/t75552811t449746/>

[3] <http://www.foxnews.com/story/0,2933,171673,00.html>

[4] <http://news.ycombinator.com/item?id=1897932>

------
levand2
No, no it won't. This is the same statement that has been made consistently
since the invention of computers. But the fact is that despite our increases
in computational power, and our high degree of success with specific tasks, we
still don't even know what general "intelligence" really means, beyond "what
humans do."

Until we have a reasonable definition for intelligence and consciousness
(which I would argue are related), we'll always be moving the goalposts, and
machine intelligence will always be a "few decades" away.

~~~
waqf
The following two statements are not incompossible:

1. At each time T, we believe that "machine intelligence", according to the
definition in use at time T, is a few decades away;

2. At each time T, we're right.

In other words, the fact that in thirty years we'll still be talking about how
we'll have machine intelligence one day, has nothing to do with the fact that
by that point we'll already have machine intelligence as we _currently_
conceive it.

~~~
hugh3
_The following two statements are not incompossible: 1. At each time T, we
believe that "machine intelligence", according to the definition in use at
time T, is a few decades away; 2. At each time T, we're right._

They're not incompatible, but the second one isn't true.

It's true that a few decades ago some folks thought that a computer capable of
playing good chess would be true "machine intelligence". But they also thought
such a computer would be able to carry on a sensible conversation and
generally act like a _person_.

It turns out that playing a decent game of chess was easier than we thought,
but building a HAL-like computer (man, I just realised that movie is over
forty years old!) is still incredibly difficult.

Still, I shouldn't even be talking about "building" an intelligent computer.
The hardware is the easy part, the software is the hard part. Surely by now,
all the supercomputers in the world are quite capable of simulating a rat's
brain. But are they? Hell no, nobody has ever written a rat-brain simulation
and nobody has the foggiest idea how to start.

~~~
yters
Supposedly the human brain is only on the order of petaflops, so we should be
able to simulate it at this point.

------
watmough
It always seemed self-evident to me that to attain something like human
intelligence, machines would have to share human visual and auditory
senses and be bootstrapped to a point where some great influx of knowledge
could take place.

We now have that body of knowledge, audio, visual, and instantaneous worldwide
communications.

I have to agree with Kurzweil that machines which broadly exceed humans
surely cannot be far away, though it may be that their poetry will leave
something to be desired.

------
shriphani
I have to disagree with Kurzweil here. I remember that in one of the Kinect
videos, the researcher who worked on the mic array explained that the brain's
billions of neurons allow such a level of processing that we can extract info
from the noise easily.

The amount of computational resources needed to combat information overload on
one sense organ is immense. To be able to pull that off with five sense organs,
regulate body function, make inferences with ease, and apply vast swaths of
info to fields completely unrelated to the underlying information is a far
bigger task still. That said, I can see a budding brain-peripherals market
around 2050+. It is just that we need a lot more than computational resources
to emulate the brain.

One interesting quote I took away from a cognition class (the only one I
considered legit in our Psych dept.) was that we tend to compare our brain to
the most advanced tech of the moment. My gut feeling is that the current
computational models we use are not adequate representations of the human
mind, and potentially a Socratic approach might also fall short.

~~~
mdda
Just to pick on one piece of your argument: the difference between 1 sense
organ and 5 is only a 5-fold increase. Even in the worst case it's something
like a 25-fold increase (network effects, etc.). But a 25-fold increase should
take less than a decade -- as demonstrated by the power of the processor in
your phone...
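
The back-of-envelope arithmetic behind "less than a decade" can be made
explicit. Assuming, as a rough Moore's-law figure, that available compute
doubles about every two years (an assumption, not a law of nature):

```python
import math

def years_to_scale(factor, doubling_years=2.0):
    """Years needed to grow compute by `factor`, given a fixed doubling time."""
    return math.log2(factor) * doubling_years

# A 25-fold increase needs log2(25) ~= 4.64 doublings:
print(round(years_to_scale(25), 1))  # about 9.3 years
```

So even the pessimistic 25-fold figure fits inside a decade on that assumption,
and a plain 5-fold increase would need under five years.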

But I'll definitely agree that true AI is a difficult target to aim at, since
AFAIK no-one knows what the important magical bit of intelligence is.

------
yters
By 2026, apparently: "But because we’re doubling the rate of progress every
decade, we’ll see a century of progress–at today’s rate–in only 25 calendar
years."

"In line with my earlier predictions, supercomputers will achieve one human
brain capacity by 2010, and personal computers will do so by around 2020. By
2030, it will take a village of human brains (around a thousand) to match
$1000 of computing."

So, have super computers achieved one human brain's worth of computing? I
believe we're only in the petaflop range, which is a factor of 10^10 behind
his prediction.

At any rate, I believe AI is logically impossible.

~~~
D_Alex
Estimates of the computing power of the human brain vary widely; the ones that
have some logic behind them range from 0.1 to 1000 petaflops. IMHO, they tend
to be very "pro-brain", and should be seen as an upper limit.

Currently, the fastest supercomputer does about 8 petaflops. This might
already be 80 times more than the power of a human brain. And it could be
less... but where is your 10^10 factor from?

~~~
yters
Ah, whoops, I took his figure for the whole human race. Never mind -- it seems
we're on track for computational performance, or even ahead of schedule.

Sweet, I should start seeing human type sentience any day now!

------
theblueadept111
Machine intelligence will soon diverge from Kurzweil's hyper-optimistic
predictions and timetables so dramatically that he won't be able to gloss over
it any longer.

As for where AI stands today, I'd say we're not even at the level of
simulating a jellyfish, or even a single cell. Simulated protein folding is
still a pipe dream for our fastest supercomputers, never mind simulating the
intelligence of a sentient being.

On top of that, algorithmic systems as we understand them may be a shallow
model of what it means to have 'human intelligence'. How do you design a
circuit that feels pain, or know when you've succeeded?

------
podperson
Definitely true for sufficiently weak definitions of "human intelligence" --
e.g., it's already true for "chess-playing" ability.

Incidentally, "a few decades" means "just after we get fusion power", right?

------
jackfoxy
Will strong AI ever enjoy a gin and tonic at the end of work on a summer's
day? Will it ever feel the endorphin release from a strong run on the
treadmill? Of course these are rhetorical questions. AI, however you define
it, is a machine, a tool. It may become arbitrarily powerful, but that's all
it will ever be; and it will always be a creation of man.

~~~
mdda
And when the AI designs a machine that is superior to it, and superior to
anything that man has designed - then what? Or are you suggesting that humans
are 'by definition' the only beings that can be said to have created
something?

The AI (or its progeny) might disagree - and be able to argue pretty
convincingly... After all, somewhere down the line they'll be much smarter
than any human.

Or are you also arguing that human intelligence is some kind of asymptotic
limit? Tell that to a kid that's smarter than its parents, or teachers.

------
dreamdu5t
Summary of every post from KurzweilAI.net since its launch years ago:

"Machine intelligence will surpass human intelligence and a singularity will
happen. Why? Technology gets more advanced."

Broken record...

~~~
cageface
Somebody needs to introduce Kurzweil to the concept of the asymptote.

~~~
KeithMajhor
I agree. Something like this maybe?

<http://dl.dropbox.com/u/8464115/graph_test.html>

~~~
mdda
But unless you've got evidence that we've started to 'turn the other corner',
doesn't your graph suggest that the level of technology is going to increase
by the same factor as it has since (say) the 1950s?

And even then, maybe the level of technology needed for AI is below the level
towards which we might be tending?

------
goberoi
Mitch Kapor and Ray Kurzweil made a public bet on this back in 2002. Mitch
challenged that "By 2029 no computer - or 'machine intelligence' - will have
passed the Turing Test." Stakes: $20k to the EFF or Kurzweil Foundation
depending on the winner. See: <http://longbets.org/1/>

------
mdariani
i would not say it is impossible. predicting the future is so difficult and
things are changing so fast. i agree with the author "kurzweil" that we will
first enter a hybrid phase between machines and humans. later, machines will
surpass humans. the progress of tech evolution is getting faster and faster
and i can't see right now what should stop it. i'm not a scientist, but what
i can read in the article looks very logical and therefore i agree with
kurzweil.

------
melling
That's great. I've got so many questions.

~~~
melling
Ouch! I got beat up over this. I still think it's funny. This man vs. machine
debate appears on HN from time to time, and it probably will continue for
another three decades. I've just gotta ask: are we really accomplishing
anything here?

