
Kurzweil Responds: Don't Underestimate the Singularity (2011) - llambda
http://www.technologyreview.com/view/425818/kurzweil-responds-dont-underestimate-the-singularity/
======
mycroftiv
When I was a young person in the 1970s reading about technology, space travel
was the focus of futurism and predictions. I read a lot about the
"accelerating rate of progress" in space technologies, and many experts
confidently predicted permanent lunar colonies around 1990 and luxurious
martian and orbital space colony living post-2000. The reasoning was more or
less the same as the "AI singularists". If you look at the "curve of progress"
in space tech from WWII to 1970 and extrapolate from it, we ought to have the
full "Star Trek" technologies now.

However, as we know, the pace of progress in space travel and settlement
technology slowed down and almost stagnated, despite a lot of very smart
people who thought fast space colonization was a sure, inevitable prediction.

I think the same goes for extrapolations from our current computing technology
- as exciting as the blue-sky scenarios are, and as fervently as I hope that
brain uploading and similar goals will be achieved, I think the barriers to
computational transcendence of the current limits of our bodies will probably
be at least as challenging as the barriers to permanent off-earth
colonization.

~~~
rymith
You are missing a key difference. There is no profit in reaching the stars.
Beyond that, AI faces no barriers that come close to the physics and energy
requirements of space travel. Unlike a warp drive, which we don't even know is
possible, the human brain is an actual physical device that exists; it simply
uses chemicals as its logic gates and transistors. We both run on electricity.

I'm a biologist by schooling and a programmer by occupation, so I understand
the science on both sides. It's not a matter of if; it's a matter of when. And
it's coming a lot sooner than people think.

~~~
indiecore
>There is no profit in reaching the stars.

There's no profit in the singularity either.

~~~
rymith
Are you kidding? I don't want to be a jerk, but you cannot possibly have
thought that through.

~~~
yk
Actually, he is correct: if everything is virtualized (and therefore
abundant), then the price of everything drops to zero.

~~~
rymith
You're a little unclear as to why it's called the singularity. It means a
point so drastically different that we cannot predict what the world will look
like after it happens. However, should a planet still exist on the day after
this occurs, you can pretty much guarantee that the company that controls this
technology will make Apple, Google, and Microsoft look like technological
infants. Well, unless it is Google, which seems to be the play they are making
by hiring RK.

Also, World of Warcraft is virtualized, and everything in that game is
potentially infinitely abundant. And yet, it still seems to pull in billions.
So...

~~~
yk
Yes, but my point is essentially that talking about profits in a
post-singularity world is about as sensible as talking about Mao as CEO of
China. After the singularity we will likely have a different socio-economic
system, and profit will probably mean something like what a noble rank means
today.

------
md224
As I've said before, I think Kurzweil's focus on a singular A.I. is
misguided... he's missed the target but hit the tree. Much of the exponential
growth in technological progress (from the printing press to the Internet) has
to do with improving the efficiency of information flow between individuals.
I'm thinking that at some point this flow may approach neuronal levels of
complexity and speed, at which point the collective intelligence of society
(or groups in society) will tip over into something resembling a multi-human
organism. It's hard to grasp now, but I doubt that whatever the Singularity
ends up being will be easy to grasp at this pre-Singularity stage.

Arguing about brains in boxes just seems a little myopic to me.

------
stcredzero
_> And while the translation of the genome into a brain is not
straightforward, the brain cannot have more design information than the
genome._

We know this is mostly true, but not entirely true. While the genome is
probably the biggest repository of information driving brain development,
there is no hard and fast law saying it's the only one. It's easy to see how a
bit of information external to the genome, like "maternal alcohol consumption
and lead ingestion are bad," can affect brain development.

~~~
mistercow
And besides that, the information content of a design is not determined by the
number of unique structures the design describes. By analogy, the output of
`for i in range(100000): print(i)` contains 100000 unique lines, but its
information content is bounded by the tiny program that generated it.
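
To make that intuition concrete, here is a minimal Python sketch (my own
illustration, not anything from Allen or Kurzweil). The generated output is
large and fully unique, yet it compresses substantially, because a short rule
produced it; random bytes of the same size barely compress at all.

  import os
  import zlib

  # A tiny "design": a few dozen bytes of program text.
  program = "for i in range(100000): print(i)"

  # Its large "phenotype": 100000 unique lines of output.
  output = "\n".join(str(i) for i in range(100000)).encode()

  # Random data of the same size, for comparison.
  noise = os.urandom(len(output))

  print("program size:      ", len(program))
  print("output size:       ", len(output))
  print("output compressed: ", len(zlib.compress(output, 9)))
  print("noise compressed:  ", len(zlib.compress(noise, 9)))
  # The structured output shrinks a lot; the noise does not.
  # Unique structures are cheap when a short rule generates them.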

Oddly enough, Kurzweil actually misquoted Allen there, and his refutation
_does_ apply quite well to Allen's original statement, which was that each
neural structure has been _individually refined_ by evolution. Evolution does,
in fact, operate solely on the genome (plus or minus some epigenetic factors
here and there), and it really is impossible for it to tune that many
structures individually.

So maybe Kurzweil just phrased that point poorly.

------
HillOBeans
Mitch Kapor and Kurzweil have a $20,000 bet on whether or not any "machine
intelligence" will have passed the Turing Test by 2029. Kurzweil, obviously,
is on the "for" side. Kapor's taking the "against" side stems from his
experience as a software developer.

I think he has a point: consider how incredibly difficult it is to give any
machine a set of instructions that will get it to do EXACTLY what you want it
to do. The promises of new languages, frameworks, and methodologies have all
failed to magically make software easy and bug-free. At some level, a machine
intelligence is going to need some sort of software. We like to call the game
engines we play against "AI"s, but are they really intelligent? In the end
they are just rules engines.

It IS fun to dream, and those dreams can certainly lead to technological
advances and progress, but as someone who spends much of my days attempting to
coerce a machine into understanding what I need it to do, I also share Kapor's
skepticism. I think Kapor himself said it best: "In the end, I think Ray is
smarter and more capable than any machine is going to be."

~~~
akkartik
Perhaps you don't need perfectly designed code, just the right kind of
selection pressure? Consider the spaghetti that is our DNA or brain structure.
YAGNI design might well be a poor fit for the fractal redundancy necessary for
truly complex systems.
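
As a toy version of "just selection pressure" (my sketch, a Dawkins-style
"weasel" program; the target string and mutation rate are arbitrary): nobody
designs the answer, yet mutation plus selection finds it.

  import random

  TARGET = "METHINKS IT IS LIKE A WEASEL"
  ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

  def fitness(candidate):
      # Selection pressure: how many characters match the target.
      return sum(a == b for a, b in zip(candidate, TARGET))

  def mutate(parent, rate=0.05):
      # No design, just random copying errors.
      return "".join(random.choice(ALPHABET) if random.random() < rate else c
                     for c in parent)

  parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
  generation = 0
  while parent != TARGET:
      # Breed a litter of mutants, keep the fittest (parent included).
      parent = max([mutate(parent) for _ in range(100)] + [parent],
                   key=fitness)
      generation += 1

  print("reached target in", generation, "generations")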

~~~
HillOBeans
I am aware that there are already some programs that are capable of
"learning", and you could certainly say that they are selecting the correct
paths based on trial and error, and "remembering" in order to build faster and
more accurate responses. I don't pretend to understand how human brains are
wired, but I don't really view DNA as "spaghetti". It is code. Wonderfully
designed code, a four-letter alphabet read in three-letter codons, integrated
into a system complete with an interpreter and built-in code cloning, error
checking, and correction. We humans have yet to design something that works so
efficiently. We still suffer from errors in the code, though - mutations that
cause such things as cystic fibrosis, Down syndrome, sickle-cell anemia, etc.

I suppose it is really the brain one would seek to emulate if one were trying
to create some form of true AI.

~~~
akkartik
Yes, it's dangerous to draw on examples whose workings we don't understand.
But all the evidence seems to indicate that brains and DNA don't pay much
attention to parsimony or micro-optimization. Or high-level architecture. Or
modularity. Nature just does what works.

------
31reasons
Just in case people didn't notice, the article is from October 19, 2011.

~~~
akkartik
It's a response to <http://news.ycombinator.com/item?id=4925204> on the
frontpage, which in turn was likely a reaction to
<http://news.ycombinator.com/item?id=4923914>. One of the problems with the HN
archives is that they lose these connections.

------
teeja
The essence of the story on Watson: it beat the humans because it was faster
on the trigger finger. Same story as John Henry and the steam-powered hammer.
As for how it acquired its "knowledge" ... by "reading" documents ... yes, an
ocean is bigger than a teacup. But water can't swim. Call me when a general-
purpose machine is motivated to mull over all that knowledge and originate
new, testable answers to long-standing questions.

------
yk
One aspect of 'the rapture of the nerds' that I find most interesting is that
it will drag parts of metaphysics into the realm of science: a simulation of
the neuronal connections is (rather likely) just a computational problem,
while explaining a working AGI would be quite a challenge for dualistic ideas.

~~~
return0
Somehow I believe people will never stop having metaphysical questions. Even
diehard eliminativists can't help finding the 'why?' question annoying.

------
Kilimanjaro
We will never achieve artificial intelligence unless we create a program that
can differentiate good from evil, pleasure from pain, positive from negative.

The basic building blocks of life.

If we continue making machines that are merely faster at calculating formulas
and storing knowledge, we will have just that: a giant calculator.

~~~
im3w1l
I definitely think you are on to something. The attempts at artificial
intelligence I am aware of all consist of some sort of optimizing, so finding
a good thing to optimize seems a very reasonable place to start.

(This is just my speculation, so take it with a grain of salt)

Here I think it is reasonable to look to human motivation. Maybe by making an
agent that optimizes what a human brain optimizes, we could see similar
behaviour?

A reasonable start is Maslow's hierarchy of needs.

1. Biological and physiological needs. For an embodied AI, this could
correspond to integrity checks coming up valid, battery charging, and
servicing.

2. Safety needs. I think these emerge from prediction + physiological needs.

3. After that we have social needs. This one is a little bit tricky. Maybe we
could put in a hard-coded facial expression detector?

4. Esteem needs. Social + prediction.

5. Cognitive needs. I have no idea how this could be implemented.

6. Aesthetic needs. I think these are pretty much hard-coded in humans, but
are quite complex. Coding this will be ugly (irony).

7. Self-actualization???

Now, from 1 and 3 it is reasonable to suppose (provided the optimizer is good
enough) that we could train the AI the way one trains a dog. You give a
command, the AI obeys, you smile at it or pet it (-> reward).

If it does something bad, you punish it.
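
As a very rough sketch of what such a drive-based reward could look like
(entirely speculative; the signal names and weights are made up for
illustration), levels 1 to 3 might collapse into a weighted scalar that the
optimizer maximizes:

  from dataclasses import dataclass

  @dataclass
  class DriveState:
      # Readings normalized to [0, 1]; all of these signals are hypothetical.
      battery: float          # level 1: physiological (charge remaining)
      integrity_ok: float     # level 1: self-checks passing
      predicted_harm: float   # level 2: safety, via a prediction model
      social_approval: float  # level 3: e.g. a facial-expression detector

  # Made-up weights; tuning these is exactly the hard, open problem.
  WEIGHTS = {"battery": 1.0, "integrity_ok": 1.0,
             "predicted_harm": -2.0, "social_approval": 0.5}

  def reward(s):
      # Scalar reward an optimizer could maximize, dog-training style:
      # a smile after obeying a command shows up as social_approval.
      return (WEIGHTS["battery"] * s.battery
              + WEIGHTS["integrity_ok"] * s.integrity_ok
              + WEIGHTS["predicted_harm"] * s.predicted_harm
              + WEIGHTS["social_approval"] * s.social_approval)

  print(reward(DriveState(battery=0.8, integrity_ok=1.0,
                          predicted_harm=0.1, social_approval=0.9)))

Levels 5 to 7 are conspicuously absent here, which matches my "no idea how to
implement" admissions above.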

In order for the optimization procedure not to take an unreasonably long time,
I think it is important that the initial state has some instincts.

Make a sound if you need battery power. Pay attention to sounds that are
speech-like.

Giving it something akin to filial imprinting could also be a good idea.

Extensive research on the neural basis of motivation should be prioritized, in
my opinion.

