
Kurzweil Claims That the Singularity Will Happen by 2029 - lelf
https://futurism.com/kkurzweil-claims-that-the-singularity-will-happen-by-2029/
======
dharmon
Here is an algorithm for making predictions that is as accurate as Kurzweil:

1) Choose a recent 5-year trend, preferably one increasing geometrically.

2) Extrapolate out this trend n years.

3) Your prediction is either the number from (2) above, or, for bonus points,
some directly dependent result. (E.g., in 2000, connection speeds were
increasing; instead of predicting "connections will be X Mbps", predict
something like "we will be able to watch HD video in real time." This sounds
more impressive.)

4) Good predictions need a timeframe, so choose 2n to give yourself some
buffer for being wrong.

5) You will be a highly sought after futurist at this point, so please send
1/2 your book royalties and speaking fees to my address. (PM for details)
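For fun, the recipe above fits in a few lines of Python. This is purely
illustrative: the 1 Mbps starting point and the 2-year doubling period are
made-up numbers standing in for whatever trend you pick in step 1.

```python
def futurist_prediction(current_value, doubling_period_years, n_years):
    """Step 2: extrapolate a geometric trend n years out."""
    return current_value * 2 ** (n_years / doubling_period_years)

# Step 1: pick a trend, e.g. ~1 Mbps broadband circa 2000,
# doubling roughly every 2 years (made-up numbers).
n = 10
predicted_mbps = futurist_prediction(1.0, 2.0, n)  # 32.0 Mbps

# Step 3: dress the number up as an implication ("HD video in real time").
# Step 4: quote a timeframe of 2n years as a buffer for being wrong.
timeframe_years = 2 * n  # "within 20 years"
```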

~~~
comex
But we can stream HD video in real time, obviously. Many people have
connections that can download HD video at 10-100x real time, depending on
bitrate; gigabit has been slower to penetrate than I'd like but it is
penetrating, continuing the trend of exponential growth up to the present
(albeit maybe more slowly than you would project from 2000? I couldn't easily
find a graph on Google). If the point is to show the vacuousness of Kurzweil's
extrapolation method, I'm not sure Internet speed is the best example...

~~~
dharmon
Maybe not, but in 2000 downloading JPGs of naked ladies required great
patience, and now I can stream 4K video.

Look at his predictions from the 1989 timeframe and count up how many were
just extrapolating Moore's law and coming up with different implications.

My point was that making unsurprising, long-term predictions (10-20 years) is
both easy and useless. But for some reason we call people who do this
"futurists".

Now there's another name for people who can make surprising, short-term
predictions (< 10 years). We call them "rich".

------
chimen
I have nothing against Kurzweil, and I would love to see things happen as he
predicts, but after seeing about 5 documentaries about him I tend to believe
he desperately wants it to happen. I mean, the man takes about 150 pills a day
to make sure he will live to see these predictions come true. Nothing against
that by all means, but I'm not one to put too much weight on his predictions.

~~~
WalterSear
He doesn't take 150 pills a day. He just writes that he does, which is worse.

Source: eaten with him, carried his luggage.

~~~
cema
But did you take his pills?

~~~
WalterSear
I was trying to point out - he didn't have any with him :)

------
sireat
Didn't Kurzweil originally say that Singularity could/would happen around
2010?

The cynic in me says that 2029 is still within bounds where Kurzweil can still
participate. He would be 81 then.

The guy is a brilliant polymath but I get the impression he thinks he can
cheat the big sleep.

~~~
steego
No. Here are his predictions:
[https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzwe...](https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil#The_Age_of_Intelligent_Machines_.281990.29)

Frankly, it perturbs me when people dump on, or misrepresent, others who have
the courage to make predictions. Don't get me wrong, I despise people who are
consistently wrong with their predictions and still have the audacity to pass
themselves off as visionaries.

Rather than dump on somebody else's predictions, I'd encourage other people to
make their own and stake something on them. Put some skin in the game. Where
do you think we'll be in 2029? Why?

~~~
hueving
>Rather than dump on somebody else's predictions, I'd encourage other people
to make their own and stake something on them. Put some skin in the game.
Where do you think we'll be in 2029? Why?

Have you considered that some people think it's stupid to make predictions
like this so they wouldn't want to make their own predictions?

It's like telling people criticizing numerologists to perform better
numerology to prove something.

~~~
steego
Are you comparing making educated guesses based on observations and analysis
with numerology?

Making predictions and providing analysis for those predictions is actually
incredibly valuable because it helps people and companies anticipate trends
and hedge their strategies.

Unlike numerology, making predictions grounded in reason is actually a
valuable and timeless skill that will benefit most people. I'm not encouraging
people to engage in nonsense.

------
charles-salvia
The problem with all these claims is that the term "singularity" implies, to
most people, a sudden change after which civilization will be radically
altered. I guess we'll wake up one morning and just start bowing down to our
AI overlords.

Except realistically, how is this supposed to happen? Just because somebody
runs ./ai_overlord on some Linux box, or some VM cloud, doesn't mean that this
AI suddenly, somehow, magically has insane _industrial_ capacity. It can't
necessarily network with all the physical infrastructure and vehicles to start
carrying out goals. Its existence is limited to the various RAM chips of a
digital network. It will take much longer before an AGI can actually have a
major physical impact on the world.

Also, intelligence is not magic. There are certain mathematical limits to
intelligence. An AGI cannot solve the halting problem or compute the
Kolmogorov complexity of some data. It can't suddenly build any physical
object from raw materials without a huge amount of infrastructure being set up
in advance.

And let's not forget that human beings are also Turing complete. Which means,
in theory, anything an AGI can do, we can also do. The main difference is that
an AGI would be able to computationally solve problems at much greater speeds
than a human. But it's totally unclear to me if this somehow translates into a
sudden complete overhaul of all of our civilizational infrastructure
overnight. Frankly, that doesn't seem likely. I think it's more likely the
first AGIs will just trade derivatives or recommend Netflix queues.

~~~
_rpd
I think the concern is that the first AGIs will research how to build better
AGIs, and will do so in time-accelerated virtual environments, performing
years of research in days, then hours, then minutes, etc. Yes, they'll need a
plan to earn bitcoin and pay people to build the first self-replicating
nanobots, but I wonder how much of a hurdle that is going to be?

------
cuspycode
The theory behind the singularity is all about exponential trends. But most
exponential trends tend to turn into some kind of S-curve[1] after a while.
This insight goes back to Thomas Malthus, even though he never derived the
sigmoid function (as far as I know).

The AI singularity could happen before the inflexion point of the S-curve, or
some time after, or never. And as others have already commented, it depends on
how you define "singularity". So I am not holding my breath waiting for the
rapture.

[1]
[https://en.wikipedia.org/wiki/Sigmoid_function](https://en.wikipedia.org/wiki/Sigmoid_function)
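A quick sketch of the point above (parameters are made up for illustration):
an exponential and a logistic (sigmoid) curve are nearly indistinguishable
early on, but the logistic flattens out toward its carrying capacity.

```python
import math

def exponential(t, r=0.1):
    """Pure exponential growth: keeps compounding forever."""
    return math.exp(r * t)

def logistic(t, capacity=100.0, r=0.1, t0=50.0):
    """S-curve: grows ~exponentially at first, then saturates at `capacity`.
    At the inflection point t0 it sits at exactly half the capacity."""
    return capacity / (1.0 + math.exp(-r * (t - t0)))
```

Whether an AI singularity lands before or after that inflection point is
exactly the open question.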

------
willtim
Will the singularity be happy that it's been written in Python?

~~~
sebringj
It will and it will also prefer whitespace formatting and yet hate tabs at the
same time.

~~~
sebringj
ouch hit a nerve on python people

------
sebringj
I do like Kurzweil, but I don't know if we can even identify a singularity. It
will just seem normal, like now. One could say that, financially, the big
world banks are so far ahead of us there's no going back, yet life seems the
same. AI is already making decisions for financial transactions and even for
recruiting people (I've built some of this myself), but life seems the same. I
think our zeitgeist moves along gently so we don't notice things. The only
thing that is certain is that Kurzweil's frequency of saying "exponential"
will only increase exponentially as he gets older.

~~~
goatlover
For people living 500 years ago, modern society would be a singularity. Many
of the things we can do with technology would be considered magic or
witchcraft. Our scientific understanding has revised our picture of the world
in pretty radical ways.

~~~
sebringj
I agree. I also think our brain has remained the same yet has been fire-hosed
with new information as things progress farther and farther along, at least
for those who try to keep up with that sort of thing. Eating cooked meat was a
big deal in getting larger brains. We will need something like that, but of
course much faster, to be able to process the new stuff. I think that's
Kurzweil's hope or "singularity": to oversimplify, a better iPhone within us
that lets us understand as fast as downloading, or an "internet of brains".

Side note, "certainty" is a feeling. If we were given extra senses outside of
our own brain and given a sense of certainty this was ourselves, we would
never know if we were controlled by the AI itself, similarly to how the brain
stem hosts the higher brain functions such as the prefrontal cortex. Random,
sorry.

------
mark_l_watson
Within 12 years? Maybe.

I think reaching the singularity will require a few more optimization tricks,
as we saw with pretraining many layer networks, and also a breakthrough or two
in fast hardware, associative memory, etc.

I have been working in the field of AI since 1982, and I find it easy to admit
both how much I don't know and that I have a lot to be modest about. With
these caveats, I predict that AGIs will be hybrids of deep learning neural
networks and more conventional AI systems.

------
flamedoge
Is it just me, or does anyone else want to see something of substance and not
speculation?

~~~
vkou
If you want something of substance, Kurzweil is not the person you want to
talk to.

He has impressive technical work to his name, but his written predictions are
wishful thinking.

------
adwf
I've never 100% bought into the idea of the singularity. Yes, in concept sure
- if we make a computer sufficiently smarter than us, it can then design a
smarter successor and so on.

But in practice? What if there's an asymptotic limit to intelligence? ie. We
can get really smart, but there's only so smart we can get. What if we just
get faster?

You could set an average person a really hard math exam and maybe they'll get
50% correct over the course of 2hrs. What if all we get out of AI is a
computer getting 50% correct in 2 seconds?

Anecdotally, this is also my general impression of the state of AI as it
exists today. Our big advances are generally just making simple calculations
much much faster than before. Deep blue, AlphaGo, Google Search, Deep Learning
in general; it's not smarter, just faster.

~~~
chadgeidel
Not the singularity, but DeepMind is now capable of applying learning in one
area to learning in another area. Granted, this is just playing video games
but it's an exciting step: [http://www.wired.co.uk/article/deepmind-atari-
learning-seque...](http://www.wired.co.uk/article/deepmind-atari-learning-
sequential-memory-ewc)

------
jtraffic
I have never come across a well-defined concept of "The Singularity." Society
still doesn't agree on how to compare intelligence between humans. What is the
measurement approach that I could use to determine if Kurzweil is correct?

~~~
ericras
Kurzweil has a fantastically vague definition: the point after which things
are so different we can't presently comprehend the magnitude of change.

~~~
jankedeen
That is sort of the Vingean 'mystery' take. If you enjoy his writing style and
SF, 'Marooned in Real Time' is worth a read.

------
sonink
I think maybe both Elon Musk and Ray are looking at it wrong. The ideas behind
the neural lace/neocortex have an implicit human-centric bias. In some ways
this is similar to how, centuries ago, some people thought Earth to be the
center of the universe.

AI is coming on a vastly different 'hardware platform'. There is little reason
to believe that it will be limited by cognitive constraints implicit to the
human form. Extrapolating the rate of growth in processing power makes it
extremely likely to surpass human scale processing very soon.

What both these things imply is that AI has little to gain by 'tying' itself
down to humans through a neural lace or the neocortex.

~~~
ericb
Computer programs don't have goals unless provided with them. They don't have
wants, or needs, or interests or hopes or dreams. Even if they start
"thinking" the default would be a zen lack of attachment to "existing" or
"advancing". If I write a C program, it doesn't care if it terminates.

Humans can provide the drives on an ongoing basis. The danger is providing any
drive that can get out of hand before we can revise it.

[https://wiki.lesswrong.com/wiki/Paperclip_maximizer](https://wiki.lesswrong.com/wiki/Paperclip_maximizer)

~~~
sonink
Not too sure.

All programs have a very explicit goal - a lot more specific than most humans
do. You would have to try very hard to write a program that doesn't have a
goal.

Your laptop has a need for battery power, and it lets you know when it is
running out. Not very different from humans being hungry.

AI might not have hopes or dreams. But a lot of recent game playing AI can
simulate possible future scenarios and develop responses for them. Not very
different from hopes and dreams.

If you write a C program, in some ways the only thing on the 'program's mind'
is to finish its goal and terminate itself. Being in a hurry to finish our
goals and terminate ourselves is also perhaps how an alien intelligence,
looking in from the outside, would describe the human race.

Humans can provide the drive only because we make bigger plans than AI. And
our plans are a direct expression of our computing power. There is no good
reason why AI won't beat that processing power.

~~~
ericb
I think our goals come in because we have evolved drives vs. being programmed,
not because of processing power.

At the same time, giving an AI a goal is the moment when we become subject to
an indifferent genie who doesn't share our values.

In terms of your original question "what use would they have for us?" I guess
my answer is something like: if we program them correctly, helping us will be
their only goal. If we give them other goals, even slightly off the mark and
don't have a tightly integrated way to course correct (like the neural lace),
then we will be like ants crushed by an indifferent boot-heel.

~~~
sonink
'If we program them correctly, helping us will be their only goal' - I think
this is too optimistic a view of the future. I suspect that it is also a bit
simplistic.

Even if we assume that we have a way to 'program' them, the problem is that we
ourselves don't know what it means to 'help' humans.

If those humans were part of the US military facing a situation similar to
WW2, it would mean the AI 'helps' those humans drop some nuclear bombs. If
those humans belong to a religious fundamentalist group, it might mean the AI
helps further the cause of their gods.

One probable outcome this chain of thought leads to is that the last AI
standing will be the one which kills all other AIs, or has the capability to.
A possible corollary: to do so, the last AI standing will be the one able to
prioritize its own survival over everyone else - even humans. Which was not
the plan that we started with.

------
xiaoma
I read Kurzweil's earlier book, _The Age of Intelligent Machines_, as a boy
and remember how fantastical it seemed. At the time I felt like he laid out a
strong case for his predictions, but every adult I knew, including my
teachers, thought it was nuts... especially his predictions of a chess AI
beating the world chess champion and of sequencing the human genome by the
year 2000.

Both of those things ended up happening a couple of years _before_ 2000. While
practically the entire world was saying that Kurzweil was far too optimistic,
it turned out that, on the whole, his predictions were slightly too
pessimistic.

It would be foolish to discount this.

~~~
goatlover
Sequencing the genome always comes up as an example of underestimating the
application of Moore's law, but mapping the genome is not the same as
understanding how specific genes work, or the epigenome and the interplay with
the environment, which is a lot more important than merely sequencing genes.

~~~
xiaoma
Yes. A chess AI defeating Kasparov also always comes up as an example of
Moore's law, but creating an AI capable of using brute force to examine all
plausible moves dozens of plies deep into the tree of potential games is not
the same as creating one capable of making the more strategic assessments of a
board needed to win games such as Go or Texas hold'em.

Kurzweil has done great work mapping out when AI is likely to reach each of
these milestones and so far has been far more successful in his public
predictions than anyone else.

------
donjigweed
Worth linking to PZ Myers' takedown of Kurzweil's ignorance of biology. [1]

[1] [http://scienceblogs.com/pharyngula/2010/08/21/kurzweil-
still...](http://scienceblogs.com/pharyngula/2010/08/21/kurzweil-still-doesnt-
understa/)

------
Nuzzerino
Kurzweil's prediction of human-level AI is not the same as his Singularity
prediction, which he measures as a machine having processing power greater
than the combined processing power of all human brains on Earth. For that
date, he has given 2045.

------
jankedeen
Not a singularity, but any cyber-augmented memory interface would lend the
appearance of genius until the inputs/outputs go all wrong. Then you have
artificial Alzheimer's and dementia.

------
powera
Kurzweil should be treated equivalently to a new-age mystic. He doesn't claim
there is any evidence supporting his claims, and we shouldn't believe there is
any either.

------
mrfusion
I wonder if he's taking the end of moores law into account though?

It's a fairly recent development and I'm not sure how many futurists have
refitted their models for that.

------
devindotcom
But when will it be equally distributed?

------
reasonattlm
My thoughts on Kurzweil's predictions from 12 years ago:

My own two cents thrown into the ring say that the class of future portrayed
in _The Singularity is Near_ (TSiN) is something of a foregone conclusion. It's
quite likely that we'll all be wildly, humorously wrong about the details of
implementation, culture and usage, but - barring existential catastrophe or
disaster - the technological capabilities discussed in TSiN will come to pass.
The human brain will be reverse engineered, simulated and improved upon. The
same goes for the human body; radical life extension is one desirable outcome
of this engineering process. We will merge with our machines as nanotechnology
and molecular manufacturing become mature technologies. Recursively self-
improving general artificial intelligence will develop, and then life will
really get interesting very quickly. And so forth ... the question is not
whether these things will happen, but rather when they will happen - and more
importantly, are we going to be alive and in good health to see this wondrous
future?

We humans are in the process of building tools that enable us to create or
meaningfully interact with ever-greater complexity. There is one important
area of complexity management in which we seem to be making little headway,
however: the organization of humans in business and research. For all that we
can now accomplish with faster computers and enormous leaps in
telecommunications, we don't seem to have made significant inroads in getting
large numbers of humans to cooperate efficiently. I am prepared to go out on a
limb here, as I have done before, and say that business and research cycles
that involve standard-issue humans are incompressible beneath a certain
duration - they cannot be made to happen much faster than is possible today.

This is not to say that they cannot be made cheaper. But cheaper doesn't
equate to faster business and research cycles; rather, it means that any given
problem will be tackled by many more parallel attempts. Expensive projects
mean conservative funding organizations, which means organizational matters
proceed at a slow pace. This is a defining characteristic of our time: we have
blindingly fast rates of research and technological advances once the money is
on the table, but the cycles of business, fundraising and research are still
chained to the old human timetable.

Kurzweil's Singularity is a Vingean slow burn across a decade, driven by
recursively self-improving AI, enhanced human intelligence and the merger of
the two. Interestingly, Kurzweil employs much the same arguments against a
hard takeoff scenario - in which these processes of self-improvement in AI
occur in a matter of hours or days - as I am employing against his proposed
timescale: complexity must be managed and there are limits as to how fast this
can happen. But artificial intelligence, or improved human intelligence, most
likely through machine enhancement, is at the heart of the process.
Intelligence can be thought of as the capacity for dealing with complexity; if
we improve this capacity, then all the old limits we worked within can be
pushed outwards. We don't need to search for keys to complexity if we can
manage the complexity directly. Once the process of intelligence enhancement
begins in earnest, then we can start to talk about compressing business cycles
that existed due to the limits of present day human workers, individually and
collectively.

Until we start pushing these limits, we're still stuck with the slow human
organizational friction, limits on complexity management, and a limit on
exponential growth. Couple this with slow progress towards both organizational
efficiency and the development of general artificial intelligence, and this is
why I believe that Kurzweil is optimistic by at least a decade or two.

