
What is the Technological Singularity? - zeedotme
http://thenextweb.com/industry/2011/06/19/what-is-the-technological-singularity/
======
possibilistic
The singularity is _not_ near.

Kurzweil continues to pitch his prognostications for strong AI so that he can
sell more books and speaking engagements. Despite experts in the fields of
biology [1] and cognitive science telling him that we just don't understand
these things very well, he professes that the key discoveries lie just around
the corner and that we should just wait. You know, there's probably a business
in selling some of his followers rapture insurance...

I'm a systems biologist who is at an intersection of some of the various
fields where Kurzweil makes his boldest predictions, and I'm willing to place
some bets. Before we see this ludicrous explosion in technology and AI that he
describes, I think we're much more likely to see all of these:

* A cure for AIDS.

* Whole-genome sequencing for under $100.

* A scale up and widespread use of tissue bioreactors.

* Identified causes and cures for neurodegeneration.

* Effective treatments and diagnostic tools for many or most forms of cancer.

* Artificial hematopoietic cells approved for human use (no more blood donations, etc.)

* Designer babies with hundreds if not thousands of selected markers.

These problems are easier than human-surpassing machine intelligence. While I
don't have a problem with believing machines will one day be smarter than us,
I find it offensive that this guy continues to sell us on it happening within
our lifetimes. It almost certainly won't.

Kurzweil has almost no appreciation for (and, I would argue, little understanding
of) biological systems and their complexity. [2] We can barely understand a single
cell, and yet we're supposed to understand the brain in only a few years!

The best part is, when he is called out on this argument, Kurzweil admits it.
But he then proceeds to tell us silly biologists the error of our ways--
that we are missing his point entirely. Strong AI doesn't need understanding
from the field of biology in order to happen. Consciousness can't be _that_
hard.

In closing, SMBC's take: <http://www.smbc-comics.com/comics/20100114.gif>

[1] PZ Myers vs. Kurzweil -
[http://scienceblogs.com/pharyngula/2009/02/singularly_silly_...](http://scienceblogs.com/pharyngula/2009/02/singularly_silly_singularity.php)

[2]
[http://pipeline.corante.com/archives/2010/08/18/reverseengin...](http://pipeline.corante.com/archives/2010/08/18/reverseengineering_the_human_brain_really.php)

~~~
DavidChouinard
I sympathize with your position. Kurzweil seems to regularly make strong
remarks that shock me with their assertive naivety.

However, our species is incredibly poor at intuitively understanding
exponential trends. Kurzweil's essential point is that our gut-feeling
predictions of the future will be off substantially. Specifically, our
intuitive sense of future progress significantly underestimates actual
progress and the gap systematically widens as time progresses.

Neither you nor I could have intuitively predicted twenty years ago the level
of today's technological advancement. And we'd be even more wrong in
predicting the next twenty.

It's far too easy to dismiss Kurzweil's work as that of a superficial nutjob
who wants to "sell more books and speaking engagements." Kurzweil's crunched
the numbers. While he might not be right on everything, his predictions are
based on data, while yours are based on human intuition that has historically
been wrong.

I would be very careful of the knee-jerk reaction you display to Kurzweil's
work. Keep in mind we are cognitively disabled at intuitively predicting
exponential progress.

~~~
Retric
The problem is that not all tasks scale linearly in difficulty. If building a 1% more
intelligent AI takes 1% more computational power then he might have a point.
However, if building a 1% more intelligent AI takes 2x the computational power
then you don't end up with a singularity, because you don't get a strong
feedback loop.

Consider starting at the low end of the scale: booting Windows 7 on a 2 kHz CPU
is about twice as fast as on a 1 kHz CPU, and all things being equal faster
processors let you get more things done, but having a 1GHz vs. a 2GHz CPU has
little effect on boot times. We might be able to build a 10x human-level
intelligence, but if that can only build an 11x human intelligence, and that can
only build an 11.1x human intelligence, you quickly hit a wall. Or, for a more
natural-looking progression (10x, 15x, 18.75x, 21.09x, 22.4x):
x_1 = 10; x_n = x_(n-1) * (2^(n-1) + 1) / 2^(n-1)
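
A minimal sketch of that progression (mine, in Python, not from the original
comment), just to show how quickly the multiplier stalls even though every
generation builds the next:

    # Each generation's relative gain is half the previous one's: 1.5, 1.25, 1.125, ...
    def intelligence_progression(steps, x1=10.0):
        x = x1
        for n in range(1, steps + 1):
            yield x
            x *= (2 ** n + 1) / 2 ** n

    print([round(x, 2) for x in intelligence_progression(8)])
    # [10.0, 15.0, 18.75, 21.09, 22.41, 23.11, 23.47, 23.66] -- plateaus near ~23.8x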

~~~
MatthewPhillips
The point of TS is that once you get to 1.0001x smarter than human you get to
the wall really quickly. If the wall is 2x or 11x, self-improving AI will reach it
in no time. And we'll still have tremendous benefits from it, no matter how
close the wall is to human intelligence. AI need only be a tiny bit smarter
than humans to achieve things such as interstellar travel, eternal life, etc.

~~~
Retric
Slightly more intelligent than human is not the same as magic. Self-improving
AI is a nice theory, but it assumes you can get better at everything at the
same time without tradeoffs, and that takes better hardware, not just software.
Science fiction is full of magic masquerading as technology, but it ignores
the possibility that reality may not support that capability. Predicting the
future requires more than simply extrapolating trends; it also requires
understanding physical limitations. Early engines saw rapid increases in both
HP and efficiency, but that trend slowed down for a reason.

EX: Literal eternal life requires a universe that is capable of lasting that
long. If the fundamental laws of physics limit either the age of the universe
or the amount of useful work that can be done, then at some point everything
dies. So now we are down from eternal life to just a really long life.

Also, 1.0001x as smart as a human is not really significant. 1,000 AIs each
1.0001x smarter than a human are not going to be able to, say, beat one
grandmaster at Go without training or hundreds of years of preparation. I have
no problem saying AI gets to quickly build better software, but building
hardware at scale takes time.

------
dstein
The thing I don't like about Kurzweil's singularity prediction is that it kind of
sounds like predictions in the 1960s that we'd all be living in space by now.
Yet here we are 50 years later, still burning gas in our cars, but we have
phenomenal GPS technology in our phones. People (even scientists) tend to
overestimate the "deepness" of technological advancement and underestimate the
"wideness". I predict that by the time a singularity occurs it'll be much
farther out in time and may not even be a distinguishable event.

~~~
gaius
A great example of this is Star Trek. They can travel at warp speed, but they
can't use a communicator to send a video signal back to the ship, the captain
has to beam down to see it himself...

~~~
ceejayoz
In fairness, that's more of an "interesting TV" issue than a "failure of
imagination" issue. A show about a bunch of people who sit around in a room
teleoperating a robot probably would be a lot less interesting.

------
Symmetry
It sort of annoys me that whenever people discuss technological singularities
it's always through Kurzweil's ideas. I feel that Vernor Vinge, the guy who
invented the term, had a much more interesting take on it, and there are other
schools of thought as well. [http://singinst.org/blog/2007/09/30/three-major-
singularity-...](http://singinst.org/blog/2007/09/30/three-major-singularity-schools/)

~~~
gnosis
Indeed. Kurzweil is far from the only Singularitarian. And virtually all of them
have more interesting things to say on the matter than he does.

------
wheels
_One only needs to look at history to see our capacity for rapid improvement
in retrospect. One of my favorite metrics is life expectancy. In 1800, most
people wouldn’t expect to live past 30._

It's hard to take seriously someone writing about science who makes such a
dumb mistake. The life expectancy for a 20-year-old in 1800 was 64. I don't
see how it can even pass the smell test for an intelligent person to assume
that a couple hundred years ago people systematically keeled over at 30.

~~~
burgerbrain
You are deliberately disregarding the improvements in modern medicine that
caused such a dramatic drop in child/infant mortality. This seems like a pretty
classic example of people not realizing how _wildly spectacular_ modern
society is, because they're living in it.
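
The two claims are compatible; a toy two-group calculation (my own assumed
numbers, not historical data) shows how high childhood mortality drags life
expectancy at birth toward 30 even when adults routinely live into their 60s:

    # Hypothetical figures, for illustration only.
    child_mortality = 0.55        # assumed fraction dying in early childhood
    avg_age_child_death = 2       # assumed average age at death for that group
    adult_life_expectancy = 64    # survivors live to ~64, per the parent comment

    at_birth = (child_mortality * avg_age_child_death
                + (1 - child_mortality) * adult_life_expectancy)
    print(round(at_birth, 1))  # -> 29.9: "life expectancy of 30" and "adults living to 64" can both be true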

------
ameasure
We've already passed the technological singularity, we just don't realize it
because we assume machine intelligence is going to act like human
intelligence.

Look around, computers have been more intelligent than humans in many
different domains for decades. Computers can solve complex equations billions
of times faster than humans, they can play chess better than people that have
dedicated their entire lives to mastering the game, they can sift through the
information of billions of websites in hundredths of a second, they can answer
obscure and complex questions better than the best Jeopardy players, they can
accurately model extremely complex systems like the world's weather, and the
list goes on and on and on.

But guess what, your calculator isn't plotting to kill you because unlike
humans it hasn't been programmed to do that. There may come a time when
someone is evil or careless enough to program these traits into a machine
capable of acting on them, but this is a game humans (and life in general)
have been playing for a very long time, and we're very good at it. Good luck
programming a billion years of evolution based learning into your calculator.

------
MatthewPhillips
I'm skeptical about TS because it seems too good to be true (or too terrible),
but every time I hear about it I can't help but think of this clip [1] from
Neil deGrasse Tyson about how aliens would only have to be less than a percent
smarter than us for us to look like peons to them. So it makes it seem
inevitable that, if strong AI comes to pass and AI is measurably more intelligent
than us, it's the end of human civilization and the beginning of the era of
machines. Our only hope is that they decide to keep us around as pets, and are
good owners. If so, we'll be able to kick our feet back and enjoy eternity (should
we choose to live it) debating iPhone vs. Android on Hacker News.

[1] <http://www.youtube.com/watch?v=w-uZZ7RdL5E>

~~~
bugsbunnyak
It's an interesting thought experiment, but the 1% is an oversimplification -
necessary, perhaps, for a general audience. The 1% difference he talks about is
not phenotype (reasoning ability) but genotype (DNA). It's not hard to imagine
a few critical changes, such as more cortical folds and improved myelination,
leading to most of the phenotypic cognitive difference. These could be
controlled by some key fraction of a percent of the genome.

While it works as a metaphor, it's flawed as a general metric. A species in
which Hawking-level ability is - if not average - at least commonplace would
require the average IQ to be 2-3 standard deviations higher (relatively) on
the human scale. IQ being, itself, a rather flawed metric, of course :)
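
For scale, a rough back-of-the-envelope of my own (assuming IQ is approximately
normal) for how rare 2-3 standard deviations above today's mean is:

    import math

    # Fraction of a normal population at least `sd` standard deviations above the mean.
    def fraction_above(sd):
        return 0.5 * math.erfc(sd / math.sqrt(2))

    print(f"{fraction_above(2):.2%}")  # ~2.28% of people are 2 SD above the mean
    print(f"{fraction_above(3):.2%}")  # ~0.13% are 3 SD above it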

------
motters
It's amazing how similar Kurzweil's observations about technology are to
observations Stafford Beer made in the early 1970s. Beer even draws an
exponential graph and talks about a succession of S-curve-shaped technology
paradigms.

