
Paul Allen: The Singularity Isn't Near (2011) - rblion
http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/
======
bdr
When "The Singularity Is Near" was first published, I asked Bill Gates what he
thought of the book. He said "I don't know how near the singularity is, but I
haven't heard any convincing argument why it won't happen". That seems like a
more sensible position than the one offered by Allen in this piece. There is
indeed a lot of uncertainty in the rate of progress. But Allen contorts that
fact to say that the singularity _isn't_ near. He offers three basic
critiques:

First, he points out that Kurzweil is extrapolating, and extrapolations can be
wrong. This seems obvious. And it doesn't do much towards Paul's "isn't".

Second, he says that software and hardware will have to keep improving, and
that might not happen. Again, this seems intellectually equivalent to saying
"you might be wrong". No evidence is provided that progress will slow.

Third, he says that the singularity will require either a bottom-up
biologically inspired model, or a non-biologically inspired "AI" system.

The complexity of the former will take a long time to overcome. Since Allen
concedes that sufficient computational power is already here, he seems to be
arguing that it will take over a hundred years for us to have a detailed model
of the human brain. Looking at the progress of science, this seems terribly
conservative, yet little justification is offered. Allen's posited "complexity
brake" seems positioned tendentiously — why are we going to start hitting it
now, instead of fifty years ago?

As for the AI route, he writes: "But when we step back, we can see that
overall AI-based capabilities haven’t been exponentially increasing either, at
least when measured against the creation of a fully general human
intelligence." What does it even mean to be exponential relative to a binary
variable? He is right to say that current methods are limited, and don't
achieve generalized intelligence, but offers no reason why that won't happen.
His argument seems equivalent to simply saying it hasn't happened yet.

This piece offers little intellectual contribution, and the only reason we're
reading it is because of the author. It reads like he started from the
conclusion he wanted, and threw some paragraphs in that direction. His
argument, essentially, is that he's betting _against_ continued progress in
science, software, and hardware. That just seems crazy.

~~~
angersock
( I don't believe Allen ever actually conceded the point you seem to wish to
attribute to him, even after multiple readings. )

There's every reason to believe that things aren't going to get faster or
better--CPU development seems to be petering out compared to the gains of
yesteryear. Worse, the products that drive those sales, and the markets that
drive those products, seem utterly uninterested in anything beyond simple apps
and hardware-as-appliance.

The thing a lot of people seem to forget is that the market is what drives
this stuff, and that in the absence of market forces not very much happens.
This is the world we live in.

~~~
orangecat
_CPU development seems to be petering out compared to the gains of yesteryear_

This is arguably true if your primary metric is single-threaded performance on
branchy-integer-type code. Admittedly this is what normal users see most of
the time, which is why consumer desktops and laptops haven't been very
exciting the last few years. But the cost to build a petaflop supercomputer
keeps dropping.

~~~
freshhawk
Sure, but the cost to solve a petaflop problem is not. We hit a point where
the bottleneck is no longer flops but being able to organize systems that can
use them effectively.

However fast you think we will overcome this problem, it illustrates to many
people that the gap between chip speeds and the parts of the world that are
not improving exponentially starts to matter a great deal when you actually
want to solve a problem by moving floating point numbers about.

------
jasonkolb
Thank you, Paul, for speaking out against this cult.

It's very hard for me, as a programmer, to take this idea of a singularity
seriously. I know how all of the technologies that Kurzweil is banking on work
at a very low level and I call bullshit that this is ever going to happen. It
would require a type of software that simply does not exist yet, something
akin to self-programming software, and we are so far from that that it might
as well be cold fusion.

I think it's much more valid to call the "singularity" the point in time when
technological expansion started occurring at an exponential rate. Thus I would
put the singularity at about 200 years in the past, right around the time the
spinning jenny was invented, and right before the industrial revolution. Now
there's a point in time that I can point to and say "something meaningful
happened". This is complete pie-in-the-sky stuff, and I'm cynical enough to
suspect it's just something that Kurzweil talks about to sell books and
conference tickets.

~~~
dbaupp
Just a point that should be made in any discussion about the "Singularity":
Kurzweil's isn't the only model/definition[1] of the Singularity, and some
high-profile Singularitarians don't have a high opinion of Kurzweil[2] ("I've
come to the conclusion that Kurzweil's worldview prohibits Kurzweil from
arriving at any real understanding of the basic nature of the Singularity").

 _> I think it's much more valid to call the "singularity" the point in time
[...]_

Isn't this just redefining the word "singularity", and so making the
discussion about something entirely different? So it might be better to
qualify that with "industrial singularity" and the one under consideration
here is the "technological singularity".

[1]: <http://yudkowsky.net/singularity/schools>

[2]: <http://www.sl4.org/archive/0206/4015.html>

~~~
Symmetry
I've really got nearly zero respect for Kurzweil as a theorist. He made a
bunch of predictions for 2010 in his 1999 book The Age of Spiritual Machines.
When 2010 came around he graded his predictions and gave himself very high
marks, but when I found a copy of the book and read what he'd actually written
I found that he'd had to re-write his predictions substantially in order to
count them as succeeding.

As far as I can tell, the only way to become popular as a futurist is to lay
out predictions for the future in far more detail and with far more certainty
than could ever be justified.

------
swombat
Be sure to read Kurzweil's well-argued response too:
<http://www.technologyreview.com/view/425818/kurzweil-responds-dont-underestimate-the-singularity/>

(imho, more convincing than Allen's).

~~~
simonsquiff
One of Kurzweil's arguments in his response is "...the design of the brain
(like the rest of the body) is contained in the genome. And while the
translation of the genome into a brain is not straightforward, the brain
cannot have more design information than the genome"

However the brain's design information is the genome, the laws of physics _and
a universe_.

You only have to look at protein folding, and the complexity of the resulting
molecule and how it interacts with the rest of the world, to see how much
complexity lies outside of the genome - and it's that complex end result that
you have to simulate or replicate.

He goes on to say that "..the amount of design information in the genome is
about 50 million bytes, roughly half of which pertains to the brain. That’s
not simple, but it is a level of complexity we can deal with and represents
less complexity than many software systems in the modern world."

As above, this is greatly simplifying the end result that you're trying to
replicate - the genome is just the software that's running on the OS/hardware
of the universe. To replicate that virtually we'd also need a virtual
universe.
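Kurzweil's 50-million-byte figure is easier to evaluate with some
back-of-envelope arithmetic (my own sketch, not his derivation; the genome
size and compression assumptions are illustrative):

```python
# Rough information content of the human genome, and the compression
# Kurzweil's ~50 MB "design information" figure implies.
base_pairs = 3.2e9                       # ~3.2 billion base pairs
raw_bytes = base_pairs * 2 / 8           # 2 bits per base -> ~800 MB raw
kurzweil_bytes = 50e6                    # Kurzweil's claimed figure
compression_ratio = raw_bytes / kurzweil_bytes  # ~16:1 implied

print(f"raw genome: ~{raw_bytes / 1e6:.0f} MB")
print(f"implied compression: ~{compression_ratio:.0f}:1")
```

Even granting the compressed number, the point above stands: the genome is a
short program whose output depends on being "run" on the physics of a
universe, and that runtime complexity isn't counted in the 50 MB.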

~~~
oh_sigh
The main complication of protein folding is how the amino acid chain interacts
with itself, not its interactions with an external environment.

~~~
polyfractal
That proves the point. To fold even a simple protein, you have to be capable
of running the very challenging simulation of "real world physics at an
atomic scale".

To simulate a brain, you get to do that for the few hundred trillion proteins
that make up a brain...and all their interactions with all the other
trillions of proteins.

The complexity doesn't come from the proteins so much as the physics that
govern the proteins.

------
spindritf
My first thought when I saw the title was that Paul Allen has some patent and
obtained an injunction against the singularity.

Somehow I cannot bring myself to think about the whole idea (of singularity,
economics of emulated minds, etc) seriously. Whenever Robin Hanson (of
Overcoming Bias[1]) writes about it, I usually skip the page, even though his
ideas are new and insightful. I also read some stuff from Eliezer Yudkowsky
(of Less Wrong[2]) and again it's a great concept, he even wrote some cool
stories about it[3], but despite all the emphasis on correcting one's own
biases and other follies of the mind, it feels way too much like some sort of
techno-cyberpunk religion from the 80s.

[1] <http://www.overcomingbias.com/tag/future>

[2] <http://lesswrong.com/about/>

[3] <http://lesswrong.com/lw/qk/that_alien_message/>

~~~
TeMPOraL
I fear that one of the worst things religion did to people is to make them
think that every hope of doing something amazing and making the world a better
place is insane. People are afraid of great dreams, because they think it's
religion. They think poorly of singularity-related stuff just because they
pattern-match long life (and/or immortality) and superhuman intelligence with
what religions promise and talk about. They shouldn't. Living longer, better,
improving ourselves and extending our capabilities beyond what is currently
possible is not faith-stuff; it is the dream of mankind at large, and we're
building our technology to achieve that.

~~~
freshhawk
The best thing that religion did is show modern man the dangers of choosing
faith over evidence. People don't call the singularity a religion because they
want the world to be better or want to live forever or know more. Everyone
wants that. They call it a religion because people believe this will happen in
their lifetimes because they really want it to. They have faith and have shown
no evidence. It's not about the belief in improvement, it's about the belief
in prediction of a specific kind of improvement and a prediction of a
timeline.

------
crazygringo
I would upvote this a million times if I could. The whole concept of the
"Singularity" confuses hardware with software.

Yes, _hardware_ technology has followed an exponential rate of improvement,
and there's no obvious reason to believe that will stop.

But _software_ certainly hasn't. Programmers today still deal with the same
issues they dealt with 30 years ago. Separately, we've figured out some cool
pattern-recognition techniques, but there's absolutely nothing indicating
"exponential" growth in how smart our programs are.

Yet the "Singularity" depends primarily on software, not hardware. And all
this talk about getting around it by just "scanning in" actual brains and
simulating them on hardware... well that still doesn't show how those
simulated brains are suddenly going to get smarter than our own.

(And then you've got a whole lot of human-rights and personhood issues when
you start dealing with actual seemingly conscious brains running on silicon,
with real childhood memories, real emotional desires and whatnot -- I mean,
they would basically be actual people, not some kind of abstract AWS
brainpower cluster...)

~~~
bloaf
I'm not sure that the software/hardware break is as clear cut as that:

<http://bits.blogs.nytimes.com/2011/03/07/software-progress-beats-moores-law/>

~~~
crazygringo
Kurzweil makes the same point in his rebuttal.

But all he (and the article) is talking about is speed improvements to
algorithms. Singularity-type AI is not about the speed, and it's not about
complexity in terms of number of moving parts either (a million-line code base
might be larger than a 10,000-line one, and more complicated, but not
necessarily any more conceptually complex).

What matters for the "Singularity" is _conceptual_ improvements. We're still
writing brittle computer code that fails to compile over a single slightly
misspelled constant name or missing comma, where the intention would still be
crystal-clear to any human programmer. I don't see any kind of "exponential"
progress whatsoever in the fundamental building blocks of artificial thought,
which is what the entire "Singularity" premise is based on.

------
reasonattlm
<http://www.fightaging.org/archives/2005/09/reading-the-sin.php>

I am prepared to go out on a limb here, as I have done before, and say that
business and research cycles that involve standard-issue humans are
incompressible beneath a certain duration - they cannot be made to happen much
faster than is possible today.

Kurzweil's Singularity is a Vingean slow burn across a decade, driven by
recursively self-improving AI, enhanced human intelligence and the merger of
the two. Interestingly, Kurzweil employs much the same arguments against a
hard takeoff scenario - in which these processes of self-improvement in AI
occur in a matter of hours or days - as I am employing against his proposed
timescale: complexity must be managed and there are limits as to how fast this
can happen. But artificial intelligence, or improved human intelligence, most
likely through machine enhancement, is at the heart of the process.
Intelligence can be thought of as the capacity for dealing with complexity; if
we improve this capacity, then all the old limits we worked within can be
pushed outwards. We don't need to search for keys to complexity if we can
manage the complexity directly. Once the process of intelligence enhancement
begins in earnest, then we can start to talk about compressing business cycles
that existed due to the limits of present day human workers, individually and
collectively.

Until we start pushing these limits, we're still stuck with the slow human
organizational friction, limits on complexity management, and a limit on
exponential growth. Couple this with slow progress towards both organizational
efficiency and the development of general artificial intelligence, and this is
why I believe that Kurzweil is optimistic by at least a decade or two.

------
tocomment
Is there any food that makes a substitute for bread on sandwiches?

I've always wondered that. I think I could substantially reduce my carbs but I
love sandwiches.

Any ideas?

~~~
tocomment
Ok, I'm really curious why my post in the wrong thread has 6 upvotes?

------
jamespitts
Extrapolating future-history is extremely difficult. You get into this mode
where you extend the current set of capabilities and limitations until you hit
an edge, and to get past it you invoke magic.

But it is so enjoyable and useful to creatively extrapolate or generate
history, if only to encourage us to sink our time and energy into pushing that
edge.

------
kingkawn
If you assume that a breakthrough is possible, and desire its occurrence, then
you've got a subjective bias to find a way to logically predict it will happen
within the bounds of your lifetime.

~~~
sampo
Actually, some of the singularity people regard the singularity as both
inevitable and coming soon(ish), and as their worst nightmare. Not desirable
at all.

The superintelligent machines will take over the world and probably destroy
humankind as a side effect. Consequently, they think that we urgently need
philosophical musings over possible ways to ensure that the inevitably
created superintelligent AI overlords would be built using principles that
make them friendly to humankind. This is (according to them) the only hope of
saving humankind from extinction in the near future.

~~~
kingkawn
I don't equate desire and positive outcome. Sometimes the waiting is worse
than the thing itself, especially if you know it's coming.

------
seiji
The quotes on their website are a hoot. First, there's a modal you have to
manually dismiss: "Singularity University is acquiring the Singularity Summit
from Singularity Institute."

Then there are gems like "The Singularity Summit is the premier conference on
the Singularity. As we get closer to the Singularity, each year's conference
is better than the last."

They seem kinda obsessed with "thought leaders" and not so much "thought
doers."

~~~
dbaupp
"Their website" and "they". Who are you referring to?

Edit: I guess Singularity Institute <http://singularity.org/> since the
banners and text seem to match up, maybe?

In any case, the news of the transfer of the summit is evidence against your
last point. It means (I hope) that Singularity Institute is trying to free up
their researchers to be able to concentrate on actual work, rather than
organising a conference every year.

------
DaniFong
Here's a probably important proposition that Peter Thiel and Garry Kasparov
have been putting forward, and I have yet to see engaged with and answered:

If we are truly accelerating technologically, why have the exponential gains
in wages and PPP in the developed world largely stopped? They have hardly kept
up with inflation.

Compare 1891 - 1931, 1931 - 1971, and 1971 - 2011.

People point to improvements in computer technology -- why have they not
yielded significant improvements in the world of stuff? There has not been the
expected productivity increase at all.

~~~
yk
The way I think about this, you have to correct for 'technological
deflation.' What I mean by this is: if you only think about the monetary loss
of purchasing power, then you will probably conclude that an iPhone costs
something like $500 (in 2000 dollars). However, to actually buy an iPhone in
2000 you would need to spend several billions, and delivery would take
something like ten years. (Most of this is developing processors and displays
that can power an iPhone.) Another example would be flat screens; there were
some in 2000 (about as expensive as a car). So you don't see this increase in
wages simply because you are comparing against a moving target.

~~~
DaniFong
If that were the case, and computing technology were part of the PPP goods
bundle, wages would be seen as going UP.

The question is, if you subtract computers, why do you not see any
improvement? Is this not evidence of a technological slowdown, at least in the
world of stuff?

~~~
yk
My impression is that the world of stuff moves slower but is moving. (I do
not have nice data to back this up, but cars are more fuel efficient, airline
tickets decrease in price, etc.) And on top of this you get a lot of 'add a
computer' inventions. For example, (non-mobile) phones which store addresses.

------
robbiep
This whole thing (futurism, the (+-Not)singularity, what may one day come) is
fascinating to me and clearly pretty much everyone.

Getting aside from the pseudo-religious arguments which are often raised in
objection (and, personally I believe legitimately), I think we are left with 2
options:

1) the Singularity is a real thing which will happen at some point in the
future

2) Consciousness is not able to be crafted by man and the logistics of
downloadable consciousnesses and infinitely extendable lifespans are forever
beyond us, and life will continue pretty much as it has since pre-history,
with better technology making a 'richer' life a possibility for a greater and
greater proportion of humanity.

As much as I would hope for option 1), the question I feel needs to be asked
is: why haven't other conscious beings (which logically must exist elsewhere
in the universe regardless of the numbers you plug into the Drake equation)
come to us in their flying robot suits?

The complex interplay of neurology and computing is only in its infancy, and
I await with bated breath the advances we are making in our understanding.

~~~
swombat
Concerning your question, you're probably familiar with these arguments, but
here goes.

Assuming it's true, the main conclusion from Fermi's paradox (which is really
an observation more than a paradox) is that there is some kind of "cliff"
that vastly reduces the number of civilisations that achieve the technology
required to come visit us with their robot suits.

The interesting question is, where is this cliff: before where we are now, or
after?

If it is before, then that's great for us. What it means is, for example, that
perhaps the evolution of life, or multi-cellular life, or animal life, or
intelligent life, etc - is so unlikely that even though there are trillions of
trillions of attempts, those that succeed are so far apart that they will
never meet (perhaps thanks to the expansion of the universe, or through the
difficulty of travelling across interstellar distances, etc). That's the lucky
scenario.

The unlucky scenario is that this cliff is after where we are now. Perhaps
there are millions of intelligent species even in just the Milky Way, but
perhaps intelligent life is doomed to self-destroy eventually (for example by
reaching a Singularity, building a Dyson sphere, and then basically
disappearing up its own arse; or perhaps by ending in some kind of nuclear
war, or biological wipeout after someone's biology experiment goes horribly
wrong, etc).

That latter scenario would mean that we're likely to kill ourselves before
_we_ get to the stars.

I hope for the first scenario.

Edit: Of course, as pixl97 points out, perhaps Fermi's paradox is an incorrect
observation, and the aliens are already here, they're just being really
cautious about being observed.

~~~
sampo
_"perhaps the evolution of life, or multi-cellular life, or animal life, or
intelligent life, etc - is so unlikely ..."_

Bacterial life appeared on Earth pretty much as soon as the planet had cooled
down.

An argument could be that some extremely unlikely chemical coincidence was
needed to create life, but I think the observation gives more support for the
theory that if you have a planet with prebiotic soup, the biotic part is going
to kick in pretty soon.

But then it took 4 billion years -- or 30% of the universe's age so far --
for life to invent multicellularity (with some important steps, like the
invention of the nucleus and thus eukaryotic cells, still unicellular, half-
way in between).

I don't have a problem believing that bacterial life is abundant in the
universe. But life on Earth, inventing multicellularity only in 4 billion
years, and then sentience only in 0.6 billion more (multicellularity obviously
being the harder part), is among the fast ones.

Wait another 4 billion years to give the slower ones a fair chance to discover
multicellularity, too.

------
melling
I can't say that I ever bought into the singularity by 2045. Things always
take longer than expected. When I was a kid, I always thought that technology
in 2001 would be incredible. As it stands now, the US doesn't even have a
manned space vehicle.

Anyway, the one good thing that could come out of the Singularity discussion
is maybe we can have a concerted effort to make it happen. A lot more research
money into big science, for example. Hypersonic flight, maglev trains, the
Texas supercollider, space exploration, medical research, clean energy, etc.
Imagine if the first flight had happened 100 years before the Wright
Brothers, or the telephone had been invented even 50 years earlier. We might
not have a Singularity by 2045, but we can actively accelerate our quest for
knowledge.

~~~
colomon
The technology in (real world) 2001 _was_ incredible. But all the progress was
in computers and communication, not travel.

Back in the 20th century, everyone looked at the incredible advances in
travel technology from 1850-1950 and assumed the growth curve would just keep
on going. But it didn't; it peaked somewhere in the 1970s. Since then travel
has gotten cheaper, but not faster.

I think the Singularity is the exact same sort of projection. Progress in
computer technology was incredibly fast from 1970 to 2000. My phone has a
processor thousands of times faster than my first computer, back in 1982. And
if progress continued at that rate, the Singularity probably would be
inevitable. But it hasn't. My desktop computer today is not significantly more
powerful than the machine I had in 2007. My laptop is much better than my
laptop then, but it's only on par with my 2007 desktop. My phone is insanely
better than my 2007 phone. The current trend is for computing power to get
smaller and cheaper, rather than getting more computing power. That's great,
but I don't see it getting us to the Singularity anytime soon.
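The "thousands of times faster" comparison above is roughly what naive
Moore's-law doubling predicts; a quick sketch, assuming a ~2-year doubling
period (the 1982 and 2012 endpoints come from the comment, the doubling
period is my assumption):

```python
# Naive Moore's-law arithmetic for the 1982 -> 2012 comparison.
years = 2012 - 1982        # 30 years between the two machines
doublings = years / 2      # one doubling every ~2 years -> 15 doublings
speedup = 2 ** doublings   # 2^15 = 32768x

print(f"{doublings:.0f} doublings -> ~{speedup:,.0f}x")  # ~32,768x
```

Which is the right order of magnitude for "thousands of times faster", and
also shows why even a modest stretching of the doubling period compounds into
a huge shortfall by 2045.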

~~~
pemulis
> The current trend is for computing power to get smaller and cheaper, rather
> than getting more computing power.

I think it's a mistake to compare specific devices from different eras, like a
2007 desktop and a 2012 desktop. In order to find total computing power, you
need to add your phone, tablet, desktop, laptop, and cloud services together.
When you do that, you see that we all have far more computing power than we
did in the past, but we've chosen to spread that power over a variety of
different devices.

------
jeremyjh
I agree that there is no convincing evidence the Singularity is so near, but
I don't agree with his central argument that a software engineering
breakthrough requires accurate models of human cognition. The AI and machine
learning successes we've had over the last decade aren't based on developing
some kind of imperative algorithm for evaluating a specific problem domain.
We don't need one for cognition either; we just need a machine that can learn
how to learn better. That is really all the singularity is: unbounded
recursive self-improvement.

------
Neuromorphism
Hmm, Kurzweil joins Google -> Allen says Singularity is bunk. Looks like there
has been an ongoing exchange, but funny that both items make the front page of
HN on the same day...

~~~
kristofferR
This is from 2011, so it had absolutely nothing to do with Kurzweil joining
Google.

------
swalsh
I would suppose that by 2045 we might technologically be capable of "the
singularity", but I'm not convinced we'd be socially accepting of a
singularity. The old saying that the future is already here, it's just not
evenly distributed, will still apply 30 years from now.

If you could quantify social progress, I would theorize that it grows
linearly. Which means our capability and how it fits into our society are
becoming decoupled from each other by an increasing amount every year.

------
olefoo
Thesis: The singularity must be prevented at all costs.

Evidence: The increasing uselessness of human beings to economic activity.

Evidence: The callousness of existing suprahuman organisms (e.g. Monsanto,
Dupont, Keystone XL, the US Government) toward economic externalities with
biological consequences.

Conclusion: The singularity would most likely result in the complete
extinction of biological human beings.

Nota Bene: Structured like a High School debate contrapositive because that's
what the singularity always sounds like.

------
meric
It's only 32 years until 2045. 32 years ago was 1980.

Emacs was first released in 1976.

Lisa with its mouse and GUI was first developed in 1978.

The first handheld mobile phone was demonstrated in 1973.

The TCP/IP protocol was first standardized in 1982.

Thirty years on we're still relying on the same core technologies, though more
mature.

What new technologies have appeared in recent years that will become more
mature in 5, 10, 20, 30 years?

------
charlieflowers
Can anyone point me to information about this question: "What have we done to
discover and understand the differences between human and animal brains, to
see what accounts for the vast difference in intelligence?"

I'm sure I could find a lot by Googling, but asking this group seems to be a
more efficient way to benefit from some curation.

~~~
Lost_BiomedE
I know there is a lot of on-going research in learning the 'operating system'
of the brain across quite a few animal models. In rats, mice, and monkeys, a
lab where I work looks at the neuron level while these animals learn
behavioral tasks. Classic behavioral tasks are merging with basic neuroscience
and advanced imaging, with the help of computing.

At the neuron level, we are still at very immature levels of understanding,
but the path and direction being taken looks very promising.

------
DannoHung
I thought the singularity just meant that technological change was so rapid
that it ceased to have meaning.

I feel like we might be there today. I'm not sure what could be engineered
tomorrow that would surprise me meaningfully.

If someone says that they have a working fusion reactor, I'd just be like,
"Geez, finally. What the hell took so long?"

------
dgregd
This is a discussion about intelligence. But do we have a clear definition of
intelligence?

Without a clear definition, one can say that the Singularity already
happened.

Chess masters are definitely considered intelligent. So why aren't the
computer programs which beat them called superintelligent? Because they use
brute-force algorithms?

~~~
dbaupp
Because the chess engines aren't sufficiently general purpose.

[http://lesswrong.com/lw/vb/efficient_crossdomain_optimizatio...](http://lesswrong.com/lw/vb/efficient_crossdomain_optimization/)
tries to address questions like the ones you ask. (Obviously the definition
proposed there of "efficient cross-domain optimisation" isn't _the_ definition
of the word "intelligence", because no such thing can exist, but it is a
definition that seems to match with our intuitions well.)

------
troymc
Note: The linked article was published in October 2011. It's not exactly news,
but does make for stimulating reading!

------
teeja
"The singularity isn't near". Only became obvious when Kurzweil signed with
Google???

------
michaelochurch
I'm fairly skeptical of a near-term singularity, but he provided no evidence
that technical growth _isn't_ exponential. Exponential doesn't mean "fast". He
established that problems that were once impossible are still very difficult.

Economic growth _is_ faster than exponential. We see increasingly rapid
growth at any point in time: pre-Cambrian vs. post-Cambrian evolution, pre-
mammalian vs. mammalian, evolutionary vs. paleolithic, pre-agrarian vs.
agrarian, agrarian vs. industrial.

I don't think we're going to see a "Singularity" in 2045. We might see 10-50%
annual economic growth by then. That wouldn't surprise me. At this point,
we're probably making serious in-roads on a wide variety of health problems,
and life expectancy at birth will probably be over 85 and may be undefined. I
think it's a good bet that someone born in 2045 will see 3000, not because of
a Singularity, but because such a person won't even begin to experience old
age until the 22nd century.

~~~
tbenst
Student of Applied Mathematics - Economics here. I took a class with Oded
Galor, the primary proponent of unified growth theory [1], a single model
that describes the transition from the Malthusian trap [2] to an era of rapid
growth, and finally to a sustained growth regime. According to this economic
growth model, the United States has already reached the sustained growth
regime - meaning that we can expect continued growth of, say, 1-2% for
eternity.

Economic growth is NOT faster than exponential in a sustained-growth steady
state. Do some googling for long-term economic forecasts, and you will find
much research that supports low single-percent growth for the 21st century
[3].

[1] <http://en.wikipedia.org/wiki/Unified_growth_theory>

[2] <http://en.wikipedia.org/wiki/Malthusian_trap>

[3] <http://www.economist.com/blogs/buttonwood/2012/11/economic-outlook>
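The gap between a sustained-growth regime and the super-exponential claims
upthread is easy to see with quick compound-growth arithmetic (illustrative
rates and endpoints of my choosing, not numbers from Galor's model):

```python
# Compound growth over the rest of the 21st century: sustained 1.5%/yr
# vs. the 10-50%/yr rates mentioned upthread.
years = 2100 - 2012
sustained = 1.015 ** years    # ~3.7x total by 2100
accelerated = 1.10 ** years   # thousands of times larger by 2100

print(f"1.5%/yr for {years} years: {sustained:.1f}x")
print(f"10%/yr for {years} years: {accelerated:,.0f}x")
```

A factor of roughly a thousand separates the two scenarios by century's end,
which is why the dispute over "exponential vs. faster than exponential" is
not a quibble.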

