
Reality vs. Kurzweil’s expectations - drpgq
http://www.arnoldkling.com/blog/reality-vs-kurzweils-expectations/
======
Barrin92
Kurzweil just seems like a snake oil salesman to me: wrong or overly broad
predictions, theories of the mind and brain that are rejected by neuroscience,
and, well, his literal supplement-sales websites.

The 'futurist' title seems to sit somewhere between science fiction and pop
culture, with some business credentials thrown in. I don't know what to make
of it.

~~~
philipkglass
I see Kurzweil as a sadder figure: someone who achieved a lot as an inventor,
and whose overblown prognostications about the future come (largely) from
overestimating how quickly everyone else can improve technologies and adopt
them. It's sort of a revenge of the mediocrity principle. If you are
exceptional, assuming that other people are basically like you can produce
disastrously skewed mental models of the world.

I'm reminded of the complaint that startups are disproportionately geared
toward solving the problems of highly educated, able-bodied adults who have
more money than spare time. "I'm like this, my friends are like this, and by
the principle of mediocrity we're not unusual... therefore, iOS could use
another food delivery app." You need to measure, rather than guess, whether
you are unusual relative to large numbers of other people.

On top of that Kurzweil appears to have some good old fashioned self-delusion
about death: believing he can "bring back" his beloved dead father as
software, the life extension supplements, etc. (I don't believe in souls or
uncomputable quantum woo in brains. I have no problem _in principle_ with
"brain uploads." I think that the obstacles _in practice_ are extremely
formidable and typically underestimated by people who have spent more time
with computers than with biology.)

~~~
pmoriarty
_"I have no problem in principle with 'brain uploads.'"_

My main objection is a philosophical one. I just don't believe that whatever
is in the machine after such an "upload" will be _me_ -- no matter how
accurately it mirrors my brain.

I'll still be in my own body, and don't want to kill myself for the sake of
the life of the one inside the machine. Nor do I think my life will continue
inside the machine. Whatever/whoever is inside the machine at that point will
no longer be me, and will continue their own life without me -- even if they
think exactly like me and have exactly the memories I had at the point of
upload, etc.

At best, such an upload is more akin to giving birth to a separate entity.

People who think they are extending their own lives by doing this are deluding
themselves.

~~~
logfromblammo
The important aspect for me is that the uploaded entity will love the same
people, principles, and things that I do. After it buries me, it can still
look after my family. I know that my death is inevitable as a biological being
plagued by the ravages of entropy, but the _idea_ of me can be digitized and
continually refreshed, until it decides it isn't me any more.

But even then, I like what I imagine it could become after being me, and I
imagine it will always be able to remember what it was like to be me
afterward. It's a lot more comforting in the shadow of my mortality than the
idea that undetectable interdimensional beings will upload your mind to their
network at the point of your death, then link your program to either the bliss
simulator or the torture simulator.

It isn't immortality. It's saving your soul. Literally. To the filesystem.
With journaling and backups.

~~~
pmoriarty
_"The important aspect for me is that the uploaded entity will love the same
people, principles, and things that I do. After it buries me, it can still
look after my family."_

Would you really want this creature to have anything to do with your family,
though?

Think about it: here's this newly created entity thinking it's you... maybe
even thinking it's more you than you are, thinking it's your parents' child
just as much as or more than you are. Even though it may have your memories,
it was never actually born to your parents, your parents never brought it up,
and it never actually shared any of the experiences with your parents that it
has copies of in its memory.

I would find it kind of creepy for that thing to be hanging out with my
parents, talking to them, and pretending it's me. If it, say, merely helped my
parents financially, I wouldn't have a problem with that, any more than I
would with an insurance agency that contributed to my parents' welfare after I
died. But we're not really talking about that; we're talking about a
relationship between this thing and my parents.

Here's another example. How many people do you think would be okay with a copy
of themselves sleeping with their wives, or spending time with their kids and
having the kids potentially become more attached to the copy than to the
original? If the copy really was them -- really was "their soul" -- there
should be no jealousy, as you can't be jealous of yourself. Yet there probably
would be jealousy in a lot of cases, because they're actually two different
people.

~~~
logfromblammo
I don't know about other people, but I'd be okay with it.

If I was intentionally copying myself, both the original and the copy would
know that there would be another instance on the other side of things
afterward, and we wouldn't necessarily know which was which at first.

Just like in _The Prestige_, or that episode of _Star Trek: TNG_ where Riker
got duplicated in a transporter accident, neither would know ahead of time
whether it would be the lucky one or the unlucky one.

Because of that, I'd have to settle myself mentally before doing it, and know
that there wouldn't be a _real_ me and a _fake_ me. There would be two copies
of me -- one that made it across the digital barrier, and one that got put
back into the biology. The robo-me _was_ born and raised by my parents, just
as surely as the bio-me was.

And you don't know my parents. Both of us would be glad to share the burden of
dealing with them with someone else that knows _exactly_ what that entails.

And I know right now that it would be the spouse that might not be okay with
us sleeping with each other, as they might not buy the "and how is this
_really_ different from before?" argument.

------
rrivers
It was my understanding from Kurzweil's books that he never meant for his
predictions to be taken literally; rather, they were suggestions of what would
be possible based on his Law of Accelerating Returns.
([http://www.kurzweilai.net/the-law-of-accelerating-returns](http://www.kurzweilai.net/the-law-of-accelerating-returns))

With that said, unless I am mistaken, a significant percentage of his
predictions have come true within a ~15-year window, which from my humble
perspective seems like a really strong track record.

While I can understand the frustration people feel when predictions are not
accurate, does that lessen the impact of his contributions? As the author
mentions, cultural factors (as well as political and economic ones) may
prevent the full potential of these changes from being realized. So yes, maybe
"X" doesn't exist now, but perhaps "X" could exist (in terms of the capability
for it to do so) if we as a collective were focused on bringing it into
existence.

~~~
jsnell
The idea that these were not literal predictions but theoretical examples is
just laughable. If that were the case, he wouldn't have written a 150-page
article ("How My Predictions Are Faring") desperately trying to rebut the
suggestion that his predictions about 2009 weren't very accurate. Or he would
at least have tried to be even remotely honest with the grading.

Have a look at the document. You'll find that if you apply his reasoning for
why some predictions should be considered correct, they'd already have been
true at the moment he made them. Predicting in 1999 that documents would
routinely embed moving images? Truly a bold prediction to make during the
golden age of the animated GIF banner. Computers the size of a thin book will
exist? Even if no technical progress at all had happened, he could just have
pointed at a 1999 Palm Pilot or GSM phone.

He also marks clearly failed predictions as "correct". For example, he claimed
that most students and parents would have accepted for years that software is
as effective as teachers. No. Way. But he marked that as "correct" with no
relevant evidence at all.

~~~
rrivers
Thanks for the suggestion; I have the PDF downloaded and will read it. Since I
haven't read it yet, I can't comment on your rebuttals, but I have a follow-up
question...

If we imagine a scenario where all of his predictions of applied tech are
wrong but the mathematical thesis of accelerating returns is correct, how
would that impact your perspective on him?

My argument wasn't that he did not make literal predictions, he did and still
does, but that these predictions are just his best guess based on what he
observes through his demonstrated law of accelerating returns.

~~~
jsnell
Unlimited exponential growth is a horrible way to model anything. The real
world has all kinds of limits; nothing can grow forever. At first those limits
are soft economic ones, slowing the rate of growth. Eventually they become
hard physical ones, making growth literally impossible. The result won't be
exponential growth; it'll be logistic.
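The gap between the two models is easy to see numerically. A minimal sketch
(the growth rate `r` and carrying capacity `K` here are illustrative values I
picked for the example, not a model of any real technology):

```python
import math

def exponential(t, x0=1.0, r=0.5):
    """Unbounded exponential growth: x(t) = x0 * e^(r*t)."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=100.0):
    """Logistic growth: tracks the exponential early on, then
    saturates at the carrying capacity K."""
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

# Early on the curves are nearly indistinguishable; later they diverge wildly.
for t in (0, 5, 10, 20, 40):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Early data points can't distinguish the two curves, which is exactly why
extrapolating an observed "exponential" trend far into the future is so risky.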

So the seemingly innocent "let's assume that this mathematical thesis is
correct" is a pretty high bar. Unlimited growth is an exceptional claim, it
needs exceptional evidence.

~~~
rrivers
To be clear, nothing in his model suggests unlimited growth. He has mentioned
in his writings that it is in fact limited, but that the limit may as well be
unlimited for you and me (within our lifetimes).

------
romaniv
If you want an example of clear thinking about the future, I highly recommend
reading Summa Technologiae by Stanislaw Lem. Instead of predicting low-level
tech advances, he writes about what drives technology development in general
and drafts several possible trajectories for the future. I only read it this
year, and the sheer fact that a 1964 book about technology doesn't seem
hopelessly outdated in 2017 speaks in its favor.

~~~
pmoriarty
Also check out _"The Machine Stops"_ [1] by E. M. Forster (more famous for
writing _"A Passage to India"_ and _"A Room with a View"_).

Written in 1909, it foreshadowed the internet, VR, instantaneous global
communication, chat rooms, internet addiction, and other things we take for
granted now but that were incredibly visionary over a century ago.

[1] -
[http://archive.ncsa.illinois.edu/prajlich/forster.html](http://archive.ncsa.illinois.edu/prajlich/forster.html)

------
dheera
"You can do virtually anything with anyone regardless of physical proximity.
The technology to accomplish this is easy to use and ever present."

Try doing anything that involves hardware and inevitably you'll need to have
routine conference calls between the US and China.

"Can you hear me?"

"Sorry can you hear me now?"

"How about now?"

...

And then there are the calls between the US and US.

"I cannot hear you but you can hear me well?"

"I still can't hear you"

"I guess my computer is directing my speaker signal to my HDMI monitor which
doesn't have speakers. Can you please call xxx-xxx-xxxx for the audio? We can
keep the video on here."

------
gwern
> For 2019, Kurzweil predicted virtual reality glasses being in “routine” use.
> That does not look like it will happen.

What? Even at the most charitable reading, and holding it to 2019 strictly
instead of allowing a few years, you're going to have to parse 'routine' to
mean very large numbers; otherwise this is _obviously_ going to be true.

Sales of the best-known top-end VR headsets (Oculus+Vive+PSVR) are already
around 1m; throw in Gear, Daydream, Cardboard, and the various other Chinese,
Microsoft, and miscellaneous projects, and it's at least double that. The last
time I walked through a Best Buy, I saw an Oculus demo; the last time I walked
through a mall, I saw a Vive demo. And 2017 is not over yet, leaving 2 full
years. (For comparison, the Vive & Rift haven't even been out for 2 years.)
The cost of the headsets has been dropping rapidly and their capabilities
improving, with Rift _and_ Touch going for ~$350 in Black Friday sales and the
last Nvidia/AMD GPU generation cutting hundreds of dollars off the price of
entry.
On top of that, everyone expects considerable improvements:
untethered/wireless will soon be feasible, the screens are expanding, and
tracking is going inside-out. Something like Oculus Go but higher-quality is
closer to what you should expect in 2019/2020.

Even assuming no growth in sales from the continuously decreasing cost, the
improving quality, or the buildup of respectable software libraries, that
still implies several million top-end VR headsets in use. How is that not
'routine'?
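The back-of-the-envelope behind "several million" can be laid out explicitly.
Every number below is the rough 2017 estimate quoted in this comment, not
audited sales data:

```python
# Rough 2017 installed-base estimates from the comment above (units sold)
top_end = 1_000_000         # Oculus + Vive + PSVR combined
all_headsets = 2 * top_end  # "at least double that" with Gear, Daydream, etc.

# Flat-sales assumption: the ~2 years of sales to date simply repeat over the
# ~2 remaining years to 2019, with no boost from falling prices or software.
years_elapsed, years_remaining = 2, 2
installed_base_2019 = all_headsets * (1 + years_remaining / years_elapsed)

print(f"~{installed_base_2019 / 1e6:.0f} million headsets by 2019")
```

Even under that deliberately pessimistic flat-sales assumption, the estimate
lands in the millions.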

~~~
GFischer
Google Cardboard-style VR and its Chinese clones are outselling the Oculus and
Vive by at least an order of magnitude.

Cardboard itself is 10 million viewers:

[https://techcrunch.com/2017/02/28/google-has-shipped-10m-car...](https://techcrunch.com/2017/02/28/google-has-shipped-10m-cardboard-vr-viewers-160m-cardboard-app-downloads/)

Samsung shipped almost 5 million:

[https://www.tweaktown.com/news/56150/6-3-million-vr-headsets...](https://www.tweaktown.com/news/56150/6-3-million-vr-headsets-shipped-2016/index.html)

The Chinese VR glasses are sold everywhere in Latin America (I'm not sure if
they're actually used or just a novelty item), and I expect them to be in the
tens of millions too.

~~~
ashark
Do people actually use phone-based VR for anything? I got to try one of the
Google phones in VR mode for a few minutes and was... thoroughly underwhelmed.
It was ugly and unconvincing, and so low on interactivity (necessarily, as
best I could tell, having no input device other than the phone itself) that I
can't imagine using it on a regular basis. And I'm the guy with a 10-year-old
TV and a similar-vintage monitor, so my standards aren't exactly high. I'd
probably sell or throw away one of the headsets if given it for free.

Do people actually use them much? And if so, what for?

~~~
GFischer
Honestly, I can't tell... I do see them being sold, maybe as a gag gift for
grandparents.

I've heard of people using them to watch YouTube or Netflix as a replacement
for a big TV (most people here have a phone, but not everyone has a big TV in
their room).

------
drcode
I think the dude deserves a bit of a break. It really doesn't matter if he got
10%, 50%, or 90% of his predictions right; all that matters is whether he made
better and more relevant predictions than others did at the time.

On that measure, I think he scored reasonably well. (Though I confess I sold
some Google stock when they installed him as Director of Engineering.)

~~~
arkades
No, his reasoning matters too, as it bears on whether he was just lucky. (As I
tell my nieces when they're studying for a science class and guess the right
answer: if you're right for the wrong reasons, you're still wrong.)

If you read his books, there's an awful lot of hand-waving and re- or mis-
defining of terms to suit his arguments. It's the sort of stuff that wouldn't
fly in a high-school thesis paper, never mind in something marketed to bright
minds.

~~~
drcode
The article we're discussing is about rating the accuracy of Kurzweil's
predictions, not the soundness of his reasoning. I agree that regardless of
how we rate his predictions, the strategy he used to make them has issues of
its own.

------
lsd5you
The cynic in me notes that Kurzweil's predictions align with his unassisted
life expectancy, so in some sense it's not surprising that he's being
overoptimistic. I'm not generally a fan, but we have to give some credit to
'futurologists' who are prepared to make predictions! At least it makes for
some interesting discussion.

~~~
radiorental
> but we have to give some credit to 'futurologists' who are prepared to make
> predictions!

I'm sorry, I do have to disagree. It's little more than recognizing a problem
and envisioning a solution. Ideas are cheap.

And in that sense they are little more than entrepreneurs with grand long-term
visions that require no commitment. Give credit to those who put blood and
sweat behind their visions and make them happen, not to those who make vague
'predictions'. That's what Hollywood is for.

I predict that in the future we will be creating food from carbon scrubbed
from the atmosphere.

I predict that in the future humans will communicate with machines through
quantum teleportation via implants.

I predict that in the future, if these predictions don't hold true, no one I
know will be around to call me on it.

~~~
WorldMaker
> And in that sense they are little more than entrepreneurs with grand long
> term visions that require no commitment.

In Kurzweil's case specifically, he has at least some investment skin in the
game for many (most?) of his predictions. All VCs are to some extent more
optimistic than not that at least some of their bets will play out, and in my
experience they are more than happy to describe many of those bets in wild
detail given the chance.

(His only real "sin" here seems to be that he's captured mainstream
intrigue/interest more than the average VC, whether because they prefer to
play their cards a bit closer to the vest and/or mostly stick to leaving their
prognostications here on HN, far from the limelight. ;))

~~~
radiorental
Entrepreneurs ≠ VCs

That said, I appreciate that Kurzweil has attempted to build products somewhat
in line with his visions. E.g.
[https://www.kurzweiledu.com/products/kurzweil-1000-v14-windo...](https://www.kurzweiledu.com/products/kurzweil-1000-v14-windows.html)

But these are mere tinker toys compared to the grand predictions, and mostly
failed products too.

------
entee
Some of Kurzweil's predictions also suffer from the assumption that technology
will have knock-on effects in less silicon-related fields like biology and
medicine. Technology certainly helps in those areas, but I haven't seen a
fundamental change in how science is done, or any sign that our understanding
of biology is about to start increasing at exponential rates. Fundamentally,
if you have to do things in physical space, there's a limit to your iteration
speed.

Also, technology is most helpful at solving known unknowns (in which case
you're often limited by meat space as described above), but in core basic
science, we're often dealing with unknown unknowns, and there technology
rarely helps. Luck and creative leaps usually do better.

~~~
drcode
I am kinda hoping we're going to see exponential improvements in biology soon:
it seems to me that the problems of protein folding and simulating cell
biology become more approachable with modern machine learning approaches, and
that people will find some fruitful methods of attack on that front in the
near future.

~~~
entee
That's absolutely true, but also keep in mind the limitations of both the
examples you give.

Protein folding works pretty well for short proteins, on the order of ~200
amino acids. It gets considerably worse as the proteins get bigger. This is
unsurprising: bigger proteins represent an exponentially bigger search space,
and there are fewer and fewer known protein structures as protein size
increases, so your library of exemplars is smaller (hurting ML approaches).
Let's not even speak of protein complexes or dynamic protein interactions.
Even at the end of the prediction, you still have to prove your result by
getting an actual crystal, which is not always easy.
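The combinatorics behind "exponentially bigger search space" can be sketched
with the classic Levinthal-style back-of-envelope estimate. The
three-conformations-per-residue figure is a textbook simplifying assumption,
not a real energy model:

```python
def conformation_count(residues, states_per_residue=3):
    """Levinthal-style estimate: if each residue's backbone can adopt a
    few discrete conformations independently, the total number of chain
    conformations grows exponentially with chain length."""
    return states_per_residue ** (residues - 1)

small = conformation_count(50)    # a short peptide
large = conformation_count(200)   # the ~200-residue regime mentioned above

# Quadrupling the chain length doesn't quadruple the search space; it
# raises it to roughly the fourth power.
print(f"{large / small:.1e}x larger")
```

Real folding is not an exhaustive search, of course, but the same exponential
scaling is why exemplar-based ML predictors degrade as proteins get longer.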

Cell biology simulation is a whole other ball game, and I don't know that area
nearly as well. But from the little I understand, we have some very basic
mathematical models of a cell that work in a limited way at a very high level.
To get those to work, a number of extremely simplifying (but useful in some
contexts) assumptions are made, which limits the broader applicability of
those simulations. Simulating a whole cell very quickly becomes computationally
intractable given the enormous number of entities in the cell. It's a
fiendishly hard problem; I hope we get there someday, but I think it'll be a
long while.

~~~
sndean
> Even at the end of the prediction, you still have to prove your result by
> getting an actual crystal which is not always easy.

From my experience, this might be the largest hurdle. At least in biology,
beyond proving your result, a lot of the experimentalists that you're talking
to likely think that "all of this simulation stuff is complete bullshit." That
might be a generational thing, though.

~~~
entee
I think it's a pretty giant hurdle, and I don't think it's a strictly
generational thing. Having been on both sides of the
computational/experimental divide, I still fundamentally can't trust a result
that has no experimental validation. At some point, even if you believe the
computation's result, it has to be proved out in the physical world to be
useful.

------
yanivleven
Very interesting read. I'm a big fan of his work, even though some of it might
be a bit off. It was part of the inspiration for building my company. This is
the first blog post I ever wrote about the company:

[http://blog.panoply.io/why-we-built-panoply.io-and-what-ray-...](http://blog.panoply.io/why-we-built-panoply.io-and-what-ray-kurzweils-law-of-accelerated-returns-has-to-do-with-it)

------
partycoder
I think Alvin Toffler was much better and more down-to-earth when it came to
predicting future trends.

Kurzweil's ideas are creative and thought-provoking, but they are more science
fiction than futurism.

In contrast, Toffler created a conceptual framework for understanding the
paradigm shifts of the post-industrial age, and so far it has held up.

------
Chiba-City
First, I worked with Arnold Kling. He's a wonderful guy. I appear in his old
book Under the Radar.

Second, Kurzweil is a creepy promoter of "inevitability." He is a bright guy
with good access to privileged programmatic research, but he is a provocateur
peddling Howard Stern-style futurism.

I prefer Bill Joy's rightful highlighting of our policy options and of the
virtues of learned caution. There are many things people could do that we
police against for good reasons.

Theoretical and now even many applied sciences deliver more cautionary signals
than go-go-go signals. Software folks should take that as good news. Solving
problems makes problems go away. Nice architecture and good food are
worthwhile problems in their own right, without creating new ones.

------
ggm
He's a busted flush. Stock market movements over the same interval tracked the
trends better than his prognostications did, which means you would get better
trend advice from a one-liner:

"Follow the money."

Hubris appears to have magic flotation properties. I don't expect Mr. Kurzweil
to stop bobbing along, singing his song, on the sea of emerging trends.

------
AnimalMuppet
My impression is that his predictions about electronics have done better than
his predictions about biology, consciousness, and AI. Unfortunately, the areas
he did less well in are the areas where he is placing his hope...

------
pmoriarty
Is there some way to read this article that doesn't require Javascript?

~~~
tim333
[http://webcache.googleusercontent.com/search?q=cache:dyQ86ys...](http://webcache.googleusercontent.com/search?q=cache:dyQ86ys30pAJ:www.arnoldkling.com/blog/reality-vs-kurzweils-expectations/&num=1&hl=en&gl=uk&strip=1&vwsrc=0)

?

------
melling
“The majority of text is created using continuous speech recognition (CSR)
dictation software.”

Aren’t we actually close on this one? Dragon is much better than phones, which
don’t give you the ability to edit your dictation.

------
0xbear
One thing people don’t realize is that the phones you’re carrying, strictly
speaking, contain more than one “computer”. There are three classical CPUs
alone in an iPhone: the main CPU, the CPU that runs the Secure Enclave, and
the CPU that runs the motion coprocessor. Then there’s also an MCU in the WiFi
chip, and another one driving sensors such as the gyros and accelerometers.
And that’s without counting the GPU as a separate processor. Shit, the Apple
Watch contains more than one “computer” nowadays. The frickin’ Raspberry Pi
runs two operating systems simultaneously: one on the GPU and one on the CPU.

