
There's more to Singularity studies than Kurzweil - fogus
http://www.sentientdevelopments.com/2010/08/its-not-all-about-ray-theres-more-to.html
======
andreyf
I think Kurzweil, not unlike Chomsky, seems like a generally fascinating
figure to discuss (early success, strong opinions). His popularity (both
positive and negative) owes more to fascination with the character than to
his exact position on any one topic.

The one thing that I wish he would talk about more is his work in "nutrition,
health, and lifestyle" [1]. He seems to make a lot of money off
recommendations that aren't commonly accepted in the medical community (he
himself ingests "250 supplements, eight to 10 glasses of alkaline water and
10 cups of green tea" every day). Why the disconnect with health
practitioners? I don't know the guy and shouldn't pass judgment, but I find
it hard to avoid the thought that he's profiteering off of people's fears and
ignorance.

1. [http://en.wikipedia.org/wiki/Raymond_Kurzweil#Work_on_nutrit...](http://en.wikipedia.org/wiki/Raymond_Kurzweil#Work_on_nutrition.2C_health.2C_and_lifestyle)

~~~
crystalis
His primary list of accomplishments should relieve some of your fears. That
much serial entrepreneurship places him securely in the 'comfortable'
category, and his continuing work seems to clear him of a profiteering
motive. His transhumanist ambitions indicate a deep and sincere belief in his
method, which is possibly more than most other dietitians can avow. I think
the largest problem behind the disconnect is the nature of his interest:
dramatic long-term results that are difficult to support with standard
research methods or testimonials.

~~~
philwelch
Many pseudoscientific cranks are honest, earnest, and not at all profiteering.
But they are still pseudoscientific cranks. What defines a crank is the
practice of developing and advocating strong opinions despite a lack of
supporting evidence, or in the face of conflicting evidence.

~~~
crystalis
I'm not sure what you have to gain by planting a 'pseudoscientific crank'
flag in a response here, but I'll address it anyway.

I don't follow Kurzweil's diet, and I haven't read any of his books, but to
the best of my knowledge, Kurzweil invokes no chakras and no crystals. He
doesn't promise unrealistic weight loss, and he doesn't seem to be
advertising any harmful methods. He may be guilty of overselling the possible
benefits of combining several positive regimens (e.g., light exercise, a low
glycemic index, antioxidants, nutritional supplements), but, with the
exception of alkaline water, I don't see any cogent criticism or glaring
'lack of supporting evidence'.

Would you care to supply a context for your tar, or did you just have an
agenda?

~~~
philwelch
I only meant to suggest that whether or not someone is intentionally
"profiteering off of people's fears and ignorance" in andreyf's words isn't
necessarily the best criterion by which to judge their credibility--and
conversely, that your defenses of Kurzweil's honesty and earnestness miss the
underlying point.

~~~
crystalis
Your underlying point is that he isn't credible; andreyf's point is about
profiteering. It's possible he wanted information on credibility, rather than
the stated profiteering, but your paternalism has yet to supply any actual
answers.

~~~
philwelch
My underlying point isn't about Kurzweil at all. It's that sincerity doesn't
excuse being a professional crank. Amateur cranks are bad enough, but
professional cranks do benefit from "people's fears and ignorance" at their
expense, no matter how sincere and well-intentioned they may be.

~~~
crystalis
But this isn't about cranks. It's not even about Kurzweil! It's about
Singularitarians who explicitly aren't Kurzweil, even.

You haven't made any points germane to the conversation here. If you'd like
to make some actual points regarding Kurzweil as a crank, or tar all things
Singularity with the crank label, or find an actual topic about cranks, your
words may be well placed, but until then, your Gricean maxims may need some
work.

~~~
philwelch
Here's the conversation as I have understood it:

andreyf (<http://news.ycombinator.com/item?id=1629954>): Among other things,
deems Kurzweil's nutritional theories (and his commercial promotion thereof)
questionable.

You (<http://news.ycombinator.com/item?id=1631254>): Provided evidence of
Kurzweil's sincerity and good motives to defend him from andreyf's criticism,
as you perceived it.

Me (<http://news.ycombinator.com/item?id=1631299>): Illustrated (unclearly, I
admit) a fallacy in your argument: Kurzweil's sincerity doesn't completely
absolve him of being wrong, irrational, or even crackpottish about his
nutrition.

I admit that I was unclear. What isn't unclear is how you responded--with
repeated personal attacks, accusations of ulterior motives and "agendas", and
a militant refusal to make any good faith attempt to understand anything I was
trying to express. And now you pretend we weren't discussing an aside about
Kurzweil's idiosyncratic nutritional views at all, as if you've forgotten how
these comments were nested in the first place. You accuse me of wanting to
"tar" Kurzweil, or "all things Singularity" (as if I'm blaspheming your god?)
and then name-drop Grice to assure yourself of holding the intellectual high
ground? Yeah, I know how these forum games are played as well as anyone. I
just wanted you to know that I actually was trying to meaningfully communicate
here.

~~~
crystalis
You do make it quite clear you know how to play forum games...

I argued nowhere that Kurzweil is absolved of any putative wrongness,
irrationality, or crackpottishness. You inserted a context that wasn't there
and didn't do much to relate that context to anything else. In most books,
fanatics are the ones that can't tell that not everyone is talking about their
favorite topic. You could really use work on your maxim of relation.

~~~
philwelch
My perception was that andreyf was concerned more with Kurzweil's possible
crackpottishness than his sincerity. In that respect, _you_ are the one who
has failed to apply Grice's maxims (or, in less intellectually pretentious
terms, "missed the point"). Continually insulting me doesn't change that.

------
jimbokun
The disconnect is that a lot of people do not find the Singularity an
appealing concept. Of the people on his list, Yudkowsky seems like the only
sane one to me.

"Primarily concerned with the Singularity as a potential human-extinction
event, Yudkowsky has dedicated his work to advocacy and developing strategies
towards creating survivable Singularities."

My pure speculation is that many people feel a very understandable sense of
unease, but it gets expressed as doubting the possibility of the singularity
future, as opposed to accepting it as possible but advocating that we try to
avoid it, or at the very least approach it with extreme caution.

Kurzweil, on the other hand, comes across to me as very Pollyanna-ish. He
seems to spend very little time considering what could go wrong with a
Singularity, and much more time eagerly anticipating our impending god-like
future.

~~~
Jkeg
The majority of people who, when told of the singularity, automatically
dismiss it offhand only because it has that Hollywood "scifi ridiculous"
sound to it seem a lot less sane to me than anyone on that list.

~~~
Avshalom
Most people have spent their lifetime watching people completely fail at
predicting even the immediate future. Dismissing predictions of the
singularity as scifi bullshit isn't a lack of sanity, it's basic
extrapolation.

~~~
Jkeg
So these people think they are better predictors of the future than all the
people who failed, even though they probably never made a serious prediction
themselves? They believe something is never going to happen just because it
hasn't happened _yet_? Sounds highly irrational to me.

~~~
Avshalom
There is a MASSIVE difference between "you're wrong" and "I'm right". Calling
the singularity bullshit in a casual conversation is an example of the
former.

------
AngryParsley
A decent portion of the names mentioned in that post are signed up for
cryonics: Kurzweil, Hanson, Minsky, and Yudkowsky. It makes sense in
hindsight. These people put a very high upper bound on what future technology
can accomplish.

Sources:

Kurzweil and Minsky:
[http://en.wikipedia.org/wiki/Alcor_Life_Extension_Foundation...](http://en.wikipedia.org/wiki/Alcor_Life_Extension_Foundation#Membership)

Minsky: <http://www.alcor.org/AboutAlcor/meetsciadvboard.html#minsky>

Hanson: [http://www.overcomingbias.com/2008/12/we-agree-get-froze.htm...](http://www.overcomingbias.com/2008/12/we-agree-get-froze.html)

Yudkowsky: <http://lesswrong.com/lw/wq/you_only_live_twice/>

------
pbw
This is a good list. It would be interesting to see a similar list of
Singularity detractors. I know Jaron Lanier is one. His claim is that software
in general is crap and won't be up to the task, even if the hardware is
capable.

Kurzweil does not say much about software. He feels we will get everything we
need by reverse engineering real brains. I think this is possible, but it's
certainly not obviously true, and it's not a simple extrapolation. We could
be sitting there in 2040 with gobs of computing power and detailed maps of
the brain, and still not understand how it all works. Look at the genome
today: we have the full sequence, but understanding lags far behind.

I saw Kurzweil speak in 2008. I thought he was compelling but certainly
leaned heavily on hype about the exponential, which I don't think is entirely
justified: [http://www.kmeme.com/2010/07/singularity-is-always-steep.htm...](http://www.kmeme.com/2010/07/singularity-is-always-steep.html)

~~~
nopinsight
I agree with your post in general. However, the linked article misses the key
assumption of the singularity: computing power is 'only' doubling annually at
present because of the limits of human intelligence. When machines reach
human intelligence, they will take less and less time to improve themselves.

If H1, the human-equivalent generation of machines, takes one year to create
H2, which doubles H1's speed and computing power, then H2 would take only 0.5
years to create H3, which would take 0.25 years to create H4, and so forth.
The series sums to two: the whole big bang would take only twice the time of
a single doubling cycle (assuming that the delay imposed by physical limits
is negligible). IF this actually holds, then progress once H1 is created will
be far faster than the simple exponential illustrated in the article.
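
A minimal sketch of that arithmetic (the H1/H2 naming and the one-year first
cycle are just the assumptions stated above):

```python
# Each generation is twice as fast as the last, so each takes half as
# long as its predecessor to build a successor: 1 + 1/2 + 1/4 + ... years.
def elapsed_years(generations, first_cycle_years=1.0):
    """Total time for H1 to produce H(generations + 1)."""
    return sum(first_cycle_years * 0.5 ** k for k in range(generations))

print(elapsed_years(1))   # 1.0    -- H1 builds H2
print(elapsed_years(10))  # ~1.998 -- ten doublings in under two years
print(elapsed_years(60))  # ~2.0   -- converges to twice one doubling cycle
```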

~~~
pbw
I agree. I did not depict The Singularity itself in my graphs, because
Kurzweil generally does not either. He instead shows projected exponentials
as evidence for The Singularity happening:
<http://www.kurzweilai.net/the-law-of-accelerating-returns>

My first point is really just that the exponential is not hyperbolic. It's
tempting to see an exponential graph and think the value is rising against
some invisible wall, like it is going to infinity. But of course it is not.

My second point was: between the linear-scale exponential shooting skyward
and the log-scale exponential rising incrementally, which is "more real"? In
the case of computer capacity I claim the incremental one is, because it
reflects how we experience the increases.
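
A quick numeric sketch of both points (the annual doubling here is my own
illustrative assumption, not a figure from the post):

```python
import math

# A quantity that doubles annually: on linear axes it looks like it is
# rising against a wall, but its growth rate never actually changes.
values = [2 ** t for t in range(12)]

# The ratio between successive years is constant -- no late surge:
print([b / a for a, b in zip(values, values[1:])])  # [2.0, 2.0, ...]

# On a log scale the same curve is a straight line, rising incrementally:
print([math.log2(v) for v in values])               # [0.0, 1.0, 2.0, ...]
```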

None of this really speaks to whether the Singularity will happen or not.
I'm just saying that if it happens, it won't be because of some upward surge
in computing capacity just prior to The Singularity, because no such surge is
predicted.

------
russell
I am partial to von Neumann's definition, not "rapture of the nerds". A
reasonable definition of a singularity is a (technological) change the
effects of which nearly no one can predict. Examples: agriculture, the
industrial revolution, the automobile, the computer. The Luddites predicted
massive unemployment, not the incredible abundance we have today. The
telephone was seen as a broadcast medium like radio, not a device that
removes space from personal communications. The computer, we all know what
that did.

We don't even know what intelligence really is, so we can't predict what
superintelligence is or will do. Will we become extinct? I seriously doubt
it, just as most of us don't want our fellow creatures to become extinct
(except maybe rats and mosquitoes). Will they become our curators, or will we
have a symbiotic relationship with incredible consequences?

One consequence I do see is a huge augmentation of human intelligence, with
appliances that grow complexes directly into the brain. Imagine instant face
and name recognition (my particular sub-optimal capability) or mathematical
proof reasoning. Not that we think way faster, but that everything we want is
at our fingertips. And built-in virtual reality.

I think we are near the peak of our consumption of material goods. So the
optimist in me sees a future free from meaningless work, where even the
average Joe will have an IQ of 200 or even 1000.

~~~
Jach
I'd like to address some of your statements, if you don't mind, since I
wouldn't want them spreading and being used in place of actual thought...

First, let's do away with this "Rapture of the Nerds" business. It'd be funny
if the comparison were at all valid:
<http://www.acceleratingfuture.com/steven/?p=21>

Your first paragraph hints that you're more partial to Hanson's idea that we
have already undergone multiple Singularities, each tremendously boosting the
economy.

> We dont even know what intelligence really is, so we cant predict what super
> intelligence is or will do.

Paraphrasing Yudkowsky: an intelligent system is one that implements its
goals. If a superintelligent entity has goals even remotely recognizable to
us, we can predict what it will do quite easily. Even if you have no idea how
chess works, just by knowing it's a game you can predict that both sides will
try to win.

I'm with you on brain-machine interfaces that augment our intelligence,
though I would definitely say I could think faster, if not better. Just an
internal calculator would help tremendously: somewhere on the order of
trillions of synapses fire if you try to multiply two three-digit numbers in
your head.

> So the optimist in me sees a future free from meaningless work, where even
> the average Joe will have an IO of 200 or even 1000.

What do these figures even mean? One of the goals of a Yudkowsky-defined
Singularity is that the AI (hopefully quickly) gets around to helping us
augment our own intelligence, moving us along the path until we stand to our
present selves as we presently stand to ants. Not just all becoming Einstein.

------
philwelch
The singularity (sorry, Singularity) has always struck me as religion in
science-fiction garb: an end time is coming; eternal life is possible if you
name a cryonics firm as the beneficiary of your life insurance policy; and we
will soon enter either an unimaginable paradise or an unlivable hell, where
either humans will be totally extinct and replaced with robots, or we will be
immortal cyborgs, or maybe we'll "download our consciousness into computers",
or something like that. There's remarkably little content, and no real
certainty, behind the hype.

It's perfectly reasonable to say that really interesting things are bound to
happen when we fully understand the brain enough to build one ourselves. It's
conjectural to say it'll be possible to build intelligences better than humans
at the task of building intelligences (to significant degrees of recursion),
and once you make that conjecture some sort of singularity is possible. (Not
inevitable--there may indeed be unsurpassable barriers to any kind of
intelligence. I'm talking about logical barriers like Gödel's incompleteness
theorem, or the possibility that P != NP.) But there's no gain in "futurists"
making empty predictions. I'd much rather hear from the people who actually
try to build intelligences what they think is feasible, or the people who
actually study neurology.

------
_delirium
For a less explicitly "pro-Singularity" view that still treats the subject
seriously: the main North American AI organization (AAAI) recently convened a
panel to look into which potential major societal effects of advancing AI
should be taken seriously by AI researchers, and what (if anything) should be
done about those possibilities:
<http://www.aaai.org/Organization/presidential-panel.php>

