

First class graduates Kurzweil's Singularity University - Kisil
http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2009/08/27/BUUQ19EJIL.DTL&type=tech

======
biohacker42
Is it just me, or do the projects seem a bit mundane? The introduction is all
about biotech and nanotech, etc., yet the projects are mobile apps, scaling up
3D printing, and smart traffic. Anyone else underwhelmed? Maybe I had
unreasonable expectations.

------
johnohara
Kurzweil articulated many of his early ideas on the Singularity at an AI
symposium held in 2001. RealPlayer video of all the sessions used to be available
through Dr. Dobb's Journal's TechNetCast feature. I watched them all and they
were very, very interesting -- both for and against. Great stuff.

I tried to locate a link to them for this post, but TechNetCast seems to be no
more and DDJ's site doesn't bear fruit either. If anyone here knows where they
might be found please share -- they are definitely worth the time.

Since then, Kurzweil has certainly pursued his beliefs (books, speeches,
press, etc.) with these graduates being the latest expression of those ideas.

~~~
MikeCapone
My introduction to Kurzweil was this lecture he gave at MIT in 2005:

<http://mitworld.mit.edu/video/327/>

I then read The Singularity is Near.

I tend to agree more with Eliezer Yudkowsky and Michael Anissimov than
Kurzweil about this stuff, but the video above is still worth watching (much
much better than his TED video, which was too short to allow him to make his
main points).

------
edw519
My first reaction was: if you're going to live forever, why rush to graduate?
There's plenty of time. Then when I read the article, I saw that they are
looking at all kinds of other opportunities. Good stuff.

~~~
randallsquared
Not everyone expects that the singularity will be survivable. Extinction still
seems the most probable outcome.

~~~
nazgulnarsil
This is why I don't understand the "nerd utopia" line. It seems that the vast,
vast majority of possible non-human intelligences would have goals directly
harmful to human ones (even modest ones). The best we can hope for seems to be
a _really_ nice zoo.

~~~
req2
If you said that about a topic you're familiar with, it would sound silly: it
seems that the vast, vast majority of possible computer programs would have
goals directly harmful to human goals (even modest ones). The vast majority of
possible medical treatments, proteins, DNA sequences, etc. would be directly
harmful to human goals, and yet we keep on trying, cognizant and wary of these
ramifications.

Granted, there are "irresponsible" researchers who don't understand that an AI
might not have "gratitude" that prevents it from rm -rfing humanity like a bad
shell script, but others work on 'Friendly AI' to mitigate or eliminate this
concern.

http://en.wikipedia.org/wiki/Friendly_artificial_intelligence

~~~
nazgulnarsil
The things you mentioned would be harmless, not harmful. At worst they would
waste some resources. They certainly wouldn't requisition resources by force
(unless you want to talk about ideology as a "technology" :)

~~~
randallsquared
Er, prions, viruses, bacteria, runaway breakout of monoculture -- these things
use up resources and have the potential to cause great harm. It's just that
few of them are likely to rise to the level of an existential threat, while in
the field of recursively self-improving AI, the likelihood is strongly the
other way.

