

Peter Thiel’s CS183: Startup - Class 19 Notes Essay—Stagnation or Singularity? - r4vik
http://blakemasters.tumblr.com/post/25149261055/peter-thiels-cs183-startup-class-19-notes-essay

======
6ren
(1) From a dialectic point of view, I think Kurzweil makes an impressively
compelling argument that the rate of progress has been exponential,
_historically_ (e.g. using others' milestones).

So it's striking that he doesn't argue for an underlying mechanism, nor for
whether it can continue - or even mention it explicitly.

The mechanism seems to be similar to _standing on the shoulders of giants_ -
once an improved technology is developed, it can be used to improve the search
for other technologies. The search can then be faster, more efficient, can
take place in new domains, with greater accuracy - whatever is the nature
of the improvement.

But there's another assumption: that there _will_ be more to discover, and
with a constant density. But why should the frequency of potential discoveries
be constant, such that if you seek faster, you'll find faster? To be clear, it
_does_ seem to be that way... I'm just wondering if there's an argument as to
_why_ it's that way, and why it will continue to be that way as we keep
searching. There's the assumption of mediocrity - that we are not at a
privileged center of the universe - but is there a better argument? For
example, why shouldn't it be that discoveries become exponentially rarer? So
that we have to keep searching faster and faster just to maintain a linear
rate of progress.

EDIT: e.g. consider primes, infinite but become less frequent as you go.
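A quick sketch of the primes analogy (mine, not from the original comment): sieve the first million integers and count the primes found in each successive block of 100,000. The search speed is constant, yet the discovery rate keeps falling.

```python
# Primes are infinite but thin out (density ~ 1/ln n), so scanning
# at a fixed speed yields a steadily falling discovery rate.

def sieve(limit):
    """Sieve of Eratosthenes: flags[i] is True iff i is prime."""
    flags = [True] * limit
    flags[0] = flags[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            for j in range(i * i, limit, i):
                flags[j] = False
    return flags

LIMIT = 1_000_000
BLOCK = 100_000
flags = sieve(LIMIT)

# primes "discovered" in each successive block of 100,000 integers
counts = [sum(flags[k:k + BLOCK]) for k in range(0, LIMIT, BLOCK)]
print(counts)  # starts at 9592 and every later block has fewer
```

So to keep finding primes at a linear rate, you'd have to search faster and faster - the same shape as the worry above.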

(2) I also like Hofstadter's argument that it's not necessarily true that an
intelligence is sufficient to understand itself e.g. a giraffe can't
understand itself. Of course, we can divide-and-conquer, and create
hierarchical understandings, such that we can understand one level in itself,
by assuming the concepts below, and ignoring concepts above. But not all
things can be neatly composed into modular hierarchies, such that each level
is easily understood - though we are biased towards seeing those that can,
because that is all that we can see. In other words, perhaps we can one day
duplicate a human mind... yet not understand it.

(3) There's a fascinating thought in these class notes, that people assume
stagnation, and don't like to generalize beyond extrapolating single
variables. This is very pragmatic, because it's almost impossible to predict
with any fidelity. But it results in a very interesting effect: people are
very confident as they stride the same well-trod paths, only ever taking one
step away from them, so only finding big wins transitively (by hill climbing).
This means that just two or three steps off the beaten track there can be
miraculous improvements... all you have to do is find them, though that might
take an enormous number of attempts.
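A toy illustration of that point, with an invented landscape: a climber that only ever looks one step away stalls on the nearest local peak, while one allowed to look two or three steps off the path crosses the valley to a much higher one.

```python
# Hill climbing with a limited horizon, on a made-up 1-D landscape.
# radius=1 is the "one step from the well-trod path" searcher;
# radius=3 is the one willing to go a few steps off the beaten track.

def hill_climb(landscape, start, radius):
    """Repeatedly jump to the best point within `radius`; stop at a peak."""
    pos = start
    while True:
        lo = max(0, pos - radius)
        hi = min(len(landscape), pos + radius + 1)
        best = max(range(lo, hi), key=lambda i: landscape[i])
        if landscape[best] <= landscape[pos]:
            return pos, landscape[pos]
        pos = best

# a local peak of height 3 at index 3, a valley, then the real peak
landscape = [0, 1, 2, 3, 2, 1, 4, 8, 9, 10]

print(hill_climb(landscape, 0, radius=1))  # (3, 3)  - stuck on the local peak
print(hill_climb(landscape, 0, radius=3))  # (9, 10) - crosses the valley
```

The valley between the peaks is exactly the "two or three steps" that a strict one-variable extrapolator never takes.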

EDIT (4) Re: exponential progress in software (cf. Moore's Law for hardware),
there are dimensions of progress that seem to be exponential: the release rate
of new software; the productivity due to using others' modules (SOTSOG, esp
open source); "software is eating the world" as more problems are solved with
software; software is being used on more devices, in more places (eg mobile
devices). One could argue this is cherry picking, and none of these are
equivalent to Moore's Law - but exponential improvement only requires some
kind of improvement, that can itself be built upon. NOTE: I don't have
figures, so I don't know for sure whether the above are actually exponential,
though "eating the world" seems to be. Also found this:
<http://multiverseaccordingtoben.blogspot.com.au/2011/06/is-software-improving-exponentially.html>

~~~
jackcviers3
6ren: Perhaps the reason that discoveries don't seem to become exponentially
rarer is that, in an infinite universe all possible probabilistic outcomes
must occur. Thus, if there is a non-zero, finite chance of something being
discovered, it can and will be discovered in an infinite universe.

Each new discovery in such a universe will produce non-zero speculative
probabilities of something new to discover -- resulting in an infinite stream
of problems and solutions that present new problems.

It doesn't matter if the probabilities are speculative or eventually
discovered to be real: when speculations are "proven" false, the proof of
falsehood is itself a new discovery that also provokes further speculation.

I guess this would mean that technological progress is inevitable given an
infinite universe where some technological progress has already occurred and
where speculation or "imagination" or unguided thought exist.

------
corwinbad
Hi - this is omri (founder of genomecompiler) - our kickstarter project is
still awaiting amazon payments authorization and kickstarter authorization for
the project. See <http://www.youtube.com/watch?v=BLhU1RGTHN4> and
<http://www.youtube.com/watch?v=F8qcDQaY8Mw> for details

------
corwinbad
Cool, they talked about our genomecompiler (<http://genomecompiler.com>) and
the kickstarter project for glowing plants we're in the process of posting.

omri@genomecompiler.com

------
lupatus
I know that de Grey has a rough outline to reach his "Methuselarity", but
does anyone know of a reference to other milestones needed to reach the
singularity's state of radical abundance?

The notes indicate that they discussed the idea of milestones to the
singularity, but I was wondering if anyone has sat down and really thought it
through on how to get there.

Thanks!

------
palehose
Does anyone know the link to the Kickstarter project "that involves taking an
oak tree and splicing firefly genes into it"?

------
tocomment
Can anyone link me to the kickstarter project they mentioned?

