

When Do Post-Humans Show Up? - darragjm
http://tierneylab.blogs.nytimes.com/2008/06/10/when-do-post-humans-show-up/

======
mechanical_fish
Post-humans are showing up all the time. I intend to evolve again on July 11,
when I get equipped with my connection to the hive mind... er, I mean my
always-on Wikipedia appliance, courtesy of Apple.

Admittedly, it won't be that impressive. If I could somehow take my iPhone
back in time (with a trans-temporal wireless network connection, of course) to
play Trivial Pursuit against my high-school-aged self, then it would be
obvious how different my augmented self is from my prior self. (And equally
obvious that skill at Trivial Pursuit, which used to be considered akin to
being smart, is now an oddball exercise, like building furniture using nothing
but a jackknife, or starting a fire without matches. We don't have to pursue
trivia anymore. We caught it. It's sitting here in this box.)

But, as it happens, I'm actually late to the iPhone party, and when I get one
nobody will care. It is as Bruce Sterling said: "the Singularity is banal".
Post-singular creatures won't be all that impressed with themselves, just as
the first human to utter a grammatical sentence didn't really stop to marvel
at it. She was probably too busy trying to get her friend to pass her a
handful of walnuts.

~~~
technoguyrob
That's a good point, but I think the idea behind "Singularitarians" (for
lack of a better term) is that your iPhone or other new gadget won't be
invented in years, months, or weeks, but hours or minutes. A human brain
simply can't keep up.

Of course, if the "Singularity" happens to mean we'll just be much
faster-thinking versions of ourselves, then perhaps you're right: it will
be banal, since presumably our perception of time will be distorted as
well.

~~~
mechanical_fish
_your iPhone or other new gadget won't be invented in years, months or weeks,
but hours or minutes. A human brain simply can't keep up._

What would be the point of that? A Magic iPhone Box, one that coughed out new
designs so quickly that humans couldn't keep up, would be throttled back.
What's the point of filling up warehouse after warehouse with beautiful new
iPhones -- each unique and special -- that we don't even have time to try out?
It's a waste of material resources. Why even bother building the Magic Box in
the first place? Or should we build another Box that can appreciate the
iPhones for us, faster than the eye can see? Who's going to pay the fuel bill
for all that?

In manufacturing theory there's a concept of balancing the line: the slowest
step in the manufacturing process serves as the bottleneck, and the secret to
maximizing efficiency is to let that bottleneck control the speed of
everything else. It doesn't pay to produce a million nuts and bolts per hour
if you're only using ten thousand per hour: You just pile up expensive excess
inventory -- and if the inventory rusts, or the designers eliminate nuts and
bolts in the next version of your product, you _lose_ all that inventory.
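
(Here's a minimal sketch of that bottleneck arithmetic in Python; the step
names and rates are made up for illustration, not taken from any real
line:)

    # Toy line-balancing example with hypothetical steps and rates.
    steps = {
        "stamp_nuts_and_bolts": 1_000_000,  # units/hour the step *could* run at
        "assemble_product": 10_000,
        "test_and_pack": 12_000,
    }

    bottleneck = min(steps.values())  # the line can only ship this many per hour
    print(f"line throughput: {bottleneck} units/hour")

    for name, rate in steps.items():
        # Running a step faster than the bottleneck just piles up inventory,
        # so the excess capacity is either idled or wasted.
        print(f"{name}: {rate - bottleneck:,} units/hour of excess capacity")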

Similarly, the pace of technological change is ultimately constrained by human
timescales. We're the technologists. We're the ones who control the knobs. We
now have machines that can play master-level chess faster than the eye can
follow, but we don't let them do that -- we don't let them run day and night,
filling up our hard drives with recordings of dazzling strategic moves that
we'll never have time to see. Instead, we turn the machines down to our speed,
and use the leftover processing cycles for something else. Or turn them off
and save some precious energy.

If you've lost the distinction between the _Terminator_ films and real life,
or have watched too many episodes of _Pinky and the Brain_ , you might believe
that a smart machine could somehow _take control_ of the knobs from the
humans. But, really, it's hard to sneak up on seven billion people, no matter
how smart a toaster you are. Particularly when those people control your
supply chain.

------
noonespecial
FTA:

 _These critics obviously have not read my book and have not read this chapter
because they do not respond to anything I’ve written. It is as if they’ve just
heard a superficial presentation of these ideas and respond without any
engagement of the extensive discussion that has already taken place about
these issues._

And now we know how Kurzweil says RTFA.

~~~
hugh
Without reading TFA, I could tell that had to be a Kurzweil quote.

I don't think there's much chance that I'll live forever, and there's much
less chance that Kurzweil will. But I sure hope that we both don't, because
otherwise he'll never shut up about it.

------
icky
It's surreal to see this even in the NYT's orbit.

~~~
izaidi
I was thinking the same thing. It wasn't too long ago that the only mainstream
coverage of this stuff was the occasional Kurzweil-centric fluff piece along
the lines of "Hey, this crazy guy says we'll live forever soon." I wonder if
the singularity debate's finally leaking into the public consciousness, or if
the Spectrum special issue just forced a blip.

~~~
technoguyrob
Well, I don't think anyone can dispute that if we create an AI smarter in
nearly every way than any given human, several (or a large number) of those
AIs could create an even smarter AI, and so forth recursively. The only
questions are whether we indeed will, and if so when (even 100-150 years
isn't that far off if you think about it). Given the huge impact it would
have on society (for better or worse), it's perfectly reasonable for it to
be a matter of public debate.

~~~
hugh
Actually that point isn't necessarily obvious to me. Maybe we can create an
AI which is 10% smarter than ourselves, but maybe a 10% smarter AI still
isn't smart enough to create an AI 10% smarter than itself. Maybe building
a 121% AI is such a hard task that even our 110% AI can't manage it --
maybe the best it can do is a 115% AI.

Analogy: Out of Lego, I could build a robot arm that's capable of building a
slightly larger robot arm out of Lego. But I can't just keep on doing this
'til I have a robot arm the size of the Burj Dubai, because there are limits
to how big a Lego-based robot arm can be before it collapses under its own
weight.

I'm not saying that AIs _are_ like this, just that they might be.
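
(A toy model of that diminishing-returns possibility, in Python; the
halving rule and the numbers are assumptions of mine, purely for
illustration:)

    # Each AI generation designs a successor, but (by assumption) the gain it
    # can engineer halves each time, so "smartness" converges to a ceiling
    # instead of exploding.
    smartness = 1.00  # baseline human = 100%
    gain = 0.10       # the first jump: humans manage to build a 110% AI
    for generation in range(1, 11):
        smartness += gain
        print(f"generation {generation}: {smartness:.3f}")
        gain /= 2     # each generation manages only half the previous gain
    # The total gain is 0.10 * (1 + 1/2 + 1/4 + ...) = 0.20, so smartness
    # approaches 1.20 and never produces a runaway "intelligence explosion".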

~~~
stcredzero
Your analogy is particularly apt if you consider that two of the proposed
means of achieving AI are emergent, and thus neither engineered nor at all
friendly to reverse engineering. If we can simulate the brain's processes
in bulk, we still don't necessarily understand how it does what it does.
(Like making a photocopy of ciphertext.) Likewise, if we create some sort
of architecture that lets us evolve AIs, there is no guarantee that we will
be able to understand how the results work. There's no guarantee that the
resulting AIs will be able to either.

~~~
LPTS
I think Gödel's Incompleteness Theorem implies the machines can't grok the
code they are made from. I'm not sure, though.

------
schtog
 _But what happens a year or two after that? The best answer to the
question, “Will computers ever be as smart as humans?” is probably “Yes,
but only briefly.”_

What does that mean? That it will then be much smarter?

~~~
dougp
Or that the advent of AI will cause us to wipe each other out.

------
tom_rath
Typically between 7 and 10 am. I hear you lucky Americans see them on
Saturday, too!

------
LPTS
Post-humans don't want to live forever. They are capable of detaching
themselves from their egos and accepting death.

This Kurzweil guy is rocking the same fear-of-death vibe (a prehuman drive
for survival) that keeps most traditionally religious people from
experiencing the actual spiritual evolution that the real post-humans will
go through. Kurzweil is just basing his mythology for suppressing the fear
of death on tech and sci-fi themes rather than Christian myths.

