

Transcendent Man Wows At Tribeca Film Festival Premiere - kkleiner
http://singularityhub.com/2009/04/29/transcendent-man-wows-at-tribeca-film-festival-premier/

======
Retric
The most interesting mistake in the singularity concept is the idea that you
can get a linear return on intelligence. A computer with x processing power
might be able to compute 10 moves deep in a game, and a system with 10x the
power might get there in 1/10th the time, but because the search tree grows
exponentially, changing the goal to 11 moves might still be out of reach.
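
Back-of-the-envelope, in code (a toy model with assumptions of mine: a
chess-like branching factor of ~35, and search cost taken as b^d):

    # Toy model: full game-tree search to depth d with branching
    # factor b costs on the order of b**d node evaluations.
    import math

    def extra_depth(b, speedup):
        """Extra plies of lookahead a hardware speedup buys: log_b(speedup)."""
        return math.log(speedup, b)

    print(extra_depth(35, 10))  # ~0.65 -- 10x the power, not even one more move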

Another mistake is the assumption that the universe can support "unlimited"
intelligence. There might be a lot of room at the bottom, but nothing says
computing power can't hit a fundamental limit at some point in the not too
distant future.

PS: When people assume exponential growth continues indefinitely, they tend to
ignore the growth limiters that only apply as you increase the scale.
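
To make the limiter concrete, a toy sketch (the logistic cap K is my own
assumed stand-in for whatever the physical limit turns out to be):

    # Exponential vs. logistic growth: indistinguishable early on,
    # then the limiter (carrying capacity K) takes over at scale.
    import math

    def exponential(t, r=0.5):
        return math.exp(r * t)

    def logistic(t, r=0.5, K=1000.0):
        return K / (1 + (K - 1) * math.exp(-r * t))

    for t in range(0, 31, 5):
        print(t, round(exponential(t), 1), round(logistic(t), 1))
    # The columns agree until the scale nears K; the limiter was
    # invisible at small scale.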

~~~
J_McQuade
"Another mistake is the assumption that the universe can support "unlimited"
intelegence."

One thinker who seeks to address this point is Frank J. Tipler with his notion
of the 'Omega Point' - a theoretical point where-after the contraction of the
universe means that the available computational power, while still point-
finite, will be increasing faster than the time remaining can decrease.
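
My gloss of the condition, not Tipler's exact formulation: the time remaining
is finite, but processing power P(t) diverges fast enough that the total
computation performed is unbounded,

    \int_{t_0}^{t_\Omega} P(\tau)\,\mathrm{d}\tau = \infty
    \quad \text{even though} \quad t_\Omega - t_0 < \infty .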

I don't know how much stock I put in the concept, myself, but at least
somebody's thinking about it!

Read up on it a little - the guy may very well be bonkers, but it's certainly
worth a look if you're into mind-bending metaphysics:

<http://en.wikipedia.org/wiki/Omega_Point_(Tipler)>

~~~
randallsquared
The original hypothesis made specific predictions (regarding the Higgs, if I
remember correctly) which have turned out not to be true. Tipler revised
accordingly, rather than doing something new, which seems like a bad sign,
scientific-method-wise. :)

------
prospero
There was a time when a person could have the entirety of human knowledge on
their bookshelf, but then Gutenberg came along and screwed that all up. Since
then, even the most ambitious person has had to settle for less than complete
knowledge. We rely on experts in their respective fields to make their
knowledge palatable and useful for everyone else.

Is the singularity the point where this ceases to be possible, or the
point where we rely on the expertise of machines? The latter seems possible,
at least, but I don't understand how the former is supposed to come about.
People place a pretty high value on remaining naive, and there will probably
always be a market for that.

~~~
bdr
The Great Library was bigger than a single bookshelf, and pre-Gutenberg.

~~~
prospero
Fair point. I was referring to the canonical works (i.e. everything by Plato,
but not everything about Plato), and was thinking about the period just before
Gutenberg. The available body of knowledge was pretty irregular before
Gutenberg (the burning of the Great Library, etc.), but monotonically
increased afterwards.

My question stands, though: haven't we already hit our individual saturation
point for knowledge? What is the Singularity, if not that?

~~~
randallsquared
One answer is that it's when we can increase the saturation point itself, to
use your terms.

~~~
prospero
That seems like an inversion of how I've always seen it described. As far as I
understand it, the Singularity is when the rate of advancement outstrips our
ability to keep up. If we already can't keep up, but technology fixes that
sometime in the future, I don't know if that's the same thing.

~~~
randallsquared
For decades now, the way to keep up has been to focus on an ever-narrowing
field of knowledge. I don't think that's controversial. One description of the
singularity is that it's when the maximum intelligence of an agent (human,
computer, whatever) begins to significantly increase due to increasing
computing power. If we're using that definition, then it fits with your
"saturation point" description, since we'd be increasing the amount of
information that an agent can handle. The "singularity" aspect is from the
perspective of an unmodified human, not the upgraded agents themselves.

But there are other definitions. :)

------
andreyf
I find the ideas of the Singularity movement greatly lacking in imagination. I
think when we discover the true nature of consciousness, life, and evolution
(which seem to be related), we'll be in for quite a fundamental philosophical
surprise.

~~~
chadmalik
The field of cognitive science (linguistics + philosophy + computer science +
neuroscience + biology etc.) has actually discovered the underpinnings more or
less. Consciousness is based on our bodily interaction with the world. Thought
uses experiential metaphors from basic physical interactions with the world
such as light/dark, up/down, etc. Read "Philosophy in the Flesh" by George
Lakoff.

AI will never come from digital machines.

~~~
spot
a computer can simulate interaction with the world just as well as it can
simulate your brain.

"actually discovered the underpinnings" is a vast overstatement anyway.

~~~
chadmalik
Yes, but simulating the brain will never be the same as the actual "wetware"
in operation. To my understanding, the simple reason that hard AI has failed -
and always will while it uses digital computation as a platform - is that it's
a HARDWARE deficiency. I'm generalizing, but AI people all seem to be
algorithm gurus who feel that the next algorithmic breakthrough will be "the
one" to create an AI. I think what they miss is that the physical
manifestation of brain and body are in fact required to be conscious.
Disembodied consciousness cannot exist, since conscious beings (animals) have
complex biological systems known as bodies that enable what we view as
intelligence.

Yes, computers are infinitely better at rule-based computation than animals.
No, they cannot pass simple tests of consciousness that a mouse can, and
there's really no prospect of it happening because a better algorithm is
created; the fact is, animals and even insects don't think based on
algorithms. Algorithms are models of the real world, not the real thing.

Being conscious means using one's eyes/ears/mouth/nose/skin to feel, hear,
smell, and speak to interact with the world around you. It means having agency
and not just checking a set of pre-written rules in order to decide what to do
next. It's a biological system that enables that.

~~~
endtime
>Yes, but simulating the brain will never be the same as the actual "wetware"
in operation. To my understanding, the simple reason that hard AI has failed -
and always will while it uses digital computation as a platform - is that it's
a HARDWARE deficiency.

Are you saying that hardware deficiencies are never resolved over time?

>I'm generalizing but AI people all seem to be algorithm gurus who feel that
the next algorithmic breakthrough will be "the one" to create an AI.

I study AI at Stanford and I've never met anyone who thinks that.

>I think what they miss is that the physical manifestation of brain and body
are in fact required to be conscious. Disembodied consciousness cannot exist
since conscious beings (animals) have complex biological systems known as
bodies that enable what we view as intelligence.

You need to justify that. The reasonable argument closest to what you are
saying is that sensory stimuli are required for intelligence. This isn't hard
for a computer to simulate.

>Yes, computers are infinitely better at rule-based computation than animals.
No, they cannot pass simple tests of consciousness that a mouse can,

Like what?

>and there's really no prospect of it happening because a better algorithm is
created; the fact is, animals and even insects don't think based on
algorithms.

This needs justification. If they aren't computing, then what are they doing?
You're looking down into the dualism abyss...

>Algorithms are models of the real world, not the real thing.

No, algorithms are neither. Algorithms are sequences (trees, if you prefer) of
instructions.

>Being conscious means using one's eyes/ears/mouth/nose/skin to feel, hear,
smell, and speak to interact with the world around you.

So are blind people less conscious? What about those with no sense of smell?

>It means having agency and not just checking a set of pre-written rules in
order to decide what to do next.

What makes those two things mutually exclusive? When I make a decision I
consult my values and my goals and attempt to make the decision that gets me
closest to my goals without compromising my values. Where do my values come
from? Some of them are based on instinct (i.e. genes, i.e. hard-coded) and
some come from parents/society. My goals? I'd say most of them are just
milestones on the path to other goals, and that there are a few fundamental
goals that come from biology (e.g. I fundamentally want to reproduce; this
means I need to attract a mate and earn enough to support a family; this is
made much easier by a good education; etc.).
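
To make that concrete, here's a toy sketch of the deconstruction (names and
numbers are mine, purely illustrative; the claim is only that the description
is computable, not that brains literally work this way):

    # "Consult values and goals" as computation: drop actions that
    # compromise a value, then maximize progress toward the goals.
    def decide(actions, violates_values, goal_progress):
        permitted = [a for a in actions if not violates_values(a)]
        return max(permitted, key=goal_progress, default=None)

    actions = ["study", "party", "sleep"]
    print(decide(actions,
                 violates_values=lambda a: a == "party",  # a hard-coded value
                 goal_progress=lambda a: {"study": 5, "sleep": 2}.get(a, 0)))
    # -> "study"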

>It's a biological system that enables that.

Why? And what makes you think we couldn't build a biological computer (other
than by the traditional method ;-) )?

~~~
andreyf
_> Computers [...] cannot pass simple tests of consciousness that a mouse can

Like what?_

Vision, for one. Also, running.

 _If they aren't computing, then what are they doing? You're looking down into
the dualism abyss..._

I don't think dualism is what he was thinking of, but rather Searle's Chinese
Room argument. Coming from a CS-heavy Philosophy-light background, it took me
a long time to wrap my mind around what Searle was saying there, but it's a
pretty solid point nonetheless. The best way I've found to explain it in CS
terms is that while theoretically, intelligence could be modeled using Turing
machines, in practice, we will never have a machine fast enough to do so.

 _When I make a decision I consult my values and my goals and attempt to make
the decision that gets me closest to my goals without compromising my values_

We have no idea how you or I make decisions, and the notion that we can intuit
how decisions are made is a huge road block in even beginning to think about
decision-making. Lakoff, among others, does a great job explaining why
"values", "goals", and "morals" are simply justifications we create after
we've made a decision.

> And what makes you think we couldn't build a biological computer

We can, and we should - but let's agree that it won't be a Turing machine.

~~~
endtime
>>> Computers [...] cannot pass simple tests of consciousness that a mouse can

>>Like what?

>Vision, for one. Also, running.

Um, ever heard of a digital camera? Or if you mean interpreting images, guess
what, they can do that too. Computer vision is a subfield of AI (arguably; I
believe some say it's signal processing). Also, blind people are still
conscious, so...

And running? Sorry, I hate to take this tone, but WTF are you talking about?
Robots can run, and running has nothing to do with consciousness.

>>If they aren't computing, then what are they doing? You're looking down into
the dualism abyss...

>I don't think dualism is what he was thinking of, but rather Searle's Chinese
Room argument. Coming from a CS-heavy Philosophy-light background, it took me
a long time to wrap my mind around what Searle was saying there, but it's a
pretty solid point nonetheless. The best way I've found to explain it in CS
terms is that while theoretically, intelligence could be modeled using Turing
machines, in practice, we will never have a machine fast enough to do so.

You've totally missed the point of the Chinese Room. The point is that
something can appear to understand without actually doing anything we'd call
understanding. In other words, ~Ax(OutwardlyIntelligent(x) ->
ActuallyIntelligent(x)). It's an argument against the sufficiency of the
Turing Test, that's it.

And saying we'll never have a machine fast enough? The only limit on the speed
of computation is the speed of light (and the physical size of the universe
limits parallelization). But these values also bound our brains. In fact, iirc
neurons are actually pretty slow relative to transistors.

>>When I make a decision I consult my values and my goals and attempt to make
the decision that gets me closest to my goals without compromising my values

>We have no idea how you or I make decisions, and the notion that we can
intuit how decisions are made is a huge road block in even beginning to think
about decision-making. Lakoff, among others, does a great job explaining why
"values", "goals", and "morals" are simply justifications we create after
we've made a decision.

Okay, maybe my deconstruction is correct and maybe it isn't. Whatever. Unless
you can explain to me why no such deconstruction is possible, I have no reason
to accept that deciding is not computation. In other words, if it's not
computation, then what _is_ it?

>> And what makes you think we couldn't build a biological computer

>We can, and we should - but let's agree that it won't be a Turing machine.

Of course it won't. It will be less powerful, because Turing machines have
infinite memory and the universe is finite. It'll be a DFA, just like all
current computers.
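
For anyone puzzled by the DFA remark: any machine with finite memory can be
described by a finite table of states and transitions. A minimal sketch (my
own toy example):

    # A DFA over {0, 1} accepting strings with an even number of 1s:
    # finite states, a transition table, and no unbounded tape.
    def run_dfa(transitions, start, accepting, s):
        state = start
        for ch in s:
            state = transitions[(state, ch)]
        return state in accepting

    t = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd",   ("odd", "1"): "even"}
    print(run_dfa(t, "even", {"even"}, "1011"))  # False: three 1s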

~~~
andreyf
> Computer vision is a subfield of AI...

One that isn't close to being able to process images the way rats can.

> Robots can run, and running has nothing to do with consciousness.

An animal can learn a new gait if it loses a leg. As far as I'm aware, no
robot can do that. I was giving examples of what mice can do, not of
consciousness.

> You've totally missed the point of the Chinese Room [...]

Good point. Good thing I dropped the philosophy major :)

> The only limit on the speed of computation is the speed of light (and the
> physical size of the universe limits parallelization).

Also, our architecture. The point I've heard Searle argue (not the Chinese
Room, I guess) is that the computers we build now don't have the right
architecture, and are as likely to simulate consciousness, given the
constraints of the universe, as a convoluted system of telegraphs.

 _Unless you can explain to me why no such deconstruction is possible, I have
no reason to accept that deciding is not computation. In other words, if it's
not computation, then what is it?_

Let me give my point another shot - deconstruction of consciousness _is_
possible, but not in a way which can be simulated by our current machine
architecture.

 _It'll be a DFA, just like all current computers._

Do you suppose a brain is a DFA? Which do you suppose has more states - all of
the computers in the world, or a single mouse brain?

~~~
endtime
>One that isn't close to being able to process images the way rats can.

Can you justify that? CV systems can track motion, detect objects, do OCR,
etc. I don't know much about rats, but I'd imagine that modern CV systems are
more powerful than rats.
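
For a sense of what off-the-shelf CV looks like, here's a minimal
motion-detection sketch using OpenCV's background subtraction (assumes a
webcam at index 0; one standard technique among many, purely illustrative):

    # Flag moving regions in a webcam feed; press 'q' to quit.
    import cv2

    cap = cv2.VideoCapture(0)
    subtractor = cv2.createBackgroundSubtractorMOG2()

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # nonzero where pixels moved
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:  # skip small noise blobs
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("motion", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()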

>An animal can learn a new gait if it loses a leg. As far as I'm aware, no
robot can do that. I was giving examples of what mice can do, not of
consciousness.

Uh, okay. If you want to see organic-looking gait, look up Big Dog. Here's a
video, deep linked to a part where it stumbles on ice and regains its balance:
<http://www.youtube.com/watch?v=cHJJQ0zNNOM#t=1m25s>. In general, neural nets
often exhibit organic-seeming behavior. In any case, I don't think that
mammalian joint structure (which, and I'm just guessing here, is probably what
creates the kind of motion we think of as organic) has anything to do with
consciousness.

>Also, our architecture. The point I've heard Searle argue (not the Chinese
Room, I guess) is that the computers we build now don't have the right
architecture, and are as likely to simulate consciousness, given the
constraints of the universe, as a convoluted system of telegraphs.

Well, we could build different hardware, but it seems much more interesting to
simulate a mind in software, in which case architecture doesn't matter.

>Let me give my point another shot - deconstruction of consciousness is
possible, but not in a way which can be simulated by our current machine
architecture.

Why not? Sure, our processors are a bit slow, but that's not really an
argument for fundamental impossibility.

>Do you suppose a brain is a DFA? Which do you suppose has more states - all
of the computers in the world, or a single mouse brain?

Not that it's especially meaningful, but I'd imagine all the computers in the
world do. To try and give you a better answer, I googled "number of neurons in
a mouse brain" and the first result was "IBM's BlueGene L supercomputer
simulates half a mouse brain" from 2007:
<http://www.engadget.com/2007/04/29/ibms-bluegene-l-supercomputer-simulates-half-a-mouse-brain/>

------
TheAmazingIdiot
For those who like Kurzweil's ideas, might I recommend books by the author
Greg Egan.

His books, in particular "Diaspora", follow a story line of AI-humans who live
to invent, create, and discover. Their journey takes them through 10^10
universes just to solve one problem: the destruction of the Earth.

~~~
jokermatt999
Along the same lines, I'd recommend Accelerando by Charles Stross.
(<http://www.accelerando.org/>) It's another well-written piece of post-
singularity fiction, fascinating in the ideas and implications it presents.

Yes, this is the second topic today I've recommended it in. No, I am not a
shill, it just came up twice.

