
Artificial Intelligence Is Lost in the Woods - donna
http://www.technologyreview.com/Infotech/18867/
======
gyro_robo
Our brains ARE computers. Conscious machines are a matter of when, not "if".

Think of a computer simulation of every atom of your brain, along with the
electrical impulses running through it. It WOULD be you, and it
would be really freaked out by the lack of sensory input. Probably you'd need
to "anesthetize" parts of it that panic when they detect "you" aren't
breathing -- along with other autonomic controls -- or feed them fake
information. Otherwise you'd be in a situation where you'd boot up your brain
simulation and you'd actually _torture_ a conscious machine. If you simulated
every atom, it _would_ be you!

Note also that this would not need to run in real-time. It could be updated in
simulated femtosecond steps, each taking an arbitrarily large amount of
"real" time, without the simulation ever noticing. Ethics would apply to a software
simulation, as far as pain goes, but you could shut it down or pause it
without it realizing what happened, as long as it was in a virtual
environment. If you gave it physical sensors, it would find it quite jarring
to be in the middle of saying something and suddenly have it be 5 days later
partway through the sentence!
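
The decoupling of simulated time from wall-clock time is easy to sketch. Here's a toy Python loop (the step size and the idea of "pausing" mid-run are purely illustrative, not a real brain simulator): the simulated clock advances by a fixed amount per step, so a pause between steps is invisible from inside the simulation.

```python
import time

def run_simulation(steps, dt_sim=1e-15, pause_after=None, pause_secs=0.0):
    """Advance a toy simulation in fixed simulated-time steps.

    The simulated clock (sim_t) advances by dt_sim per step regardless
    of how much wall-clock time each step takes. Pausing between steps
    changes nothing observable inside the simulation.
    """
    sim_t = 0.0
    for step in range(steps):
        # ... update the simulated state here ...
        sim_t += dt_sim
        if pause_after is not None and step == pause_after:
            time.sleep(pause_secs)  # "real" time passes; simulated time does not
    return sim_t

# Two runs, one paused midway: the simulated clock ends up identical,
# which is why the paused "mind" could not notice the gap.
uninterrupted = run_simulation(1000)
paused = run_simulation(1000, pause_after=500, pause_secs=0.05)
assert uninterrupted == paused
```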

~~~
Leon
What you're describing sounds a lot like this good short story on just such a
situation; check it out:

<http://www.newbanner.com/SecHumSCM/WhereAmI.html>

------
cmars232
Why would we expect an intelligence with sensory inputs and expressive outputs
so radically different from our own to relate to our experiences in the same
way?

------
donna
I'm surprised Gelernter is so pessimistic (he's a brilliant guy)... It looks a
lot easier to me than he's making out.

Hecht-Nielsen's discoveries about brain architecture have supplied a great
foundation for building raw cognition, so we can move forward to the conscious
level.

~~~
mdkersey
Thanks for finding this article, Donna!

"looks a lot easier"? Please let everyone know when you've got it working!-))

To the best of my knowledge there's no neural network technology available
today that could be described as providing "raw cognition". Neural
nets are for the most part focused on backpropagation networks, which aren't
prevalent in nature. The brain has much more variety in its architecture.

Gerald M. Edelman has argued that a biological approach based on evolution and
biology will prove more fruitful than the physical symbol system
hypothesis (PSSH) of "good old-fashioned AI". Edelman explicates his ideas in
"Bright Air, Brilliant Fire: On the Matter of the Mind":

<http://www.amazon.com/Bright-Air-Brilliant-Fire-Matter/dp/0465007643/ref=sr_1_1/103-2087616-1051018?ie=UTF8&s=books&qid=1182842907&sr=1-1>

In that book he clearly states his objections to the PSSH.

There is significant recent work on consciousness, tying it to neuroscience
studies of the brain. Bernard J. Baars' explications of Global Workspace
theory appear fruitful in leading us toward conscious computing. Right now I'm
reading one of Baars' books, "In the Theater of Consciousness: The Workspace
of the Mind":

<http://www.amazon.com/Theater-Consciousness-Workspace-Mind/dp/0195147030/ref=sr_1_1/102-9391404-9424956?ie=UTF8&s=books&qid=1182841961&sr=1-1>

The Gelernter article is extremely timely, since it mentions several ideas
that clicked into place as I read Baars.

I was surprised to find that Baars is now working at The Neurosciences
Institute, which Edelman founded:
<http://www.nsi.edu/index.php?page=facilities_architecture>

------
donna
My question is: Can we build a mind out of software that aggregates millions
of minds?

