Our brains ARE computers. Conscious machines are a matter of when, not "if".
Think of a computer simulation of every atom of your brain, along with modelling the electrical impulses going through it. It WOULD be you, and it would be really freaked out by the lack of sensory input. Probably you'd need to "anesthetize" parts of it that panic when they detect "you" aren't breathing -- along with other autonomic controls -- or feed them fake information. Otherwise you'd be in a situation where you'd boot up your brain simulation and you'd actually torture a conscious machine. If you simulated every atom, it would be you!
Note also that this would not need to run in real-time. It could be updated in simulated femtosecond steps that could take an arbitrarily large amount of "real" time, without realizing it. Ethics would apply to a software simulation, as far as pain goes, but you could shut it down or pause it without it realizing what happened, as long as it was in a virtual environment. If you gave it physical sensors, it would find it quite jarring to be in the middle of saying something and suddenly have it be 5 days later partway through the sentence!
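The decoupling of simulated time from wall-clock time can be sketched in a few lines. This is a toy illustration (the state-update rule is a made-up placeholder, not a real physics model): the simulated system's clock only advances inside `step()`, so however long the host pauses between steps is invisible from the inside.

```python
# Toy fixed-timestep simulation whose "simulated time" is decoupled
# from wall-clock time. step() is a placeholder update rule; a real
# brain simulation would integrate physics here instead.

DT = 1e-15  # one simulated femtosecond per step

def step(state):
    # placeholder dynamics: advance the simulated clock and update state
    return {"t": state["t"] + DT, "x": state["x"] * 0.999 + 1.0}

state = {"t": 0.0, "x": 0.0}
for _ in range(1000):
    state = step(state)
    # The host may pause here for seconds or days; from the inside,
    # simulated time has still advanced by exactly DT per step.

print(f"simulated time elapsed: {state['t']:.2e} s")
```

Pausing or checkpointing between iterations changes nothing the simulated system could detect, which is exactly why the "5 days later mid-sentence" jarring only appears once real-world sensors are attached.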
Why would we expect an intelligence with sensory inputs and expressive outputs so radically different from our own to relate to our experiences in the same way?
I'm surprised Gelernter is so pessimistic (he's a brilliant guy)... It looks a lot easier to me than he's making out.
Hecht-Nielsen's discoveries about brain architecture have supplied a great foundation for building raw cognition, so we can move forward to the conscious level.
"looks a lot easier"? Please let everyone know when you've got it working!-))
To the best of my knowledge there's no neural network technology available today that could be described as providing "raw cognition". Neural nets are for the most part focused on backpropagation networks, which aren't prevalent in nature. The brain has much more variety in its architecture.
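To make the contrast concrete, here is the kind of error-driven weight update that dominates artificial neural nets: a single sigmoid unit trained by gradient descent on squared error (a minimal example of my own, learning logical AND). It's this global, supervised error signal that has no clear one-to-one counterpart in biological neurons.

```python
# Minimal backpropagation-style learning: one sigmoid unit trained
# by gradient descent to compute logical AND.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 1.0

for _ in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # gradient of squared error w.r.t. the pre-activation,
        # propagated back to each weight
        delta = (out - target) * out * (1 - out)
        w[0] -= lr * delta * x1
        w[1] -= lr * delta * x2
        b -= lr * delta

preds = [round(sigmoid(w[0] * a + w[1] * c + b)) for (a, c), _ in data]
print(preds)  # should settle on [0, 0, 0, 1]
```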
Gerald M. Edelman has argued that a biological approach grounded in evolution will prove more fruitful than the physical symbol system hypothesis (PSSH) of "good old-fashioned AI". Edelman explicates his ideas in "Bright Air, Brilliant Fire: On the Matter of the Mind":
In that book he clearly states his objections to the PSSH.
There is significant recent work on consciousness, tying it to neuroscience studies of the brain. Bernard J. Baars' explications of Global Workspace theory appear fruitful in leading us toward conscious computing. Right now I'm reading one of Baars' books, "In the Theater of Consciousness: The Workspace of the Mind":
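The core mechanism Baars describes (specialist processes compete for access to a limited-capacity workspace, and the winner's content is broadcast to all of them) is simple enough to sketch. This is my own very loose toy rendering of the idea, not code from his book; all names here are invented for illustration.

```python
# Toy sketch of a Global Workspace cycle: specialists compete on
# activation, the winner gains the workspace, and its content is
# broadcast back to every specialist.

class Specialist:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0
        self.inbox = []

    def receive(self, message):
        # broadcasts from the workspace reach every specialist
        self.inbox.append(message)

def global_workspace_cycle(specialists):
    # competition: the most activated specialist wins the workspace
    winner = max(specialists, key=lambda s: s.activation)
    message = (winner.name, winner.activation)
    # broadcast: the winning content goes out to all specialists
    for s in specialists:
        s.receive(message)
    return winner

pool = [Specialist("vision"), Specialist("hearing"), Specialist("touch")]
pool[1].activation = 0.9  # a loud noise grabs attention this cycle
winner = global_workspace_cycle(pool)
print(winner.name)
```

On this view, "consciousness" corresponds to whatever content currently holds the workspace and is being broadcast, which is what makes the theory look implementable in software at all.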
His arguments for why consciousness isn't possible in software remind me of the pessimists who decried the impossibility of human flight because we are not birds.