One of the sacrifices that will have to be made is that we will have to admit that we could be emulated on a computer and are little more than self-aware information.
I don't have any problem per se with the idea that my brain might be a computer. However, this idea has been repeated now for more than 50 years, with very little empirical evidence of the digital modus operandi of brains that I know of. I would appreciate some pointers to recent scientific works which support the brain = computer hypothesis.
What do you propose a brain might be doing, other than computation?
We have an existing theory that our computers are universal (even a Turing Machine is), which means they can do any possible computation. If the brain does computations, then our computers could do the same computations.
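To make the universality claim concrete, here is a toy sketch (names and the instruction set are my own, purely illustrative): one fixed interpreter program that can run any program handed to it as data. The interpreter never changes; only its input does.

```python
def run(program, x):
    """Interpret a list of ('inc',), ('dec',), ('jnz', target)
    instructions operating on a single counter, starting at value x."""
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == 'inc':
            x += 1
        elif op[0] == 'dec':
            x -= 1
        elif op[0] == 'jnz' and x != 0:   # jump to target if counter non-zero
            pc = op[1]
            continue
        pc += 1
    return x

# Two different "machines", both executed by the same fixed interpreter:
add_two = [('inc',), ('inc',)]        # adds 2 to the input
countdown = [('dec',), ('jnz', 0)]    # loops, decrementing until 0
```

So `run(add_two, 3)` gives 5 and `run(countdown, 5)` gives 0 - the same idea, scaled up, is what makes one computer able to imitate any other.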
"Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain’s modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition."
EDIT: I forgot a very interesting one involving a computational experiment:
Selmer Bringsjord, "A New Gödelian Argument for Hypercomputing Minds Based on the Busy Beaver Problem"
Now I'm looking for evidence which supports your theory - that brains = minds = computers. Affirming that a theory exists without providing evidence that it is true is not enough.
Yeah, the candidates are not all alike, even though Barsalou cites Spivey, who is really in the analogue camp. In contrast, the "Grounded Cognition" idea revolves around simulation - which may or may not work in an analogue manner - and only denies the existence of amodal symbols in brains, as opposed to, say, Jeff Hawkins (who didn't do any experiments either, AFAIK). Bringsjord is still another case; he's an ex-GOFAI guy who, together with David Ferrucci, built the storytelling system BRUTUS.
Bringsjord makes another relevant contribution to the debate in "BRUTUS and the Narrational Case Against Church's Thesis", which used to be available from citeseer, but that site is down ATM, so I cannot provide a link.
But since you're asking a question of your own instead of answering mine, I take it that you don't know of any evidence, either.
One cannot present evidence to differentiate between two theories unless both are coherent and make clear, different predictions. Arguing against theories is a valid way to deal with them; so is asking questions to clarify what they are saying.
I don't understand you there. Spivey can run experiments which lead him to conclude that there may not be any fixed representations of anything in the brain. Bringsjord can manually solve the Busy Beaver problem for 6-state Turing Machines, while the machines themselves can't. These are examples of what I have; what I'm looking for now are experiments from whose results the opposite can be concluded.
How is it relevant whether there are any fixed representations in the brain? Self-modifying code could achieve that.
There is no reason to believe that Bringsjord can solve that problem and a computer can't. Divide his method of solving it into very small steps. Then answer: which step did he do that a computer can't do?
There are many steps. You have a set of different Turing Machines with alphabet {0,1}, each of which has, say, 4 states. You want to know which of these is the one that, starting from a tape filled with 0's, writes the largest number of consecutive 1's onto the tape before it halts - if it halts; you don't know that at the beginning. A human can find out by manually simulating each machine and counting the steps. It's a lot of work - there are 61,519 possible 4-state machines - but Bringsjord (or, more likely, a group of undergrads available to him) has done it. A computer can't do it. For details, please read the paper.
You're telling me that a computer cannot simulate the steps of a Turing machine, one at a time? It can't store the current state of the Turing machine, and the rules, in memory, and use the rules to get from one state to the next?
Are you really saying that a computer with too little memory can't do it, or something like that? Because it seems blatantly obvious that a computer can simulate a Turing machine.
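For what it's worth, the step-by-step simulation you describe fits in a few lines. This is a sketch (the encoding is my own choice); the transition table shown is the well-known 2-state, 2-symbol busy beaver, which halts after 6 steps with 4 ones on the tape:

```python
from collections import defaultdict

def run_tm(rules, max_steps):
    """Simulate a Turing machine: rules maps (state, symbol) ->
    (write, move, next_state). The tape is a dict defaulting to 0.
    Returns (halted, steps_taken, ones_on_tape)."""
    tape = defaultdict(int)
    state, pos = 'A', 0
    for step in range(max_steps):
        if state == 'H':                       # halting state reached
            return True, step, sum(tape.values())
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
    return state == 'H', max_steps, sum(tape.values())

# The 2-state, 2-symbol busy beaver champion:
bb2 = {
    ('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
    ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H'),
}
```

`run_tm(bb2, 100)` returns `(True, 6, 4)` - the machine's state and rules sit in memory and the rules carry it from one configuration to the next, exactly as you said.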
A Turing Machine is an idealized computer. But no computer/TM can find out which of the 61,519 possible 4-state TMs can write the longest string of 1's on a blank tape before halting.
No, a computer cannot do it, due to incompleteness (once there was a man called Kurt Gödel...). Bringsjord's experiment proves that humans can "hypercompute" uncomputable functions. The great majority of functions which exist are uncomputable.
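To show where the difficulty actually lives, here is a sketch (my own toy machines, not from the paper) of the brute-force approach: a simulator can report "halted within N steps", but when the step budget runs out it can only say "unknown" - without extra reasoning it cannot certify that a machine never halts, and that is the undecidable part.

```python
def classify(rules, max_steps):
    """Run a 2-symbol TM from a blank tape. Return 'halts' if it reaches
    state 'H' within max_steps, otherwise 'unknown' - it might halt
    later, or run forever; no program can always tell which."""
    tape, state, pos = {}, 'A', 0
    for _ in range(max_steps):
        if state == 'H':
            return 'halts'
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
    return 'unknown'

halter = {('A', 0): (1, 'R', 'H'), ('A', 1): (1, 'R', 'H')}
looper = {('A', 0): (0, 'R', 'A'), ('A', 1): (0, 'R', 'A')}  # runs forever
```

`classify(halter, 100)` returns `'halts'`, while `classify(looper, 100)` returns `'unknown'` no matter how large the budget - the dispute above is precisely over whether humans have a way past that "unknown" bucket that machines lack.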
Computation is mathematical calculation. The brain does not do straight math like computers do.
Most of what we think we know of "consciousness" is based on estimation and assumption, but from the very best research out there we know that the brain is a molecular machine. Through a system made of billions of neurons interacting through neurotransmitters and electrical/molecular forces, we end up with processes like Long Term Potentiation (http://en.wikipedia.org/wiki/Long-term_potentiation) for memory.
Negative feedback loops (for hormones), neurotransmitter synthesis, LTP, and "thought" do not come from mathematical calculation performing a set of predetermined functions (like software) - they come from molecules governed by the forces of physics, leading to a system designed by the processes of evolution.
computers can be programmed to do estimation and statistics and that sort of thing. they can also have sensors and feedback mechanisms, and calculate anything the forces of physics would do. evolution can also take place within a computer.
I think you're missing the point, mainly that all of that is programmed. There is no way a computer can truly evolve; only its software can.
Evolution is the response to external factors, mainly death, which prevents the spread of genetic material.
Now if you were to make an android that had all the risks a human does, like being hit by a car or eaten by a lion, and gave it two ROM sequences of commands that its consciousness is based on, and whenever two androids 'reproduce' they pass on only one ROM sequence, then you would end up with genetic selection. Essentially, all new androids would end up with the ROM sequences of the most successful androids.
Factor in that there might be write errors and data corruption from radiation turning a 0 into a 1, and they might get true evolution.
However, a desktop computer or mainframe cannot evolve, as it's a single entity. Even thousands of them won't evolve, as new computers aren't made through the joining of hardware parts from two older PCs. It isn't like "aww, he's got your webcamera, honey" or "oh no, he's got a cooling fan defect, one of the heat sinks isn't working properly".
If PCs suddenly started becoming self-aware, every single one would be an alien to each other as it would be fundamentally unique.
A large portion of Asian people descend from Genghis Khan; the Irish descend from some warlord there. Our genetic tree as a species is marked by irregular growth and random dead ends. A computer won't ever achieve that, not until they become so unrecognisably living that they risk being killed or eaten, or choose not to have offspring, or conquer a country and have 500 kids with random androids.
Then there are things like the founder effect to consider: a Micronesian island population has 5% colour blindness and 30% carry it, even though its prevalence in the US is 0.003%. Why? Because everyone but 20 people died after a typhoon, and 1 of the survivors had it, and since 1775 1/20th of the population has been totally colour blind. If he had been killed, they'd have 0% prevalence of colour blindness.
Yes, because expressing any algorithm = sending signals to muscles that produce speech, or maybe muscles that type text on a keyboard, etc. These signals are the result of some computation, obviously.