I don't think so. He says as much in his rebuttal of the "Brain simulator" reply:

>The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states.

(http://pami.uwaterloo.ca/tizhoosh/docs/Searle.pdf)

I believe he would say the same thing with "neuron firings" replaced by "biochemical processes". This is a slippery argument that only religious arguments can match. Do you push your simulation down to the level of individual molecules and keep looking for "intentional states"? Molecules are pretty dumb.




This assumes the very thing CAN be simulated. Well, you can simulate the interactions of water molecules in a program, but that doesn't get you actual wetness.

So, what if this "consciousness" property depends on the biochemical processes and substances of the brain, in the way wetness depends on actual water?

If I simulate molecules moving rapidly and hitting each other, I get a simulation of what happens when heat is produced, but not actual heat --my simulation cannot light a match, for instance.
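
To make that concrete, here's a toy sketch in Python. Every number in it (particle count, molecular mass, velocity spread) is an illustrative assumption; it computes the temperature a gas of such particles would have, but the result only describes heat; it can't produce any:

    import random

    K_B = 1.380649e-23   # Boltzmann constant, J/K
    MASS = 4.65e-26      # roughly the mass of an N2 molecule, kg (assumed)
    N = 100_000          # number of simulated particles (arbitrary)
    SIGMA = 300.0        # assumed spread of each velocity component, m/s

    # Draw random 3D velocities for each "molecule".
    velocities = [(random.gauss(0, SIGMA),
                   random.gauss(0, SIGMA),
                   random.gauss(0, SIGMA)) for _ in range(N)]

    # Mean kinetic energy per particle; for an ideal gas <E> = (3/2) k_B T,
    # so T = 2<E> / (3 k_B).
    mean_ke = sum(0.5 * MASS * (vx*vx + vy*vy + vz*vz)
                  for vx, vy, vz in velocities) / N
    temperature = 2 * mean_ke / (3 * K_B)

    print(f"simulated temperature: {temperature:.0f} K")
    # The program "knows" the gas's temperature; the room it runs in stays cool.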


Wetness = the sensory brain state generated when thermo- and tactile receptors fire as the hand touches water. Connect your simulator to those nerves and you've got actual wetness. Conversely, your simulated heat could light a simulated match. These kinds of linguistic arguments are inherently dualistic (i.e. they falsely assume that words like 'wetness' have meaning outside the realm of the human CNS, in a parallel universe perceived by the mind but not the brain).

Consciousness, OTOH, is a brain state felt by the brain, so it should in principle be simulatable, just like any physical system. Unless we believe it's metaphysical.


>Wetness = the sensory brain state generated when thermo- and tactile receptors fire as the hand touches water. Connect your simulator to those nerves and you've got actual wetness.

Only this "wetness" wouldn't soak an actual napkin.

Nothing linguistic about it.

Physical objects have physical properties --you can simulate those, but then you have to simulate the whole surroundings (or the universe in the extreme) to get the effects of those properties on other items.


>Only this "wetness" wouldn't soak an actual napkin.

No "wetness" will ever do it. That would require real water, right? What causes the soaking are electrical forces, there's no such thing as "wetness" in nature. I believe it's just a word that humans invented for the properties of water.

>you have to simulate the whole surroundings (or the universe in the extreme) to get the effects of those properties on other items

Agreed, but what's the point? The important thing is that the agent is able to communicate its internal state to us. That's what humans do with each other (the other minds problem), and we typically assume that there is "understanding".


>No "wetness" will ever do it. That would require real water, right?

You don't say! (as the meme goes).

I mean, of course, that I'm using the word wetness to refer to the physical effects of the presence of water.

So, to return to the actual thing under discussion: we can simulate stuff from the physical world, and the simulation might capture some of the same information and calculations (e.g. down to the individual positions of particles, exchanges of energy, etc.), but it doesn't have the same properties unless you simulate the whole of its environment.

>Agreed, but what's the point? The important thing is that the agent is able to communicate its internal state to us.

What I'm implying is that you might not be able to get an intelligent agent to even have an "internal state" advanced enough, unless you mimic and simulate the whole thing. Not just "this neuron fires now" etc., but also stuff like the neuron's materials, physical characteristics and responses, and so on. Those could be essential to things like how accurately (or not) information such as memories and thoughts is stored, how it is recalled, the timing of neuron firings, etc.


>What I'm implying is that you might not be able to get an intelligent agent to even have an "internal state" advanced enough, unless you mimic and simulate the whole thing

You might not, but current research is more hopeful. The current consensus is that you have simulated a neuron well enough once you get down to the level of chemical reaction kinetics, and that level of description appears to be accurate enough to recreate the electrical properties of neurons. There are as yet no neuronal phenomena that can't be explained within this framework, so the consensus is more "we might" than "we might not".
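
To give a rough sense of what simulating a neuron's electrical behaviour looks like in code, here is a minimal leaky integrate-and-fire sketch in Python. It is far cruder than the reaction-kinetics-level models I'm referring to, and every parameter value is just an illustrative assumption:

    # A leaky integrate-and-fire neuron: much simpler than kinetics-level
    # models, but it shows the basic shape of simulating a neuron's
    # electrical behaviour. All parameter values are illustrative.
    dt      = 0.1     # ms, integration step
    tau_m   = 10.0    # ms, membrane time constant
    v_rest  = -70.0   # mV, resting potential
    v_reset = -75.0   # mV, potential right after a spike
    v_th    = -54.0   # mV, firing threshold
    r_m     = 10.0    # megaohm, membrane resistance
    i_ext   = 2.0     # nA, constant injected current

    v = v_rest
    spikes = []
    for step in range(int(500 / dt)):               # simulate 500 ms
        # dV/dt = (-(V - V_rest) + R*I) / tau_m  (Euler integration)
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
        if v >= v_th:                               # threshold crossed
            spikes.append(step * dt)
            v = v_reset

    print(f"{len(spikes)} spikes in 500 ms (~{len(spikes) * 2} Hz)")

Kinetics-level models replace the single leak term with differential equations for the ion-channel gating variables, but the overall structure of the simulation is the same.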



