Questions about the model are addressed with handwaving such as "use numbers and your head" or "this is obvious and everyone else is wrong"; no existing literature or prior work is cited; rather, thought experiments that completely evade the points raised by others are built out of thin air.
The core point is that qualia emerges out of perception, and therefore that any system that can perceive in a broad sense, even if that perception is merely simulation of perception, will effectively experience consciousness. But there isn't any reference to proven scientific models for any of the parts at play here; posts treat the core premises as ultimate truths and then spin around that.
Regarding your mention of your posting history and your complaint about ad hominem: this is disingenuous given the way you have responded to other posters. If you want a better response to your future posts, I suggest making them shorter, supporting your core points with references to existing work, and avoiding the "everyone but me is wrong" style of discourse.
If you meant to summarize my ultimate truth via "The core point is that qualia emerges out of perception, and therefore that any system that can perceive in a broad sense, even if that perception is merely simulation of perception, will effectively experience consciousness", then you didn't understand. The ultimate truth, or core point, is that qualia is an emergent property of what the parts are doing, and that if you simulate those parts with enough fidelity, consciousness still emerges. Certainly lots of things can perceive without being conscious - a Roomba vacuum cleaner, for one.
So this argument about simulating parts: it's like a fine mechanical watch that tells time. If you simulate the watch with enough precision, the simulation can also "tell time" - only here, rather than telling time, the emergent property of the clockwork is "being conscious". You may not get my analogy, and that's fine.
I'm not here to convince anyone; in my opinion it's not worth discussing further. I've given a very explicit model under which I would be wrong. If you disagree with me, that's fine.
No, I think it's worth discussing. I may not have understood you completely, but I for one appreciated reading your explanation. You may or may not be right, but I thought your exposition was coherent and worth reading.
Presumably you agree that you could replace all the muscles and bones of a bird and it would still fly, but you feel it is not obvious that you could replace all the neurons in a brain and it would still think. Why not? What's different? What's missing?
> Presumably you agree that you could replace all the muscles and bones of a bird and it would still fly
Well here's the thing, I think I have a problem with that premise in itself. Theoretically that idea seems easy to agree with. But in practice, has it ever been done? The answer is a resounding no.
Can we make things fly? Of course, but only by being insanely inefficient at it. Humans have never built something like a common swift, which can fly for 10 months without landing (http://phys.org/news/2016-10-ten-months-air.html).
Can we reach that point one day? Maybe yes, maybe no, I have no clue. But if we are to reach that point, we need to build sound formal models to get there.
Until we have such models for reasoning with things at that scale, I don't think claiming vehemently that these things are 100% known is intellectually honest.
The question isn't whether we can make an artificial bird now, but whether such a thing is possible in principle. I think logicallee's point is that any answer other than a clear "yes" would require the assumption that there is something fundamentally "unknowable" about the problem, and generally such objections are considered unscientific. Unscientific doesn't mean wrong, of course, but it does weaken the case.
So let's go back a step: can we theoretically replace a single bone in a single bird without affecting its flight performance? I think the answer has to be yes. If so, is there a point at which we can no longer replace successive bones? While it might be a technical challenge, it's difficult to see how there could be a clear line between "flies" and "doesn't fly".
Can we replace an individual neuron in a human brain with an electronic equivalent without affecting the perception of pain? The argument is that if one can, what's to prevent replacing all the other neurons in succession? Are certain neurons special and unreplaceable? Again, it's difficult to see that there could be any clear line between "pain" and "no pain".
I think the strength and weakness of logicallee's argument is the assumption that it's possible to replace a single neuron with an electronic equivalent without causing any other changes to the system. If one can, it would seem straightforward to argue that machines can feel pain. I think the alternative would require assuming the existence of some "life force" for which we currently have no scientific basis.
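The inductive shape of that argument can be made concrete with a toy sketch. Everything here - the unit functions, the ten-element "network" - is an illustrative assumption, not a model of real neurons; the only property that matters is that the replacement has identical input/output behavior:

```python
# Toy sketch of the replacement induction. Each "unit" is a stand-in
# function, not a real neuron model; the replacement is functionally
# identical by construction.
def biological_unit(x):
    return max(0.0, x)  # stand-in response function

def electronic_unit(x):
    return max(0.0, x)  # functionally identical replacement

def run(net, signal):
    # Pass a signal through every unit in sequence.
    for unit in net:
        signal = unit(signal)
    return signal

network = [biological_unit] * 10
before = run(network, 3.0)

# Swap units one at a time; the overall output never changes at any step.
for i in range(len(network)):
    network[i] = electronic_unit
    assert run(network, 3.0) == before

print("fully replaced, output unchanged:", run(network, 3.0) == before)
```

The whole argument rides on the replacement being exactly functionally identical: the moment a substitute differs even slightly, the per-step assertion can fail, and that exactness is precisely the disputed premise.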
> But if we are to reach that point, we need to build sound formal models to get there.
To actually build such a system, probably, but are formal models necessary to reason about the consequences? Maybe, but thought experiments might help us build those models. With that in mind, do you think it's possible to functionally emulate any single neuron in silicon? If not, what's the missing entity?
> I don't think claiming vehemently that these things are 100% known is intellectually honest.
Be gentle with accusations of intellectual dishonesty. Honesty does not require absolute correctness, only belief. I see no reason to believe that logicallee is overstating his confidence in his logic, which at its base is Occam's Razor. This may be a case where it is better to be "clear and wrong" than "fuzzy and arguably correct".
> I think logicallee's point is that any answer other than a clear "yes" would require the assumption that there is something fundamentally "unknowable" about the problem, and generally such objections are considered unscientific. Unscientific doesn't mean wrong, of course, but it does weaken the case.
My take on it is that anything other than "we really don't know right now; reasoning about this question is far beyond our current scientific models" requires a burden of proof that I have not seen met anywhere - neither in logicallee's posts nor in the writings of other "computable consciousness"-ists (I don't know if there is a better term for them). Testable, verifiable models are what this particular burden of proof calls for before I could be satisfied with a clear "yes" or "no".
> Can we replace an individual neuron in a human brain with an electronic equivalent without affecting the perception of pain? The argument is that if one can, what's to prevent replacing all the other neurons in succession? Are certain neurons special and unreplaceable? Again, it's difficult to see that there could be any clear line between "pain" and "no pain".
Well, you can easily turn this around - can we remove an individual neuron in a human brain without affecting the perception of pain? We actually know the experimental answer to this one, and it is a resounding yes - in fact, a few of your neurons probably have naturally died since you started reading my post, and your perception of pain (or anything else) most likely hasn't changed. And there are medical cases of people losing significant brain mass and operating in society at a perfectly normal level.
This makes it quite hard to build a recursive argument around "if we can substitute one neuron with an electronic equivalent without changes, we can substitute all neurons with electronic equivalents without changes", because that argument would work just as well with "removing the neuron" in place of "replacing it with an electronic one". And in fact, if removing a single neuron has no effect on the perception of pain, and replacing a neuron with an electronic one has no effect on the whole either, how does that help you in any way?
The analogy with a bird's bones doesn't quite hold - there are specific bones that we can remove that will completely prevent a bird from flying, and others that will have no effect; modern science gives us a pretty good idea of how a bird's various bones fit in the "enabling flight" hierarchy. This clear and well defined "hierarchy of bones for flying" does not seem to be present in neurons.
We might yet establish a clear, systemic hierarchy of neurons one day that lets us explain consciousness rigorously, the same way that we can explain how the different bones and muscles of a bird let it fly! But we are far from it, and there is no certainty about the feasibility of such an endeavor.
The recursive (more correctly: inductive) argument is that there is no effect. It's like saying that no matter how many times you subtract 0 from a positive integer, the result is still a positive integer - because subtracting 0 has no effect on the functioning of the system (the two are "the same" - not just similar but exactly the same). Subtracting epsilon (an arbitrarily small number) is similar, but the inductive argument breaks: if you do it enough times, the result is no longer a positive integer.
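A quick numeric sketch of that difference (toy code only, with arbitrarily chosen starting value, step size, and step count): a step with exactly zero effect preserves the invariant no matter how often it is repeated, while a step with an arbitrarily small nonzero effect eventually destroys it.

```python
def still_positive(x, delta, steps):
    # Repeatedly apply a step that subtracts delta, then check the invariant.
    for _ in range(steps):
        x -= delta
    return x > 0

print(still_positive(5, 0, 10**6))     # zero-effect step: invariant always holds
print(still_positive(5, 1e-5, 10**6))  # tiny nonzero step: invariant eventually fails
```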
The fact that your brain is already wired somewhat differently than it was when you started reading this comment doesn't have much bearing on this argument.
To break the inductive argument and prove that mechanical birds are impossible, you would have to show that there is some bone or ligament or muscle or feather or something else where, if you replaced it with a plastic/mechanical/etc. version with the exact same properties and behavior, the flying system could no longer function "the same". In other words, that there is some bone or muscle or other mechanical part that even in a hundred thousand years science could not replace with a non-biological version without changing the behavior of the whole system - that it could never be made "the same" even in theory. (I say "in theory" because it is quite likely that we could never do so with a neuron - the whole switching back and forth example is very far-fetched.)
So it is only a thought experiment/inductive argument: not a practical statement about biology.
One interesting thing to point out: no bird has the carrying capacity of a modern plane, and yet we haven't reproduced mechanical birds that function the way birds do. Science doesn't really have a strong incentive to exactly copy the mechanics of a pigeon with plastic components, plastic feathers, ligaments, and some kind of muscles, composed exactly the same way. Other than as a novelty, it doesn't really "get us" anything.
In perhaps the same way, science may not have any strong incentive to exactly copy humans into a virtual reality - setting up neural networks wired the exact same way, feeding the virtual neural nets inputs exactly like those a child would hear and see around them, etc. Even though very large server farms like Amazon's today have more memory and do more computation than the human brain's neural connections imply (given our estimate of what actually does the computation) - so that they're like "jets" while the human brain is like a "bird" - there is still no clear incentive to copy humans in a way that leaves the simulation conscious.
What does a conscious simulation get us? What does a mechanical pigeon get us?
So for this reason there is at least some reason to think that we are not about to build a conscious machine - we don't need it.
> what you would have to show is that there is some bone or ligament or muscle or feather or something else where, if you replaced it with a plastic/mechanical/etc. version with the exact same properties and behavior, the flying system could no longer function "the same"
Yes. This implies that you can measure the properties of the system you intend to recreate, and reproduce them at a high enough fidelity. The core of my rebuttal is that we might not be able to a) measure all the properties that matter and/or b) recreate them with enough fidelity for certain categories of problems. Bones, we're good. Neurons at extremely large scales? I wouldn't bet either way.
> Science doesn't really have a strong incentive to exactly copy the mechanics of a pigeon with plastic components, plastic feathers, ligaments, and some kind of muscles, composed exactly the same way. Other than as a novelty, it doesn't really "get us" anything.
It could get us drones that would fly 10 months without landing. Science doesn't really get much stronger of an incentive than "the military wants it".