We know a ton about the brain but so little about the mind itself. We still don't have definitive answers to what consciousness is, why it's here, what it's useful for, and so on. Some people debate whether the mind exists at all. And there's still very little understanding of the difference between the conscious and the unconscious mind.
I think building an artificial consciousness is going too far. Artificial intelligence is simpler; it's just fake intelligence. Seems easy enough, right? If it looks like a duck and quacks like a duck, then it's intelligent. We don't need to make it "conscious" (again, whatever that means) in order for it to be intelligent.
I feel like we can build artificially intelligent software pretty "easily" relative to making it "conscious".
> We know a ton about the brain but so little about the mind itself. We still don't have definitive answers to what consciousness is, why it's here, what it's useful for, and so on. Some people debate whether the mind exists at all. And there's still very little understanding of the difference between the conscious and the unconscious mind.
One of the really, really bad consequences of the Cold War was the scientific divide between East and West; by that I mean the serious lack of scientific data exchange between the blocs. The consequences are still felt, and this area (the problem of consciousness) is one that suffered. The problem of "consciousness" was basically solved, at least at a conceptual level, by Soviet psychology and neuropsychology. Here I refer, of course, to the work of Vygotsky and Luria. What is consciousness? Almost nothing at all by itself. Consciousness as found in humans is a consequence of our cognitive development and the advanced symbolic capabilities of humans. The subjective perception we have of the thing we call consciousness is "simply" (it's not really simple when you get into the details) a product of humans acquiring language skills (I'm simplifying).
This is not to say the subject is trivial; it takes volumes to describe what is happening. But the thing we informally call "consciousness" is really nothing at all in and of itself, and the perception we have of it is just a result of the very complicated process of cognitive development. Thin air, like Lisp's cons.
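To make the cons analogy concrete: a cons cell doesn't have to be a thing at all; it can be defined purely by how it behaves. Here's a minimal sketch (in Python standing in for Lisp, with names of my own choosing) that builds a perfectly usable cons out of nothing but a closure:

```python
# A cons "cell" made of thin air: no record, no object fields,
# just a closure that remembers how to answer car/cdr requests.
def cons(a, b):
    def pair(which):
        return a if which == "car" else b
    return pair

def car(p):
    return p("car")

def cdr(p):
    return p("cdr")

p = cons(1, 2)
print(car(p), cdr(p))  # prints: 1 2
```

The pair exists only as a pattern of behavior; there is nothing "inside" it to point at, which is the sense in which the analogy works.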
If you want to read on it I can recommend Vygotsky's Thought and Language (actually, it's his only book) and Luria's Language and Consciousness (I'm not sure it was ever translated into English; it's a collection of his lecture notes from a university course he taught on the subject), or possibly Cognitive Development: Its Cultural and Social Foundations.
Why this line of thinking is mostly ignored in the West, I have no idea. Why we still cling to metaphysical (even religious, I would say) fantasies about "consciousness" is an interesting topic in itself. Is it because it's romantic to think there's something special, transcendent, about our minds? Are we really that sentimental? I have some hypotheses, but that's a different topic.
Compared to Russia, and most of the former Soviet bloc, the US is a fairly religious place. In most of the "importance of religion" surveys we rank in the ~50% "yes" category, whereas Russia is consistently ranked as one of the least religious places on Earth. (See http://en.m.wikipedia.org/wiki/Importance_of_religion_by_cou...) It seems only natural that the US would have a lot of metaphysical threads woven through its work on consciousness. (Caveat: writing from a town with a ten-block-long street that's pure churches, so... bias.)
I think that's another possible narrative for the mind, but it's certainly not hard science. There's no concrete data showing where the mind is in our bodies or what part of the brain creates it. Psychology can't answer questions like where the mind comes from or what experience is.
It's an interesting narrative though and I'll check out those books.
> Psychology can't answer questions like where the mind comes from or what experience is.
I'm not sure I understand what you mean by this. Of course it can; that's the whole purpose of psychology (and, more fashionably, neuroscience). To me that sounds like saying science can't answer these questions. Do note that when I say "psychology" I mean strictly the scientific areas of whatever comes in the bag labelled "psychology". Due to historical accidents the term acquired a lot of BS pseudo-scientific baggage, and it's really a shame those things can detract from the wealth of valuable hard results that honest scientific psychology has uncovered.
The answer that developmental cognitive psychology, at least the theory I'm referring to, gives is that "the mind" comes from the only place it can come from: neural processes and the way they hook into the environmental interactions of the organism (social and physical). The key to understanding what gives rise to "consciousness" is in understanding the role of language acquisition in broader cognitive development. The point where a child utters its first words is neither the beginning nor the end of this extremely nuanced process. In my first post I took it for granted that it's understood this is not just armchair speculation; it's based on empirical data. Like any good scientific theory it's far from complete, and maybe inaccurate in some details, but it's certainly infinitely better than an endless philosophical debate (with strong religious, or at best idealist, undertones) on what the mind is and where it comes from.
There's no such data, because the zombie problem is complete nonsense. Well, actually, it is complete nonsense precisely because empirical data can't say anything about it.
"Many physicalist philosophers argued that this scenario eliminates itself by its description; the basis of physicalist argument is that the world is defined entirely by physicality, thus a world that was physically identical would necessarily contain consciousness, as consciousness would necessarily be generated from any set of physical circumstances identical to our own."
Also this:
"Artificial intelligence researcher Marvin Minsky sees the argument as circular. The proposition of the possibility of something physically identical to a human but without subjective experience assumes that the physical characteristics of humans are not what produces those experiences, which is exactly what the argument was claiming to prove."
By that argument, there is no moral argument against inflicting pain on others, because the pain of another is not something we can empirically observe, except by analogy with how we react to the pain we ourselves experience.
First, the moral argument against inflicting pain on others doesn't depend on the existence of pain. The moral dilemma is: is it acceptable to inflict pain on others or not? This is different from, and to a large extent independent of, the question of whether pain exists in the experience of others. In other words, even if pain exists in others, it doesn't follow by mere logical reasoning that inflicting pain is morally wrong. There is an uncrossable ontological abyss between the empirical what is and the moral what should be.
Second, the case of pain, from an empirical side, is not at all like what we have in the philosophical zombie "problem". We can empirically observe pain: there are all sorts of physiological and neural manifestations of it. Of course, now you may say, "ah, but how can we know that these empirical manifestations mean the person is experiencing the sensation of pain?" Scientifically that dilemma makes little sense; it's simply unproductive, scientifically useless. If we were to go down that route, we could inject a similar dilemma into every scientific problem, which inevitably leads to solipsism. How can we really, really be certain that anything at all exists? Well, I suppose we can't, but this is a question that science long ago abandoned because it doesn't get you anywhere; it doesn't yield any useful results.
Do note that unlike the question of pain, the zombie problem is defined so that there is in principle absolutely no way to detect, to measure, whether someone is a zombie or not. On the other hand, we can in principle measure and detect events correlated with introspective reports on sensations. If we couldn't do that for some phenomenon, it would be wise to consider that the phenomenon doesn't exist for the purposes of empirical scientific examination.
Frankly, I'm surprised that my previous post (where I say the zombie problem is nonsense) got downvoted, because this is the foundation of scientific methodology. If you cannot, even in principle, measure or detect something, then it makes no sense to discuss it. Of course, you can amuse yourself and speculate on it, but that falls outside the boundaries of scientific inquiry, and I hope that's what we're discussing here.
> Of course, now you may say, "ah, but how can we know that these empirical manifestations mean the person is experiencing the sensation of pain?" Scientifically that dilemma makes little sense; it's simply unproductive, scientifically useless.
Sure, it's scientifically impossible to evaluate. From a purely scientific perspective pain is just electricity. How would you convince an intelligent being that could not feel pain that it exists at all?
The existence of pain falls outside the boundaries of scientific inquiry, I agree. But are you saying that it therefore doesn't exist? Because your earlier argument seems to be that we can explain the mystery of consciousness within a scientific framework, and that is the larger point I disagree with.
> How would you convince an intelligent being that could not feel pain that it exists at all?
Assuming the being is "reasonable" (in this context meaning it's willing to accept that there exist concepts it may not understand or directly experience, and is willing to trust us), we could just point out the chemical and electrical phenomena correlated with pain and say that they cause a certain kind of feeling of discomfort. We would get in trouble if this being also cannot feel general discomfort, but you're probably bound to hit a wall in understanding at some point anyway when communicating with an entity whose experiential capabilities are wildly different from ours.
> The existence of pain falls outside the boundaries of scientific inquiry, I agree. But are you saying that it therefore doesn't exist?
Actually, my point about pain was that it does exist, precisely because it can be examined and explained within a scientific framework. If we couldn't do that, then we could say that for all practical purposes, as far as science is concerned, "pain doesn't exist".
The same is true for consciousness. What's difficult about it is that it's not a thing: there's no hormone for consciousness, there's no brain centre where it's localized. Rather, it's a process and a product, both phylogenetic and ontogenetic, so it's a lot harder to capture and identify, to put it "under the microscope". It's not some secret sauce behind intelligence; it's a consequence of intelligence. And the most important part of the process is the dynamics of language acquisition (at least when we're speaking of conscious experience in Homo sapiens).
I could go into the details, but I'm afraid my posts would explode in length. At the moment I don't have time to dig for good online material on this, and I'm under the impression that the theory in time got derailed into developing practical aspects concerning child cognitive development, verbal learning, etc., and away from the hard, meaty implications we're discussing here, so I'm reluctant to even attempt to go down that rabbit hole. But those implications are explicitly there (the books I mentioned discuss the issue at length). Interestingly, about 10 years ago I was doing some work on word-meaning and symbol-grounding development, and I was both glad and frustrated to see the computer-modelling literature in this area full of operationally defined concepts from the theory, while people were seemingly unaware that this work had already been treated in depth at the theoretical level; there were no references to it then, and I'm not sure if anything has changed since, as I've moved on to other things. For example, the Talking Heads model[1][2]. It's not about consciousness per se, and the authors never reference the socio-cultural theory of cognitive development (a horrible name in this day and age; it tends to evoke associations with post-modern drivel, but nothing could be further from the truth). Still, it can give you a good idea of some aspects of the dynamics explored in the theory, because what is happening in the TH model is, in broad strokes, exactly what the S-C theory describes as happening externally during language acquisition.
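For a rough feel of that kind of dynamic, here is a toy naming game in Python. To be clear, this is my own hypothetical sketch, not the actual TH implementation; the agent structure, word format, and all parameters are invented for illustration. Agents pair off at random, a speaker names an object, and a shared vocabulary emerges from nothing but local interactions:

```python
import random

# Toy naming game, loosely in the spirit of the Talking Heads
# experiment (a hypothetical sketch, not the authors' actual model).
OBJECTS = ["red", "green", "blue"]

def make_agent():
    # Each agent starts with its own private random word per object.
    return {obj: ["w%d" % random.randrange(10_000)] for obj in OBJECTS}

def play_round(agents):
    speaker, hearer = random.sample(agents, 2)
    obj = random.choice(OBJECTS)
    word = random.choice(speaker[obj])
    if word in hearer[obj]:
        # Success: both collapse to the winning word (lateral inhibition).
        speaker[obj] = [word]
        hearer[obj] = [word]
    else:
        # Failure: the hearer adopts the speaker's word.
        hearer[obj].append(word)

agents = [make_agent() for _ in range(20)]
for _ in range(20_000):
    play_round(agents)

# The population tends to converge on one shared word per object,
# with no global coordination, only local speaker/hearer interactions.
for obj in OBJECTS:
    print(obj, {w for a in agents for w in a[obj]})
```

The point of the sketch is only that conventionalized "meaning" can arise purely from the external dynamics of repeated communicative interactions, which is the aspect the S-C theory describes.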
As for the philosophical zombie problem, I'd like to retract what I said about it being nonsense. Actually, it's very useful in showing why worrying about the subjective sensation of consciousness is completely useless in AI, very much like asking how many angels can dance on the tip of a needle. On a very related note, I'd add: people severely underestimate the significance of the Turing test.
> Actually, my point about pain was that it does exist, precisely because it can be examined and explained within a scientific framework.
The physical processes of pain (i.e. the electricity) can be observed scientifically, but the "sensation" of pain (to use your word from before) cannot. But it is the "sensation" of pain that gives it its moral significance; otherwise inflicting pain would be no different morally than flipping on the switch to an electrical circuit.
> The same is true for consciousness. What's difficult about it is that it's not a thing: there's no hormone for consciousness, there's no brain centre where it's localized. Rather, it's a process and a product, both phylogenetic and ontogenetic
I can only conclude that you mean something different than I do when you say "consciousness." To me the sensation of pain is a subset of consciousness. It's the difference between electricity "falling in the middle of the forest" so to speak and electricity that causes some sentient being to feel discomfort.
> Actually, it's very useful in showing why worrying about the subjective sensation of consciousness is completely useless in AI
Sure, it's useless to AI. To AI the zombie problem doesn't matter, because the goal is to produce intelligence, not sentience. But it's useful in a conversation about what sentience and consciousness mean.
If we created an intelligence that could pass the Turing Test against anybody, it would be basically impossible to know whether it experiences sentience in the way that each of us individually knows we do. But that is the essence of the zombie problem. Where does sentience come from? We have no idea.
Actually I take it back; the zombie problem will be extremely useful to AI the moment a computer can pass the Turing Test, because that's when it will matter whether we can "kill" it or not.
> The physical processes of pain (i.e. the electricity) can be observed scientifically, but the "sensation" of pain (to use your word from before) cannot.
You state this as though it's a given, but it's not. You're assuming Dualism. So, of course you end up with Dualism.
> But it is the "sensation" of pain that gives it its moral significance; otherwise inflicting pain would be no different morally than flipping on the switch to an electrical circuit.
This is a silly over-simplification. Complexity matters. The patterns of electro-chemical reactions that occur when I inflict pain on another human cause that human to emote in a way that I can relate to because of the electro-chemical reactions that have been happening in me and those around me since before my birth. So what?
It's in no way comparable to flipping a light switch, except in the largely irrelevant detail that electricity was part of each system.
The fact that an incredibly complex system consisting of individuals, language, and society should yield different results from three pieces of metal and some current shouldn't be the least bit surprising, and is not a reasonable argument for dualism, or p-zombies.
Here's my take on the p-zombie "problem". We can say all kinds of shit, but it doesn't have to make sense. For example I can say "This table is also an electron". That's a sentence. It evokes some kind of imagery, but it's utter nonsense. It doesn't point out some deep mystery about tables or electrons. It's just nonsense.
> You state this as though it's a given, but it's not. You're assuming Dualism.
No. Dualism is the idea that our minds are non-physical. I say minds are fully physical, and all thinking happens in the physical realm. But somehow the results of this thinking are perceived and sensed by a self-aware being as "self" in a way that other physical processes are not.
> The patterns of electro-chemical reactions that occur when I inflict pain on another human cause that human to emote in a way that I can relate to because of the electro-chemical reactions that have been happening in me and those around me since before my birth.
Exactly. You are extrapolating by analogy that other people experience pain in the same way you do, because you cannot experience their pain directly in the way that they do. But this reasoning by analogy is just an assumption. And it certainly offers no insight into why you are self-aware and a computer (a very different but still complex electrical system) is not (we assume).
Whereas science (according to your description) leads you to believe that your consciousness is nothing at all, Bible-based Christianity tells you that your consciousness is part of your soul, which is the part of you that lives forever. It is independent of your body, which will eventually be replaced with a perfect body.
For the longest time it was not even known that the mind inhabited the brain; we've only known about the biology of the brain from cell theory onwards. Yet we have an entire vocabulary relating to mental states that recedes so far into the distant past that we don't know how far back language predates using language to speak about mental states. When you really think about it, neurobiology is very recent and still nascent; we've been thinking about thinking for a long, long time.
Thinking about thinking, but not finding empirical evidence to validate that thinking. There's no science regarding the mind, nor is there science distinguishing between conscious and unconscious thought.
"Consciousness is data. [...] The data of consciousness--the way things seem to me right now--are data too. I am having a certain sensation of red with a certain shape right now. I am hearing a certain quality in the tone of my voice and so on. This is undeniable as the objective data in the world of science. And science ought to be dealing with that."
David Chalmers, excerpted from Conversations on Consciousness by Susan Blackmore.
The problem is that we haven't yet adopted the definition of consciousness that's useful long term: that consciousness is best interpreted as a property of reality.
If everything is conscious, then some parts of it are just more dynamic (intelligent?) than others: physical reality least, plants more [1], animals even more, and humans most.
Defined like that, human consciousness just becomes that part of all consciousness which we recognize as similar to our own.
In that view, AI is just making a small part of reality, a computer, more dynamically conscious and, very importantly, more similar to our own, so as to be more useful.
If we're to adopt that theory, we need a basically physical or metaphysical theory of what consciousness actually is rather than a theory which explains consciousness as a process atop biology.
The unconscious may simply be related to responses to stimuli that have been repeated and internalized enough in neural structures that we no longer have to supervise them. This mechanism could be put on a par with abstraction, in my opinion, for the role it plays in enabling us to control our environment with limited attention resources.
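If a software analogy helps (and it is only an analogy of my own, not part of any theory): the mechanism resembles memoization, where a response that had to be computed deliberately the first time is afterwards retrieved without any attention being paid. A minimal sketch in Python, with invented names:

```python
from functools import lru_cache

# Loose analogy only: a response that required slow, "supervised"
# processing on first exposure is afterwards served from memory
# without engaging the deliberate path at all.
@lru_cache(maxsize=None)
def respond(stimulus: str) -> str:
    print("deliberate, attended processing of:", stimulus)  # costly step
    return stimulus.upper()

respond("hot stove")  # first exposure: the slow path runs
respond("hot stove")  # internalized: answered instantly, no supervision
```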
"The zombie hunch is the idea that there could be a being that behaved exactly the way you or I behave, in every regard--it could cry at sad movies, be thrilled by joyous sunsets, enjoy ice cream and the whole thing, and yet not be conscious at all. It would just be a zombie."