I was reading "Hard problem of consciousness" Wikipedia article this morning: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
I'm not sure this adds anything to that.
The author seems to want to say that consciousness is primary and that the physical universe, matter, is consciousness somehow, so poof no more mystery.
But this doesn't touch the "hard problem" so who cares?
There is what I like to call the flux: form and movement. I lump together "external" and "internal" experience, including all proprioceptive experience, etc.
Then there is the subjective awareness. It is primary. As the author points out, every other fact we know is contingent upon the fact of subjective awareness. Everything is content to this whatever-it-is observer. This awareness itself seems not to have any qualities or properties whatsoever, making it extremely difficult to talk about (and rendering it forever beyond any scientific treatment!)
Somehow, this awareness is our "self". (It may or may not also be tied into the quantum "Measurement Problem" but that is a whole 'nother story.)
You have a body but you're not your body; you have emotions but you aren't your emotions; you have thoughts but you aren't your thoughts; you are that awareness.
Now, if that awareness created or creates the physical world (as the Bhagavad Gita seems to state) that would be pretty amazing and I'd love to read about it. This article doesn't really expand on that.
Not only that, but you're you because you're not someone else's body. That sounds tautological but I think it's important. I think a big part of what gives someone consciousness is identity, and what gives us identity is the idea of us being an island isolated within our body and therefore distinct from other consciousnesses.
Which sounds pointless, but I think it's actually significant if you start to consider the eventual effect of identity on "shared consciousness" via technologically-enabled telepathy, which we may actually see one day in the far future.
I read somewhere once about a story written in a book. The beginning and the end is there, everything superimposed together. For the characters in the book, there is no time.
Then the reader's mind/consciousness comes along and goes through the book, page-by-page. By this very act he creates memory and time. Maybe our minds are doing the same thing on an infinite, timeless substrate that Buddhism, Sufism, Zen etc call awareness.
Now, make that leap, where `identical == expressing an identity`, where it is all one universal awareness, and that is reality.
You now have a pass to:
"Brahman (/brəhmən/; ब्रह्मन्) connotes the highest Universal Principle, the Ultimate Reality in the universe."
"He is Allah, the one and only. God, The Eternal, the Absolute. He begot not, nor was He begotten, and there is none comparable to Him."
Reality created nothing, nor was reality created.
"The unnamable is the eternally real...
Free from desire, you realize the mystery.
Caught in desire, you see only the manifestations....
mystery and manifestations arise from the same source....." 
Mystery = presence & essence, manifestation = matter, stuff, things, objects.
Christianity (Holy Spirit)
If you must, think of Adam and Eve as the first two self-aware beings, and "fruit of the tree of knowledge" as using their special ability for the benefit of their own ego, at the expense of being one with the universe, like all other plants and "lesser" animals. A.k.a "The original sin". In Taoist terms, Adam and Eve stopped seeing the mystery, and full of desire, saw the manifestations, and that's when they were 'cast out of the garden'. One moment perfectly content, next moment cannot ever stop wanting 'more'.
Religion is about shattering the idea that one is separate from the rest of reality.
Wait. My main issue with religion is that it forces the belief that one is separate from source/God/universe and creates a lifelong relationship of separation, thus perverting humanity's relationship with ourselves and with our own "sacredness" or "divinity", in order to position itself as a middleman and sell something that was never separate from us. It's really insidious from that standpoint.
unless we speak of Daoism or Advaita, both nondualistic practices, both nonreligious.
I agree with the rest of your post.
So, reducing consciousness to matter doesn't eliminate the mystery.
Where else would it come from? If we break a conventionally physical object into smaller and smaller bits we find that: 1) There's mostly nothing there - a whole lotta nothing inside of most "stuff", and 2) What is there seems to simply be a (very fast) movement, and doesn't follow the behavior of conventional objects. Whatever it is probably can't even properly be described as noun-stuff doing verb-stuff.
So, there appears to be something out there, but if in fact there is, it clearly isn't made out of conventional objects. That "objectivization", and the rendering of the objects of awareness into language, is created in awareness. Who is the master who makes the grass green, and all that...
I am not surprised that the top comment on HN, in 2016, is a pro-solipsist one, given the site's recent proclivity towards all things spiritual over the rational.
Where do you get solipsism from what I said?
"In The Emotion Machine, Marvin Minsky discusses suitcase words—words that contain a variety of meanings packed into them, such as conscience, emotions, consciousness, experience, thinking, morality, right, and wrong.
"The word ‘consciousness’ is used to describe a wide range of activities, such as “how we reason and make decisions, how we represent our intentions, and how we know what we’ve recently done [p128].” If we want to better understand the various meanings of consciousness we need to analyse each one separately, rather than treating it as a single concept.
It is worse: it carries with it connotations of discarded science, such as levels of consciousness (aka. "animals have a lower level, Buddhist monks a higher one"), and the mind-body separation — it is fairly clear by now that whatever could be called a mind is a part of our body.
There was a time when the majority belief was that emotions and diseases came from four types of humors running through our veins. Slowly, the consensus switched towards miasma theory and then germ theory.
Words like humor, miasma, aether, have now fallen into disuse.
We are at that awkward period before another old word bites the dust.
I find that usually, people that are puzzled by consciousness only want to understand the relativity of points of view: "why am I me and not someone else? Why don't I wake up in a different body?" To which the answer is obviously that memories are located in your brain alone.
The point is that even after words (and whole concepts) fall out of favor scientifically, they can still have a metaphorical meaning. Sometimes it's the best we've got even though we know it's wrong. We might like to describe someone's personality in terms of serotonin levels and amygdala response, but we really don't know enough - so we call them hot-blooded, even though that theory died many a long age ago.
This implies the naive premise that consciousness-the-thing is not a suitcase itself (and thus, that an umbrella word is unsuitable to name it).
So, it basically presupposes the reductionism that it's supposed to prove.
Not really. People could be legitimately doing that, because the concept is inherently multi-faceted and each approaches it from a specific angle.
No concept inherently requires that definitions of it contradict one another. And highly contradictory definitions are a critical factor in what makes a suitcase word.
Every word carries emotional meaning alongside it. It's packed in via various channels: culture, history, events, social group. The bad part is that words never intended to carry bad emotional meaning can still cause significant problems.
Obviously they don't, and it's not up to the one saying the words but to the listener too.
What's problematic about the "say no to consciousness" case? It seems to be based on their preference for other descriptions of human brain activity, especially when the alternative description (consciousness) has been so ambiguous, vague and unhelpful.
> You cannot describe first-hand facts using only third-hand facts. This is the core philosophical dilemma in the hard problem of consciousness.
(or more accurately, I think third-hand factual systems could probably posit first-hand facts-- but be unable to assert validity or something. I am not a philosopher.)
Unlike turbulence, I only have one case where I have ever seen this consciousness which I am trying to explain as an epiphenomenon, namely my own. All other consciousnesses (if they even exist, which I can't prove directly) are not remotely observed in the same way; they are self-reported by other people.
So you have the question of why I can only directly perceive that emergent property in exactly one brain and not others.
Likewise you'd have to explain why the emergent property is apparently continuous through my whole lifetime over which my brain changes enormously.
Finally you'd have to deal with the hypothetical case where I am duplicated molecule for molecule and then ask which consciousness would I have direct access to, the duplicate or the original.
So while dualism is obviously patently absurd, the idea of consciousness as epiphenomenon, while promising, needs work too.
Removing the continuity assumption, perhaps even time, kind of resolves the duplicate question: both copies (if conscious) should have a memory of conscious you and experience their consciousness as a continuation of it, while you, the now-you, are as removed from those future copies as you are from me.
One question that seems tied to consciousness is: what is "now"? Are there objective reasons to believe in the existence of a canonical "now" existing independently of conscious experience? If not, then there really is no point in distinguishing either momentary or continuous experiences as special in any way. Perhaps consciousness just emerges as an interpretation of a single state connecting past and future by memory and anticipation; it simply appears continuous, like the series of images at the top of the article appears to be.
What I perceive as a continuous consciousness is really just a memory of other "I"s who were caused by this brain in the past, left a memory of introspection and then dissipated, to be replaced by a new one.
This view has many advantages: it solves the problem of time you allude to, it answers the problem of other minds and the problem of my childhood self. The only problem I cannot see it solving is the hypothetical molecular-level duplicate I mentioned above.
>So you have the question of why I can only directly perceive that emergent property in exactly one brain and not others
You are the thing carrying around that brain and sense organs, and the plumbing thereof. This is a unique situation, hence the uniqueness.
> Likewise you'd have to explain why the emergent property is apparently continuous through my whole lifetime over which my brain changes enormously.
Emergent phenomena can be very robust, such as Jupiter's spot, which was observed 340 years ago, or the common orbital plane of the solar system, which is billions of years old.
Also, your consciousness is not continuous. You turn (at least some of) it off when you go to sleep. Very curious! It's at least robustly bistable!
> Finally you'd have to deal with the hypothetical case where I am duplicated molecule for molecule and then ask which consciousness would I have direct access to, the duplicate or the original.
This seems very simple too. There are two copies of you at the instant of duplication, which diverge from that moment on. You1 would have access to the consciousness of You1's nervous system and You2 that created by You2's nervous system. This question is only difficult if you insist on something other than the bodies being involved.
I put you in a duplication chamber. In front of you is a card with the letter "A"
In a neighboring identical empty chamber is card with a "B" on it.
You close your eyes and I pull a big steam-punk lever and an exact copy of you is made in chamber B
When you open your eyes, what letter do you see and why?
But the original observer that was put into the duplication chamber, would maintain their letter A
To say that the different history of the atoms in the original brain vs. the atoms in the duplicate is somehow relevant to the emergent property would be a very odd claim: we don't take into account the prior history of the atoms in a turbulent flow, for example, just what they are doing right now.
Or for clarity, if I gradually replaced every molecule in your brain with another identical one, the emergent "you" would not be affected. So, if it weren't already obvious, it clearly can not be dependent on the particular atoms that make you up.
And by the way, if your brain at 10 years old and 90 years old, in spite of being very different structurally, still give rise to your same "you", the copy would not even have to be very good.
It is a fun problem I must say.
In terms of "me", there would be two individual instances of me that open their eyes and see either A or B. The "original" is determined by you, the question-asker, and the fact that that "original" walked into chamber A. Because that "original" walked into chamber A, that "original" would see the letter A, whereas the clone would see the letter B. It's possible that the "original" was, at the time of cloning, completely recreated/rearranged. However, that "original" would retain the A designation/"originality" because, to an observer, someone walked into a chamber with no other exits and exited out the same chamber; that someone is the same person.
You are as much what others think of you as what you think of yourself, in this case.
(Btw, the videogame _Soma_ goes over your exact scenario)
Put another way, using our current knowledge of physics and chemistry, we can, on a high level anyway, account for every item in the cause and effect chain from the photons hitting the retina to the arm that a person raises to shade his eyes. There is no room to insert anything into that causal chain let alone something so massively influential that it causes all the body actions that we consider driven by conscious choice. Thus whatever is going on must be made from very ordinary matter.
It's this very fact that makes it such a fun puzzle to look at: very ordinary matter maybe doing something (causing consciousness) that is wildly unexpected.
It denies the most fundamental fact of human experience, the existence of that experience itself.
Basically, if none of the aspects of consciousness that make it important or special in our eyes turn out to be true, I think that denying its existence is actually a less misleading way to correct the record than trying to completely change our intuitive understanding of whatever it is we actually have.
Your eye is really more like a flashlight in the dark. And it interprets things way too much (see optical illusions).
It's easy to dismiss these problems, because you are rarely in situations where the difference matters. This is no coincidence, since people rarely construct things that our eyes can't process. For example, fluorescent lights are actually turning ON and OFF constantly, but we just happen to not be able to see that. There really is no color purple; we just think it exists.
But it's quite easy to mess people up. See "rotating tunnel illusion", or the experiment where subjects fail to notice they are now giving directions to a different person, or the experiment where people fail to notice a guy in a gorilla suit (!) when given a simple task.
So the upshot is "vision is nothing like you think it is". Maybe we should have different words for the way a computers see things (a lot more reality based) vs the way humans see things (with all their foibles.) Ditto for Consciousness.
How does "saying no to consciousness" deny "the existence of the human experience"? If you were talking about brain denial, then I would have to agree. It seems fully possible for the brain to continue to do interesting things even if "consciousness" turns out to be false. Thankfully it's nonfalsifiable apparently....
Some people try to define "consciousness" as something other than "subjective experience" but doing so is really just evading the "hard question".
It would be analogous to telling a physicist trying to provide an explanation for why objects fall to the ground that it might simply turn out to be false that objects fall to the ground at all. That doesn't make sense, because the exploration started with an observable phenomenon.
It is false. Gravity is not about "ground" nor does it require that everything fall to the ground, indeed there seem to be forces on objects stronger than the force of gravity. The ability to falsify concepts is critically important in many branches of epistemology.
The same applies to eliminativists. They aren't denying that something happens that feels to you like experience; they are suggesting, however, that it may not be the thing that you think it is. They'll often point to people's own misunderstandings of their consciousness as examples that we can, in fact, actually be wrong about our own first-person experiences. One example of this is the fact that we think we experience full-field colour vision. In practice we have colour vision only in a narrow field in the center of our full visual field. You can perform experiments on yourself to demonstrate that this is in fact true. Another example is the sense of continuity to consciousness, which again can be demonstrated to actually be false. If our "consciousness" is sufficiently different in reality from what we normally presume to be consciousness then, in some real sense, consciousness as we commonly think of it indeed doesn't exist. That doesn't mean there is nothing, just that calling it consciousness, with all the associations that implies, is perhaps sufficiently misleading as to be wrong.
The question is why the experience is triggered. Why does moving atoms at specific positions trigger the perception of color or pain? Current physics doesn't have an answer for that, even in theory, assuming we can see exactly what happens in the brain.
For example, in order to understand why atoms moving in certain ways produce a taste, do you need the explanation to trigger some experience or memory of a taste? If so, you will never understand.
Or do you expect the answer to be short? What if qualia tend to be as complex as the brains that experience them (for holistic reasons), so the proof for our own qualia spans trillions of pages? I think that's actually quite probable, but you'll never understand in that case either.
The space of naturalistic explanations of things is humongous and we're already mass producing objects so complex no single individual could hope to understand every aspect of them. If you're going to start with "moving atoms at specific positions" in order to understand a defining aspect of a system made out of hundreds of billions of complex switches connected in obscure ways, let me tell you that you're going to need several Libraries of Congress before you're done.
On the other hand, I don't understand all the details of what goes into a MacBook Pro, from network protocols to graphics rendering, but I have a basic high-level understanding of how the things it does are possible, and it doesn't take a Library of Congress to explain that. My high-level knowledge of how atoms move provides sufficient assurance and understanding of how a MacBook Pro experience is created. That is simply not true for qualia.
I don't think it does per se. In a world where computers didn't exist you would probably not be able to say whether a MacBook Pro is a thing that could possibly exist or not just from fundamental knowledge of physics. If I gave you the most low-level possible description of physics, it'd probably take you some time to figure out whether it supports solid macroscopic objects. If I just give you the rules to the Game of Life cellular automaton, it's far from obvious that it supports replicators, let alone that it's Turing complete.
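To make the Game of Life point concrete, here is a minimal sketch of the automaton's update rule (standard Conway rules, illustrative only). The striking part is how little there is to it: from these few lines alone, nothing about gliders, replicators, or Turing completeness is remotely obvious.

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life one generation.

    `live` is a set of (x, y) coordinates of live cells.
    A cell is alive next generation iff it has exactly 3 live
    neighbours, or it has 2 and is already alive.
    """
    # Count, for every cell adjacent to a live cell, how many
    # live neighbours it has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(blinker) == {(1, -1), (1, 0), (1, 1)}
assert step(step(blinker)) == blinker
```

The rule fits in a dozen lines, yet deciding what large configurations will do is in general undecidable, which is the gap the comment above is pointing at.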
Most of these things are only obvious in hindsight, and the understanding is made easier by the lack of cognitive dissonance: we know we made them using certain principles, so of course it works.
> That is simply not true for qualia.
I don't know about that. My own high level knowledge of physics does provide me with sufficient assurance and understanding of how qualia happens. Perhaps I'm wrong, but I do feel like I understand it: what defines or distinguishes, say, my qualia of red, is how it primes my brain into thinking about red things, how it changes my mood, how it behaves relative to every other concept, which makes it about as complex as my entire brain would be. I also imagine that there is a continuum of "qualia" such that basically any physical system has some, but they are so trivial we don't make the connection.
Again, that could be nonsense, but it makes perfect sense to me -- about as much as a MacBook does. The hard problem that you see flat out does not exist to me, and presumably the reason is that we are satisfied by different explanations.
Over here it seems like you're defining consciousness by its relation to other objects. In other words, you're reducing consciousness to its visible outputs, such as behaviour, mood, etc. If that's all consciousness is, I fully agree with you that we have sufficient knowledge of physics to explain it. But that's not what I'm referring to. When I talk about consciousness, I'm referring to something that is completely independent of its outputs. Is it not conceivable that a man who is missing both his arms and legs, mute, blind, and deaf, can still have a conscious ego? So what is that experience? This is the question which has no parallel in our knowledge of how atoms move.
The brain has many internal inputs and outputs, even more of them than external ones. Even if you imagine a completely isolated brain that can't see, hear or move, it's still exchanging signals internally. For instance, such a brain could fabricate a dream world, create (possibly nonsensical) images to feed to its visual cortex directly, and so on. Its conscious experience would be whatever it builds for itself, operating through these internal input and output channels, shaped by random noise and whatever memories it may have of real things. It's what dreaming is, basically. That is consistent with my view.
If, on the other hand, you ask me to imagine a mute, blind, deaf, paralyzed man who also has his brain cut up in such a way that it can't talk to itself, then no, I do not think that it is conceivable that the man has a conscious ego.
A brain is made out of billions of neurons, each of which has its own inputs and outputs, and so does each part of each neuron, down to the atomic level. Every physical entity is a kind of IO machine, so in a sense, any physical definition I may give of qualia will reduce to IO patterns. That is, however, missing the bigger picture, which is the structure of the system: how the information flows within it, and how certain conditions modify that flow. That's where the meat is.
In my mind, consciousness and qualia are structural properties. When experiencing a qualia, the information flows in a particular way in your brain, like a mode of operation, and that's what the qualia is.
I think of it this way: you experience "anger" when your brain is put in an "anger mode", which taints everything it sees or does -- anger is not an "input" or an "output", it is a "mode" that your whole brain is in. It makes you "different", although not in a way that makes you lose your sense of self (well, usually). And in a lesser way, when you see red, your brain enters a "red mode", when you eat nougat it is put in a "nougat mode", and when you think about red or nougat your brain also enters a "recall" version of red or nougat mode that has similar properties.
So what happens is that your brain can enter billions of subtly different "modes" that correspond to different stimuli (real or not). Each of these "modes" makes you think a bit differently, in a way that's adapted to the stimuli you received. These are the "experiences" it can have: they are not inputs or outputs, they are ways of thinking, triggered by stimuli or memories or sometimes random noise.
I guess I would say a qualia is a bit like a protein fold, it's a functional reconfiguration.
The two big discrepancies I see are these: 1. We have the subjective perception of free will: I can choose to write these sentences or not. Where does that intention originate? If our consciousness is just neuronal computation, then it is completely deterministic or probabilistic. That entirely contradicts most people's experience, because we can choose what we want to do, and do things that strongly contradict our "mood". Some people can even go further and consciously modify their brain states. If you claim that is also just a product of complex structural properties, you get into an infinitely regressing claim.
2. Despite being able to create incredibly complex connected structures, arguably even more complicated than a single person's brain, such as the Internet, we have no evidence of such an entity having a subjective feeling of consciousness. There is simply no parallel in our current knowledge for how subjective experience can arise from system configurations. You seem to be simply assuming that an experience is such a configuration, but we have no evidence of that. If such a configuration could be consciousness itself, then we have examples of other entities which are connected in similarly complex ways, yet we don't believe they are concious. This argument is explained here: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconsci...
Now, this "free will" only corresponds to epistemic non-determinism about ourselves: "for all we know", we can do X, and we can also do Y, because if we knew we couldn't, we wouldn't be evaluating these possibilities. So in a sense, it is our self-model that has free will: it is the model that we imagine in various circumstances, but the model isn't the real thing. So our mistake is to assume that this property of the model is also a property of the real thing, i.e. that our epistemic uncertainty translates into some kind of metaphysical non-determinism.
I would expect that any decision system that can construct hypotheticals about itself would have an impression of free will, or a qualia of free will. They may not necessarily be able to express it, or to feel it as vividly as we do, but they would have it.
2. I know the point you are trying to make, but where you see a reductio ad absurdum I just see a lack of imagination. Our usual understanding of consciousness is very, very, very deeply anthropomorphic. Neither the Internet nor the USA are systems that have a human or animal consciousness, that much is clear. "But what about consciousness in general?" Man, I'm sorry to be blunt here, but you wouldn't know it even if it hit you in the face. We're human. We have human consciousness. Our idea of consciousness outside of the exact thing that we have is: fuck. all.
We can start with Leibniz's thought experiment they mention in the article, where a gigantic brain is built out of gears and pipes. Or the similar thought experiment about the "China brain" which emulates a human brain by using one person to emulate each neuron. To me, it is clear that both of these things would, in fact, be conscious. I see no good reason to think otherwise and I am perfectly content biting that bullet. I have zero reservations, even intuitively.
Now, regarding something like an "USA consciousness". Personally, I do not think this is bizarre. It's only bizarre under an anthropomorphic view of consciousness, or to put it in an other way, it's only bizarre if you believe that conscious entities ought to be relatable (an absurd expectation). But if you use the criteria I've given, you can find several things that would correspond to "USA's qualia". For example, 9/11 caused large structural changes that you could interpret as the USA feeling a qualia of fear and/or anger. A new episode of Game of Thrones may also be a qualia of sorts. Makes perfect sense to me.
Perhaps that's not evidence to you, but I don't know what more you'd expect. I think you conflate consciousness with human consciousness, so you only perceive consciousness when it's similar enough to your own. If I may indulge in a silly (but, I think, accurate) comparison, it's a bit like a shoemaker conflating craftsmanship with shoemaking, and then saying there is no evidence that a boatsman is a craftsman, because they've never seen them make shoes.
I'll settle for that.
It's analogous to: What is Artificial Intelligence? AI is what computers can't do yet.
Edited to add: to put it another way, consciousness is that part of brain activity the textbooks don't have a chapter on.
We can only point to what consciousness is not.
Is there any proof of this? The whole article seems to hinge on this assumption and then goes on from there. We can't see or measure consciousness (brain activity, yes; consciousness, no). We only experience it. The article makes far too many naive assumptions without any proof.
More points in favor of physicalism: scientists' ability to alter rat neurons to be photosensitive and attach them to fiber optic cables, the ability to implant memories in rats by altering a neuron, the mapping of place neurons in rats -- or perhaps the fact that brains are produced entirely from matter consumed by the mother during gestation and by the creature afterward, so where does the dualistic mind get a chance to insert itself?
I'll note that I used to be a staunch dualist for religious reasons, but became convinced of physicalism over a period of years of semi-active reading.
One could argue that consciousness is a field phenomenon and that our (gross) material brain is a device that interacts with this field. Modify the device and you modify the local effect.
I am obviously just speculating because I don't see an obvious connection between matter and consciousness even if they are linked in some way. The article doesn't give any clarity about this problem stated by Leibniz et al.
Dualism has worse problems, though. If there is no connection between consciousness and matter, how do psychoactive substances alter consciousness? How do conscious decisions cause material movements?
If you're going to take both subjective awareness and physics seriously there's a huge and crucial mystery here!
Q.E.D. (Or running in circles, as you like.)
What we have are various phenomena that we attempt to classify. Some would like there to be one meta-category we can lump everything into. That's typically called material or physical. But it could also be labelled ideal or mental, from a different categorization scheme.
The problem we've had so far is that some phenomena are not easily placed into the one all-encompassing meta category. It would be easier for us conceptually if we could make it all fit, but we haven't quite succeeded so far.
I suggest the problem is with the concept "true".
The assumption that all phenomena have a physical basis is exceedingly practical and produces fantastic predictive success. "True-ness" as it is intuitively defined is not interesting in this case; indeed, it may be nonsense.
Stated another way, if we're going to describe "true" as some aspect of reality that we can never directly perceive but only get at through fallible senses and perceptions then we are not only flirting with something that is likely a-priori unprovable but also likely very uninteresting.
Whatever label you put to the category, they are all in the same one.
Otherwise, our category should just be called "everything".
This of course doesn't mean there could never be another explanation. But the time to take it seriously is when serious evidence is presented.
Just because we can imagine a certain reality in which events, actions and things might occur in a different way, doesn't mean it is this reality.
The argument is disingenuous, not you of course.
So far we haven't come up with a decent explanation of consciousness, which is why I am prepared to be a bit more open-minded. The fact that you can't observe it, only experience it, adds to that. It may well be physical, but it may not.
> So far we haven't come up with a decent explanation of consciousness, which is why I am prepared to be a bit more open-minded.
Then we should say we don't know. It's disingenuous to say that because we don't have an answer, or as in this case, a full and complete answer, then it means that it might be this entirely different thing for which nobody has any evidence, no one has any explanation and no one can even describe or define.
You can take the wrong turn Penrose took and posit that it's due to quantum effects, as a sort of middle ground.
My way of looking at it is that we use "consciousness" as a default bucket for things we don't have instrumentation to measure, or don't otherwise have a functioning predictive model for.
The reason it's bad, using a Dennett frame of mind, is that it's a sort of left-handed "skyhook": throwing "quantum" in to resolve the longstanding divide between Mechanists and those who believe in a Prime Mover or deity. It's a deus ex machina of sorts.
I can sympathize with Penrose; his entire career was about measuring the information transforms of things like black holes, so why not apply the same basic recipe to consciousness? It's still an interesting idea, but it's not, SFAIK, testable.
If consciousness is something fundamentally nonphysical, then causality is out the window, because my actions in the world are not actually caused by anything in the world.
Epiphenomenalism is [roughly] a conception of the mind as a non-physical entity that 'creates' consciousness but interacts with the physical world in a read-only manner.
100% knowledge of all physical causes does not rule it out, and it does not rule out 100% physical causality.
The idea of proof, in the empirical sense in which the term is used in dealing with any statements about phenomena, presumes this about all phenomena. It is itself unprovable (for any phenomenon), as it is the axiomatic underpinning on which all "proof" of anything about phenomena rests.
It's a good thing we invented computers in recorded history, and that they're not prehistoric or delivered from space aliens, or we'd be having the same sophistical philosophical discussions about computers as we do about consciousness. What, you claim a mere arrangement of impure silicon atoms implements the majesty and beauty of the lambda and the red-black tree? When you think of binding a value to a variable, how do you propose to do that with mere atoms alone? So you think you can build a computer completely out of NOR gates? Well, that's very nice, but seeing as computers came from space aliens and no human has ever made a computer, we'll wait on that theory until we see you run "hello world" on it. Surely an "intelligent designer" created computers; such complexity cannot operate via mere atoms, any more than mere atoms arranged in peculiar shapes could make hard steel.
Something like this already almost exists between the EE/microcontroller type people vs the CS/algo people. Note my plethora of weasel words, something, almost, etc.
Aside from the philosophical analogies being entertaining by themselves, it would make interesting hard sci fi. Imagine a world where god is blamed for giving us clay tablets containing most of automata theory. Wouldn't that be an interesting sci fi setting?
Related video about why we don't have AI yet: https://www.youtube.com/watch?v=c3of7xYoMQM
This is not true.
I wish I could respond with something more interesting, but that's all there is to it. You are just saying something that is incorrect. Our brains are information-theoretically and complexity-theoretically entirely reasonable.
I know this has been the traditional view of most information theorists. But many philosophers (and a few computer scientists) challenge that view. See Fodor's frame problem. The basic idea is this: yes, our brains are complexity-theoretically reasonable after the problem is defined given a set of inputs, outputs, and problem states. However, the real issue is how our brains formulate the problem in the first place -- how to sort through the combinatorially explosive amount of information in order to narrow down what is relevant to a problem. This objection has never been satisfactorily answered by information theorists, in my opinion. The answers given by info theorists always seem to presuppose the existence of a problem formulation before solving it.
Who decides that a particular representation in the brain has zero mutual information with current sensory input? This requires a comparison of some sort (i.e. computation).
What are you referring to? The sensory input to the brain takes, as a ridiculously high upper bound, perhaps terabits per second. In reality it's probably a few megabits. It's really not much. Vision is, I suspect, by far the highest bandwidth input to our brain, and computer vision has met or exceeded human vision in many tasks.
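To make the "few megabits" claim concrete, here is a hedged back-of-envelope calculation. The axon count and per-axon bit rate below are rough order-of-magnitude assumptions (roughly a million optic-nerve fibers per eye, around ten usable bits per second per fiber), not measured values:

```python
# Back-of-envelope estimate of visual input bandwidth.
# All numbers are order-of-magnitude assumptions, not measurements.
axons_per_optic_nerve = 1_000_000  # assumed: ~1M fibers per optic nerve
bits_per_axon_per_sec = 10         # assumed: generous per-fiber rate
eyes = 2

visual_bits_per_sec = axons_per_optic_nerve * bits_per_axon_per_sec * eyes
print(f"~{visual_bits_per_sec / 1e6:.0f} Mbit/s")  # ~20 Mbit/s
```

Even with these generous assumptions, the result lands in the tens of megabits per second: far below "terabits," consistent with the claim that vision, our highest-bandwidth sense, is "really not much."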
This only holds if you presume those operations are being performed in the same way as a computer, rather than the incredibly imprecise rule-of-thumb heuristics they actually use.
I'd really like to see a source for this, especially taking into account that neurons are natively probabilistic.
> When we consider whether to accept IIT’s equation of integrated information with consciousness, we don’t start with any agreed-upon, independent notion of consciousness against which the new notion can be compared. The main things we start with, in my view, are certain paradigm-cases that gesture toward what we mean
> [...] But how does Giulio know that the cerebellum isn’t conscious? [...] he just told us that a 2D square grid [a simple circuit that scores high phi] is conscious!
In terms of reproducible results, it would be most interesting to research the shape of different qualia and how they progress if the underlying network changes (both in function and connectivity). Right now, we can only rely on self-aware and -reporting systems. But if we know how actual experiences integrate if two systems merge it might be possible to get a "feeling" for others.
Cases such as these are therefore extremely interesting. If it is possible to gradually change the connectivity between two systems, then according to the theory there should be a sudden "split" into two experiences. High-bandwidth brain-computer-brain interfaces could provide such an opportunity in the future.
This is another reason why I tend to be pretty agnostic about intuitive arguments.
"Turchin has applied the concept of metasystem transition in the domain of computing, via the notion of metacompilation or supercompilation. A supercompiler is a compiler program that compiles its own code, thus increasing its own efficiency, producing a remarkable speedup in its execution."
That's sidestepping the issue. Of course we know what consciousness is in this sense.
What we don't know, and what we actually mean when we say we don't know what consciousness is, is what causes it and how it emerges.
Why is consciousness in need of being explained in that manner, is what I think the author would ask. It's similar to saying that physical matter or energy are hard problems, as we don't know what causes them or how they emerge - not in the sense of conversion from one form of energy or matter to another, but what generated matter or energy in the first place.
(With that said, I do think there are strong arguments for favoring the physical world, and that pose a challenge for the author's position.)
And why not? Most things (all?), including rocks, have a cause and constituent elements that need to be arranged in a certain way for them to have a particular function.
The strange thing would be to presuppose a consciousness without anything before it, which would be like what Christians, for example, consider the "soul" to be.
I have major issues explaining why I think that, but somehow it fits my model.
When thinking of consciousness as a feedback loop, stuff sort of makes sense. I have been unable to poke holes in the theory but also unable to add strength to it.
Related: meditation (at least some kinds) seems to operate by turning the subjective awareness back onto itself, becoming aware of becoming aware of... etc... A little like a camera hooked up to a monitor that it's pointed at, you get an infinite regression.
In the camera-monitor system there is a physical loop of information from the lens-to-screen-to-lens-to...
When you do it with your awareness, there must be something looping somewhere, regardless of the physicality or otherwise of consciousness, eh?
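The camera-monitor regress above can be sketched as a toy model: each pass through the loop wraps the previous "image" one level deeper. The function names and the cutoff of five round trips are purely illustrative assumptions, not a claim about how awareness works:

```python
# Toy model of the camera-pointed-at-its-own-monitor loop.
def capture(monitor_image):
    """The camera photographs the monitor, producing a frame *of* it."""
    return {"frame_of": monitor_image}

def depth(image):
    """Count how many levels of frames-within-frames have built up."""
    d = 0
    while isinstance(image, dict):
        image = image["frame_of"]
        d += 1
    return d

image = "the room"          # whatever was on screen before the loop closes
for _ in range(5):          # five round trips: lens -> screen -> lens -> ...
    image = capture(image)

print(depth(image))         # prints 5; nesting grows without bound as the loop runs
```

The point of the sketch is just that the regress is generated by iteration, not stored anywhere all at once: each frame only contains the previous one.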
Then this book sounds like it's not worth reading. There is absolutely no need to invoke pseudo-quantum mumbo-jumbo to investigate consciousness. We don't understand the chemical operation of neurons well enough to say whether quantum effects are necessary for neural functioning (although they probably aren't). Most "quantum-looking" (i.e. obviously non-classical) processes happen at much smaller space and energy scales than the chemical processes we know are involved in cognition.
Quantum entanglement almost certainly isn't involved because it can't be used to transmit information and it requires high degrees of precision to make use of in the first place.
The brain's thermal noise floor is vastly larger than the scale of most "interesting" quantum effects.
Well, technically you will be conscious, but the meaning/description of "conscious" in your head won't fit what it actually is.
But beyond that I think I just want to echo what paulsutter said in his comment -- consciousness is a suitcase word with too many different meanings. To have a 'real' discussion you would need to unpack all of the different meanings at the beginning, but no one is going to do that.
Also, the word consciousness is directly tied into belief systems, so if you want to understand someone's belief system, you can ask them about it, but the nature of beliefs is that you are not going to have a constructive conversation.
If anyone is curious about my own beliefs, I think that we can generally answer the Leibniz machine question with 'emergence' so perhaps I am an emergentist https://en.wikipedia.org/wiki/Emergentism
I like the general explanation of how something like consciousness arises from https://en.wikipedia.org/wiki/Metasystem_transition
Mystery and matter have never been mutually exclusive. Depending on its configuration, matter can lead to some mysterious things. "It's matter" is a bad answer to pretty much every question.
didn't Descartes try to explain this hundreds of years ago?
> it is the physical part that is the big mystery, rather than the consciousness part.
You say tomato, I say tomato. It seems like the author is trying to mince words, or to use people's connotations of these words to start an argument.
Basically, our conceptual understanding is lacking. But dualists or idealists might say the same thing, so ...
My point exactly. :) I'm not sure whose eyes the author is trying to open.
The German philosopher Gottfried Wilhelm Leibniz made the point vividly in 1714. Perception or consciousness, he wrote, is “inexplicable on mechanical principles, i.e. by shapes and movements. If we imagine a machine whose structure makes it think, sense, and be conscious, we can conceive of it being enlarged in such a way that we can go inside it like a mill” — think of the 1966 movie “Fantastic Voyage,” or imagine the ultimate brain scanner. Leibniz continued, “Suppose we do: visiting its insides, we will never find anything but parts pushing each other — never anything that could explain a conscious state.”
Many make the same mistake today -- the Very Large Mistake (as Winnie-the-Pooh might put it) of thinking that we know enough about the nature of physical stuff to know that conscious experience can't be physical. We don't. We don't know the intrinsic nature of physical stuff.
We find this idea extremely difficult because we’re so very deeply committed to the belief that we know more about the physical than we do, and (in particular) know enough to know that consciousness can’t be physical. We don’t see that the hard problem is not what consciousness is, it’s what matter is — what the physical is.
People making computer analogies below are engaging in the exact fallacy the piece is arguing against. A Turing machine is a mathematical model for symbolic manipulation, and yet it's assumed one could somehow "map" consciousness 1-to-1 onto such a machine. The fact that we can't simulate a single protein fold on a supercomputer gives me real pause in believing we've begun to grasp, or have the ability to digitally emulate, the kind of tricks life used to bootstrap consciousness out of matter.
I know the "no real evidence" / "too warm & wet & macro" arguments against quantum consciousness, but that to me looks like by far the most fruitful path to investigate to break out of the dualist/"eliminativist" dilemma we find ourselves in. Recent research on quantum biology already makes clear that life is far more quantum than I think almost anyone would have believed possible a decade or two ago. Considering just photosynthesis, arguably no life as we know it on earth would be possible without clever non-classical hacks by nature.
I guess it's solved. I'm glad the NY Times is where I go for my scientific analyses.
"The nature of physical stuff, by contrast, is deeply mysterious, and physics grows stranger by the hour."
The author seems to be traveling a road very close to solipsism.
If the only thing you can know is your own consciousness...
In short, you exist in a virtual reality manifested by your mental map, which is layer upon layer of memories, personality traits, beliefs, and thoughts.
That does not necessarily mean that "you" create _the_ world, only that your mind gives rise to the subjective reality you have access to, the only reality you have access to.
There may be an external, objective world out there, but we wouldn't be able to directly access it.
At which point you need to explain consciousness in other people, which you cannot perceive directly.
But to assume an objective external world exists when nobody has ever experienced such a thing, is a leap of quite different proportions, akin to believing in a bearded man in the sky.
If there ever was a man who tried to imagine something hugely complex, it was surely Tolkien with LOTR. He imagined languages, countries, and peoples in amazing detail. But that shows how hard it would be to try to imagine something as complex as the real world.
Our minds don't have such generative powers, thus the world doesn't come from our imagination.
Also, that which does the imagining, your subconscious, say, has to be more complex than what you might perceive it to be.
And by the way, it clearly worked: we colonized the planet. And now we have arrived at a juncture where we are left having to explain our existence, when the reality is that our only identifiable purpose is to reproduce. Everything else is incidental.
Our decisions are simply a function of the "stored state" in our body and outside influences in the moment up to the decision. Animals function the same way. Of course, there are so many influences, and the process of thinking(3) is dependent on so many variables, that we can actually make different decisions based on, for example, the same experiences up to the moment of the decision, depending on whether it's 9 AM or 9 PM, or whether we drank the coffee one or two hours before, etc. So it appears to be a "free" decision but is still just a function of the state(4) and the input.
2) Not every religion has a "good for everybody" god, so for these religions it's not even a problem. If the god is there just to favor your group when he is not distracted by something else (yes, there are religions where the god has a limited attention span), he can smite everybody else and he doesn't have to be absolutely "good." For those who believe in a "good God" the shock comes, of course, when something like this happens: https://en.wikipedia.org/wiki/1755_Lisbon_earthquake#Effect_...
3) which we surely believe we possess, in some aspects more complex than what we understand animals to do, simply because we don't know their language and they can't write
4) not just of our neurons but also of the exact amount of the traces of different chemicals
The pseudorandomness, which also exists, is also not "free will." "Free will" is just a philosophical construct rooted in religious thought.
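The state-plus-input picture above can be sketched as a pure function: same stored life history, different momentary inputs, different outcome. The scoring rule, field names, and thresholds below are invented purely for illustration:

```python
# Toy sketch: a "decision" as a deterministic function of stored state
# plus momentary inputs. All weights and rules are made up.
def decide(stored_state, inputs):
    score = stored_state["preference_for_coffee"]
    if inputs["hour"] >= 18:
        score -= 5   # evening: caffeine is less appealing
    if inputs["had_coffee_recently"]:
        score -= 3   # already caffeinated
    return "coffee" if score > 0 else "tea"

history = {"preference_for_coffee": 4}  # identical "stored state" both times

morning = decide(history, {"hour": 9,  "had_coffee_recently": False})
evening = decide(history, {"hour": 21, "had_coffee_recently": False})
print(morning, evening)  # prints: coffee tea
```

Nothing here is random, yet the same person "chooses" differently at 9 AM and 9 PM, which is exactly the sense in which the decision appears free while still being a function of state and input.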
My dictionary defines qualia as "a quality or property as perceived or experienced by a person".
Take an example like something tasting good. That has obvious evolutionary advantage for getting fed.
Separately, saying we "know nothing" about physics, just because we have to experience physics through our senses, is somewhere between disingenuous and false. We know how to predict the behaviour of physical systems very well. We know much less about how to predict conscious systems. So, if by "knowledge" you mean "ability to predict outcomes, possibly as a result of stimuli," we know physics much better than consciousness. Or, perhaps, we know simpler physical systems but not the complex systems that give rise to consciousness -- again, we know more about physics than consciousness, but not enough physics to explain consciousness as a physical system.
Finally, arguing that the mind is just its physical/chemical makeup requires a lot more proof than we currently have. That argument reduces to "there is no free will." It may be true, but we don't know enough about quantum processes in the brain's function to say that for sure. We certainly can't simulate or predict a conscious human brain yet.
Is this article agreeing with me, or disagreeing with me? I can't honestly tell.
I think the article finds "emergent property" mysterious. It intuits that there is a link between knowing what the "stuff" is that behaves as physics describes it, and "consciousness", by which we know there is something rather than nothing. I think that's a mistake: it's like thinking you couldn't understand an article on the web without directly observing all the electronics that created it.

I don't believe that fundamental particles (e.g., quarks) have any identifier attached to them to distinguish them from other fundamental particles, except those that describe their behavior: (probability distribution of) location, (probability distribution of) momentum, (probability distribution of) mass, etc. There is no "stuff" that is not behavior. Physics works to refine our model of the fundamental particles; everything after that is "emergent property".

A sorting algorithm in a high-level language doesn't depend on what CPU it runs on, given sufficient RAM and that there are no hardware faults. It may have been designed by a human, but it may also have been generated by simulated evolution (genetic algorithm). Fundamental particles/waves don't have hardware faults, but they do have probability distributions.

Atoms are an emergent property of their constituents and decay when their constituents reach a low-probability state incompatible with the constitution of the atom. Molecules are an emergent property of atoms and become different molecules when their constituents decay or their components react to some external force (usually the electromagnetic forces of another molecule). Cells are an emergent property of some kinds of molecules, and so on up the complexity scale. Consciousness is perhaps the top of the pyramid.
Really? Most people I know would throw self-awareness into the mix. When you make such a bad start, the rest of your article isn't worth reading.
We don't doubt that consciousness exists, what we do call into question is the assertion that it is of a non-corporeal nature. Therefore it has to be physical in nature.
panpsychism: 'the doctrine or belief that everything material, however small, has an element of individual consciousness.'
I have a hypothesis that a consciousness is a function of a universe. Each universe has just one consciousness.
Another person's consciousness is inaccessible because it's in a different universe. Only the matter part of that is projected in your universe, not the actual identity.
Without a hypothesis like this, we are left with postulating hidden variables: some hidden context pointer which resolves to you or me or such.
Because my matter and your matter aren't in close enough physical proximity or the appropriate configuration to have any communication with one another. Same way that you and I both live in houses, but my front door doesn't open into your living room.
I think that being locked to a particular consciousness and having no access to another one is on par with not having access to parallel universes. I'm going to click Update right now. In another universe, a parallel me closes the page without clicking Update. I have no access to that.
The writer (and a very large part of society) is so oblivious to the unconscious dogma "Everything can be explained in a consistent scientific system" that he doesn't even see that he's holding it - precisely like a Christian will tell you "Look, stuff is like this, it says so in the Bible".
All this mental masturbation about trying to locate consciousness in the brain/body is utterly ridiculous. Why do you think you haven't found it yet? Hello?
But according to scientific rationalism everything that exists is matter (for only matter can be PROVED to exist with a scientific experiment...), so consciousness must be there too, right?
Just like a man who's lost his keys outside, but the street is very dark and cold, so he decided to search for them in his house, which has lights to see better and is much more comfortable.
Autism is like lacking part of the conscious experience of interpersonal interaction.
Your cells are processing reality for you, and when you lack the energy you will lack some of that processing.
What is difficult to demonstrate is if that is all there is to it. For this we can turn to math. Do the intricacies of the rules of mathematics actually exist anywhere? Obviously not, unless you consider words and symbols written in books to qualify. Math exists everywhere, because you can use it to describe the physical, and it exists nowhere, because there is no literal physical representation of it anywhere.
But we can actually build physical representations of particular math equations by inputting them as programs into a computer. The representations obey the rules of mathematics as well as the rules of the physical world: if you destroy the computer, you also destroy the operation of the program.
Yet there is more to these programs than meets the eye physically. There are hidden rules that they operate by; more than just the physical affects them. These are the rules of math. We can analyze the programs using various mathematical techniques and prove things about them.
It is the same way with consciousness. We only see the physical effects because that's what we're looking for. We don't see the countless hidden rules that also affect consciousness. They are so numerous and manifold that they look wholly continuous with the physical world and physical rules.
What are these rules? In a word, they are ideas. Ideas have logic to them and can be compared with other ideas. We can say that one course of action is good or bad, when compared to other courses of action. Ideas and thoughts themselves are so comparable to computer programs that it's amazing to me that we programmers scoff at the idea of thinking machines.
You analyze thoughts with other thoughts, with the tools of logic. You analyze computer programs with the rules of math; many times those rules are implemented with other computer programs. The question "what is consciousness?" is purely the domain of philosophy. Biology and physics can only take us so far in our quest to understand ourselves.
To attempt to do so would be like trying to analyze a running computer system by smashing it apart and looking at the silicon under a microscope. It fundamentally mistakes what a computer program is. And knowing that it is just electrical signals traveling across transistor gates doesn't get you very far either. Even analyzing the voltage levels across the entire running system isn't going to tell you much either.
- Consciousness is out of the scope of physics. You have to leave the common "physics is everything" perspective to understand consciousness, however counterintuitive that is.
- Physics is just a tool to describe the patterns of our subjective experience / consciousness.
- The hard problem is, why does positioning atoms in a piece of matter (brain) trigger the perception of color or pain?
- Dualism and solipsism are approximately correct, depending on the specific definitions, which vary a lot.
The hard problem you pose (positioning atoms... triggering perception of color or pain), what else is that but a physics problem?
Many philosophers today, including materialists, are heirs to Cartesian metaphysics, the same metaphysics that brought us the mind-body problem and the problem of qualia. The difficulties the author has in mind are very much bound up with this Cartesian (and Galilean) legacy.
The idea that material beings can be conscious is often treated like it's some new and shocking claim. The reason for that is, again, the broadly Cartesian heritage at work. For Descartes, it takes an immaterial "res cogitans" to be conscious. The material body, as a desiccated "res extensa", is incapable of functioning as a substrate for anything we might call "consciousness". And because Descartes holds that only human beings possess minds understood as "res cogitans", only human beings can possess consciousness. Furthermore, "res extensa" lacks sensory qualities like color or sound. As a result, these qualities are located in the "res cogitans" as immaterial "qualia". Materialism does not escape the Cartesian paradigm. Instead, it denies the "res cogitans" and tries to locate what was ascribed to the "res cogitans" back in the "res extensa" while maintaining many of the metaphysical and methodological tenets of Cartesianism. This is where the problem of qualia occurs. The problem is insurmountable as long as the Cartesian suppositions are maintained. No amount of "keeping at it" with the current methods will ever resolve the issue. (See Nagel for more.)
For what it's worth, Aristotelian metaphysics encounters no such difficulties because it does not adhere to the dualism of Cartesian metaphysics (though sadly, few philosophers understand Aristotelian metaphysics; there seems to be a resurgence of interest, however). Indeed, for Aristotle, "consciousness" (though he does not use this term) is part and parcel of what it means to be an animal, human or otherwise. (See Jaworski and Feser for more.)
"It’s true that people can make all sorts of mistakes about what is going on when they have experience, but none of them threaten the fundamental sense in which we know exactly what experience is just in having it."
If there was any doubt about the Cartesian flavor of the author's reasoning, this statement should have dispelled it.
"Members of the first group remain unshaken in their belief that consciousness exists, and conclude that there must be some sort of nonphysical stuff: They tend to become 'dualists.' Members of the second group, passionately committed to the idea that everything is physical, make the most extraordinary move that has ever been made in the history of human thought. They deny the existence of consciousness: They become 'eliminativists.'"
Ah, but the eliminativists haven't escaped the legacy of Cartesian dualism either! They've eliminated not only the "res cogitans", but the facts of experience (e.g., "qualia") altogether! It's a notoriously incoherent position.
Strawson says he's a panpsychic physicalist, but that falls into the dualist camp once you get down to it. He offers no third way between Cartesian dualism and eliminativism.