I really recommend this podcast in which Sam Harris talks with Chalmers about all the options currently in the game: https://www.samharris.org/podcast/item/the-light-of-the-mind
One of the most promising theories (discussed in the podcast) assumes that consciousness is a fundamental attribute of reality, the other side of the coin (matter being the first). This leads to the conclusion that consciousness is everywhere, but not as 'condensed' as in the brain. It seems crazy at first, but the more you think about it, the more plausible it seems.
Here's a nice paper about it: http://rstb.royalsocietypublishing.org/content/370/1668/2014...
Within this paradigm it starts to make sense to call reality "The Mind", as some Buddhist schools do. There's also the crazy part: under some conditions it could potentially lead to experiencing other people's minds.
For the more adventurous, here's a talk by Culadasa, a very experienced meditator and neuroscience professor, in which he shares some thoughts about the problem and also some of these crazy experiences he has had: http://s3.amazonaws.com/dharmatreasure/150430-tcmc--culadasa...
The problem is, what do we do with this now? There has never been any physical evidence that consciousness is a real thing or can affect the world in any way other than via the actions of people who believe they are conscious (which, speaking from my perspective as a person who believes they are conscious, is no small thing); the only real consequences of deciding that we are part of a greater conscious system are going to be philosophical. And there are plenty of existing philosophical systems that believe that already; you mention Buddhism.
And while I do think that believing we're part of the world, rather than somehow separate from it, is definitely a good idea, I'm an engineer. This seems dissatisfying. Is there anything else we can do with this idea?
I think the question of consciousness is critical in at least one domain: the question of whether or not consciousness survives beyond the death of the physical body (and, relatedly, whether it precedes the birth of the body). If we bracket away religious doctrine and just focus on the facts of the matter, this debate hinges on how consciousness relates to the physical body it inhabits. Is it created by the body, or does it merely use the body as a conduit?
My boy William James gave a great lecture† in 1897 where he claimed that scientific understanding had not yet ruled out immortality, since it was unknown whether this relationship between mind and body was productive (body creates mind) or transmissive (body filters mind). I could be wrong, but I think this is still an open question, and it very much depends on understanding the true nature of consciousness. That's why the Hard Problem is so awesome.
† https://www.uky.edu/~eushe2/Pajares/jimmortal.html (published in 1898)
So no soul. But still, my natural offspring will start off from seed energy, half of which is from my body and the other half from my partner. It's possible that only natural-born twins will have exactly the same consciousness, and only if subjected to exactly the same experiences. This is because we may accumulate information in our DNA over our lifetime. So a clone will be starting off with that information, as opposed to the comparatively clean slate I started off with. Or maybe the accumulated info will only be written to sperm cells or ova. Information is either passed down this way, or it's just totally random mutation (or DNA shuffling) that produces the new feature adaptations that cause evolution.
We're getting more and more detail in the nuts-and-bolts models of how the brain constructs experiences and consciousness. IMO it's just a matter of time before there are no mysteries left, and every high-level description of human experience can be broken down into smaller details recursively until what's left is low-level descriptions of neurons.
Go ahead and explain qualia as experienced in first person.
For a more engineering-minded approach to the problem, there's a lot we can do. One way would be to study the correlates of consciousness in more and more detail to understand the "linking" mechanism better. Anil Seth describes this approach here: https://aeon.co/essays/the-hard-problem-of-consciousness-is-....
On the other hand, if consciousness indeed is a fundamental attribute of reality, we should extend the 'engineering' to also cover the 'mind' part of reality. This is exactly what meditation is: training the mind so that it can see more and more clearly all the components of conscious experience until it reaches the bottom. As a result you come up with a number of 'edge cases', interesting states that can be used to reason about this reality conceptually.
There is, e.g., the 'experience of space itself', the experience of consciousness itself (the so-called immaterial jhanas), or finally the experience caused by calming the unconscious mind to the extent that it stops projecting any content into the conscious mind, while the latter is still operating. This (supposedly) leads to the realisation by the mind itself that the 'experienced world' is nothing but its own creation. Of course we can all understand this conceptually, but the experience leads to a permanent change in perception (you can't "unsee it") which becomes as objective as the human mind can possibly be: meaning that it realises in every moment that each experience is its own creation.
It may all seem mystical, exotic or plain crazy, but there's no other way to understand the mind than through the mind itself, if the discussed hypothesis is true.
Dennett's approach is absurd. Consciousness is the only thing in the Universe that is self-evident in the strict sense. Everything else is at least second-hand stuff - see the brain in a vat hypothesis. To say that consciousness is an illusion is ridiculous; I can see how he gets there, but the conclusion is absurd. Somewhere along the way there's a mistake; no, I don't know where the mistake is, and I could not even begin to hypothesize. OP's article points out this predicament very well. It is a hard problem indeed.
> One of the most promising theories (discussed in the podcast) assumes that consciousness is a fundamental attribute of reality, the other side of the coin (matter being the first one).
I can hear the objections being raised already, but it's a neat promising step towards solving the hard problem. Suddenly the reducibility paradox vanishes.
It would also relieve another problem. If you can't trivially reduce consciousness to neural activity, then how is it coupled with perception? There would appear to be a gap between the sensory chain and the fact of awareness. And then for consciousness to work, it would already have to be everywhere (in a sense, possibly even a somewhat metaphorical one, or at least non-trivial). The assumption you mention solves this problem.
> here's a talk by Culadasa, very experienced meditator and a neuroscience professor
It's very, very hard to agree with Dennett, and it's getting harder the more you advance in the practice of meditation. As soon as you realize that you can peel off and disconnect layers upon layers of perception (external at first, then the many internal layers too), and also greatly reduce the waves of what is commonly called "thinking", while consciousness does not diminish but instead becomes at once more vivid and more stable, more calm and more intense, less connected to external factors but more broad - all that talk about "the illusion of consciousness" starts to look extremely suspicious. It's not perception, and it's not thinking; it can relate to these things, but it's fundamentally different. It can ultimately exist completely independent of inputs, either external (sensory) or internal (thoughts, mind activity in the trivial sense, even memory).
Don't just read about it. Go ahead and have the experience yourself. It changes a lot of perspectives. You don't even have to go all the way to the highest levels described in the literature - the intermediate stuff is revelatory enough already.
Buddhism seemed to go deeper than many other philosophical frameworks for this reason. However, I looked into many philosophical systems, and I was extremely surprised to discover that the thinker with the most consistent views on reality and consciousness (both self-consistent and consistent with my own experiences) was (drumroll) Ayn Rand.
Though she's known for her views on egoism and capitalism, she wrote a ton about consciousness, perception and concept-formation. If you look at my comment history you'll see I talk about her a lot on HN, and that's because I think computer scientists will gain immense value from studying her works. Introduction to Objectivist Epistemology and The Romantic Manifesto contain her deepest writings on consciousness and cognition and are the books I'd recommend. http://aynrandlexicon.com/lexicon/consciousness.html is a good taster.
I've met a couple of other Objectivists who have practiced meditation - not many but they exist. I was very surprised to discover the connection between the two areas. Hopefully in a thread like this on a site like this, curious minds will read this comment and be persuaded to investigate further. It's a fascinating journey!
Same w Ayn Rand. What ideas did you read, can you summarize?
"I am finding that everything we say about consciousness is just an artifact of our language. If we are careful to always include the subject of the sentence as well as the unspoken assumptions then I have not come across any questions that don't have a straightforward answer. Same with morality."
It sounds like you're influenced by Wittgenstein, or at least his ideas. If you assume all these issues are artefacts of language then there's no way I can give you an adequate summary in any amount of writing, and especially not one comment. Human minds can only communicate by exchanging strings of words, after all.
To give you a straight answer, though: in vipassana meditation you focus on one thing (typically the breath, though I often used ambient sounds) and attempt to break down your experience of that thing into constituent sensations. One thing you learn from this is how frequently your attention wanders, and by really training yourself to focus you learn how your attention is directed from moment to moment. The main thing you are trying to gain, though, is a first-hand experience of the Buddhist view of reality: that solid objects do not exist, and that existence consists only of flickering sensations, with no permanence or independent existence.
I really can't summarise Ayn Rand, since her writing is already extremely well summarised and essentialised. Click around that website if you want to learn her views on any particular topic. The relevant ideas here, though, are her views on concepts. There's been a long-running divide in philosophy between people who thought concepts reflected something intrinsic to the universe (e.g., that all horses contain an essence of horse-ness, or reflect Plato's ideal horse) -- and the people who thought this was nonsense and that language was therefore an arbitrary human construct (this latter view is currently dominant). Rand found a third position: concepts are human constructs, but they serve an objective purpose, and there are objective rules which determine their formation. ITOE is the only place where you will find her full argument.
To compare the two: Buddha takes sensory experience as the gold standard for reality, and concludes that reality exists, but is ultimately transient and lacks identity. Rand takes perceptual experience as the gold standard and concludes that reality exists, that the everyday objects we perceive really exist, and that therefore reality is logical and consistent.
(As for Wittgenstein, he says human language is just an arbitrary game which can never describe true reality, and ends up in a kind of quasi-mysticism (man helpless in the face of an ineffable, incomprehensible universe)).
I predict you will not be satisfied with this answer, because you are following a very misleading philosophical framework, but I wrote it up for the interest of lurkers.
To illustrate, if I described a parallel world that you would never encounter, never be able to deduce its existence from any observations in this one etc. then in what way are the sentences "it exists" and "it doesn't exist" meaningful?
I think meaning is only relative to some subject who can comprehend it.
I also think morality is relative to someone with a "sense of morality". Absolute morality is like absolute humor. Humor requires a human with a "sense of humor". A humorless person is like a psychopath. They don't share the same psychological responses as others. Some moral value can be widespread just like a joke can be funny to a lot of people.
So I guess I am a relativist...
I would say what we have are beliefs. And we use language to express them, and can only legitimately convince each other, i.e. change someone's beliefs, by exposing a double standard in their beliefs. All other methods, e.g. appeals to emotion, could achieve this, but they are not logically legitimate without the double-standard reason for changing beliefs. Knowledge is another word for bias.
From your perspective, do you dispute this?
Interesting. It's probable that most people who consider the question "what do words really mean?" hit on a very small set of possible answers. There's also the fact that his ideas have influenced many other intellectuals in many fields, and you may have absorbed his way of thinking by osmosis.
> can only legitimately convince each other, ie change someone's beliefs, through exposing a double standard in beliefs
The primary legitimate way to convince another person is to point to reality -- the simplest way to do so is to point to something observable. If I want to convince you that eggs break when dropped, I drop a bunch of eggs in front of you.
> Knowledge is another word for bias.
I can't really make sense of this statement, but it seems you're implying that all of our knowledge is untrustworthy. I definitely dispute this -- we have to have some certain knowledge or we don't have any knowledge at all. Put it this way -- do you know that "knowledge is another word for bias"? Are you certain? How?
"Also I say that the word 'exists' is meaningless unless it is relative to a conscious observer."
But does the conscious observer then exist? Do they have to exist relative to another conscious observer? Existence has to be absolute or you end up in such paradoxes. Otherwise, I agree that asserting the existence of parallel universes is arbitrary.
"I also think morality is relative to someone with a "sense of morality". Absolute morality is like absolute humor. Humor requires a human with a "sense of humor"."
Rand's view is that morality is objective, which she distinguishes from intrinsic. Intrinsic morality would be what you call absolute morality -- things are inherently good or evil, regardless of their relation to other things. In contrast, Rand says things are good or evil in relation to some agent. However, this relationship is an objective fact of reality. (E.g., "sunlight is good for lizards", or "friendships are good for human beings").
What you call reality is actually a collection of statements that you believe to be true. You just have a very high confidence that they are true. You can point to something but you also have to make an argument which assumes these statements. If I already believe them, then you can successfully build on that. If I don't, then you have to convince me of X, and the only legitimate way to do that is to show that I would have a double standard if I don't believe X when I already believe Y for much less convincing reasons.
The statement is defining the word "knowledge". And you can't get me to play the game of "are you certain?" Do you know that "knowledge is another word for bias?" What I said is what I believe. You see you used the word "know" but I already told you how I define that word. You can convince me to change my beliefs, but the only ways you'd do that is by exposing a double standard, as above.
I believe that my position is consistent and the one that most successfully models reality, with the least ad-hoc adjustments.
> Rand's view is that morality is objective, which she distinguishes from intrinsic.
But you would have to define the word "objective", probably as "independent of whatever any group of people, no matter how large, thinks". But then you can never prove that an objective morality exists, because you'd just be appealing to what groups of people think, as your premise. Some people, such as moral relativists, simply wouldn't agree.
(There are many more logical philosophical problems with Rand's argument, see this: http://www.owl232.net/rand5.htm )
The word "exist" is only relative to a conscious observer. So, as far as the conscious observer is concerned, of course they exist. The light from a distant star exists. The star may no longer exist, but it exists in the past because the observer infers that it had to generate the light. If they are wrong, then the star might never have existed, actually.
It's actually not self contradictory. Observers observe things. Obviously.
Let me make crystal clear what I am saying, using another language: mathematics.
I am saying "A exists" is meaningless. "A exists for B" is a binary relation, and its meaning is "B can observe A" or "B can deduce the existence of A from something that exists for B".
I am saying that "A should do B" is meaningless. Instead, moral statements are ternary relations: "If A wants C, A should do B", which recursively means "If A doesn't do B, and C doesn't happen, then A can be blamed", which recursively means "If D wants to be considered rational, D shouldn't blame A if A did B and C didn't happen", etc.
What you are saying when you say "You should do your homework" actually has a hidden premise, "... if you want to do well in school". This relative morality makes reasoning easier without ad-hoc rules. Like, you may want to escape a volcano, and thus you might violate some other "absolute principle" such as failing to respond to a question in a debate, in order to run for your life.
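The relational definitions above can be sketched in code. Here is a minimal, purely illustrative toy (all names and the observation/deduction data are hypothetical, not from the thread): "A exists for B" holds when B observes A directly, or can deduce A, transitively, from something that already exists for B, mirroring the distant-star example earlier in the thread.

```python
# Hypothetical sketch of "A exists for B" as a binary relation:
# B observes A directly, or deduces A from something that exists for B.

observations = {"alice": {"light"}}   # what each observer sees directly
deductions = {"light": {"star"}}      # observing X lets you infer that Y existed

def exists_for(a, b):
    known = set(observations.get(b, set()))
    frontier = list(known)
    while frontier:                   # transitive closure over deductions
        x = frontier.pop()
        for y in deductions.get(x, ()):
            if y not in known:
                known.add(y)
                frontier.append(y)
    return a in known

print(exists_for("star", "alice"))     # True: inferred from the observed light
print(exists_for("unicorn", "alice"))  # False: never observed or deduced
```

On this reading, a "parallel world" with no observation and no deduction chain reaching it simply never enters the relation, which is the sense in which "it exists" is claimed to be meaningless.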
Relativism just makes more sense to me and people like me. As I said at the outset, if you are careful to include all your premises, all these "magical" things become pretty straightforward.
"I believe that my position is consistent and the one that most successfully models reality, with the least ad-hoc adjustments."
If reality is only a set of beliefs, why are you trying to model it with another set of beliefs?
I've studied that Owl link in depth, and (though he appears intellectually honest for the most part) he mis-characterises Rand's views. I'll tap out here, but I want to point out that you definitely seem heavily influenced by Kantian ideas.
To answer your question: because beliefs is what we have. We don't really know things. We believe things. And we use language to communicate our beliefs. If you define knowledge to be "true justified belief" then you'd be stuck as you wouldn't ever be sure of something 100% so you wouldn't "know" anything by that definition.
And before you ask how come we know mathematical truths are true: it's because they are part of the language we have chosen to use, e.g. logic and math. If we used some sociological mumbo-jumbo arguments to convince each other, e.g. of institutional oppression, then maybe the mathematical theorems underpinning statistical results wouldn't be as certain in that context.
Again, I haven't read Kant beyond the "categorical imperative". I just noticed that people usually face all these difficulties because they fail to precisely DEFINE all their terms. This set of definitions just seems to best match what's really going on.
> Don't just read about it. Go ahead and have the experience yourself. It changes a lot of perspectives. You don't even have to go all the way to the highest levels described in the literature - the intermediate stuff is revelatory enough already.
It is indeed. The more you meditate, the more you see that most of the things that you do 'outside' are meaningless, just playing with your representations of the world and pretending they are real. Of course we can't do anything to somehow reach the 'real' world, but at least we can drop the pretending part.
The title of this book may seem both scary and beautiful in this regard: http://www.buddha-heute.de/downloads/treeriver.pdf
Not sure I'd recommend it for beginners, but I just love the title.
Argh. Yes. Fixed, thanks.
Need more caffeine.
It's like talking about Ruby and saying "Python" instead.
I don't think science should be based on accepting things that are "self-evident", especially things that are subjectively self-evident.
The advances of Galileo et al were based on challenging beliefs about the world which seemed self-evident at the time.
The idea that science should begin investigating mental processes by accepting what is subjectively self-evident seems especially flawed, given the difficulty that any system has in observing itself.
Geocentrism was never "self-evident"; it was merely obvious, and wrong. (And incidentally, it was actually far more scientific than Galileo's model until years after his death.)
I guess the point I'm trying to make is that the only way to observe consciousness is to experience it directly, and doing so proves it exists (at least to ourselves). And sure, from the perspective of some kind of science automaton in the material world, consciousness is as ridiculous as invisible unicorns, but we still experience it.
It's kind of bad PR. Supporting the existence of the hard problem appears awfully woo to begin with (undeservedly but understandably). When people appeal to meditation and psychedelics, which have even stronger woo connotations, it really doesn't help.
I find statements such as this baffling. To me, consciousness is the one thing strictly self-evident, with everything else having a second-hand status. There really is a hard stop here.
> meditation [...] Its structure is just too cult-like
That is a real problem, but not an unavoidable one. Many people practice meditation without joining any cult. It's something you do, like going to the gym. Heck, even the gym requires membership, whereas meditation can be practiced entirely on your own.
Gurdjieff talked about the effort "to remember oneself." Is it possible that some people have never had these moments naturally? Or don't remember them?
It would explain why they abjure the "hard problem" if they literally don't know what it is, even as they talk about it like they do. (Human p-zombies, if I understand the term correctly.)
I agree that there's an almost eerie inability to grasp the idea of what some of us refer to as consciousness. This word has all kinds of linguistic baggage, so it often feels like thinkers talk past each other when talking about it. I hesitate (perhaps refuse?) to read any more into it than that.
I am finding that everything we say about consciousness is just an artifact of our language. If we are careful to always include the subject of the sentence as well as the unspoken assumptions then I have not come across any questions that don't have a straightforward answer. Same with morality.
If you were to examine the contents of your awareness systematically you would have "subject" after "subject", and you could make sentences about them (there are thoughts and feelings which lack names but these would still be subject to your awareness.)
But there is always this other "thing", the awareness, that can never quite be subject to language. (In other words, if you were not "aware" there is no way I could describe awareness to you. It has no qualities.)
So this is the first aspect of the "Hard Problem of Consciousness": the foundational fact from which flows the veracity of all others is this indescribable "self" that is "aware", what is it?
The second aspect is the puzzle of "why qualia?" What the heck is "red" anyway? The subjective experience of "red" (and all the others) is kind of impossible. Yet there it is.
It's like a paradox. On the one hand, there is no doubting that our organs mediate the contents of awareness. But on the other hand, subjective experience seems impossible.
Anyway, all that to say, in order to make sense of people who flat out claim that the brain generates or creates consciousness, I have recently begun to entertain the idea that maybe, just maybe, they don't have the experience.
What I mean is that self-awareness is not automatic. You have to notice it, then work at it. So maybe the reason they don't see the hard problem of consciousness is that they don't see the hard problem of consciousness.
Without regard to the question at hand - I think the lack of self-awareness is fairly evident in western society in general.
To extrapolate: that would mean a large portion of western society is not aware of their own consciousness or the influence it has on their day-to-day lives.
"I think it would be a good idea." ;-)
What are you talking about though when you describe meditation? What is it like to peel away layers? What did you find underneath, can you describe it? Links?
More likely, you simply misunderstand what is meant by "illusion". What is self-evident is that there are thoughts, and some of these thoughts are "I am experiencing X". It's naive to simply infer from that that true subjective experience actually exists, just like it's naive to infer that water can break pencils simply because you can see it.
You should give Dennett more credit. He's almost certainly right, just as science has proved in every other case where humans thought they were special in the past. The closest analogue of consciousness as a pseudo-mystical property that was eventually replaced by a scientific concept is "vitalism", which was superseded by biology. Not by any proof that the élan vital didn't exist, but by the simple recognition that it was special pleading and provided no explanatory power whatsoever.
Therein the problem lies. Equating consciousness with thoughts. It's vastly different. It can exist independently of "thoughts". It can exist independently of an "I". This can be verified as an experience.
At most it could be a very, very special kind of "thought", radically different from the rest.
Just because qualia seem "vastly different" to thoughts doesn't mean they aren't reducible to thoughts. This would make the qualitative distinction of qualia from thoughts an illusion, a false conclusion inferred from an incorrect perception.
Your statement "There are just thoughts with 'I am experiencing X' content" could rightly be used as a counterargument to the "'I' exist" statement.
It could also be used against the "there exists pure consciousness without content" statement.
But to say that it may be used against the "there exists experience of thoughts, as opposed to just, e.g., electrical impulses" statement doesn't seem to make sense.
The latter doesn't mean that the experience and the thoughts are two different things; is that what bothers you?
So you consider qualia and thoughts to be identical? Because I'm drawing a distinction between them and saying the former doesn't exist, while the latter do exist. They certainly seem qualitatively distinct since there's so much focus on qualia in the hard problem.
Just realised that I seriously intended to argue with you about probably the single most mysterious thing known to humanity. Feels stupid.
Anyway, what is a thought in your opinion? Is it just a physical thing? What is 'experience of red'? An illusion?
(Again, I feel stupid trying to achieve something here :)
Funny, I see it as exactly the opposite, as would any materialist I'd hazard. "Qualia" is a term describing a vague, poorly defined concept that's inherently intertwined with our complex perceptual systems and with our internal model of the world, and this dizzyingly complex aggregate leads us to falsely conclude we have subjective experience.
"Thought" is merely a mentally held belief, a proposition inferred from one's perceptions.
Agreed, but this will necessarily be true of any objective, scientific account of consciousness. For instance, subatomic particles are as abstracted from our everyday macroscopic world as possible.
> Thought is always 'made' of some prior experience/perception, when you look at it. It's a very high level of description compared to consciousness/qualia.
I don't think we should conflate experience and perception. Materialists of course acknowledge we have perception, they simply deny the subjective experience that non-materialists infer from it.
Of course, this fuzzy boundary is where our natural language often fails us, so when I equate a thought with a proposition, I of course don't mean a proposition in English or any other language, I mean some abstract representation that structurally conforms to a proposition in a given mind.
The structure this thought takes isn't objective, in the sense that it's the same for every person, because the correspondence of that structure to that proposition is determined by the mind that interprets it, just like a program in x86 machine code isn't the same program when run on an ARM processor -- equivalent semantics have different structures on different CPUs, and equivalent structures have different semantics on different CPUs.
I'm sure it does, just like it probably seems nonsensical to deny that water breaks pencils. I mean, just look at it, you can just see it for yourself. How could you possibly deny the evidence of your own eyes? It's obvious to someone who is ignorant of any broader context, that water simply must break pencils and reconstitute them. We are likewise ignorant of the broader context of what constitutes consciousness, and we're all just flapping our gums making exactly these same fallacious arguments.
Like I initially said, "doesn't exist"/illusion has a specific meaning. It doesn't mean that we don't have perceptions or that perceptions don't convey some kind of knowledge, but merely that perception is not truly subjective, and so qualia don't deserve distinguished ontological status. Their apparent irreducibility is an illusion, a mental trick, like the many optical and auditory illusions to which our brain is susceptible.
I highly recommend reading the paper on the attention schema model of consciousness for an idea of how a real scientific theory might explain this trick.
But you're not saying that the pencil is not broken, you're saying I can't see a broken pencil.
> but merely that perception is not truly subjective
I'm not sure what distinction you're making? What would it mean for something to be "truly subjective"?
>I highly recommend reading the paper on the attention schema model of consciousness
Hmm, the paper makes some interesting observations about the nature of awareness as part of the mind model, and it explains how a p-zombie could believe it has a subjective experience of that model.
...but I know I'm not a p-zombie. Even if I can't prove it to you, I know I definitely have subjective experience.
No, I'm saying you're inferring a false conclusion (the pencil is broken/we have subjectivity) from an inherently flawed perception (I see a broken pencil sitting in water/I perceive subjectivity).
For the sake of argument, let's assume the mental model of the paper I linked to. Our pseudo-perception of our own subjectivity is really a false conclusion we infer from a differential between the signal we receive from our perceptions of the outside world, with the signal from our internal model of ourselves perceiving the outside world. The constant switching back and forth as these signals consecutively dominate each other, combined with our slow perceptual response time relative to this switching, yields the illusion of subjectivity, similar to how a uniprocessor computer yields the illusion of real parallelism by context switching processes thousands of times per second. It's still just an illusion though.
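The uniprocessor analogy can be made concrete with a toy sketch (the task names here are mine, purely illustrative of the analogy, not of the paper's model): a single scheduler runs strictly one step at a time, yet the resulting trace interleaves the two "signals", which is exactly the appearance of parallelism the analogy relies on.

```python
# Illustrative toy: one processor interleaving two tasks fast enough
# to look parallel, the way the described signal-switching might look
# like subjectivity. Task names are hypothetical.

def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"                 # one "time slice" of work

def uniprocessor(tasks):
    """Round-robin scheduler: strictly sequential, yet the trace interleaves."""
    trace = []
    while tasks:
        t = tasks.pop(0)
        try:
            trace.append(next(t))
            tasks.append(t)                 # context switch: back of the queue
        except StopIteration:
            pass                            # task finished; drop it
    return trace

trace = uniprocessor([task("world-signal", 3), task("self-model", 3)])
print(trace)
# ['world-signal:0', 'self-model:0', 'world-signal:1', 'self-model:1',
#  'world-signal:2', 'self-model:2']
```

No two steps ever run at the same time, but an observer sampling the trace more slowly than the switching rate would see two concurrent streams.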
> I'm not sure what distinction you're making? What would it mean for something to be "truly subjective"?
Subjective in the ontological sense, in that it cannot be reduced to an account consisting purely of third-person objective facts.
Certainly we perceive a type of "epistemic subjectivity" that's inherent to our distinct perspectives, but that's not true subjectivity in the ontological sense, like qualia.
> ...but I know I'm not a p-zombie. Even if I can't prove it to you, I know I definitely have subjective experience.
Materialistic theories of consciousness necessarily deny the conceivability of p-zombies, so you have nothing to prove. Any system with the same information processing structure as the model described in the paper (or whatever the "correct" model may be) would necessarily have consciousness. There is no non-functional "secret sauce" by which you can construct a p-zombie.
Max Planck, and nothing significant has changed since then.
BTW it may be appealing to the HN crowd that Bertrand Russell, the uber geek, subscribed to neutral monism as well.
There are several good rebuttals to Chalmers in this collection of essays: Explaining Consciousness: The Hard Problem: https://www.amazon.com/Explaining-Consciousness-Problem-Jona...
One of the best IMO is Thomas W. Clark's functional identity hypothesis that says that experience is _equivalent_ to the processing system. Basically, this is what it feels like to be a human brain.
Did you really mean to say "assumes" there? (As in assumes as axiomatic.) Because that's one hell of an assumption.
However, when we accept the 'panpsychist' axiom, we're free to come up with more detailed theories (such as Integrated Information Theory) that eg. predict under what conditions physical systems and consciousness coincide in a way we can experience. And indeed they claim they can explain eg. why the brain, as opposed to silicon circuits, coincides with it.
To be clear, I'm not saying this theory is correct - I've no idea. I'm just saying that this whole approach is as valid as the physicalist one.
Well, except you snuck two assumptions in there. I'm certainly not assuming, nor claiming to know definitively whether consciousness is an emergent phenomenon.
> However when we accept the 'panpsychist' axiom, we're free to come up with more detailed theories
If you assume a false premise you can come up with literally anything, so this is not particularly surprising or illuminating.
> I'm just saying that this whole approach is as valid as the physicalist one.
One is grounded in what can be observed objectively and the other is not.
... and to believe your explanation, I'm going to need evidence that it's as valid as the "physicalist" one.
 Well, at least as far as others can confirm that they observe roughly the same things, e.g. "red" or "consciousness".
 I think you're thinking of "materialist"?
Here's as far as I got last time. I'm almost certainly wrong, but hey, when it comes to consciousness, we know so little that being able to pinpoint why someone is wrong is still progress.
- Time is real but not "absolute". The universe exists as a causally connected network of ever-present nows. We aren't "slices of the Minkowski spacetime" experiencing itself. That's a fantastic model of reality at certain scales, but it's not reality.
- Ensembles of causally connected physical systems evolving in the now are "ontologically real" in the sense that something about them actually exists above and beyond their parts.
- Certain evolving physical systems have "an interior". It feels like something to be the interior of such a system.
- We don't experience the world, we experience being the interior of a certain kind of evolving physical system whose job it is to create, process, and manipulate representations. "We" at any moment are the interior of the gestalt representation of that moment.
- Evolution selected and maximized interiority for a reason. It has a purpose. How it has a purpose I have no idea, and any solutions smack of dualism. However, the closest thing I have to a hypothesis that isn't completely crazy is that what we call "the physical" and "the mental" are simply two projections of some "actual" underlying structure that is something like a Hilbert space. If that's the case, then it's possible that putting the physical system into a certain state causes the interior of that system to "feel" something which, in turn, causes some kind of "pressure" on the physical again. I have no clue how this could work, but I do like the strategy of starting with the commonsensical but heretical idea that we wouldn't be conscious if evolution didn't find a use for that property, and then going from there.
Anybody else have any crazy thoughts so we can at least talk about why we're wrong about them?
I'm simply suggesting that we try seeing where we go if we reject this hypothesis and instead treat consciousness as a thing that itself has survival utility. Doing so does cause dualism to creep in awfully quickly, but I'm not really so ideologically committed to pure physicalism that I'm unwilling to take the thought experiment seriously.
What you've stimulated for me is the question of whether brains and their processes could have the same utility without the quality of interiority. We can make obvious observations like "the pain of intense heat incentivizes the animal to stop touching fires". Was it necessary that the animal feel the pain, or could the exact same behavioral modification have been duly recorded in some neural adjustment without the interior experience?
My personal intuition is that 1. it is possible for a machine with the right interfaces running a sophisticated enough turing machine to replicate the above, and 2. such a turing machine would not have the same quality of interiority that we have. This suggests that consciousness doesn't have some inherent utility. Except! From my internal observations, consciousness seems to have the quality that it focuses inputs and computation. I filter out noise, center my reasoning/recall/attention on one or two things at a time. Perhaps this ability to filter and focus allows for more peak "thought" or "processing" given equal grey matter mass or energy input. So my thesis is that the evolutionary advantage of consciousness is that it was a software based way to extract more performance out of brains given hardware limitations.
The philosophical objection to the idea that consciousness is functional is that, if this were so, then consciousness would be able to affect the physical world. I feel this way, and because (and ONLY because) I felt that way I move my arm. But how did my actual "feeling" of anything cause my neural network to exhibit different outward behavior than it would have had I not felt anything at all?
From there, the claim is that you need to import dualism (by saying that the world of feelings somehow communicates back with the particles and energy that make up the neurons).
I don't think that's the only way to solve the problem, but it's a common objection.
fixed that for me. too late for edit.
At first blush, we would suppose that neurons firing this way and that to create actionable representations would have the same property. Evolution would be happy. But there's something extra there: it FEELS like something to be a neural network in a sequence of evolving states. Why? The heretical hypothesis is that the feeling itself had a purpose and was part of the system.
I don't know; is there? How do I tell?
I mean, I know what I experience, but I'm not an objective observer. There's evidence that what we perceive as the continuous stream of events is actually assembled in our memory after the fact from various out-of-order sensory inputs.
How do I know that what I perceive as consciousness (and remember, I have no way of knowing whether it's the same as what anyone else perceives, or whether they perceive it at all) is anything more than just wishful thinking? And that's a serious question to which I really want an answer.
The presence of feeling is rather obvious when you feel things like pain. Can't exactly make that up. That's definitely there. A lot of problems are grounded in feeling itself and wouldn't exist if nobody felt anything.
Proving its presence in other people is impossible, but it would seem very strange that you work one way and everyone else works a completely different way without some outside system tinkering with it.
At the risk of repeating myself --- do I feel it?
I cannot remember pain. I can remember being in pain, I can remember second-order things about the pain, like where it is or what kind it is. I can remember that it makes me unable to concentrate, and even a mild headache will make me useless even though I'm not physically debilitated. I can remember that cutting myself feels different to toothache, which feels different from that ghastly time when I had shingles (second-stage chickenpox. The most pain I had ever been in my entire life. And I had a mild case).
I know it was nasty, and I'm certainly going to avoid it, but when I'm not actually in pain, I cannot summon up even a shadow of a sensation of what the pain itself was actually like. Which is weird, because I can remember other sensations. But not pain.
So it's not obvious that I feel pain. Maybe it doesn't exist at all, and all it is is an illusion formed by my brain's negative reinforcement signal ('this is bad. Stop doing this'). That would at least explain why it doesn't show up in my memory.
> Maybe it doesn't exist at all, and all it is is an illusion formed by my brain's negative reinforcement signal ('this is bad. Stop doing this'). That would at least explain why it doesn't show up in my memory.
It doesn't matter whether pain is an illusion, we can easily recall visual illusions.
That's physical pain, but if you try that with emotional pain I'll bet you can feel it just fine and quite literally re-live it.
Maybe there is, and every process has an interior. The ones that lack concepts like learning, feelings, memories, building representations, etc, aren't fundamentally different but are just boring and don't tend to affect the world in self-reflective ways. Like comparing a cellular automaton where all cells start dead and stay dead to Conway's game of life.
The mathematical universe hypothesis and the novel Permutation City by Greg Egan don't put it this way but I can't help feeling they're related and worth looking into.
I think it's the underlying physics that shapes our thinking.
Basically it's all physics: rules of how particles/sub-particles/super-particles interact and the energy flowing through them. Now am "I" in control of making the decision to do one thing and not the other? Or was it inevitable that the decision was made, as that was the only possible outcome of particles interacting in my brain at that moment?
I was watching videos of elephants in the wild. They are just like us. I saw a group of elephants find a waterhole, but food was scarce. They could water the plants in the nearby area to produce more food; they just haven't figured it out yet. Just like us, they pass around information using their language. They don't have language for what they haven't figured out yet; they will have that capability once they figure something out and find the need to communicate it. In any organism I look at, I see behavior that is very much like ours would be if we too lacked such knowledge and technique. Our communication latencies are decreasing steadily as technology advances and we figure out more things. Once we have the tech to naturally communicate a thought, with all its context, to somebody instantly or in real time as it happens, we are essentially a superorganism. Each of us can be equated to a neuron in a brain, with each of us containing many neurons, each neuron many interacting molecules, and each molecule interacting atoms.
HN is a place where I find information I value more: many quality ideas from all around Earth. When I try to communicate with my sister to clarify some doubt she has, I often try to explain to her why something is so, with all the context. She can't take in all the data as fast as I provide it (keep up with me), and within a minute or two she's like STOP! People want exactly and only what they ask for. Everything else is considered noise.
For a prosaic view of consciousness I would recommend the book "Consciousness and the Brain" by cognitive psychologist Stanislas Dehaene. The experiments described in the book do, in my opinion, a good job of delimiting the problem and facilitating accurate definitions of what consciousness is.
We should start there; otherwise we are talking about how many angels can dance on the head of a pin (1).
(1) - Actually that problem has been solved already:
Put another way, I'm of the belief (and I think Chalmers and many others share it) that if you can demonstrate to me that C. elegans feels itself wiggling around, and you can show me why it feels anything at all and why it feels this way vs that way, then you have solved the hard problem. I could take your technical specifications for basic interiority (the solution to the hard problem) and then give you human-level reflective consciousness in a few years given a handful of engineers and sufficiently powerful hardware.
This isn't necessarily a popular opinion, though. I once read Julian Jaynes "The Origin of Consciousness in the Breakdown of the Bicameral Mind" and was really frustrated that the author spent the entire book explaining how we became "conscious" by listening to one half of our brain talking to the other half. How much time did he spend trying to explain how there was anything doing the "hearing" to begin with? Nearly zero. Maddening.
A more widely used term is "subjectivity".
> How much time did he spend trying to explain how there was anything doing the "hearing" to begin with? Nearly zero. Maddening.
This seems like a common sentiment, but I'm not sure it's justified. You're already assuming the existence of a subject just because of an observation, but that needs justification. That belief is just an inference, a chain of thoughts sourced from a thought and yielding the final thought "I experienced X". The question is really whether that chain of inference is valid or fallacious, in which case subjectivity is an illusion.
I suggest reading some scientific attempts to account for subjectivity. Our approaches to consciousness amount to marvelling at a pencil sitting in a glass of water and trying to figure out how water seems able to break and reconstitute the pencil, when we should really be developing the theory of light refraction. Like biology did to vitalism, all this mysticism surrounding consciousness will soon just fade into history.
I don't think this removes responsibility for actions, though, because even if this is true, whatever we choose to do is still what our bodies do, so we can still choose our actions. Like how two superrational players of the prisoner's dilemma can choose the action of their co-player by choosing their own action.
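Loosely, the superrationality point can be sketched with the textbook prisoner's dilemma payoffs (the numbers and code below are my own illustration, not from the thread): a superrational player assumes the co-player reasons identically, so both must land on the same action, and only the diagonal outcomes are reachable.

```python
# Standard illustrative payoffs: payoffs[(me, other)] = my utility.
# 'C' = cooperate, 'D' = defect.
payoffs = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def superrational_choice():
    # Assume the co-player reasons exactly like me, so our choices match:
    # only (C, C) and (D, D) are possible. Pick the better diagonal.
    diagonal = {a: payoffs[(a, a)] for a in ('C', 'D')}
    return max(diagonal, key=diagonal.get)

print(superrational_choice())  # 'C'
```

By choosing C, the superrational player in effect "chooses" C for both sides, even though D strictly dominates if the choices were independent.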
1) consciousness is merely something that emerges from matter (eg. functionalism)
2) consciousness and matter coexist everywhere, but in some configurations of matter consciousness is 'condensed' and gives rise to our subjective experience (eg. Integrated Information Theory by Tononi and Koch)
> How can the room I am sitting in be simultaneously out there and, as it were, inside my head, my experience? We still have no answer to that question.
Consciousness is a difficult problem, yes, but there's no need to make it seem even more difficult than it actually is.
1. Representations experience themselves at all
2. Representations seem to feel different depending on their structure and relationships to other representations
Does anyone have a better interpretation of this paragraph that highlights the supposed mystery about our connection to "the real world"?
BTW. your parent seems to confuse "inside" with "the brain", which seems like a category error.
"It seems obvious that what we're experiencing isn't the world or any so-called "objects" in it, but a structured representation of the whole and its parts."
What do you mean by "the whole and its parts"? Do you mean that what we experience for our whole life is just a play of representations created by our mind? Because it seems to be the case.
And when you think about it, it is as mindfucky as possible. It may be obvious to you (is it?), but most people act as if it were very far from obvious: in each action, most of us attribute real, independent existence to constructs of our minds that, under scrutiny, are completely arbitrary.
Once you understand that you construct the whole world, you also start to understand how many ways there are to construct it (Sapir-Whorf etc.). Once you understand how fundamentally these possible understandings may differ, you start to appreciate how arbitrary 'your world' is. And then you start to wonder WTF is the 'real world'? What is 'out there'? Is there anything at all?
Just a loose association, but isn't your question a bit like asking "what's wrong with this picture" ? http://personalpages.to.infn.it/~fiorenti/escher/pgallery.gi...
On the other hand I understand your question perfectly. It's even hard to tell whether there is something strange about it or not.
> BTW. your parent seems to confuse "inside" with "the brain", which seems like a category error.
On a few time scales your brain makes plans. It uses beliefs and a model of the world to make those plans. It stores those plans and the actions it has taken. And then checks if actions had their intended effects, and if plans reached their intended goals. Updating its model of the world, its beliefs, and what useful actions are.
This is a constant process reflecting upon the current situation, and upon the recent past of actions and plans. It is like an observer observing itself. This is the root of consciousness, I think.
Probably abstract thinking and language, enough intelligence, and a theory of mind, leads to self awareness.
If this is correct, we can formulate some necessary parts for a system to be conscious:
* recognition of the environment
* a model of the environment
* ability to reason about actions, their consequences, and how to achieve goals
* memory of past plans and goals and outcomes
* a process of executing actions, making new plans, and learning from the outcomes
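The bullet points above can be sketched as a bare observe-model-plan-act-learn loop. To be clear, every name and the toy one-dimensional world here are my own invention for illustration, not a claim about how brains do it:

```python
class Agent:
    """Toy agent: models its environment, plans toward a goal, learns."""
    def __init__(self):
        self.model = {}    # action -> learned effect on the world state
        self.memory = []   # record of past actions and their outcomes

    def plan(self, state, goal):
        for a in (-1, 1):
            if a not in self.model:
                return a   # unknown action: explore it to build the model
        # exploit: pick the action whose predicted outcome lands closest to the goal
        return min(self.model, key=lambda a: abs(state + self.model[a] - goal))

    def learn(self, state, action, new_state):
        # check whether the action had its intended effect; update the model
        self.model[action] = new_state - state
        self.memory.append((action, new_state - state))

agent, state, goal = Agent(), 0, 3
for _ in range(20):                    # safety bound
    if state == goal:
        break
    action = agent.plan(state, goal)
    new_state = state + action         # the "environment" responds
    agent.learn(state, action, new_state)
    state = new_state
print(state)  # 3: the goal, reached via the agent's learned model
```

After two exploratory steps the agent's model is complete and it walks straight to the goal, which covers recognition, modelling, reasoning about actions, memory, and learning from outcomes; whether such a loop amounts to consciousness is exactly what the thread disputes.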
Here's a list of things that you are addressing and that are not dealing with the hard problem of consciousness ('easy problems'): https://en.wikipedia.org/wiki/Hard_problem_of_consciousness#...
What you describe is when the observer is disconnected from longer term memory.
"This is a constant process reflecting upon the current situation, and upon the recent past of actions and plans. It is like an observer observing itself. This is the root of consciousness, I think."
Your description seems to fall under "the ability of a system to access its own internal states" and/or "the reportability of mental states" from the 'easy problems' section of the wiki pages I linked to.
Both could be possible without consciousness at all. We could imagine (or even program) a robot that in response to external stimuli modifies its state. It could also have second order algorithms, eg. it could check what state it's in and if it remains in the same state for too long, the state changes randomly. It would be the same as 'observer observing itself'.
This functional explanation in any way doesn't lead to the phenomena of consciousness, which would require the robot to have some specific first-hand 'experience' of the world and itself. That's the difference between the easy and the hard problem of consciousness.
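The robot described above is trivially programmable, which is rather the point of the objection: the following sketch (all names made up by me) monitors its own state and perturbs it when stuck, i.e. an 'observer observing itself', with no implication that anything is experienced.

```python
import random

class Robot:
    def __init__(self, seed=0):
        self.state = "idle"
        self.ticks_in_state = 0
        self.rng = random.Random(seed)   # seeded for reproducibility

    def sense(self, stimulus):
        # first-order: external stimuli modify the internal state
        if stimulus is not None:
            self.state, self.ticks_in_state = stimulus, 0

    def introspect(self, patience=3):
        # second-order: the robot checks its own state and, if it has
        # remained unchanged too long, changes it randomly
        self.ticks_in_state += 1
        if self.ticks_in_state > patience:
            self.state = self.rng.choice(["explore", "rest", "signal"])
            self.ticks_in_state = 0

robot = Robot()
robot.sense("explore")
for _ in range(5):      # no new stimuli: the self-monitor eventually fires
    robot.introspect()
print(robot.state)      # some state the robot chose for itself
```

The functional story is complete, yet nothing in it entails a first-hand 'experience', which is the gap the hard problem points at.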
I am curious about your thoughts. Here is why I think it does address the so called hard problem:
The goals of these plans are evaluated by simulating the world and the body and future mental states. So the brain can measure, or "feel", the response, and thus judge the plans and compare them.
This constant self reflection and self prediction is what feels like something to the thing doing it. Why would it not?
But in general I have some objection to the "hard problem".
We cannot prove something to be conscious. Probably we never can. The best we can do is find good proxy indicators of consciousness and good lower bounds for what can potentially be conscious and what cannot.
But the opposite is also true: we cannot prove there is such a thing as the hard problem of consciousness. For all we know, every program we have ever written has had some kind of experience as it was running.
I agree it is reasonable to assume this is not the case. But I don't think you can prove this is not the case. Therefore the hard problem might, or might not, actually be a problem.
You can see the color red and not predict anything at all, while still being conscious. Predicting is just a narrow part of our conscious activities, yet all of them are conscious.
So consciousness is something much, much more basic and fundamental than any higher-order cognitive function.
> We cannot prove something to be conscious. Probably we never can. The best we can do is find good proxy indicators of consciousness and good lower bounds for what can potentially be conscious and what cannot.
We can't prove it because we have no idea what consciousness is, even on a philosophical level. Moreover, if consciousness is a basic property of reality, in a way parallel to and coexisting with another property, matter, as panpsychists suggest, then it would indeed be impossible, as this dimension would be completely 'invisible' from the matter point of view.
When you're talking about 'proving', most likely you assume the existence of some reproducible causal chain in the material, 'intersubjective' space. So you can't 'prove' consciousness in this paradigm by its very definition.
Of course one can insist to throw away any theory that's not provable in our paradigm, but does it bring us any closer to the understanding of consciousness? Also, is there anything inherently wrong with neutral monism or naturalistic dualism? These theories are not internally inconsistent, moreover they are also not inconsistent with anything current physics states - they just provide a wider view that at least somehow addresses the notion of consciousness.
This is what I think: Consciousness is continuously self observing past actions, past plans, actual outcomes, planning new actions and predicting future outcomes. That feels like something to the thing doing such a process, because practically all of those steps involve simulating/predicting/checking what something means to itself. Our brains do this because it is a self correcting mechanism.
> You can see color of red and don't predict anything at all, while still being consciouss.
I don't think so, when you see that color, you cannot help observing its context, what it means, check if something needs to be done, etc. All that is what I meant with predicting. Maybe the color is of no significance, but the only way you figured that out was by "predicting".
Even if consciousness is 100% material (as I suggest), we cannot prove it exists in others. We don't need consciousness as another dimension for it to be unprovable.
As a matter of fact, if it is another dimension, then our brains are "interacting" with that dimension. So the hypothesis predicts that a certain configuration of matter can have some kind of interaction with that dimension. That should not be hard to prove.
But since we have no such proof, and more mundane hypotheses, we must conclude that the other dimension hypothesis is extremely unlikely.
Another line of thinking on the dimension hypothesis is this: we have brains as small as C. elegans's 300 neurons, to ants' 0.25 million neurons, to mice's 70 million neurons, to humans' 86 billion neurons. At which neuron count does consciousness come in? If we simulate the full set of neurons of any of these, will it behave the same? Will it be conscious in the same way?
Solving the hard problem is not required to create consciousness. Humans can't solve the hard problem, yet they make conscious children; meanwhile, human children raised by animals become animal-like.
Maybe the hard problem of consciousness is like describing a feeling, or the sum of all feelings. We can feel but can't explain. Meanwhile, every one of our feelings has chemical reactions underneath.
You're serious? :) Humans can make children conscious? They can barely control their instincts; most of the time they are not aware of what they are doing or why. We have no clue how we work or how reality is created.
To say that consciousness "just" appears automatically when some complexity is reached is magical thinking or, at the very least, explaining the problem away. You would have to answer why, how, and when the critical point is reached, what the difference is between a conscious and an unconscious robot, etc.
His "spread mind" site is great too:
As a "seeker" I sometimes wonder if he figured out something close to what nonduality teaches, only in a more technical language (ie. consciousness is not located inside any more than it is outside, therefore somewhat blurring the lines between I and the world). I think his example of the rainbow is such a great metaphor.
Bernardo Kastrup has his theories as well, with his "whirlpool" metaphor, but to me it sounds like it makes sense only to him, and I don't get much out of it. Whereas Riccardo's pointers are reminiscent of older teachings in that they really invite the reader to examine direct experience (just my interpretation of his theory).
there are no challenges, and even simple numbers can be conscious. we have sequenced DNA today, you can download it right now: http://www.sanger.ac.uk/resources/downloads/human/
After downloading it go ahead and take a checksum.
within 500 years (a ridiculous overestimation) someone will build a simulation that simulates (emulates) the part of the brain that is described in that genome. Whether at full speed or one-tenth speed, it will be done, and for test purposes someone might do it in a deterministic VM of sorts. They can take that checksum; let's say it's:
(That's an SHA-512 hash).
I'll tell you what I hashed. All I hashed was "I'm a brain".
Instead of a text string, within 500 years someone will hash a VM that contains as much data as a human brain, is human for practical purposes, and reports the same thing.
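The checksum step of the thought experiment is ordinary hashing; here's what hashing the string from the comment above looks like (the 500-year VM is obviously hypothetical, this only shows the mechanics):

```python
import hashlib

# Hash the claim itself, exactly as described in the thought experiment.
digest = hashlib.sha512(b"I'm a brain").hexdigest()
print(len(digest))   # 128 hex characters = 512 bits
print(digest[:16])   # deterministic: the same input always yields the same hash

# A VM image would be hashed the same way, just with far more bytes, e.g.:
# hashlib.sha512(vm_image_bytes).hexdigest()
```

The hash fixes the exact state of whatever was hashed, which is why a deterministic VM snapshot of a simulated brain could, in principle, be checksummed like any other data.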
For me personally, this is beyond even discussion and I consider the matter closed. However, at the HN link at the top of this comment I report a state of affairs under which I would be wrong. I don't consider it even worth thinking about though.
I don't consider any question around this area to be "open". We just have to accept what we deduced.
if you don't like it or don't agree with it, you can wait until someone simulates an adult brain reporting self-consciousness, and then you will be forced to say, "oh well, I was wrong."
There is no conceivable scenario under which I would have to say, "oh well, I was wrong." (Above, I outline such a scenario though. It's not going to happen.)
We are just self replicating, self preserving processes. The main function is to maintain equilibrium in the face of entropy. We need food, water, shelter and companionship to survive. In order to get those, we need to learn to operate in the world and reason about it. It's nothing magical, just reinforcement learning and other kinds of learning - a little semi-supervised learning, for example, and a lot of unsupervised learning.
All this grand system exists for the sole purpose of protecting its existence. It protects its own life and self replicates, which is another kind of survival. Consciousness is that which protects the body, that is its sole purpose.
If you are not conscious in the morning, you don't drink and eat, and in 3 days you're dead. That's why you need to be conscious every day. That's the greatest miracle of consciousness. Our species would not exist without it. But it's better to work with clear cut notions, like perception, action and reward.
Thus, qualia appear by neural net processing of sensations, and are fed into a recurrent network that judges them moment to moment and updates its internal state, as well as perform actions. That is why we feel a stream of perceptions, not just separate moments.
Qualia appear to be irreducible because the mapping from perceptions to representations is nonlinear - even if we can duplicate it in neural nets, we still can't assign semantic value to each neuron and weight in the network. So it appears magical because it is of a level of complexity that exceeds the human working memory. But it is not so magical that we can't replicate it. We can recognize objects better than humans on certain datasets, and we have word embeddings built on huge corpora of text that are being used for translation. We can compute the "feel" of a word or image.
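Computing the "feel" of a word in the embedding sense is just vector similarity. A toy sketch with made-up 3-D vectors (real embeddings such as word2vec or GloVe are learned from corpora and have hundreds of dimensions):

```python
import math

# Hand-set toy embeddings, purely illustrative; real ones are learned.
emb = {
    "warm": [0.9, 0.1, 0.2],
    "hot":  [0.8, 0.2, 0.1],
    "cold": [-0.9, 0.1, 0.2],
}

def cosine(u, v):
    # cosine similarity: 1 = same direction, -1 = opposite
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Words with a similar "feel" end up with similar vectors:
print(cosine(emb["warm"], emb["hot"]) > cosine(emb["warm"], emb["cold"]))  # True
```

The point in the comment stands either way: the similarity structure is computable, but nothing in the arithmetic tells us whether anything is felt.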
Are neural nets conscious of their representations? If not - why? What's the difference between neural net that has subjective experience of its intermediate representations and the one that doesn't? On a functional level it doesn't need any subjective experience, so why would it explain anything?
Attempts like this definitely give us some intuitions, but they are working around the hard problem without addressing it.