The Challenge of Consciousness (nybooks.com)
106 points by lermontov 154 days ago | 118 comments



I always point to David Chalmers when someone wants to understand the problem of consciousness. I like how open-minded he is about possible solutions, giving credibility to all the options (unlike, e.g., Daniel Dennett, who seems almost religious about how revolutionary his "there's no problem at all" approach is).

I really recommend this podcast in which Sam Harris talks with Chalmers about all the options currently in the game: https://www.samharris.org/podcast/item/the-light-of-the-mind

One of the most promising theories (discussed in the podcast) assumes that consciousness is a fundamental attribute of reality, the other side of the coin (matter being the first one). Which leads to the conclusion that consciousness is everywhere, but not as 'condensed' as in the brain. It seems crazy at first, but the more you think about it the more plausible it seems.

Here's a nice paper about it: http://rstb.royalsocietypublishing.org/content/370/1668/2014...

Within this paradigm it starts to make sense to call reality "The Mind", as some Buddhist schools do. There's also the crazy part, about how under some conditions it could potentially lead to experiencing other people's minds.

For the more adventurous, here's a talk by Culadasa, a very experienced meditator and neuroscience professor, in which he shares some thoughts about the problem and some of these crazy experiences he has had: http://s3.amazonaws.com/dharmatreasure/150430-tcmc--culadasa...


If you believe consciousness exists, and that the universe is real, then it's pretty much tautological that the universe is a system which includes consciousness, which is a very, very tiny step away from stating that the universe itself is conscious --- not by way of mysticism, but simply by redefining your terms. In much the same way, my house can be considered a data processing device, because it contains a computer. (Well. Several computers. Many computers.)

The problem is, what do we do with this now? There's never been any physical evidence that consciousness is a real thing or can affect the world in any way other than via the actions of people who believe they are conscious (which, speaking from my perspective as a person who believes they are conscious, is no small thing); the only real consequences of deciding that we are part of a greater conscious system are going to be philosophical. And there are plenty of existing philosophical systems which believe that already --- you mention Buddhism.

And while I do think that believing that we're part of the world, rather than being somehow separate from it, is definitely a good idea, I'm an engineer. This seems dissatisfying. Is there anything else we can do with this idea?


> the only real consequences of deciding that we are part of a greater conscious system are going to be philosophical

I think the question of consciousness is critical in at least one domain: the question of whether or not consciousness survives beyond the death of the physical body (and, relatedly, whether it precedes the birth of the body). If we bracket away religious doctrine and just focus on the facts of the matter, this debate hinges on how consciousness relates to the physical body it inhabits. Is it created by the body, or does it merely use the body as a conduit?

My boy William James gave a great lecture† in 1897 where he claimed that scientific understanding had not yet ruled out immortality, since it was unknown whether this relationship between mind and body was productive (body creates mind) or transmissive (body filters mind). I could be wrong, but I think this is still an open question, and it very much depends on understanding the true nature of consciousness. That's why the Hard Problem is so awesome.

https://www.uky.edu/~eushe2/Pajares/jimmortal.html (published in 1898)


If I clone myself and my clone goes through the exact experiences and interactions I go through, I think both I and my clone will have the exact same consciousness. Cloning, because that's the only way to ensure an exact replica of my DNA sequence in someone else. A person born naturally and subjected to all the same experiences and interactions may not end up with the exact same consciousness, since the two bodies may have different tolerances to something (for example, an allergy) and hence may not experience the same things, so their consciousnesses would probably diverge there.

So no soul. But still, my natural offspring will start off from seed energy, half of which is from my body and the other half from my partner. It's possible that only natural-born twins will have the exact same consciousness if subjected to the exact same experiences. This is because we may accumulate information in our DNA over our lifetime, so a clone would be starting off with that information, as opposed to the comparatively clean slate I started off with. Or maybe the accumulated info is only written to sperm cells or ova. Information is either passed down this way, or it's just totally random mutation (or DNA shuffling) that produces the new feature adaptations that cause evolution.


It's quite definitively created by the body. It's very difficult to explain certain phenomena otherwise. For example, certain people will get a right-hemisphere stroke that not only removes their ability to use their left arm, but also causes them to deny that they are unable to use their left arm. And this denial is temporarily suppressed by squirting cold water into the left ear.

We're getting more and more detail in the nuts-and-bolts models of how the brain constructs experiences and consciousness. IMO it's just a matter of time before there are no mysteries left, and every high-level description of human experience can be broken down into smaller details, recursively, until what's left is low-level descriptions of neurons.


While I too believe consciousness is material, those observations don't prove it, they just demonstrate that consciousness can be influenced by the material world.


And while I too believe that fire is material, our observations don't prove it, they just demonstrate that how phlogiston gets released by combustion can be influenced by the material world.


The difference is that you can easily replicate fire and understand how it works. You can tell if something is on fire or not. It's been explained in terms of very simple components.

Go ahead and explain qualia as experienced in first person.


I agree that it doesn't have to be in any way mystical, and that it comes down to redefining what's fundamental and what's merely constructed or emergent (although that's a very tricky term). In the panpsychist approach, neither matter nor consciousness can be reduced to the other.

For a more engineering approach to the problem, there's a lot we can do. One way would be to study in more and more detail the correlates of consciousness, to understand the "linking" mechanism better. Anil Seth describes this approach here: https://aeon.co/essays/the-hard-problem-of-consciousness-is-....

On the other hand, if consciousness indeed is a fundamental attribute of reality, we should extend the 'engineering' to cover also the 'mind' part of reality. This is exactly what meditation is - to train the mind so that it can see more and more clearly all the components of conscious experience until it reaches the bottom. As a result you come up with a number of 'edge cases' - interesting states that can be used to reason about this reality conceptually.

There are, e.g., the 'experience of space itself', the experience of consciousness (the so-called immaterial jhanas), or finally the experience caused by calming the unconscious mind to the extent that it stops projecting any content into the conscious mind, while the latter is still operating. This (supposedly) leads to a realisation by the mind itself that the 'experienced world' is nothing but its own creation. Of course we can all understand this conceptually, but the experience leads to a permanent change in perception (you can't "unsee it"), which becomes as objective as the human mind can possibly be. Meaning that it realises, in every moment, that each experience is its own creation.

It may all seem mystical, exotic or plain crazy, but there's no other way to understand the mind than through the mind itself, if the discussed hypothesis is true.


EDIT: fixed stupid brain bug

Dennett's approach is absurd. Consciousness is the only thing in the Universe that is self-evident in the strict sense. Everything else is at least second-hand stuff - see the brain in a vat hypothesis. To say that consciousness is an illusion is ridiculous; I can see how he gets there, but the conclusion is absurd. Somewhere along the way there's a mistake; no, I don't know where the mistake is, and I could not even begin to hypothesize. OP's article points out this predicament very well. It is a hard problem indeed.

> One of the most promising theories (discussed in the podcast) assumes that consciousness is a fundamental attribute of reality, the other side of the coin (matter being the first one).

I can hear the objections being raised already, but it's a neat promising step towards solving the hard problem. Suddenly the reducibility paradox vanishes.

It would also relieve another problem. If you can't trivially reduce consciousness to neural activity, then how is it coupled with perception? There would appear to be a gap between the sensory chain and the fact of awareness. And then for consciousness to work, it would already have to be everywhere (in a sense, possibly even a somewhat metaphorical one, or at least non-trivial). The assumption you mention solves this problem.

> here's a talk by Culadasa, very experienced meditator and a neuroscience professor

It's very, very hard to agree with Dennett, and it's getting harder the more you advance in the practice of meditation. As soon as you realize that you can peel off and disconnect layers upon layers of perception (external at first, then the many internal layers too), and also greatly reduce the waves of what is commonly called "thinking", while consciousness does not diminish but instead becomes at once more vivid and more stable, more calm and more intense, less connected to external factors but more broad - all that talk about "the illusion of consciousness" starts to look extremely suspicious. It's not perception, and it's not thinking; it can relate to these things, but it's fundamentally different. It can ultimately exist completely independent of inputs, either external (sensory) or internal (thoughts, mind activity in the trivial sense, even memory).

Don't just read about it. Go ahead and have the experience yourself. It changes a lot of perspectives. You don't even have to go all the way to the highest levels described in the literature - the intermediate stuff is revelatory enough already.


I also practiced meditation (Vipassana) for several years, and, like you, realised that positing the non-existence of consciousness is a dead-end.

Buddhism seemed to go deeper than many other philosophical frameworks for this reason. However, I looked into many philosophical systems, and I was extremely surprised to discover that the thinker with the most consistent views on reality and consciousness (both self-consistent and consistent with my own experiences) was (drumroll) Ayn Rand.

Though she's known for her views on egoism and capitalism, she wrote a ton about consciousness, perception and concept-formation. If you look at my comment history you'll see I talk about her a lot on HN, and that's because I think computer scientists will gain immense value from studying her works. Introduction to Objectivist Epistemology and The Romantic Manifesto contain her deepest writings on consciousness and cognition and are the books I'd recommend. http://aynrandlexicon.com/lexicon/consciousness.html is a good taster.

http://aynrandlexicon.com/ayn-rand-works/introduction-to-obj...

http://aynrandlexicon.com/ayn-rand-works/the-romantic-manife...

I've met a couple of other Objectivists who have practiced meditation - not many but they exist. I was very surprised to discover the connection between the two areas. Hopefully in a thread like this on a site like this, curious minds will read this comment and be persuaded to investigate further. It's a fascinating journey!


What EXACTLY did you discover during meditation? What steps did you take and what was the resulting experience? Can you be specific?

Same with Ayn Rand. What ideas did you read? Can you summarize?


Reading your other comment:

"I am finding that everything we say about consciousness is just an artifact of our language. If we are careful to always include the subject of the sentence as well as the unspoken assumptions then I have not come across any questions that don't have a straightforward answer. Same with morality."

It sounds like you're influenced by Wittgenstein, or at least his ideas. If you assume all these issues are artefacts of language then there's no way I can give you an adequate summary in any amount of writing, and especially not one comment. Human minds can only communicate by exchanging strings of words, after all.

To give you a straight answer, though: in vipassana meditation you focus on one thing (typically the breath, though I often used ambient sounds) and attempt to break down your experience of that thing into its constituent sensations. One thing you learn from this is how frequently your attention wanders, and by really training yourself to focus you learn how your attention is directed from moment to moment. The main thing you are trying to gain, though, is a first-hand experience of the Buddhist view of reality: that solid objects do not exist, and that existence consists only of flickering sensations, with no permanence or independent existence.

I really can't summarise Ayn Rand, since her writing is already extremely well summarised and essentialised. Click around that website if you want to learn her views on any particular topic. The relevant ideas here, though, are her views on concepts. There's been a long-running divide in philosophy between people who thought concepts reflected something intrinsic to the universe (e.g., that all horses contain an essence of horse-ness, or reflect Plato's ideal horse) -- and the people who thought this was nonsense and that language was therefore an arbitrary human construct (this latter view is currently dominant). Rand found a third position: concepts are human constructs, but they serve an objective purpose, and there are objective rules which determine their formation. ITOE is the only place where you will find her full argument.

To compare the two: Buddha takes sensory experience as the gold standard for reality, and concludes that reality exists, but is ultimately transient and lacks identity. Rand takes perceptual experience as the gold standard and concludes that reality exists, that the everyday objects we perceive really exist, and that therefore reality is logical and consistent.

(As for Wittgenstein, he says human language is just an arbitrary game which can never describe true reality, and ends up in a kind of quasi-mysticism (man helpless in the face of an ineffable, incomprehensible universe)).

I predict you will not be satisfied with this answer, because you are following a very misleading philosophical framework, but I wrote it up for the interest of lurkers.


Also I say that the word "exists" is meaningless unless it is relative to a conscious observer.

To illustrate, if I described a parallel world that you would never encounter, never be able to deduce its existence from any observations in this one etc. then in what way are the sentences "it exists" and "it doesn't exist" meaningful?

I think meaning is only relative to some subject who can comprehend it.

I also think morality is relative to someone with a "sense of morality". Absolute morality is like absolute humor. Humor requires a human with a "sense of humor". A humorless person is like a psychopath. They don't share the same psychological responses as others. Some moral value can be widespread just like a joke can be funny to a lot of people.

So I guess I am a relativist...


It's kind of interesting: you are the second person recently to tell me that I am espousing Wittgenstein's ideas. I have never read the man's works and don't know much of his philosophy, though I know other philosophers.

I would say what we have are beliefs. We use language to express them, and we can only legitimately convince each other, i.e. change someone's beliefs, by exposing a double standard in their beliefs. All other methods, e.g. appeals to emotion, could achieve this, but they are not logically legitimate without the double-standard reason for changing beliefs. Knowledge is another word for bias.

From your perspective, do you dispute this?


> It's kind of interesting: you are the second person recently to tell me that I am espousing Wittgenstein's ideas. I have never read the man's works and don't know much of his philosophy, though I know other philosophers.

Interesting. It's probably that most people who consider the question "what do words really mean?" hit on a very small set of possible answers. There's also the fact that his ideas have influenced many other intellectuals in many fields, and you may have absorbed his way of thinking by osmosis.

> can only legitimately convince each other, i.e. change someone's beliefs, through exposing a double standard in beliefs

The primary legitimate way to convince another person is to point to reality -- the simplest way to do so is to point to something observable. If I want to convince you that eggs break when dropped, I drop a bunch of eggs in front of you.

> Knowledge is another word for bias.

I can't really make sense of this statement, but it seems you're implying that all of our knowledge is untrustworthy. I definitely dispute this -- we have to have some certain knowledge or we don't have any knowledge at all. Put it this way -- do you know that "knowledge is another word for bias"? Are you certain? How?

"Also I say that the word 'exists' is meaningless unless it is relative to a conscious observer."

But does the conscious observer then exist? Do they have to exist relative to another conscious observer? Existence has to be absolute or you end up in such paradoxes. Otherwise, I agree that asserting the existence of parallel universes is arbitrary.

"I also think morality is relative to someone with a "sense of morality". Absolute morality is like absolute humor. Humor requires a human with a "sense of humor"."

Rand's view is that morality is objective, which she distinguishes from intrinsic. Intrinsic morality would be what you call absolute morality -- things are inherently good or evil, regardless of their relation to other things. In contrast, Rand says things are good or evil in relation to some agent. However, this relationship is an objective fact of reality. (E.g., "sunlight is good for lizards", or "friendships are good for human beings".)


> The primary legitimate way to convince another person is to point to reality -- the simplest way to do so is to point to something observable. If I want to convince you that eggs break when dropped, I drop a bunch of eggs in front of you.

What you call reality is actually a collection of statements that you believe to be true. You just have a very high confidence that they are true. You can point to something but you also have to make an argument which assumes these statements. If I already believe them, then you can successfully build on that. If I don't, then you have to convince me of X, and the only legitimate way to do that is to show that I would have a double standard if I don't believe X when I already believe Y for much less convincing reasons.

> I can't really make sense of this statement, but it seems you're implying that all of our knowledge is untrustworthy. I definitely dispute this -- we have to have some certain knowledge or we don't have any knowledge at all. Put it this way -- do you know that "knowledge is another word for bias"? Are you certain? How?

The statement is defining the word "knowledge". And you can't get me to play the game of "are you certain?" Do you know that "knowledge is another word for bias?" What I said is what I believe. You see you used the word "know" but I already told you how I define that word. You can convince me to change my beliefs, but the only ways you'd do that is by exposing a double standard, as above.

I believe that my position is consistent and the one that most successfully models reality, with the least ad-hoc adjustments.

> Rand's view is that morality is objective, which she distinguishes from intrinsic.

But you would have to define the word "objective", probably as "independent of whatever any group of people, no matter how large, thinks". But then you can never prove that an objective morality exists, because you'd just be appealing to what groups of people think, as your premise. Some people, such as moral relativists, simply wouldn't agree.

(There are many more logical philosophical problems with Rand's argument, see this: http://www.owl232.net/rand5.htm )

> But does the conscious observer then exist? Do they have to exist relative to another conscious observer? Existence has to be absolute or you end up in such paradoxes. Otherwise, I agree that asserting the existence of parallel universes is arbitrary.

The word "exist" is only relative to a conscious observer. So, as far as the conscious observer is concerned, of course they exist. The light from a distant star exists. The star may no longer exist, but it exists in the past because the observer infers that it had to generate the light. If they are wrong, then the star might never have existed, actually.

It's actually not self contradictory. Observers observe things. Obviously.

Let me make crystal clear what I am saying, using another language: mathematics.

I am saying "A exists" is meaningless. "A exists for B" is a binary relation, and its meaning is "B can observe A" or "B can deduce the existence of A from something that exists for B".

I am saying that "A should do B" is meaningless. Instead, moral statements are ternary relations: "If A wants C, A should do B", which recursively means "If A doesn't do B, and C doesn't happen, then A can be blamed", which recursively means "If D wants to be considered rational, D shouldn't blame A if A did B and C didn't happen", etc.

What you are saying when you say "You should do your homework" actually has a hidden premise: "... if you want to do well in school". This relative morality makes reasoning easier, without ad-hoc rules. For example, you may want to escape a volcano, and thus you might violate some other "absolute principle" (say, by failing to respond to a question in a debate) in order to run for your life.
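The relational framing above can be sketched in a few lines of Python. This is purely my own toy illustration (none of the names or data come from the thread), and it only models the simplest case: direct observation, with no chains of deduction.

```python
# Toy sketch: "exists" and "should" treated as relations rather than
# absolute predicates. All names here are hypothetical illustrations.

def exists_for(a, b, observations):
    # "A exists for B" iff B can observe A.
    # (The deductive case -- B infers A from something else B observes --
    # is left out of this minimal model.)
    return a in observations.get(b, set())

def should(agent, action, goal, achieves):
    # Ternary "should": "If A wants C, A should do B" iff doing B
    # achieves C, according to our (hypothetical) table of outcomes.
    return achieves.get((agent, action)) == goal

# Hypothetical data: what each observer can observe, and what actions achieve.
observations = {"alice": {"star_light"}}
achieves = {("student", "do_homework"): "do_well_in_school"}

print(exists_for("star_light", "alice", observations))        # True
print(exists_for("parallel_world", "alice", observations))    # False
print(should("student", "do_homework", "do_well_in_school", achieves))  # True
```

The point of the sketch is just that once the hidden argument (the observer, or the goal) is made an explicit parameter, the "absolute" one-place versions of these predicates never need to appear.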

Relativism just makes more sense to me and people like me. As I said at the outset, if you are careful to include all your premises, all these "magical" things become pretty straightforward.


"What you call reality is actually a collection of statements that you believe to be true."

"I believe that my position is consistent and the one that most successfully models reality, with the least ad-hoc adjustments."

If reality is only a set of beliefs, why are you trying to model it with another set of beliefs?

I've studied that Owl link in depth, and (though he appears intellectually honest for the most part) he mis-characterises Rand's views. I'll tap out here, but I want to point out that you definitely seem heavily influenced by Kantian ideas.


What do you have against my binary and ternary definitions of existence and moral obligations though? Don't you see how a) they usefully match your actual experience b) they seem to eliminate any "mysticism" in Rand's words?

To answer your question: because beliefs are what we have. We don't really know things; we believe things. And we use language to communicate our beliefs. If you define knowledge to be "true justified belief" then you'd be stuck, as you wouldn't ever be 100% sure of anything, so you wouldn't "know" anything by that definition.

And before you ask how come we know mathematical truths are true: it's because they are part of the language we have chosen to use, e.g. logic and math. If we used some sociological mumbo-jumbo arguments to convince each other, e.g. of institutional oppression, then maybe the mathematical theorems underpinning statistical results wouldn't be as certain in that context.

Again, I haven't read Kant beyond the "categorical imperative". I just noticed that people usually face all these difficulties because they fail to precisely DEFINE all their terms. This set of definitions just seems to best match what's really going on.


Did you mean Dennett, instead of Chalmers? I agree that Dennett's conclusions seem to be false almost on a logical level. The self is an illusion for sure; consciousness for sure isn't (EDIT: not saying here that there exists 'pure consciousness without content').

> Don't just read about it. Go ahead and have the experience yourself. It changes a lot of perspectives. You don't even have to go all the way to the highest levels described in the literature - the intermediate stuff is revelatory enough already.

It is indeed. The more you meditate, the more you see that most of the things that you do 'outside' are meaningless, just playing with your representations of the world and pretending they are real. Of course we can't do anything to somehow reach the 'real' world, but at least we can drop the pretending part.

The title of this book may seem both scary and beautiful in this regard: http://www.buddha-heute.de/downloads/treeriver.pdf Not sure I'd recommend it for beginners, but I just love the title.


> Did you mean Dennett, instead of Chalmers?

Argh. Yes. Fixed, thanks.

Need more caffeine.

It's like talking about Ruby and saying "Python" instead.


> Dennett's approach is absurd. Consciousness is the only thing in the Universe that is self-evident in the strict sense.

I don't think science should be based on accepting things that are "self-evident", especially things that are subjectively self-evident.

The advances of Galileo et al were based on challenging beliefs about the world which seemed self-evident at the time.

The idea that science should begin investigating mental processes by accepting what is subjectively self-evident seems especially flawed, given the difficulty that any system has in observing itself.


Consciousness is self evident in the most literal sense. For something to be evident, it must be observed, which means it will be experienced, but consciousness is experience itself. It is self-evident because it is the thing which allows things to be evident.

Geocentrism was never "self-evident"; it was merely obvious, and wrong. (And incidentally, it was actually far more scientific than Galileo's model until years after his death.)


I think you are conflating consciousness as some means to gain experience with consciousness-the-model-of-experience.


I'm not sure exactly what distinction you're making, but I realised after posting that I made a leap from observation to experience. Observation doesn't require experience, in the sense that a p-zombie could take in sensory information and use it to make scientific inferences without ever experiencing those observations.

I guess the point I'm trying to make is that the only way to observe consciousness is to experience it directly, and doing so proves it exists (at least to ourselves). And sure, from the perspective of some kind of science automaton in the material world, consciousness is as ridiculous as invisible unicorns, but we still experience it.


I don't think that meditation is useful evidence for the hard problem. Personally, I don't need further convincing that the hard problem exists, and I'm sympathetic to the usefulness of meditation as an exploratory tool, but it is a practice that requires you to basically accept consciousness as dogma and then spend a lot of time studying it. Its structure is just too cult-like to be useful for convincing others, and you should be worried about letting it reinforce your own beliefs as well.

It's kind of bad PR. Supporting the existence of the hard problem appears awfully woo to begin with (undeservedly but understandably). When people appeal to meditation and psychedelics, which have even stronger woo connotations, it really doesn't help.


> accept consciousness as dogma

I find statements such as this baffling. To me, consciousness is the one thing strictly self-evident, with everything else having a second-hand status. There really is a hard stop here.

> meditation [...] Its structure is just too cult-like

That is a real problem, but not an unavoidable one. Many people practice meditation without joining any cult. It's something you do, like going to the gym. Heck, even the gym requires membership, whereas meditation can be practiced entirely on your own.


After ~30 years of thinking about it, I am beginning to suspect that some people (e.g. Dennett) don't know they are conscious.

Gurdjieff talked about the effort "to remember oneself." Is it possible that some people have never had these moments naturally? Or don't remember them?

It would explain why they abjure the "hard problem" if they literally don't know what it is, even as they talk about it like they do. (Human p-zombies, if I understand the term correctly.)


If you're willing to entertain the idea that some people don't know they're conscious, it's not that far of a leap to say they actually aren't. You might be interested in reading about Julian Jaynes. He put forward a hypothesis that consciousness is a very recent (3,000 years or so) phenomenon. This is a rabbit hole I prefer not to go down, because it just seems so absurd and solipsistic.

I agree that there's an almost eerie inability to grasp the idea of what some of us refer to as consciousness. This word has all kinds of linguistic baggage, so it often feels like thinkers talk past each other when talking about it. I hesitate (perhaps refuse?) to read any more into it than that.


My personal belief (to about 90%) is that there was some sort of Disaster (lit. "sundering from the stars") in the past and we are all, globally, suffering from shock in the aftermath. I don't think current conditions are normal.


Can you explain in more detail?

I am finding that everything we say about consciousness is just an artifact of our language. If we are careful to always include the subject of the sentence as well as the unspoken assumptions then I have not come across any questions that don't have a straightforward answer. Same with morality.


Well, I can try. ;-)

If you were to examine the contents of your awareness systematically you would have "subject" after "subject", and you could make sentences about them (there are thoughts and feelings which lack names but these would still be subject to your awareness.)

But there is always this other "thing", the awareness, that can never quite be subject to language. (In other words, if you were not "aware" there is no way I could describe awareness to you. It has no qualities.)

So this is the first aspect of the "Hard Problem of Consciousness": the foundational fact, from which flows the veracity of all others, is this indescribable "self" that is "aware". What is it?

The second aspect is the puzzle of "why qualia?" What the heck is "red" anyway? The subjective experience of "red" (and all the others) is kind of impossible. Yet there it is.

It's like a paradox. On the one hand, there is no doubting that our organs mediate the contents of awareness. But on the other hand, subjective experience seems impossible.

Anyway, all that to say, in order to make sense of people who flat out claim that the brain generates or creates consciousness, I have recently begun to entertain the idea that maybe, just maybe, they don't have the experience.

What I mean is that self-awareness is not automatic. You have to notice it, then work at it. So maybe the reason they don't see the hard problem of consciousness is that they don't see the hard problem of consciousness.


> What I mean is that self-awareness is not automatic.

Without regard to the question at hand - I think the lack of self-awareness is fairly evident in western society in general.

To extrapolate, this would mean that a large portion of western society is not aware of their own consciousness or the influence it has on their day-to-day lives.


"What do you think of Western Civilization, Mr. Gandhi?"

"I think it would be a good idea." ;-)


You sound like John Searle.

What are you talking about though when you describe meditation? What is it like to peel away layers? What did you find underneath, can you describe it? Links?


If Dennett claims that he's unconscious then it just provides a reason to dismiss his other claims, too.


> To say that consciousness is an illusion is ridiculous

More likely, you simply misunderstand what is meant by "illusion". What is self-evident is that there are thoughts, and some of these thoughts are "I am experiencing X". It's naive to simply infer from that that true subjective experience actually exists, just like it's naive to infer that water can break pencils simply because you can see it do so [1].

You should give Dennett more credit. He's almost certainly right, just as science has proved in every other case where humans thought they were special in the past. The closest analog of consciousness as a pseudo-mystical property that was eventually simply replaced by a scientific concept is "vitalism", which was eventually superseded by biology. Not by any proof that élan vital didn't exist, but by simple recognition that it was special pleading and provided no explanatory power whatsoever.

[1] http://etc.usf.edu/clippix/pix/refraction-of-pencil-in-cup-o...


> What is self-evident is that there are thoughts, and some of these thoughts are "I am experiencing X".

Therein the problem lies. Equating consciousness with thoughts. It's vastly different. It can exist independently of "thoughts". It can exist independently of an "I". This can be verified as an experience.

At most it could be a very, very special kind of "thought", radically different from the rest.


> Therein the problem lies. Equating consciousness with thoughts. It's vastly different.

Just because qualia seem "vastly different" to thoughts doesn't mean they aren't reducible to thoughts. This would make the qualitative distinction of qualia from thoughts an illusion, a false conclusion inferred from an incorrect perception.


There's a very distinct experience of being conscious while having no thoughts at all in the mind. To treat thoughts as some kind of basic element is a mistake that neither a neuroscientist nor a meditator would ever make. Look up the default mode network and how it can be 'turned off'.


I think you're mistaking the utility of epistemically treating qualia on their own terms with their fundamental ontological status. Just because reducibility may not be useful in most cases of analyzing qualia as a phenomenon, that doesn't make it untrue that they are reducible to thoughts.


I'm not sure your argument makes sense. To say there exists conscious experience doesn't add anything ontologically to 'it's self-evident that there are thoughts'. The self-evidence of thoughts and conscious experience are the same thing.

Your statement "There are just thoughts with 'I am experiencing X' content" could rightly be used as a counterargument to the statement "'I' exist". It could also be used against the statement 'there exists pure consciousness without content'. But to say that it may be used against 'there exists experience of thoughts, as opposed to just e.g. electrical impulses' doesn't seem to make sense.

The latter doesn't mean that the experience and thoughts are two different things; is that what bothers you?


> To say there exists conscious experience doesn't add anything ontologically to 'it's self-evident that there are thoughts'.

So you consider qualia and thoughts to be identical? Because I'm drawing a distinction between them and saying the former doesn't exist, while the latter do exist. They certainly seem qualitatively distinct since there's so much focus on qualia in the hard problem.


The problem with 'qualia' and 'consciousness' is that these concepts seem to reify something existing separately from their content. The other problem here is that 'thought' is a complex and compound entity while 'qualia' seems to imply something atomic. So let's not use this concept.

Just realised that I seriously intended to argue with you about probably the single most mysterious thing known to humanity. Feels stupid.

Anyway, what is a thought in your opinion? Is it just a physical thing? What is 'experience of red'? An illusion? (Again, I feel stupid trying to achieve something here :)


> The other problem here is that 'thought' is a complex and compound entity while 'qualia' seems to imply something atomic. So let's not use this concept.

Funny, I see it as exactly the opposite, as would any materialist I'd hazard. "Qualia" is a term describing a vague, poorly defined concept that's inherently intertwined with our complex perceptual systems and with our internal model of the world, and this dizzyingly complex aggregate leads us to falsely conclude we have subjective experience.

"Thought" is merely a mentally held belief, a proposition inferred from one's perceptions.


Well, when you analyse thought as something you experience, you'll come to realise, e.g., that it can be verbal or visual (check out Temple Grandin) and is always compound. You can talk about thought/belief as something basic on the level of propositional logic, but this level is as abstracted from experience and observation as possible. Thought is always 'made' of some prior experience/perception, when you look at it. It's a very high level of description compared to consciousness/qualia.


> You can talk about thought/belief as something basic on a level of propositional logic, but this level is as abstracted from experience and observation as possible.

Agreed, but this will necessarily be true of any objective, scientific account of consciousness. For instance, subatomic particles are as abstracted from our everyday macroscopic world as possible.

> Thought is always 'made' of some prior experience/perception, when you look at it. It's a very high level of description compared to consciousness/qualia.

I don't think we should conflate experience and perception. Materialists of course acknowledge we have perception, they simply deny the subjective experience that non-materialists infer from it.

Of course, this fuzzy boundary is where our natural language often fails us, so when I equate a thought with a proposition, I of course don't mean a proposition in English or any other language, I mean some abstract representation that structurally conforms to a proposition in a given mind.

The structure this thought takes isn't objective, in the sense that it's the same for every person, because the correspondence of that structure to that proposition is determined by the mind that interprets it, just like a program in x86 machine code isn't the same program when run on an ARM processor -- equivalent semantics have different structures on different CPUs, and equivalent structures have different semantics on different CPUs.
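The x86/ARM point can be made concrete with a toy sketch (every opcode and name here is invented for illustration, not a real ISA): the same byte string is a different program depending on which instruction set interprets it.

```python
# Toy illustration (all opcodes invented): the same byte sequence is a
# different program under two hypothetical instruction sets, standing in
# for the x86-vs-ARM point above.
program = bytes([1, 5, 2, 3])  # pairs of (opcode, operand)

def run(code, semantics):
    # Fold the (opcode, operand) pairs through the given instruction set.
    acc = 0
    for op, arg in zip(code[::2], code[1::2]):
        acc = semantics[op](acc, arg)
    return acc

isa_a = {1: lambda a, x: a + x, 2: lambda a, x: a * x}  # "CPU A": add, mul
isa_b = {1: lambda a, x: a - x, 2: lambda a, x: a + x}  # "CPU B": sub, add

print(run(program, isa_a))  # (0 + 5) * 3 = 15
print(run(program, isa_b))  # (0 - 5) + 3 = -2
```

Identical structure, different semantics: the meaning lives in the interpreter, not in the bytes.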


Qualia and thoughts are distinct, but to say the former doesn't exist just seems nonsensical to me. Qualia are self-evident by their very nature. I look at something red and I experience the appearance of red, and I don't know how someone who experiences that can say it doesn't happen. I could believe the red thing doesn't exist, and that the experience is caused by something else, but to say my experience of a red thing doesn't exist is just clearly false.


> Qualia and thoughts are distinct, but to say the former doesn't exist just seems nonsensical to me.

I'm sure it does, just like it probably seems nonsensical to deny that water breaks pencils. I mean, just look at it, you can just see it for yourself. How could you possibly deny the evidence of your own eyes? It's obvious to someone who is ignorant of any broader context, that water simply must break pencils and reconstitute them. We are likewise ignorant of the broader context of what constitutes consciousness, and we're all just flapping our gums making exactly these same fallacious arguments.

Like I initially said, "doesn't exist"/illusion has a specific meaning. It doesn't mean that we don't have perceptions or that perceptions don't convey some kind of knowledge, but merely that perception is not truly subjective, and so qualia don't deserve distinguished ontological status. Their apparent irreducibility is an illusion, a mental trick, like the many optical and auditory illusions to which our brain is susceptible.

I highly recommend reading the paper on the attention schema model of consciousness [1] for an idea of how a real scientific theory might explain this trick.

[1] http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00...


>I'm sure it does, just like it probably seems nonsensical to deny that water breaks pencils.

But you're not saying that the pencil is not broken, you're saying I can't see a broken pencil.

>but merely that perception is not truly subjective

I'm not sure what distinction you're making? What would it mean for something to be "truly subjective"?

>I highly recommend reading the paper on the attention schema model of consciousness

Hmm, the paper makes some interesting observations about the nature of awareness as part of the mind model, and it explains how a p-zombie could believe it has a subjective experience of that model.

...but I know I'm not a p-zombie. Even if I can't prove it to you, I know I definitely have subjective experience.


> But you're not saying that the pencil is not broken, you're saying I can't see a broken pencil.

No, I'm saying you're inferring a false conclusion (the pencil is broken/we have subjectivity) from an inherently flawed perception (I see a broken pencil sitting in water/I perceive subjectivity).

For the sake of argument, let's assume the mental model of the paper I linked to. Our pseudo-perception of our own subjectivity is really a false conclusion we infer from a differential between the signal we receive from our perceptions of the outside world, with the signal from our internal model of ourselves perceiving the outside world. The constant switching back and forth as these signals consecutively dominate each other, combined with our slow perceptual response time relative to this switching, yields the illusion of subjectivity, similar to how a uniprocessor computer yields the illusion of real parallelism by context switching processes thousands of times per second. It's still just an illusion though.
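For what it's worth, the uniprocessor analogy is easy to make literal. Here is a minimal sketch (names invented): a strictly serial round-robin "scheduler" whose interleaved trace looks concurrent even though only one task ever runs at a time.

```python
# A strictly serial round-robin "scheduler" (names invented for the sketch).
# The interleaved trace looks concurrent, though only one task ever runs.
def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"  # one quantum of work

def schedule(tasks):
    trace = []
    while tasks:
        t = tasks.pop(0)        # pick the next runnable task
        try:
            trace.append(next(t))
            tasks.append(t)     # context switch: back of the queue
        except StopIteration:
            pass                # task finished; drop it
    return trace

print(schedule([task("A", 3), task("B", 3)]))
# ['A:0', 'B:0', 'A:1', 'B:1', 'A:2', 'B:2']
```

To an observer who samples the trace slowly, A and B appear to run "at the same time"; the parallelism is entirely an artifact of switching faster than the observer can resolve.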

> I'm not sure what distinction you're making? What would it mean for something to be "truly subjective"?

Subjective in the ontological sense, in that it cannot be reduced to an account consisting purely of third-person objective facts.

Certainly we perceive a type of "epistemic subjectivity" that's inherent to our distinct perspectives, but that's not true subjectivity in the ontological sense, like qualia.

> ...but I know I'm not a p-zombie. Even if I can't prove it to you, I know I definitely have subjective experience.

Materialistic theories of consciousness necessarily deny the conceivability of p-zombies, so you have nothing to prove. Any system with the same information processing structure as the model described in the paper (or whatever the "correct" model may be) would necessarily have consciousness. There is no non-functional "secret sauce" by which you can construct a p-zombie.


"I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness."

Max Planck, and nothing significant has changed since then.


I first came upon the consciousness-as-reality theme when I read “The Self-Aware Universe” [0] by Goswami, which introduces the idea of monistic idealism. Penrose also entertains the idea in the “Emperor’s New Mind” [1]. I highly recommend both books to anyone interested in exploring alternate theories.

[0]: https://www.amazon.com/Self-Aware-Universe-Consciousness-Cre... [1]: https://www.amazon.com/Emperors-New-Mind-Concerning-Computer...


Just to be clear, what you are talking about is different from what Chalmers and others are talking about. "Idealistic" monism assumes that matter doesn't actually exist, while "neutral" monism assumes that matter and consciousness have similar ontological status as irreducible properties of the 'real stuff' [0].

BTW it may be appealing to the HN crowd that Bertrand Russell, the uber geek, subscribed to neutral monism as well.

[0]: https://en.wikipedia.org/wiki/Neutral_monism


Scott Aaronson's take on the paper you linked to:

http://www.scottaaronson.com/blog/?p=1799


I find reading Chalmers to be irritating. His first premise is dualism. If you read his original paper on the hard problem you'll find basically every statement presupposes this fact and then concludes it. It's maddening. Dualism cannot be assumed a priori.

There are several good rebuttals to Chalmers in this collection of essays, Explaining Consciousness: The Hard Problem: https://www.amazon.com/Explaining-Consciousness-Problem-Jona...

One of the best IMO is Thomas W. Clark's functional identity hypothesis that says that experience is _equivalent_ to the processing system. Basically, this is what it feels like to be a human brain.

http://www.naturalism.org/philosophy/consciousness/the-expla...


> One of the most promising theories (discussed in the podcast) assumes that consciousness is a fundamental attribute of reality

Did you really mean to say "assumes" there? (As in, assumes as axiomatic.) Because that's one hell of an assumption.


Indeed, it definitely is one hell of an assumption. Just as "matter is fundamental and consciousness is an emergent phenomenon" is. Until we find a way to test it in any way (which may not be possible at all), the only thing we're left with is to play with the axioms and get a feel for what the resulting theory can and cannot explain.

However, when we accept the 'panpsychist' axiom, we're free to come up with more detailed theories (such as Integrated Information Theory) that e.g. predict under what conditions physical systems and consciousness coincide in a way we can experience. And indeed its proponents claim they can explain e.g. why the brain, as opposed to silicon circuits, coincides with it.

To be clear, I'm not saying this theory is correct - I've no idea. I'm just saying that this whole approach is as valid as the physicalist one.


> Indeed, it definitely is one hell of an assumption. Just as "matter is fundamental and consciousness is an emergent phenomenon" is.

Well, except you snuck two assumptions in there. I'm certainly not assuming, nor claiming to know definitively whether consciousness is an emergent phenomenon.

> However when we accept the 'panpsychist' axiom, we're free to come up with more detailed theories

If you assume a false premise you can come up with literally anything, so this is not particularly surprising or illuminating.

> I'm just saying that this whole approach is as valid as the physicalist one.

One is grounded in what can be observed objectively[1] and the other is not.

... and to believe your explanation, I'm going to need evidence that it's as valid as the "physicalist"[2] one.

[1] Well, at least as far as others can confirm that they observe roughly the same things, e.g. "red" or "consciousness".

[2] I think you're thinking of "materialist"?


I promised myself I wouldn't follow the Chalmerian rabbit down its hole for a while (one eventually gets tired of being dizzy no matter how fascinating the reason). Every time I swear to myself that I'll give it a rest for a few months, something like this pops up on HN and it starts all over again.

Here's as far as I got last time. I'm almost certainly wrong, but hey, when it comes to consciousness, we know so little that being able to pinpoint why someone is wrong is still progress.

- Time is real but not "absolute". The universe exists as a causally connected network of ever-present nows. We aren't "slices of the Minkowski spacetime" experiencing itself. That's a fantastic model of reality at certain scales, but it's not reality.

- Ensembles of causally connected physical systems evolving in the now are "ontologically real" in the sense that something about them actually exists above and beyond their parts.

- Certain evolving physical systems have "an interior". It feels like something to be the interior of such a system.

- We don't experience the world, we experience being the interior of a certain kind of evolving physical system whose job it is to create, process, and manipulate representations. "We" at any moment are the interior of the gestalt representation of that moment.

- Evolution selected and maximized interiority for a reason. It has a purpose. How it has a purpose I have no idea, and any solutions smack of dualism. However, the closest thing I have to a hypothesis that isn't completely crazy is that what we call "the physical" and "the mental" are simply two projections of some "actual" underlying structure that is something like a Hilbert space. If that's the case, then it's possible that putting the physical system into a certain state causes the interior of that system to "feel" something which, in turn, causes some kind of "pressure" on the physical again. I have no clue how this could work, but I do like the strategy of starting with the commonsensical but heretical idea that we wouldn't be conscious if evolution didn't find a use for that property, and then going from there.

Anybody else have any crazy thoughts so we can at least talk about why we're wrong about them?


I think saying that "evolution selected consciousness for a reason" is a potential category error. It seems a bit like saying that evolution selected blood pressure. Evolution selected circulatory systems - blood pressure is an inevitable phenomenon of a circulatory system. Similarly, it might be that evolution selected information processing and decision making systems, and that consciousness might be an inevitable phenomenon of sufficiently complex information processing systems.


That's why I labelled the idea "heretical". I think your explanation is by far the most popular one in science. The idea is that consciousness is an "epiphenomenon", the "steam whistle" of the locomotive. It has no functional purpose.

I'm simply suggesting that we try seeing where we go if we reject this hypothesis and instead treat consciousness as a thing that itself has survival utility. Doing so does cause dualism to creep in awfully quickly, but I'm not really so ideologically committed to pure physicalism that I'm unwilling to take the thought experiment seriously.


I can't make the leap from evolution selecting for consciousness for some utility to dualism. Can you make that more explicit for me?

What you've stimulated for me is the question of whether brains and their processes could have the same utility without the quality of interiority? We can make obvious observations like "the pain of intense heat incentivizes the animal to stop touching fires". Was it necessary that the animal feel the pain, or could the exact same behavioral modification have been duly recorded in some neural adjustment without the interior experience?

My personal intuition is that 1. it is possible for a machine with the right interfaces running a sophisticated enough Turing machine to replicate the above, and 2. such a Turing machine would not have the same quality of interiority that we have. This suggests that consciousness doesn't have some inherent utility. Except! From my internal observations, consciousness seems to have the quality that it focuses inputs and computation. I filter out noise, and center my reasoning/recall/attention on one or two things at a time. Perhaps this ability to filter and focus allows for more peak "thought" or "processing" given equal grey matter mass or energy input. So my thesis is that the evolutionary advantage of consciousness is that it was a software-based way to extract more performance out of brains given hardware limitations.


> I can't make the leap from evolution selecting for consciousness for some utility to dualism. Can you make that more explicit for me?

The philosophical objection to the idea that consciousness is functional is that, if this were so, then consciousness would be able to affect the physical world. I feel this way, and because (and ONLY because) I felt that way, I move my arm. But how did my actual "feeling" of anything cause my neural network to exhibit different outward behavior than it would have had I not felt anything at all?

From there, the claim is that you need to import dualism (by saying that the world of feelings somehow communicates back with the particles and energy that make up the neurons).

I don't think that's the only way to solve the problem, but it's a common objection.


An argument why I believe consciousness can affect particles/"physics": if it couldn't, why would we be speaking about our internal experiences? If they were only experiences like in a movie, the fact that they exist wouldn't influence anything and our bodies wouldn't talk about them. Yet here I am, typing about internal experiences.


consciousness can be seen as the counterweight to emotions, so it can have survival utility


There is no widely agreed upon functional utility of being conscious. The zombie thought experiments are about this.


The novel Blindsight by Peter Watts is great for exploring the relationships between consciousness and intelligence. The characters theorize that consciousness could be an important/easy stepping stone for the evolution of intelligence, but that it may not be necessary to keep, and that consciousness could eventually be evolved away, relegated like a vestigial organ.


>but that it may not be necessary to keep

fixed that for me. too late for edit.


Is your "interior" just the mental model we have of the world (including ourselves)? If so, it's not surprising that evolution selected for it. It's advantageous to be able to make predictions about the world (including ourselves), and to make predictions about a thing you need a model of the thing.


True, but what is surprising is that there is something "it is like" to be that interior. As an example: CPUs can perform calculations on data we consider "representations". The outcome of those calculations, in turn, affect the world. We suspect that they do this without there ever being something it is like to be a CPU.

At first blush, we would suppose that neurons firing this way and that to create actionable representations would have the same property. Evolution would be happy. But there's something extra there: it FEELS like something to be a neural network in a sequence of evolving states. Why? The heretical hypothesis is that the feeling itself had a purpose and was part of the system.


> True, but what is surprising is that there is something "it is like" to be that interior.

I don't know; is there? How do I tell?

I mean, I know what I experience, but I'm not an objective observer. There's evidence that what we perceive as the continuous stream of events is actually assembled in our memory after the fact from various out-of-order sensory inputs.

How do I know that what I perceive as consciousness (and remember, I have no way of knowing whether it's the same as what anyone else perceives, or whether they perceive it at all) is anything more than just wishful thinking? And that's a serious question to which I really want an answer.


That seems more relevant to the quality of the information being felt, not the feeling itself. Even if it's just assembled in memory later, it's still felt; there is still an "it is like".

The presence of feeling is rather obvious when you feel things like pain. Can't exactly make that up. That's definitely there. A lot of problems are grounded in feeling itself and wouldn't exist if nobody felt anything.

Proving its presence in other people is impossible, but it would seem very strange that you work one way and everyone else works a completely different way without some outside system tinkering with it.


> The presence of feeling is rather obvious when you feel things like pain. Can't exactly make that up.

At the risk of repeating myself --- do I feel it?

I cannot remember pain. I can remember being in pain, I can remember second-order things about the pain, like where it is or what kind it is. I can remember that it makes me unable to concentrate, and that even a mild headache will make me useless even though I'm not physically debilitated. I can remember that cutting myself feels different from toothache, which feels different from that ghastly time when I had shingles (a reactivation of chickenpox. The most pain I had ever been in in my entire life. And I had a mild case).

I know it was nasty, and I'm certainly going to avoid it, but when I'm not actually in pain, I cannot summon up even a shadow of a sensation of what the pain itself was actually like. Which is weird, because I can remember other sensations. But not pain.

So it's not obvious that I feel pain. Maybe it doesn't exist at all, and all it is is an illusion formed by my brain's negative reinforcement signal ('this is bad. Stop doing this'). That would at least explain why it doesn't show up in my memory.


I share your bafflement at trying to recall pain, I can't do it to any fidelity either. But I actually think that this is not unique among sensations, and most of our memories are about as hazy as this. I can remember that I did or saw or smelled things, but I can't exactly reproduce the sensations/images. I also have to disagree with the logic of the following:

> Maybe it doesn't exist at all, and all it is is an illusion formed by my brain's negative reinforcement signal ('this is bad. Stop doing this'). That would at least explain why it doesn't show up in my memory.

It doesn't matter whether pain is an illusion, we can easily recall visual illusions.


> I know it was nasty, and I'm certainly going to avoid it, but when I'm not actually in pain, I cannot summon up even a shadow of a sensation of what the pain itself was actually like.

That's physical pain, but if you try that with emotional pain I'll bet you can feel it just fine and quite literally re-live it.


I think that's more that you can literally re-inflict emotional pain by thinking about it, rather than that you're just remembering the pain itself.


>We suspect that they do this without there ever being something it is like to be a CPU.

Maybe there is, and every process has an interior. The ones that lack concepts like learning, feelings, memories, building representations, etc., aren't fundamentally different but are just boring and don't tend to affect the world in self-reflective ways. Like comparing a cellular automaton where all cells start dead and stay dead to Conway's Game of Life.
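That comparison is easy to make literal. A minimal sketch of one step of Conway's rule (standard sparse-set formulation): the same update applied to an empty grid does nothing, while a tiny "blinker" pattern produces nontrivial dynamics.

```python
from collections import Counter

# One step of Conway's Game of Life over a sparse set of live cells.
def step(cells):
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next step if it has 3 neighbours, or 2 and is already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

print(step(set()))                      # set(): a dead universe stays dead
print(step({(0, 0), (1, 0), (2, 0)}))  # the "blinker" flips orientation
```

Same rule, same "physics", but one initial state has nothing going on inside and the other does.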

The mathematical universe hypothesis[0] and the novel Permutation City by Greg Egan don't put it this way but I can't help feeling they're related and worth looking into.

[0] https://en.wikipedia.org/wiki/Mathematical_universe_hypothes...


Perhaps feeling is a more effective way to convey information.


I suspect you're exactly right.


What about the idea often proposed by Graham Hancock, where consciousness is not generated in the brain but rather the brain is a sort of receiver for it?


That's an idea with zero evidence and thus indistinguishable from fantasy.


That leaves the question of where is consciousness, then.


Hard to square that idea away with observable concepts like how hormones and brain damage affect people. One could say that the brain does all the computation, and the link to consciousness is one-way, but if that consciousness-link isn't affecting the physical computation, then that doesn't explain why our brain makes us talk about being conscious. That consciousness-link-thing would be entirely unrelated to why we talk about being conscious and should be called something else.


Then we should be able to detect the transmissions and build other receivers. Or we should be able to block the transmissions, or generate new ones. If this hypothesis were true, it should be very easy to construct tests for it but as far as I am aware nothing is forthcoming.


Isn't time what we experience because we have memory? I think my consciousness is the state I got into because of the experiences I had due to interactions. The brain is a finite state machine; finite because it is limited to states of the universe. Even if I live near a black hole I may only live 70 years in local time experienced, which may equal something like 1000 years in Earth's local time.

I think it's the underlying physics that shapes our thinking. Basically it's all physics rules for how particles/sub-particles/super-particles interact and the energy flowing through them. Now, am "I" in control of making the decision to do one thing and not the other? Or was it inevitable that the decision was made, as that was the only possible outcome of particles interacting in my brain at that moment? https://news.ycombinator.com/item?id=13012952

I was watching videos of elephants in the wild. They are just like us. I saw a group of elephants finding a waterhole, but food was scarce. They could water the plants in the nearby area to produce more food; they just haven't figured it out yet. Just like us they pass around information using their language. They do not have language for what they have not figured out yet. They will have that capability once they figure something out and find the need to communicate it. In any organism I look at, I see behavior that is very much like ours would be if we too lacked such knowledge and technique. Our communication latencies are decreasing steadily with the advancement of tech as we figure out more things. Once we have the tech to naturally communicate the exact same thought, with all its context, to somebody instantly or in real time as it happens, then we are essentially a super-organism. Each of us can be equated to a neuron in a brain, with each of us having many neurons, each neuron having many molecules interacting, each of which has atoms interacting.

HN is like a place where I find information I value more: many quality ideas from all around Earth. When I try to communicate with my sister to clarify some doubt she has, I often try to explain to her why something is so, with all the context. She can't take in all the data I provide as fast as I provide it (keep up with me), and within a minute or two she's like STOP! People want exactly and only what they ask for. Everything else is considered noise.


Most of the times I read about this subject I get the feeling that every person in the discussion is talking about different things.

For a prosaic view of consciousness I would recommend the book "Consciousness and the Brain" by cognitive psychologist Stanislas Dehaene. The experiments described in the book do, in my opinion, a good job of delimiting the problem and facilitating accurate definitions of what consciousness is.

We should start there; otherwise we are talking about how many angels can dance on the head of a pin (1).

(1) - Actually that problem has been solved already: http://www.improbable.com/airchives/paperair/volume7/v7i3/an...


That's one reason why I've started using the term "interiority" to describe the bedrock "existence of a feeling of what it is like" phenomenon. It's the smallest word I can think of that still conveys the essence of the hard part of the hard problem.

Put another way, I'm of the belief (and I think Chalmers and many others share it) that if you can demonstrate to me that C. elegans feels itself wiggling around, and you can show me why it feels anything at all and why it feels this way vs. that way, then you have solved the hard problem. I could take your technical specifications for basic interiority (the solution to the hard problem) and then give you human-level reflective consciousness in a few years, given a handful of engineers and sufficiently powerful hardware.

This isn't necessarily a popular opinion, though. I once read Julian Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind" and was really frustrated that the author spent the entire book explaining how we became "conscious" by listening to one half of our brain talking to the other half. How much time did he spend trying to explain how there was anything doing the "hearing" to begin with? Nearly zero. Maddening.


> That's one reason why I've started using the term "interiority" to describe the bedrock "existence of a feeling of what it is like" phenomenon. It's the smallest word I can think of that still conveys the essence of the hard part of the hard problem.

A more widely used term is "subjectivity".

> How much time did he spend trying to explain how there was anything doing the "hearing" to begin with? Nearly zero. Maddening.

This seems like a common sentiment, but I'm not sure it's justified. You're assuming the existence of a subject just because of an observation, but that assumption needs justification. That belief is just an inference: a chain of thoughts, sourced from a thought and yielding the final thought "I experienced X". The real question is whether that chain of inference is valid or fallacious; in the latter case, subjectivity is an illusion.

I suggest reading some scientific attempts to account for subjectivity [1]. Our approaches to consciousness amount to marvelling at a pencil sitting in a glass of water [2] and trying to figure out how water seems able to break and reconstitute the pencil, when we should really be developing the theory of light refraction. As vitalism did in biology, all this mysticism surrounding consciousness will soon just fade into history.

[1] http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00...

[2] http://etc.usf.edu/clippix/pix/refraction-of-pencil-in-cup-o...


You can cut off any part of the body and still have consciousness, as long as blood, oxygen, etc. reach the brain. That seems a little stronger evidence of causality than mere correlation. Still, it could be, I suppose, that the brain does not hold consciousness but is in fact merely an antenna to send and receive consciousness... probably from a lab at NASA, via interactions with virtual particles, providing propellantless mental acceleration across space and time.


And of course, there's occasionalism, which holds that the mind and the body (Including the brain) match up without the mind actually having a causal influence on the body at all.

I don't think this removes responsibility for actions, though, because even if it is true, whatever we choose to do is still what our bodies do, so we can still choose our actions. Like how two superrational players of the prisoner's dilemma can choose the action of their co-player by choosing their own action.


Keep in mind that causality can be understood in multiple ways in your example:

1) consciousness is merely something that emerges from matter (eg. functionalism)

2) consciousness and matter coexist everywhere, but in some configurations of matter consciousness is 'condensed' and gives rise to our subjective experience (eg. Integrated Information Theory by Tononi and Koch)


  How can the room I am sitting in be simultaneously out
  there and, as it were, inside my head, my experience? We
  still have no answer to that question.
Really? It seems quite obvious that events "out there" can affect what's going on "in here" (i.e. my brain) via sound, light, direct contact, etc.

Consciousness is a difficult problem, yes, but there's no need to make it seem even more difficult than it actually is.


I share your confusion with this part of an otherwise-excellent article. It was such a surprise to read it that I fear I'm missing something. It seems obvious that what we're experiencing isn't the world or any so-called "objects" in it, but a structured representation of the whole and its parts. The hard part is understanding why:

1. Representations experience themselves at all

2. Representations seem to feel different depending on their structure and relationships to other representations

Does anyone have a better interpretation of this paragraph that highlights the supposed mystery about our connection to "the real world"?


If you both want to see the mystery, replace 'the real world' with 'noumenon' and read some Immanuel Kant. He wrote an awful lot about the 'supposed mystery' :)

BTW, your parent seems to confuse "inside" with "the brain", which seems like a category error.

"It seems obvious that what we're experiencing isn't the world or any so-called "objects" in it, but a structured representation of the whole and its parts." What do you mean by "the whole and its parts"? Do you mean that what we experience our whole life is just a play of representations created by our mind? Because that seems to be the case. And when you think about it, it is as mindfucky as possible. It may be obvious to you (is it?), but most people act as if it were very far from obvious, since in almost every action most of us attribute real, independent existence to constructs of our minds that, under scrutiny, are completely arbitrary.

Once you understand that you construct the whole world, you also start to understand how many ways there are to construct it (Sapir-Whorf etc.). Once you understand how fundamentally these possible understandings can differ, you start to appreciate how arbitrary 'your world' is. And then you start to wonder: WTF is the 'real world'? What is 'out there'? Is there anything at all?

Just a loose association, but isn't your question a bit like asking "what's wrong with this picture" ? http://personalpages.to.infn.it/~fiorenti/escher/pgallery.gi...

On the other hand I understand your question perfectly. It's even hard to tell whether there is something strange about it or not.


   BTW. your parent seems to mistake "inside" with "the
   brain" which seems like a category error.
I was just paraphrasing the article itself, which says "inside my head". I think it's pretty clear what this means.


Sure, fair enough.


I'll give it a shot based on my current understanding of non-dual pointers. The perceived mystery is that of two seemingly separate realities existing: the unquestionable objective reality believed to be "out there" vs. the unquestionable reality of the experience "in here". But where exactly is "there" and where exactly is "here"? Could it be that the separation is artificial? Aeons of evolution created the ability of a process in the brains of evolving organisms to create ever more complex concepts, culminating eventually in the concept of an "I" to whom things are believed to happen, thereby slowly changing a global experience of "reality just is" into a belief that "I experience reality". Meditation (which has been mentioned a few times) is not supposed to be some non-scientific escape into a subjective mystical state, but a brutally honest inspection of the source of the separation into objectivity and subjectivity: the source of this "I". Those who manage to see through it claim that ultimately there is nothing other than consciousness. Much as I'd like to, I can't claim to know, but logically, after thinking about it for a long time, it seems to make more and more sense.


Well, if you get down to it, the only thing you know is that you are witnessing your perception; everything else is an assumption.


Consciousness is the way your brain self-corrects. That is my personal theory.

On a few time scales your brain makes plans. It uses beliefs and a model of the world to make those plans. It stores those plans and the actions it has taken, and then checks whether the actions had their intended effects and whether the plans reached their intended goals, updating its model of the world, its beliefs, and its sense of what useful actions are.

This is a constant process of reflecting upon the current situation and upon the recent past of actions and plans. It is like an observer observing itself. This is the root of consciousness, I think.

Probably abstract thinking, language, enough intelligence, and a theory of mind lead to self-awareness.

If this is correct, we can formulate some necessary parts for a system to be conscious:

* recognition of the environment

* a model of the environment

* the ability to reason about actions, their consequences, and how to achieve goals

* memory of past plans, goals, and outcomes

* a process of executing actions, making new plans, and learning from the outcomes
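
A toy sketch of that loop in Python (every name here is my own invention; this illustrates the listed parts, not a claim about how brains implement them):

```python
class ReflectiveAgent:
    """Toy agent with the parts listed above: a model of the
    environment, memory of past plans and outcomes, and a loop
    that compares intended outcomes with actual ones."""

    def __init__(self, actions=("a", "b", "c")):
        self.actions = actions
        self.model = {}    # beliefs: action -> observed outcome
        self.memory = []   # past (action, expected, actual) triples

    def plan(self, goal):
        # reason about actions: pick one the model expects to reach
        # the goal, otherwise try something not yet tried
        for action, outcome in self.model.items():
            if outcome == goal:
                return action
        untried = [a for a in self.actions if a not in self.model]
        return untried[0] if untried else self.actions[0]

    def step(self, goal, world):
        action = self.plan(goal)
        expected = self.model.get(action)
        actual = world(action)              # execute the action
        self.memory.append((action, expected, actual))
        self.model[action] = actual         # learn from the outcome
        return actual

# a trivial "environment": only action "b" reaches the goal
world = lambda a: "goal" if a == "b" else "nothing"
agent = ReflectiveAgent()
for _ in range(3):
    agent.step("goal", world)
print(agent.plan("goal"))  # after learning, the plan is "b"
```

The "observer observing itself" part is the `memory` of (action, expected, actual) triples, which the agent could in principle inspect the same way it inspects the world.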


Self-awareness and consciousness are different things. Sometimes when you wake up you don't know who you are, where you are, or what you are, but there is a conscious experience. Consciousness is much more fundamental than your line of reasoning can explain.

Here's a list of things you are addressing that do not deal with the hard problem of consciousness (the 'easy problems'): https://en.wikipedia.org/wiki/Hard_problem_of_consciousness#...


An observer observing itself: would that not feel like something to the observer?

What you describe is the observer being disconnected from longer-term memory.


OK, let me rephrase my objection.

"This is a constant process reflecting upon the current situation, and upon the recent past of actions and plans. It is like an observer observing itself. This is the root of consciousness, I think."

Your description seems to fall under "the ability of a system to access its own internal states" and/or "the reportability of mental states" from the 'easy problems' section of the wiki pages I linked to.

Both could be possible without consciousness at all. We could imagine (or even program) a robot that modifies its state in response to external stimuli. It could also have second-order algorithms: e.g. it could check what state it's in and, if it remains in the same state for too long, change state randomly. That would be the same as an 'observer observing itself'.
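
A toy version of such a robot is easy to write. This sketch (names purely illustrative) modifies its state from stimuli and also 'observes itself', with nothing resembling experience required anywhere:

```python
import random

class Robot:
    """Changes state in response to stimuli (first order) and
    monitors its own state (second order): if stuck in one state
    too long, it picks a new state at random."""

    STATES = ["idle", "approach", "wander"]

    def __init__(self):
        self.state = "idle"
        self.ticks_in_state = 0

    def sense(self, stimulus):
        # first order: external stimuli modify the state
        new_state = {"light": "approach", "dark": "idle"}.get(stimulus, self.state)
        if new_state != self.state:
            self.state, self.ticks_in_state = new_state, 0
        else:
            self.ticks_in_state += 1
        self._observe_self()

    def _observe_self(self):
        # second order: the "observer observing itself"
        if self.ticks_in_state > 5:
            self.state = random.choice(self.STATES)
            self.ticks_in_state = 0

robot = Robot()
for _ in range(20):
    robot.sense("dark")   # constant stimulus; the self-monitor kicks in
```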

This functional explanation doesn't in any way lead to the phenomenon of consciousness, which would require the robot to have some specific first-hand 'experience' of the world and itself. That's the difference between the easy and the hard problems of consciousness.


Maybe this thread is not yet dead, seeing a reply a few hours ago. I did intend to reply, but life happened, I suppose.

I am curious about your thoughts. Here is why I think it does address the so-called hard problem:

The goals of these plans are evaluated by simulating the world and the body and future mental states. So the brain can measure, or "feel", the response, and thus judge the plans and compare them.

This constant self reflection and self prediction is what feels like something to the thing doing it. Why would it not?

---

But in general I have some objection to the "hard problem".

We cannot prove something to be conscious. Probably we never can. The best we can do is find good proxy indicators of consciousness and good lower bounds for what can potentially be conscious and what cannot.

But the opposite is also true: we cannot prove there is such a thing as the hard problem of consciousness. For all we know, every program we have ever written has had some kind of experience as it was running.

I agree it is reasonable to assume this is not the case. But I don't think you can prove it is not the case. Therefore the hard problem might, or might not, actually be a problem.


To predict, no 'feeling' needs to be involved. You can have prediction without consciousness (any program that simulates some environment to check whether a condition will be met, e.g. collision detection in 3D games) and consciousness without prediction (there are probably people with brain damage who are conscious but have limited imaginative capacities).
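
To make the collision-detection point concrete, here is a minimal 1-D version (toy code, not any real engine's API): a program that 'predicts' a future event with nothing resembling experience involved:

```python
def will_collide(pos_a, vel_a, pos_b, vel_b, steps):
    """Simulate two points moving on a line and report whether they
    come within 1 unit of each other in the next `steps` ticks."""
    for _ in range(steps):
        pos_a += vel_a
        pos_b += vel_b
        if abs(pos_a - pos_b) < 1.0:
            return True
    return False

print(will_collide(0.0, 1.0, 10.0, -1.0, 10))  # True: head-on approach
print(will_collide(0.0, 1.0, 10.0, 1.0, 10))   # False: parallel motion
```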

You can see the color red and not predict anything at all, while still being conscious. Predicting is just a narrow part of our conscious activities, yet all of them are conscious.

So consciousness is something much more basic and fundamental than any higher-order cognitive function.

> We cannot proof something to be conscious. Probably we never can. The best we can do is find good proxies indicators of consciousness and good lower bounds for what can potentially be conscious, and what cannot.

We can't prove it because we have no idea what consciousness is, even on a philosophical level. Moreover, if consciousness is a basic property of reality, parallel to and coexisting with another property, matter, as panpsychists suggest, then it would indeed be impossible to prove, as this dimension would be completely 'invisible' from the matter point of view.

When you talk about 'proving', you most likely assume the existence of some reproducible causal chain in the material, 'intersubjective' space. So you can't 'prove' consciousness in this paradigm by its very definition.

Of course one can insist on throwing away any theory that's not provable in our paradigm, but does that bring us any closer to understanding consciousness? Also, is there anything inherently wrong with neutral monism or naturalistic dualism? These theories are not internally inconsistent; moreover, they are also not inconsistent with anything current physics states. They just provide a wider view that at least somehow addresses the notion of consciousness.


I think you took predicting a bit too far and too singularly. It is not just predicting that is conscious; it is more like simulating, or considering, or even pattern matching.

This is what I think: consciousness is continuously self-observing past actions, past plans, and actual outcomes, while planning new actions and predicting future outcomes. That feels like something to the thing doing such a process, because practically all of those steps involve simulating/predicting/checking what something means to itself. Our brains do this because it is a self-correcting mechanism.

> You can see color of red and don't predict anything at all, while still being consciouss.

I don't think so. When you see that color, you cannot help observing its context, what it means, checking whether something needs to be done, etc. All of that is what I meant by "predicting". Maybe the color is of no significance, but the only way you figured that out was by "predicting".

---

Even if consciousness is 100% material (as I suggest), we cannot prove it exists in others. We don't need consciousness as another dimension for it to be unprovable.

As a matter of fact, if it is another dimension, then our brains are "interacting" with that dimension. So the hypothesis predicts that certain configurations of matter can have some kind of interaction with that dimension. That should not be hard to prove.

But since we have no such proof, and we have more mundane hypotheses, we must conclude that the other-dimension hypothesis is extremely unlikely.

Another line of thinking on the dimension hypothesis is this: we have brains ranging from C. elegans, with about 300 neurons, to ants, with 0.25 million, to mice, with 70 million, to humans, with 86 billion. At which neuron count does consciousness come in? If we simulate the full set of neurons of any of these, will it behave the same? Will it be conscious in the same way?


But what if consciousness is just a label for systems that reach a certain level of complexity, i.e. can solve the same easy problems a human brain can? I mean, there is no qualitative difference between a programmed robot and us, only a quantitative one. So physicalism accepts your objection and goes on programming the robot until it can pass the Turing test. Will you accept that the robot is conscious then? Why can't "first-hand 'experience' of the world and itself" automatically emerge if the system is complex enough and aimed at survival?

Solving the hard problem is not required to create consciousness. Humans can't solve the hard problem, yet they make conscious children, while human children raised by animals become like animals.

Maybe the hard problem of consciousness is like describing a feeling, or the sum of all feelings. We can feel but can't explain. Meanwhile, every feeling of ours has chemical reactions underneath.


> Solving the hard problem is not required to create consciousness. Humans can't solve the hard problem, yet they make conscious children, while human children raised by animals become like animals.

You're serious? :) Humans can make children conscious? They can barely control their instincts; most of the time they are not aware of what they are doing or why. We have no clue how we work, how reality is created.

To say that consciousness "just" appears automatically when some complexity is reached is magical thinking or, at the very least, explaining the problem away. You would have to answer why, how, where the critical point is, what the difference is between a conscious and an unconscious robot, etc.


Riccardo Manzotti has some pretty cool Philosophical Cartoons on his site:

http://www.consciousness.it/RM_Cartoons.php

His "spread mind" site is great too:

http://www.thespreadmind.com/

As a "seeker" I sometimes wonder if he figured out something close to what nonduality teaches, only in more technical language (i.e. consciousness is not located inside any more than it is outside, thereby somewhat blurring the line between the I and the world). I think his rainbow example is a great metaphor.

Bernardo Kastrup has his theories as well, with his "whirlpool" metaphor, but to me it sounds like it makes sense only to him, and I don't get much out of it. Whereas Riccardo's pointers are reminiscent of older teachings in that they really invite the reader to examine direct experience (just my interpretation of his theory).


A similar discussion that might interest some of you: https://news.ycombinator.com/item?id=11956101


According to the article, there are 85 billion neurons in the brain. Bill Gates has a net worth of 85 billion dollars. That's interesting, I think. When I read "85 billion neurons" I thought that's not really that much, considering... and I wondered how much Mark Zuckerberg is worth. Then I Googled who's the richest and saw Bill Gates had 85B.


Is this an ongoing series or something? It stops abruptly at the end...


you can read the thread where I report what I consider to be basic truth about consciousness here:

https://news.ycombinator.com/item?id=12878939

There are no challenges, and even simple numbers can be conscious. We have sequenced DNA today; you can download it right now: http://www.sanger.ac.uk/resources/downloads/human/

After downloading it go ahead and take a checksum.

Within 500 years (a ridiculous overestimation) someone will build a simulation that simulates (emulates) the part of the brain that is described in that genome. Whether at full speed or one-tenth speed, it will be done, and for test purposes someone might do it in a deterministic VM of sorts. They can take that checksum; let's say it's:

3ad235db427e566bbb31097c77882d20f30db779ff8f3896cb35bb43ebf76dd164e084109bf2bcb8a9ba1bc7477c1eeb3dfcf699bc3f8e7f4d96aceeaa080003

(That's an SHA-512 hash).

I'll tell you what I hashed. All I hashed was "I'm a brain".
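
For what it's worth, taking such a checksum is a one-liner with Python's standard library (I haven't verified that the digest above actually corresponds to this string; check it yourself):

```python
import hashlib

# SHA-512 of the text string described in the comment above
digest = hashlib.sha512("I'm a brain".encode("utf-8")).hexdigest()
print(digest)        # 128 hexadecimal characters
print(len(digest))   # 128
```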

Instead of a text string, within 500 years someone will hash a VM that contains as much data as a human brain, is human for practical purposes, and reports the same thing.

For me personally, this is beyond even discussion and I consider the matter closed. However, at the HN link at the top of this comment I report a state of affairs under which I would be wrong. I don't consider it even worth thinking about, though.

I don't consider any question around this area to be "open". We just have to accept what we deduced.

If you don't like it or don't agree with it, you can wait until someone simulates an adult brain reporting self-consciousness, and then you will be forced to say, "oh well, I was wrong."

There is no conceivable scenario under which I would have to say, "oh well, I was wrong." (Above, I outline such a scenario though. It's not going to happen.)


Consciousness is a suitcase word, poorly defined. Instead of consciousness, how about we consider clearer notions such as perception, decision, reward, action, memory, and attention, all of them developed through learning? Such notions can describe the "mind" completely. They are much more concrete and are being investigated in AI (reinforcement learning).

We are just self-replicating, self-preserving processes. The main function is to maintain equilibrium in the face of entropy. We need food, water, shelter, and companionship to survive. In order to get those, we need to learn to operate in the world and reason about it. It's nothing magical: just reinforcement learning and other kinds of learning, a little semi-supervised learning, for example, and a lot of unsupervised learning.
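
The reinforcement-learning framing can be made concrete with the standard tabular Q-learning update; the one-state "survival" problem below is my own toy example:

```python
import random

# One situation, two actions; "drink" keeps the organism alive (reward 1).
# Standard Q-learning update: Q(a) += alpha * (r + gamma * max_a' Q(a') - Q(a))
actions = ["drink", "wait"]
Q = {a: 0.0 for a in actions}
alpha, gamma = 0.5, 0.9

def reward(action):
    return 1.0 if action == "drink" else 0.0

random.seed(0)
for _ in range(200):
    a = random.choice(actions)                               # explore at random
    Q[a] += alpha * (reward(a) + gamma * max(Q.values()) - Q[a])

print(max(Q, key=Q.get))  # the learned policy prefers "drink"
```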

All this grand system exists for the sole purpose of protecting its own existence. It protects its own life and self-replicates, which is another kind of survival. Consciousness is that which protects the body; that is its sole purpose.

If you are not conscious in the morning, you don't drink and eat, and in three days you're dead. That's why you need to be conscious every day. That's the greatest miracle of consciousness: our species would not exist without it. But it's better to work with clear-cut notions like perception, action, and reward.


There's an easy and a hard problem of consciousness. You're addressing the easy one. These are technical terms, you can look them up.


The hard problem is "why do we have qualia?", and it is explained by neural nets. You input sensations and get latent representations that are mapped into a space of possible representations. So all those ineffable perceptions are just values in this representation space.

Thus qualia appear through neural-net processing of sensations and are fed into a recurrent network that judges them moment to moment, updates its internal state, and performs actions. That is why we feel a stream of perceptions, not just separate moments.

Qualia appear to be irreducible because the mapping from perceptions to representations is nonlinear: even if we can duplicate it in neural nets, we still can't assign a semantic value to each neuron and weight in the network. So it appears magical because its complexity exceeds human working memory, but it is not so magical that we can't replicate it. We can recognize objects better than humans on certain datasets, and we have word embeddings built on huge corpora of text that are being used for translation. We can compute the "feel" of a word or image.
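
Computing the "feel" of a word, in this view, just means comparing positions in an embedding space. A toy sketch with hand-made 3-d vectors (real embeddings are learned from corpora and have hundreds of dimensions):

```python
import math

# hand-made toy "embeddings"; real ones come from training on text
emb = {
    "red":     [0.9, 0.1, 0.0],
    "crimson": [0.8, 0.2, 0.1],
    "banana":  [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity: how close two directions are in the space."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# words with a similar "feel" sit close together in the space
print(cosine(emb["red"], emb["crimson"]) > cosine(emb["red"], emb["banana"]))  # True
```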


This explanation doesn't touch the hard problem. Please have a look at the 'easy' problems: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness#... - you're still within the functional mindset of these problems.

Are neural nets conscious of their representations? If not, why not? What's the difference between a neural net that has subjective experience of its intermediate representations and one that doesn't? On a functional level it doesn't need any subjective experience, so why would this explain anything?

Attempts like this definitely give us some intuitions, but they work around the hard problem without addressing it.


I never understand this 'qualia' problem. To me it doesn't look scientific at all, by Occam's razor. Is there something like a Turing test for 'qualia'?



