Consciousness as a State of Matter (arxiv.org)
94 points by evanb on Jan 8, 2014 | 134 comments



I am glad this paper has been written, but already by page 3 there are serious problems with it.

For example: It is an easily observable phenomenon (e.g. by anyone who meditates) that you can be conscious without remembering anything. In other words, consciousness is independent of something like memory.

Yet Max's list on page 3 has stuff like "independence", "utility", "integration" which have nothing to do with observations of what consciousness is actually like. Rather, they are more like high-level ideas of what human beings are like.

But we don't need to explain human beings (complex biological organisms that walk around and do stuff). Science has got that covered already, at least kind of. So if you are going to clearly think about consciousness, you need to factor out what consciousness really is and look at the properties of that.

This is supposed to be a foundational principle of science: that your hypotheses are attempts to explain things that are actually observed. The first step is to observe things carefully! You don't just go making up hypotheses.

So it's a giant red flag any time a scientist writes a paper about consciousness where they conflate it with memory in some way (which is almost every time). It's a red flag because it indicates that the scientist has not actually spent any time observing consciousness, because they aren't noticing things that are obvious to people who have done that.

(You might think that because we are all walking around conscious every day, there would be no need to observe consciousness, but this isn't true. We walk around in a space governed by Newtonian physics, but it took until Newton to figure out this thing called inertia and that a frictional force is required to make things stop, etc, because if you don't look carefully and make careful measurements, most of the everyday world doesn't appear that way at all. Same thing with consciousness.)


> For example: It is an easily observable phenomenon (e.g. by anyone who meditates) that you can be conscious without remembering anything. In other words, consciousness is independent of something like memory.

If you're meditating, how do you know that you're meditating without having, at minimum, a working short-term memory? If meditating actually disabled your memory, wouldn't you immediately forget that you were meditating and stop?


It is not that meditating disables anything; rather, meditation consists in observing consciousness in a way that is orthogonal to these things. When meditating, sometimes (often!) you are remembering things, but sometimes you aren't. And in fact what you come to realize is that this is also true in daily life: there are many, many gaps during the day when you are not remembering or thinking about anything. According to your question, wouldn't you just immediately forget what you are doing and stop in the middle of your day? Well, no, you don't, and you can hypothesize as to why (unconscious processes or whatever), but the point is that unconscious processes can be explained without hypothesizing consciousness (isn't that like what a robot has?).

All I can say is, try meditating sometime and you will see. (I don't recommend mantra meditation, but rather something more like Vipassana or "mindfulness" or any meditation that is not about distracting your mind by keeping it busy).

When you become comfortable with meditation, you become very aware of what your consciousness is doing. You gain a palpable sense for the present moment. Once you have that, it makes a lot of these kinds of questions unnecessary (or at least the questions become very different in nature). If you don't have this taste for the present, then asking/answering questions like this is like trying to explain colors to a blind person. It just doesn't work because most of the questions are about things that don't really have anything to do with consciousness.


I think you have a fundamentally flawed understanding of just how small a timeframe short term memory can apply to. As a commenter said below, your brain is constantly utilizing memory, whether or not you know it.

> the point is that unconscious processes can be explained without hypothesizing consciousness

This really doesn't make any sense, by the way.


Indeed. And as a general rule, I would personally not trust these kinds of intuitions and insights - the overall introspection framework/method is, in my opinion, not a very reliable tool. (People who have been practicing meditation for a number of years would maybe not like to call that 'intuition' (but rather accumulated experience from a careful and long thinking process, etc.), but the point still stands, I think.)

To borrow Metzinger[1] et al.'s terminology, this internal model of what consciousness is may not be very transparent to us, much in the same way that we don't have good intuitions about why or how we have notions of solidity and classical (in one sense or another) mechanics. (It is useful to have an image of a solid tree/obstacle when running through a forest while being chased by a tiger, and all that.) We may have some intuitions, but those intuitions may turn out to be wrong.

[1]: A very interesting book I'm yet to read: http://mitpress.mit.edu/books/being-no-one


Working memory (the active feedback loops that constitute your current thinking) is subtly different from short-term memory. This is still an area of active research.

Working memory can hold a limited number of things (usually said to be 7), whereas short-term memory has a larger limit in capacity and time. These are both temporary systems that feed off each other and rely on the larger long-term memory to give meaning and context to the "symbols" they contain.

Everything is interconnected, so making definitive statements about any of this is hard.

(IANA neuroscientist, but my workplace does neuroscience research.)


The ultimate goal of meditation is to stay in the here-now, to be present in the present. So, technically and logically speaking, you don't need memory in such a state.


Now the question is, do meditators just hallucinate a feeling that gets processed into a memory of having achieved such a state?


By hallucinate, do you mean not knowing what you are doing or where you are in that particular moment? Then yes, one can use that word. The simplest analogy for here-now is when you are watching an exceptional movie (as per your taste) and you get lost in those moments of watching. The same phenomenon is apparent when observing other forms of art (songs, paintings, etc.).

So, are such states hallucination? Every person (human being) knows for sure that they are real.


Do humans "hallucinate" what is in front of them? It's fairly well understood that our visual cortex is the product of evolution. So we will see things as they are meant to be seen, by our visual cortex. This is completely different from what is really there.


You can state intent all day long, but even while meditating your brain is keeping state. Consider breathing, heartbeat, intent, etc.

I believe meditation is pulling focus into the present moment, but I also believe that moment is far from discrete.


Well of course your brain is keeping a certain state, even while in complete meditation. So what is the point? It is not as if reaching a certain meditative state would make you superior to others, or even a super-human. Many have suggested that people who reach such states eventually do return, and for some, such states even dissipate over time if they don't keep up with the practice.

Ultimately, the idea is that the state of 'here-now' (whether arrived at through practice, observation, or heck, even induced by means of a substance - though there are differences) is the state in which You are not your mind anymore. So in that sense, looking at the above suggestion about memory, I would simply ask: what is the need for memory?


So it's a tautology? Not useful.


There's no memory whatsoever in dreamless sleep, and yet you have no problem admitting to yourself that you've slept so many hours last night.


This isn't relevant / applicable / the same because dreamless sleep would not involve consciousness (in most any sense/definition of the word), whereas meditation would. So your example doesn't really say anything, in my opinion. Unless you had something else in mind?


If you have no memories from dreamless sleep, how do you know it's an unconscious state?


That's an interesting point. One way to approach it would be to say that it is an unconscious state because of the very fact that one cannot recall it. But this is surely not necessarily correct (it works only given certain assumptions about the nature of consciousness), so I'm not sure. It still seems to be a somewhat orthogonal question to me, though. :)


I can't speak for meditation, but over the last couple of years I've noticed that as I'm going to sleep and my mind is sort of just running along that every so often I can actually catch myself losing my short-term memory. It's like somebody pushed the reset button on the internal monologue. It takes about 3 or 4 seconds.

(It's terrifying what your brain does in sleep....)


Feel the same. I doubt that it's 3-4 seconds, because I lose the sense of time too, but dunno about you. Hmm, isn't a few seconds of sleep very similar to that too? I would even say that it's much more extreme. I can hang in between reality and dream for minutes in the worst case, until I either wake up unpleasantly or doze off.


I'm awake enough when it's happening that I don't think I'm in "sleep time" yet. It's actually a bit disturbing. I say this acknowledging that it's pretty hard to tell what time it is during all of this, but part of the reason I noticed it is that it seems to be happening much earlier than I'm used to.


I do actually forget that I'm meditating, for short periods. That's sort of my goal, come to think of it. Those are the most rewarding moments of the experience.


> So it's a giant red flag any time a scientist writes a paper about consciousness where they conflate it with memory in some way (which is almost every time). It's a red flag because it indicates that the scientist has not actually spent any time observing consciousness, because they aren't noticing things that are obvious to people who have done that.

Give me a break. Would you mind sharing what makes you more qualified than the author to "observe consciousness" and come to conclusions regarding the requirement of memory? I'd say it's a "red flag" when a game designer on Hacker News throws out unsubstantiated attacks on an extensive, well-written, rational analysis of the physical manifestation of consciousness by a prominent MIT physics professor and cosmologist.

Additionally, I think you are quite clearly wrong. Consciousness does require memory. It is not possible to observe, experience, or process without memory, all of which are required for the integration of consciousness. Even to perceive requires memory, since it requires the integration, filtering, and processing of large amounts of data into a consistent perception and a conscious interpretation thereof - none of which is instantaneous and all of which takes time, requiring memory.


>> Give me a break. […] what makes you more qualified than the author […]

I had the exact same thought. It's understandable that people want to disagree, because it's outside their logical realm, or they find it outside their scope of what is normal and accepted, but that doesn't mean that it's wrong at all.

When someone comes across that negatively toward an author who has put so much effort into a paper, one would expect an equally professional, detailed, and long answer. Imagine a time when everybody has the power to publicly "try to refute" your paper with a tweet - absolute horror.


These off-base criticisms reflect the fact that the commenter did not read the whole paper and likely did not even read the introduction, which ends on page 3, where he found "serious" problems.

For example, the paper nowhere conflates consciousness and memory. Instead, the paper repeats the suggestion that a requirement of consciousness may be the ability to process a substantial amount of information. This idea is related to the idea of memory, but it's not really the same.

The paper is not about describing the biological experience of consciousness at all. This paper asks the question: "Is there some way to understand from the Hamiltonian and density matrix of our universe that it should contain consciousness?"

Tegmark then proposes some criteria that are probably necessary for consciousness to arise, and then presents some metrics for the various criteria and calculates those metrics for various conditions. The paper is really describing a framework for how consciousness can be considered in the context of a physical representation. The calculations should be relatively straightforward to follow for anyone with a decent memory of their linear algebra class.

To put it another way: If you had a Hamiltonian and density matrix that described a universe that you thought contained consciousness, what kind of things would you calculate for that Hamiltonian and density matrix to try and find out if it did or didn't? This paper suggests some ways to think about this question.
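
(To give a purely illustrative flavor of the kind of quantity involved - this is my own toy numpy sketch, not one of Tegmark's actual metrics - here is the von Neumann entropy of a small, made-up density matrix, the sort of basic information-theoretic ingredient such calculations are built from:)

  import numpy as np

  # Toy 2-qubit density matrix: an equal mixture of |00> and the Bell state
  # (|00> + |11>)/sqrt(2). The state is entirely made up for illustration.
  bell = np.zeros(4)
  bell[0] = bell[3] = 1 / np.sqrt(2)
  rho = 0.5 * np.outer(bell, bell)
  rho[0, 0] += 0.5

  def von_neumann_entropy(rho):
      # S(rho) = -Tr(rho log2 rho), computed from the eigenvalues
      evals = np.linalg.eigvalsh(rho)
      evals = evals[evals > 1e-12]  # drop numerical zeros
      return float(-np.sum(evals * np.log2(evals)))

  print(von_neumann_entropy(rho))  # ~0.60 bits for this made-up state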

So I don't understand why this comment spends so long talking about "observing consciousness."


I am glad this paper has been written...

What is there to celebrate about half-baked philosophy dressed as science and published?

Philosophers and theologians have spent the last two thousand years not agreeing on what consciousness is, and there is no clear consensus now. That's not really surprising - all the seemingly "basic" qualities people perceive - say, lightness, darkness, wetness, or coldness - are very "high level" compared to what we know about the real states of matter.

The argument that consciousness is a state of matter because consciousness is something we intuitively perceive is something of a throwback to the "four elements of matter" thinking - the prescientific system where matter is organized by its perceived properties rather than by any serious investigation of the causes of those perceived properties.


I am glad this comment has been written, but already by paragraph 2 there are serious problems with it.

> consciousness is independent of something like memory.

I'm not going to say you are completely wrong here but where is your proof?


Whether you are conscious of your memory is independent of whether you are using memory.


Given the number of different, seemingly unrelated terms in there:

  quantum factorization
  Hilbert & Fourier space
  tensor factorization of matrices
  Hilbert-Schmidt superoperators
  neural-network-based consciousness
  error-correcting codes
  condensed matter criticality
  Quantum Darwinism program
  ...
I have to ask just to be certain: This paper is the real deal, and not auto-generated, right?


Quantum factorization, Hilbert spaces, Fourier transforms, and error-correcting codes: all of these would be found in any quantum mechanics 101 course, definitely far from unrelated.


I'm not sure I agree with this, after skimming the paper. For instance, the author does not seem to be using terms like quantum factorization in the usual way (In the sense of a problem tackled by Shor's algorithm).


I've only glanced at the paper, but it looks to me as if he's using it in another perfectly usual way, namely referring to situations where the wavefunction and/or the Hilbert space it lives in can be written exactly or approximately as a product over simpler things. There's nothing wrong with that.
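
(A toy illustration of that usage - my own sketch, nothing from the paper: for a bipartite pure state, a singular value decomposition of its coefficient matrix tells you whether it factorizes exactly into a product of simpler states.)

  import numpy as np

  def schmidt_rank(psi, dim_a, dim_b, tol=1e-10):
      # Number of nonzero Schmidt coefficients of a bipartite pure state.
      # Rank 1 means the state factorizes exactly as |a> (x) |b>.
      svals = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
      return int(np.sum(svals > tol))

  product = np.kron([1, 0], np.array([1, 1]) / np.sqrt(2))  # |0> (x) |+>
  bell = np.array([1, 0, 0, 1]) / np.sqrt(2)                # entangled Bell state

  print(schmidt_rank(product, 2, 2))  # 1: factorizes into a product
  print(schmidt_rank(bell, 2, 2))     # 2: does not factorize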


The author is one of the biggest names nowadays in cosmology.

http://en.wikipedia.org/wiki/Max_Tegmark


I felt that if the science hipster existed, that's what she would write. Disclaimer: I'm in no position to judge any of this.


Hah! I read that abstract and thought, ok this reads like it was auto-generated. I'm withholding judgement until I've got time later to give it some additional reading.


As soon as I saw the title of this paper, two ideas sprang up:

1) "Sokal Hoax"

2) David Deutsch, Max Tegmark, Some Other trouble-maker

Max Tegmark has done similar things before [1]. He is also affiliated with FQXi, which is funded by an organization that has sometimes promoted explicit religious agendas in a scientific context. However, we should be judging every paper by its merits alone and the rigor of its arguments. The only problem is that work of this nature is very interdisciplinary, so who is qualified?

[1]http://discovermagazine.com/2008/jul/16-is-the-universe-actu...


That crunching, gasping sound you can hear is Luboš Motl choking on his cornflakes.


I have to say, if we're the things rendering this universe, it would be a helluva lot easier to code something similar up than to code up string theory.


Sorry, we just kind of threw that together in Perl to meet the deadline! ;)


Information is not matter; it's spiritual in its essence. Information can be stored in matter (a physical medium), but in itself it's not matter at all. Consciousness lies in the realm of information, or, as it's traditionally called, the spiritual world. There is, however, a metaphysical view that there is no conceptual separation between the spiritual and the physical (or, putting it in other terms, that the physical world is also defined by information). The separation is only one of perception. That can bridge the two approaches.


I am also of this view. We live in a semantic world. Information is fundamental and matter encodes it. There is some interesting work being done in this area, but not enough. A book and a summary paper are available if you search "Quantum Meaning: A Semantic Interpretation of Quantum Theory"


How would you define information?


That's not easy. It's the definitive essence of things. To give an example: you know something, and you write it down. The written words are physical, but the essence they convey (the concept, the information) is not physical at all. When others read them, they don't consume the physical words (though they use them as a medium). They consume the information the words contain.

Here is a good article on this subject: http://philpapers.org/rec/MATBEA


But information is intimately tied to matter, no? Information is capable of having an effect on matter: when I store information on a CD, that information is capable of having an "interesting" effect on a CD player to produce music, or words on a page have an "interesting" effect on the reader. Information is abstract, but I can't call it "spiritual".

What is the difference between a random string of bits and, say, a Huffman-coded string of audio? Both have seemingly random distributions. The informational aspect comes with its ability to cause an "interesting" effect in an information-coupled hunk of matter. It seems wrong to call the random string information when it cannot produce a low-probability, surprising outcome when it interacts with a specific hunk of matter. But the encoded audio stream does exactly that!

Independent of a hunk of matter capable of decoding it, can a string of bits be said to carry information?


> But information is intimately tied to matter, no?

Rather, it's the other way around. Matter is tied to information. Matter is the medium for it, but information can exist in itself. It's commonly expressed with the metaphor of a vessel: matter is a vessel which is filled with essence (which thus binds it), but the filling itself is unbound until it's expressed through the vessel (medium).

> Information is capable of having an effect on matter

Definitely, as I said above, there is an approach which says they aren't conceptually separated (but only perceptually).

> Information is abstract, but I can't call it "spiritual".

How do you define "spiritual"? According to R' Boruch of Kosov for example, spiritual can be understood as abstract, or information-type. It was understood similarly by Tsiolkovsky.


I think we're mostly in agreement as far as matter being the medium for information. But can information exist independent of matter? If not then does it make sense to reify information as if it is its own entity? Is it really helpful to decouple the two concepts?


According to the view which puts information as primary, it not only can exist independent of matter, it defines the matter.


That may be true from a quantum physics perspective, but it's not very useful to us at the scale we live on (analogous to relativity vs quantum mechanics). I'm asking so many questions because I've been trying to come up with a useful working definition of information. It's surprisingly hard.


It surely is hard.

According to mystics it's very useful practically, for our relation and interaction with the world and for spiritual elevation. I.e., it can be taken out of abstract theory and put into very practical human terms.


Wow. That's a lot of words.

How exactly does consciousness "emerge" from matter? Are there some kind of psychophysical bits that aren't currently accounted for in physics? Surely it's not merely epiphenomenal. How do intentions affect the physical world as seems to be happening when our minds move our bodies? What is the solution, here, to the mind-body problem?

I just finished reading Thomas Nagel's "Mind and Cosmos" which highlights some of the salient issues with reductionist explanations of consciousness, cognition, and value. I don't think these things are just a state of matter.

http://www.amazon.ca/Mind-Cosmos-Materialist-Neo-Darwinian-C...


There is no "mind emerging from matter" in the same way a hand does not "emerge" from fingers and a palm. This is because the label we apply is only in our heads. Reality is perfectly ok just being a bunch of quantum interactions; it is humans who have to label things "chair" "hand" "conciousness". Reality is reductionistic.


I don't agree that conscious experience is necessarily reducible. When I put a piece of cheesecake in my mouth, certain electrical signals will likely be firing in my brain, but those signals don't mean what it's like to consciously experience cheesecake in my mouth. That is, the signals themselves aren't indicative of the quality of my experience that only I, myself, can have in my conscious existence. The signals are only a physical correlate to my experience. I don't experience electrical signals in my brain. My experience is different than them.

My point is, there is other stuff out there in the universe (mental stuff) that is different than physical stuff, that seems almost impossible to be explained as physical stuff. I believe in the physical sciences, evolution, and mathematics, but I don't think that they fully encapsulate all that there is, nor can they definitively explain things like consciousness (they at least need a little more added to them).


> My point is, there is other stuff out there in the universe (mental stuff) that is different than physical stuff, that seems almost impossible to be explained as physical stuff. I believe in the physical sciences, evolution, and mathematics, but I don't think that they fully encapsulate all that there is, nor can they definitively explain things like consciousness (they at least need a little more added to them).

See, physics has explained the mechanics of what happens when you put cheesecake in your mouth. You do experience electrical signals in your brain. It is reproducible.

You're just refusing to correlate those phenomena with what you feel, which is okay, because it's not entirely obvious; feeling is an internal feedback process. But consider this thought experiment: if I blindfolded you and stimulated your brain with electrodes in the same way that cheesecake in your mouth does, would you be able to tell the difference?


Let's put it another way. I feel pain when the atoms in my brain are in certain configurations. Why is that?

You simply can't explain that with physics. You can explain the physical symptoms of pain - like crying or sweating. But not the feeling of pain. And the reason for that is that you won't be able to define the feeling of pain. Crying or sweating is definable (or reducible); it's just a complex motion of physical particles. But what's the feeling of pain (not the physical symptoms)?


> I feel pain when the atoms in my brain are in certain configurations. Why is that?

That's tautological.

You feel pain when your brain is in a particular set of states. Your brain is in one of those particular states. You feel pain.

There's no underlying "why", it's the definition of pain itself.

> You simply can't explain that with physics.

You can explain the mechanism with physics. What I think you mean is that you can't describe, subjectively, how you feel with it.


I just think it's not as simple as that. I may very well not be able to distinguish the difference between electrode-stimulation and actual-cake-in-my-mouth. What I'm trying to say is that I don't believe that even an exhaustive list of physical information about my brain includes "what it is like" for me to have the experience qua experience.


I see you work with UX. I guess you have an underlying belief that we're unique snowflakes, based on your training/experience, so the idea that experience can be captured by brain states feels too reductionist and has to be explained by an unknown mechanism... but let me suggest that it seems to be quite the contrary.

Attributing it entirely to matter actually validates the notion that each individual reacts differently to the same stimuli, because each organism has a unique constitution, neuronal activation levels, brain chemistry, etc., which in turn are all influenced by genetics, environment, study, diet. In other words, we're really the result of accumulated experiences, thus unique, and experience things uniquely. The point is that common physics alone seems enough to validate that intuition.


First, argumentum ad hominem (if you don't know what that means, you can read about it here: http://en.wikipedia.org/wiki/Ad_hominem).

Second, my background is actually in Cognitive Science and Artificial Intelligence which includes Comp Sci., Psych, Psycholinguistics, Linguistics, and Philosophy. That is to say, I've put some serious thought into these issues and am not making opinions willy nilly.

Third, personally, I want to be able to explain the universe in terms of neat physical laws and mathematical formulae. But I don't think (at the moment) that what we have (yet) sufficiently explains what's going on (especially in terms of consciousness).

The common "explanation" for what consciousness is (usually put forth by materialist-determinist science) is that it simply emerges from a certain complexity of matter (put enough genes and DNA and neurons together and, bam, consciousness). I just feel like that begs the question. If we're going to explain what consciousness is I think we need to do better than that. That's all.


My intention wasn't to attack you at all, so sorry if I offended. I was just trying to put myself in your shoes based on what is in your profile. I didn't know you had a background in C.S.

In that case, it's even more interesting to me that you think like that. Given your background, you certainly know about neural networks and what the simplest models are capable of. You probably also understand how emergent and apparently random behavior can arise from well-defined frameworks (Rule 30, the distribution of prime numbers). It's intriguing to me that, in light of evidence like that, consciousness should still need to be explained by something other than emergent behavior.

And I don't think "put enough genes and DNA and neurons together and, bam, consciousness" captures the issue. That may produce a machinery like the brain, but doesn't necessarily produces consciousness.

My hunch is that consciousness is the convergence of feedback loops and the perception of boundaries, allowing the distinction between myself and environment, and that it should be conditional on a certain structure. I believe we'll get closer to understanding consciousness by trying to reproduce it.


Cool. First of all, just let me say, that arguing about consciousness today has been very exciting and fun. Thanks for participating :D

My position is not the norm, for sure. I used to think along the lines you're describing (some sort of Churchland connectionism or dynamic system), and was driven to find a way of reducing consciousness to something that could be reproduced in a computer. But the more I learned the more I saw the gap between neurons and experience. I don't know for sure if it couldn't eventually be explained with some future advanced physical/chemical/biological theory, but right now there seems to be a big gap.

If we could look at all the pieces leading up to experience under a microscope, I still don't think there would be a way of seeing someone's experience or subject it to proper scientific scrutiny short of actually being that someone. That is, I don't believe that any set of facts would ever allow me to know what it's like to be someone else.

I think the monism Nagel describes in the book I linked to is an interesting idea of how things like consciousness, cognition, value, and intentionality can be compatible with materialist realism while still being something different without necessarily deriving from divine intervention or subjective idealism.


> If we could look at all the pieces leading up to experience under a microscope, I still don't think there would be a way of seeing someone's experience or subject it to proper scientific scrutiny short of actually being that someone.

See... but if you take a connectionist approach, it should in fact validate your intuition that you can't experience like someone else short of being them.

Making an analogy with neural network models, you can't transfer the weights from a network to another with a different structure and expect it to produce the same states. The experience imprints in the structure, and from that structure emerges the experience. And that's a ridiculously simple model, with ideal neurons and nothing else in the organism modeled... imagine the richness of behavior of the real thing.
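
(A toy sketch of that analogy, with made-up numbers: even for the simplest feed-forward networks, the weights only mean anything relative to the structure they live in, so they can't just be moved to a differently shaped network.)

  import numpy as np

  rng = np.random.default_rng(0)

  # Two tiny one-hidden-layer networks with different structures.
  # Training ("experience") would imprint itself in these weight matrices.
  w1_a, w2_a = rng.normal(size=(4, 3)), rng.normal(size=(1, 4))  # 3 -> 4 -> 1
  w1_b, w2_b = rng.normal(size=(7, 3)), rng.normal(size=(1, 7))  # 3 -> 7 -> 1

  def forward(x, w1, w2):
      return w2 @ np.tanh(w1 @ x)

  x = np.array([0.1, -0.4, 0.7])
  print(forward(x, w1_a, w2_a))  # network A's internal "state" for this input
  print(forward(x, w1_b, w2_b))  # network B's, inevitably different
  # forward(x, w1_a, w2_b) isn't even shape-compatible: the weights are
  # meaningless outside the structure they belong to.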

I don't know... maybe it's our bias to believe matter is messy, filthy and mundane, and that our consciousness, all the richness of our thoughts and emotions, can't be explained by it alone... but I actually find it no less fascinating to think that it is from structure alone that sentient beings capable of living and breathing and feeling may arise, out of the same atoms you find in the dirt.


Asking for a better level of explanation is not at all similar to staking out room for an extra-material homunculus type consciousness. "Put enough protons, neutrons and electrons together, and bam, bread" is a poor explanation of baking, but not so poor as to warrant skepticism on which fundamental particles we are eating.

Nagel has an almost myopic view of the 'experience' in my estimation. His classic claim that a human could never experience bat flight starts by pondering about the possible structure of bat consciousness, and is in the end justified by the physiological differences. Yet somehow he mysticizes the impossibility of experience transfer. As for your grandpost about experience, perhaps you are conflating the experience with the description (thought experiment: have you ever been able to isolate the feeling of an electrical signal in your brain?).

I have a similar background to yours, and for a while I was kind of stuck in a Nagel-Dennett-Russell sort of place that felt like it was probably correct but lacked any sort of richness that living through consciousness provides. My recommendation is to dive into the rabbit hole of continental philosophy. Deleuze has a great radical materialism (inspired by Spinoza?!), and wonderfully blurs psychology and philosophy in A Thousand Plateaus. Heidegger has an exploration of the experience of language being the bootstrapping tool of consciousness in Being and Time. And some psilocybin never hurt.


Haha, yeah I've dipped my toe into continental-heidegger-psilocybin-phenomenology and it's wonderful Being. I'm still trying to answer WTF (in general). I haven't read Deleuze. Maybe I'll check him out next!

I'm feeling like the more I look for definitive, objective answers, the more I'm pushed towards things like art and aesthetics; human expression, shared being, and culture. Those things seem more real to me than quantum mechanics or string theory. I'm not sure what to make of it all, or that anything can be made of it at all, but something sure is happening, and I feel, simply, that I want to be a part of it and play with whomever will join me :)


It's all atoms bumping into each other, shaped by Darwinian evolution working over a period of 3.6 billion years into mechanisms so complex that describing and simulating their complete interaction will be the work of generations. Along the way new physics theories may arise, but that won't change anything about the fact that it's all quantum mechanics. Just as you can have thermodynamics without knowing the lowest level of interactions, or in fact just as Darwin was right about evolution without knowing about DNA.


Rather, models of reality are reductionistic when they say a chair (or a human) can be explained as "just" a bunch of atoms. It's worth reflecting once in a while on whether this is true, or even meaningful. For example, what is the standard of explanation? The idea of a perfect mathematical model of low-level physics is just one abstract conception of understanding reality.


Penrose addressed this (poorly, IMHO), but it gives a good idea as to his approach: http://en.wikipedia.org/wiki/The_Emperor's_New_Mind


I'm a physical scientist, work in biology and computer science, but cannot understand this paper for the life of me. Can somebody sum it up? It seems to start from a premise that makes no sense ("computronium" as a form of physical matter distinct from other forms of matter) which strongly contradicts the current mainstream understanding of vitalism and dualism.


I find it really strange that it is necessary to talk about quantum physics or the nature of time to explore the concept of consciousness. I thought it was pretty clear by now that consciousness is a macroscopic, emergent feature, seeing what very, very simple neural network models are able to do.


> consciousness is a macroscopic, emergent feature

So far everybody agrees, but if you try to define emergent you get into trouble. Is consciousness weakly or strongly emergent? Is there something special about qualia? So far, to the best of my knowledge, nobody has given a good definition, let alone a really good argument for either side.


Renaming metaphysical dualism doesn't change the fact that it's metaphysical dualism. It's just a new label on the same old can of worms. How do we tell conscious matter from non-conscious matter? By finding or failing to find the non-material property of consciousness. It's perhaps a bit more scientific than relying on souls, but only by virtue of not positing life after death or an immortal being or some other manifold of meta-physics.

Being a materialist is all well and good, so long as one takes it seriously, and there are only two serious materialist positions: Deny the idea that mental constructs have any reality [and ignore the paradox required for denial to have any meaning] or posit mind as an inherent property of all matter and live with the consequences of universal animism. Any other form of materialism is a weak waffling half measure.


Within my very limited grasp of the paper, it seems to me it is asserting precisely the opposite of your second claim. Tegmark seems to me to be proposing a limited set of properties that an assemblage of matter must have in order to exhibit, or perhaps to implement, consciousness. So he is denying a "universal animism" (because the properties are not universally present), and attributing consciousness to operations of matter, so not presuming "mental constructs" apart from matter either.


Paul Adams (MacArthur fellow, 1986) teaches a class on self-organization of the brain, and he begins with similar physics analogies involving phase changes. You can check out his stuff here: http://www.syndar.org/


thank you !!


Tegmark also featured yesterday on HuffPo live: http://live.huffingtonpost.com/r/segment/math-our-universe/5...


"How We Became Posthuman" by N. Katherine Hayles - really interesting book that traces the trajectory of how (among other trhings) information came to be considered (im)material essence of consciousness, a "thing" that exists independent of our embodied experience - she succinctly (and briefly) covers ground from turing to norbert weiner to the macy conferences , systems theory, Hans Moravec and so on. The book was published in 1999 I think and makes frequent reference to Francisco Varela's book "The Embodied Mind: Cognitive Science and Human Experience", which I think touches on a lot of the subjects discussed in this thread (ie, creating a dialogue between cognitive science and Buddhist meditative psychology) - I think both these books are very well written and may provide valuable context to those who are interested:

http://www.press.uchicago.edu/ucp/books/book/chicago/H/bo376...

http://mitpress.mit.edu/books/embodied-mind


OK. Can anyone explain how consciousness could be a state of matter, in plain English? Because quantum mechanics voodoo?


Because "quantum". It's the new "god did it". See: http://www.youtube.com/watch?v=kmdJtSwH9O4


Because we are ultimately matter? Some electrical signals in your brain are the current you. That's a state of matter which is collectively called you.


Well, computers are ultimately matter, but nobody calls, say, a microkernel architecture a "state of matter".

Granted my knowledge of physics is nil compared to that paper's writer, but as far as I know, a "state of matter" doesn't mean "Duh, it's made of matter." It has a (more or less) well-defined meaning. Liquid or plasma is a state of matter. "Having a pleasant smell", "moving on its own", or "with a rounded edge" isn't.

So, you can't just say it's made of matter. You need to justify that "being conscious" is more like the former than the latter.


That's why I used the word collective. Computers are not conscious. But let's go a step further. What is being conscious? Sensory input, electrical signals working on memories, and imagination pushing into the unknown. I don't think there's any magic in being conscious. Having said that, I do accept that my knowledge and understanding of this subject is limited. But I am not going to fall into the 'if it can't be explained it must be divine' trap.


I think you misunderstood. I'm not objecting to the idea that mind is ultimately made of matter. I'm objecting to labeling it as a "state of matter", which has an entirely different meaning. (It does NOT mean "here's a bunch of matter that interacts in an interesting way.") The paper's abstract even explicitly states:

> We explore five basic principles that may distinguish conscious matter from other physical systems such as solids, liquids and gases: ...

So it seems clear that the author argues "being conscious" is parallel to "being liquid", but this better have a really solid (no pun intended) justification.


I agree with you. And I don't think "being conscious" is like "being liquid". Liquidity is a description of physical attributes. Consciousness is an existential state of being (whatever that means).

I think it would be nice if we could explain everything we observe in the universe in terms of physical laws and mathematical formulae. But I don't think we're there yet.

And why not a possible divine explanation? Last I checked, we haven't completely ruled it out. It's not a trap. At the very least, materialists should be able to counter divine arguments, not just ignore them.


I rule out the divine because that line of thought stops the need for further exploration. And maybe thought is metaphysical and has physical and meta connotations which I/we can't even imagine. But I would rather stay ignorant, knowing that I don't know, than accept an unexplainable explanation.


Undeniably, there is a brain involved. But how do some electrical signals in my brain turn into an experience I have? Huge gap...


A direct analogy: no one would argue against the current through a FET transistor being more or less a constant times the square of (1 minus the ratio of the gate voltage to the device pinch-off voltage) (plus or minus typos, etc.). There is of course a huge gap between that basic EE lab transistor experiment and what the HN webserver does when I click "reply". It is a simpler solution to fill that gap with souls separate from the hardware and spirits of port 80 and woo woo like that than to actually figure it out.
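
(That relation in code form, with made-up example values, just to show how compact the device-level description is compared to everything built on top of it:)

  # JFET saturation-region drain current: I_D = I_DSS * (1 - V_GS / V_P)^2
  # Example values below are arbitrary, for illustration only.
  def drain_current(i_dss, v_gs, v_p):
      return i_dss * (1 - v_gs / v_p) ** 2

  print(drain_current(i_dss=8e-3, v_gs=-1.0, v_p=-4.0))  # 4.5e-3 A, i.e. 4.5 mA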


Agreed. I have yet to see a non-speculative argument for how mind and matter are connected. It's always explained as "Well, you know, there's just all these electrical signals going on in the brain. And, like, when the brain is damaged, or we prod it with electricity, weird things happen to the person's consciousness" (I'm oversimplifying). I understand that the brain and the mind are essentially connected, but that doesn't explain what consciousness is, where it comes from, or how it got there in the first place.


It gets queasy because it's brains talking about brains, but analogies not involving the thinker thinking about itself never seem to cause any queasiness when an aggregate property establishes itself.

So where is the strength of an iron I-beam? You can't cut it up and squeeze "strength juice" out of it. There's no little piece you can look at that says, "hey, I'm Euler's column law!". No individual atom decided, hey, I'm an I-beam now. But if you pile up an enormous number of iron (and other) atoms in the shape of an I-beam, it inevitably behaves like a beam (or column) and suddenly "knows" it's supposed to have angular momentum and collapse at a certain compression force and such. There's no particular magic to it; it just naturally happens to enormous numbers of iron atoms in a small place.
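
(A rough sketch of that "knowledge", with illustrative numbers only: Euler's column formula is just arithmetic over bulk properties, not something any individual atom carries around.)

  import math

  def euler_critical_load(E, I, L, K=1.0):
      # Euler's column formula: P_cr = pi^2 * E * I / (K * L)^2
      return math.pi ** 2 * E * I / (K * L) ** 2

  # Made-up numbers for a small pinned steel column:
  E = 200e9   # Young's modulus of steel, Pa
  I = 8.0e-6  # second moment of area of the cross-section, m^4
  L = 3.0     # length, m
  print(euler_critical_load(E, I, L))  # ~1.75e6 N before it buckles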

About the same with stellar formation. There's no "essence of star" and no individual hydrogen atom decided to create a star. Just pile up a certain ridiculous number of hydrogen atoms and good luck not having it turn into a star. And not enough atoms, well no way it'll ignite, at least not naturally. It can kinda smolder at most.

There are other analogies with grains of sand and landslides, water crystals and avalanches, etc.

Any time you pile up a certain number of neurons at a certain complexity you get consciousness. We've run a couple billion experiments for quite a few centuries and every time you do it, aside from physical brain damage, it works.

Now stuff like Jaynes "Origin of the bicameral mind" is a little controversial, but, perhaps, a great big pile of neurons can boot up in a peculiar mode, sort of.

The specific hack of where it came from is that only our species was able to add substantial amounts of fish to the diet, and fish oils and proteins seem to allow seriously spectacular brain development in a positive feedback cycle until you get consciousness; then you overfish your ocean till all the fish die, then... we're about to find out what comes after that.


Oh yeah, I'm very interested in historical circumstances and matter combinations that have seemingly given rise to consciousness.

But saying "[when] you pile up a certain number of neurons at a certain complexity you get consciousness" is begging the question. What is consciousness. What is it in the neurons that makes it? What is it about the universe we're in that allows for the phenomenon to emerge from it? This has always been how science has explained it to date: it just emerges from the complexity. But that doesn't explain anything >:|


You may want to look into physics and how a bulk property like "moment of inertia" kinda comes into being. It's a property of solid matter that kind of appears out of nothing.

Here's another fun topic to google. There's an island with a population of predators and their prey. Now graph those populations and you get some interesting oscillators. Where do those waveforms come from? Well, nowhere. It's a system thing; there are no waveform particles or whatever.
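
(A minimal sketch of that predator-prey system - the standard Lotka-Volterra equations with arbitrary made-up parameters. The oscillation is nowhere in the per-step rules; it emerges from the coupled system.)

  # Lotka-Volterra predator-prey model, crudely forward-Euler integrated.
  a, b, c, d = 1.0, 0.1, 1.5, 0.075   # arbitrary illustrative parameters
  prey, pred = 10.0, 5.0
  dt, steps = 0.01, 5000

  history = []
  for _ in range(steps):
      dprey = (a * prey - b * prey * pred) * dt
      dpred = (d * prey * pred - c * pred) * dt
      prey, pred = prey + dprey, pred + dpred
      history.append((prey, pred))

  # The populations rise and fall out of phase: a waveform with no
  # "waveform particle" anywhere in the individual update rules.
  print(max(p for p, _ in history), min(p for p, _ in history))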

Edited to add: maybe another way to say it is that it's not the emergence or "where" or "source" that doesn't exist, but the property itself that doesn't exist. There is no consciousness. There are just certain patterns that are really common among big piles of neurons. Like the big pile of neurons occasionally saying stuff like "I think therefore I am".

There is no consciousness to measure, or you could give it a number. I've got 100 consciousness score, how bout you? This is very much like intelligence. I know it when I see it, but you really kick over an anthill when you claim you can give out an "intelligence number" like an IQ. Coincidentally both seem to be self organizational, again all you need is a big pile of neurons and some time and not only does consciousness pop out, but so does intelligence.


The last desperate dying gasp of mind-body dualism.


Lost edit perms on this old post. I withdraw any implication he's writing about dualism. He has peculiar definitions of consciousness and "state of matter".

Dualism is still toast. However I no longer think he's proposing dualism.

It seems to be a multi-disciplinary collection of boundary conditions.


First of all, that would be property dualism, not substance dualism; property dualism is less controversial (in the sense that Descartes went with full-blown substance dualism, and screwed things up.) And second of all, I don't think this is dualism. It's maybe some or other kind of functionalism[1], or something else, or whatever; but not dualism. And it's not Penrose-style "something something consciousness and quantum mechanics" voodoo.

[1]: http://plato.stanford.edu/entries/functionalism/


Are you made of matter? Are you conscious?

If so then the matter you are made of is conscious; it is matter that is in a state of consciousness.


That's an odd statement. Is there precedent for attributing an emergent phenomenon to the pieces? Because that is what seems to be happening here. To my eyes, if a system is conscious, that does not mean the pieces that make up the system are conscious. Consciousness is an emergent property of the system.

Snowflakes are pretty. The water molecules in the snowflake do not inherit this; they are not themselves pretty as a result of being in the pretty snowflake. (Although you could consider them beautiful for other reasons. From a life-sciences and/or chemistry point of view, water is pretty beautiful.)

But then again, I'm an engineer with a passing interest in chemistry, not a quantum physicist, so of course I see everything as systems. To me nothing they talk about makes sense anyway.


States of matter are emergent. A single atom cannot be said to be a gas, liquid or solid, only a sufficiently large group of them.


Still, a snowflake is not a state of matter. You cannot say "Oh, if we increase the pressure it will enter phase X and thus the snowflake state of matter". It is built upon a state of matter (crystalline solid), no?


A snowflake is second-tier emergence (this is a made-up term). Hydrogen and oxygen atoms do not have states of matter. Water molecules, as collections of H and O atoms in specific arrangements, have states of matter. Snowflakes are collections of water molecules in specific arrangements and in specific states of matter. A snowman. A family of snowmen. The general practice of creating snowmen. And so on and so on.


Right, so the ice is not a snowflake, but the snowflake is made of ice. Similarly, organic chemicals are not a consciousness, but a consciousness emerges (we think) in a lump of organic chemicals. Do you disagree?

P.S. A snowman isn't exactly an emergent property. Now, an avalanche system might be...


A single unconnected atom would be a gas, right?


No, a single, unconnected atom would be just that. The ideas of solid, liquid, gas, plasma, Bose-Einstein condensate, etc. are defined by the interactions of atoms in varying energy states and degrees of freedom.


I agree that describing the actual matter itself as conscious, rather than the object it forms, is a stretch too far. When I die the matter I'm made of will still be exactly the same stuff, but the object it forms will no longer be a human being.


So a cup of coffee on my desk is conscious? I mean, come on! It is the complex neural activity in the brain that brings about consciousness. We just don't understand why that neural activity translates into consciousness.


gottfiend leibniz developed a theory called 'monadology' which suggested that consciousness is the fundamental basis of everything, including matter.

if consciousness is being "aware" of yourself, then at a very, very, very low level, the coffee cup is "aware" of itself because it is a perfect analog computer simulating "a cup of coffee on a desk" - which is what it is.


gottfiend leibniz...

That's an interesting misspelling of "Gottfried". A universal genius, indeed.


http://en.wikipedia.org/wiki/Correlation_does_not_imply_caus...

EDIT: Why the downvote? Did I not write enough? My point is, simply, that observing consciousness in correlation to matter does not imply that consciousness is caused by matter. The provided link is to the Wikipedia article that explains this basic statistical principle.


The downvote is probably for the straw man. No one is saying that consciousness is caused by matter.


I guess I didn't see it as a straw man.

"Are you made of matter? Are you concious? If so then the matter you are made of is concious"... implying that if you are both made of matter and conscious then it is the matter that you are made of that makes you conscious. In other words, consciousness is caused by matter.

I don't believe that we have sufficient empirical evidence to determine a causal relationship, only a correlation.

EDIT: I realize I mistakenly jumped to the conclusion that the parent was implying matter caused consciousness when all that was said was that "conscious matter is conscious". I just wanted to further explain what I was thinking. Straw man or not, whether my argument was sound or not, I thought I had a valid point to make to contribute to the conversation and I don't feel that it should have been downvoted.


So if we come to discover that time is fundamental instead of emergent, it could disprove his hypothesis of how consciousness is emergent.

That's at least one way in which he makes his hypothesis testable.

Just, please, no one forward this article to any Republican legislators and draw their attention to the NSF grants that supported the research.


Read the abstract, was about to call it BS, then I realized this was written by Max Tegmark!

So definitely worth reading carefully.


His whole "consciousness as a state of matter" idea might be taken a bit more seriously if he could provide a conscious object...

who doesn't want a sentient sword?


Easy, but it has to be an object that is processing information in complex ways. From the first page:

"I have long contended that consciousness is the way information feels when being processed in certain complex ways "


This guy really hates Penrose:

"...Penrose and others have spec- ulated that gravity is crucial for a proper understanding of quantum mechanics even on small scales relevant to brains and laboratory experiments, and that it causes non-unitary wavefunction collapse [35]. Yet the Occam’s razor approach is clearly the commonly held view that neither relativistic, gravitational nor non-unitary effects are central to understanding consciousness or how con- scious observers perceive their immediate surroundings: astronauts appear to still perceive themselves in a semi- classical 3D space even when they are effectively in a zero- gravity environment..." Ugh. That is really lame.

People still cite his 1999 article [1] on how the brain can't be quantum, as if that was the end of the discussion. Meh.

Anyway, now he seems to be saying the opposite of this, while cleverly avoiding contradicting (or even citing) this earlier work.

[1] http://arxiv.org/abs/quant-ph/9907009


Sigh... they're completely clueless. I'm starting to think that I'm the only one who understands consciousness, and it's not that complicated really. Hint: the first and most important step is to define "consciousness", to specify what exactly that word means. Investigating a very vague word with many possible meanings is pointless.


Reason for the downvote?


I think Tegmark is a straight-up genius. Interested in reading this article, but has he crossed the line from genius into madness?


There's physics and there's metaphysics. That's been known since the times of Aristotle and Plato. Attempts by physicists to solve the "hard problem of consciousness" are doomed to fail. Different tools are necessary for each "magisterium". Philosophers and theologians are better equipped to address that problem and write more coherent and logically consistent papers. A transcendental problem should have a transcendental / fundamental answer, and there's nothing fundamental about a bunch of equations. I lack the ability and the time to express the last point more eloquently.


Anyone with ideas on how I can start my "perceptronium mining" business?


Real, fake, don't care. Reading that abstract was pure joy.


arXiv's "General Physics" and "General Math" categories are great for finding these kinds of papers.


One might find another one of Tegmark's papers of note: http://arxiv.org/abs/quant-ph/9907009. I've barely skimmed it, but being vaguely familiar with how decoherence a la Zurek is supposed to work (had a homework assignment on it last year, regrettably), the timescales he notes seem about right.


"perceptronium"?


See also: 'utilitronium'. http://wiki.lesswrong.com/wiki/Utilitronium

> Utilitronium is relatively homogeneous matter optimized for maximum utility (like computronium is optimized for maximum computing power). For a paperclip maximiz[ing artificial intelligence], utilitronium is paperclips. For more complex values, no homogeneous organization of matter will have optimal utility.

These sorts of terms are popular in the utilitarianism/rationality/friendly AI communities.


These sorts of terms are 'dangerous' if you like, because it's very easy to invent new words or phrases without giving them any actual _meaning_... but inventing the word is often enough for people to latch on to. The best argument is from Wittgenstein:

What if the diviner tells us that when he holds the rod he feels that the water is five feet under the ground? or that he feels that a mixture of copper and gold is five feet under the ground? Suppose that to our doubts he answered: "You can estimate a length when you see it. Why shouldn't I have a different way of estimating it?" If we understand the idea of such an estimation, we shall get clear about the nature of our doubts about the statements of the diviner, and of the man who said he felt the visual image behind the bridge of his nose. There is the statement: "this pencil is five inches long", and the statement, "I feel that this pencil is five inches long", and we must get clear about the relation of the grammar of the first statement to the grammar of the second. To the statement "I feel in my hand that the water is three feet under the ground" we should like to answer: "I don't know what this means." But diviner would say: "Surely you know what it means. You know what 'three feet under the ground' means, and you know what 'I feel' means!" But I should answer him: I know what a word means in certain contexts. Thus I understand the phrase "three feet under the ground", say in the connections "The measurement has shown that the water runs three feet under the ground", "If we dig three feet deep we are going to strike water", "The depth of the water is three feet by the eye". But the use of the expression "a feeling in my hands of water being three feet under the ground" has yet to be explained to me. http://www.geocities.jp/mickindex/wittgenstein/witt_blue_en....


I understand what the concept of computronium means. It makes a certain kind of sense. The problem I have is that both a slide rule and an iPhone are computronium, yet there is little about their organisation or physical function that is similar. I'm not sure what saying that they are 'computronium' does for us.

Furthermore computronium would presumably need a power source, facilities for maintenance or repair, heat dissipation, protection from external disruption, etc. I don't think it's a given that an optimal design would be in any way homogenous. Ultimately I think it's very naive and simplistic way of thinking about things, but if it's just intended as a shorthand for the purposes of thought experiments that's fine. I'm just concerned that some people might take the idea too literally.


If computronium is a substance 'optimized for maximum computing power', then neither an iPhone or a slide rule are computronium.

Computronium by that definition would be more like an entire planet terraformed to be a massive supercomputer - something with maximum possible computation density.


Sounds more philosophical than quantum physics, with lots and lots of big words.

Maybe at some point quantum physics becomes philosophy.

Also, apart from explaining quantum states, Schrödinger's cat has always felt more like a philosophical thought experiment than a mathematical or quantum one.


Huxley was a mystic. He knew his stuff :-)

Headlessness is a synonym for mindlessness.

The only way out is to practice mindfulness.


Downvoted, haha, I guess most people didn't know about Huxley's psychedelic experiments.


When does a perceptual schematic become the bitter mote of a soul...


What they're trying to do is address Copenhagen indeterminacy by assuming that the collapse of the wavefunction involves the ontologically similar definition of mental states. How does a mental state cause a collapse of the wavefunction at all? — This is the Quantum Measurement problem. [EDIT:] A better question, then, is when does a mental state cause a collapse, assuming that it can, or should be able to ontologically speaking. Does a materialist framework answer this question?

Statistically speaking, collapse of the wavefunction and quantum decoherence have the same signature, such that we never know when observation (mental states) causes collapse. To investigate consciousness as matter is to assert the ontological underpinnings of mental states as part of the internal causal nexus of the quantum system.
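
(A toy numpy sketch of what "same signature" means here - my own example, nothing from the notes linked below: once decoherence has removed the off-diagonal terms of a density matrix, it is literally identical to the "collapsed" classical mixture, so no measurement statistics can tell the two apart.)

  import numpy as np

  plus = np.array([1, 1]) / np.sqrt(2)        # superposition (|0> + |1>)/sqrt(2)
  rho_pure = np.outer(plus, plus)             # coherent superposition

  rho_decohered = np.diag(np.diag(rho_pure))  # environment kills off-diagonals
  rho_collapsed = np.diag([0.5, 0.5])         # classical 50/50 mixture

  print(np.allclose(rho_decohered, rho_collapsed))  # True: indistinguishable
  print(np.diag(rho_decohered))                     # [0.5 0.5] outcome probabilities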

I'm still collecting my thoughts on this subject here: https://gomockingbird.com/mockingbird/#xl6a68x/NLd6c9

The gist is that observation (including our tools used for observation, along with indexical subjective states) itself has to be treated as a quantum system, not as a classical system[0]. And matter can be described by quantum mechanics.

[0]: http://quantum-ethics.org/


A mental state doesn't cause the collapse of the wavefunction. (For example, in the Schroedinger's Cat setup, not only is the cat a perfectly good observer, but so is the detector.) Rather, the collapse of the wavefunction may cause a change in someone's mental state - or it may cause a change in nobody's mental state, if it is unobserved by any human.

A quantum mechanical "observer" does not need to be a human, or even an animal.


While we can have mechanical detectors in experimental setups that are part of these experiments, the fact is that there is no way to conclusively say that the presence of consciousness does not play a role in the collapse of the wave function. All experiments we do are done by conscious humans; there is always a conscious observer who is part of the experiment. That is the conundrum for which, after 100+ years, there still is no definitive answer brought forward by science.

You should also realize that you are positing something equally crazy by implying causation the other way around, meaning that the collapse of the wave function somehow can cause changes to the mental state of a conscious person at breathtakingly large distances.


I was under the impression that the delayed-choice quantum eraser experiments demonstrated that mechanical detectors do not cause wavefunction collapse; the output of the detector still had to be observed by a human before that occurred.


I think the question of whether or not mental states cause the collapse of the wavefunction is open to interpretation[0]. All observers constitute an ontologically distinct mental reality[1], which by postulation is describable in terms of matter, and so a fortiori describable by quantum mechanics. Generally speaking, this means that the notion of "subjectivity" at the fundamental level of this supposed (quantum) ontology is statistically insignificant.

The idea here is indeed that for formal consistency, all entities, mental or otherwise, must be described in quantum mechanical terms such that the description of mental states structurally composes Hilbert space as Hilbertian subspaces.

Essentially, as per my diagram mentioned earlier, mental states occupy a position in the table such that if all mental states are accounted for in the description of Hilbert space, a pre-theoretical "rhythm" of statistically described decoherence is established.

Nowhere do I presuppose that mental states are necessarily human, but only that the structure of consciousness is amenable to the structure of material reality. Biological reality itself is an implementation of, or supervenes on, a fundamental material reality. There is no ontological commitment here to hominids as such, i.e., to the claim that hominids alone experience consciousness.

[0]: https://en.wikipedia.org/wiki/Consciousness_causes_collapse#...

[1]: "The order and connection of ideas is the same as the order and connection of things." E2P7. Ethica. Baruch Spinoza.


Bong physics!



