For example: It is an easily observable phenomenon (e.g. by anyone who meditates) that you can be conscious without remembering anything. In other words, consciousness is independent of something like memory.
Yet Max's list on page 3 has stuff like "independence", "utility", "integration" which have nothing to do with observations of what consciousness is actually like. Rather, they are more like high-level ideas of what human beings are like.
But we don't need to explain human beings (complex biological organisms that walk around and do stuff). Science has got that covered already, at least kind of. So if you are going to clearly think about consciousness, you need to factor out what consciousness really is and look at the properties of that.
This is supposed to be a foundational principle of science: that your hypotheses are attempts to explain things that are actually observed. The first step is to observe things carefully! You don't just go making up hypotheses.
So it's a giant red flag any time a scientist writes a paper about consciousness where they conflate it with memory in some way (which is almost every time). It's a red flag because it indicates that the scientist has not actually spent any time observing consciousness, because they aren't noticing things that are obvious to people who have done that.
(You might think that because we are all walking around conscious every day, there would be no need to observe consciousness, but this isn't true. We walk around in a space governed by Newtonian physics, but it took until Newton to figure out this thing called inertia and that a frictional force is required to make things stop, etc, because if you don't look carefully and make careful measurements, most of the everyday world doesn't appear that way at all. Same thing with consciousness.)
If you're meditating, how do you know that you're meditating without having, at minimum, a working short-term memory? If meditating actually disabled your memory, wouldn't you immediately forget that you were meditating and stop?
All I can say is, try meditating sometime and you will see. (I don't recommend mantra meditation, but rather something more like Vipassana or "mindfulness" or any meditation that is not about distracting your mind by keeping it busy).
When you become comfortable with meditation, you become very aware of what your consciousness is doing. You gain a palpable sense for the present moment. Once you have that, it makes a lot of these kinds of questions unnecessary (or at least the questions become very different in nature). If you don't have this taste for the present, then asking/answering questions like this is like trying to explain colors to a blind person. It just doesn't work because most of the questions are about things that don't really have anything to do with consciousness.
> the point is that unconscious processes can be explained without hypothesizing consciousness
This really doesn't make any sense, by the way.
To borrow Metzinger et al.'s terminology, this internal model of what consciousness is may not be very transparent to us, much in the same way that we don't have good intuitions about why or how we have notions of solidity and classical (in one sense or another) mechanics. (It is useful to have an image of a solid tree/obstacle when running through a forest while being chased by a tiger, and all that.) We may have some intuitions, but those intuitions may turn out to be wrong.
A very interesting book I've yet to read: http://mitpress.mit.edu/books/being-no-one
Working memory can hold a limited number of things (usually said to be 7), whereas short-term memory has a larger limit in capacity and time. These are both temporary systems that feed off each other and rely on the larger long-term memory to give meaning and context to the "symbols" they contain.
Everything is interconnected, so making definitive statements about any of this is hard.
(IANA neuroscientist, but my workplace does neuroscience research)
So, are such states hallucinations? Every person (human being) knows for sure that they are real.
I believe meditation is pulling focus into the present moment, but I also believe that moment is far from discrete.
Ultimately, the idea is that the state of 'here-now' (whether arrived at through practice, observation, or heck, even induced through substances - though there are differences) is the state in which You are not your mind anymore. So in that sense, looking at the above suggestion about memory, I would simply ask: what need is there for memory?
(It's terrifying what your brain does in sleep....)
Give me a break. Would you mind sharing what makes you more qualified than the author to "observe consciousness" and come to conclusions regarding the requirement of memory? I'd say it's a "red flag" when a game designer on hacker news throws out unsubstantiated attacks on an extensive well-written rational analysis about the physical manifestation of consciousness by a prominent MIT physics professor and cosmologist.
Additionally, I think you are quite clearly wrong. Consciousness does require memory. It is not possible to observe, experience, or process without memory, all of which are required for the integration of consciousness. Even to perceive requires memory, since it requires the integration, filtering, and processing of large amounts of data into a consistent perception and a conscious interpretation thereof - none of which is instantaneous and all of which takes time, requiring memory.
I had the exact same thought. It's understandable that people want to disagree, because it's outside their logical realm, or they find it outside their scope of the normal and accepted, but that doesn't mean that it's wrong at all.
When someone comes across that negatively toward an author who has put so much effort into the paper, one would find it fair to expect an equally professional, detailed, and long answer. Imagine a time when everybody has the power to publicly "try to refute" your paper with a tweet - absolute horror.
For example, the paper nowhere conflates consciousness and memory. Instead, the paper repeats the suggestion that a requirement of consciousness may be the ability to process a substantial amount of information. This idea is related to the idea of memory, but it's not really the same.
The paper is not about describing the biological experience of consciousness at all. This paper asks the question: "Is there some way to understand from the Hamiltonian and density matrix of our universe that it should contain consciousness?"
Tegmark then proposes some criteria that are probably necessary for consciousness to arise, then presents some metrics for the various criteria and calculates those metrics under various conditions. The paper is really describing a framework for how consciousness can be considered in the context of a physical representation. The calculations should be relatively straightforward to follow for anyone with a decent memory of their linear algebra class.
To put it another way: If you had a Hamiltonian and density matrix that described a universe that you thought contained consciousness, what kind of things would you calculate for that Hamiltonian and density matrix to try and find out if it did or didn't? This paper suggests some ways to think about this question.
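To make "what kind of things would you calculate" concrete with a toy sketch (my own illustration, not a metric from the paper): one standard quantity computable from a density matrix is its von Neumann entropy, obtained from the eigenvalues.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed via the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)   # rho is Hermitian
    evals = evals[evals > 1e-12]      # drop numerical zeros (0*log 0 -> 0)
    return float(-np.sum(evals * np.log2(evals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])  # a pure qubit state: S = 0
mixed = np.eye(2) / 2                      # maximally mixed qubit: S = 1 bit

s_pure, s_mixed = von_neumann_entropy(pure), von_neumann_entropy(mixed)
```

A pure state carries no entropy, while the maximally mixed qubit carries one full bit; metrics in this spirit (entropies, integration measures) are the sort of thing one evaluates against a candidate density matrix.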
So I don't understand why this comment spends so long talking about "observing consciousness."
What is there to celebrate about half-baked philosophy dressed as science and published?
Philosophers and theologians have spent the last two thousand years not agreeing on what consciousness is, and there is no clear consensus now. That's not really surprising - all the seemingly "basic" qualities people perceive - say lightness, darkness, wetness or coldness - are very "high level" compared to what we know about the real states of matter.
The argument that consciousness is a state of matter because consciousness is something we intuitively perceive is something of a throwback to "four elements" thinking - the prescientific system in which matter was organized by its perceived properties rather than by any serious investigation of the causes of those perceived properties.
> consciousness is independent of something like memory.
I'm not going to say you are completely wrong here, but where is your proof?
Hilbert & Fourier space
tensor factorization of matrices
condensed matter criticality
Quantum Darwinism program
2) David Deutsch, Max Tegmark, some other trouble-maker
Max Tegmark has done similar things before. He is also affiliated with FQXi, which is funded by an organization that has sometimes promoted explicitly religious agendas in a scientific context.
However, we should judge every paper by its merits alone & the rigor of its arguments.
The only problem is that work of this nature is very interdisciplinary, so who is qualified?
Here is a good article on this subject: http://philpapers.org/rec/MATBEA
What is the difference between a random string of bits and, say, a Huffman-coded string of audio? Both have seemingly random distributions. The informational aspect comes with its ability to cause an "interesting" effect in an information-coupled hunk of matter. It seems wrong to call the random string information when it cannot produce a low-probability, surprising outcome when it interacts with a specific hunk of matter. But the encoded audio stream does exactly that!
Independent of a hunk of matter capable of decoding it, can a string of bits be said to carry information?
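A quick sketch of that distinction (my own toy example, using zlib-compressed text as a stand-in for the Huffman-coded audio): byte statistics alone don't separate meaningful data from noise; the "interesting effect" only appears in the matching decoder.

```python
import math, os, zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Empirical Shannon entropy of a byte string, in bits per byte."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

message = ("the quick brown fox jumps over the lazy dog " * 200).encode()
compressed = zlib.compress(message, 9)  # stand-in for the encoded audio stream
noise = os.urandom(len(compressed))     # stand-in for the random string of bits

# Both strings have high byte-level entropy relative to the raw text, and a
# frequency test alone won't tell you which one is meaningful...
e_compressed, e_noise = byte_entropy(compressed), byte_entropy(noise)

# ...but only one of them produces a surprising, low-probability outcome when
# it interacts with the specific "hunk of matter" able to decode it.
decoded = zlib.decompress(compressed)
```

Feed `noise` to the decoder and you get an error, not audio; the "information" lives in the pairing of string and decoder, not in the bit statistics alone.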
Rather, it's the other way around. Matter is tied to information. Matter is the medium for it, but information can exist in itself. It's commonly expressed with the metaphor of a vessel: matter is a vessel which is filled with essence (which thus binds it), but the filling itself is unbound until it's expressed through the vessel (medium).
> Information is capable of having an affect on matter
Definitely, as I said above, there is an approach which says they aren't conceptually separated (but only perceptually).
> Information is abstract, but I can't call it "spiritual".
How do you define "spiritual"? According to R' Boruch of Kosov for example, spiritual can be understood as abstract, or information-type. It was understood similarly by Tsiolkovsky.
According to mystics, it's very useful practically for our relation to and interaction with the world, and for spiritual elevation. That is, it can be taken out of abstract theory and put into very practical human terms.
How exactly does consciousness "emerge" from matter? Are there some kind of psychophysical bits that aren't currently accounted for in physics? Surely it's not merely epiphenomenal. How do intentions affect the physical world as seems to be happening when our minds move our bodies? What is the solution, here, to the mind-body problem?
I just finished reading Thomas Nagel's "Mind and Cosmos" which highlights some of the salient issues with reductionist explanations of consciousness, cognition, and value. I don't think these things are just a state of matter.
My point is, there is other stuff out there in the universe (mental stuff) that is different than physical stuff, that seems almost impossible to be explained as physical stuff. I believe in the physical sciences, evolution, and mathematics, but I don't think that they fully encapsulate all that there is, nor can they definitively explain things like consciousness (they at least need a little more added to them).
See, physics has explained the mechanics of what happens when you put cheesecake in your mouth. You do experience electrical signals in your brain. It is reproducible.
You're just refusing to correlate those phenomena with what you feel, which is okay, because it's not entirely obvious; feeling is an internal feedback process. But consider this thought experiment: if I blindfolded you and used electrodes to stimulate your brain the same way cheesecake in your mouth does, would you be able to tell the difference?
You simply can't explain that with physics. You can explain the physical symptoms of pain - like crying or sweating. But not the feeling of pain. And the reason for that is that you won't be able to define the feeling of pain. Crying or sweating is definable (or reducible); it's just a complex motion of physical particles. But what's the feeling of pain (not the physical symptoms)?
You feel pain when your brain is in a particular set of states.
Your brain is in one of those particular states.
You feel pain.
There's no underlying "why", it's the definition of pain itself.
> You simply can't explain that with physics.
You can explain the mechanism with physics. What I think you mean is that you can't describe, subjectively, how you feel with it.
Attributing it entirely to matter actually validates the notion that each individual reacts differently to the same stimuli, because each organism has a unique constitution, neuronal activation levels, brain chemistry, etc., which in turn are all influenced by genetics, environment, study, and diet. In other words, we're really the result of accumulated experiences, thus unique, and we experience things uniquely. The point is that common physics alone seems enough to validate that intuition.
Second, my background is actually in Cognitive Science and Artificial Intelligence which includes Comp Sci., Psych, Psycholinguistics, Linguistics, and Philosophy. That is to say, I've put some serious thought into these issues and am not making opinions willy nilly.
Third, personally, I want to be able to explain the universe in terms of neat physical laws and mathematical formulae. But I don't think (at the moment) that what we have (yet) sufficiently explains what's going on (especially in terms of consciousness).
The common "explanation" for what consciousness is (usually put forth by materialist-determinist science) is that it simply emerges from a certain complexity of matter (put enough genes and DNA and neurons together and, bam, consciousness). I just feel like that begs the question. If we're going to explain what consciousness is I think we need to do better than that. That's all.
In that case, it's even more interesting, to me, that you think like that. Given your background you certainly know about neural networks, and what the simplest models are capable of. You probably also understand how emergent and apparently random behavior can arise from well-defined frameworks (Rule 30, the distribution of prime numbers). It's intriguing to me that, in light of evidence like that, you still require consciousness to be explained by something other than emergent behavior.
And I don't think "put enough genes and DNA and neurons together and, bam, consciousness" captures the issue. That may produce machinery like the brain, but doesn't necessarily produce consciousness.
My hunch is that consciousness is the convergence of feedback loops and the perception of boundaries, allowing the distinction between myself and my environment, and that it should be conditional on a certain structure. I believe we'll get closer to understanding consciousness by trying to reproduce it.
My position is not the norm, for sure. I used to think along the lines you're describing (some sort of Churchland connectionism or dynamic system), and was driven to find a way of reducing consciousness to something that could be reproduced in a computer. But the more I learned the more I saw the gap between neurons and experience. I don't know for sure if it couldn't eventually be explained with some future advanced physical/chemical/biological theory, but right now there seems to be a big gap.
If we could look at all the pieces leading up to experience under a microscope, I still don't think there would be a way of seeing someone's experience or subject it to proper scientific scrutiny short of actually being that someone. That is, I don't believe that any set of facts would ever allow me to know what it's like to be someone else.
I think the monism Nagel describes in the book I linked to is an interesting idea of how things like consciousness, cognition, value, and intentionality can be compatible with materialist realism while still being something different without necessarily deriving from divine intervention or subjective idealism.
See... but if you take a connectionist approach, it should in fact validate your intuition that you can't experience like someone else short of being them.
Making an analogy with neural network models: you can't transfer the weights from one network to another with a different structure and expect it to produce the same states. The experience imprints in the structure, and from that structure emerges the experience. And that's a ridiculously simple model, with ideal neurons and nothing else in the organism modeled... imagine the richness of behavior of the real thing.
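For what it's worth, the point is visible even in a toy NumPy sketch (the layer shapes here are my own invented example): weights shaped for one architecture simply cannot be loaded into a differently shaped one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two tiny feed-forward layers with different hidden sizes.
w_small = rng.standard_normal((4, 8))   # 4 inputs -> 8 hidden units
w_large = np.empty((4, 16))             # 4 inputs -> 16 hidden units

try:
    w_large[:] = w_small                # try to transplant the learned weights
    transplanted = True
except ValueError:                      # shapes (4, 8) vs (4, 16) don't fit
    transplanted = False
```

And even when shapes do happen to match, identical weights embedded in a different surrounding structure produce different activations, which is the stronger version of the point.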
I don't know... maybe it's our bias to believe matter is messy, filthy and mundane and that our consciousness, all the richness of our thoughts and emotions can't be explained only by it... but I actually find no less fascinating to think that is from structure alone that may arise sentient beings capable of living and breathing and feeling, out of the same atoms you find in the dirt.
Nagel has an almost myopic view of 'experience', in my estimation. His classic claim that a human could never experience bat flight starts by pondering the possible structure of bat consciousness, and is in the end justified by the physiological differences. Yet somehow he mysticizes the impossibility of experience transfer. As for your grandparent post about experience, perhaps you are conflating the experience with the description (thought experiment: have you ever been able to isolate the feeling of an electrical signal in your brain?).
I have a similar background to yours, and for a while I was kind of stuck in a Nagel-Dennett-Russell sort of place that felt like it was probably correct but lacked any sort of richness that living through consciousness provides. My recommendation is to dive into the rabbit hole of continental philosophy. Deleuze has a great radical materialism (inspired by Spinoza?!), and wonderfully blurs psychology and philosophy in A Thousand Plateaus. Heidegger has an exploration of the experience of language being the bootstrapping tool of consciousness in Being and Time. And some psilocybin never hurt.
I'm feeling like the more I look for definitive, objective answers, the more I'm pushed towards things like art and aesthetics; human expression, shared being, and culture. Those things seem more real to me than quantum mechanics or string theory. I'm not sure what to make of it all, or that anything can be made of it at all, but something sure is happening, and I feel, simply, that I want to be a part of it and play with whomever will join me :)
So far everybody agrees, but if you try to define emergent you get into trouble. So is consciousness weakly or strongly emerging? Is there something special about qualia? So far, to the best of my knowledge, nobody has given a good definition, let alone a really good argument for either side.
Being a materialist is all well and good, so long as one takes it seriously, and there are only two serious materialist positions: Deny the idea that mental constructs have any reality [and ignore the paradox required for denial to have any meaning] or posit mind as an inherent property of all matter and live with the consequences of universal animism. Any other form of materialism is a weak waffling half measure.
Granted my knowledge of physics is nil compared to that paper's writer, but as far as I know, a "state of matter" doesn't mean "Duh, it's made of matters." It has a (more or less) well-defined meaning. Liquid or plasma is a state of matter. "Having a pleasant smell", "moving on its own", or "with a rounded edge" isn't.
So, you can't just say it's made of matter. You need to justify that "being conscious" is more like the former than the latter.
> We explore five basic principles that may distinguish conscious matter from other physical systems such as solids, liquids and gases: ...
So it seems clear that the author argues "being conscious" is parallel to "being liquid", but this better have a really solid (no pun intended) justification.
I think it would be nice if we could explain everything we observe in the universe in terms of physical laws and mathematical formulae. But I don't think we're there yet.
And why not a possible divine explanation? Last I checked, we haven't completely ruled it out. It's not a trap. At the very least, materialists should be able to counter divine arguments, not just ignore them.
So where is the strength of an iron I-beam? You can't cut it up and squeeze "strength juice" out of it. There's no little piece you can look at that says, "hey, I'm Euler's column law!". No individual atom decided, hey, I'm an I-beam now. But if you pile up an enormous number of iron (and other) atoms in the shape of an I-beam, it inevitably behaves like a beam (or column) and suddenly "knows" it's supposed to resist bending and buckle at a certain compressive force and such. There's no particular magic to it; it just naturally happens to enormous numbers of iron atoms in a small place.
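For the curious, the column law named above is Euler's buckling formula, P_cr = pi^2 * E * I / (K * L)^2; here's a minimal sketch with purely illustrative (invented) numbers:

```python
import math

def euler_buckling_load(E, I, L, K=1.0):
    """Critical compressive load at which an ideal slender column buckles.

    E: Young's modulus (Pa), I: area moment of inertia (m^4),
    L: column length (m), K: effective-length factor (1.0 = pinned-pinned).
    """
    return math.pi ** 2 * E * I / (K * L) ** 2

# Illustrative values only: a 3 m steel column (E ~ 200 GPa) with I = 1e-6 m^4.
P_cr = euler_buckling_load(E=200e9, I=1e-6, L=3.0)
```

Notice that nothing in the formula refers to any single atom; E and I are bulk, shape-level properties of the whole pile.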
About the same with stellar formation. There's no "essence of star" and no individual hydrogen atom decided to create a star. Just pile up a certain ridiculous number of hydrogen atoms and good luck not having it turn into a star. And not enough atoms, well no way it'll ignite, at least not naturally. It can kinda smolder at most.
There are other analogies with grains of sand and landslides, water crystals and avalanches, etc.
Any time you pile up a certain number of neurons at a certain complexity you get consciousness. We've run a couple billion experiments for quite a few centuries and every time you do it, aside from physical brain damage, it works.
Now stuff like Jaynes "Origin of the bicameral mind" is a little controversial, but, perhaps, a great big pile of neurons can boot up in a peculiar mode, sort of.
The specific hack of where it came from is only our species was able to add substantial amounts of fish to the diet, and fish oils and proteins seem to allow serious spectacular brain development in a positive feedback cycle until you get consciousness, then you overfish your ocean till all the fish die, then ... we're about to find out what comes after that.
But saying "[when] you pile up a certain number of neurons at a certain complexity you get consciousness" is begging the question. What is consciousness? What is it in the neurons that makes it? What is it about the universe we're in that allows the phenomenon to emerge from it? This has always been how science has explained it to date: it just emerges from the complexity. But that doesn't explain anything >:|
Here's another fun topic to google. There's an island with a population of predators and their prey. Now graph those populations and you get some interesting oscillators. Where do those waveforms come from? Well, nowhere. It's a system thing; there are no waveform particles or whatever.
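Those predator-prey waveforms fall out of the Lotka-Volterra equations; here's a minimal forward-Euler sketch (the parameters and initial populations are arbitrary illustrative choices):

```python
def simulate(x, y, a=1.0, b=0.1, c=1.5, d=0.075, dt=0.01, steps=5000):
    """Forward-Euler integration of the Lotka-Volterra system:
    dx/dt = a*x - b*x*y (prey), dy/dt = -c*y + d*x*y (predators)."""
    prey, predators = [], []
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (-c * y + d * x * y) * dt
        x, y = x + dx, y + dy
        prey.append(x)
        predators.append(y)
    return prey, predators

# With these parameters the populations orbit the equilibrium at
# (c/d, a/b) = (20, 10): neither series settles, yet no "oscillation
# particle" appears anywhere in the update rule.
prey, predators = simulate(10.0, 5.0)
```

The waveform is a property of the coupled system as a whole, which is exactly the analogy being drawn.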
Edited to add: maybe another way to say it is that it's not the emergence or "where" or "source" that doesn't exist, but the property itself that doesn't exist. There is no consciousness. There are just certain patterns that are really common among big piles of neurons. Like the big pile of neurons occasionally saying stuff like "I think therefore I am".
There is no consciousness to measure, or you could give it a number. I've got a 100 consciousness score, how about you? This is very much like intelligence. I know it when I see it, but you really kick over an anthill when you claim you can give out an "intelligence number" like an IQ. Coincidentally, both seem to be self-organizing: again, all you need is a big pile of neurons and some time, and not only does consciousness pop out, but so does intelligence.
Dualism is still toast. However I no longer think he's proposing dualism.
It seems to be a multi-disciplinary collection of boundary conditions.
If so, then the matter you are made of is conscious; it is matter that is in a state of consciousness.
Snowflakes are pretty. The water molecules in the snowflake do not inherit this; they are not themselves pretty as a result of being in the pretty snowflake. (Although you could consider them beautiful for other reasons. From a life-sciences and/or chemistry point of view, water is pretty beautiful.)
But then again, I'm an engineer with a passing interest in chemistry, not a quantum physicist, so of course I see everything as systems. To me nothing they talk about makes sense anyway.
P.S. A snowman isn't exactly an emergent property. Now, an avalanche system might be...
if consciousness is being "aware" of yourself, then at a very, very, very low level, the coffee cup is "aware" of itself because it is a perfect analog computer simulating "a cup of coffee on a desk" - which is what it is.
That's an interesting misspelling of "Gottfried". A universal genius, indeed.
EDIT: Why the downvote? Did I not write enough? My point is, simply, that observing consciousness in correlation to matter does not imply that consciousness is caused by matter. The provided link is to the Wikipedia article that explains this basic statistical principle.
"Are you made of matter? Are you conscious? If so then the matter you are made of is conscious"... implying that if you are both made of matter and conscious, then it is the matter that you are made of that makes you conscious. In other words, consciousness is caused by matter.
I don't believe that we have sufficient empirical evidence to determine a causal relationship, only a correlation.
EDIT: I realize I mistakenly jumped to the conclusion that the parent was implying matter caused consciousness when all that was said was that "conscious matter is conscious". I just wanted to further explain what I was thinking. Straw man or not, whether my argument was sound or not, I thought I had a valid point to make to contribute to the conversation and I don't feel that it should have been downvoted.
That's at least one way in which he makes his hypothesis testable.
Just, please, no one forward this article to any Republican legislators and draw their attention to the NSF grants that supported the research.
So definitely worth reading carefully.
who doesn't want a sentient sword?
"I have long contended that consciousness is the way information feels when being processed in certain complex ways "
"...Penrose and others have speculated that gravity is crucial for a proper understanding of quantum mechanics even on small scales relevant to brains and laboratory experiments, and that it causes non-unitary wavefunction collapse. Yet the Occam's razor approach is clearly the commonly held view that neither relativistic, gravitational nor non-unitary effects are central to understanding consciousness or how conscious observers perceive their immediate surroundings: astronauts appear to still perceive themselves in a semiclassical 3D space even when they are effectively in a zero-gravity environment..." Ugh. That is really lame.
People still cite his 1999 article on how the brain can't be quantum, as if that were the end of the discussion. Meh.
Anyway, now he seems to be saying the opposite of this, while cleverly avoiding contradicting (or even citing) this earlier work.
> Utilitronium is relatively homogeneous matter optimized for maximum utility (like computronium is optimized for maximum computing power). For a paperclip maximiz[ing artificial intelligence], utilitronium is paperclips. For more complex values, no homogeneous organization of matter will have optimal utility.
These sorts of terms are popular in the utilitarianism/rationality/friendly AI communities.
What if the diviner tells us that when he holds the rod he feels that the water is five feet under the ground? or that he feels that a mixture of copper and gold is five feet under the ground? Suppose that to our doubts he answered: "You can estimate a length when you see it. Why shouldn't I have a different way of estimating it?"
If we understand the idea of such an estimation, we shall get clear about the nature of our doubts about the statements of the diviner, and of the man who said he felt the visual image behind the bridge of his nose.
There is the statement: "this pencil is five inches long", and the statement, "I feel that this pencil is five inches long", and we must get clear about the relation of the grammar of the first statement to the grammar of the second. To the statement "I feel in my hand that the water is three feet under the ground" we should like to answer: "I don't know what this means." But the diviner would say: "Surely you know what it means. You know what 'three feet under the ground' means, and you know what 'I feel' means!" But I should answer him: I know what a word means in certain contexts. Thus I understand the phrase "three feet under the ground", say in the connections "The measurement has shown that the water runs three feet under the ground", "If we dig three feet deep we are going to strike water", "The depth of the water is three feet by the eye". But the use of the expression "a feeling in my hands of water being three feet under the ground" has yet to be explained to me.
Furthermore, computronium would presumably need a power source, facilities for maintenance or repair, heat dissipation, protection from external disruption, etc. I don't think it's a given that an optimal design would be in any way homogeneous. Ultimately I think it's a very naive and simplistic way of thinking about things, but if it's just intended as a shorthand for the purposes of thought experiments, that's fine. I'm just concerned that some people might take the idea too literally.
Computronium by that definition would be more like an entire planet terraformed to be a massive supercomputer - something with maximum possible computation density.
Maybe at one point quantum physics becomes philosophy.
Aside from illustrating quantum states, Schrödinger's cat has always felt more like a philosophical thought experiment than a mathematical or quantum one.
Headlessness is a synonym for mindlessness.
The only way out is to practice mindfulness.
Statistically speaking collapse of the wavefunction and quantum decoherence have the same signature, such that we never know when observation (mental states) causes collapse. To investigate consciousness as matter is to assert the ontological underpinnings of mental states as part of the internal causal nexus of the quantum system.
I'm still collecting my thoughts on this subject here: https://gomockingbird.com/mockingbird/#xl6a68x/NLd6c9
The gist is that observation (including our tools used for observation, along with indexical subjective states) itself has to be treated as a quantum system, not as a classical system. And matter can be described by quantum mechanics.
A quantum mechanical "observer" does not need to be a human, or even an animal.
You should also realize that you are positing something equally crazy by implying causation the other way around, meaning that the collapse of the wave function somehow can cause changes to the mental state of a conscious person at breathtakingly large distances.
The idea here is indeed that for formal consistency, all entities, mental or otherwise, must be described in quantum mechanical terms such that the description of mental states structurally composes Hilbert space as Hilbertian subspaces.
Essentially, as per my diagram mentioned earlier, mental states occupy a position in the table such that if all mental states are accounted for in the description of Hilbert space, a pre-theoretical "rhythm" of statistically described decoherence is established.
Nowhere do I presuppose that mental states are necessarily human, but only that the structure of consciousness is amenable to the structure of material reality. Biological reality itself is an implementation, or supervenes on, a fundamental material reality. There is no ontological commitment here to hominids as such in that hominids alone experience consciousness.
"The order and connection of ideas is the same as the order and connection of things." Ethica, E2P7. Baruch Spinoza.