This strikes me as disingenuous. Consciousness is correlated strongly and solely with the brain. Meaning, if you cut off my hand I can still think and feel and reason. If you remove a fist-sized volume of tissue from my pre-frontal cortex I'll never have another thought again.
I guess I'm missing the point Manzotti is driving at. Experience is something that needs to be accounted for. I can deny many things, but I can't deny that I'm experiencing some thoughts and feelings and sense perceptions right now. Even if the self and experience are illusory, there must be something causing that illusion to exist. If consciousness exists in this universe then it must be explained by the laws and constants of this universe. If it's not neurons that produce consciousness then there must be some other physical phenomenon that generates experience.
Consciousness first, i.e. what you call matter is just a thing made up by your consciousness. No need to discuss this one, since it's not really a useful axiom when we want to explain the phenomenon from the point of view of a physical world.
Consciousness as a result of matter and ...
The laws of physics are deterministic / rational / bounded / mathematical, i.e. we are computers and consciousness is a computation. As a result, basically everything that moves is conscious (although not necessarily at the same "level" of consciousness), since that movement is a computation; all that is needed is to define a language for that movement that makes it an isomorphic projection to/from an identity for some "consciousness complexity class".
The laws of physics are mystical, something may be special about us, free will, yada yada... Then anything could make sense, let's go take ayahuasca!
We can wave it off, but it seems as important as other explanations.
I have a fourth theory: The nature of reality is dualistic, and the mind simultaneously creates the physical world while the mind arises from the physical world, supporting each other in a feedback loop. It's a dualistic phenomenon similar to particle-antiparticle pairs that form at a black hole's event horizon: two sides of the same coin.
Subjective experience is commonly thrown out the door but what if it is just as important as objective reality?
Personally I'm a fan of defining the base of my reality as consciousness or subjective experience but I still like to think that there is Truth and that my experience contains at least a shadow of it.
Edit: I do think duality is a really useful conceptualization, since you can think of The World (or what I just called Truth above) as existing, and then there is Absurd, which is everything that doesn't exist (and which you can't disprove, because yay category theory!). Then you can say all of Existence can be categorized in some arbitrary way: there are X categories that can be nested to any depth, and maybe there are some strange loops or whatever. We then convert that into a binary tree (losing some relations along the way, maybe) and call the resulting dualities shadows of the True Pair (of Truth and Absurd).
Now your problem is perceiving the world as it is, i.e. evaluating these 'perspectives', so you need some function to order them and pick the best one; let's call it 'religion'. If your religion function is to be effective, we need a good assumption. One idea (suggested by some prior research) is to investigate the True Pair and assume that every other duality will have a part that corresponds to Truth and a part that corresponds to Absurd (or yin-yang, submissive-dominant, sender-receiver... whatever you prefer). A perspective that pairs things that cannot be interpreted as opposites becomes muddled and unsure about the nature of the information being evaluated.
Anyway, you can investigate the assumptions famous researchers like Jesus, Muhammed, The Buddha and others have come up with and maybe concoct your own view from theirs or you can ignore the problem and use the default perspectives of your culture / language.
This doesn't seem right... computation requires causality. It's not enough that you can build a model complex enough to define a random sequence of states as equivalent to computing something; if any consciousness is computed thereby, it's the model rather than the random sequence of states that would be conscious, no?
All this will be easier to reason about when consciousness is understood, assuming we ever get there.
He just made the claim that the natural basis of consciousness is not limited to neurons, even though it might be dominated by them.
He is trying to say there are other natural places we can search for the components of consciousness without having to resort to the false convenience of some flavor of dualism, whether Cartesian or Chalmersian(?).
Assuming physicalism is the case and not idealism, dualism, or some other metaphysics. If consciousness isn't the result of neuronal activity, then that would be a good reason to suspect that physicalism is somehow false.
To be pragmatic: the absence of a physical cause for consciousness does not preclude any other causes. Whatever environment consciousness exists in, that environment must have rules, laws, constants, and symmetries that define it. So all we're doing is trading the laws of the universe for the laws of some ethereal non-material realm or supernatural realm. We're not actually performing any useful work; we're just shuffling the problem around.
The hard problem of consciousness, the questions of determinism and free will - these problems will challenge any consciousness that finds itself in a rule-bound system.
If falling isn't the result of gravity, then that would be a good reason to suspect that physics is somehow false.
If it's not neurons, you're going to need extraordinary evidence.
I'm not saying it's impossible. Who knows. But it remains a hard problem for now, at least.
Not necessarily. If you think about it, consciousness is not physical. My thoughts have no mass. My experience of the color red has no height or width. Why should we assume that something that is not physical must come from the physical world? I think the best argument is that neuron functioning correlates with thought. However, as we know, correlation does not necessarily mean causation.
I think one big hurdle for understanding consciousness is that people still have a matter-only mindset. I think people need to recognize that consciousness is not material. Then they will begin to make progress on the fundamental question, which is how something that is not material can arise from matter, or whether it comes from another source.
You could also construct a physical storage medium that doesn't change mass to encode data, e.g. an array of objects which are rotated to encode state.
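To make that idea concrete, here is a minimal toy sketch (my own illustration, not from the thread): a memory where each cell is an object whose orientation encodes one bit. Writing only rotates a cell, so nothing is added or removed and the total mass of the medium never changes.

```python
# Toy sketch: a storage medium whose mass is invariant under writes.
# Each cell's orientation encodes one bit: 0 degrees = 0, 180 degrees = 1.

class RotaryMemory:
    def __init__(self, n_cells):
        self.angles = [0] * n_cells          # all cells start at 0 degrees (bit 0)

    def write(self, i, bit):
        # Writing merely rotates the object; its mass is unchanged.
        self.angles[i] = 180 if bit else 0

    def read(self, i):
        return 1 if self.angles[i] == 180 else 0

mem = RotaryMemory(8)
for i, bit in enumerate([1, 0, 1, 1, 0, 0, 1, 0]):
    mem.write(i, bit)
bits = [mem.read(i) for i in range(8)]       # recovers the stored byte
```

The point is only that "information is stored" need not imply "mass was added": state can live entirely in configuration.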
This spiritualistic view that thoughts have no mass == consciousness is not material is demonstrably wrong on both sides of the equation. Thoughts certainly have mass, and consciousness is certainly material, as is evidenced by observing anyone with severe traumatic brain injury or degenerative brain disease.
It seems much more likely that consciousness is an emergent property of a neural network with the right feedback mechanisms in place and probably with some close tie in with language.
Poke someone in the brain and consciousness changes, take a tiny bit of LSD and consciousness changes - it's not a magical property.
Could you (or anyone else who knows the meaning of "physicalism") come up with a clear definition of what "physical" means for you? Each time I read something suggesting that consciousness might not be physical, I completely lose track of the meaning, a bit like when I read that some experience (or chemical!) is not "natural".
Certainly, it has nothing to do with mass or energy (or you would merely postulate the existence of some new particle with this and that properties). It certainly does not mean, either, that it is not composed of matter (like for instance sound or temperature), or you would have said "immaterial".
Maybe that's because of those years spent studying physics, but I just can't get what could exist but still not be physical (or natural) as for me those are equivalent.
As for consciousness, it may not depend on brains, but it may turn out to always depend on some physical medium and physical process, and that would be enough to show that it's a physical phenomenon.
>A hole is not physical but it's realized in the midst of physicality.
Yet no one seems to think this poses some metaphysical problem requiring us to go "beyond" physics to explain holes.
There are a variety of ways we could recognize the pattern.
+ The series of bits you start with.
+ The bitmap on the screen.
+ The BMP which would also generate that bitmap.
+ The GIF which results in the same frame.
+ The emotional impression you take away from it. "That meme I remember so well"
And so on.
The JPEG is closer to qualia than is the arrangement of bits; closer to "your experience".
But the person you are replying to probably wants to argue that qualitative experiences aren't realized on any physical sort of anything, let alone translatable across various physical mediums.
Now, again, I think that argument is wrong (I think these experiences happen in our brain, though it's currently unclear exactly how), but I at least understand their impulse to put those experiences in a different category than a JPEG, and why the ability to translate a JPEG across different mediums wouldn't speak to the concern they are raising.
One thing to consider with that is that only the brain is able to give answers about consciousness, i.e. it's the producer of verbal content and thought, so there could be no communication of it if something like consciousness were going on at other loci in the body. To get a better feel for how something like that might happen, I'd highly recommend checking out some research on split-brain patients whose corpora callosa have been severed (this structure connects the two brain hemispheres). One of my favorite examples: a split-brain patient is shown an instruction to go out through the front door via only one eye (each eye is connected to the opposite brain hemisphere)—specifically, it's shown to the eye which connects it to the hemisphere not used for speaking. The split-brain patient will get up and follow the instruction, walking toward the door. Then the researcher asks the patient, "what are you doing?" and cognitive dissonance resolves itself into some plausible but essentially bullshit answer like, "I was, umm, going home to get a Coke..." If you'd like to hear more, check out 
Also, to be clear here—since this conflation derails just about every discussion of consciousness—the article is talking about the 'hard problem' of consciousness. So, the issue at hand is immediate subjective experience, not a self-reflective mechanism (simple versions of that could readily be coded up).
> If consciousness exists in this universe then it must be explained by the laws and constants of this universe.
If our universe is likened to a board game with some finite set of rules, e.g. Monopoly, the 'physics' of this universe is fully determined by the rules of the game (even if there are non-deterministic aspects where you have to e.g. roll dice). The non-hard problem of consciousness is a question in this realm, like "can I sell one of my properties to another player?"; the hard problem is necessarily outside the scope of the rules; it's a question more like, "what is the molecular composition of a 'Chance' card?". The rules of Monopoly do not cover this. In a similar manner I believe the 'hard problem' of consciousness is getting at what it means to be a part of the fabric of the universe, and the laws of physics are expressed through that medium, but no specific laws create it.
 Social Brain, Michael S. Gazzaniga
edit: I wrote 'not used for language' previously and changed it to 'not used for speaking,' which is an important distinction.
A conscious brain that advertises itself with communication certainly makes matters easier for us, however we can make reasonable inferences about consciousness in people and animals based on sophisticated behavior, not just communication.
And if there is consciousness bound up in, say, a toenail, and it's not doing anything or communicating anything, not even doing some complicated information processing that it's keeping to itself, it's true that we might very well never discover it. But it's also true that we wouldn't have much reason for believing in any such form of consciousness given that it's completely unobservable.
>Also, to be clear here—since this conflation derails just about every discussion of consciousness
It sounds like you've got a dog in this one. That's fine; I do too. But I wouldn't say it's necessarily a conflation; some people's inclination is to argue that the hard problem is best resolved, in some way or another, by mapping subjectivity onto biology. That's not enough by itself, necessarily; you'll want some good philosophical reasons for thinking that that kind of explanation is capable of doing enough to give a satisfactory account of the hard problem. And that's certainly a matter of debate. But I wouldn't say it's a conflation, necessarily.
>In a similar manner I believe the 'hard problem' of consciousness is getting at what it means to be a part of the fabric of the universe, and the laws of physics are expressed through that medium, but no specific laws create it.
This sounds like an attempt to smuggle a preferred answer into the definition of the problem. It may be "about" some universal medium in which consciousness and laws of physics are equal participants, if yours is the right answer. But I think it's enough to say that what makes the hard problem a hard problem is the need to explain qualia, and getting it through to people that they need to take qualia seriously, and not just talk about physical correlations.
The reason I see is roughly as follows: it also exists in the part of us which is able to communicate, and the conditions for it being there do not appear related to the anatomical functions (e.g. cognition and communication) of that other area—so that the problem becomes more a question of demonstrating why we should assume that it does not exist in the parts which don't have communication capabilities. In other words, if all the parts which are able to communicate find it present, and yet it is not due to the communication capability itself—why assume it's not present, rather than just not communicated, in other areas? (And to clarify, I believe the above would be nonsense for the non-Hard Problem, which is clearly related to cognition—but I'm only speaking of The Hard Problem.)
> But I wouldn't say it's necessarily a conflation
I think we're talking about different things here. I am specifically talking about people conflating the Hard Problem of consciousness with the non-Hard Problem of consciousness. Or do you mean to say that the two problems are the same and that the designation of 'Hard Problem' as something separate is vacuous?
> This sounds like an attempt to smuggle a preferred answer into the definition of the problem.
I agree it sounds that way, and I'm open to further argument that that's the case—but I'm pretty sure it's not. Instead, I think if you follow the problem of trying to define qualia far enough, you end up running into my fabric-of-the-universe situation—or at least the distinction I make between the rules of Monopoly versus the substrate in which the physical incarnation of Monopoly exists. I believe that substrate issue is in the definition of the hard problem, though you can probably find phrasings of it in which it's absent.
Anyway, I appreciated your reply and would be happy to discuss further.
People can give after-the-fact subjective reports of what, if anything, they were experiencing in non-communicative states (for lack of a better term). They can tell us that something was "going on" in their head while they were dreaming, or that they don't recall experiencing anything while being in a coma or being "dead" for a few minutes before being resuscitated. We seem to agree that this is the convenient version of consciousness that people are able to communicate about. And we can look at what brain states match up with these reports -- reports of hard consciousness -- and find that certain brain configurations and activities seem to always be there when hard consciousness is there.
But perhaps, you might say, a person who doesn't recall any experiences during a coma actually had experiences then, too, but just doesn't know it and/or wasn't able to communicate about it.
So on the one hand, the kinds of consciousness a person is able to communicate about are corroborated by some sort of interesting brain activity. And the other kind of consciousness that they can't communicate about, the kind posed by a philosopher, isn't corroborated by anything. That starts to look like a "consciousness of the gaps" problem.
And this is without even taking into account aforementioned things like sophisticated behavior and other biological clues which you say are "just" the non-hard problem. There again, I think that these are important evidence, and that excluding them is borrowing from a conclusion to pay for an argument.
It's also more elegant from the perspective of simple explanation. The amount of explanatory debt one incurs by taking on the construction project of a parallel, universe-spanning medium inhabited by consciousness is enormous, and there's significant risk that the investment won't pay off.
>I think we're talking about different things here. I am specifically talking about people conflating the Hard Problem of consciousness with the non-Hard Problem of consciousness. Or do you mean to say that the two problems are the same and that the designation of 'Hard Problem' as something separate is vacuous?
The hard problem is real in that it demands an explanation that is more than just citing brain activity or waving it away as an illusion. But the notion that physical stuff is automatically about something other than the hard problem is baking your conclusion into the definition of the problem. There are certainly all kinds of non-hard things physics/biology explains, but they may yet explain hard things too, provided a compelling enough argument can be made.
>Instead, I think if you follow the problem of trying to define qualia far enough, you end up running into my fabric of the universe situation
My problem with talking this way is that, if I were to do the same thing, I would end up saying that following the problem of qualia far enough leads to somethingorother about circuits of synapse firing and its intimate connection to experiences, thus the problem is "really" about those things, and someone talking about a universal medium shared by consciousness and physics but independent from them is talking about the wrong thing.
But in a way I think you are right, because I think at the end of the day an adequate answer does have to be about finding some ontological level, some "medium", on which consciousness and physics can be understood to be continuous with each other. I just happen to think that that medium is physics, and that we will have to get comfortable with the idea that there are robustly physical answers to questions of what qualia are and what consciousness is. I.e. the idea that what manifest themselves as experiences are, at some raw level, sheer, nakedly physical events witnessed by other physical events which we call brain activity.
Could you clarify what you mean by that? Physics is a branch of knowledge, a collection of descriptions of regularities in material structures evolving in time. I don't see how a collection of descriptions can be the medium in which subjectivity/qualia exist. See what I mean?
Or do you mean what's referred to in those descriptions? If so, I see a difficulty in it, because you just get back to the same subjectivity/qualia problem. I mean, what is being referred to in those descriptions? Our starting point for what physics is talking about is subjectivity—the only thing we have direct empirical contact with. Then we make abstractions on top of it so that various parts are grouped into 'objects' etc. (that grouping is largely subconscious), and then physics comes into play in giving descriptions of regularities in the time evolution of categories of those objects. But none of that deals with what the objects being discussed are; it just talks about how a selection of characteristics which happen to be useful to us change in time.
I know some people believe the best we can do is say that what something is is the collection of things we say about it. So, the things being referred to in physics are the regularities in certain of their attributes, e.g. the measurements we make of their mass and velocity, and the forces being applied to them or which they are applying etc.
Personally that sounds like a cop-out, though granted, it's at least an attempt at a pragmatic cop-out: after all, what's the point in entertaining the idea that there's more to the thing if we know we can never access it or talk about it. But here's the value: if you just admit the boundaries to your knowledge, you're taking a more realistic account of things (the alternative of pretending the boundaries aren't there can only lead to mistakes). In that case, we have not discovered what the things are which are referred to by physics; we have just charted regularities in certain attributes we care about. (It's also important to note that we invented those attributes, and which set of attributes we deem to be the intrinsic/essential characteristics has a degree of arbitrariness to it, since multiple equivalent formulations exist, e.g. reformulations of Newtonian mechanics which give the same outputs for the same inputs but in which the notion of 'force' does not exist, or relativistic/quantum mechanics of course, though that's moving beyond strict reformulation.)
Edit: I clarified a number of things on re-reading a few minutes after posting!
A cop-out would be blithely insisting qualia don't exist (like some people in this thread are doing), or insisting that one or another philosophical explanation is forbidden "by definition." And in my opinion, saying it's "emergent" would be a cop-out, because often "emergence" just serves to label a phenomenon without explaining it.
By contrast, saying that qualia are nakedly physical is to say that they are something, and to say that we are witnessing raw physical events in a direct way is to at least aspire (successfully, I would hope) to give some tangible account of what it is for a mind to have qualia in a way that respects the first person perspective. That could be wrong or confused or unpersuasive in any number of ways. But it's at least something more than a cop-out.
There is more to it than that, and I'm happy to give my best shot at taking that starting point, and applying some philosophical duct-tape to those starting pieces to try to show what it would mean to say qualia are literally physical, and what kind of explanation gives them their due as "real" while still tying them to the physical world.
But that may be a long conversation, and I am humble enough to know I won't be able to settle one of the oldest philosophical issues in a hacker news thread. So I would like to at least point to literature that drives my intuitions on the subject.
Consciousness Explained, by Daniel Dennett, to at least pump the intuition that we should not merely regard a physicalist account of consciousness as a legitimate answer to the hard problem, but as the leading candidate for explanation, maybe even the only game in town. Caveat being that he does insist that we should deny qualia altogether in the end, which I think is a cop-out. But he brings the ball to the 1-yard line.
Godel Escher Bach & I Am A Strange Loop by Douglas Hofstadter, to drive home that a key aspect of consciousness is the ability for a system (be it a brain or computer) to interpret the medium that it itself is a part of. Hofstadter I think has the idea that brings it into the end zone, but wouldn't be able to get to the 1-yard line by himself.
"The Intertwining—The Chiasm" by Maurice-Merleau Ponty, who gives a powerful argument that turns the tables against the subject/object distinction. Killer quote: "What is this talisman of color, this singular virtue of the visible that makes it, held at the end of the gaze, nonetheless much more than a correlative of my vision, such that it imposes my vision upon me as a continuation of its own sovereign existence?"
I didn't get to everything, but I wanted to put enough on the table to make this a satisfactory end point to the conversation if need be.
Shouldn't the one claiming the existence of the thing be the one to describe or identify it?
What is this so-called "perceiver" that you reference and are sure exists?
a) physicality is not presumed. i.e., Manzotti doesn't seem to assume that consciousness is even "visible to scientific instrumentation."
b) you're conflating logical "thought" with "consciousness". (> Consciousness is correlated strongly and solely with the brain. Meaning, if you cut off my hand I can still think and feel and reason. If you remove a fist-sized volume of tissue from my pre-frontal cortex I'll never have another thought again.) That's a gigantic assumption that is absolutely unwarranted.
I suppose the fact that we haven't observed any of the effects of consciousness, i.e. expression of choice, could be an argument that there is no such consciousness in a computer. Or it could just be that computers are conscious but lack any tools to exert free will. Their consciousness is "read only," so to speak.
For example, if we were to start thinking about the state of no thought — just pure awareness — we’d understand less about what we’re trying to examine.
Thought is an amazing gift, but it’s limited. It would appear we’re reaching that limit when we start thinking about consciousness — hence the wildly varying thoughts on it.
As Sam Harris points out, "beginning meditators often think that they are able to concentrate on a single object, such as the breath, for minutes at a time, only to report after days or weeks of intensive practice that their attention is now carried away by thought every few seconds. This is actually progress. It takes a certain degree of concentration to even notice how distracted you are."
Amazingly, most of us are unable to even recognize (with any precision) when we're thinking. And that's a crucial first step in discovering what's beyond thought.
Thus his arguments are about "separable consciousness", and we don't know if real consciousness is "separable consciousness".
Has anyone seen a good argument that p-zombies are logically possible, that goes beyond being some form of "I can string these words together, and they form a grammatically-correct sentence?"
Keep in mind that Chalmers is a property dualist who thinks there is some additional law of nature binding consciousness with informationally rich physical processes (not necessarily brain activity).
The p-zombie argument isn't circular, but it can be attacked on other grounds.
I myself am fond of Bernardo Kastrup's work on Idealism: https://www.reddit.com/r/philosophy/comments/41gb0j/bernardo...
Basically (I don't want to spend much time on this, so it's not the best way to summarize it):
- Consciousness is defined as my current feelings, perceptions, thoughts.
- In a way, consciousness (or experience) is the only thing that exists.
- The idea that the physical world is everything is ridiculously wrong. Physics is just a way to summarize patterns observed through our consciousness. The physical world is just a mental model.
- The physical state of the brain clearly affects consciousness, but there could very well be an opposite causal link - consciousness affecting the brain by "pushing atoms" or making supposedly random quantum phenomena not-so-random. Why? Because I wouldn't be typing this otherwise - my brain would never realize it affects some consciousness in a "parallel universe" (metaphorically speaking).
- Other people (or computers) don't have consciousness as defined above. But we could define a separate term, say "consciousness2", to describe some physical processes typical for brains. If the "consciousness pushing atoms in brain" hypothesis is correct, we could say that all brains with this kind of fishy activity are "conscious2".
Perhaps my main disagreement is that he puts other people's minds on the same level as one's subjective consciousness (but this is just after quickly skimming the paper).
You might find it interesting to read about various forms: http://www.philosophybasics.com/branch_idealism.html. Kastrup's work will probably interest you too.
Kant for example, might be said to have believed that "reality" was "mentally constructed", and he used a really similar language in describing how we perceive the world. But he also wrote quite clearly that Berkeley's idealism was absurd and that we can never perceive anything which violates physical laws. And together with denying the possibility that we might ever come to real knowledge of the world by thinking alone, that was the whole point of his "transcendental idealism".
I think the best alternative is Reinforcement Learning. In RL, there is an agent which exists inside an environment. It can perceive the world around, move about and perform actions. The agent has a goal to achieve and receives reward signals from time to time. Such an agent can learn behavior that maximizes rewards.
That's what consciousness is. It is not an experience, it is a whole loop "perception -> judgement -> action -> reward" that defines life moment by moment. This also explains the role of consciousness - it is to choose actions that lead to survival of the individual and of its genes (reproduction). It is all a self replicating loop, in the end. The purpose of life is life, the purpose of consciousness is to guard life.
Consciousness is a set of four functions (perception, evaluating actions, acting, learning from reward signals) that work together to select the next action, to protect the body, on which consciousness depends - yep, full circle.
This whole "RL agent inside environment = consciousness" theory has the advantage that it is concrete and not supernatural in any way, and has promising applications (such as AlphaGo and self-driving cars).
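For anyone unfamiliar with RL, the "perception -> judgement -> action -> reward" loop described above can be sketched in a few lines. This is my own toy example (tabular Q-learning on a five-cell corridor where only position 4 pays a reward), not anything from AlphaGo; the point is just to show the four functions working together:

```python
import random

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning agent on a 5-cell corridor; goal is position 4."""
    rng = random.Random(seed)
    n_states, actions = 5, (-1, +1)           # positions 0..4; move left/right
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}

    for _ in range(episodes):
        state = 0                              # perception: where am I?
        while state != 4:
            # judgement: epsilon-greedy choice over learned action values
            if rng.random() < epsilon:
                action = rng.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            # action: move, clipped to the corridor walls
            nxt = min(max(state + action, 0), n_states - 1)
            # reward: only reaching the goal pays off
            reward = 1.0 if nxt == 4 else 0.0
            # learning: standard Q-learning update from the reward signal
            best_next = max(q[(nxt, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned policy: which direction the agent prefers at each non-goal state.
policy = {s: max((-1, +1), key=lambda a: q[(s, a)]) for s in range(4)}
```

After training, the agent has learned to always move right toward the reward. Whether a loop this simple deserves the word "consciousness" is, of course, exactly the question raised in the replies below.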
By the way, to understand what you mean by consciousness, do you consider yourself conscious when sleeping?
Consciousness is that thing that allows me to make a sandwich when I get up in the morning, so as not to die of hunger. How do you like this short and concrete definition?
The magic of consciousness happens when there are: 1- an agent, 2- the world around it, 3- self replication and evolution, that give the main thrust of purpose to the agent, the purpose of self replication being recurrent, it is just more self replication.
1. Why does the loop in RL produce consciousness while other kinds of loops don't?
2. In the "perception -> judgement -> action -> reward" loop, how are these different parts qualitatively different? If you remove action, does consciousness stop?
3. Following that definition, even the simplest RL robot of a few hundred lines would be conscious, while a more complex industrial robot with advanced machine vision would not be.
Because it helps in selecting and carrying out actions, that are necessary for maximizing rewards. In real life, reward == more life, so, by natural selection, consciousness is that function that preserves the body.
> If you remove action, does consciousness stop?
If you remove actions, there would be no purpose to be learned by the agent, so it would not form values, so it would not have emotion. It would be just a feed-forward perception system that does not develop any response to anything.
> Following that definition, even the simplest RL robot of a few hundred lines would be conscious, while a more complex industrial robot with advanced machine vision would not be.
I'd say consciousness depends on the game the agent is playing, but there is always a kind of consciousness as long as the agent can answer adaptively to changing situations and learn to act better with time.
Would plants be conscious under that definition?
In complex organisms, gene regulatory networks coordinate the development of the body and internal organs. So it has the complexity necessary for consciousness, it's just a small size chemical-based consciousness, but the main ingredients are present - sensing, acting, learning and a goal - to maximize its own life and replicate, and let's not forget the most essential one - it is part of the world, which guides its evolution.
(2) does anyone in epigenetics share this view?
If your position is that those questions are meaningless, that's fine, but since they are a well-accepted part of the 'problem', you have to call that out explicitly rather than just ignore them.
You can't. Information is physical. Matter irreducibly contains information: https://www.scottaaronson.com/blog/?p=3327
What you need to do is get over the prejudice that information processing is undesirable as an explanation for consciousness.
There seems little need to posit consciousness as some irreducible property of matter, except for those who are mysteriously uncomfortable with the idea of being purely physical beings. Most arguments against physicalism are smoke and mirrors.
As for specifically what physical process produces consciousness, it's an open question that's being studied. We're currently like Ada Lovelace studying a web browser and marvelling at what it does, having little idea how it actually works, yet we're mostly convinced it's not magic. I think you'd be more forgiving if Miss Lovelace took some time trying to reverse engineer the operation of a computer and its program.
Personally, I think  is a great starting point on this journey. The problem of apparent subjectivity was a huge stumbling block, and I think that paper is a great stab at it.
I can't answer the ultimate skeptic except to say that not only do skeptical arguments against knowledge depend on that very same knowledge, and so undermine themselves, but even if that were not so, abstract musings will never be more plausible than the concrete reality staring you in the face.
The point is that subjective phenomena are the most basic facts of my existence, so it's not even meaningful for me to explain (or worse, deny) them with respect to (in favor of) things I extrapolated from them.
I think last time you didn't agree that subjective phenomena were the most immediate facts of my experience. If that's still the case, perhaps I can try to be more precise.
The question is whether human subjectivity actually is true first person, irreducible subjectivity, or just an illusion of it.
Consider the analogy to sight: seeing is believing, right? And yet, at some point you'll see an optical illusion whose logical properties are simply inconsistent with other properties you've seen. Like how water appears to break pencils. So how do you reconcile this? Clearly seeing can't always entail believing.
Enter science. As I linked elsewhere in this thread, science can explain fake subjectivity quite well. So your choice would seem to be throwing away our best tool for objective knowledge just to preserve some special property of human minds that ultimately makes no real difference.
When you're having a dream, it is all an illusion (of the first kind), but to conclude that you are therefore not experiencing anything whatsoever and thus there's nothing to explain would be a grave mistake. You're not blacked out, and no amount of reasoning would make it so.
The same holds here, no matter how much more consistent this reality appears to be.
I think it does. Unless you seriously think you dream before you've ever had a single experience? Dreams are memory mashups.
> When you're having a dream, it is all an illusion (of the first kind), but to conclude that you are therefore not experiencing anything whatsoever and thus there's nothing to explain would be a grave mistake.
I've already acknowledged that humans "experience" things, in a colloquial sense.
What's left to explain is whether it's possible to capture what we term "experience" using third-person objective facts. We still need an account of how this happens, so there's plenty left to explain.
Thanks for the clarification. Perhaps we don't disagree in the way I thought. Let me see if I understand our divergence.
Because my experiences have a certain consistency, I infer an objective cause. Because this continues to happen, I become increasingly certain about this model (e.g., materialism). In brief, the way I model reality is the result of a particular configuration of experiences. I then use this model to explain the existence of experience itself. So far I hope we're on the same page.
Where I think the divergence happens is in whether a person considers the inferred model (materialism) or the "sheer phenomenon" (colloquially, "experience") to be more fundamental. I think I understand (and respect) why you and others feel it should be the former. Being the sort of person who spends time relaxing my inferences about reality (i.e., "meditating"), I feel it's the latter.
Anyway, I should probably go spend time doing something else. Apologies if I've mischaracterized your views.
> When you're having a dream, it is all an illusion (of the first kind)
How would you characterize a dream in which friends or family members are present? Are these not "facts about an external reality"? Presumably, you aren't dreaming about my family members, as you have never met them.
Yet here I am, dreaming about them for some reason. I would argue this is due to the "external reality" where my family members exist and I have interacted with them.
From these sorts of examples, we can conclude that whatever we think we are experiencing might not be what we are actually experiencing -- but crucially, not that we are therefore not experiencing anything whatsoever. The visual experience of a broken pencil doesn't stop happening just because you know it's not really a broken pencil. It keeps happening but means something else.
If you are dead honest with yourself, and drop all your metaphysics for a moment, you will discover that various things certainly seem to be happening (sights, sounds, thoughts, etc.). Try it right now! This "sheer fact" deserves an explanation, and though logical and metaphysical games can produce the explanation "there's nothing there to explain," many people don't consider that any kind of explanation. (Furthermore, I hypothesize that for those that do consider it a valid explanation, spending more time in that "dead honest" state, where there's minimal conceptualization of one's experience, will slowly eat away at their confidence in the usefulness of the "explanation." Conversely, thinking harder and harder about it may indeed bolster it.)
I'm curious what you think of it.
>> The trouble I have is that "modeling" can happen in the absence of what I'm calling "experience."
> This is the big claim. I'm highly skeptical.
Sure, it depends on what the word "modeling" means. The way I've always used it, it applies to lots of existing systems, including robots. Are they conscious? Maybe. I don't know. It seems easy to build a very simple system that trivially "models" itself. Does that mean it's conscious?
> As another angle, this seems closely related to identity.
As it happens, (lack of) identity is a critical realization on the Buddhist path (among others). If we look carefully enough, we can't find one. Instead, we find something like this. Take a perception like sound. Instead of discovering a self that hears it, it's more like the sound itself is made of the sheer fact of subjectivity; i.e., "made of" consciousness. There is no "I" that is conscious (though of course the concept still remains useful). Instead, consciousness is that property which [everything I could possibly call or think of as] the world is "made of."
> Meditation is very interesting. I've done some, and I'd like to dig into it more. However, I interpret the experience differently.
That's fair. There are different ways and degrees of doing it.
Ultimately, I think the distinction comes down to whether we treat "consciousness" as an abstract, objective property (that I assume others have), or this "sheer fact of subjectivity" that seems impossible to communicate (that I can know I have). One of these I'm happy to reduce to other objective properties, and the other I am not.
No, it would not be. I think everyone agrees what humans can do is fairly distinct (and many animals, to a much lesser degree). If it wasn't clear, I'm arguing it can be understood as an incredibly sophisticated form of modeling. A toy system can have a model of itself, but it will be a very weak and limited model with essentially no broader context. The models that the human brain constructs are immensely richer, with many orders of magnitude more complexity. I would argue that significant awareness requires this depth and richness.
> Take a perception like sound. Instead of discovering a self that hears it, it's more like the sound itself is made of the sheer fact of subjectivity; i.e., "made of" consciousness.
We are now getting into territory that feels more difficult to address. I don't know how to meaningfully interpret this claim that sound itself is made of consciousness. Do you mean to say that the phenomenon going on in the human brain is actually the same thing that is occurring in a wave of air molecules? I'd be curious to see what evidence you believe there is for this idea, beyond your personal meditation sessions.
Almost all of my daily life involves some sort of modeling and planning, which I think is why so many models of consciousness take that as a starting point. But I think it's important to incorporate non-ordinary states if we want to capture the entirety of the concept. (I'm not saying that the brain isn't doing some sort of modeling even when I'm in non-ordinary states, including contentless ones, but it fails to capture the interesting aspects of the phenomenon under question.)
As for the "made of" stuff, let me try a different tack. Sorry, wall of text coming.
For a moment, I want you to be as profoundly skeptical as you can. For example, you cannot be certain that matter, space, etc. exist. This could all be a dream, or illusion, or simulation, etc.
But notice that there is something you can be certain of. For example, if you close your eyes and listen to a sound (ideally a continuous one), you can be certain that sound is happening. But of course, we're being skeptical, so we better reduce that to "something is happening." Or if that's not skeptical enough, "it seems like something is happening." Be as skeptical as you can, and notice that there is still "something" that you simply cannot meaningfully doubt.
(At this point, sometimes people think "oh, well maybe it's not really happening," and in a certain philosophical sense, that's useful. But no matter how much philosophizing you do, still, there that [something/nothing/whatever-you-feel-like-naming-it] is. I don't know how better to put this.)
What is it I'm certain of? It's the happening-ness of it. Put another way, my conscious experience of it. To say "it is happening" is to say "I am conscious of it." Somehow, its very "existence" is inseparable from "my consciousness of it." Two descriptions of the same "sheer subjective fact." In this sense, "sound" refers to a particular modulation of this property called "consciousness," almost as though (get ready for woo!) consciousness were an energetic property that manifests as sight, sound, smells, thoughts, emotions, memories, etc.
This property is literally the only thing I've ever encountered. I'm willing to entertain various metaphysical explanations for the patterns it makes (generally known as "physical reality") but those hypothetical constructs are forever secondary for me. I can never know if they really exist, and find myself unable to reduce a certain phenomenon to uncertain phenomena.
I realize how nonsensical this sounds. Why couldn't the whole above story be some nonsense my (real, physical) brain is concocting? I don't have a good answer.
Totally with you on this.
> What is it I'm certain of? It's the happening-ness of it. Put another way, my conscious experience of it. To say "it is happening" is to say "I am conscious of it."
Still with you. Although being skeptical, I would say we should be quite careful in how much we are implying with a phrase like "I am conscious of it".
> In this sense, "sound" refers to a particular modulation of this property called "consciousness," almost as though (get ready for woo!) consciousness were an energetic property that manifests as sight, sound, smells, thoughts, emotions, memories, etc.
Thanks for the "woo" warning :). This is definitely where you lose me! I can't find any reason to make that jump. I don't see how that conclusion follows, or what reason there is to think that is the case.
For me, it is much simpler to assume (with some fundamental caveats) that what we see is what we get, especially if we are able to test it from many different angles and with many different tools of measurement. I understand that these tools are limited, but they can still tell us a great deal. In the domains where science can't make strong statements, I willingly lean back on accumulated human wisdom (but with a skeptical eye).
At the end of the day, I try to discard as many unsupported assumptions or beliefs as I can when constructing my worldview, to the best of my ability.
> Why couldn't the whole above story be some nonsense my (real, physical) brain is concocting?
This is a very good question. Cheers!
Sure. I'm more or less defining what I mean by "consciousness" in that statement. It is the indisputable fact that (put colloquially) "something seems to be happening." Since we're being skeptical, there's really no phrasing that cannot be disputed, so I can only ask you to do the experiment to see what I might be pointing at. Something definitely seems to be happening, even if in some other sense we might say it is not "actually" happening.
I no longer consider it meaningful to "explain" that definite property in terms of fundamentally uncertain metaphysical concepts like time and space, no matter how likely they appear to be. I appreciate that others may differ here. Thanks for the dialogue!
The trouble I have is that "modeling" can happen in the absence of what I'm calling "experience." Given that one can happen without the other, they cannot be equivalent.
So the trouble always seems to come back to giving a formal definition for "experience." I admit that I can't, but I can give a procedure (roughly, meditation) that seems to get other people to a point where they say "oh, that."
For a particular reason that I seem unable to communicate, at that point it becomes less meaningful to say that experience is "caused by" physical things as opposed to merely correlated with them. The more clearly I see what it is I'm trying to explain, the more clear it becomes why there's an "explanatory gap."
I'm sure this is terribly unsatisfying and sounds like hand-waving woo, and I'm sorry I can't do better right now.
This is the big claim. I'm highly skeptical.
The way I see it, the self-modeling that occurs in biological organisms _is_ the conscious experience. They are one and the same. Each momentary, instantaneous experience is the latest output of the self-modeling process.
As another angle, this seems closely related to identity. I see identity as being intertwined with, perhaps even stemming from, the accumulated impact and memory of these models. At least, the portions that refer to us. The boundaries can be quite fuzzy -- we are partially defined by our friends, etc.
Each instantaneous model changes the brain a bit, and over time this process constructs one's identity (starting with a tiny kernel at birth).
In summary, maybe we can say that "I" is simply the brain's current best guess at what the entity that houses it actually is, and that conscious experience is the brain's best guess at what this "I" is currently experiencing (caveat: these guesses are biased towards survival -- accuracy is not the only concern).
> but I can give a procedure (roughly, meditation) that seems to get other people to a point where they say "oh, that."
Meditation is very interesting. I've done some, and I'd like to dig into it more. However, I interpret the experience differently.
> I'm sure this is terribly unsatisfying and sounds like hand-waving woo, and I'm sorry I can't do better right now.
Not at all! I'm enjoying the discussion. Thanks for being frank.
As for closing gaps, do we really expect science to be able to explain everything in terms of some fundamental theory, from quarks to societies? Or maybe in terms of emerging complexity going from chemistry all the way up to social organization?
Certainly consciousness appears to be subjective experience. That doesn't make it so. See: http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00...
> As for closing gaps, do we really expect science to be able to explain everything in terms of some fundamental theory, from quarks to societies?
That certainly seems to be the trend. Every step forward in some field sheds light on other fields, and cross-disciplinary work is becoming ever more relevant.
That's an intriguing article, but it doesn't quite go far enough. It never really explains the experience of red or the smell of a rose, it just posits an attention model for having conscious awareness. It leaves the notion of qualia untouched.
Similar criticisms of Dennett have been made. But it does look promising. What needs to happen is an accompanying mechanistic explanation for the color experience itself.
Baby steps! Consciousness is an onion we'll have to peel apart one layer at a time.
There is no experiment we can construct which will tell us if a particular physical system experiences consciousness, be that a computer, a human brain, or a seemingly inert rock.
Only if you assume the conclusion in thinking consciousness is magic by making first-person subjective experience some irreducible fact of reality.
Your exact same argument would apply to "living matter" too. What distinguishes living matter from non-living matter? Some philosophers made a big stink about how living matter cannot be created from non-living matter, and they invented philosophies to address this question, like vitalism. Where is vitalism now? The same thing will happen with dualistic philosophies of mind.
If experience is irreducible, then something else is. In physics, that would probably be fields. Irreducible doesn't mean magic. It means we can't find anything to reduce it to.
Light is a physical thing. Physical light is interesting/useful because it interacts with physical objects.
Intelligence (or "intelligent behavior"; I'll just talk about that, and not exclusively-human-level intelligence, for this comparison before touching "consciousness") is usually characterized by the way people/things choose to act - by the information (such as the information encoded in a series of nerve impulses by a brain, or the purchase and sell orders by a trading bot) produced in response to the inputs. A simulation of intelligence that produces the same information in reaction to the same inputs as a different physical implementation shares the essential qualities that make the simulation as interesting as other implementations, as long as you can translate the I/O into physical/worldly interactions. Compare that with the much more limited usefulness of a simulation of a light bulb (unless you can translate its output into physical form, but the main way we know to do that is with a physical light bulb or something even more expensive).
Now it's possible that maybe consciousness doesn't share this same substrate-independence as intelligence; I just think there's a likely connection between intelligence and consciousness and that extends to qualities like this.
> Could consciousness gather around local "concentrations" of hyper-connected networks with a lot of feedback circuits in them?
Does it then influence the particles to act physically (or virtually) differently? The rules it uses to identify particles to manipulate and the ways it manipulates them would then just be considered a part of physics. (It could be that such a "consciousness field" could enable higher classes of computation than Turing machines are capable of. The natural thing would then be to try to figure out if a computer could be built that takes advantage of the consciousness field. But then... would an algorithm that uses the consciousness field to calculate a specific value and then complete be considered conscious, or to have moral standing like a human, just by virtue of interacting with the consciousness field? What about a computer that produces intelligent behavior and interacts with the consciousness field? I feel like people uncomfortable with this would then propose the existence of an undetected consciousness² field that gives humans something that makes them unique that the computer doesn't interact with, because it seems like that's the main reason to propose a regular consciousness field to begin with.)
If the consciousness field doesn't manipulate the particles but just passively exists near them... Then it can't have anything to do with why we talk about consciousness. Some chain of thought occurs in our brains that results in nerve impulses that makes us open our mouths to talk about "consciousness", and a non-interacting field doesn't have to do with that process by definition. Maybe it could exist separately, but it doesn't provide any answers in conversations we have.
Donald Hoffman also has interesting ideas:
"There is subjective experience. This is the primary and incontrovertible."
This statement is a fine way of starting a conversation and far from a silly thing to assume as a first pass.
But what happens if we try to square his claim that what he is presenting "avoids any a priori metaphysical assumption or bias" with the need to clarify what he means by "subjective"? Is subjectivity a means of knowing (from the inside)? The phenomenal aspect of consciousness (a rug appearing redly)? A way of categorizing facts (the domain of what can be seen from a perspective)? All that matters for now is that any of these responses will result in his first Grand Fact being completely different. And, further, that any answer will involve theoretical machinery, however implicit it is to most people.
What if we then step to the next word and argue that the concept of "experience" is both dispensable (we can do all explanatory work without it) and adds to confusion (by muddling causal and justificatory dimensions of our behavior) and that we should replace it with a family of more restricted concepts? So much for incontrovertible.
And what if we keep going with the second clause of the statement and counter that nothing, in general, can be primary and incontrovertible (that all concepts must earn their keep in explanation)?
It would be tedious---but easy---to do this for every one of the "facts of nature" which ground his pretty pompous exposition. It's easy to think this sort of response is just being clever, but being comfortable with everything being up for grabs and being more modest about what we are actually doing is really the only way an issue as hazy and messy as consciousness can be seriously approached. The playfulness used by Dennett or Hofstadter, for example, is not just superfluous or "mere rhetoric", but essential for the stage of understanding we're at.
An aside: I was only scanning for modest evidence that the author was aware of and responsive to the complexity of the issue to see if it was worth my time to read further, not that he actually explicitly addresses every possible move! Also, his substantive position, as far as I glean it, can still be interesting. FWIW I personally have idealist sympathies (appropriately construed) and think both subjectivity and experience do earn their keep...so far.
This seems to be the only claim I can make that doesn't assume anything. Of course, for that to really be true, I shouldn't even try to formulate it in language (or thought) but then it's hard to communicate.
I suppose the problem is that, without precision, it's impossible to know if we're talking about the same thing, and so we can't make progress. And yet the rest of his argument seems to make sense to me.
Take colors: colors map to wavelengths of light. I wonder if there is a good reason for our perceptions of color to have red be lower frequency and indigo be higher frequency. I guess what I'm wondering is if our brain mapped these differently, it would be suboptimal in some way - perhaps the 'mixing' of colors would work out less 'well' (e.g. red+yellow=orange wouldn't work in the new layout as well.) If so, then perhaps one could use evolutionary selection pressures as an explanation to lead us to the qualia we have.
Or have I perhaps just missed the whole point?
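As an aside on the mechanics of that mapping: the wavelength-to-color correspondence can be made concrete with a rough piecewise-linear approximation (illustrative values only, loosely following a scheme often attributed to Dan Bruton). It shows how "red is lower frequency, indigo is higher" cashes out as display colors, though of course it says nothing about why the computation is accompanied by an experience:

```python
def wavelength_to_rgb(wl_nm):
    """Rough piecewise-linear mapping from visible wavelength (nm) to RGB.
    Values are illustrative, not colorimetrically exact."""
    if 380 <= wl_nm < 440:       # violet: red and blue mix
        r, g, b = (440 - wl_nm) / 60, 0.0, 1.0
    elif 440 <= wl_nm < 490:     # blue fading into cyan
        r, g, b = 0.0, (wl_nm - 440) / 50, 1.0
    elif 490 <= wl_nm < 510:     # cyan fading into green
        r, g, b = 0.0, 1.0, (510 - wl_nm) / 20
    elif 510 <= wl_nm < 580:     # green fading into yellow
        r, g, b = (wl_nm - 510) / 70, 1.0, 0.0
    elif 580 <= wl_nm < 645:     # yellow/orange fading into red
        r, g, b = 1.0, (645 - wl_nm) / 65, 0.0
    elif 645 <= wl_nm <= 780:    # red
        r, g, b = 1.0, 0.0, 0.0
    else:                        # outside the visible range
        r, g, b = 0.0, 0.0, 0.0
    return (round(r, 2), round(g, 2), round(b, 2))
```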
What approach to accounting for qualia do you prefer?
The solution is not to embrace some version of dualism. Dualism has its own issues. The solution is to embrace a richer metaphysics. Aristotle provides such a metaphysics.
That's conjecture. The only arguments against physicalist explanations for qualia amount to a bunch of meaningless philosophical hand-waving.
Every thought experiment purporting to invalidate physicalism is also either fatally flawed, or simply points to a knowledge gap that we haven't yet explored.
None of the above justifies declaring physicalism "insufficient" or "unsatisfactory". People adopt anti-physicalist philosophies because they don't like the idea of being an automaton, not because the arguments against it are sound.
I suggest you acquaint yourself with the subject matter. You've only offered us a few uninformed claims and bizarre reactions.
I frankly don't see how your original comment could possibly be interpreted other than as an anti-physicalist position.
That said, a lot of Chalmers' acclaim has to be related to the inability of any of the rest of us to truly pose an alternative. The hard problem is hard because it seems something deep in science, and in reality itself, would have to give way to come up with a real solution.
An alternative is that qualia don't really exist (or don't exist beyond being a vague description for emergent properties arising from our cells) and that "consciousness" is a vague term with no clear definition. It's like if someone came up with a "hard problem of ghosts," and said that there wasn't any good alternative to where ghosts come from besides dead people.
It can be difficult to come up with an opposing theory as to why something exists if you don't believe that it actually exists. This isn't because the question is hard, but because you find the premise to be completely flawed.
When a line of argument is based upon the existence of an invisible element that can't be shown or detected, it starts to veer into the territory of religion. If there's no evidence beyond "I feel it must be true," it's important to acknowledge that others might feel differently.
All of that leaves off the red experience. There is no red experience in the physical world of things, any more than a tomato actually has an objective smell or taste. Those are all mental and creature-dependent (carrion likely smells and tastes wonderful to vultures but not to humans).
Somehow, this is strongly correlated with perception and the brain, but how is a deep mystery. This isn't to deny the brain or the eye's role in experiencing red, only that we don't have an explanation for how the red experience is present, when none of those things (or processes) are objectively red.
You claim that there's some other invisible force at work when this happens, but what evidence is there to back that up? It seems like the mechanical description does a pretty accurate job of describing what's happening. Neuroscientists who have studied this don't seem to be running into any issues. Not only is there no evidence for qualia, but the idea of qualia doesn't seem to even solve any particular problem.
Machines detect wavelengths of light. That's not what a color experience is. Photons aren't literally colored red or green. That's just how we see objects when the cones in our eyes are excited, sending electrical signals to our visual cortex.
This is more obvious with smell and taste, since that greatly depends on an animal's sensory apparatus. Nothing has an objective smell or taste.
> You claim that there's some other invisible force at work when this happens,
But I made no such claim. I'm only pointing out how the problem is hard. I have no idea what the answer is.
Right. We detect certain wavelengths, which activate certain receptors, which send out certain signals, which cause certain neurons to fire. The "color experience" is the neurons firing and how that interacts with other neuronal activity. I'm not seeing where the need for qualia is, or what's particularly hard about this.
This is one way to go which possibly solves the problem. But it's not without difficulty. One is that it's hard to see (pun unintended) how the explanation for neuronal activity is the same thing as having a red experience.
Second problem is that it makes the physical system of brain activity special. Searle might be down with that, but it won't help machine detectors have a red experience. Or maybe even aliens, if they're made of something other than meat.
It's also hard to see what makes neuronal activity any more special than the cells in your toes or molecules in a rock. Why can't a rock have a red experience when interacting with light at that wavelength?
I guess I don't understand why that's hard to see. It seems pretty straightforward to me. We know that neuronal activity is necessary for the experience, we know that it coincides with the experience, and we know that changing how neurons operate can affect the experience. It doesn't seem like a stretch to say that they are the experience.
> Second problem is that it makes the physical system of brain activity special...It's also hard to see what makes neuronal activity any more special than the cells in your toes or molecules in a rock. Why can't a rock have a red experience when interacting with light at that wavelength?
If we're defining "red experience" as the neuronal activity that humans have when they see red, you're wondering why a rock can't have that? I mean, because it's a rock that doesn't have a single neuron, let alone a human brain?
But beyond that - are there people that actually argue that brain activity isn't special? You're not going to have a "red experience" if you have no brain activity.
Because brains are made up of ordinary matter, just like rocks. If it's the behavior of neurons that's special, then consciousness is no longer identical with neurons, it's identical with any physical system that functions the same way. And then you have the possibility of very counterintuitive arrangements, like a billion Chinese instantiating a blue experience, or a meteor shower simulating experiences.
> We know that neuronal activity is necessary for the experience, we know that it coincides with the experience, and we know that changing how neurons operate can affect the experience. It doesn't seem like a stretch to say that they are the experience.
But what is it about neurons that makes them experiential? And only some neurons, because a lot of neuronal activity is not conscious.
Sure, I don't think brain uploads are impossible. If we were having this conversation in 100 years, I might be saying that "red experience" is the result of neuronal/circuit activity. But "functions the same way" is the relevant part - a rock does not function the same way. I don't think either of us expects a human to act the same way if their brain is replaced by a rock. On some level, we both know that the brain is fundamentally different from a rock.
> And then you have the possibility of very counterintuitive arrangements, like a billion Chinese instantiating a blue experience, or a meteor shower simulating experiences.
I think we can both agree that a computer is merely a physical object. That doesn't make the idea of surfing the web or playing a video game on "a billion Chinese" or "a meteor shower" any less counterintuitive. Imagining any incredibly complex system being completely simulated by random physical phenomena is bizarre.
And the human brain is much, much more complex than a laptop. Really, go read up on it - 86 billion neurons, 100 trillion synapses, neurons firing 200 times a second. Consider the work it takes to simulate one second of brain activity (with far fewer neurons and synapses than a human brain):
> The simulation involved 1.73 billion virtual nerve cells connected by 10.4 trillion synapses and was run on Japan's K computer, which was ranked the fastest in the world in 2011.
> It took the Fujitsu-built K about 40 minutes to complete a simulation of one second of neuronal network activity in real time, according to Japanese research institute RIKEN, which runs the machine.
> The simulation harnessed the power of 82,944 processors on the K computer, which is now ranked fourth on the biannual international Top500 supercomputer standings (China's Tianhe-2 is the fastest now).
You're far more likely to see the dust in the air randomly play Casablanca for you from start to finish than to see a meteor shower randomly simulate the human brain. The human brain is complex. Really, really, really complex. So complex that weird emergent stuff like "red experience" happens. People have a hard time conceptualizing things when they become so vast; that's understandable. But there's no need to invent invisible qualia simply because we have a hard time understanding things on this scale.
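The quoted K-computer figures make that scale concrete. Here's a quick back-of-the-envelope sketch in Python (purely illustrative; it assumes simulation cost scales linearly with synapse count, which is a crude simplification):

```python
# Figures quoted above: 1.73e9 neurons, 10.4e12 synapses,
# ~40 minutes of wall-clock time per 1 simulated second.
sim_synapses = 10.4e12
wall_seconds = 40 * 60          # 40 minutes of compute...
sim_seconds = 1.0               # ...for 1 second of brain activity

slowdown = wall_seconds / sim_seconds
print(f"slowdown: {slowdown:.0f}x")  # 2400x slower than real time

# Naive linear extrapolation to a full human brain (~100 trillion synapses):
human_synapses = 100e12
est_slowdown = slowdown * human_synapses / sim_synapses
print(f"estimated full-brain slowdown: {est_slowdown:.0f}x")
# ~23077x, i.e. roughly 6.4 hours of compute per simulated second
```

So even ignoring everything we don't know about how neurons actually work, the raw bookkeeping alone is staggering.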
goatlover has explained it well: Subjective awareness exists. "Qualia" is a name for the content of awareness. We know awareness exists because we have it (or are it.) We literally know nothing except as qualia.
The content of awareness is obviously mediated by our nervous system, but the subjective experience of "redness" is a mystery. (To take a simple example.)
Why is red red?
edit: Chathamization what do you mean when you say "qualia"?
Well, as I said in an earlier post, I'm talking about the existence of qualia "beyond being a vague description for emergent properties arising from our cells." People have a sense of self, sure, and people experience things. But usually the people who bring up "qualia" seem to think that our sense of self can't be explained as simply being neuronal activity.
For instance, "redness" doesn't seem to be mysterious at all to me. Our brains fire in a certain way when presented with certain stimuli. As for why they fire in that particular way, it's complicated because the brain is complicated (and each person's brain is unique). But it's a complicated physical system (and one that we're gaining a better understanding of each day).
Light of a certain frequency, neurons, "and then a miracle occurs", and "I" experience red...
With all due respect you're eliding that bit in the middle.
In my understanding, this "miracle" is human general intelligence. I think that any dynamic pattern-matching and model-building entity will experience a certain degree of "consciousness". Conscious experience is the entity's internal modeling of its own sensory input. And likewise, conscious thought is the entity's modeling of its own internal state and processes (including the modeling process itself).
This includes things like seeing my friend walk down the street. That sensory input is vastly more difficult to model than an alternative scenario where I'm looking at a red apple resting on a table.
When people first witnessed the steam governor, it was common to say that it seemed alive.
What you and the other guy are basically saying is that if we pile up a bunch of cybernetic devices together, you eventually pass a threshold where subjectivity (aka consciousness) begins.
Now, I don't agree nor do I disagree.
My point is that we are far from having a scientific theory that says how this happens. It's no good to wave one's hands and say, "Complexity!" or "Model-building." I want to know why red looks red as opposed to some other sensation.
We can't even describe sensory inputs! We can only correlate our experience with the reports of experiences of others. Maybe you see "green" and I see "blue" when we both look at a "red" object. How can we "do science" to/on subjectivity?
Really, I'm just glad that people (in the West) are finally taking this seriously.
Edit: my original point was just that "qualia" (which I use as a name for the content of awareness) exist. I don't know what it would mean to deny the existence of the contents of awareness, since we quite literally only know anything by way of those contents.
The gist of this seems about right, although the configuration of the components is likely to be just as important as the raw quantity.
I would also argue that there is a spectrum or gradient of consciousness (or awareness, sentience, etc), likely without a clear starting threshold. As an analogy, perhaps it is similar to how the boundary edge of a cloud in the sky is impossible to define, but there isn't much disagreement about which parts of the cloud are bulky.
> my original point was just that "qualia" (which I use as a name for the content of awareness) exist
For you, qualia is synonymous with awareness? I've generally heard it described as the ephemeral aspects of subjective experience.
I'm fairly skeptical of the way many people talk about "subjective experience", as there often seems to be a large amount of unspoken assumptions involved. The biggest one being that there is an irreducible, singular entity or "subject" having the experience.
I find awareness to be a concept with fewer assumptions --
to me, it doesn't imply a "subject" to the same degree. Thus I'm more comfortable with your definition of qualia, although I wonder if others share it.
If we recast the subject as primary vs secondary qualities, where primary qualities are non-observer-dependent properties like number, shape, extension and secondary are observer-dependent properties like color, sound, taste, then it's easier to understand the issue.
Science provides an account of the world that is colorless, soundless, tasteless. But we experience the world as having colors, sounds, tastes, so where do those come from?
I argued a few comments above that those experiences are the product of the powerful modeling apparatus of the human brain, the same thing that likely enables our broad intelligence, turning inward on itself.
Those experiences are the manifestation of the system modeling itself: its sensory inputs, internal states and processes, and, in a recursive fashion, some of the modeling processes themselves. The richness of these experiences gives us a clue at how sophisticated the modeling ability of the brain actually is.
No, it isn't.
> There is no red experience in the physical world of things
There is no way of showing that the subjective “red experience” is not simply a label applied to a particular set of conditions in the “physical world of things”.
> This isn't to deny the brain or they eye's role in experiencing red, only that we don't have an explanation for it.
There's a huge difference between “we can't explain the entire relationship between physical conditions and subjective experience” (which is true but does not require qualia) and “subjective experience involves something outside of physical conditions” (which is unprovable—not merely in practice, but in principle—but necessary for qualia to be a real, nonphysical thing.)
This can be turned around to say that there's no way of showing that a particular set of conditions in the "physical world of things (or processes)" isn't an abstraction from subjective experience.
Either way, you have an explanatory gap. I know I have red experiences. I know there are abstract explanations for how my visual system works, how objects reflect light, and so on, making it possible to see objects as colored. I don't see how the abstract explanation (the scientific one) explains my having a red experience, because an abstract explanation isn't the same thing as having an experience.
Sure, it can, and if someone said it was easy to show that objective reality exists independent of subjective experience, that would even be relevant.
Of course, the difference is that the concept of an objective reality is a useful abstraction from subjective experience (it's what lets science, which has utility in predicting future experiences, exist). Qualia offers... what?
A way to clearly demarcate subjective experience. We don't have to use the term "qualia", but then people tend to use words like consciousness differently, and the debate ends up being a semantic one.
> the difference is that the concept of an objective reality is a useful abstraction from subjective experience
Agreed, and I think there is a real, objective world independent of our experiences. But to get at it, we do have to abstract from our consciousness. That works really well for most things, until we turn it around and try to explain the nature of our subjectivity. Qualia is a philosophically specific term meant to highlight that difficulty.
What do you mean when you use the term "exist"?
I cannot fathom the semantic meaning of "qualia don't exist".
Either you are the legendary "P-Zombie" or you're using the words fundamentally differently than I am, I think.
I hope this doesn't sound offensive, and apologize in advance if so. I am genuinely and sincerely curious about this.
> I don't see any advance in this debate since things that Dennett and Hofstadter wrote 20+ years ago (both independently and in their co-authored book "The Mind's I").
> Is it really surprising that we have a first person subjective experience? We know that we are incredibly complex things, constantly integrating and acting on very complicated external stimuli. Such a system should have references to its own body and its own neural states, its train of reasoning should frequently include itself, its focus will drift forward and backwards in time... this is just how a system like this would work. If the system communicates about its state then its language should have referents to these internal states, referents like "experience", and "feels like", and "I understand". Is that surprising? Wouldn't it be surprising if it wasn't like that?
> I think that Tononi's approach is a good approximation, but it can't be a full solution because the word 'consciousness' is too anthropocentric. One criticism of IIT showed that a seemingly uninteresting complex artificial system could have a very high IIT complexity quotient. The problem is that the things we use to define the term 'consciousness' are things that can be approximated to varying degrees by chimps or dolphins or generative adversarial networks or ant farms or thermometers. But behind our use of the word 'consciousness' there is still almost always a very slightly disguised dualism that uses it as a substitute for the word 'soul'.
Therefore, I think I agree with what Manzotti is saying here. Certainly I think he is spot-on with this:
>Essentially, when Chalmers so dramatically announced “the hard problem,” insisting that we had no solution to the question of consciousness, he simultaneously assumed that the constraints governing any enquiry into it were already well defined and unassailable.
The most frustrating part of the debate is that most educated laypeople and some professional philosophers will still say with great confidence: "consciousness is a mystery/we don't know what consciousness is/ we don't understand how the brain works".
I don't think these statements stack up against Dennett and Hofstadter's work, and we have learned an incredible amount about how the brain works in the last 20 years, both through human neuroscience and computational neuroscience, and philosophers of the Chalmers type are wilfully ignorant of it.
That abstract idealization fails to capture the subjective experiences of color, sound, pain, pleasure, etc, because it's been divorced from them to gain a God's eye view from nowhere, which has no color, sound, feels, etc. The problem is a very deep philosophical one, which goes to the heart of objectivity vs subjectivity, and has been around in some form since ancient philosophy, both East and West.
In a way, we are observers, that closely observe ourselves.
Manzotti: Of course, and Chalmers on a number of occasions has espoused dualist positions, the idea that the world is divided into separate “realms of reality.”
Has there been any substantive new work on the consciousness question? This feels like someone finding something old and then trying to shoehorn it onto computing.
The idea is to develop more and more complex computers with numbers and arrangements of connections similar to those of the brain, until consciousness “emerges.”
It's not even worth discussing this anymore.
Some might say "even if you knew what was inside, it would simply be meaningless - in fact, the only part of your unconsciousness that you could understand would end up being exactly the same as your conscious experience". Wittgenstein, asked what we would understand if we could listen to a lion speak, answered that it would lack any sort of meaningfulness to us whatsoever. It's not unreasonable to assume that a memdump of the unconscious would likewise provide little insight into itself. If interpreting the unconscious is impossible from the inside, because it has an illegible presentation within your own mind, then there's no guarantee you could recast it into an internal, understandable grammar just because it now appears outside of your head.
Maybe consciousness and unconsciousness are the ends of a spectrum that measures something like "the degree of sparsity of error correction (i.e. error-correcting codes) necessary for providing the illusion of singular focus while recovering information from billions of sources". Maybe there is a compressive-sensing-like sampling method that allows neurons to recognize a cat through fairly short chains of neurons, via an object-recognition approach that exploits the sparsity of the signal to greatly reduce the computation required?
Hell, maybe consciousness is even way trippier than that, and emerges from neural approximations of dynamically reconfigurable circuits that verify zero-knowledge proofs.
Problem is that none of these explain where and why what is felt feels what it does, though!
For those who anticipate an AI singularity: it may be harmful to the universe when we decide on an objective function for our GAI that optimizes specifically for human experience.
Humans were produced by a process: reality. We really should optimize the process, if possible, and not just human or human-biased experience.
> Parks: Quite an achievement....
That's a pretty decent feat, yes.
To be honest I thought it was crackpot-ish nonsense the last time I looked into it, but I'll give it another go.
First impression: gimmicky and just playing with language (as in so much of modern philosophy), but I prefer it to Nagel or Searle or Chalmers; at least it's fucking consistent:
>Emily sees a red apple floating in the air. However, there is no red apple in front of her. She is hallucinating. Is this a case of experience without an object? No. Emily perceives an apple that she met some time earlier in her life. For instance, she experiences the apple that had been on her table yesterday. Crucially, she perceives it, she does not re-imagine it!
Fine. That's not really how people use the word "perceive", but if that allows you to accept how our "consciousness" can be a real-world thing, then it works. Amusingly, the "spread mind" has a superficial similarity to one of Chalmers' own ideas, the "extended mind". But I prefer Spread Mind because it is physicalist and incorporates the obviously correct notion of embodiment rather than taking the step (which Manzotti correctly diagnoses as insane) of separating the cognitive and the conscious.
This, linked from the article, is terrific too.
If you're interested, there are several high-quality popular philosophy podcasts as well: https://www.reddit.com/r/philosophy/comments/11zcba/the_best...