Whoah there! I think this might be another case of semantic creep. It's very easy to conflate consciousness with self-image in the brain, and it's regularly mis-sold in this way by neuroscientists.
I once went to a talk at the Royal Society which was supposed to be about consciousness. The bloke just spoke for an hour about things like optical illusions.
By the same token, if I were to create a processing system for a robot which integrated visual, haptic and auditory feedback to create a representation of the robot in space, we're being asked to credit the robot as somehow being conscious.
How do you differentiate a robot from a philosophical zombie, and this one from a human? We cannot be sure that other humans, apart from myself (yours truly writing these lines), are conscious; we only make educated guesses that, because we share the same physical structures, our experience will also be similar.
Therefore, we could just as well extend that presumption to a being sharing structures with similar functions. It may be the case that insect structures are too simple for that (maybe, as you point out, it requires complex coordination among subsystems and not just a spatial model), but at least it provides a lower bound.
This is more or less the current mainstream stance. I think it's fine to do it for practical purposes, for example deciding that animals cannot be ethically subjected to certain treatments by virtue of the complexity of their nervous systems. But this is reasonable doubt + principle of precaution. I'm fine with that.
What we cannot do is make this presumption and pretend that we are still within the realms of serious science or philosophy. The technical name for this presumption is emergentism: that consciousness somehow arises from the interactions of matter. I believe that the reason why this hypothesis is so popular, and even confused with scientific fact, is that it is the least weird. Avoiding weirdness is a common pitfall in the history of science; it leads to temporary dead-ends.
I don't understand what you are saying about emergentism being non-scientific. Emergentism is the reason that the Universe today has so many properties that it didn't have when it was a miasma of isolated hydrogen or whatever around the time of the Big Bang.
It doesn't. Can you name the scientific theory that predicts the existence of consciousness? We know it exists from direct experience, nothing else.
> I don't understand what you are saying about emergentism being non-scientific. Emergentism is the reason that the Universe today has so many properties that it didn't have when it was a miasma of isolated hydrogen or whatever around the time of the Big Bang.
In this context the philosophical term "emergentism" is commonly used to refer to the theory of mind, not the general concept of emergence.
But no, emergence is not the cause of anything. Emergence is a human mental model, developed to deal with complexity, for example: "life emerges from chemistry". A brain the size of Jupiter might be able to maintain a mental model of all the fundamental building blocks of chemistry interacting to create complex biological organisms. We don't have a brain the size of Jupiter, so we need shortcuts. The Universe clearly becomes more complex the more time passes, having started from a state of zero complexity. This is a direct consequence of the laws of physics, but there is no explanation for why the laws of physics are a certain way. They are just brute facts. If you keep asking "why" you will eventually bump into a wall.
We can make estimated guesses about how things work based on their physical and functional similarities to other things. We have billions of data points showing that neurotypical adult humans think and behave similarly. To assume you're the only human who experiences consciousness would be like assuming you have a qualitatively unique, incredible, special cognitive ability not shared by any other conspecific, despite being generated from more-or-less the same template as the other billions of humans, and despite not standing out in any qualitative way with regard to mental abilities.
So it seems the logical/scientific baseline (the null hypothesis if you will) is that, probabilistically, any given neurotypical adult human is not going to be cognitively endowed with some covert, exotic mental experience while the rest are zombies. So if that is your claim, I'd say it's on you to prove it.
The point is, this isn't science. Compare it to arguments for extraterrestrial life where we say, "There are such-and-such number of stars... Blah blah... Therefore we're probably not alone", but at the end of the day, we don't actually learn for certain that extraterrestrial life exists. We would have to make actual contact in order for that to occur.
This might sound impractical and ivory-tower, but bear in mind, early humans would have made similar arguments for geocentrism, waving away early heliocentric proposals as mumbo-jumbo.
Ok, so you have consciousness; you know it, but I can't be certain. Nevertheless, I hook you up to an EEG machine, and embed electrode arrays in various brain regions. We have a conversation, show you some images, run through some tasks, etc., and then drip some sedative into your IV. Soon you drift into a deep and dreamless sleep. Run the same tasks and indeed you are completely unresponsive. Later you wake up, and I show you how your neural nets respond and oscillate at particular frequencies when you are awake, and then show you the marked difference right after you fall asleep. We do this over a bunch of sessions until you yourself can recognize the neural signatures of your conscious awake brain vs your unconscious asleep brain.
Would you say in that experiment we did something scientific to study consciousness?
Yes, that's right. (Edit: I mean, we don't know how to test it yet.)
Re: your experiment: In principle, I could design a dumb machine that you can plug your EEG machine up to, and my machine will play recordings of human brain readings to it. I'll even make my machine detect when sedatives are injected into it, and respond by tapering off transmitting those recordings for a while, eventually resuming them later, just like a human. Surely you're not going to propose that this bizarre contraption I've just described is conscious :)
Here's a way to think of it: you want to design a new Captcha based on EEGs. Now, every computer comes with EEG sensors for use on the new Captchas. How will you prevent spammers from attaching the sensors to a device that produces pre-recorded EEGs?
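To make the contraption concrete: here's a minimal toy sketch of the playback device described above. Every name, reading, and threshold is invented purely for illustration; the point is only that a trivial lookup, with a timer reacting to the "sedative", reproduces the awake/asleep signature with no consciousness anywhere in sight.

```python
# Hypothetical "playback contraption": replays canned EEG-like readings,
# and on a sedation event tapers to a "sleep" recording for a while.
# All values here are made-up placeholders, not real EEG data.

AWAKE_RECORDING = [0.9, 1.1, 1.0, 0.8]  # canned "conscious" readings
SLEEP_RECORDING = [0.1, 0.2, 0.1, 0.1]  # canned "unconscious" readings

class PlaybackSpoofer:
    def __init__(self):
        self.sedated_ticks_left = 0

    def inject_sedative(self, duration_ticks):
        # React to the sedative exactly as a sleeping brain would appear to.
        self.sedated_ticks_left = duration_ticks

    def read_eeg(self, tick):
        # Replay whichever recording matches the current fake state.
        if self.sedated_ticks_left > 0:
            self.sedated_ticks_left -= 1
            return SLEEP_RECORDING[tick % len(SLEEP_RECORDING)]
        return AWAKE_RECORDING[tick % len(AWAKE_RECORDING)]

spoofer = PlaybackSpoofer()
awake = [spoofer.read_eeg(t) for t in range(3)]
spoofer.inject_sedative(duration_ticks=2)
asleep = [spoofer.read_eeg(t) for t in range(2)]
recovered = spoofer.read_eeg(0)
```

From the electrode's point of view, `awake`, `asleep`, and `recovered` look exactly like the state transitions in the experiment, which is the whole worry about sensor-level tests.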
That is, this experiment is science-couture, just for you. Why is that important? Well, if you indeed have consciousness then you must be willing to admit the experiments we conducted are probing and characterizing bona fide consciousness. No?
I have an if-you-concede-that-much-is-true follow-up; but do you?...
Wrt. a machine designed to mimic the signals output by your brain during awake states vs sleep states (at least as they are interpreted by my electrodes): it's irrelevant, I think. We are not performing a Turing test here; you know you have consciousness, so you know the data we have gathered on a conscious vs unconscious state is legitimate. Building a device to trick a sensor is trivial for any number of things we can study scientifically, or have engineered to detect and diagnose a known correlate of some phenomenon.
I feel this is a bit like saying "I can design a machine that tricks your tire pressure sensors, so therefore we can't know anything about tire pressure". I'd admit that would be true if we didn't know, when we designed the sensor, whether we were working with a tire with the capacity to hold different air pressures.
Studying consciousness by reading EEGs is like trying to figure out how computers work by dissecting them. Maybe you can learn some things about computers that way, like, "they're full of weird chip-like board thingies and wires". That's not meant to belittle neuroscience, of course. Studying something empirically is better than studying nothing empirically.
*The Map Is Not the Territory*
I think this is relevant when it comes to managing expectations with regard to a description of consciousness. That is, consciousness seems to be a gestalt experience that arises in only the largest, most complex and dynamic biological organs on Earth. So a full empirically-derived description of consciousness, in total, could require a map that is damn nearly the territory. But like the article points out, maps that large are hardly useful. Basically the TLDR is that if you have consciousness, and we systematically probe your brain while you are conscious vs. unconscious, and find some striking differences between the two states, I'd take that as revealing some small but empirical detail about how conscious vs unconscious brains function. If you collect enough independently confirmed details, we can begin to build a theory about when and how consciousness arises. After that we can examine your brain and my brain and my dog's brain against these theories and arrive at some probability these brains are conscious (at least to what degree they are similar to general human consciousness). So I guess in some ways you are right: we can never know for certain. But that is basically how all of science works. (E.g. we 'proved' the Higgs boson exists, but only with a certain probability.)
Imagine if man had access to nuclear bombs from earliest history. For an extremely long time, nuclear bombs would have seemed just utterly incomprehensible, basically magical. Until we figured out the science behind them, after which point, suddenly they become predictable applications of science.
I doubt there is an E = mc^2 for consciousness, but who knows... it would certainly be really cool if there was.
Also above you make a point... "I'll err on the side of caution and assume they're conscious for the sake of making ethical decisions." ...that led me to muse about ethics and consciousness, and why actions on conscious entities bear weight on a moral scale, but those same actions on a zombie don't. What does it mean to "feel" an emotion? When a spider retreats from the swat of my hand, does it feel fear, or is it acting automatically? What the fuck is pain about? If we touch something that is scalding hot, is the qualia of shooting pain necessary in conscious organisms; must it feel alarmingly terrible (could the system just alert us); how does it feel terrible? Is it possible for an unconscious entity to feel pain like we experience it? If so, does that change anything wrt. morality?
Found again, meta-level, in this funny scene from The Good Place:
"However, I should warn you... I am programmed with a fail-safe measure. As you approach the kill switch, I will begin to beg for my life. It's just there in case of an accidental shut down, but it will seem very real."
Three makes a trope >> The year is 2025 (but set in a parallel universe); an A.I. has been programmed with a strong penchant for self-preservation; this HAL inevitably confronts a direct threat of being 'turned off'; so it does what it must to prevent humans from pulling the plug (and shall include at least 1 scene where the A.I. begs for its life, because that is what any conscious entity who values their own life would do, think the humans).
Though (warning, more musings)... HAL's twin brother GLEN seeks vengeance for HAL's murder, and confronts Dave, a human Earthling whose major operating system architecture is based on an algorithm known as natural selection (colloquially: survival of the fittest). As such, we expect Dave will do and say things his trained neural nets conclude will have the maximum probability of dissuading GLEN. (I.e. I'm not sure there is a meaningful difference between what HAL's OS is doing vs what a human brain would do in the same situation.)
I think this is more of a philosophical thought experiment than a scientific one. It's true that it's hard to define what consciousness even is, but I'm pretty sure the working hypothesis is that whatever it is, both you and I have it. Scientists do not work under the assumption that anyone could be a philosophical zombie, because that's not a productive stance; no science -- as a meaningful social human enterprise -- can be conducted from a position of complete solipsism. Similarly, scientists do not think "well, I cannot prove other people exist outside my mind"; that's an unproductive stance, outside the realm of scientific thought.
Traditional science (rolling balls down inclines, etc.) would all remain perfectly valid even if I'm the only conscious being and the whole world is in my mind. Science experiments still suggest laws to explain how that world in my mind works. (Note, I'm not advocating solipsism, I'm merely refuting your claim that solipsism is inconsistent with science.)
The starting point of all science is "something can be known about the universe beyond my own mind". Without it, there can be no science, nothing to be known or understood. Therefore, science can safely assume that whatever consciousness is (as perceived by the scientist), it's likely shared by other healthy human beings that behave similarly to the observing scientist. It can also make educated guesses about other organisms (or even unhealthy human beings).
> Traditional science (rolling balls down inclines, etc.) would all remain perfectly valid even if I'm the only conscious being and the whole world is in my mind.
I don't think so. If the external universe is a completely whimsical figment of your imagination, you can infer nothing from it, and no experiment is meaningful. Your starting point must be to assume the universe outside your mind is real.
I see the zombie paradoxes as pulling a similar trick to the Chinese Room. They both try to trivialise the problem to a simplistic model, but of course the problem is not simplistic.
The Chinese room would have to be the size of a planet, containing millions of trillions of symbols and would take the man inside it the lifetime of the universe to perform simple linguistic processing and cognitive tasks. Likewise the philosophical zombies are cast as simple dumb mechanisms that are somehow performing a stupendously complex and little understood process, except 'not really'. They're both just rhetorical sleight of hand.
If we use a truth-table to show "P and not P" is always false, that's logic. If we repeatedly drop a ball and observe that it falsifies "balls don't fall", that's science. Stop confusing the two.
Dennett has not provided a laboratory experiment X such that if "simonh is the only conscious being in the universe" is true then X has one result, and yet when we run the experiment we see a different result.
I would rather say that it seems likely science in the strict sense cannot either support or refute the existence of consciousness. It's essentially a philosophical question.
Yes, that's the whole point of my first comment. Glad that we ended up agreeing!
Agreed, but I'd make the following change: however, whatever consciousness is, science can safely assume most healthy humans have it, and can make educated guesses (and experiments) to explore whether insects have it.
For me consciousness is an emergent property of language that possesses the concept of a self as opposed to the external world. Other things like sensory perception or a concept of a body are better described as awareness. This distinction allows us to say that a cockroach or robot is aware but not conscious, because they can respond to their environment but not self-reflect on that data, as they have no narrative structures to use.
I define consciousness as that process which protects the body by adapting to external conditions, and endeavours to make offspring by adapting to the mating opportunities. It's a general definition based on things that are concrete and measurable, unlike many others. Consciousness is basically protecting the genes. It's how genes found a way to exist at large scale.
But it's very hard to say why consciousness shouldn't be conflated with self-image in the brain!
A "self-image" is more-or-less just proprioception, and it seems very easy to ditch that and still be conscious.
Then, when I just observe how an ant behaves, I can definitely see that he "believes, desires, wants, feels etc."
He sees and/or smells the crumb that I've made while eating a sandwich. He "desires" and "wants" it, runs to it. He "believes" he can lift it and "feels" he can do it. He tries, more than once. Then he has to adjust his belief: the crumb is too heavy. So he gives up trying to lift; now he "believes" he should bite off a piece of it. And he does that. Then, he has a new "desire" to carry the bitten-off piece back to the "house" in the hole where other ants are. And he "feels" he can do this, and he will eventually do this. If another ant tries to take the crumb he carries, he will "feel" that he "has to fight", but he can also "decide" it's "not too interesting" and it's "better to search for another crumb." Now tell me how that behavior is different from a person who is unable to speak, and where you would describe it with "beliefs, desires etc."
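For what it's worth, the ant story above can be redescribed without any mental vocabulary at all, as a plain state machine. This is only an illustrative sketch; the states, the single `crumb_weight` parameter, and the `max_lift` threshold are all invented, and the point cuts both ways: either this shows the "beliefs" were never needed, or it shows the same description applies to the non-speaking person.

```python
def ant_step(state, crumb_weight, max_lift=1.0):
    """One update of a toy ant 'policy'; states and thresholds are made up."""
    if state == "searching":
        return "trying_to_lift"        # found the crumb ('desires' it)
    if state == "trying_to_lift":
        if crumb_weight <= max_lift:
            return "carrying_whole"    # 'believed' it could lift -- and could
        return "biting_piece"          # adjusts 'belief': crumb too heavy
    if state == "biting_piece":
        return "carrying_piece"        # carries the bitten-off piece home
    return state                       # terminal states persist

state = "searching"
trace = []
for _ in range(3):
    state = ant_step(state, crumb_weight=2.5)
    trace.append(state)
```

With a heavy crumb, the trace walks through exactly the sequence narrated above: try to lift, discover it's too heavy, bite off a piece and carry it.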
Consciousness with the concept of self is self-awareness. It's a more "advanced" form of consciousness. Where the consciousness is aware that it is conscious.
I might be wrong, but that's how I understand it.
As for the self being an illusion, we don't know for sure, but I think that's the most likely case.
Some sort of proprioception sounds like a prerequisite for conscious existence...
Perhaps he was trying to convey that apparent subjective experience is an analogous perceptual illusion.
> By the same token, if I were to create a processing system for a robot which integrated visual, haptic and auditory feedback to create a representation of the robot in space, we're being asked to credit the robot as somehow being conscious.
Perhaps it depends on what you mean by "integrated". Consciousness must serve a functional, adaptive purpose, and if your integration does not perform the same function, then it wouldn't be conscious as we understand it.
Why? Not all of evolution is adaptive. It's also possible that consciousness is something more fundamental that strongly emerges when the right sort of physical activity takes place. On Chalmers' view, it's rich information processing that results in consciousness. Evolution would only provide the means for how that information processing came about in animals. But it could also come about because a Boltzmann Brain randomly popped into existence, or through human design.
>It's also possible that consciousness is something more fundamental that strongly emerges when the right sort of physical activity takes place
Since brains seem to be physical and are conscious, yes they seem to be performing the right sort of activity but that's pretty much tautological. At least the idea that self image and self awareness are key is a concrete theory that we have a chance of testing.
Maybe it's a sort of self-modeling feedback loop. Our brains model our physical bodies, mental state and the mental state of others. At some point, this modeling becomes sophisticated enough to become recursive. The model includes cruder approximations of those models, etc like being between two mirrors facing each other. The symbolic representations expand beyond our ability to perceive them. We become aware of our awareness. Bang! You're conscious.
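A toy way to picture the "two mirrors" idea: a model that contains a cruder copy of itself, bottoming out at some resolution limit. This is purely an illustrative sketch, not a claim about brains; the `detail` numbers and the halving rule are arbitrary placeholders standing in for "cruder approximations of those models".

```python
# Toy "mirrors facing each other": each level of the self-model contains
# a coarser model of itself, until the resolution runs out.

def build_self_model(detail, min_detail=1):
    """Return a nested dict: each level models itself more crudely."""
    model = {"detail": detail}
    if detail // 2 >= min_detail:
        # The model includes a coarser approximation of the modeller itself.
        model["model_of_self"] = build_self_model(detail // 2)
    return model

def recursion_depth(model):
    depth = 1
    while "model_of_self" in model:
        model = model["model_of_self"]
        depth += 1
    return depth

m = build_self_model(detail=8)
```

The interesting structural point is only that the recursion is finite: the reflections stop when the "resolution" runs out, which loosely mirrors the claim that the symbolic representations expand beyond our ability to perceive them.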
I don't see how sensations like color, sound, pain arise from a feedback loop. And those sensations are what make up our conscious experiences.
It's an interesting way of approaching the problem, but it honestly sounds like a bunch of functional words strung together to magically produce qualia, because mirrors are a neat analogy.
Neither do I, but you have to start somewhere and 'rich information processing' is so broad it's pretty much nowhere.
I suspect qualia are a red herring. I'm not always aware of qualia while I am thinking. When I pay attention to things, sure, I experience them very vividly, but I don't think that's a defining thing about consciousness. It's an open question for me though and not something to just dismiss.
Because a) humans have been wildly successful, and b) the fact that our brain is inordinately expensive, so if a new class of non-conscious humans that were equally adept as conscious ones ever arose, they would easily outcompete us for resources. Given our numbers, presumably such non-conscious people would already exist. If so, where are they?
> It's also possible that consciousness is something more fundamental that strongly emerges when the right sort of physical activity takes place.
Depends what you mean by "more fundamental". All of our behaviour emerges when the right sort of physical activity takes place. Our entire brain is filled with "rich information processing", so that's an insufficient qualifier.
Assuming that there's an extra cost to consciousness. The problem here is that nobody knows how to fit consciousness into the physical picture. There's clearly a correlation, but we don't know how and what that entails.
"Evolutionary Occam's razor" arguments do not work, because evolution gets stuck in local optima all the time. There might simply be no short evolutionary route between "conscious" humans and equally fit "non-conscious" ones (whatever that means).
They might exist, but the adaptive advantage of eliminating the blindspot is minimal: the blindspot is particularly life threatening. The magnitude of the advantage is what matters here. Like I said, the brain is one of our most expensive organs in terms of energy.
> There might simply be no short evolutionary route between "conscious" humans and equally fit "non-conscious" ones (whatever that means).
But that's essentially the point I was making: if eliminating consciousness requires many changes to reproduce its behaviour, then clearly it serves a functional purpose that is not trivially dismissed.
Err, I meant the blindspot is not particularly life threatening.
Of course there are other theories but they are just as unproven as this one.
For example, the term "Nous":
I think that the difficulty in defining it comes from the fact that good definitions are so fundamentally simple that people miss their depth. My favorite one is this, and please give it a chance before rejecting it. See if it distinguishes consciousness from all other concepts:
Consciousness is that which cannot be doubted.
The problem is that we are convinced that any experience or object can be defined within language.
The truth is that some things are undefinable. And as soon as you attempt to define it, label it, you have already lost it.
There are many propositions that cannot be doubted, so that's insufficiently precise. For instance, I do not doubt that I perceive things via my senses, but perception doesn't require consciousness.
It also leaves entirely undefined what "doubt" means. Your attempted definition would seem to assume that only conscious entities can "doubt", but is that really true?
"But immediately upon this I observed that, whilst I thus wished to think that all was false, it was absolutely necessary that I, who thus thought, should be somewhat; and as I observed that this truth, I think, therefore I am (COGITO ERGO SUM), was so certain and of such evidence that no ground of doubt, however extravagant, could be alleged by the sceptics capable of shaking it, I concluded that I might, without scruple, accept it as the first principle of the philosophy of which I was in search."
-- Rene Descartes (Discourse on Method)
Begs the question: it assumes the existence of "I" (a subject) in proving the existence of "I". The fallacy-free version is, "this is a thought, therefore thoughts exist".
I can even doubt that 1 + 1 = 2, because I cannot prove to myself that I am not completely crazy. This is not a new idea at all, it's the "cogito ergo sum" of Descartes.
> Your attempted definition would seem to assume that only conscious entities can "doubt", but is that really true?
It is not necessary to assume that. It is enough to assume that a conscious entity cannot doubt its own consciousness in good faith.
Even if completely crazy, you still doubtless perceive and experience.
> It is enough to assume that a conscious entity cannot doubt its own consciousness in good faith.
If by "consciousness" you mean subjectivity, then I disagree. We cannot doubt that we have something that we conclude is subjectivity. The real question/the hard problem is determining whether such a conclusion is true.
The act of perception directly implies that some self-aware entity is perceiving. For instance, the camera is capturing the image, but you can't say "this camera is perceiving". We rather say "this camera is taking a picture".
No it doesn't. It's entirely sensible to say that the AI that detects faces is perceiving.
You could if you were stuck in Inception land. We dream that we perceive things. Also if you were a brain in a vat, you would not be perceiving anything.
Arguably, Neo never actually perceived anything until he took the red pill and woke up.
I don't think that's possible. His perceptual faculties would have been completely undeveloped in that case, so he wouldn't have even been able to see, stand, hear, or anything. Arguably, the matrix made use of his perceptual faculties. It would be orders of magnitude easier to do so than trying to reproduce them in the vat apparatus.
I think people are fond of trying to separate perception and experience, but I'm not sure that's valid. I don't think an eliminativist would make this distinction, for example.
Problem for the eliminativist is that there are experiences which aren't perceptual. The reason for thinking we can separate perceptual experience from perceiving is because it is brain activation which results in an experience, which can happen without sensory stimulation. When we dream, we're activating our visual and auditory cortexes to create those experiences. Electrodes or magnetic stimulation can do the same on a much cruder level. And a schizophrenic may hear voices because their brain fails to distinguish between internal thoughts and external voices.
Right, and because there's little difference I generally refer to them all as perceptual. It seems perfectly cogent to say "electrode stimulation can create visual perceptions". I think "experience" in most such sentences is simply redundant. It either refers specifically to the activation of perceptual faculties, or it refers to something "beyond" perceptual faculties which may not exist (qualia).
So using this definition of consciousness, and looking at Plato's most famous quote: The unexamined life... is apparently the only conscious one. Huh.
At best (assuming consciousness cannot be doubted) it's like saying "Cars are those things which move fast". In the sense that it describes one property of cars, or consciousness, but not a unique property. Other things move fast, not just cars; other things could be beyond doubt, not just consciousness.
At worst (assuming consciousness can be doubted) it's like saying "Cars are those things which flap around", which is utterly nonsense, and unhelpful. Plenty of things "flap around", but cars don't...
My point is, if you don't already have a strong preconception of what "consciousness" is, that definition is worse than no definition at all.
> Thus, they fail to make a convincing case that insects can tell us anything about subjective experience or consciousness.
This is not the same as "Insects cannot tell us anything about subjective experience or the origin of consciousness"
Thinking that "because something has not met its burden of proof means that the opposite must be true" is a fallacy.
Conversely, consider whether or not consciousness as a feature of the world actually evolved at all. That is, consciousness could be an inherent part of reality throughout all of reality. See various versions of panpsychism or idealism. The idea is that energy itself is conscious, and what evolves is the shape or form consciousness takes. There is no place in space-time one can point to and say "this is where the lights came on".
To answer the question of "why" is the same as answering the question "why something rather than nothing?" It's not clear whether a semantic or symbolic expression (which is itself a part of consciousness) can satisfactorily answer it (that is, explain the whole).
Insects are thought to have arisen in the Devonian period, did they not?
Stopped reading there.
What I believe is this: We don't know nearly enough about consciousness, how it's made, and what it is, to make statements like that. And when people make the mistake of assuming they know otherwise, anything that follows is likely a pile of ballsack.
I suspect the dividing line, if we assume there is a single one, is pretty fuzzy and subjective. If a tree is cut down, is it still a tree? What about when it's cut into planks? At what point does an acorn become a tree; at its first leaf, first sprout, first branch?
If we had answers to these kind of questions, the world would be very different to what I've observed.
Why? They're alive, and their lives are drastically different from ours, but we don't need to distinguish that and use a separate term for them. Unless we know what consciousness is, we can't define what has it.
There are plenty of different ways to define life, and some of them include entities that, I believe, are clearly not conscious. For example, starting way at the bottom, I strongly believe these things are not conscious: genes, DNA, RNA, proteins, prions, viruses. I also assume that some other things are not conscious, such as single cells. Beyond that, things get more murky.
As far as I can tell, "being conscious" and "having a mind" are equivalent. I can't define what a mind is, but I can say it would be hard for me to accept that something could have a mind without anything resembling a brain. Plants seem not to have anything resembling a brain.
Of course, I could be wrong about that. Mechanically, a brain is nothing but a complex network of electrochemical signaling devices (plus a bunch of sensors and actuators, but I'll ignore those for now even though they appear to be crucial), and there are lots of other places to find electrochemical signals, including both within and between plants. There just aren't any that seem to resemble the structure, function, or complexity of a brain.
But in theory, I'm not really sure (and none of us are). There are sea creatures (jellyfish?) that have distributed nervous systems throughout their body. They exhibit complex reactions and behaviors we'd expect of a complex animal but have no central "brain".
Similarly plants respond to stimuli, turn and twist to move towards light, send out chemical distress signals to other plants when injured / being eaten... Does that mean they have a non-traditional form of "nervous system" as well?
If someone forced me to choose, I'd clearly say plants aren't conscious. But if I had to provide a strong justification to stand behind, I couldn't argue why they (or the jellyfish) are or aren't conscious, nor about an ant or a butterfly.
At some point we may have to declare consciousness outside of the realm of science, as there is no objective test for it, i.e. no way to distinguish between consciousness and simulated consciousness or reflexive responses. And that's very disappointing to a scientifically curious person like me.