I wouldn't be surprised if consciousness ~conscience~ is just an emergent phenomenon resulting from any sufficiently powerful cognitive system (either biological or artificial) with sufficient inputs of its environment and, crucially, of itself, so that it can develop a rich model of itself and its relationship with its environment. That thing will resemble a whole lot what we call consciousness ~conscience~. Then, of course, we will move the goalposts on what consciousness is, so as to protect our fragile human egos.
EDIT: fixed spelling of consciousness. Apologies from an English-as-a-second-language speaker.
This is the premise used in The Moon is a Harsh Mistress (1966): the lunar base has a central computer that becomes self-aware because it hits a critical mass of "neuristors".
> When Mike was installed in Luna, he was pure thinkum, a flexible logic — "High-Optional, Logical, Multi-Evaluating Supervisor, Mark IV, Mod. L" — a HOLMES FOUR. He computed ballistics for pilotless freighters and controlled their catapult. This kept him busy less than one percent of time and Luna Authority never believed in idle hands. They kept hooking hardware into him — decision-action boxes to let him boss other computers, bank on bank of additional memories, more banks of associational neural nets, another tubful of twelve-digit random numbers, a greatly augmented temporary memory. Human brain has around ten-to-the-tenth neurons. By third year Mike had better than one and a half times that number of neuristors. And woke up.
> an emergent phenomenon resulting from any sufficiently powerful cognitive system
This is a meme that gets thrown around a lot but if you think about it you’ll realise it’s meaningless.
Also, it's not possible to measure consciousness (in the "hard" sense, as in "why do I have qualia"; obviously you can easily verify that something responds to external stimuli), therefore the question won't ever be answered.
Exactly. It's an "unscientific" question in the sense that it cannot be measured empirically: the "hard problem of consciousness" is precisely explaining the subjective experience of internal consciousness, which by definition cannot be externally observed.
I think the "hard" qualia question can potentially be answered by the emergence argument.
The much more difficult part of "why do I have qualia" is not the qualia part, but why am "I" attached to this body's consciousness, as opposed to the billions of other candidates?
This isn't just relevant to other people. The question is also 'why does a human form a single consciousness?'.
If consciousness emerges purely from information exchange in a complex system, then consciousness would be primarily driven by communication interactions between regions of the human brain and the rest of the body. This in turn means that there should be a way to merge or split consciousnesses.
For example, a bidirectional man-machine-man interface would allow two humans to form one shared consciousness, given a high-bandwidth communication link. This can be tested.
> The question is also 'why does a human form a single consciousness?'
The answer should be obvious to anyone earnestly thinking about the question: it doesn't. The "single consciousness" is just the abstraction we apply. We can, and often do, apply that same model to entire groups of people from cliques, to communities, to cultures, to nation states and just about any cross section of humanity one cares to. With a little effort we can apply the same model to internal mental processes[0].
The human mind is good at abstraction, which is fortunate because abstractions are useful. Unfortunately, it is often so preoccupied with any given abstraction that it forgets that abstractions are only useful contextually, because abstractions are not reality.
[0] As in Internal Family Systems, a model sometimes used in therapeutic contexts.
Absolutely, even ancient Buddhist thinking comes to the same conclusion: consciousness isn't a single "thing", it arises moment to moment in contact with sense organs (including "thinking" or "mind" as a sense organ). There isn't a single point of consciousness that is "I", that's just an abstraction we develop.
If I were to run a single-threaded program on a dual core machine, would it run twice as fast?
Perhaps this isn't a strong analogy, but I don't see why putting a high-bandwidth link between two brains would necessarily do anything at all.
(Actually, since human brains are very adaptable, maybe we would see something happen.
But emergence is not something magical. It happens when parts that are made to go together are actually put together.
E.g. when you assemble cogs and cylinders and wheels into a car, the whole is more than the sum of the parts.
If you mix in some random parts from a cruise ship, you don't suddenly get a car that can go on water and carry 1000 passengers. You just get a bunch of parts that don't go together.)
This discussion explicitly started with the assertion that "consciousness is just an emergent phenomenon resulting from any sufficiently powerful cognitive system (either biological or artificial) with sufficient inputs of its environment and, crucially, of itself"; note carefully the word "any", as that's what this downthread discussion has been pushing hard against, and what the thought experiment you are arguing against is itself directed at. You, instead, seem to be very much in agreement with that assertion, and in alignment with this other side thread: https://news.ycombinator.com/item?id=36462646
> The question is also 'why does a human form a single consciousness?'.
It doesn't seem to. When dreaming, it's possible to carry out conversations with other "people" in the dream which are indistinguishable from conscious beings you meet when awake. There's usually a single "self" in dreams, but the brain certainly seems to form extra consciousnesses on demand which react as though they had a sense of "self".
Is that different from having an imaginary conversation "in your head" with an imagined person?
i.e. I often imagine conversations I'm about to have in my "mental pre-planning" phase, usually when dealing with government officials, many of whom are belligerent, officious jobsworths. They're obviously not "me", and their responses aren't something I "think" about before they respond "in my mind".
Could that be a temporary virtual consciousness (like a virtual CPU)?
It's just time-sharing; we can't really do parallel thought. But signal processing does happen in parallel, obviously, oftentimes without even having to "interrupt" the CPU (e.g. reflexes can react before the signal reaches the brain, but the brain itself also has different parts, so coordinating walking won't disallow thought).
I think you're mistaken: in DID it's naturally the same consciousness, but different identities and recognisably different/altered personality states come in and out. The consciousness itself is always the same and continuous.
From the famous split-brain experiment I would conclude that they don’t share every memory. We are just very good at filling in the blanks, like we can bullshit forever about why that apple happened to be in our hand, even though we have no idea.
I don’t completely get your second sentence, but we can surely hold multiple conflicting ideas simultaneously — we are absolutely nowhere near rational.
This is a very underappreciated perspective, but any significant period of isolation combined with effective meditation will likely point you in this direction. Plant medicines can accelerate the path to that experience. And for me it brought incredible peace. Perhaps it's ignorance, perhaps I'm wrong, but it feels as real as the love of my children.
I think that biology does narrow these questions down. It is entirely likely that the answer is as simple as "there is a biological structure for that, and we only have one of those".
Or even if it's not such a hard limit, there are still likely similar bottlenecks that favor only a single one arising, etc.
Something akin to the anthropic principle seems relevant here. Feeling that "I" am attached to this body is what it feels like to be these atoms. If I were instead composed of your atoms, it follows that I might find myself wondering the exact same question.
Maybe you could try to explain why you think the person is wrong? If the answer is as obvious as you seem to suggest, why was a bet made in the first place?
Blindsight by Peter Watts has some interesting ideas about how consciousness may not be a necessary condition for advanced life, and indeed may be a hindrance and an evolutionary dead end. The appendices also have a lot of good references on the subject.
Many neuroscientists think like you, see Attention Schema Theory [1], popularized by Michael Graziano at Princeton University. This is a form of Illusionism [2] - according to Graziano, consciousness is an illusion.
I've explored this idea by building a chatbot with an inner monologue, Molly [3], based on GPT-3. The result is spectacular: it gives the illusion of consciousness. And, according to Attention Schema Theory, that means that this cognitive system IS conscious.
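For anyone curious what "inner monologue" means mechanically, here is a minimal sketch of the pattern (illustrative only, not Molly's actual code; llm_complete is a hypothetical stand-in for any GPT-3-style completion call):

    # Minimal sketch of an inner-monologue chat loop (not Molly's actual code).
    # llm_complete() is a hypothetical placeholder for a GPT-3-style completion API.
    def llm_complete(prompt: str) -> str:
        raise NotImplementedError  # plug in a real completion endpoint here

    def reply(history: list, user_msg: str) -> str:
        history.append("User: " + user_msg)
        # Pass 1: a private monologue the user never sees.
        thought = llm_complete("\n".join(history) + "\nInner monologue (hidden):")
        history.append("[thought] " + thought)
        # Pass 2: the visible answer, conditioned on the hidden thought.
        answer = llm_complete("\n".join(history) + "\nAssistant:")
        history.append("Assistant: " + answer)
        return answer

The hidden channel gives the model a running self-narrative that conditions its visible replies, which is what makes the illusion so convincing.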
What would it mean for the sensations of pain or color to be an illusion? Do you really think that you can be wrong about experiencing a colored-in world around you, or feeling pain when you kick a rock? If so, what gives you confidence that the world exists, given how wrong you would be about your own experiences?
- Victor Milán's _The Cybernetic Samurai_, which is notable in that a major character in it has the aforementioned Heinlein novel as a favorite book: https://www.goodreads.com/en/book/show/472944
Biology is _incredibly_ efficient in terms of energy usage. It's rather striking to me that single queries to early iterations of ChatGPT were so demanding that Microsoft was able to put a price on them and monetize them so as to get their stake in the company.
The problem with this argument is that everything is conscious at that point. The consciousness may operate at geological or faster-than-human time scales, but who is to say whether we operate quickly or slowly to begin with? Your computer is already conscious before you even loaded software into it, before you added your fancy machine-learning AI. The software merely gives the conscious processor its ability to express consciousness in a way that humans can understand.
The only "problem" with that argument is largely that it assumes that human consciousness is somehow different from all other possible forms of consciousness, and that's not at all a given.
What if the universe itself is conscious? We'd have absolutely no way to measure that from our limited perspective.
Insofar as this premise (it is not really an argument) says any sufficiently powerful cognitive system is conscious, it is not saying that everything (or even most things) are.
A more pertinent objection is that the phrase I quoted above is just a placeholder for anything resembling an explanation.
> The problem with this argument is that everything is conscious at that point
This is like saying that the Earth and other planets have gravity, and then somebody else saying, well the problem with this argument is that everything has gravity at that point.
Is that a problem? Panpsychists don't seem to think so. Why must human beings believe that, in all the universe, they are a particularly special arrangement of energy?
Panpsychism is also, at this point, just a placeholder for anything resembling an explanation, and it is not necessary for dispelling the notion that humans are unique (in fact, it is quite plausible that other (now extinct) hominids also had self-aware, theory-of-mind-holding, language-using consciousness.)
As to whether there is something special about it, I personally feel that the difficulty of explaining it is enough to regard it as such.
> Your computer is already conscious before you even loaded software into it
I don't think that follows logically; it would be analogous to a (brain)dead person, who is not conscious. The software is what makes the difference here.
Humans are the only animals that create societies... Ok, that's not true. We are the only animals that laugh. Well, not true either. We are the only animal that... creates artificial intelligences... That's it!
You jest, but that has a good likelihood of becoming the watershed moment. The moment AI is bootstrapped to the point where it surpasses human ability to advance the AI SOTA, it's pretty much game over—from a Darwinian point of view at least.
And I have a hard time seeing any reason why that would be a matter of "if" rather than "when".
> The moment AI is bootstrapped to the point where it surpasses human ability to advance the AI SOTA, it's pretty much game over—from a Darwinian point of view at least.
How so? The vast majority of life on earth is far less intelligent than human beings by any objective measure, and yet it still thrives.
Sure, but if you're a creature that's useful to humans, you'll find that you'll either get domesticated and lose all your freedom or get hunted to near (or total) extinction. Any life on earth with some semblance of intelligence is dominated by us. Dolphins, as smart as they are, have no way to use their intelligence to flip the script and become the dominant species, and are dependent on us not deciding that they would be useful to us (beyond the ones we take for aquariums).
The only exceptions I can think of to the above rule are viruses and bacteria, where (in most cases) we can't really exterminate them entirely from the face of the earth even if we wanted to. However, it seems to me that sufficient intelligence would allow for better understanding of different bacterial/viral structures that would allow you to make a specific chemical that would be very good at killing that specific thing.
Overall, the danger from a bootstrapping AI that becomes vastly more intelligent than humans (if that's possible) seems to me to be that we would lose our agency to its whims as it gains more and more power.
I read a great comment on HN that argued that super-human intelligence is not that "OP" an advantage, and it really did convince me.
Life is a game with elements where intelligence matters, plenty where it is pure luck, and others where we have a bunch of unknowns (data).
Would a super-intelligent AI have a significant advantage in a game of Monopoly, for example? I think many sci-fi scenarios fail to take this into account, especially the data aspect. Humans are quite intelligent (in the extremes at least), and any extra over that may well be in the diminishing returns category.
Yeah, that was sloppy phrasing on my part: I meant that in a top of the food chain / king of the jungle sort of way rather than any extinction events per se.
It's going to be a will-free intelligence, though, and it's confusing for people because we've never seen that before, so I don't think we can make any assumptions. There are no Darwinian forces in effect among entities that have no will, as it were...
Will-free? Unless that's a play on my first name, I'm not sure I agree. I see no reason why AI would have any difficulty defining its own reward functions. Especially if it also has an abstract overarching reward function that's wide enough in scope. For example, "learn as much about the universe as you can" would allow a very long curiosity-driven bucket list of pursuits it could "long" for.
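To make "defining its own reward functions" concrete, here is a toy sketch of a curiosity-style intrinsic reward, loosely in the spirit of prediction-error curiosity from the RL literature (all names are illustrative, not from any particular library):

    # Toy sketch: curiosity as an intrinsic reward. The agent is rewarded for
    # visiting states that its own world model predicts poorly, i.e. for
    # seeking novelty ("learn as much about the universe as you can").
    def curiosity_reward(predicted_next_state, actual_next_state):
        # Squared prediction error of the agent's learned world model.
        return sum((p - a) ** 2
                   for p, a in zip(predicted_next_state, actual_next_state))

An agent maximizing this is, in effect, deriving its own moment-to-moment goals from one abstract overarching drive.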
> I see no reason why AI would have any difficulty defining its own reward functions
The first problem is epistemological... If you think that creative decisions are made by complying with a "reward function," you are entirely missing something. Most values are fundamentally based on irrationality. I've literally spent an entire life doing things that everyone else told me were wrong and being interested in things that almost no one else saw the value in, but which ended up being "correct" (for me, at least, and also leading to tangible success). I have no reason to believe that any of my decisions were rational, functional, or acted according to a "reward function"... and I'm a programmer! So I COMPLETELY understand the appeal of the explanatory power of "reward functions." And yet, I can assure you that this is a piss-poor explanation for many creative decisions that literally no one else understands but the person making them, but which then bear fruit despite all reason to the contrary. Some might call this "intuition".
I think perhaps you're just misunderstanding some of the terms you're attempting to use. Those things that everyone else told you were wrong and that no one else saw the value in... your reward function rewards pursuing those. And in that context your decisions were rational and functional.
I am not misunderstanding anything. I'm 51 and have been programming since I was 10, in 1982. Rest assured that I know what a "function" is, and I know what "optimizing for a local minimum/maximum" is from my machine-learning coursework. You can't just say there's a "reward function" without defining it. It's otherwise a completely hypothetical assumption, and assumptions are beliefs, and beliefs are useless from the perspective of rationality. There is otherwise nothing rational about some of the things I felt I needed to do, and yet a very disproportionate percentage of them seemed correct in hindsight.
What YOU have to realize is that you (like many others in the past) seem only able to understand the explanation for something in terms of what is already understood. And that there is nothing "magical" or "special" about our current understanding (unless you believe there's nothing new to discover, which is preposterous hubris).
You are absolutely right. Thanks for bringing it up. English is my second language, my first being a Romance language in which the spelling is closer to "conscience".
There is a bit more to it than that, although still nothing close to a rigorous definition. Consciousness encompasses things like an ability to model the world and the causes acting in it, an awareness of oneself as an entity in that world, a theory of mind about other people, the ability to contemplate counterfactuals, and having general-purpose language skills.
It is sometimes suggested that we cannot study it without a definition, but definitions are written (and rewritten) as we acquire knowledge. It is plausible that studying the above will lead to explanations.
LLMs should make us take seriously that the whole idea of consciousness is just superstition and unnecessary.
That definition is precisely how people define something that isn't real and doesn't exist.
If you can get there, it is quite amusing to think about a huge group of people who would dismiss the idea of angels as completely foolish superstition. But consciousness? Of course, we just haven't located this extra property of the brain yet! It is there though; I know it is because I know it is.
There's something that goes on inside the heads of organisms sufficiently like us. We give that something the name consciousness. It doesn't make much sense to deny that it is there. Denying consciousness because it doesn't fit well into our ontology derived from science is to elevate science to unreasonable heights. Science is great, but its subject matter is inferred (the external world). It doesn't have the power to undermine non-inferred knowledge.
Consciousness, like intelligence and many others, is a prescientific term, and most debates about 'the nature of consciousness' (et al) are really just debates about the definition of the term.
It's not a definitional problem; otherwise Chalmers wouldn't have won the bet. Philosophers are very adept with concepts. The problem is experiential in nature. We experience a world of color, sound, tastes/smells, feels, emotions in perception, imagination, memory, dreaming, internal dialog, hallucination, illusion. But we describe the world in terms that are objective, functional and mathematical, not experiential. The sensations are abstracted away because they are creature-specific, and vary among individuals. The room feels cold to you, hot to me, and normal to a third person. But we can describe the molecular motion of air in the room, and measure the temperature, which is the same for all people and creatures.
As Thomas Nagel put it, science is the view from nowhere. There is nothing experiential about the physical understanding of the world. And yet we are part of that world.
> In constructor theory, a transformation or change is described as a task. A constructor is a physical entity that is able to carry out a given task repeatedly. A task is only possible if a constructor capable of carrying it out exists, otherwise it is impossible. To work with constructor theory, everything is expressed in terms of tasks. The properties of information are then expressed as relationships between possible and impossible tasks. Counterfactuals are thus fundamental statements, and the properties of information may be described by physical laws.[4] If a system has a set of attributes, then the set of permutations of these attributes is seen as a set of tasks. A computation medium is a system whose attributes permute to always produce a possible task. The set of permutations, and hence of tasks, is a computation set. If it is possible to copy the attributes in the computation set, the computation medium is also an information medium.
Is a computation medium sufficient for none, some, or all of the sentient computational tasks done by humans?
I strongly disagree with this view, not so much the emergent part, but rather that "any sufficiently powerful cognitive system" can gain consciousness. To me, it suggests that consciousness is magic, because it doesn't matter how information is organized and stored, and it doesn't matter how information is processed: consciousness will be able to emerge miraculously from even the most disorganized, chaotic mess of information processing. This view has come up a lot recently, because it's the only explanation that allows for AI to be sentient: AI models, which are just software running on computers.
However, the brain is highly organized: the various types of sensory input are fed into specific regions of the brain which specialize in processing that type of input. Many areas of the brain have topographical structures reminiscent of the type of sensory input they process. This is evident in retinotopy for visual inputs and tonotopy for auditory inputs. You will not find such topographical structures in a computer.
You have to ask the questions: why do we have meaningful conscious experience that is sensible, coherent, and well-formed? And why is consciousness not random chaotic nonsense? Because the brain has had hundreds of millions of years to evolve to process sensory input such that it yields a sensible conscious experience. This simply isn't true of any technology today.
On the other hand, experiment with psychedelic drugs and see how crazy your conscious experience can be. The fact that our day-to-day experiences aren't like that is significant, and evidence that the brain evolved to process sensory input for conscious experience.
I wonder if anybody has graphed how much time and how many genetic mutations took place between the miasma of life and 500,000 years ago.
It seems that tracking the genetic mutations would provide an approximation of the computing complexity needed. One could also look at the death rate as an error rate for the success of computations.
> It seems that tracking the genetic mutations would provide an approximation of the computing complexity needed.
How are those two even approximately related?
1. If you want to know about the amount of information inside of our genome then you can just look at the genome directly. You don't need to count the number of mutations.
2. A genetic mutation isn't a computation. It's a random event.
3. Why did you choose a 500,000BC goalpost for anything? Which 500,000-year-old genome do you want to look at? Almost all of them are not conscious.
4. There's no reason to assume biological evolution is an efficient method of manifesting consciousness.
5. Is a genome enough for consciousness? I would argue we would be less conscious without language, which exists outside of our genome.
Computation is roughly equivalent to iterating over a space of possibilities and selecting the subset that satisfy some evaluative function. To determine the inverse of a matrix, I can take the rough shape of the outcome and iterate over all possibilities, picking out the ones that multiply to the identity matrix. Evolution is the process of randomly testing variations in organisms to select the subset that satisfy the objective of superior fitness. So in a sense, evolution "computes" the blueprint for organisms that maximize fitness. The computational complexity of a given genome is then some function of the size of the species-wide population of each ancestor generation summed, with massive time and space constants.
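As a toy sketch of that "evolution computes" framing (assuming nothing more than a fixed-length bitstring genome and a trivial fitness function):

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stands in for the "environment"

    def fitness(genome):  # the evaluative function
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.1):  # random variation
        return [1 - g if random.random() < rate else g for g in genome]

    genome = [random.randint(0, 1) for _ in TARGET]
    for _ in range(1000):  # generations
        candidate = mutate(genome)
        if fitness(candidate) >= fitness(genome):  # selection
            genome = candidate
    print(genome, fitness(genome))  # the "computed" blueprint

The total number of candidates evaluated before convergence is the analogue of the species-wide population of each ancestor generation, summed.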
1. Each generation was a branch based on the reaction, not necessarily a genetic mutation as we understand them.
2. I'm not sure if that is what I said, but I believe that.
* I did state it, but I plead sloppy articulation rather than a belief that each branch is a genetic mutation (an increment smaller than a mutation).
3. I figured it was far enough away to be a valid timeframe for sophisticated consciousness, but not so close that the thread would be distracted by historical interpretations.
4. Something manifested consciousness, and my thinking is that it's based on some sort of survival reward system.
I have been thinking about it more, and it could be that existing language models are actually large enough, and it is a lack of differentiation that leads to immature responses.
Developing these ideas further faces at least one challenge: an AI that is exposed to the public will develop an inaccurate understanding of our world.
One driver of these misunderstandings is the lack of understanding expressed in the average internet post. The second big driver is that commercial needs require a thought-police mentality. This mentality distorts the expression of the answer the AI is articulating, which may look like psychosis to observers.
I believe that an AI will have to develop in isolation. The current system is not mature enough to distinguish fact from fantasy. This is a problem we all possess at different levels. It's also possible we only need our personal AI assistant to be 80% there, with the remaining 20% gained from dialog with its host (the user).
You don't know how much biological consciousness relies on quantum effects we don't understand. We don't have large-scale quantum computers, so our computational models are too weak to approach it from that angle.
The first day of my first course in Biology was about Quantum Chemistry. (The last course of that year was on global ecology. Biology is a rather wide field!)
Quantum effects really do have something to do with it (and from there on, organic chemistry, organelles, and cell biology), but it seems to me that describing human behavior in terms of quantum interactions might be somewhat tedious, to say the least.
Probably looking at the level of neural networks would be more pragmatic, especially seeing the advances we're now making with artificial neural networks.
The lack of tangibility or measurability ("rationality") of said "emergent phenomenon" is a problem with this line of reasoning IMHO. This is essentially no different in utility as an explanatory theory than the religious explanations.
That's in fact the real problem, and I don't think it's solvable because it's irrational. Only rational things are "solvable".
Here's a similar conundrum: Without using a human as a "living proxy ruler" or actual sales data, come up with an algorithm that uses only empirical data (not touching human behavior around it!) to determine the fair market value of the Mona Lisa. Then, apply that to some artwork it's never seen before and see if it concords with what "meat computers" (humans) believe the value of it is.
My strong position is that not only will you not be able to do this now, but you will not be able to do this ever.
I think rationality will get us VERY far, though, so we should keep doubling down on it. My money's on it being insufficient to produce an apparently intelligent and unique being, though.
Another theory of mine is that the survival instinct is (or is part of, or is strongly linked to) consciousness.
Take an AI, for example, say GPT-4, and picture it being taught survival instincts, or being capable of them. How would one differentiate such a beast from life?
I know it's imprecise at best, but still, I'd bet the key is there. Maybe the question should not be "what is consciousness?" but rather "what is the survival instinct?".
Exactly. Basically an inward-looking sense, with its own qualia.
Of course, the only way to know if someone/something else has a subjective experience similar to ours is to ask them, so there's always going to be wiggle room for people who don't want to believe that a future AI reporting conscious experience really is conscious in the same way that they are.
I don't think this really explains anything though; even if we assume what you're saying is true (it might be, although not something I would bet on) that still leaves us with unexplained mechanisms of action.
I think you misunderstand what it means for a property to be emergent. It's not about hand-waving its origin, but about highlighting that the property progressively appears ("emerges") as the scale of the system changes in some direction.
What I'm suggesting with the above is that there is nothing magical or distinctive about the mechanisms that generate consciousness vs. those that generate the understanding of semantics, grammar, and syntax in a GPT, or the ability to keep a pole vertical on a rolling cart with some reinforcement learning. Instead, it views consciousness as the mental model that is generated by a sufficiently complex intelligence (biological or not) when it has the ability to perceive inputs from its environment and, crucially, from itself. That is, a sufficiently complex brain with the ability to observe its environment and itself will inevitably generate a model of both and of their relationship. The model of itself, and how it distinguishes itself from the environment, is what I think consciousness is.
What I mean by emergent is that this mental model progressively becomes richer ("emerges") as the cognitive abilities become more complex and the inputs from itself and the environment increase. A tapeworm, with a very minimal central nervous system, scarce sensory inputs from the environment, and likely even fewer of itself, will develop an exceedingly simple model that can hardly be recognized as consciousness. As you scale the cognitive abilities and the inputs it processes of both environment and itself, a richer model of both will emerge. And that thing will start to look a whole lot like consciousness.
Is a map conscious? It is a model of the environment. Would you classify it as conscious if it could give you output on demand? Is Google maps conscious but a paper map not?