Brains are not required to think or solve problems – simple cells can do it (scientificamerican.com)
426 points by anjel 12 months ago | 396 comments



There are a couple traps to be aware of with this article.

1. "Bioelectricity"

This is a generic term which doesn't capture the nuance of charge gradients and chemical gradients in cells. While you can directly apply charges to interact with gradient-based biological systems, this is a brute-force method. Cells have chemically selective membranes. So while applying an external electrical voltage can act in a similar manner as causing a neuron to fire, it is far less precise than the calcium and sodium channel mediated depolarization which implements normal firing. Said another way 'bioelectricity' is not simple.
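To put a number on the gradient point: the Nernst equation gives the equilibrium voltage that a concentration gradient alone sets up for each ion species. A minimal Python sketch, with illustrative textbook concentrations:

    import math

    def nernst_potential(z, c_out, c_in, temp_k=310.0):
        """Equilibrium potential (volts) set purely by a concentration gradient."""
        R, F = 8.314, 96485.0  # gas constant J/(mol*K), Faraday constant C/mol
        return (R * temp_k) / (z * F) * math.log(c_out / c_in)

    # Rough mammalian neuron concentrations in mM (illustrative values):
    print(f"K+   reversal: {nernst_potential(1, 5, 140) * 1e3:.0f} mV")    # ~ -89 mV
    print(f"Na+  reversal: {nernst_potential(1, 145, 12) * 1e3:.0f} mV")   # ~ +67 mV
    print(f"Ca2+ reversal: {nernst_potential(2, 2, 0.0001) * 1e3:.0f} mV") # ~ +132 mV

Each species has its own reversal potential, and the cell's selective channels choose which gradient gets tapped at any given moment; a blanket external field can't make that per-species selection.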

2. Replacement

This one is a bit more subtle. If you find that you can affect a system by one means that is not the same thing as saying the means is the cause. Take the example of using RNA to transfer memory from one Aplysia to another. Immediately after transfer the recipient does not have the memory. It takes time for the introduced RNA to affect sensory cells so that they become more sensitive to stimulation. This is in contrast to a trained animal that has already undergone synaptic remodeling. If you have the appropriate synapses but were somehow able to remove all the relevant RNA in an instant, the animal would continue to 'remember' its training. Synapses are sufficient.

In reality there are multiple systems that work together over multiple timescales to produce the behaviors we observe. Some of those systems can have their contributions mimicked by other interventions. Because of this complexity you can never say 'it's really about X', the best you can say is 'X plays a major role' or 'X contributes Y percent to this observed phenomenon'.


> Said another way 'bioelectricity' is not simple.

> If you have the appropriate synapses but were somehow able to remove all the relevant RNA in an instant, the animal would continue to 'remember' its training. Synapses are sufficient.

I'm not sure these two statements are compatible. The first is definitely true, and RNA does function on a slower timescale. We can't be 100% confident that some of the complexity we don't understand in the first statement wouldn't have an impact in the second scenario, can we?


I am not sure I would call RNA transferring regulatory programs "memory". This looks more like epigenetic transfer than what we would call memory (i.e., factual recall). My training was before the more recent work with Aplysia, but "RNA memory transfer in planaria" was presented as an example of "how to make big claims with irreproducible experiments" in grad school.

I appreciate that epigenetics is a well-established field at this point but I worry people conflate its effects with other phenomena.


I tend to agree; the word "memory" makes me think of a higher-level (more abstract) type of action than a simple reactive switch. I'm not sure where the line is, or if there really needs to be one.

Having said that, are you familiar with the Purkinje cell from a rabbit that they trained to respond to timed patterns of input in isolation?

Timed pattern=input 1, delay X, input 2, delay Y, then input 3.

Definitely more than a simple on/off switch type training, but does that rise to the level of "memory"?


> This is in contrast to a trained animal that has already undergone synaptic remodeling. If you have the appropriate synapses but were somehow able to remove all the relevant RNA in an instant, the animal would continue to 'remember' its training. Synapses are sufficient.

Not if you removed the DNA. Epigenetic changes to the DNA are what maintain the synapse in its "learned" state. Here's a link:

https://www.sciencedirect.com/science/article/pii/S240584402...

In addition, research has shown neurons communicating via mRNA (enclosed in a lipid envelope).

https://www.nature.com/articles/d41586-018-00492-w

https://www.inverse.com/article/40113-arc-protein-ancient-mo...

Lots of interesting stuff in this arena.


Additional note: I forgot that synapses are also maintained by local RNA (local = at/near the synapse), so removing the RNA would definitely cause the synapse to revert to a different state rather than retain its "learned" state.


I also want to know how much of this was replicated by independent, skeptical sources looking for alternative explanations. One thing I see in “science” reporting is that one or a few people make wild claims, it hits the news, and people take their word on faith with no replication. There are also many statements about what we know whose claims should have citations. Yet people who have never run experiments like that are nodding along saying, “Of course it’s true.”

Or was all this replicated? What strengths and weaknesses did they hypothesize in these studies? What did they prove or disprove? What are the next steps? And can we already implement any of those in simulators?

(Note: I think agents poking and prodding the world can definitely be implemented in simulators. Even primitive game engines should be able to model some of that.)


Where can one learn about this in more detail?


> In reality there are multiple systems that work together over multiple timescales to produce the behaviors we observe. Some of those systems can have their contributions mimicked by other interventions. Because of this complexity you can never say 'it's really about X', the best you can say is 'X plays a major role' or 'X contributes Y percent to this observed phenomenon'.

You can say the same thing about computer systems - as long as you don't understand the underlying logic. If you don't understand that the chemistry of transistors doesn't matter as much as the C code, you can say exactly the same critique about how a thinkpad works: "So while applying an external electrical voltage can act in a similar manner as causing a neuron to fire, it is far less precise than the calcium and sodium channel mediated depolarization which implements normal firing. Said another way 'bioelectricity' is not simple....In reality there are multiple systems that work together over multiple timescales to produce the behaviors we observe. Some of those systems can have their contributions mimicked by other interventions."

Once you do understand the logic - the 'why' of von neumann machines and Javascript and transistors, it's clear that your claim isn't true and there is an underlying logic. The trouble is, until we positively identify that logic, we can't know if it exists or not and we're stuck debating the bioequivalent of the fundamental computational significance of the clock cycle speed of a CPU.


I have a very rudimentary understanding of how electricity, electronic circuitry, and transistors work, but it does make me wonder:

We use programming languages like C to create complex branching algorithms that are turned into a linear machine-code tape. Programmers generally cannot understand the assembly even if they understand the branching code it was compiled from. Even if assembly had variables, just the fact that if/elses and function calls are turned into jumps is enough to make the code too complicated to understand. It might be possible to disassemble back to C by resolving the jumps into something that is easier to understand.
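You can watch that flattening happen without leaving Python, which makes the same point as the C-to-assembly case (a runnable sketch):

    import dis

    def sign(x):
        if x < 0:      # structured branching in the source...
            return -1
        return 1

    # ...shows up in the compiled bytecode as a comparison plus a conditional
    # jump (e.g. COMPARE_OP followed by a POP_JUMP_IF_FALSE-style opcode),
    # with the two return paths laid out linearly:
    dis.dis(sign)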

Imagine if brains worked the same way. That there is actually a naturally-forming high level "brain language" that is turned by a "brain compiler" function into a low-level "brain assembly," but when we look at it all we see is the assembly. That what the brain is actually doing is relatively simple, but because we can only observe the output of the compiler function it appears to be insanely complex to reverse-engineer.

Then again, I don't have the faintest idea of how brains work either.


> Imagine if brains worked the same way. That there is actually a naturally-forming high level "brain language" that is turned by a "brain compiler" function into a low-level "brain assembly," but when we look at it all we see is the assembly.

And then make it analog and full of self-referential hacks.

Though honestly, if nature wanted to screw with us, it would make the brain properly encrypted, perhaps even running on homomorphic encryption. Good luck deciphering that.


If our brains evolved to be protected against man-in-the-middle attacks, that would be extremely scary because it would imply the proto-humans that didn't have this ability went extinct to some brain-invading predator!


If we go far enough back, we may have had common ancestors that developed compensatory mechanisms to at least somewhat mitigate specific kinds of attacks, like what tarantula hawks can do to tarantulas.


Imagine this to be the very reason the human mind is mostly self-illegible — that the birth of the inscrutable unconscious was a defense mechanism?


Unconscious behaviors (lower-level brain behaviors) are often conserved across species, and reside in functionally and evolutionarily similar parts of the brain.


I guess technically true, but the cell channels are vastly more complex and much harder to measure. Chemical gradients can pass electric currents, but they can also trigger other chemical cascades and cause physical changes in the cell that may not be reflected when a charge is applied. Logic is also fairly consistent across computer systems, whereas biological systems can function differently from person to person, and even within the same person at different points in time. There are so many more variables in a living system.


Honestly, it sounds like what you’re saying boils down to claiming it’s just too complicated for there to be an underlying logic.


Ehhh, I am definitely saying it's too complex for us to properly figure out today. I wouldn’t categorically rule it out, but I do think the logic and rules in biological systems are much squishier than in physics or chemistry.


> there is an underlying logic. The trouble is, until we positively identify that logic, we can't know if it exists or not

First you exclaim there is an underlying logic, then in the next sentence you say we don’t know whether it exists, which completely contradicts your claim.


Interesting to see Levin's zeitgeist spreading (though the sheer number of podcasts and discussions he has done explains that too).

I don't know what the biological/medical field thought about single-cell and tissue-level intelligence before, but I found this gap in the usual medical thinking (usually things are either genetic or biochemical/hormonal) quite mind-blowing.

Hopefully this results in new opportunities for finer medical therapies.


This is just incredible! I have been following Michael Levin for quite a while now and I am sure that he will earn a Nobel Prize for this outstanding research! All the other things that he addresses in his presentations and interviews are just mind-blowing! (The one with Lex Fridman is quite in-depth, but I prefer others even more.)

This really has the potential to revolutionize our understanding of intelligence, mind and medicine. He may just tell cells to grow a new heart without modifying genes. He wants to have what he calls an 'anatomical compiler' which translates our "designs" into electromagnetic cell stimuli so that the cells will build them.

For me this really points to a worldview that is much more in line with what the ancient mystics from all cultures throughout the ages have been pointing towards: intelligence is something fundamental to existence, like space and time (maybe even more fundamental). It is all a play of intelligence, it is phenomenal and it can be tapped into. This is amazing!!!


I've been listening a lot to Sean Carroll's Mindscape podcast [0]. In it they have this notion of complex-to-intelligent systems. Their loose definition is that such systems can hold an internal state that represents the world around them. A sort of model to interact with and to extrapolate future events from (time travel!). In this light consciousness also makes more sense to me, although consciousness feels more like a by-product: our (human) ability to hold an internal model of the world in our minds and interact with it is pretty advanced. One can imagine that somewhere in the feedback loops (I think, that she thinks, that I think, that she thinks, ...), something like consciousness (awareness [a model?] of the self in the world?) evolved.

Anyway, cells can hold (super) primitive models of the world and maintain internal balance in the face of anticipated events.

I'm just a cocktail philosopher, but aren't we all.

[0] https://podverse.fm/podcast/e42yV38oN


But still: why is consciousness required? A model of the world could be held even without it, in my view.

E.g., I wouldn't think GPT-4 is conscious, but I'm pretty sure there's a representation of an abstract world and the relationships within it encoded in the neurons and weights. Otherwise it wouldn't be able to do much of what it does.

Also, I think a model of the world is just that: something that can be represented as relationships between neurons, symbolizing that model of the world.

And I think you could have a complex, perfect set of neurons and connections that represents everything in the most efficient manner for that parameter count (neurons and connections together). There probably is a perfect configuration, but it couldn't be achieved even using training or evolutionary methods.

And none of it requires consciousness in my view.


I think most of our world model is actually a human model. Our social relationships are more important than we give credit for.

So there's an arms race. The more brain you have, the better you can model your tribe and know how to help or succeed. And the bigger everyone's brains are, the harder they are to model simply.

In this model, consciousness is the "self model" or "self-consciousness" that allows you to model others' opinions of yourself by having such an opinion yourself. And to adjust their opinion by providing a narrative about yourself, which you first have to craft... and on and on, with higher levels of abstraction.


I’d argue that there is no default model. The model you have is just whatever you’ve spent time interacting with. If yours is mostly social then that perhaps plays a big role in your life. I would say mine is more heavily spatial and oriented around things and their interactions.

Supporting evidence for this are feral children who could never learn to integrate into society. They didn’t have built in models of social interaction.


Feral children never develop what I would call higher thinking. They still have mobility, spatial skills, etc. But "consciousness" is probably not something they really put together.

This is just my opinion. I think it took most of your brain to formulate that rebuttal, not concurrently, but as a deep and rich model of how readers might respond to a given argument, and weighing in a narrative sense your remembered experience vs what you "heard" me say in your head when you looked at these squiggly lines. Probably you even had a nascent image of me that was an amalgamation of your past experiences with internet idiots (fairly!) etc.

That's a lot, lot more than what a bug does when it finds its way past a complex maze, and probably something feral children lack.

I think all our higher level "reasoning" is a cooption of these systems.

There's a point in infant development where speech and social skills are developed, after which they cannot be. I'd go so far as to argue you'll never make a mathematician or physicist out of a feral child. Something critical to our higher level thinking is formed then. If missed, it cannot be recovered.

The flip side is I deeply believe if you can hold this conversation with me, then we both can be the best mathematicians there are, if only we took time to develop that language.


I like this model because consciousness just becomes another function, and it lets us explain some of the weirder modalities; for instance, being able to predict free-will choices way ahead of time in human subjects. Also, SMBC had a funny comic about this (funny comics always making science true!):

https://www.smbc-comics.com/comic/consciousness-3


The reason I like this theory is that it provides the groundwork for an over-eager agency-modeller, which explains a lot of human quirks like belief in gods, seeking root causes, modelling, etc.


I think some problems are simple enough that they can be dealt with "blindly", but some problems turned out to be tricky in special ways that evolution was more able to solve via consciousness than blind information processing. And from there, we find ourselves, with that new capability in hand, able to repurpose consciousness to newer and newer things. Then retrospectively it can look like consciousness wasn't "needed" for certain problems.

So I think even if you want to make the case that consciousness solves a lot of problems it doesn't need to, it may have been a "real" solution to a "real" problem at some point in our history. And from that point on, it was no longer important whether it was the best solution.

I do think it's fair to say that lots of remarkably complex informational problems are solved in a p-zombie way, which is to say, with every outward appearance of intelligence (slime molds solving mazes, collective behaviors of ants). So I do think evolution or nature writ large "agrees" with you that consciousness isn't strictly necessary.


There's also the idea of the "hive mind" of ants and bees, so maybe quite literally our brains efficiently evolved as a meta-hive-mind internalized by our network of neurons.


That's an incredible way of thinking about it. I would say that is the most creditable way of dealing with the question of consciousness as it would apply to collective behavior, which is to say, it is a clear way of stating the difference between what those behaviors are, and how far removed they are from actual consciousness, without dismissing the "interestingness" of the behavior.

I would suspect though, that in our case it may have been something like centralized "nerve cluster" getting more and more complicated over time rather than anything that is in any sense a hive, although that is a very imaginative idea.


The Intermediate Value Theorem, in existential form:

A single one of my brain cells is not me,

but all of my brain cells is me.

Somewhere between these two endpoints there is a line where "me" begins, and "not me" ends.


There’s a really neat book (fiction) that touches on this concept. It’s called Blindsight by Peter Watts.


With a human brain you've got:

1) A huge stream of sensory data, only some of which gets promoted to conscious awareness.

2) Some of that raw data and other conscious outputs are persisted into working, short, and long term memory.

3) And your consciousness works recursively using (2) as well as (1) as inputs.

All the stuff in GPT that gets called "memory" in machine learning seems much more like (1), and it lacks any ability to persist data outside its context window, so we're still missing something.


I highly recommend reading "Blindsight" by Watts; it touches on that.


Having a purely representative model of the world is less useful than having a sandbox for modeling choices and outcomes. Do I need to duck before entering that doorway?

That introspective analysis is consciousness. Humans have just improved the same mechanism allowing for more abstract analysis.


But GPT-4 can also take the decision to duck.

For instance, here I have GPT-4 with a system message: "You are an AI human agent in a video game performing actions.".

And I give it the following information as current environment:

Current environment:

1. Your goal is to get out of the current room.

2. You are moving in direction of North at 4km/h.

3. There's a door 3m in the North.

4. The door height is 150cm.

5. Your height is 180cm.

What is the sequence of actions that you plan to perform?

Its response:

---

1. Slow down as I approach the door to avoid collision.

2. Inspect the door for any lock or handle.

3. If a handle or a means to open the door is present, try to open the door.

4. If the door is locked, search for a key or any other opening mechanism within the immediate surrounding area.

5. If the door isn't lock or after opening it, crouch down slightly to accommodate my height which is more than the height of the door to avoid my head hitting the door frame.

6. Proceed through the door and stand tall again after passing through.

7. Continue moving North after exiting the room.

---

So here I did hint quite a bit by giving this specific environmental data. I could add more noise and unnecessary information, but even then I think it can do it, based on my experience.

I had a temperature of 1 for the above, which I think is why it made some odd word choices, but I also decided, for example, to change the info about where the door is to "South", and this is how it responded:

1. Stop moving in the North direction.

2. Turn around 180 degrees to face the South direction.

3. Start moving towards the door in the South direction at a moderate pace.

4. Upon reaching the door, duck down slightly to accommodate the difference in height between myself and the door.

5. Proceed through the door to exit the room."

-------

So I guess, what I would like to see as an example. Assuming GPT-4 is not conscious, what is something that an average conscious human can do, thanks to presumed consciousness, that a GPT-4 couldn't do and why is consciousness required for that?


> what is something that an average conscious human can do, thanks to presumed consciousness, that a GPT-4 couldn’t do and why is consciousness required for that?

Consciousness isn’t required for anything. It’s useful for dealing with a complex and adverse environment, not for solving any specific problem.

That said, without testing, I would assume that if you ask GPT-4 to change how quickly it responds, it’s going to fail, based on my understanding of their architecture. As in: recite the Pledge of Allegiance, inserting 10-second pauses between each word. That said, having provided a specific test, it’s possible for that particular capability to have already been added to the UI layer via a pause token or something, even if the model itself responds at the same rate.


If you used GPT as a brain you could provide the current time as an input for it; otherwise, yeah, GPT doesn't have time within its input by default. But if you did:

Make the instruction: say "Stop!" when 10 seconds are done, and then have a loop where you give it the current time; it would surely be able to do that.

But I'm not sure if or how that is exactly related to consciousness?

The best way to think of time is probably as a series of ticks anyway, and I assume there is something in people's brains to do that, so it would be reasonable to add such a mechanism for GPT as well.

GPT's goal is only to act as the intelligent part of the brain, based on the input.
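Something like this, as a minimal sketch; it assumes the OpenAI Python SDK's chat-completions interface, and the model name and prompt wording are illustrative:

    import time
    from openai import OpenAI  # assumes the v1 OpenAI Python SDK

    client = OpenAI()
    messages = [{"role": "user", "content":
                 'I will send you the current time every second. Reply "silent" '
                 'until 10 seconds have passed, then reply "Stop!".'}]

    start = time.time()
    for _ in range(30):  # safety cap on iterations
        messages.append({"role": "user",
                         "content": f"Current Time: {time.time() - start:.2f}s"})
        reply = client.chat.completions.create(
            model="gpt-4", messages=messages).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        if "Stop" in reply:
            break
        time.sleep(1)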


Modify the system enough and it’ll eventually be conscious.

It’s not about a pause token, but the internal processes. You can have a long conversation on the subway with someone without forgetting you’re going home from work. Overflow its context window and GPT-4 has no recourse; it just forgets. The difference is essentially prioritizing information, but LLMs really don’t function like that: it’s all about predicting the next token from a given context.

Give a future generation of AI systems internal working memory, a clock, and the ability to spend arbitrary time updating that internal memory and IMO that’s pretty close to consciousness. At least assuming it was all functional.


But it's not really difficult to inject this mechanism into the context window.

The latest version of GPT-4 Turbo allows for 100k tokens, or about 75k words. The whole subway thing and more could easily be kept there, and whatever can't fit can be handled by designing the prompt to always keep a certain number of tokens in context for different layers of memory: the further into the past you go, the less detail you keep, more like the titles of your most important learnings throughout life. At any given time GPT-4 can call a function to ask for an extra bit of content about an item if it seems relevant to the situation at hand.

So for example in each prompt context you would have:

1. A short description of what you have done each year of your life.

2. Key findings and goals that you currently have.

3. The whole current day (or as much as seems reasonable).

4. The past weeks in a bit more detail than the yearly descriptions.

So basically you could try to fit 70k words representing as much context and as many of the most important details as possible (with GPT itself deciding what is most important).

I've been building an assistant for myself that has such a memory-management system: it gets the past N messages (like 40) in full detail, then summaries from before that time, and in addition messages and learnings from the past are stored and passed into the context depending on the query they match.

And if you want to compare it to the human process of sleeping: it occasionally goes through all the messages and "compresses" them, extracting the most important findings and short summaries so they can be used in the next day's context.

So to me it's basically just giving it tools; the other things (memory, longer-term memory, inputs it currently doesn't get) are fine to be solved by other tools. I think the human brain also has different parts working on different things, so it's similar in a sense.

Then, once you have 70k tokens spent on this historical context, you run the prompt in a loop, allowing it to perform a function call each time: retrieve further info, store some important fact, etc.

The real problem would be cost, because the costs would rack up quite quickly when making looped 70k-token requests.
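A toy version of that tiering; all the names and budgets below are made up for illustration, and the summarizer is a stub standing in for an LLM call:

    def summarize(messages, level):
        # Stub: in practice this would itself be an LLM call that compresses
        # old messages into short summaries ("sleep"-style consolidation).
        return f"[{level} summary of {len(messages)} messages]"

    def build_context(history, recent_n=40, budget_chars=280_000):
        """Recent messages verbatim; older ones in ever-coarser summaries."""
        recent, older = history[-recent_n:], history[:-recent_n]
        context = [summarize(older, "life overview"),
                   summarize(older[-500:], "past weeks, more detail")]
        context += recent
        # Trim from the oldest end if the (roughly 70k-token) budget is blown.
        while sum(len(m) for m in context) > budget_chars and len(context) > 1:
            context.pop(0)
        return context

    print(build_context([f"message {i}" for i in range(100)])[:3])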


As an example, what I tried right now:

USER I'm responding to you roughly every second with current time, say "Stop!", after 10 seconds have passed. Respond only with silent when you do not wish to say anything.

Current Time: 13:04:11.21

Say "OK!" now if you have understood the challenge.

ASSISTANT OK!

USER Current Time: 13:04:12.24

ASSISTANT silent

USER Current Time: 13:04:16.21

ASSISTANT silent

USER Current Time: 13:04:19.18

ASSISTANT silent

USER Current Time: 13:04:23.12

ASSISTANT Stop!


That’s almost, but not actually, waiting. Picking the next token for a context window is something GPT-4 is quite good at. But the underlying approach has some real limitations, which I discussed in response to your other comment.

Use 1 hour instead of 10 seconds, then list every Current Time incrementing by 0.01 seconds, and you overflow the context window. The solution isn’t to have a larger context window; the solution is to have an internal representation independent of that context window.

The issue is you can’t train a system to use that internal memory by simply giving it a large corpus of text examples. The text lacks the internal representation associated with its creation.


In this case you wouldn't spam the context window; you would perhaps just send the latest time each time and drop a portion of the previous times.


That solves this specific issue, but it isn’t a general solution to the finite size of a context window. Ideally the system should be able to decide for itself what information it needs to pay attention to, but LLM’s simply don’t operate in that fashion.


But you can: e.g., you allow GPT to call functions to store embeddings that it can later retrieve as memory. Meanwhile, the default context window holds recent info in more detail, while historical info gets more and more summarized, possibly by GPT itself.
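E.g., the two tools could be sketched like this; the embedding function is a random stub here, standing in for a real embedding model and a vector store:

    import numpy as np

    memory_store: list[tuple[np.ndarray, str]] = []

    def embed(text: str) -> np.ndarray:
        # Stub standing in for a real embedding-model call.
        rng = np.random.default_rng(abs(hash(text)) % 2**32)
        return rng.normal(size=64)

    def store_memory(text: str) -> None:
        """Exposed to the model as a callable function/tool."""
        memory_store.append((embed(text), text))

    def retrieve_memory(query: str, k: int = 3) -> list[str]:
        """Return the k stored memories most similar to the query."""
        q = embed(query)
        ranked = sorted(memory_store, key=lambda pair: -float(pair[0] @ q))
        return [text for _, text in ranked[:k]]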


I'm not convinced GPT can infer correct actions needed for novel scenarios for contexts that have not been directly experienced in training.

Consider:

I have never in my multiple decades of existence had a ball roll out in front of me while driving. But if I were driving by a playground and a ball rolled out in front of me from between parked cars, and the ball itself is not a hazard because it has already passed my lane, I can infer that I need to slow or stop because a child may rapidly follow that ball.

In your ducking scenario, the AI needs direct perception of a door. I assume it cannot imagine a need to duck without 1) direct perception of the doorframe, or 2) previous experience of ducking through a doorframe.

I'm not convinced that GPT or any AI can infer that context for handling novel edge cases at this point. It could just be a need for more training data to correlate different contexts, but I don't think it's there yet.


GPT has no sense of, or care about, when it is wrong or right; such a sense is only (arguably) driven by humans through prompt interaction and during training of the model. Humans and other animals, meanwhile, are able to update their internal state from just a single observation or interaction, and to integrate future information with that single observation for a very long time.


What if GPT was integrated with a module to

1. Take light input. Video/images.

2. Take sound input.

3. Touch, heat input.

And other inputs from the environment. Then there would be mechanisms, which could also be neural networks, that transform this data into a more digestible form for GPT, and GPT would additionally be trained specifically to act based on this input.

Then it would run in cycles: it gets this input and provides output on how it plans to react to the data, maybe every 100ms.

It then could also have a storage it can use, where it can store data as part of the output to later retrieve it again.

So it would be a set of modules that is controlled and interpreted by GPT.

It could then do all of the above, no? And all of it should just be a matter of implementation. The only near-term challenges may be certain types of inaccuracies, and that producing tokens might in some cases take too long for a fast reaction time.

So basically you'd try to run cycles as frequently as possible with the inputs mentioned above, other neural networks identifying the objects in many different ways, and all the context about the environment, unless a new version of GPT becomes completely multi-modal.

And you run those loops; then GPT gives output on what it wishes to do, e.g. store some fact for later use, move there, move here, etc., or retrieve some information using embeddings and then decide again. Short-term memory would just be this context-sized window, and if it needs more it just looks into its own memory for embeddings.
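A skeleton of that cycle; every module below is a hypothetical stand-in, and decide() is where the GPT call would go:

    import time

    def perceive():
        # Stand-ins for vision/audio/touch encoders that emit text for the model.
        return {"vision": "door 3m north, 150cm tall", "sound": "quiet"}

    def decide(observation, memory):
        # Stand-in for the GPT call that plans the next action.
        return "crouch and move north through the door"

    memory = []
    for _ in range(3):
        obs = perceive()
        action = decide(obs, memory)
        memory.append((obs, action))  # persisted for later retrieval
        print(action)
        time.sleep(0.1)               # aim for a ~100ms cycle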


At least from my limited perspective, the root issue is the learning dynamics of an internal model that represents the self (internal world) and the external world.


What do you mean by these dynamics exactly?


How the system updates and maintains its own model(s) when new information is added in the form of a single observation or interaction (or a few), without the ability to replay data, without catastrophic forgetting, etc., and, importantly, how such a system's model(s) grow in complexity while retaining stability and redundancy.


Analysis of what-if scenarios and simulating likely expected outcomes of proposed actions is a very useful ability, but IMHO that is not part of what we'd call 'consciousness'.


It’s a large fraction of what consciousness is actually doing, not the definition of consciousness.

Someone considering what to say is simulating tradeoffs.

In practice simulation isn’t limited to future possibilities. When trying to remember something, you might recall vague sensory information, like the tilt of your hips, and try to create a scenario where that fits.


I would not automatically treat all the things that people do consciously as intrinsically linked with consciousness, unless there's some clear reasoning why consciousness is required to do that particular thing - and not in the sense that it's required because unconscious people can't do stuff because they're incapacitated.

Like, saying that simulating tradeoffs is what consciousness is actually doing seems to me equivalent to saying that smashing my fist on the table is something 'my consciousness' is doing. That is technically true but not really relevant, as IMHO a non-human entity that's clearly without any consciousness could both smash something against the table and also simulate tradeoffs for various options; having a subjective experience isn't required for that.


Sleepwalking shows most things people do can happen without consciousness. People can make breakfast without being conscious etc.

Choosing between tradeoffs is however intrinsically linked to consciousness.


Personally, I doubt that self-awareness can be achieved without some form of consciousness, and I feel that self-awareness is a key component of higher intelligence.

If intelligence and/or consciousness arise as emergent properties in the right sort of complex system, they will disappear from view in a low-level analysis of the causal processes occurring in that system.


Is there any way you could have a being like a human, who when asked would say they're not conscious? Is a definition of consciousness allowing that possible?

I'm not talking about whether they are or aren't, but surely all intelligent beings would say and think they're conscious?


I mean you certainly could, in the sense that you could get it to tell you the sky is green, and that it's the spirit of Rodney Dangerfield.

To your point though, so far as I can tell it could not answer that question by actively engaging conscious introspection to discover its own absence of consciousness, which I take to be your point.

So if its way of answering involves dipping into its own active "theater" of consciousness it likely could not, at least given our usual understanding.

However, I do wonder if it's something that could actually be complicated to answer, and where, like illness, as we grow in understanding, it breaks out into all kinds of unexpected subclasses.

I can, for instance, know, in a sense, that I can't see UV rays. But I am not so sure whether consciousness writ large could ever be tested for, via conscious introspection, and come out in the negative, even if we allow for the idea that it will explode into all kinds of nuances and subclasses with future research.


Of course consciousness is required for intelligence - you need to model yourself as part of the world to have a complete model.


Why does "modelling yourself as part of the world" require consciousness?


> But still - why is consciousness required?

Did someone say it is? Parent explicitly called it out as a by-product.


It's true, p-zombies make more sense to physics, yet here we are.


A p-zombie is, by definition, physically identical to and physically indistinguishable from another individual who has conscious experiences, yet does not itself have those experiences. It's not clear to me that this makes more sense to physics, particularly when one considers the p-zombie and its conscious twin both being asked the same questions about their conscious experiences.


Indeed, here we are, literal p-zombies.


You don't have any experiences? Weird. I see a colored-in world, enjoy the aroma of fresh coffee, don't enjoy the wet cold in winter, don't like being woken up from an engaging dream, and feel a mild sense of irritation by humans pretending not to be conscious because of their materialistic commitments.


> You don't have any experiences?

I have "experiences", they just don't really have the properties that a naive interpretation of sensory data would entail.

> mild sense of irritation by humans pretending not to be conscious because of their materialistic commitments.

Funny, because I feel a mild sense of irritation by humans pretending that their flawed senses yields some kind of direct observation of fundamental reality.


Yet you observe them and don't put down their pretense to your flawed senses. Which suggests that something fundamental is impinging on your senses, however flawed your understanding of the underlying fundamentals may be.


> Yet you observe them and don't put down their pretense to your flawed senses.

I do actually. I think this internal sense of subjective awareness is just as if not more flawed than our other senses, and it has yielded all sorts of false conclusions because people consider it more reliable. I liken it to the illusion of multitasking on single CPU computers. I think Graziano is on the right track:

A conceptual framework for consciousness, https://www.pnas.org/doi/10.1073/pnas.2116933119


Having read the paper he's rapidly on his way to materialistic solipsism - which is fine as a circle of hell, but it doesn't have any bearing on reality.

Otherwise, he really isn't saying anything new from a philosophical perspective. Both Aristotle and St. Thomas long ago had a distinction between the act of sensing something and the act of naming (that is knowing) it. And noted that we can reflect on both of them.

As he says:

> Third, an attention schema cannot explain how a [non-empirically verifiable, non-material phenomena] emerges from the brain ... It explains how we believe, think, and claim to have such things, but it does not posit that we actually have [non-material phenomena] inside us.

Having limited himself to only material things, he is necessarily constrained to only talk about epiphenomena if there is a non-material formal cause for the material phenomena.

But empiricism is not all that is - there is no empirical proof for empiricism, it requires something outside of itself to support itself.


> I have "experiences", they just don't really have the properties that a naive interpretation of sensory data would entail.

You don't think your experiences have sensations?

> Funny, because I feel a mild sense of irritation by humans pretending that their flawed senses yields some kind of direct observation of fundamental reality.

I never said anything about fundamental reality. I don't understand saying we're literal p-zombies. You have to explain away your sensations, dreams, inner dialog etc., by appealing to some theoretical commitment.

I'm not claiming what consciousness is, only that we have it.


> You don't think your experiences have sensations?

I don't think sensations are what they appear to be either. Eliminative materialism is very misunderstood.

> I never said anything about fundamental reality

But you did, by claiming that we have consciousness. The only reason you think you have consciousness is an alleged direct perception of subjective qualities.

There are only two real paths to reconciling this with some form of scientifically sound materialism: panpsychism, where subjective qualities permeate all of reality and we are perceiving some of them, or concluding that our perceptions are flawed and untrustworthy, and so the subjective qualities we think we're perceiving are not real/illusory (eliminative materialism). Given the demonstrable flaws in all other perceptions, the most parsimonious position seems clear.

Of course, you could try to resurrect idealism, but idealism is very problematic.

> I don't understand saying we're literal p-zombies.

P-zombies believe they have consciousness, say they have it, write whole screeds about how absurd it is to claim they don't have it, but they don't have the subjective qualities as we've defined them. Sounds like a bullseye to me.


> But you did, by claiming that we have consciousness. The only reason you think you have consciousness is an alleged direct perception of subjective qualities.

We don't perceive subjective qualities, subjective qualities constitute our perceptions, dreams, memories and other conscious mental states.

> There are only two real paths to reconciling this with some form of scientifically sound materialism:

If you have a commitment to doing so. I don't particularly, since materialism is a model of how the world is constituted, abstracted from our shared conscious experiences. A very useful one, with scientific backing. But nature is more than what we think it is (the map/territory distinction).

And there are other options you omitted such as identity theory, strong emergence, property dualism, neutral monism and cognitive closure. They all have their drawbacks, but so does every attempt to account for consciousness.

> P-zombies believe they have consciousness, say they have it, write whole screeds about how absurd it is to claim they don't have it, but they don't have the subjective qualities as we've defined them. Sounds like a bullseye to me.

Yes, but by definition, they lack the subjective qualities of our experiences. I believe Chalmers coined the term to demonstrate the problem with physicalism.


> We don't perceive subjective qualities, subjective qualities constitute our perceptions, dreams, memories and other conscious mental states

And I say the opposite, that our sensory perceptions, dreams, memories and other mental states drive processes that lead us to conclude that we've perceived subjective qualities in them.

> And there are other options you omitted such as identity theory, strong emergence, property dualism, neutral monism and cognitive closure.

To make a long story short: identity theory, neutral monism and panpsychism are basically the same on the property that everything has subjective qualities in some sense, strong emergence is a speculative concept at best that fundamentally amounts to dualism, dualism can itself be easily dismissed as any dualist theory will have a non-dualist equivalent that posits fewer entities and so dualism will never be preferred, and cognitive closure is not even a candidate as regardless of our ability to know consciousness, it still must logically fall into one of the categories I outlined.


>> P-zombies believe they have consciousness, say they have it...

> Yes, but by definition, they lack the subjective qualities of our experiences.

And, also by definition, each p-zombie has (at least in an accessible possible world) a physically identical, but conscious, doppelgänger. Speaking and writing are physical processes, so in any circumstance where the conscious doppelgänger writes about the reality of its consciousness, the physically identical p-zombie will do the same.

In the case of the p-zombie, there was a causal process leading to the physical act of writing in which consciousness played no part - and, ex hypothesi, the same physical causality must have played out in the zombie's conscious doppelgänger. In general, whatever explains the zombie's observable characteristics is sufficient to explain the doppelgänger's observable characteristics, and that explanation cannot be dependent on consciousness. In particular, if one accepts the possibility of p-zombies, one must logically accept that no explanation of consciousness, regardless of how thorough and well-supported it seems to be, has any basis in any facts about consciousness.

A consistent belief in zombies seems to lead to epiphenomenalism, the notion that we have conscious experiences, but they play no part in what happens to our bodies. Frank Jackson, author of the well-known "Mary the neuroscientist" thought experiment, came to the same conclusion, which is why it appeared in a paper with the title "Epiphenomenal Qualia."

For more on zombies, see Dennett, "The Unimagined Preposterousness of Zombies."


Right, so one can reject the possibility of p-zombies. Consciousness plays a role in why we think we're conscious.

> In particular, if one accepts the possibility of p-zombies, one must logically accept that no explanation of consciousness, regardless of how thorough and well-supported it seems to be, has any basis in any facts about consciousness.

Although I disagree somewhat here, because logical isn't the same thing as physical. All you can say is there is no physical explanation of consciousness. But you could logically come up with one. For example, Chalmers has proposed an additional natural law connecting consciousness to information rich processes. Natural doesn't necessitate physical, if the universe contains more than physical stuff.

Or if physicalism is the wrong ontology of nature. I take the p-zombie and Mary Room arguments to be critiques of physicalism.


I feel you may be overlooking the significance of the final clause in the sentence you quoted. Sure, one could come up with some hand-wavy outline of a non-physical explanation of consciousness which does not clearly stand in contradiction with a belief in the possibility of zombies - panpsychists do it all the time! - but that final clause is about verification. If zombies are possible, then consciousness is not doing anything observable that is not attributable to just the physical processes of the physical body.

I suppose zombiephiles could simply posit some sort of non-physical observable, but then they would be just piling on further unsubstantiated claims for the sake of preserving the zombie-possibility doctrine. They could adopt epiphenomenalism, but that means accepting that the apparent causal effectiveness of qualia is illusory. They might adopt a solipsistic stance and say that their conscious experiences are the only things they can be sure about, but that comes with a hefty dose of motte-and-baileyism: you doubt the external world, but you're sure about zombies?

On Chalmers' "additional natural law connecting consciousness to information rich processes": Firstly, proposing something is cheap; turning it into a verifiable hypothesis and then following through is hard, and Chalmers et. al. seem to have no interest or intention of doing that, preferring instead to endlessly repeat their anti-physicalism arguments. Secondly, there is an attempt to redefine what's meant by 'physical'. The physical was originally defined by what's tangible to our senses, and as additional phenomena having a causal connection to the physical world were discovered, they were naturally subsumed into an expanded concept of it (gravity, for example, despite Newton's unease over doing so.) Chalmers is free to propose causes that lie outside of our current concept of the physical (and he's even free to look for them!) but to insist, without any justification, that they would be non-physical, is at least tendentious and probably question-begging. To settle the question, we need more facts, not more speculation.

I'm not sure what to make of your last sentence: the p-zombie and Mary Room arguments are indeed intended to be critiques of physicalism, but as you have rejected the possibility of zombies, I assume you do not take them both to be successful arguments. One thing about Mary's Room is that you can make essentially the same argument, but with Mary's prior knowledge including Chalmers' additional natural laws.


There are many non-physical things, like the laws of physics. We know they exist, just at a higher dimension of being.


I'm not saying that there are no non-physical things. I am saying that certain philosophers are making seemingly tendentious arguments for declaring that consciousness is not a physical phenomenon. Once they get around to saying with sufficient specificity what they have in mind, then we will be able to see whether those claims are justified.


If you imagine an apple, is that apple physically present on some plane of existence?

If so, then consciousness is physical, if not, then it is non-physical.


What, exactly, does 'plane of existence' mean?


> And, also by definition, each p-zombie has (at least in an accessible possible world) a physically identical, but conscious, doppelgänger.

I agree that epiphenomenalism entails absurdities, therefore either all people in any possible world are p-zombies or p-zombies cannot exist in any world.


p-zombies don't require an illusion of free will for the universe's entertainment.


A thermostat is a system that can hold an internal state (nominally, temperature) that represents the world around it. You can also build a thermostat with a switch and a bimetallic strip with differing rates of thermal expansion -- a device that is clearly not intelligent. I'm not sure I can subscribe to this definition...


My thermostat may not be intelligent but it is certainly smart... At least it says so on the box.

Anyway, the strip does contain the state of the world around it: the temperature is modeled by how much the bimetal is bent. I think it is indeed a minimal example of a complex system, one that at first glance defies explanation and seems to have purpose (keeping the temp stable), until you understand the inside.
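The whole "model" fits in a few lines (a toy sketch, not a claim about real thermostats):

    class Thermostat:
        """Holds one number that mirrors the world: the sensed temperature."""
        def __init__(self, setpoint):
            self.setpoint = setpoint
            self.reading = None  # the internal 'state of the world'

        def update(self, sensed_temp):
            self.reading = sensed_temp
            return "heat on" if self.reading < self.setpoint else "heat off"

    t = Thermostat(setpoint=20.0)
    print(t.update(18.5))  # heat on
    print(t.update(21.0))  # heat off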

Anyway, "Is a virus alive?", "Are these specimens the same species?", ... Us humans like our boxes, but at the edges they almost always go wrong.


In lectures I use thermostats as an example of an intelligent system that matches most attempts at defining intelligence. And I have no qualms saying they are intelligent. Intelligence is a very vague and very context-dependent thing that can at most be used to compare some things in some scenarios.


A bimetallic strip is intelligent?

https://en.wikipedia.org/wiki/Bimetallic_strip

Thermal expansion is clearly a physical reaction; if arranged in an intelligent way, it's able to achieve some human-relevant objective... but if we're going to say that two pieces of bulk metal hooked together is intelligent, then the term "intelligence" is so broad as to be roughly meaningless.


Some definitions of intelligence (or even consciousness) are systems that simply process data. From that context, a thermostat is processing temperature data. It also underscores the GP's point that there isn't necessarily a consensus on the definition.


You can set a context as broad as you want, but it changes utility.

You can set a definition and context so broad that everything and anything is intelligent.

The possibility of an outlandish alternative does not tell us whether a consensus exists or not.


You can set the context as narrow as you want (e.g., abstract reasoning specifically defines intelligence). That might make your ontology feel crisp, but it also doesn't make it correct; it's still arbitrary and a matter of convention.


Indeed, the only value of categories is in how useful they are. It's clear there's a continuous spectrum of intelligence/complexity. I believe the main reason we're still arguing in circles about what is or isn't intelligent is that we also have a distinct, ill-defined category called "intelligence" that carries ethical/moral connotations. We don't know how to handle a continuum of moral weight/personhood.

It's useful in many ways to put a thermostat, an LLM, a monkey and a human on the same "intelligence"/"complexity" scale - there are many interesting things to discuss other than moral weight. And then, for discussing personhood, it makes no sense to include a thermostat.


Sure, but there is value in defining a useful working convention so that people can discuss, learn, and study.

Interjecting the perspective that words are meaningless constructs is rarely helpful. If someone wants directions, telling them that there are no concrete metaphysical concepts of north, south, left, or right isn't constructive.


I agree on the usefulness piece and that words/ideas have meaning. But usefulness shouldn’t be conflated with the underlying truth and neither should classifications of convention. As the saying goes, all models are wrong, but some are useful. I can say that burying the dishes from a meal that made me sick may help avoid future food poisoning; while useful, it doesn’t get me closer to the underlying truth of germ theory.

I think the heart of these more philosophical debates is a desire for fundamental truth, not just a useful mental model.


A bimetallic strip is only intelligent when placed in the context of a thermostat system. Similarly, your brain isn't intelligent when removed from your body and placed on a table. Brains are only intelligent when they're in conjunction with living bodies.


A brain is intelligent regardless of where it is, so long as it's "powered" so to speak.


I used to think this but I don't know anymore. Without our bodies to provide stimuli what input is the brain going to use for thinking or reasoning? So much of what we do day to day is hard-linked to our body/sensorium. The brain without the spinal cord/nervous system is useless.


Sure, replace a living body with a machine, and replace the thermostat around the bimetallic strip with some other kind of mechanism. It makes no difference.


So a brain is only "intelligent" in certain (but infinite) contexts as opposed to intelligence as an inherent property of a brain?

If so, your definition of "intelligence" is incomplete. What are those contexts? Can you really generalize all of those contexts for all possible intelligence systems? That sounds like an epistemological nightmare.


Intelligence is not an inherent property of a brain, brains are only intelligent when placed in particular configurations with supporting hardware. Are you going to fart at me too?


{{Citation needed}}


{{fart noises}}


Not without lungs, you don't...

But actually there's a really interesting thing here. People used to believe that the heart did the thinking. Phineas Gage upended that theory, and now people think that the brain is the thinker. More recently, we're learning that the gut has a massive neural network, and the brain-gut connection is being investigated today. Likewise, "muscle memory" isn't in the muscles or the brain; it's distributed among neural clusters all throughout your body. How much of "you" (your knowledge, your personality, your physical skills) is actually contained within the head is entirely non-obvious.


This is sophomoric. First of all, muscle memory is not contained in your muscles or "distributed" throughout your body but is known to reside in your cerebellum and basal ganglia. The nerves going down to your muscles are simply a conduit for actuation, a wire, nothing more.

As far as your gut, if you claim that a thermostat is intelligent then sure your gut is also intelligent (don't forget the microbiome in there too).


The peripheral nervous system has significant plasticity, and its reflex arcs can produce complex and purposeful behavior without the brain. E.g., cats can walk with their brain removed, and famously, decapitated chickens do all sorts of stuff.

Octopi don't even have a centralized brain.


I'm not disputing that. What does that have to do with whether a human brain is inherently intelligent?


I was replying to this:

> First of all, muscle memory is not contained in your muscles or "distributed" throughout your body but is known to reside in your cerebellum and basal ganglia. The nerves going down to your muscles are simply a conduit for actuation, a wire, nothing more.


> This is sophomoric.

Says the user who just made fart noises. Your confidence here is dunning.


I literally just followed suit. Don't like the response, don't lead. You're being hypocritical.


I think intelligence is a continuum from not very intelligent to very intelligent. I guess a bimetallic strip would be at the bottom end of the scale, above a lump of coal but below Einstein. At least in thermostat form it can keep something stable, in contrast to, say, Turkey and its currency, as mentioned in another HN thread today. Maybe they could rig a bimetallic strip to raise or lower interest rates with a heat input driven somehow by consumer prices? They could then use that to replace Erdogan.


Our brains are clearly physical reactions. Check and mate.


Perhaps check, but I don't think it's checkmate until you can explain the hard problem of consciousness via physical reactions.


What makes you assume thermostats aren't conscious?


Because they don't have any other higher level function that could reason about the state other than a single binary response. If intelligence is a continuum you might say it is at the lowest end (a bit state and actuation on that state). But consciousness seems more like a meta-intelligence that would require more than one bit and one actuation.


This really is getting at something important. However, the question I would pose is: if the thermostat is conscious, does it have a sense of self, i.e. an identifiable boundary between itself and the world around it? I would say not. My understanding is that humans have identifiable region(s) of the brain that correspond to a sense of self: https://en.wikipedia.org/wiki/Neural_basis_of_self

I think in a lot of these conversations people say consciousness and assume that consciousness always mirrors their own, which just so happens to come along with a sense of self for 99.9% of sober individuals. So I think it's important to distinguish between the two things.


Would this imply that someone who claims to lose the sense of self (or seems to realize it as a hallucination) through “sober” techniques like meditation would no longer be conscious?


Quite the opposite. I'm saying that a sense of self is not necessary for consciousness. And that many people write off simple objects as not having consciousness when really what they mean is that those objects don't have a sense of self.


I don’t make any of those assumptions. My point was that I also don’t know that we can distill consciousness to strict materialism until we can find the mechanisms to explain the hard problem of consciousness.


> thermostats as an example of an intelligent system

vs

> A bimetallic strip is intelligent

You're moving the goal posts


There are some who claim that a bimetallic strip shows evidence of intelligent design, but every fifth-grader knows that if a million volcanoes erupt over 5 billion years, there is a lot of weird stuff that can result from that random activity.


Language is quite flexible, but I don't think it is very helpful to call a thermostat intelligent. Doing so does not advance our understanding of the sort of self-aware, theory-of-mind-holding intelligence that is generally meant by the term (putting aside its alternative meaning as a synonym for information), and adopting the broad definition just increases the burden of disambiguating what we mean when what we mean is the latter.

The broad definition is not justified by the fact that certain attempts to define the term would attribute intelligence to thermostats; a more useful conclusion to draw is that the definitions in question could be improved.

It is sometimes said that we won't be able to gain an understanding of intelligence until we have a precise definition, but it's actually the other way round: definitions get tighter as our understanding increases. We now understand many things that not only were lacking definitions in the past, but were not even imagined.


A thermostat with a servomechanism is aware of its own state. A thermostat could quite simply be augmented to have a theory of another thermostat. Probably there are such coupled thermostats already.

I use intelligence mostly just as a perspective for analyzing a system. E.g. can it be seen to process information? How complicated is the transform from the information to some action? Does the processing lead to some good outcome, e.g. self-preservation of the system? Could the information processing be improved for this outcome?
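
To make that concrete, here's a minimal sketch of the idea (Python, all names hypothetical): a thermostat that tracks its own actuator state, plus one that also keeps a crude model of a neighboring thermostat, a toy "theory of another thermostat".

    class Thermostat:
        def __init__(self, setpoint, hysteresis=0.5):
            self.setpoint = setpoint
            self.hysteresis = hysteresis
            self.heating = False  # internal state the servo "knows" about

        def step(self, temperature):
            # Switch with hysteresis so the actuator doesn't chatter.
            if temperature < self.setpoint - self.hysteresis:
                self.heating = True
            elif temperature > self.setpoint + self.hysteresis:
                self.heating = False
            return self.heating

    class ModelingThermostat(Thermostat):
        """Also predicts what a neighboring thermostat will do."""
        def __init__(self, setpoint, neighbor_setpoint_guess, **kw):
            super().__init__(setpoint, **kw)
            self.neighbor_model = Thermostat(neighbor_setpoint_guess)

        def predict_neighbor(self, temperature):
            # Run the internal model forward instead of asking the neighbor.
            return self.neighbor_model.step(temperature)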


Saying that a thermostat with a servomechanism is self-aware is just repeating the same broadening-beyond-usefulness with a different (though related) concept, in this case self-awareness. The difference between the self-awareness of humans and that which you see in a thermostat is vast, and understanding how a thermostat works gives no useful insight into how the human mind works.

Any system processes information, and in a thermostat it is not complicated at all. Even the most self-aware thermostats currently being manufactured are not going to improve themselves of their own volition. Applying the term 'intelligence' this broadly turns it into an empty tautology, or a little piece of uninformative noise on a communications channel, no more useful in understanding those systems than in understanding actual intelligence.


> I have no qualms saying [thermostats] are intelligent

> Intelligence is a very vague and very context dependent thing

That's either contradictory (stating that something has property X when X is vague to the point of being undefined), or you are setting a very low bar for intelligence (I'm actually okay with the latter).

Just a question, if a bimetallic strip can be intelligent, is a marble intelligent for having the wit to find a low point on an uneven surface by rolling downwards?


The marble question is good. For most purposes I'd say the bimetallic strip (one connected to an actuator, at least) is more intelligent than a marble. Studying the dynamics of the thermostat's internal properties is more fruitful than studying the marble, which is easy to describe as an unchanging object.

Taken to the logical conclusion, we are all just rolling downwards to a minimum energy state just like the marble.

I don't think intelligence is an intrinsic, let alone binary, property of an object or a system.


> I'm not sure I can subscribe to this definition...

Is this because it is a completely man-made system and not one that evolved slowly over time through natural processes?


It's not just the internal state but the prediction that makes it intelligent.

Your brain is taking in a lot of information at the edges of your awareness; light, sounds, touch, etc. are all getting absorbed and transmitted to your brain. As that information is transmitted along your neurons it's getting summarized, then merged with other summarized information and summarized again. The brain is getting summaries of summaries, and developing a unified categorization of the global state across all its inputs.

Then the brain takes that summary and makes a prediction about the future state. The summarization is energy-efficient. By categorizing all that data into a global state you make decision making possible. "When my boss seems stressed all week, then calls a bunch of people one-by-one into his office on Friday afternoon, I know lay-offs are coming. I better polish up my resume." From "stress/anxiety/unease" in the environment to "danger is coming I need to fight/flight".

Your brain is taking that summary/categorization and figuring out what it needs to do next. If "X" happens then I should do "Y" to "stay-safe/get-food/find-a-mate". The brain is very good at capturing and summarizing data, and making a prediction because that process is much more efficient than doing otherwise. Instead of foraging everywhere for food and hoping I just bump into something that will provide sustenance, I know if X, Y, and Z happen then food will be "here", and I can get lots of it.

You can apply this same model to all actions the brain directs. It also helps make sense of why maladaptive behaviors develop. Sometimes the summary is incorrect, or was formed based on past information that no longer applies, and it may need to be unlearned.
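
As a toy sketch of that loop (not a model of any real neural circuit; all numbers and the threshold are made up):

    def summarize(values):
        """Collapse a list of signals into one number (here, a plain mean)."""
        return sum(values) / len(values)

    # Raw sensory channels, grouped by modality.
    sight = [0.9, 0.8, 0.7]   # e.g. "boss looks stressed" cues
    sound = [0.6, 0.9]        # e.g. tense conversations overheard
    context = [0.8]           # e.g. it's Friday afternoon

    # Summaries of summaries: per-modality first, then a global state.
    global_state = summarize(
        [summarize(sight), summarize(sound), summarize(context)])

    # Prediction/decision from the compressed state.
    if global_state > 0.7:
        print("danger predicted: polish the resume")
    else:
        print("carry on as usual")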


A thermostat is certainly intelligent.

The definition of intelligent I give is “to mitigate uncertainty.” If it does not mitigate uncertainty, it is not intelligent.

It is merely of constrained intelligence. Perhaps your expectations are too broad.

If the thermostat reacts appropriately to environmental changes then it is performing its role intelligently.


In the context of a person being trapped in a cave, is some aspect of light shining distantly intelligent? It's reducing uncertainty about where a viable exit is, after all.


Information is the reduction of uncertainty. If it does not reduce uncertainty it is not information.

Intelligence is the mitigation of uncertainty.

The revelation of the light is information. Navigating the contours of darkness is intelligence.

If the cave walls glowed in response to proximity (of the observer) then the light source would have a modicum of intelligence.
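
That phrasing tracks Shannon's formalization, where the information received equals the drop in entropy between prior and posterior beliefs. A toy calculation for the cave example (probabilities made up):

    from math import log2

    def entropy(probs):
        return -sum(p * log2(p) for p in probs if p > 0)

    # Prior: each of 4 tunnels equally likely to be the exit.
    prior = [0.25, 0.25, 0.25, 0.25]

    # After seeing distant light down tunnel 2, beliefs sharpen.
    posterior = [0.05, 0.85, 0.05, 0.05]

    info_gained = entropy(prior) - entropy(posterior)
    print(f"uncertainty reduced by {info_gained:.2f} bits")  # ~1.15 bits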


But what if the cave walls glowing misleads the observer and they get lost further? Then it wouldn't seem to be intelligent. In any case, the light source being intelligent occurs in the context that there is an intelligent observer who may be able to escape easier given a light source. The observer sees the light, evaluates it, and makes a decision. Even if it's just instincts, there is a decision made. What might be intelligence is if the cave (or elements of it) acts according to "this person is lost at this location, so light up this specific pathway". If the cave somehow facilitated more functionality, such as leading a person to clean groundwater, it would be more intelligent. In the context of helping the person escape or aiding them in the meantime, the cave tends towards accidental sophisticated technology ("...[which] is indistinguishable from magic"). Functional intelligence, I'll call it. Then perhaps there could be a theory of intelligent agents, who have goals and plans.


Didn't Nest have a >$1B valuation at some point? /s


Juicero was $120M /not s


You can say that. You can say a lot of things to explain consciousness in a materialistic sense, as in how it could've emerged. But I cannot fathom how material interacting with other material and forces gives rise to subjective experience. It simply makes no sense to me. If I create a copy of my brain, it would be conscious, but with its own unique subjective experience. This makes sense so far, but what exactly is this subjective experience, and how can "mere" mechanical matter create such an entity?

So in short: I cannot understand what is the actual substance of subjective experience.


Have you ever been under anesthesia like propofol?

I feel like most of what we call "consciousness" is converting short-term memory into selected long-term memory, facilitated by language. Because when you're under, you can even be "interactive", but you're not conscious of it because your short-term memory has been disabled.

As to "human intelligence", honestly, I think that human languages that let us convert our "consciousness" into a shared hallucination is the key evolutionary advantage. Human intelligence comprises a hive mind in a sense, that our experience of the world is hugely affected by the shared social experience where language transfers memory from person to person.


> Because when you're under, you can even be "interactive", but you're not conscious of it because your short-term memory has been disabled.

I don't find this convincing. If you asked someone under such anesthesia if they were currently conscious, they'd probably say "yes". Doesn't it seem more likely that you just don't remember having been conscious? If I magically wiped away a year's worth of your memory, does that retroactively mean you had actually been unconscious during that year?


But their saying "Yes" doesn't necessarily make it so. I've made comparable claims while asleep. (Conversely, I've believed myself completely awake while actually asleep and unaware of what my body was doing.)

There would need to be a deeper test than either long-term memory or a yes-or-no question asked in the moment.


I feel like saying that one is in a sense "conscious" while dreaming..? Though this use of "conscious" seems to conflict with another use which has "asleep" as a primary example of "not conscious"...

I guess maybe this calls for another word. Not sure what word is best.


But why should it be like something to convert short term memory to long term memory? Computational systems can convert data between different forms of memory but we don't typically believe consciousness emerges from this operation.


Not to say this addresses the concern, but in that case a computational system is being driven by a pure line signal (the power) and is ultimately both dependent on that for operation, and requires a high level of stability of that line to be "effective".

The alternative in this case is imagining a biological process with an adaptive energy response controlling the movement from short-term to long-term memory, itself optimizing based on the performance trade-off of input stimulus overload.

In that case, the process of deciding what, and when, to move data from short-term to long-term memory (in a feedback loop) could start to approach more of the appearance of consciousness (choosing what to process, what to remember, adjusting the input rate and types of input based on the limitations of processing and retention).

Many animals have consciousness as well, so I think it's important to be more open to reducing the bare essentials of what a process of "awareness" is or could be, rather than looking at it from the hyper-evolved language/information-influenced state of humans.

Note I agree that it is not consciousness, but I think it's an interesting exploration to look at the modeling of it from computer forms, since we are made of non-conscious atoms and still made it here.


Basically in this argument there is no such thing as what we call experiences or consciousness. It's an imagined concept, one that we think we have but we are merely misunderstanding.

Basically in this model, there is no present. The brain just invented the concept "I feel this" as a model of its own inner workings, just like we invented the concept "object" to define certain aspects of physical reality that aren't 100% there.

So, basically you are imagining your experience, when in reality you are just an automaton. Same as how you imagine a robot is happy or sad when in reality it has no feelings.


This is precisely my hypothesis. We imagine that our human consciousness differs from animals' when in fact it is exactly the same, but with a more expressive language system on top.


> So in short: I cannot understand what is the actual substance of subjective experience.

This problem just goes away if you assume that there is no dividing line between the "experience" of you and the "experience" of any other computational system. Actually try to think, what does a computer "experience"? An atom? What does it feel like to be a standing desk?


Panpsychism is going mainstream again.


It seems like the only solution that's compatible with materialism. If you accept that it "feels like something" to be you, and you accept that your intelligence is just a computational process which differs from others only in the specific computations involved (as I think you should, especially in the age of LLMs), then you must infer that it "feels like something" to be any number of computational processes. If you infer otherwise, you're positing that there's an unmotivated, invisible, in-kind difference between human intelligence and all other physical phenomena which seem to mirror its abilities.

The conclusion I come to, which doesn't actually seem that strange, is that what's special about our experience is the complexity of our thought, our ability to turn extremely high-dimensional raw experience into low-dimensional models of reality, our ability to maintain a sense of mostly-stable identity over a long period of time, our capacity for emotion and sensation. It may "feel like something" to be a rock or a standing desk, but it isn't a very interesting or complicated feeling.


If we take the behavior of matter as the observable signs of how it feels — this is true if sensation motivates action — then many rich states of sensation have been observed in subatomic particles.

The computational process of an organism is evolved to operate the organism, toward survival of the species. I don’t think these sensations would necessarily be more complex than those of subatomic particles, but they would need to be directed by the system to represent the state of the system and the observed environment, to link to actions the system could take, and to bias actions toward survival of the system.


I'm sympathetic to panpsychism, but it isn't obvious to me that intelligence should be what's behind awareness.

I've been blackout drunk before, and my brain was capable of intelligence (I got home just fine), but I had no conscious awareness. Same for a dreamless sleep and being put under for surgery.

Additionally, I, the observer, can observe and be aware of intelligence, which seems to indicate to me that it's something outside of what I'd call "me".


I think you are still conscious during the blackout, you just don't have a memory of it. Amnesia is the effect here, which is a little different than actually being unconscious the whole time.

When in surgery you really are unconscious though, hopefully at least.

I would compare this to forgetting what exactly you did on this exact day and this exact time one year ago. You can't remember it (most likely), but it doesn't mean you were unconscious during that time.


It only makes sense if

- you can represent complex state in a distributed way, so each neuron only encodes a small part of a larger signal

- the system has a working model of the environment, including our value judgements for all states, which are basically our emotions

Such a system can have experience because it has a latent space to encode experience in. It feels like something to be an agent because of the external environment and internal models of the environment, which include imagination and emotions. And this feeling is essential in choosing our actions, so there is a feedback loop action-to-emotion, then emotion-to-action. Our feelings are causal.


Experience isn't the right way to put it. The system you describe is straightforward, and it makes total sense how it can understand and act upon its environment. What is unexplainable, and perhaps always will be, is the awareness that I am. Even if I meditate and remove all aspects of the surrounding world from me, what I'm left with is a pure awareness that groups of neurons simply can't explain.

Why am I present at all and not just a mindless automaton? Surely an automaton could evolve and be just as successful simply by being a complex neural network. This is why I, and lots of others, think that awareness is fundamental.


You can never remove internal body sensations, even in meditation. The "self" is just a stationary point in this inner space, an element referenced in most situations, but not all; when listening to great music, for example, we can forget about ourselves.

So my idea was - we have an actual system for representing sensations, a system for imagination and a system for value prediction. We have the hardware for it, and we train our consciousness hardware on environment data.

If you're thinking "how can simple neurons or protein interactions do this?", it's the same with a neural net. We don't understand how it can tell cats from dogs, or why it can answer our questions so well. Yet it does. Hard to believe unless you see it, and even then it's hard to accept, unless you come up with your own private inputs and test it in a way that is sure not to be memorized.


No I’m saying it makes total sense that neurons can model the environment and act upon them. That isn’t at all hard to grasp. What is unexplainable is the fundamental awareness of being a thing at all.


Essentially why is it like something to be this system of neurons modeling the environment and acting on it.


Yes exactly


What makes sense to me is that consciousness is not an emergent property but a core of all things, with the additional property that it is replicative/additive. That is, smaller consciousnesses can form larger consciousnesses.

As to what it is, or why it exists at all, I don't think there will ever be answer to that. It just is.

It's definitely a strange thought, but it seems more likely to me than neurons or whatever other brain matter somehow producing consciousness out of thin air as soon as they reach some level of composition.

Pure materialism also seems very ill-defined to me. The physical world is, after all, only observable, detectable, and studiable through conscious experience. At best we can say that what is real is what is universally agreed upon by all observing conscious agents. If hypothetically there were only two of us, and I said "There is no ball in front of us" and you said "There is", then what is the meaning of physical truth/reality?

You can say let's use a detector. But then again, if I experienced the detector as saying false and you said it's true, what do we do?

It seems unavoidable that reality is a part of conscious experience, and not the other way around.


Frankly, your non-understanding is not unique. We're all rowing the same boat :)


If we wrote software to do this, but we were so incompetent that we couldn't fill in the model with correct data, we might just say "who gives a fuck, fill that with random garbage and we'll fix it in a later version". And then we never do.

Your subjective experience is that incompetent model. Your model doesn't know how to correctly judge human character, so you misread people and always wonder why they say one thing but another is true, and it doesn't click that they're lying the whole time. You can't keep track of time because the internal clock just isn't implemented, so the whole day seems to drag on, or maybe fly by too quickly.

It's all just really shitty software. Layers upon layers. And because humans believe this to be some mystical thing, rather than trying to fix it from the inside, they assume that it's awesome, necessary, and why would anyone want to fix it?

No small fraction of it is simply because our memory is faulty. The only time you ever remember anything is the first time you remember it; every memory access after that is really you remembering the last time you remembered it. Each access is lossier than the last, and confabulation is guaranteed. This seems to be true even moments after the event.

If it was anyone other than evolution who wrote your code, you could sue them for criminal negligence.

And that's before we even get to the part where you find out you're not even you. Inside your skull is another being, an intelligent one, with desires and goals. But you can't see, hear, or feel this being. It's invisible. The "you" that I'm talking to, exists because this being once upon a time needed to simulate the other humans around him, so he could anticipate them well enough to not be out-competed. He has a pretty good idea what they'd say if he asked them questions, how they'd respond to threats and challenges, what sets them off (so he can avoid fights). And, by mistake or design, he used this simulator to simulate himself (maybe to bootstrap it? if the simulation's output matches his known answers, it's working correctly?).

You're the simulation. When the judge asks the psycho teenager why he put the cat in the microwave and he says "I dunno" he's telling the truth. He does not know why. When your girlfriend cheats on you, and she's crying hysterically and can't tell you why she did it, she's not just lying (either to hurt you or to spare feelings)... she doesn't know. It was that other being in their skulls doing these things. They're just the simulations.

Now, you've either been poking around in your own head, seeing little glimpses of what I'm talking about, making you wonder if I'm not on to something, or you're incapable of that. I've met both kinds of people. If you're the former, you're wondering just how much of it I understand, because some of the glimpses paint a slightly different picture from what I describe. That's because our minds weren't built the same way. No two are alike, not in a special snowflake way, but instead like no two shacks in shantytown have the same kind of leaky cardboard roofs. And, if you're the latter...


I buy into this theory, and the other one about consciousness being a step or two behind and fabricating some cohesive explanation for why you did what you did.

If you are unfortunate enough to experience some human body failure modes you can get a glimpse into this process. The cohesive "veil" breaks down and you realize there's more "you" than that voice in your head. The distributed nature of the brain/body peeks through.


I never bought into this subconscious thing. If my actions follow my internal voice, it seems unfathomable to me to say, "no, you did that for other reasons". A simple example: I go to the fridge because I am hungry. If I were to cheat on somebody, there sure as hell were a lot of voices in my head that had to do with lust, anger, feeling neglected/unloved, or something similar.

Sure, there are deeper reasons why my voice is what it is, but that's something else entirely. I think it's more that as we evolve, we forget our initial motivations for thinking the way we do, so that when we do reflect on our past, and are honest with ourselves, they do somewhat re-emerge.


Your internal voice is making up justifications after the fact. You ask yourself "why did I do that"... and it doesn't know, but has to have an answer anyway.

You were taught well. Teachers and parents and adults don't like it when the child says "I don't know why I did it". They punish that answer. You learned the lesson soon, well, and maybe without even having to be punished.

Oh, and your internal voice is just another one of those simulations.

> A simple example is, I go to the fridge because I am hungry.

Except many millions of people go to the fridge, and they're provably, objectively not hungry. They say "because I'm hungry" too. If you had to make up a fictitious-but-reasonable reason why people go to fridges, that would be at the top of the list. That's why you say it. Few people, if any at all, are even aware of their hunger. In the west, few have even ever experienced hunger, and if they have, almost certainly only a few times in their lives.

None of this is anything that you are really aware of. You can't be. You can't even express how you think your mind works without using the word "voice". I used to think people like you were schizos when I was a child and hadn't figured all of this out yet. It was clear that it wasn't just a failure of vocabulary, I'd met many people even just out in public who didn't seem to know enough words to describe things, this wasn't that. And early on I could tell that it wasn't a joke, eventually that it wasn't even a metaphor. You do not know the reasons you do things, and when forced to explain, you make up stuff and choose to believe that's why. The "I don't know" answer becomes painful to adults. It nags at you. You're supposed to know yourself well enough to have reasons most of the time. It's one of those scary glimpses. There's a ghost inside your skull, moving the furniture around when you're not looking.


Also, I think a large part of "I don't know" is purely the fact that the mind is driven by chemicals. I can't explain for example why I can't focus or why I am tired, or why I no longer feel something that I used to.

There just isn't a perceivable reason for everything, as we can't fully perceive our internal state. But if you're aware of these things, they can be explained too. For example, you may identify that when you eat greasy foods you become lethargic, and that if you don't eat at all you become irritable.


>Your internal voice is making up justifications after the fact. You ask yourself "why did I do that"... and it doesn't know, but has to have an answer anyway.

That's assuming I need to make up a justification for why I did something, to protect my ego. Most actions I take in my life don't need that reason; that includes going to the fridge because I am, in fact, hungry. In fact I'd say 99% of the actions throughout my day are super simple decisions. There are some things that I am conflicted and unsure about, that I don't understand myself, sure.

You don't think "I am going to go left" and go right, after all. You don't think "I am going to grab this" and open up your fists. Saying your voice has NO relation to your actions or your true self is just absurd.

For more complex actions that have more emotional depth, again it really depends on how honest you are with yourself. You may not understand everything, but you have some idea or feelings you should be aware of.

Also consider the technical actions we take. If I am doing my job, I can fully explain my rationale behind why I did the things I did. Why would it be any different for the actions I take in my personal life? I felt insecure about not having a relationship, so I started thinking about how I can improve, so I started thinking about my body, so I started going to the gym. I bought low-calorie coke because I am trying to stay fit, because of the previous reasons. You're telling me that's all just complete bs and I did all these things for some completely different, unknown reason?

Or: I've really been hating my job. I started playing piano, I really connected with it, and I see it's something I am really good at. So I quit my job and enrolled in music school. Complete bs? When I was sitting there at work having a bunch of negative thoughts about it, I actually didn't feel that way?

>Few people, if any at all, are even aware of their hunger. In the west, few have even ever experienced hunger, and if they have, almost certainly only a few times in their lives.

I think this is a pretty fictitious statement to make. Yeah, most people in the west have not experienced "true" hunger, but we all feel hungry every day, as we do in fact need to eat every day just to sustain ourselves. I am not overweight; I am barely holding onto the weight I have. If anything I should be eating more. Yet I feel very hungry several times throughout the day.

I see "I don't know" largely as "I don't want to confront my feelings or publicly tell you". If what you were saying is true, there would be zero point in self reflection or therapy. There would be no point in judging peoples personalities, listening to what they say, or the promises they make.

I don't think you're entirely wrong, in that if you go deep enough asking why, you'll get lost. But that's not because of your unconscious self; that's because you completely forgot/blocked out/retold yourself some other story.


If we wrote software to do this, but we were so incompetent that we couldn't fill in the model with correct data, we might just say "who gives a fuck, fill that with random garbage and we'll fix it in a later version".

Sounds like stochastic gradient descent. Start with random weights and fix them later. That kind of "incompetence" gets you a seven-figure paycheck these days...
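
The joke holds up surprisingly well as code; a minimal sketch (toy problem, made-up learning rate) of starting with random garbage and fixing it in later versions:

    import random

    random.seed(0)
    w = random.uniform(-10, 10)               # start with random garbage
    data = [(x, 2 * x) for x in range(1, 6)]  # learn y = 2x

    for step in range(200):
        x, y = random.choice(data)    # "stochastic": one sample at a time
        grad = 2 * (w * x - y) * x    # d/dw of the squared error
        w -= 0.01 * grad              # fix it a little in this version

    print(f"learned w = {w:.3f} (true value 2)")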


I mean, you could write a program with a "mind" object that receives a bunch of data through various sensory "experience". From the perspective of the "mind", the data is "subjective", and the mind is "implemented" in exactly a way that it can represent itself as an entity, "I".

I don't think the biological reality is conceptually any more complicated, except that the mind and data are complex in exactly a way that completely hides the abstraction, roughly by being very good at ignoring meaningless artifacts of abstraction.

The hard part isn't imagining such a subjectivity, but imagining that I am that.
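
A minimal sketch of the program described above (everything hypothetical): a "mind" object that receives sense data only it can access, and represents itself as "I".

    class Mind:
        def __init__(self):
            self.experiences = []  # "subjective": only this mind holds it

        def sense(self, datum):
            self.experiences.append(datum)

        def report(self):
            # The mind represents itself as an entity, "I".
            return f"I have experienced: {self.experiences}"

    m = Mind()
    m.sense("red patch")
    m.sense("warmth")
    print(m.report())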


But, where does the color red come from? The actual first person experience of seeing red. Not eyes seeing a specific wavelength, but your mind seeing red.


What I find so frustrating in these conversations is exactly this mismatch. There's nothing special about the wavelength that creates the color red, but the color red is completely distinct from that. They're barely even related!


We have a deep-seated belief that the atom is the core of reality.

And everything emerges from there.

This materialism stems from René Descartes and his fellow philosophers.

And in the West it's often subconsciously combined with evolutionary theory: consciousness developed because it was useful somehow. However, that's a very big leap to make.

Both theories have good arguments going for them but are very theoretical and need a lot more proof. Yet they form the basis for pretty much all Western thought.

From a scientific perspective we have no idea how to create new consciousness or what it is.

From a human's experience it's more the other way around, reality is an emerging property of consciousness.

At the same time we also learned that matter & time is not as solid as we thought a few centuries ago.


In the brain there is an emergent reflection of a material reality happening, where the brain is creating a fully constructed model of the world with its own independent existence; our day-to-day experience is a dream that's cohering to sense input. Whether or not that is where consciousness or our apparent point of view lives, I don't know, because I don't see how to prove it logically either way. But experimentally it seems like it does, because our experiences align, and because you can alter people's state of consciousness through chemical and physical means.


What you're saying is that the brain creates the illusion of a consciousness. That's something you would need more proof for; right now you're just assuming it because it's the dominant view, based on our materialistic culture.

If anything you are arguing the world is an illusion, not consciousness.

You can change what inputs a being gets, visually and via other stimuli.

But you don't change consciousness itself. The pictures & emotions change, but the experiencer doesn't.

Same with dementia: the body & connection to the world changes, the inner consciousness & its ability to experience doesn't.

The irony is that we believe the world is real because our brain/consciousness does a really good job. Take for instance the simple fact that certain cultures can't distinguish green from blue. Or certain animals that live almost entirely by the feel of audio. Which experience of reality is the "real" world?

Materialism feels natural because our world feels real. But what we think is real and solid is what our brain has created for us; in reality both we and the world are very intertwined, and it is hard to separate experience & experiencer.


> Anyway, cells can hold (super) primitive models of the world and maintain internal balance in the face of anticipated events.

I'm not even a cocktail biologist, but my understanding is cells effectively operate via a web of complex chemical reactions, so the notion of a cell holding primitive models might be analogous to the way a CPU executes an assembly instruction: not because it "thinks" but because the way it's wired it's (nearly - barring solar radiation, I suppose, which incidentally also goes for cells) inevitable that it will react to a stimulus in a predefined way (even though the way cells react to stimuli is far more advanced than a CPU).

In a similar way, "anticipating events" could involve an analogue to computer memory: the processes that have run so far have lead to certain state being saved to memory that will now influence how the system reacts to stimuli in a way that's different from how it reacted before (e.g. sum a value with the value stored in a register).


> not because it “thinks” but because the way it’s wired it’s inevitable that it will react to a human stimulus in a predefined way

This CPU analogy of yours doesn’t comport very well with the article we’re commenting on, which detailed some specific experiments that show cells are not reacting in a predefined way that is due to their ‘wiring’, contrary to previous and maybe incomplete understanding of how cells work. I don’t know if the RAM analogy helps since the surprise is that non-brain cells do have memory and do cooperate with other non-brain cells to solve certain problems, and these collections of non-brain cells can apparently remember solutions to problems over time. So yes, memory can help with anticipating events, but that really supports the idea that cells are dynamic and doing some non-trivial processing vs the possibly outdated notion that they’re hard-wired and deterministic.


> not because it "thinks" but because the way it's wired

“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” ― Edsger W. Dijkstra

If we are talking about which kinds of complex systems (our brain, a cell, a computer, an LLM, a beehive, etc.) think, and how, we should note that there is nothing magical[0] in our brain that makes our thinking special, and so other blobs of atoms that are not our brain can likely do things analogous to what our brain does.

This is to say that explaining in reductionist terms how something supposedly thinks is not proof that it is not really thinking. Otherwise a sufficiently intelligent alien could prove that you are not really thinking (just a bunch of ions dancing).

[0] and if there is something magical, then we don't yet understand how it works or where else it is magicking stuff.


CPUs are constantly anticipating how the future will evolve. They have caches (to be specific, expiration strategies), branch predictors, and speculative execution, albeit for a very different purpose: to enhance processing speed, not to react to external events.
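
The classic 2-bit saturating counter behind simple branch predictors is a nice example: a tiny "model" of a branch's future behavior, updated from its past. A simplified sketch:

    class TwoBitPredictor:
        def __init__(self):
            self.state = 0  # 0-1 predict not-taken, 2-3 predict taken

        def predict(self):
            return self.state >= 2

        def update(self, taken):
            # Saturating counter: nudge toward the observed outcome.
            if taken:
                self.state = min(3, self.state + 1)
            else:
                self.state = max(0, self.state - 1)

    p = TwoBitPredictor()
    for outcome in [True, True, False, True, True]:
        print("predicted taken:", p.predict(), "| actual:", outcome)
        p.update(outcome)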


That web of complex reactions is sufficient. See Anticipatory Systems (Rosen) https://en.wikipedia.org/wiki/Anticipatory_Systems


> not because it "thinks" but because the way it's wired it's (nearly - barring solar radiation, I suppose, which incidentally also goes for cells) inevitable that it will react to a stimulus in a predefined way (even though the way cells react to stimuli is far more advanced than a CPU)

I think these are likely different only by way of their level of complexity. We simply substitute a word like "think" when the reactions to stimuli are far too complex and numerous for us to track fully. But ultimately said "thinking" is made up of many, many cells following those same stimulus/reaction patterns.


> not because it "thinks" but because the way it's wired

What if it is wired to think?


Anyway, cells can hold (super) primitive models of the world and maintain internal balance in the face of anticipated events.

I've occasionally run into science podcasts, going back almost a decade, where some researcher talks about the computational power of cell membranes, and how synapses apparently evolved from these mechanisms. Amoebas and paramecia navigate their environments, sense, and react through their cell membranes.

The upshot of this for AI is that the neural network model may be drastically incomplete, with far more computation actually happening inside actual individual neurons.


Nobody is attempting a one-to-one correspondence between neurons and artificial "neurons". The fact that a single biological neuron does much more doesn't imply some limitation or incompleteness (as long as the same computations can be implemented simply by having more of them, and as far as we understand, that seems to be the case). The choice is primarily due to how our hardware parallelization works: we'd prefer to implement the exact same behavior with 1000 structurally identical simple "neurons" rather than have a single more complex "emulated neuron" that requires more complicated logic that can't be straightforwardly reduced to massive matrix multiplication.
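
A sketch of that equivalence claim (weights hand-picked purely for illustration): a hypothetical "complex neuron" whose internal response happens to be x^2 can be matched piecewise-linearly by a handful of structurally identical simple ReLU units.

    def relu(x):
        return max(0.0, x)

    def complex_neuron(x):
        return x * x  # stand-in for richer within-neuron computation

    def simple_neuron_layer(x):
        # Four ReLU hinges interpolating x^2 at knots 0, 0.5, 1, 1.5, 2.
        return (0.5 * relu(x) + relu(x - 0.5)
                + relu(x - 1.0) + relu(x - 1.5))

    for x in [0.0, 0.5, 1.0, 1.7, 2.0]:
        print(x, complex_neuron(x), round(simple_neuron_layer(x), 3))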


that requires more complicated logic that can't be straightforwardly reduced to massive matrix multiplication

What is the rationale for thinking it doesn't matter? Or is it the case that researchers think a larger number of simpler neurons could be equivalent to a smaller number of "complex emulated neurons"?


Yes, computation as such is sufficiently universal that it's very reasonable to assume that a larger number of simpler neurons would be equivalent to a smaller number of "complex emulated neurons".


I'm also a cocktail philosopher, but isn't consciousness different to just having a model of the world and self within it? Consciousness is the lived experience. The world model and feeling of self appear in consciousness. I think a complex system could plausibly be conscious without having a belief of a self within it. Not sure if consciousness is possible without any world model though.

My impressions about this were strongly influenced by Sam Harris's Waking Up book and app.


One possibility at least is that "the experience" is not something that really happens. That is, it's possible that we don't actually "feel" anything, and our impression that we do is just the story that our self-model comes up with to explain (and help predict) our reactions to ourselves. Just like our world model has ideas like "the rock wants to fall down", it's possible that our self-model does too.

We already know that our self-model can be entirely wrong about our feelings. People with paralyzed or even missing limbs often believe, at first, that they just don't want to move that limb. So, they think they are having one experience, but they are wrong about their own internal experience: in fact, they are not moving that limb because they can't. And there are many other similar examples of people being wrong about their own internal experiences, typically but not exclusively because of some illness.

So, it's possible that our internal experiences are in fact only a model in which one part of our brain interprets the actions of other parts of our brain, often retroactively.

Note: I'm not claiming this is the truth or silly things like "if you believe in science you have to believe this". It's just another cocktail philosopher's story of what consciousness might be. Other stories are just as plausible, and just as consistent with the little we do know in this area.


> That is, it's possible that we don't actually "feel" anything, and our impression that we do is just the story that our self-model comes up with to explain

It seems to me that you are trying to banish things from existence by a reductionism-based exorcism ritual.

If we know the structure of some phenomenon, it doesn't mean the phenomenon doesn't exist. We know that life is chemistry, and there is no life in the periodic table. But that is not a reason to conclude that life doesn't exist.

If my feelings come into existence as a result of a complex information processing, it doesn't mean my feelings do not exist or are not real.

If "I" is just a retroactively written story, then what? Does it mean "I" is not real?


> If "I" is just a retroactively written story, then what? Does it mean "I" is not real?

It depends what you mean by "is real". Of course in some sense it is still real, just like triangles are real. But it's not real in the same sense that rivers or rocks are real.

The relevance in this context is that it gets rid of some privileged position for consciousness. If this explanation I gave were true (I have no reason to believe it is!), then there is no problem in reconciling qualia with materialism. Qualia are just an aspect of the modeled "I", not a raw experience that can't be explained. P-zombies are all that exists.


> Of course in some sense it is still real, just like triangles are real. But it's not real in the same sense that rivers or rocks are real.

Rivers and rocks are less real than consciousness. Our minds can work with mental objects only; these objects are often built as images of something outside of our minds. Most of the time the images have nothing in common with the underlying reality of the imaged phenomenon: for example, we need to remind ourselves that you cannot cross the same river twice, because our model is a static object, while really a river is a dynamic process of H2O molecules falling out of the sky and then rolling down to an ocean.

But our consciousness is the model of ourselves. The object (me) and its model (my consciousness) are much closer; there is no long chain of causes and effects passing information from an external river to neurons. And moreover, consciousness is pure magic among other images: I can believe as hard as I can that the river must be made of milk, and it will not change the river. But if I believe something about myself, it leads to real changes. Doesn't that mean that "I" is more real than any river?

Or... scratch that. I can agree with you that we have a confused, naive understanding of consciousness, mixing causes and effects. But I'm not sure that the rest is as simple as you describe. I'm sure a dualist will find a way to explain qualia as something that can't be reproduced in AI.

These things are never easy and they never have a definite answer.


> So, they think they are having one experience, but they are wrong about their own internal experience: in fact, they are not moving that limb because they can't.

I think it's rather the opposite, they aren't wrong about their internal experience, it's just that their internal experience doesn't match the objective reality of their body (which in this sense is external).

I think it is indeed entirely possible that our self-model can fool us about the realities of various situations, even those with our own body/emotions/etc, but I'm not sure how one could then derive the conclusion that the experience doesn't exist. It would just be instead that the experience is rather inaccurate/confabulated.


I don't think it's correct to call this an external experience. It's an experience about will and desire and direct control of the body - what can be more internal than that?

It's not like an optical illusion, where you think you are misinterpreting an external stimulus. This is a situation where you are trying to control a limb, not getting any reply, and concluding that "you don't want to move" instead of "I can't move".


The experience is internal, the body (the limb that's missing) is external to the experience. The confabulatory experience of "I don't want to move" is an internal experience and cannot itself be an illusion, it's simply an incorrect assessment of the actual state of the body.


Sure, the limb is external. But the experience "I don't want to move" is not wrong because the limb is actually missing, it is wrong because I did actually want to move. "I" did the exact same thing as every time I try to move (i.e. the brain sent the exact same signals to the limb).


Sure, none of what you said there would lead to the conclusion that the "experience is not something that really happens," though it's also possible there's a just a failure of communication here and I'm not understanding what you meant.


So what would it mean for us to not really have sensations of color, sound, etc? People can be wrong about some of their experiences, but those would be called illusions or hallucinations, not lack of experience. Illusions and hallucinations are themselves kinds of experiences, even if the experiences are faulty. You can dream about all sorts of ridiculous things, but it would be very odd to use that to claim that you don't actually experience dreaming.

If our internal experiences are the result of self-interpreting an internal model, that doesn't make the experiences go away. It means there's something about that activity which causes subjectivity.


It definitely seems like aspects of conscious experience are just more parts of the world model, e.g. we experience having a continual visual view of the world through our eyes, like watching a movie, but we know that's not what is actually happening physically.

Where I get confused though is this sentence: "...the story that our self-model comes up to explain (and help predict) our reactions to ourselves". What is "ourselves" here? That implies some observer that the self-model is explaining things to.


Speaking in the context of thinkers like Harris this is the reason why awareness is differentiated from consciousness. Consciousness is the ability to grasp the world "out there". Awareness is the blank canvas where consciousness is experienced. The analogy here is a movie projected onto a movie screen. Consciousness is the movie being played. The screen is the blank space of experience where the movie is projected. I think the confusion is there is no logically consistent differentiation of concepts like intelligence, consciousness, awareness in the sciences and they end up being used interchangeably.


Thanks, the distinction between awareness and consciousness is definitely something I've overlooked. I tend to think of them interchangeably.


> Where I get confused though is this sentence: "...the story that our self-model comes up to explain (and help predict) our reactions to ourselves". What is "ourselves" here? That implies some observer that the self-model is explaining things to.

That is a good point, language is pretty important in these things. I just meant "ourselves" as the overall brain/mind. That is, the self-model component just feeds these models into the planning component, so that the planning component can decide the next actions to take.


I'm also an armchair philosopher, and I enjoy thinking about these things a lot. The theory you've described is the one that seems to have the most explanatory power.

Andy Clark's "Surfing Uncertainty-Prediction, Action, and the Embodied Mind" is a book-length exploration of this idea. Highly recommend.


I agree that "consciousness is different to just having a model of the world and self within it" indeed. I'm just saying it feels like that modelling ability (which has clear and major evolutionary advantages) is a step towards consciousness, indeed something in the now (as we experience it). A (near) real-time model perhaps that constantly projects and adjusts. I guess this still doesn't require consciousness, but maybe consciousness results from this? Does it require a sense of "now" and identity relative to the world model?

I feel like the matrix is about to eject me, btw.

Thanx, I'm looking for Harris' books right now.


Mixing consciousness with cognition is a mistake. Imagine a conscious experience of constant redness and nothing else; is there any intelligence needed for this?


The reason why this is likely not possible is that consciousness appears to require a notion of time passing, and having a constant experience means that there is no time.

For time to exist some sort of motion is needed, including either internal motion like the perception of breathing or a heartbeat or a train of thought or external ones like a clock or a change of color from red to blue.


We have no way of knowing whether there is any time even with the "notion of time passing", as that notion depends on a notion of memory of a past, and we have no way of knowing if that is real, including whether or not there is a train of thought or any change.


You are misunderstanding what I wrote.


I don't think so. The point being that we can't tell whether we're having a "constant experience" or not, and we can't tell from that whether or not there's any time or no time, or punctuated time, nor can we for that matter tell whether consciousness requires any notion of time.

It's all pure speculation because we have no way of assessing it outside of our own frame of reference. E.g. I see you in another answer saying that "the fact is that all kinds of state changes happen", but we don't know if any state changes ever happen.

We have no way of telling a dynamic universe apart from a static one where we're all just suspended in a single unchanging moment.

We can choose to handwave a lot of this away ordinarily, because as long as the experience is reasonably consistent it makes sense to assume it is objectively real, since we have nothing better. It doesn't matter if the world didn't exist a second ago when e.g. considering whether gravity works, because it appears as if it did.

But when trying to determine the nature of consciousness, we run headlong into the fact that our observation of consciousness is direct only in the case of ourselves, and even then heavily limited. We have no direct measure that puts us in a position to prove consciousness even in other people. We can show that within our perceived world we can measure brain activity that correlates to thought, but not whether that reflects merely the operation of an automaton, or the existence of consciousness, or if there is a distinction at all.

As such, it's meaningless to suggest we have a way of saying much about the limitations of consciousness at all.


> We have no way of telling a dynamic universe apart from a static one where we're all just suspended in a single unchanging moment.

I am curious to know why you think so. What would you say about repeatably observable causality, in that case?


If the universe is static, then there is no "repeatably observable causality" or indeed any causality at all. In that scenario any perception of time and causality would inherently have to just be our perception lying to us about a past that we had no part in, if it exists in any sense at all. If so, we have not had this conversation, and your experience of it is just a static momentary perception of having had it.

Maybe time is a spatial dimension, and there are infinite moments of consciousness fixed in the same spatial location with no time passing.

Consider how you'd tell if a drawing is an individual drawing or a frame from a cartoon if all you have to go by is that single frame. You can argue that the drawing hints convincingly at motion, but that does not require that this motion has taken place.

Or consider a trace of a simulation, sliced and diced into snapshots of individual moments. We can argue that it's unlikely any entities in such snapshots would have consciousness, but if we're arguing on the basis that we appear to experience the motion of time, we'd equally make that argument if we were wrong about consciousness and indeed locked in snapshots of individual moments. We can even construct simulations where individual parts look causally connected but where the existence of one calculated frame tells us nothing about whether any individual other frames have even ever been instantiated (e.g. imagine a very complex function over time, where only punctuated values have ever been calculated).

I'm not saying I believe that is our situation - I'm saying we can't distinguish between that and an infinite set of other possible options, because "from the inside" there is an infinite set of possibilities that could all look the same from our vantage point. We can debate which possibilities seem more likely, but they will always be speculation as long as we're locked inside the asylum, so to speak...

Incidentally, this is an argument for a materialistic view of the universe, not against it, on the basis that absent a way of "peeking out" and seeing our situation from the outside, it's meaningless to treat the world as anything but what we measure and perceive - it doesn't matter whether or not this world is what it seems like to us or not as long as it is the only view we have of it. We just can't say if it is some inherently true objective view of the universe, and most likely it won't make any difference to us.

It only makes a difference when we tinker with philosophy around the edges, like these conversations about whether what we experience can tell us anything about the experience of other entities.


> If the universe is static, then there is no "repeatably observable causality" or indeed any causality at all. In that scenario any perception of time and causality would inherently have to just be our perception lying to us about a past that we had no part in, if it exists in any sense at all

Is it possible to have perception in a static environment? It seems like perception requires flux of some sort.

Clarification: meaning the machinery of the perceiver must have flux, otherwise it's not perception, it's just static state.


Is it? If we are in a static environment, then it would mean it is possible, and that this intuition is wrong. Since we don't have a way of determining experimentally whether it is wrong or not, it does not, at least for now, even help us quantify the odds. If we're not in a static environment, then maybe, maybe not; we don't know what the subjective experience of consciousness is at all.

We so far have no way of slicing perception or consciousness into ever shorter moments to see where it stops being whatever it is and becomes something "inert". But even if we did, we would not know whether that was an inherent limitation of objective reality or of our subjective reality, and whether those two are the same or not.


IMO those are details; we could drop the constancy and just ask: what is required to have a single moment of redness? Or even simpler, a single moment of one tingle.

BTW, experienced meditators apparently can experience no time and no space; no idea what that tastes like.


> BTW, experienced meditators apparently can experience no time and no space; no idea what that tastes like.

Sure, they might feel so, but the fact is that all kinds of state changes happen, so time goes on.


I think it's arguable that this "conscious experience of nothing but constant redness" might not be actually possible. We can imagine it, of course, but we can also imagine many things that can't actually exist in the world. It's similar to the idea about p-zombies: just because a concept exists doesn't mean it's physically realizable.

It's very much possible that what we would call consciousness can only exist in certain types of minds that have a particular level of intelligence, but also a particular structure.

Of course, it's also possible that you're right and the two are entirely separate, and that consciousness doesn't require intelligence (and possibly vice-versa).


Perhaps, that's the question. What is required for the simplest possible consciousness?


Good question, being investigated in the field of consciousness studies. Like Thomas Metzinger's minimal phenomenal selfhood: https://youtu.be/8f4ykI9har8


I think all these terms are too loaded and deep to answer your question; almost all words in the sentences we exchange are subject to subtle differences in interpretation and definition.

I still enjoy the process though, which perhaps also doesn't require consciousness, yet here I am.


At least when it comes to human level consciousness, I agree it's something 'more than', but what exactly?

If we look at our advancing world models in things like self-driving cars, when would we consider them conscious by our own rules? Such a car has a sensor network that keeps track of its own internal states. It has external sensors that monitor the external world and adjust its internal state. The internal state attempts to predict future states and take actions to avoid negative outcomes.
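
As a cocktail-napkin illustration, the loop described above is mechanically trivial to write down. Here is a minimal Python sketch; every name in it is hypothetical, nothing like a real autonomy stack:

    # A minimal sense -> model -> predict -> act loop (hypothetical sketch).
    class WorldModel:
        def __init__(self):
            self.state = {}                 # internal estimate of self and world

        def update(self, internal, external):
            self.state.update(internal)     # own sensor readings
            self.state.update(external)     # readings about the outside world

        def predict(self):
            return dict(self.state)         # naive: assume the present persists

    def control_loop(model, sense_self, sense_world, act):
        while True:
            model.update(sense_self(), sense_world())
            future = model.predict()
            if future.get("collision_risk", 0.0) > 0.5:
                act("brake")                # act to avoid a predicted bad outcome

Every box in that sketch is plainly mechanical, which sharpens the question: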

So when does this loop become something we consider consciousness?


As another cocktail philosopher.

I think everyone should ponder this when thinking about how they think, as if they are the one doing the thinking at all.

"Man can do what he wills but he cannot will what he wills.” ― Arthur Schopenhauer, Essays and Aphorisms


Just tried will willing ... success!


And yet, whatever it is that you willed is inexplicable. Why didn’t you will something else?


Ah - dare I say, "free"?


This is one of Hofstadter’s big ideas that he explored in his main works: GEB, The Mind’s I, and I Am a Strange Loop. The latter is a good intro to his work.


The particular podcast didn’t come across with that link. Can you provide the title or number? I’d like to listen to it! I reviewed a fair amount of the podcast list, but didn’t find a match to your description.


The title of the podcast series is: "Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas".


Thanks. My problem is finding the specific podcast!


Joscha Bach also talks about this a lot. He calls consciousness the monkey with a stick controlling the elephant. For a starting point, listen to his Lex Fridman interviews.


What we call consciousness may have the same relationship to what creates it as Plato's cave shadows have to what casts them.


Not everyone is a philosopher with a cocktail, but surely we're all cocktail philosophers.


> A sort of model to interact with and to extrapolate future events from

Something something LLMs can only predict the next word.

I hate to spin up this trendy debate again, but it's always funny to me to see the dissonance when talking about the exact same things in biological and mathematical cases.


LLMs don't even come close to the complexity of the human mind though. They're a pastiche of human language, a fuzzy jpeg of the Internet.

The human mind is _so much more_ than a prediction machine, and incredibly complex... All that's before you get into the way the endocrine system interacts with your mind.

In some parts of the brain, a single neuron can have on the order of hundreds of thousands of connections. The speed at which neuronal signals travel varies from neuron to neuron, from roughly 2.5 m/s to 200 m/s.

Human minds are more than just prediction. The anterior lateral prefrontal cortex seems solely dedicated to prediction (not that nothing else predicts, just that the ALPC appears dedicated to that single task) and is extremely good at it. Prediction can influence all sorts of mental processes, such as most forms of perception... but influencing perception is _not_ the same as _being_ all forms of perception. If something unpredictable enough happens in front of you, you'll still see it.

Sure, there are limits to that: when focused on a task, the predictive parts of sight tend to filter out visual data that doesn't match the signal you're looking for (see: basketball players passing the ball and a moonwalking man in an ape suit), but if every basketball player turned into Spaghetti-Os and started screaming, you'd still hear the sounds and see the O's.

So sure: LLMs do a good job at basic prediction but they're nowhere near the complexity of the human mind, of which prediction is only a small piece.

(And let's not even get into efficiency... A brain runs on 20W of power)


Of course, it's only a small piece of the puzzle, but I do wonder what kind of existing organism would roughly match it in complexity, at least in terms of problem-solving capacity.

A bunch of cells likely have more than a few GB of data between them, but most of it is just instructions on how to fold proteins to replicate themselves. Is that intelligent behavior or just mindlessly executing a script, albeit in DNA form? I would lean towards the latter.
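
A rough back-of-envelope for that "few GB" figure, assuming ~3.1 billion base pairs per human genome at 2 bits per base (a sketch; the numbers are round approximations):

    # Back-of-envelope: information content of one genome copy
    base_pairs = 3.1e9            # approximate human genome length
    bits = base_pairs * 2         # four possible bases -> 2 bits each
    gigabytes = bits / 8 / 1e9
    print(f"{gigabytes:.2f} GB")  # ~0.78 GB per copy

So a single copy is under a gigabyte, and "a few GB between them" is mostly redundant copies of the same script.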

I suppose it's hard to compare biological intelligent systems directly, since they all have to process all available sensory info in a timely manner, not just very narrow text input. Once more multimodal models are trained on vision, sound, and kinematic data, it should get far easier to draw comparisons.

> but if every basketball player turned into Spaghetti-os and started screaming you'd still hear the sounds and see the O's

Unless of course someone had set up a Somebody Else's Problem field generator around them :)


>(And let's not even get into efficiency... A brain runs on 20W of power)

At the end of the day this is mostly meaningless.

Biological creatures have to be efficient because they need to harvest energy from the environment around them. But to limit machines to that would be like saying we have to feed biomass to airplanes.


Their point was the insane efficiency, not the low consumption. Modern LLMs are many orders of magnitude less energy efficient; despite all our progress, biological computation makes our silicon look primitive by some metrics.
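
A crude order-of-magnitude comparison makes the gap vivid (all numbers here are rough assumptions, not measurements):

    # Hypothetical power comparison, order of magnitude only
    brain_watts = 20        # commonly cited figure for a human brain
    gpu_watts = 400         # one datacenter GPU, roughly
    gpus_for_inference = 8  # assume a modest serving cluster
    print(gpu_watts * gpus_for_inference / brain_watts)  # 160.0

That is roughly 160x the power draw just to serve one model, before counting cooling, networking, or the energy cost of training.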


And they are generated by their host piece by piece (brain growth). And they last longer than AI processors, with basic maintenance anyone can do (eat/sleep). And they usually continue working when their programs break (e.g. hardware failures or gamma rays), while NNs and GAs don't.

There’s hardly a comparison. If they want one, I’d like to see them start by using dynamic chunking instead of tokenization, like the mind does. Also, they can tell me what pre-training method runs 1+ trillion parameters on 20 W. I could use that right now.


> predict

This is the right term to use here.

> Something something

If human brains have a model, then is language the transport layer on top of that? Is trying to get to intelligence via language no better than trying to get to "google" by modeling its TCP/IP traffic?


Man, why do people working the most menial tech jobs have such an obsession with suggesting some shitty "research" fad in CS as a solution to centuries-old complex problems in all other science fields? It's cringe, it reeks of ignorance, and the comparisons are flat-out wrong most of the time.

It's even worse when low-quality popular-science journalism promotes this notion, like that Quanta article about the human vision system working just like transformers do.


> the most menial tech jobs

How do you know what job moffkalast has, and why does it matter? This reads like pure snobbery to me.

(Also: moffkalast did not in fact suggest that anything is a solution to a centuries-old problem. "Some common rhetoric about LLMs is too simplistic" is a far cry from "LLMs resolve all the perplexities about human consciousness and thought".)


> In this light consciousness also makes more sense to me, although consciousness feels more like a by-product, our (human) ability to hold an internal model of the world in our minds and interact with it, is pretty advanced.

You can generate all kinds of sentences like this all day long in your consciousness. That does not make any of them true.

There is zero evidence for existence of physical matter/materialism.

The only thing we know for sure that exists is consciousness.

And you suggest the complete opposite with zero evidence.


You may want to read up on the critiques of Descartes from the last century. Not even your own consciousness is a given.


Obviously there is no you. Just the experience. I thought that was pretty obvious.


Again, I suggest you read some more recent philosophy. This is a good start:

https://en.wikipedia.org/wiki/Cogito,_ergo_sum#Critique


I've always found all these discussions about what exists and what doesn't exist weird, in that there is a distinct lack of focus on what existence means.

Even if the only thing to exist (existence level 0, if you wish) were some vague "cognitive activity", would I not exist (existence level 1) in that cognition?

Unless you assume that there is a certain divine level and you are conjecturing about your relative existence compared to it, what is the point of talking about the possibility of non-existence?

In my mind this is similar to the theological mind-trip about the universe being a simulation: it is clear that I exist in some sense. Rather than focusing on whether my existence is fake or "not real existence", is it not more reasonable to focus on how my existence relates to other existences?


There is also zero "evidence", by this extremely restrictive standard of "evidence", for existence of any consciousness aside one's own. This rhetorical strategy thus has a weakness: who or what exactly are you trying to convince?


> There is also zero "evidence", by this extremely restrictive standard of "evidence", for existence of any consciousness aside one's own.

Yes. That is correct.

> This rhetorical strategy thus has a weakness: who or what exactly are you trying to convince?

Ego dissolution/self-realization takes time. It's my goal in life, but I haven't achieved it yet. Your comment does point out, though, that I should not waste time engaging in things like this, which build up the ego more.


Brains are not required to solve problems, yes, but they are required to think. That's one of their defining characteristics. It's not a thought without something like a brain; at best it's a pre-programmed/pre-trained behavioural response.


> "... but they are required to think"

Let me humbly suggest that you not make such (Truth) statements! I don't know of any hard evidence that supports this. I know this is what most people believe, but the emphasis is on believe.


That’s misunderstanding what they’re saying. If you watch some of Michael Levin’s talks on YouTube, he specifically uses William James’s definition of intelligence (intelligence is a fixed goal with variable means of achieving it) and has experimentally shown this capability at cellular scales. He shows how it cannot be pre-programmed behavior. There seems to be goal-directed behavior.


> (Intelligence is a fixed goal with variable means of achieving it) and has experimentally shown this capability at cellular scales.

Supposing I accept that, what does this have to do with thought, which is the claim that I was specifically responding to? Does Levin or James also show that this can only be done by having thoughts?

Edit: for instance, as opposed to having some non-thinking process like gradient descent, or more plausibly, some kind of hill climbing.


I agree, it’s not "thinking" in the way brains think, but I was just disagreeing with the assertion that it’s "just" pre-programmed. He’s shown pretty convincingly that these phenomena cannot be pre-programmed. He’s put organisms in novel developmental states that never occur in nature and shown that the cells work around the problems he gives them to build up a working body. I think if you get stuck on "thinking", yeah, it’s not thinking, but it’s some sort of cognition that is above simple optimization.

I honestly think if you (or anyone) watches one of his talks it will blow your mind. Prior to watching his talks, I would have said pretty much the same thing as you, but his work completely upended my view of biology. He’s basically breaking down how we get from “just physics” to a working organism and finding a lot of surprising things. Unfortunately, there’s way more information than I can condense into a comment.


> it’s not "thinking" in the way brains think

Are you sure brains aren't doing any of it?


Yes. Many of the experiments involve single-celled organisms, where there is no brain involved. And the cognition he’s talking about is in different problem spaces than what we think of as "thinking" in the brain.

Honestly, it’s so difficult to talk about because it’s such a different way of looking at biology. I would suggest picking any one of his YouTube talks and watching it. He’s a very good explainer.


Which is one of the arguments the ancient Greeks (Aristotle in particular) used to argue that God must exist. Things are clearly ordered to ends (have goal-directed behavior). Others came to the conclusion that all things that are are part of one enormous goal-directed-entity, but that conclusion involves a bootstrapping problem on the part of that entity (which is composed of parts) and so I don't hold with it.


I'm not sure I see a bootstrapping issue. Time and chaos will remove non-teleological agents, won't they? It would just emerge out of selection.


The issue with the goal arising out of nothing is just that. To put it another way, a goal arising out of nothing is an effect without sufficient cause to explain it.


This is pretty similar to a concept in "Children of Time" by Adrian Tchaikovsky.

I've always thought the book's concept of 'DNA' memory storage was sci-fi. Cool concept, but really far out. So it's pretty exciting that this sci-fi concept could happen.

What if we could drink something to give us the memories of someone else? That would be a way to drink a 'degree' and learn a ton fast.

"Glanzman was able to transfer a memory of an electric shock from one sea slug to another by extracting RNA from the brains of shocked slugs and injecting it into the brains of new slugs. The recipients then “remembered” to recoil from the touch that preceded the shock. If RNA can be a medium of memory storage, any cell might have the ability, not just neurons."


Michael Levin is a rare example of a scientist who really thinks outside the box and goes where few have gone before.


> “Indeed, the very act of living is by default a cognitive state, Lyon says. Every cell needs to be constantly evaluating its surroundings, making decisions about what to let in and what to keep out and planning its next steps. Cognition didn't arrive later in evolution. It's what made life possible.“

Yes. Cognition isn’t just about solving differential equations and the like. It also refers to the most basic functions/processes such as perception and evaluation.


Are perception and evaluation basic functions? By analogy with cellular life, maybe. But I think this abstraction hides more than it reveals.


"All intelligence is really collective intelligence, because every cognitive system is made of some kind of parts" - that's exactly the basis for the popularity theory of consciousness, which deduces that not only humans are conscious (and plants and other animals, etc), but also the global human society can have a sort of consciousness.

https://consciousness.social


Makes sense, we're all just the proverbial neurons of the brain that is our entire society/planet. Yeah, this is really big brain time.

Then a collection of societies would be another level of meta-intelligence above that, it's all fractal.


Exactly - multizoa theory describes the global human society as an organism made of many animals ("multizoa" means "many animals").

And then multizoa organisms can evolve once they reproduce, via processes like establishing colonies on other planets. Who knows what kind of multizoa consciousness, i.e. sense of social self, these human societies would come to evolve given enough generations of multizoa natural selection?

In that sense, consciousness.social is like the glimmer of a proto-consciousness for multizoa organisms. The first sparks of a multizoa-level sense of self, that could evolve down multizoa generations into something as complex as human consciousness, but at the multizoa level.

Brain numbing stuff.


This seems to confuse intelligence with consciousness?


I meant to say that they have the same root - you can think of consciousness as (collective) intelligence put to use in a particular domain: the self, or the "what it is like to be". If certain basic units of the collective intelligence contribute to that sense of self, you can get consciousness. consciousness.social aims to provide a living example of that.


I find it really interesting how many times this distinction has been made (from each side to the other and vice versa) in just this thread.


This is great news given the relative scarcity of brains among humans.


Slime molds!!!!

Slime molds can get very large as a single cell, visible to the naked eye and weighing in at several pounds: https://en.wikipedia.org/wiki/Myxogastria

It says 44 pounds... that's pretty chunky for a "cell".


This finding gives more supporting evidence to Professor Chemero's Radical Embodied Cognition. This is a very interesting interview he gave that covered a lot of his ideas: https://m.youtube.com/watch?v=vxT1RQICM_8


My reading (caveat: not a biologist, any other sort of scientist, or philosopher) is that a brain is required to translate the environment and its collection of problems into something (or some things) that its simpler structures can "solve" (where "solve" could just mean "act usefully in response to", and that act/response could be to ignore), and then to translate any responses back out to that more complex environment.

Cells can solve problems in their limited context, though that context can be less limited than you might first think (consider single celled life can have relatively complex interactions). Groups of cells can solve more complex problems, by working directly together or by some acting as support structures while others do the solving. Complex bodies and brains build up in parts from there over time.


Michael Levin is just incredible; he appears on a lot of podcasts on YouTube. His work on the collective intelligence of cells, xenobots, and regeneration is just mind-boggling.


Next stage in AI?

> According to Bongard, that's because these AIs are, in a sense, too heady. “If you play with these AIs, you can start to see where the cracks are. And they tend to be around things like common sense and cause and effect, which points toward why you need a body. If you have a body, you can learn about cause and effect because you can cause effects. But these AI systems can't learn about the world by poking at it.”


This is a little like the 1960s experiments teaching what were most likely planarians (flatworms) to arch their backs in response to a light shone by the researchers.

Those worms were then ground up and fed to new, untrained worms, which then acquired the back-arching response.

Can't find the original paper but it was covered in the 1984 book The Science in Science Fiction.


Feels like Peter Watts's Blindsight: consciousness is not needed for advanced problem solving, and may actually hinder it.


Brains are for complex tasks built from a series of simple problems handled by simple cells. It is a network.



Makes me think about Michael Levin et al., working on understanding bioelectromechanics... Fascinating stuff.

https://www.youtube.com/watch?v=hG6GIzNM0aM


That explains so much about HN.


I have a question. Doesn't the complexity of the problems it can possibly solve vary from cell to brain?

Or is it like a Turing machine, where a simple abstraction can solve any possible "computable" problem?


I guess a ("simple") molecular biological mechanism can be Turing complete, but if it's as slow as a mechanical ticker tape, the mechanism will be too slow to respond to any inputs in a relevant time frame. The speed of the response is important in an evolutionary context.


I have a little problem understanding why this article is dated February 1, 2024. Is it a UX bug where they show the date of the print issue, did it come from the future, or is my clock off by a week?


Yeah it’s likely the magazine issue date. Bad UX but not a bug.


Michael Levin is seeking answers to seemingly basic biology questions. I'm a layperson, but how come mainstream biologists haven't pursued this line of thinking as far as he has?


Interesting. What if the brain is leaving a "footprint" on the cells of the nervous system, and it's possible to recover the brain by making a mold of this footprint?


So brains are required for multi-modality, but not actual 'intelligence'?

(Herein everyone screams at my misuse of terms and tells me I'm wrong - which at least should help explain things)


Makes sense that brains are not required to solve problems. I am not brainy, but I seem to be able to brute-force my way through some complex problems by sheer force of will.


I am still struggling with my central question (I read this to be the 'hard question') of why I am currently occupying the body that I am as an observer, and not another one, or perhaps even none. Does anybody have some cursory reading about this subject?


I quite like Penrose-Hameroff's theory https://www.youtube.com/watch?v=xGbgDf4HCHU

My best summary of it: consciousness is primal, every bit of matter has perspective, every interaction has experience, the body you're 'occupying' is having a particularly complex and long-lived experience.

As for your question of why this one, well, someone's got to be having this one. As for why not another, there's no saying 'you' won't have another 'after' this experience ends, but if 'you' had them in a perfectly undifferentiated way that'd be a different experience again, and as stated, someone's got to be having this one.


This!! It all comes together. What the mystics have figured out through deepest self-introspection and meditation comes so close to our modern understanding derived by physics, biology and neuroscience. It is fascinating to see how these understandings converge.


If you assume that the observer can "occupy" a body, rather than be a body, you have to conclude that the memories are part of the body, not the observer. You assume that you occupy the "correct" body, because your present experience matches your memories, but that'd be true even if you moved to occupy another body. Moreover, how do you know that there isn't more than one observer in your body? Or maybe there's just one observer sharing all the bodies?


To answer your last two questions: I don’t know! And those formulations are simply, in my opinion, rhetorical variations of the hard question too. One observer with many bodies, or many observers in one body, are equally valid possibilities as far as I can tell, and I am not referring to split personality disorders or a collective external memory for that matter. It’s genuinely one of the most fascinating questions out there for me right now!


Materialism, which by my reading has the most evidence going for it, solves these particular questions rather easily. Your experience as an observer is the result of your brain. It's not another one or none because those aren't your particular brain.

This doesn't solve the "hard" problem (it's still quite mysterious how exactly and at what point neuronal mappings become sentient), but it does leave you without having to wonder about the particular questions you asked there.


This seems like a common move that, to me, completely sidesteps the actual question. Sure, your brain powers intelligence, but we have nothing on how it could create awareness, which seems qualitatively different.


Completely sidesteps which question?

It answers very clearly these questions posed by the person I was responding to:

> why I am currently occupying the body that I am as an observer, and not another one, or perhaps even none.

It definitely does not answer the "hard" problem of consciousness, which is what you're alluding to and which I specifically and explicitly said it didn't answer.


I referred to the hard question in my OP, and I don’t think that materialism answers either of the question’s formulations. Materialism may answer it at some point, and I would be very impressed if it does, because it is going to require perhaps new mathematics, geometry, and physics for us to get there. So far, none of our tools for measuring any form of field has led us any closer to answering the hard question.


Yes, I noted it doesn't answer the "hard" problem explicitly in both of my replies here on this thread. Indeed, the very reason it is called the "hard" problem is because it may well be unsolvable (though this is certainly debatable, this is the very etymology of the term).

Your actual stated questions (why am I me and not someone else, etc) are in no way part of the "hard" problem's formulation, and are indeed easily answered by materialism as I noted.

Perhaps take a look at the wiki article on the hard problem: https://www.wikiwand.com/en/Hard_problem_of_consciousness


Buddhism claims that our feeling of separation (and thus the multiplicity of subjective experiences) is an illusion. But I never really understood why.

My hunch is that this is related to the question of why we are experiencing this particular moment in time and not another one in the past or in the future. If you believe in the many-worlds interpretation of quantum mechanics, one can also ask why I’m experiencing this particular branch.


> Buddhism claims that our feeling of separation (and thus the multiplicity of subjective experiences) is an illusion. But I never really understood why.

They've made a good book to help people get the concept. It's called "The Gateless Gate" and it's a series of seemingly nonsensical stories that you're supposed to think about, trying to see the meaning behind them.

If you want to give the exercise a try, it's on wikisource: https://en.wikisource.org/wiki/The_Gateless_Gate


For an intro, I think the old stories have too many anachronisms for modern readers. There are too many meanings in the phrasing that readers of the time would already have known.

Do you know of any modern translations that frame the story in actual modern references?


I think it would make a bad introduction to Buddhist philosophy anyway; it's meant as a more "advanced" text IMO. Starting with a plain-English intro (like Alan Watts, The Way of Zen) or simpler texts (the Dhammapada) should be easier.


VS Ramachandran has an interesting talk about mirror neurons, which are a subset of motor neurons. They activate when you perceive anybody else doing something, as opposed to only activating during your own actions. This is fundamentally a built-in empathy/group-learning mechanism, but it also has some other interesting implications.

For example, when somebody touches someone's hand in your view, your mirror neurons activate just as if you yourself had been touched. Then the nerve endings in your hand send a signal to cancel the effect, but sometimes you still get a tingling from the neural confusion, depending on the strength of the signal (e.g. watching someone getting kicked in the balls, or Russian daredevils walking on top of highrises). But if there are no nerve endings there, there is nothing to cancel the signal, so you do experience another person's feeling of being touched as your own. Therefore, the only thing that separates our consciousnesses is literally our skin and the nerve endings on it.


I sometimes wonder if we are all basically the same consciousness, threading through all of the "antennae" of life one after another. But I find this idea painful, because it is tantamount to an immense cosmic loneliness.


Like in "The Egg"?


Yes, although The Egg ultimately presumes separation of consciousness at some "higher level" given that there is a dialogue between such consciousnesses. My greater sense of loneliness comes from a sense that the very premise of a separation of consciousness exists as a deliberate goal of our universe, which was made by a "God" who is actually all of us, as a means to keep myself/ourself company and to introduce love. Sort of like we are all branches of the same tree. But people talk about having epiphanies about this as if it is a good thing that we are all the same, leading to a connectedness. But it also leads to loneliness.

Sorry for the dump.


Loneliness is a human instinct based on our evolutionary history as social primates though - if you travel up the antennae far enough that there are no 'others', there are also no evolutionary pressures to make that loneliness a source of danger.

But what I find cool is that the lonely social ape can also look up the antenna and find all the security it could want - where I think some religions err is that when this happens, if you keep identifying with the insecure social ape rather than the many-antennaed deiform, it can interfere with the smooth communication between the two.


Don't worry, this has been the central question of humanity forever. It's also likely unanswerable by the scientific method.


I feel that if one does not have the patience to wait, one may just go and self-experiment. It is not that consciousness is in a lab somewhere that only some scientists can access; no, it is right here! It is so much in our faces that we do not explore it ourselves, but rather wait for some "authorities" to explore it for us.

No! We can explore it by calming our mind and our senses with well-established methods which have been known for thousands of years (the ancient tradition of yoga, for example, and do not think twisting and bending, no, this goes MUCH further!!)

And then, when the senses are turned off (by practice or "outside help", e.g. a flotation tank) and the mind is silenced, then we start to explore what lies beneath!!

They say: "When all the movements in the mind have ceased to exist, the knower dwells in his real nature" (or something along the lines, in the Yoga Sutras, roughly 1500 to 2000 years old )


I'm also stuck on the same thing. Why am I me and not someone or something else? I feel like most are just going to laugh at this question as completely obvious, but for me it's almost a deep sense of angst and awe. It used to keep me up at night.


I know this won't feel satisfactory, but there's nobody else and nothing else that you could be. Not even in principle. Your consciousness is a unique product of your brain's labor. Think of it not from the perspective of your consciousness (as if it could up and change bodies), but of your body. What other consciousness could "this body" have? No other than you.


I suppose I am at the beginning of asking this question out loud, and it has brought on some interesting discussions with my friends and family.

Although nobody has a clear answer on this subject (globally it seems, not only in the local one I'm exploring), the variety of answers and debates it has led to has been very exciting. On the other hand, I am a bit disappointed that nobody has been able to bring anything more insightful than what I could find by exploring the possibilities and digging around on Wikipedia.

In fact, one question that I had been wondering about (at what point does an analogous conscious experience appear along the quantum-to-human scale of complexity?) has already been asked in https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F by Thomas Nagel. I still wonder what it is like to be a planarian, or a fly on the wall of a board meeting (the audio fidelity must suck).


If you haven't read any Douglas Hofstadter, do so at the earliest opportunity.


GEB is on my shelf, I read the long prologue, I will dig in once I'm a little further through the Silmarillion!


Sleeping on my copy of Eternal Golden Braid, and I really shouldn’t…


Based on your thoughts above, you'll definitely enjoy it!


It's a fascinating question, and I've long thought that it's the simplest one-sentence argument against physicalism. It's a fact that I am who I am (in the non-tautologous sense you allude to), but a complete physical description of the universe would presumably not include this fact.


It is the simplest one-sentence argument for physicalism! If consciousness is a purely physical phenomenon, then it should be no surprise that everyone with a healthy brain has one, including you. If it isn’t, then what supernatural event caused it to appear specifically in you?


Matter isn’t like that though. Consciousness is unique in being a phenomenon that is totally singular. There are no phenomena in the physical world that aren’t considered part of a wider collection of other physical phenomena, e.g. this atom in relation to that one. There is no 'this' and 'that' subject; there is only One. And that's not even getting into the fact that mental states are already non-physical.


""Consciousness is unique in being a phenomenon that is totally singular and unique""

I think the problem with the argument is that you are giving way too much credit to consciousness. You are giving an opinion on its uniqueness from the point of view of a conscious entity that itself hasn't grasped its own consciousness.

The really exciting part of all the latest AI tech is that you can treat the 'state of a neural net' as the 'state of experiencing the phenomena'.

For brains: researchers can now do an MRI on a brain and reconstruct what image it is thinking about.

So, State of Neurons = Phenomenal Experiences.

The phenomenon you are saying is not physical is really the physical state of the brain's neurons.


The experience of seeing something is completely different from the physical state of the neurons that see though. You have to be "in" the physical state in order to have the actual experience. Any physical description of the neurons does nothing to capture the actual experience of being those neurons, and that's the gap.


I agree with that.

But if you have one person reporting an experience and can capture the 'physical state', and then take someone else, have them report an experience, and capture the 'physical state' as well...

And if they match, then you can start establishing the relationship.

Then, if my mind is captured and I'm thinking something that correlates, I can 'assume' I'm experiencing what the other person is.

It will always be imperfect. I agree the experience of each person is unique.

But the whole "the eye can't see the eye, therefore the eye doesn't exist" problem is also kind of just playing with logic.

So just because we can't embody the experience of another person, doesn't mean we can't reason about how that experience is being generated.


I agree there might be a relationship between the physical state and the actual experience of it, but ultimately they cannot be bridged. There's only one way to bridge that gap and it's to "be" the physical state, and that's a concept that has absolutely no analogy in physical phenomena.

But it's even worse than you say: because those relationships are generated purely within the mind, and we don't find them outside the mind, really all physicality is an illusion; it's all ultimately internal.


Or even worse, the phenomena don't exist internally or externally. They are a combination of the processing of the sensory inputs and the internal model. So the 'phenomena' are really the processing of the inputs, with no substance at all, because they are the processing itself.

Kind of what was described in 'Being You' by Anil Seth: our reality is a 'controlled hallucination'. Though I don't like the terminology because it is too easily misunderstood.

But I guess I fall on the side that we can still study how phenomena are generated internally, and still be able to understand them. Just because we can't experience the exact same phenomena as someone else doesn't mean we can't figure out how they are generated.


> so really all physicality is an illusion, it's all ultimately internal.

You don't really believe that. If you did, you would be fine jumping out of an airplane without a parachute, or something even more fatal... because the physical "doesn't matter", only the internal.


If I do that I think I will have a negative mental experience, therefore it matters. The internal world will become undesirable if I jump out of an airplane. There is no external reason why I don't want to do that. If I could jump out of an airplane without a parachute and not expect it will have a negative impact on my internal world I would be fine with doing it.

So yes I do really believe it!


Why on earth would doing ANYTHING in an imaginary, unimportant realm affect your mental internal world?

You don't want to admit it, but you're basically confirming that you do understand that you cannot stand apart from the physical realm. You are performing mental gymnastics to pretend you're in touch with some higher knowledge and understanding... but you're still constrained to the physical world, just like the rest of us.


Well, a story can also affect my internal world, if that's your criterion of reality, yet I doubt you consider that as real as the physical world. Basically, according to your argument, there is already no distinction between physical and mental, which is the very reason I simply say it's all mental. The reason I don't call it all physical is because "physical" usually means "out there" and not "in here". No, everything, absolutely everything, is "in here". There is no "out there". I perceive reality in this way. If you could touch some outside directly, that would be amazing magic and a contradiction of any logic I know.

It seems kind of like you can't really accept that this is how I understand reality. I have a tip for you: others' views will be much clearer to you if you don't assume they're lying about believing them. Maybe they really do see things differently from you.


There's a recent video you can search for: a woman walking out the door of her apartment building. A young man sneaks up behind her and puts a bullet in her head. She dies.

She didn't have to have any thought about the bullet. She didn't have to think ANYTHING about the physical world. Nothing about her state of mind affected the outcome.

Physical reality is independent of what we think of it.

> Maybe they really do see differently to you

I appreciate that you think you see it differently. But your actions show you don't believe it as deeply as you think you do. You are a prisoner of the physical realm, not an inhabitant of a higher, more spiritual plane. What good is saying something is imaginary if it affects everything you do and you can't escape it? That sounds like something very real indeed.


I can’t really infer anything about the internal world of the woman. Also, according to Buddhism, the state of her mind would affect which realm she was reborn into after death, and arguably getting shot was itself a manifestation of her mental state. The Yogachara philosophy of Buddhism says that all appearances of external reality are created by our internal world.


> I can’t really infer anything

The point of the video is that she had zero awareness of the bullet. It all happened in 2 seconds without attracting her attention. I'm sure, if you're honest, you can think of 1000 other examples as well.

Physical reality proceeds without your thoughts, it will continue after you and I die, it doesn't need our feelings or thoughts to exist and persist.

> Yogachara philosophy of Buddhism says...

Tony the Tiger says, "They're great!!"


> Physical reality proceeds without your thoughts, it will continue after you and I die, it doesn't need our feelings or thoughts to exist and persist.

Sorry, I simply don’t believe in this.


My belief is that it is unfounded and ultimately based in narcissism. "I am so important that I create external reality, I really am the center of the universe, it wouldn't exist without my mind." It's an appeal to self-importance.

It can't account for something like the fossil record, which existed before any human was on the planet, other than to say, "I dreamed that up in my own special imaginary fiction!!"

Perhaps the primal nature of the world isn't physical matter, but to pretend that we create reality with our minds is not only silly, it's dangerous, and it flies in the face of how everyone, even those who claim to believe otherwise, actually acts in the world.


Well, only deluded beings (including myself) act that way. Buddhas and bodhisattvas don't. Indeed, they can bend so-called "physical reality" to their will, manifesting physical bodies and appearances in the minds of others. It's simply the following: form is emptiness, emptiness is form.


That is NOT what the Buddha taught. It is expressly the opposite.

Promoting these types of wrong understandings really distracts people from further learning.

This turns people away from further investigations that might help them.

People that might be aided in their suffering by learning more about buddhism, read some weirdness posted about "buddha can earth bend, whoooo, I learned about buddhism from watching Avatar, I'm a bender", and are completely turned away.

Read and meditate further before promoting something that is incorrect.


Yes he did; it’s in the Lotus Sutra and the Lankavatara Sutra, to name two. Perhaps you’re a Pali canon fundamentalist or something, but that’s not my problem. What I’m saying is orthodox Mahayana: Buddhas can create bodies and are basically omnipotent in the Mahayana. The Lotus Sutra even says that the Buddha faked his appearance on this Earth so that the sravakas would have a nice narrative to believe in.

Also, the fact you said "meditate" makes me think you are quite ignorant about Buddhism. Most Buddhists don’t meditate; the idea that most do is a Western misconception. I’m a Pure Land Buddhist, which is the most popular form in the world, and we mainly engage in recitation, offerings, and bowing.

EDIT: and I understand it may just be ignorance, but it’s highly offensive to characterise traditional Mahayana Buddhist beliefs as some kind of children’s show. This is effectively slandering the Dharma.


Well, we'll have to agree to disagree on this one; time will tell. Wish you all the best. Thank you for talking with me.


The parent, bowsamic, is incorrect.

This is not what Buddhism teaches.


> Not even getting into the fact that mental states are already non physical

Conjecture, at best. All thought experiments purporting to show this non-physicality are deeply flawed. We literally have no reason to conclude that mental states are non-physical.


To me it’s a self-evident truth, but unfortunately I can’t demonstrate it.


Is it as self-evident as the absence of an object placed in your blind spot?


Yes


Then I agree that it appears self-evident, but we must admit that it doesn't follow that it must be true!


You're saying that people shouldn't be surprised that they're conscious. That may be true, but it's irrelevant to the argument I was talking about. The relevant fact there isn't merely that you're conscious, but that you are (consciously) the particular person that you are.

Bowsamic originally posed a question that gets at the relevant fact: "Why am I me and not someone or something else?" However, the argument here doesn't rest on this question being puzzling. The argument is simply that the relevant fact of personal identity (whether puzzling or not) does not seem to reduce to any specifiable set of physical facts.

The form of the argument is exactly like a Platonist argument against physicalism: "2+2=4" is a fact and does not reduce to any set of physical facts; therefore not all facts are physical facts. This argument does not require the assumption that truths of arithmetic are surprising or difficult to explain. However, I think the irreducibility claim is more compelling in the case of personal identity than for mathematical truths.


To a physicalist, the only way to interpret the question "why am I me and not someone else?" is "why is my physical body identical to my physical body and not identical to another physical body?". The framework simply doesn't allow alternative interpretations. This interpretation is a tautology, because in the universe, things are identical to themselves, and only to themselves.

If you are using an abstract definition of "yourself" (such as your brain's internal representation of you), you are no longer talking about physical objects, so of course your question can't be answered using physical facts alone. Similarly, "2+2=4" is a statement about our abstract representation of the world, not the real world itself. That we can reason about integers without relying on facts about the real world doesn't mean that integers are any more "real" than unicorns - or for that matter, your concept of "self".


That's why it's a good argument against physicalism. According to what you just said, everything we ever experience is talking about "not the real world itself" according to physicalism, therefore why does it ever make sense to assert the "real world itself" over what we actually experience? It's like watching a movie and believing in the movie more than the fact you are sitting watching a screen.


> According to what you just said, everything we ever experience is talking about "not the real world itself"

I don't know how that follows from what I said. When someone says "the floor is slippery", that refers to a very much real physical property of a very much real thing. When someone says "the floor is lava", that means we're playing a game. We have no problem distinguishing between the two.

If you define yourself as the set of atoms that currently make you up, you are talking about the real world. If you define yourself as the set of your memories and experiences, you are not. The former definition has a lot fewer issues than the latter - firstly because you don't have to define anything further, you can just point at yourself; and secondly because a perfect clone of you would have the same set of memories and experiences, and that doesn't make them you. This is why "Why am I myself?" is much easier to answer when we're talking about your atoms. Now, if a theory can answer a question with ease, how is that question an argument against it? I've certainly never heard anyone say "If global warming is real, why are my car tires melting into the asphalt?".


> Now, if a theory can answer a question with ease, how is that question an argument against it?

Bowsamic may have a different take on this, but in my version of the argument, the point is not to answer the 'why' question. The 'why' question is just used to gesture at the relevant non-tautologous and apparently non-physical fact. So it doesn't really help with anything to redefine the terms of the question until it becomes a tautologous statement about atoms. All that shows is that physicalism doesn't have a problem with the concept of two physical things being the same thing – which isn't in dispute.

>If you define yourself as the set of atoms that currently make you up [...]

I don't see how I'm going to do this without some prior notion of who I am. Otherwise, one set of atoms is as good as any other.


> The 'why' question is just used to gesture at the relevant non-tautologous and apparently non-physical fact

Yeah, but to clearly define what that fact means, you'd have to introduce a bunch of intermediate definitions, and eventually you might just end up in physics-land again. It isn't physics' fault that the human brain is complicated.

> I don't see how I'm going to do this without some prior notion of who I am

Protip: if you pinch it and it hurts, it's probably yours :) Well, that or a very angry cat.

To be more serious: having prior notions of things is fine. Required, even. We can't reason about things without having at least a vague understanding of the language. It can't be turtles all the way down.


>and eventually you might just end up in physics-land again.

Sure, anything might turn out to be explicable by future physics (or indeed, future astrology, future phrenology, or any possible future knowledge). There are no knock-down arguments in this domain. The point is that the fact of your personal conscious identity is apparently not a physical fact, which is a prima facie argument against some of the stronger forms of physicalism.

>To be more serious: having prior notions of things is fine.

You can't use the physical definition to dispense with other notions of personal identity if you need those very notions to bootstrap the physical definition (and maintain it over time, as your 'set of atoms' is constantly changing). To point out that your informal notion of self is connected somehow with a loosely-defined lump of physical stuff isn't really to succeed in reducing anything to physics. It's just to say what pretty much everyone agrees with: that humans have physical bodies.


> You can't use the physical definition to dispense with other notions of personal identity if you need those very notions to bootstrap the physical definition

Now that you mention, I don't need to. "I" simply means "the person who is currently speaking" (or in this case, writing). This already determines, beyond reasonable doubt, which physical body we're talking about. No other definition needed, since you can't say "I" without speaking. QED. Sorry, I should have noticed this 4 comments earlier.

(Caveat: this definition doesn’t feel satisfying, which might be the source of our disagreement, but I can’t put my finger on why - perhaps you can)

> the fact of your personal conscious identity is apparently not a physical fact

The human brain is astonishingly good at BS-ing itself, so I wouldn't attribute much to how you intuitively feel about your own conscious self. To me, this smells like a more abstract and philosophical version of the puddle analogy.


> "I" simply means "the person who is currently speaking"

But on this definition you can raise the question of why the person who is currently speaking is the person who is currently speaking (which by alternating de re and de dicto interpretations can be understood in the same non-tautologous ways as e.g. "Why is the person who is currently the President the person who is currently the President?") So this linguistic analysis doesn't succeed in dissolving facts about personal identity into nothing. You still have the fact that you are who you are and not someone else. And the intuitive notion of personhood (on which your definition depends and which you haven't in any way reduced to physics) is intimately connected with the idea of a more-or-less uninterrupted stream of first-person conscious experience.

> I wouldn't attribute much to how you intuitively feel about your own conscious self.

If we set 'intuitive feelings' aside, then of course the whole problem of conscious experience disappears immediately.


> And the intuitive notion of personhood (on which your definition depends and which you haven't in any way reduced to physics) is intimately connected with the idea of a more-or-less uninterrupted stream of first-person conscious experience.

"person": a member of the genus Homo

"speaking": phenomenon wherein the previous object vibrates its own vocal chords and moves its own mouth to produce sound waves

"currently": a point in time in which the aforementioned act occurs

Notably missing are: any references to continuity of experience, conscious action, or even thought. Suggestion: I have, in fact, completely reduced it to physics.

> You still have the fact that you are who you are and not someone else

Please describe to me a hypothetical universe where that statement doesn't hold true. If there is none, then we must accept that this "fact" does not require any explanation, and therefore can be dismissed as trivial.


Your linguistic analysis of 'I' doesn't work if understood in purely physical terms. Lots of people are speaking at any given moment, but I don't refer to all of them if I say 'I'.

Consider the absurdity of the question “who am I?”, uttered by an amnesiac, if it means simply “Which lump of physical stuff is currently speaking?” Even if we ignore the problem mentioned in the first paragraph, the answer is trivial – the same lump that’s making the noises. But it’s difficult to accept that the amnesiac is confused over a triviality: they really don’t know who they are; they lack a key piece of non-trivial, non-tautologous knowledge.

>If there is none, then we must accept that this "fact" does not require any explanation

As I've tried to emphasise, I do not claim that the fact is puzzling or requires an explanation, only that it is a fact. By analogy, it is not puzzling that the sum of two and two is four, and no non-trivial explanation of this fact appears to be required, but it is still a fact. Now of course you can try to make such a fact dissolve via various forms of philosophical therapy. For example, there are some philosophers of mathematics who would try to argue that all mathematical facts are tautologous. You are trying to do something similar with facts of personal identity. However, philosophers of mathematics who claim that all mathematical truths are disguised tautologies can point to lots of actual work that has been done to show that this position is more plausible than it might first appear.


> Consider the absurdity of the question “who am I?”, uttered by an amnesiac

In that context, that question would just mean "what was my life like prior to losing my memories?", wouldn't it? So in this instance, the missing non-trivial knowledge is their long-term memory, not a problem with understanding the definition of "I".

> For example, there are some philosophers of mathematics who would try to argue that all mathematical facts are tautologous. You are trying to do something similar with facts of personal identity.

I'm not arguing that the subjective experience of personal identity is tautologous, I'm arguing that it's a chemical reaction (EDIT: and that the underlying objective truth is tautologous). Any feeling about the intuitive concept of "myself" having some kind of esoteric component above the physical matter (which I also share, don't be mistaken) would, presumably, also be given voice to by a computer simulation of your brain. That would be your brain being caught red-handed in the act of lying to itself about how special it is. And if this turns out not to be the case, then we have rock solid evidence that something is indeed missing from our simulation, which gives us an opportunity to measure that something, which in turn makes that something physical.

> Your linguistic analysis of 'I' doesn't work if understood in purely physical terms. Lots of people are speaking at any given moment, but I don't refer to all of them if I say 'I'.

Valid point. Refinement: in any speech, "I" refers to the person making that speech.


It's also the question I always end up getting stuck on!


I like the idea of ‘individuation’[0] where individual things (rocks, living bodies, minds, societies) come into being from their preindividual environment. As I understand this idea, it is just that a mind individuates within a body, and that mind comes to call itself ‘I’. So it’s not that we existed before a mind individuated, but rather we come to find ourself recognizing ourself as ‘I’ after individuating.

[0] https://epochemagazine.org/40/on-psychic-and-collective-indi...



Dennett is always worth a read!


Maybe your consciousness does sometimes occupy other bodies. Of course you don't remember it, because memory and working memory are tied to brains. When consciousness is in another body, it feels that it is another person, with that person's memories and context and none of yours.

Another way of saying this is that there's no need to postulate separate consciousness for separate people, one is enough.


Experiments done on corpus callosotomy patients [0] give us some very interesting insights on how consciousness works. IMO, these experiments strongly point towards consciousness being mostly, if not entirely, an illusion created by the brain.

[0]: https://youtu.be/ZMLzP1VCANo


I knew about similar split-brain experiments, and although I agree about the illusory component of consciousness, these nonetheless do not answer my original question about the observing component of it. Thank you for sharing!


I was referring to the observing component itself being an illusion (as a split-brain patient seemingly has two of those). If that component is also a byproduct of physical processes, then your “observer” being attached to your body is self-evident.


If I stand up too quickly or hold my breath, it's consciousness that I start to lose, not my consciousness's control of my body. It seems to me that implies consciousness comes from the brain.


Consciousness is not even really a well-defined concept, so it doesn't make sense for something that's already nebulously defined to be an illusion.


Indeed, and would not an illusion be contingent on an observer?


Yeah, not far removed from the fungible soul concept.

Built from the same shaky foundation and shared desire for it to exist.

One hit upside the head and you're a completely different person. (Or a dietary change that shifts your gut microbiota, or following heavy psychedelic use, etc…)


> Yeah not far removed from the fungible soul concept

I am with you on this. Today's scientific debate about consciousness sounds a lot like the scientific debate about the human soul from a century ago.

We just want to be special so badly, meanwhile evidence keeps piling up indicating that we are not.


I would love nothing more than to not be special, but the only solutions that I can conceive of for the hard problem of consciousness are ones that require something other than the physical. Either there is a soul, or consciousness is just inherent to the universe (the whole "everything has a consciousness" theory). Neither of these is particularly satisfying to me, if for no other reason than that they are both unfalsifiable. I would love for there to be an available algorithm that my brain could compute that would spit out consciousness.

But I don't see how; it seems intractable to me. The brain could theoretically do everything it does, from information processing, problem solving, and planning to developing a theory of mind, without consciousness; it's computation all the way down. But why and goddamned how does a human brain go from perceiving and processing the visual information of red light hitting your eyes to "feeling" red? How does it "feel" anything at all? Heck, what is this "observer" that does all the "feeling" even made of? If you could break "him" down into constituent parts (theoretically computable sub-problems) at all, that would be nice; it would be more progress than I've made over years of thinking about this bullshit problem.

The "observer" seems indivisible to me, heck it seems made of nothing at all, it just is. Sure it being indivisible might be an illusion made up by the brain, but, if so, it's an illusion that still needs to be fed to "something", and I haven't heard any working theories that I actually buy that explain how the brain comes up with that "something".


Have you read anything by Peter Kreeft? https://www.peterkreeft.com/topics-more/case-for-life-after-... might be of some interest to you.


The other way of thinking about it is that we don't know if we share a sense of "red", aside from merely being consistent with other compatible nodes (other humans) in a network. The other node can distinguish "red" from "green" just like I can, but we don't know if they are perceiving that color the same way I am. Whether it is a hardware difference in the rods, cones, or some other aspect of the eye, or a computational aspect of the brain, we don't know.

Your supposition still relies on there being a separate entity to imagine, quantify, or figure out, when that is not necessary. A tribunal of processes is enough for this "observer" phenomenon.


This is why I always put my soul on a blockchain.


EIP-4671 format or EIP-6239 format or other?


It would have to be EIP-623, hahah


Perhaps it isn't well-defined, but I initially thought of it as something atomic and immutable (even in the face of brain damage), and now I no longer do.


An illusion for whom? ;)


I don't have an answer to offer, just anecdotal memories of reading somewhere that brains could be a sort of antenna for consciousness, and then a subsequent comment explaining how, if that were the case, we would fall unconscious in a Faraday cage.

So, back to square one...


I wouldn't take the leap into the Faraday cage so easily; that would imply that consciousness is measurable in the electromagnetic field. I don't think that's true, otherwise we would have measured it already. I don't know where to go next with this reasoning; it seems like we don't have the tools to measure in which field we are communicating.


That line of reasoning seems pretty silly, given that consciousness is supposed to be both omnipresent and much subtler than electromagnetic fluctuations, but it reminds me of the Tibetan Buddhist tradition where the most difficult practices are only done deep in mountain caves where no light can reach the practitioner. I wouldn't be at all surprised if minimizing cosmic background radiation were a factor in sufficiently subtle inner inquiry.

One of my ancient aliens theories is that the Pyramids are artificial mountain caves.


Brain antennae + consciousness is more complex than just brain consciousness. In the former, we still have to answer "so... what is consciousness then? where is it? how does it couple to the brain? can it be blocked?" In the latter, it's just "consciousness is a property of a brain", and we are only left with the question of how the brain produces it. The antenna theory is just multiplying entities unnecessarily.


> why I am currently occupying the body that I am as an observer, and not another one, or perhaps even none.

Are you sure this is a well-formed question? To be clear, this isn't what the hard problem of consciousness is about, although perhaps that's what you meant by "none".


The question you raise is fascinating. I ask the same questions when I think about the many-worlds interpretation in QM.

As for the research in focus in this thread, it seems that we are facing another Great Humiliation[0], with the potential to affect the course of development in so many other areas.

It will be interesting to see what developments this research will inspire in the cultural/anthropological field: so were the tree-huggers right all along? So is there really a harmony and connection between all the "children of Gaia"? :)

[0] https://blogs.scientificamerican.com/cross-check/copernicus-...


The only "scientific" explanation that makes sense to me is that consciousness spans time and space, where each focal point of attention is an aspect of a central whole, like a lifetime within a fractal of lifetimes stemming from when source (God) consciousness fractured itself to experience every viewpoint within the universe to avoid an eternity alone. It created what we experience as the shared (objective) reality from the original singularity that came into existence from the number line via the long tail of the quantum probability distribution. The concept of something from nothing is maybe the foundation of consciousness itself, since science may never be able to explain the subjective experience of conscious awareness or how it came to be.

As a thought experiment, imagine we could emulate the corpus callosum between brain hemispheres through a computer chip, and had the medical technology to split someone in half. There have been cases where people have lost a brain hemisphere before birth or due to an accident but continued leading relatively normal lives, so we know that such a division is plausible.

Before separation, we'd experience being an individual.

At the moment of separation, our consciousness would anchor to one of the hemispheres we'll call "A". We'd see our other half "B" pull away and look back at us in curiosity as it continues experiencing its own consciousness. The other half B is still our consciousness, but anchored to a different point in spacetime. The halves would begin accruing their own memories independently, but act like the original individual for quite some time or even a lifetime.

At the moment of reunification, we'd see B approaching and then experience a download where our memories suddenly contain more than they did a moment ago. We might even experience "hearing" the other half as another person until the merging is complete. But afterwards we'd still be the same individual we always were, just with new narratives running in the code of our mental model, built from the memories acquired while separated. This experience is identical for the other half, except seen from its own context.

Warning (spoilers): This is analogous to a Vulcan mind meld, although it's unclear from Star Trek whether minds maintain their individuality or become a single consciousness. Because Spock's consciousness existed separately within McCoy until he was reincarnated, I tend to think that the show took the society of mind approach where our thoughts maybe come from a collection of simpler agents, meaning that Spock's anchor transferred to McCoy when they separated, so both their anchors shared one mind for a time, and Spock would have been in McCoy while he was asleep and his partial (a term borrowed from Eon by Greg Bear) B died in the warp chamber.

But I think the hemisphere splitting thought experiment is more useful because it sheds insight on reincarnation and the multiverse. It also helps us step up to 4D/5D and consider what it subjectively feels like to straddle realities as we shift laterally through spacetime rather than waiting for events to unfold in forward (linear) time. Insight and downloads might come from our consciousness constantly splitting and reuniting from other timelines as it explores possible future outcomes in the quantum probability distribution.

That might shed light on how the inner mind seemingly affects outer reality by overcoming the odds to survive via affecting probability itself (the principle of manifestation). In other words, it's likely that there is always an anchor, since quantum reality can't exist without an observer (consciousness), so an individual sees unlikely events whose odds of happening are so remote that science can only call them coincidences (synchronicities).

That would mean that our souls always continue, even if our physical bodies die, by straddling another reality where a body for us still exists. We may experience an infinite, continuous, individual timeline spanning countless lifetimes over billions of years, interspersed with long periods of nonexistence that we have no way to remember. And each time we die, the people around us can no longer be with us, since our anchor is no longer in their reality.

And it's unclear whether our soul can reincarnate in simpler life forms or whether our complexity keeps evolving as our soul ascends. Dreams and psychedelic accounts seem to lean towards the idea of a lizard brain, where consciousness can exist unfiltered. So it may be that we are allowed to "choose" our next incarnation during unconscious periods and "opt in" to a 3D experience like here on Earth to experience the novelty of suffering.

The main takeaway is that since those around us are aspects of ourselves, egocentric narcissism is a refusal to see the big picture. If we act to enrich ourselves by exploiting others, we karmically doom ourselves to more suffering in our next life. Our old A becomes the resource-hoarding enemy of our new B. Making a deal with the devil may feel good, but it's like being the top inmate in a prison, starving the soul of real connection. So the meek and pious spiral out towards peace/connection/heaven while the rich and powerful spiral in towards their own violence/loneliness/hell.

I haven't read this, but I think it leans more towards the objective side, where multiple consciousnesses would occupy a single mind, more like schizophrenia than a fractal split from a single source consciousness:

https://en.wikipedia.org/wiki/Society_of_Mind

And of course all of this could be completely wrong. But it's all derived from first principles and a collection of insights from the spiritual community that seem to keep resonating and expanding in this post-pandemic era entering the New Age. We may not know until we meet aliens who have performed these experiments. But it provides a useful framework to explore the idea of consciousness existing in higher dimensions and even visiting us through the paranormal/supernatural (magical) experiences we all have as conscious aware beings.

Another benefit is that consciousness affecting probability means that time travel is not possible, since any repeat scenario is a copy of the original and subject to random outcomes. The same events can never be replayed exactly the same way twice. Many vocal people in the scientific community have arrived at the conclusion that free will doesn't exist, which self-evidently defies our subjective experience, but they're basing that on the fallacy that time travel is possible.

The impossibility of time travel is maybe the best (albeit weak) explanation we have for consciousness and free will. It's how our timeless nature steps down from 4D/5D to experience a timeline, similarly to how a photon doesn't experience time since it's moving at the speed of light. There may only be one photon in the whole universe, creating a 4D holographic "crystal" encoding everything here. The mutable state of our consciousness is kind of like the in-between times when pure functional programming blocks to wait for new input, with the deterministic portion running unconsciously. An android would have mutable state, while a robot would mostly run hardcoded logic.

Which is why I don't subscribe to the idea that humans are just AIs running in complex robots, or that this is a simulation. Although if we substitute the word dream instead, then it all begins to make sense again. There's no matrix or holodeck that created the simulation, it's more like source consciousness projects the simulation to give us something to do. And of course there's no way to prove any of this, it's just a descriptive model that may lead to deeper insights.


Thank you for your thoughtful comment, I found myself nodding along until the very end. I subscribe, so far, to the idea that we might simply be fragments of a fundamental consciousness, but it still remains a bit frightening to me (despite its lack of effect on my way of living) that we may never find the ‘root’ of it all.

I also agree about the connection between the karmic ‘field’ and our consciousness, and I sometimes feel like I am simply guiding an observing vector across all possibilities, collapsing the wave functions along time. Other commenters in the thread have done a better job articulating that aspect than me, however!


Thank you for replying, that means a lot. I winced after posting that, like I mostly do now when engaging online, because so much of it is subjective and changing that I struggle to consistently articulate these thoughts and the length gets out of control. One of the first rules of Zen is that there's no way to tell someone what enlightenment is, it can only be sought individually. Because letting go of attachments and giving others the dignity to find their own path is one of the first steps. I'm thrilled that you found things to disagree with, as that suggests that there is still much more for me to learn from others like yourself.

For what it's worth, I wrote that long rant for my 25-year-old self going through an exceptionally painful time in his life. I hope someone finds some value in it, even if simply to forgive others and ourselves, and find gratitude for the miracle of life. Even when things feel so terribly unfair that just surviving these human trials amounts to an act of heroism.


I found myself nodding along until the very end. <- I may have misread this as disagreeing with simulation vs dream-based reality, so I should have said I'm thrilled IF you found things to disagree with, definitely plenty of room for interpretation!


I know this is a disliked point of view, but for me the yogic perspective really gave me a mental picture in which this all makes perfect sense and which is quite consistent in itself (even though there is an unexplained axiom, namely: "Intelligent-Awareness just is").

I feel that is what many cultures referred to with the dreadful g-o-d word, but if we can understand it in that light, the writings of the second yogi ever to come to the West, Paramhansa Yogananda, are highly recommended. His Autobiography of a Yogi was Steve Jobs's favorite book, and he arranged for everyone who attended his funeral to receive a copy of it. Not that that means a whole lot, as he was by far not an embodiment of love, but I feel he had the capacity to see the brilliance in this understanding.


"Brains are not required to think or solve problems" is a perfect intro to a Leetcode job interview, for a candidate that already grokked 300 of these problems.


Water can solve a maze!
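
In algorithmic terms, that's essentially a breadth-first flood fill: the water pushes into every open branch in parallel, and some flow reaches the exit without anything "deciding" a route. A minimal Python sketch of the idea (the toy maze, start, and exit below are made up for illustration):

    from collections import deque

    # Toy maze: '#' = wall, '.' = open. Water "poured" at S spreads like BFS.
    maze = ["#####",
            "#S..#",
            "#.#.#",
            "#..E#",
            "#####"]
    start, exit_ = (1, 1), (3, 3)   # grid positions of S and E

    frontier = deque([start])       # cells the water has just reached
    seen = {start}                  # cells already wet
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == exit_:
            print("water reached the exit")
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if maze[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append((nr, nc))

No planner, no map, no memory beyond "which cells are already wet" - which is rather the point of the article.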


I remember some TED guy talking about consciousness being a basic substance all around us, with the brain just concentrating it to make it significant.


That's the theory of Panpsychism.


There’s so much nonsense on the TED stage. They even allowed Graham Hancock.


I think they created a brand and now you can just pay, get up on stage and say whatever you want.


and it's even worse with TEDx


Maybe that's the Great Filter? All sentient beings must get rid of their brains and associated baggage to progress further.


This is reassuring. I never thought I'd be able to think or solve problems, but this gives me hope!


How is that new? Maybe if you've never heard about evolution :)

All animals have some kind of nervous system in control. Some have it concentrated in one place - we call that a 'brain'. Simpler organisms have it spread throughout their body.


He's literally investigating the nuances and limits of what you describe in your second paragraph. Yes, the outline you described is obvious.


Complexity, though dazzling and impressive, is apparently a small trick. Consider the simple algorithms that generate fractals, artificial life and random numbers.
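
For a toy example (a minimal Python sketch, nothing deep): the logistic map is one line of arithmetic iterated over and over, yet for r near 4 its output is chaotic and effectively unpredictable:

    # Logistic map: x -> r*x*(1-x). One multiply and one subtract, iterated.
    # For r close to 4 the orbit is chaotic: a trivial rule, "complex" output.
    x, r = 0.2, 3.99
    for _ in range(10):
        x = r * x * (1 - x)
        print(round(x, 6))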

Problem-solving intelligence appears to be similar, in that we consider it impressive and special, but, in the big scheme, it is actually rather trivial and common.

Speaking as a person who has spent literally decades enthusiastically chewing upon software design riddles, it makes me feel a bit insectile.

I mean, the termites in my walls probably feel what I feel, and they're just playing with mud and spit.



