The philosopher of science Pamela Lyon writes that “taking seriously modern evolutionary and cell biology arguably now requires recognition that the information-processing dynamics of ‘simpler’ forms of life are part of a continuum with human cognition.”
Cognition as a property of all matter is the simplest premise for any materialist theory of the mind.
Any and all theories that divide matter into cognitive and non-cognitive types are logically equivalent to Cartesian dualism. Socially, of course, scientific dualism is often more palatable to contemporary intellects.
Just to be clear, "the information-processing dynamics of ‘simpler’ forms of life" being "part of a continuum with human cognition" does not strictly imply "Cognition as a property of all matter". Also, I fail to see how the latter is the "simplest premise for any materialist theory of the mind". How is it simpler to say that "all matter has cognition as a basic property" than to assume that "certain arrangements of matter exhibit cognition"?
This is the threshold I talk about in my sibling comment. It is very difficult to come up with a materialist argument for what it is about that 'certain arrangement' that makes cognition. I am unsure if it is possible to prove that there is no such argument, but I don't think we have made any progress in finding one either.
> It is very difficult to come up with a materialist argument for what it is about that 'certain arrangement' that makes cognition
I'd claim it's not necessarily harder than it is to argue certain arrangements make a computer. Which is to say, there's grey area but it's ultimately just a label we give to certain patterns/behavior, not some special line where the universe starts doing something different, so it's fine to be somewhat vague/arbitrary (when do sand grains become a heap?).
I think "Cognition as a property of all matter" is leaning too much towards panpsychism. There's a spectrum of chairs from thrones to stumps, but I wouldn't say "Chairness is a property of all matter".
I don't think so. The distinction is about whether or not there is a point of transition. The sorites paradox is more about identifying where the transition is. You can apply the sorites paradox to a colour gradient going from red to green, arguing that you can't pinpoint the threshold, but you wouldn't deny that the transition point is somewhere within the range.
I never found the sorites paradox to be a terribly challenging argument in itself. Formal proofs rely on the assumption that a tiny change to a state is the same as no change, and thus that an accumulation of tiny changes is also no change. I just don't accept that premise. The common-sense arguments with grains of sand in a pile, trees in a forest, etc. just seem to rely on the vagueness of the definition allowing individual judgement to place the threshold at different places.
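For reference, the formal argument being rejected here is usually reconstructed as an induction, with H(n) read as "n grains make a heap" (a standard textbook sketch, not from the comment itself):

```latex
\begin{align*}
&\neg H(1) && \text{base: one grain is not a heap}\\
&\forall n\,\bigl(\neg H(n) \rightarrow \neg H(n+1)\bigr) && \text{step: one grain never makes the difference}\\
&\therefore\ \forall n\,\neg H(n) && \text{conclusion: no number of grains is a heap}
\end{align*}
```

On this reconstruction, the "tiny change is the same as no change" assumption is the induction step, and rejecting it is exactly the move the comment makes.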
The sorites paradox depends on a sleight of hand -- applying the pedantry of logic to ordinary language. Or to put it another way, like a pun or double entendre it is naught but clever word play. The language game falls apart in technical contexts; for example, there's no sorites paradox for heaps in computer science.
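To make that last point concrete: a "heap" in computer science is defined by a precise invariant, so any candidate either satisfies it or doesn't, with no vague middle ground. A minimal sketch in Python (the example arrays are arbitrary):

```python
# A binary min-heap stored as an array: every parent must be <= its children.
# The definition is an exact predicate, so "is this a heap?" never has a
# sorites-style grey zone - one violated inequality settles it.
def is_min_heap(a):
    return all(a[(i - 1) // 2] <= a[i] for i in range(1, len(a)))

print(is_min_heap([1, 3, 2, 7, 4]))  # True: the heap property holds everywhere
print(is_min_heap([3, 1, 2]))        # False: a[0] > a[1] violates it
```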
My comment was on the philosophical shortcomings of the statement in particular and the philosophical shortcomings of socially acceptable expressions of materialism in general.
“Particular arrangements” is mind-body dualism with lipstick.
It needn't be dualism if there is some threshold that makes things conscious, but then people can ask what that threshold is and why; without a good answer, people will think you're leaning on dualism again.
I doubt there is such a threshold. I think the issue people have with the idea that rocks might have cognition is that it is too difficult to perceive the difference in scale between the complexity of a brain and that of a rock. People have trouble comprehending the idea of a millionth; beyond that, there is the intrinsic difficulty of accepting that something exists at a scale you cannot perceive, or even conceive of what it might be.
But we go to sleep, into deep anaesthesia, and to lesser degrees can lose, or have degraded, our various senses and cognitive capabilities - doesn't this show that the "thresholds" evidently exist? Drink a bottle of vodka and you will slowly, and then completely, "lose consciousness". The issue folks have with the idea that rocks are cognizant is that we can't see any reason to suppose that they are.
‘I think, therefore all matter must think’ is just such an unsubstantiated jump. Creatures having limited cognitive processes, sure, of course. Rocks or other unliving matter thinking? That requires a lot of faith in materialism, so much so that it starts looking a lot like primitive spiritism/shamanism.
Is life not necessary for cognition, then? I would say almost certainly that some forms of matter are not alive. Similarly, it’s hard to imagine some forms of matter having a cognition level that isn’t zero, even if it is a continuum.
Here are the options:
1: All of it is alive only as an irreducible whole.
2: Some of it is alive, some of it is not.
If 1: then life is made of non-living material.
If 2: repeat the above options with this new smallest piece of life.
If we run out of parts to examine without ever reaching 1, then all of it is alive.
Then we are left with two options.
A: Life is made of non-living material in specific arrangements.
B: Life is a property of all material.
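A toy way to see that this descent really does end in exactly options A or B: walk the part-structure, stopping the first time a living whole has no living parts. A minimal sketch in Python (the data model and the "alive" set are invented for illustration; nothing here is from the original comment):

```python
# A "thing" is a dict with a name and a list of parts; leaves have no parts.
def classify(thing, alive):
    parts = thing.get("parts", [])
    if not parts:                        # ran out of parts without hitting 1
        return "B: life is a property of all material"
    living = [p for p in parts if p["name"] in alive]
    if not living:                       # option 1: alive only as a whole
        return "A: life is made of non-living material in specific arrangements"
    return classify(living[0], alive)    # option 2: recurse into a living part

organism = {"name": "cell", "parts": [
    {"name": "organelle", "parts": [
        {"name": "molecule", "parts": []},
    ]},
]}

# If "alive" stops at organelles, the descent bottoms out in option A;
# if it never stops, it bottoms out in option B.
print(classify(organism, alive={"cell", "organelle"}))
print(classify(organism, alive={"cell", "organelle", "molecule"}))
```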
More than likely we are dealing with the first option. Life from non-living material. Which implies life could be created from other arrangements of materials that function analogously to a cell (at a different scale, maybe).
This question may be settled soon... well, as soon as someone builds an x-sized replica of a cell and proves it 'works' (given proper input/output/environment).
My gut also tells me this is true, for the following reason: a chair isn't actually any particular chair but a template, a pattern, which can be expressed in other materials besides any one particular example. A pattern can be expressed in different mediums, and life looks like such a template... to me at least.
What about AI software running on silicon chips? Soon it will reach levels of complexity vastly exceeding any human brain. To these systems we will look like bugs, or maybe even just cells - they might not even classify us as being alive, let alone intelligent.
“Soon it will reach levels of complexity vastly exceeding any human brain.”
I doubt the “soon” part. Artificial neurons are vast simplifications of real neurons, and even complex networks like GPT don’t come close to the complexity of biological networks, both in network structure and in terms of what goes on between the neurons (e.g. chemical processes, as opposed to just the activity of the neurons themselves). Individual neurons have been shown to have the capability to both process and store information, something which artificial ones don’t do. Besides, we are only scratching the surface in our understanding of biological neurons and brains. How can we say “this thing we built that is a vast simplification of this thing we barely understand will soon exceed the complexity of the thing we don’t yet understand”?
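For a sense of just how simplified: the "neuron" in most artificial networks is nothing more than a weighted sum pushed through a fixed nonlinearity. A minimal sketch in Python (the weights and inputs are arbitrary):

```python
import math

# One artificial "neuron": a weighted sum of inputs plus a bias, squashed by
# a fixed activation function. No dendritic computation, no chemistry, no
# internal state between calls - everything a biological neuron does beyond
# this is simply absent from the model.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation

print(neuron([0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))
```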
Because it’s already able to do almost everything a brain can? State-of-the-art AI models can already learn, reason, communicate, and even create - better than most humans. Using far fewer neurons, and much simpler neuronal structures.
All trends indicate we’re only a couple years away from an AI superintelligence. No additional understanding of biological brains is required to get there.
I remember similar arguments being made since the 1980s... 40 years later and a lot of stall-outs...
It could be hubris to assume we know enough yet to replicate our 'kind' of intelligence. Just as it might be hubris to assert AI doesn't have some 'kind' of experience. A minimum requirement (one that will be raised once we have a better understanding) is still a change of state in order to have an experience - we simply don't know what else exactly is required, so that minimum threshold remains to be raised.
This argument could not be made before March 14 2023, when the first actual AI was released (GPT-4). I remember that day very well because I was extensively testing every GPT model before that (part of my job as an AI researcher). The entire history of human civilization can be separated into before and after that milestone.
We do not need to "replicate" human intelligence. It's enough to "simulate" it. The coming AI models will be entirely "alien" types of intelligence to us, and that's OK as long as they are useful. Most likely these AI models will be able to finally explain to us how our own brains work.
The GPT4 you’re using must be a different one from what I’m using, because for me GPT4 was only an incremental improvement over GPT3.5, and while both are a step up from everything that came before, its output is still very mechanical, to the point where it often seems like little more than a very elaborate and sophisticated expert system with probabilistic response choices and a large knowledge base. It has far too many shortcomings to support the claims you seem to be making, and the worst part is that it’s entirely unaware of its own shortcomings: it will confidently and incorrectly assert things without any self-awareness or understanding, and I’ve yet to see a single model that can determine when it doesn’t know something. There’s still a long way to go before human-level intelligence. Also, think of the amount of energy required to run GPT4 compared to a typical human brain.
That’s not to say that it’s not useful in its current form, because it is, but that’s not what we are arguing here. You asserted that “Soon it will reach levels of complexity vastly exceeding any human brain,” while I’m asserting that the human brain is far, far more complex than you give it credit for and that the latest AI models are not even close to that yet. I doubt that LLMs and similar machine learning will exceed human brain complexity, and certainly not VASTLY, in any timeframe that I would call “soon”, unless fundamental discoveries are made. It’s not even clear the current models can scale any further, given the lack of new non-LLM-generated training data to train bigger models on, so the GPT3.5-to-4 shift is unlikely to recur.
“It’s hard to imagine” is a statement about one’s imagination and knowledge. It is not evidence (absent an arbitrary metaphysics giving humans special status within the world (or cognition being universal)).
Holy hell, did anyone get to the part where this is the lady from YACHT?? That’s probably the most insane twist I’ve ever read in a philosophy post, even more unexpected than the thoughtful post a week or two ago that was by the guy behind brainfuck. Highly recommend their music; you’ll know pretty instantly whether it’s your vibe or not!
Substantively: all of this is absolutely right on IMO, though at times a little less precise in wording than I would prefer. For example, I loved the part at the end about slime molds doing computation without being like computers, but I would have loved an overview there of how they do work instead of hierarchies of logic gates backed by transistors, or a direct comparison of their “chemical memory” to RAM and/or solid-state storage. Obviously, this is asking a lot - which is a sign I really liked the essay, I’d say!
My biggest takeaway was the possibility of literally learning from other life forms. I’ve been on a self-run octopus-experimentation phase recently, and I love pondering how to quantify (and maybe even qualify!) their cognition, but it honestly never occurred to me that they might be able to produce insightful concepts on their own. Human hubris, I suppose.
Thanks for posting, OP! Substack continues to amaze. Claire Evans and Slavoj Zizek, sharing a website… truly a golden era.
I have a pet theory that brains give rise to consciousness just by processing information in a way that entangles their state with whatever is being perceived, and that consciousness is the sum of simultaneous perceptions.
Something enters our sense of reality as it is resolved by the activity of neurons, i.e., a part of the brain adopts a state that is possible only if it is observing that particular thing. That thing could be something internal, like a thought or feeling, or something external, like a color or sound.
There is nothing unphysical about our consciousness; it's just a relatively large amount of observation that is somehow amalgamated into an individual perspective. Without knowing the precise loci or mechanism of observation within the brain, I would think physical proximity or some form of connection plays a role in the formation of an individual consciousness - or else wouldn't there be just one consciousness assigned to the universe as a whole?
In my theory and as discussed by the article, levels of consciousness do exist on a continuum. Even a region of space not containing a meaningful observation system, like that occupied by a rock or a single particle, can be said to be minutely conscious, to the extent that its state resolves the world around it.