It's time to admit that genes are not the blueprint for life (nature.com)
190 points by pseudolus 3 months ago | hide | past | favorite | 308 comments



I have a PhD in biology and I work in machine learning today.

After years of study my takeaway is that reductionist analogies are helpful for giving folks something to grasp onto, but at a certain level, you have to step back and appreciate the complexity of certain things. At some point, only machines will be able to fully grok what is actually going on. Does that mean reductionist thinking is bad though?

How does GPT4 think? A piece of data getting multiplied by a gazillion parameters you will never understand.

How does one cell become a human? A gazillion chemical reactions happen in parallel you will never understand.

What is consciousness? The firing of a gazillion synapses you will never fully understand.

The three challenging phenomena I described may seem intractable to reductionism, but have seen the greatest progress from reductionist approaches. Most of the greatest and most useful knowledge will be very reductionist in nature. How many transcription factors are needed to turn this skin cell into a stem cell? About 4. What is the best architecture for machine translation? Fits on the back of a postcard. What chemical transmitter underpins human happiness? Can be synthetically manufactured in your mother's basement...


> What is consciousness? The firing of a gazillion synapses you will never fully understand.

This doesn't fit with the other examples. Consciousness is not a problem of scale, but a problem of kind. We can see from simple models how you might make things more complex until they become something like GPT 4.

So too with human cells, we can see analogous engineering feats that make it possible to imagine that there could be a way to have a system that can produce a human from small beginnings. Again, simple systems show us how the complex might be possible.

But there are no simple building blocks that we can point to and say "yes, I can see how more of these would produce consciousness".


We've cut open brains and it's synapses all the way down. Of course we don't understand it but there's nothing else. A problem of kind is just a large enough problem of scale.


I agree with you that the brain is nothing else but its physical makeup (whether that's synapses or more). But that's not the point of contention -- the point of contention is whether those synapses are enough to explain (or explain away) consciousness. And to me it is plainly obvious that physical stuff (synapses or whatever) can never explain (or explain away) consciousness.

(side note: there is absolutely, undeniably, a correlation between brain states and mental states -- but that does not entail that mental states reduce to or are explained entirely by brain states)


> there is absolutely, undeniably, a correlation between brain states and mental states -- but that does not entail that mental states reduce to or are explained entirely by brain states

It's not plain correlation. It's long been experimentally demonstrated that you can circuit-bend[0] an animal or a human live, by poking in their brain[1]. If in those cases, altering brain state creates/affects behavior and mental states, then there isn't really a place for a non-physical-stuff consciousness. If it existed, it's demonstrably overridable by purely physical interventions, which implies you could just make a physical-stuff consciousness, at which point why postulate a non-material one?

--

[0] - https://en.wikipedia.org/wiki/Circuit_bending

[1] - Then there's obviously alcohol, tobacco, and other drugs, that have a slower onset.


Poking at the brain or consuming certain substances causes changes in the phenomena experienced in consciousness.

But doesn’t seem in any way explanatory of consciousness itself, i.e. the owner of the brain’s awareness of those state changes.

I think this points at the somewhat nebulous definition of consciousness, which at its core seems categorically separate from the phenomena that can be experienced (and manipulated by poking at the brain) within it.

This is why some have theorized that everything is conscious, and the human experience of consciousness just happens to be particularly rich and complex due to the complexity of the human organism. This would suggest that a sufficiently advanced computer could also be conscious, with the specific experience mediated by the architecture of the computer.

(I’m not endorsing this position, but it’s an interesting thought experiment to help separate experiences in consciousness from consciousness itself).


I never denied the claim that changing the brain can change mental states. I think that the experiences we have are ones we have because of physical states (primarily, the brain). I just do not think that the experiences themselves are physical things, and on a purely naturalistic explanation of the universe experiences would not exist.

You are right that on naturalism there isn't really a place for non-physical-stuff consciousness. And yet, the existence of non-physical-stuff is the one thing I am most sure of, the one and only thing I have access to. So the challenge is to explain it (or explain it away). As you say, there's no place for it in a naturalist view of the world, and I agree.


> I just do not think that the experiences themselves are physical things, and on a purely naturalistic explanation of the universe experiences would not exist.

But they could be. Elsewhere in this thread, I mentioned computer science is effectively a branch of physics now, because we can connect computation with thermodynamics, and e.g. put lower bounds on amount of energy required to perform certain computation. Therefore, if you consider experiences to be "execution", some specific computations done by our brains, then they still are physical things. Might be tricky to point and prod at them in the structure of our brain, but physical concepts that are smeared over space or time are nothing new - think e.g. waves, whether EM waves or mechanical waves or virtual waves, think of the boiling soup they make out of their medium, and how e.g. Fourier transform can tease them all apart anyway.


> Therefore, if you consider experiences to be "execution", some specific computations done by our brains[...].

Which seems to me to boil down to the assumption that intelligence and/or consciousness are computable. Which is a possibility, but we should keep in mind that we don't have any proof of this yet, only plausible-sounding arguments.


> if you consider experiences to be "execution"...

Why would anyone think that is true? The problem I find with responses like yours is that I think you haven't understood or fully appreciated what it is I'm pointing to that needs explaining. There's something real here that needs explaining or accounting for in a full theory, and an explanation like this doesn't even touch on it. In the full story of "what is experience", there is absolutely without doubt a computational side. But there is an aspect of experience, the 'felt'-ness of it, or the phenomenology, the qualia, that is not explained even in part by reference to computations or any other physical process.

There's just nothing like that, no precursors of it, to be found in the physical world. It's not the computational side of experiences, but rather the what-it's-like-ness of it.

Here's an example of it: what, given naturalism, explains the redness of seeing red? This is not a question about the wavelengths of light, or light hitting the retina, or processing in the brain that leads to the brain states that correspond to having that experience. This is a question about what it's like to see red -- the experiential side of the experience, as opposed to the physical causes of it. Why, given naturalism, would we think that anything experiences (first person perspective) anything at all?


By analogy/homology:

Technically, software isn't exactly physical either, of course. It works with information contained in physical states, but isn't -technically- itself physical.

Arguably this can be demonstrated by moving it to a different physical representation (magnetic, optical, diode charges, dynamic memory cells; little endian, big endian; AMD, Intel, ARM, Turing Machine In Game Of Life, Minecraft Turing Machine) without affecting its behavior or functionality.

I don't think there's anything particularly mysterious about this kind of phenomenon, per-se.

You could grok the behavior of the software by poking at the hardware it's running on for long enough; but it might be more useful to split the problem into separate layers that are easier to reason about. (Hence the need for reduction and emergence)


You can give alcohol to a driver and it will affect their driving.

However, if you're an alien and you don't know the exact workings, you might conclude that alcohol affects cars.


Hence my point about circuit bending. We're at the stage equivalent to that alien a) being able to electroshock the driver, and b) observe them getting out of the car and functioning independently, therefore c) conclude that the driver is the thing controlling the car.


Not quite. What you prove by circuit bending is that you can alter behavior through physical means, no one's denying that. But the conclusion that because of that, then the physical means are also creating consciousness, that's not proven.

It's like assuming that the driver creates a spark in order to initiate combustion within the car, because the aliens haven't observed an instance of the car starting without the driver present.



Necessity and sufficiency are insufficient (and not necessary) in biology. I've attended talks where the system was complex enough that something (a molecular factor) was sufficient, but not necessary! Pure logic isn't useful in biology because it's a feedback-controlled system with an enormous number of internal states.


Does it not illustrate that the person making the assertion either hasn't considered that their explanation is not necessarily exhaustive (exhaustive in fact, as opposed to according to the current theories of a popular ideology that is not entirely genuine or consistent in its behavior at this point in time), or would like you to believe that it is?

If there is a way to prove it is necessarily exhaustive, I would like to hear it.



"where probability expresses a degree of belief in an event".

I'm generally ok with beliefs, provided they are realized as such.


Logic as applied to the real world is just beliefs too; using probability for the math is a way to account for uncertainty, instead of rounding everything up or down to impossible standards of "true" or "false".
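The "account for uncertainty" part can be made concrete with Bayes' rule, the standard way to update a degree of belief on new evidence rather than rounding a claim to "true" or "false". A minimal sketch (the probabilities are made-up numbers purely for illustration):

```python
# Degree of belief that some claim about the world is true (prior).
prior = 0.5

# Likelihood of observing evidence E if the claim is true vs. false
# (made-up numbers for illustration).
p_e_if_true = 0.9
p_e_if_false = 0.2

# Bayes' rule: seeing E shifts the degree of belief, but does not
# collapse it to certainty in either direction.
posterior = (prior * p_e_if_true) / (
    prior * p_e_if_true + (1 - prior) * p_e_if_false
)
print(round(posterior, 3))  # 0.818
```

The belief moved from 0.5 toward 1, but never reached it; that is the whole point of treating real-world "logic" probabilistically.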


> Logic as applied to the real world is just beliefs too

I disagree. For example: how can humans operating on scientific principles so consistently stack hundreds of "just" beliefs on top of each other to achieve things like landing things on the moon, as just one very old example?

I suspect there is more to it....I believe that all beliefs are not equal, and I'll go even further: some beliefs are true, and some beliefs are not true.

> using probability for the math is a way to account for uncertainty

That "account for" has a different meaning than "perfectly resolve" seems highly relevant.

> instead of rounding everything up or down to impossible standards of "true" or "false"

Of the two of us, who is rounding things up or down to an artificial truth?


> If in those cases, altering brain state creates/affects behavior and mental states, then there isn't really a place for a non-physical-stuff consciousness.

This does not follow. The altering of brain state affects behavior; from this we can deduce that there is a correlation between the two, and that there is a causal link between the two. We can't deduce from this that they're the same thing or that one is reducible to the other: this is illogical.

More generally, it is illogical to claim that brain states are the same as mental states (thoughts). There are several reasons for this. One is that reducing thoughts to brain states means a thought cannot be correct or incorrect. For example, one series of mental states leads to the thought "2+2=4"; another series leads to the thought "2+2=5". The correctness of the former and the wrongness of the latter refers only to the thought's content, not the physical brain state. If thoughts are nothing more than brain states, it's meaningless to say that one thought is correct -- that is to say, it's a thought that conforms to reality -- and that the other is incorrect. A particular state of neurons and chemicals cannot per se be correct or incorrect. If one thought is right (about reality) and another thought is wrong (not about reality), then there must be aspects of thought that are distinct from the physical state of the brain.

If it's meaningless to say that one thought is correct and another is incorrect, then of course nothing we think or say has any connection to reality. Hence the existence of this disagreement, along with the belief that one of us is right and the other wrong, presupposes that your position is wrong.

No amount of neurological study will undermine this argument. More generally, describing one aspect of reality in greater and greater detail doesn't gainsay those who claim there are other aspects. You can describe the inner workings of the brain in as much detail as you like -- it is nonetheless logically certain that human thought is not reducible to brain states, even though it obviously correlates with them.


> One is that reducing thoughts to brain states means a thought cannot be correct or incorrect. For example, one series of mental states leads to the thought "2+2=4"; another series leads to the thought "2+2=5". The correctness of the former and the wrongness of the latter refers only to the thought's content, not the physical brain state.

> If one thought is right (about reality) and another thought is wrong (not about reality), then there must be aspects of thought that are distinct from the physical state of the brain.

Of course there is. Computation. Observation. A correctness of a thought isn't a function of thought itself, it's a function of thought and reality. Correctness of a thought is not a physical label. It's not a state. It's something you evaluate, something you compute with respect to knowledge you observe.

The way I see it, you're confusing a thought with computed properties of it. There is no privileged notion of correctness and nature does not put XML tags with correctness score on neurons.


> A correctness of a thought isn't a function of thought itself, it's a function of thought and reality.

I agree with this, although I think it's more precise to say a thought is correct insofar as it conforms to reality.

> Correctness of a thought is not a physical label. It's not a state. It's something you evaluate, something you compute with respect to knowledge you observe.

This appears to be illogical. The correctness of my thought (that is to say, whether it conforms to reality) is one thing; my evaluation of its correctness is entirely different. Further, correctness must be a 'state', or property, of a thought: otherwise you undermine your own argument (see below).

> nature does not put XML tags with correctness score on neurons.

I agree with this. But this supports my argument. An arrangement of neurons (or anything physical) cannot be correct or incorrect. But a thought can be correct or incorrect (you must think that, because you must think you're right and I'm wrong about this disagreement). Therefore a thought cannot be reduced to an arrangement of neurons (or anything physical).


If a group of scientists in 1923 were given a working iPhone, would they have a chance at figuring out how the ChatGPT icon is able to launch something that talks or where Grand Theft Auto is in the machine?

Is trying to figure out how the brain works by looking at synapses and blood flow and electric fields similar to trying to see how an iPhone works by looking at transistors and electric fields?


Others have had similar lines of thought!

* https://www.cell.com/cancer-cell/fulltext/S1535-6108%2802%29... Can a biologist fix a radio?

* https://journals.plos.org/ploscompbiol/article?id=10.1371/jo... Could a Neuroscientist Understand a Microprocessor?


> If a group of scientists in 1923 were given a working iPhone, would they have a chance at figuring out how the ChatGPT icon is able to launch something that talks or where Grand Theft Auto is in the machine?

Would they get another iPhone if they break the first one?

The task is no different than any other scientific endeavor, from biology to physics: they'll keep poking and prodding at it in different ways, making observations, accumulating and discarding hypotheses, until some kind of coherent and useful model becomes apparent. Hell, given a sufficient supply of iPhones, this would be much easier than figuring out the brain - there's a lot of value in short feedback loops and being able to do meaningful experiments on the cheap.

> Is trying to figure out how the brain works by looking at synapses and blood flow and electric fields similar to trying to see how an iPhone works by looking at transistors and electric fields?

They would be looking at that (after speed-running the relevant part of the "tech tree" to understand fields and transistors), and importantly, how it correlates with observed behavior such as ChatGPT saying something, or crashing the weird fire-breathing demon plane???! simulacra in Grand Theft Auto V. Same is the case with intelligence, except we're not free to circuit-bend people the same way the 1923 scientists would be with their iPhones, due to ethics and stuffs.


Overanalysing this perhaps, but they might need some philosophical tools.

Would it still be possible without Boolean Logic (1847)? Of course they'd have to realize that it applied. [1]

Earliest possible date they'd be successful might be post-Leibniz in the 1690's. [1]

Of course, there's an argument to be made that it'd not be possible before Alan Turing's work in 1935/1936-ish. On the other hand, in 1923 you'd already have most of the ideas that led to Turing's insights.

[1] https://en.wikipedia.org/wiki/Boolean_algebra#History


This is a crucial point to make. Even if every natural phenomenon can be reduced to lower-level processes (e.g., physics), lower levels of analysis have little predictive value at higher levels of analysis.

Does anyone have a term for this from philosophy of science literature?


I don't know the term for it, but it's the most common non-strawman version of the anti-reductionist argument.

If you want to ruin your day, you can try and parse through the SEP page on reductionism. It's a doozy. https://plato.stanford.edu/entries/scientific-reduction/


If I understand what you’re asking, the term would be emergence. In philosophy this means that a behavior, property, or attribute arises from parts or components that do not show that behavior, property, or attribute themselves, either in part or in whole.


> (side note: there is absolutely, undeniably, a correlation between brain states and mental states -- but that does not entail that mental states reduce to or are explained entirely by brain states)

How does this make sense? There is some divine being that only sometimes looks at the actual synapses to decide if it should feel nauseous or happy?


What OP is describing is the strong form of emergence, that emergent behavior cannot be fully understood or predicted by understanding each of the individual parts. Aka the whole is more than the sum of the parts. If you fully understood the laws of physics at an atomic level, would it be sufficient to recognize a CPU as Turing complete if you saw one?

It's a debatable position in philosophy against the reductionism OP is describing, and maybe a bit magical but I'd find it the more interesting of the two positions to hold.


I have a vague familiarity with that view, and that is definitely not the view I hold.

That view states, as you describe, that somehow the whole is more than just the sum of the parts. I reject that. Instead, my claim is that if the only parts you have are physical parts, you will not get consciousness. You need other parts too -- specifically, mental/consciousness parts.

(although, this sounds closer to a dualist view, while my view is more idealist)


So reduction and emergence are obviously each other's complement. (A bit like differentiation and integration)

> If you fully understood the laws of physics at an atomic level, would it be sufficient to recognize a CPU as Turing complete if you saw one?

This is testable if you take an alternate universe with very few laws. Somewhat canonically: do you understand a Turing machine built in Conway's Game of Life if you understand the 2 rules of that game? (Or 4 rules, depending on how you count.)

The only way to understand it is to play the game.
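A concrete way to see the gap: the few lines below are the entire rule set of Conway's Game of Life, yet nothing in them mentions gliders, let alone Turing machines; those only appear when you run it. A minimal sketch in Python:

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (row, col) cells."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation iff it has exactly 3 live
    # neighbours, or it is alive now and has exactly 2.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: nothing in the rules above hints that this pattern
# will crawl across the grid, but it does.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(r + 1, c + 1) for r, c in glider})  # True
```

After four generations the glider reappears one cell down and to the right; that behavior, like the Turing machines people have built in the game, lives entirely at a higher level of description than the rules.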


> How does this make sense? There is some divine being that only sometimes looks at the actual synapses to decide if it should feel nauseous or happy?

One possibility: the experiences that God gives me at any moment are what they are because of the state of my brain. This isn't a sometimes thing, but an always thing. I see or experience what I do because of the state of my brain, but not only because of the state of my brain.

The problem is that the physical states alone don't entail anything about the experiential -- nothing about the phenomenology. The physical explanation leaves out things that are real and need explaining.


No. Nauseousness or happiness are states that can only happen when mind and matter are intertwined in the form of a living brain.

Much like there is no divine being that decides "I'm going to make this dense star a black hole", a black hole and the attached phenomena are consequences of matter and forces interacting.

Emotions and thoughts (brains and the attached phenomena) are consequences of matter and mind interacting.


It's logically impossible for brain states (configuration of atoms) to be the same as mental states (thoughts). They are correlated, but they are not the same. Please see my reply to Temporal above.


I wholeheartedly agree. For those interested, check out the hard problem of consciousness.

And I have a thought I would like to share here, an attempt at arguing for why it might be impossible for a physical pure materialistic model of the world to explain consciousness

In a pure physical materialistic world, we can represent everything as laws, interactions between materials, guided by a set of rules. That's it.

If we had absolute knowledge, we could represent those rules, all of them, as a set of mathematical and logical equations.

I reduce the material world to a mathematical world of rules and laws, an abstract world.

Saying consciousness could arise from a purely material world is akin to saying the mathematical equations and laws, combined, can be conscious. As in, there is a system of rules that, if tuned in a certain way, can think. But rules are abstract, and by definition rules don't think; no combination of them, no matter how complicated, can think.

The initial rebuttal I have in my mind is "well, you are conflating the system with the objects governed by it". I feel there is a better way to phrase the initial proposal to address this issue, but I just wanted to throw this idea out there.


I think you're on the right track here. I usually try to reason down a similar line. I wouldn't say that the claim is that the equations and laws combined can be conscious. Instead, I would say that none of the equations or laws that we come up with involve mental attributes (such as consciousness or qualia) as either inputs or outputs.

You look at all the proposed laws of the universe, and they involve things like energy, mass, time, position, work, etc, as inputs or outputs. No equations have inputs of these sorts of natural quantities and outputs that entail consciousness or that first-person experience.

We can see from these equations how things like tree leaves blowing in the wind, or mountains, might arise. These things involve motions and mass and things interacting with each other. While we might not be able to paint the full picture, we can see that the inputs and outputs of those equations are the right kinds of things that might plausibly lead to mountains and leaves blowing in the wind, at the right scale.

We can't see from these equations anything that might plausibly lead to the first-person experience, the what-it's-like to be something.

Note that the naturalist cannot then just inject these mental/consciousness related parameters into their equations. That would amount to a form of dualism, and destroy their project. What I think the naturalist needs to do is somehow explain away consciousness, because I don't see any path they could take to explain it in purely natural terms. They need to deny that there's anything here that needs explaining at all.


> the brain is nothing else but its physical makeup

> physical stuff can never explain consciousness

These statements seem contradictory to me, can you elaborate? Also, you don't provide your reasoning for the second statement. Why do you believe that physical stuff can't cause consciousness? What is your definition of "physical"?


That’s not true at all. The most interesting research into components of consciousness involve astrocytes, glia, and formerly assumed “helper cells” throughout the brain. It isn’t “all synapses.”

Source: neuroscience MS (dropped out after defending to become a software engineer)


You missed the crucial part, it's not about the synapses, but the organisation they have that brings out consciousness.

At the moment we have absolutely no idea of what basic organisation (of synapses) it stems from, something we could build upon, nor any idea of the general organisation of the whole system that makes consciousness an emergent property.


This actually creates a good argument for considering current LLMs and other generative AI as steps in the right direction. Specifically, while we have "no idea of what basic organisation (of synapses) it stems from", we do know a few things about this organization which creates consciousness - such as:

- It is simple enough to be reachable by a dumb, brute-force, random-walking, incremental process of evolution;

- It scales well, conferring survival benefits from the start, and at every point of the way, from the first neuron, up to the human brain;

With those in mind, it absolutely makes sense for us to find a relatively simple compute/organizational system and scale it all the way to consciousness (or a set of those that scale together) - because that's exactly how evolution must have done it, since it's structurally the only thing it can do.


Having done a bit of biology, I would not want to use the word "evolution" and "simple" in the same sentence. ;-) Certainly not things that have evolved over many generations.

(Even artificial) evolution sometimes does really weird things.

https://www.cse.unsw.edu.au/~cs4601/refs/papers/es97thompson... (page 399) "Fig 7 [...] The cells shaded gray cannot be clamped without degrading performance, even though there is no connected path by which they could influence the output"


I don't mean to diss evolution. It is simple in terms of a program, the simplest it could possibly be - but it's also massively parallel; every molecule of or around any living thing participates in it. All life, constantly, everywhere, all at once. And then it has had literally billions of years to run on this rock, to get to the point of complex life in a complex environment.

Like a brute-force enumeration running on a supercomputing platform, it's mighty, but it's also simple.

Also, the process is simple. The output not so much.


We both agree on the output being non-simple and, to be fair, that's what I was thinking about when I wrote it. ;)

Taking it a step further though; Of course evolution happened to the very mechanisms by which biological organisms evolve.

Because of course it would.

(eg.

* Eukaryote -> Meiosis : This is obviously a means to create/tailor Variation. (Evolution ~= Variation->Selection )

* Reproduction in Metazoa in general, with all kinds of complex methods to exchange DNA.

* Vertebrates. Just. Vertebrates. (specifically: Sexual reproduction thereof.)

* Simple but still strange: Viruses with multiple reading frames. Sure they successfully evolve, but it's never going to be straightforward!

)


> It is simple enough to be reachable by a dumb, brute-force, random-walking, incremental process of evolution;

I'm not an expert in this field, but from what I've heard, brute-force natural selection is the naive explanation we're given in school. There are many more factors at play other than random chance: e.g. there is also sexual selection. Sexual selection selects for some characteristic that isn't necessarily an advantage in the current environment, but is somehow preferred by the opposite sex. According to some research, the reason why we lost the bone that other primates have in their penis is due to sexual selection.


One intuition that helps is to see evolution as a random search through solution space. Another thing to realize is that, in evolutionary algorithms, the distribution of random trials will be strongly biased around existing solutions.

It helps to understand that in some situations, a search algorithm with some level of randomness can arrive at a solution faster than a purely systematic approach, on average. In other situations, a search with added randomization might be slower, but its ability to escape local optima (to some degree) means that it is much more likely to find the global optimum (or at least a better local optimum :-P ).

Compare also: Monte-Carlo methods.

https://en.wikipedia.org/wiki/Evolutionary_algorithm
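The "biased around existing solutions" point is what separates evolutionary search from blind guessing. A toy illustration along the lines of Dawkins's classic "weasel" program, sketched in Python with made-up parameters (mutation rate, population size):

```python
import random

random.seed(42)  # deterministic for reproducibility

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Number of characters already matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Variation is centred on the current solution: each character
    # has only a small chance of being resampled.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

best = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
generations = 0
while best != TARGET:
    generations += 1
    # Selection with elitism: keep the fittest of the parent plus
    # 100 mutated offspring, so fitness never decreases.
    best = max([best] + [mutate(best) for _ in range(100)], key=fitness)

print(generations)
```

This converges in dozens of generations, whereas uniform random guessing over the whole space would need on the order of 27**28 samples; the difference comes entirely from mutating the incumbent instead of sampling from scratch.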


Interesting points. But I would add that sexual selection can select for some trait that is actually counterproductive, a famous example being peacock tails. Peahens use them to evaluate the health of potential partners, so it is an indirect measure of fitness to the environment. But at the same time, it is clear that healthy peacocks would still be better off without carrying around such big tails. If tails weren't so important for reproduction, they would likely have shrunk by now. I wonder how a comparison with the methods you mentioned would capture this.


So I was watching a clip about black holes and entropy today that seems to fit in here somewhat. The idea is that complexity behaves like entropy.

https://www.youtube.com/watch?v=yLOHdW7dLug


Could we also not try to reach that organization via evolution (once we have enough compute)? Starting with a worm-like AI...


I know a good Greg Egan story about that

https://www.gregegan.net/MISC/CRYSTAL/Crystal.html

“What created the only example of consciousness we know of?” Daniel asked.

“Evolution.”

“Exactly. But I don’t want to wait three billion years, so I need to make the selection process a great deal more refined, and the sources of variation more targeted.”

Julie digested this. “You want to try to evolve true AI? Conscious, human-level AI?”

“Yes.” Daniel saw her mouth tightening, saw her struggling to measure her words before speaking.

“With respect,” she said, “I don’t think you’ve thought that through.”

“On the contrary,” Daniel assured her. “I’ve been planning this for twenty years.”

“Evolution,” she said, “is about failure and death. Do you have any idea how many sentient creatures lived and died along the way to Homo sapiens? How much suffering was involved?”

“Part of your job would be to minimise the suffering.”

“Minimise it?” She seemed genuinely shocked, as if this proposal was even worse than blithely assuming that the process would raise no ethical concerns. “What right do we have to inflict it at all?”

Daniel said, “You’re grateful to exist, aren’t you? Notwithstanding the tribulations of your ancestors.”

“I’m grateful to exist,” she agreed, “but in the human case the suffering wasn’t deliberately inflicted by anyone, and nor was there any alternative way we could have come into existence. If there really had been a just creator, I don’t doubt that he would have followed Genesis literally; he sure as hell would not have used evolution.”


It’s also very much not just about the neuronal synapses.


This is an unacceptable level of reasoning about such a topic, almost comically illiterate.

The moment you use the term “brain”, you've already detached that thing from the rest of the body (without which it can't function, just like a spherical cow can't really live in a vacuum), drawn its borders, set limits to its functions, and so on. The same goes for “synapses”. So by stating that those mental objects are “real”, you've simply found that X equals X, that those things are exactly what they are defined to be. Their evident presence here is no different from the evident presence of demons inside possessed people in the past.


But unless you're a dualist, you have to admit that our biological brain is a massive bundle of cells from which consciousness somehow arises. It must be those mushy tangled cells, there's nothing else there. It probably is a problem of scale along with a philosophical problem around consciousness likely not quite being what we intuitively imagine it to be. (e.g. visual consciousness seems continuous but we know from saccades that it can't be)


> unless you're a dualist, you have to admit that our biological brain is a massive bundle of cells from which consciousness somehow arises

This is indeed a consequence of a physicalist/naturalist reductionist view. On that view, there are only natural entities and causes, and everything else reduces to just natural entities or causes. Therefore, as you imply, consciousness must somehow be explained by our biological brain or other natural entities/causes (note that I didn't say "arise", since some naturalists deny there's actually any such thing as 'consciousness' to be explained).

But you can also approach this from the other direction: it is impossible for naturalism to explain (or explain away) consciousness, and consciousness is real, therefore naturalism is false. That's my position. There isn't even the seed of an explanation given naturalism, and I think the naturalist's best position is to simply deny there's anything to explain. The whole project, given naturalism, is hopeless.

Again, it's not a problem of scale but rather kind. Naturalism just doesn't have the right building blocks to explain (or explain away) consciousness.


> it is impossible for naturalism to explain (or explain away) consciousness, and consciousness is real, therefore naturalism is false. That's my position. There isn't even the seed of an explanation given naturalism, and I think the naturalist's best position is to simply deny there's anything to explain. The whole project, given naturalism, is hopeless.

Materialism (the much more common name for "naturalism") can, in principle, very much describe consciousness, if we understood the functioning of the brain well enough. We already have an inkling, due to Turing: the human mind is quite possibly a computation, a mathematical object. Consciousness could then be an identifiable aspect of this computation.

I do agree though that in general, the materialist position is simply that consciousness is just not that deep a mystery, and some of the "deeper mysteries" like qualia are nothing special that even needs any explanation.


That "if" in the middle plays the same role, and carries the same responsibility, as the religious person's faith in God.


> since some naturalists deny there's actually any such thing as 'consciousness' to be explained

> it is impossible for naturalism to explain (or explain away) consciousness, and consciousness is real, therefore naturalism is false

I’ve always found the denial of the existence of experiential consciousness to be kind of funny. If you removed all my senses and put my brain in a vat to keep it alive, what would be left? As far as my brain is concerned, consciousness is the only thing for which I can be definitively certain of its existence.

It’s a solipsistic viewpoint, because I can only prove that consciousness is real for me. And since I can’t prove that it is real to anyone else, it’s not “science” in the sense of the societal process that requires reproducibility among peers, but perhaps it is still science at the personal level since I can reproduce the observation of its existence to myself at any moment that I choose.


> I’ve always found the denial of the existence of experiential consciousness to be kind of funny. If you removed all my senses and put my brain in a vat to keep it alive, what would be left? As far as my brain is concerned, consciousness is the _only_ thing for which I can be definitively certain of its existence.

Yes exactly, it is the thing that is most familiar to us, most accessible, and the only thing of which we can be certain. Descartes in his Meditations went down this path, starting with that foundation.

But I don't think you need to land on solipsism, because we don't have to believe only those things which can be proved. To me, it seems a reasonable hypothesis to suppose there are other consciousnesses. For an experimental line of thinking that leads to that conclusion, I would recommend the aforementioned Meditations.

Many are tempted to think that the physical world is the certain thing that we have, missing the fact that we experience the physical world, and so it is experience that we have access to, not the physical world itself. That I am conscious and have experiences of such and such is the thing I am most certain of. Whether there are physical things that correspond to those experiences is an extrapolation.


> But you can also approach this from the other direction: it is impossible for naturalism to explain (or explain away) consciousness, and consciousness is real, therefore naturalism is false

So you believe in a soul or what is the practical consequence?


> there's nothing else there

By way of analogy, if a civilisation had found an antenna but had not discovered radio, they might think that the internet and its content were coming from within the antenna, because that's all there is.


I was going to counter that we can circuit-bend humans enough to know that the processing is local, unless you're going to claim that every little piece of a human is connecting remotely to the magic-land - but then I remembered that this is exactly how "Internet of Things" starts to look like: tons of devices sitting next to each other, but for no fucking reason, talking through servers on the other side of the planet.

So who knows, maybe god is an adtech corporation.


I suspect that if we stop and think about bandwidth and transmission speed, we'd likely find that whatever might be transmitting information to our brains has to sit close to us, but I lack some data to actually do the computation.


If we jump this far off the beaten path we can probably stop assuming that whatever was transmitting the data would be limited to the 4 dimensional space we're trapped in and could in theory have causal connection via higher dimensions.


Honestly, that would be great, because we could eventually reverse that connection, and take over heaven.


They may also have thought that the content is being fed in by a giant purple bunny that lives on the moon.


You are correct, but do any relevant conclusions necessarily follow from this additional fact? And if not, why do you mention it?


That's not a good example because we could easily measure the modulation of the signal reception by simply covering and moving the antenna.


You can also say 'consciousness doesn't exist, we're all P-zombies'.

Or even the more limited version: consciousness exists, but doesn't have a lot of the magical powers usually ascribed to it (free will, etc).


Consciousness is unique because it gives rise to the problem of qualia. None of the other examples face this issue. A.k.a. "Why is there a 'what-it's-like-ness' to subjective experience?" and "Why am I me and not somebody else?". Saying that consciousness is merely a side effect of neurons kicks this proverbial can down the road.


True. But still, dualism is the only other way out of that conundrum

edit: ok you're quite right there are other ways out of the conundrum like idealism


It's not the only one. Naturalism/physicalism, a monist view, claims there is nothing else except for natural entities or causes. Dualism claims there is nothing else except for natural and mental entities or causes.

There is another monist view though, one that sometimes gets called idealism, that says there is nothing else except for mental entities or causes. I lean towards this view.

The challenge for naturalism is to explain (or explain away) consciousness (the mental). We call this challenge 'the hard problem of consciousness'. The challenge for idealism is to explain (or explain away) the physical. However, there is no analogous 'hard problem of non-consciousness' (but the explanation of this would take an unreasonable amount of words that I cannot fit here).


> We call this challenge 'the hard problem of consciousness'. The challenge for idealism is to explain (or explain away) the physical. However, there is no analogous 'hard problem of non-consciousness' (but the explanation of this would take an unreasonable amount of words that I cannot fit here).

There absolutely is. If the world is all ideas, then it's just as impossible to explain why different minds have a coherent shared illusion of the physical world, or even any kind of communication between minds at all. I think it's much easier to reject the concept of consciousness in a materialist world view than it is to reject the inexplicably shared physical world in an idealist world view.

Of course, there is the simple and self-coherent answer of solipsism, but that's a kind of an intellectual dead end, there's nothing really to discuss about it.


> If the world is all ideas, then it's just as impossible to explain why different minds have a coherent shared illusion of the physical world, or even any kind of communication between minds at all.

It is far from impossible. There are many idealist models. Here is one: there is a central consciousness (call it God), and God is able to give experiences to other consciousnesses, and those consciousnesses can communicate back to God.

On that model, there is a shared world because God gives experiences to these other consciousnesses as of being in a shared world. The other consciousnesses, in turn, communicate back to God what they will, and that influences future experiences that God gives to those other consciousnesses. That gives a shared illusion of the physical world.

Note that whether you're a naturalist or an idealist, you're going to postulate some things as true without justifying them. For the naturalist, they postulate the existence of physical things without explaining why the physical exists at all. For the idealist, they postulate the existence of mental things without explaining why mental stuff exists at all. If the idealist asks to be granted the above simple postulates, then a coherent shared illusion is possible. I'm willing to grant the physicalist whatever physical postulates they need, and from that I would like to see how they explain (or explain away) consciousness.


Can a computer be conscious?

If I take a camera, a microphone, a speaker, and some other sensors, feed them into a CPU/GPU, and self-train it to have an understanding of its capabilities (the internal/self) and the world around it (the external), is this consciousness or not? If I light a fire near this 'smart' computer, via its sensors it can detect the heat and move farther away. If I give this computer a complex task for which it has to calculate multiple steps before it acts, is this not mental work?

In LLM-based systems we can't really figure out how this occurs because the computational complexity of the operations is too high. Much like brute-forcing encryption, getting to the answer of how it's working isn't impossible; you'd just have to burn the visible universe to figure it out.


I'm a solipsist and to be honest I'm surprised there's not more of us


Nice joke :)


Yes, why are we bothering to use EUV machines to hit 25-micron drops of molten tin that are moving at 70 meters per second with two coordinated lasers, 50,000 times a second, to generate light at the right frequency to etch tiny processors onto tiny bits of silicon, so that we can build these machines that we are using to communicate with each other over a network that spans the whole planet... if none of the physical world is real, why have we bothered to build all that? The only conclusion is that if the physical world is not real, it's not real in such a well-simulated way that, in practical terms, it is as good as real anyway.


> I lean towards this view

Well don't lean on it too hard, because it's not very solid ; )


We currently don't have a definition of consciousness that people agree on and that allows you to determine whether something is conscious or not. It is therefore not surprising that you don't believe that scaling simple systems produces "consciousness", but unless you think that consciousness is some kind of magic that does not follow from the laws of physics, there must be some kind of reductionist thinking that can explain the phenomenon, given sufficient computational power to do the math.


The laws of physics are a mental model created by consciousness to understand the world experienced as beings separate from it.

To subjugate the creator to that which it created is a bit funny.


The laws of physics are measurable constants that, if the measurement is executed anywhere in the universe [1], will yield the same results [2].

[1] that is, anywhere in the same phase state as us; inside a black hole or microseconds after the big bang the measurement may yield a different answer.

[2] up to the uncertainty limit


Measured by conscious observers, and results perceived by conscious observers. There is no way around it.

The laws of physics are “universally” agreed truths about certain mental phenomena experienced by the conscious observers that conscious observers have so far interacted with.


"Observation" in terms of quantum mechanics doesn't require a conscious observer, only some interaction which affects the result. From the Wikipedia page on the observer effect [0], and a cited quote therein from Werner Heisenberg [1]:

    Despite the "observer effect" in the double-slit experiment being caused by the presence of an electronic detector, the experiment's results have been interpreted by some to suggest that a conscious mind can directly affect reality.[3] However, the need for the "observer" to be conscious (versus merely existent, as in a unicellular microorganism) is not supported by scientific research, and has been pointed out as a misconception rooted in a poor understanding of the quantum wave function ψ and the quantum measurement process.

    "Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather, only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being; but the registration, i.e., the transition from the "possible" to the "actual," is absolutely necessary here and cannot be omitted from the interpretation of quantum theory." - Werner Heisenberg, Physics and Philosophy, p. 137[1]
[0] https://en.wikipedia.org/wiki/Observer_effect_(physics)


So two hydrogen atoms bonding with an oxygen and forming a water molecule and releasing 'heat' only happens when a conscious observer is around? Interesting universe you live in.

I, personally, enjoy a universe independent of the whims of observers.


> So two hydrogen atoms bonding with an oxygen and forming a water molecule and releasing 'heat' only happens when a conscious observer is around?

The GP's arguments lead to greater weirdness than that. If the laws of science are mere mental phenomena (which I think is what he's saying), then we have no reason to think that things will ever behave in one way rather than another, because our thoughts about things point to nothing beyond our own minds.

If he were right, then reacting hydrogen with oxygen may produce heat and water; alternatively it may produce a chocolate cake with orange candles and a golf ball in the middle. Because the laws of science would exist only in our minds and wouldn't be about reality, we'd have no way of knowing: one prediction would be as rational, or irrational, as the other.

Since reality isn't like this, we know he's wrong, and that the laws of physics exist independently of observers.


>If he were right, then reacting hydrogen with oxygen may produce heat and water; alternatively it may produce a chocolate cake with orange candles and a golf ball in the middle. Because the laws of science would exist only in our minds and wouldn't be about reality, we'd have no way of knowing: one prediction would be as rational, or irrational, as the other.

Everything we know comes through the observation and mental formulation of our mind. This is just an unavoidable fact. We make proofs of physical phenomena entirely through observation with our sensory receptors. Whatever scientific experiment you have, whether it's watching a plant grow a certain way or particle collisions in the Large Hadron Collider, comes through the beliefs of an observer.

The question, in my view, is how universal these mental conceptions are. After all, we all similarly agree on some basic observations. There is a ball here. The ball blew up, this detector turned on and off at these times, etc.

If however, there were hypothetically some other observer that did not agree with you on these basic beliefs, such that they said that the detector turned on at time 2 and not 1, then what is the reality? What method do you have to confirm what is reality when two observers disagree on what was observed?

It is easy to throw this thought to the wind and say that no other perception of reality is possible, but that to me seems like faith, not logic.


Clearly, we know reality through our senses and our ideas. That is uncontroversial. The problem is when one says (as I think you are) that our sensual perceptions and ideas are all we know, and that we can reasonably doubt that they ever refer to reality.

We know reality by means of our senses and ideas. We don't know our senses and ideas as such.

To repeat myself: if your claim is true, then the person who says "reacting hydrogen and oxygen will produce water and heat" can be no more, and no less, correct than the person who says "reacting hydrogen and oxygen will produce jello in the shape of George Washington's nose". Because if, as I think you claim, we know nothing but the "mental formulation of our mind", then our thoughts don't point to reality; and if this is true, we can't predict anything about reality. And therefore each of these two claims about chemistry is as valid as the other. Do you agree with this?

> If however, there were hypothetically some other observer that did not agree with you on these basic beliefs, such that they said that the detector turned on at time 2 and not 1, then what is the reality? What method do you have to confirm what is reality when two observers disagree on what was observed?

This is a red herring. We may never know who is right; that is different from saying that there is no reality to know.


>Clearly, we know reality through our senses and our ideas. That is uncontroversial. The problem is when one says (as I think you are) that our sensual perceptions and ideas are all we know, and that we can reasonably doubt that they ever refer to reality. We know reality by means of our senses and ideas. We don't know our senses and ideas as such.

I am failing to see the distinction here. If your only conception of "reality" is through your sensual perceptions, how can you know anything beyond that? How does reality then differ from the mental conception of your sensory receptors? To me this is equivalent to saying, "there is a spiritual world that doesn't interact with anything, and we can't prove it, but it exists". The only thing I can think of is that reality is shared among several observers who all agree on it.

This is why I brought up the hypothetical scenario of two scientists who disagreed on whether the detector turned on at t=1 or t=2.

If you were the only being on this planet, so that no one could disagree with you, and you for some reason were high on LSD for the entirety of your life and saw flying rocks, would that not be your reality? Every scientific test you did would fit your findings, as perceived and evaluated by you. If there was a community of such people who never saw a rock standing still, what would be the real truth? And if you say "but there is still a reality, those people just wouldn't know it", then I can ask: how do you know we don't fit that same bill, where some other observer could look at us and say "those guys are all seeing an illusion, it's actually this way"?

>This is a red herring. We may never know who is right; that is different from saying that there is no reality to know.

But what determines who is right? That is my question. What answer can you give me besides "a conscious observer?", or "a collection of conscious observers who all agree".

And if you can't determine who is right, what meaning is there to it being reality? Why is one conscious experience any more correct than the other?


Let's work out precisely where we disagree:

1. Do you agree that it is rational to think that burning hydrogen in oxygen will produce water and heat (prediction A), and irrational to think that it will produce a juicy steak that can play tennis (prediction B)?

2. If yes to 1, do you agree that A is correct (or at least closer to being correct), and B is incorrect (or at least further from being correct)?

3. If yes to 2, do you agree that there must be something against which A and B are both measured that makes one right and the other wrong?

4. If yes to 3, do you agree that the thing against which A and B are measured must be independent of them both?

5. If yes to 4, do you agree that there is a reality independent of our observations, and which our observations presuppose and are caused by?

I think you're saying that we can reasonably think that our sensual perceptions and ideas are all we know, and that we can reasonably doubt that they ever refer to reality, but I want to confirm this.

> And if you say "but there is still a reality, none of those people would just know it", then I can ask, how do you know we don't fit that same bill, where some other observer could look at us and say "those guys are all seeing an illusion, its actually this way"

See point 1, above. I suggest that nobody really systematically doubts their observations. But if you honestly disagree with point 1 then let me know.

> But what determines who is right? That is my question.

Obviously that is sometimes impossible to answer, depending on the situation. But sometimes (more often) it's not. Again, do you disagree with point 1 above?


>who disagreed on whether the detector turned on at t=1 or t=2

This question has already been answered by relativity and causality.

Since light has a constant, finite propagation speed, there are events whose order different observers will not agree on... but they will always agree on the order of causally connected events. The cup always gets pushed off the table before it breaks on the ground.


You're making a fundamental mistake in comprehending this. You're confusing the map for the territory. When I say the laws of physics I mean the map, and you're thinking about what they point to, the territory.

What is the most fundamental fact? What is the first step, from which everything then can be derived?

That first step is the fact that there's something observing. That observer can construct a theory.

That theory can posit "actually, there's matter, and there seems to be matter which does not possess this same capacity of observation, therefore this inert matter must have formed first, and then [insert hypothesis of your choice] that's how the observer arose".

That's valid.

However, it's not a fact, and it's not the only theory.

The theory I'm entertaining instead is this: the observing phenomenon is as much a fundamental part of the universe as gravity. The reason some things don't seem to possess it is that we can't directly perceive others' observing phenomena, and it's only when this phenomenon binds with matter in such a way (time, appearance, scale, complexity, frequency) that it can communicate with similar bound phenomena that there is recognition: "you have a face, you're smiling, you seem to be alive, so I'm going to assign the observer trait to you".

Now that I recognize other observers we can communicate and form a theory, and see what we can agree on. Then and only then I can say "well let's pretend that there are no observers and see what we can agree on". And thus the laws of physics are born, as a mental model.


> You're making a fundamental mistake in comprehending this. You're confusing the map for the territory. When I say the laws of physics I mean the map, and you're thinking about what they point to, the territory.

I agree with the map-territory distinction -- but the laws of physics (or, better, the "way things behave") is primarily in the territory, and only derivatively in the map. Our map may be more or less accurate, but the map presupposes the territory. Otherwise, nothing we can say about reality is more or less true than anything else, and absurdity results (like the hydrogen-oxygen example I gave before). If the way things behave is solely a property of the map, not of the territory, then it must be the case that we can't predict what will happen when we react hydrogen with oxygen. But we can predict this, and therefore the way things behave is a property of the territory. The map that says hydrogen+oxygen=water+heat is a better map -- that is, closer to the territory -- than one that says hydrogen+oxygen=chocolate bunny. But again, better vs worse map presupposes the territory, and presupposes that the stuff on the map points to real stuff on the territory, however imperfectly.

Are you in agreement with this as far as it goes? Do you agree with my premise that we can predict what will happen when we react H and O?

> What is the most fundamental fact? What is the first step, from which everything then can be derived? That first step is the fact that there's something observing. That observer can construct a theory.

Surely the fundamental fact is existence? Without this, there can be no observation, no observer, and no observed object. These things, like any individual thing, presuppose existence.

Further, your statement presupposes other things: for example, the law of non-contradiction (which is a facet of existence). Your statement says that there is a 'thing', that it observes, and that it constructs a theory. It must therefore take for granted that there is not nothing; that the 'thing' doesn't not observe; and that it doesn't not construct a theory. Otherwise the statement is meaningless.

This is just the beginning of what I could say, but your argument, like any argument, presupposes existence, the laws of logic, non-contradiction, the distinction of one object from another (the law of identity), etc. So we can't take an observer as a fundamental fact. More generally, we can't take any particular thing as the fundamental fact. Universal knowledge precedes particular knowledge. Existence in general precedes existence of the particular.

I'm not sure if this directly affects our disagreement, but it might help understand where I'm coming from :-)


> Are you in agreement with this as far as it goes? Do you agree with my premise that we can predict what will happen when we react H and O?

I do agree with the second premise, but I'd say that's just saying we have a really good map for that particular area of the terrain. If you're going to your aunt's and you have a really good map, you're not going to show up in Hong Kong (I'm mapping the example onto your hydrogen-oxygen case).

But fundamentally, "the laws of physics" are still information as digested by the human brain. At one point our map of physics didn't include relativity. Now it does. The terrain did, but the map didn't.

With this in mind, it's crucial to remember that our current map has nearly no information about consciousness. It's like you have really detailed information about your aunt's neighbor in Chicago, and you know that Hong Kong exists and is Eastward. So you just assume that it's a straight road. Someone might suggest there's a few mountains in the way, but fundamentally you have no idea how to actually connect both points of the map.

Materialism is essentially that to me: looking at the very detailed map of current physics, wiping its hands, and saying "yup, that should be enough, it's a straight line from here".

>Surely the fundamental fact is existence?

I try to refrain from making an observer-observation distinction. To quote Krishnamurti "the observer is the observed".

Moving the discussion in this direction feels like another map, the map of logic. Within the framework of logic, I do agree with those terms. However, I don't think reality obeys logic in so much as logic is another way for us to make sense of the phenomena around us.

I hope I don't annoy you by another analogy: Imagine we're watching a wolf and his cub, and I ask which comes first. You say the father of course, because he was born earlier. This is true. However, I was asking which of them was ahead, that is in the present moment which of them will arrive first. You were answering from knowledge, not from direct observation.

Maps, models, and logic, are within the realm of thoughts. But experientially, without following and validating the narrative of thoughts, can you directly observe that there is existence without observation? Your thoughts will immediately tell you that this is already a contradiction: can you observe that there is no observer? Can you discern in your own experience "this is observation", "this is existence". Could they be one and the same?

This is where I'm coming from. Thank you for engaging in these topics which are tricky to discuss :)


I think the initial problem with your argument remains. If the 'laws of physics' (honestly I'm not keen on that term except for casual use -- can we say 'way things behave'?) only exist in the human brain, and not in reality, then we have the problem that our map does not point to any territory. All maps are imperfect, but the 'map' that includes relativity is more accurate than the one that includes Newtonian mechanics, which in turn is more accurate than the one that includes Aristotelian physics, which in turn is more accurate than the one that says a giant dragon in the sky controls everything, etc. But again, the map presupposes a territory. The fact that one map is better than the other presupposes something independent of any map that each map must be measured against to determine its accuracy. Given that each map is attempting to describe the 'way things behave', it implies that a territory, independent of any map, exists; and further, that the 'way things behave' exists on the territory. Agree?

And therefore, the original statement that I took issue with, which was "[t]he laws of physics are 'universally' agreed truths about certain mental phenomena experienced by the conscious observers that conscious observers have so far interacted with" must be false.

> Materialism is essentially that to me, looking at the very detailed map of current physics, wiping their hands and saying yup that should be enough, it's a straight line from here.

Oh I totally agree with you about that. I'm not a materialist at all, as my comment history will reveal. Materialism cannot account for the obvious fact of consciousness, or the fact that the human mind is capable of reasoning. In general, any philosophy that tries to reduce all reality to a single principle (like matter, number, energy, power, dialectic, the self, etc) is suspect, IMO. Starting with 'existence' as the first principle avoids this problem.

> I try to refrain from making an observer-observation distinction. To quote Krishnamurti "the observer is the observed"... I don't think reality obeys logic in so much as logic is another way for us to make sense of the phenomena around us.

This must imply that someone who says 'the observer is not the observed' (assuming he means these terms in the same sense as Krishnamurti) is wrong, but this only makes sense if one accepts logic as an unbreakable and truly-existing principle.

So I would disagree that logic is a map; instead, logic must be presupposed by any statement or any belief; it must be presupposed by any map. (I think logic is 'baked into' existence itself -- it's inseparable from existence -- in some sense it is existence.) Again, if we deny the law of non-contradiction (one of the first principles of logic), anything we say, believe or think can be true or false in the same sense at the same time, so any attempt to make any sense of reality will inevitably not even get to first base. Hence, nothing we say would have any meaning whatsoever. So I disagree that logic is within the realm of thoughts (except insofar as it's something we grasp or see - meaning it's only derivatively in the realm of thought).

> Can you discern in your own experience "this is observation", "this is existence"? Could they be one and the same?

I'd respond by saying that all these questions presuppose something, which is existence, for the reasons I gave in this and my previous post. The fact that you can ask these questions, and that there is any meaning in the questions, presupposes that (for example) the law of non-contradiction exists independently of you (and me, etc).

> Thank you for engaging in these topics which are tricky to discuss :)

Sure!


No that's not what is being said. The laws of physics are a mental model. There is something they map on to, but we can only form an incomplete map. They're a reduction in order to understand, and thus as much as they're formed by our observations, they're equally informed by our biases and limitations.


No, I am not saying that things don't happen when there is no observer around. I am saying that *what* actually happened is entirely based on the mind's perception, and is completely subject to interpretation.

To give an analogy, imagine the universe really was a collection of bits: [0, 1, 2, 3, 4].

We are some program which gives meaning to these bits. Another program, on the other hand, interprets these bits another way, perhaps even in a language completely incomprehensible to us.

The state of the universe can change to another set of bits, but each observer could still form completely different sets of perceptions, or change of perceptions.
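A toy version of this in Python (the four bytes and both "programs" are of course just illustrative):

```python
# One and the same "universe state": four bytes.
state = bytes([72, 105, 33, 10])

# "Program" A gives the bits one meaning: it reads them as ASCII text.
as_text = state.decode("ascii")            # "Hi!\n"

# "Program" B interprets the very same bits another way: as one integer.
as_number = int.from_bytes(state, "big")   # 1214849290
```

Same underlying state, two incompatible perceptions of it; neither reading is more "real" than the other.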


And where do you find such a universe?


Before we had LLMs, it wasn't at all obvious that the simple models we had would work as well as they do. It's not clear what you define consciousness to be, but for general intelligence it's easy to imagine how something could work with enough resources. We can imagine how animals could evolve to be smarter and smarter over time. An old-school symbolic AI with near-infinite power could become the sci-fi-style AI of old. We can imagine a super-advanced LLM mixed in with logical reasoning.


My own view is probably atypical among dualists and idealists, and hard to explain. I'll just state what I think is true, without justifying why I think these things are true or compatible with each other. I think:

- Most or all of what makes us human, like love, pain, a sense of belonging, seeing, and hearing, finds almost all of its explanation in the physical. I include even things like thinking, reasoning, and beliefs as finding almost all of their explanation in the brain.

- The experiential/phenomenological side of these things (love, pain, sense of belonging, seeing, hearing, etc.) is non-physical, a mental/consciousness thing. Everything else besides the phenomenology finds its explanation in our brains (or some physical states wider than just the brain).

- I am not surprised that LLM's could be as powerful as they are today (though I am surprised that I was alive to see that), and I can see them getting better.

- The foundation of reality is ultimately mental/consciousness and not physical, and the physical (to use a philosophical term) supervenes on the mental.

Regarding definition of consciousness: I'm not so bothered with how we define words. I think a lot of philosophy is pointless arguing that amounts to no more than people disagreeing about how we should define a word. I see no point in that. Instead, let's just stipulate what we mean by a word, and get on with the meat of the discussion. In this particular instance, when I talk about consciousness, I'm talking about the thing that the 'hard problem of consciousness' is talking about. I'm not talking about, e.g., consciousness in the sense of wakefulness (where being asleep or under a general anaesthetic would count as unconscious).


Before planes people imagined machines that would flap their wings like birds.


> But there are no simple building blocks that we can point to and say "yes, I can see how more of these would produce consciousness".

If you are open to having your view on this changed, I urge you to read "Vehicles: Experiments in Synthetic Psychology" by Valentino Braitenberg (1986).


It's very possible you're wrong.


> What is consciousness? The firing of a gazillion synapses you will never fully understand.

But then you realize that transcription factors (proteins that activate or inhibit other genes) form networks with feedback loops that are conceptually equivalent to neural networks, and some cells grow camera-like eyes complete with a retina and a lens [1].

It isn't about neurons per se, more about the way information is treated by these systems.
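As a purely illustrative sketch (Python, made-up weights), a two-gene mutual-repression loop iterates exactly like a tiny recurrent neural net and settles into one of two stable states, the classic bistable toggle switch:

```python
import math

def sigmoid(x: float) -> float:
    # The same squashing nonlinearity used for artificial neurons.
    return 1.0 / (1.0 + math.exp(-x))

def step(a: float, b: float, w: float = 8.0, bias: float = 4.0):
    # Each gene's expression level is a squashed function of its inputs;
    # here each transcription factor represses the other.
    return sigmoid(bias - w * b), sigmoid(bias - w * a)

a, b = 0.9, 0.1          # slight initial advantage for gene A
for _ in range(50):
    a, b = step(a, b)
# Gene A ends up nearly fully on, gene B nearly fully off.
```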

1. https://en.wikipedia.org/wiki/Ocelloid


What makes you think that a machine any less complex than the entire organism is capable of "grokking what is actually going on"?

I agree with you that humans may never be able to understand most of what's going on in biology. But I'm even less enthusiastic about gradient descent. The interesting results in machine learning for biology all seem to amount to better ways of indexing the data we already have (like crystallographically-discovered protein shapes) and interpolating between the existing matches nearest to a query term.

I don't claim to know what "understanding" really truly is, but I do know what it isn't.


Reductionism is useful because we need to find manageable pieces of the system so we can model it, and manipulate it. Without reductionism we wouldn't have modern medicine.

It is important to acknowledge tho that it can never produce the whole story. There's nothing wrong with it. It's just not sufficient for a complete understanding.


I don't have a PhD in biology (or in anything), but isn't temperature a single number "explaining" a gazillion interactions we will never "understand"?


No, a temperature is a measurement.


Temperature is a statistical property of a macroscopic system that reduces a large number of variables of a microscopic system to a single scalar value.

It doesn't explain so much as summarize, and it does so about as well as a mean or a median (which makes sense, because temperature is essentially a scaled average of the kinetic energies of all the microscopic components). It doesn't tell you about the distribution of energies, although in some extremely simple model systems you can compute the distribution: https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_dist...
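A toy illustration in Python (all numbers made up except the Boltzmann constant): simulate a 1-D gas, then recover its temperature from nothing but the mean kinetic energy, having thrown the rest of the distribution away:

```python
import random
import statistics

k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 6.6e-27          # roughly the mass of a helium atom, kg

# In 1-D thermal equilibrium, velocities are Gaussian with
# variance k_B * T / m.
T_true = 300.0
sigma = (k_B * T_true / m) ** 0.5
random.seed(0)
velocities = [random.gauss(0.0, sigma) for _ in range(100_000)]

# Summarize 100,000 numbers as one: <(1/2) m v^2> = (1/2) k_B T
mean_ke = statistics.fmean(0.5 * m * v * v for v in velocities)
T_est = 2.0 * mean_ke / k_B   # close to 300 K
```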


Yeah, a measurement of the speed of a lot of molecules.


In biology, being amenable to reductionist explanations is an evolvability trait. Repeating building blocks can be re-used, combined and repurposed with small changes. That's the fundamental reason why so much in biology does make some sense. On the other hand, we are of course biased towards understanding things that are easy to understand. Who knows how many things are overlooked because they're just too messy!


Can a reductionist approach be good enough to control life independently? Or do we need to emulate the full process? I am not sure if the point of the article is to bash reductionism; maybe it is to push for full understanding. Kind of like Newton's laws fit well within General Relativity. A first step.


>> to push for full understanding.

The point of the GP is that that full understanding can't happen in a human brain, but reductionism may be a lever good enough for that human brain to use.

>> do we need to emulate the full process?

Probably not, but let's assume 'yes'. It seems likely that it would be possible to optimize the emulation to a point where it becomes practical. We do something similar today with weather simulations; it's not possible to emulate each molecule in the air, and yet meteorological models manage to account for an insane number of processes and, in the short term, produce accurate predictions.

On a more subjective note, I believe the methods and ways of thinking in biology need to shift from "let's understand everything" to "let's produce a forecast and maybe fix some stuff". Pharma companies are already on that track, but it could be that the general public and people in academia are still on the full-complete-very-detailed-understanding wagon.


That's basically a cop-out, I think it's called? Some kind of cheap magic trick, fooling the audience by pretending that if we haven't explained xyz, it's only because of complexity.

This kind of thinking is deeply entrenched in physicalism.

And please, GPT does not think.

As Alan Watts beautifully said, « science is the art of prediction ». To forget that is naïve at best (meaning, no amount of untangling the complexity will explain where the universe comes from, nor how consciousness arises from inert matter).


> Most of the greatest and most useful knowledge will be very reductionist in nature.

More likely the greatest and most useful knowledge will simply not be understandable by our primate brains.

We'll either upgrade our brains to understand, build machines to understand it, or it'll be left not understood.

It's a miracle our primate brains can understand relativity or calculus.


Aside from consciousness, you can understand those processes by studying the respective fields. The "you will never understand" claim is only true for non-experts who don't have the foundational knowledge.


I believe the argument is not that any part of those fields is unknowable ("too complex for a human brain to grasp"), but that the whole field is massive: the number of assertions and relationships required to understand the whole process is bigger than what the human brain can hold. That it is a problem of quantity, not quality; that even experts will need to abstract away and statistically average many details in these fields.


They're unknowable in the sense that our models are very complicated metaphors, and that meta-modelling process runs out of steam at some level we don't understand.

It's philosophical Dunning-Kruger. We don't know what we don't know. Our models are patchwork associations that have some experiential consistency, and appear "logical" based on our subjective experience of what logic is.

But that's all they are. There is no automatic implication they're complete, or even that it's possible for them to be complete.

We usually assume that limitation doesn't exist. But that seems very naive and optimistic.


I see, in that case I would grant it for biology but not so much for AI, at least not yet.


> How does GPT4 think?

It doesn't think. It only processes.

And as far as I grasp it, its processes can essentially be understood, not least because they were designed and documented.

It is a big mistake to apply the same "you will not understand" complexity mysticism to biological processes and to human-designed computer processes, not least because it is patronising and reductive.


"It doesn't think. It only processes."

Human thinking is frequently described as 'processing', a 'process'.

I guess that is the problem with using metaphors and analogies.

When it comes to brains and AI, a lot of the same metaphors and analogies have been used over the centuries. So now, which one is 'more accurate'? Can't really say; it depends on what each person meant by that metaphor.


Yes. Now that we have machines that people are making claims like this for, we really should stop using the word "thinking" so lazily.

Human or animal thinking is a process, but it is an innate process of instinct, hypothesis, testing, prediction, overrule, and introspection (whether there is metacognition or not).

GPT "thinking" is only "thinking" in the colloquial sense of "wait a sec, the computer is thinking about it".

GPTs don't think. And they especially don't think in some imaginary complex way that we can't understand.


Hey aliens just landed and they are telling us we don't think, only they do, we only process.


I am not sure what you mean by this.

But we, humans, qualitatively think in a way that we absolutely know for sure GPT does not.

We are capable of introspection and metacognition.

GPT is not. We know it is not, because the software does not provide it the resources or the methods to engage in it.

To suggest otherwise is pointless quasi-woo.


It wasn't just GPT.

You included all animals.

I think the point is that you are putting human thought on a pinnacle, that is actually not proven or even commonly accepted. And if you have proof, then you should write a paper and put to rest centuries of questions.

The parent's example goes like this:

Some other 'alien' species could find and examine humans and make the same leap, "these things appear to be forming rudimentary structures like the ants, and forming basic social structures. Hmmmm, my dear Watson, I wonder if they have any metacognition. Do you think they have qualitative thoughts?"

-> "But we, humans, qualitatively think in a way that we absolutely know for sure GPT does not."

Everyone is using GPT as the goalpost. But it is not.

DeepMind just released a paper where AI could do conceptual thought to solve geometry problems.

DeepMind's AlphaGo 'imagined' new moves that no human ever would.

It is hubris to think that someone won't put AI in a feedback loop, not unlike our own default mode network, and allow continual learning and adapting.

And at that point, any distinction between carbon and silicon, for what is happening internally will be on shaky ground.

EDIT: Maybe not you. Could have been different thread that used animals. So not you. But part of point, so don't want to edit.


No, I did say that animal intelligence has some characteristics. I think I made it clear that metacognition is more optional but I may have written less clearly.

However, I was responding to assertions made specifically about GPT, not some other or future AI.

Other AI systems may be different (I am enthusiastic about Steve Grand’s approach for example) but GPT is not “thinking” by any useful stretch of the word.

I am not placing human intelligence at the pinnacle. I am rebutting the idea that GPT is usefully thinking. It alarms me how many people are willing to strongly suggest it has magical, unknown emergent abilities, when all it can really do is surface embedded logic in the corpus of the written word.


ah. Ok, if it was specific to GPT. Got it. I was extrapolating the arguments to all AI.

I think the reason people are freaking out about GPT/LLMs is that they deal with 'natural language' specifically.

However it is doing it, forget all the arguments about consciousness. It is touching on something with the general human that they believe is specific to humans.

There is something about having a computer 'understand' and 'speak back' in natural language, that triggers humans on some level. It's something that wasn't supposed to happen, because it is 'innately' human. People said "it will take a 100 years", and now it is happening.


That's right. It is not human.

How do humans think? We do not know.

AI is merely a facsimile of human thought. We don't know how that works, but we can certainly make a cult of it.


Oh please.

[I note that your subsequent edits invert the sense of what I was originally replying to so I am going to abandon discussion with you]


We don't. Prove me wrong. I'll wait.


Complexity is not useful because it can't be used to fix things


> What is consciousness? The firing of a gazillion synapses you will never fully understand.

Hmm. How do you know this is the case if you don't understand it? Magical thinking?


I think the point was that there are a lot of things that at some point we 'didn't understand'. But we keep chipping away at it. And eventually we do understand.

We were able to build steam engines before understanding Entropy, and Entropy is super complex.


I think the issue here is, you probably can't apply the same logic to consciousness.

It's the medium through which we understand, comprehend and chip away things. It's the medium in which the world appears. It's the medium in which "you" exists or think exists.

Extraordinary evidence is required to make the claim that the firing of neurons, which we perceive and understand through consciousness, is what causes consciousness.


> This burst of activity represents a frustrated thought that “it is time to become impatient with the old view”, as Ball says. Genetics alone cannot help us to understand and treat many of the diseases that cause the biggest health-care burdens, such as schizophrenia, cardiovascular diseases and cancer.

Does anyone actually subscribe to this "old view"? Surely the majority of scientists, medical professionals, and laymen alike understand environmental factors play a huge part in this. Why make a call to action to address an imagined state of affairs?


I can personally attest that some medical professionals have a pretty dismal broad understanding of genetics and evolution. And besides, even if the book isn't breaking new ground, or is targeting a more popular audience, a good writer synthesizing developments in a field and providing fresh metaphors for understanding it can be tremendously beneficial, even to the academic side, I think. There's also an element of marketing and puffery in how authors talk about their new books.

All that said, yeah. I've for a long time liked the simple idea that the organism is the product of interaction between genes and their environment. That simple notion alone banishes many of the supposed misapprehensions under attack in this article.


Some idiots (on both sides) seem to think "nature vs nurture" must have a single winner. Some slightly less stupid people like to strawman all their opponents as thinking that. This seems to be yet another example of the latter.


A reliable career-advancing publication in the life sciences often follows the pattern: Look, everybody! We've found a genetic marker for X! Here's how we sequenced the organisms, and here are the stats we ran to identify this particular gene or constellation of genes.

This was exciting research in the 90s, but now gene sequencing is routine and the results just get added to the pile. It's scientific chum.

This book's authors, the review author, and the editors at Nature who decided this review was worth publishing and under what headline, would like to coordinate a shift away from this kind of low-impact publication.

To make significant contribution, you can't just identify a marker for cancer or dinosaurism: you need to actually attempt to cure cancer or turn people into dinosaurs.


It's not like that kind of research is valueless, though. It's still important to do that kind of thing: firstly, for practical purposes such a map is useful; and secondly, it can help with building a more fundamental theory. (The same is true of the "particle zoo" before the Standard Model was developed in physics.) I don't think stopping doing it means you'll get the big breakthrough any faster; in fact, it'll slow things down.


It seems people think that, if we just focus more on Kuhn's "revolutionary science" instead of incremental, "normal science", we'll get more paradigm-shifting theories.

It could just be that paradigm-shifting breakthroughs are exponentially harder to find, and that it's not just a matter of "we just didn't look hard enough"


"We found a genetic marker for X" is even more fuzzy when talking about something like Schizophrenia or Autism, because the diagnosis itself is very far from a precise label. It's not just about missing environmental contributions or dealing with how complex interactions between different generic markers are, though these are also issues. We're averaging over probably dozens of different issues in the vast majority of studies, and that extends to other subfields of biological psychiatry too.


Not sure about in the sciences, but large companies and their funders still do. e.g., 23AndMe, which still wants to become profitable through drug development based pretty much only on DNA sequencing data.

I agree with the article, it's a paradigm that peaked 20 years ago, and has been outdated for about a decade.


> Why make a call to action to address a imagined state of affairs?

Because it's easier to argue against windmills than actual opponents.


Yes. See: https://obamawhitehouse.archives.gov/precision-medicine

But I'm not being absolutely critical since I'm only viewing from the sidelines and perhaps there is still something important to be gained correlating genetics with disease for cases where there is little hope of a standard diagnosis.


> Precision Medicine, on the other hand, is an innovative approach that takes into account individual differences in people’s genes, environments, and lifestyles.

That seems to be a far cry from asserting genetic determinism. Can you explain what you were trying to communicate with that link?


Among academics? No it’s not the view.

Among the general population? It absolutely is.


> Does anyone actually subscribe to this "old view"?

I think that’s how genetics is communicated… if you have gene X, you’re Y% more likely to get a disease. Stuff like that.


Even in this most simplified form, it's clear genes don't determine all of the outcome. You're Y% more likely to get a disease, with other factors affecting how the dice actually fall.
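To make that concrete, a toy simulation in Python (all numbers made up): a variant that makes you "40% more likely" to get a disease still leaves the outcome mostly to the other dice rolls:

```python
import random

random.seed(1)
baseline = 0.05        # 5% lifetime risk without the variant (made up)
relative_risk = 1.4    # "40% more likely" with the variant (made up)

def gets_disease(has_variant: bool) -> bool:
    # Genes shift the probability; environment and chance decide the rest.
    p = baseline * (relative_risk if has_variant else 1.0)
    return random.random() < p

carriers_affected = sum(gets_disease(True) for _ in range(100_000))
# Only about 7% of carriers are affected; the vast majority never are.
```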


> Why make a call to action to address a imagined state of affairs?

because writing a science book is a career-influencing milestone?


> Surely the majority of scientists, medical professionals, and laymen alike understand environmental factors play a huge part in this.

The majority of laypeople have no idea what you are talking about, and if they know anything, I would guess from colloquial language that they 'know' your 'genes' are inherited and determine things about you, in a way you can't change or avoid.

I'm not criticising the laypeople (or flattering us), but pointing out that we are in a bubble ... on another planet ... in a different universe ....

I expect that more scientists and medical professionals are back on planet Earth (i.e., unaware) than you imagine. Who else would the OP be targeted at?


The article is disorienting, and it glosses over the real issues. “It's time to admit that the water is wet”, oh yes.

“The view of biology” they talk about doesn't exist. What exists is pop-science, with its acolytes and proselytes, boldly claiming that everything is as easy as 1-2-3:

1) We just make a list of All The Genes.

2) We put All The Data into the Computer.

3) We solve any problem by finding a relevant connection.

Just imagine how much nonsense has been said about “genes”, from racial-cleansing projects to self-help books, or reasoning about why the corner store closed. As we are speaking about all kinds of genealogy, it is worth mentioning that pop science is a distant cousin of real science, closer to 19th-century militant vulgar materialism, the kind of marketplace “science” which promised that corpses would be reanimated by wondrous electricity in the same manner it made the frog's leg move. “Only need to figure out enough details”, as usual. Now “genes” or “evolution” are just a way for common people to talk about “fate” or “dog-eat-dog” in “scientific” terms.

With that sorted out, we can study the scientists. Unfortunately, a lot of them aren't that different from the general public in understanding that for each efficiency of some model, there is a corresponding deficiency. Educated people honestly ask why we shouldn't use computer metaphors so carelessly all the time. It's like asking why hammers can't be used for everything, or why integers exist when we can use floating point for everything (and also deliberately ignore the complexities because “we're dealing with general cases, we don't need that”). Is there something wrong with the hammer? No, there isn't, something is wrong with the people who don't really understand what they are doing.

So the book review basically says “Fine, we all know it's a pathetic circus, but it's our circus, and lots of people are trained to play their parts, so let's declare some patented nonsense outdated, do a facelift, and go on”.


It’s interesting that throughout history we have tended to characterize the behavior of biological systems in terms of the dominant technology of the time. At one point, it was mechanical automata. Then it was hydraulics. Then it was electrical circuits. Now it is computers.

It seems that some people have become overly attached to the computer code explanation of DNA, viewing it not just as a convenient analogy to current technology but as a fundamental truth. They will ask questions like “if DNA is code, then what corresponds to functions? or exceptions? or data structures?” and end up working backwards from the analogy instead of observing what DNA does and building a mental model from there.

(It doesn’t help that the code analogy is very flattering to the programmer’s ego.)


Michael Levin appears to be demonstrating that cells have a goal-seeking intelligence operating more or less independently of directions from the nucleus. Since literally every cell on earth arose by fission from another cell, the lineage of this intelligence goes back in an unbroken line to their first common ancestor billions of years ago, with natural selection operating on it directly by success at solving problems of survival and reproduction in real time, not just via selection of genes.

In a similar way, every membrane in every cell came from extending and then splitting an existing membrane, again more or less independently of the nucleus. Natural selection has operated directly on those membranes, besides its role in adapting the genes that code for subunits of the membranes.

The nucleus might be thought of as mainly a library of apparatus that the cell's goal-seeking capacity may draw upon as needed. This analogy is necessarily limited because there are innumerable feedback control cycles encoded in genes, in the repressor and promoter proteins they encode, and in enzymes that methylate or de-methylate start and stop codons.

But something is in charge that operates on a shorter scale than those can. We understand very, very little about that intelligence. It is far more capable than we could ever have guessed. It barely seems possible, with as few moving parts as we know about.


> In a similar way, every membrane in every cell came from extending and then splitting an existing membrane, again more or less independently of the nucleus.

Can you elaborate?

As I understand it, the manufacturing of the cell membrane and coordination of splitting with other features (eg, intracellular features that pull it apart) requires the DNA transcription and is regulated by particular genes.

That is, depends on the nucleus.


Proteins are certainly involved wherever membranes are grown, and proteins are coded for in the nucleus. But the membranes are always there first. No cell anywhere in nature, as far as we know, just makes any membrane from one lipid and then another; membranes are only made by extending an existing membrane.


The hypothesis is that coding molecules predate cell membranes.

So we have “every current cell membrane requires the nucleus” and “the coding portion came first”; nobody is disputing that membranes build on membranes, just pointing out they integrally depend on the nucleus to operate (coding for proteins) — and always have.

> again more or less independently of the nucleus

Ie, this is wrong.


There was a time when there was no nucleus, and no DNA. But there were membranes and RNA.


Everything about cell membranes splitting would need to be encoded in the germ plasm of the embryo (because all cells derive from that one). Can you point to the information-encoding material in the germ plasm which is not in the nucleus? Everything we know about cell biology currently suggests that everything required for cell membranes is implemented by proteins controlled by regulated expression of DNA.

For natural selection to operate directly on membranes would be a fantastic discovery but would require some pretty challenging experiments to demonstrate convincingly.


Lovely insight


> the lineage of this intelligence goes back in an unbroken line to their first common ancestor billions of years ago,

Nonsense. If cell walls ever had the ability to replicate themselves, totally autonomously, without the rest of the cell -- they've very certainly lost that ability. That loss of self-sufficient replication is quite a big "break" in the line.

Your argument might apply to ribosomes, however. The nucleus needs the ribosome for replication every bit as much as the ribosome needs the nucleus. And the ribosomal DNA is one of the very, very rare things that has hardly changed at all in the history of life -- humans and yeast have 75%-identical ribosomal DNA. Yeast!

https://en.wikipedia.org/wiki/Ribosomal_DNA


I think you miss the point. Every scrap of membrane in every cell in every known organism -- and cells have a lot of different membranes in them -- was once part of a membrane in the parent cell, or was one of several in a parent cell that became part of this or that daughter cell. We don't know of any organism that constructs any membrane for any purpose de novo.

In a very real sense, life may be defined as a membrane with apparatus to aid in growing it. So, in that sense, every evolutionary change is ultimately a refinement to enable growing more membrane. You, personally, are a lot of membranes all folded and wrapped up and carrying around a skeleton.


> At one point, it was mechanical automata. Then it was hydraulics. Then it was electrical circuits. Now it is computers.

Those analogies aren't equivalent. At each step, they got more accurate.

Computing analogies may be flattering to programmer's ego, but it doesn't change the fact that they're also qualitatively different than what came before, because computer science is effectively a branch of physics now.

First, computer science gives us a fully abstract mathematical framework that describes computation independent of the medium - the same CS applies to silicon-based computers, meat-based distributed systems (a.k.a. bureaucracies, societies), neural compute, biochemical compute, substance-less compute implemented using photon interference, whatever. Whatever something is, if we can identify a few patterns we care about, we can view it as a computer. And, as with every math-backed science, as long as the assumptions hold, the results of our calculations apply to real-world systems.

Secondly, researchers also found a fundamental connection between information and energy, computing and thermodynamics. That means we can take an abstract computer and establish bounds on its energy use - like how much energy it takes to flip a bit - conditioned on some environment (IIRC it's temperature-dependent, at least). This is the part where CS becomes effectively a branch of physics.
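That bit-flip bound (Landauer's principle) is easy to put numbers on. A minimal sketch, using the standard constants; the 300 K figure is just an assumed room-temperature environment:

```python
import math

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2)
# joules of heat into an environment at temperature T.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temp_kelvin: float) -> float:
    """Minimum energy (J) needed to erase one bit at the given temperature."""
    return K_B * temp_kelvin * math.log(2)

room = landauer_limit_joules(300.0)  # assumed room-temperature environment
cold = landauer_limit_joules(4.0)    # a much colder environment

print(f"{room:.3e} J per bit at 300 K")  # on the order of 3e-21 J
assert cold < room  # the bound shrinks with temperature, as noted above
```

Real hardware dissipates many orders of magnitude more than this per bit; the point is only that the lower bound exists and is set by physics, not by engineering.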

Science and mathematics made a qualitative jump in the last 100-200 years; we're no longer making crude analogies to other observed processes, but rather abstract models with known preconditions and a set framework for reasoning within them. This makes viewing biology through the lens of computing a good, useful thing. The important bit is to remember that "all models are wrong, some are useful". If we see that our computing analogies are failing us, perhaps it's time to rethink how we map them to the territory. Maybe the most useful computing systems are to be demarcated differently, or elsewhere.


Ah, putting it this way connects a few of my brain-cells together that weren't previously connected. ;-)

Thank you so much!


The code analogy is right in this aspect: the data has a quantized form. But knowing the storage format does not explain the higher emergent properties. Those are not comparable to programming languages, which are optimized for human understanding.


And neither you nor (almost) anyone else would have made this example (quantized, emergent properties, etc.) just 15 years ago. GP's point still stands.


15 years ago it was 2009; those concepts were widespread at the time.


I knew someone would nitpick the chosen time-range, missing the point. Let's say 25, or 30, or whatever.


> The philosopher G. H. Lewes coined the term "emergent" in 1875, distinguishing it from the merely "resultant"

https://en.wikipedia.org/wiki/Emergence

The word is 150 years old; the concept is older than we can determine.


The thing is, the code analogy still holds up pretty well if you ask me.

The linked article tells us how this code is self-generating and self-modifying, how the interpreter of this code is more complex than you may expect from a 4-word language, how this code reads from external inputs when it runs. Well OK, sounds very reasonable to me. No one promised the code will be easy.

The problem with the code analogy at our current level of understanding is not that it's untrue, and not that it's too simplistic, but on the contrary that it's too complex to be applied in practice. What we do now is akin to debugging a running Linux system by analyzing bit sequences in the kernel image. Surely it's all code and data; surely the bit sequences of the kernel image do determine the kernel's behavior within the environment of the hardware. This framework is correct but not powerful enough to produce good results.


> how the interpreter of this code is more complex than you may expect from a 4-word language

It's not a 4-word language. It's a 64-word language. That's been known for about as long as we've known what DNA was.


4 or 64, neither is a good description of how it works outside of protein building. Same as saying that C++ is a 100-word language because that's roughly the size of the printable ASCII alphabet: correct but not helpful


That's only correct in the same sense that it's "correct" to say DNA is a four-word language. It's not "correct in an unhelpful way", it's just incorrect.


Well so is the 64-word concept if you insist. It only applies to protein transcription which is 1-2% of human DNA, so it's wrong 98% of the time


I mean, neither really matters. TREE(3) shows us that computational complexity explodes beyond all reason even in word-limited simple systems with low connectivity.


Sorry, what's 64-word about DNA?


One codon is three bases in sequence. Each base has four possible values. So there are 64 codons.
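The arithmetic is easy to check by enumeration (the four-letter base alphabet is standard; the rest is just illustration):

```python
from itertools import product

BASES = "ACGT"  # the four DNA bases

# Every codon is an ordered triple of bases, so there are 4**3 of them.
codons = ["".join(triple) for triple in product(BASES, repeat=3)]

print(len(codons))      # 64
assert "ATG" in codons  # e.g. the well-known start codon
```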


Ah, I see your idea, but codons aren't DNA; they are a higher-level construct (only used in the context of transcribed and translated genes that code for proteins).

Also, several of those codons aren't mapped to amino acids (stop codons), and codons are re-used to code for the same amino acids, so it's really redundant compared to that.

DNA is a 2-bit code, with 4 symbols.


Words are defined by the parser, not by the alphabet. Letters are what's defined by the alphabet.

> Also, several of those codons aren't mapped to amino acides (stop codons)

Yes, codons are the minimal units of the genetic code, which is why they're called "codons".

> and the codons are re-used to code for the same amino acids, so it's really redundant

Synonymous codons aren't redundant. They take different amounts of time to process, which affects the shape of the resulting protein.


DNA codes for a lot more than just genes... you're just conflating several unrelated concepts. For example, regulatory regions don't care about codons at all.

If you're just saying "genes are defined in units of codons", no complaint. And the biophysics of synonymous codons are highly complex; in most cases, you can substitute them in and get the same exact protein function. There are lots of papers about corner cases in specific proteins, but it's still correct to say that synonymous codons code redundantly (this is a well-established fact of molecular biology).


DNA does have error correction.

It is a fundamental building block.

I'm not sure this article is tossing it out of the window.

Just that it can be modified.

Epigenetics: how the environment of the host changes the DNA.

That doesn't mean DNA is suddenly not valuable because we also found that it has capacity for modification. Still need something to modify.

Or, like code, it can be updated.


As an analogy, I like to think of it as us only having access to the compiled/minified/processed code; except it is massive, no libraries were used, and we can’t even fully describe the hardware.

Yeah… technically that code is controlling what happens (along with whatever hardware inputs exist). But the raw HEX is also near impossible to use in a productive way.


It tends to happen. Think of spacetime in physics. It's a mathematical structure that, barring some small issues, works pretty well to describe reality. That doesn't mean it is fundamental, but if you ask people nowadays many will believe it is a fundamental truth


> It’s interesting that throughout history we have tended to characterize the behavior of biological systems in terms of the dominant technology of the time.

"In the beginning was the Word", says the book.


This is tough for me. "Instead, we must let our ideas evolve as more discoveries are made in the coming decades." is a very appealing statement. My concern is that admitting something is complicated is different from saying it is not the case.

To that end, is there anyone that thinks genes are a simple blueprint for life? Seems far more accurate to say that they are part of the blueprint for life, and that even with that, we have not defined the execution environment for how that blueprint is carried out.

Do we present an even more simplified model to students? Especially young students? Absolutely. As we do to laymen. But things being markedly more complicated does not mean that models are bad.


From meeting identical twins in my lifetime, yes I’d say they are quite a blueprint for an organism. A blueprint doesn’t mean things are exactly the same, yes there are environmental factors, but there is a lot going on there that is the same. Identical twin studies done with twins separated at birth and experiencing different environments will show you that.

https://www.gu.se/en/gnc/what-have-twin-studies-taught-us-ab...


Depending on what exactly you are studying, there is a much more important environment, that has dramatically different outcomes even for the same genes: the uterus you develop in. If you implanted two genetically identical fertilized eggs into two different women, you'd see a much larger difference, since the mother's body has a significant active role in controlling gene expression, one that's often forgotten about in such discussions.


And even then, it is far from clear how much of the similarity comes from the matched genes, and how much from the original egg cells having fissioned from a common ancestor, independently of the contents of the nucleus. We know the cell itself actively seeks a goal, on its own.

As that first cell divides again and again to generate the trillions of cells in your body, mutations happen in the hundreds and thousands. Even though they started with all the same genes, they certainly don't stay that way as you grow.


> admitting that something is complicated is different from saying it is not the case.

I agree. I do think "too many people take convenient science metaphors as literal" is a valid concern, especially when it comes to pedagogy, but the situation is not so bad that everyone is wrong about everything. No scientist goes around thinking molecules are made of little colored plastic balls.


This always confused me as well. It’s all nature in the end - your genes will react to a certain environment in a certain way. Of course we have some control over environment so it’s worth making the nature vs nurture distinction, but how you react to an environment is predetermined. Not that we can easily predict any of this given the staggering complexity.

Additionally, what’s the deal with the almost doublespeak in the article? The author plainly states that genes aren’t code, but then goes on to say they’re just more complex code.


I agree with most of the article, but I'll argue that there are some genes that do have a 1:1 relationship with diseases. I work investigating them.

There's a spectrum of genetic variants. Most (like 99% might be an underestimate) are completely benign and have no effect. Some affect traits that you have, like your height, intelligence, or susceptibility to schizophrenia or diabetes (lots contribute to these), or eye colour (fewer contribute to this). But there are also mutations that single-handedly will cause a disastrous child-onset disease.

One of the first things I learnt when starting studying genetics more than 20 years ago was that living things will break almost every rule you make up for them. So we say that genes are to make proteins - the information flows from DNA to RNA to protein. That rule gets broken in various ways - retroviruses flow the information back from RNA to DNA, some genes never get converted to protein but the RNA they produce is active instead.

So the message is absolutely that any time we try to say something general about genetics, it will be an over-simplification. And this applies to the article too.


I have given a few genetics talks this past year to technical groups and I start the talk by saying

"This is no more difficult to understand than any event based system" [insert image of the Netflix micro services architecture]

This gets a laugh.

DNA is like an executable file written in assembly. Most of the time it makes a bunch of pure functions (but not always) and it is self modifying, lol. For most of the work I do I can presume that gene X produces a "pure function"

At the next level up you have a bunch of functions that can take some arguments and spits out some stuff. The question is does that function work etc?

At the next level up you have systems thinking. If I have 100 lambda servers that are all executing from a queue and then I have 20 lambda servers that are all reading from those 100 you can guess where the data is going to pile up.
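That pile-up can be sketched with a toy back-pressure model; every number here is made up purely for illustration:

```python
# Toy model of the 100-producer / 20-consumer setup described above.
# If each server processes items at the same per-server rate, the queue
# between the two tiers grows by the rate difference every tick.
PRODUCERS, CONSUMERS = 100, 20
RATE_PER_SERVER = 10  # items per server per tick (hypothetical)

backlog = 0
for tick in range(60):
    backlog += PRODUCERS * RATE_PER_SERVER                # enqueued this tick
    backlog -= min(backlog, CONSUMERS * RATE_PER_SERVER)  # drained this tick

print(backlog)  # grows by 800 per tick: 48000 after 60 ticks
```

In a real system autoscaling or back-pressure would eventually kick in; the point is just that you can predict where data accumulates from the topology alone.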

I am radically over simplifying, but hopefully you see the parallels to binary, code, application logic

In our "designed" software you can almost never go and change one line to modify the behavior of a program. Instead you have dozens if not millions of functions that all interact, depending on the lens through which you want to look. Usually the code that matters most is contained in one "area", but that isn't always the case. Is it any surprise that biology is even messier?

The article mentions the 300 genes related to schizophrenia. This is like me mentioning that my random Go service also uses a bunch of Go standard library code when it is executing.

Debugging stuff like this is what our careers are made of.

For a long time scientists were looking for a single gay gene. Poking around once I knew what to look for, it only took around 6 months to figure out the whole LGBT picture, but in nearly all cases it involves more than just one gene. You simply have to apply debugging logic at each layer to figure it out. It is handy to understand all the layers, but depending on the problem it's not always required.


If we're allowed to use computing analogies here, would it be fair to say the old model is more like how we view a standard CPU and the new model is more like an FPGA? The old model viewed genes, in a more classic reductionist approach, as CPUs: arranged, static logic waiting for input from the environment and impressing on the environment in a deterministic way. In the newer analogy, genes act as an FPGA: an array of gates with the potential to adapt to input imposed upon them, with a bi-directional synchronicity of environment impressing on genes and genes in turn impressing on environment.

It seems as fields mature we move further away from classic reductionism and encompass more a holistic approach, a path gravitating towards objectivity which I find interesting from a philosophical perspective.


For those old enough to have programmed on punch cards, you could say that is closer. We load the program into RAM, and then the env can and often does modify the program on the fly, and we do dumb things like use the current line number to save a byte because dividing by 17 is "good enough". Most cards are functional and many cards do double or triple duty to save space, so a bug in one card can subtly break three wildly different spots. Analogies break down of course, as in biology it all executes at the same time, which is why the microservices or lambdas that can be scaled up or down have been my go-to. And you can even inject external events into your lambda systems.

And for anyone familiar with lisp, code is data, data is code. For modern coders you can think that it has a billion feature flags you can also flip on and off rather than rewriting the code on the fly


Wow - can you elaborate on your process to “debug” these different layers? What comprises your personal or team’s feedback loop, given that 6 months is short? (I suspect it’s a lot of sequencing?) Are there good comparisons to be made with getting acquainted with a new codebase/technical system? Are there any particular computational tools involved? Pardon all the questions, this stuff is fascinating!


To be clear, this is/was a hobby project. Getting access to DNA files is required, yes, but sequencing wasn't really the limiting factor. It started with simply poking around my own DNA file and then some friends', many of which were only from 23andMe. Most of the time was spent reading countless papers, making hypotheses, trying to invalidate them, and iterating. Constantly seeking out new ways to look at the problem and going from there. Every time I got a new DNA file from someone I could see how their DNA fit into the current hypothesis. Because every DNA file was different it was a great way to test them. If anything it made my job easier. I wasn't looking for a single SNP, but common patterns.

Sometimes this has involved using Nebula and their whole-genome sequencing DNA test to get much more accurate data, but more often than not the cheap DNA tests most people do were good enough.

As for actual programs, I did write some quick and dirty scripts to scan DNA files for specific SNPs, but mostly I would just read them, as the parts I needed were not that long.

There is a fair amount of phenotype data to start with. What ultimately started this was knowing 1) that there are a number of conditions seen in statistically weird numbers in the LGBT population, and 2) that sex hormone levels in the LGBT population are not exactly what you would expect.

My go-to fun question when talking with someone in the LGBT community is whether they have hypermobility. A good percentage do. In one specific example, those with classical-like EDS will have 21-OHD and thus POTS and elevated 17-OHP, backdoor DHT production (aka PCOS for women), etc.

The real question I have been pondering is what exactly to do with this, as this is just a fun puzzle, not my job; I don't work for any school, etc.


I don't get it. If you can solve for LGBT genes that easily, then why isn't this in the news? Surely academic scientists would have tried this if all you needed was some scripting to find patterns?


I've been interviewed by a medical news journal, and this has all been done in public over the last year, so there was never a "release date" or anything. Honestly, at the start I was just another person with a guess.

There are some scary implications such as in some cases we have had sexuality and gender changes once we knew how to “inject into the system”

I guess when you get down to it, it wasn't actually that noteworthy by itself, as most cases are simply minor versions of already well-documented conditions. It is only when you combine them that they add up.

And lastly, given that I have not paid to get it peer reviewed and published formally, it isn't news yet. Again, no school affiliation. I'm just now mostly helping treat a bunch of those common conditions I mentioned.


There has been some research, mostly into gay men. But honestly the transgender data set is much richer. At the end of the day it is a minority that is being politicized, so it's not exactly being investigated; but once you figure it out, it is like shooting fish in a barrel, there is so much easy research. Before this it was (simplifying, but not by much) brain scans for the most part. It was mostly unknown.


I know of Sapolsky saying, what, a decade ago that LGBT brain structures are, like, flipped wrt. heterosexual people. But I thought the scientific consensus was that there is no easy way to find a gay gene (or genes), so your claim of finding such low-hanging fruit seems to fly in the face of that. I can already imagine academic scientists being ready to dismiss your work outright.


Consensus is that there is no single "flip this and you're gay" gene, but we've known for decades that there is a genetic component because of twin studies. That fits right in with what this person says they've found.


If you know one that wants to talk I am happy to. In the meantime it is being put to practical use today.


Why didn't the medical news journal hook you up with a professor? They could've taken a look at your work. Like, how do you know your scripts aren't just doing pseudoscience, based on a superficial understanding of all those papers, especially if this is just a hobby project? There could be blind spots.


I am already working with a doctor and have talked with people in academia. They find it neat, but they are not going to jump projects; they already have their own areas of study that they are working to publish something in, not this.

There absolutely could be blind spots. I've been iterating on it all year; each time the tweaks are smaller, but the core idea has not changed, it has simply accumulated more and more evidence.


> what exactly do I do with this

Whatever you do, maybe do it anonymously?

It really sounds like it could badly trigger many people who will viciously attack others, actively attempt to destroy their lives, etc.

Be careful? :)


if nothing else post about it.

everyone loves a good story.


Exactly this.

Everyone arguing that DNA isn't code, that it's just a reductive metaphor, and that the computer analogy is wrong, doesn't understand computers/code, or DNA.

There is such a 1:1 correspondence that the 'metaphor' begins to look more 'real'.


Amazing perspective. Are any of the talks online?


Sorry, but this reads like complete gibberish to me. Could you write this again with fewer paragraph breaks?


Isn't this a matter of semantics? A blueprint for a computer could be stored on a computer, and it's obvious that the blueprint doesn't contain everything needed to build a computer.


This is a problem with using metaphors. Blueprints very accurately describe the role of genes by the way the article here describes them. I think they don’t have a precise understanding of what blueprints are, which means now semantics of the metaphor are critiqued rather than the actual content.


It is always surprising to me when people on a programming-related forum, a discipline that in a sense is all about viewing the world through different levels of abstraction, get defensive when exposed to the fact that the real world also has them. Be it sociology, history, or even 'hard science', simplifying the world is inherent to comprehending it.

Now, of course, there are still 'good' abstractions and bad ones. But even the best ones, by definition, hide some details away.

As for the article itself - I do agree that it is mostly fluff, and not something revolutionary. But there's nothing to criticize it for, either.


The "real world"?


As in: the physical one, where the perceived abstractions aren't completely arbitrary, left solely to the discretion of the programmer, but come from some observable fact.


The story is something like this: the genes act out in an environment. They encode information which demonstrably makes the most important and obvious differences between, say, a fruit fly and a human. However, they don't necessarily encode all of the cellular environment in which they act (though they do direct its activity and proliferation). That environment is part and parcel of life as much as the genes.


The cellular structure is maintained and propagated by the DNA. If you want to say that the very first cell's structures (the zygote's) are inherited from the mother and not produced by the individual's genes but by the mother's genes, I suppose you could define that first cell's organelles as a special kind of gene 'inherited' from the mother, but it certainly is not what people mean by the environment in discussions about nurture vs nature.


That's why I said "cellular environment". The literal biological environment in which genes act. Not the household environment in which a human baby finds itself; that's indeed something else. But I'm saying that genes are not the complete blueprint of life even in this "low level" sense. (Never mind that gene expression in the developed individual is subject to external factors, too.)


The article's title and the words of Denis Noble are quite incendiary and come across as misinformed. Are the non-quoted ideas attributable to Ball accurate? I don't know.

The actual quotes from Ball, wherein he laments the comparison of cells to computers, come across as ill-informed. Computers operate on programs. If a program is dependent on complex state, includes random factors, error-correcting codes, etc., how is it incompatible with a description of the cell?

Yes, cells are immensely complex. But that does not preclude analogizing to a computer.

This critique strikes me as refusing to admit the nuance of another's argument while demanding others see the "correct" nuance in one's own.


I think Scott Alexander's [recent post on genetic causes of schizophrenia](https://www.astralcodexten.com/p/its-fair-to-describe-schizo...) is an important piece here.

Basic summary is: just because there are environmental / other factors doesn't mean it's incorrect to say that genes cause schizophrenia. In other words, finding additional causal variables doesn't negate the causal impact of existing causal variables.


Saying genes are not the blueprint for life is, for me, like saying "physics cannot predict our life". It cannot, but everybody knows physics is the foundation of our universe. However, our world is way too complex to use the laws of physics to predict all aspects of it. Stochasticity is the intrinsic nature of every complex system.


That is exactly the wrong model. Genes are only a part of how your body is formed, and two identical genotypes will not produce identical phenotypes if the environment, especially the very, very early environment of the egg/uterus, is different. That's why we can't grow babies in vats, for example, and are having massive issues even trying to grow tissues in vitro (and growing whole organs is not even a dream for now).

Consider also that your neurons, your red blood cells, your muscle cells, your liver cells, your fat cells etc all have the exact same genes. And yet, they are vastly different between each other, and you'll never see a fat cell divide into a red blood cell and a neuron, even though they are "built of the same blueprints".


> Genes are only a part of how your body is formed

And blueprints are only a part of how a house is formed. Two different teams of workers can build very different houses from the same blueprint. So what you say here sounds just like a blueprint, I don't see why that is wrong.

Edit:

> Consider also that your neurons, your red blood cells, your muscle cells, your liver cells, your fat cells etc all have the exact same genes. And yet, they are vastly different between each other, and you'll never see a fat cell divide into a red blood cell and a neuron, even though they are "built of the same blueprints".

Yes, and we programmers tend to deploy the same code to many different servers and tell some of them to be databases, others to be frontends, etc. It is just simpler and more robust to share code and then flip a few settings on startup to change what the server is.
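A minimal sketch of that deployment pattern (the role names and the `SERVER_ROLE` variable are invented for illustration):

```python
import os

# One artifact deployed everywhere; a startup setting picks the behavior.
def start_server(role: str) -> str:
    behaviors = {
        "database": "serving queries",
        "frontend": "serving pages",
        "worker": "draining the job queue",
    }
    return behaviors.get(role, "idle")

# Each deployment differs only in one environment variable (hypothetical name).
role = os.environ.get("SERVER_ROLE", "worker")
print(f"this instance is {role}: {start_server(role)}")
```

Same "genome" on every instance, different expression per instance, which is roughly the analogy being made.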

A single blueprint describing many things that are working together and you can build any of those things is very common.


My understanding is that "outside of the genes information" is not just the color or the shape of the house, it is _essential_ instruction on how to build the house. Without it, you will not get something that qualifies as "house".

In the analogy, it is not 2 teams that build houses that are very different, it is two teams that use the same blueprint and one ends up with a house, and the other one ends up with a car. In this case, it is then correct that the "blueprint" is in fact not a blueprint.

Or another way of seeing it, you have the blueprint of the house, then you rip it apart in small pieces. Some of these pieces are the genes, other of these pieces are "out of the genes", such that if you just have the genes pieces, you just don't have enough information to build something that qualifies as a house. (funnily enough, you can say that the house builder can "fill the gaps" with his own knowledge, which would be a good example of "out of the genes" instructions)

As for your software analogy, again, some software has flags to switch between database and frontend roles. But the point of the article is that genes demonstrably don't correspond to that: genes in themselves are not enough to make blood cells by just turning a flag on or off, the same way a piece of wood is not a blueprint of both a chair and a door, with the carpenter as a simple flag that turns the piece of wood into a chair or a door. In this software analogy, it's as if you have one script file that contains one basic function that is neither a database nor a frontend. If you combine this script with some other software pieces, you can have a database; if you combine it with other pieces, you can have a frontend.


We lack good metaphors here.

Like genes, the following collections of information are plans, as other entities (workers, compilers, cells, etc) can reliably use them to produce larger, more complex objects:

- Blueprint

- Instruction manual

- Recipe

- Code

Unlike genes, they’re all human-designed. The top-down forcing function - the “back” in the feedback loop which shapes them - is human artifice.

I’ll leave it as an “exercise to the reader” to consider the differences in how their environment and execution apparatuses affect the resulting objects.

Genes are like blueprints, but obviously not the same. For one, they haven’t passed the county permitting process! And living organisms are like buildings because you can point to the plan behind them. But I’ll be darned if a house has ever had to struggle for survival.


I still find the analogy mostly appropriate. Perhaps, it is not the 'high level source code' that is strictly typed as was perhaps believed by some, but I think it can still act as analogous to the runtime memory state of the computer. The genes are like memory mapped services which can operate on one another and change their state, and this couples with interrupts from the external environment that force changes to the state and memory, but the runtime code adapts and still has access to in-memory functions to call when appropriate, but every so often a buffer overflow can occur or be enticed and this causes other issues.


Yea, you have to keep in mind that while it's sort-of code, it's code written by random errors and selection. Obviously the code quality isn't going to be awesome :P


The thought which had occurred to me some time back is that genes are less a blueprint than a bootloader.


Or the early version of a bootstrapped compiler


That's another good analogy, yes.


I wonder if it's still accurate to say that genes are the blueprint for proteins, and then build on that concept to the extent of what's known.


As someone who works with actual blueprints, I'll tell you that even those don't guarantee what the actual asset looks like.


Of course "cells are computers and genes are their code" is an oversimplification, but it's not a terrible analogy. It seems to fail the worst if either

- you stretch the analogy too far, as is the case with all analogies, or

- you don't understand much about computers or code.

A given set of code can produce different results when compiled by different compilers, or on a different computer, or for a different computer, and a given executable can produce different results depending on whether or not libraries or peripherals are available, etc.

No one will ever be able to make a one-sentence analogy that will satisfy every scientist, but in spite of that many analogies are incredibly useful. If someone doesn't like the "cells are computers and genes are their code" analogy, I'm all ears for a better one.


Well, cloning does work. Remember Dolly, the sheep. Now there are two companies routinely turning out clones of polo ponies.[1] They all look the same and perform about the same.

[1] https://www.youtube.com/watch?v=vTmVpzAnpxo


AstralCodex discussed this recently. I am not a fanboy, but it’s one of his better articles:

https://www.astralcodexten.com/p/its-fair-to-describe-schizo...

I just wish he’d apply the same reasoning to his believes on the genetics of intelligence.


It's possible he does apply the reasoning, whatever you mean by that, but just does not write about it publicly.


I've drifted in and out of reading him over the years. What's a representative stance of his on the genetics of intelligence?


Many biologists will say that studying evolution and development is the key to understanding how phenotypes arise. I agree. Watching the development of an organism- say, a tardigrade egg that grows over a few days and then hatches- is remarkably edifying.

You can see individual cells growing and moving around, and then look at another tardigrade egg and see exactly the same cells growing and moving around to the same exact places (this is a feature called eutely- they have a predetermined lineage of cells all arising in the same tree structure from the same original egg cell, which (in many tardigrade species) is in fact a clone of its mother- no fathers required, a mode of reproduction known as parthenogenesis).

I think many people would see that, along with other observations, and easily come to the conclusion that specific behaviors are encoded by individual genes, or that genes act like an architectural blueprint, exactly specifying either intermediate or final states.

Instead, in each of those cells is a blob of jelly filled with the genome, which is decorated with all sorts of proteins that are flying around, binding to various specific sites, activating and deactivating other sites, which then get turned into RNA and ultimately specific proteins. These proteins execute a plan encoded in the genome, but they do so probabilistically, with noise immunity, following physical behaviors that can be understood rationally (although in most cases, the number of actual variables is far too large to work with). And that encoding is extremely complex, more like a collection of weakly linked PDEs (a lot of weakly linked PDEs).

There is massive feedback, both positive and negative, that contributes to automatic regulation of components so that the plan proceeds normally. Many of these regulations lead to extremely non-linear, complex behaviors. Yet, for all this complexity, fairly straightforward sequences similar to the tardigrade's happen in nearly all life. A sphere forms from an egg. The egg splits into two cells, then four, then many, retaining the spherical shape. At some point one of the split cells develops a polarity- one side grows more actively than the other. This leads to a body development plan (https://en.wikipedia.org/wiki/Blastulation) that self-generates with mostly local interactions (i.e., there's no central controlling cell, it's more that the cells are just pushing against each other and the result is the right shape).
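The negative-feedback part can be sketched in a few lines (my own toy model, not from the comment): a protein that represses its own production self-stabilizes at the same level regardless of where it starts.

```python
def simulate(p0=0.0, steps=300, dt=0.1, k_max=1.0, K=0.5, decay=0.2):
    # Euler integration of dp/dt = production - decay, where production
    # falls as the protein level rises (negative autoregulation).
    p = p0
    for _ in range(steps):
        production = k_max * K / (K + p)  # more protein -> less production
        p += dt * (production - decay * p)
    return p

# Very different starting levels settle to the same steady state:
print(abs(simulate(0.0) - simulate(10.0)) < 0.01)  # True
```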

Understanding how genotypes lead to phenotypes has been a massive journey and I have had to unlearn much of what I was originally told, as new data has superseded the old. That Mendelian model of peas with discrete characteristics that segregate on different chromosomes is useful, and does show up in biology, but from what I can tell, it's just an easy, special case that we saw early, and then geneticists overfit new data to that model.

When viewed through evolution as well as development- we start to see how complex phenotypes begin, then evolve to become far more complex. Early eyes and wings had utility, similar to modern eyes, but far less capable. Through mutation and selection, the organisms whose eyes worked slightly better were more likely to generate offspring that inherited those properties, leading to even more radiation (into many different types of organisms that all share similar eye properties).

I used to think that by this time in my career (I'm 51), we'd have been able to address a simple question I asked when I was 18: why is my nose this funny shape that doesn't look like other people's noses? What genes "encode" the "blueprint"? And to be honest, we're still really far from answering questions like that, but through a combination of data collection and machine learning, scientists actually are beginning to understand the complex process that leads to funny nose shapes.

For those who made it this far, here's your prize. A video of a tardigrade being born while its two younger siblings continue to prepare for life. https://www.youtube.com/watch?v=snUQTOCHito


> we'd have been able to address a simple question I asked when I was 18: why is my nose this funny shape that doesn't look like other people's noses? What genes "encode" the "blueprint"? And to be honest, we're still really far from answering questions like that, but through a combination of data collection and machine learning, scientists actually are beginning to understand the complex process that leads to funny nose shapes.

IIRC there was some research that could predict facial shapes from genomes (but it's not a popular direction of research since it touches various taboo topics too closely for many people's comfort), and notably it does not require understanding complex process of how e.g. noses are formed, just data about sufficiently many people to detect correlation patterns.


Yes, it's been a painful lesson for me to learn: you can often build a good-enough approximation of the underlying physics to make good-enough predictions, even without modelling all the molecular details directly, as long as you have enough data, good algorithms, and fast computers.

https://www.annualreviews.org/doi/full/10.1146/annurev-genom... is a review of research in this area. I don't think it's particularly controversial because, realistically, the underlying data supports the hypothesis that facial features are heritable, and the association studies then find plausible candidate multivariate genomic features that predict them accurately. I think that's far enough from controversial "race science" that it's hard for people to make reasonable criticisms of this research.


> For example, mutations in almost 300 genes have been identified as indicating a risk that a person will develop schizophrenia.

> It’s therefore a huge oversimplification, notes Ball, to say that genes cause this trait or that disease.

That sounds as though the genes do cause the disease, at least sometimes.

This article - whose author seems to have mind-melded with the book itself, providing no objectivity I could discern - seems to be arguing against a straw man. No one thinks that it's only genes that cause disease. If I catch covid, it wasn't my genes. If I get type 2 diabetes, it (probably) wasn't my genes.


Blueprint evokes this picture of a house, or car, or plane... which we really do not have: we don't have the ability to change a bit and add an extra pair of arms, for example.

To my understanding, genes are more like a massive collection of recipes, whose products interact in a myriad of ways, eventually producing something quite consistent. Something being wrong can, from time to time, result in some condition in what is produced.


There are Hox genes though, which pretty much lay out the pattern of tissue (best seen, though far from exclusively, in the example of butterfly wings).

Of course there is a whole mess of other genes and functions, so a more correct statement would be: genes don't __only__ represent blueprints for proteins and tissue.


In the same vein I would like to challenge the tree model of life, of species, and of any genetic variation.

The reality is much closer to trunks that merge and branch, similar to git.

This applies to the origin of life, to differences between species, differences between populations, also to human language classification. Also to the origin of humans.

The tree model I would “consider harmful”. It affects society and policy in a strong way.


I understand why they went with the combative “admit” phrasing in the title. Subtextually, this is a shot fired against unsavory opinions that claim a basis in strict genetic determinism.

However, that strict interpretation is an extreme fringe of belief. The interaction between “nature” and “nurture” is a fully mainstream concept, especially amongst scientists, and especially amongst biologists. What is this “New Biology” of which the book speaks? The article is arguing against something of a straw man.

Worse, this manner of challenging the notion of strict genetic determinism is fuel to the terminally polarized who will snap to the equal and opposite error of assuming genes don’t matter at all - an idea which underwrites a bunch of profoundly unscientific woo.


Well said. "The picture is more complex than this strawman" is being framed as "we need a new paradigm!".

I'm curious what motivates this. It's not like progress in genetic research and technology has stalled (see: crispr, IVF screening, GMOs, etc).

And I don't see how moving from "genetic determinism" to "environmental determinism" is any less depressing / defeatist. World politics suggests we have about as little control over our environment as over our genetics.


>Genes, proteins and processes such as evolution don’t have goals, but a person certainly does. So, too, do plants and bacteria, on more-simple levels

If we're expanding the scope of what we consider to have agency and goal directedness why stop there? Evolution is analogous to a hill climbing algorithm which also has the same properties.
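The analogy is easy to make concrete (a minimal sketch of my own, not from the comment): hill climbing, like selection, keeps a random change only when it improves "fitness", with no explicit goal anywhere in the loop.

```python
import random

def hill_climb(fitness, x, steps=1000, step_size=0.1):
    # Propose small random "mutations"; keep only the beneficial ones.
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

random.seed(0)
# Fitness peaks at x = 3.0; the walk ends up near it without "aiming" there.
peak = hill_climb(lambda x: -(x - 3.0) ** 2, x=0.0)
print(abs(peak - 3.0) < 0.2)  # True
```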


I think people who complain about these analogies have themselves too simplistic ideas of how computers and locks work :P


Here is a video of a discussion between Richard Dawkins and Denis Noble (who wrote the linked article, and who was Dawkins' doctoral examiner): https://www.youtube.com/watch?v=uLC0akD1WOE


I recall a simplification that said that, if we used computer code as an analogy, genes would be similar to functions, epigenetics would be similar to the conditions governing which functions get called and when, and the environment would be the arguments and events with which the program gets executed.
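That simplification could be rendered as a toy program (all names here are mine, purely illustrative): genes are functions, epigenetic marks decide which get called, and the environment supplies the arguments.

```python
def gene_a(env):  # a "gene": fixed machinery that reads the environment
    return f"protein A tuned for {env['temperature']}C"

def gene_b(env):
    return f"protein B tuned for {env['nutrients']}"

def cell(epigenetic_marks, env):
    # "Epigenetics": conditions controlling which genes are expressed.
    genome = {"A": gene_a, "B": gene_b}
    return [genome[g](env) for g in genome if epigenetic_marks.get(g) == "on"]

# "Environment": the arguments the program runs with.
print(cell({"A": "on", "B": "off"}, {"temperature": 37, "nutrients": "glucose"}))
# ['protein A tuned for 37C']
```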

