
The Omnigenic Model as Metaphor for Life - infruset
https://slatestarcodex.com/2018/09/13/the-omnigenic-model-as-metaphor-for-life/
======
maxander
What changed between the few-gene era and the polygene era is that we got
sequencers that could read the genomes of an entire experimental cohort at
reasonable cost. Likewise, when we have the capability to do single-cell
profiling of the epigenetic signature of an array of relevant cell types, for
every member of your cohort, there’s decent odds biologists will think it
ludicrous that we hoped to get meaningful results out of lone, unannotated,
out-of-context genomes.

 _EDIT_ : I would specify that I'm not saying that experimental biology
"doesn't work" or doesn't get meaningful results before the bigger new
technology arrives. I think the Slate Star Codex article overstates the
helplessness of single-gene theories, which did explain a bunch of diseases
and simple attributes, and had significant medical impacts in understanding
things like cancer and chemotherapy effectiveness. It just failed to
accomplish a set of other things that people wanted, like explaining
intelligence or height. Each new advance (cheap genome sequencing, epigenetic
readouts, hugely longitudinal metabolomics, molecule-level microscopy, and the
hundred-plus advances in this direction we haven't even conceived of yet)
expands the territory of what we can address through molecular biology.

Scientists are often _really optimistic_ about whether topics of interest lie
inside or outside this territory at a given moment, which I think has more to
do with the incentives of grant writing than anything else.

~~~
eli_gottlieb
Yeah, the hidden caveat is that in a lot of cases, science "suddenly"
progresses when scientists get access to better equipment or get trained in
new methods, and can suddenly admit to their funders that the things they were
doing before sucked.

~~~
toasterlovin
I dunno. There's a lot to be said for squeezing as much as you can out of
whatever technology you currently possess. The current iteration of sequencing
will surely be looked back on as woefully inadequate, but we're learning a ton
with it now.

------
api
I learned about this in bio back in the 2000s. As I studied how genomes
actually worked it quickly became apparent that genetic systems are absolutely
nothing like human engineered systems. Every part simultaneously interacts
with every other part in real time and determines everything in parallel.

Take the genome for example. I really think the notion of gene sequences
hobbles our understanding. The genome is not a sequence. It can be _sequenced_
, but that's not what it actually is. A genome is a molecule. Every part of
the surface of that molecule is constantly interacting with other molecules.
In programming terminology it's like a program where every instruction
executes in parallel, always, and in real time. That means that trying to read
the genome sequentially like a program or a book is missing the point
entirely. We're holding it wrong.

I truly think we are a long way from being able to really "grok" these
systems. It took us thousands of years to develop the math, logic, and science
to understand complex systems with discrete components and discrete logic. In
many ways the digital computer is the apex product of our understanding of
discrete systems and discrete logic.

Now we get to figure out concurrent systems and concurrent n^n combinatorial
"logic." It might take a lot less than thousands of years because there are
more of us and we have a lot more knowledge to work with, but it's not going
to be overnight.

~~~
goblekitepe
It's not going to be the sort of thing we "grok", at least in the traditional
sense. Human minds can fully understand relatively simple discrete logical
steps in a system but massively parallel interactions like the genome are
fundamentally beyond our ability to "grok".

In order to make sense of and manipulate things like genetics we will need to
develop machines that can do those things for us. While that's unsatisfying
because we generally like the feeling of fully understanding things, such
machines will still yield progress and results, which is all we can really
hope for here.

~~~
api
I don't think a human in 500 BC had the intellectual tools to understand a
modern CPU or computer program. An electronic circuit would have been
absolutely inscrutable to them. A time traveler might be able to talk them
through it, but a time traveler would have the benefit of future intellectual
tools not available at the time. It's one thing to teach what is already
understood (to you) and another to comprehend for the very first time in
history.

I've thought for many years that there are new intellectual tools waiting to
be discovered here that will be as big as arithmetic, calculus, or logic.
There was a time when humanity had no idea what mathematics -- the _whole
field_ -- was, and today there are probably whole analogues to mathematics
waiting to be discovered.

Unfortunately we are still in the phase of trying to attack this problem with
old ways of thinking. We probably won't even try until we finally come to
terms with the fact that the tools we have at our disposal right now do not
work to truly understand the genome. This will take a while as humans become
emotionally attached to their tools and cling to them. Try debating a
programmer on OSes, languages, or editors to see a simple example. :)

Bonus is that once we understand the genome we'll probably understand a lot of
other unknown unknowns we didn't even realize we didn't understand. Maybe this
is why physics seems stuck. Maybe the cognitive tools we have right now are
simply not up to the task of understanding the whole thing.

Edit:

I actually think Stephen Wolfram's doorstop _A New Kind of Science_ was
groping in this direction. The book was problematic because of Wolfram's
almost comical narcissism (Wolfram sort of tries to take credit for a lot of
things he didn't invent), and the techniques it discusses don't seem to have
delivered much fruit in and of themselves. Nevertheless at the "meta" level
the notion of trying to invent fundamentally new intellectual tools is
absolutely what we should be doing. We will of course fail a lot, but that's
what happens when you try to do something new.

~~~
ajuc
> I don't think a human in 500 BC had the intellectual tools to understand a
> modern CPU or computer program. An electronic circuit would have been
> absolutely inscrutable to them.

I'm pretty sure they could learn to write programs. They had algorithms.

~~~
api
They could if it were explained to them. I doubt they could figure out a more
complex sort of algorithm if they were given the artifact with no explanation.
Doubly so if the algorithm involved things like calculus and modern number
theory.

We do not have aliens or time travelers to walk us through genomes and fill in
the missing pieces of our understanding.

~~~
lurquer
If you are referring to the physical CPU itself, then you are correct, as they
would lack the technology to even see the circuit paths (much less measure
current).

But, if you are suggesting a smart human from 500 B.C. couldn't grasp 'The Art
of Programming,' I'd respectfully disagree. Logic and reasoning haven't
changed in recorded history. The ancients were no more or less intelligent
than the moderns. Whenever I'm tempted to think otherwise, I sit down with
Euclid's _Elements_ and see how far into it I can get before I hit the "WTF...
How did he figure THAT out!?" moment. An even better cure -- although more
recent -- is to see how far you can get through Newton's _Principia_.

~~~
api
I see your point but I still don't agree.

Everything seems obvious and easy in hindsight because we are viewing it with
those intellectual tools deeply embedded into our understanding. They are all
over our culture and we pick up bits of them as children through osmosis even
before we study them formally.

I think getting an ancient Greek or Roman intellectual to understand a
large-integer factoring algorithm, a proof-of-work blockchain, or an OS kernel
would be pretty painful. It would take a lot of tutoring to first teach a lot
of things that were not understood in that time at all.

You can sometimes see this today when you see older people in rapidly
developing nations trying to learn advanced concepts. They can do it but it
takes a while.

My point is that all this assumes a tutor who knows and can explain. For
levels of understanding not yet reached by any human, there is no tutor to
teach us how to think about the problem.

~~~
lurquer
Two of your three examples consist of algorithms that take a large number of
steps. Without an ability to perform a large number of steps in a short amount
of time, there wouldn't be the need to think about such things. That is, your
tutoring of the ancient would consist of explaining an algorithm that might
take 100,000 calculations. He would grasp it, but shrug and say "so what? It
would take years to perform that algorithms. Let me show you a different way
that results in a damn good approximation that requires nothing more than a
compass, straightedge, stylus and a piece of string."

In short, you are confusing reasoning ability with algorithms designed for a
particular form of technology.

Take neural nets. Would someone from the 1970s understand the benefits of a
convolutional net vs. a simpler form? Perhaps. But, without the technology to
perform millions of training calculations, the lack of comprehension would
come from your pupil wondering what the point would be of trying to understand
an algorithm that, with his technology, could never be demonstrated, used, or
tested.

------
westoncb
> The most recent estimate for how many genes are involved in complex traits
> like height or intelligence is approximately “all of them” – by the latest
> count, about twenty thousand. From this side of the veil, it all seems so
> obvious. It’s hard to remember back a mere twenty or thirty years ago, when
> people earnestly awaited “the gene for depression”.

I wonder how much that's just a technicality, in the same way you could say
the inverse square law for gravitation is wrong because really every massive
particle has _some_ influence on every other particle, etc.

So maybe it's the case that every gene is involved in every trait, but maybe
there's a handful that account for 99% of what we care about in that trait?
(Then again, I can imagine for something like intelligence that most of the
genome is really involved—height though?)

EDIT: I had some remarks about the term 'gene' here that were incorrect and
turned into a useless diversion.

~~~
lacker
It's more like, early on we found some traits like "blood type" that really
did correspond closely to a small number of genes. So we theorized they all
worked like that. Now, we know that however height works, it doesn't work like
blue eyes do.

------
tlb
Research into the question, "which programming languages are more productive?"
certainly suffers from looking at single causes in a massively polycausal
system.

If you assume the effects of individual causes combine linearly, you can still
look at one cause at a time. But programming languages interact with the
problem domain, library availability, team preference and experience in non-
linear ways.

------
yters
Seems premature to claim "it works" based on predictions. That was what got us
in trouble last time.

~~~
kerbalspacepro
What do you mean by "it works"? Polygenic scores?

~~~
smallnamespace
I'm doubtful that polygenic scores work, in the sense that we all acknowledge
that complex traits like intelligence are interactions between genetics and
environment.

You can always decompose a function of two variables into three parts (with
some hand-waving for notation):

f(a, b) = f(a) + f(b) + interaction term

where E(interaction term) = 0 in some statistical sense.

If the interaction term is zero, then great, your function is trivially
decomposable. For our problem, take a = genetics and b = environment, that
means you can precisely talk about someone's 'genetic score for intelligence'
and someone's 'environmental score for intelligence', and never have to
consider them together.
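
A minimal numerical sketch of that decomposition (toy numbers, nothing from
any real study): treat the main effects as conditional means and the
interaction as whatever the additive model leaves behind.

```python
import numpy as np

# Toy trait surface over genetics (a, rows) and environment (b, columns);
# values are made up for illustration.
f = np.array([[1.0, 2.0, 4.0],
              [2.0, 3.0, 9.0],
              [3.0, 4.0, 16.0]])

grand = f.mean()
# Main effects: conditional means with the grand mean removed.
effect_a = f.mean(axis=1) - grand   # "genetic score"
effect_b = f.mean(axis=0) - grand   # "environmental score"
# Interaction: whatever the additive model fails to explain.
interaction = f - grand - effect_a[:, None] - effect_b[None, :]

# The decomposition is exact, and the interaction term averages
# to zero along every row and column.
recon = grand + effect_a[:, None] + effect_b[None, :] + interaction
assert np.allclose(recon, f)
assert np.allclose(interaction.mean(axis=0), 0)
assert np.allclose(interaction.mean(axis=1), 0)

# Fraction of total variance the additive model cannot capture:
print(interaction.var() / f.var())
```

When that last printed fraction is small, 'genetic score' and 'environmental
score' are meaningful on their own; when it's large, they aren't.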

But I strongly suspect that the interaction between genes and environment is
very, very high; the latest effort to map genes to intelligence accounts for
only 10% of variance, not because the environment _determines_ the other 90%,
but probably because of the interaction term, which can be complicated and
highly nonlinear.

Another thing: the variance of your inputs can only be considered in a
statistical sense, so the relative importance of genes vs. environment won't
be stable. If
somehow the world became completely uniform (every single child in the world
received the exact same education and upbringing), you'd expect genetic
variation to account for everything just by definition.
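
That last point shows up in a toy simulation (a purely additive, hypothetical
model, so the effect is even starker than in reality): as the environment
becomes more uniform, the genetic share of trait variance climbs toward one.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
genes = rng.normal(0, 1, n)  # standardized "genetic score"

# Progressively more uniform environments.
for env_sd in (1.0, 0.5, 0.0):
    env = rng.normal(0, env_sd, n)
    trait = genes + env  # purely additive toy model
    genetic_share = genes.var() / trait.var()
    print(f"env sd {env_sd}: genetic share of variance ~ {genetic_share:.2f}")
```

With identical environments (env sd 0), genes account for all of the
variance, by construction rather than by any change in biology.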

~~~
andrewprock
You can't do that with things that interact, e.g.:

f(a,b) = b/a

~~~
whatshisface
f(a,b) = b/a = 0 + 0 + b/a = f(a) + f(b) + interaction term

In this case the system is contained entirely within the interaction term, and
since you don't know the distributions of a and b there's not much motivation
to go further. If you had the distributions of a and b you might be able to do
something a little less trivial by skewing the interaction term to have an
expected value of zero, potentially like:

b/a = a + b + (b/a - a - b) = f(a) + f(b) + (interaction term with E[term] = 0)
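
As a quick numerical check (assuming, purely for illustration, that a and b
are independent and uniform on [1, 2]), the shifted interaction term really
does have mean zero while the decomposition stays exact:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical distributions, just for illustration.
a = rng.uniform(1, 2, size=100_000)
b = rng.uniform(1, 2, size=100_000)

f = b / a
# Skew the interaction term (b/a - a - b) by its mean so that
# E[interaction] = 0, as suggested above.
interaction = f - a - b
interaction_centered = interaction - interaction.mean()

# The pieces still recombine to b/a exactly, up to the constant shift.
recon = a + b + interaction_centered + interaction.mean()
assert np.allclose(recon, f)
assert abs(interaction_centered.mean()) < 1e-12
```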

~~~
andrewprock
If by "interaction term" it is meant f(a,b), then I congratulate the author of
that tautology.

~~~
whatshisface
Hidden in the parent's comment is the additional condition that f(a,b) have an
expected value of zero. It's still pretty easy to prove, though.

------
jl2718
Height is measurable. Intelligence really isn’t.

I’m naively confident that we’ll find gene patterns for height soon.

------
evrydayhustling
Bear with me if this sounds like pandering, but I see an analogy between
the author's prescriptions and "product-market fit". The power of that term is
that it discards the notion of an optimal product or perfect audience - the
two can only be right for each other. In the same way, problems with
polycausal phenomena might admit specific solutions (Prozac for depression)
without being fully understandable.

The corollary is that successful products can litter the world with unintended
consequences - as can isolated discoveries.

------
mrfusion
I would think a big part of the problem is we shouldn’t look at genes but
rather how genes are regulated.

------
memebox3f
I remember thinking how utterly stupid it was to believe a single gene could
determine traits like these.

