Hacker News
Nature’s libraries are the fountains of biological innovation (aeon.co)
100 points by jonbaer on Sept 24, 2016 | 48 comments



The library analogy seems really apt here.

This article seems to argue that Nature has selected for genes (and by extension, perhaps whole genomes) that in some sense are likely to benefit from small mutations.

That means that even if two genomes lead to phenotypically identical organisms, the one whose genome is more robust to small changes (either by continuing to function the same way, or to have a different, but still useful function) is better.

Is there a specific term for this? This implies that 'How likely is this particular organism to survive in this particular environment?' is not precisely the same as 'How well will this genome persist over evolutionary time as the environment changes?', but they seem to both go under the term 'fitness', even though one is more 'meta-' than the other.

Similarly, software engineers do not simply search the space of programs for ones that give the exact desired output: within the set of working programs that give identical output, we also select for ones where small changes will lead to different but still-useful programs. We call these features 'maintainability' and 'readability', and we use 'libraries' to help do this.

And just like we don't necessarily use the entirety of a library, it seems possible that large stretches of non-coding DNA are simply genes that aren't currently useful, but are kept around as raw material for future evolution.

  "It is not the strongest of the species
  that survives, nor the most intelligent;
  it is the one most adaptable to change."


This is an excellent post smallnamespace. The area of research you're looking for is known as evolvability, reading these wiki/wikibook [↓] links is a good start for the topic.

Evolution seems to have figured out a way to build representations such that nearby steps in random directions are nearby in a semantic (can't think of a better word atm) sense. This should surprise you. Think about an expression tree like (+ (x y)): it's very different from (/ (x y)) despite differing in just one position. Or consider that 12 and 32780 differ by only a bit in their binary representation. We humans could really do with figuring out how evolution takes a seemingly discrete problem and works out a way to build representations such that efficient search is possible even without explicit derivatives.
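
To make the 12-vs-32780 point concrete, here's a throwaway Python sketch (my own illustration, not from the article) of how far a single bit-flip can move a plain binary integer:

  # One bit-flip in a plain binary integer can be a tiny or an enormous jump.
  def one_bit_neighbours(n, width=16):
      """All integers reachable from n by flipping a single bit (within 'width' bits)."""
      return sorted(n ^ (1 << i) for i in range(width))

  print(one_bit_neighbours(12))
  # -> [4, 8, 13, 14, 28, 44, 76, 140, 268, 524, 1036, 2060, 4108, 8204, 16396, 32780]
  # The nearest neighbour is 1 away in value, the farthest 32768 away: 'one mutation'
  # in this representation is anything but a small semantic step.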

It's also interesting that evolution might have taken this to a meta level with brains. From http://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20princ...

> The beauty of neural Darwinism is that it nests distinct selective processes within each other. In other words, it eschews a single unit of selection and exploits the notion of meta-selection (the selection of selective mechanisms; for example, see REF. 89). In this context, (neuronal) value confers evolutionary value (that is, adaptive fitness) by selecting neuronal groups that mediate adaptive stimulus–stimulus associations and stimulus–response links. The capacity of value to do this is assured by natural selection, in the sense that neuronal value systems are themselves subject to selective pressure. This theory, particularly value-dependent learning [90], has deep connections with reinforcement learning and related approaches in engineering (see below)

A bit of an aside, but: if you're interested in AI, it's really worth looking at what the people studying the intersection of evolution and learning are saying, at animal intelligence, and at human brain development. It's surprising to me how many people think the current approach to reinforcement learning is but a hop and a skip away from full human-like generality.

https://en.wikibooks.org/wiki/Structural_Biochemistry/Stabil...

https://en.wikipedia.org/wiki/Evolvability


Going back to basics to get my thoughts in order:

Synonyms are words with the same meaning. A way of describing this is that the mapping from the set of all words to the set of all meanings is not "injective" (aka "one-to-one") - i.e. at least two words map to the same meaning. There are other ways to describe it, one might say synonymous ways...

Synonyms also occur when two expressions have the same value, e.g.

  2+2 = 2*2 = 4-2

It's also possible with variables, e.g.

  x+y = y+x
  x+y+z = (x+y)+z = x+(y+z)
  a(b+c) = ab+ac

All this is redundant, and we tend to like to minimize redundancy, seeing it as wasteful inelegance - as in Python's "There should be one obvious way to do it." Though the truth is that there are an enormous number of programs that are functionally identical - synonyms.

Getting to the article's domain, one specific DNA sequence has a number of neighbours that are one mutation away (an Edit-Distance or Levenshtein distance of 1, if you like)... for a long sequence, there are quite a few neighbours. But some of those neighbours are synonyms, and similar for their neighbours, and their neighbours' neighbours. Pretty soon you have an exponential explosion of synonyms. But the crucial point is that you also have an explosion of non-synonyms... an exponential explosion of the surface area of the set of equivalent DNA sequences.

This is where massive parallelization, of having a huge number of organisms, is helpful, many of them being unique points on the surface. Of course, not just for this one piece of DNA, but a different point for each gene.

All this makes it much more likely that a new mutation could generate something new and useful.
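
To put rough numbers on the 'neighbours one mutation away' point above, here is a small Python sketch (my own; substitutions only, ignoring insertions and deletions):

  from math import comb

  BASES = "ACGT"

  def one_mutation_neighbours(seq):
      """All sequences exactly one substitution away from seq."""
      return [seq[:i] + b + seq[i + 1:]
              for i, base in enumerate(seq)
              for b in BASES if b != base]

  print(len(one_mutation_neighbours("ATGGCCTTA")))   # 3 * 9 = 27

  # Sequences within k substitutions of a length-L gene: sum over j of C(L, j) * 3^j.
  L, k = 900, 5
  print(sum(comb(L, j) * 3 ** j for j in range(k + 1)))   # already ~1.2e15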

It may be that life took so long to get going because it went through many systems that lacked this property... when the most adaptable finally came along (DNA, plausibly preceded by RNA), it completely swamped its ancestors, obliterating them. Then we see other forms that excelled at flexibility/adaptability: multicellularity, sex, birds, mammals and so on.

BTW on the subject of Platonic forms, the surprising thing to me is that we have so many clear categories, instead of a huge mush of continuously varying species. They may be local optima... and when a new, more flexible form arrives, it again explodes.

We don't like redundancy... but maybe we should.


Thinking more: the tricky property is not only that synonyms are one mutation away, but that functionally very different meanings are one mutation away - enough to make a difference.

For example, in C programs, it's easy to get one-mutation synonyms: just mess with whitespace. But it's difficult to get very different synonyms - ones which, with one further mutation, suddenly give very different results (I think?). The synonyms don't get you much.

My understanding is that the function of a DNA-coded protein is pretty much its folding shape (which in turn determines what reactions it "enzymizes"). We could say that the mapping between sequence and folding is the mapping from word to meaning. I'm guessing, but it seems to me that a protein could vary a lot in some areas without affecting folding much (like whitespace?), some areas would predispose some folds, and others would be critical. It may also be affected by the general environment, and by the presence of specific other proteins, which might encourage one out of a few potential folds.

It may be that it's also somewhat probabilistic: proteins are not deterministically folded in one particular way, but produce a distribution of folds. And a synonym may shift the distribution, e.g. more of one base pair at a certain position may gradually shift the proportions. Provided none of the possibilities are pathological, this seems a way to gradually shift towards alternatives.

The key thing is that when shifted far enough, at a few different folding points, it may suddenly start producing a novel folding. If that happens to be useful, selection would quickly favour stabilising this particular folding (and it's probably possible to move towards a sequence that almost guarantees this fold).

So in some ways it's more an analog system than digital.

I find it difficult to think of an actual design with this property.
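
I can't resist a toy numerical sketch of that 'shifting distribution of folds' idea (the single 'bias' parameter and all the numbers are made up purely to show the dynamic, not a model of real folding):

  import random

  def fold(bias):
      """Toy 'protein': folds as shape B with probability 'bias', otherwise shape A."""
      return "B" if random.random() < bias else "A"

  bias = 0.01                      # fold B starts out rare
  for generation in range(200):
      bias = min(1.0, bias + 0.01) # near-synonymous mutations nudge the distribution
      folds = [fold(bias) for _ in range(1000)]
      if folds.count("B") > 500:   # once B is common, selection has something to act on
          print("fold B dominant by generation", generation)
          break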


There was a good article about this in the April 2004 issue of Scientific American (paywalled, unfortunately; the name of the article is "Evolution Encoded"). It's about how similar gene sequences result in similar proteins, so that mutations don't necessarily result in death.

http://www.scientificamerican.com/article/evolution-encoded/


"It is not the strongest of the species that survives, nor the most intelligent; it is the one most adaptable to change."

Isn't this (from a very, very high perspective) the reason why thousands of years of power-concentration, consolidation and empire-building endeavors went down the drain each and every time? And isn't the current attempt (where 'attempt' is too weak a word), regardless of the tools, strategies and technologies being used (yes, _you_ are a part of it), nothing more than a manifestation of the same power-hungry "elite" ape-brains steering the world against nature (away from decentralization, competition etc.) and into its abyss? Everyone knows that a system that can't evolve is a dead system .. and at least some people know that today's elite mindset, with most of its "enlightened" ideas (some of its proverbial arms even using the same cherry-picked nature references as the one I'm picking on right now), is conceptually the same regardless of the time period.

Sorry for this rant. I don't think our civilization will survive if the world keeps its current direction (well, the parts of it that will make it .. and yes, both species*). Decentralization, limiting the concentration of power whatever it takes .. that's the only way.

"current" as in the last 300-200 years give-or-take *"specie_s_" including moechtegern new species as ref. Edit: Grammar fixes


> decentralization, limiting the concentration of power whatever it takes .. that's the only way

Unfortunately, successfully addressing global warming and over-use of resources is probably going to require more coercion and centralization.

Even bilateral agreements must be backed by credible threats, like sanctions, or they won't work.


Over-use of resources is actually a direct consequence of consolidated global corporate power: from planned obsolescence to destroyed local markets and industries (with all the micro and regional economic and social consequences that entails), just so global corporation A can ship products that were originally manufactured and consumed/sold locally across the planet - at a huge human and environmental cost - and make profits. Not because a product is cheaper for the consumer, nor because it's better; this is solely the consequence of various coercive techniques (military, economic): patents (yes, Watt actually postponed the onset of the industrial revolution by at least two decades .. one example out of many), licenses and concessions (both meant as economic terms), etc.

Too complicated a topic for this kind of discussion, but that's one huge misconception/piece of economic propaganda you cited there that had to be addressed.

The other one - even more potent and dangerous - is the notion that global problems need global governments (or, more precisely, a consolidated global power). This is actually a two-part question: a) why is our current civilization unable to solve global problems (education, healthcare, social welfare, environmental damage) - which is a complicated topic on its own; and b) how can a decentralized global society with a mostly local scope solve global issues?

This is a special question of sorts because if you answer the a) part you will with high certainty arrive at an answer to the b) question too. (and no, centralization of power is not the answer)


The fact that money does not represent value on a holistic scale (humanity) is the root problem imo. Instead, money is designed to represent value to an individual, which creates an incentive for producing instant gratification, i.e. something an average person will value. If things of holistic value were worth money, then the world would 'automatically' go in the 'right' direction.

I think decentralization is the end goal of any system: it is much easier to design a centralized system (a minimum viable product, if you will - like the current internet), but more robust to have it decentralized. The egalitarian utopian vision that people keep dreaming of is basically ultimate decentralization, where somehow the system is designed so that the rules 'are invisible': people do what benefits them the most, but it's also the best thing to do, and legal. Any move towards this limit is a step in the right direction in my mind.


I was unable to reply to smallnamespace (and it just did not pass my internal value system to create a 2nd account just for that).

@smallnamespace: I actually did answer, or, if you can't get an exact(ish) pointer out of my answer there are hints scattered all over .. yes, not all answers are of the one-sentence-gives-you-all kind

@openfuture: In many aspects, albeit put in different terms, I do agree that money, or the monetary system per se, is at the core of today's dystopia-to-be-built. But what I think you meant by your answer (and correct me if I'm wrong here) is some sort of resource- (or environmental-impact-) based currency, which is - in my opinion - counter to what money actually is/should be: primarily, in its pure form, a _tool_ to make exchange of goods and services easier. I recommend reading about Silvio Gesell's work; IIRC both Hayek (Constitution of Liberty and probably others) and Friedman (can't help you here) cited him as one of the greatest economists of the century (commercial over). Of course, once you understand what he was getting at, you have to strip all that's unsuitable and convert the rest to a 21st-century world - nevertheless a fun exercise which will give you a ton of new ideas (like on-demand, per-transaction currency for exchange, with an intrinsically value-losing personal currency as your own saving option within your asset pool). The Wörgl experiment, basic income, the 21st-century state of affairs - lots of mind experiments I invite you to here, so take a deep breath and start reading ;)


Thanks, I really appreciate thoughtful book recommendations! Sorry that my answer is late, perhaps no one will see it but I will answer anyway.

I don't think you can make a separate currency for the holistic viewpoint (holocoin ^^). My point is more that what we currently value (money) isn't valuable anymore. That's the fault of the institutions that are supposed to provide 'value' (thereby earning money) by making sure that money is correlated to value and not the other way around (finance / banking etc.), as well as the institutions that make sure that 'value regulators' don't get delusions of grandeur (governments / police etc.).

Basically, if the feedback loop goes "People -> Institutions -> Government -> People" then everything is fine: that's democracy, and the non-people part of the equation will see value in cultivating appreciation for skill and knowledge in society. However, if the power gets too concentrated you can effectively cut "People" out of the equation, so you get a feedback loop of "value regulators" and "regulation regulators" stroking each other's backs, creating ever more severe conditions for the rest of us till the whole thing collapses as people lose faith in the abstractions that were supposed to be meaningful (like money or law) and go berserk, much to everyone's chagrin.

This is obviously a huge oversimplification, but basically I feel losing faith in humanity is a self-fulfilling prophecy. The whole point of decentralization is to disarm the powers that be in a fairly orderly manner before the collapse/fascism, while at the same time giving everyone both the power and the responsibility over their immediate environment. If you've ever seen a child try to handle responsibility then you also know that it is a huge perspective shift for them, and a great motivator for actually working out for yourself what needs to be done rather than just knowing who to ask. We need a humanity like that if we want to survive the coming age of superweapons.

tl;dr. religion is good, organised religion is bad.

edit: just realized I'm not really answering you so much as just rambling my point.


I don't know what your proposed solution is, since you elected not to share it.

I will say though that, historically, the only examples I can recall of independent polities being able to work together to solve common problems involved both an outside enemy and a shared set of values, culture, and experience.

Global warming doesn't fulfill either criterion.

Despite the Internet and modern media, Western culture and values are not universal. And even in the US, a large fraction of the population doesn't even acknowledge that global warming is a problem, much less is willing to pay for a fix.

How are you going to get those people to come along without some level of coercion?


Good points, though I disagree with a few elements.

Overuse of resources is a typical pattern of not only human but nonhuman species. One of the first instances was cyanobacteria, which overused both the then-abundant atmospheric methane source and its sinks: free molecular iron (through the great rusting) and the atmosphere's ability to absorb oxygen without reaching levels that were both toxic to the cyanobacteria themselves and high enough to disable greenhouse warming via methane, plunging the planet into what seems to have been its deepest and longest ice age, "Snowball Earth". I describe some of the dynamics here: https://ello.co/dredmorbius/post/5l_8mqtvwllvx_dabpjy-g

My point isn't that what humans are doing here and now is sustainable or good (it's not), but rather that playing the blame game, or trying to assign causality, is frequently nonproductive and/or fraught. It's what William R. Catton, Jr. calls "futile vilification", from his book Overshoot.

I love the example of Watt's steam-engine patent extension. I picked it up from Vaclav Smil's books; where did you learn about it?

On coercion -- you might be interested in William Ophuls. He's a political writer (Yale PhD and professor, worked in the US Foreign Service as well). His primary work is Ecology and the Politics of Scarcity (1977), which lays out the fundamental problem and dynamics. He's continued the discussion in a series of books published through 2013, of which Plato's Revenge and Requiem for Modern Politics are probably the best supplements to his first book. http://www.worldcat.org/search?qt=worldcat_org_all&q=au%3Aop...

Ophuls argues, despite his strong belief in liberal political philosophy, that some level of mutual coercion will be all but unavoidable if a sustainable technological society is to be achieved. His critique of the present decentralised system (well, as of 1977 and before) isn't very reassuring.


@"overuse of resources is a typical pattern" - I don't agree, the drive for overuse of resources may be considered as such but this is hugely species-dependent, and besides a lot of cost-to-gather-resources vs energy-you-get abstractions one can knit oneself into, the main (limiting) factor is and always has been the (ever changing) environment and a sort of equilibrium all systems within are forced to adapt to/operate in(or, extinct).

Without steering even further from the thread: humanity is unable to adapt to the ultimate limiting factor as of today (one could say Earth's biosphere), and the cause of that is as I pointed out earlier .. at least that's what I came to realize once I dived into all that boring, more-PR-than-science policy/economy material.

@Patent law: I stumbled upon a freely available book about the patent system and its evolution years ago (maybe via HN). I'm not on my PC so can't tell you what the name or authors were (from UCLA IIRC), but one gets pretty upset when reading it :)

@"playing the blame game, or trying to assign causality, is frequently nonproductive and/or fraught" I Don't agree, wrapping one self's mind into a nicely-picked-set of philosophical nuances is whats unproductive. Its like going through academia without ever asking anyone for clarifications/more information on a subject because you were brought-up/"propagandized" into believing that everyone who is asking is a non-productive burden to society. Its time to point fingers even knowing you could be wrong(which should be the default anyway), at least you get some information out of that process ..

@Ecology and the Politics of Scarcity - I haven't read that one. But on your last point: although I do agree that some level of coercion will be present in society for years to come (since it's a product of "our nature") .. I don't think there is room for critique of a "decentralized system" (other than of a decentralized system made of competing centralized systems), the same as there is no room to criticize democracy (once defined properly), because every other competing system was killed off (no, Churchill was not right). Will give that book a look nevertheless (if only to get some ammunition :)


Edit: @Ecology and the Politics of Scarcity Revisited - I just read the foreword of that book; almost each and every sentence of it is a (known) misdirection. I don't think I'll have the nerves to read it; will scroll through it once I'm less tired.


Which edition are you reading? There's a revised version from 1992. The foreword isn't by Ophuls, though the text is largely his. Copy at the Internet Archive.

https://archive.org/details/pdfy-SXVS7n8VWRQUMvFD


That is exactly the one I found; yes I know, but it's written "in the spirit of the book", which makes it very off-putting right from the beginning. Going through the first pages only confirmed my viewpoint: it doesn't mention that the most significant factors contributing to population growth are childhood mortality (i.e. no access to healthcare), lack of education and lack of social safety, all of which are a direct consequence of the proliferation of centuries-old hierarchies and their power-conserving, consolidating, concentrating actions ("we" are the ones holding large portions of the African continent in the middle ages). It was the same with environmental impacts, giving western consumerism as a culprit even though we are being _forced_ to become consumers (which is actually pretty hard not to see through almost every item you buy or service you consume these days), etc. etc. Presenting a world without mentioning these facts is great if you want to convince (a specific group of) readers of a certain world-view and make them susceptible to accepting or even defending certain policies/actions that would normally be unacceptable [by the majority of the population]. What I meant to say is .. clear, I think ;)


I suspect our views diverge on a number of issues. Though I'll be interested to see your thoughts.


Edit (2): One thing I wanted to mention [..]: Calhoun's universes. You have to read his prominent papers to the end, otherwise you might get depressed (it gets more optimistic towards the end) - sustainability of societies even without resource limits _is_ possible.


Generally familiar with his work (and before it was featured on HN a few days back), though I've not read it closely.


Your comment made me think of this comic: http://humoncomics.com/mother-gaia


> Is there a specific term for this? This implies that 'How likely is this particular organism to survive in this particular environment?' is not precisely the same as 'How well will this genome persist over evolutionary time as the environment changes?', but they seem to both go under the term 'fitness', even though one is more 'meta-' than the other.

This isn't quite the same, but this is related: https://en.wikipedia.org/wiki/Gene-centered_view_of_evolutio...


But also imagine that the algorithms you write had to work (perhaps slightly differently, but essentially the same) when hit with bit-flips in the code. How would one do that? Perhaps have multiple copies, or coding redundancy, or fix-up routines, or fall-back instructions. All the kind of stuff that mother nature already figured out. But that should be unsurprising, since without that robustness, the evolution thing would kill our progeny almost every time. Building the robustness is almost as important as building the functionality.
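
The 'multiple copies' idea is easy to sketch - this is just a toy triple-redundancy illustration of mine, not how DNA repair actually works:

  from collections import Counter

  def majority_vote(copies):
      """Return the most common value among redundant copies of the same datum."""
      return Counter(copies).most_common(1)[0][0]

  value = 0b101101
  copies = [value, value, value]
  copies[1] ^= 1 << 3                       # a bit-flip corrupts one copy
  assert majority_vote(copies) == value     # the system still 'behaves' the same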


> imagine that the algorithms you write had to work (perhaps slightly differently, but essentially the same) when hit with bit-flips in the code. How would one do that?

Gray code [1] does it for integers: Any two consecutive values are one bit-flip apart.

Instructions could perhaps be encoded similarly: Say an n-bit gray code represents an instruction. Each instruction has an n-bit ID, IDs are spread evenly over the space of n-bit values, and each instruction code resolves to the ID with the smallest Hamming distance [2].

Then most bitflips would yield essentially the same program, as long as the number of instructions is small compared to the number of n-bit values.

Our CPUs wouldn't run that directly, but I think it should be possible to construct simple bytecode instruction sets where practically any permutation of bits gives a syntactically valid program, and where bit sequences with a small Hamming distance in most cases resolve to similar programs.

[1] https://en.wikipedia.org/wiki/Gray_code [2] https://en.wikipedia.org/wiki/Hamming_distance
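
A minimal Python sketch of both ideas (the 'instruction set' and its IDs below are invented for illustration, not a real bytecode):

  def gray(n):
      """Binary-reflected Gray code: consecutive integers differ by exactly one bit."""
      return n ^ (n >> 1)

  def hamming(a, b):
      """Number of differing bits between two integers."""
      return bin(a ^ b).count("1")

  assert all(hamming(gray(i), gray(i + 1)) == 1 for i in range(10000))

  # Toy 16-bit 'instruction set': IDs spread far apart in Hamming space, and any
  # word decodes to the nearest ID, so most single bit-flips leave the program intact.
  INSTRUCTIONS = {0x0000: "NOP", 0x0F0F: "ADD", 0xF0F0: "SUB", 0xFFFF: "JMP"}

  def decode(word):
      nearest = min(INSTRUCTIONS, key=lambda ident: hamming(word, ident))
      return INSTRUCTIONS[nearest]

  assert decode(0x0F0F ^ (1 << 5)) == "ADD"   # one flipped bit, same instruction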


One could also view unit tests and static type checking as making a program more robust to incorrect programmer-directed changes.


You'd also need to robustify the compiler itself, and its output, against bit-wise errors.


> to robustify

Did you mean to reinforce?


> Is there a specific term for this?

I think robustness is the specific term for that.


A quick search would confirm that mutational robustness is _a_ right term for this.


And just like in libraries, junk DNA provides all of the code cruft you could ever ask for. Some organisms that go for the agile approach are able to (and mostly required to) minimize the amount of cruft, but most organisms just pay the cost of copying entirely useless stuff.


Junk DNA is not junk at all. http://www.scientificamerican.com/article/hidden-treasures-i...

(there are many other recent publications, please google it)


The title is horrible. The problem in the article can be summarized in this quote:

> Nature’s libraries are the fountains of biological innovation that Darwin was looking for.

"Nature's libraries" is just a fancy name for all the neighbor variations of the ADN of a gene. It's just a collection of all the are one that are not so deadly. It's not a "fountain". Nature is not pulling off variations from somewhere.

It's not necessary to create a Platonic space of variations. This is like talking about the Platonic set of all binary numbers between 0000000000000000 and 1111111111111111 and saying "Without a library of Platonic 16-bit numbers, computers couldn't work."


> It's not necessary to create a Platonic space of variations. This is like talking about the Platonic set of all binary numbers between 0000000000000000 and 1111111111111111 ...

I think you've misunderstood Platonic spaces. ("Platonic space of solids" is a subset of space of all solids.)

What interests me when speculating about evolution and platonic notions is the space of the possible, with the speculation high bit being that the "platonic" essence of this space is [can be informed by] group theoretic structures and compositional rules.



Thanks, we've updated the title from “Without a library of Platonic forms, evolution couldn’t work”.


I've always wondered why you never hear about biotech or pharmaceutical companies buying up land with high biodiversity, such as rainforests. Natural selection has been operating a trial-and-error laboratory running 24/7 for over 3 billion years, accumulating DNA that by definition is useful for living organisms.

It's just waiting there, being slowly destroyed for short-term resources like wood and oil - resources we already know won't be viable products within decades. If you look at medicine, some of the most pivotal drugs, like opioids and antibiotics, were discovered rather than created. Even with the inability to patent naturally occurring genes, I can't fathom how much R&D could leapfrog forward in a profitable way.


Could not agree more. I think a hundred years from now people will look back with horror at the destruction of biodiversity, and at the opportunities to mine it that are lost forever.


The author (and everyone) should play with Genetic Programming. This is where programs (i.e. expression trees) get to mix together to produce new expression trees, according to their 'fitness'. It's an extension of the regular fixed-length Genetic Algorithm stuff, but the structure of the representation itself is adaptive.

One surprising thing is that (in addition to fitness improvements over time) the expression trees evolve 'robustness', since there is a subtree survival advantage if a tree's descendants are not disrupted by the crossover/mutation operators. That is to say, the process learns/evolves to evolve better.

So it shouldn't be so surprising that nature evolves a process for self-healing DNA errors, etc - it can be demonstrated in-silico pretty simply (i.e. occurs even if you don't 'engineer' for it).
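
For anyone who wants to see this first-hand, here's a deliberately tiny sketch of mine (mutation-only to keep it short; real GP uses subtree crossover, as described above) that evolves expression trees against a toy symbolic-regression target:

  import random, operator

  OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

  def random_tree(depth=3):
      """A random expression in x: the variable, a constant, or (op, left, right)."""
      if depth == 0 or random.random() < 0.3:
          return 'x' if random.random() < 0.5 else random.randint(-5, 5)
      return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

  def evaluate(tree, x):
      if tree == 'x':
          return x
      if isinstance(tree, int):
          return tree
      op, left, right = tree
      return OPS[op](evaluate(left, x), evaluate(right, x))

  def mutate(tree):
      """Replace a random subtree with a fresh random one."""
      if not isinstance(tree, tuple) or random.random() < 0.2:
          return random_tree(2)
      op, left, right = tree
      return (op, mutate(left), right) if random.random() < 0.5 else (op, left, mutate(right))

  def fitness(tree):
      """Squared error against the target x*x + 1 on sample points (lower is better)."""
      return sum((evaluate(tree, x) - (x * x + 1)) ** 2 for x in range(-5, 6))

  population = [random_tree() for _ in range(50)]
  for generation in range(200):
      population.sort(key=fitness)            # selection: keep the better half...
      survivors = population[:25]
      population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

  best = min(population, key=fitness)
  print(fitness(best), best)

Adding subtree crossover between the survivors is the step that makes the 'evolved robustness' effect described above start to show up.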


I fully imagine that the author does so. Indeed, the author mentions running simulations, and the like.


It depends on whether the simulations are on fixed-length 'books' (the analogy used in the post) or on something more self-structuring (like Genetic Programming as a simple model, or the actual case of DNA coding for protein / cell / body assembly).


Do you have recommended resources on Genetic Programming?


I think the author is confusing the map with the territory: the territory is the actual history of biochemical evolution, and the map is the explanation we humans have found for what happened. It is a straightforward matter of fact that the former happened without the latter. It seems likely to me that any physical system capable of enabling life (of any form) will also enable evolution.


I've always thought of Biology as the largest reverse engineering project ever.


So the core of the argument seems to be that there are many ways to encode a basic concept like 'something to move oxygen around' - that without this you could not have evolution, and that this brings Platonic essences into evolutionary theory.

It looks like there is something interesting there - but the article is a little bit too poetic for me.

A minor thing - it is also not clear to me whether they mean that there are many ways for DNA to encode haemoglobin, or that the other ways encode different proteins that do the same thing.


Fascinating...and if the person really knows what he is explaining (I'm in no position to tell) it certainly answers questions I had about how random mutations can be so effective in improving survival rather than leading to death. Just fascinating. Thank you HN


Most random mutations are not beneficial, but if you consider large populations and a long enough time frame, then it works. Evolution is brute-forcing progress with massive parallelism over long periods of time.


I would recommend reading The Selfish Gene and other works by Dawkins if you are interested in theories on the functional basis of mutations in the universe.


Very interesting thought, that evolution works so well because evolution itself evolved to evolve better. This would explain many open questions left unanswered by simple randomness*time.


"Nature is the great visible engine of creativity, against which all other creative efforts are measured."

-Terrence McKenna



