Don't forget randomness is still just a hypothesis (2006) (nature.com)
152 points by tosh 70 days ago | 129 comments

When I first started learning QM in undergrad, I was skeptical of the idea of randomness. Many years later after taking QFT in grad school... I'm still skeptical. Non-locality doesn't bother me one bit, but personally speaking, there's something deeply unsettling about the idea of "true" randomness.

First, I think it's important to define what randomness even is, for which I'll use the most universal definition, i.e., Kolmogorov randomness. A string of data is Kolmogorov random if, for a given universal Turing machine, there is no program shorter than the string that produces the string (yes, you can arbitrarily choose which universal Turing machine, but the invariance theorem makes this fact inconsequential for the most part).
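Kolmogorov complexity itself is uncomputable, but any real compressor gives an upper bound on it, which is enough to see the distinction in practice. A minimal sketch using only the Python standard library:

```python
import os
import zlib

structured = b"0" * 10_000           # generated by a very short "program"
unpredictable = os.urandom(10_000)   # OS entropy source; expect no compression

# zlib's output length upper-bounds the Kolmogorov complexity (plus a constant)
print(len(zlib.compress(structured, 9)))     # a few dozen bytes
print(len(zlib.compress(unpredictable, 9)))  # ~10,000 bytes, often slightly more
```

The compressor can never certify a string as incompressible, of course; it can only fail to compress it.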

So if we repeatedly set up and measure a quantum system that's not in an eigenstate and then apply the probability integral transform to the individual measurement values, we should expect to find a sequence of values drawn from a uniform distribution, and this sequence should not be compressible by any computer program.
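The probability integral transform step can be sketched classically; as a stand-in for the quantum measurement distribution, here's an exponential distribution being flattened to uniform by its own CDF (the choice of distribution is just an assumption for illustration):

```python
import math
import random

random.seed(0)

# "Measurements": draws from Exp(1), standing in for repeated outcomes
draws = [random.expovariate(1.0) for _ in range(100_000)]

# Probability integral transform: push each value through the Exp(1) CDF
u = [1.0 - math.exp(-x) for x in draws]

# If the transform worked, the results are uniform on [0, 1)
mean = sum(u) / len(u)
print(round(mean, 2))  # a uniform distribution on [0, 1) has mean 0.5
```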

This is where it gets interesting though, because it may very well be the case that this sequence of measurement values is incompressible only because we lack external information, i.e., we are looking at the Kolmogorov complexity of the string from our perspective as experimenters, but from the perspective of a hypothetical observer outside the universe, the conditional Kolmogorov complexity (conditioned on some missing information) could indeed be less than the length of the string.

So where could this missing information be stored? My guess is that it's at the boundary of experimenter/experiment (not referring to a spacetime-local boundary here), since you can't represent the overall quantum state of experimenter + experiment as a separable state. That is, the information necessary to perfectly predict the result of a measurement on a quantum system is inaccessible to us precisely because we — the experimenters — are part of the system itself.

In this way, quantum randomness would be truly random from our perspective in the sense that the future is to some degree fundamentally unpredictable by humans, but just because it's genuinely random to us doesn't imply the universe is indeterministic.

I wonder if there's some way you could design an experiment that distinguishes true indeterminism from the merely unpredictable...

I can understand what you are saying, but the thing I always wonder about is why should the universe be deterministic? We're used to determinism because we experience that at a macro level. However, why should we prefer that condition at a QM level? To a certain extent, I'm actually more comfortable with the idea that it isn't deterministic and that all determinism essentially derives from chaos theory: things happen randomly, but the system constrains its output.

I guess I don't really have a reason for my preference other than thinking, if a universe pops into existence, what would I expect it to act like? Deterministic, or indeterministic? It just seems simpler to assume indeterministic because I can't think of a reason why it should be deterministic.

Edit: I know that chaos theory is built on determinism :-) I'm thinking of things like strange attractors. My ignorance leaves me with no better word for what I'm talking about, unfortunately.

Everything else we've ever observed has initially appeared to be random. There was a long, slow process of proving that individual bits weren't random, and then a sudden jump in progress, after several thousand years of recorded history, when Newton and contemporaries showed it was all pseudorandom. That is to say, it was never obvious that the macro level was deterministic.

From a statistics perspective, given the youth of quantum mechanics as a field, it is more likely than not that we're observing a deterministic phenomenon from the wrong angle, so it appears random. That is how first contact played out with pretty much everything else.

It's an interesting way to think about it, but I have to say that even after reading what you've typed I don't really see it. Although Newton and contemporaries had mathematical rigour, there always seemed to be order in the universe. If you push on a cart softly, it moves slowly. If you push on it hard, it moves quickly. In fact, Galileo had to show that heavier things do not fall faster than lighter things, as had been imagined. Never did we drop a ball and expect its speed to be random. Even the modern sense of the word "random" only dates from the mid-1600s, and it derives from the idea of running quickly (i.e., without paying attention to a purpose). I don't think the concept of randomness as we see it even existed long ago. Everything was either a result of something else or pre-ordained. Things you couldn't explain were "God's purpose". Something with no purpose and no mechanism would have been pretty alien to early thinkers, I think.

I guess the Latin term “alea” is exactly what the English “random” means.

Nice, never thought about it. In Spanish, random is "aleatorio".

Super interesting, thank you for the very informed and insightful comment.

Along the same lines, from my perspective, randomness appears at the "limit of our measurement resolution". In other words, when your only way to measure something is to sample it, the maximum resolution of your measurements will depend on the maximum speed/frequency at which you can sample the thing you are measuring. So in the end all our measurements will always be limited by the fastest thing we can handle/operate/understand. Anything faster than that will appear random to us. And this is pretty much what the Nyquist-Shannon sampling theorem says about any wave/information.

Relating this to Kolmogorov randomness: something would be random when we can't sample it fast enough to rebuild its waveform with perfect fidelity within the time frame in which it appears to be random.

Nyquist-Shannon only deals with reconstructing band-limited signals from uniform samples, not with measurements in general. The resolution of measuring, for instance, the location of a particle in QM has nothing to do with Nyquist-Shannon and does not depend on any sampling frequency.

You are correct. Now, why wouldn't it apply to any measurements?

With some creativity I believe Nyquist-Shannon can be applied to all measurements. For example you could think of a single measurement as the equivalent of a sampling rate of 1 in the time period in which the measurement was made.
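Whether or not Nyquist-Shannon generalizes to all measurements, the undersampling phenomenon itself is easy to demonstrate. A sketch where a 9 Hz sine sampled at 10 Hz (below its Nyquist rate of 18 Hz) produces exactly the samples of an inverted 1 Hz sine:

```python
import math

f_signal, f_sample = 9.0, 10.0   # sampling well below the 18 Hz Nyquist rate

nine_hz = [math.sin(2 * math.pi * f_signal * k / f_sample) for k in range(20)]
one_hz = [math.sin(2 * math.pi * 1.0 * k / f_sample) for k in range(20)]

# sin(2*pi*9k/10) = sin(2*pi*k - 2*pi*k/10) = -sin(2*pi*k/10),
# so the 9 Hz samples are indistinguishable from an inverted 1 Hz tone
print(all(abs(a + b) < 1e-9 for a, b in zip(nine_hz, one_hz)))  # True
```

From the samples alone there is no way to tell which of the two frequencies was "really" there.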

> Kolmogorov random if, for a given universal Turing machine, there is no program shorter than the string that produces the string.

That’s not a definition of unbiased randomness. A truly unbiased random number could be all 0’s. Nothing about an unbiased random number demonstrates that it’s random; otherwise, whatever that distinction is would be a bias in its generation.

Kolmogorov complexity is its own thing, and sequences that seem very complex can have extremely low complexity, such as long sequences of hashes of hashes.

I'm not sure what unbiased randomness is. I haven't heard that phrase before. For Kolmogorov randomness, I was using Wikipedia's description of it (https://en.wikipedia.org/wiki/Kolmogorov_complexity#Kolmogor...), although there are more technical descriptions available.

Crypto cares a lot about unbiased randomness. X bits of entropy is kind of a measurement of this.

Anyway, I suggest you reread the end of that paragraph:

“A counting argument is used to show that, for any universal computer, there is at least one algorithmically random string of each length. Whether any particular string is random, however, depends on the specific universal computer that is chosen.”

Kolmogorov complexity is really referring to the fact you can’t have lossless compression of arbitrary bit strings. You can’t encode every possible N+1 bit string using an N bit string. The computer chosen can make an arbitrary 1:1 mapping for any input though. So, it’s got nothing to do with randomness in the context of coin flipping as the mapping is predefined.

Just remember, you’re choosing the computer and at that point any input can be mapped to any output. But, after that point limits show up.

If all the "randomness" of the universe arises from one extremely long bit string, created once from a source of true randomness, that bit string could contain an unimaginable number of zeros in a row and still be random. For example, the Lotto numbers 1,2,3,4,5,6,7 may be completely random.

Kolmogorov complexity works only if we have a big sample of random numbers, but we do not know whether we have such a sample in this universe or not.

Kolmogorov complexity is only meaningful for a specific architecture.

Without access to the architecture of the machine the universe runs on you can’t tell if the initial random string would be one bit or nigh infinite bits.

I haven’t really heard much talk about “Kolmogorov randomness” before, and so I’m wondering if you might be running up against the limits of the Wikipedia paradigm when it comes to pioneering scholarship.

The citation for that paragraph is a peer-reviewed journal article covering Kolmogorov complexity and randomness. It’s actually a really good article, by someone pretty famous named Per Martin-Löf. Which is all great, except that paper is from 1966, and in 2019 a more studied concept is something called “Martin-Löf randomness” :)

Well they're related. There is a slightly circular argument that, since it's impossible to determine for sure what the Kolmogorov entropy of a sequence is, the only way to generate a long sequence with high Kolmogorov entropy with high probability would be to use truly random numbers. Any pseudorandom shortcut has by definition lower Kolmogorov entropy as long as the generating program is shorter than the sequence.

A random string, in this case, is a string whose bits (or characters) are random, and the definition talks about the process that produces those bits. A string of all zeros (i.e., in this perspective, a process that produces only zero bits) is not random, but a short subsequence of a random string could be all zeros.

> Kolmogorov randomness. A string of data is Kolmogorov random if, for a given universal Turing machine, there is no program shorter than the string that produces the string

I know nothing about this field, but it strikes me as wrong intuitively. Say that I have a certain amount of data. I can find specific patterns in it (for example, a chain of 10 zeros) that are compressible (0*10). For a large enough amount of data, that can save me enough space to include a program that prints the decompressed string in less space than the original string, thus implying my original string wasn't random - but then we've reached an absurdity, because it is perfectly understandable that randomness could create locally compressible substrings.

What am I missing?

>For a large enough amount of data, that can save me enough space to include a program that can print the decompressed string in less space than the original string

It can't. You'll find that, in a truly random sequence, the "compressible substrings" will be infrequent enough that you will use up your data budget just specifying where they go.

Let's take your run-length example. Let's work in bits to make it simple. Your chain of 10 zeros - 10 bits of information - happens on average every 2^10 bits. Let's say we magically compress this sequence down to 0 bits - we just assume that statistically it's in there somewhere, so we don't need to store it. All that's left is to specify where it goes! How many bits do we need for that? Well... the sequence occurs on average every 2^10 bits. We need all 10 of the saved bits just to say where the sequence goes! We haven't saved anything!

The more compressible the substring, the less frequent it is, and the more information is required to specify its location. This is also why we can't compress files by specifying their offset in the digits of pi, incidentally.
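The counting argument above is easy to check empirically. A sketch (fixed seed, pure standard library) that tallies ten-zero windows in a million pseudorandom bits; the expectation is about one per 2^10 = 1024 positions:

```python
import random

random.seed(0)  # fixed seed so the experiment is reproducible
n = 1_000_000
bits = "".join(random.choice("01") for _ in range(n))

target = "0" * 10
hits = sum(1 for i in range(n - 9) if bits[i:i + 10] == target)

print(hits, "ten-zero windows; expected about", (n - 9) // 1024)
```

With roughly a thousand hits in a million positions, naming each one costs about 10 bits, which is exactly what the run itself would have saved.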


If you are familiar with programming I suggest doing the following experiment:

1. make an image of one solid color

2. make another image of the same size with each pixel being a random RGB value

3. losslessly compress both images any way you can

4. compare the compressed file sizes to the uncompressed bitmap file size

Make sure to make a hypothesis before the experiment!
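The standard library has no PNG encoder, so as a stand-in here is a sketch of the same experiment on raw RGB bitmaps, compressed with zlib (the same compressor PNG uses internally); the image dimensions and color are arbitrary choices:

```python
import os
import zlib

w, h = 256, 256
solid = bytes([200, 30, 30]) * (w * h)  # step 1: every pixel the same RGB color
noise = os.urandom(w * h * 3)           # step 2: every pixel a random RGB value

# steps 3 and 4: losslessly compress both and compare sizes
for name, img in (("solid", solid), ("noise", noise)):
    print(name, len(img), "->", len(zlib.compress(img, 9)))
```

The solid image collapses to a few hundred bytes; the noise image comes out at least as large as the input.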

To dig a little deeper on this: all subsequences of a given length are equally probable in a random sequence. The full implications of that require a bit more playing around and reading. I think if you explore it you'll find there is an intuition that can be built up.

Also a bitstring might be random/incompressible in reference to a Turing machine, but become compressible in reference to a halting oracle. E.g. "list the Nth Turing incompressible bitstring" is possible with a halting oracle, so for some N and S it is the case that log2(N) < len(S) and thus S is compressible with regard to a halting oracle but not with regard to a Turing machine.

So... you're saying randomness would be subjective / observer-dependent? Something can be random for one observer and deterministic for other, and that in all cases we can imagine an "outer" observer for whom anything can be deterministic?

...dunno why, but this is one of those things that seems so incredibly, intuitively obvious down to the bones of your mind, like, "how could it even be any other way?" :) I'd say it's just that modern physics doesn't want/need to have anything to do with such hypothetical "outer observers", so we accept the convention of "true randomness" and work with it. And it makes sense; otherwise you'd end up with science being polluted with useless metaphysical blabberings.

I am not sure we can discuss the inside or outside of the universe in this context. If "outside" somehow defines how "inside" works, then the question just changes to: is "outside" deterministic or indeterministic? Is there true randomness in the "outside" or not?

It is very difficult to imagine a root source for true randomness. If there is true randomness in the universe, its source would perhaps be the most important discovery in the history of science.

True randomness and infinity are horrible potential features of the universe - especially if both are true.

To my way of thinking this is a paradox; on the one hand the measurement outcomes of the experiment are conditionalized (in a global sense) on the choices of the experimenter; on the other it's easy to believe that the experimenter's choices exert no local causative effect on measurement outcomes.

Check out the Orthogonal trilogy by Greg Egan. Given what you described above you would love it.

I've thought about this as well and it seems to be the key to free will. We don't have free will except from our perspective.

Under any practical consideration, free will is nothing more than an emotion; it offers you no capabilities, only a propensity to respond to things in a certain way. Without a useful definition of free will that offers something different to this, you won't have a key to anything.

Why does it have to be useful? We try to understand countless things without having a use case in mind at the time of study.

So you're fine with a key that doesn't open anything? How will you know it is a key at all then?

>We try to understand countless things without having a use case in mind at the time of study.

You're making a pretty clear reference to mathematics & science here, but in those disciplines we study things with well-defined structures. We don't study flighty nonsense because it's not ever going to be useful. You shouldn't invoke this phrase to excuse a lack of precision and clarity.

Calling it a key is your own reframing, implying a use case.

You called it a key. I said it is not unless it has a useful purpose.

A key to understanding free will, not a key to using this understanding in any particular way.

What understanding? If you can't do anything with it, what do you understand?

Why must understanding be conflated with utility?

Which means that we should act as if we have free will, since no other approach is reasonable given our perspective.

Interesting to note that the Bible implies something like this paradigm, since it describes God as having total control of the universe but also says we have free will.

When you first encounter that pair of ideas in the text they seem contradictory, but further reflection eventually leads many to some variation on this idea.

I don't believe it's the full picture of free will and predestination in Christian theology - I just think it's interesting that it fits with this perspective, which would have been quite non-obvious to its authors (at least as regards its relationship to modern physics).

I'm not an expert here, but I feel like Bell's theorem has something to say about this. Do you have any comment about that?

Bell acknowledged that such “superdeterminism” represented a loophole in his theorem, but he considered it implausible: https://en.m.wikipedia.org/wiki/Superdeterminism

Kolmogorov complexity is useless for defining the randomness of a string, independent of language: http://forwardscattering.org/post/7

From your link:

"The overall Kolmogorov complexity of a string is thus defined as K(x)=|p| where p is the shortest program string for language L such that L(p)=x and we consider all programming languages."

This is false. No one considers "all programming languages"; we consider one fixed language (by "fixed" you can/should mean: chosen independently of the input, or put differently: you define the language once for all possible inputs).

(It can be ANY universal language chosen from all languages, thanks to the provably constant overhead - that's true - but judging from what follows in your link, your definition wasn't meant that way.)

> Neither Heisenberg's uncertainty principle nor Bell's inequality exclude the possibility, however small, that the Universe, including all observers inhabiting it, is in principle computable by a completely deterministic computer program...

Either I'm confused (definitely possible) or this is sort of implicitly equivocating between two different senses of "determinism". There are experiments we can perform that appear to demonstrate quantum randomness. Though it may sound superficially plausible that any particular such random outcome is actually the deterministic output of a hidden pseudorandom number generator, that hypothesis is ruled out by Bell's theorem.

What Bell's theorem can't rule out is the hypothesis that not only any individual quantum measurement, but the sum total of everything that happens in the universe including the experimenters' choices of what actions to take during the experiment, is all part of a single deterministic causal path for the whole universe, that just so happens to play out in such a way that we never see anything that visibly contradicts Bell's theorem. This can't really be empirically falsified, but there are various philosophy-of-science reasons to be a priori skeptical of it (depending on which philosophers of science you ask, of course).

> Though it may sound superficially plausible that any particular such random outcome is actually the deterministic output of a hidden pseudorandom number generator, that hypothesis is ruled out by Bell's theorem.

As far as I understand, Bell's theorem only rules out the hypothesis that the random outcomes are the deterministic output of a hidden pseudorandom number generator that obeys locality. There could be a deterministic, non-local process that generates the "quantum randomness", and this could be detectable if it exists.

Any true non-local effect would, however, clash with the observation that the universe obeys the theory of relativity. Quantum mechanics can handle this by switching to quantum field theory, but non-local theories such as Bohmian mechanics cannot.

Bell's theorem rules out only local hidden variables. Quantum mechanics itself works around the issue not through indeterminism, but because the wave function collapses everywhere at once, in a non-local way.

But perhaps we do not even need to abandon locality, if we modify it a bit. There is no good reason to believe that space at short distances should be similar to Euclidean space. One interesting hypothesis is that space is more like a graph, and entangled particles, in addition to the normal long path through the graph, are also connected directly, which allows a measurement on one of them to change the state of the other.

I'm not a theoretical physicist, but I've heard in informal conversations with some that one idea being explored more now is that perhaps space itself is just a statistical emergent property of entanglement.

Leonard Susskind has several interesting lectures about it, which can be found by searching for ER=EPR, but I hope the final theory will explain more of the strange behaviors of quantum mechanics, something like the theory outlined in https://blog.stephenwolfram.com/2015/12/what-is-spacetime-re...

It has been verified out to several kilometers. See the Geneva experiments at https://en.m.wikipedia.org/wiki/Bell_test_experiments?wprov=...

Yes, the idea is that the particles themselves are connected to each other like two ends of a wormhole, so the signal about applied measurement doesn't have to take the 3km long path outside, but can directly go from one entangled particle to the other.

If this were the case we would expect to be able to send information through entangled particles. But it's well known that it cannot be used that way.

Entanglement would allow sending information if it were possible to clone quantum states: https://en.wikipedia.org/wiki/No-communication_theorem#Some_.... If we are looking for a hidden variable theory, we need an explanation for the no-cloning theorem independently of the entanglement issue, so this does not add any additional restriction.

> space is more like a graph, and entangled particles in addition to the normal long path through the graph are also connected directly

Shouldn't this change the wave function though?

Yes, if a hidden variable theory like this is constructed, it will not have a wavefunction and will be deterministic.

Even more so, randomness as people usually use it exists independently of "true physical randomness." If I flip a typical coin in a typical manner, then regardless of whether quantum mechanics is truly random, a sufficiently detailed measurement fed into a sufficiently powerful computer can predict it essentially perfectly. It's a long way of saying a coin flip is close enough to deterministic in practice.

But there's something going on there that I can't predict, and we need a language to talk about it. That language is the same whether "true physical randomness" exists or not. Calling a coin flip "50-50" is just as valid in a deterministic universe as it is in a random one. Probability is a language more than a theory.

Too many people get hung up on "true" randomness when that's probably not relevant to the situation they're describing.


You’re talking about something different from algorithmic randomness, which would be a property of (infinite...) sequences. When I throw a coin in the air and say “heads or tails”, you have to make a prediction as to where it will land based on the information you have. Not someone else’s information— _your_ information. If you feel like you have as good a chance of winning as losing, you’ll tell us you think it’s a fair — a “random” — coin toss.

You’re talking about betting behavior. It’s not a wild tangent the way some would think — from this we get the structure of probability theory, we get derived preferences, we get the entire mechanics of Bayesian inference that underlies all systems which learn.

From studying this math — the math of the coin flip with a dollar on the table, the math of the universe — we learn that there _cannot be_ “chances” or “odds” out there in the world that we can see if we look hard enough— what we see if we look closely enough will be the structure of our perception, which is a perception itself constituted by subjective probabilities.

We learn that belief states must be subjective, must be local. There can’t be a fact of the matter in any way that makes sense to anyone. And yet we also can prove that we cannot truly reserve judgment, that we cannot be true skeptics— unless we are also not anything recognizable as scientists.

So, yes, absolutely. In the consensus reality we all agree to occupy, how you decide to answer the question “heads or tails” matters — in the deepest possible sense — just as much as anything else. And folks can get _so_ wrapped up in the idea of what an imaginary computer might do, they forget to notice what they are doing with every waking moment.

If you want to get really crazy about it, you can talk about betting about what the 999999999th digit of pi is. It's not random, it's not even really unknown, but I sure don't know the answer without a bunch of extra work.

This isn't true. Given a chaotic system with enough degrees of freedom (even a deterministic one), there is no computer powerful enough.

In classical computing maybe. And with phase space analysis millions or even tens of millions of events can be visualized for emergent attractors that would indicate an underlying pattern.

No. Chaotic systems mean that if you integrate the time horizon out, there'll be a time after which your measurement precision isn't precise enough, and you'll get an unpredictable bifurcation. For a typical you-or-me coin flip, the necessary precision for a hypothetically powerful computer to predict with say 99.99% accuracy is probably well above the quantum-weirdness level of accuracy. And that 99.99% accuracy (or even 90% accuracy) is different from the 50% without the measurement and computer that necessitates a 50% theory or 50% language of coin flipping.
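That sensitivity to measurement precision can be demonstrated with the textbook chaotic system, the logistic map at r = 4. A sketch where two trajectories start 10^-6 apart and the loop reports when they stop agreeing:

```python
r = 4.0                      # logistic map x -> r*x*(1-x); fully chaotic at r = 4
x, y = 0.400000, 0.400001    # initial conditions differing in the sixth decimal

diverged_at = None
for step in range(1, 101):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if diverged_at is None and abs(x - y) > 0.1:
        diverged_at = step

# the 1e-6 gap roughly doubles each step, so divergence takes ~20 iterations
print("trajectories differ by >0.1 after", diverged_at, "iterations")
```

Each extra digit of initial precision buys only a few more steps of predictability, which is the time-horizon point in a nutshell.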

>No. Chaotic systems mean that if you integrate the time horizon out, there'll be a time after which your measurement precision isn't precise enough, and you'll get an unpredictable bifurcation.

How is that substantively different from what I said?

>99.99% accuracy is probably well above the quantum-weirdness level of accuracy.

What is the quantum-weirdness scale, and what is well above it? hbar is 34 zeros out.

You said it's untrue that you can predict a typical coin flip because it's a chaotic system. I laid out the limitations of chaotic systems and argued they don't apply to a typical coin flip. Are you arguing about predicting "essentially perfectly?" I use that to mean predicting with a couple nines of accuracy.

By quantum weirdness scale, I mean that the accuracy (epsilon) of a necessary measurement includes accurate position and momentum measurements. If epsilon is too small, we might not be able to hypothetically measure both to the necessary precision. I'm guessing that the necessary epsilon for essentially perfect prediction is large enough that you can hypothetically measure both to within that precision.

If you like, you can work out the maximum number of degrees of freedom that could theoretically be involved in something you could call a fair coin toss.

Hint: it’s a free body under rotation on a parabolic trajectory. You won’t end up needing Tony Stark hardware for this one.

Bonus factoid before bed: coins flipped by humans empirically have a small (~51%) bias toward heads. Supposedly folks tend to start with heads up more often, and the coin tends to flip over 0 times more often than you’d think, and that adds up to a bias you can measure in an afternoon.

Well, this is where the law of large numbers comes into play. Flipping a coin half a dozen times will likely land several points away from 50:50; randomness isn't an emergent property of a coin-flip dataset until hundreds or perhaps thousands of flips later, assuming a precise coin-flip mechanism and a fair coin. So randomness, at least within this context, is a function of time.
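A sketch of that convergence with a simulated fair coin (seed fixed for reproducibility):

```python
import random

random.seed(42)
for n in (6, 100, 10_000, 1_000_000):
    heads = sum(random.getrandbits(1) for _ in range(n))
    print(f"{n:>9} flips: {heads / n:.4f} heads")  # drifts toward 0.5000
```

The small runs wander by several points; the million-flip run sits within a fraction of a percent of 50:50.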

That's not what I was trying to get at. For even a single coin flip, we still need a language to talk about the situation when you and I can't predict it. If I flipped a hypothetical coin that poofs out of existence after a single flip (or more generally single-time events like elections or poker games) the usual probability language applies. No law of large numbers necessary.

You mean "unpredictable"?

Exactly, yes. We have a language of mathematical unpredictability that applies to many things independently of whether the universe is deterministic. Fundamental unpredictability is irrelevant to practical unpredictability, but when you replace the word "unpredictable" with the word "random" people get weird about it. It's the same damned mathematical theory. Fundamental randomness is irrelevant to practical randomness.

Perhaps that language is time. Literally no research has been done into the analysis of precise time series observer effect data.

Headline reminds me of this Dilbert cartoon: https://dilbert.com/strip/2001-10-25

A troll tour guide says, "Over here we have our random number generator." The troll places its hands on a slab of rock and relays the message of "nine nine nine nine nine nine." Dilbert asks, "Are you sure that's random?" The troll responds, "That's the problem with randomness. You can never be sure."

Maybe the message was part of the digits of pi (https://en.wikipedia.org/wiki/Six_nines_in_pi)?
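For what it's worth, the six nines can be checked from scratch. A sketch that computes pi to 1,000 decimal places with Machin's formula using only integer arithmetic, then locates the run (the Feynman point, starting at decimal position 762):

```python
def arctan_inv(x, one):
    # arctan(1/x) = sum_{k>=0} (-1)^k / ((2k+1) * x^(2k+1)), scaled by `one`
    total, power, k = 0, one // x, 0
    while power:
        term = power // (2 * k + 1)
        total += term if k % 2 == 0 else -term
        power //= x * x
        k += 1
    return total

def pi_decimals(n):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    one = 10 ** (n + 10)  # ten guard digits absorb truncation error
    pi = 16 * arctan_inv(5, one) - 4 * arctan_inv(239, one)
    return str(pi)[1:n + 1]  # decimal digits after the leading "3"

digits = pi_decimals(1000)
print(digits.index("999999") + 1)  # the Feynman point: decimal position 762
```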

What sort of actions are truly random? I can understand why a coin flip can be viewed as deterministic, because if a precise robot had all the situational knowledge, it could flip heads every time.

But radioactive decay is supposed to be random. As in, every atom is the same, but some, randomly, decay. That never made sense to me.

If no physical event is truly random, then perhaps all physically realizable one time pads are compressible and thus crackable, and perfect security in a physical setting is impossible.

On the other hand, randomness is not the same as nondeterminism. We could have a specific infinitely long incompressible bitstring that is the source of randomness in our universe, but the bitstring is not changing, so it is deterministic. There would be no empirically distinguishable difference between this and an actual undetermined source of randomness.

And finally, what we call random and nonrandom is essentially an arbitrary choice based on a certain mechanism called a Turing machine. As Dilbert says the number 4 is a random number, and the only random number you need. It is unclear what makes compressible bitstrings special, although it is clear that they are rare. But why does observing a compressible bitstring demand an explanation? We could hypothesize reality is fed by an infinitely long bitstring which some parts are compressible and some parts incompressible. In fact, an infinite random bitstring is guaranteed to have arbitrarily long compressible subsequences, so we could explain any empirical observation of compression as being due to an infinite random bitstring and devoid of true explanation beyond itself.

Irrelevant nitpick: pretty sure you're thinking of XKCD, not Dilbert.


No, OP meant Dilbert. Here it is: https://dilbert.com/strip/2001-10-25

Yeah, Dilbert's (older) version of the joke uses 9.

> But radioactive decay is supposed to be random. As in, every atom is the same, but some, randomly, decay. That never made sense to me.

Because there is no known mechanism such as your hypothetical precise robot with perfect situational knowledge to determine the radioactive decay.

But is there a "cause" of the decay? It is random, nah? Unlike the causes of the coin flipping heads, which can be modelled as mechanical interactions

I sincerely doubt that anything is truly random; there has to be some type of cosmic drummer behind the scenes biasing certain events. Case in point: the conflict between molecular Darwinism and the numbers associated with the human genome. Approximately one billion nucleotide positions, with four possible bases in each location, gives a combinatorial explosion of 4^1,000,000,000 possibilities, a number waaaaaay larger than the 10^120 universal complexity limit, the 10^80 stable elementary particles in the universe, or the generally accepted age of the universe (~10^40). The monkeys-typing-Hamlet thing is computationally intractable.

The main idea of "molecular Darwinism" is that the initial life form had a very short DNA [1]. As species evolved, the short DNA evolved and grew longer [2].

* For example, some genes are repeated: a bad copy may duplicate a gene and the DNA gets longer. Viruses may cause some duplications too.

* Some genes are almost repeated; each copy has a slightly different function, so each one has a variant that is better for its function. The idea is that a copying error made two copies of the original gene, and then each copy slowly evolved in a different direction.

* Some parts of the DNA are repetitions of the same short pattern many, many times. IIRC these appear near the centers and the ends of the chromosomes, and are useful for structural reasons, not for encoding information. The DNA can extend these ends easily because they are just repetitions of the pattern.

* Some parts are just junk DNA that is not useful, but there is no mechanism to detect that it is junk and eliminate it, so it is copied from individual to individual, and from species to species, with random changes. (Some of this junk may turn out to be useful.)

So the idea is that the initial length was not 1,000,000,000; the length increased with time.

Your calculation does not model the theory of "molecular Darwinism". Your calculation is about the probability that a "human" miraculously appearing out of thin air with a completely random genome would happen to get a correct one [3].

[1] Or perhaps RNA, or perhaps a few independent strands of RNA that cooperate. The initial steps are far from settled.

[2] It's not strictly increasing; the length may increase and decrease many times.

[3] Each person has a different genome, so there is no single perfect genome. The correct calculation is not 1/4^1,000,000,000 but some-number/4^1,000,000,000. It's difficult to calculate the number of different genomes that are good enough to be human, but it's much, much smaller than 4^1,000,000,000. So let's ignore this part.

Again, and irrespective of how much genome information was there initially and what it eventually became, you are still talking about a final optimization problem of size 4^1,000,000,000. Even one tenth of the human genome is an unfathomably large search space to randomly iterate toward, given the generally accepted statistics cited above. The math behind stochastic molecular Darwinism doesn't work out at all.

I don't see where you are getting the idea that humans had to be pulled out of a hat of all possible genetic sequences.

They, like, evolved, right? As the GP says, there was a short sequence that worked, a little got built on, a little more...

There was never any time that any creature was generated by random choice.

Got math?

What in the world are you talking about?

This thread of discussion is about the computationally intractable nature of 4^1,000,000,000

Got math? Maybe post a proof?

Tell me why evolution would require all of those combinations to be tried?

Edit: Microsoft Windows 10 is 9 GB. It would be impossible to try all 256^9,000,000,000 different programs of that size. Yet Windows exists, and most of us believe it's contained in those 9 GB.

So per your logic the Windows 10 operating system was created by random iteration of x86 opcodes over a lengthy period of time? Huh?

Exactly the opposite. Just because there are so many possibilities doesn't mean that all of them have to be tried or make sense.

You wouldn't code that way and nature doesn't either.

If you just want to talk about how computationally tractable it is, the math is trivial. Optimize one base pair at a time. Now it's an O(4) problem repeated over a billion generations, most of them bacterial, where a generation is measured in minutes.

In practice the changes happening in each generation are all sorts of different rearrangements, but that's different from proving the basic and obvious fact that when you have multiple steps you don't have to spontaneously create the entire solution at once.
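To make the "one base pair at a time" point concrete, here's a toy sketch (the target string and match-counting fitness are illustrative stand-ins, not a model of real biology): cumulative selection needs on the order of 4*L fitness checks rather than 4^L blind guesses.

```python
import random

rng = random.Random(0)
BASES = "ACGT"
L = 300
target = [rng.choice(BASES) for _ in range(L)]  # illustrative "fit" genome

def fitness(genome):
    # Toy fitness: number of positions matching the target.
    return sum(a == b for a, b in zip(genome, target))

genome = [rng.choice(BASES) for _ in range(L)]
evaluations = 0
for i in range(L):
    # Try each of the four bases at position i and keep the best: O(4)
    # work per position instead of searching all 4**L genomes at once.
    scored = []
    for base in BASES:
        trial = genome[:i] + [base] + genome[i + 1:]
        scored.append((fitness(trial), base))
        evaluations += 1
    genome[i] = max(scored)[1]

print(evaluations)  # 1,200 fitness checks (4 * 300), versus 4**300 for blind search
assert genome == target
```

The point isn't that evolution literally fixes one base at a time; it's that when improvements accumulate, the cost scales with the length of the genome, not with the size of the state space.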

Bogosort will never ever sort a deck of cards. Yet it takes mere minutes to sort a deck of cards with only the most basic of greater/less comparisons. Even if your comparisons are randomized, and only give you the right answer 60% of the time, you can still end up with a sorted-enough deck quite rapidly.

(Why sorted-enough? Remember that reaching 'human' doesn't require any exact setup of genes, every single person has a different genome. It just has to get into a certain range.)

There's no random iteration, it's more like stochastic gradient descent with noise. Your number isn't correct even if only because of codon degeneracy.

Haha, so biological neuronal processes utilize a method of gradient descent? Perhaps you should submit your findings to the Nobel Prize Committee :)

Again, this thread is about the computationally intractable nature of 4^1,000,000,000.

Got math? A proof maybe to support your statements?

>>> you are still talking about a final optimization problem of 4^1,000,000,000.

There is no final optimization step that analyzes the 4^1,000,000,000 possibilities. We are not the best possible human-like creature with 1,000,000,000 base pairs.

> method of gradient descent

Do you know the method of gradient descent? Nice. It is easier to explain the problem if you know it. In gradient descent you don't analyze all the possible configurations, and there is no guarantee that it finds the absolute minimum. It usually finds a local minimum and you get trapped there.

For this method you need to calculate the derivatives, analytically or numerically. Then, looking at the derivatives at an initial point, you select the direction to move in for the next iteration.

An alternative method is to pick a few (10? 100?) random points near your initial point, calculate the function at each of them, and select the one with the minimum value for the next iteration. It's not as efficient as gradient descent, but just by chance about half of the random points should give a smaller value (unless you are too close to the minimum, or the function does something strange). So even this randomized method should find the "nearest" local minimum.

The problem with DNA is that it is a discrete problem, and the function is weird: a small change can be fatal or irrelevant. So there is no smooth function to which you can apply gradient descent, but you can still try picking random points and selecting one with a smaller value.

There is no simulation that picks the random points and calculates the fitness function. The real process happens in the offspring: the copies of the DNA have mutations, and some mutations kill the individual, some do nothing, and some increase the chance to survive and reproduce.
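A minimal sketch of that mutate-and-select process (the target string and fitness function are purely illustrative; real selection has no explicit target): random point mutations plus keeping the fittest offspring reach a 200-base target in a modest number of generations, with no derivatives or gradients anywhere.

```python
import random

rng = random.Random(1)
BASES = "ACGT"
L = 200
target = [rng.choice(BASES) for _ in range(L)]  # purely illustrative target

def fitness(genome):
    # Stand-in fitness: similarity to an "environmentally fit" genome.
    return sum(a == b for a, b in zip(genome, target))

def mutate(genome):
    # One random point mutation in a copy of the genome.
    child = genome[:]
    child[rng.randrange(L)] = rng.choice(BASES)
    return child

genome = [rng.choice(BASES) for _ in range(L)]
generations = 0
while fitness(genome) < L:
    generations += 1
    # A handful of mutated offspring; selection keeps the fittest survivor.
    offspring = [mutate(genome) for _ in range(10)]
    genome = max(offspring + [genome], key=fitness)

print(generations)  # far, far fewer than the 4**200 tries blind search would need
```

This is exactly the "pick random nearby points, keep the best" method from the comment above: no global search, no gradient, just variation plus selection.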

Would you please not be a jerk on HN, no matter how right you are or how wrong or ignorant someone else is? You've done that repeatedly in this thread, and we ban accounts that carry on like this here.

If you know more than others, it would be great if you'd share some of what you know so the rest of us can learn something. If you don't want to do that or don't have time, that's cool too, but in that case please don't post. Putting others down helps no one.


>biological neuronal processes

Who is talking about neurons? Beneficial random mutations propagate, negative ones don't, on average. In this way, the genetic code that survives moves along the fitness gradient provided by the environment. The first self-propagating structure was tiny.

It's not literally the gradient descent algorithm as used in ML, because individual changes are random rather than chosen according to the extrapolated gradient, but the end result is the same.

>computationally intractable nature of 4^1,000,000,000

which is a completely wrong number, even if only because of codon degeneracy. Human DNA encodes only 20 amino acids + 1 stop signal, using 64 different codons; different sequences encode the same amino acid.

You're of course free to "doubt that anything is truly random" and to suspect that "there has to be some type of cosmic drummer," but I feel compelled to point out that your "case in point" completely fails to support your opinion.

Your example insinuates that (a) all of the human genome is required to correctly model the human phenotype, i.e. each bit is significant, and, more importantly, (b) the human genome came into existence as-is without a history of stepwise expansion and refinement.

I can't know whether you're a creationist, but I will point out that your attempted argument is on (e.g.) #8 on Scientific American's list of "Answers to Creationist Nonsense" (https://www.scientificamerican.com/article/15-answers-to-cre...). Amusingly, SciAm's rebuttal even explains how the "monkeys typing Hamlet" fails as an analogy to the human genome.

I mean, while you might have 4^1,000,000,000 possible genomes, only a vanishingly small fraction of those have ever existed.

Your argument is essentially the anthropic principle, the entire "we are here because we are here" thing. Even the Second Law of Thermodynamics counters stochastic evolutionary strategies; the math behind nondeterministic molecular Darwinism is simply not possible given the youth of the universe.

You’re definitely reading something I didn’t write.

All I said is that this is a large state space, which has been largely unexplored. The reason it’s largely unexplored is because most of the state space is useless, inert garbage. The amount of time it takes to create a genome this large is proportional to the size of the genome, not the size of the state space. That’s how evolution by natural selection works. If you hypothesized a world without evolution, where things appeared completely by chance arrangement of molecules, that’s when the size of the state space becomes important.

So I would say that your argument is not an argument against molecular evolution, it is an argument against something else.

> the math behind non deterministic molecular Darwinism is simply not possible given the youth of the universe.

Doesn't it also depend on the size of the universe? We have no idea how big it really is. It could be infinite, in which case it's not only likely but inevitable.

What does the Second Law of Thermodynamics have to do with evolution? If you think Earth is an isolated system, just go out on a cloudless day and see for yourself why that is not the case.

The only drummer needed, is environmental pressure. This is well understood and computationally ordinary.

Last time I argued with someone who didn’t believe in evolution, I went home and wrote something which implemented it. It took me half an hour and worked first time. We’ve been using simulated evolution as one of several ways to train AI for quite a long time now.

When I shuffle a deck of cards and put it down on the table the top card doesn't change anymore. Yet anybody will consider the first card to be random.

In practice, randomness is about lack of knowledge, not about the actual processes.

(Edit: Sorry, but this clickbait title redefining randomness as something other than what everybody understands it to be annoyed me.)

I'd argue probability is about that (given information at a time), i.e. everything has probability 1 or 0 given perfect information (and computation/inference) about a deterministic universe, but some other number (by a metric) given partial information (and perfect computation/inference).

Randomness is perhaps a description of correlation? Which obviously relates to probability, prediction of uncertain outcomes, but maybe more general?..

Note that with this definition everything is a hypothesis. Gravity is just a hypothesis. If you drop a rock tomorrow, the hypothesis is that it will fall with an acceleration of 9.8 m/s^2 = 32.2 ft/s^2. [With the usual approximations: the air density and viscosity are small, it is a rock and not a helium balloon disguised as a rock, the wind is not too strong, ...]

We have not proven that if you repeat the experiment tomorrow you will get the same result; it is only a hypothesis.

The randomness of QM is not proven, but for now we don't have a better alternative for predicting the results of the experiments. Just like gravity, which has not been proven, but for now we don't have a better alternative for predicting the results of the experiments.

By that logic, the absence of the rabbit hole with the Cheshire Cat in it isn’t proven, too.

The equations of quantum mechanics aren't random, only their interpretation, in the form of the Born rule.

Unitary evolution means there is neither information loss nor gain, and if there were anything random you would at the very least expect to see information gain (as new bits of information are created from the "random" result of an observation).

Randomness in quantum mechanics isn't even a hypothesis, it's an interpretation.

Important nitpick: the equations that govern dynamics in quantum mechanics aren't random, and evolution is unitary. However, the process of "measurement" is described by an (obviously non-unitary) projection operator onto one state; the so-called "collapse". If you, for example, attempt to answer the very real physical question "given two particles with some total joint state Psi, one is measured and found to be in state Phi, what state is the other particle in?", you would have to use such an operator. There isn't anything interpretive about this, as such experiments have been done again and again. It's a standard part of the mathematical framework.
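As a concrete instance (using a standard Bell state as the example):

```latex
% Maximally entangled pair; particle 1 is then measured.
\[
  |\Psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|0\rangle_1 |1\rangle_2 + |1\rangle_1 |0\rangle_2\bigr)
\]
% Finding particle 1 in |0> applies the projector P = |0><0| \otimes I,
% followed by renormalization:
\[
  |\Psi'\rangle \;=\; \frac{\bigl(|0\rangle_1\langle 0|_1 \otimes I_2\bigr)|\Psi\rangle}
                           {\bigl\| \bigl(|0\rangle_1\langle 0|_1 \otimes I_2\bigr)|\Psi\rangle \bigr\|}
  \;=\; |0\rangle_1 |1\rangle_2
\]
% The projector satisfies P^2 = P (it is not unitary), so this step,
% unlike the dynamics, is not invertible.
```

This is the whole non-unitary content of "collapse" in the standard formalism; what it means physically is the part left to interpretation.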

Now, whether the underlying physics is truly random, or whether it's deterministic and the projection only represents a sort of Bayesian update of prior information (a la MWI), that is indeed a matter of interpretation. And completely unfalsifiable by definition, and therefore not even really a question for physicists. It's philosophy at best.

Cosmology was once considered philosophy and totally untestable. Just saying...

A couple of things are at the heart of the matter here.

Hypercomputation (solving the halting problem) and infinite memory/storage.

Ascribe either of these to nature, and nature can be deterministic while the probability of us discerning its RNG's operations is still 0.

This is a good video explaining the different intricacies of how "God's dice" might be constructed:


Many phenomena thought to have been random due to their quantum nature have been found to be based on their initial conditions instead. See spontaneous emission photon phase for example: https://iopscience.iop.org/article/10.1088/1367-2630/9/11/41...

It's worth mentioning that most cryptographic algorithms are designed to be strong using pseudorandom generators, so true randomness isn't a requirement (although obviously some unpredictable starting point is required to get the ball rolling).
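A toy illustration of that "small unpredictable seed, deterministic expansion" idea (SHA-256 in counter mode as a stand-in; a sketch only, real systems should use a vetted construction like HKDF, an HMAC-DRBG, or the OS CSPRNG):

```python
import hashlib

def prng_stream(seed: bytes, nbytes: int) -> bytes:
    # Toy SHA-256-in-counter-mode generator: deterministically expands a
    # short seed into a long keystream. Illustrative only, not vetted.
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

seed = b"short unpredictable seed"   # stands in for the true-random starting point
stream = prng_stream(seed, 1024)
assert stream == prng_stream(seed, 1024)     # same seed, same "randomness"
assert stream[:64] == prng_stream(seed, 64)  # consistent prefixes
```

All the unpredictability lives in the seed; everything after it is fully deterministic, which is exactly why cryptography doesn't need "true" randomness beyond that initial entropy.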

Well unless you use a one time pad but nobody does (hopefully).

> Well unless you use a one time pad but nobody does

Was under the impression intel agencies and militaries use OTP's regularly and the keys are carried in diplomatic pouches around the world.
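For reference, the pad itself is trivial; all of the difficulty is in generating, distributing, and never reusing truly random key material. A minimal sketch:

```python
import os

def otp(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so one function both encrypts and decrypts.
    assert len(key) >= len(data), "the pad must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = os.urandom(len(message))  # must be truly random, secret, and never reused
ciphertext = otp(message, key)
assert otp(ciphertext, key) == message  # round-trips exactly
```

The information-theoretic security proof assumes the key is uniformly random; that's precisely where the "is anything truly random?" question bites, per the comment upthread about compressible pads.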

But one implication of the article is that understanding the origin of life, indeed, a reproducible experiment to create life from scratch, could fall prey to Hofstadter's Law.


>Neither Heisenberg's uncertainty principle nor Bell's inequality exclude the possibility, however small, that the Universe, including all observers inhabiting it, is in principle computable by a completely deterministic computer program, as first suggested by computer pioneer Konrad Zuse in 1967 (Elektron. Datenverarb. 8, 336–344; 1967).

Leibniz had already proposed essentially the same idea centuries earlier. And there have probably been people who said the same even earlier.

(Also, mentioning Konrad Zuse made me directly suspect that the author is German. He's Swiss, but close enough.)

In terms of the physical universe, all of our models, theories, and laws are based on incomplete knowledge and observation. That being said, our models and theories provide a means of investigating the universe. Where we need to be careful is in assuming that we "will" be able to "know" what is happening. We do not have "infinite" knowledge (there is always more to learn), and so any models and theories we come up with can (and will) be superseded by later data that we collect.

"Randomness" in the small can and does appear non-random in the large: we make predictions about what we will see over large numbers of events even when we are unable to determine whether any single event will fulfill that prediction. Radioactive decay is a good example of this. Double-slit interference patterns are another. Much, if not all, of our technology depends on this, whether in semiconductor design or in manufacturing materials such as steel or concrete.

What does happen is that we have more and more interesting research areas in which we can investigate the underlying principles that govern our universe. But we must not make the mistake of assuming we will "know" what those principles are. We can and do develop workable and useful models and theories to help us get a handle on understanding this universe we live in. We live on one small planet in an isolated region of our galaxy in an extraordinary and immense universe. We do not have the ability to explore that universe in any detailed way except by proxy observations. So, instead of getting caught up in being "sure", let us have fun exploring everywhere we can, continue to gather data, discuss what this data means, and develop workable theories and models that we can use.

As a disciple of the living God who created all that we see and do not see, I consider that the universe has a set of specific rules and laws by which it operates and that we can and should try to understand what those laws are. For me it is an act of worship to investigate and understand the what and the how.

For those of other belief systems, whether Hindu, Buddhist, Muslim, atheist, etc., there is just as much incentive to study the universe around us and understand what it is and how it works. Each viewpoint may raise additional questions, like "why", that are of no concern to the other viewpoints.

But what it all boils down to is that we live in a wonderful and extremely interesting universe, and there is much to learn about it and fun to be had while learning about it.

Little if any empirical research has been done into the quality of entropy associated with repeatedly observing a quantum state. It would be pretty easily accomplished using an RTOS (RT_PREEMPT or Xenomai) with GPIO sampling, followed by n-dimensional phase space analysis of the dataset to determine whether any patterns emerge. The fields of chaos theory and nonlinear dynamical systems analysis offer plenty of tools with which to probe the fundamental nature of quantum randomness.
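As a trivial first step in that direction, here's a sketch of a lag-1 serial-correlation check, with seeded pseudorandom samples standing in for the GPIO-sampled dataset (swap in the real measurements):

```python
import random

rng = random.Random(42)
# Seeded pseudorandom samples standing in for GPIO-sampled quantum data.
samples = [rng.random() for _ in range(100_000)]

def lag1_correlation(xs):
    # Pearson correlation between the sequence and itself shifted by one
    # step; structure in a delay-embedded phase space often shows up here.
    a, b = xs[:-1], xs[1:]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

r = lag1_correlation(samples)
print(abs(r) < 0.02)  # uncorrelated data should give r within a few 1/sqrt(n) of 0
```

This is only the crudest test in the toolbox; delay embeddings in higher dimensions and standard suites like Dieharder or NIST SP 800-22 go much further.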

You could probably run something on D-Wave Leap and get actual quantum noise or something if you like

Quantum random number generators do exactly this and have been demonstrated and analyzed many times.

This is one of those questions that ultimately comes down to the exact definitions you use for everything, because the commonly-used interpretations won’t cut it. Same bucket as questions like “does free will exist?” (define “free will”)

I started writing a longer version of this comment but I think that a core part of the question is whether “randomness” is an epistemological convenience, a statement about “order“ or “rules”, or something else.

What I don't understand about quantum physics is why doesn't everything just become a statistical smudge after a couple iterations? Why is reality so coherent?

My takeaway is that this is the reason some physicists claim 'everything is information' because there is some underlying form that gives the statistical quantum physics a consistent pattern instead of devolving into randomness.

Information theory defines information starting from probabilities, so there is not much escaping from that.

I think the physicists mean something like mutual information, i.e. some underlying reality describes the global probability distribution better than the individual probabilities themselves. It is this underlying reality that is 'really real' and the quantum fluctuations illustrate it like a flickering TV screen shows a globally consistent picture.

Back in 2009 Alex Dragusin hypothesized: "In a finite space there must be a finite number of events, which are strictly related (in a cause-effect chain) to the finite constraints, are therefore not random."

In the absence of high enough computational resolution one would perceive this as randomness. This is also related to the quest for determining if we live in a simulation.

"In a finite space there must be a finite number of events" seems to have counterexamples: take events that occur in vanishingly small regions; then you can fit an infinite number of them in a finite space. Think of converging infinite series.
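Concretely, a geometric series already does the job:

```latex
\[
  \sum_{n=1}^{\infty} \frac{1}{2^n} \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; 1
\]
```

so infinitely many disjoint regions, each small enough to host its own event, fit inside a single unit interval.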

I think the author means discrete when they say finite. In that case, they mean doubly special relativity: https://en.m.wikipedia.org/wiki/Doubly_special_relativity

Even simpler: there are an infinite number of real numbers in the interval between 0 and 1. Indeed, between any two real numbers. If spacetime itself isn't quantized then you get an infinite number of possible events, even within a finite hypervolume of spacetime.

I'm going to try my luck here. In the not-so-distant past, I came across an article whose content was something along the lines of a 'theory of randomness' or 'history of randomness'. From the way the website looked, it seemed to be written in basic HTML/CSS.

If someone knows which site I might be alluding to, please post it here.

A pseudo-random noise source may be just as good as the real thing, for producing satisfyingly even-handed results.

But it turns out to be a lot harder to design a really good emulator of randomness than one might guess. Certain Monte Carlo simulations using the Mersenne Twister turned out to be oddly but unmistakably biased.

Can you explain to a layman what's odd about it?

Mersenne Twister was maybe the first in a class of random number generators that have lots of state -- an order of magnitude more than previous designs. Each time a number is pulled from it, some of the state is stirred, and the next number comes from mostly other bits, and stirs others. They have to be fast, so you can't touch too many bits per number extracted; taking about the same time for each number is nice, too.

So, one measure of generators is how many numbers you pull before you get the same sequence again. MT's cycle is very long, so in practice you never see a repeat, even if you see the same number many times. (In many simpler generators, seeing 3 then 8 means next time you see 3, the next number will be 8. A great deal of simulation was done with such generators.) The numbers from an MT satisfy many different measures of apparent randomness.

Monte Carlo investigations consume very many numbers. They might use the numbers in a more or less periodic way, so that any match-up between cycles in the problem and cycles in the generator can skew the results. The main MT cycle is very long, so any skewed results probably point to lesser cycles as the bits stirred are later encountered again. But it's hard to imagine a way to detect such cycles deliberately from the bits you get out. Encountering a process that finds them accidentally is amazing.
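The extreme opposite case makes the "seeing 3 then 8" point concrete: in a minimal linear congruential generator, the entire internal state *is* the last output, so any value's successor is fixed forever. A sketch with tiny illustrative parameters:

```python
def lcg(x, a=5, c=3, m=256):
    # Tiny LCG with illustrative parameters; the output *is* the whole state.
    while True:
        x = (a * x + c) % m
        yield x

g = lcg(1)
seq = [next(g) for _ in range(600)]

# Whenever a value reappears, the same successor must follow, because the
# generator remembers nothing except its last output.
succ = {}
consistent = True
for u, v in zip(seq, seq[1:]):
    if u in succ and succ[u] != v:
        consistent = False
    succ[u] = v

print(consistent)  # True: once you've seen "3 then 8", 3 is always followed by 8
```

Mersenne Twister's enormous hidden state exists precisely to avoid this failure mode: the same output can be followed by many different successors, depending on bits you never see.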

Fascinating, thanks. Not sure I understood it all, but I appreciate the reply.

In relation to religion, this is interesting too: the Big Bang that happened 'randomly' could also be explained as a series of causes and effects. On another note, a miracle according to scripture could also be a series of causes and effects, fast-forwarded, basically.

It's comical that the article suggests we always try to falsify randomness, when the "simple" deterministic explanations (Superdeterminism) at this point are unfalsifiable.
