A note -- if you're linking to arXiv, it's better to link to the abstract (http://arxiv.org/abs/quant-ph/9908043, or http://arxiv.org/abs/quant-ph/9908043v3 for this particular version) rather than directly to the PDF. From the abstract, one can easily click through to the PDF; not so the reverse. And the abstract allows one to do things like see different versions of the paper, search for other things by the same authors, etc. Thank you!
This is a terrific paper, not just because of the titular subject matter, but because it's a really great overview of physics in general, and how various core theories (quantum mechanics, relativity, information theory, thermodynamics, and even cosmology) relate to each other.
BTW, there's a simpler way to calculate a physical upper bound on computation (though this method yields a bound that is not as tight): model every elementary particle in the universe as a state machine that advances at the Planck frequency (i.e. the inverse of the Planck time). The number of elementary particles times the number of Planck times between the big bang and the heat death of the universe puts an upper bound on the number of states that such a computer could enumerate. The number turns out to be surprisingly small, about 2^500 or so. Among other surprising consequences, this shows that, if you had the ultimate data compression algorithm, any state computable by this universe could be compressed down to ~500 bits. Note, however, that if you could do that, you could compute Chaitin's omega and solve the halting problem, so optimal compression is impossible. Look up Chaitin's theorem if you want to dive into that rabbit hole.
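If anyone wants to play with the arithmetic, here's a quick back-of-the-envelope version in Python. The particle count and the cosmic lifetime are rough round numbers of my own choosing, so treat the exponent as ballpark only:

    import math

    particles = 1e80                  # rough count of elementary particles (assumed round number)
    planck_time = 5.39e-44            # seconds
    lifetime_s = 1e14 * 3.156e7       # assume ~10^14 years of usable cosmic history, in seconds

    ticks = lifetime_s / planck_time  # Planck-frequency "clock cycles" available
    states = particles * ticks        # one state per particle per tick
    print(f"~2^{math.log2(states):.0f} enumerable states")
    # prints roughly 2^481 with these inputs; the exponent moves around depending
    # on how long you let the universe run, but it stays in the same
    # few-hundred-bit ballpark as the ~2^500 figure above.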
Forgot to add in communication networks between the isolated I/O bound "supercomputers". Given one set of massively parallel computers with ID numbers 0 to something big, changing the wiring so that the peristaltic array is reversed would result in a dramatically different output.
It's kind of like a hash function: feeding in different data (by changing the "wiring") results in different outputs.
That also leads to some interesting caching-delay-type problems if processor 42 stalls until it gets data from processor 7 but, due to cosmic expansion, they're out of each other's light cones, forever. So Chaitin's constant would vary based on how fast the universe expands and what fraction of it lies outside any given part's light cone.
Fun stuff to do with the constant, assuming it actually exists (as opposed to a mere concept) and can be calculated:
Given an NSA level of network spying as a resource: if you know that 5% of raw (presumably non-executable) data should halt when executed, and that 100% or 0% of virus payloads halt (uh, yeah, whatever), then you can periodically random-sample supposedly non-executable, highly compressed video data. If it halts at a 5% rate it's probably legit; if it halts at a 100% rate you can set off the alarm bell that something funky is happening right now online -- a worm trying to spread using buffer overflows in video codec data, or something like that. Virus writers could defend against this, if they knew the constant, by making sure their virus payloads halted exactly 5% of the time to mimic real raw compressed data. Cool as this Hollywood movie plot sounds, it would probably get simplified into CSI-style "let's just enhance the pixels, ok?"
Also, the SETI people could probably do something interesting with it, although I'm not clear exactly what. If it were calculated to be exactly 7/22 (LOL), then an alien civilization repeatedly squirting 7/22 out to the world would imply that the civilization has cleared the "must be this tall to enter" bar on the first-contact ride, or that they're just really big fans of the reciprocal of pi. The analogy: if you really want to impress a pre-computer civilization, give them the result of some ridiculous factoring problem that would take Andorian-centuries or whatever to calculate by hand but can be verified in minutes if you know what you're doing.
> Forgot to add in communication networks between the isolated I/O bound "supercomputers".
Nope. This model makes no assumptions about the actual computations being performed at each particle. The particles could be communicating or not, it doesn't matter. The total number of potentially physically computable states is still the same.
I'm not sure if I understand this. The number of particles in the universe is of the order of 2^250. Let's assume that every particle can encode a single bit. Then that's 2^250 bits of information (not just 250 bits!), for a single point in time.
Yes: the speed of light. Individual particles can't interact with each other at the Planck frequency. So this model treats every particle as if it were a self-contained computer that encodes information in whatever degrees of freedom it has. Note that this model says nothing about how many bits are in each computed state. It only counts the total number of states that the universe could potentially compute. That number is ~2^500.
It would seem to me that if the observable universe has X particles, and each particle has two states, then the observable universe is in one of at least 2^X possible states, since light has had the opportunity to cross from one side of the observable universe to the other (or it could be off by only a factor of two, considering that the observer is somewhere in the middle).
Or look at it from another perspective. My computer has 4 GB of RAM. That is 2^35 bits, and thus it has 2^(2^35) possible states (huge!). Since my computer is only a small part of the universe, the universe must have a much larger number of possible states.
Nope. It seemed like too trivial a result to try to publish (though maybe someone has -- I don't know).
> the observable universe is in one of at least 2^X possible states
That's right, but it completely misses the point. Yes, your computer has 2^(2^35) possible (maybe "potential" would be a less misleading term) states, but the number of states that it will ever actually be in is bounded by the number of clock cycles the computer undergoes during its lifetime. If your computer runs at 1 GHz continuously for 10 years, it will have been in at most ~3x10^17 ≈ 2^58 states. In the lifetime of the universe, your computer will only ever be in a tiny, tiny fraction of its potential states.
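To make the potential-vs-visited gap concrete, here's the same back-of-the-envelope in Python (round numbers of my own, purely illustrative):

    import math

    ram_bits = 4 * 2**30 * 8            # 4 GB of RAM = 2^35 bits
    print(f"potential states: 2^(2^35), i.e. log2 = {ram_bits}")

    clock_hz = 1e9                      # 1 GHz, at most one new state per cycle
    seconds = 10 * 365.25 * 24 * 3600   # ten years of continuous operation
    visited = clock_hz * seconds
    print(f"states ever visited: <= {visited:.1e} ~= 2^{math.log2(visited):.0f}")
    # ~3.2e17 ~= 2^58 -- a vanishingly small fraction of the 2^(2^35) potential states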
So after 2^(2^35) steps my computer has certainly finished its computation. The state it is now in can be described by 0 bits of information, a priori. The question is, how is this useful in any way? :)
You're missing the point, which is that a physical computer in this universe could enumerate no more than 2^500 distinct states. How is this useful? Well, for example, it lets you put hard upper bounds on the size of crypto keys. For symmetric crypto, assuming your algorithm is a CSPRNG, you will never need a key larger than 512 bits no matter how advanced computer technology gets.
Is that true? Cracking a key is a highly parallelizable operation, where little communication is needed between processors. Would your statement still hold here?
Yes. Just look at the details of the calculation. It makes very (very!) liberal assumptions, and produces a very loose upper bound, i.e. 2^500 is almost certainly a gross overestimate of what is actually achievable even given arbitrarily advanced technology. The point here is that even a very loose upper bound is still quite small.
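To put a number on the key-size claim a couple of comments up (my arithmetic, taking the ~2^500 figure at face value): even if an attacker could spend the entire state budget on brute force, and even if the work parallelizes perfectly, a 512-bit keyspace is out of reach.

    STATE_BUDGET_LOG2 = 500   # upper bound on states any physical computer enumerates
    KEY_BITS = 512            # keyspace size for a 512-bit symmetric key

    # Fraction of the keyspace that could ever be tried, as a power of two.
    # Parallelism doesn't change this: it only redistributes the same budget.
    coverage_log2 = STATE_BUDGET_LOG2 - KEY_BITS
    print(f"at most 2^{coverage_log2} of the keyspace is searchable")   # 2^-12, i.e. < 1 in 4096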
Is computing Chaitin's omega the most pointless waste of time that can possibly be imagined, by definition? If you actually had the value, could you do anything with it at all?
As noted above: you could solve the Halting Problem, and compute optimal data compression. Oh, and then we could go ahead and calculate Kolmogorov complexity K(x) for any arbitrary string, thus allowing Solomonoff Induction and AIXI to work. If you can compute Omega exactly you can thereby do almost anything at all.
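For anyone curious what "solving the Halting Problem with Omega" actually looks like procedurally, here's a toy sketch. The "machine" below is a made-up four-program table, not a real universal prefix machine (for a real one, Omega is uncomputable and you'd have to be handed its bits by an oracle); it only illustrates the standard dovetailing trick: run everything in parallel, accumulate 2^-|p| for each program that halts, and once the accumulated mass reaches Omega truncated to n bits, no program of length <= n that hasn't halted yet ever will.

    from fractions import Fraction

    # Toy "prefix-free machine": program bitstring -> (halts?, steps to halt).
    # In reality this would be a universal interpreter; these values are made up.
    TOY_PROGRAMS = {
        "0":   (True, 3),
        "10":  (False, None),   # runs forever
        "110": (True, 7),
        "111": (True, 2),
    }

    def halted_within(program, budget):
        """Simulate `budget` steps of `program`; True iff it has halted by then."""
        halts, steps = TOY_PROGRAMS[program]
        return halts and steps <= budget

    # Omega for this toy machine: sum of 2^-|p| over all halting programs.
    # (Computable here only because the machine is a toy.)
    OMEGA = sum(Fraction(1, 2 ** len(p)) for p, (h, _) in TOY_PROGRAMS.items() if h)

    def decides_halting(program):
        """Decide halting for `program` using the first len(program) bits of Omega."""
        n = len(program)
        omega_n = Fraction(int(OMEGA * 2 ** n), 2 ** n)   # Omega truncated to n bits
        budget = 1
        while True:
            mass = sum(Fraction(1, 2 ** len(p))
                       for p in TOY_PROGRAMS if halted_within(p, budget))
            if mass >= omega_n:
                # Any still-unhalted program of length <= n that later halted would
                # push the true Omega to at least omega_n + 2^-n, which is impossible.
                return halted_within(program, budget)
            budget += 1

    print(decides_halting("10"))    # False: "10" never halts
    print(decides_halting("110"))   # True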