My favorite anecdote about him, which was not widely known, involved what used to be called "nut letters." Before the internet, if you were a famous scientist, particularly a famous physicist, you would receive actual letters from people all over the world asking for help with their perpetual motion machines, time travel devices, and similar nutty theories. Having worked on black holes, gravity, general relativity, and the bomb, Wheeler was quite a nut magnet. He was also blessed with a bit of OCD in the way he organized and categorized all his notes (his annotated bibliography for Misner, Thorne, and Wheeler's Gravitation filled many shelves in the library). John didn't just receive nut letters. He received, read, organized, filed, classified, and acted on nut letters. His preferred response was: "I'm afraid I'm not very knowledgeable in the area of your work, but I believe you should contact _____, who is working on similar conjectures and may be a good source of additional insights," at which point both parties in that conversation would be so ecstatic to be talking with someone recommended to them by the great John Wheeler that they'd never bother him again. While in grad school, I happened to read an article in the New York Times on a perpetual motion machine (their weekly Science Times section was fantastic but did occasionally step into pseudoscience topics). At the end of the article the main researcher thanked John Wheeler for having introduced him to the theorist who had helped him refine his understanding of the mechanisms at play in his invention, and I couldn't help but smile at the wonderful successes of John Wheeler's nut dating service.
In this episode of Tested, Joe DeRisi describes how a random letter from a snake owner led to some interesting published results in molecular biology.
The title isn't wrong, either; this one is terrifying (so far).
Perhaps that history could still collapse into existence in stages, as subsequent observations of it became more detailed. The first observer would see far-off galaxies as small white dots in the sky, but not the composition and interaction of each, so perhaps those aspects don't decohere until later observers see more using telescopes.
And not only history but also the laws of physics could come into existence in this way. If earthlings are the only observers in the Universe, perhaps dark matter holding galaxies together in their shape is a law of physics which only came into existence when observers on Earth could start seeing inside other galaxies.
No, you wouldn't. Someone observing you would. Am I getting relativity wrong?
Any incoming light would be blueshifted to infinity, or redshifted to zero, so if any actually touched you there would be infinite energy there. Luckily no light can actually reach you; nothing can. You would have no incoming sensory information at all.
Because of length contraction there's no distance between the start and end of your journey, so from your POV it doesn't take any time at all, since the distance was so short.
10^18 years from our perspective would only be 3 years from the perspective of something traveling at the speed of light.
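A quick back-of-the-envelope sketch of how that works, with illustrative numbers (the speed here is made up for the example; anything truly at c would have an infinite Lorentz factor):

```python
import math

def gamma(beta):
    """Lorentz factor for speed v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def proper_time(coordinate_time, beta):
    """Time experienced by the traveler: tau = t / gamma.
    Length contraction shrinks the trip distance by the same
    factor, L = L0 / gamma, which is the other way to see it."""
    return coordinate_time / gamma(beta)

# At 99.5% of c, a 10-year trip in our frame is felt as
# roughly 1 year by the traveler (gamma is about 10).
print(round(proper_time(10.0, 0.995), 2))
```

The closer beta gets to 1, the larger gamma grows without bound, which is why "3 years for the photon vs 10^18 years for us" requires an enormous but finite gamma.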
We observe things going the speed of light all the time =]
"Using the largest allowed value for the photon mass from other experiments, we find a lower limit of about 3 yr on the photon rest-frame lifetime. For photons in the visible spectrum, this corresponds to a lifetime around 10^18 yrs."
This does work under the assumption that photons have a non-zero rest mass and, as such, could decay. I probably should have phrased my post better - this is hardly settled science, and the majority of physicists would almost certainly say that photons, under our currently accepted theories, do not have rest mass. SR is pretty explicit on this :). But we can't say for sure that photons have zero rest mass - experiments have allowed us to set an upper bound on the rest mass, but they don't let us say definitively that it is zero. If photons do have rest mass, we would need to stop saying "the speed of light" and instead say "the speed of a (rest-)massless particle", though this might be worth doing in general - gluons (and gravitons, if they exist) should be similarly massless.
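The arithmetic linking the two lifetimes in the quote is just time dilation with gamma = E / (m c^2). A sketch, with assumed order-of-magnitude numbers (a ~2 eV visible photon and a ~1e-18 eV rest-mass upper bound, roughly the scale experimental limits quote):

```python
# Illustrative numbers, not the paper's exact inputs:
photon_energy_eV = 2.0    # typical visible-light photon
rest_mass_eV = 1e-18      # assumed upper bound on m*c^2

# Lorentz factor for a massive photon: gamma = E / (m c^2)
gamma = photon_energy_eV / rest_mass_eV

rest_frame_lifetime_yr = 3.0  # the lower limit from the quote
lab_frame_lifetime_yr = gamma * rest_frame_lifetime_yr

print(f"{lab_frame_lifetime_yr:.1e}")  # prints 6.0e+18
```

So a 3-year rest-frame lifetime dilates to on the order of 10^18 years in our frame, matching the quoted figure.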
The good news is that macroscopic objects exhibit behaviours that are predictable. The bad news is that, while those behaviours are related to behaviour at the microscopic scale, a full description of systems involving that many particles will likely remain out of our reach forever. Although I could see it happening that we might be able to simulate systems of a sufficient scale to convincingly demonstrate wave function collapse.
I remember reading somewhere about a geometry which described photons' existences as fundamentally different from particles with mass. Particles with mass have lifelines that change over the course of time. Particles without mass, such as photons, don't experience time and never change themselves. Photons etch out a straight line in this geometry from fixed points to other fixed points. The fixed points represent particles with mass. (something like that)
Wilczek, 2006. Quoted in a paper that I like for its thorough stab at actually specifically defining the observer. It’s a tough task indeed!
* Whoever is running the simulation cares about whether we're looking.
* They had a good reason for going with quantum mechanics rather than something that would be easier to simulate.
I know some people would say that's because they're interested in simulating conscious life and quantum mechanics is essential for consciousness. However, I find it very implausible that quantum mechanics is essential for consciousness so I'd prefer a different hypothesis for why they went with a difficult-to-simulate physics in their simulated universe. Any ideas?
We can reasonably discuss the feasibility of simulating our universe using our universe. We can do some reasonable discussion of "universes that run something like ours but with some differences". We are profoundly ignorant about everything else.
Frankly, I don't think it's a question of complexity, or even expressivity, but more one of encapsulation and information hiding.
For one, simple relative to every universe we've ever created, such as Minecraft, World of Warcraft, Second Life, etc., all of which are megabytes upon megabytes (if not gigabytes) of ad-hoc rules, specifications, hard-coded geometry, etc. Even if we ignore the visualization aspects, they are all very complicated.
It is very likely that whatever the ultimate Theory of Everything is would fit comfortably into kilobytes. Very dense kilobytes, kilobytes you could spend your entire life understanding, but kilobytes. Certainly the entire Standard Model could be packed into kilobytes even starting from a fairly simple axiomatic mathematics, with a half-decent encoding. Its complexity comes from the number of particles interacting under its rules, and the fact our human brains aren't particularly suited to running QM calculations and confuses that difficulty with "complexity", since very few people have a grasp on what complexity truly is . Most of the complexity of the Standard Model would actually be building up from the simple axioms and defining imaginary numbers and the other basic mathematic operations; once that was done, the fundamental equations to evolve state and the table of constants would not be very large. (Assuming that the parameters are not infinitely complicated real numbers, and that we can cut them off safely aften 40 or 50 decimal places without noticing a difference.)
The real universe's complexity does not appear to stem from its core building blocks necessarily, but from the staggering amount of space it occupies and the amount of computation it does, with both the Planck space and Planck time constants being staggeringly low on the one end, and the age of the universe and the expected age it should be able to survive staggeringly large on the other.
What's really shocking is that the Life rule set is probably sufficient to describe our universe. It's Turing Complete, and there's a sense in which any TC set of rules is equivalent to any other. However, it is also very likely that running our universe on Life rules would also require a massively complicated initial state, making it an unappealing theory to explain the real universe for that and several other reasons. We'd like to see something where the sum total of the complexity of the rule set and the initial state isn't that great, or is at least proportional to the amount of "stuff" the universe seems to have started out with.
Hey, there's a seed for a hard sci-fi novel: we live in a simulation (or pocket universe), the laws of which were intelligently designed to produce aggressive life forms in order to make entertaining viewing for the builders and their clients. So all the wars we decry actually saved our universe by keeping it 'interesting'.
(I find it interesting that I'm willing to allow that evolution as we know it is just a quirk of our universe's dynamics, but I assume that commerce is universal...)
This would explain both our progress in physics and the strangeness of QM, and give credence to the idea that we are in a simulation.
The power running the Universe doesn't have infinite CPU/GPU/RAM resources at their disposal, any more than a game developer at this level of reality does. Some optimizations (or hacks, to be less charitable) are necessary. Quantum behavior is one of those hacks, part of the code that manages the potentially-visible set. It would be wasteful to render or run physics on particles that nothing is interacting with.
The only questions are whether God is an Nvidia fanboy or an AMD die-hard, and if He signed an exclusive deal with Sony or Nintendo at some point in the future.
This is a common misconception. It's more difficult to simulate a quantum system than a classical system. To simulate a classical system you have to simulate one possible state. To simulate a quantum system you must simulate all possible paths.

This is the advantage of quantum computers. With the hardware for only N qubits, you can work with 2^N states (for some very specific problems that are quantum-computer-friendly).
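To make the 2^N point concrete, here's a rough sketch of the memory a classical machine needs just to store an N-qubit state vector (one complex amplitude per basis state, 16 bytes each at double precision; the function name and numbers are mine, for illustration):

```python
def state_vector_bytes(n_qubits):
    """Memory to hold the full state vector of n_qubits classically:
    2^n complex amplitudes at 16 bytes (two float64s) each."""
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    print(n, state_vector_bytes(n))
# 10 qubits ->  ~16 KB
# 30 qubits ->  ~16 GB
# 50 qubits ->  ~16 PB
```

The exponential blowup is the whole story: each extra qubit doubles the classical storage, while the quantum hardware only needs one more physical qubit.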
See also (very accessible): https://www.quora.com/Why-dont-more-physicists-subscribe-to-...
You can probably do some kind of smart representation of the wave, where you decompose it in some clever functional basis of smooth waves. With a very optimized representation the calculations would be equivalent to the usual quantum mechanics calculations. With a representation that isn't smart enough you would get something that needs more calculations.
Maybe I'm not getting it, but that still sounds simpler than simulating an exponential number of states for the entire universe.
The problem is that if you have five particles then the function ψ(r1, r2, r3, r4, r5, t) is a function of 3*5+1=16 coordinates (15 spatial, plus time). If we assume that for each particle we use a 100x100x100=10^6 grid, then for the 5 particles we need a grid of (10^6)^5=10^30 points. The space where the wave is defined grows exponentially. (As a rule of thumb, with a current notebook, you can do up to 10^9 or 10^12 calculations in a few seconds/minutes/hours.)
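The grid arithmetic above can be sketched in a couple of lines (the function and grid size are my own illustrative choices, matching the 100-points-per-axis assumption in the comment):

```python
def grid_points(particles, points_per_axis=100):
    """Spatial grid points needed for an N-particle wavefunction:
    each particle contributes 3 coordinates, so the grid has
    points_per_axis ** (3 * N) entries (time stepping is separate)."""
    return points_per_axis ** (3 * particles)

print(grid_points(1))  # 100^3  = 1_000_000
print(grid_points(5))  # 100^15 = 10^30, hopelessly beyond any computer
```

Against the rule-of-thumb budget of 10^9 to 10^12 operations, even a single time step on the 5-particle grid is off by roughly 18 orders of magnitude.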
I was also wondering about entropy as the arrow of time -- positrons are just as subject to entropy, no?
That's special relativity in a nutshell.
Did he, really?