It reminds me a lot (though it is quite different) of "Flatland: A Romance of Many Dimensions". This book gave me the best intuition yet about what it would be like to experience a fourth spatial dimension.
Having said that, I still don't quite know what to think about it. I definitely don't have a mental picture of what it would be like. All I know is that when I was reading the book, I had a part of my brain clicking along, which felt like it understood what it is to experience an additional spatial dimension. Yet there is still very little actual comprehension going on in there.
[A bit of a spoiler alert follows]
It starts by explaining in meticulous detail what it is like to live in and experience a fictional 2D world instead of the 3D one we are used to.
Then the main character visits a 1D world. The book explains to the 2D inhabitant what it is like to live in a 1D world; explaining this to the 2D inhabitant is analogous to explaining the 2D world to us 3D inhabitants.
Finally, the character moves up to a 3D world, which they cannot comprehend. However, the explanations given to the character are quite intuitive and satisfying, and they help convey what it might be like for us to move up one dimension and experience it.
 - https://en.wikipedia.org/wiki/Flatland
p.s. The book is also a hilarious parody of Victorian-era attitudes towards women.
* It's impossible to simulate a universe at our universe's resolution, because it would take more matter than the original universe contains.
* You can't just simulate 'observable areas'. Everything needs to be simulated.
* An infinite loop does not end, even in an infinitely powerful computer
* A fun calculation from the ZFS folks: to fully populate a 128-bit filesystem (i.e. permute all combinations) you need a lot of energy. So much energy that you could boil the world's oceans. See:
*  Physical Limits of Computation: http://arxiv.org/pdf/quant-ph/9908043.pdf
*  https://blogs.oracle.com/dcb/entry/zfs_boils_the_ocean_consu...
The story is quite clear on this:
"a countable infinity of TEEO 9.9.1 ultra-medium-density selectably-foaming non-elasticised quantum waveform frequency rate range collapse selectors and the single tormented tau neutrino caught in the middle of it all"
That is, the entire simulation is being run on a single tormented tau neutrino.
As for this not being possible in the real world, well, sure. In the real world all evidence points towards there being limits on the computational capacity of the real universe. In their universe, by construction, they do in fact have access to countably infinite amounts of computation, at which point this is potentially possible. Physics in the story are obviously, by construction, not the same as the ones in the real world. This is, shall we say, a well-established literary move in the field of science fiction. Very, very, very... well established.
Sure it does! Run one processor instruction, then run the next one in half the time, then the next one in half as much time again...
If the infinitely powerful computer is an accelerating Turing machine, wouldn't it end in finite time?
 Copeland, B. J. (2002). Accelerating Turing machines. Minds and Machines, 12(2), 281–300.
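The intuition is just a geometric series. A minimal sketch, assuming the n-th instruction takes $2^{-n}$ seconds:

    $\sum_{n=0}^{\infty} 2^{-n} = 1 + 1/2 + 1/4 + \cdots = 2$

so the entire infinite run completes in two seconds of external time. Whether such a machine is physically realisable is a separate question.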
I can't make sense of this. Whatever is outside the causality sphere is by definition irrelevant.
Take a piece of paper. Focus a camera on that piece of paper.
Turn on an overhead light. Did the paper change?
Now imagine this not merely as a limited visual representation, but as causality itself. Then the causality sphere is, theoretically, as big as the universe itself.
The concept being discussed is the limited scope of simulation. However, it can be assumed that everything affects (or can affect) everything else. The only way to know whether or not one thing may affect anything else is to simulate it.
How could you know the sphere of causality without simulating everything?
An object's light cone is not a property of the object; it is the result of the properties of the object as they relate to the light shining on that object. If the source of that light isn't represented in a simulation, then unless that object produces the light itself, the object must be dark or the simulation isn't accurate.
The discussion is about simulating a universe. Unless there are truly discrete systems within that universe, you can't do that simulation within a limited frame.
I think what the author was alluding to, though, was that the actual rendering of the view, rather than the calculation of the view, would be limited to the current frame of reference.
We do know the sphere of causality already, no computation needed: it is the sphere defined by the distance light has had time to travel. You don't need to simulate anything outside that zone to know that it is impossible for matter outside it to physically impact the center.
Think of it this way: if you want to simulate Earth through the year 2000, you know you don't need to simulate Alpha Centauri after 1996, since it is more than 4 light years away. You can know this without doing any computation at all.
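A toy sketch of that cutoff (hypothetical helper, treating light-travel time as the only constraint and ignoring relativistic subtleties):

    # Can an object's state at time t (years) still influence a simulation
    # of Earth that runs up to t_end? Only if a signal leaving it at t can
    # cover the distance (in light years) before t_end.
    def can_affect_earth(distance_ly, t, t_end):
        return t + distance_ly <= t_end

    # Alpha Centauri is ~4.4 ly away, so for a simulation through 2000
    # its state after roughly 1995.6 is irrelevant:
    print(can_affect_earth(4.4, 1996.0, 2000.0))  # False
    print(can_affect_earth(4.4, 1995.0, 2000.0))  # True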
I think you're picturing a sphere of 13Gly radius (or whatever) centred on present-day Earth, expanding at lightspeed to encompass new stars and galaxies. But while new matter enters our light-cone, it is not doing so as stars and galaxies. Those all have pasts within the light-cone - you don't need to go outside our past light-cone to find all the things that can affect them. Any matter that only entered our observable universe in the last year doesn't have a past, because it only just came through the Big Bang. It's primordial chaos, not fully-formed stars; you don't need to work out millennia of its past to know what's there. Unless for some reason the simulation needs to calculate pre-Big Bang conditions, which is possible, but then the definition of "light-cone" needs to be amended.
Of course, an entity's light cone is still likely to be quite large, especially if your simulated universe is old.
Some back of the envelope calculation indicates that the spacetime "volume" of the light cone of an object is 1/8th of the spacetime volume of the universe up to that point. So using the light cone would net you an 8x speedup over a brute-force Universe render. Nothing to write home about if you have infinite computing power.
And the light cone (more precisely, the past light cone) keeps getting larger, so in fact the "amount of universe" that needs to be simulated increases without bound as time goes on.
Our current best-fit model of the universe has it being spatially infinite, which means the "volume of spacetime" between the Big Bang and "now" (more precisely, between the Big Bang and the "comoving" surface of simultaneity that passes through the Earth right now--"now" is relative, so you have to specify what simultaneity convention you're using) is infinite. Since the volume of any past light cone is finite, it will be effectively zero compared to the total volume of spacetime up to any surface of simultaneity. (But as you go to later and later surfaces of simultaneity, the volume of the past light cone of any event on the surface of simultaneity still increases without bound.)
Even in spatially finite models (which are not conclusively ruled out by the data we have, though they are unlikely to be correct), the fraction of spacetime between the Big Bang and a given "comoving" surface of simultaneity that is occupied by the past light cone of an event on that surface of simultaneity is not constant; it gets larger as you go to later surfaces of simultaneity.
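For a rough sense of how fast that growth is (flat spacetime, ignoring expansion): the past light cone of an event at cosmic age $T$ has spatial radius $c(T-t)$ at earlier time $t$, so its spacetime volume is

    $V_{\mathrm{cone}}(T) = \int_0^T \tfrac{4}{3}\pi\, c^3 (T-t)^3\, dt = \tfrac{\pi}{3}\, c^3 T^4,$

which indeed grows without bound (like $T^4$) as the simulation runs on.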
Is that an assertion? Do you have something to back up that claim? [I think if this is not backed up then your first point falls as well].
> * An infinite loop does not end, even in an infinitely powerful computer
What do infinite loops have to do with anything? Presumably the creators of the simulation would be skillful enough to avoid them.
> * A fun calculation from the ZFS folks: to fully populate a 128-bit filesystem (i.e. permute all combinations) you need a lot of energy. So much energy that you could boil the world's oceans,
Once again, who said anything has to be 'fully populated'?
Take the "window" of the Earth. Without simulating the rest of the solar system, there is no way to account for the gravity effects of the larger planets, the energy from the sun, the occasional impacts from asteroids, the high-profile fly-bys of comets like Halley's, etc. Without simulating the rest of the universe, what would the simulated-astronomers on your simulated-Earth be looking at when they peer into their telescopes? What happens when you fast-forward far enough into the "future" that the Milky Way merges with Andromeda?
You can't simply hand-wave these things away by saying, "well, we would ray-trace the things that are observable" because you have no way of knowing what is observable without simulating the whole rest of the universe too. It's either simulate the whole thing, or your simulation is very limited and inaccurate.
For instance, astronomers observe phenomena that occurred in galaxies far, far away. If in the simulated universe you limit yourself to simulating, say, the solar system, then such events wouldn't occur in exactly the same way. We wouldn't find the same bodies at the same positions. That in turn would make the path of the simulation diverge significantly from our reality.
Think for instance what would happen if the constellations weren't the same in the sky. All astrology would be different. It might seem like a minor change but in the course of centuries that would probably amount to a big change.
That being said, since the story postulates that the computer has infinite processing power and storage, you can just leave out this bit and the story still makes sense; you just assume that it simulates the entire universe at all times.
If tomorrow we receive a transmission from a form of life that lived thousands of years ago, thousands of light years away from us, it's going to change a lot of things. You can't ignore that.
You can only simulate a small part of the universe if you can simulate correctly what happens at the boundaries of the region. Otherwise it "contaminates" the simulation inwards at the speed of light. If you want to simulate what will happen on Earth for the next year, you at least need to know the full state of the universe in a sphere one light year in radius, or at least that's my understanding.
I think these concepts somewhat overlap with the "Holographic principle" although I might be mistaken, I'm way out of my depth: https://en.wikipedia.org/wiki/Holographic_principle
In Hashlife you still need to have your entire universe in the hash tree; you can't take a piece of the pattern, simulate it for X generations without considering outside influence, and expect to get the same results as a full simulation.
So, translating that into physics:
1. You know you are only going to simulate for the next, say, day.
2. Take all the photons, X-rays, etc. heading towards Earth and continue to simulate them, but freeze everything outside of a reasonable distance (in Hashlife, this isn't arbitrary; it is perfectly defined).
3. Continue the simulation for that sub-area only (a rough sketch of the idea follows).
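A minimal sketch of that margin idea in plain Python (not Hashlife itself, just a naive Life step; the soup, sizes, and density are made up for illustration):

    import random
    from collections import Counter

    def step(live):
        """One Game of Life generation over a set of live (x, y) cells."""
        counts = Counter()
        for (x, y) in live:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx or dy:
                        counts[(x + dx, y + dy)] += 1
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    def window(live, k):
        return {(x, y) for (x, y) in live if 0 <= x < k and 0 <= y < k}

    random.seed(1)
    k, g = 10, 5
    # Influence travels at most one cell per generation, so the true state of a
    # k-by-k window after g generations depends only on a margin of g cells around it.
    full = {(x, y) for x in range(-g, k + g) for y in range(-g, k + g)
            if random.random() < 0.35}          # window plus margin
    clipped = window(full, k)                    # same soup, margin thrown away

    for _ in range(g):
        full, clipped = step(full), step(clipped)

    # `full` simulates the whole (finite) soup, so its window is the truth;
    # `clipped` ignored the margin, and the boundary error creeps inward:
    print(window(full, k) == window(clipped, k))  # typically False

The same idea, scaled up, is why you'd need roughly a one-light-year margin of state to simulate Earth faithfully for a year.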
That would be this:
But it was still pretty exciting stuff. Holy Zarquon, they said to one another, an infinitely powerful computer? It was like a thousand Christmases rolled into one. Program going to loop forever? You knew for a fact: this thing could execute an infinite loop in less than ten seconds. Brute force primality testing of every single integer in existence? Easy. Pi to the last digit? Piece of cake. Halting Problem? Sa-holved.
At least I hope that's what it is.
* more of sam's great stuff: http://qntm.org/structure and http://qntm.org/ra (ongoing)
I first found this site when someone linked to Fine Structure here on HN. And so it goes...
iirc I first found the site via Everything2, a curious place where people do writeups about pretty much everything (from short technical articles to poetry.) I stumbled upon what looked like one of those diary entries (didn't see the 'fiction' label) by some 'sam512', and was confused when giant robots and aliens suddenly appeared.
They could at least call it a hyper-computer or acausal timeloop processor or something.
The characters clearly state that since their level of emulation is approaching infinity, anything they do to their child universes will simply be reflections of the parent simulations.
Look at it this way. You rewind to 1942. You're going to kill Hitler. But your simulated universe is so in tune with the parent universe that they're already doing the same thing. What you interpret as free will is the averaged response of all parent universes. At some point, in a universe two-billion-plus layers above you, someone actually has an original thought to kill Hitler. This is simply re-emulated endless times.
There are no original thoughts in this view of the universe.
For example, the chain of universes may follow an alternating sequence: Mary in universe N kills Hitler in universe N-1 (one layer down). Suppose that this murder has the unfortunate side effect of preventing the birth of the Mary in that universe. The Hitler in universe N-2 therefore survives until 1945, allowing the Mary in that universe to kill Hitler in universe N-3. Ad infinitum.
What we're dealing with here is stability. The average Hitler is half-dead and the average Mary half-born. The stable Hitler is dead half the time (in all odd universes); the stable Mary alive half the time (in all even universes).
In fact, even stability is probably not the correct term. Let's broaden our scope. The Hitler example is just one cycle, of length 2. But there may be many cycles, of arbitrary length, from 2 to 3 or 10 or a million universes long. Cycles that interact will almost certainly do so chaotically. The odds that stability will occur are remote.
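A toy illustration of the difference between a fixed point and a cycle (the update rule is invented purely to mirror the Hitler/Mary example above):

    # State of a universe: (hitler_alive, mary_born). Each universe's state is
    # a pure function of the universe one level above it (made-up rule):
    # if Mary exists one level up, she kills this universe's Hitler, which
    # also prevents this universe's Mary from being born.
    def next_universe(parent):
        hitler_alive, mary_born = parent
        if mary_born:
            return (False, False)
        return (True, True)

    state = (True, True)
    for n in range(6):
        print(n, state)
        state = next_universe(state)
    # The chain alternates (True, True), (False, False), ...: a period-2
    # cycle rather than a fixed point, so "on average" Hitler is half-dead.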
One solution to this is that they never have the idea to try that in the first place. So one could argue the author has considered it ;)
Instead have this simulation argument here: http://www.simulation-argument.com/
Max speed, sure. But minimum temperature? It's zero kelvin. Of course there's gonna be a minimum; this doesn't belong here.
There are negative temperatures, but I wouldn't characterize them as being "lower than zero kelvin". It would be more accurate to say that negative temperatures are "higher than $\infty$ kelvin".
Maximum temperature, not maximum energy. Negative temperatures are even higher energies.
The real issue here is that T is a dumb unit -- 1/T makes a lot more sense, and when you look at it that way, the trend is just "higher energies have lower 1/T" and passing through zero is entirely unremarkable.
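The underlying definition (standard stat mech): $1/T = \partial S/\partial E$. In a system with a bounded energy spectrum, entropy eventually decreases as you keep adding energy, so $\partial S/\partial E$ goes negative. Ordered from coldest to hottest, the scale runs

    $+0\ \mathrm{K} < \dots < +\infty\ \mathrm{K} \equiv -\infty\ \mathrm{K} < \dots < -0\ \mathrm{K},$

which is why negative temperatures sit "above infinity" rather than below zero, and why $1/T$ (or $\beta = 1/k_B T$) is the less confusing variable.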
I was under the impression that it had no basis in the physical structure of the universe.
Or change the universe below you to change things 2 universes below it to change things in the universe 4 layers underneath that.
You can then set up several simulations to try to see if you're simulating yourself, just with odd initial starting conditions. That gets into orders of infinity and countability, as some other groups may have the same idea.
"Now, you're looking at now, sir. What's happening now, is happening, now."
"What happened to then?"
"We passed it."
"When will then be now?"
The paradox, of course, is that, having come to that conclusion, you could just choose to make the "non-altruistic" decision. Everyone else has already decided what they're going to do, so you won't change that. Except the solution concept stipulates that they reason exactly as you do, so if you make this decision, so will they.
• This might still be significantly difficult. If only you had a simulated version of yourself that you could use as a control subject so the computer could test whether or not the generated program met your expectations...
But I'd also say that most problems people want solving are massively underspecified. For example, what is the formal specification for the problem that your preferred web-browser solves which its competitors do not?
Simulated copies of the programmer would probably speed up the process a lot, though.
Also, I wonder about the ethical implications of turning off the machine, as was suggested at the end. At what point is life "life"? If the simulated beings are truly perfect, would turning it off be the largest genocide ever committed?
In that case, turning the simulation off would do nothing from the perspective of the people within it, because every simulation has already been played out to completion. All they would be turning off is the ability to observe what the simulation looked like at specific points in time.
>"Ah, no. Ask me again on Monday."
Really? I'm just a layman but if it's a simulation, like game of life, then logic tells me that they should also be able to rewind the simulation by backstepping. Either that, or she just created magic.
Diane's simulation is perhaps similar: a gigantically complex system of cellular automata.
The structure of this argument somehow reminds me of abortion-related topics...
Anyway, there's a nice article touching this issue:
Answer: look behind you, place the value you see plus one into the next universe.
Why would each universe affect the next one (subtly) differently?
And that's some absurdly good compression, given the universe simulator assumed in the story: anything that will ever occur as a state of the universe can be compressed to a set of space-time coordinates and an extraction algorithm.
With some additional assumptions about the nature of the pseudo-causal loop demonstrated in the story, that compression can also include any arbitrary search parameters you can conceive of, without you having to express them as a program: just search for the data that will be on your hard drive in the future after having executed the search you're currently thinking of.
From a naive literal point of view, no, the universe above theirs is independent. Diane's hypothesis/theorem is that, because their universe is at or near a fixed point, they have in effect invented a way to have unlimited control over their own universe. While there is no causal relationship from their universe to the one up the chain, the symmetry implies their actions are mirrored.
I've read some of this author's other works. He likes playing with paradoxes like this. It's a twist on causality (and free will), inventing a new way for your actions to have an impact on the universe in the absence of a traditional causal link. It's pretty damn clever.
If at any time they were to "choose" to shut down the simulation, it can only be because the total state of the world, including their sense-data and internal mental states, causes them to do so. If that's true for them at time T, then by hypothesis it's true for the "god" agents as well.
It may be that, in the sense you have in mind, the protagonists simply do not have choice.
For similar puzzles, you might enjoy Douglas Hofstadter's essay about "super-rationality" (there is an HTML version on gwern's site: http://www.gwern.net/docs/1985-hofstadter), Adam Elga's "Defeating Dr. Evil with self-locating belief" (http://philsci-archive.pitt.edu/1036/), and Scott Aaronson's lecture about Newcomb's paradox (http://www.scottaaronson.com/democritus/lec18.html).