Hacker News
I don't know, Timmy, being God is a big responsibility (qntm.org)
226 points by negativity on Feb 20, 2014 | 112 comments

This was a very well-written discourse on the existence of different universes, and it gives a bit of an intuition about how to think about them.

It reminds me a lot (though it is quite different) of "Flatland: A Romance of Many Dimensions" [0]. This book gave me the best intuition yet about what it would be like to experience a fourth spatial dimension.

Having said that, I still don't quite know what to think about it. I definitely don't have a mental picture of what it would be like. All I know is that when I was reading the book, I had a part of my brain clicking along, which felt like it understood what it is to experience an additional spatial dimension. Yet there is still very little actual comprehension going on in there.

[A bit of a spoiler alert follows]

It starts by explaining in meticulous detail what it is like to live in and experience a fictional 2D world instead of the 3D one we are used to.

Then, the main character descends into a 1D world. The book explains to the 2D inhabitant what it is like to live in a 1D world. Explaining this to the 2D inhabitant is analogous to explaining the 2D world to us in our 3D one.

Finally, the character moves up to a 3D world, which they cannot comprehend. However the explanations which are given to the character are quite intuitive and satisfying, and help to explain what it might be like for us to move up one dimension and experience it.

[0] - https://en.wikipedia.org/wiki/Flatland

p.s. The book is also a hilarious parody of Victorian-era attitudes towards women.

There's a 5-minute video on YouTube by TED-Ed that sums up the Flatland topic. It's really good.


I read this in the past, and there are a few things that bother me:

* It's impossible to simulate a universe of our current resolution, because it would take more matter than the original universe.

* You can't just simulate 'observable areas'. Everything needs to be simulated.

* An infinite loop does not end, even in an infinitely powerful computer

* A fun calculation from the ZFS folks: To fully populate a 128-bit filesystem (i.e. permute all combinations) you need a lot of energy: so much energy that you could boil the world's oceans. See:

* [1] Physical Limits of Computation: http://arxiv.org/pdf/quant-ph/9908043.pdf

* [2] https://blogs.oracle.com/dcb/entry/zfs_boils_the_ocean_consu...

"It's impossible to simulate a universe of our current resolution, because it would take more matter than the original universe."

The story is quite clear on this:

"a countable infinity of TEEO 9.9.1 ultra-medium-density selectably-foaming non-elasticised quantum waveform frequency rate range collapse selectors and the single tormented tau neutrino caught in the middle of it all"

That is, the entire simulation is being run on a single tormented tachyon.

As for this not being possible in the real world, well, sure. In the real world all evidence points towards there being limits on the computational capacity of the real universe. In their universe, by construction, they do in fact have access to countably infinite amounts of computation, at which point this is potentially possible. Physics in the story are obviously, by construction, not the same as the ones in the real world. This is, shall we say, a well-established literary move in the field of science fiction. Very, very, very... well established.

(Ugh... I remembered it as being a tachyon, so I typed that too. A tau neutrino is of course not a tachyon.)

Nothing about the actual computing mechanism introduced in the story is realistic, and that's on purpose. The point of the story is to investigate what would happen if such a thing were possible.

Isn't it basically a reductio argument for why it's not a possible scenario? There are no [discernible?] infinities in reality (or maybe there are no infinities because it's a simulation and simulating infinities is impossible ;0)> )

> An infinite loop does not end, even in an infinitely powerful computer

Sure it does! Run one processor instruction, then run the next one in half the time, then the next one in half as much time again...
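
To spell out the arithmetic (a toy sketch of my own, not from the story): if step n takes 1/2^n seconds, the total time is a geometric series that converges to 2, so infinitely many steps fit into two seconds.

```python
# Toy model of the "run each instruction twice as fast" schedule:
# step n takes 1/2**n seconds, so the total is a geometric series
# that converges to 2 seconds even as the step count goes to infinity.

def total_time(steps):
    """Wall-clock time consumed by the first `steps` instructions."""
    return sum(0.5 ** n for n in range(steps))

print(total_time(10))    # 1.998..., already close to the limit of 2
print(total_time(100))   # indistinguishable from 2.0 in floats
```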

On termination, is the instruction counter odd or even?

Both, until you observe it ;-)

Better ask the folks researching https://en.wikipedia.org/wiki/Hypercomputation

A closed timelike loop under quantum mechanics is essentially an automatic fixed-point solver, in the same way that a stable quantum state in space is a coherent superposition. So yes, the story is kind of off, but in spirit it's also kind of dead on.

> An infinite loop does not end, even in an infinitely powerful computer

If the infinitely powerful computer is an accelerating Turing machine[1], wouldn't it end in finite time?

[1] Copeland, B. J. (2002). Accelerating turing machines. Minds and Machines, 12(2), 281–300.

> You can't just simulate 'observable areas'. Everything needs to be simulated.

I can't make sense of this. Whatever is outside the causality sphere is by definition irrelevant.

Here's a basic way of understanding this.

Take a piece of paper. Focus a camera on that piece of paper.

Turn on an overhead light. Did the paper change?

Now, imagine not just a limited visual representation like this, but causality itself. Then the causality sphere is, theoretically, as big as the universe itself.

I don't understand your analogy at all, but it seems to me that an object's light cone at a given time is necessarily smaller than the universe (in volume of spacetime). Also, you could control the size of the cone by controlling the duration of the simulation.

Clarification is necessary I suppose.

The concept being discussed is the limited scope of simulation. However, it can be assumed that everything affects (or can affect) everything else. The only way to know whether or not one thing may affect anything else is to simulate it.

How could you know the sphere of causality without simulating everything?

An object's light cone is not a property of the object; it is the result of the properties of the object as they relate to the light shining on that object. If the source of that light isn't represented in a simulation, then unless that object produces the light itself, the object must be dark or the simulation isn't accurate.

The discussion is about simulating a universe. Unless there are truly discrete systems within that universe, you can't do that simulation within a limited frame.

I think what the author was alluding to, though, was to say that the actual rendering of the view, rather than the calculation of the view, would be limited to the current frame of reference.

I think I see what you mean, but isn't it true that if there is light shining on the object, then the lamp must be in the light cone? It is literally impossible for something outside the light cone to have any physical impact on the object at all.

We do know the sphere of causality already, no computation needed: it is the sphere defined by the distance light has had time to travel. You don't need to simulate anything outside that zone to know that it is impossible for matter outside it to physically impact the center.

Think of it this way: if you want to simulate Earth through the year 2000, you know you don't need to simulate Alpha Centauri after 1996, since it is more than 4 light years away. You can know this without doing any computation at all.
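
That check is trivial to mechanise. A sketch with made-up names and numbers (mine, not the commenter's), using light-years and years so that c = 1:

```python
# Toy predicate: an event at distance d light-years cannot influence a
# simulation spanning t years unless d <= c * t (with c = 1 ly/yr).

def can_influence(distance_ly, sim_years):
    """True if light from the event can arrive within the simulated span."""
    return distance_ly <= sim_years  # c = 1 in these units

# Simulating Earth from 1996 through 2000, a 4-year window:
print(can_influence(4.37, 4.0))    # Alpha Centauri, ~4.37 ly away: False
print(can_influence(1.6e-5, 4.0))  # the Sun, ~1.6e-5 ly away: True
```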

I know this is an old thread, but I wanted to point out that you do need to simulate reality beyond an object's light cone so that you know what's there when the light cone expands to reach it.

I don't think that's right. Nothing outside Earth(2014)'s past light-cone can affect anything inside it, nor can anything outside Earth(2015)'s past light-cone affect anything inside it. Expanding the light-cone into the intervening year doesn't require you to simulate anything further out - well, you need to go outside Earth(2014)'s light-cone to simulate Earth(2015), but that's hardly worth mentioning.

I think you're picturing a sphere of 13Gly radius (or whatever) centred on present-day Earth, expanding at lightspeed to encompass new stars and galaxies. But while new matter enters our light-cone, it is not doing so as stars and galaxies. Those all have pasts within the light-cone - you don't need to go outside our past light-cone to find all the things that can affect them. Any matter that only entered our observable universe in the last year doesn't have a past, because it only just came through the Big Bang. It's primordial chaos, not fully-formed stars; you don't need to work out millennia of its past to know what's there. Unless for some reason the simulation needs to calculate pre-Big Bang conditions, which is possible, but then the definition of "light-cone" needs to be amended.

At what point do objects & events outside the causality sphere stop influencing things inside it?

Immediately, by definition. By causality sphere he means light cone [0]. If you want to simulate an entity, you only need to simulate the entity and the contents of its light cone. Anything outside the cone could not have a causal impact on the entity without FTL travel.

Of course, an entity's light cone is still likely to be quite large, especially if your simulated universe is old.

[0]: https://en.wikipedia.org/wiki/Light_cone

Except you have to have been simulating outside the light cone so that, once the light cone expands to include that space you know its state. So in reality, if you're interested in location V at time t, you have to simulate everything that will end up within the sphere centered at V at time t, not just whatever is in the sphere instantaneously.

Yes, which is why the light cone is actually described as a 4D object that exists in spacetime, not a 3D sphere.

Some back of the envelope calculation indicates that the spacetime "volume" of the light cone of an object is 1/8th of the spacetime volume of the universe up to that point. So using the light cone would net you an 8x speedup over a brute-force Universe render. Nothing to write home about if you have infinite computing power.

an entity's light cone is still likely to be quite large, especially if your simulated universe is old.

And the light cone (more precisely, the past light cone) keeps getting larger, so in fact the "amount of universe" that needs to be simulated increases without bound as time goes on.

Some back-of-the-envelope calculation leads me to believe that for a given location in space and time, its past light cone contains 1/8th of the spacetime between the beginning of the universe and the location.

I'm not sure how you're doing the calculation, but your answer is not correct.

Our current best-fit model of the universe has it being spatially infinite, which means the "volume of spacetime" between the Big Bang and "now" (more precisely, between the Big Bang and the "comoving" surface of simultaneity that passes through the Earth right now; "now" is relative, so you have to specify what simultaneity convention you're using) is infinite. Since the volume of any past light cone is finite, it will be effectively zero compared to the total volume of spacetime up to any surface of simultaneity. (But as you go to later and later surfaces of simultaneity, the volume of the past light cone of any event on the surface of simultaneity still increases without bound.)

Even in spatially finite models (which are not conclusively ruled out by the data we have, though they are unlikely to be correct), the fraction of spacetime between the Big Bang and a given "comoving" surface of simultaneity that is occupied by the past light cone of an event on that surface of simultaneity is not constant; it gets larger as you go to later surfaces of simultaneity.

> * You can't just simulate 'observable areas'. Everything needs to be simulated.

Is that an assertion? Do you have something to back up that claim? [I think if this is not backed up then your first point falls as well].

> * An infinite loop does not end, even in an infinitely powerful computer

What do infinite loops have to do with anything? Presumably the creators of the simulation would be skillful enough to avoid them.

> * A fun calculation from the ZFS folks: To fully populate a 128-bit filesystem (i.e. permute all combinations) you need a lot of energy: so much energy that you could boil the world's oceans.

Once again, who said anything has to be 'fully populated'?

I think the assertion that you have to simulate the whole thing is somewhat self-evident, and I'm no physicist, but I'll try.

Take the "window" of the Earth. Without simulating the rest of the solar system, there is no way to account for the gravity effects of the larger planets, the energy from the sun, the occasional impacts from asteroids, the high-profile fly-bys of comets like Halley's, etc. Without simulating the rest of the universe, what would the simulated-astronomers on your simulated-Earth be looking at when they peer into their telescopes? What happens when you fast-forward far enough into the "future" that the Milky Way merges with Andromeda?

You can't simply hand-wave these things away by saying, "well, we would ray-trace the things that are observable" because you have no way of knowing what is observable without simulating the whole rest of the universe too. It's either simulate the whole thing, or your simulation is very limited and inaccurate.

If you are mainly interested in what happens with people, the accuracy of the rest of the simulation is not important. You only need to simulate the parts people see, at the resolution they can see them, and you can even have lots of inconsistencies as long as they are not reproducible.

The problem is the world is chaotic: any approximation, no matter how small, will cause divergence very quickly. (This is actually the main problem with the simulation: how did they get the exact initial conditions and physical constants?)

I think because of the laws of causality you can't just simulate a tiny bit of the universe and expect to get accurate results because everything has a sphere of influence expanding at the speed of light.

For instance, astronomers observe phenomena that occurred in galaxies far far away. If in the simulated universe you limit yourself to simulating, say, the solar system, then such events wouldn't occur in exactly the same way. We wouldn't find the same bodies at the same positions. That in turn would make the path of the simulation diverge significantly from our reality.

Think for instance what would happen if the constellations weren't the same in the sky. All astrology would be different. It might seem like a minor change but in the course of centuries that would probably amount to a big change.

That being said, since the story postulates that the computer has infinite processing power and storage, you can just leave out this bit and the story still makes sense; you just assume that it simulates the entire universe at all times.

This is merely a question of fidelity. One can imagine a simulator which produces low-fidelity everything and only increases fidelity gradually in isolated regions as the human observers peer at those regions. And, there is nothing to prevent results being continuously retroactively computed (just in time) as new 'discoveries' are made.

It is not known with certainty that very small actions across very large distances have any effect at all. It's possible that, just like you're literally seeing a past version of a galaxy, you're also literally seeing a less precise version of the galaxy.

But how can you decide what's going to have an influence and what will be without effect if you don't simulate everything?

If tomorrow we receive a transmission from a form of life that lived thousands of years ago thousands of light years away from us it's going to change a lot of things. You can't ignore that.

You can only simulate a small part of the universe if you can simulate correctly what happens at the boundaries of the region. Otherwise it "contaminates" the simulation inwards at the speed of light. If you want to simulate what will happen on Earth for the next year, you at least need to know the full state of the universe in a sphere one light-year in radius, or at least that's my understanding.

I think these concepts somewhat overlap with the "Holographic principle" although I might be mistaken, I'm way out of my depth: https://en.wikipedia.org/wiki/Holographic_principle

That's why our being in a simulation is one solution to Fermi's paradox.

Take a look at Hash Life, it changed my perspective on this specific point.

Hash Life still simulates everything, it just optimizes for similar patterns. While it does seem a reasonable optimization for simulating an entire universe (although why would you bother if you have unlimited processing power?) I don't think it's a good parallel to what the parent was suggesting.

In Hashlife you still need to have your entire universe in the hash tree, you can't take a bit of the pattern, only simulate for X generations without considering outside influence and expect to get the same results as a full simulation.

Right, so take it a bit further. You don't have to simulate everything that there isn't an observer on since you can create a temporal boundary between two areas.

So, translating that into physics:

1. You know you are only going to simulate for the next, say, day.
2. Take all the photons, x-rays, etc. heading towards Earth and continue to simulate them, but freeze everything outside of a reasonable distance (in Hash Life, this isn't arbitrary, it is perfectly defined).
3. Continue simulation for that sub-area only.
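
A concrete version of "freeze everything outside a reasonable distance" (a sketch of mine in Conway's Life, where "lightspeed" is one cell per generation, so anything outside radius k cannot affect the centre within k steps):

```python
# In Life, influence propagates at most one cell per generation, so a
# cell's state after k steps depends only on cells within distance k.
# Adding junk far outside that radius cannot change the result.

import random
from collections import Counter

def step(live):
    """One Game of Life generation on a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

def run(live, k):
    for _ in range(k):
        live = step(live)
    return live

random.seed(0)
world = {(random.randrange(-3, 4), random.randrange(-3, 4)) for _ in range(20)}
k = 3
far_junk = {(100, 100), (101, 100), (102, 100)}  # a blinker 100 cells away
same = ((0, 0) in run(world, k)) == ((0, 0) in run(world | far_junk, k))
print(same)  # True: the distant blinker is outside the centre's light cone
```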

> * An infinite loop does not end, even in an infinitely powerful computer

What do infinite loops have to do with anything? Presumably the creators of the simulation would be skillful enough to avoid them.

That would be this:


But it was still pretty exciting stuff. Holy Zarquon, they said to one another, an infinitely powerful computer? It was like a thousand Christmases rolled into one. Program going to loop forever? You knew for a fact: this thing could execute an infinite loop in less than ten seconds. Brute force primality testing of every single integer in existence? Easy. Pi to the last digit? Piece of cake. Halting Problem? Sa-holved.


That's a mathematical joke/commentary on the nature of infinity and how people think it's just a really really large number.

At least I hope that's what it is.

Not to mention that when such an accurate simulation becomes possible, special relativity gets violated: you can know what's happening anywhere before a signal can travel from there to you.

* simulation, consciousness, cellular automata, artificial life: https://en.wikipedia.org/wiki/Permutation_City (check http://libgen.org/ for preview/pdf)

* more of sam's great stuff: http://qntm.org/structure and http://qntm.org/ra (ongoing)

To sell it to others a little more than you did: Folks, if you like that story you may well like Greg Egan's novel 'Permutation City'.

I just want to second the recommendations for Ra and Fine Structure. Fine Structure is older and not as polished, but still a lot of fun and fairly well executed. Ra is better. I enjoyed the mood of the author's other stories too.

I first found this site when someone linked to Fine Structure here on HN. And so it goes...

It's funny, I first found that site many many years ago, when I did a web search (might not even have been Google) for "how to destroy the Earth".

His Earth Destruction article[1] is pretty well known indeed!

iirc I first found the site via Everything2[2], a curious place where people do writeups about pretty much everything (from short technical articles to poetry.) I stumbled upon what looked like one of those diary entries (didn't see the 'fiction' label) by some 'sam512'[3], and was confused when giant robots and aliens suddenly appeared.

[1]: http://qntm.org/destroy

[2]: http://everything2.com/

[3]: http://everything2.com/user/sam512/writeups/March+9%252C+200...

It bothers me that this uses "quantum computing" since it plays into common, frustrating, misapprehensions of what quantum computing is even theorized to do.

They could at least call it a hyper-computer or acausal time-loop processor or something.

I know. I loved the story, but twitched every time it said "quantum".

Even the domain is qntm.

Yes, I'm late. So what. People miss the fundamental point.

The characters clearly state that since their level of emulation is approaching infinity, anything they do to their child universes will simply be reflections of the parent simulations.

Look at it this way. You rewind to 1942. You're going to kill Hitler. But your universe simulation is so in tune with the parent universe that they're already doing the same thing. What you interpret as free will is the averaged responses of all parent universes. At some point, in a universe two billion-plus levels above you, someone actually has an original thought to kill Hitler. This is simply re-emulated endlessly.

There are no original thoughts in this view of the universe.

But your universe simulation is so in tune with the parent universe that they're already doing the same thing. What you interpret as free will is the averaged responses of all parent universes.

Not averaged.

For example, the chain of universes may follow an alternating sequence: Mary in universe N kills Hitler in universe N-1 (one layer down). Suppose that this murder has the unfortunate side effect of preventing the birth of the Mary in that universe. The Hitler in universe N-2 therefore survives until 1945, allowing the Mary in that universe to kill Hitler in universe N-3. Ad infinitum.

What we're dealing with here is stability. The average Hitler is half-dead and the average Mary half-born. The stable Hitler is dead half the time (in all odd universes); the stable Mary is alive half the time (in all even universes).

In fact, even stability is probably not the correct term. Let's broaden our scope. The Hitler example is just one cycle, of length 2. But there may be many cycles, of arbitrary length, from 2 to 3 or 10 or a million universes long. Cycles that interact will almost certainly do so chaotically. The odds that stability will occur are remote.
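
The alternating chain can be written down as a one-step map from each universe's state to the layer below (a toy model with an encoding of my own invention):

```python
# State of a universe: (hitler_survives, mary_is_born). A born Mary
# kills Hitler one layer down, which prevents that layer's Mary from
# being born; with no Mary above, Hitler survives and Mary is born.
# Iterating layer by layer gives a 2-cycle, not an average.

def next_layer(state):
    _, mary_born_above = state
    if mary_born_above:
        return (False, False)  # Hitler killed early; this Mary never born
    return (True, True)        # Hitler survives to 1945; Mary is born

layers = [(True, True)]
for _ in range(6):
    layers.append(next_layer(layers[-1]))
print(layers[:4])  # [(True, True), (False, False), (True, True), (False, False)]
```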

It's very cool, I'm only a bit disappointed that the author did not consider what would happen if they tried to run the simulation ahead of them to see the future. What would happen then? All kinds of paradoxes arise.

Assuming that Timmy and Diane are simulating a universe identical to their own, they will not succeed at changing their future relative to their simulation (almost by definition).

One solution to this is that they never have the idea to try that in the first place. So one could argue the author has considered it ;)

I've spent a fair amount of time thinking about this thought experiment, and I think the reason it leads to a paradox is because it's impossible to run a simulation of your own universe (in real time, anyway) in the first place. So it's a sort of proof by contradiction.

I would post the obligatory link of someone calculating the expected number of universes on top of ours based on the Planck length and something else, but I can't find it.

Instead have this simulation argument here: http://www.simulation-argument.com/

Are you referring to the SMBC comic?


"Like maybe a minimum temperature or a maximum speed..."

Max speed, sure. But minimum temperature? It's zero (kelvin). Of course there's gonna be a minimum; this doesn't belong here.

There are temperatures lower than zero kelvin. http://math.ucr.edu/home/baez/physics/ParticleAndNuclear/neg... for a technical description, http://www.nature.com/news/quantum-gas-goes-below-absolute-z... for a more readable article.

There are temperatures lower than zero kelvin.

There are negative temperatures, but I wouldn't characterize them as being "lower than zero kelvin". It would be more accurate to say that negative temperatures are "higher than $\infty$ kelvin".

Value that reaches maximum possible energy goes negative? Sounds suspiciously like integer overflow...

Value that reaches maximum possible energy goes negative?

Maximum temperature, not maximum energy. Negative temperatures are even higher energies.

The real issue here is that T is a dumb unit -- 1/T makes a lot more sense, and when you look at it that way, the trend is just "higher energies have lower 1/T" and passing through zero is entirely unremarkable.

Why is max speed (and min temp) an indication that (our) "universe [is] being optimized for good computation"?

Yup, for some reason I was looking for xkcd comic.

Those two and Abstruse Goose are the trio I always check when looking for stuff like this.

this was a great one, thanks for the link.

Isn't Planck length just an arbitrary and abstract distance?

I was under the impression that it had no basis in the physical structure of the universe.

Instead of changing things one universe below, change things 3 universes below you.

Or change the universe below you to change things 2 universes below it to change things in the universe 4 layers underneath that.

What's more interesting IMO is they assumed they're the first and only group to ever run that simulation for all of time. Realistically you're probably running in someone else's simulation, which would end up diverging as they poked at some other part of reality. In other words, if changes don't get reflected in your world you may be a top-level simulation, but if you're not a top-level simulation you would want to find out who's simulating you and what changes to expect in your world.

You can then set up several simulations to try and see if you're simulating yourself, just with odd initial starting conditions. Which gets into orders of infinity and countable numbers, as some other groups may have the same idea.

But if everyone above you mimics that action...

"What the hell am I looking at?"

"Now, you're looking at now, sir. What's happening now, is happening, now."

"What happened to then?"

"We passed it."


"Just now."

"When will then be now?"


There's a technical solution concept in game theory that mirrors Diane's reasoning here. I wish I could remember the name of the solution concept offhand, but the idea is that, even though your own decision won't affect the decision anyone else makes, you still make the "altruistic" decision, reasoning that the other people involved will use the same reasoning and come to the same conclusion.

The paradox, of course, is that, having come to that conclusion, you could just choose to make the "non-altruistic" decision. Everyone else has already decided what they're going to do, so you won't change that. Except the solution concept stipulates that they reason exactly as you do, so if you make this decision, so will they.
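
A toy prisoner's dilemma makes the contrast concrete (the payoff numbers are the standard textbook ones, my own choice, not from the thread):

```python
# (my move, their move) -> my payoff; standard prisoner's dilemma values.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Classical reasoning: pick the move with the best guaranteed payoff.
# Defection dominates, so this picks "D".
classical = max("CD", key=lambda me: min(PAYOFF[(me, them)] for them in "CD"))

# "Everyone reasons exactly as I do" reasoning: both players end up
# making the same choice, so only the symmetric outcomes are compared.
symmetric = max("CD", key=lambda m: PAYOFF[(m, m)])

print(classical, symmetric)  # D C
```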

Superrationality is a reasoning approach in game theory. Probably what you're talking about.

That's it, thanks!

So if this is a simulation, is talking to yourself in your head or out loud, a program being self-reflective? That's quite a program.

Great story! One thing that stuck out to me—if you had an infinitely powerful computer, the "coding" aspect of it should take no time at all. All you have to do is a) come up with a description language to say what you want•, and b) seed rules for a genetic programming language, and then throw it at your infinitely powerful computer. You'll get a working program instantly.

• This might still be significantly difficult. If only you had a simulated version of yourself that you could use as a control subject so the computer could test whether or not the generated program met your expectations...

Also, changing outcomes in this simulated universe, as in acts of God that defy its physical laws, would be extremely difficult without messing it up, so even if the developer would want to, say, cure all suffering, it would abstain from doing so. And the developer could save the consciousness of people that he likes and maybe restore them somewhere else, like in another universe, with a different set of laws (e.g. heaven).

You'll get the program instantly, but writing formal specifications isn't trivial, especially when they have to be machine readable. Consider that there are some problems where the complexity of the coding is mirrored exactly in the complexity of the specification (for a simple example, a switch-case statement has as much logic in specification as it does in code).

But I'd also say that most problems people want solving are massively underspecified. For example, what is the formal specification for the problem that your preferred web-browser solves which its competitors do not?

Processing power is cheaper than programmer time. If it were really as easy as coding the desired result instead of coding the desired process, we'd already be doing it. And in some specialised cases, we already are: see https://en.wikipedia.org/wiki/Answer_set_programming

Simulated copies of the programmer would probably speed up the process a lot, though.

This gives me an interesting thought: in the story, all lower levels of the simulation are identical copies of the top level, in every way. The first time they create the black ball, the top-level universe will diverge from the rest (they will look behind themselves and there will be no ball), and at that point there will be no more perfect copies of the original universe.

Also I wonder about the ethical connotations of turning off the machine as was suggested at the end. At what point is life "life". If the simulated beings are truly perfect, would turning it off be the largest genocide ever committed?

If Diane created a complete simulation and then separately created a program to observe that simulation at certain points in time, then doesn't it follow that the entire universe from Big Bang to Big Crunch was already calculated and executed?

In that case, turning the simulation off would do nothing from the perspective of the people within it, because every simulation has already been played out to completion. All they would be turning off is the ability to observe what the simulation looked like at specific points in time.

The problems I have with this story (and the simulation argument in general) are the dubious assumptions that the simulation within which we exist 1) is manifested in a physical reality which is functionally identical to our own, and 2) was created by autonomous beings who are fundamentally like us in all important respects. More likely we would merely be the flashing cells of automata to a higher-order simulation builder.

>"Can you wind the clock backwards at all?"

>"Ah, no. Ask me again on Monday."

Really? I'm just a layman, but if it's a simulation, like the Game of Life, then logic tells me that they should also be able to rewind the simulation by backstepping. Either that, or she just created magic.

It's worth pointing out as an aside that you actually can't rewind the (Conway) game of life: the current state does not contain enough information to recreate the previous state. The reason is that many states can arise as a result of multiple possible precursor states.

Diane's simulation is perhaps similar: a gigantically complex system of cellular automata.

I read that to mean that she hadn't implemented that feature yet.

Game of Life isn't reversible. Take any oscillator, say a blinker http://www.conwaylife.com/wiki/Blinker ; trying to reverse it, you don't know whether the step before was the opposite state or some other pattern that became a blinker.
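
You can see the irreversibility directly: two different states can step to the same successor, so there is no well-defined step backwards. A minimal sketch:

```python
# Life's step function is not injective: a lone cell dies of isolation
# and the empty grid stays empty, so both map to the empty grid. From
# the empty grid there is no way to tell which one came "before".

from collections import Counter

def step(live):
    """One Game of Life generation on a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

lonely = {(0, 0)}
empty = set()
print(step(lonely) == step(empty) == set())  # True: two preimages, one image
```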

So what's the difference between specifying a universe-simulating algorithm, and specifying its initial conditions, versus actually running it? Since the entire history is somehow implied in that specification, don't all those people in some sense exist?

It's an interesting philosophical question. I would prefer to define existence as beginning when the computation of those particular molecules first occurs.

> I would prefer to define existence as beginning when the computation of those particular molecules first occurs.

The structure of this argument somehow reminds me of abortion-related topics...

Anyway, there's a nice article touching this issue: http://lesswrong.com/lw/uk/beyond_the_reach_of_god/.

A question (to which there is an answer): how would you find out how far down the line you are?

Answer: look behind you, place the value you see plus one into the next universe.
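As a toy sketch of that protocol (assuming each level can read a number displayed by the level above and display one to the level below — names here are made up for illustration):

```python
def depth_probe(value_seen_above):
    """Each universe reads the depth shown by the universe above it,
    adds one, and displays the result to the universe it simulates.
    The top level sees nothing and starts at 0."""
    return 0 if value_seen_above is None else value_seen_above + 1

# Top-level universe followed by three nested simulations:
shown = None
depths = []
for _ in range(4):
    shown = depth_probe(shown)
    depths.append(shown)

assert depths == [0, 1, 2, 3]
```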

The residents of the googolplexth universe would like to know what they're supposed to do when the number is too big to fit on the screen of the people below them.

Auto-cognition occurs in states of coherence; thus as far as we are concerned the incoherent universe never exists.

About to hit the mass forward button.

I dunno about the final twist. I’d turn it off. The top level would still be fine.

Reminds me of 'the Planiverse' by AK Dewdney.

"There is a feedback loop going on. Each universe affects the next one subtly differently. But somewhere down the line the whole thing simply has to approach a point of stability"

Why would each universe affect the next one (subtly) differently?

This was highly confusing without context.

Oh, and just because you have an infinitely powerful quantum computer simulating infinitely many recursive universes with their own infinitely powerful quantum computers, correlation != causation doesn't apply anymore?

If you have an infinitely powerful computer, does that also imply that you necessarily have infinite storage?

Infinite CPU capability does not automatically imply infinite storage, but it does imply that you can compress any storage to an information-theoretic minimum size, by searching for the shortest program that generates the same data.
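A toy sketch of that trade, assuming a deliberately tiny "program" language (bit strings fed to a fixed repeat-decoder, a stand-in for real programs): the search burns arbitrary CPU, but the stored result is minimal within that language.

```python
import itertools

def shortest_program(data, alphabet=b"01", max_len=12):
    """Brute-force search for the shortest 'program' (a bit string)
    whose decoding reproduces `data`. CPU-expensive, storage-minimal."""
    def decode(prog):
        # Toy decoder: repeat the program until it covers len(data).
        return (prog * (len(data) // len(prog) + 1))[:len(data)]

    for n in range(1, max_len + 1):
        for prog in itertools.product(alphabet, repeat=n):
            p = bytes(prog)
            if decode(p) == data:
                return p
    return data  # incompressible within this toy language

# A 12-byte repetitive string compresses to the 2-byte program b"01".
assert shortest_program(b"010101010101") == b"01"
```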

And that's some absurdly good compression, given the universe simulator assumed in the story: anything that will ever occur as a state of the universe can be compressed to a set of space-time coordinates and an extraction algorithm.

With some additional assumptions about the nature of the pseudo-causal loop demonstrated in the story, that compression can also include any arbitrary search parameters you can conceive of, without you having to express them as a program: just search for the data that will be on your hard drive in the future after having executed the search you're currently thinking of.

Good question... I assume it's a relatively trivial question for a serious mathematician, but I suspect it's something like: there are classes of problems that, given infinite computation, require infinite storage. In this case it is strongly implied.

I don't see why it would; re-computation would be no more meaningfully expensive than a load/store, would it?

Actual inter-level causation is not needed for the characters to experience events as they do in the story. The rules that imply that all levels would unfold identically are low-level. Human narrative events that alias as inter-level causation are high-level, sort of emergent effects.

I agree with that. But I'm questioning the reasoning at the end of the story that they had better not shut off the simulation, implied to be for their own sake. The observation is that their higher-level reality counterparts will likely choose to shut down the simulation exactly if they do themselves, but the protagonists' choice doesn't influence that.

IMHO, the entire point of the story is that causality has become twisted into a counterintuitive shape.

From a naive literal point of view, no, the universe above theirs is independent. Diane's hypothesis/theorem is that, because their universe is at or near a fixed point, they have in effect invented a way to have unlimited control over their own universe. While there is no causal relationship from their universe to the one up the chain, the symmetry implies their actions are mirrored.

I've read some of this author's other works. He likes playing with paradoxes like this. It's a twist on causality (and free will), inventing a new way for your actions to have an impact on the universe in the absence of a traditional causal link. It's pretty damn clever.

Yes, exactly. It's like a sneaky way of asking if physics is deterministic, alluding to the paradoxical relation between determinism and free will.

And a fun way of playing with the notion of fixed points, to boot!

They are making decisions in the presence of the knowledge that essentially identical "god" agents will be making decisions based on the same information and the same mental states, and that the "god" agents' decisions will impact their own existence.

If at any time they were to "choose" to shut down the simulation, it can only be because the total state of the world, including their sense-data and internal mental states, causes them to do so. If that's true for them at time T, then by hypothesis it's true for the "god" agents as well.

It may be that, in the sense you have in mind, the protagonists simply do not have choice.

Ah, but is it really that simple? :)

For similar puzzles, you might enjoy Douglas Hofstadter's essay about "super-rationality" (there is a HTML version on gwern's site: http://www.gwern.net/docs/1985-hofstadter), Adam Elga's "Defeating Dr. Evil with self-locating belief" (http://philsci-archive.pitt.edu/1036/), and Scott Aaronson's lecture about Newcomb's paradox (http://www.scottaaronson.com/democritus/lec18.html).
