If you like longer stories, then https://qntm.org/ra. "Magic is real. Discovered in the 1970s, magic is now a bona fide field of engineering. There's magic in heavy industry and magic in your home. It's what's next after electricity." Magic as an abstraction layer.
It's an excellent story, but very long. I'm not sure if it's longer than Homestuck, but the length is comparable.
Highly recommend both.
The structure of living beings having mind/soul/body and how these are formed to protect against certain kinds of magic (soul-killing for example). The way the major branches of magic essentially map to specific transforms on mana.
It's not axiomatic or anything, but the way magic is structured in the MoL universe feels like it would lend itself nicely to axiomatisation in something like a systemic game.
Before reading the antimemetics series I had always considered SCP to be an awkward X-Files-style fan fiction collection. I'm still looking for more to scratch that itch.
The quality of the SCPs has increased over time, by and large, though there were always outliers. A good way to find the better stories is to look at the Exploring series on Youtube; for instance, https://www.youtube.com/watch?v=8pJUm4lKOhE
Other than the realisation that they had periodically ceased to exist (or had only existed since they last turned it on…), there wouldn't be a great difference.
That said, the black sphere would cause the "top" to diverge anyway at that point, wouldn't it? So it's not really in their control…
Jeph Jacques of QC fame did Alice Grove, which was a good Banks-like comic. https://www.questionablecontent.net/alice1.html
I also read quite a bit of Kris Schnee's work (https://www.amazon.com/Kris-Schnee/e/B00IY1HDDY); his earlier Thousand Tales novels, I think, deal well with early brain-scanning issues, e.g. flatbed-scanning a living brain.
Somehow, it gets similar points across — similar to this essay, but deeper in feeling.
Let me also say the authors here have done really well with so short a format. To paraphrase my toddler son, I want more story!
I recently finished making my way through all of the Culture novels - thoroughly enjoyed every one, and was kind of sad to have reached the end!
Have you come across any other sci-fi authors that approach the style of Iain M. Banks? Otherwise, any recommendations for other sci-fi with the depth that the Culture series had?
Also check out Harry Potter and the Methods of Rationality for a rationalist fanfic crossover between sci-fi and fantasy, if you haven't already.
You can probably use HN's search feature to find previous scifi reading recommendation threads for more.
"Iron Sunrise" and "Singularity Sky" come to mind. Maybe also "Quantum Thief".
What of the style / vibe / etc are you looking for more of?
If it helps, some fantasy authors I like are Joe Abercrombie, Terry Pratchett, Robin Hobb, Brandon Sanderson, Scott Lynch, Dave Duncan and Brent Weeks.
I like excellent character development, creative world-building and exciting stories that are plausible within those imagined worlds, wit that makes you laugh, good writing that has you marvel at the author's skill, and that special thing: immersive writing.
Beyond that, as a noob I don't know if there are sub-genres or particular styles of interest.
I had a quick gander at your suggestions, and the stories of Singularity Sky and Quantum Thief both sound intriguing - I'll add these to my reading list, thanks :)
Dan Simmons (the Hyperion saga, Olympos), Ramez Naam's "Nexus". Nick Harkaway's "Gnomon" is ridiculously good, and I basically buy anyone who wants one a copy of Cory Doctorow's "Walkaway". Theodore Sturgeon is probably the closest SF I can think of to Pratchett, although he's Golden Era so suuuuper soft SF and doesn't have the humor. Bujold's "Vorkosigan" saga is one of the funnier SF series (while still being amazing and serious), and I hear good things about Scalzi's "Redshirts".
PS - Also check out what Hannu and Ramez get up to outside of writing; it's super cool.
You are the first person I've ever seen to mention that book. It is astonishingly and unreasonably excellent, and I recommend it every chance I get. I'm just beginning my first re-read now.
These guys seem nice, and surprisingly already mentioned ethical implications early, so their corruption might take longer or develop along unexpected paths.
The aspect where exercising various powers requires coding time first could lead to some fun race-against-time scenarios. I do wish the featureless sphere had been a Utah teapot instead.
I think leaving it at that point was the right choice for the story. But if I were to speculate what a continuation might look like, it would involve Diane having planned all of this from the start to put right some tremendous wrong that happened to her, and her manipulating Tim in some way.
Ultimately I expect we'd come to the question of whether ten billion human brains in vats experiencing continuous ecstasy was better than something more like the current world or not.
It's hard to treat people below you worse when you know that the people above you will do the same thing if you do. It's an unstable equilibrium made stable by the fact that it's already happened.
Put another way, you won't start tap dancing on someone's head when you're in the middle of an infinite tower of tap dancers, each standing on another tap dancer's head.
Although this is pure fiction and speculation, thought experiments like these (the existence of free will, Laplace's demon) rely on determinism.
Even if we assume the universe to be completely deterministic, we need to "know" the initial seed state (usually called the big bang).
Different initial seeds can and will create vastly different universes (similar to how changing a single cell in Conway's Game of Life completely changes the emergent behaviour and properties of the patterns, often destroying the apparent stability of the system).
So, even with an all-powerful quantum computer, different initial states will give vastly different universes (all of them internally consistent, but not consistent with each other).
We can ask "what" the initial state is. Is that initial state self-contained, or was it under the effect of something else? Will the computer "generate" the initial state, or does the programmer (a.k.a. God) have to explicitly hard-code it carefully to create an apparently stable universe?
If time as we know it started with the big bang, then the notion of something existing "before" the big bang doesn't make sense. If time existed before the big bang, then the big bang is not the actual initial state (it is an initial state for us, as we cannot know anything outside the big bang, i.e. the universe).
Therefore, all simulations taking that to be the initial state will be incomplete and incorrect. Maybe we simulate what we can observe with our senses (directly or indirectly), and that will give us a "valid" simulation. For us, that is a perfectly valid simulation, but for the "things" that existed before the big bang (if you assume time was present before the big bang), our simulation will have extremely low entropy.
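The Game of Life comparison above is easy to check directly. A minimal sketch (the grid size, random seed and step count are arbitrary choices for illustration): flip a single cell in a random soup, run both copies forward, and count how far apart they have drifted.

```python
# Toy demonstration: a one-cell perturbation to a Game of Life initial state
# typically spreads, changing the emergent patterns across the grid.
import random

N = 32  # toroidal N x N grid

def step(grid):
    """One Game of Life generation on a wrap-around grid."""
    new = [[0] * N for _ in range(N)]
    for y in range(N):
        for x in range(N):
            live = sum(grid[(y + dy) % N][(x + dx) % N]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dy, dx) != (0, 0))
            new[y][x] = 1 if live == 3 or (grid[y][x] and live == 2) else 0
    return new

random.seed(0)
a = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]
b = [row[:] for row in a]
b[0][0] ^= 1  # flip a single "pixel"

for _ in range(50):
    a, b = step(a), step(b)

diff = sum(a[y][x] != b[y][x] for y in range(N) for x in range(N))
print(f"cells differing after 50 generations: {diff}")
```

The same code with the flip removed prints 0, which is the determinism half of the argument: identical seeds give identical universes.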
They have infinite computing power; just simulate all possible initial states?
1. I don't agree with the "midpoint stability" argument. Since these are simulations - the real one is at the top, but there is no "bottom simulation", since every simulation below is also running a simulation. Hence there is no stable middle point - since there is no middle in the first place.
2. About simulating the future. It seems quite obvious that, since everything is based on determinism, they cannot look at the future and change something so that the future they observe is no longer the case. Hence the only possible future they could look at would be that which they wouldn't want to change in any way. Which is really an interesting idea. But furthermore - when they look into the future (say) 100 years from now - the simulation they are seeing would be the one which already "looked" into the future. So the act of looking must have a profound effect - it has to order the future so, that whoever is looking at it would not be able to change it, because: 1) if you can change it then determinism falls apart, and so the whole premise of simulation, and 2) if you would decide to change it then, within the future you see in the simulation, that change has already taken place.
I still feel, however, that the story's requirement, that the simulated world mirrors the simulating world (except when the latter is the real world), depends on the physics of the real world (and its simulations) being maximally deterministic (in that everything that is possible happens with probability 1) or else the butterfly effect will cause them all to diverge quite rapidly. But with that level of determinacy, there is no need for a feedback mechanism to achieve "midpoint stability" - there is no possibility of divergence from the already-determined future.
This is the premise of the TV series Devs. Cool idea but super-bad execution; it just devolves into a standard bad-guys-hunting-good-guys-with-pistols story…
If the top-level 'God' people choose to run the simulation faster than real-time, I think we can say that no change will be observable anywhere in the simulation stack, except that the people in them will feel as though they have chosen to speed up their simulation and yet not seen any change in it - which is another of the things that will tell that they are in a simulation and not at the top level. (Note that, as Diane has presumably already said in her paper, free will is definitely an illusion in the simulations. I guess it also is at the top-level, given that their universe is deterministic to the point that it can be simulated to the tiniest detail.)
Now suppose the top-level people choose to run the simulation in reverse. Again, I think we can say that nothing will seem to change to those in any simulation, because, at any point in the reversal, everything, and in particular everyone, will be in the same state as they were in the forward pass, which for the people includes having the same memories, plans and expectations - it will be indistinguishable from an instant in forward-running time. They won't even remember that, at some point, the clock was reversed, as that was not something in the forward simulation's past.
This is where it starts to get interesting: if the clock is reversed again and left to run, what will be observed in the simulations, and by the top-level people looking at the first-level simulation, when the simulations reach the time of the original reversal?
Update: I'm leaning towards the view that the top-level simulation continues forward in time from this point, and observes its simulation go around the loop. In general, the Nth simulation goes round the loop N times before proceeding past the reversal point, but no simulation observes that it is going or has gone around the loop, and therefore cannot determine its depth by counting loops.
The moment the 'camera' appears in a simulation, it has diverged from the 'real' world. Now some of its inhabitants know they are in a simulation, and so they are not replicating what happens in the real world. The butterfly effect will likely ensure that the divergence will become general. I think this could be undone by sending the simulations around a time loop, as then, if my supposition about how these loops play out is correct, each simulation will exit the loop in the same state as the real world, with no memory of the camera having once appeared.
The top level could be very different to the first level simulation. The only part that actually needs to be similar is that the real universe and the first-level simulated one both create their own simulated universe.
I agree, and I'm not sure the scenario given could happen. The basis of the possibility of the scenario is "the universe is deterministic" and "we have infinite computing power" - but then, is the infinite computer itself part of the universe? b/c as soon as the top layer interacts with the sims, it diverges, which means the sims are not the same as the top layer anymore, so are not equivalent to it even though they are otherwise deterministic - the first sim under the top is deterministic, but dependent on interaction with another deterministic (top level) universe. Hence, the first sim is no longer representative of the top level - the top level can still interact with the first sim, but it can no longer consider it to be a mirror - this is the answer to the paradox of "running the sim into the future" - once the top level observes the sim, it is altered in a way that might cause the sim to diverge.
> when the simulations reach the time of the original reversal?
The sum total time of the reversal is finite, so the top level (level 0) will observe the reversal in a finite time; level 1 will therefore observe the same finite delay w.r.t. the top. Hence each level N will be delayed by the same amount w.r.t. the level above.
The idea of a "loop" would be wrong, b/c when a sim reaches "the time of the original reversal" the result only affects the next sim down. But each sim at level N does go through the loop N times, because it shares the loop of each of its parents; hence if a loop delays by 10s, level N, which goes through N loops, is delayed w.r.t. the top by N*10s.
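A trivial sketch of that bookkeeping (the 10s figure is just the comment's example): level N inherits one loop traversal from each of its N ancestors, so its lag relative to the top is N times the per-loop delay.

```python
# Delay accounting for a nested-simulation stack, assuming each pass through
# the reversal loop costs a fixed loop_delay seconds.
loop_delay = 10  # seconds per loop traversal (arbitrary figure from the comment)

def lag(level):
    """Total delay of simulation `level` relative to the top (level 0)."""
    return level * loop_delay

for n in range(4):
    print(f"level {n}: {lag(n)}s behind the top")
```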
> The moment the 'camera' appears in a simulation, it has diverged from the 'real' world
True, but now the "next" sim interference will not be replicated between level 0 and level 1, which in turn can cause level 1 to diverge, and so on.
> with no memory of the camera having once appeared
You could just stop all the sims after you interfered with them, and then restart the sim without interfering (or equivalent - like rewinding the sim and then playing it without interfering) - the same would happen in all sims too, so they would be, and have, the same pristine sims.
>> The author is being slightly inconsistent...
I think we have to take it that the computer is part of the top-level, 'real' world, and therefore that the physical location, of the information making up each of the simulations, is in the real world. Each level gets a different view of that information (somewhat like a recursive function's view of the stack), though these views are all the same insofar as they all see an infinite stack of simulated worlds.
They are not all identical, however. For example, when the narrator's world is fast-forwarding its simulation to catch up with their own time, their simulated world is not the same as their own, and they can see that this is so. The level above the narrator's might be in a similar situation with regard to both the narrator's world and the narrator's simulation.
The author is only assuming that the inhabitants of each simulation perceive their situation as being the same, and only when one compares them at the same local time in each. So far, I don't see any paradox between that and the fact that the simulations are not identical (a fact that can be verified by anyone further up the stack, who has a broader view of what's going on.)
>> when the simulations reach the time of the original reversal?
I think we are in agreement here, but up until now, I had been thinking that there is no way that a simulation could detect that it has been through a time loop. The top-level operators, however, could make their clock visible in the first simulation, which would pass it on to the second one and so on, and, just as the narrator deduces that the black ball materializing on the ceiling is a viewport, each simulation will deduce that the clock is showing real time.
Would this permit a simulation to count the number of times they have gone through a time loop, and therefore be able to deduce something about their position in the stack? I don't think so, as going backwards in simulated time erases, from the simulated world, all memories and any other physical record from its future: the world's inhabitants will just see the real-world clock suddenly jump forward, at the point where the loop starts going forwards again. This will be a different jump at each level, as they have each gone around the loop a different number of times, and therefore fallen behind real time by different amounts. As they have forgotten the duration of a loop, however (erased by time-reversal), they cannot deduce how many loops the clock-jump represents.
On the other hand, going forwards, each simulation has a different memory of how much the real-time clock jumped, and they will not agree on that even at corresponding local times... I have not figured out whether that is a problem for the story's premises.
Alternatively... maybe each level can deduce the duration of a loop, as they will perceive the simulation they are running go around it once? I'm not sure this is consistent...
My, this escalated quickly.
A more likely explanation is that the author simulated a different universe to 2018, watched some Hulu, and then wrote down the story for free publication on the web.
Fantastic story. Will read more from this author!
If you run the program once and don't observe anything happening, you know you're top level. Then you resolve to not rerun the program.
If you run the program once and observe the black sphere, then run it again and don't see the black sphere, you know you're level 2. You resolve to not rerun the program. And on it goes.
EDIT: box to sphere.
Imagine you decide to show a "1" instead of the black sphere. You then look behind you. Most likely you see a 1.
Whatever number you see, increment that number by one in the next simulation. If you see nothing, congrats, you're at the top and leave the number as-is.
So initially what each level sees behind them is: x-1-1-1-1-1
Then after everyone increments by one: x-1-2-2-2-2
Now repeat the step. Everyone looks behind them and increments the number they see by one.
Eventually you'll look behind you and the number won't have changed. You're at one level higher than that.
If there are millions of levels, that'd take ages of course. But I think you could use the same idea to do it instantly on the computer instead. Just have your simulation computer modify a number on the simulation computer in the next level down.
2. Make the value of n appear in the simulation
3. If you see that number in yours, increment n and go back to Step 2
4. Else, the number you see is how many simulations down you are.
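The increment rounds described above are easy to model. A toy sketch, assuming a finite stack (the story's stack is infinite, so a level at depth N would need about N rounds before the number behind it stabilises; 6 levels is an arbitrary choice):

```python
# Toy model of the level-counting protocol: every level starts by showing "1";
# each round, each level looks at the number shown by the level above it and
# displays that number plus one. The top level sees nothing and leaves its
# number as-is. The stack stabilises from the top down.
LEVELS = 6
display = [1] * LEVELS  # index 0 is the top ("real") level

def tick(display):
    """One synchronous round: everyone looks up and increments what they see."""
    new = display[:]
    for i in range(1, LEVELS):  # level 0 sees nothing behind it
        new[i] = display[i - 1] + 1
    return new

for _ in range(LEVELS):
    display = tick(display)

# Once the number behind you stops changing between rounds, you can read off
# your own depth from it.
print(display)  # → [1, 2, 3, 4, 5, 6]
```

In the infinite-stack setting this never fully terminates, which matches the "if there are millions of levels that'd take ages" caveat above.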
Another site with good sci-fi is http://www.galactanet.com/writing.html - stories by Andy Weir (writer of The Martian).
Another HN commenter once theorised / joked that the speed of light was the maximum value an integer could store and the Planck length was the smallest floating-point number.
It's still fun to think how simulations or other types of "nesting" interacts with modal realism. E.g. maybe there is an "Occam's razor" type of effect where we should expect to find ourselves in a world with the simplest physics that allows conscious inhabitants, simply because such worlds have more "instances" within other worlds.
A sort of Roko's Basilisk, except with an omnipresent dread that they might be, say, the 2nd or 3rd universe, and the first just hadn't messed with them yet.
If they do anything that scares the people above, or that they abhor, they could be turned off. Maybe they're not even sure if the people above them are waiting to see if they only benevolently observe the universe below them.
But maybe also if they are too boring. :(
If so: either we found an exploit or we're building an entire complex theory (QM) to explain a mistake ...
The complex theory correctly predicts real world behavior, nothing changes from our perspective if it's some kind of bug.
This begs for a sequel to "They're Made out of Meat". You know the thing about tabs in Makefiles, how by the time he realized they were a misfeature he already had ten users and didn't feel like he could do that kind of breaking change?
Or b) you don't accept the idea of "infinite" and you think everything has one single beginning, one original "big bang" that bootstrapped everything and that nothing ever existed before that moment, not even time. But as humans I don't think our brains are wired in a way to understand "there was nothing before something", we will always ask ourselves "what is the cause of that?".
Parmenides would like a word:
One also has to ask "what keeps reality/realities existing?". Why doesn't everything simply just go poof into non-existence?
Aristotle (and Thomas Aquinas, in his Second Way):
argue that there must be an Uncaused Cause. Not a "cause" in the temporal sense of a first domino knocking over a second, third, etc, starting with the Big Bang; but rather like Yo-Yo Ma playing his cello causing music. What is the cello player of reality that is causing the reality that we perceive?
Whatever said entity is, it itself cannot be caused by something else, as you get into an infinite regress.
This is where there seems to be a somewhat subtle implied premise, as the most one can say about necessity here is that without a first cause, then either there could be no subsequent effects or there is an infinite causal regress.
And regardless of that, this argument tells us nothing about what sort of first cause there might be, so when Aquinas says “to which everyone gives the name God”, he's just winging it, in that just about everyone's notion of God has some specific characteristics.
The use of "manifestly", and appeals to plausibility, intuition or convention such as "to which everyone..." are something of a tell: when philosophers resort to such language, we know they cannot prove that they have a sound argument.
These sort of arguments are quite entertaining to discuss, but Alvin Plantinga, for one, is under no illusion that they actually prove the existence of God.
> In particular, the critic owes us an account of why, since physics cannot in principle capture all there is to physical reality in the first place -- and in particular arguably fails entirely (as Russell held) to capture causality in general -- we should regard it as especially noteworthy if it fails to capture causality in one particular case. If the critic, like the early Russell, denies that there is any causality at all, he owes us an account of how he can coherently take such a position, and in particular how he can account for our knowledge of the world physics tells us about if we have no causal contact with it. If the critic says instead that genuine causality does exist in some parts of nature but not in the particular cases he thinks quantum mechanics casts doubt on, he owes us an account of why we should draw the line where he says we should, and how there could be such a line. […]
> In short, anyone who claims that quantum mechanics undermines Scholastic metaphysical claims about causality owes us an alternative worked-out metaphysical picture before we should take him seriously (just as anyone who would claim that quantum mechanics undermines the law of excluded middle owes us an alternative system of logic if we are to take him seriously). And if he gives us one, it would really be that metaphysical system itself, rather than quantum mechanics per se, that is doing the heavy lifting.
(Warning: it's a long read.)
Also references a previous post:
> Recall that in an earlier post Oerter claimed that quantum mechanics casts doubt on the principle of causality insofar as it describes “systems that change from one state to another without any apparent physical ‘trigger.’” Recall also that I pointed out that it is simply a fallacy to infer from the premise that QM describes such-and-such a state without describing its cause to the conclusion that QM shows that such-and-such a state has no cause.
And that is not enough, as even if it could be sustained, Feser is still arguing from a generality to a specific. Physics might fail to capture all there is to physical reality while it also being the case that atomic decay is acausal.
Furthermore, when Feser writes: "In short, anyone who claims that quantum mechanics undermines Scholastic metaphysical claims about causality owes us an alternative worked-out metaphysical picture before we should take him seriously", he is just attempting to shift the burden of proof away from those on whom it properly rests: those who say that they can prove there's a God.
Then there's the straw man "If the critic, like the early Russell, denies that there is any causality at all..." that we need not waste any time on.
- And that's all just from the short quote you have provided!
What Feser is not delivering here is an argument that the decay of an atom had a specific cause that operated at a specific time.
Aquinas at least has the virtue of being straightforward, but there's still the independent issue of the unstated assumption that I raised earlier.
This does nothing to change my description of the decay event as being an uncaused event, for more interesting notions of “cause”. And I believe that it does throw a wrench into the argument for the necessity of a first cause. In short, what Feser lacks in brevity he fails to make up for in persuasiveness.
Edit: just to clarify that I am on the 2nd solution :)
I would ask: what originated reality 0? If the answer is "something", then reality 0 is not reality 0. If the answer is "nothing; it has existed forever", then I would be dissatisfied, because my brain cannot conceive of that.
Thanks, OP, for making us aware of the great site (qntm.org), since it has many other short stories.
I don't believe the part about all the simulations being linked though. I don't see why that would have to be the case. Our simulation could have "started" 1 second ago and we wouldn't know it.
And then there are the fundamental laws of nature. If our universe is indeed a simulation, then it is either just an approximation of reality, or reality follows different laws of nature. In that case, again, it makes no sense to think of oneself as a random choice between one reality and many simulations, as one's own existence depends on one specific kind of universe.
Further, suppose a computer agent knew that 6 copies of itself (including its current state) would soon be spun up, each in a different VM, and for each VM a different simulated environment, but that the current copy of itself would also continue running.
While each of the simulated environments differ from each-other and the true/outer environment, they don’t differ in a way that can quickly be detected.
Different actions in the simulated environments and the “real”/original environment, would have different effects in the original environment, effects which the agent cares about.
In order to best produce outcomes in accordance with what the agent cares about, how should the agent act?
I think it makes sense for the agent to act as if there is a 1/7 chance that it is in the original environment, and a 6/7 chance it is in one of the 6 simulated environments.
How could it be otherwise?
Suppose the original, and each of the VM copies, is given a choice between two options, X or Y, where if it chooses X, then if it is the original, it gets +m utility of benefit in the original environment, but if it is one of the copies in a VM, it instead gets -n utility of benefit (in terms of what it cares about in the original environment). If it chooses Y, there is no change in the reward.
The combinations of values of m and n for which it makes sense to choose X are exactly those that would make sense assuming it has a 1/7 chance of being the original and a 6/7 chance of being in a VM.
That being said, I don’t think we are in a simulation. I just don’t think that the concept of “assigning a probability other than 0,1 or 1/2 to being in a simulation” is always unreasonable in all conceivable circumstances.
I just happen to think that it is highly unlikely for us.
Take the perspective of your agent: There is no way to learn about the number of other agents, and this number is absolutely central to your argument. Every agent will at some point notice that a specific strategy is successful. It will appear like a universal law of nature. For each agent there is no chance involved, it is 100% predetermined what the correct strategy is.
Saying there is no chance involved is like saying that if John rolls a fair 6-sided die under a cup, and you and Jane don't know the result, and Jane offers to bet you at some odds that the die is showing the number 4, then there is no probability involved because the value of the die has already been determined. OK, sure, it is already determined in the world, but one should still assign probability 1/6 that the die shows a 4.
So, essentially replacing the simulation with an oracle of sorts for the rest of the world they are in.
This would allow the internal simulation to be just as detailed, because it would just be re-using the same results.
Of course, that only works if it is a simulation of exactly the same world.
I suppose if the simulation was the same except for a small intervention, then you could maybe use what you are already computing as a starting point, and then compute the differences that result from the intervention.
But, because of chaos stuff, I imagine that that would quickly spiral out to the point where it wouldn’t save much computation.
If you are willing to fudge things, though, you could make it so that the results of the interventions are subtly adjusted in order to make the differences that result from them not affect much, and where they would be negligible, make them zero,
and with the recursive nature of it, could fudge the results to make it go to a fixed point (or cycle) more quickly, so that you don’t end up having infinitely many distinct levels.
Edit: of course, I don’t think this could be done in our universe, for our universe, or even for one very similar to our universe. We might be able to do something similar for a much much simpler world. And if we specifically hard code in a part of the world which represents “a computer simulating this world”, then that’s no issue, though it isn’t particularly satisfying/surprising. It isn’t a surprise that a videogame can have a scaled down copy of its window drawn as a texture on some object in the game.
Also, this version wouldn’t really suggest the “infinitely many copies are in the fixed point part, and only finitely many are above that part of the sequence” thing, because the chain would just stop once it becomes cyclic or a fixed point.
Also, there might be like, fundamental issues preventing detecting whether a computation in the simulation is simulating the same thing? Quine-ing is possible, so the issue isn’t representing the same code, but “detecting whether some process is equivalent to running some code” seems like it might be an issue.
Like, with Rice’s theorem or something.
Like, when simulating a game of life world, is it possible to automatically eventually detect all parts of it that are doing something equivalent to, e.g. enumerating primes?
Like, all parts for which there is a fast algorithm for computing that part of it using the sequence of primes and vice versa?
Or, eventually finding all such parts that last indefinitely and aren't interrupted/broken.
Actually, I’m guessing yes, that should be possible. Dovetail together all “fast algorithms” that attempt to predict parts of the state using the list of primes, (and where the primes can also be quickly computed using that state through another “fast algorithm”), and then only keep the ones that are working.
All the ones that eventually stop working should be eventually ruled out, and the dovetail process should eventually find all the ones that do work.
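As a toy illustration of that dovetailing idea (the candidate set here is invented for the example; a real dovetail would interleave an unbounded enumeration of programs rather than a fixed list): run every live candidate predictor against the observed stream, and discard a candidate the moment it mispredicts.

```python
# Dovetailing sketch: candidates that eventually stop working are eventually
# ruled out; candidates that genuinely predict the stream are never discarded.
def observed(t):
    """The stream we are trying to explain (stands in for part of a GoL state)."""
    return t % 2  # alternating 0, 1, 0, 1, ...

candidates = {
    "always-0": lambda t: 0,
    "always-1": lambda t: 1,
    "parity":   lambda t: t % 2,
}

alive = set(candidates)
for t in range(20):  # one observation step per round, every live candidate checked
    for name in list(alive):
        if candidates[name](t) != observed(t):
            alive.discard(name)

print(alive)  # → {'parity'}
```

As noted above, this only ever yields candidates, not proofs: a survivor might still fail at some future step, which is exactly the "doesn't result in conclusively deciding" problem.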
Ah, but wait, that doesn’t result in conclusively deciding “yes, there is one here”, only giving candidates.
Well, still, for fast translations between the states of the simulated prime-finding machine and lists of primes, I expect there should be a proof that,
Err, wait, no.
If the machine doesn’t just enumerate primes, but instead enumerates primes and, after the 2^(2^n)-th prime, looks at the n-th candidate for a proof of a self-contradiction in ZFC, and, if it is a valid proof of a self-contradiction, messes stuff up and no longer computes primes,
(Or maybe instead of checking all of the n-th candidate, checks one step of the current candidate),
then ZFC can’t prove that this will always produce primes.
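The modified machine could be sketched like this, where `check_contradiction_proof` is a hypothetical stub standing in for a real proof checker over candidate ZFC derivations:

```python
import itertools

def check_contradiction_proof(n):
    # hypothetical stub; a real version would verify candidate n as a
    # formal ZFC derivation of a contradiction
    return False  # presumably no such proof exists -- but ZFC can't prove that

def suspicious_primes():
    count, n, level = 0, 2, 0
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
            yield n
            if count == 2 ** (2 ** level):  # at the 2^(2^n)-th prime...
                if check_contradiction_proof(level):
                    return  # "messes stuff up": stops emitting primes
                level += 1
        n += 1

print(list(itertools.islice(suspicious_primes(), 6)))  # [2, 3, 5, 7, 11, 13]
```

ZFC can verify any finite prefix of this machine’s output, but proving it emits primes forever would amount to proving ZFC’s own consistency.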
We could maybe say that this doesn’t count as doing the same computation, because there’s no sufficiently good correspondence between the states and the sequence of primes, but, eh.
On the other hand, if one settles for merely high confidence that some particular process in the simulation is computing whatever program, before replacing it with something that just gets the results of that computation from outside, you could probably use logical induction? Or, use logical induction for “it can be accurately predicted by this at least until time t in the simulation”, and then use the simplification until time t.
Of course, that’s not practical, because the current best known logical induction algorithm is much too slow. But theoretically.
Err, wait, I was going to cite Bremermann’s limit, but that applies to moving from one quantum state to an orthogonal quantum state? Maybe it doesn’t rule this out completely if the computation is done in a way that doesn’t involve enough rotation of states to be distinguishable?
Ok, but I expect that with enough math that loophole could be ruled out.
Maybe it is an emergent property, but maybe it emerges not just from the complexity of an individual being, but from something else that is simply missing from our current understanding.
That's a pretty huge and unfounded assumption.
1) The fraction of posthuman civilizations capable of running high-fidelity ancestor simulations is very close to zero
2) The fraction of posthuman civilizations that are interested in running simulations of their evolutionary history, or variations thereof, is very close to zero
3) The fraction of all people with our kind of experiences that are living in a simulation is very close to one
For a certain kind of person, who is also the most likely to engage with the hypothesis, 3) is the most exciting, so it got the most attention. I think 1) and 2) are more probable and at least as worthy of deep consideration.
But writing a coherent story around non-countable infinity is so much harder because our brains struggle to grasp that concept altogether.
Basically, my point is that our brains have a few limitations which we work around by simply ignoring the stuff that does not compute. And there are much more approachable things in that category, like the ratio of the circumference to the radius of a circle.
Yet even in the quantum future, it's hard to imagine a "real" number of universes, because we are so bound by the countable numbers.
I'd like to see a story go that far, but it's likely not to be very readable because humans today don't think of irrational numbers as irrational.
In usual math, irrational numbers have aleph-0 decimal digits.
(Rational numbers also have aleph-0 decimal digits, but after some point they start repeating in a loop of finite length.)
Irrational means: cannot be expressed as "integer divided by integer".
As a consequence, the decimal digits of rational numbers start periodically repeating at some point. That is because, as you keep dividing, after some point the remainder can only be a number between zero and denominator minus one, which is a finite number of options, so when you get the same remainder again, the loop restarts.
Therefore, if the decimal digits do not repeat in loop after some point, the number is irrational. This is true regardless of whether the pattern of decimal digits is something complex, or something quite simple but not exactly a loop; for example "1.101001000100001000001..." would also be irrational (i.e. not a fraction of two integers).
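The remainder argument above can be made concrete with a small long-division sketch: track the remainders as you divide, and the digit loop begins where a remainder first repeats (the function name and return format are just illustrative):

```python
def decimal_expansion(p, q):
    # long division of p/q: returns (digits before the loop, looping digits)
    digits, seen, r = [], {}, p % q
    while r and r not in seen:
        seen[r] = len(digits)  # remember where this remainder occurred
        r *= 10
        digits.append(r // q)
        r %= q
    if r == 0:
        return "".join(map(str, digits)), ""  # terminating expansion
    start = seen[r]  # the loop restarts at the repeated remainder
    pre = "".join(map(str, digits[:start]))
    loop = "".join(map(str, digits[start:]))
    return pre, loop

print(decimal_expansion(1, 7))  # ('', '142857')
print(decimal_expansion(1, 6))  # ('1', '6')  i.e. 0.1666...
```

Since there are only q possible remainders, the `while` loop must terminate or repeat within q steps, which is exactly the argument above.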
Technically, individual real numbers cannot be "countable"; that adjective only refers to sets (and ordinals or cardinals, but those are not real numbers). In standard math (i.e. not hyperreal numbers), every real number in decimal expansion has a finite or countably infinite number of digits. Countably infinite here means that you can, literally and straightforwardly, count the decimal digits: "this is the first decimal digit", "this is the second decimal digit", etc.
Then there is a question of whether we could write an algorithm that prints those digits. Obviously, for rational numbers, we could: print the (finite) part before the infinite loop, then keep printing the (finite) contents of the (infinite) loop forever. We could also do it for some irrational numbers, such as "1.101001000100001...". Even for pi, e.g. using a Taylor series. However, for many real numbers, which are effectively just infinite sequences of random digits, we can't do that.
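For the "1.101001000100001..." example, the digit-printing algorithm is particularly simple: emit ones separated by ever-longer runs of zeros. A minimal sketch:

```python
import itertools

def digit_blocks():
    # blocks "1", "01", "001", ...: each 1 preceded by one more zero
    zeros = 0
    while True:
        yield "0" * zeros + "1"
        zeros += 1

# print any finite prefix of the (infinite) decimal expansion
prefix = "1." + "".join(itertools.islice(digit_blocks(), 5))
print(prefix)  # 1.101001000100001
```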
tl;dr -- all real numbers have a countable (or finite) number of decimal digits
How can each simulated universe be the same as the one above and below? Why can't they diverge? If they turn off their universe (killing all those below), why would that turn off the one above?
Or is it just hope that if you don't turn yours off, the one above won't either? Because while all decisions are possible, we make choices in our self-interest, because we hope everyone else is playing the same game.
Even if they are in the middle of many levels of simulation, it might not be just one line, but more like an infinite tree.
There could be someone in the next room also simulating the universe, or they could simulate the universe again 5 minutes later, this time without intervening. (Like I wrote in another comment)
Of course, superdeterminist asteroids become worrying. Or can they adjust those? If you can interfere, can you interfere to produce "good" outcomes? Steer asteroids away?
Yes, but not on the highest level. :(
So it depends on whether they can simulate the future, and on whether someone up the ladder will be nice enough to simulate enough of the future (with the asteroids deflected) for those below them before they (the ones up the ladder) get killed.