There is a section in the article that explains the motivation behind this: there are a lot of questions in embodied AI that can be answered with such simplified simulators, i.e. you need something more complex than Atari, but you don't need the full complexity of the real world.
From self-driving simulations and agent training to human interactions and large ecosystem simulations.
Could allow us to observe how patterns emerge in/between large agent groups (swarm intelligence is exciting!) and train our models in more complicated and collaborative tasks.
To me, metaverse (drawing from Snow Crash) means essentially "the WWW in 3D" which somehow we still don't have despite commercial approaches to that space such as Second Life and Roblox.
The closest thing I know of is probably Croquet (the original project from Alan Kay and co, less so the commercial venture although I'm watching that with some interest as well) and its various spin-offs like Open Cobalt: virtual spaces hosted independently but with the capability to link to each other to compose a larger distributed virtual space, just like the world wide web.
so like WebXR (& WebGPU is moving ahead too)
a good example of what's possible is Vartiste, a 3D paint editor tool running in a normal browser on pancake (flat-screen) PCs or in any VR HMD
I guess WebXR's responsibility is the interface between XR and the web, and someone else would need to take on the synchronization task to achieve that.
And then the videos show the end result of a single successful run of the trained agent, rendered to video
Of course, there is always the possibility that the agents running the simulation could introduce some explicit information about this simulation, if they were so inclined, but that is entirely equivalent to the observation that God or the Gods could descend from heaven and make their existence known. Basically the simulation idea is as scientific as transcendent religious ideas (heaven, god, illusion, Valhalla etc) - they have similar implications, and are similarly impossible to disprove.
>"Buy Jupiter!" is a humorous science fiction short story by American writer Isaac Asimov. It was first published in the May 1958 issue of Venture Science Fiction Magazine, and reprinted in the 1975 collection Buy Jupiter and Other Stories. The original title of the story was "It Pays," though it was never published under this name.
>Plot summary: Government officials of the Terrestrial Federation negotiate to sell the planet Jupiter to an energy-based alien race. The beings refuse to reveal their plans for its use and whether or not they are at war with other similar beings. Eventually, the aliens reveal that they wish to suspend letters in Jupiter's atmosphere as an advertising slogan (i.e. Jupiter is to be used as an advertising billboard), to be seen by passing spacecraft. The main Earth negotiator reveals to his colleagues that he has outsmarted the aliens, who clearly are not experienced hagglers, having neglected the other Jovian planets. So when rival beings come to do business, Saturn, with its fancy rings, can be sold for an even higher price.
Second, the simulation would have to be restricted in some sense: either its reality's laws of physics would not be the same as ours, or it would run slower than real time (which would be odd, at least).
If the simulation ran slower than real time, but within the same physical reality as ours, we could observe it, albeit in a very unfortunate manner: the evolution of the universe (expanding space, decaying particles) would show earlier in the "host reality" than in our "guest reality". Presumably, the simulation would become inconsistent before it would break down.
If the host universe has different laws of physics, it is literally metaphysical. There is no point in trying to understand such a thing within the limits of our existence, so we can file this under (religious) belief systems.
If you are at the point where you can simulate a whole world, I think this ceases to be an issue.
That implies that there can only be exactly one universe, whereas the simulation hypothesis (just like the multiverse or many-worlds interpretation) says that there are infinitely many. Thus the probability of something happening is undefined: there is no denominator, and everything possible happens somewhere.
I think a more useful view is that it is extremely unlikely, and so it is an extraordinary claim that would need extraordinary evidence.
It’s expensive to simulate every particle, so it’s easier to simulate the macroscopic observables of a system in bulk. But if you try to look at individual particle interactions, you end up seeing that things aren’t deterministic any more (because the individual particles aren’t typically simulated), and the results of experiments are heavily dependent on when/what exactly you decide to measure.
Wave function collapse is what happens when the simulation had to fully evaluate a particle interaction that would have normally been optimized away.
You could argue that the simulation must be running on a quantum computer then, but that's just moving the goal posts.
Of course, which is why the universe doesn’t typically calculate full entanglement; hence entanglement doesn’t really show up on the macro scale.
Only when you do experiments on a handful of particles, and come up with an observable that depends on their behavior (interferometry, etc), the simulation has to scramble to come up with a coherent history that agrees with your measurement, in a lazily evaluated way. The larger scale the experiments are on entangled particles, the harder the simulation has to work to make it all happen and make sense. :-)
One could say that decoherence is the universe’s way of preventing us from stealing its compute power too much.
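The lazy-evaluation picture above can be sketched in code. This is purely a toy analogy, not physics: the `LazyGas` class, its bulk "pressure law", and the Gaussian particle states are all made up for illustration. Bulk observables come from a cheap aggregate formula, while individual particle states are only materialized ("collapsed") the moment someone measures them, and are memoized so repeated measurements tell a consistent history.

```python
import random

class LazyGas:
    """Toy lazy-evaluation analogy: macroscopic observables are computed
    from bulk laws; per-particle state exists only once measured."""

    def __init__(self, n, temperature):
        self.n = n
        self.temperature = temperature
        self._particles = {}  # particle states, resolved on demand

    def pressure(self):
        # Macroscopic observable from a bulk law; no per-particle work.
        # (Ideal-gas-flavored toy formula, units deliberately ignored.)
        return self.n * self.temperature

    def measure_particle(self, i):
        # "Collapse": the state of particle i is generated the first time
        # it is observed, then cached so later measurements agree.
        if i not in self._particles:
            self._particles[i] = random.gauss(0.0, self.temperature ** 0.5)
        return self._particles[i]

gas = LazyGas(n=10**6, temperature=300.0)
print(gas.pressure())                        # cheap: zero particles resolved
v = gas.measure_particle(42)                 # forces evaluation of one particle
print(v == gas.measure_particle(42))         # consistent history: True
```

The memoization dict is doing the work the comment attributes to the simulator: nothing is computed until a measurement demands it, and once computed it stays fixed.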
And let's ignore that most of the supposed optimizations that quantum mechanics was allegedly bringing happen because of entanglement and not in spite of it.
So, let's say we find out that we live in a simulation. The question now becomes: is the outer level a simulation as well? And so on.
If we find out we do not live in a simulation then the question is: what originated us? (Which in some sense is like asking "do we live in a simulation?": we are still asking about the origins of this reality).
Physics can tell us a lot about this reality but it won't ever be able to tell us anything about the origin of it (I'm not talking about Big Bang, but the origin of the Big Bang and the origin of the origin of the Big Bang, etc.)
We'll never find a definitive answer that satisfies us.
Heh. Just wait until you die and wake up.
The fact that it's been given more consideration than an idle philosophical thought experiment has more to do with our current culture's fixation on computers than with any particularly strong argument.
The original thought experiment from Nick Bostrom has also at this point been paraphrased incorrectly by both Neil deGrasse Tyson and Elon Musk, so the public discourse surrounding it has lost much of its coherence.
That said, it does make sense; do characters in The Sims as it is now, or as it may be in a hundred, a thousand, ten thousand years, know that they're living in a simulation? We're adding more and more computing power, and more and more detail to our own simulations; at the current pace it seems to be a matter of time before we can simulate physics down to the smallest quantum details, at a universal scale, at a fraction of the energy cost of our own universe. And maybe not in real time, but if we live in a simulation, who is to say that time as we experience it passes as fast on the "outside"? It could be that every second of our existence takes the equivalent of a hundred years outside the simulation. But we're going down a philosophical rabbit hole now.
Indeed, the only two aspects of our universe that make the idea that it's a simulation even remotely possible are the fact that there at least appears to be an ultimate smallest division of physical attributes (the Planck units) and the fact that there is also a specific limit on the speed of causality, c. Neither of these is a slam dunk.
My guess is that the hypothesis is false but unfalsifiable.
If our universe is indeed a simulation there is absolutely nothing preventing the creators from using a hack that modifies our minds on the fly to correct the inconsistencies.
There’s only a few billion people, and so very many particles.
This is why it's pseudoscientific though - you could still argue that it's a hack or whatever since the ultimate universe might have different laws of physics and computation. At that point, what makes this theory particularly more empirically appealing than a supernatural deity though? An unfalsifiable theory is an unfalsifiable theory.
Either the universe is one of a large number of potential universes (anthropic principle + multiverse), or the laws of physics were designed or selected to be "interesting".
It's not a falsifiable question at our current level of understanding of the universe, but it is clearly a question that has an answer even if we can't know it.
I reject the very foundation of the simulation hypothesis: That worlds (can) exist inside other worlds. "Inside" can mean geometrically (physically inside) or topologically (element in a set). That would mean that the "inside" relation is a hierarchy / tree with a root (a true reality). Instead I think of it as a graph, where worlds can observe each other, without any influence on one another, thus there is no need for an origin or root.
Thought experiment: For the sake of simplicity let's say that our world is the game of chess (but the argument can be done for any universe with a set of rules, including ours). The simulation hypothesis would argue that a session of chess is a world and that playing that session on a physical board is a simulation. Furthermore, it would claim that this world exists while it is simulated on the board, by being played. Kicking the board over, ending the simulation would also end the simulated world.
Now, there are many logical inconsistencies here, uncovered by the following questions:
- What if I replicate the simulation by playing exactly the same session on a second board? There is only one session but now it has two hosts!? Which one does it exist "in"?
- How can I claim that I ended a simulated world, when I can't be sure that nobody else is simulating it somewhere else too?
- How can I claim to have done a new unique simulation if somebody else might have already done exactly the same in the past?
The main issue I have is that the simulation hypothesis thinks of simulations as imperative: that the simulation host invents, changes, influences, manipulates, and guides the simulated world. I prefer to think of it as functional: the host observes where a particular path (which exists regardless) leads, without changing the outcome. Performing an action on the host merely turns your telescope a bit, so that you now observe a different route; the night sky exists immutably with or without you observing it.
I don't know why you think that's a gotcha. If I copy the solar system down to the last atom, including you, and put the copy in another part of the universe, do you suddenly not exist because there's an exact copy of you running elsewhere?
...then you have two simulations...? How is this a "logical inconsistency?"
Your post is confusing to me—I couldn't detect any actual objections or issues with the simulation hypothesis in it.
And, of course, they might diverge for various reasons (inherent randomness or intentional tweaks in one of the simulations) at arbitrary time, reinforcing the idea that there's not one entity in two simulations but two separate entities, even if they happen to be identical at the moment, no matter if you run simulations in parallel or sequentially.
In such cases, the question of "which is the true one" is really empty; there's no meaningful notion of trueness, and all instances of a similar entity are equally valid. If we were making copies of sentient entities, then perhaps we might want to define some distinction to privilege one of the copies over the other (e.g. in mind uploading, the physical/original entity over the upload), but if the simulations are equivalent, then that distinction between copies/instances is just arbitrary and all of them are essentially just as valid.
> Entities in each simulation are separate instances, no matter if you run simulations in parallel or sequentially
Things can't "happen" to be identical, they are (one). E.g. I write the concept of the number 3 in a hundred different languages, fonts and styles. They are all the same, only their hosting representations are many.
I think our disagreement boils down to whether you accept or reject this.
> "which is the true one" is really empty, there's no meaningful notion of trueness
> simulations are equivalent, then that distinction is just arbitrary
Exactly, so you agree?
Furthermore, the changes depend on a large number of factors - potentially the whole simulation - so unless we're certain that the simulations are fully deterministic and without any interference (and we can intentionally ensure that they're not), there's no reason why they would stay the same. We should interpret every simulation as potentially bifurcating an exponential number of times, and each simulation explores just a tiny subset of the theoretically possible futures of each entity; and if you have "snapshots" of simulations, then you can explore many possible intervention-based branches from each point, just as we do in our experiments with simulated worlds.
The appropriate analogy is not the concept of the number 3, but the reality of a number like 0.3 that may get represented as 0.30000000000000004 in one simulation due to floating-point approximation, and get corrupted by a cosmic-ray bit flip in another. We're talking about the properties and experiences of these imperfect instances of simulations (since, assuming the simulation hypothesis, those are the experiences we get and care about), not about the properties of some theoretical concept that may be the "core of the entity" (as in Plato's philosophy). Such concepts are not real unless/until they actually get implemented or simulated, and if the realization does not match the ideal, if there are any differences whatsoever, then the realization is the one that matters.
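The 0.30000000000000004 artifact mentioned above is a real IEEE 754 double-precision effect, easy to reproduce:

```python
# Neither 0.1 nor 0.2 has an exact binary representation, so their sum
# lands on the nearest representable double rather than exactly on 0.3.
total = 0.1 + 0.2
print(total)         # 0.30000000000000004
print(total == 0.3)  # False
```

The "ideal" 0.3 and its realized instance differ, which is exactly the point: once instantiated, it's the imperfect representation that participates in everything downstream.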
The steps it can take from that point may still exist Platonically.
The simulation hypothesis is often connected with fears like: Are our gods benevolent, what if they abandon us or pull the plug? And my answer is: Why care, they can't "do" anything to us. If they change their simulation they simply observe something else, not us anymore, and we stay where & who we are.
This seems to have a similar premise, but I haven’t read it (yet). It seems interesting: https://en.m.wikipedia.org/wiki/Simulacron-3
If there is, then it's a matter of detecting the existence of that difference.
If there is not, then that would mean science will never reach ultimate truth.
I think there was another paper about similar grid-alignment idea for some properties of galaxies, but I can't find it now.