

A simple test of whether the Universe is a hologram (or a computer) - humanarity

Assume the speed of light (and the uncertainty principle, or the product of the Planck constant and the speed of light) is the processing speed limit of the computer which simulates the universe.

Also assume that the universe has finite computational resources. Therefore resources have to be diminished in one location when they are in demand in another location.

The simulation of a complex event, such as some highly involved, very low redundancy, fast-moving and extensive event (like a very large, very fast collision between two highly involved structures; there are better examples), will result in significant local load on the universe computer.

Given our two assumptions above, we have a measurable result: either the speed of light (or the Planck constant) will be diminished in that local region, or they or their product will be diminished in another region.

If the diminishment is local (or within a testable neighbourhood), this can be tested. If the diminishment is not in a testable neighbourhood, perhaps other experimental constructions will work.
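For concreteness, a minimal sketch of what the comparison might look like, assuming we could repeatedly measure c (or the Planck constant) in a "busy" region and in a quiet reference region; the function, names and threshold are all illustrative, not a worked-out protocol:

    from statistics import mean, stdev

    def constants_diminished(busy_measurements, quiet_measurements, sigmas=5.0):
        """Flag a local dip in a measured constant (e.g. c in m/s) in the
        busy region relative to a quiet reference region."""
        mu, sd = mean(quiet_measurements), stdev(quiet_measurements)
        return mean(busy_measurements) < mu - sigmas * sd

    # e.g. constants_diminished(c_near_collision, c_in_quiet_space)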
======
johnloeber
"Assume the speed of light (or ...) is the processing speed limit for the
computer which simulates the universe."

Wholly unfounded assumption. If the universe is a simulation, there's no
reason why any particular constant would be the processing speed limit.

Indeed, even talking about _processing speed limit_ is probably an incoherent
concept when considering the question of whether the universe is a simulation.
If our universe is a simulation, then some other entity is simulating our
universe. We know absolutely nothing about the world of that entity, and have
no grounds to conclude or assume anything whatsoever about the machinery of
the simulator.

The egregious mistake in this "test" is the implicit assumption that the
machinery some entity (in a universe we know nothing about) would use to
simulate our universe would be a computer of the same basic architecture as
the one sitting in your bedroom.

~~~
gendoikari
Moreover, the simulation speed would not be related to any "speed" inside
the simulation. If the processing speed decreases, the simulation will be
slowed, but nobody inside could possibly notice it. If you stop the
simulation, everything is stopped, everybody would be "frozen", and there's no
way to detect something like that from inside the simulation.

~~~
bemmu
Exactly. Maybe calculating one iteration of our universe would normally take
a second for the "outside computer", but because of very heavy load it now
takes a year. We'd be none the wiser.
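
A toy version of the point (my illustration, assuming a tick-based simulation): internal time is just the tick counter, so wall-clock stalls on the host machine are invisible to anything defined in terms of ticks:

    import time, random

    internal_clock = 0
    for _ in range(5):
        # Host load: each iteration takes a wildly different wall-clock time...
        time.sleep(random.uniform(0.0, 0.5))
        # ...but simulated physics only ever sees "one tick = one second".
        internal_clock += 1
    print(internal_clock, "simulated seconds, however long the host took")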

------
DaFranker
There are so many wrong assumptions in this OP, most of which the OP didn't
state explicitly, that for once I don't even feel like breaking it down into
chunks:

> [quote of the whole thing]

No. When you run Dwarf Fortress on a slower computer, or do something in it
that taxes your computer's resources more, the dwarves don't suddenly sit up
and take notice that something in-game ran at a different speed than it was
supposed to. Everything just takes longer to calculate, all at once,
_including_ the dwarves' perceptions and any (in-simulation) tools they might
use to measure this.

QED.

~~~
brudgers
The Dwarf Fortress analogy assumes a single-threaded implementation. In
other possible architectures, dwarves might experience latency, and among the
things a dwarf might do with all that free time is measure that latency.

Disproving idealism by assuming a god's eye view doesn't warrant a QED.
Reality is a black box abstraction.

~~~
DaFranker
Only if the architecture itself is unsafe and does not enforce timeline
consistency.

It's relatively easy and simple (on the scale of programming a black-box
automaton that runs our universe) to code a completely asymmetrical n-threaded
system where some threads are millions of in-simulation years ahead of the
others and yet the "final" timeline as would be output in a timestamp-sorted
log would be identical to that produced by a single-threaded frame-synched
system.

It's not even all that hard by our modern standards; some software scientists
do it all the time.
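
A sketch of that design (my toy construction, not a claim about how anyone actually builds it): each thread simulates its own region at whatever pace the scheduler allows, and because events depend only on local state, the timestamp-sorted log is identical to what a single-threaded run would produce:

    import threading, heapq

    logs = {}

    def simulate_region(region_id):
        # Events depend only on local state, so thread scheduling
        # cannot change what gets logged.
        events, state = [], region_id
        for t in range(5):
            state = (state * 31 + 7) % 1000       # deterministic local "physics"
            events.append((t, region_id, state))  # (timestamp, where, what)
        logs[region_id] = events

    threads = [threading.Thread(target=simulate_region, args=(r,)) for r in range(4)]
    for th in threads: th.start()
    for th in threads: th.join()

    # Timestamp-sorted merge: the same timeline no matter how the threads raced.
    timeline = list(heapq.merge(*logs.values()))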

So no, the analogy wasn't assuming a single-threaded implementation. It was
merely assuming that the system was designed for a self-consistent (and
probably loopless) timeline, and there is strong evidence that this is a
property of the kind of spacetime we live in: for one, the ever-mounting
empirical support for the c limit on information transfer.

In an unsafe, approximation-riddled system where causality graphs are only
searched for modifications up to a bounded depth or item count, yes, some
discrepancies could be noticed. However, it's empirically evident that this
bound, if it exists, is far greater than everything we've ever been able to
calculate.

The Universe, whatever it runs on, routinely calculates to perfect accuracy
graphs that would take us millennia of the sum total computing power on Earth
to solve accurately enough that we couldn't tell the difference in the result
with our current instrumentation.

------
monk_e_boy
I would assume the frames per second would drop, and we, being simulated,
wouldn't notice. Our clocks would still read the same, no matter how 'fast' or
'slow' the simulation is running. Whether terms like fast and slow even make
sense to the machine simulating us is unclear; a linear, single-direction
timeline could be a construct they are simulating. Maybe they live in a
universe with 3D time or something equally different.

------
dooptroop
Ooh, I wanna play too!

And as more and more classical information in one region of space has to be
simulated (allude to wave-function collapse here), less and less information
can be known about distant matter and events, which looks like universal
expansion to our mortal eyes but is actually our instance scaling down.

------
brudgers
There's a linguistic problem. Assume there are real-computers [or real-
holograms]. Inside the simulation the word "computers" refers to simulated-
computers. Even if we invent the term "real-computer" inside the simulation,
its fully qualified referent is simulated/real-computer. Global variables
aren't global to the extent that their referents persist across system
boundaries.

Sysadmin privileges are required to configure the same name to the same value
across systems. Once a proof assumes there is a sysadmin, we have to rely on
faith along with our reason.

------
Mosix1975
I don't think this would be testable, because in the region of diminished
resources, time would also be diminished (slowing down, but not necessarily
stopping).

Slowing time in the diminished-resource region would also slow any local
observer, making his observations seem normal. If other resources are
diminished, that will also interfere with observation.

Einstein's work on simultaneity also prohibits a single observer from
observing locally and in another region at once (i.e., there is no God-like
view).

It appears that physics has a built-in mechanism for keeping 'observers' and
'the observed' in check.

------
paulvs
When you reach for your phone, there's always a time when your hand is only
halfway there, so theoretically, it'll never get there (Zeno's paradox). Since
we know from practice that it is possible to grab your phone, the Great
Computational Machine that runs the universe must be using floating-point
numbers with a finite precision that round to the nearest whole integer when
your hand is about to arrive.
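
For fun, the rounding really does save us in a toy model (my illustration, not part of the original joke): repeatedly halving the remaining distance in 64-bit floats terminates, because the remainder eventually rounds to zero:

    distance = 1.0  # metres left to the phone
    steps = 0
    while distance > 0.0:
        distance /= 2  # Zeno: cover half of what remains
        steps += 1
    print(steps, "halvings and the hand has arrived")  # 1075, thanks to finite precision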

Clearly, proof that we live inside a computer simulation ;)

------
socket0
As these measurements and observations are done inside the system, they're
completely unreliable. Every time the computer clock ticks, a second goes by.
It doesn't matter how fast it's ticking to an observer watching the screen;
for code on the inside, a tick means a second. The outside observer would
notice the processor struggle to render all those details, slowing everything
down to a crawl, but inside the code every tick will still signal a second.

------
Lannister
Since we are making assumptions, I will make the assumption that each point in
the universe is capable of calculating all possible events taking place at
that point. As such, there need not be any drawing of resources from one place
to another. Just as each cell contains the DNA for our whole body, so each
point in the universe shall be capable of calculating all possible events
occurring at that particular point.

------
thebezet
"Also assume that the universe has finite computational resources. Therefore
resources have to be diminished in one location when they are in in demand
another location."

The "universe computer" could easily be many times larger than our universe
and have enough resources to simulate everything, everywhere.

------
tacone
> "Also assume that the universe has finite computational resources. Therefore
> resources have to be diminished in one location when they are in in demand
> another location."

With all due respect, it's much more likely the universe is multi-core, highly
parallel and enforces proper resource allocation.

------
ajuc
You are mistaking internal time for external time in the simulation.

Even if the universe runs on a Turing machine doing one instruction per
external year for one external year, and 1000 instructions per second for the
next external year, you won't notice the difference from inside.

Only observers on the outside will notice.

------
shultays
The real universe might have a higher speed of light, and thus more CPU power
than we can imagine. Our speed of light might be just a define in a header
file, so the simulation can run at reasonable speeds on that supercomputer.

------
minerb50
It is as if you're trying to run a benchmark on a system: pushing the limits
until frames start dropping.

------
chrisBob
That seems perfectly reasonable. Let me just crash a few galaxies together and
see what happens.

------
humanarity
“Unfounded assumption”: that’s why they’re called assumptions, so we don’t
have to found them. We don’t have to, but we also can, so let’s found some of
them.

“No reason the speed of light would be the processing speed.” No reason it
isn’t, and actually there are some reasons it is. The speed of light (SOL)
limits the rate of information propagation (ignoring quantum entanglement,
which may be like two or more particles being initialized with a shared key to
the Universe’s memcache). The Planck constant limits the amount of
information. These two things provide clear limits on how fast and how much
information can propagate, which is a reason that can contribute to a choice
to assume that SOL or the Planck constant, or their product, tracks inherent
or imposed computation limits of the Universe computer.
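
One concrete, well-known way these two constants jointly bound computation (my addition, not something cited above) is Bremermann's limit, which caps the processing rate of any self-contained kilogram of matter at about c²/h bits per second:

    C = 299_792_458          # speed of light, m/s
    H = 6.626_070_15e-34     # Planck constant, J*s

    bremermann = C**2 / H    # max bits processed per second per kilogram
    print(f"{bremermann:.3e} bits/s/kg")  # ~1.356e+50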

“The simulation speed (from the POV of observers in the simulation) and the
simulator speed (from the POV of observers outside the simulation) are
unrelated, because even if the computer were suspended, we would not notice,
because time would also have stopped.” If the effect is global, this is
correct. If I pause the Universe computer, then no one notices they’ve
stopped, because their noticing has also stopped. If I rewind and refresh from
a backup, then no one notices they’ve gone back Groundhog Day style, unless
observer memories are stored separately from the main Universe state, in which
case someone’s information can persist between refreshes (as happens in
Groundhog Day and Edge of Tomorrow). So if the Universe computer has one
processing loop, one core, and that slows down, then everything slows down,
and no one notices.

However, what if different regions each do their own processing and then
update each other by exchanging photons (and maybe operating on shared
memcache, if you want to get quantum)? In that case a local slowdown will not
be observed globally, meaning it can be observed in a simple manner, the same
way relativistic time dilation is observed. Synchronize two watches, send one
observer to the event region with one watch, and keep the other watch here.
When the observer returns, measure the time difference, correcting for any
effects induced by velocity or gravity. Is there some left over? Is there some
slowdown as a result of the observer having been present in a region where
computation had to slow to maintain precision (Planck constant) because there
was so much going on? Or was precision (Planck constant) sacrificed for speed?
What optimization choices were made in that part of the simulation?

If time slows we can measure it; if the SOL slows we can measure it (with a
watch whose movement is a laser bouncing between mirrors); if the Planck
constant changes we can measure that too. So if local optimization choices are
being made, they can be measured, and the experimental construction proposed
remains a workable one; see the sketch below. There is evidence that constants
have changed over time (perhaps as the creators made optimizations?) and that
they change over regions (perhaps due to run-time optimization choices, as we
are proposing to test here). One untestable (because it can't separate matter
interaction from computation) intuitive hypothesis for why the SOL varies per
medium is that there's far less computation to be done as photons go through a
vacuum and interact with nothing than when they go through a dense material
and interact with many things.
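
A minimal sketch of the bookkeeping for the twin-watch test, assuming the standard first-order corrections for velocity and gravitational time dilation (the function names and the uniform-gravity correction are illustrative simplifications):

    C = 299_792_458.0   # speed of light, m/s
    G_EARTH = 9.80665   # surface gravity, m/s^2

    def expected_relativistic_offset(v, height_diff, duration):
        """Predicted clock offset (s) from known physics, to first order.
        v: traveller's average speed (m/s); height_diff: traveller's average
        altitude above the home clock (m); duration: trip length on the
        home clock (s)."""
        special = -0.5 * (v / C) ** 2 * duration           # velocity: traveller runs slow
        general = G_EARTH * height_diff / C**2 * duration  # altitude: traveller runs fast
        return special + general

    def load_anomaly(measured_offset, v, height_diff, duration):
        # Whatever remains after subtracting known relativity would be the
        # hypothesized "computational load" signal (presumably zero).
        return measured_offset - expected_relativistic_offset(v, height_diff, duration)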

“Any measurements of time distortion done inside the system would be
unobservable.” Actually this seems not to be the case, even in past
experiments. Time dilation can be measured when it results from local effects
(such as SOL travel, or gravity), and these experiments have validated the
theory of relativistic time dilation. Watches going out of sync because of
time dilation is a testable phenomenon. A theory of time dilation due to
localized resource constraints will be similarly testable.

“External time is not internal time, any slowdown will be unobservable.” Not
if the effects are local, with different regions making their own optimization
choices. We can send a watch to the region of the high-load event, and when it
returns we can see if it slowed down relative to its twin here.

“The real universe may be more than capable of simulating us without slowdown;
the constraints may be artificial, to keep the simulation in check.” Exactly,
it might be. Whether they are inherent or imposed limits of the Universe
computer, if the effects occur locally we can test them.

“Benchmarking the universe.” Yes.

“Crash a few galaxies together.” Well, yes. Just observe when this happens and
figure out a way to use the data we already have for those events to test the
theory that the Planck constant or SOL is diminished by these effects.

Taking it Further

What if the gravity effects from which we hypothesised the existence of dark
matter were really just local resource constraints on the SOL or processing
speed, resulting from optimization choices when large objects like galaxies
are doing something load-heavy?

What if gravity itself is an optimization? The more gravity you have, the less
you have to calculate, because the more you restrict movement, the fewer
possible system microstates there are. Broadly, infinite gravity is a black
hole with 0 observable microstates, while 0 gravity is open space, with
infinitely many possible microstates. Gravity could then change from place to
place based on optimization choices, explaining the anomalous dark matter
observations by assuming they result from changes in gravitation rather than
extra matter.
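
To make the microstate intuition concrete (my gloss on the idea, not established physics): in Boltzmann's terms, the information needed to pin a system down grows with the log of its microstate count, so a fully constrained system costs roughly nothing to track:

    import math

    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def entropy(microstates):
        # S = k_B * ln(Omega): bookkeeping grows with the log of the number
        # of microstates the simulator must distinguish.
        return K_B * math.log(microstates)

    print(entropy(1))     # fully constrained: 0.0, nothing left to compute
    print(entropy(1e30))  # loosely bound gas: far more state to track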

However, all these consequences are just theorizing. What we have is a theory
that is testable.

So should we despair that "nothing is real"? Hold your horses. Even if such an
effect were validated by experiment, it's possible that the Universe computer
and simulation is simply an analogy for some physical principle at work. If
that's true, it's still a neat analogy. After all, all our theories are really
just analogies to help us think about and model things in the real world.

