
How would you program a model of a universe? - calmchaos
Let's assume you have nearly infinite computation resources.

The goal is to create a computer program that creates a virtual universe with the fewest possible rules and the shortest amount of code. The model doesn't have to emulate our known universe.

The only requirement is that after injecting initial data the virtual universe becomes self-sustaining and ever-evolving, so that based on the fundamental rules and fundamental elements more complex structures and interactions can form.

So, an advanced version of "Game of Life".

1. How would you define the core model and rules of such a virtual universe?

2. What would be the minimum amount of "initial data input" required to start and keep the virtual universe running (assuming ~infinite computational and memory resources)?
======
mattbgates
Google created this a while ago, was very cool.

[http://stars.chromeexperiments.com/](http://stars.chromeexperiments.com/)

And I also played a game on Steam about 10 years ago and loved it. Very
basic, very simple. You started out as a small speck of dust in space, and as
you flew through space, your gravity would begin to absorb the elements and you
would get bigger, becoming a planet, a bigger planet, a sun, a black hole...
While it was mostly simulation and you had no real enemies, gravity is always
our biggest enemy, and you could be ripped apart by another black hole or even
a giant sun, which made the game so enjoyable. You could drift, float, and
control it: you flew through space and used the arrow keys to avoid objects.
You could change your speed too.

Let me see if I can find it. I only bring it up since I can't answer your
exact question, but I can show you an idea. Looks like it got a massive upgrade,
as it used to be 2D.
[https://store.steampowered.com/app/230290/Universe_Sandbox/](https://store.steampowered.com/app/230290/Universe_Sandbox/)

And CodePen can at least get you started with a simulation.

[https://codepen.io/search/pens?q=universe](https://codepen.io/search/pens?q=universe)

~~~
calmchaos
Very cool! Thanks for sharing! I'll check that out.

~~~
mattbgates
What you want to do reminds me of an episode of Futurama.
[https://www.youtube.com/watch?v=EL7e05pClKM](https://www.youtube.com/watch?v=EL7e05pClKM)

------
rl3
While I appreciate your question and realize what you're asking, I hope you
don't mind my going a bit off topic here:

> _Let's assume you have nearly infinite computation resources._

I think it's more fun to assume you don't. You need not simulate every water
molecule in the ocean for life to emerge.

Imagine you had a hash table of precomputed interactions between various
simulation elements across all scales. In such a system, the computational
cost of simulating a single protein would be similar to that of two galaxies
colliding.

A planet for example would simply be a vast hierarchy containing a chain of
approximated behaviors, each node in that chain containing its own hierarchy.

The really fun part is trying to envision how changes at very low levels in
this chain would ripple upwards and affect the simulation at larger scales.
Thus far I've only imagined it as analogous to node invalidation in a typical
scene graph data structure, where you flag a node "dirty" and it's queued for
recomputation on the next frame.
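A toy sketch of that invalidation idea in Python; the node names and the galaxy/planet/protein hierarchy are hypothetical, just to show a low-level change flagging every ancestor dirty and queuing it for recomputation:

```python
# Dirty-flag propagation through a hierarchy of approximated nodes,
# in the spirit of scene-graph invalidation described above.

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.dirty = False

def invalidate(node, queue):
    """Flag a node and every ancestor dirty, queuing each at most once."""
    while node is not None and not node.dirty:
        node.dirty = True
        queue.append(node)
        node = node.parent

galaxy = Node("galaxy")
planet = Node("planet", parent=galaxy)
protein = Node("protein", parent=planet)

recompute_queue = []
invalidate(protein, recompute_queue)  # a protein-level change ripples upward
print([n.name for n in recompute_queue])  # ['protein', 'planet', 'galaxy']
```

The early exit on an already-dirty node is what keeps repeated low-level changes from re-queuing the whole chain every frame.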

~~~
calmchaos
Good comment, thanks! It's indeed possible to approximate highly complex
systems and interactions with generated models surprisingly accurately. But
I'm more interested in the lowest possible level of rules and interactions.

In our current universe we are still seeking the "theory of everything": a
model that would explain physics at all levels. Currently it seems that
everything breaks down at the (sub)quantum level.

But I think that our universe is logical and exact, and that all higher-level
observable interactions are based on some really simple fundamental rules. The
problem is that we can't measure or detect those properly. For example, quantum
entanglement is one big issue, yet it too is based on a rule. If the rule
is always enforced logically, it doesn't have to comply with humans' rudimentary
model of physics. Perhaps we are too fixated on the idea that light
speed is the absolute limit on every interaction? :)

To me it seems unavoidable that the fundamental rules and basic elements of
our universe have been purposefully designed. The measured standard model
values even imply that those values have been calibrated to those exact values
by simulation.

I also believe that our universe is not the only universe (energy in certain
3D area of the void governed by a set of distinct rules for that particular
experiment) and that our universe is not the first or the last.

If "intelligence" can be born inside a computer simulation, it's also possible
to have "nested universes", where a simulated intelligence creates a new virtual
universe inside the virtual universe it was born in.

A related example would be a human-created AI that independently creates a new,
better AI system, and those AI systems keep evolving and generating new
things without any human interaction with the system.

~~~
rl3
In thinking about this, I concluded the subatomic/quantum levels were far
beyond my ability to comprehend or model accurately (if at all) since I'm not
a physicist. Furthermore, both are still on the forefront of discovery and not
fully understood.

Therefore, I reasoned that these low-level rules you're talking about may
manifest upward in scale. If you created layered approximations of observable
behavior, you could essentially capture the output of these rules even if not
directly discovering them at a low level.

If you have a protein, a plant, a planet, and a galaxy all on equal footing in
terms of how they're approximated and simulated, then perhaps one could factor
out commonality from each and deduce some sort of low-level rules from that.

With respect to our reality being an ancestor simulation, I'd say it's highly
likely. You might be interested in looking at Bostrom's simulation hypothesis,
if you haven't already.

~~~
karmakaze
This would be the most fun for me. Model the lowest levels of physical laws by
creating a simulation universe of a very small volume and see how to make
behaviours computable. The only rule is that you can't freeze time in the
simulated universe while computing. So first off there have to be propagation
limits to effects (or at least observability limits) within the simulated
universe.

The best way to make this scalable might be to make each observer a CPU.

------
jerome-jh
1/ Since we have an example of a working universe at hand, I think I would
start by modeling the 4 basic interactions. And there is already a quite hard
issue which is: 3 interactions are short range, and the 4th (gravitation) is
long range. So you will need a large enough universe and a large number of
particles from the start. Second issue is that you may not see interesting
behavior emerging before your patience ends.

Compared to Conway's game of life: only one interaction (short range),
interesting behavior almost immediate.

Maybe there is a middle ground (a game of life with a long-range interaction?).

2/ Initial data to be chosen randomly. Works both for game of life and our own
universe.
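One hypothetical way to sketch that middle ground in Python (NumPy): a standard Conway step plus an optional long-range term, where the mean density in a much larger neighborhood nudges the birth rule. The `radius` and `bias` parameters and the box-sum "density" are invented for illustration, not a known-good rule:

```python
import numpy as np

def step(grid, radius=5, bias=0.0):
    """One update: Conway's rule, plus an optional long-range interaction
    (with bias > 0, cells with 2 neighbours are also born where the
    long-range density exceeds the bias). Edges wrap around."""
    # short-range: count the 8 immediate neighbours
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # long-range: mean density over a (2*radius+1)^2 box
    size = 2 * radius + 1
    far = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
              for dy in range(-radius, radius + 1)
              for dx in range(-radius, radius + 1)) / size**2
    birth = (n == 3) | ((n == 2) & (far > bias) & (bias > 0))
    survive = (grid == 1) & ((n == 2) | (n == 3))
    return (birth | survive).astype(np.uint8)
```

With `bias=0.0` this reduces exactly to Conway's game of life, so the long-range term can be dialed in gradually to see when interesting behavior appears or collapses.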

~~~
calmchaos
If I were God, I would first create a very dense three-dimensional grid in the
void. Then at any point of the grid there is a gravity vector that points in the
direction with the highest combined force of gravity calculated at that point.
So gravity must be a three-dimensional vector field.

Now, the direction and length of those vectors have to be continuously updated
due to moving particles so that we can efficiently move particles arriving at
any point.

The most logical explanation is that our universe updates its state at a
frequency that is directly linked to the speed of light. This update frequency
dictates the maximum distance any particle can travel in one cycle. Similarly,
gravity vectors cannot be updated faster than this frequency dictates. So
gravity must propagate as spherical waves at the speed of light.

Similarly the system needs to continuously calculate electromagnetic field
vectors for all points.

So when you place any fundamental particle at point X in the grid, it is
immediately moved based on the direction and amplitude of the vectors at that
point - but only by vectors that have been defined by the rules to have an
impact on this particular fundamental particle. We can, e.g., define particles
that are not affected by gravity at all.

So at very minimum we need some fundamental particles, forces and a way to
calculate how those particles move in the grid. The system has an update
frequency which defines the maximum speed in the grid.
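A minimal sketch of this update loop in Python (NumPy), reduced to 2D for brevity; the inverse-square pull, the mass values, and the speed cap `C` are illustrative assumptions standing in for the "update frequency" rule, not claims about the real physics:

```python
import numpy as np

C = 1.0  # max cells a particle may travel per update "tick" (the speed limit)

def gravity_vector(point, masses):
    """Sum the inverse-square pulls from all point masses at a grid point."""
    v = np.zeros(2)
    for pos, m in masses:
        d = np.asarray(pos, float) - point
        r2 = d @ d
        if r2 > 0:
            v += m * d / r2**1.5  # m * (unit direction) / r^2
    return v

def tick(particle, masses):
    """Move a particle along the local field, capped at C cells per tick."""
    v = gravity_vector(particle, masses)
    speed = np.linalg.norm(v)
    if speed > C:
        v *= C / speed  # the update frequency bounds the maximum speed
    return particle + v

p = np.array([0.0, 0.0])
p = tick(p, [((10.0, 0.0), 1000.0)])  # pulled toward a heavy mass, capped at C
```

Filtering which vectors act on which particle type (e.g. particles unaffected by gravity) would just mean passing a different set of fields into `tick`.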

Then you need to define rules for what happens when two or more fundamental
particles come close enough to each other.

When you have those rules and they produce an evolving system that stays in
motion when fed with enough "input data", you have cracked this problem.

Solving this problem is the key to understanding the theory of everything. So
although the initial models aren't exactly correct, this exercise forces you
to think like a God.

------
qubex
Obviously the minimum possible amount of data for a “Game of Life”-type setup
would be a bunch of ‘ON’ voxels somewhere near the center.

As for the question itself, I think you’ve answered it yourself: a Game of Life-
type setup defined in three dimensions. I’ve seen a mention on HN of somebody
implementing the standard 2D case in 23 assembler instructions, so it’s very
minimal.
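For concreteness, a hypothetical 3D step in Python (NumPy); the birth/survival counts here follow Carter Bays' "5766" rule (one commonly cited 3D Life variant: survive with 5-7 live neighbours, born with exactly 6), and the toroidal wrapping and seed are assumptions:

```python
import numpy as np

def step3d(grid, born=(6,), survive=(5, 6, 7)):
    """One update of a 3D Game-of-Life variant (Bays' '5766' rule).
    Each voxel has 26 neighbours; edges wrap (toroidal)."""
    n = sum(np.roll(grid, (dz, dy, dx), (0, 1, 2))
            for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dz, dy, dx) != (0, 0, 0))
    return (((grid == 0) & np.isin(n, born)) |
            ((grid == 1) & np.isin(n, survive))).astype(np.uint8)

# seed a small bunch of ON voxels near the centre
u = np.zeros((16, 16, 16), np.uint8)
u[7:9, 7:9, 7:9] = 1   # a 2x2x2 block: a still life under this rule
u = step3d(u)
```

Under 5766 the 2x2x2 block is stable (each live voxel has exactly 7 live neighbours), which makes it a convenient sanity check for the implementation.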

~~~
Someone
Nitpick: 23 assembly instructions and billions of transistors to run them.
Custom hardware for the game of life could significantly decrease that (and
the number of instructions), though.

I also think that “bunch of ‘ON’ voxels” would have to be rather large, if you
think the universe could have evolved in many, many different ways. A
stochastic version of the game of life might be a better choice.

To prevent that from getting too chaotic, the next state of a cell could depend
not only on the states of neighboring cells, but also on the past state(s) of
the cell and its neighbors (in a sense making ‘time’ more important).
Alternatively, one could slowly decrease the amount of randomness over time,
allowing the universe to ‘cool down’ from an initial Big Bang into a less
chaotic almost stable state.
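The annealing variant is easy to sketch in Python (NumPy); the flip probability, decay factor, and grid size below are arbitrary illustrative choices:

```python
import numpy as np

def noisy_step(grid, temperature, rng):
    """Conway step, then each cell is flipped with probability
    `temperature`; lowering the temperature over time lets the
    universe 'cool down' from a random start."""
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    nxt = ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)
    flips = rng.random(grid.shape) < temperature
    return nxt ^ flips.astype(np.uint8)

rng = np.random.default_rng(0)
u = rng.integers(0, 2, (64, 64), dtype=np.uint8)  # random "Big Bang"
t = 0.05
for _ in range(100):
    u = noisy_step(u, t, rng)
    t *= 0.95                                      # anneal toward determinism
```

At `temperature=0` this is exactly Conway's rule, so the cooled-down end state is an ordinary (if randomly seeded) game of life.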

Finding an interesting variant would be a challenge, though.

~~~
qubex
I think Stephen Wolfram would beg to differ with you on the topic of how
difficult it is to find an ‘interesting’ rule – his whole oeuvre (whatever
your opinion of the man) demonstrates that these things crop up surprisingly
often with even the most basic ingredients.

With regards to the kind of rule you speak of, a cellular automaton cell in a
3D environment has at most 26 neighbours in the lattice. You could easily
increment that to 27 if you wish to make its behaviour depend on its prior
state (or even 53 if you want it to depend on its 26 neighbours’ current
states plus the 27 prior states of itself and its neighbours). I built such
systems (albeit on dynamic networks and 2D automaton lattices) for my thesis
nigh on 18 years ago. It’s utterly trivial.

