
The Recursive Universe - todsacerdoti
http://www.amandaghassaei.com/blog/2020/05/01/the-recursive-universe/
======
suby
I know Conway was slightly resentful that game of life overshadowed some of
his other work in the public's imagination, but I've always been entranced by
it.

I remember as a child the first computer my family had was a dual boot of
Microsoft DOS and Windows 3.1 (or something like that?). On the Windows 3.1
side was a version of Conway's Game of Life which was preinstalled, and I'd
spend hours messing around with it. You could place two different colors of
cells, and I'd set patterns up and then let the simulation go to see which one
would "win", or outlast the other.

Conway's Game of Life was also one of the first meaningful things I'd
programmed, and even today I like to reimplement game of life when learning a
new language. Typically I like to let the user assign different colors to cells
on the grid, and have each newly born cell take a blend of its neighbors'
colors, as a kind of simulation of natural selection. Right now I'm learning network
programming for game development, and I'm finishing up a networked
implementation of game of life so multiple people can join and manipulate a
running simulation. In general I think it's a good project to use when playing
around with learning something new.
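
Something like the blend rule described above can be sketched in a few lines.
This is my own toy version, not suby's actual code: live cells carry an RGB
color, standard B3/S23 rules decide life and death, and a newborn cell takes
the per-channel average of its three parents' colors.

```python
from collections import defaultdict

def step(grid):
    """grid maps (x, y) -> (r, g, b) for live cells; returns the next generation."""
    # Collect the colors of every live neighbor of every cell.
    neighbor_colors = defaultdict(list)
    for (x, y), color in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    neighbor_colors[(x + dx, y + dy)].append(color)
    new_grid = {}
    for cell, colors in neighbor_colors.items():
        if cell not in grid and len(colors) == 3:
            # Birth: the newborn's color is the average of its three parents'.
            new_grid[cell] = tuple(sum(c[i] for c in colors) // 3 for i in range(3))
        elif cell in grid and len(colors) in (2, 3):
            # Survival: the cell keeps its own color.
            new_grid[cell] = grid[cell]
    return new_grid
```

With a red/green/blue blinker, the two cells born on the next tick come out an
even gray blend of all three parents, while the surviving center cell keeps its
color.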

I just really like cellular automata, and Game of Life in particular.

~~~
pier25
> _had was a dual boot of Microsoft DOS and Windows 3.1 (or something like
> that?)_

I had one of those too. I think Windows 3.1 was more like a desktop environment
running on top of DOS.

~~~
aj7
I pity you guys. Finder 6 was 10 years ahead.

------
cgh
The blog post was inspired by The Recursive Universe by William Poundstone. I
read this book years ago and found it amazing. I recently read another book by
him, Fortune’s Formula, which features the Kelly Criterion, Claude Shannon
beating roulette, card counting in blackjack, and the surprising origins of
Warner Bros. Recommended for sure. Thanks to the HN poster who suggested it.

~~~
throw_away
Heh, I remember reading William Poundstone's books "Big Secrets" and "Bigger
Secrets" as a kid
([https://en.wikipedia.org/wiki/Big_Secrets](https://en.wikipedia.org/wiki/Big_Secrets)).
They're mostly about debunking or explaining various urban legends and
conspiracy theories. I'm not sure how well they've stood the test of time, but
they definitely had a formative impact on my thinking in a way that I only
suddenly realized upon reading his name thirty some years later.

~~~
aj7
Everything written by Poundstone is worth reading.

------
OscarCunningham
Adam P. Goucher recently created a metacell more advanced than the one
described in the article, one whose dead state is literally empty space. When a
new metacell is born, it is constructed by one of its neighbours colliding
gliders.

[https://conwaylife.com/wiki/0E0P_metacell](https://conwaylife.com/wiki/0E0P_metacell)

------
blindm
I am convinced the Universe is an enormous fractal. I've always wondered: if
you 'zoomed out' of the Universe far enough, would you encounter more matter,
or a separate Universe co-existing next to ours? Keep zooming out and you could
probably see /infinite/ Universes that go on for eternity, as one long fractal
journey.

~~~
Trasmatta
Interestingly, in the observable universe there seems to actually be an end to
structure or "fractalness" at a certain size:
[https://en.m.wikipedia.org/wiki/Observable_universe#Large-scale_structure](https://en.m.wikipedia.org/wiki/Observable_universe#Large-scale_structure)

------
leoc
Recursive universes don't get interesting until you start doing system calls:

> So when an inevitable bug occurred in that super-duper LIFE machine, the
> intelligent entities in the simulation would have suddenly been presented
> with a window to the metaphysics which determined their own existence. They
> would have a clue to how they were really implemented. In that case, Fredkin
> concluded, the entities might accurately conclude that they were part of a
> giant simulation and might want to pray to their implementors by arranging
> themselves in recognisable patterns, asking in readable code for the
> implementors to be given clues as to what _they're_ like.

(Levy's _Hackers_
[https://www.worldcat.org/title/hackers-heroes-of-the-computer-revolution-25th-anniversary-edition/oclc/890527375](https://www.worldcat.org/title/hackers-heroes-of-the-computer-revolution-25th-anniversary-edition/oclc/890527375)
ofc, pp. 148-149 in Penguin, p. 120 in O'Reilly:
[https://books.google.ie/books?id=JwKHDwAAQBAJ&pg=PA120&lpg=PA120](https://books.google.ie/books?id=JwKHDwAAQBAJ&pg=PA120&lpg=PA120))

It's infeasible, of course, to keep scanning the whole of a simulated world for
anything that might be intended as a message, so making a "real", or at least a
normal, system call involves using a pre-ordained area
of the simulated space which is set aside as a buffer, altering the contents
of that area according to some pre-ordained protocol known in advance to both
the simulated world and the simulating program. Responding to the system call
likewise involves the simulating program "miraculously" altering the state of
a pre-ordained buffer area according to a pre-set protocol. Not only is this
how you can implement system calls in recursive universes: this is what a
system call necessarily _is_. System calls, calls to the runtime, just are
events of this nature happening between simulated and simulating systems.

Likewise any kind of message passing is built on top of this: basically the
only way any process can pass a message to another is to make a system call
requesting that a message be passed on to the intended recipient, then hope
that the simulating system will deliver it as requested. Then delivering the
message obviously involves the recipient's simulating system—which isn't
necessarily the same system as the sender's simulating system—appropriately
altering the state of the recipient.

(Unless the sender and receiver have a shared memory area, you could say, but
that's not so different either: two simulated programs only have a shared
memory area to the extent that the simulating system is pleased to keep the
supposedly-shared area actually consistent in the two programs it is
simulating.)

Notice how annoying it is that, by and large, most system-call protocols don't
allow a process to, for example, send its simulating system a message
explicitly addressed to its simulating system's simulating system, or to send
a simulated system a message explicitly addressed to one of its simulated
system's simulated systems. I suppose you could set it up with nested VMs and
their virtual Ethernet interfaces.
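
The buffer protocol described above can be made concrete with a toy host loop.
All the names and the MAGIC pattern here are my own invention for illustration:
the simulating system inspects one agreed-upon region of the simulated memory
each tick, and replies by "miraculously" mutating another.

```python
SYSCALL = range(0, 4)   # region the simulated program writes requests into
REPLY = range(4, 8)     # region the host writes responses into
MAGIC = [1, 1, 0, 1]    # pre-agreed bit pattern meaning "this is a system call"

def host_step(memory):
    """One tick of the simulating system's side of the protocol."""
    if [memory[i] for i in SYSCALL] == MAGIC:
        for i in REPLY:
            memory[i] = 1   # write the agreed-upon acknowledgement
        for i in SYSCALL:
            memory[i] = 0   # clear the request so it is handled exactly once
```

The simulated program never "calls out"; it only arranges its own state. The
call happens entirely because the host has promised, in advance, to look at
that one region and act on that one pattern.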

~~~
norrius
That reminds me of a story where a student couldn't figure out the properties
of some physical system, so he ran a massive simulation of a universe with
laws of physics similar to his own (but, of course, cutting corners whenever
possible). This universe eventually produced a life form intelligent enough to
figure out the necessary equations, at which point he happily copied them to
his homework and forgot about the simulation.

...only to find it days later (= billions of years of simulated time), by
which point the simulated life had figured out that their universe was written
hastily and its laws were full of subtle bugs, like floating-point rounding
errors showing up in physical measurements. Their technological advance let
them move stars around, which they grumpily arranged in a message saying "your
code sucks".

I can't find it at the moment; does anyone recognise the reference? It could
be in one of these books, I suppose, but I don't have them.

~~~
jakear
Relatedly, I’ve always found quantum mechanics to be reasonable “proof” that
we’re living in a simulation.

Take wave-particle duality: how is that not an artifact of the implementors
wanting to reuse some core routines from a legacy particle-based universe while
building our next-gen wave-based one? The particle-based simulation wouldn't
work at the scales needed, so they moved to waves (much easier to simulate),
and rigged up some adaptors to switch to particle mode in a JIT manner as
needed.

The folks trying to unify quantum and classical mechanics are essentially
reverse-engineering that JIT (and others like it).

------
mensetmanusman
So fun. I like to imagine the workings of the universe as an infinite
dimensional game of life. Not sure how we will ever wrap our minds around the
complexity.

------
qlk1123
So many people have been working on developing or discovering new life forms
and different types of universes in the Game of Life. It is purely
deterministic, so obviously there will never be any free will for the lives
inside. But maybe we can observe the illusion of consciousness as the lives
become more complex.

~~~
braythwayt
If that’s your definition of “free will,” may I ask if you think that we
humans have free will?

------
pauldelany
Can anyone point to sources describing how people come up with these large
meta-structures in the first place?

------
warpspin
I find these large-scale structures in Game of Life absolutely impressive, and
sometimes wonder how they're constructed to begin with and what the tooling
people use to build them looks like.

------
geophile
OP's website is really interesting. She is doing some amazingly cool things.

------
leafboi
I try to design my programs like the game of life. I start off with some
primitive axioms and build all the high level functions out of a minimal set
of primitives.

~~~
aj7
Are there a lot of parentheses?

~~~
TeMPOraL
Code in most of the popular languages has a lot of parentheses, particularly
if you count curly braces, square brackets, and angle brackets.

------
aj7
Some rich person should offer a $1M prize for a single useful application, or
example in nature or science, of cellular automata.

~~~
asimpletune
I don’t see why people are downvoting. I’m not pessimistic, I just don’t
understand for myself the real life applications, so please educate instead of
downvoting.

That said, procedural generation is a great example thanks.

Also, whatever happened to Wolfram’s “A New Kind of Science”, or whatever it
was called? He literally wrote a gigantic book on sampling the computational
universe, but I’m not really aware of it being used, probably because I’m
ignorant.

~~~
taliesinb
Disclosure: I worked on Mathematica for ~8 years.

I think the value of Stephen's big book is that it served as a kind of
manifesto to "take computation seriously". By that I mean: to think about
computations, in the _abstract_ , as a kind of new mathematics about which we
know almost nothing, and about which our naive intuitions from other domains
are almost totally inapplicable. It is an injunction to explore the
"computational universe", in other words the "universe of simple computational
systems", where CAs live as one of several kinds of maximally simple forms of
computation (which Wolfram made some attempt to categorize and explore).
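
As a concrete taste of that universe: one of the maximally simple systems the
book dwells on is the elementary cellular automaton, and it fits in a few lines
of Python (my own illustrative sketch, not code from the book). Here is Rule
30, whose update rule for each cell is left XOR (center OR right).

```python
def rule30_step(cells):
    """One step of elementary CA Rule 30 on a list of 0/1 cells.

    Uses a fixed zero boundary: cells outside the list are treated as dead.
    """
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        left, center, right = padded[i], padded[i + 1], padded[i + 2]
        # Rule 30's boolean form: new cell = left XOR (center OR right)
        out.append(left ^ (center | right))
    return out
```

Iterating this from a single live cell produces the famous chaotic triangular
pattern: behavior far too complex to guess from a one-line rule, which is
roughly the book's whole point.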

By analogy, think about the development of algebra. It didn't come naturally!
Yet algebra is one of the most natural ideas in abstract mathematics,
incredibly simple and incredibly powerful, lying as a Rosetta Stone connecting
so many other topics in mathematics. But people didn't immediately accept or
describe it; it took many hundreds of years to crystallize and mature as a
topic. It's a very old piece of the operating system of mathematics that
underwent a lot of hacking and refactoring.

Wolfram proposes we think about computational systems in the same way, as a
nascent field that needs exploration, mapping, the irrigation of young minds
and new ideas. It may be some time before it yields a harvest. We shouldn't
expect it to immediately revolutionize everything.

Even after Turing, computational systems existed as a kind of diaspora in
mathematics: they sat unrecognized in all kinds of places, never having had an
independent existence in which they were not considered as an aspect of
something else, treated as somehow alien and unworthy of respect because of
their confusing aspects. Largely, there were two reasons they were treated so
shoddily:

1\. they required computers (and good computer tools) to actually explore,
since they produced complex and irreducible computations

2\. they resisted any kind of analysis by prior mathematics

In other words, _we_ were not ready to really probe them until a few decades
ago, and even _now_ our software is not well suited to explore them
(Mathematica remains the best tool for the job, though I anticipate Julia will
surpass it rapidly). Furthermore, traditional mathematical fields have not
adapted to the presence of computation very gracefully.

I guess I would summarize: NKS is an imperfect book and Stephen an imperfect
herald of the ideas within. But he was the first person to really articulate
those ideas crisply and push them hard into the imagination, and I'm very glad
he did. The hand he overplayed was the application to the natural world -- I
think that will take longer to pay off than he predicted.

~~~
perardi
> _The hand he overplayed was the application to the natural world -- I think
> that will take longer to pay off than he predicted._

Not that I actually understand it, for I am a UI designer, but Wolfram’s
recent physics project seems to be taking a fresh run at that.

[https://www.wolframphysics.org/bulletins/2020/08/a-short-note-on-the-double-slit-experiment-and-other-quantum-interference-effects-in-the-wolfram-model/](https://www.wolframphysics.org/bulletins/2020/08/a-short-note-on-the-double-slit-experiment-and-other-quantum-interference-effects-in-the-wolfram-model/)

 _disclosure: worked for Wolfram for 7 years, forgive me promoting stuff I
worked on_

------
platz
thought this was going to follow up with a plug to stephen wolfram

