
Cellular automata as convolutional neural networks - wil3
https://arxiv.org/abs/1809.02942
======
debbiedowner
Nice try, Stephen Wolfram...

~~~
oh_sigh
Isn't Wolfram more likely to do convolutional neural networks as cellular
automata?

------
bcherny
Anything surprising here, if anyone’s up to date on this stuff? I thought it’s
a given that since both CNNs and CAs are Turing complete, you can use one to
simulate the other.

~~~
jamesough
> I thought it’s a given that since both CNNs and CAs are Turing complete, you
> can use one to simulate the other.

They are both Turing complete, yes, but the similarities are deeper. Recurrent
CNNs are a very natural way to describe CAs because they both impose exactly
the same prior: that the system's underlying laws are spatially quantized,
local, and translationally invariant.
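To make that concrete, here's a sketch (my own, not from the paper) of Conway's Game of Life written in exactly that CNN-shaped form: one convolution that counts neighbors, followed by a pointwise nonlinearity that applies the birth/survival rule. The neighbor sum is computed with `np.roll` to keep it dependency-free, but it is identical to a 2-D convolution with a 3x3 ones kernel (center zero) under periodic boundaries:

```python
import numpy as np

def life_step(grid):
    # Sum of the 8 neighbors == 2-D convolution with a 3x3 ones kernel
    # (center zeroed) under periodic boundary conditions.
    neighbors = sum(np.roll(grid, (i, j), axis=(0, 1))
                    for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # Pointwise "nonlinearity": alive next step iff 3 neighbors,
    # or 2 neighbors and currently alive.
    return ((neighbors == 3) | ((neighbors == 2) & (grid == 1))).astype(int)

# A glider: every phase of its period-4 cycle has exactly 5 live cells.
glider = np.zeros((8, 8), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1

state = glider
for _ in range(4):
    state = life_step(state)
assert state.sum() == 5
```

The point is that the convolution kernel bakes in locality and translation invariance for free, which is why the rule (the nonlinearity) is the only thing left to learn.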

There hasn't been much work done in this area. People have made the connection
before (e.g. this paper from Sutskever:
[https://arxiv.org/pdf/1511.08228.pdf](https://arxiv.org/pdf/1511.08228.pdf))
but AFAIK this is the first work to actually look deeply into how well you can
recover CA rules with neural nets and gradient descent.
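A minimal version of that experiment (my own toy sketch, not the paper's actual setup): recover the update table of elementary CA Rule 110 by gradient descent on a tiny MLP, with hand-written backprop in plain NumPy. Rule 110's truth table is not linearly separable, so a hidden layer is needed:

```python
import numpy as np

rng = np.random.default_rng(0)

# All 8 possible 3-cell neighborhoods, and Rule 110's output for each:
# in Wolfram's coding, neighborhood n maps to bit n of the rule number.
X = np.array([[(n >> 2) & 1, (n >> 1) & 1, n & 1] for n in range(8)], dtype=float)
y = np.array([(110 >> n) & 1 for n in range(8)], dtype=float)

# Tiny MLP: 3 -> 16 -> 1, sigmoid output, trained with plain gradient descent.
W1 = rng.normal(0, 1, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 2.0
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # Cross-entropy loss gradient, backpropagated by hand.
    dp = (p - y)[:, None] / len(y)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# The learned rule should reproduce Rule 110 exactly on all 8 neighborhoods.
learned = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)
assert np.array_equal(learned, y.astype(int))
```

Of course a lookup over 8 neighborhoods is trivially learnable; the interesting questions in the paper are about larger state spaces and how well the learned weights generalize, not this toy case.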

If we are to believe Stephen Wolfram's proposition that the universe is a
cellular automaton at its core, then learning the rule by gradient descent
seems pretty obviously useful, and not something that anyone has really tried
yet.

To be more explicit: applying gradient descent to cellular automata has a
reasonable likelihood of being the only way to uncover the fundamental laws of
reality, and essentially nobody is working on it except OP. So it's a useful
paper!

