
Cellular Automata as Convolutional Neural Networks - benibraz
https://arxiv.org/abs/1809.02942
======
hardmath123
See also: "learning" Conway's Game of Life configurations by gradient descent.

[https://hardmath123.github.io/conways-gradient.html](https://hardmath123.github.io/conways-gradient.html)
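
Roughly, the trick there is to relax the binary update into a smooth one so
gradients exist. A minimal PyTorch sketch of that idea (my own toy version;
the post's exact relaxation and loss may differ):

    import torch
    import torch.nn.functional as F

    KERNEL = torch.tensor([[1., 1., 1.],
                           [1., 0., 1.],
                           [1., 1., 1.]]).view(1, 1, 3, 3)

    def soft_life_step(board, sharpness=10.0):
        """One smoothed Life step on a (1, 1, H, W) tensor of soft cells in [0, 1]."""
        n = F.conv2d(F.pad(board, (1, 1, 1, 1), mode="circular"), KERNEL)
        # Smooth bumps stand in for the hard rules "born with exactly 3
        # neighbors" and "survive with 2 or 3 neighbors".
        born = torch.sigmoid(sharpness * (n - 2.5)) * torch.sigmoid(sharpness * (3.5 - n))
        survive = torch.sigmoid(sharpness * (n - 1.5)) * torch.sigmoid(sharpness * (3.5 - n))
        return born * (1 - board) + survive * board

    # Fit an initial board whose successor matches a target pattern.
    target = torch.zeros(1, 1, 16, 16)
    target[0, 0, 7, 6:9] = 1.0  # a blinker as the goal
    logits = torch.randn(1, 1, 16, 16, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=0.1)
    for _ in range(500):
        opt.zero_grad()
        loss = F.mse_loss(soft_life_step(torch.sigmoid(logits)), target)
        loss.backward()
        opt.step()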

~~~
dumb1224
I wonder if it is because, with backpropagation, all the non-linear functions
are chained together when the weights are learned. Is it naive to think that,
under that formulation, the results will be quite similar, since the final
model equations are close?

------
nuclearsugar
Now just use the OTCA Metapixel, build a computer within the simulation, and
let it go wild!

[http://www.amandaghassaei.com/blog/2020/05/01/the-recursive-universe/](http://www.amandaghassaei.com/blog/2020/05/01/the-recursive-universe/)

[https://codegolf.stackexchange.com/questions/11880/build-a-working-game-of-tetris-in-conways-game-of-life](https://codegolf.stackexchange.com/questions/11880/build-a-working-game-of-tetris-in-conways-game-of-life)

[https://github.com/QuestForTetris](https://github.com/QuestForTetris)

------
seventytwo
Not sure I understand the importance of this...

~~~
juskrey
Indeed, a CA can be represented by simple combinations of boolean functions,
and so obviously by a NN as well, since a NN is itself a combination of
similar nonlinear functions.

~~~
benibraz
A wide enough NN can represent any arbitrary binary function, but it's not
obvious that one can learn it.
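
To make the distinction concrete, here is a minimal sketch of the
representation half (a hand-built lookup-table construction, not anything
from the paper): one hidden threshold unit per input pattern memorizes any
boolean function of n bits. That proves representability while saying nothing
about whether gradient descent would ever find such weights.

    import itertools
    import numpy as np

    def truth_table_net(f, n):
        """One hidden threshold unit per input pattern; memorizes f exactly."""
        patterns = list(itertools.product([0, 1], repeat=n))
        # Unit k fires only when the input equals pattern k exactly.
        W1 = np.array([[2 * b - 1 for b in p] for p in patterns])
        b1 = np.array([0.5 - sum(p) for p in patterns])
        w2 = np.array([f(p) for p in patterns])
        def net(x):
            h = (W1 @ x + b1 > 0).astype(float)  # one-hot over 2**n patterns
            return float(w2 @ h)
        return net

    parity = lambda p: sum(p) % 2  # trivial to represent, hard to learn
    net = truth_table_net(parity, 3)
    assert all(net(np.array(p)) == parity(p)
               for p in itertools.product([0, 1], repeat=3))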

~~~
labelbias
Yet the best NNs are deep, not wide.

~~~
benibraz
What do you mean by 'the best'? Deeper architectures are popular because they
are quite easy to train. They do work well in practice on many tasks
(especially vision), but they have their limits.

Infinitely wide networks are a newly active field and have recently shown
some promising results, theoretically [1, 2] and empirically [3].

[1] [https://arxiv.org/abs/2001.06931](https://arxiv.org/abs/2001.06931)
[2] [https://ai.googleblog.com/2020/03/fast-and-easy-infinitely-wide-networks.html](https://ai.googleblog.com/2020/03/fast-and-easy-infinitely-wide-networks.html)
[3] [https://arxiv.org/abs/1806.07572](https://arxiv.org/abs/1806.07572)

------
yodon
> We show that any CA may readily be represented using a convolutional neural
> network with a network-in-network architecture.
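
A hand-weighted toy instance of that claim for Game of Life (my own weights,
not the paper's construction): a single 3x3 convolution gathers the
neighborhood, and pointwise ReLU units (the network-in-network stage) apply
the rule table.

    import numpy as np
    from scipy.signal import convolve2d

    relu = lambda x: np.maximum(x, 0)

    def life_cnn_step(board):
        # Stage 1, a 3x3 convolution: v = 2*(neighbor sum) + self gives every
        # (state, neighbor-count) case the rule needs a distinct value.
        k = np.array([[2, 2, 2],
                      [2, 1, 2],
                      [2, 2, 2]])
        v = convolve2d(board, k, mode="same", boundary="wrap")
        # Stage 2, pointwise (1x1) ReLU units: the next state is 1 exactly
        # when 5 <= v <= 7 (live with 2 or 3 neighbors gives v = 5 or 7;
        # dead with 3 neighbors gives v = 6).
        return (relu(v - 4) - relu(v - 5)) - (relu(v - 7) - relu(v - 8))

    # Sanity check: a blinker oscillates with period 2.
    b = np.zeros((8, 8), dtype=int)
    b[3, 2:5] = 1
    assert np.array_equal(life_cnn_step(life_cnn_step(b)), b)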

------
yummypaint
I take it this is not a bidirectional mapping?

------
marcAKAmarc
I was hoping to see some mention of rule 110.
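
For what it's worth, an elementary CA like rule 110 fits the paper's
conv-then-pointwise template directly. A quick hand-rolled sketch (my own,
not taken from the paper):

    import numpy as np

    RULE = 110
    TABLE = np.array([(RULE >> i) & 1 for i in range(8)])  # rule 110 truth table

    def eca_step(row):
        # A width-3 convolution (written as shifts, with wraparound) encodes
        # each neighborhood as an integer 0..7; a pointwise table lookup then
        # applies the rule.
        idx = 4 * np.roll(row, 1) + 2 * row + np.roll(row, -1)
        return TABLE[idx]

    row = np.zeros(64, dtype=int)
    row[-2] = 1
    for _ in range(8):
        print("".join(".#"[c] for c in row))
        row = eca_step(row)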

------
mysterEFrank
kind of obvious

~~~
wadkar
Can you please tell us how it is obvious? I am interested in the
network-in-network part.

