The authors' other work on deep dreaming better reflects what you describe here.
The astonishing results with regard to morphogenesis cannot be emphasized enough: a complex structure is robustly encoded in a single function and can be reproduced from a single grid cell.
This is the foundation for 'programming' the morphogenesis of multicellular synthetic life in the far future. Of course, you first have to get the programming of unicellular life to work.
Also, the reverse might be possible, i.e., decompiling genetic code from phenotype and so on.
Foundation for programming the morphogenesis. Yes.
The concept of morphogenesis is fundamentally flawed; it is not thoroughly grounded in theory. Computing has the Turing test. Do concepts like morphogenesis have any tests? Nope. So building something on top of half-baked concepts will not scale, sustain, or serve as a foundation.
If we had really understood morphogenesis in the past, and it were 100% captured in theory, we would have a virtual world mimicking the biological world by now.
Yes, morphological models can be tested. You can form hypotheses about an organism's reactions to structural and chemical alterations (morphogens).
Reaction-diffusion models are available for certain kinds of morphogenetic behavior. See also https://www.brandeis.edu/now/2014/march/turingpnas.html
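As a concrete illustration of such a model, here is a minimal sketch of the Gray-Scott reaction-diffusion system, one of the classic two-species models that produces Turing-style patterns. The parameter values and helper names are illustrative, not taken from the linked article.

```python
import numpy as np

def laplacian(Z):
    # discrete 5-point Laplacian with periodic boundary conditions
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
          + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott_step(U, V, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
    # U is fed at rate f, V is removed at rate f + k,
    # and U is converted to V by the autocatalytic reaction U + 2V -> 3V
    UVV = U * V * V
    U = U + dt * (Du * laplacian(U) - UVV + f * (1 - U))
    V = V + dt * (Dv * laplacian(V) + UVV - (f + k) * V)
    return U, V

# seed a small square perturbation in an otherwise uniform field
n = 64
U = np.ones((n, n)); V = np.zeros((n, n))
U[28:36, 28:36], V[28:36, 28:36] = 0.50, 0.25
for _ in range(200):
    U, V = gray_scott_step(U, V)
```

Varying f and k moves the system between spots, stripes, and other pattern regimes.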
NNs are not a black box in the sense that you cannot see what is going on inside them, but in the sense that we currently understand them very poorly. You have a function approximator that you have to translate into some other representation. As long as you know how the basic building blocks map, or can be approximated, this might be possible.
Dang! I started working mid-last year on practically exactly this as a side project (I was going to call the paper "Towards the horse", if you get the reference :) ). Congratulations on making it work, I am really psyched. Can you remember what triggered the motivation to work on this? Somehow I remember there being something on reaction-diffusion equations on HN around that time.
Had you tried using a single Laplace operator instead of the two Sobel filters? My approach is to model reaction-diffusion as the sum of a 1x1 convolution (reaction) and a depthwise 3x3 convolution with a fixed kernel multiplied by a learnable constant (diffusion). However, for this to work, a single seed pixel obviously will not suffice. Any thoughts?
Reaction-diffusion systems were an inspiration in general. Using a Laplace operator (or its discretised equivalent on our 2D grid) might have trouble learning to generate these patterns: the Laplacian wouldn't always provide unique information as to where a cell is relative to its neighbours. It's possible the network would learn to exploit the hidden channels to bypass this directional invariance. Starting from a single pixel in such a setting would indeed need some mechanism to break the symmetry (stochastic updates as used here, for instance).
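The directional-invariance point can be seen directly from the kernels: the Laplacian gives the same response no matter which 4-neighbour is alive, while the Sobel pair distinguishes the direction. A small check (kernel values are the standard stencils; the helper names are mine):

```python
import numpy as np

laplace = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

def response(kernel, patch):
    # filter response at the centre of a 3x3 neighbourhood
    return float((kernel * patch).sum())

# neighbourhoods with a single live 4-neighbour in each direction
patches = {}
for name, (r, c) in {"left": (1, 0), "right": (1, 2),
                     "up": (0, 1), "down": (2, 1)}.items():
    p = np.zeros((3, 3))
    p[r, c] = 1.0
    patches[name] = p

lap = {k: response(laplace, p) for k, p in patches.items()}
sx = {k: response(sobel_x, p) for k, p in patches.items()}
print(lap)  # identical response for all four directions
print(sx)   # left and right give opposite signs
```

So a Laplacian-only perception step cannot tell "neighbour on the left" from "neighbour on the right", which is exactly the symmetry a single-seed start would need to break.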