The gist of it is that physical data tends to have symmetries, and these symmetries make descriptions of the data very compressible into relatively small neural circuits. Random data does not have this property, and cannot be learned easily. Super fascinating.
It could also be just that intelligence tends to mirror the outside world, but that seems a bit arbitrary.
We are part of the universe, so why would it be arbitrary if our brains were structured in ways that match typical structures found in this universe?
From a cursory reading of that article I do not see it argue the same thing.
What is this even supposed to mean? Also, "pipe" and "water" are ridiculously high level constructs, categorisations made by humans. Neither says anything about structures inherent to the universe.
I mean that when working with symmetries, information flow, and fundamental building blocks, certain structures just tend to pop up naturally. Hence fractals and geometric shapes in places where you might not expect them. Or how the laws of thermodynamics suddenly seem to be everywhere in biology now that we've started looking.
It’s interesting that so many equations expressing different laws in different fields actually share quite a few properties, and on deeper analysis can be explained by a single mathematical property. It makes me wonder how many insights we are missing simply because they were discovered in another field, under a different name, for a different purpose.
As the Universe, so the Soul.
GnosticMedia - Alchemy and the Endocrine System - an interview with Jose Barrera
I don't know, but it just feels like this should have been known in CS earlier than 2016. Circuit complexity has been studied for a long time.
Especially if you have neurons with non-linear response
Remember: ln(a·b) = ln(a) + ln(b)
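A minimal sketch of why that identity matters here: a network whose neurons apply a logarithmic nonlinearity can turn multiplication into cheap addition, then undo it with exp. This toy function (my own illustration, not from the paper) shows the ln/sum/exp trick:

```python
import math

def product_via_logs(xs):
    """Multiply positive numbers using only ln, addition, and exp,
    mimicking how a log-nonlinearity layer turns products into sums."""
    return math.exp(sum(math.log(x) for x in xs))

print(product_via_logs([2.0, 3.0, 4.0]))  # ≈ 24.0
```

Each multiplication becomes a single extra term in a sum, which is one intuition for why deep nets can represent products of many inputs without an exponential blowup in units.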
So basically a deep network can "cheaply" (i.e. not fatally expensively) describe anything that occurs in nature, which is wonderful. I wonder, however, what will happen when we move to higher cognition and meta-cognition, which require reading out network states that are not found in nature but are generated internally. Would be interesting to know if we need much more brain or a little more. In any case, a very interesting read.
What IS interesting is that it shows just how resilient the resulting networks really are, in that they can support such weirdo activations.
But there are other elements of introspection that should absolutely be handled by neural nets, things like choosing how deep a given network needs to be should be accomplished by other neural nets specifically looking for trends while training.
Fwiw, here's the paper on RNN architecture generation referenced earlier.
Does it matter if your Hamiltonian is smooth or if you are working from a discrete theory?
By way of a counterexample, you could say “but it depends whether the person walked or took the bus!” In that case, either you provide the data on whether they walked or took the bus (in which case your question collapses to a simple Hamiltonian form with the additional data), or you don’t provide the data and the question is fundamentally unanswerable in a simple discrete way.
the original paper:
a refutation (with very harsh/unprofessional words by Tishby in his counter-refutation):
I've never heard of it being understood at all.