
Show HN: Two strange useless things to do with a neural net - curuinor
https://github.com/howonlee/twostrangethings
======
murftown
Cool, I'll bite. I don't know some of your mathy terms like "putative
extensive transduction" or "anisotropy" so go easy on me.

But I hear you say things like "Neural nets are nonlinear iterated function
systems by construction," and it resonates with me. I've often tried to draw
a parallel between the recursive tree structure of a program and the matrix
structure of a neural network, which as I understand it is mathematically
still a graph of functional nodes feeding into one another - it's just that
treating it as a matrix makes the programs much more performant by using
low-level structures like numpy arrays.
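The parallel above can be sketched in a few lines of numpy. This is just an illustration of the "graph of nodes vs. stack of matrices" point, not code from the repo; the layer sizes and the tanh nonlinearity are my own arbitrary choices:

```python
import numpy as np

# Illustrative sketch: a feedforward net is conceptually a graph of
# functional nodes, but packing each layer's weights into one matrix
# lets numpy apply the whole layer at once.
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]  # arbitrary layer widths for the example
weights = [rng.standard_normal((m, n)) * 0.1
           for n, m in zip(sizes, sizes[1:])]

def forward(x, weights):
    # Each layer applies the same kind of nonlinear map again:
    # x -> tanh(W x). Iterating a nonlinear map like this is the
    # sense in which the net is an iterated function system.
    for W in weights:
        x = np.tanh(W @ x)
    return x

y = forward(rng.standard_normal(4), weights)
print(y.shape)  # output has the width of the final layer
```

The same computation could be written as an explicit node-by-node graph traversal; the matrix form is mathematically identical, just vectorized.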

In any case, I often think about the tree structure of programs, and whether
we could get to a place of programs that can analyze, understand, and evolve
the abilities of other programs. This could be seen as the old-school
"symbolic AI" mindset. And then there are the similarities and differences of
that with the matrix-based neural network revolution that's happened in recent
years. I wonder about possible homomorphisms and interfaces between those two
paradigms.

I don't doubt that I barely skimmed the surface of what you were actually
saying. The very general and meta way you were talking about information
processing is intriguing to me, and I'd love to understand more. Thanks for
sharing!

~~~
curuinor
I am afraid that I am running the other way: looking hard at the Zipfian
criticality and thinking that the seeming hierarchy of symbolic AI is even
more of an illusion.

Many seeming hierarchies are at heart the formalization of a power-law
phenomenon.
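One way to see this claim: a completely flat growth process with no hierarchy built in can still produce heavy-tailed, Zipf-like "category" sizes, which a human could then mistake for evidence of a tidy hierarchy. A minimal toy simulation (my own, not curuinor's, with an arbitrary 0.1 new-category rate) using preferential attachment:

```python
import numpy as np

# Toy flat process: items either start a new category or join an
# existing one with probability proportional to its current size.
# No hierarchy is built in anywhere.
rng = np.random.default_rng(0)
sizes = [1]
for _ in range(5000):
    if rng.random() < 0.1:                      # start a new category
        sizes.append(1)
    else:                                       # rich-get-richer join
        probs = np.array(sizes) / sum(sizes)
        sizes[rng.choice(len(sizes), p=probs)] += 1

# A roughly linear log(size)-vs-log(rank) relation with negative slope
# is the Zipfian signature of a power law.
ranked = np.sort(np.array(sizes))[::-1]
ranks = np.arange(1, len(ranked) + 1)
slope = np.polyfit(np.log(ranks), np.log(ranked), 1)[0]
print(slope)  # negative: heavy-tailed rank-size distribution
```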

------
CormacB
>Neural nets are nonlinear iterated function systems by construction. I tend
to believe that the progression of the weights in weight space is a slice of
another nonlinear iterated function system, also by construction. So I would
tend to believe that the overall landscape of the optimization is suffused
with directions with positive Lyapunov exponent, because if it's a fractal and
an attractor, one considers it a strange attractor and begins suspecting that
the dynamical process that creates it is chaotic. But that induces
anisotropies in the optimization surface.
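The chain of reasoning in the quote (iterated nonlinear map → positive Lyapunov exponent → chaos) can be illustrated on a textbook example. This is not the repo's method, just the standard logistic map, whose Lyapunov exponent is positive in the chaotic regime and negative when the dynamics contract to a fixed point:

```python
import numpy as np

def lyapunov(r, x0=0.3, n=50000, burn=1000):
    """Estimate the Lyapunov exponent of the logistic map
    f(x) = r*x*(1-x) by averaging log|f'(x)| along a trajectory."""
    x, acc = x0, 0.0
    for i in range(n + burn):
        if i >= burn:
            # |f'(x)| = |r*(1 - 2x)| measures local stretching;
            # its log-average over the orbit is the exponent.
            acc += np.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n

print(lyapunov(4.0))  # positive (theory says ln 2): chaotic regime
print(lyapunov(2.5))  # negative: orbit contracts to a fixed point
```

A positive exponent means nearby trajectories diverge exponentially, which is the signature of chaos the comment is pointing at when it calls the attractor "strange".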

Any technology that is sufficiently advanced is indistinguishable from a text
generator.

------
curuinor
As I said on the repo, I welcome discussion for a while, probably until end of
day today on HN, but I just stay bookmarked on the SA CSPAM thread, so you can
ask me questions there whenever. This is probably the only way you can get me
to actually explain the thing in plain English; I gave up trying to
preemptively explain things.

