
Neural Networks, Manifolds, and Topology (2014) - signa11
http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/
======
dang
Posted last year at
[https://news.ycombinator.com/item?id=7557964](https://news.ycombinator.com/item?id=7557964).
(Reposts after about a year are ok.)

~~~
ObviousScience
Because this usually turns into some kind of meta debate, I hope I'm not
sidetracking too much, but:

I, for one, am glad that sometimes good things get reposted, because I've been
reading a lot about neural nets lately following Google's various recent posts
(and studied topology in school), but I had missed this article the previous
year (when I was probably distracted by some other fascination).

Reposting interesting articles about topics currently in the media (and thus
the public attention) is a form of sharing metadata about the field: reposts
on a topic of particular interest probably have some sort of underlying
metadata link. (After all, someone thought it was worth bringing up in the
context of the current discussion.)

Today, I learned how to integrate something I studied long ago at school and
something that's my current hobby. Fucking awesome!

Thanks reposter.

~~~
waterlesscloud
I'm glad it got reposted because I bookmarked it the first time but never went
back to actually read it. A reminder was useful!

------
0xdeadbeefbabe
> (Apparently determining if knots are trivial is NP. This doesn’t bode well
> for neural networks.)

No, it doesn't, but what do I know (not much). Another annoying property of
NNs is knowing when they are fully baked.

Edit: Yet they have clearly grown in popularity over the past year. Does that
imply anything about the "determining if knots are trivial is NP" criticism?
Or does it just mean they are popular for other reasons such as their appeal
to people who love black boxes?
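For what it's worth, "knowing when they are fully baked" is usually handled
with patience-based early stopping on a held-out validation set: stop once the
validation loss hasn't improved for a fixed number of epochs, and keep the
best epoch seen. A minimal sketch (the function name and the made-up loss
curve are my own illustration, not from the article):

```python
def early_stop_index(val_losses, patience=5):
    """Return the epoch chosen by patience-based early stopping.

    Walks the validation-loss curve, tracking the best loss seen so far;
    halts once the loss has failed to improve for `patience` consecutive
    epochs, and returns the index of the best epoch.
    """
    best, best_i, waited = float("inf"), 0, 0
    for i, loss in enumerate(val_losses):
        if loss < best:
            best, best_i, waited = loss, i, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_i

# A made-up curve that improves, then starts overfitting.
losses = [1.0, 0.6, 0.4, 0.35, 0.34, 0.36, 0.38, 0.41, 0.45, 0.5]
print(early_stop_index(losses))  # -> 4 (epoch with loss 0.34)
```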

~~~
Houshalter
I don't really see the relevance. As the article itself says, you can just add
more dimensions and separating the clusters becomes trivial. And empirically
underfitting and local optima do not seem to be big issues for large neural
networks.
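The "add more dimensions" point can be made concrete: two classes that no
line can separate in 2D (a cluster inside a ring) become separable by a single
plane after adding one extra coordinate. A minimal numpy sketch (the synthetic
clusters and the hand-picked r² feature are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Class 0: a blob around the origin. Class 1: a ring of radius ~3.
theta = rng.uniform(0, 2 * np.pi, n)
inner = rng.normal(0, 0.4, (n, 2))
outer = np.c_[3 * np.cos(theta), 3 * np.sin(theta)] + rng.normal(0, 0.2, (n, 2))

def lift(p):
    """Append a third coordinate z = x^2 + y^2 (squared radius)."""
    return np.c_[p, (p ** 2).sum(axis=1)]

# In the lifted 3D space, the flat plane z = 4 splits the classes:
# the blob sits below it, the ring above it.
frac_inner_below = (lift(inner)[:, 2] < 4).mean()
frac_outer_above = (lift(outer)[:, 2] > 4).mean()
print(frac_inner_below, frac_outer_above)
```

With these parameters both fractions come out at (or extremely close to) 1.0:
the extra dimension turns a topologically awkward 2D problem into a linearly
separable 3D one, which is the article's point about width.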

------
guepe
Thanks for the post, I found it a very intuitive explanation of how neural
networks work!

------
scrumper
This is excellent both to help understand how neural networks classify inputs,
and to understand a bit more about topology in general.

