
 Neural Networks - A Systematic Introduction  - yannis
http://page.mi.fu-berlin.de/rojas/neural/
======
varjag
Looks like a comprehensive volume covering the state of the art in the late 90s.

That's also about the time I stopped following the domain. So I wonder if
there have been any advances in terms of new models and topologies since then?

~~~
mstoehr
I'm not any sort of expert on the neural network literature, but these are
some papers from the last three years that caught my eye. Yann LeCun also does
work on neural nets, but I haven't been all that impressed by his results. One
of the main advances has been ways of developing 'deep' architectures with
multiple layers rather than the traditional shallow neural networks (arguably
the SVM, for instance, is actually a very cleverly trained single-layer neural
network).
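For intuition, here's a quick sketch of my own (not from any of the papers
below) of what "deep" means structurally: the same fully connected layer, just
stacked more times. Layer sizes and the tanh nonlinearity are arbitrary
choices for illustration.

```python
# Sketch: a "shallow" one-hidden-layer net vs. a "deep" stack of layers,
# forward pass only, in plain NumPy. Training (backprop, or the layer-wise
# pretraining in Hinton's DBN work) is the hard part and is omitted here.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One fully connected layer followed by a tanh nonlinearity.
    return np.tanh(x @ w + b)

def forward(x, params):
    # Apply each (weights, bias) pair in turn; depth = len(params).
    for w, b in params:
        x = layer(x, w, b)
    return x

def init(sizes):
    # Small random weights for consecutive layer sizes, e.g. [4, 8, 2].
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal((5, 4))      # batch of 5 four-dimensional inputs
shallow = init([4, 8, 2])            # one hidden layer
deep = init([4, 8, 8, 8, 2])         # three hidden layers, same interface

print(forward(x, shallow).shape)     # (5, 2)
print(forward(x, deep).shape)        # (5, 2)
```

The point is that depth is cheap to express; what the deep-learning papers
contribute is how to train such stacks effectively.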

Here's a Geoffrey Hinton paper on training deep belief networks:
<http://www.cs.toronto.edu/~hinton/absps/ncfast.pdf>

Here's some stuff from Andrew Ng's group. This paper shows how his deep belief
network was able to 'learn' certain plausible image primitives in an
unsupervised manner:
[http://robotics.stanford.edu/~ang/papers/nips07-sparsedeepbe...](http://robotics.stanford.edu/~ang/papers/nips07-sparsedeepbeliefnetworkv2.pdf)
This one won a best paper award (application paper), and it's about fast ways
of building a deep belief network:
[http://robotics.stanford.edu/~ang/papers/icml09-Convolutiona...](http://robotics.stanford.edu/~ang/papers/icml09-ConvolutionalDeepBeliefNetworks.pdf)

~~~
varjag
Thanks a lot for the references, now I have some reading for tomorrow!

