Looks like a comprehensive volume covering the state of the art in the late 90s.
That's also about the time I stopped following the domain. So I wonder if there have been any advances in terms of new models and topologies since then?
I'm not any sort of expert on the neural network literature, but these were some papers in the last three years that caught my eye. Yann LeCun also does work on neural nets, though I haven't been all that impressed by his results. One of the main advances has been methods for training 'deep' architectures with multiple layers, rather than the traditional shallow neural networks (arguably the SVM, for instance, is really just a very cleverly trained single-layer neural network).
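To make that parenthetical concrete, here's a rough sketch of the structural analogy (plain numpy, toy parameters of my own invention, not anyone's published code): a kernel SVM's decision function has exactly the one-hidden-layer shape, with each support vector playing the role of a hidden unit.

    import numpy as np

    def rbf_kernel(x, z, gamma=0.5):
        """Gaussian (RBF) kernel between two vectors."""
        return np.exp(-gamma * np.sum((x - z) ** 2))

    def svm_decision(x, support_vectors, alphas, bias, gamma=0.5):
        """Kernel SVM decision function: f(x) = sum_i alpha_i * K(sv_i, x) + b.
        Each support vector acts like one hidden unit whose activation is the
        kernel response, combined by a linear output layer."""
        hidden = np.array([rbf_kernel(x, sv, gamma) for sv in support_vectors])
        return hidden @ alphas + bias

    def one_hidden_layer_net(x, W, b_hidden, v, b_out):
        """Classic shallow MLP: one nonlinear hidden layer plus a linear output.
        Structurally the same 'template response + linear combination' shape."""
        hidden = np.tanh(W @ x + b_hidden)
        return v @ hidden + b_out

    # Toy usage: three 'support vectors' in 2-D
    svs = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    alphas = np.array([0.7, -0.4, 0.2])
    print(svm_decision(np.array([0.5, 0.5]), svs, alphas, bias=0.1))

The difference is entirely in how the "hidden layer" is chosen and trained (margin maximization over kernel responses versus backprop), which is what the "cleverly trained" part refers to.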
All the techniques described in this book use rate-based coding. That is, they assume that the important dynamic quantity is the level of activation or, in biological terms, the neuron's firing rate. Biological evidence shows that correlations in spike timing are also very important, both for the network's behaviour and for learning (so-called "spike-timing-dependent plasticity").
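For anyone who hasn't seen it, the standard pair-based form of the STDP rule looks roughly like this (exponential timing windows; the constants below are illustrative, not taken from any particular paper):

    import numpy as np

    def stdp_weight_change(dt, a_plus=0.01, a_minus=0.012,
                           tau_plus=20.0, tau_minus=20.0):
        """Pair-based STDP rule with exponential windows (illustrative constants).

        dt = t_post - t_pre in milliseconds.
        If the presynaptic spike precedes the postsynaptic one (dt > 0) the
        synapse is potentiated; if it follows (dt < 0) it is depressed.
        """
        if dt > 0:
            return a_plus * np.exp(-dt / tau_plus)
        else:
            return -a_minus * np.exp(dt / tau_minus)

    # Pre fires 5 ms before post -> potentiation; 5 ms after -> depression
    print(stdp_weight_change(5.0))   # positive
    print(stdp_weight_change(-5.0))  # negative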
Some people have started developing computer models of this property, but unfortunately it's not a widely known research topic. One paper I've read is Gerstner et al. (1999), in which they describe a model of unsupervised learning. Gerstner also has a book called Spiking Neuron Models (available online) that goes into a lot of detail on the topic. Other people have done supervised learning by evolving network topologies using genetic algorithms.
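To give a flavour of what these models look like, here's a minimal leaky integrate-and-fire neuron, the simplest of the spiking models Gerstner's book covers (Euler integration and toy constants of my own choosing, just a sketch):

    import numpy as np

    def simulate_lif(input_current, dt=0.1, tau_m=10.0, v_rest=-70.0,
                     v_reset=-75.0, v_threshold=-54.0, r_m=10.0):
        """Leaky integrate-and-fire neuron, Euler-integrated.

        Membrane dynamics: tau_m * dV/dt = -(V - v_rest) + r_m * I(t).
        A spike is recorded when V crosses v_threshold, then V is reset.
        Units are ms/mV-ish; the constants are illustrative only.
        """
        v = v_rest
        spike_times = []
        for step, current in enumerate(input_current):
            v += (-(v - v_rest) + r_m * current) * (dt / tau_m)
            if v >= v_threshold:
                spike_times.append(step * dt)
                v = v_reset
        return spike_times

    # A constant input drives a regular spike train
    print(simulate_lif([2.0] * 1000))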
The advantage of spike-based models is that they also seem to scale better to larger networks, and to have greater computational power than rate-based networks of similar complexity.
I'm quite optimistic about this field of research. Neural network research seems to have stagnated recently, for whatever reason, but I think switching to spike-based models is the way out of that.
A curious recent variation is allowing the number of hidden units to be infinite. It was observed that increasing the number of hidden units in a two-layer perceptron reduces the number of local minima encountered during training. Allowing the number of hidden units to be unbounded makes the optimization problem convex: http://books.nips.cc/papers/files/nips18/NIPS2005_0583.pdf
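Roughly, as I understand that paper, the trick is to optimize over a signed weighting w of the continuum of possible hidden units h(x; θ) rather than over a fixed finite set of them, so the prediction is linear in the object being optimized:

    f(x) = \int w(\theta)\, h(x;\theta)\, d\theta,
    \qquad
    \min_{w} \; \sum_{i} L\big(y_i, f(x_i)\big) + \lambda \int |w(\theta)|\, d\theta .

Since f is linear in w, the loss L is convex, and the L1-style penalty is convex, the whole training problem is convex. Picking a finite number of hidden units amounts to restricting w to a sum of point masses, which is where the non-convexity comes back in the usual formulation.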
Neural networks have been untrendy since the late 90s, which has slowed research since then. The general opinion among practical people now is that nets with a single hidden layer are one of the major workhorses in the classification/regression toolbox. Neural nets with many hidden layers (needed to really advance the state of the art) are still a work in progress. There has been a lot of work on "deep" networks, usually focusing on unsupervised pre-training and then either sticking a support vector machine on the end or finally training a single supervised layer (rough sketch of that recipe below). Practical people don't really use that stuff yet.
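In case it's useful, here's roughly what that recipe looks like: greedy layer-wise pre-training with plain autoencoders in numpy. The layer sizes, learning rate, and squared-error objective are my own toy choices, not a reproduction of any specific paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_autoencoder(X, n_hidden, lr=0.1, epochs=50):
        """One-hidden-layer autoencoder trained by gradient descent on
        squared reconstruction error (no weight tying, for clarity)."""
        n_in = X.shape[1]
        W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        W2 = rng.normal(0, 0.1, (n_hidden, n_in))
        for _ in range(epochs):
            H = sigmoid(X @ W1)            # encode
            R = H @ W2                     # decode (linear)
            err = R - X
            dW2 = H.T @ err / len(X)
            dH = err @ W2.T * H * (1 - H)
            dW1 = X.T @ dH / len(X)
            W1 -= lr * dW1
            W2 -= lr * dW2
        return W1

    def pretrain_stack(X, layer_sizes):
        """Greedy layer-wise unsupervised pre-training: train an autoencoder,
        keep its encoder, feed its hidden code to the next autoencoder."""
        weights, data = [], X
        for n_hidden in layer_sizes:
            W = train_autoencoder(data, n_hidden)
            weights.append(W)
            data = sigmoid(data @ W)
        return weights, data

    # Toy data: the top-level features could then be handed to an SVM or
    # to a single supervised output layer trained in the usual way.
    X = rng.normal(size=(200, 20))
    weights, features = pretrain_stack(X, layer_sizes=[15, 10])
    print(features.shape)  # (200, 10)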