
In addition to what bravura said, Geoff Hinton's group at the University of Toronto has recently introduced a new approach to training neural networks that they call "dropout". You can read about it in their paper (http://arxiv.org/pdf/1207.0580.pdf) or watch Hinton describe it in a recent talk he gave at Google (http://www.youtube.com/watch?v=DleXA5ADG78).

Roughly speaking, dropout training provides a strong regularizing effect through a sort of model averaging that is conceptually related to the well-known bagging approach from which random forests derive their power and flexibility. Dropout training has already produced state-of-the-art results on several time-worn standard benchmarks and helped Hinton's group win a recent Kaggle competition (for an overview of their approach, see: http://blog.kaggle.com/2012/11/01/deep-learning-how-i-did-it...).
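The core mechanism is simple enough to sketch in a few lines. This is a minimal NumPy illustration of the dropout idea from the paper, not anyone's actual implementation: during training each hidden unit is zeroed independently with probability p_drop, and at test time activations are scaled by the keep probability so their expected values match what the network saw during training (the function name and shapes here are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(h, p_drop=0.5, train=True):
    """Apply dropout to a layer's activations h.

    Training: zero each unit independently with probability p_drop.
    Test: scale activations by the keep probability (1 - p_drop),
    which approximates averaging over the exponentially many
    "thinned" sub-networks sampled during training.
    """
    if train:
        mask = (rng.random(h.shape) >= p_drop).astype(h.dtype)
        return h * mask
    return h * (1.0 - p_drop)

# Example: a batch of 4 examples with 8 hidden activations each.
h = np.ones((4, 8))
train_out = dropout_forward(h, p_drop=0.5, train=True)   # ~half zeroed
test_out = dropout_forward(h, p_drop=0.5, train=False)   # all scaled to 0.5
```

The test-time scaling is what makes this resemble bagging: rather than evaluating every sampled sub-network and averaging, the single scaled network serves as a cheap approximation to that average.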

I've played around with this a bit over the last few weeks and have a Matlab implementation publicly available on my GitHub at: https://github.com/Philip-Bachman/NN-Dropout.
