
"The algorithm then works backwards through the layers, adjusting weights so that the output will be the/closer to the desired output. Backpropagation itself is about a page worth of differentiation derivation, as you need to work out how much each weight affects each node."

For reference, I think that last bit may oversell it a bit if you are familiar with multivariable calculus, which is a required course in many computer science programs, so that may not be much of a stretch. Getting all the details of backpropagation right can be a challenge, but it boils down to treating each weight as a dimension, taking the gradient of the error with respect to all of them, and then nudging the vector of weights up (or down, depending on your sign) the gradient by the "learning rate", which is nothing more than a statement of how far in the direction of the gradient you'll go.
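
Concretely, for a single linear unit with squared error the whole update fits in a few lines. This is just a toy Python sketch of that gradient step (my own names and example, not from the article):

    def train_step(weights, inputs, target, learning_rate=0.1):
        # forward pass: weighted sum of the inputs
        output = sum(w * x for w, x in zip(weights, inputs))
        error = output - target
        # partial derivative of 0.5 * error**2 with respect to weight i is error * inputs[i]
        gradient = [error * x for x in inputs]
        # step the whole weight vector down the gradient, scaled by the learning rate
        return [w - learning_rate * g for w, g in zip(weights, gradient)]

Backpropagation is the same idea, just with the chain rule pushing that error term back through each layer so you can get the gradient for the hidden weights too.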

Sadly, non-looping neural networks are actually quite boring as classifiers go, and unless someone has made a breakthrough that I haven't heard of, nobody really knows what to do with looping neural networks. (It is possible that breakthrough has been made without my hearing about it, but it is the sort of thing my feeds really should have picked up on; it would be big news.)

(Actually one of my personal projects someday is to fire one AI buzzword at another and see if I can't use genetic programming to write something that can train a looping network. :) )
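
Roughly what I have in mind is something like this toy sketch, though strictly it evolves the weights of a fixed looping network rather than evolving programs, so it is closer to neuroevolution than genetic programming proper (all names and numbers here are made up):

    import math
    import random

    def run_rnn(weights, sequence):
        # one recurrent unit: the hidden state h loops back into itself each step
        w_in, w_rec, w_out = weights
        h = 0.0
        for x in sequence:
            h = math.tanh(w_in * x + w_rec * h)
        return w_out * h

    def fitness(weights, examples):
        # negative squared error over (sequence, target) pairs; bigger is better
        return -sum((run_rnn(weights, seq) - target) ** 2 for seq, target in examples)

    def evolve(examples, population_size=50, generations=200):
        # start with random weight vectors, keep the better half, mutate to refill
        population = [[random.gauss(0, 1) for _ in range(3)] for _ in range(population_size)]
        for _ in range(generations):
            population.sort(key=lambda w: fitness(w, examples), reverse=True)
            survivors = population[:population_size // 2]
            children = [[w + random.gauss(0, 0.1) for w in random.choice(survivors)]
                        for _ in range(population_size - len(survivors))]
            population = survivors + children
        return max(population, key=lambda w: fitness(w, examples))

No gradients anywhere, which is the whole appeal: the looping forward pass is just a black box you score.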




By "looping", I'm assuming you mean "recurrent"? If so, you might be interested these:

http://www.idsia.ch/~juergen/rnn.html

http://www.lsm.tugraz.at/learning/framework.html

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.6...

-----



