

John Resig dissects the neural network javascript OCR captcha code - mark_h
http://ejohn.org/blog/ocr-and-neural-nets-in-javascript/

======
smanek
If I'm interpreting this correctly (which I may not be, since there is some
ambiguity in the wording) Resig seems to have a bit of a misunderstanding of
how neural networks work.

He says that an edge weight in the neural network means "At pixel 9x6 the
letter A is 58% likely to be filling in that pixel." That isn't how neural
networks work (although it is fairly close to how Bayesian belief nets work).
In a neural network the edge weights of the graph don't obviously correspond
to anything - they are simply chosen (most commonly via the backpropagation
training algorithm) to minimize the error between the desired output and the
actual output on the training data.

The weights in a neural network are simply the coefficients for terms in an
equation that, when plotted, produces a curve that's a good fit for a series
of data points. Unfortunately, that explanation is not very sexy ;-) That's
one of the big problems with neural networks - they're effectively just black
boxes that incidentally produce pretty good answers. They are (in most of
their standard incarnations) really just a fairly simple technique for
regression analysis.

Incidentally, I got to see Resig speak about jQuery yesterday, and he's
clearly a very clever guy. I say this with all due respect, and fully expect
that this is just a misunderstanding due to me misinterpreting the sentence I
quoted in my second paragraph.

~~~
kragen
I think you can be a very clever guy and still have misunderstandings of how
neural networks work. They aren't contradictory!

~~~
jeresig
Heh, yeah, I'm almost certain that I'm just wrong in my interpretation - I may
know JavaScript but I'm still a neural network newb. I've updated that
paragraph to, hopefully, be a little more correct.

