
A brain learns one example at a time, which probably means gradient descent is not a very good learning mechanism.



These are not brains; they are not even models of brains, even if they use the word 'neuron'.


That was kind of my point. The brain learns well using one example at a time; ANNs don't. Hence my conclusion.


You can use gradient descent one example at a time, and it still works just fine. The gradients are noisier, but you will still converge eventually.
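To illustrate what I mean (a minimal sketch of my own, not from the article): plain online SGD with a batch size of one, fitting y = 2x + 1, still converges despite the noisy per-example gradients.

  import random

  random.seed(0)
  w, b = 0.0, 0.0          # parameters to learn
  lr = 0.01                # learning rate

  for step in range(10_000):
      x = random.uniform(-1, 1)   # one example at a time
      y = 2.0 * x + 1.0           # true target
      pred = w * x + b
      err = pred - y              # d(loss)/d(pred) for 0.5*err**2
      w -= lr * err * x           # gradient step from a single example
      b -= lr * err

  print(w, b)  # ends up near (2.0, 1.0)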


"works just fine" is a relative term. I'm pretty sure when I'm learning a (new) alphabet, I don't need to see 1000 examples of each letter 100 times.



