Hacker News
p1esk on March 23, 2017 | on: Why is so much memory needed for deep neural netwo...
A brain learns one example at a time, which probably means a gradient descent is not a very good learning mechanism.
jacquesm on March 23, 2017
These are not brains, they are not even models of brains even if they use the word 'neuron'.
p1esk on March 23, 2017
That was kind of my point. The brain learns well using one example at a time; ANNs don't. Hence my conclusion.
akyu on March 23, 2017
You can use gradient descent one example at a time. It still works just fine. The gradients are more unstable, but you will still converge eventually.
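
For concreteness, here is a minimal sketch of what "one example at a time" gradient descent (batch size 1, i.e. online SGD) looks like on a toy logistic-regression problem. The data, model, and learning rate are placeholders chosen for illustration, not anything from the thread:

    # Online SGD sketch: one example per parameter update.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 200 points, 2 features, labels from a linear rule.
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    w = np.zeros(2)
    b = 0.0
    lr = 0.1  # learning rate (placeholder value)

    for epoch in range(20):
        for i in rng.permutation(len(X)):      # visit examples one at a time
            z = X[i] @ w + b
            p = 1.0 / (1.0 + np.exp(-z))       # sigmoid prediction
            grad = p - y[i]                    # dLoss/dz for log loss on this single example
            w -= lr * grad * X[i]              # per-example parameter update
            b -= lr * grad

    acc = np.mean(((X @ w + b) > 0) == y.astype(bool))
    print(f"training accuracy after online SGD: {acc:.2f}")

Each update uses the gradient from a single example, so individual steps are noisy, but over many passes the parameters still converge, which is the point being made above.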
p1esk on March 23, 2017
"works just fine" is a relative term. I'm pretty sure when I'm learning a (new) alphabet, I don't need to see 1000 examples of each letter 100 times.