
(paper author) You are correct that the 'natural' moth maxes out after about 20 samples/class. It is not yet clear whether this is an intrinsic limitation of the architecture (the competitive pressure on an insect is for fast, rough learning) or just an artifact of the particular parameters of the natural moth. For example, reducing the Hebbian growth parameters (i.e., slowing weight growth) would let the system keep responding to additional training samples, which should give better test-set accuracy. We're still running experiments.
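(For readers unfamiliar with the mechanism being discussed: the sketch below is a toy illustration, not the paper's MothNet code. It assumes a simple rate-based Hebbian rule with a growth-rate parameter and a weight ceiling; the names growth_rate and w_max are made up for illustration. It shows how a large growth rate drives weights to saturation after only a few samples per class, while a smaller rate leaves room for later samples to matter.)

    import numpy as np

    def hebbian_train(samples, labels, n_classes, growth_rate=0.1, w_max=1.0):
        """Toy Hebbian learner: one weight vector per class, grown toward each
        training sample and clipped at w_max. Illustrative only."""
        n_features = samples.shape[1]
        W = np.zeros((n_classes, n_features))
        for x, y in zip(samples, labels):
            # Hebbian-style update: co-activity of input and class strengthens weights.
            W[y] += growth_rate * x
            # Saturation: once weights hit the ceiling, further samples change nothing,
            # which is one way a fast growth rate can 'max out' after ~20 samples/class.
            np.clip(W[y], 0.0, w_max, out=W[y])
        return W

    def predict(W, x):
        # Classify by the class whose weight vector best matches the input.
        return int(np.argmax(W @ x))

With a smaller growth_rate, more samples contribute before the clip kicks in, which is the intuition behind "slowing the Hebbian growth parameters" above.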



It sounds like you ran experiments on the BNN with >20 samples/class. Why were those data points not included in Figure 2?



