A computational model of the Moth Olfactory Network learns to read MNIST [pdf] (openreview.net)
70 points by higgsfield 7 days ago | 14 comments

For some reason, Figure 2 doesn't extend beyond the few-training-samples regime, so I think we're left to assume that MothNet underperforms the other techniques in the many-samples regime. Is there something I'm missing?

It's ambiguous; it isn't clear whether they ran the experiments with more than 20 samples per class.

(paper author) You are correct that the 'natural' moth maxes out after about 20 samples/class. It is not yet clear whether this is an intrinsic limitation of the architecture (the competitive pressure on an insect is for fast and rough learning), or whether it is just an artifact of the parameters of the natural moth. For example, slowing the Hebbian growth parameters would allow the system to respond to more training samples, which should give better test-set accuracy. We're still running experiments.
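To make "slowing the Hebbian growth parameters" concrete, here is a toy rate-based sketch (an illustration only, with made-up names and values; it is not the model's actual code):

    import numpy as np

    def hebbian_update(W, pre, post, growth_rate=0.1, w_max=1.0):
        """One Hebbian step: weights grow where pre- and post-synaptic
        activity coincide, with soft saturation at w_max. A smaller
        growth_rate means weights saturate later, so later training
        samples still move them."""
        dW = growth_rate * np.outer(post, pre) * (w_max - W)
        return W + dW

With a smaller growth_rate, the weights take longer to pin against w_max, so training samples beyond the first ~20 per class would still contribute.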

It sounds like you ran experiments on the BNN with >20 samples/class. Why were those data points not included in Figure 2?

I have yet to read this paper, but I'm wondering whether the authors are familiar with the work of Dasgupta et al. on a fly olfactory model for locality-sensitive hashing?

https://www.biorxiv.org/content/biorxiv/early/2017/08/25/180...

I have been contemplating the relationship between random projections and compressive sensing since reading it, and I'm curious to read this paper for any insights on compressive sensing.
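The core trick there, as I understand it, is roughly the following (a sketch with illustrative parameter values of my own choosing, not their actual code):

    import numpy as np

    def fly_hash(x, m=2000, k=50, p=0.1, seed=0):
        """Fly-inspired LSH (after Dasgupta et al.): a sparse binary
        random projection *up* to m dimensions, followed by a
        winner-take-all step that keeps only the k largest activations."""
        rng = np.random.default_rng(seed)
        M = (rng.random((m, x.shape[0])) < p).astype(float)  # sparse binary projection
        y = M @ x
        tag = np.zeros(m)
        top = np.argsort(y)[-k:]   # indices of the k strongest responses
        tag[top] = y[top]          # winner-take-all: silence the rest
        return tag

Expanding the dimension before sparsifying is the opposite of the usual LSH recipe, which is what makes the connection to compressive sensing interesting.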


This is absolutely brilliant. I have been looking for a way into an understanding of learning within biological neural nets. I don't suppose there is source code around?

So are we learning that brains and neurons are general-purpose computation goo that can be applied to many different areas of signal processing yet?

Kind of feel like we already know that brains can do general-purpose computation...

Needs a [pdf] flag

Why is it, by the way, that papers have the author names at the top but not the date? Dates are attached to papers when they're cited in references, so why not put the date on the paper itself too?

This one happens to have "Workshop track - ICLR 2018" at the top, so it has some dating, but most don't even have that.


Papers are published in journal/conference proceedings/etc. that will have the date of the issue ("Transactions for the International Symposium on Computational Yak Shaving 2018"). The paper might have been written in 2017, but published in 2018, which means that when it gets cited it will be as "ABC et al., 2018".

Papers without a date are usually preprints, or published independently (e.g. on the author's website) while expecting actual publication at some point.


That's the nice thing about arXiv: the first four digits of the paper's number tell you the month and year it was first published.

> The first four digits of the paper's number tell you the month and year it was first published.

I think "published" should be "submitted" there. (I suppose that one could argue for regarding submission to the arXiv as publication, especially given the presence of overlay journals—but probably that's not what you meant.)


meta: this could be automated.
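e.g., a few lines would do it (assuming the post-2007 YYMM.NNNNN identifier format; the ID below is just a made-up example):

    import re

    def arxiv_date(arxiv_id):
        """Pull the submission year and month out of a modern arXiv ID."""
        m = re.match(r"(\d{2})(\d{2})\.\d{4,5}", arxiv_id)
        if not m:
            raise ValueError("not a post-2007 arXiv identifier")
        yy, mm = m.groups()
        return 2000 + int(yy), int(mm)

    print(arxiv_date("1802.12345"))  # -> (2018, 2)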