With this field advancing so fast, it would be great if we could do something like this:
Maintain a running list of:
- 3-5 most important papers in the last 3 months
- 3-5 in the last 1 year (not everything in the 3-month list would make it into this one)
- 3-5 in the last 5 years.
I guess it's difficult for a small number of people to rank the papers. Maybe a Hacker News or Reddit-style upvote/downvote system could be used, with a list that essentially scrapes arXiv for papers.
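Just to make the idea concrete, here's a rough sketch in Python of what the scraping half could look like, using the public arXiv Atom API. The category, the ranking function, and its parameters are all placeholders I made up for illustration, not a real design:

    # Sketch: pull recent cs.CV submissions from the arXiv API and rank them
    # with a Hacker News-style score. score_paper() is a made-up placeholder.
    import urllib.request
    import urllib.parse
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"

    def fetch_recent_papers(category="cs.CV", max_results=20):
        """Return (title, link) tuples for the newest submissions in a category."""
        query = urllib.parse.urlencode({
            "search_query": f"cat:{category}",
            "sortBy": "submittedDate",
            "sortOrder": "descending",
            "max_results": max_results,
        })
        with urllib.request.urlopen(f"http://export.arxiv.org/api/query?{query}") as resp:
            root = ET.fromstring(resp.read())
        return [(entry.findtext(f"{ATOM}title").strip(),
                 entry.findtext(f"{ATOM}id"))
                for entry in root.findall(f"{ATOM}entry")]

    def score_paper(upvotes, downvotes, age_hours, gravity=1.8):
        """HN-style ranking: net votes decayed by age (placeholder parameters)."""
        return (upvotes - downvotes) / ((age_hours + 2) ** gravity)

    if __name__ == "__main__":
        for title, link in fetch_recent_papers():
            print(title, "->", link)

The voting side would still need a store of up/down counts per paper; the score function above is just the usual time-decayed net-vote shape that HN-like sites use.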
> This is what's done every year at the AI conferences. No need for a new voting system.
Not really. Academia rarely ranks things important to the application (aka industry) of a field of study. An efficiency improvement that completely changes the economics of something would likely be rejected from an academic conference for being 'incremental'. Similarly, 'novel' work is much more highly rewarded in these conferences than improvements to existing techniques, experiment replications, or negative results, all of which are more important to industry.
Bibliometrics are garbage and represent little more than a popularity contest (and a measure of what is controversial). It's basically as if you took upvotes and downvotes on Reddit and had them both increment a single score. If they were worth anything, there wouldn't be such a thing as review papers that highlight important developments.
> Academia rarely ranks things important to the application (aka industry) of a field of study.
"Rarely" is a strong word. I think the truth is closer to, "Academia has screwed up as a whole in a very small number of cases, but usually works just fine, and always self-corrects eventually."
I'm not talking about screwups. I'm talking about fundamentally unaligned objectives between academia (advancing the edges of knowledge) and industry (making knowledge useful in an economic sense).
It's not a problem, it's just something you have to recognize so you don't have the wrong expectation.
Good list! I think it's important to note that this article is (intentionally) focused on modern CNN architectures, and not "deep learning" in general.
I'd also add the following "technique" papers: Geoff Hinton et al.'s dropout paper[0] and Ioffe and Szegedy's Batch Normalization paper[1]. I don't think there's been enough time for the dust to settle, but I'm excited about the possibilities Stochastic Depth[2] could offer, too.
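For anyone who hasn't seen these techniques in code, here's a minimal sketch of where they sit in a small CNN block, written with PyTorch. The layer sizes and the dropout probability are arbitrary choices for illustration, not values from the papers:

    # Dropout (Hinton et al.) and batch normalization (Ioffe & Szegedy)
    # in a toy convolutional block. All hyperparameters are arbitrary.
    import torch
    import torch.nn as nn

    block = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1),
        nn.BatchNorm2d(64),   # normalize activations per mini-batch
        nn.ReLU(),
        nn.Dropout(p=0.5),    # randomly zero activations during training
    )

    x = torch.randn(8, 3, 32, 32)   # dummy batch of 8 RGB 32x32 images
    print(block(x).shape)           # torch.Size([8, 64, 32, 32])

Note that both layers behave differently at train and test time (dropout is disabled and batch norm switches to running statistics once you call .eval()), which is a big part of what the papers are about.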
I really enjoyed this write-up. It would be so great if research papers in general were commented on like this (what is interesting, what is the significance of the result, etc.).
Glad to see R-CNN and its follow-on work on the list. We've been using R-CNN for a few weeks now and have seen great results on object detection and localization. A few papers this year have played around with substituting different convnets and different classification schemes and improving the network in various ways. I'm excited to see where this specific architecture goes in the next few years.
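For anyone who wants to try this family of models without training anything, here's roughly what inference looks like with torchvision's pretrained Faster R-CNN. This is just a sketch of the off-the-shelf API, not our actual pipeline, and the dummy image stands in for real data:

    # Off-the-shelf object detection with a pretrained Faster R-CNN.
    import torch
    import torchvision

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = torch.rand(3, 480, 640)      # dummy RGB image with values in [0, 1]
    with torch.no_grad():
        detections = model([image])[0]   # dict with "boxes", "labels", "scores"

    print(detections["boxes"].shape, detections["scores"][:5])

The later papers the parent mentions mostly swap the backbone convnet or the region-proposal/classification stages while keeping this overall detect-then-classify structure.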
Yes, it's a lot of papers. This might help people keep on top of the most important ones (though it works from the top of the queue, not from the bottom up): http://www.arxiv-sanity.com/