

2013 NIPS Proceedings – Advances in Neural Information Processing Systems - benhamner
http://papers.nips.cc/book/advances-in-neural-information-processing-systems-26-2013?

======
ajtulloch
Alexandre Passos has an excellent editorial selection of NIPS 2013 papers
(along with some comments) on his blog at [1].

[1]: [http://atpassos.me/post/67560831508/nips-2013-reading-list](http://atpassos.me/post/67560831508/nips-2013-reading-list).

------
seiji
Now prove each of those titles isn't auto-generated by the HN headline
generator.

------
benhamner
This is an interesting case study in how the submitted headline affects the
page's rank on HN.

I'd originally submitted this earlier today, with the headline "Neural
Information Processing Systems (NIPS) 2013 Proceedings":
[https://news.ycombinator.com/item?id=6815771](https://news.ycombinator.com/item?id=6815771)

It only got one other upvote and never made it to the front page. Meanwhile,
another article submitted at almost exactly the same time with far less
interesting content but a more provocative headline (on a user getting banned
from Uber for API abuse) got ~15 votes, pushing it onto the front page.

I re-submitted this page with a slightly more descriptive (and buzzwordy)
headline ("State of the art Machine Learning papers: NIPS 2013"), and it
almost immediately ended up on the front page. Since then the headline has
been reverted to the page's own title, again slowing the rate at which it's
received votes.

~~~
presty
Interesting, but maybe too much witch-hunting? The time of posting could've
had as much impact as the title.

Your first post did not hit HN's RSS feed (IIRC, at least 3 votes are needed
for the vanilla feed), otherwise I would've upvoted it. This one did.

~~~
benhamner
Ha, potentially. The dangers of drawing conclusions from N=2.

------
presty
while we're at it, here's from ICML2013

[http://icml.cc/2013/?page_id=47](http://icml.cc/2013/?page_id=47) (click on
the schedule images)

~~~
gcr
For those of you studying computer vision, the entire CVPR 2013 proceedings
are released under open access:
[http://www.cv-foundation.org/openaccess/CVPR2013.py](http://www.cv-foundation.org/openaccess/CVPR2013.py)

In a few days, ICCV 2013 papers will be published too:
[http://www.cv-foundation.org/openaccess/ICCV2013.py](http://www.cv-foundation.org/openaccess/ICCV2013.py)

------
karpathy
By the way, just this evening I formatted the accepted papers in a nicer way:
(okay, I'm biased)

[http://cs.stanford.edu/people/karpathy/nips2013/](http://cs.stanford.edu/people/karpathy/nips2013/)

The page allows you to toggle LDA topics on and off to browse the papers,
or (my personal favorite) find a paper you like and sort the other papers
by tf-idf similarity, which tends to surface exceptionally relevant papers.
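
The similarity sorting described above can be sketched in a few lines:
build tf-idf weight vectors over the papers' text, then rank the rest of
the corpus by cosine similarity to a chosen paper. This is a minimal,
stdlib-only sketch, not the code behind the linked page; the toy
"abstracts" and function names are invented for illustration.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute a sparse tf-idf vector (term -> weight) per tokenized doc."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)               # raw term frequency
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy "abstracts": pick paper 0 as the query, rank the others by similarity.
papers = [
    "deep learning for image recognition".split(),
    "convolutional networks for image classification".split(),
    "bayesian inference in graphical models".split(),
]
vecs = tfidf_vectors(papers)
ranked = sorted(range(1, len(papers)),
                key=lambda i: cosine(vecs[0], vecs[i]),
                reverse=True)
```

Here the convolutional-networks paper ranks first, since it shares weighted
terms with the query; rare shared terms dominate because idf down-weights
words that appear everywhere.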

~~~
kyzyl
Very, very nice job. Your page is a great example of sane, simple, and
functional data visualization.

I've been sifting through these papers trying to prioritize by relevance to
my work so that I can get them into Mendeley and dig in. You just made it a
lot easier. Looks like a pretty good haul this year, for me :-)

------
djulius
Now the game is: among these tons of (bad/good) applied mathematics, find
the one paper, if it exists, that will:

- Have a real-world application

- Stand the test of time

More difficult: judge only by reading the titles.

~~~
noelwelsh
This attitude is pretty tedious. In research we can't predict in advance
what will have lasting value; that's why it's called research. The same
applies in startups.

~~~
djulius
Tons of papers are published every year for the sake of publication and
careers, absolutely not for the sake of science. Most of them do not even
contain any significant delta over previous research.

An important activity of a researcher is to sort the interesting papers
from the garbage, since the selection process of even high-level
conferences is deeply broken.

Just read the SIGIR proceedings, where every paper beats the previous
baseline by 0.X% on datasets that do not represent the real problem; it's
just one example among many.

Also check this interesting analysis, where the authors compare best-paper
awards against the most-cited papers over a span of ten years:
[http://arnetminer.org/conferencebestpapers](http://arnetminer.org/conferencebestpapers)

There you can see that some conferences were able to identify lasting value
and others were not.

~~~
davidy123
So only pure breakthroughs are valued in science, everything else is
"garbage?" Interesting perspective you have there.

~~~
djulius
That's absolutely not what I wrote.

I said that the motivations behind academic publishing lead to tons of
papers that, while scientifically correct (at least at top-tier
conferences), bring absolutely nothing to the party.

