Anyways, I recently updated this collection over the new year (with a bit of help from two students). It's not meant to be a comprehensive list of all best paper awards, but a fairly representative one across the main conferences in computer science. I guess I think of it as the Oscars (Academy Awards) of Computer Science Papers. It originated because award announcements and conference websites disappear quickly (each year a different organizer manages the website), so this information is otherwise lost forever. And I wanted a way to look back and see what people thought was the best paper in a given year, to see if those papers indeed made an impact.
Those who are more quantitative may be interested in this aggregated list of which institutions produce the most best paper awards: https://jeffhuang.com/best_paper_awards/institutions.html The interesting thing that came out of it is that the number of best paper awards is highly correlated with the US News ranking of computer science departments (which is based purely on subjective surveys of department chairs and graduate directors): https://drafty.cs.brown.edu/csopenrankings/
Stuff before 2010 in natural language processing is ridiculous. Dynamic programming algorithms, beam search, dependency parsing (grammar) algorithms (going from O(n^3) to O(n) with cost-sensitive methods), a huge focus on lexical analysis, part-of-speech tagging, and graphical models (maximum entropy, conditional random fields, etc.); a small sketch of that style of algorithm follows below.
Today all of these algorithms are completely irrelevant. No one needs part-of-speech tagging anymore, or dependency (grammar) trees, or cost-sensitive reinforcement learning reductions.
I remember being so inspired by all of that work, and I learned a lot from it, but it's quite funny how the Lindy effect plays out here.
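To make the pre-2010 flavor concrete, here is a minimal sketch of the kind of dynamic-programming decoding mentioned above: a Viterbi decoder over local and transition scores, the workhorse of HMM/CRF-style part-of-speech tagging. The tag count, scores, and sequence length are made up purely for illustration and are not taken from any of the awarded papers.

```python
# A tiny Viterbi decoder: exact dynamic-programming search for the
# highest-scoring tag sequence, O(T * K^2) for T tokens and K tags.
import numpy as np

def viterbi(emissions, transitions):
    """emissions: (T, K) per-token tag scores; transitions: (K, K) scores
    for moving from tag i to tag j. Returns the best tag sequence."""
    T, K = emissions.shape
    score = emissions[0].copy()          # best score of a path ending in each tag
    back = np.zeros((T, K), dtype=int)   # backpointers for recovering the path
    for t in range(1, T):
        # cand[i, j] = best path ending in tag i at t-1, then tag j at t
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # Trace back from the best final tag.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy usage: 4 tokens, 3 tags, random scores.
rng = np.random.default_rng(0)
print(viterbi(rng.normal(size=(4, 3)), rng.normal(size=(3, 3))))
```

Beam search, which also shows up constantly in that era, is the approximate cousin of this: instead of the exact max over all previous tags, it keeps only the top-k partial hypotheses at each step.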
We are starting to learn how to mix the two, e.g. HMM + DL = deep Markov model. It has the advantages of both: structure and a large number of parameters.
Some SOTA NLP models follow this approach.
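For context, here is a minimal sketch of what "HMM + DL" can look like, in the spirit of a deep Markov model: the first-order latent Markov chain is kept, but the transition and emission distributions are parameterized by small neural networks. This assumes PyTorch; the dimensions and layers are arbitrary, and a real model would also need an inference network and an ELBO-style training objective, both omitted here.

```python
import torch
import torch.nn as nn

class DeepMarkovModel(nn.Module):
    """Sketch: HMM-style latent chain with neural transition/emission nets."""
    def __init__(self, z_dim=16, x_dim=32, hidden=64):
        super().__init__()
        self.z_dim = z_dim
        # Transition p(z_t | z_{t-1}): a network outputs mean and log-std.
        self.trans = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 2 * z_dim))
        # Emission p(x_t | z_t): another network maps the latent state to observations.
        self.emit = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, x_dim))

    def step(self, z_prev):
        mu, log_std = self.trans(z_prev).chunk(2, dim=-1)
        z_t = mu + log_std.exp() * torch.randn_like(mu)  # sample the next latent state
        return z_t, self.emit(z_t)                       # latent state, emission mean

    def generate(self, T, batch=1):
        z = torch.zeros(batch, self.z_dim)               # arbitrary initial state
        xs = []
        for _ in range(T):
            z, x = self.step(z)
            xs.append(x)
        return torch.stack(xs, dim=1)                    # (batch, T, x_dim)

# Toy usage: sample a length-10 sequence of 32-dimensional observations.
print(DeepMarkovModel().generate(T=10).shape)  # torch.Size([1, 10, 32])
```

The structured part is the first-order latent chain; the "large number of parameters" part is the two networks, which is roughly the trade-off the parent comment is pointing at.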
I commend to you the classic paper "Lowestcase and Uppestcase letters: Advances in Derp Learning", in the 2021 proceedings.
One thing I've never quite figured out, though, is that other fields in CS tend to splinter into new conferences when the main one gets too big (or perhaps it's just systems conferences that do this). In contrast, CHI seems to just get bigger and broader every year.
A common CHI paper would read "we studied a set of users X and Y and found that they tend to run into problems Z. We designed a solution W to address Z, and suggest some other work that could be done to address Z better."
The conference thus contains many papers proposing radically different ways of seeing the issues that users face in computing along with radically different ways of addressing them.
If these new-found problem definitions gather enough attention for a sustained period of time, then that warrants a new conference of researchers ready to address them. But I don't think this happens often enough for CHI to break into sub-conferences.
(Also, hi Jason!)
Coupling Simulation and Hardware for Interactive Circuit Debugging
Designing Menstrual Technologies with Adolescents
Increasing Electrical Muscle Stimulation's Dexterity by means of Back of the Hand Actuation
No parallelism? :(