Browse State-of-the-Art Machine Learning Papers with Code (paperswithcode.com)
411 points by ArtWomb on Feb 1, 2019 | 23 comments



Anybody know any good chatrooms for machine learning? Whether that's IRC, Slack, Discord, whatever. Preferably general chatrooms, as opposed to framework-specific chatrooms, but I'll take anything I can get.


Hello, one of the creators here! We just launched a Slack for paperswithcode, feel free to join:

https://join.slack.com/t/paperswithcode/shared_invite/enQtNT...


I think this is one of the few areas where Twitter might be good.


Would also like to get involved in one. My ML community right now is entirely those who work in my group.


A colleague just forwarded me a good resource on trained models, with a discussion forum, here: Modeldepot.io

Previously a ShowHN: https://news.ycombinator.com/item?id=18496347


##machinelearning on Freenode IRC: https://freenode-machinelearning.github.io/


This seems very similar to GitXiv: http://www.gitxiv.com . Maybe there is an opportunity to join forces.


GitXiv has been maintained very unprofessionally. Its RSS feed often carries junk posts that look like a child wrote them. It was once good, but that time is long gone. I still subscribe to the feed, but there isn't much to find in it.


Are the datasets publicly or easily available as well?

Also, it would be great to include papers with SOTA results on “tabular” multivariate datasets, the kind that arise in numerous applications, e.g. EHR/MHR, advertising, finance, etc. In other words, something like the UCI ML Repository datasets (which are mostly “small”, but it would still be great to know the SOTA models for them), plus much larger versions of such datasets. I often see papers applying ML to tabular healthcare datasets, but the datasets themselves are often not available.


This is very good. A next step would be to hook in something like CircleCI to make sure the code actually compiles and does something. Another idea is to add a flag to praise people who don't include compiled artifacts (like MATLAB MEX files).
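
For example, a CI job could run a tiny smoke test along these lines (a Python sketch; the train.py entry point and its flags are hypothetical, just to illustrate the idea):

  import subprocess
  import sys

  def main():
      # Run the repo's (hypothetical) entry point on a tiny workload and
      # fail loudly if it doesn't complete; that's all a "does it run
      # at all" check needs to be.
      result = subprocess.run(
          [sys.executable, "train.py", "--epochs", "1"],
          capture_output=True,
          timeout=600,
      )
      if result.returncode != 0:
          sys.stderr.write(result.stderr.decode())
          sys.exit("smoke test failed: the code does not run")
      print("smoke test passed")

  if __name__ == "__main__":
      main()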


Absolutely, I think that would be amazing!


Please be careful to not take on too much. It's not the job of this service to compile and test code.


This is a good idea.

I think it should be mandatory to publish your code, dataset (or at least a sample of the verification data), and then the trained model weights if you are publishing a neural network "application" paper.

It's very simple to put it on GitHub etc. and link it in the paper; it's disappointing when that doesn't happen.

This problem doesn't just exist for neural networks. Lots of published papers present some algorithm but never provide source code you can quickly check or experiment with. The technology to make things more open (i.e., GitHub) certainly exists now.


This is awesome! I was just about to ask if there were any resources pointing toward easy-to-recreate machine learning papers.

Is there any way to add some kind of flag/sorting mechanism for code that requires massive GPU/computing power to reproduce?


This is really great. Thank you!


I see the datasets listed but no links. Are there any plans to host the datasets?


Linking these to colab notebooks would be amazing!


This is fantastic! Thank you


Does anyone know how the categories are associated with the projects?


It's done automatically by looking for category names (and their synonyms) in abstracts. This mostly works, although it's not 100% accurate. If you click on a paper, though, you can add or remove categories in the "Tasks" section.
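
For illustration, the matching could look something like this sketch (not the site's actual code; the synonym table is invented):

  import re

  # Hypothetical synonym table; the real mapping isn't public.
  TASK_SYNONYMS = {
      "Machine Translation": ["machine translation", "neural translation"],
      "Object Detection": ["object detection"],
      "Image Classification": ["image classification", "image recognition"],
  }

  def tag_tasks(abstract):
      """Return tasks whose name or a synonym appears in the abstract."""
      text = abstract.lower()
      return [
          task
          for task, synonyms in TASK_SYNONYMS.items()
          if any(re.search(r"\b" + re.escape(s) + r"\b", text)
                 for s in synonyms)
      ]

  print(tag_tasks("We propose an attention model for neural machine translation."))
  # -> ['Machine Translation']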


Awesome! Thanks for sharing


wow, thanks!


thank you



