
Browse State-of-the-Art Machine Learning Papers with Code - ArtWomb
https://paperswithcode.com/sota
======
pizza
Anybody know any good chatrooms for machine learning? Whether that's IRC,
Slack, Discord, whatever. Preferably general chatrooms, as opposed to
framework-specific chatrooms, but I'll take anything I can get.

~~~
rstoj
Hello, one of the creators here! We just launched a Slack for paperswithcode;
feel free to join:

[https://join.slack.com/t/paperswithcode/shared_invite/enQtNT...](https://join.slack.com/t/paperswithcode/shared_invite/enQtNTI3NDE2NjQ0ODM0LTdmNzNjODkwOGY0MjU4YzgzNDZhNGM1YWIzYmZhNzk5MTFkYWU4YWNjN2JjZDhlNjJiYjFkYjYwNjkzYzdiZDk)

------
fohara
This seems very similar to GitXiv:
[http://www.gitxiv.com](http://www.gitxiv.com). Maybe there is an opportunity
to join forces.

~~~
painful
GitXiv has been maintained very unprofessionally. Its RSS feed often has junk
posts that read as if a child wrote them. It was once good, but that time is
long gone. I still subscribe to the feed, but there isn't much to find in it.

------
pchal
Are the datasets publicly or easily available as well?

It would also be great to include papers with SOTA results on “tabular”
multivariate datasets, the kind that arise in numerous applications, e.g.
EHR/MHR, advertising, finance, etc. In other words, something like the UCI ML
Repository datasets (which are mostly “small”, but it would still be great to
know the SOTA models for those), as well as much larger versions of such
datasets. I often see papers applying ML to tabular healthcare data, but the
datasets themselves are often not available.

------
ingenieroariel
This is very good - a next step is to hook in something like CircleCI to make
sure the code actually compiles and does something. Another idea is to add a
flag praising people who do not include compiled artifacts (like MATLAB MEX
files).
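
A rough sketch of what such an automated "does it even run" check could look
like (purely hypothetical; the repo URL, package name, and the assumption that
pip can install the project are all placeholders, not anything Papers with
Code actually does):

    # Hypothetical smoke test: clone a paper's linked repo, install it, and
    # check that it at least imports. URL and package name are placeholders.
    import subprocess
    import sys
    import tempfile

    def smoke_test(repo_url, import_name):
        """Clone repo_url, install it, and try importing import_name."""
        with tempfile.TemporaryDirectory() as workdir:
            subprocess.run(["git", "clone", "--depth", "1", repo_url, workdir],
                           check=True)
            subprocess.run([sys.executable, "-m", "pip", "install", "-e",
                            workdir], check=True)
            result = subprocess.run([sys.executable, "-c",
                                     "import " + import_name])
            return result.returncode == 0

    if __name__ == "__main__":
        ok = smoke_test("https://github.com/example/paper-code", "paper_code")
        print("smoke test passed" if ok else "smoke test failed")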

~~~
rstoj
Absolutely, I think that would be amazing!

~~~
painful
Please be careful to not take on too much. It's not the job of this service to
compile and test code.

------
foxes
This is a good idea.

I think it should be mandatory to publish your code, dataset (or at least a
sample of the verification data), and the trained model weights if you are
publishing a neural network "application" paper.

It's very simple to put these on GitHub etc. and link them in the paper; it's
disappointing when that doesn't happen.

This problem doesn't just exist for neural networks. There are lots of papers
published presenting some algorithm, but they never provide the source code so
you can quickly check it or experiment with it. Certainly the technology now
exists to allow things to be more open (i.e. GitHub).

------
sstanie
This is awesome! I was just about to ask if there were any resources pointing
toward easy-to-recreate machine learning papers.

Is there any way to add some kind of flag/sorting mechanism on this for code
that requires massive GPU/computing power to reproduce?

------
nitelord
This is really great. Thank you!

------
sharemywin
I see the datasets listed but no links. Are there any plans to host the
datasets?

------
syntaxing
Linking these to colab notebooks would be amazing!

------
mikkelenzo
This is fantastic! Thank you

------
ddetone
Does anyone know how the categories are associated with the projects?

~~~
rstoj
It's done automatically by looking for category names (and synonyms) in the
abstracts. This mostly works, although it's not 100% accurate. However, if you
click on a paper, you can add/remove categories in the "Tasks" section.
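
For anyone curious, the matching described above amounts to something like the
sketch below (the task names and synonym table are invented for illustration;
the site's actual task list and synonym mapping aren't shown in this thread):

    # Illustrative keyword/synonym matching over a paper abstract. The task
    # names and synonyms here are made-up examples, not the real mapping.
    TASK_SYNONYMS = {
        "image classification": ["image classification", "image recognition"],
        "machine translation": ["machine translation", "neural translation"],
        "object detection": ["object detection", "detecting objects"],
    }

    def tag_tasks(abstract):
        """Return every task whose name or a synonym appears in the abstract."""
        text = abstract.lower()
        return [task for task, phrases in TASK_SYNONYMS.items()
                if any(phrase in text for phrase in phrases)]

    print(tag_tasks("We propose a single-stage model for object detection."))
    # -> ['object detection']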

------
icodemuch
Awesome! Thanks for sharing

------
ldulcic
wow, thanks!

------
master_yoda_1
thank you

