When I submitted a paper to AAAI, there were dire warnings that the paper would be rejected if I included links to any supplemental materials (such as code or data), because this would compromise blind review.
The conference software they were using had a sketchy form for uploading supplementary data, but it appeared that they expected it to be used for small data tables or something. It certainly wasn't going to be an option to upload 12 GB of data into their conference software; they mentioned nothing about code, particularly how to specify its dependencies and the computing environment it needed to run in; and it also certainly wasn't going to appear in any form that was convenient to review if I did.
How could you submit code to blind review anyway? Do you make alternate versions of all the dependencies that don't credit any authors that overlap with the authors of the paper?
In short, because of blind review, I was forbidden from doing anything that would make my paper reproducible at review time.
You are conflating reproducibility and repeatability. Repeatability is the ability to repeat your exact experiment under the same conditions, whereas reproducibility refers to other people being able to reproduce your experiment under similar conditions and obtain congruent results.
Of course, both are needed for great science. Nonetheless, your paper alone should provide a good enough description of the conditions and methods you used for the work to be reproducible.
In fact, it can easily be argued that the ability to just run your code instead of re-implementing it from your description is actually detrimental. To see how, consider that a bug in your code, re-used by other researchers, can easily lead to multiple derivative works reporting entirely wrong results. In contrast, if those other researchers re-implemented your method, there would be a much lower probability of them making the same mistake you did; their incongruent results would raise alarms and probably lead to the discovery of your bug. Although re-implementing your algorithms is significantly more work, the overall quality of our research would benefit from doing so...
8 pages now? It used to be 6. Anyway, 8 pages can fit a lot of explanation/discussion. Also, keep in mind that AAAI is a conference, where you are supposed to present promising research that you are still ironing out. Finished work is supposed to be presented in journals, where space constraints are typically more relaxed.
On a personal note: I'm sick of smart-ass reviewers, unprofessional researchers, over-selling of results (to put it mildly), and so on. I think the whole system is so corrupted that I just quit research in despair. Unfortunately, I don't see how blogs and/or "just publish the code" would magically solve all those problems.
Anyway, publishing your code is certainly a good thing to do, so I applaud and encourage you to keep doing it!
> Also, keep in mind that AAAI is a conference, where you are supposed to present promising research that you are still ironing out. Finished work is supposed to be presented in journals, where space constraints are typically more relaxed.
This is a dated view of AI research.
Promising research appears in workshops, blogs, and/or arXiv. Finished research appears in conferences. Nobody's sure what journals are for.
Talk to your advisor about that. Everybody is pretty sure that journals are there to support your academic CV: good luck getting postdoc positions without Q1 publications under your belt.
Of course, if you are at one of the Ivy League universities this may be different. Otherwise... yeah, you can feel research moves too fast for journals, but curricular evaluation practices move even slower.
> Talk to your advisor about that. Everybody is pretty sure that journals are there to support your academic CV: good luck getting postdoc positions without Q1 publications under your belt.
I agree with GP. The view that conference publications are less important than journal papers is accurate or not depending on your field.
In my field, it's as you describe. Conference papers are not polished, and journal publications are what matter for your academic career.
For many of my peers in some disciplines in CS, it was the opposite. Getting into a highly regarded conference was much more valued than publishing in a journal.
Then I found this was not limited to CS.
It really just depends on your discipline's culture.