Hacker News

Sometimes in CS, if your research is embedded in a huge ecosystem, it can become quite expensive to reproduce results. I mean proper reproduction, not just rerunning the benchmarks. If you are dealing with complicated stuff, the reproducer might simply lack the technical means to do the same thing.

Maybe, maybe not. Do you have something specific in mind here?

I hope researchers and scientists don't consider others incapable and withhold information on how to reproduce their work because of it.

Even if the experiment is crazy expensive and complex right now, it might become much more tractable in 10 years, or someone might build on your work and invent a simpler method to show the same thing.

I am thinking of huge endeavors like building an ASIC, or large complex systems like virtual machines. A comparable system for repetition is not always available and may have to be built from scratch. Such rebuilds cost huge sums.

Of course nobody considers others incapable. It's just that there are not many people experienced enough to build certain systems in a reasonable amount of time.

Just being able to rerun the benchmarks (or other data analysis, for non-CS papers) would be an improvement on the current state of affairs, where people often publish neither code nor data.

Agreed. Some CS conferences verify artifacts these days, which is a good first step.

