
At the risk of just mirroring points which have already been made:

> you understand that the links in your post are the exact worry people have when it comes to releasing code: people claiming that their non-software engineering grade code invalidates the results of their study.

It's profoundly unscientific to suggest that researchers should be given the choice to withhold details of their experiments for fear that those details won't withstand peer review. Exposing work to exactly that scrutiny is much of the point of scientific publication.

Researchers who are too ashamed of their code to submit it for publication should be denied the opportunity to publish. If that's the state of their code, their results aren't publishable. Unpublishable garbage in, unpublishable garbage out. Simple enough. Journals just shouldn't permit that kind of sloppiness. Nor should scientists be permitted to take steps that artificially make an experiment harder to reproduce. (Independently re-running code whose correctness is suspect obviously isn't as good as comparing against a fully independent reimplementation, but reproduction even in that weak sense still counts for something.)

If a mathematician tried to publish a theorem but refused to show the proof, they'd be laughed out of the room. Why should we hold software-based experiments to such a pitifully low standard by comparison?

It's not as if this is a minor problem. Software bugs really can result in incorrect figures being published. In C and C++ code in particular, a seemingly minor mistake can trigger undefined behaviour, at which point the output of the program is entirely unconstrained, with no assurance that it will resemble anything the programmer expects. This isn't just theoretical; bizarre behaviour really does occur on modern systems when undefined behaviour is present.
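
To make this concrete, here's a minimal sketch of the kind of bug in question. Everything in it is invented for illustration; the point is only that an off-by-one error, easy to miss in review, makes the printed figure meaningless in C:

    #include <stdio.h>

    int main(void) {
        double samples[4] = {1.0, 2.0, 3.0, 4.0};
        double sum = 0.0;
        /* Off-by-one: i <= 4 reads samples[4], one element past the end
           of the array. That read is undefined behaviour, so the program
           may print a plausible-looking mean, a garbage value, or crash,
           and the optimiser is entitled to assume it never happens. */
        for (int i = 0; i <= 4; i++) {
            sum += samples[i];
        }
        printf("mean = %f\n", sum / 4.0);
        return 0;
    }

A figure computed this way could look entirely plausible on one compiler and be wrong on another, with no warning either way unless someone runs the code under a tool like AddressSanitizer.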

A computer scientist once told me a story of some students he was supervising. The students had built some kind of physics simulation engine. They seemed pretty confident in its correctness, but in truth it hadn't been given any kind of proper testing; it merely looked about right to them. The supervisor had a suggestion: rotate the simulated world by 19 degrees about the Y axis, run the simulation again, and compare the results. They did so. Their program showed totally different results. Oh dear.
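
The supervisor's check is easy to sketch in code. Below is a minimal, hypothetical version in C: the students' actual engine isn't available, so simulate() is stubbed as one second of free fall, and the names and tolerance are my own. Gravity acts along the Y axis, so a rotation about Y shouldn't change the physics, and the two runs should agree:

    #include <math.h>
    #include <stdio.h>

    typedef struct { double x, y, z; } vec3;

    /* Rotate a point about the Y axis by deg degrees. */
    static vec3 rotate_y(vec3 p, double deg) {
        double r = deg * 3.14159265358979323846 / 180.0;
        vec3 q = { p.x * cos(r) + p.z * sin(r),
                   p.y,
                   -p.x * sin(r) + p.z * cos(r) };
        return q;
    }

    /* Stand-in for the students' engine: one second of free fall.
       A real engine would be far more complex, but the test is the same. */
    static vec3 simulate(vec3 p) {
        p.y -= 0.5 * 9.81;  /* s = g*t*t/2 with t = 1 */
        return p;
    }

    int main(void) {
        vec3 p0 = {1.0, 2.0, 3.0};

        /* Run once in the original frame, and once in a frame rotated
           by 19 degrees, mapping the result back for comparison. */
        vec3 a = simulate(p0);
        vec3 b = rotate_y(simulate(rotate_y(p0, 19.0)), -19.0);

        double err = fabs(a.x - b.x) + fabs(a.y - b.y) + fabs(a.z - b.z);
        puts(err < 1e-9 ? "consistent" : "INCONSISTENT: likely bug");
        return 0;
    }

A disagreement here doesn't say where the bug is, only that one exists, which is exactly what the students needed to hear.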

Needless to say, not all scientific code can so easily be shown to be incorrect. All the more reason to subject it to peer review.

> I'm an accelerator physicist and I wouldn't want my code to end up on acceleratorskeptics.com with people that don't understand the material making low effort critiques of minor technical points.

Why would you care? Science is about advancing the frontier of knowledge, not about avoiding invalid criticism from online communities of unqualified fools.

I sincerely hope vaccine researchers don't make publication decisions based on this sort of fear.



