Far too often, researchers never release code because it's never "polished" enough.
So I began publishing my code on GitHub from the moment I started the project, e.g. https://github.com/turian/neural-language-model
However, some of my more conservative colleagues were averse to this approach. I constantly debated with my office-mate, who was of the opinion that there are many reasons not to present half-finished work.
So there is also a large cultural barrier to more open science. If there were publishing pressure on researchers to open their code, then it might effect a cultural shift.
So, I don't blame them for being embarrassed to release their code. However, to some degree it's all false modesty since all of their colleagues are just as bad.
Add to this the fact that no one in academia understands version control systems, and it's a hard hill to climb.
The original transistor, produced by Bardeen, Brattain, and Shockley's group at Bell Labs, "worked" only in a nominal sense. It didn't do anything other than prove a concept. Turning it into something usable in real equipment took years of effort by other scientists and engineers. Thank god they published the details rather than saying "we made it and it worked, here are the results" because they were afraid of releasing something that was "a pile of rubbish."
A bio researcher I know is afraid of releasing any code because of the way it might tarnish their reputation.
They're not an expert programmer and are afraid of being perceived as less competent in their field because of the low quality of their code.
Researchers are not judged by the quality of their code -- they're judged on ideas (and more specifically, papers). And to be fair... have you ever written a quick, hacked-together script to prove a point and then moved on? That's exactly what researchers are doing. If anyone wants "high-quality" code, that will probably only happen as the research systems are hardened and/or commercialized.
I should say, I'm still a big proponent of open-sourcing it all anyway -- perhaps just a few months later to maintain competitive advantages (or file for IP protections). All my dissertation code, hardware designs, etc. are online and documented for posterity. And I find that some other researchers genuinely find it useful (which kinda scares me). But I try to be a good citizen and support 'em anyway.
Probably not too interesting, but a start. It looks like the old robot power supply boards and force-torque sensor boards reside on my old lab's "internal" wiki. That's no good! I'll have to ask 'em about moving the files over to the public one. The latest designs (FPGA software defined radio) are being tested, so they've got a while before they'll be released. ;-)
The programs are not complicated. They are usually just some implementation of an equation or some other method for transforming input into output. Researchers don't have hundreds of hours to invest in learning the nuances of the const keyword in C++ or whatever, so they hack it. It works.
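For illustration, here's a hypothetical sketch of the kind of script being described (the equation, file name, and constants are all made up) -- a few hacked-together lines that implement one equation and nothing more:

    # One equation, hard-coded constants, no tests, no error handling.
    K, VMAX = 0.5, 2.0  # eyeballed from a previous run

    def rate(s):
        # Michaelis-Menten: v = Vmax * [S] / (Km + [S])
        return VMAX * s / (K + s)

    with open("data.txt") as f:  # assumes the file sits next to the script
        for line in f:
            print(line.strip(), rate(float(line)))  # crashes on a blank line

It did its one job, produced the numbers for the figure, and was never run again. That's the economics of it.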
How would they know? Because it produces output that looks like what they're expecting? That might work...until it doesn't. :-)
When publishing on wet-lab data, you only publish the assays that worked (i.e., you didn't contaminate the samples, etc.). The wet-lab equivalent of a source repo would be publishing a video recording of your lab.
I think it is good practice, and it makes the paper writing much easier since you can backtrack the process. The only experience I have had contrary to yours is that all my colleagues are somewhat envious of the idea, but they just can't do it themselves because of stage fright, lack of time, etc.
Probably time for international cooperation on defining open-code policies: http://blog.jgc.org/2012/04/more-support-for-open-software-i...
Both sharing and not sharing seem to have pros and cons. For example, if buggy code is shared, the odds might be higher that the bugs are never found, because nobody will bother trying to write the code again.
I would be very interested if the author had given even a single instance where the lack of code that merely implements the experiment has actually impeded progress on the science in a paper. Even if this were the case, wouldn't that simply imply that more algorithmic detail is required?
Of course, for all of the above, I am referring to non-computer science. There may be special circumstances in computer science where the code itself is the published algorithm or an intended description of the underlying science.
For a specific example, consider theorem provers, where the proof is usually too large for a paper publication. The Archive of Formal Proofs (AFP) is a repository for Isabelle proofs, which my colleagues use: they submit a proof to the AFP, then write a paper about the results in which they cite the AFP entry.
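For a sense of what such an artifact looks like, here's a toy, hypothetical example in Lean (the AFP itself hosts Isabelle proofs, but the flavor is similar); real archived developments run to thousands of lines, which is exactly why they can't be inlined in a paper:

    -- A toy machine-checked lemma, standing in for the much larger
    -- formal developments that get archived and cited.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b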
Your reasoning is a bit odd. It's a good thing that nobody will bother trying to write the code again. The world has enough people who can write code. We need more people who can read and dissect code, refactor it, and add tests where applicable.
I released some code that only compiled on Visual Studio 6, with a specific version of a fairly expensive library. I got several emails asking for a Mac or Linux version, rather than an update for more modern compilers.
Personally, I would have preferred that people just reimplement the code from the paper. I suspect it would have been less work for them.
Why? Because in many fields there is a negative incentive to provide code and data. It not only takes time, but it opens you up to criticism by people who wouldn't be willing to make their own code/data available. Perhaps something like this would raise the bar and encourage more people to share their code/data. Just a thought.
The more interesting question is how we can check a paper for completeness. I fear the answer is to try to implement it, which is too costly to do during peer review.
In general, though, the system doesn't encourage you to follow good practices at all. Having said that, I've definitely seen a change over the last few years towards more awareness.
Seeing this made the author lose credibility with me on the subject.
The paper gets its own website: http://ged.msu.edu/papers/2012-diginorm/, which includes the arXiv preprint, data and code repositories, and even an AMI with everything loaded -- basically everything you need to replicate the work in the paper.