If any of it ever becomes commercially released or whatever, there'll need to be a complete rewrite that makes it usable and maintainable by people other than yourself. But most of the code will never get to that point because most of what you've done up until about a week ago is wrong and worthless, and the current, correct-until-next-week iteration is stuck together with duct tape.
Speed only matters on the infrequent hot paths, which is why Python is popular. The rule of thumb is that nobody cares about speed / resource consumption until it needs to run on a cluster, but then you care a lot, because cluster time is metered and simulations can get huge. Fortran is still fairly popular because many math libraries are written in it, and porting them would require huge effort from a very small group of very busy people.
Most of the coders are not software engineers and don't know / don't follow best practices; on the other hand, the popular best practices are not designed for their use-case and frequently don't fit. Versioning (of the I-don't-know-which-of-the-fifty-copies-on-my-laptop-is-the-right-one type) is a big issue. Data loss happens. Git/GitHub/etc. have a steep learning curve, but so do all the various workflow systems designed for research use.
My own experience with reusing code by making a framework in academia: it immediately prompted me to think of interesting cases not possible within it...
In production software, this is flipped. Every feature claim needs to have an associated test, as it's a contract with your user. But when it comes to performance, everyone just waves their hands.
I'm being a little glib. But production software has to work. You'll spend far more time dealing with all of the "less interesting" details and edge cases than with research software. As ams6110 points out, this means more focus on testing, maintenance and good design. But I do want to emphasize testing - sometimes you'll spend more time testing something than actually implementing it. There are also often many more residual effects from dependencies elsewhere in the ecosystem you're working in. That's not typical in academic software.
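To make the tests-as-a-contract point concrete, here is a minimal sketch (my own toy example, not from any codebase discussed here), using plain assert for brevity; a real project would use a framework such as GoogleTest or Catch2:

    // Feature claim: clamp(v, lo, hi) returns v limited to [lo, hi].
    // Each documented behavior below gets its own check, including the
    // "less interesting" edge cases that dominate production work.
    #include <cassert>

    int clamp(int v, int lo, int hi) {
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

    int main() {
        assert(clamp(5, 0, 10) == 5);   // value inside the range
        assert(clamp(-3, 0, 10) == 0);  // below the lower bound
        assert(clamp(42, 0, 10) == 10); // above the upper bound
        assert(clamp(0, 0, 0) == 0);    // degenerate range
        return 0;
    }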
The code that comes to mind had the following properties: it was over 20 years old; written in C and badly converted to C++ somewhere along the way (the stuff-all-the-globals-into-a-class approach); and riddled with a combinatorial explosion of #define and #ifdef statements (covering all the experiments in the original paper).
In the paper, it is clear that one of the experiments wins, and why. So...
Step 1: remove all dead code.
Step 2: observe that the algorithm needs no dynamic memory allocation; remove all but one call to malloc, calloc, realloc, and free.
Step 3: the use of float can be replaced by correctly scaled 64-bit unsigned integers, with no loss of precision (see the sketch after these steps).
Step 4: rewrite entirely in modern C++. This has two benefits: a) I get to use the <algorithm> library (judiciously; this simplifies the code enormously), and b) the code can send clearer messages to the compiler than the mid-90s liberal sprinkling of the 'register' keyword.
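Here is a sketch of what steps 3 and 4 might look like; the names (SCALE, to_fixed, largest) and the fixed-point factor are my assumptions for illustration, not the original code's:

    // Step 3 in miniature: convert to fixed point once, at the boundary;
    // all interior arithmetic then stays in 64-bit unsigned integers.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    constexpr std::uint64_t SCALE = 1'000'000; // assumed: six decimal digits of precision

    std::uint64_t to_fixed(double x) {
        return static_cast<std::uint64_t>(x * SCALE + 0.5);
    }

    // Step 4 in miniature: instead of a mid-90s indexed loop with
    // 'register' hints, let <algorithm> express the intent directly.
    std::uint64_t largest(const std::vector<std::uint64_t>& values) {
        return *std::max_element(values.begin(), values.end());
    }

    int main() {
        std::vector<std::uint64_t> v{to_fixed(0.25), to_fixed(0.5), to_fixed(0.125)};
        return largest(v) == to_fixed(0.5) ? 0 : 1;
    }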
The net result is no asymptotic improvement whatsoever — arguably a slight improvement for very large N as heap performance starts to interfere, but nothing worth the effort.
However, the code now has tests (step 0), is clean and maintainable, is 10% of the size, and is 5-30x faster (depending on the shape of the data).
Ironically, an academic might get to spend a higher percentage of their time on pure coding than a professional coder does; the professional has other concerns. For the academic, maintainable code is not part of the desired outcome. It's consumable and expendable, not durable, so any time spent making it any better than "just barely good enough" is wasted. Why build a tank when all you need is a bicycle?
At least that's the expectation. Some academic code lives on far longer than its authors intended, and some non-academic code vanishes pretty darn quickly. But in general, both the intent and the expectation are that non-academic code will live longer.
> easier to maintain code is king.

Unless you are writing something extremely time-critical, do not try to be clever. A little slower is okay (and yes, I am in the performance consultancy business) if it significantly decreases the maintenance burden. Clever hacks belong to toy projects and blog posts. The next person who maintains it will come to the code cold -- even if it's yourself. That clever hack is now a nightmare to untangle. In short: always code under the assumption that you will need to understand this when the emergency phone kicks you out of bed after two hours of sleep in the middle of the night. The CTO of Cloudflare was woken to the news of Cloudbleed at 1:26am.
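To illustrate with my own example (not the parent's): the classic XOR-swap trick versus the boring version you can still read at 3 a.m.:

    #include <utility>

    void swap_clever(unsigned& a, unsigned& b) {
        // Saves a temporary... and silently zeroes the value if a and b
        // alias the same object (a ^ a == 0). Good luck spotting that
        // during an outage.
        a ^= b;
        b ^= a;
        a ^= b;
    }

    void swap_boring(unsigned& a, unsigned& b) {
        std::swap(a, b); // obvious, correct, and at least as fast
    }

    int main() {
        unsigned a = 1, b = 2;
        swap_boring(a, b);
        return (a == 2 && b == 1) ? 0 : 1;
    }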
Now I correct myself: clever hacks belong to academia, toy projects and blog posts.
I was moved to Operations for two years after 10 years of Development. When I went back to Development, I started coding as if the 20+ years the code will live after the initial deployment are more important than the 6 months spent creating it. And they are more important for the company, because those years pay everybody's salaries and the shareholders' dividends. The first 6 months? Not so much.
Academia never has that problem. Academics also almost never have to deploy code to production.
This is the biggest difference between academic and professional programming in a single pithy statement, from a paper that Knuth wrote.
Try to make your company offer them money to cooperate. They might be suddenly very interested in your questions.
A few years back, some researchers (professors and graduate students) claimed they were interested in more testing and in possibly productionalizing some of their work (Betrfs, specifically). In response, I spent a lot of time on the kvm-xfstests and gce-xfstests testing infrastructure, cleaning it up, making it work in a turn-key fashion, and providing lots of documentation.
Not a single researcher has used this code, despite the fact that I made it so easy that even a professor could use it. :-)
The problem is that trying to test and productionalize research code takes time away from the primary output of Academia, which is (a) graduating Ph.D. students, and (b) writing more papers, lest the professors perish. (Especially for those professors who have not yet received tenure.) So while academics might claim that they are interested in testing and trying to get their fruits of the research into production code, the reality is that the Darwinian nature of life in academia very much militates against this aspiration becoming a reality.
It turns out that writing a new file system really isn't that hard. What takes a long time is taking the file system, testing it, finding all of the edge cases, optimizing it, making it scale to 32+ CPUs, and doing the other such tasks that turn it into a production-ready system. How long it has taken btrfs to become stable is a good example of that fact. Sun worked on ZFS for seven years before they started talking about it externally, and then it was probably another 3 years before system administrators started trusting it with their production systems.
Professional coders are paid to code.
Academia isn't preparing developers for this reality. Many will try to fake it or hide behind imposter syndrome, which is fine if everybody in the company is an imposter; otherwise it is plainly obvious you are incompetent.
If you are talking about computer science academics, of course, that's a horse of a different color. In that case, the code is the topic, so I would guess that they're providing it! On the other hand, the majority of such research is probably solving niche problems and special cases, so it may not be very usable in your professional coding.
In contrast, industry doesn't let you choose the problem: you need to solve whatever problem the client has. This means generalising a lot further and having a less optimal solution that is more robust to input error or poorly calibrated measurements. Even if it does fail, you should be able to identify why and explain to the user what they did wrong.
In academia this feedback goes mostly to the person who wrote the software, so a cryptic error message including some algorithmic details might be enough to debug the inputs.
This informs my design choices quite a bit.
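A small sketch of that contrast, with a hypothetical parse_rate() function (the name and the [0, 1] constraint are my inventions for illustration): the industry version names the offending input and states the constraint, where the academic version might settle for a cryptic code:

    #include <iostream>
    #include <stdexcept>
    #include <string>

    double parse_rate(const std::string& field) {
        double rate;
        try {
            rate = std::stod(field);
        } catch (const std::exception&) {
            // Industry style: name the input, state the constraint.
            throw std::invalid_argument(
                "rate '" + field + "' is not a number; expected a value in [0, 1]");
        }
        if (rate < 0.0 || rate > 1.0) {
            // Academic style might have been: throw std::runtime_error("E_BADRATE");
            throw std::invalid_argument(
                "rate " + field + " is out of range; expected a value in [0, 1]");
        }
        return rate;
    }

    int main() {
        try {
            parse_rate("1.5"); // out of range on purpose
        } catch (const std::invalid_argument& e) {
            std::cerr << e.what() << '\n'; // actionable message for the user
        }
        return 0;
    }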
On a more serious note: in addition to what others have already mentioned about quality, performance and so on, I'd like to add that in a professional career you will most likely work in a (larger) team. That means you will run into code conflicts, where code is reused for different purposes and you cannot simply change it. You also have to think about readability and documentation, as your colleagues have to be able to understand the code without losing too much time or needing you.
You will also always have to work with legacy code. Most likely code you want to change but can't, given the timelines.
You will have to sync your design with many others. You might have to convince them, or discuss issues with conflicting requirements or deadlines. There will be times you can't finish your entire design and have to plan a staged introduction or, even harder, change it so it can work with only 50% of the design.
Also, your code has to run for many years. You can't simply take an experimental third-party package maintained by a single person; too risky. You have to think about hardware expiring or no longer being supported (especially with GPUs).
You have to think about licenses, too. Academic use is usually free; with professional use you have to take a close look.
Also, the focus on building software in teams seems to lead to architectures that need teams (vs. suites of manageable-size, "do one thing well" tools).
Slightly different take on this: http://yosefk.com/blog/why-bad-scientific-code-beats-code-fo...
Professional code is often whatever works.
This is fairly common with many academic vs professional differences, btw
Copy&pasting my response there:
Why is code coming out of research labs/universities so bad?
1. DON'T SEE WHY CLEAR CODE MATTERS
Academic projects are typically one-offs, not grounded in a wider context or value chain. Even if the researcher would like to build something long-term useful and robust, they don't have the requisite domain knowledge to go that deep. The problems are more isolated, and there's little feedback from other people using your output.
2. DON'T WANT TO WRITE CLEAR CODE
Different incentives between academic research (publication count, citation count...) and industry (code maintainability, modularity, robustness, handling corner cases, performance...). Sometimes they are direct opposites (fear of being scooped if the research is too clear and accessible).
3. DON'T KNOW HOW TO WRITE CLEAR CODE
Lack of programming experience. Choosing the right abstraction boundaries and expressing them clearly and succinctly in code is HARD. Code invariants, dependencies, comments, naming things properly...
But it's a skill like any other. Many professional researchers have never participated in an industrial project, so they don't know the tools or how to share and collaborate (git, SSH, code dissemination...), and they haven't built that muscle (a toy before/after follows below).
The GOOD NEWS is, contrary to popular opinion, it doesn't cost any more time to write good code than bad code (even for a one-off code base). It's just a matter of discipline and experience, and choosing your battles.
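As that toy before/after (my own example, not from any real codebase): the same computation with research-style names versus names that carry the meaning:

    #include <vector>

    // Before: what are x, n, t? You have to reverse-engineer the paper.
    double f(const std::vector<double>& x, double t) {
        double n = 0;
        for (double v : x) if (v > t) n += v;
        return n;
    }

    // After: the signature alone documents the behavior.
    double sum_of_readings_above_threshold(const std::vector<double>& readings,
                                           double threshold) {
        double total = 0.0;
        for (double reading : readings)
            if (reading > threshold) total += reading;
        return total;
    }

    int main() {
        std::vector<double> readings{1.0, 5.0, 3.0};
        // Both versions compute the same thing; only the clarity differs.
        return f(readings, 2.0) == sum_of_readings_above_threshold(readings, 2.0) ? 0 : 1;
    }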
Who is that clown? And why is the shit-post of a 4-day-old reddit account being discussed all over the interwebs like gospel?
That person very likely has regrets not finishing high school and is venting frustration in the form of misplaced anger.
It would befit someone of your intellect to try to figure out why the post was so popular, instead of an arrogant dismissal.
But let's discuss that post in case you couldn't assess the level of ignorance of that shit-bag:
- He is a total hypocrite, as pointed out multiple times on reddit as well as HN, for pissing on other developers about short variable names and yet making a post and comments full of acronyms himself.
- If JS developers are 'inbred peasants' (his own characterization), then the fact that one of them visits a machine-learning forum and throws a temper-tantrum at the whole community over variable naming and code comments only confirms the impression that the JS community carries some of the least-educated, least-knowledgeable nasty teenagers: kids who just discovered the developer console of the browser they use 24x7 to cast slurs on each other, and who now think they're the gods of computer science.
Even if you ignore all that, the biggest thrust of that shit-post is a wholly subjective one, that variable names he's encountering while reading machine learning code are _not to his liking_. That is it. I could just as well go ahead and say, ctx_h is a perfectly fine variable name, 'ctx' stands for the word 'context' (a well-known shorthand), the underscore is borrowed from the latex convention of subscripting, hence the 'h' is a subscript. And while it is not clear from the name what 'h' should stand for, it's obvious that ctx_h is a special case of some 'context', and it's completely fair to expect the reader to understand this source code in light of the paper associated with it, (which by the way is the source's documentation and, in a sense, a super-polished form of code-comments). Not to mention, this naming convention is practised even more faithfully in the mathematics community, where you would find names like x_i, a_0, all over a theorem or proof (again underscore representing a subscript). And yet my whole argument would be based on a subjective opinion.
While I completely admit that academics, by virtue of being domain-experts first and software-developers second, are more likely to suffer from a lack of clean coding and established software-engineering practices, it is far from being a black-and-white case. Not even close. After spending half a decade in grad school (following many years in the software industry) and advocating the use of modern software-engineering practices, I recently took up work at one of the big software companies and was shocked to find that the quality of their C++ code was worse than any of the Fortran and C++ codebases I encountered at the university. And personally, I've found machine learning Python code to be a fair bit cleaner than most C++ code I've come across.
I'm not against criticism, and I think the machine learning community could use a lesson or two on software engineering, but if you're up for such an undertaking (criticizing a whole community) you had better make sure you don't come across as a complete ignoramus and a hypocrite.
That reddit post is clearly tongue-in-cheek, written consciously in an exaggerated voice to spark interesting discussions (which it did) -- not a peer-reviewed journal article. But I have no doubt you're aware of that, please stop trolling.