
Why bad scientific code beats code following "best practices" - smackay
http://www.yosefk.com/blog/why-bad-scientific-code-beats-code-following-best-practices.html
======
mattheww
I am a scientist, and I have seen a lot of terrible code. Most scientists have
no formal training in computer science or coding. Many advisors don't place
much value in having their grad students take such classes, though even a
short language-specific introduction class would vastly improve their
students' productivity.

I recently undertook a complete rewrite of our group's analysis software that
was written by our previous postdoc. It was ~30k lines of code in 2 files (one
header, one source file), with pretty much every bad coding practice you can
imagine. It was so complicated that the postdoc was essentially the only one
who could make changes and add features.

The rewritten framework is only ~6k lines of code to replicate the exact same
functionality. It's easy enough to use that, just by following some examples,
the grad students have been able to implement studies in a couple of days that
took weeks in the old framework. The holy grail is for it to be easy enough
for the faculty to use, but that will probably take a dedicated tutorial.

My point is that following "best practices" may be overkill, but taking a
thoughtful approach to the design of the software can vastly improve your
productivity in the long run. Posts like the OP help scientists who write bad
code defend poor practices. Any scientist worth his salt should support
following good practices because it will always lead to better science.

~~~
_yosefk
I agree of course, I just think a scientist taking a more thoughtful approach
> a scientist taking a sloppy approach > a "software engineer" taking an
overly thoughtful approach. Because the latter could have written ~200K LOC
spread across 5 directories, and you'd need a debugger to tell which piece of
code calls which.

~~~
Silhouette
I think you're comparing apples to oranges, both here and repeatedly in your
original article.

For one thing, you describe many "sins" that "software engineers" commit, but
in reality code that was flawed in most of those ways would not even have
passed review and made it into the VCS at a lot of software shops, nor would
any serious undergrad CS or SE course advocate using those practices as
indiscriminately as you seem to be suggesting.

For another thing, how many "scientists taking a sloppy approach" do you
actually know who can successfully build the equivalent of a ~200K LOC project
_at all_, even if those 200K lines were over-engineered, over-abstract code
that could have been done in 50K or 100K lines by better developers? It's one
thing to say a scientist writing a one-page script to run some data through an
analysis library and chart the output can get by without much programming
skill, but something else to suggest that the guy building the analysis
library itself could.

~~~
gknoy
It's not that a single scientist writes it, but rather that someone publishes
a paper on something, with ugly code used to prove it, and then becomes a
professor. Subsequent generations of graduate students are tasked with
extending / improving this existing codebase until it is basically Cthulhu in
C form. ;)

I recall reading a propulsion simulation's code developed in this way.
"Written" in C++, initially by automated translation of the original Fortran
code. Successive generations of graduate students had grafted on bits of
stuff, but the core was basically translated Fortran, with a generous helping
of cut-and-paste rather than methods for many things. (I don't mean this as an
insult to Fortran: I've tremendous respect for its capabilities, and have read
well-written code in that as well.)

The net result was that fixing bugs in the system was very challenging, as it
was a very brittle black box. It was not Daily-WTF-worthy, but still very
frightening. I'm very grateful I was not the one maintaining it. ;)

------
dasil003
Setting aside the straw man of needlessly baroque architectures, I think
there's an argument to be made that erring on the side of verbose but
primitive code works in science because:

A) It needs to be read and understood by scientists who are primarily oriented
around data rather than code.

B) Many people will need to read and understand the code who are not part of a
core team maintaining a system over time. Peer reviewability is paramount.

C) In fact there is likely no "system" to be designed and maintained anyway,
all scientific code is one-off in some sense.

All that said, software engineering as a discipline can further these goals,
and it's a mistake to assume that getting "software engineers" involved will
inevitably lead to complexification. A good software engineer can assess the
goals and improve code along many axes, not just traditional enterprise
software development patterns.

~~~
wpietri
Agreed.

Another mitigating argument in his favor is that he appears to be practicing
debugger-driven development. Personally, it gives me hives, but given his
circumstances (not an expert, lots of code, much of it not his, lots of
throwaway code), it may be his best option.

------
krick
I don't know about programmers-vs-scientists-vs-engineers-vs-… and all that
stuff (basically, these are just _some words_ and I can question whether they
mean anything at all), but I agree with the main point of the article. Or the
way I interpreted it, anyway. That is, "the road to hell is paved with good
intentions". I have to deal with this on a daily basis. There is some legacy
code in the project that is considered bad and is always referred to as such.
And, well, yeah, it _is_ bad. But when I have to deal with some new
architectural marvel from one of my colleagues, who are considered good and
actually are pretty bright, adequate people, I often think that that "legacy
code" was actually easier to deal with before the "refactoring". Exactly for
the same reasons the author mentioned.

I mean, some god object with multi-screen functions, 9000 ifs and non-escaped
SQL is ugly and horrible, but in fact pretty simple to debug, comparatively
easy to understand, and often even easy to clean up a little bit without
breaking anything. But some metaprogramming-reflection-abstract-class-
GenericBusinessObjectManagerProviderFactory-10-levels-of-inheritance is
_not_. It might not even be ugly; it's often clever and somewhat elegant. If
you know how it works. But if you don't (and for starters you never do, unless
you are the author of that elegant solution), it takes you hours of pain and
bloody tears before you can understand what is happening and finally make the
changes you wanted.

I actually believe that this is a problem, because it isn't something a person
does because he is dumb. He's not! It's the culture that overly praises clever
techniques and "elegant" solutions, while spreading the myth that "not
sophisticated enough" means "bad". It doesn't! "Hard to understand" is what's
"bad". Nobody really needs "cleverness" and "elegance"; at the end of the day,
they need something that works and is easy to understand and develop further.
And the truth is that something "not sophisticated enough" (even if it's a
goto, copy-paste, a mutable variable, a global object, whatever) is often
easier to understand than something sophisticated.

~~~
userbinator
I've been faced with this countless times too and I think the problem is a
culture that equates complex and abstract solutions to elegance, when we
should really have one that equates simplicity and conciseness to elegance.

By simplicity I also mean considering the solution as a whole, not the false
simplicity of refactoring everything into one-statement-methods.

~~~
buzzybee
When I've taken this thought to its extreme, Chuck Moore's ideology around
Forth makes total sense:

If a problem is only going to be solved in a complex, Byzantine fashion, it's
the wrong problem. Walk away from it. Solve a different one. Quit the job.
Reconsider your lifestyle.

And most people aren't going to be able to consider it seriously on that
level. The monstrous systems are there because everyone involved has
collectively agreed that whatever is justifying the problem is so important
that it's OK to let the resulting system grow monster-sized and swallow
everyone up. On that basis the only thing anyone can hope for is a painkiller
to make the monster a little less soul-crushing.

------
danso
> _Oh, and one really mean observation that I'm afraid is too true to be
> omitted: idleness is the source of much trouble. A scientist has his science
> to worry about so he doesn't have time to complexify the code needlessly.
> Many programmers have no real substance in their work - the job is trivial -
> so they have too much time on their hands, which they use to dwell on "API
> design" and thus monstrosities are born._

Jesus, seriously? Can't tell if the author is just trolling flippantly in
response to what may have been an unfair post...but ignoring the "programmers
have no real substance in their work" thing...the OP mistakenly thinks that
"science" is all one needs to keep something on track. Uh, no. Just because
someone thinks they know what they're doing scientifically doesn't mean they
are good at examining or scrutinizing the way they _work_...which can include
everything from the efficiency of data collection to the _accuracy_ of such
measurements. A good software engineer is not just fluff in such a situation.

~~~
adamors
The entire post reeks of ignorance. I don't understand how it got to the front
page.

~~~
spacemanmatt
Seriously. It seems like there have been more and more neophytes willing to
come out against mature practices like factory methods and other artifacts of
polymorphic architecture. Oh well, complaining probably triggers a serotonin
release or something for them.

~~~
TillE
"Hey look, it's possible to misuse design patterns! Don't they suck, amirite
guys?"

Online forums are generally filled with programmers who have never written
anything large and complex.

~~~
spacemanmatt
Even worse: "Hey look, it's possible to be confused by design patterns! Don't
people who use them suck, amirite guys?"

The whole internet is amateur hour. I just got caught in a daydream,
pretending like HN was above the noise for a while. Maybe it was.

------
ronaldx
It's surprising there is no comment about _wrong_ scientific code: code which
apparently does one thing, but actually doesn't, and may produce harmful
results.

(By "bad" code, I understand code which doesn't meet best practices)

It's very easy to accidentally produce wrong scientific code, partly since
scientists are doing research. They use novel mathematical algorithms to solve
hard problems, and it's not typically obvious what output is expected. It's
not CRUD.

In this sense, the sins of the scientific programmer might actually be
important - fragile code which crashes when something is wrong could be
considered good - it may help to avoid publishing wrong results.
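
As a concrete illustration (a minimal C sketch with hypothetical names, not
code from the article): liberal use of assert() buys exactly this kind of
"good fragility" - the program dies at the first bad value instead of quietly
producing a wrong number for the paper.

    #include <assert.h>
    #include <math.h>
    #include <stdio.h>
    
    /* Hypothetical analysis step: normalize a measurement by a total.
       The asserts crash the program loudly on bad input instead of
       letting garbage propagate into published results. */
    double normalize(double value, double total)
    {
        assert(total > 0.0);    /* a zero or negative total is a logic error */
        double r = value / total;
        assert(isfinite(r));    /* catch NaN/Inf the moment it appears */
        return r;
    }
    
    int main(void)
    {
        printf("%g\n", normalize(3.0, 12.0));  /* prints 0.25 */
        return 0;
    }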

~~~
raverbashing
This is very easy to have happen.

For example, if you do a simulation with random numbers, and you're doing
random()%NUM where NUM > RAND_MAX.

The trick is that RAND_MAX varies between platforms _cough_ Visual C _cough_
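
(For the record, the C constant is spelled RAND_MAX and it governs rand();
the standard only guarantees it is at least 32767, which is exactly what
Visual C++ uses, while glibc uses 2^31 - 1. A minimal C sketch of the failure
mode, under those assumptions:)

    #include <stdio.h>
    #include <stdlib.h>
    
    int main(void)
    {
        printf("RAND_MAX on this platform: %d\n", RAND_MAX);
    
        /* If NUM > RAND_MAX, rand() % NUM can never produce a value larger
           than RAND_MAX, so the "uniform" draw silently loses most of its
           range on platforms with a small RAND_MAX. */
        const int NUM = 100000;
        int max_seen = 0;
        for (long i = 0; i < 1000000; i++) {
            int r = rand() % NUM;
            if (r > max_seen) max_seen = r;
        }
        /* ~99999 with glibc, but capped at 32767 with Visual C++ */
        printf("largest rand() %% %d seen: %d\n", NUM, max_seen);
        return 0;
    }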

~~~
barrkel
random()%NUM is usually wrong even if NUM < RAND_MAX.

random() is often a linear congruential generator (LCG:
[http://en.wikipedia.org/wiki/Linear_congruential_generator](http://en.wikipedia.org/wiki/Linear_congruential_generator))
for speed and simplicity purposes. LCGs are a multiply, an add and a modulus
(the modulus is usually implicit from the machine word size). That means their
low bits are highly predictable and not random at all.

    
    
        X(n+1) = (a * X(n) + c) mod m
    

Assume m is a power of 2 since it's usually implemented via machine word wrap-
around. If c is relatively prime with m (in order to fill the whole range of
m), then it will be odd. a-1 is normally a multiple of 4 since m is a power of
two, so a is odd too.

So if X(n) is odd, X(n+1) will be even (o*o + o => o + o => e), and X(n+2)
will be odd (o*e + o => e + o => o), and so on, with zero randomness.

So if you're trying to simulate coin flips and use %2, you will get a
1,0,1,0... sequence.
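
(A quick C sketch of that parity argument, using the LCG constants from the C
standard's sample rand() implementation. Note that real implementations
usually discard the low bits for exactly this reason; this prints the raw
state's low bit.)

    #include <stdint.h>
    #include <stdio.h>
    
    int main(void)
    {
        /* a = 1103515245 (odd), c = 12345 (odd), m = 2^32 via wraparound */
        uint32_t x = 1;  /* seed */
        for (int i = 0; i < 16; i++) {
            x = x * 1103515245u + 12345u;
            printf("%u", (unsigned)(x & 1u));  /* low bit of the raw state */
        }
        printf("\n");  /* prints 0101010101010101 - zero randomness */
        return 0;
    }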

~~~
raverbashing
I just tested this quickly (on Mac OS 10.9.2); it is not a 1,0,1,0 sequence.
(It is repeatable since I'm not seeding.)

On Linux, same thing. It even gives the same sequence as the Mac OS version

(it's 100 numbers, "1,0,1,1...0,1," no \n at the end)

    ./rt | md5sum
    7a5a5a0758ca83c95b21906be6052666

~~~
mturmon
@barrkel made a small mistake, confusing rand() and random().

rand() is the earliest C random number generator. Its low-order bits (back in
the day) went through a predictable sequence, so rand() & 0x1 was a bad source
of random bits.

I don't think that rand() was specified so fully as to make this behavior
required, but typical implementations exhibited it, so you could not use
rand() for any serious work.

random() came after, does not use an LCG, and thus fixed this problem, so you
would not see it if your code calls random(), whose man page says:

    
    
      The difference [between random() and rand()] is that rand() produces a much less
      random sequence -- in fact, the low dozen bits generated by rand go through a
      cyclic pattern.  All of the bits generated by random() are usable.  For
      example, `random()&01' will produce a random binary value.
    

Typically, because of this screwup, people use a third-party generator, like
the "Mersenne twister".

~~~
barrkel
An LCG can be used for serious work, depending on your definition of serious.
You need to use multiplication instead of modulus.
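
(My reading of the multiplication trick, sketched in C under POSIX
assumptions: scale into [0, n) with a widening multiply, which keeps the high
bits of the generator, instead of x % n, which keeps only the weak low bits.)

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    
    /* POSIX random() returns values in [0, 2^31 - 1]. floor(x * n / 2^31)
       lands in [0, n) and is driven by the most significant bits. */
    static long scale(long x, long n)
    {
        return (long)(((uint64_t)x * (uint64_t)n) >> 31);
    }
    
    int main(void)
    {
        for (int i = 0; i < 10; i++)
            printf("%ld ", scale(random(), 6));  /* die rolls in [0, 5] */
        printf("\n");
        return 0;
    }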

------
weland
I am a programmer who previously wrote scientific software, as a scientist. I
can confirm the author's impression: the worst code usually came from the
people with more "software engineering" expertise, but there was a catch: I
can't think of any of them who were actually good programmers in the first
place. Most of them couldn't fix a division-by-zero exception if the program
consisted of a single line that read "1/0".

This isn't unexpected: everyone fucks up the things that it's his profession
to fuck up. I was fucking up mathematical models of integrated devices and
they were fucking up code.

But things really aren't that bad. Honestly. When I moved to industry, the
first company I worked at was a small place where the lead developers were
exceptional, both as programmers and as leaders, so we wrote exceptional code,
and I also thought, gee, I was coding a load of crap back then.

Then I moved to a larger, fairly well-known company and frankly, it's
comparable. The mission-critical parts are ok, but the rest is such a gigantic
pile of shit that it probably led to a few PhDs being awarded.

~~~
zo1
_" the worst code usually came from the people with more "software
engineering" expertise, but there was a catch: I can't think of any of them
who were actually good programmers in the first place."_

So you basically define an entire group of people based on only the ones
you've met. Even then, you admit that none of them are good. Thus, all
"software engineers" are bad?

If that's not what you were trying to say, then perhaps you should clarify a
bit more. But it's how I interpreted what you were saying. And I probably
wouldn't be the only one.

~~~
weland
> So you basically define an entire group of people based on only the ones
> you've met. Even then, you admit that none of them are good. Thus, all
> "software engineers" are bad?

No, sorry if that's what ended up being understood (I assume you didn't read
my whole comment?)

What I meant was that when I worked there, I saw worse code coming from actual
programmers than from scientists who wrote code but didn't think of themselves
as programmers. This isn't much of a surprise; it was a research lab and money
was fairly tight, since we were researching neither weapons nor patentable
drugs. Most of it was spent on equipment and scientists. The under-paid
programmers were usually under-skilled, too; brighter folks quickly left for
greener pa$ture$, leaving behind the ones who couldn't otherwise land a job.

------
jerf
It seems to me that the article ultimately addresses a straw man... I don't
see anyone saying "Scientists would be better off if they adopted best
practices that a first-year student would think are a good idea without any
understanding" or "Scientists would be better off if they got terrible
software engineers to help them". Most, if not all, of what's in the article
is bad software engineering.

In fact, we are well aware that bad programming is not confined to scientists,
and at the risk of being tautological, better programming practices lead to
better programming outcomes than worse ones, whether in science or not.
(And note my careful confinement to "programming outcomes"... I won't promise
that better programming practices lead to better final outcomes
unconditionally, though I'd be willing to state there are times and places
where it is called for.)

There _are_ a lot of cargo-culting "software engineers" who blindly apply
principles in stupid manners, but that's hardly what people are calling for
here.

------
bluedino
There's a guy who used to frequent a forum who routinely asked for help with
his C programs.

He had no clue when it came to programming. He didn't understand the standard
C functions, didn't understand memory allocation, didn't understand Big O.

He formatted his code in very odd ways, requested we tell him how to shut
compiler warnings off, and refused to use different sorting algorithms; it was
just insane to see what this guy would do.

I recommended he switch to a language like Python so he could concentrate on
getting the methods and ideas into code, instead of coming to us for help on
arrays, file I/O, etc. He said he had been using C _since the '80s_ and
wasn't about to switch languages.

His justification for not wanting to learn to do things 'the right way' was
that his way worked, and he had been published (with others) in a few academic
papers, and therefore he was doing things right. Not that I would trust a
single result generated from any of his programs...

~~~
cLeEOGPw
You can train anyone to do anything with positive reinforcement, and that's
what a salary is. If he gets money for crappy C code, there's no motivation
for him to improve.

------
kirab
I just can’t find any relationship between the sins of software engineers he
enumerates and the "best practices" he referenced in his title. In my
experience best practices actually make those sins less probable.

~~~
fmstephe
As a Java developer I can say that his list of sins very nearly is our list of
best practices.

I think that the programming culture has a big impact here. Yossi's complaints
are typically about C++ programmers, and I can't comment on their culture
directly. But I think Java and C++ both have "Design Patterns: Elements of
Reusable Object-Oriented Software" as a spiritual foundation. So I suspect
they are depraved along similar lines.

(Since it's hard to get tone across on a forum, I am being playful here.
Although I am worn out by Java programming culture)

~~~
deong
I used to work in a place that had a mix of Java and C or C++ projects going
on, and different groups specializing in each. I always said I preferred
dealing with the legacy C code than the new Java code. Function too long? I
don't care. I can start at the top, read to the bottom, and understand what
it's doing. If it actually _is_ too long (rather than just longer than some
new graduate's magic number k that is supposed to be the hard upper bound on
function length), then I can quite easily break it up.

I could take Java code written by the best minds in that half of the company
yesterday, and have absolutely no idea what it did, how it worked, where the
files were or what resources it needed. Nothing actually happened in the code
that anyone at the company wrote, as far as I could tell. They spent all their
time writing what look to me to be prayers to the gods of third party jar
files asking them to do whatever function needed to be done. "Well, I need to
get a list of customers sorted by last name. Those heathens over in C++ land
would write a SQL query, but I have Spring, Hibernate, and SOAP, so instead,
I'll edit a generated XML file to refer to another generated XML file to refer
to another generated XML file to refer to another generated XML file to refer
to another generated XML file to refer to another generated XML file to create
an object creation factory to create an object that can read something from a
generated XML file that loads another generated XML file that tells Hibernate
to load four gigabytes of customer data which I need to then prune down to
what I need by editing a generated XML file so that Hibernate can send 20
records to a SOAP library that reads a generated XML file that reads a
generated XML file to write a few bits over the wire where the client can read
a generated XML file that parses a generated XML file that reads a generated
XML file before crashing because the client's JAX-WS jar was 0.0.0.0.0.0.2
iterations off from the server's JAX-WS jar. But at least I don't have to
write a difficult and error prone for loop."

~~~
watwut
Getting "a list of customers sorted by last name" in hibernate does not
require xml or anything like that. It is one java method with annotation
containing order by query.

It is considerably shorter then traditional non-hibernate version.

~~~
deong
I rather hoped that the rest of the text would have been a clue that I was
exaggerating things slightly.

------
bpyne
The author exposes a problem of ideological practice in software engineering.
We're trained to organize code for extensibility and high availability, among
other goals. We should probably learn to evaluate situations in which those
goals are not helpful and adapt to the situation. After all, a surgeon
probably doesn't follow surgical best practices when removing a splinter from
his child's finger. SE's should probably recognize when a simple, non-
abstracted approach works sufficiently for a situation and leave it at that.

~~~
collyw
Fast, cheap, good, pick two.

------
JulianMorrison
I really wish programming would get over its "OOP style" madness. If you are
writing C++ or even Java that uses, rather than creates, an object hierarchy,
then just write in procedural style with functional decomposition.

~~~
leephillips
I'm not sure I understand what you're getting at, but I think I try to do this
when I use Python: I never create classes, but write in as functional a style
as possible. The problem is that I seem to be fighting with most of the
libraries that I use, because they are all written by good, normal Python
programmers who use the OO features of the language. You can't really use
their methods as if they were pure functions, because they're not: they mutate
arguments and have all kinds of side effects, often undocumented.

~~~
zo1
It's called encapsulation/abstraction. You're not supposed to know the
internal state of a class, and you shouldn't care if it's changing. The fact
that you're complaining about that means you're not using those provided
classes correctly. It's like complaining that a car isn't driving properly on
ice. And that's because it wasn't meant to drive on ice, but rather on non-
slippery surfaces.

Perhaps you shouldn't try to pigeon-hole classes that were made for normal OO
usage into your functional tastes.

~~~
leephillips
I think everything you say here is correct. It sounded as if the comment I was
replying to was recommending a practice that I've tried to follow, and I was
explaining how trying to use a functional style in an OO ecosystem caused me
problems.

"You're not supposed to know the internal state of a class, and you shouldn't
care if it's changing."

A recent headache was caused by trying to use a library (forgive the
vagueness, I don't want to pick on anybody) that interfaced with an external
service. One method was documented as returning a piece of information that
had been previously stored in the main object through which you interact with
the service. But when you invoke it the library makes additional API calls to
the service and changes other data in the object. If you use the method in
some expression that calls it 20 times it will make the set of API calls 20
times. There is no way to know this (ahead of time) unless you read the code.
It's not documented because "You're not supposed to know the internal state of
a class, and you shouldn't care if it's changing.". The author made
assumptions about why you were using the method and what you were going to do
next.

So what appears to be a function for retrieving a single value actually
returns several values, returning one as a result, stuffing others silently
into an object, and initiating network activity. This kind of lack of
orthogonality and hiding behavior from the programmer is what motivates me to
learn functional programming and avoid OO systems - although I understand they
are a good match for programming GUIs and similar things.
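
To make the trap concrete, here is the shape of the problem as a hypothetical
C sketch (the real case was a library I've deliberately left unnamed): what
reads like a pure getter quietly talks to the network and mutates the object
on every call.

    #include <stdio.h>
    
    /* Hypothetical handle for some external service. */
    typedef struct {
        int cached_count;
        int api_calls_made;
    } service;
    
    /* Documented as "returns the stored count" - but every call silently
       performs another round-trip and rewrites other fields of *s. */
    int get_count(service *s)
    {
        s->api_calls_made++;  /* stands in for a real network request */
        return s->cached_count;
    }
    
    int main(void)
    {
        service s = { 42, 0 };
        int total = 0;
        for (int i = 0; i < 20; i++)
            total += get_count(&s);  /* looks pure; fires 20 API calls */
        printf("total=%d, hidden API calls=%d\n", total, s.api_calls_made);
        return 0;
    }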

------
diegoloop
That's why I came up with the idea of making a huge repository
([http://codingstyleguide.com](http://codingstyleguide.com)) of programming
conventions, "best practices", etc. for any language, where anyone
(scientists, experts and newbies) can visit the platform and take a look at
the "best" conventions to use.

~~~
collyw
Thanks for this! I have been looking for something similar.

I find it easy enough to pick up the syntax of a new language, but this will
allow me to go that step further and do things properly.

~~~
diegoloop
Thank you @collyw! There are still too many guidelines to post... The idea is
to have different solutions for every writing convention in any programming
language.

------
vendakka
A large part of this boils down to being able to estimate how much technical
debt you can afford to carry. I haven't seen very much code written by
researchers (scientists/PhD students/postdocs). However, from the little I've
seen, the tendency sometimes is either not to be aware of technical debt
accumulating, or to unintentionally overestimate how much can be afforded.
This results in disorganized codebases.

The other extreme is software engineers who focus overly on the mechanics and
always underestimate how much technical debt they can afford. This results in
over-architected systems which try to plan for all eventualities.

Two useful skills to have as a software engineer are to know when to stop
writing code and when it's okay to write messy code. The latter being done
with the knowledge of when or even if you'll have to clean it up later.

~~~
ams6110
Technical debt is often irrelevant in scientific code. It's one-off code for a
specific experiment. In many cases, once the paper is published nobody will
ever run the code again. That's not always true, but it often is.

~~~
collyw
Science is supposed to be reproducible. Writing a script that runs once on a
specific machine is unlikely to achieve that.

~~~
leephillips
Meaningful reproducibility would mean writing your own code and performing
your own experiment in a different lab with different people to see if the
_results_ hold up. Running the same code more times on the same machine, or
repeating a measurement in the same lab with the same people isn't what we
mean by "reproducing" a result.

------
bowlofpetunias
TL;DR:

People being incompetent part-time do less damage than people being
incompetent full-time. But for the sake of my straw man argument I ignore the
fact that incompetence is the problem here.

------
yk
Thing is, computers were built to handle scientific problems. And it shows in
numerical programs; an especially egregious example was a simulation program
with a ~1k LOC main. However, it is possible to work with that, since it has a
lot of structure; that main routine looked something like

    
    
        main(){
            initMatrix(reasonableName);  // Repeat this block
            loadData(reasonableName);    // 100 times.
    
            for(...){
                someBookkeeping(reasonableName);
                // Another 100 lines
                for(...){
                    numericalStuff(reasonableName);
                    // Again, repeat 100 times
                }
            }
            cleanUpAndOutput(reasonableName);
            // Again, repeat 100 times
            return 0;
        }
    

So that is of course horrible code, but at least it is horrible in a
consistent way. I got really burned by code where the high-level architecture
was built by a software engineer and the details were filled in by a
physicist. Then you get atrocities like what should be separate classes
spreading their functionality over multiple levels of inheritance. (Along with
several pages of a constructor...)

------
noselasd
"Best Practice" isn't just about the code aesthetics, but things like source
control, testing, assumptions made, documentation etc.

[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjo...](http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745)
gives some nice advice.

------
Htsthbjig
Bad scientific code is bad, and bad programmers' code is bad.

Every time you do a clever thing, you have to do three things:

1-Document your smart idea in your code.

2-Document your smart idea, preferably as a drawing.

3-Document your smart idea, preferably as audio or video.

People forget their smart ideas after 6 months or so. So basically, if you
have to debug the code later, you have to spend at least the same amount of
time you spent developing the smart idea in the first place, each time you
debug.

In my opinion, smart ideas are great, though; if you follow the three
principles above, solving a bug becomes fast.
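
(A tiny C illustration of principle 1, with a hypothetical "clever" one-liner;
the comment is what keeps it debuggable six months later.)

    #include <stdio.h>
    
    /* Clever trick: n & (n - 1) clears the lowest set bit of n.
       Why it works: subtracting 1 borrows through the trailing zeros,
       turning ...1000 into ...0111, so the AND wipes that single bit.
       Here it counts set bits in O(number of set bits) steps. */
    int popcount(unsigned n)
    {
        int count = 0;
        while (n) {
            n &= n - 1;
            count++;
        }
        return count;
    }
    
    int main(void)
    {
        printf("%d\n", popcount(0xF0u));  /* prints 4 */
        return 0;
    }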

Most people don't know about psychology, so they believe that because they
know it today, they will know it in six months' time. Or worse, they fear that
if they document their work they could be fired (this is the mentality of weak
programmers who know they are weak; hopefully you won't work with these
people, and if you do, quit as fast as possible).

------
jacquesm
Simple code is simple to fix, but without some layers of abstraction there
will likely be so much of it that understanding is hampered along a different
dimension.

Quantity of code can be reduced by increasing the complexity of the code; at
some sweet spot between the two is your ideal: code that is neither so dense
that you can't read it any more, nor so verbose that you're going to be
overwhelmed by the quantity.

It's never black-or-white; it is always a trade-off.

------
touristtam
I am surprised the author calls himself a "programmer" and judges that "...
Many programmers have no real substance in their work - the job is trivial
...". On what is such a judgment based? Personal experience? Then it is
totally biased.

He is also saying that "A scientist has his science to worry about ...", which
is quite demeaning of what the programmer has to go through in comparison: the
programmer has to understand his own field AND the domain, i.e. the
scientist's field. Sure, it might 'only' be to get the knowledge from the
scientist. But doing away with software abstraction for the sake of writing
simple (simplistic) code hides the fact that if the code is to be worked on in
the future, it will be a giant spaghetti monster.

The author might as well write VB macros in MS Excel. :p

~~~
_yosefk
Yet in reality, I work on chip architecture and hardware accelerators and have
never written VB macros in MS Excel. Go figure.

(I did write one VB macro in VS 6 though, I think. Perhaps it was that
incident that distorted my worldview.)

~~~
touristtam
Sorry if my previous comment seemed a bit harsh; I guess your blog post was
bound to take some flak after such an opinionated view of the programmers in
your field (and the perceived generalization to programmers outside this
field). That being said, you are possibly touching on a subject more
fundamental to human behavior, where some of us are perfectly happy not
learning as much as we can and just getting by in our daily job. ;)

------
userbinator
Although I've not worked with much scientific code, I think the phrase "best
practices" has a lot to do with why this phenomenon happens; those who were
subjected to formal software engineering education will have been exposed to
them, and unfortunately for many, the _best_ in "best practices" leads them to
believe that these are the ultimate way to do something and should always be
followed. Instead of thinking and reasoning about the problem, they are taught
to apply a set of "best practices", and that doing so will provide the best
solution. When faced with a problem, dogmatic adherence to these principles
replaces actual thinking, and groupthink further indoctrinates them into
something resembling a religion. Engineering is all about tradeoffs when
solving problems. There is _never_ a universal "best" practice that fits all
situations, so I think anyone who claims to be practicing or teaching
"software engineering" does not deserve to be called a "software engineer" if
the bulk of their thought process consists of regurgitating "best practices".

On the other hand, those who haven't will approach the problem from a
completely different perspective: they'll be primarily concerned with solving
the problem itself, and will tend to use actual thought rather than relying on
memorised practices. It's true that this can lead to "bad code" depending on
their skill level and knowledge, but they'll also be far less likely to
overcomplicate things and more easily understand and use abstractions
appropriately. One of my favourite examples of this is the demoscene; it's
comprised of programmers, most of them young and self-taught and never
subjected to formal CS education or "best practices", who manage to do pretty
amazing things with software and hardware. That's why I believe these people,
the ones who learned "bottom up" from the basic principles of how the computer
works and how it can be programmed, can eventually produce better code than
the "indoctrinated software engineers" who were taught more on _what_ to think
than _how_ to think.

------
Nimi
Respectfully, I think the author got the reason wrong. In software, there are
inherent problems, and non-inherent problems (as observed by No Silver
Bullet). Scientists, when writing scientific code, can only encounter/create
non-inherent problems: local (or relatively local) bugs in their code.
Programmers, otoh, are employed in order to tackle (sometimes successfully,
sometimes not) the inherent problems, which mostly distill to the problem of
"scale". Note that most of the problems the author listed in those bullets may
be described as "this doesn't scale".

So when the author is called to solve a problem created by a colleague, he
either gets a very local bug in some scientific code, which is apparently easy
to debug (I'm surprised about the concurrency stuff also being easy, but if
that's the case - great for him), or a problem with a large code base, badly
architected, which we all know is very hard to solve.

The author seems to imply that if software engineers would ditch this
extravagance and start writing simple code instead, we would be better off - I
highly doubt this. I mean, the code would certainly be easier to understand,
but how much duplication would there be? How much more code would we have to
understand?

------
kyzyl
Okay, here's my attempt to distill things a bit. Now, I'm not very old, but my
time putzing about this field has taught me that there are two components to
carrying on successfully: experience, and communication.

Right now I run a company where we do science. To do our science, we have to
write a ton of code. Web stuff, server deployments, numerical analysis,
machine learning, hardware description, USB drivers, data visualization, you
name it. Over the years I've gotten pretty good at picking the right level of
abstraction for the job, and it's served me very well. The key is to have
foresight about your situation, and foresight only gets better with
experience. When I see one of the less experienced engineers at work doing
something dubious, it's usually very easy to steer them away from danger.
That's the experience bit; somebody has to know better, and has to act on it.

But knowing better isn't enough. If you want your academic lab to use better
coding practices, you knowing better and flatly telling them to change their
ways will never work. You have to convince them that they want to do it. If
you can't come up with an argument for the use of your other language, or
design paradigm, or SCM software that is both factually solid _and
contextually relevant enough_, then 9 times out of 10 you are probably not
hitting on the right solution. That doesn't mean that your lab director who
is refusing to change is correct in his refusal, but I'll bet that if you
pitched upon a solution that was right for the situation, and put some thought
into your explanation, you would get a _much_ better reception. The same
principle applies when you're pitching a new methodology out in non-academic
land, it's just that more often than not both parties' expectations are
already closer to being in alignment.

Why does it work? Because most people in these types of environments, such as
academic coding circles, are really quite smart. If something is sensible and
sufficiently low friction, they will probably go for it. So, no, bad
scientific code doesn't beat 'best practices'. But a good solution beats a bad
solution every.fucking.time. If you can't show people that it's a good
solution, it's probably not the right thing to do. Even if your idea _is_ the
better one, if you haven't convinced the people who are going to have to deal
with it of this fact, they won't understand it, they will misuse it, mess it
up, screw up their research and they will blame the software engineer. And
they'd be at least partially correct in doing so, because you only painted
half the house.

I'd end by saying 'It's really not rocket science', but... it really might be.
;-)

------
logn
I think that the 'raw coding' style the author sees in scientists is desirable
because that's essentially the base level of programming that even software
engineers think in. But then software engineers jump one level higher and try
to organize and abstract things. Not everyone can effectively write complex
code on that level, and it's also an area where everyone has their own opinion
on best practices or style.

Personally I often tend to write messy and 'not best practice' code until I
know what direction I'm headed in. Then I refactor. I recently turned a few
hundred lines of if-statements into a sensibly organized piece of code.

But I think the problem here is bad software engineers. And I don't think we
should be apologists for poor coders (scientists or otherwise). If an
organization accepts mediocre code from non software engineers, that's fine
and more power to them. But I don't think it's good to encourage poor
programming any more than it is to tell kids grammar and spelling don't matter
because it's better to read short slang phrases than long sentences no one
will understand anyway.

------
dtech
Alex Papadimoulis (from The Daily WTF) wrote an interesting short essay on
exactly this problem: why it happens and how you can detect and prevent it in
yourself.

Programming Sucks! Or At Least, It Ought To:
[http://thedailywtf.com/Articles/Programming-Sucks!-Or-At-Least,-It-Ought-To-.aspx](http://thedailywtf.com/Articles/Programming-Sucks!-Or-At-Least,-It-Ought-To-.aspx)

~~~
spion
The defeatist attitude in that article is interesting. I disagree. It's just a
hard problem, and when we try to solve it we often fail and make things more
complicated. That doesn't mean the problem is unsolvable or that all of the
tedium is inherent and irremovable.

------
Nursie
So he's comparing the minor faults made by people writing single-use
scientific programs to the excesses of the worst of 'enterprise' style coding.

It's not really any wonder he comes to the conclusion he does. It's a shame he
doesn't know what software engineering is though. Hint - it's not about making
things as complex, abstract and verbose as possible.

------
sanxiyn
This rings true. Programmers are powerful, so programmers can do powerfully
bad things. Non-programmers may write bad code, but they don't (probably
can't) write powerfully bad code.

~~~
acqq
I also like to quote the following description of the problems generated by
following apparent "best practices":

[http://blogs.msdn.com/b/ricom/archive/2007/02/02/performance-problems-survey.aspx](http://blogs.msdn.com/b/ricom/archive/2007/02/02/performance-problems-survey.aspx)

"The project has far too many layers of abstraction and all that nice readable
code turns out to be worthless crap that never had any hope of meeting the
goals much less being worth maintaining over time."

The problem is that once programmers learn something that is "hard" to them,
that is, something that demanded a big investment from them, some of them
start to believe that _anything_ they touch will benefit from these
_hard-to-acquire_ techniques. That's how we end up with "far too many layers
of abstraction" and "nice readable code" that "turns out to be worthless
crap." There's a lot of code produced by following recipes, without
questioning whether the recipes are appropriate to the problem.

Another problem is what I call the "religious approach" to programming and
design: blindly _believing_ and applying, without questioning, everything that
is written in some books. It's an interesting psychological problem that often
ends up embodied in the code, and it frequently stems from the "solitary"
approach to design and code writing. If you have "architects" who don't look
at the implementation and aren't ready to question and redo their own designs,
you can be almost sure the result will be ugly and maybe even totally wrong.

------
codezero
This reminds me of a talk I attended at AGU about the huge legacy of code for
climate modeling. The question was whether the projects should be started from
scratch with better engineering principles. Check out the slides here.
[http://www.cs.toronto.edu/~sme/presentations/Easterbrook-AGU-fall2010.pdf](http://www.cs.toronto.edu/~sme/presentations/Easterbrook-AGU-fall2010.pdf)

The goals of scientific code are often different from those of code written by
software engineers. Having reproducible, consistent output for a given input
is very important, and it's hard to move a bunch of Fortran to Python with
confidence that the output maintains 1:1 precision on historical inputs.

------
emsy
TL;DR Software Engineers complexify the code because they have nothing to do
and scientists do all the real work.

I was left with the impression that the author lacks a broad view over the
software development landscapes and thus tends to generalise badly.

------
raverbashing
Here's the problem

Programmers (and I see an example almost every day here on HN) _don't know_
the math and equations (or the concepts) of even basic scientific
calculations. Let alone some more complicated stuff.

Scientists, on the other hand, mostly don't know the basic best practices _of
today_, or the most modern techniques. We even have to be thankful they're
not using Fortran (not that it's bad, but...)

But guess what: whoever can produce some results "wins". In this case, the
scientists with ugly code.

~~~
Nursie
Speak for yourself. Some of us have had a reasonable amount of training in
mathematics and various sciences as well as computer programming.

It's true that you don't always need this to be a programmer, but some of us
have a decent grasp of some of this stuff.

~~~
raverbashing
"Speak for yourself. Some of us have had a reasonable amount of training in
mathematics and various sciences as well as computer programming"

It should be obvious that if I'm pointing out the issue I'm aware of it,
_hence_ I know something about math and science.

~~~
Nursie
Then... don't presume to speak for everyone else?

"Programmers (and I see an example almost every day here on HN) don't know the
math and equations (or the concepts) of even basic scientific calculations.
Let alone some more complicated stuff."

This is a really sweeping statement and really does not apply to everyone.

~~~
raverbashing
Well, true, as the sentence is written I am generalizing.

It was meant to have a "usually" between the first parenthesis and the "don't".

And just as I know programmers who have never heard of the Newton-Raphson
method, I know ones who know a lot about scientific subjects and mathematical
methods.

~~~
Nursie
The only reason I object is because people keep saying this stuff and it
becomes accepted wisdom. Like "Programmers don't know the basics of science",
"Software engineers always build an over-abstract, enterprisey-mess" or
"people that studied computer science are only interested in solving esoteric
technical things and have no view on business needs".

Whatever it is, it's starting to feel like there are a whole load of
stereotypes building up that don't apply to me but might prejudice future work
opportunities.

~~~
raverbashing
This might happen, but it should be easy to filter out in a CV/interview
setting (especially if the recruiter knows what they're looking for), and, of
course, in the job application (one of the reasons to tune up the CV and add
relevant information to the cover letter).

I always made sure to get the point across, for example: "oh, I see that your
job opening mentions the Finite Element Method, and the area of numerical
computation is something that interests me", or something similar for the
other examples (where relevant to my case).

------
danieltillett
Humans can only handle so much complexity. With a lot of scientific code the
underlying concepts are complex, and this results in "simple" code. This is
not a bad thing in the main, as the people who understand the concepts
(scientists) can understand the code - if you show some biologist elegant
code, they end up spending all their time trying to understand what the code
is rather than what the code is trying to do.

------
silentvoice
I think I have a unique perspective on this, since I have both a strong CS
training as well as a strong scientific computing training. I can look at code
both from the perspective of producing useful scientific results as well as
from the perspective of code quality (readability, maintainability, etc).

Scientific computing is a unique problem domain, different from what I think
software engineers are accustomed to solving. Performance is often extremely
important; it trumps almost everything. Yet changes need to be made to the
code frequently, which often necessitates rewrites. The software engineer's
solution to this is abstraction: hide the details of conceptually unrelated
components so that changes can be made in isolation without needing to
propagate them through the whole source. Unfortunately, throw in things like
paper deadlines and abstraction quickly becomes a goal in conflict with
performance. Since the kinds of changes one needs to make to scientific code
can rarely be predicted (it's research; by definition we don't know the
results before we get them), spending a huge amount of time on an extensible
framework is almost always a waste, since we can't design for the unknown
(believe me, I have seen dozens of these frameworks, and none of them has
widespread adoption in research, despite being of very high "code quality").

Therefore an accepted practice is to write short hacks whose rewriting, if it
proves necessary, will be as painless as possible. This is of course a
generalization; not everybody does this. I'm just giving the reasoning behind
this kind of code, and why modern practices are often ignored. The usual
assumption is that this neglect comes from stubbornness or a "get off my lawn"
mentality on the researcher's part, but in my experience it usually doesn't.

------
cousin_it
I agree that professional programmers often overestimate the benefits of their
hard-won expertise and their beloved ideas about programming in general, and
underestimate the value of being immersed in a specific domain. Here's another
essay that makes the same point:
[http://prog21.dadgum.com/190.html](http://prog21.dadgum.com/190.html)

~~~
collyw
I think this is an experience thing as well.

To me this post sums it up well:
[https://medium.com/p/db854689243](https://medium.com/p/db854689243)

------
anatoly
They keep saying that drinking lots of sugary drinks is bad for your health.

But I've seen people who drink lots of sugary drinks, and I've seen people who
eat greasy burgers and nothing else, and let me tell you, the people who eat
greasy burgers are much worse off.

It is therefore important to understand that drinking lots of sugary drinks
beats eating only greasy burgers.

------
julienchastang
I work at a scientific institution where scientific programming is our bread
and butter (www.ucar.edu) and has been for decades. There are people here who
are trying to make progress in the culture clash that Yossi describes. In
particular, the UCAR Software Engineering Assembly holds an annual conference
to tackle some of these thorny problems. This year's conference is over
[https://sea.ucar.edu/conference/2014](https://sea.ucar.edu/conference/2014),
but hopefully it will be back next year. The conference had sections on best
practices and software carpentry. Also, I am seeing a sea change where young
scientists fresh out of grad school are actually pretty good programmers,
making traditional software developers (with little hard-science background)
somewhat obsolete.

------
RazorX
I am a physics grad student and the paper I just submitted involved a lot of
code (mostly fitting in Python plus plenty of LaTeX). I'm really trying to
beat the stereotype here and write documented, maintainable, version
controlled code. I see plenty of the bad code happening all around me.
Research is supposed to be reproducible and the way the code that goes into
the analysis is typically handled often runs opposite that goal.

Once I put the paper on the arXiv I open sourced everything and made a portal
with links to all the repos. The only thing missing is the repo for the source
of the submitted paper which I'll add once it makes it through APS review.

Here is my work if you're interested or have any feedback:

[http://evansosenko.com/spin-lifetime/](http://evansosenko.com/spin-lifetime/)

------
Cheatboy2
Scientific code is often about solving ill-defined problems, working
interactively and refining the work along the way. This is a different job
than developing an application for end-users.

I am quite confident that, most of the time, scientific code is pretty bad.
But I am also pretty sure that even in industry the code is not that good
anyway.

My worst experiences with scientific code were less about the quick-and-dirty
approach than about on-purpose obfuscation and ego-coders who are more
concerned with impressing others than with writing simple and effective code.

------
galapago
From the security point of view, I can confirm this. The team I was working on
found a large number (100+) of exploitable binaries in Debian, most of them in
scientific packages.

------
spacemanmatt
I work with scientists who self-trained in application development. They use
design patterns and factory methods just like the next programmer because they
have learned what is practical. We all started out writing shitty code but
some of us had a need and a venue to refine the skill to the point that our
employers could sell our work products. Researchers rarely see that pressure.

~~~
collyw
What is practical in the short term is usually not practical in the long term
in my experience.

------
mturk
There is a strong, growing movement to improve scientific code. As two quick
examples, the WSSSPE series:
[http://wssspe.researchcomputing.org.uk/](http://wssspe.researchcomputing.org.uk/)
and the Software Sustainability Institute:
[http://software.ac.uk/](http://software.ac.uk/) .

------
stcredzero
The takeaway is not that software engineering is bad. The takeaway is that it
can be misapplied. People can write spaghetti horrors using patterns every bit
as unnecessarily complex as the worst maze of gotos.

The takeaway is that your programmer had better be aligned with your goals,
not programming for programming's sake.

------
brudgers
[This is a sketch]

Lately, I've been thinking about _the act of writing_ in a programming
language as the act of writing. Which is to say I've been thinking about
writing and wondering if it makes sense to think of programming as
ontologically distinct. In other words, I'm wondering what it would be like if
I were a lumper rather than a splitter.

As the push toward greater 'coding literacy' gains ground -- in the anglophone
world, at least -- there will simply be more marginally literate code, for the
same reason there is more marginally literate writing as a result of seeking
universal literacy. A goal of functional literacy tends to produce the
functionally literate, not a society in which everyone is a person of letters.

Programming as just another form of written communication suggests that
perhaps programmers are being trained [programmed ?] in a manner that
predisposes them toward inflexibility as writers [1]. Imagine if Creative
Writing graduates entered the workplace believing that everything must be
written in meter. Imagine, having to maintain iambic pentameter or extend a
sonnet. As bad as that sounds, adding a protagonist to a blank verse epic is
probably a closer analogy.

Programming as just another form of written communication suggests that its
rise in the academy has encumbered the minds of 'some people' with values
closer to those of literary theory than practical empiricism. There aren't
accepted 147 patterns for beam design. There are three - LFRD and SoM and pure
empiricism from tables. Civil engineers are trained to use apply them in order
of increasing complexity because Civil engineering training values simple
standard solutions not creative ones.

The difference between engineering training and creative writing training is
that the engineering ideal is doing it right the first time, not progressive
refinement toward the great American novel through iterative rewrites - and
editing. An engineer learns when a Fink or Howe or Warren truss is
appropriate. Civil engineering culture says "it's OK to solve simple problems,"
not "every problem is a prototype truss design problem." Treating every
problem as the chance to create a snowflake is what the Creative Writing
department does.

We can call programming "software engineering" or "computer science", but it's
still pretty much just writing. What makes it unique is what makes all writing
unique: the peculiarities of the audience. And programs, I will admit, have
some particularly peculiar members in their audience - computational devices
whose peculiarities dictate certain conventions [keeping in mind that the goal
of writing is effective communication].

What the article hits is that these conventions become stylized and that
classically trained programmers fall into the practice of using style as a
starting point.

Of course this is a sketch of an idea that's kicking around in my head, and as
I've written it I've gotten the impression that despite all the big-O one
finds in computer science departments, the process of constructing software
systems is pretty much entirely empirical. That's why there is no SoM or LRFD
- big-O is just the kernel of an SoM analog. What's missing is the time and
experience which grounds the field in humanist values. It will take time
before the idea that the bridge is built so that _people_ can cross the river
becomes the obvious driving force.

[1] Not all mind you, or any necessarily. I'm just accepting the
characterization in the article as fact.

~~~
jostylr
Do you have thoughts on literate programming? This is turning programming into
the act of writing. I find it to be very liberating.

Many of the sins mentioned in the article can be addressed with good use of
literate programming techniques. In particular, it makes reading the code a
joy.

~~~
brudgers
I'm wondering if good coders are just good writers, and "a good coder" is
simply what we call someone who writes well in a particular genre in the same
way we call someone "a good poet."

To put it another way, we see a continuum between the acts of writing a
product manual and a textbook and a news feature and a short story and a
novel. Why are we convinced computer programs are ontologically different?

All that said, Literate Programming often seems overdone to me. Stripped
down to its essentials, code can be crafted like poetry. Literate programming
is only literature when being crafted as a literate program allows something
literary to be expressed that couldn't be said without untangling the knot of
procedure and weaving it into a document.

------
nmrm
Another problem in many research settings is compensation. Labs/centers often
cannot compete with industry on pay. They then assume that experienced
professionals are willing to take a substantial pay cut to "do science".

In reality, the opposite is true. You are asking someone to forgo working
with like-minded software people at Large Software Corp. and instead work
with peers who largely view software as plebeian and inferior to their
science (see the last 2 paragraphs of the article!).

These under-paying research labs end up with bad engineers, or young engineers
who now have zero mentorship opportunities (just as bad as a bad engineer).

The take-away is that scientific labs should expect to pay slightly more than
the going rate for _mid-career_ software engineers.

~~~
Fomite
> The take-away is that scientific labs should expect to pay slightly more
> than the going rate for mid-career software engineers.

Whether or not they _should_ do this (I've been in a number of labs where a
skilled software engineer is highly valued, as are many other lab technical
staff members), this isn't going to happen. "Slightly more than the going
rate" is pretty grant-breaking.

~~~
nmrm
> "Slightly more than the going rate" is pretty grant-breaking.

That's probably correct. But it's somewhat disingenuous to sample from the
lower end of the distribution and then make sweeping statements about the
profession.

~~~
Fomite
I don't follow.

------
ianstallings
Most of these gripes sound like "programming is hard" to me.

------
fleitz
It's not so much scientists vs. programmers, it's more about experience.

If you write a lot of software, hopefully you're getting better at it,
learning new skills, applying them, etc. Abstraction is like calculus: if
you know it, it greatly simplifies things; if you don't, it mystifies
things.

More experienced software writers generally write at a higher level of
abstraction.

~~~
fmstephe
I think that Yossi's point was that the non-programmer writes at a very low
level of abstraction and that this is a great virtue. Although there is a
great deal of nuance to this debate, I fall more and more on his side of it.

I think that one of the biggest problems with this debate is the numerous
examples of exceptionally powerful and useful abstractions we use every day.

Programming languages are a wonderful abstraction over assembly, which is a
marvelous abstraction over machine code. Compilers leverage this abstraction
to produce exceptionally fast machine code that few humans could ever match.

The file system is a fantastic abstraction over a broad range of very complex
storage media; you can even pretend a hard drive on another computer is part
of your local file system.

So abstraction is clearly the most powerful tool in our toolbox. It's what
allows us to climb above the Roman numerals of computing.

I think the thing that is lost in this debate is the number of failed
abstractions. Each of the abstractions listed above was hard won and is
painstakingly maintained by dedicated developers. Even within their ranks
there are countless failed examples: file systems that were unreliable and
programming languages that made life harder than it needed to be. The other
feature of the abstractions listed above is that there are few of them and
we spend a large amount of time learning about them. As a Java developer I
have invested a significant amount of time learning the details of Java
garbage collection. Now there is a wonderful abstraction that has some
sharp, spiky corner cases.

We can look at another common source of abstractions: third-party libraries.
I will use Spring as an example. The list of abstractions explodes, each one
appears unique, and I know developers who swear by Spring and use it
extensively yet struggle to answer concrete questions about it. Vast
quantities of unreliable and very hard to read code have been built on top
of these kinds of abstractions. (I note reluctantly that plenty of reliable
software has been built on top of Spring too, and there are developers who
have a very good grasp of at least some of it.)

Finally, let's survey one last category of abstraction: the ad hoc,
per-project abstraction. This is what Yossi is actually talking about in his
article. In enterprise Java these are typically a swirling mass of classes
whose names are concatenations of various design patterns. They are usually
of very low quality; the programmer who wrote them had deadlines and has
since moved on. They are 100% unfamiliar, they are 100% awkward, and, like a
man lost in the desert desperate for a drink of water, I find myself
yearning for a concrete instantiation and a plain old method call.
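
A caricature of the shape I mean - sketched in Python for brevity, with all
names invented (the real ones are longer):

    # Three layers of concatenated pattern names...
    class ReportStrategyFactoryProvider:
        def get_factory(self):
            return ReportStrategyFactory()

    class ReportStrategyFactory:
        def create_strategy(self):
            return ReportStrategy()

    class ReportStrategy:
        def execute(self, data):
            return sum(data)

    # ...wrapping what was always just a plain old method call:
    def total(data):
        return sum(data)

    # The journey: provider -> factory -> strategy -> the sum it was
    # all along.
    factory = ReportStrategyFactoryProvider().get_factory()
    assert factory.create_strategy().execute([1, 2, 3]) == total([1, 2, 3])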

My personal feeling is that, yes, abstractions are extraordinarily powerful,
but only when they are good and only when we understand them. I think
producing new and useful ones is always harder than we expect. It's good to
remember that we didn't replace Roman numerals with ten thousand unique
numeral systems.

YMMV :)

~~~
mercurial
> My personal feeling is that, yes, abstractions are extraordinarily powerful,
> but only when they are good and only when we understand them.

Certainly. Saying "I have created an abstraction" only works in conjunction
with "I actually understand the problem on an abstract level, and I am
familiar enough with the existing codebase." However, thinking that "amateur
code", with bad or no data structures ("I actually want to use a dictionary,
but let's have a 15-cases if/ifelsif/else expression instead"), 5-screen long
functions, badly named variables and functions, state all over the place is
"better" is folly. It isn't, it's just as terrible if not worse, and usually
extremely brittle. Not to mention usually impossible to unit-test properly.
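
To make that concrete, a hypothetical sketch in Python (the handlers are
stand-ins):

    # Stand-in handlers for real logic.
    def create(msg): return ("created", msg["id"])
    def update(msg): return ("updated", msg["id"])
    def delete(msg): return ("deleted", msg["id"])

    # The if/elsif ladder, 3 of the 15 cases shown:
    def handle_v1(msg):
        if msg["type"] == "create":
            return create(msg)
        elif msg["type"] == "update":
            return update(msg)
        elif msg["type"] == "delete":
            return delete(msg)
        else:
            raise ValueError(msg["type"])

    # The dictionary it actually wanted to be:
    HANDLERS = {"create": create, "update": update, "delete": delete}

    def handle_v2(msg):
        handler = HANDLERS.get(msg["type"])
        if handler is None:
            raise ValueError(msg["type"])
        return handler(msg)

Adding case sixteen is one dictionary entry instead of another branch.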

~~~
ams6110
Dictionaries are a great example. They are a useful general-purpose
abstraction. Oftentimes programmers will develop a hierarchy of classes
that, once you get to the root of it, is just a dictionary with some window
dressing to hide that fact. Sometimes it's easier to expose the dictionary
directly and handle the special cases inline. The programmer may fret about
tight coupling, but if you know you're never going to replace the
dictionary-based implementation, it can be a lot easier to understand,
especially in one-off code. And when dictionaries are provided by your
language/libraries, you don't have to unit test them at all; you assume
they work.
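
As a sketch (names invented), the contrast is something like:

    # The hierarchy whose root is just a dict with window dressing:
    class SettingsRegistry:
        def __init__(self):
            self._entries = {}

        def register(self, key, value):
            self._entries[key] = value

        def lookup(self, key):
            return self._entries[key]

    # Versus one-off code that uses the dict directly and handles its
    # one genuinely special case inline:
    settings = {"timeout": 30}
    timeout = settings.get("timeout", 60)  # special case: a default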

~~~
mercurial
But now your program is weakly typed. By the same logic, maybe your id is
just an integer, but having an "ID" type will prevent bugs in the future,
just as "dressing up" the dictionary with a meaningful name and exposing
only the minimum set of methods necessary will keep people from being
tempted to, say, add things they shouldn't to your dictionary. Not to
mention that strong typing has documentary value.
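
A hedged sketch of the id example in Python - in a statically typed language
the compiler would do the checking; here it's a checker like mypy:

    from typing import NewType

    # A distinct ID type: mypy flags code that passes a bare int where
    # a UserId is expected, even though both are ints at runtime.
    UserId = NewType("UserId", int)

    def load_user(user_id: UserId) -> str:
        return "user-%d" % user_id

    load_user(UserId(42))  # fine
    load_user(42)          # a type error for mypy (it still runs, though)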

------
frozenport
>>Complete lack of interest in parallelism bugs

Sounds like

>>The result is that you don't know who calls what or why, debuggers are of
moderate use at best, IDEs & grep die a slow, horrible death, etc. You
literally have to give up on ever figuring this thing out before tears start
flowing freely from your eyes.

