
Hopefully more controversial programming opinions - elssar
http://prog21.dadgum.com/149.html
======
cubicle67
[update: I've got hold of some of the code. Here's a copy of one of the
programmes <http://pastie.org/4615158>]

I have an awesome story that I think I've told before, but I'll tell it again
anyway :)

My father-in-law is a pharmacist. Back in the early 90s computers were
starting to make their way into pharmacy and the first few pharmacy software
packages appeared. This increased with time, as you'd expect. At some point he
decided it was inevitable, but he decided (I have no idea why, except that
he's always been quite independent) he'd write his own software. Took himself
off to night school and learned dBase III. He'd never written a line of code
prior to this.

He then proceeded to write an entire pharmacy suite of software tools that is
_still_ in use. It's been heavily updated and added to over the last 20
years, but it's still all dBase III running on DOS. It's a staggering amount
of code over hundreds of files, but it all works. I've seen the code and it
wouldn't pass a first-semester coding course: each programme is a single
10,000-line spaghetti-fest. I've tried to explain functions/methods to him,
but he didn't understand and in the end I gave up. Apparently he doesn't need
them!

I can't overstate how complex this software is: it's stock control, point of
sale (including the usual things like birthday vouchers, discounts, accounts,
etc.), prescription dispensing, patient records... everything for a modern
pharmacy. The Australian health department knows about it and it meets all
their criteria. It does everything.

It's mind-bogglingly awful code by any metric imaginable, yet it's robust,
appears to be as-good-as bug free, and has been maintained (government
regulations for pharmacy change constantly, so it's required major changes
each year to keep up to date) and in production use for almost 20 years. Oh,
and it's fast :)

[Edit: looking at the other comments it seems I've replied to the wrong thing.
I was actually writing a comment in response to this
<http://prog21.dadgum.com/87.html> (Write Code Like You Just Learned How to
Program) which was linked to from the main article]

~~~
schmrz
I wonder how much that initial version changed during the 20 years of
maintenance.

I find it hard to believe that someone was maintaining it for 20 years without
improving the quality (of the code/design) and making it easier to maintain.
Have you seen the code base of the initial version or of the version that is
currently being used?

~~~
cubicle67
I've known my wife for the last 18 or 19 years and it was written before that,
so I haven't seen the original. I have seen him debugging code in the evenings
though. This consisted of printing out the entire programme (in the dBase
sense a "programme" is a single file; a programme in our sense would be a
collection of dBase programmes), with the paper (dot matrix, so it's all
connected) trailing all throughout the house as he drew pencil lines all over
it to reconstruct programme flow. No indentation at all.

The short answer to your question is no, I didn't see any evidence of improved
structure or practices at all. All variables global. Goto ftw. No
procedures/methods/functions _at all_. He'd never even heard of the concept
when I tried to explain it.

[Edit: managed to get hold of some of the code (see parent) and it looks like
I was wrong on the indentation - he did start to use it at some point]

~~~
slurgfest
Closely looking at a printout is an underrated archaic practice.

~~~
ams6110
Ah yes, "desk checking." When compiles took hours or would only run during the
overnight batch cycle, you'd better believe it was worth spending some time
manually reviewing your code.

------
InclinedPlane
I would go farther in regard to the Computer Science point.

CS programs should be burnt to the ground, and in their place we should build
up three separate things. First, software trade schools that are actually good
(e.g. not ITT). Second, real software engineering majors at colleges that are
heavy on things like practical programming, tools (version control, issue
tracking, automated build systems), and refactoring, teach multiple languages
(JavaScript, Python, Ruby, SQL, etc.), and only delve into theoretical
underpinnings as warranted (compare electrical engineering vs. physics
programs). Third, legitimate Computer Science programs that are contractually
limited to about 5% of the current CS student capacity for at least the next
two decades, teach a very mathematics-heavy and science-focused CS program,
and have zero expectation that graduates will go on to write software in
industry after graduation.

~~~
UK-AL
The problem with software trade schools is that, even though they may not
require a pure maths focus, I still expect people who program professionally
to be good at maths. I would expect most programmers to be able to analyse
algorithms in a formal manner if they have to. And the people capable of doing
maths at that level are not the people who traditionally go to trade schools.

~~~
pepve
To make something fast or to make something scale you don't need a proof. You
need a profiler.

~~~
UK-AL
A profiler is for micro-optimisations. A profiler won't take you from bubble
sort to quicksort, for example.

~~~
Negitivefrags
A profiler will tell you whether sorting actually uses a meaningful amount of
time in your application.

If you go ahead and blindly change your bubble sorts to quicksorts, then at
best you may be wasting time doing something that has no effect on
performance, and at worst you may be making your program slower.

After all, a naive quicksort is slower than bubble sort on nearly sorted
input.

~~~
UK-AL
It does give you information on where the most time is being spent, but it
doesn't tell you what to implement. Without adequate algorithm knowledge you
might try a micro-optimisation when it really needs a completely different
algorithm.

With superficial knowledge you might stick to certain rules without really
understanding them. The bubble sort case you gave is a perfect example.

------
polemic
> _Computer science should only be offered as a minor. You can major in
> biology, minor in computer science. Major in art, minor in computer science.
> But you can't get a degree in CS._

The problem is not that universities produce poor CS majors. They don't. The
problem is that everyone else expects a CS major to be a good commercial
developer. Some are, but that's just the odds.

If you want to be a programmer, do a software ENGINEERING degree. It deals
with the practical issues and you actually do lots of real, actual
programming. Or go to a media design school, where you do lots of actual web
programming.

Expecting every CS major to be a great programmer is like expecting every
physics major to be a good baseball player. Sure, (s)he knows the optimal
angle to strike the ball to achieve a home run, but actually doing so requires
a whole lot of experience and real-world context.

(PS. I have a physics degree, and I'm a software developer, not a baseball
player. But that's because I programmed a lot for fun and profit before going
to university, and CS seemed like a big backwards step. And physics is more
fun.)

------
AngryParsley
I just read the post that inspired this post:
[http://programmers.blogoverflow.com/2012/08/20-controversial-programming-opinions/](http://programmers.blogoverflow.com/2012/08/20-controversial-programming-opinions/).
While a lot of these opinions sound reasonable, I want
to know if they're actually true. All of these statements are expert opinions,
but it's unclear how many of them are backed by research. Without controlled
studies, experts can easily believe incorrect things.

If you hold an opinion, look at the evidence backing it up. If it's not
strong, reduce your confidence. Or even better: Gather evidence, _then_ form
your opinion. And remember: Anecdotes don't count. I wish more of my
colleagues would do this, but it seems most of them haven't heard of very many
software-related studies.

If you want to learn more, I recommend <http://www.neverworkintheory.org/> as
a starting point. After reading some papers, you'll be surprised how limited
our evidence-based knowledge is. Looking at software engineering studies made
me realize that I'm not allowed to poke fun at psychology anymore. Even that
field is more evidence-based than ours.

------
hansbo

> _It's a mistake to introduce new programmers to OOP before they understand
> the basics of breaking down problems and turning the solutions into code._

Given how vividly I remember classmates with little-to-no programming
experience struggling to understand pointers while being unable to write the
simplest algorithms, I thought this was a no-brainer. Is it really a
controversial opinion?

~~~
alexchamberlain
I've never understood what's confusing about pointers...

~~~
mkopinsky
1) Indirection. Pointers require thinking in a few steps. Indirection is hard.
It is a vital skill in any kind of programming (and especially debugging), but
it does not come naturally to people who've only ever had to deal with the
concrete.

2) Early on, you learn that ints and chars and floats and some_structs are
fundamentally different data types. Then suddenly you're told that int*s,
char*s, float*s, some_struct*s, and even void*s are fundamentally the same.
Huh?

3) The fact that C uses * both to declare a pointer and to dereference one.
These are conflicting meanings, and the unrelatedness of the two concepts is
not sufficiently explained.

~~~
alexchamberlain
3) That makes sense. For example...

    int *i;

means "When you dereference i, you get an int."

~~~
mkopinsky
When I see

    int a;

I read "create an integer variable called a".

    int *i;

means "create a variable that, when dereferenced, gives an integer".

Makes sense once you understand it, but I can definitely understand how a
beginner would find it confusing.

~~~
alexchamberlain
It may be a problem with the teaching though, rather than the subject matter
itself. I understand why it takes a little bit of thinking to get used to, but
not why it is fundamentally hard.

Now, algorithms can be difficult to understand, and many of them use
pointers... Do some people conflate the two?

~~~
mkopinsky
I don't think the difficulty is fundamental, and I don't think anyone here
claimed that it was. But it is initially challenging nonetheless.

To tie this back in to the beginning of the thread, being able to understand
indirection (such as in pointers and algorithms) is a far more important skill
than understanding OOP.

------
tikhonj
Here's my version of his opinions, probably even more controversial :P.

CS should be offered as a major by itself. All the most interesting stuff is
CS-specific with indirect applications. Working on something like automatic
programming is far more exciting than working on biology or art or what have
you. (I can't think of anything more awesome or more CS-only than automatic
programming.)

It is a mistake to introduce programmers to OOP.

A complex compiler is awesome. A sufficiently smart compiler may be a myth,
but it is a _utopian_ myth; we should strive for it. However, I would take it
even further: program synthesis is better still. I'm in the business of
telling the computer _what_ to do, not how to do it, so there should be no
obvious but unnecessary correspondence between what I write and what the
computer executes--they just have to have the same semantics.

You shouldn't be allowed to write a library unless you have a thorough
understanding of programming languages and some relevant math. There is always
relevant math. Your functions should be accompanied by useful and verifiable
laws others can depend on. Or maybe everyone should be encouraged to write
libraries regardless of skill level and then the libraries could be ranked a
posteriori. Any other guidelines make less sense.

Pretty code is readable and readable code is pretty. If you can render your
code as a nice pdf and distribute it as a paper, it's about as readable as it
will ever be. Even if you can't, remember that aesthetics aren't random--there
is a reason why pretty code is pretty.

Purely functional programming is a straw man. Even Haskell lets you write code
that at least acts impure. Haskell is a local maximum. On the other hand: a
purely functional spec that the computer uses to generate a potentially impure
program _should_ work. But I've already talked about that :P.

I don't know what a "software engineering mindset" is. It sounds like
something a manager would say. Don't do stuff a manager would say. This is
unfair to good managers but still a useful guideline. Have as much fun as you
can unless people's lives are on the line.

I should note that I don't even think all these opinions are true. But a
belief does not have to be true to be _useful_. If I could boil it down to a
single sentence, it would probably be: math and CS theory aren't scary and you
should reject conventional "wisdom". But that would be somewhat cheap--two
independent and rather unrelated clauses joined with "and" may as well be two
sentences :P.

Also, there's something very appealing about throwing out intentionally
extreme opinions. I can certainly see why this guy keeps on writing his blog.

~~~
enjo
_Pretty code is readable and readable code is pretty. If you can render your
code as a nice pdf and distribute it as a paper, it's about as readable as it
will ever be. Even if you can't, remember that aesthetics aren't random--there
is a reason why pretty code is pretty._

I'll restate this: "Every programmer should know more than a little bit about
typography". I was amazed at how much learning design fundamentals improved me
as a programmer. I've learned to _communicate_ through code much more
effectively than I ever had before. Thinking about grouping, spacing, and the
like leads not only to more readable ("pretty") code but usually more
efficient code as well.

~~~
jbrechtel
Can you suggest any books?

~~~
peapicker
I recommend "The Elements of Typographic Style" by Robert Bringhurst.

------
sbt
Great stuff. Since these are controversial opinions, I somewhat
agree/disagree.

\- Computer science should only be offered as a minor. Good point, assuming
that you don't consider theory an end in itself. I consider theory
mind-opening, even if it doesn't make money per se.

\- It's a mistake to introduce new programmers to OOP before they understand
the basics... Fully agree.

\- You shouldn't be allowed to write a library for use by other people until
you have ten years of programming under your belt. Another good point. Writing
a good library (or designing an API) is one of the more challenging things you
can do, because it requires a good understanding of the problem, simple
design, and sensitivity to conventions (which only come from long experience).

\- Superficially ugly code is irrelevant. Somewhat true, but the main problem
with ugly code is that nobody wants to touch it. So if it's functionally
correct, readability remains irrelevant until you need to make a change.

\- Purely functional programming doesn't work. Agree, in the sense that there
exist problems where a purely functional approach is not the best. I think you
can get pretty far with purely functional code, though.

\- A software engineering mindset can prevent you from making great things.
Strictly true, but it should be said that the opposite mindset will eventually
destroy the things you made.

------
davedx
"Superficially ugly code is irrelevant. Pretty formatting--or lack thereof--
has no bearing on whether the code works and is reliable, and that kind of
mechanical fiddling is better left to an automated tool."

I disagree with this one. Clear, readable code is... clear and readable. It's
like not bothering to format text in a textbook because "the meat of the
matter is in there, so who cares?"

Of course substance is more important than style in programming, but style
also helps, and is it really that much effort to make sure your code is
readable for the next person who comes along?

~~~
pepve
I agree with you. What I took away from that point was to leave it to an
automated tool. The Eclipse Java formatter was the first time I saw this work.
Distribute the formatter settings among your team, make everyone set the
"format on save" option, and be done with it.

I just wish all languages had this kind of support.

~~~
qu4z-2
The "indent" program (for C) has been around for a while.

The other nice thing about automated formatting is that everyone can edit code
in their preferred format, as long as they convert it back before checking in
(for readable diffs).

------
primitur
I agree with the first 10 controversial opinions, and I agree with this
followup, to boot. All good points, all worthy of discussion.

As an autocrat, I'd also add a few more points of view that I conceive to be
contemporaneously controversial:

* Code Coverage matters. Dead code is broken code. Always.

* Programming is Always in Service To The User. The User is the only way your creative, artistic, amazing, junk of spaghetti-code crud, is going to ever get Used. Use is where your software is alive. Non-use = Dead. Thus, the USER is YOUR MASTER. Serve them.

* Pretty tools are one thing, ugly tools another thing entirely. NO! WRONG! ALL TOOLS ARE TOOLS. Use what works. If you're using something because you want to, even though it sort of doesn't work, it's no longer a tool, but instead an .. ingredient .. of something. Something else, perhaps something creative. Do that shit on your own time: use the tools which work, _at_ work.

* Discussion is the only way things ever get resolved. If you hate on something about someone, discussion is the only way the problem will ever get solved, ever. Ignoring something and being afraid to discuss really secretly means 'do not want' to solve the problem. Even vile words are still yet but words, words eventually work it out. Developers who do not use words are not the scribes they're meant to be ..

------
capkutay
> _Computer science should only be offered as a minor. You can major in
> biology, minor in computer science. Major in art, minor in computer science.
> But you can't get a degree in CS._

The way university is structured, it would be nearly impossible to minor in CS
and learn anything low-level or advanced like computer architecture or
operating systems. Perhaps you can push out a desirable employee with a
practical knowledge of programming, but could you really educate true
"software engineers" via a minor?

~~~
mkopinsky
I am a bioengineer by degree, software developer by profession. The most
valuable skill I use day-to-day is thinking like a programmer, and
understanding how to make sense of data in an intelligent way. But ultimately,
this is all about solving problems - programming is simply a tool towards this
end. Go up to a carpenter (and I mean a well-trained carpenter, the kind who
carries more than just the PHP hammer [0]) and ask him which carpentry
technique to use, and he'll tell you the right answer in a flash. But ask him
whether carpentry is even the right approach for a given problem, and I wish
you good luck.

Programming skills are important, but a deep knowledge of the problem domain
is sometimes far more critical to being able to actually solve the problem.

[0] [http://www.codinghorror.com/blog/2012/06/the-php-singularity.html](http://www.codinghorror.com/blog/2012/06/the-php-singularity.html)

------
npguy
"You shouldn't be allowed to write a library for use by other people until you
have ten years of programming under your belt. If you think you know better
and ignore this rule, then one day you will come to realize the mental
suffering that you have inflicted upon others, and you will have to live with
that knowledge for the rest of your life."

stunning.

~~~
notimetorelax
I'm actually in the process of writing a small library, but I only have 6
years of experience. I'm frightened.

~~~
lmm
Don't be. That idea is bollocks. Experience has approximately zero correlation
with ability.

(how's that for controversial?)

~~~
digitalWestie
Exactly. Does anybody really sit and write a library solo these days? There's
an omission of combined experience here. Two others like yourself would make
18 years of experience.

(I understand the combined experience thing isn't perfect logic, but that's
not really my point.)

~~~
colomon
Errr... I write libraries solo. Professionally. More or less full time.

------
walrus
_> Purely functional programming doesn't work, but if you mix in a small
amount of imperative code then it does._

That isn't controversial, nor is it an opinion. It's just the truth. Purely
functional code has no side effects. The entire point of a program is to have
side effects.

~~~
johnpattiyson
Not true. For example, a compiler can be a pure function: it accepts the
source code as input and outputs the machine code. There's no side effect
there. I admit it needs minor scaffolding to read all of stdin first and write
everything to stdout at the end, but the programme, as written by the
programmer, is a pure function.

This is one of several ways that Haskell worked before there was an IO monad
[1], all allowing useful, purely functional programmes.

[1] S. Peyton Jones. Tackling the awkward squad: monadic input/output,
concurrency, exceptions, and foreign-language calls in Haskell. Technical
report, Microsoft Research, Cambridge, 2010.

~~~
walrus
Yes, the 'minor scaffolding' has side effects. That's why I said it wasn't
controversial.

~~~
qu4z-2
The side effects in the scaffolding are purely an implementation detail.

For non-interactive use (like a compiler) you could implement the program as a
function taking and returning a string.

~~~
walrus
How would I invoke a compiler that isn't interactive? The act of invoking it
_is_ the interactivity.

------
qznc
The difference between a compiler not optimizing (gcc -O0) and optimizing as
well as possible (gcc -O3) can be an order of magnitude in performance. That
matters in many cases.

Of course, various programs (web apps) are IO-bound. And gcc -O0 should still
outperform Python/Ruby/etc.

