

Hacker style versus the Dijkstra style - mad44

Recently, I have been thinking about two opposing styles: the hacker style (rapid prototyping and incremental improvement) and the Dijkstra, or academic, style (think hard and get it right the first time).

Given a problem, the hacker style is to:

1. Suggest a partial solution.
2. Improve the solution, adding to it to cover all cases.
3. Repeat 2 as necessary.

In contrast, the Dijkstra style is to:

1. Think hard to get the RIGHT solution.
2. Justify that it is the right method; if that fails, throw it away.
3. Go to 1 as necessary.
4. Build the solution using the method.

The hacker style gets you started and enables you to make steady progress. Even though the resulting system is not always an elegant solution, you are blessed with the advantage of predictability, and of always having a more-or-less working product.

The Dijkstra style prevents you from committing the sin of over-specialization and from settling for a complex or suboptimal solution when there is an elegant one that also covers the general case of the problem. The downside is that you may not be able to make any progress at all, since you will not accept a mediocre solution.

I am sure the HN community has a lot of success stories and arguments supporting the hacker style over the Dijkstra style. I am wondering whether you have had any cases where the Dijkstra style saved you, and I am interested in hearing your arguments and anecdotes supporting the Dijkstra style over the hacker style.
======
MaysonL
The difference between the hacker style and the Dijkstra style is more
apparent than real. Both rely on successive approximation.

See for example
[http://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EW...](http://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EWD316.9.html)
where Dijkstra goes, in painstaking detail, through the process of solving the
"8 Queens" problem, building up the solution piece by piece.

A hacker would probably come up with exactly the same algorithm, though
probably written more quickly in another language, and with a few more
compile/run/debug cycles.
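For concreteness, here is a minimal Python sketch (my own rendering, not Dijkstra's EWD316 notation) of the column-by-column backtracking solution that, as the comment argues, both styles converge on: extend a partial placement only while it stays consistent, and backtrack when it cannot be extended.

```python
def solve_queens(n=8):
    """Return all solutions to the n-queens problem.
    Each solution is a tuple of row indices, one per column."""
    solutions = []

    def safe(rows, row):
        # A new queen at (row, col) is safe if it shares no row
        # and no diagonal with any queen already placed.
        col = len(rows)
        return all(
            row != r and abs(row - r) != col - c
            for c, r in enumerate(rows)
        )

    def extend(rows):
        # Dijkstra's strategy in EWD316: grow a partial solution
        # one column at a time, undoing the last step on failure.
        if len(rows) == n:
            solutions.append(tuple(rows))
            return
        for row in range(n):
            if safe(rows, row):
                rows.append(row)
                extend(rows)
                rows.pop()  # backtrack

    extend([])
    return solutions
```

For the classic 8x8 board this finds the well-known 92 solutions; whether you arrive at it by proof or by a few compile/run/debug cycles, the structure is the same.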

The thing is: the mode of attack - breaking down the problem into simpler
subproblems, solving one at a time, and in such a way that you're pretty sure
that each step is correct - is used by both successfully. It reminds me of a
study I read many years ago in Gerald Weinberg's _Psychology of Computer
Programming_. The investigators studied the work patterns of many good
programmers, and one of the variables they studied was number of runs per day
(this was back in the day when one submitted ones program as a deck of punched
cards to be compiled and (possibly) executed. They found that the good
programmers tended to divide into two groups. The first (more akin to
Dijkstra) mostly submitted a program which compiled cleanly, and executed,
followed a significant time later by another. The second group submitted a
program, got either compilation or runtime errors, quickly submitted a fixed
version, perhaps iterating a few times, then went off for a while till they
went through the quick cycle again. The point is: each group consisted of
equally good (as far as could be measured) programmers. Both styles are
usable, both styles are necessary to have in your arsenal: you need to be able
to think through a problem, and prove your solution correct; and you need to
be able to tackle it piecemeal and experimentally when it's too tough to
tackle all at once.

------
waldrews
Ok, but Dijkstra would be the last person to use a go to statement in step 3.

~~~
roberto
Quote of the week.

~~~
unalone
I never ever upvote witty statements; I think that was the first time. Far too
well done.

------
dantheman
Exactly. If you know what the problem is, the Dijkstra style is great: system
software, compilers, etc.

If you are doing anything with business logic, well, that's where hacking
takes precedence, because the problem you are trying to solve isn't well
understood.

~~~
delano
Dijkstra advocated thinking about a problem _until_ it is well understood.

~~~
dantheman
But oftentimes you don't have all the information. You can watch how people
currently do the task, but that might not be the optimal way once software
starts being used. It's better to get them using the software so that they can
suggest features, or perhaps even whole new ways of doing the task.

Not all problems can be merely thought about until they're understood; there
are quite a few that require experiments to get further information. That is
the hacking approach: rapid experimentation.

------
d0mine
MIT approach versus worse-is-better philosophy
<http://www.jwz.org/doc/worse-is-better.html>

------
stonemetal
The easy argument supporting the hacker style over the Dijkstra style is the
whole "get it in front of the customer and make sure you are solving the right
problem" thing. You can solve the wrong problem as elegantly as you want; no
one will care.

~~~
Retric
"Get it in front of the customer" is a great idea when you don't understand
the problem. Now, there are plenty of times when it's OK not to understand the
problem, but if you're building a driver and still don't understand the
problem, then something is wrong. Iteration works as a process of discovering
the question / need, but if the problem's well understood, it's not really the
best solution.

------
gritzko
A long time ago in a galaxy far, far away, I spent a month reengineering a
billing program. Previously, it needed 24 hours to issue monthly bills; after
the reengineering, it finished in under a minute. It was an easy target: the
previous guy had that "hacker" style of thinking, which had resulted in the
program being a pile of quick fixes.

~~~
dgabriel
"Hacker style" doesn't mean completely abandoning all planning and
thoughtfulness. It means rapid prototyping, sure, and (of course) constant
refactoring, not a constant deluge of slapdash quick fixes.

What you inherited was a garden-variety mess, possible from any poor
practitioner of any school.

------
anthonyrubin
I suggest a bit of both. Start with "Dijkstra style" and finish with "hacker
style". Adjust levels of each as appropriate for the project you are working
on.

~~~
nihilocrat
I prefer the opposite; hack together a prototype to get stuff moving.
Encounter issues you didn't even think about during the planning stages.
Eventually notice your code is a big morass of hacks that you don't want to
look at. Enumerate all of the issues with the hacked-together software which
could be done much better now that you've got hindsight and experience with
the problem domain, and rewrite the software Dijkstra-style.

~~~
mad44
The Dijkstra argument is that looking at examples, learning about concrete
cases may prevent you from thinking about the most general abstract problem,
for which you may devise an elegant solution. So according to the Dijkstra
argument the prototypes you create may bias/prejudice your thinking, and
should be avoided if you want to get to an elegant/optimal/general solution.

~~~
nihilocrat
That sounds right; I can think of some revamp/rewrite projects that felt
stunted because we only really thought about the problem in terms we already
created in the first iterations.

However, in practice I find that programmers don't necessarily know what the
(software engineering) problem is, or what feasible solutions are out there
(feasible != possible), don't know what the drawbacks to their plan are, and
generally feel unmotivated if they don't start banging out code relatively
soon. It seems really easy for a project to stay indefinitely in the planning
stages if you want to make sure it's a "perfect" solution, and it's very hard
to know that a solution is "perfect" without encountering its drawbacks
through experimentation, i.e. coding.

------
likpok
It seems that a good balance would be to rapidly prototype as a method of
understanding the solution (in fact, for most proofs I've done, this is
exactly the method: do a few specific cases and try to generalize them).

This then lets you start hacking right away, but without loss of direction. It
will also help you find the general solution.

~~~
mad44
Let me repeat a comment I made in response to a similar comment above:

The Dijkstra argument is that looking at examples, learning about concrete
cases may prevent you from thinking about the most general abstract problem,
for which you may devise an elegant solution. So according to the Dijkstra
argument the prototypes you create may bias/prejudice your thinking, and
should be avoided if you want to get to an elegant/optimal/general solution.

------
tow21
Partly it boils down to the sort of problem you're trying to solve.

I find that if your problem is not very domain-specific - i.e., if you can
strip out the labels and consider what you're doing as manipulations of
relatively abstract objects, rearranging lists and graphs and so on - then
up-front thinking is repaid many times over.

There tend to be lots of corner cases (what if this list is empty, what if
this graph has no nodes, ...) and accidental infinite loops you can fall into.
Thinking about it and specifying the algorithm up front forces you to take a
coherent view on your problem domain.

If you approach this sort of thing iteratively (well, if _I_ approach this
sort of thing iteratively), I tend to find that even if I can get my initial
use case working quickly, I haven't got the problem well enough established in
my own mind, so I end up fixing things in an ad-hoc manner, squashing bugs in
one corner case only to find 5 or 6 cropping up elsewhere because, e.g., in
half the code I assume X can never be 0 while in the other half I've been
using 0 as a marker value for invalidity.
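The 0-as-marker clash described above is easy to reproduce. Here is a contrived Python sketch (the names and catalog scenario are mine, purely illustrative) of two halves of a codebase disagreeing about what 0 means, and the kind of up-front domain decision that avoids it:

```python
# One half of the codebase uses 0 as an "invalid" sentinel...
def lookup_price(catalog, item):
    # Returns 0 to mean "not found" - a marker value.
    return catalog.get(item, 0)

# ...while the other half assumes prices are never 0, so the
# "not found" markers are silently counted as free items.
def average_price(prices):
    return sum(prices) / len(prices)

# The fix the comment argues for: decide up front what the domain
# is. Here, None marks absence unambiguously and the consumer must
# handle it explicitly.
def lookup_price_safe(catalog, item):
    return catalog.get(item)  # None means "not found"

def average_price_safe(prices):
    present = [p for p in prices if p is not None]
    return sum(present) / len(present) if present else None
```

With `{"apple": 2.0, "pear": 4.0}` and a lookup for a missing "plum", the first pair of functions averages in the bogus 0 and reports 2.0; the second pair correctly averages only the prices that exist and reports 3.0.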

Solving that sort of problem, I find, is much better done by thinking hard
about it beforehand and working out what you're trying to do. Of course, in
any case, you should surround your method with lots of test cases documenting
what it's doing as well.

This is very different, I think, from the argument of agile development
against waterfall-style development. The hard thinking in this case is at most
a few hours - it fits in entirely with agility.

------
lhorie
Debugging comes to mind: I think most will agree that it's much better to take
time to understand the underlying cause of a seemingly mysterious bug, and
apply a patch that makes logical sense at the correct layer of abstraction,
rather than patching and repatching the layer of abstraction where symptoms
appear, without understanding the root of the problem, just to get the code to
compile (or get someone off your back or whatever).

------
IsaacSchlueter
Often, the best approach is a blend of the two, I find.

Identify the aspects of the problem that you don't understand fully, and which
aspects are likely to change. There's usually some part of the problem that
you _do_ understand.

Then spend a day thinking up an elegant framework that allows you to
experiment without making a mess. Small classes that can be swapped out or
added and removed as needed. Modules that can be customized and moved around.
A strategy pattern that lets you tweak your algorithms as you get more data.
Whatever makes sense for your application.

"Hacker style" is no excuse for sloppiness; nor is "Dijkstra style" any excuse
for slowness.

~~~
shiro
I second that. Actually, I think the OP's description of the two styles covers
two extreme cases of one spectrum. Plus, the OP doesn't distinguish
differences in the size of the problem domain.

Standout academic papers look perfect, and it seems as if the authors came up
with the final version immediately, but in reality, many problems dealt with
in academic papers are attacked progressively. For bigger problems, you'll
find a series of papers beginning from special-case solutions and moving to
more general ones. Wiles gave a proof of only special cases of
Taniyama-Shimura, since that's enough to prove Fermat's last theorem (so I
heard; I can't understand the actual paper).

On the other hand, if you're writing a critical part of nonstop software and
trying to fix a multithreading bug, you want to think through every possible
timing combination to make sure things can't possibly break. It's much like
the process of writing a math proof.

And then there's the problem domain. The Dijkstra style works well when
applied to problems whose domain is well understood. That's why it requires
much thinking before actually writing solutions; you have to understand the
problem clearly enough. For much software, the environment in which it runs is
more vaguely defined; anything that interacts with humans has to incorporate
that big, unknown, unpredictable element called the human. Software that
supports a business must adapt to the business process, in which many details
are left to the decisions of skilled humans without any explicit rules.
Software for such problems can't have a well-defined domain before you start
writing, as the Dijkstra style requires.

------
jonsen
HackerMethod: SelectPartialSolution; GOSUB DijkstraMethod; REPEAT
AddCaseToSolution; GOSUB DijkstraMethod; UNTIL AllCasesCovered.

