
Deliberate Practice - raganwald
http://www.stubbleblog.com/index.php/2008/05/deliberate-practice/
======
misterbwong
Has anyone had any success applying deliberate practice principles to learning
how to hack? I'm having a hard time coming up with measurable units of work
because hacking is very much an art.

Measuring output by lines of code or # of functions doesn't give the right
metric. It's like measuring the # of brushstrokes a painter takes to gauge
his/her quality.

~~~
nostrademons
Hacking's an interesting case because all practice is also product. That means
that if you practice enough, you eventually "finish" your practice material
(which is great in one respect, but it means you have to find a new one to
practice on, which often has its own problems). It also means that it's
impossible to practice without a project to practice on - I know some blogs
suggest "code kata", but most of the interesting problems in software
development occur when you get past 3000 lines or so, and you can't really see
that when you're writing Quicksort for the umpteenth time.

Anyway, here's what I think the difference between an expert hacker and an
experienced non-expert is. When faced with a bug, a non-expert will fix the
bug. An expert or prospective expert will ask himself "What caused this bug,
and how can I prevent it from ever happening again?"

The answer will usually be some form of unit/regression testing, so the
prospective expert will write some tests to catch the bug and _then_ fix it.
Then they'll ask themselves "What prevented me from writing tests in the first
place?" and the answer will usually be some combination of tedium and lack of
defined interfaces. So they'll fix the first problem by learning how to
abstract test state into reusable fixtures, and they'll fix the second by
refactoring the production code so it has clearly defined public interfaces
with defined inputs and outputs.
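As a sketch of that test-then-fix loop (everything here is invented for illustration, not from the thread): suppose an `average` function crashed on empty input. The prospective expert first writes a regression test that reproduces the bug, watches it fail, and only then fixes the code, so the bug can never silently return:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical function with an imagined bug: it used to divide by
// xs.size() unconditionally, which is undefined on an empty vector.
double average(const std::vector<double>& xs) {
    if (xs.empty()) return 0.0;  // the fix, added only after the test failed
    double sum = 0.0;
    for (std::size_t i = 0; i < xs.size(); ++i) sum += xs[i];
    return sum / static_cast<double>(xs.size());
}
```

The test that caught the bug (asserting `average` of an empty vector is well-defined) stays in the suite forever as a regression test.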

Then, after doing this a few times, the prospective expert will ask themselves
"Where are my bugs coming from? What are most of my tests checking?" And if
their code is anything like mine, the answer will be "the use and abuse of
mutable state". So they'll go through their code looking for places where they
can use purely functional techniques to eliminate mutable state. Or they'll
change their APIs so that they eliminate any call-order dependencies they've
programmed in. Or they'll encapsulate their mutable state in certain well-
defined objects with known lifetimes.
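A minimal sketch of that trade (class and function names are mine, not from the comment): the stateful accumulator depends on call order and history, while the pure function gives the same answer for the same input every time, so there is no hidden state for a bug to live in:

```cpp
#include <vector>

// Before: mutable state with call-order dependencies. Any caller can
// mutate it at any time, and its value depends on the whole history.
class RunningTotal {
public:
    RunningTotal() : total_(0) {}
    void add(int x) { total_ += x; }
    int total() const { return total_; }
private:
    int total_;  // the mutable state in question
};

// After: a pure function. Same input, same output, no hidden state.
int total(const std::vector<int>& xs) {
    int sum = 0;
    for (std::size_t i = 0; i < xs.size(); ++i) sum += xs[i];
    return sum;
}
```

When the state genuinely can't be eliminated, the `RunningTotal` form is the other option the comment describes: mutable state encapsulated in one object with a known lifetime, rather than scattered across the program.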

Then the prospective expert will look at this activity and think "how could
this be made less tedious?" And the answer will often be to add some extra
abstractions which have now been made obvious because the data flow has been
exposed.

Then they add some more features, keep track of the root causes of their bugs,
and fix the _next_ most common error. (Which in my experience is usually type
errors, but unfortunately working in a statically-typed language seems to
eliminate many of the opportunities for abstraction that are exposed in the
last step, unless it's Haskell, which isn't yet practical for webapps.)

Disclaimer: I'm not an expert. Yet. I am, however, noticeably better than I
was 8 months ago when I left my last job. One of the reasons I left was
because there was a ceiling to how many of the above techniques I could apply
- I could unit test my own code, but I couldn't change the API to make it more
stateless, or eliminate nulls as a legal parameter to methods, or separate
components so they were each stubbable on the local computer.

I have known a couple programmers that were real experts though - one wrote
the filesystem and transaction server for Stratus and has been an early
engineer and tech lead (and sometimes VP of engineering) of 3 IPOs. She was
full of all these techniques, some obvious and some pretty subtle, for
reducing bug rates. For example, she'd define a private copy constructor on
any class that wasn't intended to be passed by value, so the compiler would
flag shallow copies as an error. Then she defined this in a mixin base class
and inherited from it, so she didn't need to implement the pattern in every
single one of her classes. Or she'd use references vs. pointers as a way of
flagging object ownership: they're exactly the same thing, but the compiler
won't let you delete a reference, so she'd pass by reference if ownership
wasn't being transferred and pointer if it was. Every language has these sort
of subtle tricks: in JavaScript, it's things like not extending built-in
prototypes and using closures effectively. The difference between deliberate
practice and mundane programming is that you're actively looking for patterns
like this when you deliberately practice.
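The two C++ tricks above might look something like this (a sketch under my own naming; the original Stratus code isn't shown here). The private, unimplemented copy constructor in a mixin makes accidental pass-by-value a compile error, and the reference-vs-pointer convention documents ownership in the signature itself:

```cpp
#include <string>

// Mixin: declare the copy operations private and never define them, so
// any attempt to copy a derived object fails to compile. (Modern C++
// would write `= delete` instead.)
class NonCopyable {
protected:
    NonCopyable() {}
    ~NonCopyable() {}
private:
    NonCopyable(const NonCopyable&);            // intentionally unimplemented
    NonCopyable& operator=(const NonCopyable&); // intentionally unimplemented
};

class Connection : private NonCopyable {
public:
    explicit Connection(const std::string& host) : host_(host) {}
    const std::string& host() const { return host_; }
private:
    std::string host_;
};

// Ownership convention: reference = borrowed (caller keeps ownership),
// pointer = transferred (callee is now responsible for deleting it).
std::string describe(const Connection& c) {   // borrows c
    return "connected to " + c.host();
}

void consume(Connection* c) {                 // takes ownership
    delete c;
}

// Connection copy = someConnection;  // would not compile: copy ctor private
```

The payoff is that both mistakes — a shallow copy, or a double delete from confused ownership — get caught by the compiler or made obvious at the call site, instead of surfacing as runtime bugs.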

~~~
antirez
I disagree, as I think the real difference is mainly in the design process. If
there are too many bugs, or things start to get complex, an expert hacker will
ask himself: what can I do to improve the design? To make it simpler, more
modular? If I have these problems, I probably have a design error.

Actually, most of the time, like an expert chess player, an expert programmer
can tell just from "pattern matching" and his experience what the right design
(right = simple, modular, scalable, ...) is among all the different possible
solutions. That's the difference, not testing or ways to avoid pitfalls.

~~~
nostrademons
I don't think we're really disagreeing, as usually the answer to "How do I
make sure this never happens again?" is some sort of design change. My point
is that this is done iteratively, in response to _actual_ problems, instead of
hypothetically in response to imagined problems.

I should probably also make clear that experts don't throw away everything
they've learned with each new project and start anew. They do throw away
_some_ things, and part of being an expert is knowing what to throw away and
what to keep. It's sort of an unfortunate fact of software engineering that
each problem is different: otherwise, you'd just use pre-existing software.
But there're some skills (like the C++ tricks I picked up from my Stratus
friend) that you can take with you as long as you maintain the same technology
platform, and others (like unit testing and clear component interfaces) that
apply to all platforms.

~~~
antirez
> in response to actual problems, instead of hypothetically in response to
> imagined problems.

This is really one of the differences indeed, and you stated it very well. You
are right: put this way, we are saying a very similar thing. I also think this
other sentence from your comment is very interesting:

> part of being an expert is knowing what to throw away and what to keep.

So now the picture I get is that the real difference is selectiveness: which
design to pick among the possible ones, which problems are worth fixing, which
experience is worth remembering.

Thanks for the interesting comment.

------
paulsmith
The Freakonomics guys discussed deliberate practice recently on their NYT
blog: [http://freakonomics.blogs.nytimes.com/2008/03/11/how-did-
a-r...](http://freakonomics.blogs.nytimes.com/2008/03/11/how-did-a-rod-get-so-
good/)

------
wallflower
There is a famous saying from the Army: "You don't rise to the occasion, you
fall (or default) to your level of training".

