

The planning fallacy, and how to fix it - fortes
http://www.overcomingbias.com/2007/09/planning-fallac.html

======
fortes
I thought these two parts were the most interesting:

"A clue to the underlying problem with the planning algorithm was uncovered by
Newby-Clark et al. (2000), who found that:

Asking subjects for their predictions based on realistic "best guess"
scenarios; or

Asking subjects for their hoped-for "best case" scenarios...

...produced indistinguishable results."

and:

"So there is a fairly reliable way to fix the planning fallacy, if you're
doing something broadly similar to a reference class of previous projects.
Just ask how long similar projects have taken in the past, without considering
any of the special properties of this project. Better yet, ask an experienced
outsider how long similar projects have taken."

------
marksutherland
I've been told a good rule of thumb for producing reasonable estimates is to
estimate how long you think a task should take, then double the answer and
add 10%. I'm always reluctant to follow this advice, because quoting a figure
so much larger than what I actually expect feels like admitting I'm
inefficient, but in practice it turns out to be about right.
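
As a minimal sketch (assuming "add 10%" means 10% of the doubled figure,
which the rule leaves ambiguous), the heuristic is just:

    # Hypothetical illustration of the "double it and add 10%" heuristic.
    def padded_estimate(gut_estimate_hours):
        doubled = 2 * gut_estimate_hours
        return doubled * 1.10  # i.e. 2.2x the original gut feel

    print(padded_estimate(10))  # a "10 hour" task becomes 22 hours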

Perhaps also worth mentioning Hofstadter's Law:

"It always takes longer than you expect, even when you take Hofstadter's Law
into account."

~~~
swillden
I've jokingly suggested for years that the right approach is to make your best
estimate, then multiply by pi.

Not only does it boost the estimate enough that you're likely to hit the
target, it also adds a lot of bogus precision to the numbers that the
less-than-clueful will interpret as accuracy.

------
Empact
Very interesting. It happens that Pivotal Tracker
<https://www.pivotaltracker.com/> (full disclosure: it's from my employer,
Pivotal Labs) is built to combat this bias via the concept of "emergent
iterations." You specify "points" for each fine-grained task, and Tracker
measures your velocity over time on these tasks and predicts future progress.

Therefore, the question for the user becomes "what other tasks is this task
like?", at which point Tracker can be the external observer, noting how long
those tasks actually take. Interesting stuff, so I wrote a short post on this:
<http://pivotallabs.com/users/woosley/blog/articles/724-pivotal-tracker-and-the-planning-fallacy>
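
For a concrete feel, here's a rough sketch of what that velocity forecast
amounts to (my reading of the idea, not Tracker's actual algorithm):

    import math

    def forecast_iterations(points_per_past_iteration, remaining_points):
        # Throughput observed from history, not per-task optimism.
        velocity = sum(points_per_past_iteration) / len(points_per_past_iteration)
        return math.ceil(remaining_points / velocity)

    # Team finished 8, 12, and 10 points in the last three iterations;
    # 45 points of estimated work remain.
    print(forecast_iterations([8, 12, 10], 45))  # -> 5 iterations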

------
sachmanb
Whenever I come across articles discussing development estimation, I'm
reminded of the study discussed in Peopleware, the one in which the team that
made no estimates at all finished first.

Book summary with a table of some data from Michael Lawrence and Ross
Jeffery's 1985 study: <http://javatroopers.com/Peopleware.html#Chapter_5>

The only way to estimate (and I don't mean guess) that I have found to be
reliable is the one mentioned in this article: use previous data. And it only
applies to the extent that what you're doing this time is very similar to
what you did last time. Once you throw in any research, new technologies, or
new methodologies, the method no longer applies.
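
A hedged sketch of what "use previous data" can look like in practice
(hypothetical numbers, and assuming you've logged the durations of similar
past projects):

    from statistics import median, quantiles

    def reference_class_estimate(past_durations_days):
        # Summarize the reference class, ignoring this project's "special" details.
        typical = median(past_durations_days)
        pessimistic = quantiles(past_durations_days, n=10)[-1]  # ~90th percentile
        return typical, pessimistic

    print(reference_class_estimate([30, 45, 60, 38, 90, 52, 41, 75, 66, 48]))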

For this reason, I find value in separating Research (testing new methods,
new technologies, integration methods) from Development (applying what you
already know). Research, as far as I can tell, cannot be estimated.

------
jwilliams
The thing with planning and estimation is that it's really the actual work in
microcosm - you're building a model of what you want to do.

The tradeoff is: how much time do you want to spend on the model instead of
on the thing itself? You can get a 100% accurate estimate if necessary -
i.e. by doing the project and seeing how long it takes.

For tasks that are reasonably repetitive or normalised, this can be
straightforward, and you can reuse old models. The problem is that larger
software projects (like a unique building) usually involve either (1) high
levels of complexity, such as legacy data or large amounts of integration, or
(2) a lot of otherwise novel aspects, such as a new bit of R&D.

~~~
Eliezer
But the errors aren't random; they are (experiment shows) overwhelmingly in
the direction of optimism. How can it take lots and lots of expensive little
detail work to correct such a gross statistical bias with a known direction?
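
In code, the cheap correction being pointed at here could be as small as one
multiplier (a sketch, assuming you keep estimated/actual pairs from past
projects):

    def bias_factor(history):
        # Average overrun ratio across past (estimated, actual) pairs.
        return sum(actual / estimated for estimated, actual in history) / len(history)

    history = [(10, 18), (20, 35), (5, 11)]  # hypothetical past data
    print(bias_factor(history) * 12)  # debiased forecast for a new "12 unit" estimate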

------
bd
Besides all the popular explanations of "failure to plan" (based on various
cognitive biases and reasoning errors), there could also be another effect in
play.

Even someone who could produce a perfect estimate may sometimes find it more
strategic to present a more "optimistic" estimate to others.

Deliberate underestimation can be used to get approval for a project that
otherwise wouldn't get through (if a more realistic but much larger estimate
were given).

This can be a way to make the sunk cost fallacy [1] work for you.

 _"It's easier to ask for forgiveness than it is to get permission"._

If we look at those example "estimation failure" projects, the tactic did
actually work for them in the end. They were not canceled; additional money
was poured into them. Maybe if the initial estimate had been a correct $102M
(instead of a massive underestimate of $7M), Sydney wouldn't have its
landmark at all.

Actually, such strategic underestimation can work as a motivational tool even
for a single actor (with no external parties involved). What is daunting
under a realistic assessment can become more palatable when taken in smaller
chunks.

[1] <http://en.wikipedia.org/wiki/Sunk_costs>

------
ams6110
This is the idea that underpins "planning poker," used in some agile software
methodologies. You estimate a feature's required effort relative to other
features that you (or the team) have completed in the past.

It's interesting to me, because I've always been more partial to Joel
Spolsky's philosophy that only the developer who implements a feature can
estimate it, and only after the feature has been designed in detail. This
study says that is all wasted effort (it's the "inside view").

