
Why Are Projects Always Behind Schedule? - sidcool
http://priceonomics.com/why-are-projects-always-behind-schedule/
======
nhaehnle
There is actually a nugget of thought in the article beyond the explanations
you always hear (and which make up all the comments in this thread as I'm
writing this): Even if you base your project schedules on accurate estimates
of the median time required for each step, you will still be behind schedule
for most projects.

The reason is actually quite simple: when things go well, they can only go so
well, but when they go bad, they can go _really_ bad.

Or, in a more quantitative way: while each step of the project will be equally
likely to take longer or shorter than median, the steps that take longer can
take _much_ longer, while the steps that go faster only have a limited
potential for balancing out the delays.
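
A quick way to see this numerically (a sketch of mine, not from the article; the lognormal step durations are just an illustrative assumption):

    import random, statistics
    
    random.seed(1)
    N_STEPS, RUNS = 10, 100_000
    MU, SIGMA = 1.609, 0.7          # lognormal with median exp(1.609) ~ 5 days and a long right tail
    
    schedule = N_STEPS * 5.0        # plan built by summing the per-step medians
    totals = [sum(random.lognormvariate(MU, SIGMA) for _ in range(N_STEPS))
              for _ in range(RUNS)]
    
    late = sum(t > schedule for t in totals) / RUNS
    print(f"planned {schedule:.0f} days, mean actual {statistics.mean(totals):.1f} days, "
          f"late in {late:.0%} of simulated projects")

With made-up numbers like these, most simulated projects come in later than the median-based plan, even though every single step estimate is "accurate" in the median sense.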

~~~
astazangasta
According to this reasoning you should set targets based on expected time, not
"median" time. You're saying the tail of "completion time" is long, and so the
median is a poor estimator for expected time. But if you actually knew the
distribution of times for task completion (which is implied if you know the
'median time'), you could simply use the expected value of this distribution
and your projects would then tend to complete on time.

I don't think this is the correct explanation. I think it's far more likely
that (1) people don't know the true distribution of task times, so estimates
are just crap guesses based on hubris, what managers want to hear, etc., and
(2) scope creep.

~~~
21echoes
Exactly. All this article is really saying is "most of the time, adding
together median expected times underestimates the sum of mean expected times".
Well then, easy fix: add together the mean expected times off the bat.

~~~
gkop
There's more to it than your easy fix; the article alludes to it but does a
poor job of covering it: if you know the distributions of the intermediate
steps but only sum the means of the steps, then you are throwing away data
that could help you calculate a better estimate.
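
For instance (a sketch of mine, with invented triangular step distributions): instead of collapsing each step to its mean, you can sample whole-project totals and quote, say, an 80th-percentile date, which the sum of means alone can't give you.

    import random
    
    # invented (optimistic, most likely, pessimistic) figures for three steps
    steps = [(2, 3, 10), (1, 2, 8), (4, 5, 20)]
    
    totals = sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in steps)
        for _ in range(50_000)
    )
    
    sum_of_means = sum((lo + mode + hi) / 3 for lo, mode, hi in steps)   # mean of a triangular
    p80 = totals[int(0.8 * len(totals))]
    print(f"sum of step means: {sum_of_means:.1f}   80th percentile of the project: {p80:.1f}")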

~~~
21echoes
Oh of course! More data (sanely applied) is almost always going to result in
more accuracy. I just found it very strange that the core assumption of the
article -- that the default setting is to add median times of subtasks -- was
never questioned. Why was that the chosen method to begin with? Most time
estimating tools (Pivotal Tracker, etc.) work with averages, not medians, for
precisely this reason.

------
te_platt
This is a great writeup on a simple bit of math that violates common-sense
expectations. Suppose you need to make a 100-mile trip and want to average
100 miles per hour. Conditions are bad, and for the first 50 miles you only
average 50 miles per hour. How fast do you need to go to average 100 mph for
the whole trip? Clearly 150 mph, right? Except of course it's already too
late: you would have to go infinitely fast, since you need to complete the
trip in one hour and have already used an hour.
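
Spelled out (trivial arithmetic, but it is exactly where intuition slips):

    time_budget = 100 / 100      # 100 miles at an average of 100 mph: 1.0 hour total
    time_used   = 50 / 50        # first 50 miles at 50 mph: 1.0 hour already gone
    time_left   = time_budget - time_used    # 0.0 hours for the remaining 50 miles
    # required speed for the rest = 50 / time_left -> division by zero: no finite speed works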

From the article: "But while there’s a lower bound to how “under” the median a
step can be – a step can’t take negative time – there’s virtually no upper
limit to how much over the median time a project can take."

So when one step takes a little longer than expected, most people fall into
the 150 mph trap and get frustrated when they don't meet the false expectation.

~~~
mattmcknight
Steve McConnell has this example in sin #5 of his 10 deadly sins of
estimation...

[http://www.ewh.ieee.org/r5/central_texas/austin_cs/presentat...](http://www.ewh.ieee.org/r5/central_texas/austin_cs/presentations/2004.08.26.pdf)

------
ThomPete
Time estimation is an industrial way of thinking applied to a post-industrial
world.

In the post-industrial world, time isn't the problem; project definition and
scoping are.

In the industrial world, the problem was already solved (the machine was built,
the market often established, and output depended on a few factors that could
be adjusted: need more output, add more of X).

In the post-industrial world, every project is about problem solving and
scoping.

To put it in perspective: if we applied post-industrial reality to the
industrial world, it would mean that each time a product needed to be made, if
not the factory, then the machines would have to be developed anew.

It will take many, many years before time estimation dies, but it will happen.

~~~
calinet6
That's a good point, but it has two issues you should consider.

First, the industrial world _also_ had significant issues with project
definition and understanding. The problems were not already solved, and the
process, inputs, outputs and the whole system was constantly changing. Every
project was already about problem solving and scoping. So there's a bit of
rose-colored glasses toward the past here.

Second, the idea that time estimation is no longer applicable is probably off.
There are two separate problems: problem definition, and problem solving, and
estimation is extremely useful in the latter. Problem definition is a
different problem that still needs much focus, but it doesn't preclude the
need for better understanding of time to coordinate other processes and
dependencies. You might be saying that those dependencies aren't as important
as we think, and I tend to agree, but that's a different argument.

Generally speaking, the idea that knowledge and skills from the industrial era
are no longer applicable is untrue. There is a huge body of knowledge about
how products are made and built that has 99% applicability to software and
technology in the post-industrial world. This is because the problems are the
same: management of people, leadership, understanding interactions within
complex systems, understanding statistics (the importance of which this
article proves profoundly), and improving the spread of knowledge. This is the
way that Toyota began operating in the post-WWII era, the way W. Edwards
Deming modeled companies, and the way that the current Lean movement guides
you to improve almost any business. It's highly relevant.

The main point we should take away is that time and estimates are not
constraints on a system, but rather outputs that are predictable and follow
statistical patterns. We can use those outputs to make better decisions,
especially if we understand the whole process of production in a systemic way.

Estimation need not die. It's a tool for good in the hands of a systems
thinker.

~~~
ThomPete
It is my experience that there is absolutely no predictability in learning
from the past unless you are doing exactly the same thing. For something like
an agency that is hired to come up with something new, that's normally not an
option.

~~~
calinet6
The idea that each project we tackle is unique and a special snowflake is an
illusion.

In reality, most of the factors that influence the outcomes are exactly the
same. Same team, same knowledge, same approaches, same psychological biases,
same methods, same politics, and so much more.

Those are the things that influence timelines most; not the project itself.

~~~
ThomPete
It's not that the projects are unique; it's that the problems that arise in
them are unique.

------
bikamonki
Simple: time is a resource, it is a cost. When one bids for a project, time is
also taken into account when selecting a vendor. So, just as one tries to
offer the best price and experience, the same goes for time. One knows that
the project will most likely take longer, but if that realistic timeframe is
put into the proposal, there is a risk of losing the contract. It is a safer
bet to ask later for reasonable time extensions than to not get the project at
all.

~~~
mikhailfranco
Yes, I call it the 'conspiracy of optimism'. The bidder wants to win the
business and implement the project, the customer-user wants the capability
provided by the project, so they conspire against the customer-payer.

Realistic proposals fail. An under-resourced, under-priced, and short-scheduled
proposal is functionally approved by the customer-user, then signed off as the
cheapest/shortest option by the customer-payer.

The real negotiation is in the T&Cs for change management, scope creep,
responsibilities for delays and the structure of payment according to
milestones.

------
_paulc
The fact that planning estimates represent a probability distribution is well
known, and there is an established process (PERT)[1] for estimating the
expected time of a set of estimates. Essentially, instead of asking for a
single 'most likely' estimate, you should also explicitly walk through some
of the risks and then ask for an 'optimistic' estimate (which is frequently
very similar to the initial estimate) and a 'pessimistic' estimate (which is
frequently much larger) - given:

    
    
      m = 'most likely' time
      o = 'optimistic' time
      p = 'pessimistic' time 
    

You can then estimate the expected time 'e' by modelling a beta (PERT)
distribution based on these and summing the estimates on the critical path:

    
    
      e = ∑ (o_i + 4·m_i + p_i) / 6
    

This is actually a very useful technique, but sadly it is done very
infrequently (probably because it usually comes up with a number that people
don't want to hear - but one that is usually much more realistic).

[1]
[https://en.wikipedia.org/wiki/Program_evaluation_and_review_...](https://en.wikipedia.org/wiki/Program_evaluation_and_review_technique)

The program (or project) evaluation and review technique, commonly abbreviated
PERT, is a statistical tool, used in project management, which was designed to
analyze and represent the tasks involved in completing a given project. First
developed by the United States Navy in the 1950s, it is commonly used in
conjunction with the critical path method (CPM).
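
Plugged into a few lines (a sketch of mine; the task numbers are invented, and the per-task standard deviation (p - o)/6 is just the textbook PERT rule of thumb):

    import math
    
    # invented (optimistic, most likely, pessimistic) estimates for the critical path
    tasks = [(2, 4, 12), (1, 2, 6), (5, 8, 30)]
    
    expected = sum((o + 4 * m + p) / 6 for o, m, p in tasks)
    naive    = sum(m for _, m, _ in tasks)
    sigma    = math.sqrt(sum(((p - o) / 6) ** 2 for o, _, p in tasks))   # assumes independent tasks
    
    print(f"sum of 'most likely' estimates: {naive} days")
    print(f"PERT expected time: {expected:.1f} days (± {sigma:.1f} days, one sigma)")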

~~~
netghost
The typical PERT three-point distribution is better than assuming the mean,
but does have a few drawbacks when modeling joint probable outcomes. I
honestly can't remember if it over- or under-represents the optimistic or
pessimistic result.

The other quirk is that it breaks down when you map from effort or work to
duration, the actual time something takes when you factor in availability,
interruptions, communication costs, etc.

~~~
_paulc
I think that the real value is in forcing people to think about optimistic vs
worst case estimates - in my experience the initial estimate is almost always
actually the optimistic number and when they start to think about risks people
tend to become much more conservative.

------
LoSboccacc
There are unknown unknowns in projects; people won't pay you for a full
analysis, and there is always the possibility that a library you rely on has a
nasty bug you cannot foresee.

One doesn't have to go very far looking for psychological issues when a major
source of indeterminism comes directly from the imperfect tooling we use.

Sometimes software is compared to construction, except you don't get to start
with a perfectly detailed environment to build your software on. As long as we
have this major source of indeterminism in the tools themselves, pretending to
fix issues in the estimation process alone is a fool's errand.

I believe an AS/400 project can be estimated fairly well now, based on
previous experience. But how would you go about estimating a project relying
on, say, local browser storage? The only way is to go and build it and
identify all the pitfalls yourself, and then the browser landscape changes and
you get set back all over again.

Just following Safari's rules on iOS to obtain a full-screen mode sets us back
a week on almost every release. We started before they started messing with
stuff; now we include a week of fixes in the schedule for every major iOS
update, but we had no chance of predicting this when the project started.

~~~
jacques_chester
Building construction is actually a good analogy.

For stuff that's been built a thousand times, you can state with a fair amount
of confidence how long a project should take and how much it should cost. You
can obtain financing and insurance because the dataset is large enough.

For stuff that's unusual or bespoke -- the kind of thing that will appear in
an architecture magazine or in a newspaper investigatory report -- then
estimates are very likely to be wildly optimistic.

So it's the same, insofar as the further you stray into research, the less
certainty there is. The bigger the bet, the fewer such things have been built,
the bigger the risk will be that things go awry.

I have a book in my collection -- _Industrial Megaprojects_ -- which makes
fascinating reading for enumerating all the ways that chemical process plants,
giant mines, gas pipelines, gigantic factories etc can blast through the
budgets and schedules.

My personal favourite: a chemical plant built relying on an adjacent river for
cooling. To save costs, only one water temperature sample was taken during
planning ... in winter. Three billion dollars later, the owners found that the
plant was inoperable for about half the year because the river water was too
warm.

~~~
RyJones
I once worked on building a bank where a detail cost us a bunch of heartburn
in estimating and building. Having the architect stop by, look at it, and say
"I had no idea if you guys could build that" was infuriating.

------
zacharycohn
This is such an elegant way of explaining the problem:

For every step in a project, there’s about a 50% chance of completion under or
on the median step completion time. And there’s about a 50% chance of not. If
a project is composed of 2 steps, the probability that both steps are at or
under their median times is 50% * 50%, or 25%. For a 3 step project, it’s 50%
* 50% * 50%, or 12.5% and so on. If a project has 6 steps, the chances of some
of those steps going over its median is greater than 98%.
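
The same arithmetic in one loop (assuming independent steps and an exact 50% chance of running over on each):

    for n in (2, 3, 6, 10):
        print(f"{n} steps: {1 - 0.5 ** n:.1%} chance that at least one step runs over its median")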

~~~
jldugger
But some of those steps will go under the median as well. The problem is the
distribution is not normal -- if the range of possible outcomes below the
median n is [0,n], the range of possible outcomes above the median is far
wider than [n,2n]. Going by the chart the article offered, I suspect a sort of
mixture distribution: the better understood tasks have normal distributions,
and the poorly understood ones have bullshit estimate distributions.

------
yason
When I was doing more project work I outright refused to give estimates,
explaining that I could come up with a random figure that's so low they won't
believe it or so high they won't like it, and in any case the figure would
have nothing to do with when the project would be complete.

Instead I offered broad time ranges that would narrow down to more accurate
ones as work progressed. The managers who had to report those upwards didn't
always like that at first, but they grew to appreciate how the dynamic worked
across the project timeline. In the beginning, they didn't know how long
because nobody knew how long. Well, maybe definitely more than a month and
definitely no longer than a year, and "it all depends". But every week we knew
more and they knew more, and the time margins could be shrunk incrementally as
soon as difficult tasks turned out to be not so difficult. So the more
progress we made, the better everyone knew how we were doing.

That is one gratifying slide to completion but I do admit it probably doesn't
work for every team or company.

~~~
skeolawn
"instead I offered broad time ranges that would narrow down to more accurate
ones as work progressed"

Well, that is an estimate according to managers who actually understand what
they're doing ;-)

------
CM30
There's also feature creep, which is a pretty big problem in a lot of less
organised companies and organisations. The original design may well be a six
page website with mostly static content, but the final version could just as
easily be a web app with a built in help/support/ticket/chat system, member
accounts and live updating from a third-party API. One two-week estimate
easily becomes a six-month one.

Or it's delayed because of more and more edge cases being found and someone
not wanting to be consistent about them. If the first, third, ninth,
sixty-second, and hundredth items in a loop all have to act differently for
seemingly random reasons, that adds a lot of time to the development.

It unfortunately doesn't matter how good your estimate is if the
company/client tosses out the original spec at the first possible opportunity.

But for well managed projects, yeah, it's a good writeup.

------
xarien
Here's the other piece: Estimations happen prior to negotiations. This is why
you see so much "buffering." A much better approach is to figure out your
budget / schedule first and then work on what scope can fit within that
pre-allotted budget / time frame.

------
markbnj
Hey, thanks for the twitter:description that showed up when I shared this in
slack. Not nice, especially given that nothing in the piece is about the
intelligence of bosses. Someone needs to grow up.

<meta name="twitter:description" content="Possibly because your boss is an
idiot.">

------
jwatte
Another common source is delay and procrastination up front.

First, a dev estimate is given of six months.

Plan is made to ship in six months.

Then, stakeholders bicker for four months on whether, how, and when to
actually do it.

Then, dev gets started, being told that they already spent four months, so
they should be done in two, according to the "initial estimate."

Was it "debugging the development process" that called this out? Or "code
complete" perhaps?

------
jdmoreira
I shared this with my bosses on slack and the subtitle for the link on slack
is 'Possibly because your boss is an idiot'. FML!

~~~
bgilroy26
Well, at least now they know where they stand with you

------
LukaAl
The article is really cool, even though it is a little bit simplistic in my
view (and experience). I've also made this comment on the article, by the way.

The first problem is that steps are rarely independent of each other. I
haven't seen hard data on this (and if they have hard data, they should look
at it), but my guess is that if one step falls behind schedule, there's a
higher probability that the next step falls behind schedule too. That's
because the reason a step is behind schedule is not purely random but depends
on internal elements of the project (it is harder than usual, it uses new
tech, a person is overloaded and always late...), and these problems will not
magically disappear in the next steps.

The second issue is that projects are not as simple as a sequence of tasks.
They usually have tasks in parallel, with different dependencies and sync
points. Accounting for time variations on each step quickly becomes a total
mess.
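
A toy simulation of the first point (mine, not from the article; the "carry-over" factor is an arbitrary assumption, there only to illustrate correlated delays):

    import random, statistics
    
    def simulate(n_steps=6, median=5.0, carry=0.5, runs=50_000):
        """Compare independent steps with steps where half of any overrun
        bleeds into the next step (a crude stand-in for correlated delays)."""
        indep, corr = [], []
        for _ in range(runs):
            total_i = total_c = prev_overrun = 0.0
            for _ in range(n_steps):
                d = median * random.lognormvariate(0, 0.6)   # right-skewed step duration
                total_i += d
                d_c = d + carry * prev_overrun
                prev_overrun = max(0.0, d_c - median)
                total_c += d_c
            indep.append(total_i)
            corr.append(total_c)
        return statistics.mean(indep), statistics.mean(corr)
    
    print(simulate())   # the correlated variant averages longer than the independent one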

------
jacques_chester
Where I work we don't estimate in time, we estimate in points. Points are
intended to represent engineering's view of the relative complexity and
uncertainty of a given story.

Pivotal Tracker then looks at story delivery over the past 3 weeks and gives a
simple average: velocity. You can then look forward to see approximately when
future stories will be completed. You can also see a volatility measurement,
which characterises how much velocity is fluctuating.
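
In sketch form (my own, not Tracker's actual code), the projection boils down to something like:

    def project(recent_iteration_points, backlog_points):
        """Velocity = simple average of points delivered in recent iterations;
        the remaining backlog is projected forward at that rate."""
        velocity = sum(recent_iteration_points) / len(recent_iteration_points)
        return velocity, backlog_points / velocity
    
    velocity, iterations_left = project([13, 8, 11], backlog_points=64)
    print(f"velocity ~{velocity:.1f} points/iteration, ~{iterations_left:.1f} iterations to finish")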

Our horizon is deliberately short, because we move very quickly.

Nevertheless, it has a simple advantage: it is based on the true and most
recent data of the exact project you are estimating.

Other estimation techniques are useful in other situations, but simply being
able to say "those are the _actual numbers_ for _this_ project _this month_ "
is enormously powerful. There's no fudging. The numbers are right there in
black and white.

It usually takes a little while for people new to this approach to accept that
velocity is not a target; it's a measurement only. It's a unitless measurement
that is only meaningful within a single project, operating at a floating
exchange rate with calendar days.

One last thing that helps, as others have pointed out. Break down your
estimation tasks into smaller units. Never accept the small headline that
hides a big feature. Continuously look for seams to break big stories into
small stories. When the pointing begins, summarise aloud with fellow engineers
a rough idea of what will need to be done.

Psychologists call this the "unpacking effect", and I suspect that it's
responsible for most of the estimation-increasing power discovered in more
fully-dressed estimation techniques like PERT, parametric estimation tools or
even good old fashioned checklists.

(I was working on an estimation tool for a while, so this subject is dear to
my heart).

------
Mz
My son keeps telling me about some study that found that optimists fall
further behind than pessimists, but it wasn't what they expected. They
expected (as a made up example) that if it required 30 days, pessimists would
estimate 40 and perhaps come in on time and optimists would estimate 20 and
not make it. Reality was more like it would require 40 days and pessimists
would estimate 30 and optimists would estimate 20. They both got it wrong,
optimists just got it more wrong.

Another issue: It is really common for people to "take some well deserved time
off!" when they finish some piece of the project earlier than anticipated,
thus flushing away time that could have helped out on parts that will take
longer than expected.

Cuz: Humans.

------
paulsutter
Evolution. In the distant past, some humans were able to accurately estimate
schedules. Since they knew the true difficulty of any task, they didn't
bother.

So only the cockeyed optimists reproduced. That's why we can't estimate
schedules today.

------
dools
The time it will take to complete a project is fixed but unknowable. The only
thing you can control is the scope and the cost for each unit of work.
Therefore the only strategy is to do the least amount of work you can possibly
do for the least possible cost. If someone requires you to fix your costs you
have to charge them a lot more money. Best to tell them that whatever happens,
nothing could have been done to improve it and make sure they're willing to
compromise heavily to get it live.

------
SixSigma
Hofstadter's Law + Parkinson's Law + Murphy's Law = Delay

~~~
rwcarlsen
Hofstadter's Law is my favorite law of all time:

"It always takes longer than you expect, even when you take into account
Hofstadter's Law."

— Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid

~~~
AndrewKemendo
This always bugged me because it means every project estimate should be
infinite in length.

~~~
analog31
Zeno's Paradox. The solution is that a series can converge.

~~~
TheOtherHobbes
But then you'd expect it to converge, and that would break the law.

Cost estimation is an NP problem. Perhaps it will become practical when we can
use a quantum supercomputer to estimate the time needed to build a single page
web app.

~~~
ArkyBeagle
No, because you'll just ask it to build it in constant time. Software costs
more to measure than to do.

------
connect2hcb
One reason perhaps has to do with what really happens when someone is trying
to win a project. Given that winning involves beating others in the fray,
everyone, including the winner, ends up making commitments higher than the
capabilities and competencies available to them at that point in time. This
initial gap creates risk right at the beginning and, if it is not mitigated in
time, amplifies to bigger proportions. The mismatch eventually results in
projects running behind schedule, which in turn becomes a self-feeding vicious
cycle, with short-cuts being taken to cover up. That then leads to a
compounding effect on schedule slippage.

------
andrewclunn
Because prototyping requires you to build and then get feedback, and because
almost no company knows how to actually plan a project in isolation rather
than going through multiple iterations. Then rework (where you're redoing
things based on user feedback and changing requirements) causes forward
progress to not be the sustained motion that management always expects. That's
why.

------
daltonlp
Projects end late whenever they are started late.

If a project is a month past its deadline, that means it should have begun a
month sooner (in order to meet the deadline).

It's surprising how little attention is paid to this fact in the professional
world. So many teams and managers focus on the end date, and ignore the start
date. In truth, the start and end dates are _precisely_ equal in importance.

------
netghost
If you want a tool that goes beyond assuming the mean, take a look at
LiquidPlanner. It can model complex plans, and is based on ranged estimates
that let you capture uncertainty in ways that most other tools can't.

Full disclosure: I wrote a large chunk of the scheduling engine, so if you're
curious about it, just shout.

Heads up though, my son was just born, so it might take me a while to respond
;)

------
agarden
The main problem with the typical estimate is that it is a scalar number: two
hours, or five days, or three months. That's fine if you are looking for a
target to aim for. But if you want to actually know when something will be
done and how much it will cost, a probability distribution with a confidence
interval is the only reasonable way to model that.

------
mark_lee
Because nothing is perfect, not the project, not any part of the project. Each
time you make some tiny piece a little bit better, you spend a little bit more
time, and you slide a little further behind.

When you accept this fact, sometimes projects get done on schedule.

------
marvel_boy
First time I've seen a credible take on this subject! Great writeup.

------
elorant
Cheap, fast, good. You can only have two.

~~~
analog31
In practice, you can't even have two. That's because the cheap-fast-good rule
assumes that you're already close to an optimum, whereas in practice, things
are expensive, slow, and bad, for reasons that are unrelated to one another.
As a result, a team will lower the requirements, or agree to spend more money,
and discover that the project still takes just as long. I've seen this happen
over and over.

~~~
jodrellblank
In the words of DevOps_Borat: _Software project 1) On time 2) On budget 3)
With quality. You can not able pick any._

- [https://twitter.com/devops_borat/status/289782091250532352?l...](https://twitter.com/devops_borat/status/289782091250532352?lang=en)

------
known
Requirements != Expectations

------
juskrey
Asymmetry. No project can finish in negative time.

------
wldcordeiro
Hofstadter's Law rears its head once again.

------
pinaceae
Because humans suck at predicting the future.

~~~
Falkon1313
This is actually a good answer. If we knew how long it would take to finish
this project, we would also know how long it would take a given horse to
finish a race, or when a stock would hit a certain price point on the market.
We wouldn't need to be working on the project because we'd be inordinately
wealthy.

All this talk of medians and means assumes that you've done the same thing
many times before (otherwise you wouldn't have those statistics). In software,
if you've done it before, then it's already done so the estimate is zero.

Projects always involve doing new things, new requirements, using new
technology, techniques, or tools; targeting a new system, and/or using new
processes and people. New requirements, constraints, market situations,
priorities, and discoveries rise up during the project. Sometimes all of the
above. Past performance is not a predictor of future results.

It's more realistic to flip it around. Prioritize the most-important
requirements and set an initial deadline. Then you'll get something (the most
important stuff, or at least some progress towards it) by the deadline and can
decide whether to add the less important stuff afterward.

When doing that, quality and other intangibles need no longer be overlooked
but become part of the prioritization. Do you want high quality, or more stuff
by the deadline? Do you want the team functioning well for the long haul or is
it worth burning them out to rush this, then having downtime afterward?

To get more control, shorten the deadlines to the minimal point where you're
still getting deployable chunks of acceptable quality. That acknowledges the
inherent uncertainty and gives better control and visibility than just making
up a random number (or 3).

------
mikerichards
Because people estimate based on ideal time.

------
Shivetya
Because when you are in a business that follows a process, you are in a
business that can hide behind that process, and no one has to accept
responsibility.

~~~
Retra
??

Are you saying businesses shouldn't have processes?

