
Why Development Teams Struggle to Deliver on Time, on Budget, or at All - encorekt
https://www.7pace.com/blog/software-development-planning-fallacy
======
Aaargh20318
The reason that planning software development is bullshit is that you simply
cannot know all the little details because in software development you're
always doing something you've never done before (because if you did you could
just copy-paste your previous work).

To use the 'going to the supermarket for milk' example. I could make a fairly
accurate estimate for that because I've gone to the supermarket hundreds of
times. I know all the different things that can go wrong and account for them,
because I've encountered them before. The elderly person who wants to pay cash
and has a bag full of coins. The guy who finds a 10 cent discrepancy on his
€92,30 receipt and has to argue for 5 minutes with the cashier (while the row
behind him keeps growing). etc. etc.

Now imagine that today is your first time going to a supermarket. In fact,
before today you had never heard of the concept of supermarkets, or milk for
that matter. How good will your estimate be?

The only time you can make a decent estimate is after you've finished. Or to
put it differently: making an accurate estimate is possible, if you're going
to accept that making the estimate is going to take a long time, but I can't
tell you how long.

Development time estimation, and every methodology that attempts it (I'm
looking at you, Scrum), are little more than desperate attempts by managers to
feel in control and relevant.

~~~
jbigelow76
_The reason that planning software development is bullshit is that you simply
cannot know all the little details because in software development you're
always doing something you've never done before (because if you did you could
just copy-paste your previous work)._

If you can plan and execute putting a person on the moon, you can plan and
execute software development.

I think the problem is that the overwhelming majority of software development
is so low stakes, so easy to replace, so accepting of bugs as par for the
course, etc., that most orgs have stopped putting real effort into planning
AND executing on that planning, instead just cargo-culting the planning phase.

~~~
wvenable
You've missed the point. Nobody can (or did) accurately estimate how long the
planning and development would take to get someone on the moon. Half the
materials weren't even invented yet in 1962. Nobody knew that Velcro was
flammable in pure oxygen.

Sure, once all that expensive and unknowable planning and development is done,
you can double-click that executable and put a man on the moon.

------
kareemm
In my experience talking to hundreds of software engineering managers, running
a software project management business, and being involved on software dev
teams for 20+ years, there are two reasons why teams don’t ship or ship poor
software.

1\. Poor engineering planning. The way to do this well is to break down a
feature until the tasks are about half-day sized. Identify any obvious
risks or ambiguities with the technical approach and communicate them to
Product and figure out how to de-risk them. Communicate costs to Product so
PMs can make cost/benefit trade-offs (e.g. “if we built it the way you specced
it, it’ll take two weeks. But if you make these trade-offs it’ll take two
days.” As a Product Manager I might not want the feature if it costs two weeks
but do if it costs less than 5 days. Without costing the features in dev days
I can’t make this trade-off).

2\. Failure to make trade-offs that drive towards shipping software. The way
you ship is by declaring it’s done, even when it’s not really done. When code
is being written, unexpected issues always arise. It’s Product’s job to make
hard trade-offs that drive towards shipping. E.g. “that’s an edge-case bug,
let’s punt it to v2” or “let’s cut that nice-to-have feature, squash the
showstopper bugs, and ship”.

People bring up impossible deadlines set by management. In my experience most
deadlines are movable and a strong PM/Eng team can convince management to push
a deadline back. And if a deadline can’t be moved, it’s all the more important
to know how much features will cost and make hard trade offs to get the
product out the door.

~~~
fvdessen
> The way to do this well is to break down a feature until they’re at about
> half-day sizes tasks.

I have never seen that work in practice.

~~~
mohaine
Breaking a task down into meaningful half-day tasks is almost as hard as just
implementing it, but usually this is attempted in a committee setting. This is
not a recipe for success.

~~~
khyryk
Agreed. Think of how ridiculous it would be to make a half-day task for
figuring out X -- sometimes I've spent days, even weeks, trying to figure out
how a module works.

In practice, being able to break things down meaningfully and accurately to
such a minute level implies that a _lot_ of time was spent in grooming, which
means we're just back to waterfall again. (Or the tasks at hand really are
just that simple and/or the people involved have a lot of expertise.)

------
Animats
Development time estimates are too short because there's no penalty for
management underestimation. If programmers were paid like the movie industry,
estimation would be more reliable. Time and a half after 8 hours, or after 5
days. Pay for 4 hours if you're needed briefly during off hours. Double time
on Sundays. It's not just telling people it's a "crunch".

Movie scheduling and estimation is organized enough that you can buy a
completion bond. If the job isn't completed within an error margin over cost
and schedule, the completion bond company pays. If there's a cost overrun, the
completion bond company has the authority to send in their own people to
monitor things and if necessary, to _fire the director_ or anybody else, and
take over the production.[1][2]

Completion bond companies do project cost and schedule estimation
independently of the production. They have the data for this, because they
have the full accounting data for hundreds or thousands of films. They watch
project progress carefully. "Our monitoring process requires the production to
email us daily shooting progress reports and a weekly cost report in order to
properly evaluate the progress of the film. FFI also makes periodic visits to
the shooting area."

This prevents directors and producers from low-balling their estimates.
Underestimation leads to unemployment.

A completion bond typically costs about 4% of the film budget.

[1]
[https://www.eqgroup.com/completion_bond/](https://www.eqgroup.com/completion_bond/)
[2]
[http://www.filmfinances.com/services/evaluation](http://www.filmfinances.com/services/evaluation)

------
agalaxynearby
Do you really want to know why _some_ development teams struggle to deliver on
time, on budget, or at all? Because the majority of software projects out there
are utter useless crap, commissioned by people who have no idea what the heck
they are doing and focus on the smallest, stupidest details before even
getting a decent number of users, or even _wondering_ whether users would like
those changes before implementing them. People who, once the money runs out,
will make the project crumble and frustrate developers, who have to reimplement
the same stupid piece of logic 20 times because "that button looks too big" or
"this would be a really cool animation to have" while everything else goes to
shit.

We get into software development because we expect it to be a creative,
challenging and fun profession that creates value and yet most of us answer to
clients or employers who expect us to spend 80-90% of our time working on
boring, senseless stuff. You want us to do that? Great! But don't expect high
quality and on time delivery.

The real reason behind delays is that we just don't give a crap about your
"social network for cows" and we can't wait to save enough money to get the
fuck out and either start a business, work for a decent company or start
investing.

Apologies but it feels good to rant every once in a while.

------
snowwolf
So the author of this has a vested interest in their "solution" being the
correct one, but I fundamentally disagree with it.

I agree with their hypothesis that we are bad at planning/estimating, but the
solution is not spending more time on it, but rather less.

Firstly I really like this humorous look at the problem:
[https://www.quora.com/Why-are-software-development-task-
esti...](https://www.quora.com/Why-are-software-development-task-estimations-
regularly-off-by-a-factor-of-2-3/answer/Michael-Wolfe)

You cannot estimate what you don't know, and you don't know what you don't
know and no amount of upfront planning will surface those. In their own
example, you can't know upfront that there is a road closed or an accident or
that only one cashier is working until you get to that point.

In my opinion the most reliable solution is to break the work down into small
pieces that deliver value (ideally less than a week's work). Prioritise, and
then deliver the first piece. Have regular reviews/checkpoints with
stakeholders to decide: was any value delivered, are there new learnings we
need to apply to the rest of the project (or indeed new 'pieces' that we've
discovered now need doing), what is the next piece we need to do, and is it
worth continuing?

~~~
harryf
> You cannot estimate what you don't know

Exactly! I tried to express that, and a strategy for handling unknowns, here -
[https://medium.com/@hfuecks/what-donald-rumsfeld-can-
teach-u...](https://medium.com/@hfuecks/what-donald-rumsfeld-can-teach-us-
about-software-estimates-1be22864e509)

My theory is the reason why we’re not getting any better at estimating
software projects rests on a flawed assumption; because computers are binary
and deterministic, we assume that the process of creating software must also
be deterministic and quantifiable.

Example: when Apple opened the App Store to 3rd party developers, would it be
realistic to expect an accurate estimate from one of those developers,
building their first app? There were so many unknowns; how does the build /
publish process work? what will pass the review process? what about all these
new APIs?

~~~
softawre
Agile has a concept called the cone of uncertainty.

The way we use this concept in practice is by doing small chunks and
delivering fast; the more chunks we have done on a project/feature/whatever,
the better our estimates get over time, because there is less work left and we
are more familiar with the work.

[http://www.agilenutshell.com/cone_of_uncertainty](http://www.agilenutshell.com/cone_of_uncertainty)
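
A toy sketch of that narrowing band. The 4x-to-1x spread and its linear shrinkage below are illustrative assumptions loosely echoing the classic cone, not measured data:

```python
# Toy model of the cone of uncertainty: as a project completes, the
# band around the projected total duration narrows, because there is
# less remaining work left to mis-estimate. The 4x-to-1x spread and
# its linear shrinkage are assumptions for illustration only.

def projected_total(done_days: float, remaining_estimate: float,
                    fraction_complete: float) -> tuple[float, float]:
    """Return a (low, high) band for total project duration in days."""
    # Spread shrinks linearly from 4x (at the start) down to 1x (done).
    spread = 4.0 - 3.0 * fraction_complete
    low = done_days + remaining_estimate / spread
    high = done_days + remaining_estimate * spread
    return low, high

# Day one: nothing done, 100 days estimated -> a huge 25..400 day band.
print(projected_total(0, 100, 0.0))   # (25.0, 400.0)
# Near the end: 90 days done, ~10 left -> a tight band around ~100 days.
print(projected_total(90, 10, 0.9))
```

The exact multipliers don't matter; the structural point is that finished chunks convert uncertainty into known history.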

~~~
AstralStorm
Sometimes there are no "small chunks" and this is where all agile approaches
fail.

If a thing does not work, it is not a deliverable. Bonus points if the thing is
not really testable because it is tied to the platform/OS. (And you'd either
have to implement it in full or spend weeks implementing a test harness.)

~~~
jacques_chester
> _Sometimes there are no "small chunks" and this is where all agile
> approaches fail._

I work at Pivotal. Cloud Foundry is a successful application of agile (XP +
Lean trimmings) to a large, complex distributed system.

------
jhall1468
In my experience, deadlines are set by higher-ups long removed from (or never
having had) the technical chops to be determining the deadlines in the first
place. VPs and Directors are often the ones dictating the direction, which is
great, but then also introducing deadlines, with helpful input from Directors
who ALSO haven't touched code in many years.

Generally that results in one of two things:

* Product delivered on-time with massive technical debt.

* Product delivered late with massive technical debt.

Frankly I don't know if adding front-line engineers to the deadline decisions
is going to make the issue better or worse, but fundamentally having non-
technical or formerly-technical people defining deadlines definitely doesn't
work.

~~~
ttul
Having worked in the trenches as a software engineer and also as a CEO has
given me respect for the VP level as well as the engineers. When the VP sets a
deadline, it’s driven by the survival of the company - aka release by June or
we can’t make our numbers and have to do layoffs. When the engineers push
back, they often do so without an appreciation of the business reality, aka
those numbers pay their salaries.

Engineers hate releasing poor quality products, but the reality is that
customers often will buy and use something laden with defects rather than
nothing at all. And in any case, after the initial botched release, you can
always fix stuff. See Apple.

~~~
jhall1468
That's a fine argument for a bootstrapped startup with one product, but isn't
meaningful when talking about the tech titans that suffer from the exact same
problems.

This is the same type of argument that makes short-term profit take precedence
over long-term success. Why in the world are we getting a deadline of 4
months? You, as an executive, had to know burn rate was blowing through cash a
lot earlier than that.

Engineers hate releasing poor quality products because we know we'll
eventually have to fix them and we will likely be given almost no leeway time-
wise to fix the actual debt, as time will always favor ugly hacks that "work"
over well-written code that prevents the problem in the future.

~~~
ThrowawayR2
> _the tech titans that suffer from the exact same problems_

The tech titans aren't monoliths with a collective hive-mind. Each business
unit and team is responsible for projecting revenue targets and then hitting
them, same as any other kind of business. And, yes, if the business unit
managers fail to hit their financial goals too many times, their project will
be cut and the team reassigned or laid off, same as a startup running out of
financial runway.

------
rossdavidh
Having been involved professionally in software development for about 14 years
now, I have to say I respectfully disagree. More planning does not result in
better predictions. In fact, it often results in worse ones. That is just my
empirical observation.

My best guess as to why is that there are managers involved who attempt to
negotiate the planned delivery time down. They do this, in part, because it's
hard to get devs to work late or on the weekend when the project is on
schedule, but easier when there is an obvious risk of falling behind
schedule. So, from their point of view, the best way to get the product
delivered early is to get the schedule made too optimistically.

I'm not saying they SHOULD do this, or even that they are consciously thinking
this way, but it's what the situation incentivizes them to do, and it's what
normally happens. The gut-level immediate answer is based on past experience,
while the long, drawn-out, meeting-produced System 2 answer is based on
management bargaining the developers down to a shorter timeline.

------
gmiller123456
I think software gets a bad rap when it comes to budget and time overruns. I
think we see a lot more software projects go over budget, over time, or fail
mainly because they're more common than any other type of complex project.

You don't have to look far for a failed project of another type. While not
necessarily a "project", 50% of all businesses completely fail within 5 years
[1]. How many Kickstarters have you seen deliver on time? Even if you limit
the criteria just to experienced people working in their field? One I've been
following [2] is just a book that was supposedly already complete before it
was funded; it was supposed to start in June 2017 and ship in Aug 2017. It's
now June 2018 and it still hasn't shipped; that's 600% over time and counting.

I'm not claiming that's scientific proof that software estimation isn't worse
than anything else. But most projects' estimation is done behind closed
doors, so we'll never get a good feel for how good or bad things really are.

[1] [https://fitsmallbusiness.com/small-business-
statistics/](https://fitsmallbusiness.com/small-business-statistics/) [2]
[https://www.kickstarter.com/projects/pighixxx/abc-basic-
conn...](https://www.kickstarter.com/projects/pighixxx/abc-basic-connections-
the-essential-book-for-maker/description)

------
drblast
The thought occurs to me that no matter how much planning you do or how good
at estimating you are, the development is going to take however long it's
going to take.

You can plan your trip to the store to buy milk and figure out exactly how
long it will take, but the only thing that actually matters is that you arrive
back home, with milk.

If that milk is absolutely necessary, whether it takes 30 minutes or ten
minutes to get it is really a secondary concern. If you spend five minutes
getting a better estimate you've delayed the milk by five minutes regardless
of how long it takes or how right you were about the timing.

I think we spend too much time thinking about time estimation when the
planning we should be doing is figuring out what is so important that the time
it takes to build is worth it even if the time estimates are off.

~~~
commandlinefan
> the development is going to take however long it's going to take

Hofstadter's Law: It always takes longer than you expect, even after
accounting for Hofstadter's Law.

------
wildekek
Higher up manager here. I fully accepted the #noestimates movement and it is a
complete blessing for all the teams and organizations I've implemented it in.
Roast me.

~~~
__dontom__
Did you do that in an agency as well? I mean it's one thing if you are
developing a product, but telling your clients "the project is finished when
it's finished and we won't be able to tell you how much it will cost you until
it's finished" doesn't really fly in my experience.

~~~
supreme_sublime
Seems like charging a specific fee might help alleviate some of the concerns
about that. Though I suppose you still need to know how much money you should
charge.

I really hate estimates and have always hated estimating projects. I do
appreciate the need/desire for them by some in management. I've tried to
figure out how other people do it and have yet to find anything satisfactory.
It doesn't have to be an extremely simple process, but I don't really know how
one gets better at estimating. The only way to get better is to understand the
domain more clearly, but estimating doesn't really help with that.

Kanban does have a cool idea of just attempting to break down work into equal
parts and measuring the throughput of these roughly equally sized components.
The only problem is actually sizing stories to be equally sized. Some changes
don't have any kind of real stopping point of functionality without large
changes. It seems kind of arbitrary to chunk it out just because your project
management system wants you to.
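
The throughput idea can be sketched in a few lines; the weekly counts and backlog size below are made up for illustration:

```python
# Kanban-style throughput forecasting: count roughly-equal-sized items
# finished per week, then project the remaining backlog from that rate.
# All numbers here are hypothetical.
completed_per_week = [3, 5, 4, 4]   # items finished in each past week
backlog_remaining = 24              # items still in the queue

throughput = sum(completed_per_week) / len(completed_per_week)  # 4.0/week
weeks_left = backlog_remaining / throughput                     # 6.0 weeks

print(f"avg throughput: {throughput:.1f} items/week, "
      f"forecast: ~{weeks_left:.1f} weeks remaining")
```

Of course the forecast only holds to the degree the items really are equally sized, which is exactly the objection raised above.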

------
EnderMB
Another reason I'd like to add for why so many software projects fail is a
really basic one, but it's ultimately the reason most software projects fail.

The budget isn't there.

For many, building software is a race to the bottom. I've worked at countless
places, from "Wagile" places that run spiral methodologies mixed with agile,
to fully agile agencies that deliver well but crumble the second a client gets
pissy about something taking longer/costing more than it should.

In my view, the most basic problem in software is that we're committing to too
much for too little, which is why I see development to be similar to working
in a skilled trade. If you pay good money for a renderer, you'll get the
outside of your house rendered nicely with good advice on what to use, what
looks good. They'll also tell you how long it'll take, and if you say you want
it sooner they'll tell you it'll either cost a lot more to get more manpower,
or they'll decline the job. If you are cheap about it, you'll probably get
someone that'll take longer than expected, will make a mess of the job, and
you'll be left with something you're not entirely happy with.

A solid methodology will probably help with delivering software on time and on
budget, but if you are unrealistic with either metric then it doesn't matter
what methodology you use. You'll take liberties with it, decide that it's
bullshit, and continue to cowboy your way towards a duct-taped mess of a
solution.

It's something few want to talk about, probably because there isn't really a
solution to it outside of:

* Paying a premium for a development team with a track record of recent success

* Having people that know the full software lifecycle be involved in all parts of the process

* Actually embracing the fact that requirements change, to the point where budgets and timescales are flexible.

* Not joining the race to the bottom.

------
ThomPete
There are ways to deliver on time & budget, but most developers are not going
to like them, and neither are their "clients".

Use the frameworks exactly as they are intended, don't try to invent new
solutions that aren't native to the framework.

Anything that takes you outside the beaten path in development is going to be
a potentially infinite black box.

In other words, a lot of software engineering can't be put in timeboxes
because it's actually R&D more than it's development and where each little
step forward can add a potentially infinite amount of new tasks to be done or
problems to be solved. Add to that the constant need to update, upgrade,
improve, re-design and you know it's just not doable.

So the primary problem IMO is that we think about a lot of development as if
it's something that can be put in boxes. Some can, of course, and the better
and more solid the team becomes, the better they are; but the teams who
struggle are mostly struggling because the expectations for what they are
actually doing (inventing, problem-solving) aren't matching up with what they
are being paid to do (build).

------
dtech
Site is down, cached version:
[http://webcache.googleusercontent.com/search?q=cache:https:/...](http://webcache.googleusercontent.com/search?q=cache:https://www.7pace.com/blog/software-
development-planning-fallacy&num=1&strip=1&vwsrc=0)

~~~
antoineMoPa
Next article: Why web servers struggle to deliver before timeout or at all.

------
UK-Al05
Having been through multiple month-long planning "phases" that were
eventually shown to be wildly inaccurate, I disagree.

This is exactly what we used to do in the waterfall days. It didn't work.

The only approach that's worked for me is to build something really small
but valuable. So small that it's hard to be disastrously wrong. Once you've
released that value, build upon it.

Stakeholders tend to be much happier, as they at least have something they can
use really early on.

~~~
gonzo41
This for the win! Always be wary when they ask for single sign on in the first
sprint!

------
Jtsummers
Software is design, not construction. This is why it's hard to estimate.

Ask an architect to design a _new_ skyscraper in a fixed three-month
timeframe: good luck. It won't be what you want, or it'll have severe problems
discovered during construction.

Software developers are creative workers, we have to accept this. Unless
you're doing the nth iteration of basic CRUD/RESTful web app, or a trivial
"display data from a database" app (note: the DB design may take more time,
some of the controls on interaction will take more time, but the core of it
will be the same) you can't reasonably estimate your time.

Once you know Ruby on Rails, making a prototype of a webapp is a rote task.
You can knock it out in a known (from experience) time. Then you try to
improve it, add new features, customize the backend (first time you've written
a DB connector), things start going off schedule.

The only other way to have reasonable estimates of the _development_ is to
spend a ton of time upfront (unestimable) designing the system before we touch
the code. OK, now the coding tasks are well understood, but you also just
spent 2 years designing it. And, like the architect, if you rush this design
part, problems will crop up during coding that will blow your schedule.

~~~
nradov
Architects regularly design new skyscrapers on fixed schedules. Look at some
of the major developments going up in China.

~~~
Jtsummers
How many of those fixed schedule projects are based on old and proven designs?
I specifically emphasized _new_ in that sentence. Perhaps I should have said
to produce _novel_ designs. Which is what many programmers are asked to do and
estimate.

------
franciscop
Upon reading the first couple of paragraphs I decided to test it on myself. I
had to buy X, which is available in convenience stores (konbini, Japan). I
estimated it'd take 5-7 minutes, but I was very surprised when it actually
took 4 minutes! To be fair, this has a lot more to do with Japan than with
myself. For the same test in my home country (Spain) I would have had to guess
15±5 min during the day. Sidenote: I was very surprised about the concept of
"driving for milk", which I'm assuming is a very American thing.

I'm fairly realistic about software estimation which only came after a LOT of
retrospection. It normally takes what I estimate, both personally and
professionally. The hardest factor I've learned to include is the level of
detail. For instance with my personal website [1] I gave myself a full
Saturday because I knew I wanted high detail level but I had in mind the
overall design. It took the Saturday +1h of a couple of improvements/bugfixes
(under 10% error). With my current job I'm also under 10% error.

In the past I have been bitten a LOT by my unrealistic estimations, so the
only way I could move forward was to learn from them, and so I did. So
now, from the article, I know that _my_ "quick thinking" is around 50% of the
project. I force myself to think a bit more and the details trickle down.

Another thing I've learned is that projects tend to fill as much time as
possible (Parkinson's law [2]). So if you are given a deadline, halve it! Put
the half as your internal deadline, and the project will be just on time.

Finally, complexities are exponential, so learn to say no to unnecessary
cruft. "A small change" might seem like a 1% change for business and for you
initially, but it will more likely than not grow into a 10-20% change in the
end. FFS that is why _the duck_ was added in the first place, to avoid wasting
time [3].

[1] [https://francisco.io](https://francisco.io)

[2]
[https://en.wikipedia.org/wiki/Parkinson%27s_law](https://en.wikipedia.org/wiki/Parkinson%27s_law)

[3]
[https://rachelbythebay.com/w/2013/06/05/duck/](https://rachelbythebay.com/w/2013/06/05/duck/)

------
tootie
This is literally what agile is for. Step 1 (and this is the hardest part) is
to negotiate the need for either a flexible scope or a flexible deadline.
Accept that your estimate for a fixed scope will just never be accurate, and
make room to adjust. Putting "buffer" into your estimate is also not the right
approach, because it's planning for failure. Flexible scope means you have
work to fill up what would go into your buffer: work that you can launch
without if you need to, but that would be very nice to have.

Create your high-level backlog and do MoSCoW prioritization. Figure out your
"musts", "shoulds", "coulds", "wonts". Now apply some estimates to your
features and add 10% for unforeseen growth. Estimate velocity based on team
size and now you've got a date when you could conceivably hit your musts,
shoulds, coulds. Set your "deadline" if you must somewhere deep in the coulds.
If your musts go over, you are still able to launch an MVP. If things go well,
you can start delivering non-musts.

Adjust your plan every sprint based on actual velocity.
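
As a rough sketch of the arithmetic behind that plan. The story points, the 10% growth buffer, and the velocity are all hypothetical numbers, not recommendations:

```python
# Project when each MoSCoW bucket could land, given an estimated
# velocity and a 10% buffer for unforeseen scope growth. All inputs
# are illustrative.
import math

backlog = [("must", 60), ("should", 40), ("could", 30)]  # story points
growth = 1.10    # +10% for unforeseen growth
velocity = 20    # points per sprint, estimated from team size

milestones = {}
cumulative = 0.0
for bucket, points in backlog:
    cumulative += points * growth
    milestones[bucket] = math.ceil(cumulative / velocity)

print(milestones)  # {'must': 4, 'should': 6, 'could': 8}
```

With these numbers, a "deadline" set deep in the coulds still leaves the "must" milestone as a launchable MVP line if things go over.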

------
grigjd3
I hate it when PMs ask me for an estimate of effort on a task I have never
done before and I get this question all the time. I get asked to get a new
process through a deployment system I've never worked with before and that's
fine. How long will it take? That depends on how complex the deployment system
is and I haven't worked with it yet.

------
jillesvangurp
There's a lot of snake oil for sale if you are looking to spend money on
solving this problem; this probably is more of that.

The reality is that mostly we've gotten better at avoiding things that clearly
don't work or are historically obviously misguided/inappropriately expensive
(waterfall, CMM level 5, etc.), and instead emphasizing things that work
slightly better: managing risk by doing iterations, not attempting to plan
unplannable eventualities too far ahead of time, etc.

Some people refer to this as Agile, other people as common sense. Either way,
not wasting time on things that clearly don't add value tends to free people
up to do something productive (duh). There's a pattern with agile of mostly
non-technical people higher up the management chain getting overly excited
about things like estimates, velocity, burn charts, etc. I usually call stuff
like that the illusion of progress, and waterfall in disguise. Scrum in
particular seems to have devolved into decorating offices with post-its and
employing busy-looking people to move those around and manually track them in
convoluted tools like Jira.

But undeniably, we've gotten better over the past decades at building stuff
with huge groups of people. Any idiot can probably cobble together some lines
of code that does something vaguely useful. But committing to building stuff
with hundreds or thousands of people is a different game. It requires lots of
money and focus and there are quite a few companies that are doing this
successfully.

A bigger pattern in our industry is that people seem to have shifted to
calendar-driven roadmaps for the most important bits of software, where they
ship whatever is ready on fixed dates instead of committing to a long list
of stuff that ships whenever it is ready. E.g. Apple ships OS versions once a
year; Mozilla, Linux, Chrome, etc. ship every few months, typically with
massive amounts of code changes.

~~~
nradov
CMM level 5 does work, if you actually do it instead of just treating it as a
paperwork exercise to satisfy an auditor. There's nothing in CMM that's
contrary to or incompatible with Scrum or other agile methodologies. The major
emphasis in CMM is on documenting your process (whatever that process is),
training people properly, and continuously improving.

~~~
jillesvangurp
Yeah, it works. It's just not appropriate in a lot of situations because it is
insanely expensive: it adds a lot of non-functional bureaucracy and
time-consuming activities to the critical path of delivering software.

------
maxxxxx
I really wish people would put more effort into making sure people don't waste
time instead of endlessly trying to make development more predictable. In my
company there are a ton of inefficient processes and other things limiting
productivity (noise, teams spread out over entire building, developers having
to do work that would better be done by qualified tech writers, lack of
adequate onboarding, lack of decision making constant changes). Instead of
addressing theae management keeps on doing status meetings and planning.

I think they would be much better off if they made sure that their people have
optimal productivity and then see how quickly things can be done. I guess that
is exactly what scrum originally tried to address...

------
ibdf
When I first started working as a web developer, my boss would ask me how long
it would take to finish a project. I would give him a time estimate, and he
would always double it. As it turns out, he was always right. But because of
it, I learned to provide better time estimates. Over time I became more
accurate and started to provide more realistic time frames. Depending on where
you work or who you work with, there's a lot more that goes into your
day-to-day tasks than just coding.

------
san_at_weblegit
The fact that there are so many reasons for failure itself tells you why the
failures are so frequent. I strongly believe that more than just the
development teams should be blamed for failures. Projects rarely get delayed
or fail because of developers only; the more responsible parties are
management and the company culture. Even the strongest developers learn over
time that sticking to a more realistic schedule won't earn them praise.
Unfortunately, in most places the path to promotion is to keep the bosses
happy. Most people aren't motivated by the end result; it's more about looking
good on a day-to-day basis. Failures aside, what has worked for me in the past
is to add a little padding (10%-20%) to all the tasks, which no one questions,
and then we have enough padding to cover any task on which the team really
spent around 1000% more time than estimated. Again, it really depends on how
much the product people understand the effort involved in development. It's
hard to make someone believe that a one-line change took 3-4 days while
another 1000 lines were added in half a day if they haven't been there
themselves.
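To make the padding trick concrete, here's a rough sketch (all numbers made
up) of how a flat pad across every task quietly builds a shared buffer, and
how far that buffer goes when one small task blows up by 1000%:

```python
# Hypothetical per-task estimates, in days.
estimates = [3, 5, 2, 8, 4, 6]

PAD = 0.15  # 15% on every task; small enough that no one questions it

padded_total = sum(e * (1 + PAD) for e in estimates)
buffer = padded_total - sum(estimates)  # the slack the padding buys you

# A 0.5-day task that takes 1000% more time costs 5 extra days;
# the buffer absorbs most of it without anyone renegotiating dates.
blown_task_overrun = 0.5 * 10

print(f"raw: {sum(estimates)} days, padded: {padded_total:.1f}, "
      f"buffer: {buffer:.1f}, one blown task: +{blown_task_overrun:.1f}")
```

The point isn't precision; it's that the pad is distributed where it's
invisible and spent where it's needed.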

------
Quarrelsome
I think neither solution is the correct one.

I find that in the case of the second solution, "thorough planning", you get a
bunch of people fabricating estimates of estimates, with padding or without,
or trying to estimate a bunch of unforeseen events that are too far ahead to
get an accurate handle on. Sure, the thorough estimate is likely to be more
accurate, if only because it is likely to be much, much longer, but I have
never given one of these estimates without it being followed by immense
disappointment from the business and a desire to strong-arm me into something
shorter. This demonstrates a fundamental issue with the topic: it's not about
the estimate at all.

I think the problem with estimates is their finality and assumption of
correctness. I figure once you're in the developer-years category you might as
well iterate the estimate a bit to get a better sense of it. Error bars should
be translated into risk for the business decision.

Too often I see people claim to be making "rational, fact-based" decisions on
estimates (beyond a year out) that are complete codshit. This is not rational
decision making. These decisions should be about risk management assuming
failure, as opposed to thinking you can slot year-plus development estimates
together.

I think very often the desire to lock down development estimates into
"rational fact" is a business decision about risk masquerading as a technical
developer decision about fact. I have yet to see a situation where we deliver
an estimate that blows a business decision out of the water and the business
just backs down. It just learns to ask a different question and gets the
answer it wants out of that.

------
geebee
I thought this was a decent article, with decent advice if you're in a
situation where you have to make estimates under conditions of uncertainty.

An example I've tried to use to explain software deadlines is the college term
paper vs the mathematical proof. I can commit to writing a college term paper
by a set date. I may do a good job or a bad job, but I know I can produce 20
pages on a topic with citations and references. It's really just a matter of
will and follow through.

Many people experience deadlines this way, which is why they get a bit
outraged when software developers fail to hit deadlines or warn properly that
they won't. They don't understand that software can be more like a
mathematical proof. You can tackle it, try things, but you only might crack
it. You might be no closer than when you started. You might be moments away
from cracking it and not knowing it.

My career advice to people is to seek out situations where software
development deadlines have more in common with term papers than mathematical
proof. These jobs do exist, though they are elusive. It's just another
desirable aspect of a job, like better pay, nicer working conditions,
telecommuting. There are jobs that define the goals of a software project more
vaguely, to the point where you can deliver something great, or something
merely ok, but there's really no chance you can't deliver something at all.

Some jobs go extinct because they are just so unpleasant that the people with
the talent to work them simply find other options for employment. I personally
find the stress of working under strict deadlines with very little certainty
unpleasant enough that I'll accept lower pay or other tradeoffs to avoid them.

------
exabrial
I can name exactly two reasons:

First, they commit to unclear expectations and requirements and no one has
the backbone to call it out. This is a leadership failure, a top-down
problem, when a division is led by weak management with poor communication
skills. The fix for an organization is to hold the top accountable for the
results first, which rarely happens.

The second is a bottom-up problem, where a team has convinced themselves the
only way they can solve a problem is to use this one framework they haven't
used yet. In music, there's a mental block students get where they look at the
student violin or guitar and are convinced the reason they don't 'sound good'
is that they're using a cheap instrument. This is categorically false: they
need to put years of practice into the instrument. In much the same way,
developers are obsessed with the framework they haven't used yet. They should
be obsessed with discerning the business requirements. If we went to the moon
on slide rules, the stack you already have is likely more than sufficient to
implement whatever challenge is in front of you.

------
nikhizzle
As a long-time engineer and occasional PM, I steer teams to think about the
worst case, and then triple that time estimate.

I know this sounds like setting up a team for failure, but I’ve seen it work
again and again. It sets clear expectations for quality and delivery upwards
and downwards which everyone can agree on.

Once this is done, the easier part is keeping everyone focused, and using all
the leftover time well to raise quality.

~~~
maxxxxx
I often ask people when they think it will be done and then multiply that
number by five. It works pretty well :)

------
AnimalMuppet
I kind of like the XP approach to this.

First, estimates longer than two (or was it three?) weeks need to be split
into smaller pieces. (Because it seems that when the estimates exceed two
weeks, the accuracy of the estimates goes down. We're just not good at
estimating things longer than that.)

Second, if you don't know enough to make the estimate - if the task is
something that you don't know how to do - the first task is to find out. In
XP, this is called a "spike": a task where the purpose is to nail something
down, rather than to produce a usable artifact. Often you don't know how to do
something, but you can say that after two weeks of research, you'd have a
better idea. So take the two weeks of research, and then you can give a decent
estimate. (Hopefully - there are some tasks that you'll know you're done when
you're done.)

------
triggercut
As someone that does this every day across all industries and project types, I
agree 100%, but proper/effective planning is not a panacea for project
success.

Identifying and managing Risk is equally important. As is Interface
management (the people-to-people / team kind) and correct Quality Assurance.

On top of all of that you need good Project Governance to handle change
properly, with clear limits and bounds for changing durations of activities,
budgets, scope and acceptable quality, defining responsibility and
accountability, delegating authority, setting regular reporting requirements
and cadence.

Managing (non-trivial) Projects well is difficult, it takes hard work and
careful thought. That is why we struggle. We revert to System 1 for all
aspects of project management, not just planning.

------
chiefalchemist
If you _really_ don't get to set the deadline, is it still reasonable to be
held accountable for that miss? Even if you give worst case and best case, the
only one anyone else focuses on is best case.

Personally, I prefer the 80/20 rule. That is, most of the work will go fairly
quickly, or at least uneventfully. It's the last bit that's always the killer.
The devil is in the details, as they say.

When asked for an estimate we quickly identify the 80. The key is to pause and
not write off the remaining 20.

p.s. Early in my career I read somewhere (I wish I could remember where)
something along the lines of:

Almost everything takes twice as long and costs twice as much as your original
estimate.

If I had $20 for every time I found that to be true, I wouldn't have time for
HN. I'd be too busy counting my money.

~~~
pythonaut_16
I also like the 90/90 rule:

The first 90% of the work takes 90% of the time. The remaining 10% of the work
takes the other 90% of the time.

[https://en.wikipedia.org/wiki/Ninety-ninety_rule](https://en.wikipedia.org/wiki/Ninety-ninety_rule)

~~~
chiefalchemist
Let's not forget the new feature requests after the first 90, once they see
it. And, of course, don't allow the deadline to be adjusted.

Fast. Cheap. Right. Pick two :)

------
davewasthere
Utterly awful article.

If I've gone and gotten the milk a dozen times or more recently, I have a
pretty good idea of the minimum and maximum amount of time it'll take me to go
get that milk. It's 4km, about a seven-minute drive; the quickest I've done it
is 20 minutes round trip, and the longest about 40 minutes (probably browsed
the store a bit for other stuff). So I can give a fairly confident range quite
quickly without having to over-plan.
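That range falls straight out of simple stats over past runs. A sketch, with
made-up trip times:

```python
# Hypothetical round-trip times (minutes) from a dozen past milk runs.
history = [20, 22, 25, 25, 27, 28, 30, 31, 33, 35, 38, 40]

low, high = min(history), max(history)
typical = sorted(history)[len(history) // 2]  # rough median

# With enough repetitions, a min..max range is a defensible estimate.
# The first-ever trip to a supermarket has no history to draw on.
print(f"estimate: ~{typical} min (range {low}-{high})")
```

Which is the whole point of the parent's argument: the estimate is cheap only
because the history already exists.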

It's after you've gotten the milk that the client remembers they're lactose
intolerant and asks you to please go get some almond milk. That's when you go
over time and over budget. Although in hindsight, perhaps I should have asked
what sort of milk they prefer.

------
fizixer
Complexity is not as democratizable as the industry likes to believe.

\- an org-chart team of 100 will perform worse than a tightly knit team of 10

\- a tightly knit team of 10 will perform worse if tangible results are
expected at regular intervals of the managers' choosing (e.g., weekly,
monthly, take your pick), as opposed to treating the software project as a
computer science research project spanning a year or more.

\- a tightly knit team of 10 working on a software research project spanning a
year or more will perform worse if expected to succeed in their first attempt
as opposed to allowing them to fail one or more times and change directions,
maybe even starting from scratch every time.

------
golergka
> This is where System 2 comes in—if we performed a more thorough analysis,
> these factors would have been considered in our answer. Then it would be
> clear that it’s much more likely to take 20 or 30 minutes to run to the
> store instead of 10.

I have encountered an article (can't find the link now, sadly) that claimed
that when developers gave estimations, breaking down tasks to sub-tasks
actually had a _reverse_ correlation to their accuracy. In other words,
their first gut reactions were actually better than estimations given after
going in-depth through all the details and sub-tasks.

~~~
jcadam
> I have encountered an article (can't find the link now, sadly) that claimed
> that when developers gave estimations, breaking down tasks to sub-tasks
> actually had a reverse correlation to their accuracy. In other words, their
> first gut reactions were actually better than estimations given after going
> in-depth through all the details and sub-tasks.

I've noticed this as well. The last few years my 'gut' estimates have tended
to be more accurate (and larger) - but since I can't support them with
anything other than "Trust me, I've been writing software for over a decade",
they're never taken seriously.

My guess would be that breaking down a complex software system into sub-tasks
(esp. when you are breaking things down within modules) that are too fine-
grained fails to capture the inter-dependencies between sections of the code-
base. So, you complete task A and move on to task B, then realize you have to
revisit the code you wrote during task A because task B depends on it and some
of your early assumptions were imperfect. And then you need to fix the unit
tests you wrote for task A.

And so on, and so forth.

~~~
cpeterso
Bottom-up estimation is probably more accurate for the _work_ to be done, but
not for other project overhead. Using data from an analogous project to
forecast the delivery date would include that overhead.
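One way to combine the two, sketched with invented numbers: use bottom-up
estimates for the work itself, then scale by the overhead ratio observed on a
comparable past project:

```python
# Bottom-up estimates for the work itself (days); all numbers hypothetical.
subtasks = {"parser": 4, "storage": 6, "api": 5, "ui": 7}
work = sum(subtasks.values())  # "pure" work, no overhead

# On an analogous past project, 30 estimated work-days became a 45-day
# delivery once meetings, reviews, rework, and interruptions were counted.
overhead_ratio = 45 / 30  # 1.5x, observed rather than guessed

forecast = work * overhead_ratio
print(f"work: {work} days, forecast with overhead: {forecast:.0f} days")
```

The bottom-up number answers "how big is the work"; the historical ratio
answers "how long does work this size actually take here".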

------
thinkingemote
One statistic that has stuck with me is that 80% of software projects fail.
It's the default that your software will not succeed, or do well.

One small reason for that is poor time estimation, but good time estimation
won't make your software succeed. I think good planning makes for a good
working environment, but it doesn't mean the project as a whole will succeed.
It might mean the developers will be happier to stay with you and pivot to the
next idea.

I would like to look at the percentage of successful projects and see what
proportion of these had good estimation attributes.

~~~
nradov
That's a meaningless statistic. The vast majority of large software projects
do eventually deliver something of value, even if it's late, missing features,
and full of defects. Is that a failure? Depends on the circumstances.

------
smdz
The simple fact is that individuals or companies following System 1 have a
much greater chance of survival in the competition. Consulting individuals or
companies will simply find it hard to survive without System 1, unless they
already have a lot of capacity and capability built up. Unfortunately, many
get comfortable using System 1.

System 2 works for Products (including SaaS), and chances are it mostly works
for Products that have already been slightly successful in the first place.
This is also where you find consulting companies that may have obnoxious
rates.

------
CM30
I thought it was because people seem to like changing their product plans
midway through development. I've seen an awful lot of scope and feature creep
in project management, usually because of either a 'client' who keeps wanting
more or design by committee.

Though I guess that often ties into project management and planning failures
too. I've seen a few projects where no one seemingly asked what the
client/customer/business needed or how they actually worked, which then meant
a ton of refactoring further down the line.

------
lmilcin
I think there are a few main reasons why this happens. I understand this will
not appeal to many developers, and understandably so. Learning new frameworks
and having the freedom to choose your technology is definitely nice when you
are a developer, but you also need to recognize that if you are changing your
stack every year, you can be an experienced developer and yet effectively a
beginner at your new stack.

1\. Managers (and developers, too) don't strive for repeatability and
predictability of the process: sticking to frameworks, reducing variability,
forcing an exact development pipeline. It is not appealing to developers, and
managers are afraid to push for it.

2\. Lack of a feedback loop. At the end of every project/deliverable/
iteration, ask what could have been done to prevent the problem (what could
have helped to estimate it more reliably). Implement the answers mercilessly,
as if a bad estimate were on par with a deployment failure.

3\. Move unplanned work to planned work. Prioritize delivering a sound and
good 100% of the code over delivering "the first" 80% of your solution
quickly. Develop code as if it were controlling Solid Rocket Boosters. Follow
good practices like MISRA. Don't allow exceptions. Do PROPER code reviews.
Most code reviews I have seen are a colleague spending the minimum possible
time so that he doesn't feel he did a bad job. In my opinion, a good code
review requires going through everything top-down from the requirements and
then bottom-up through each statement to verify everything is implemented
correctly. This takes about as much time as implementing it in the first
place. Make sure managers understand this typically takes more time to do
correctly and that they should expect the payout later, not this iteration.

4\. Hire carpenters and make them your senior staff. Make sure you have a
clear understanding of who is a senior or junior developer. A senior developer
is a person who understands the broader context and can be left to supervise a
small project with the understanding that he/she will be able to uphold
standards and provide correct solutions and guidance for the junior staff.
Make sure your senior developers are carpenters: most projects don't require
exceptional skills and rock-star developers. They require people who don't get
bored once they see something working, but instead have the drive to finish
the second 80% of your functionality and do it with the same amount of focus
as when they started.

------
o_nate
When I was just starting out as a wet-behind-the-ears developer, an older,
grizzled dev gave me invaluable advice for estimating: take your best guess of
how long something will take and double it, that's what you tell the users.
That advice has stood me in good stead, though I've found as I've gotten older
and more experienced, I could probably reduce the factor from 2 to about 1.5.

------
misterbowfinger
"Delivery dates have often irrelevant but very simple to understand impacts.
Good and bad solutions have dramatic but very difficult to understand
impacts."

[https://minnenratta.wordpress.com/2017/01/25/things-i-have-learnt-as-the-software-engineering-lead-of-a-multinational/](https://minnenratta.wordpress.com/2017/01/25/things-i-have-learnt-as-the-software-engineering-lead-of-a-multinational/)

------
Shivetya
One missing element is the failure to allow teams or their members to stay on
the project. It is very common to see people pulled off to must-haves and
support needs, and many projects totally skip accounting for staff vacations,
which can be costly with long-term employees who have four to six weeks out.

It is so easy to come across the frustrated developer, frustrated that they
just cannot be allowed to do the work.

------
parvenu74
Conway's Law: "organizations which design systems ... are constrained to
produce designs which are copies of the communication structures of these
organizations."

And most organizations, or collections of three or more people, have
dysfunctions. There are rarely process problems where technology is the main
problem; it's the wet-ware between the keyboards and the chairs that makes or
breaks projects.

------
wwarner
Yes, plan; the more planning the better. But set the deadline _first_, and
plan and design around meeting the deadline. Ask yourself, "I have two weeks
to deliver this, but if I had to deliver _something_ tomorrow afternoon, what
would I do?" and do that first.

~~~
house9-2
A.K.A. deadline driven development

this can work really well if you pare down the list of features and build the
simplest version of these features

------
the_arun
For me, not designing the product as a platform is the no. 1 root cause of all
the delays and confusion. People, delivering for speed, make compromises. Over
time we end up with a mud pile that slows down all future development. This
makes us slower and slower with age.

------
JohnL4
The eternal conversation, apparently.

I agree with making fine-grained plans as a way to uncover issues. Just
remember, no plan survives contact with the enemy. On the other hand, fortune
favors the prepared. (I actually think those are both Eisenhower quotes,
aren't they?)

------
progx
And then you make a good plan with buffers and it fails too.

Because you forgot so many things that cost time: you'll catch a cold, the
warm weather distracts, you ignored the existence of family & friends, your
computer breaks down, ...

------
js8
OT: I like the "thinking systems" infographic, but the fact that the green
areas are a few pixels off is annoying me no end. I hope it's not a new
trend.

~~~
idiomatic1
"to no end" means: uselessly. "no end" means: endlessly.

~~~
js8
Thanks, wasn't aware of that.

~~~
idiomatic1
Welcome :-)

------
xyz-x
Let's try another comparison: to a skilled surgeon. 4+4 years to get GP
status, then at least 4 more years to become a surgeon; another 2-10 to lead
in an operating theatre.

Diseases vary, just like the reasons for misbehaviour in code. While coding
also has a bit of construction that surgery doesn't, a lot of coding time is
spent on extending or fixing existing behaviour. As such, this simile is
better than the bridge-construction one.

It would also allow discussing leadership; a strong lead handles the 1-12
hours the surgery takes. He/she is expected to know the human body as a
system, being able to diagnose adverse conditions that occur during the
operation and instructing the people around her as she goes.

Operations can't take too long, or fatigue gets you; the corollary is that
you don't have slippage like you do in s/w dev. You can't be too unskilled,
or else you can't perform the work. You have to have lots of training before
you're the one leading.

Contrasting education: a lot of physicians go through much rote memorisation
when they start their education. Then they continue with lab sessions; this
could be useful for software engineers and operations folk, by letting them
try their hand at diagnosing production systems having problems. Such training
could be done with a 20-questions approach, each question being answered by a
metric of a category of log entries. In the end you should know what the
problem was. Labs at uni are much more constrained; they don't teach how large
production systems behave and don't teach the mental tools to debug them, they
only teach the basics of programming. Furthermore, what they teach of
programming is never geared towards what real systems look like, because most
teachers have never been close to one.

There's no place in the world where you can get an education like what a
surgeon gets; comp-sci is more like training everyone to be a psychologist
(because we want to get the full picture!). Apprenticeship programs are more
like training to be a nurse.

\- The formal education people receive is not applied towards bettering
industry performance; it's geared towards inventing and academia.

\- The shorter education people receive is not about understanding the system
and how to construct and fix it; it's about working next to it.

\- Similar hard rote-memorisation tests could be coupled with in-depth
debugging/operations sessions and experienced teachers active in large
production systems, like understudies preparing to be expert surgeons.

\- In light of this, we need a career ladder that doesn't end with the same
title you start with ("engineer"/"senior engineer"). One that is structured.

------
deedubaya
Software development is much more like R&D than it is engineering

