
Why Programmers Are Bad at Estimating Time - jpro
http://java.dzone.com/programmers-are-bad-estimating
======
bradleyland
A simple multiple is not the answer. This is an incredibly well studied
problem. At the basis of all of the reasoning is the inherent uncertainty of
the task. Estimation is only accurate when all the details of the required
task are known. For example, you can estimate how long it will take to
assemble 5000 widgets if you know the average time it takes to assemble 10
widgets, and that time is consistent between executions.

With programming, there is no atomic unit of work that is consistent between
projects, thus highly accurate estimation is next to impossible. What seems
similar from a conceptual standpoint is often completely different depending
upon the input factors: language used, experience of the team, specific
project requirements.

This problem has given rise to many new ways of working on programming
projects. Solving this issue inside a company means building understanding of
the inherent uncertainties with management and stakeholders. Solving this
issue as a freelancer means inflating estimates and hoping and praying that
you can keep the customer in check, because "we can't estimate that" just
doesn't work for consultants.

~~~
jacques_chester
Watts Humphrey made the point that any seriously constructed estimate is
_still_ more useful than no estimate at all.

The SEI has claimed that organisations with well-defined processes, working
on problems where they can control many factors such as language and tool
choice, with stable staff and so on, can in fact get to within 5% variance
(sorry, can't find the source right now).

If your software shop is working on same-y tasks (and many do), using
historical data as a guide to future performance is perfectly reasonable.

~~~
fghh45sdfhr3
In my anecdotal experience, at large corporations with well-defined processes,
the 5% variance is the result of huge time estimates. Then you make sure you
don't implement any faster than the estimate. (Even if it turns out you could
have done it in half the time.)

~~~
davidcuddeback
I was thinking something similar. In my anecdotal experience [1], large
corporations can control variance by normalizing everyone's velocity. For
example, in order for me to finish a feature, I had to write a document
describing how I would solve the problem and convince my manager and tech lead
that it would work. After spending about two months getting my plan approved
[2], they let me implement it, which took me about four hours. At that ratio,
the actual development time is practically negligible, so it's easy to have
low variance. The side effect is that you also have very low productivity.

Imagine the opposite scenario, where they let me implement it first and then
approve it if it works. Let's say it takes me a couple days to finish it (this
time has to include some of the time that it took me to figure out how to
solve it originally). Now the feature is getting done in a matter of days
instead of months. The flip-side is that you can have higher variance, because
if there's a problem with my initial implementation, that adds more time to
fix the problem. In this scenario, the time it takes to finish features is
more dependent on the engineer's ability than the time it takes to pass
through the bureaucracy, and there's going to be more variance in the
abilities of individual engineers.

[1] I worked at Microsoft as an intern in the summer of 2009.

[2] There were many days that I literally couldn't do anything except wait for
clearance to go forward.

------
droithomme
I must strongly disagree with the article. The problem and its true cause are
well known.

Manager: "How long will it take to build X?"

Experienced developer: "I'll need a couple weeks to prepare an estimate."

Manager: "I don't have two weeks. Just a rough estimate, no pressure."

Developer: "This is a big project with a lot of uncertainties, with Abdul,
Karen, and Jim, we might be able to get it done in 12-18 months, minimum,
assuming there are no scope changes."

Manager: "No, that's wrong, we need it in five weeks maximum, and you can
only have Amy."

Developer: "The new girl? It will take months just to get her up to speed with
the skills. It's impossible."

Manager: "Make it possible."

Developer: "Wishing doesn't make it possible."

Manager: "You geeks are unprofessional and your bad attitude is damaging the
company. Where is the can-do team spirit like the sales team has?"

Later...

Executive: "How long will it take for the new project?"

Manager: "Five weeks, just like you wanted, assuming the dev doesn't spend the
time posting on the internet."

Executive: "Good, I am giving the project the green light then."

One, two, three and four weeks later:

1: "Here are some scope changes."

2: "Here are some change requests."

3: "This should be done completely differently."

4: "The way ABC works has been changed to DEF. Should just be a minor tweak."

End of five week death march:

Executive: "Why is this not done?"

Manager: "The stupid geeks are unprofessional! Please refer to this chart I
found on the internet about how these losers don't know how to estimate!"

~~~
ThomPete
Even with a couple of weeks the risk of the estimate being wrong is high.

------
edw519
Why blame the programmer?

Bad estimates, like bad anything, have all kinds of possible causes. The
programmer's "estimating weakness" is just one possible (and often unlikely)
cause.

In general,

    
    
      A. People do stuff.
      B. Programmers write code.
      C. People do stuff.
    

so don't automatically blame the programmer for fuck-ups in Phases A or C.

More specifically, one of my recent real world examples:

[http://edweissman.com/it-takes-6-days-to-change-1-line-of-code](http://edweissman.com/it-takes-6-days-to-change-1-line-of-code)

~~~
ThomPete
Exactly!

It has nothing to do with the programmer. It has everything to do with what
projects really are today.

1\. Undefined in scope, even when defined

2\. Problems to be solved (innovation), not solutions to be produced
(production)

3\. Filled with hidden complexities of potentially infinite character

 _Copied from another post of mine:_

Time estimation is an industrial way of thinking applied to a post-industrial
(and post-capitalist) world.

In the post-industrial world, time isn't the problem; project definition and
scoping are. In the industrial world the problem was already solved: the
machine was built, the market often established, and output depended on a few
factors that could be adjusted (need more output? add more of X).

In the post-industrial world, every project is about problem solving and
scoping. To put it in perspective: if we applied post-industrial reality to
the industrial world, it would mean that each time a product needed to be
made, if not the factory, then at least the machines would have to be
developed anew.

It will take many many years before time estimation will die, but it will
happen.

~~~
ahsteele
Do you have a link to that post? I really like your summary here and would
like to link to it directly, outside of linking to this comment.

------
noonat
Programmers are often asked to accurately estimate tasks that are too large
to estimate. Estimates go wrong at both extremes of task size. The larger a
task, the more likely a detail has been missed during estimation, and the
greater the chance that you will encounter a surprise during development that
greatly increases development time.

Most successful estimation methods boil down to refusing to estimate larger
tasks, and instead breaking them down into smaller ones. The process of
breaking them down forces people to think through them in more detail, and
flushes out many of the surprises early on. Similarly, most of the successful
systems require you to apply a minimum amount of time per task (e.g. one
hour). This removes the problem of programmers underestimating simple tasks.
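That minimum-per-task floor is trivially mechanical; here is a minimal Python sketch (the one-hour threshold comes from the example above, but the constant and function names are illustrative):

```python
# Floor applied to every per-task estimate, per the "minimum amount of
# time per task" rule (one hour is the example threshold).
MIN_TASK_HOURS = 1

def estimate_with_floor(subtask_hours):
    """Sum subtask estimates, rounding each one up to the floor."""
    return sum(max(h, MIN_TASK_HOURS) for h in subtask_hours)

# Five "15-minute" tweaks plus one 4-hour task: the floor turns a naive
# 5.25-hour total into 9 hours, absorbing typical underestimation.
```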

The best estimates come when you continue to use that system over several
iterations of work, and analyze the accuracy of previous estimates. This gives
you (and other developers) a better sense of how to make good estimates.

Sadly, even when you have a reliable system of predicting how much work a
development team can get done in a given chunk of time, many managers bow to
pressure from above to hit deadlines, and expect the team to accomplish more
than the data says they are able to accomplish.

~~~
roc
I would add that the only really good estimates happen when a small task is
estimated by a programmer who has already performed that task, at the same
level of complexity, a few times before. (And the programmer doing the
estimating is the one who'll be doing the implementing, natch.)

Even at the scale of small tasks, programmers estimating things they've never
done before are often wildly wrong.

~~~
HeyLaughingBoy
You are correct, but note that a wildly wrong estimate provided with good
intentions is better than no estimate at all.

If I get an estimate for something that's never been done, I want to know that
it's a SWAG so I can rate it accordingly. If you go ahead with the task, I'd
probably ask you to give me more frequent updates than normal, so we can see
if the estimate needs to be adjusted.

I know that a lot of people work for hugely dysfunctional organizations, but
in most reasonable places, it's expected that some estimates will be wrong and
either the task should be killed, or the schedule adjusted.

~~~
roc
Oh absolutely. It's like that old chestnut about plans being useless but
planning being essential.

And one can even find productive use for an aggregate SWAG-to-reality ratio,
tracked over time. Individually, that ratio won't give you any better an
indication on where a given SWAG is likely to land, but in the aggregate you
can use the ratio to put something like error-bars on the project as a whole.
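Tracked over time, that aggregate ratio can be turned into rough error bars in a few lines; a sketch assuming you've logged (estimate, actual) pairs (all names here are made up):

```python
from statistics import mean, stdev

def swag_ratio_bounds(history, new_swags):
    """Put rough error bars on a project made of fresh SWAGs.

    history: past (estimated_hours, actual_hours) pairs.
    new_swags: raw estimates for the upcoming work.
    Returns (low, expected, high) totals for the project as a whole.
    """
    ratios = [actual / est for est, actual in history]
    m, s = mean(ratios), stdev(ratios)  # needs >= 2 history entries
    total = sum(new_swags)
    return total * max(m - s, 0.0), total * m, total * (m + s)

# If past work ran 1.5x-2.5x over estimate, a 20-hour pile of SWAGs
# lands somewhere well above 20 hours.
low, expected, high = swag_ratio_bounds([(10, 15), (8, 20), (20, 30)],
                                        [5, 5, 10])
```

As the comment says, this won't tell you where any individual SWAG will land, only roughly where the aggregate is headed.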

------
don_draper
Programmers are bad at estimates for one main reason: Many think that to stay
employed they must give the answer the manager or client wants to hear and
often that number is way too low.

~~~
gcp
I've done some work at a customer where they used Planning Poker. I'm guessing
the idea comes from some Agile development or Extreme Programming methodology.
I'm not a fan of the latter, but Planning Poker was entertaining and seemed
like it had some good points.

The team sits together, everyone has a deck of cards with values. The team
lead calls out a task and everyone throws down his estimate at the same time.
If the values diverge widely, the team lead asks the outliers to explain, and
everyone can reconsider after the explanation. The highest remaining number is
taken as the estimated time.

This was done in days for big tasks, which were then split up into sub-tasks,
where the same process was repeated in hours.
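Mechanically, a round of the game looks something like this sketch (the 2x spread threshold and the function name are illustrative assumptions, not part of any formal rule set):

```python
def poker_round(votes, spread=2.0):
    """One Planning Poker round as described above (a sketch).

    votes: {name: estimate_in_days}, revealed simultaneously.
    If the high vote is more than `spread` times the low vote, the
    outliers explain themselves and everyone re-votes; otherwise the
    highest remaining number is taken as the estimate.
    """
    low, high = min(votes.values()), max(votes.values())
    if high > low * spread:
        outliers = sorted(n for n, v in votes.items() if v in (low, high))
        return {"converged": False, "discuss": outliers}
    return {"converged": True, "estimate": high}
```

Real teams typically restrict votes to fixed card values (1, 2, 3, 5, 8, 13, ...), which this sketch doesn't enforce.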

Sharing the responsibility of estimates prevents the manager-pressure problem
you talked about. If the team estimates it takes 5 days and the manager thinks
it should only take 3, the team isn't going to back down (unlike what could
happen in 1-1 conversations), and the manager will ignore the estimate at his
own peril.

[http://www.codinghorror.com/blog/2007/10/lets-play-planning-poker.html](http://www.codinghorror.com/blog/2007/10/lets-play-planning-poker.html)

~~~
ctdonath
Planning Poker helps, but a few caveats must be recognized:

Some people have little or no idea of the effort required for a task. Fair
enough; just don't base plans on known ignorance. Compelling them to vote is
silly.

Some people have differing notions of "done". One self-appointed hotshot may
be able to crash thru the code and make something run happy path in a few
hours, and thus gives a low estimate, but unlike others fails to account for
ripple effects, breakage, documentation, consulting with others, etc. Are
votes for "it works", or for "everything related is completed"?

All too often outliers are _right_, but dismissed because they're outliers
and given just a few impromptu seconds to explain the complex nuances of why
they're right and the disinterested opposition is wrong. Subsequent "I told
you so"s much farther down the road are disheartening, and kept quiet, as the
opposition still doesn't see the connection, nor want to.

Again, make sure the estimate includes _all_ related work. Implementation may
be quick; integration testing or documentation may be long. Don't give
something a 3 to do and ignore the additional 5 needed for ripple effects.

~~~
JoeAltmaier
The team quickly learns what everybody is going to say for each part of the
estimate, and they all throw down the same card. Nobody wants to spend any
more time in the meeting, and throwing down an outlier means thinking hard and
explaining yourself.

------
snowwolf
The top answer to this question on Quora is the best I've seen.
[http://www.quora.com/Engineering-Management/Why-are-software-development-task-estimations-regularly-off-by-a-factor-of-2-3](http://www.quora.com/Engineering-Management/Why-are-software-development-task-estimations-regularly-off-by-a-factor-of-2-3)

What really surprises me is that everyone accepts that there have to be
estimates. While estimates are useful in many scenarios, most of the estimates
I see developers giving add no value and are a waste of time. I've been
working with teams now for the last 2 years who don't do estimates, and they
are the highest performing teams I've ever worked with and the rest of the
business have the highest trust in IT I've seen.

------
njharman
The (only) way to get good at estimation is to make estimates, track your
time accurately, then compare actual to estimate, focusing on which factors
caused the estimate to be wrong. Repeat.

This will also provide a velocity (actual / estimate) by which future
estimates are multiplied. My initial velocity on a new project is 2. That is,
if I think something will take 8 hours, I estimate 16 hours.

Also, the unit of estimate is "eng hour", not "wall clock hour". You need to
multiply the estimate by a friction factor to account for standups, meetings,
demos, hallway talk, breaks, etc. Typically 1.2-1.5, depending on the
environment.
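The arithmetic above fits in a one-line helper; a sketch using the commenter's own numbers (velocity 2 on a new project, friction in the 1.2-1.5 range; the function name is made up):

```python
def adjusted_estimate(raw_eng_hours, velocity=2.0, friction=1.3):
    """Turn a gut-feel engineering-hour figure into a working estimate.

    velocity: historical actual/estimate ratio (2 is the starting value
    on a new project, per the comment above).
    friction: overhead multiplier for standups, meetings, demos, hallway
    talk, breaks, etc. (typically 1.2-1.5).
    """
    return raw_eng_hours * velocity * friction

# An 8-hour gut feeling doubles to 16 eng hours, then grows to about
# 20.8 wall-clock hours once a 1.3 friction factor is applied.
```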

btw, tracking time spent accurately is the main reason developers don't get
better at estimation. It's hard, so developers don't do it, and PMs/managers
are interested in billable time or other derivative metrics, so they and
senior engineers rarely provide the support needed to develop estimating
skill.

~~~
iamwil
I've tried this before, and I didn't find that I got any better. I had an
Excel spreadsheet tracking each task, my time estimate, and the actual time
spent doing it. I found that while I was fairly accurate for most tasks,
every couple of tasks I'd have one that was wildly off. Thus, my variability
for task estimation was huge.

~~~
HeyLaughingBoy
How large were your tasks? The rule of thumb I use is that no individual task
should be more than 4 hours. If it comes out longer than that, I break it up
into subtasks. It is much easier to estimate small things than big things.

~~~
iamwil
The tasks were at most 8 hour estimates. Perhaps I should try 4 hours next
time.

------
codegeek
I don't think that the problem of being able to estimate time lies only with
programmers. Time estimation is a combination of many factors. In the context
of programming or software development, it can be anything from the skills of
the developer, to constant customer/client scope changes, sudden resource
loss, critical vendor bugs that are usually more time consuming and cannot be
fixed in-house (if using a vendor system), or sudden changes in the company's
budget policy and project criticality (I have worked on million-dollar
projects that were delayed because management did not want to take a risk due
to other unrelated issues). Unfortunately, all these variables are extremely
difficult to estimate.

So even if a programmer is good and can estimate almost accurately from
his/her side, these other variables can throw off a project in no time.

------
vbl
I know this isn't a real answer, but I say: fuck estimating.

I'd much rather have my team focus on good on-the-fly prioritization, recovery
from rabbit holes, spiking efficiently, smart dependency engineering and
building fully featured stories.

Building software - especially anything new, or with an odd mix of
integrations (i.e. everything) - is full of a lot of unknowns. When you accept
that, and stop trying to control or predict it, you can spend more of your
time and methodology horsepower on things that deliver more reliable value.

~~~
ctdonath
Easy when software is the totality of development. When there's hardware
design, marketing, training, etc. happening in parallel, top management needs
to know how long it will take. Sometimes, if you don't get the software out
the door by an arbitrary date, the company is finished; _estimate_
accordingly.

~~~
mattvanhorn
This is exactly why software estimating is bullshit.

If the software has to be out the door by a specific date or the company dies
- you DON'T NEED AN ESTIMATE.

Are you going to cancel the project based on the estimate? Dead Company

If the project is _really_ too big to complete in time: Dead company, and
optimistic estimates won't help.

If you could build a subset of the project and survive, then you still don't
need an estimate - you just do the important stuff first.

If you were being chased by a lion, would an estimate of the time necessary to
outrun it be of any use whatsoever? Of course not. You start running, and you
either make it or you don't.

If there is other stuff happening in parallel, like marketing, then it is
happening in parallel, and has no dependency on the software. That's what
parallel means.

If you have a critical path dependency, then you're back to the situation
above where you are going to make it or not, and the estimate won't really
affect your ability to deliver.

Even estimating for cost is bullshit - if you practice lean development, you
watch your metrics and if something doesn't work, just scrap it for the
minimum amount of money lost. You can't do much better than that, except for
not starting, and in that case you don't need an estimate.

I prefer the approach that says "we're going to deliver feature X, and if it
takes longer than Y or costs more than Z, we abort." You know exactly how much
you stand to lose under this plan. The problem is, this means that management
has to be involved on a day-to-day level with monitoring progress and making
go/no-go decisions, when most managers I've met would prefer to have a single
3 hour planning meeting and then disappear for 4 months to raise money or play
golf or go to tradeshows in Vegas or whatever.

~~~
ctdonath
No, correct estimations mean management knows "X cannot be done in time T",
and now can act accordingly to decide what subset/variant of X to pursue
instead.

"Dunno, we'll give T a shot" will fail.

~~~
mattvanhorn
I was responding to the comment that established "X _must_ be done in time T"
or the company fails.

In a perfect world, estimates would come before deciding to proceed, but in
95% of the places I've ever worked, the project necessity is decided first and
the estimates are asked for afterwards.

In reality, the project often settles for a shoddy subset of the original
vision in order to make an arbitrary date, when they could have had the same
exact subset at much higher quality if they had merely proceeded with the
important parts first in a lean/agile manner, using micro-estimates (if
bothering to estimate at all).

A bigger problem with estimating the solution of unknown problems is that
confidence in the estimate varies inversely with the precision (or usefulness)
of the estimate. I could estimate all my projects with 99% confidence as
"somewhere between 1 day and 1 year".

That's not a useful estimate.

Or I could estimate "this will take 42 days" and be wrong 99% of the time.

------
jacques_chester
Steve McConnell of _Code Complete_ and _Rapid Development_ fame also wrote a
book about estimating software development -- _Software Estimation:
Demystifying the Black Art_. It's a great read and as usual McConnell is an
absolute master at distilling vast reams of research into concise, interesting
reading.

------
don_draper
But is it just a programmer problem? How good are lawyers at giving estimates
for solving legal problems that are even slightly complex?

~~~
Albuca
It's everyone's problem.

Everything in life has a deadline, and at every point in your life you'll be
asked to make and meet them.

From dinner to that assignment/project due, the pressure is always there.

I feel that most people underestimate the time things take, mainly because
they want to look competent. There are many things you just cannot put a
deadline on, and yet you will be asked to.

Under promise, over deliver. Either way, they want results.

------
akeefer
The only thing I've ever seen work (and I've seen it work well) is to avoid
estimating based on time. Instead, you estimate things relatively and then
empirically measure how long things take (known as the "velocity" in agile
parlance). So the programmer doesn't have to think about the 1001 things that
they have to do in order to complete a task, or all the distractions or BS or
overhead they'll have to deal with, they just need to think: is task A about
as hard as task B, or is it twice as hard?

There are a few key pieces to making that work. First of all, you give your
estimates in terms of "points", not hours or days. Secondly, you estimate
tasks relatively to one another, not absolutely: a 3-point task is about
3-times as much work as a 1-point task. Third, you have multiple people give
the estimates, often using something like "planning poker" (where everyone on
the team selects their estimate and reveals them simultaneously, then
discusses if they're different), which ensures they're more reliable. Fourth,
you measure the velocity of the team (not individuals, the team) over time. It
often takes a while for it to get anywhere close to stable, as people get used
to the project, the technology stack, each other, and so forth.

That's the only thing I've ever seen work, and it can actually work pretty
well. It prevents the developers from having to account for all the non-coding
things: you don't have to think "Well, this will take 2 hours of coding, 2
hours of test-writing, 1 hour of debugging, 1 hour of docs . . . but I also
have 2 hours of meetings a day, and I get pulled off to firefight customer
problems periodically, so really it'll take 4 calendar days." You just measure
all that stuff empirically: if you have a bunch of meetings or firefighting,
your velocity slows down, but your estimates don't have to change to account
for it.
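The velocity bookkeeping described above boils down to a couple of divisions; a minimal sketch (the function names and the three-iteration window are illustrative, not standard):

```python
def velocity(points_per_iteration, window=3):
    """Rolling average of story points the whole team finished."""
    recent = points_per_iteration[-window:]
    return sum(recent) / len(recent)

def iterations_left(backlog_points, points_per_iteration):
    """Forecast remaining iterations from measured velocity."""
    return backlog_points / velocity(points_per_iteration)

# A team that finished 18, 22, and 20 points in its last three
# iterations has a velocity of 20, so a 100-point backlog is about
# five more iterations -- however much of that time went to meetings
# and firefighting is already baked into the measurement.
```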

It also avoids wishful thinking, especially if you're rigorous about what
counts as something being "done". It's harder to lie about whether or not
you're going to hit your dates if your measurements say you're going too
slowly; it's far easier to be in denial when all you have are time estimates,
since those are easier to fudge than relative estimates of difficulty or
empirical measurements of velocity.

(Of course, as with any measure of productivity, it's subject to abuse by
idiots. But as a scheduling and measurement tool, it can be invaluable.)

------
rickmb
This bit made me stop reading immediately: _"The task is way too large to get
an understanding of for most programmers. It has to be sent back to an
architect.."_

I checked the date to see if it wasn't written in 1982 before I closed the
tab.

~~~
dpark
Yeah, that was a really poor comment. If your developers can't break down a
1-week task without an "architect" stepping in, then either your process is
broken or your developers suck (or both).

------
RyanMcGreal
Humans are generally bad at estimating time (and budget) for projects, in
large part because we tend to take an "inside view" [1] and imagine
unrealistic best case scenarios, instead of making an estimate based on
empirical data for similar projects completed previously.

[1] <http://www.overcomingbias.com/2007/07/beware-the-insi.html>

------
peteretep
The most popular article on my blog is called "How To Estimate Like an Adult":

[http://www.writemoretests.com/2012/02/how-to-estimate-like-adult-part-1.html](http://www.writemoretests.com/2012/02/how-to-estimate-like-adult-part-1.html)

Estimation is a skill, like any other part of learning to be a good software
engineer. People shouldn't be making excuses for their poor estimation skills
after ten years of experience...

~~~
bmj
I think this is a good point, but also see edw519's comment at the top. You
should work on your estimation skills, but management should understand that
the values you give them are for the work itself, not necessarily today plus
your estimate. Stuff happens, and your estimates seem to slip, but in reality
they could be spot-on for the actual time it would take to do the task
without interruption.

------
stripe
Estimating time is no trivial task. From my past experience I can say that it
really differs from person to person and from team to team. In fact, in the
past we wasted so many hours estimating workload, time that could have been
better spent on the actual tasks.

It took quite some effort to get into a Kanban-style process. Not for the
developers, but for business and product managers, as their central question,
'when?', is measured differently now. We got away from developers guessing
how long it will take, and instead use past performance to guess future
timelines. Works better for us.

The whole point is: take the burden off your developers and improve the
processes around them. Provide better requirements, give valuable feedback,
understand issues and risks. The more input from your side, the better the
'estimations' from your developers.

------
sequoia
"4 hours: This is probably the only realistic estimation. It is large enough
to have some margin for unexpected problems, while the task is still small
enough to grasp."

    
    
        1. Estimate everything in 4 hour increments
        2. 100% accurate estimates
        3. Pizza party

~~~
saraid216
This was also my entire takeaway.

------
ThomPete
Time estimation is an industrial way of thinking applied to a post-industrial
(and post-capitalist) world.

In the post-industrial world, time isn't the problem; project definition and
scoping are.

In the industrial world the problem was already solved: the machine was
built, the market often established, and output depended on a few factors
that could be adjusted (need more output? add more of X).

In the post-industrial world, every project is about problem solving and
scoping.

To put it in perspective: if we applied post-industrial reality to the
industrial world, it would mean that each time a product needed to be made,
if not the factory, then at least the machines would have to be developed
anew.

It will take many many years before time estimation will die, but it will
happen.

~~~
iamwil
If writing software is more akin to writing essays than to building bridges,
and thus nearly impossible to estimate accurately--

Why do we need time estimates at all? Why does a company manager need time
estimates? Is it mostly budget concerns?

How would a company operate if there were no time estimates?

~~~
ThomPete
That is hard to say, and it would take a book to explain what could come
next.

All I know is that it is unsustainable.

The complexity is simply too high, and it's not getting better. That's one of
the reasons, I think, why you see the fail-fast movement being so successful.

Once you accept that failure is part of the process, once you abandon the
"zero mistake" policy that many large organizations instill internally and
externally you will begin to approach projects differently.

The truth is that "zero mistake" organizations make as many mistakes as
everyone else, they just have the financial strength to ignore them as long as
economy of scale works in their favor.

I could write forever about projects that went wrong not because the
developers were bad, but because the premise that fuels product development
is broken.

I blame primarily business schools and large parts of academia for this. But
it could extend all the way into the way the stock market is structured.

If you buy my premise that the post-industrial age is different from the
industrial age, and that project definition is primary and time is secondary
today, then it puts some doubt, at least in me, about whether the stock
market's focus on growth and Q's is sustainable.

Nature seems to be doing a good job at pacing various processes. It takes
nine months to give birth to a child. One cell at a time. But the process is
ongoing. Nature is the ultimate continuous deployment strategy.

------
rburhum
When explaining this, I usually turn it around and say "How long will it take
you to get 10,000 paid recurring customers? OK, how about 2?" Then the concept
of task estimation and scope starts to make sense to most people.

------
debreuil
I just want to point out that in #1: "Time for starting the computer, the
development environment and getting the right source." What programmer needs
to start their computer to fix a bug? That is crazy talk.

------
bceagle
The annoying thing for me is not necessarily that programmers are bad at
estimating time but rather that programmers and others don't fully
understand/appreciate that fact. People who don't understand it always seem to
try and push developers to come up with timeframes that are either completely
unreasonable or have so much padding that they end up being ridiculous. I
think the solution to this whole problem is simply getting everyone involved
to read articles like this and start leaning more toward managing priorities
than setting hard deadlines.

~~~
bceagle
But to counter my own point, the only reason I think deadlines are good is
that some people don't put in a full effort unless there is a deadline. Even
with that, however, the person setting the deadline should understand that
the estimates will be off, and expect that things may very well be late no
matter how hard you push the developers.

------
stcredzero
Does the "wisdom of the crowds" help in this case?

At the company I just did a project for, there were no requirements that
weren't vague general statements, only "bugs" that are discovered after the
fact. Discovery after playing with the app is fine, but calling it a bug is
not. Also, all estimates were done on the spot or on short notice (as if they
were "bugs.")

Even worse, it turns out that there is a lot of knowledge about requirements
and use cases, but it only comes out in the form of "this is a bug because..."

You can think of this as "avoiding documentation through blame."

------
brudgers
I've been estimating my time as a designer in the AEC industry since my first
job more than 20 years ago.

The only reason I did was because Greg, the VP of Engineering, said doing so
was the only way I would get better at it. And the only reason I got better at
it was because I've tracked my actual time against my estimates on many
occasions over the years and been burned by bad assumptions from time to time.

At this point I have a process which produces a fairly accurate range of
"billable time" based upon an important recognition: a list of tasks with
assigned times doesn't account for flow nor does it account for the efficiency
experience brings when dealing with "known unknowns." My first estimates tend
to be wildly pessimistic (the opposite of what the author observes in the
estimates of others).

On the other hand, one of the ways I evaluate my initial task list with
assigned time estimates is by chunking the work into half days, because I've
found half-days to be highly accurate across a variety of project sizes...at
one point all my proposals were written as $xxx.xx/half day.

One of the useful features of the half day is that it is vague. Maybe it's
three hours and I can go for a walk, maybe it's five and I'm at the computer a
little longer. Either way, it doesn't have much impact on my day or
productivity.

Another useful feature is that it allows chunking multiple small tasks
together intuitively.

    
    
    > (equal half-day (add-hours .25 1.0 .25 .5 .75))
    T
    > (equal half-day (add-hours 1.25 1.0 .25 1.5 .75))
    T
    

But the half day doesn't try to force the unrealistic optimism of large time
spans downward - what does it mean to estimate something will take a week?
Billable time does not translate directly into calendar time because of flow
and the productive use of procrastination.

In other words, a forty hour project doesn't mean I will be done in a week
(unless the work is entirely rote, and in that case I am probably going to be
too expensive anyway). From start to end, a forty hour project probably takes
at least two months - which is about the same amount of calendar time an 80 or
120 hour project might take - because a 40 hour project tends to have a higher
proportion of creative time (in my case design) than a relatively larger
project.
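The half-day test in the REPL snippet above can be sketched as runnable code.
This is my rendering of the idea, not brudgers' actual tooling; the five-hour
ceiling comes from his "maybe it's three hours... maybe it's five" description:

```python
# A "half day" is deliberately vague: any bundle of small tasks totalling
# up to about five billable hours counts as a single half-day chunk.
HALF_DAY_MAX_HOURS = 5.0

def is_half_day(task_hours):
    """True if a bundle of task estimates fits in one half-day chunk."""
    total = sum(task_hours)
    return 0 < total <= HALF_DAY_MAX_HOURS

# Both bundles from the comment above evaluate to one half day,
# mirroring the two (equal half-day ...) => T lines:
print(is_half_day([0.25, 1.0, 0.25, 0.5, 0.75]))  # 2.75h -> True
print(is_half_day([1.25, 1.0, 0.25, 1.5, 0.75]))  # 4.75h -> True
```

The vagueness is the point: a 2.75-hour bundle and a 4.75-hour bundle bill
identically, so small tasks group together without false precision.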

------
jameshart
Most people don't even know what they are actually asking for when they ask
'how long will this take?'. I always try to tease out whether the person
asking is looking to find out 'when will this be done by?' or 'how much effort
is this?'. Even when you've figured that out, you need to understand what they
think they mean by 'done'. Code is changed on my machine? I've checked it in?
Code deployed to QA and tested in multiple browsers and run through the
regression suite? Code deployed to pre-production for approval by the client?
Code approved by client, deployed to production and live? Also, do you want me
to take into account the time I'll spend waiting for decisions to be made
about how this feature has to work, or just tell you how long it will take me
to do it once you've got me the answers to these questions?

One thing I've noticed is that organisations can only normally operate at a
certain number of 'decisions per day' (and the larger the organisation, the
lower this number). For estimating anything where requirements are not yet
firm, or will be discovered during development, that rate is usually the
constraining factor, not the development effort.
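That decision-rate bottleneck can be framed as a simple maximum over two
pipelines; the figures below are hypothetical, chosen only to illustrate the
point:

```python
# If an organisation can only make so many decisions per day, the decision
# queue, not developer effort, can set the floor on elapsed time.

def project_duration_days(dev_effort_days, open_decisions, decisions_per_day):
    """Elapsed calendar time is bounded below by the slower pipeline."""
    decision_days = open_decisions / decisions_per_day
    return max(dev_effort_days, decision_days)

# 20 days of coding, but 30 unresolved requirements questions at an
# organisation that settles one question every two days:
print(project_duration_days(20, 30, 0.5))  # 60.0 -> decisions dominate
```

In this sketch the development estimate is almost irrelevant: tripling the
team changes nothing, while doubling the decision rate halves the schedule.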

------
MattRogish
Tom DeMarco, of Peopleware fame, wrote a book called "Controlling Software
Projects: Management, Measurement, and Estimation". He has, in some ways,
rejected his ideas in that book.

In this article, "Software Engineering: An Idea Whose Time Has Come and Gone?"
[http://www2.computer.org/cms/Computer.org/ComputingNow/homep...](http://www2.computer.org/cms/Computer.org/ComputingNow/homepage/2009/0709/rW_SO_Viewpoints.pdf)
he states:

"My early metrics book, Controlling Software Projects: Management,
Measurement, and Estimation (Prentice Hall/Yourdon Press, 1982), played a role
in the way many budding software engineers quantified work and planned their
projects. In my reflective mood, I’m wondering, was its advice correct at the
time, is it still relevant, and do I still believe that metrics are a must for
any successful software development effort? My answers are no, no, and no."

The tl;dr of his article is that the software projects that require tight cost
control are the ones delivering little or no marginal value. That is,

"To understand control’s real role, you need to distinguish between two
drastically different kinds of projects:

■ Project A will eventually cost about a million dollars and produce value of
around $1.1 million.

■ Project B will eventually cost about a million dollars and produce value of
more than $50 million.

What’s immediately apparent is that control is really important for Project A
but almost not at all important for Project B. This leads us to the odd
conclusion that strict control is something that matters a lot on relatively
useless projects and much less on useful projects. It suggests that the more
you focus on control, the more likely you’re working on a project that’s
striving to deliver something of relatively minor value.

To my mind, the question that’s much more important than how to control a
software project is, why on earth are we doing so many projects that deliver
such marginal value?"

I believe this as well. If the value of the software we're writing is so low,
it's probably not worth writing. The high-value stuff isn't worth more than a
cursory (e.g. Agile "Points") estimation.
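The arithmetic behind DeMarco's A/B contrast is worth spelling out. The cost
and value figures come from his example; the 20% overrun is my illustrative
assumption:

```python
# A cost overrun that sinks the marginal project barely dents the valuable one.

def net_value(value, cost, overrun=0.0):
    """Value minus cost after a fractional cost overrun."""
    return value - cost * (1 + overrun)

# Project A: $1.0M cost, $1.1M value; Project B: $1.0M cost, $50M value.
for name, value in [("A", 1.1e6), ("B", 50e6)]:
    on_budget = net_value(value, 1e6)
    blown = net_value(value, 1e6, overrun=0.20)
    print(name, on_budget, blown)
```

A 20% overrun turns Project A's $100k profit into a loss, while Project B
still nets about $48.8M, which is why strict control matters for A and hardly
at all for B.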

The other thing, which he doesn't touch on the article, is that in order to
shrink the "cone of uncertainty" (<http://construx.com/File.ashx?cid=1649>)
you must put more and more effort into actually "solving" the problem. Rarely
do estimates consider the _time it takes to estimate_ (requirements docs,
interviews, etc.). Ultimately, if you follow the line of reasoning all the way
out, perfect estimation requires you to solve the problem and all you then
estimate is precisely the amount of time it will take to type the code you've
specced out.

Of course, what you don't have an estimate for is all the time it took to
arrive at your estimate. And thus, the folly of software estimation is
revealed.

(edit: Lack of blockquote is quite annoying, apologies for the messed up
styling)

~~~
overgryphon
One thing to consider about the relationship between value and control is that
value is not a constant. Project A may be worth 70 million if it is finished
in time for partners C and D to do their portion before dates X-Y, and only 30
million or less after that time period. Time often plays a strong role in the
value of a product, and the more time impacts produced value, the more control
is valued.
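The time-dependent value described above can be sketched as a step function of
the delivery date. The $70M/$30M figures come from the comment; the on-time
probability is a hypothetical parameter:

```python
# Value depends on when the project ships, not just whether it ships.

def project_value(delivery_day, window_end):
    """$70M inside the partner window, $30M (or less) after it."""
    return 70e6 if delivery_day <= window_end else 30e6

# Under schedule uncertainty, expected value hinges on the odds of hitting
# the window -- which is what makes control worth paying for here.
on_time_prob = 0.6  # hypothetical
expected = on_time_prob * 70e6 + (1 - on_time_prob) * 30e6
print(expected)  # roughly $54M
```

The steeper the step, the more each percentage point of schedule confidence is
worth, so estimation effort buys real money on time-sensitive projects.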

~~~
MattRogish
That assumes that software development is infinitely scalable. If you knew
going into it that you couldn't make the first deadline, just start off with
more programmers, right? Just like 9 women working together can produce one
baby in one month, right?

Provided you have the right software developers, the product takes as much
time as it takes. Managers can't "speed it up" however much they hope, pray,
and wring their hands.

"Finished in time" - if the requested product can't fit in that time window,
then the only thing to do is cut features. Agile teaches us that if we start
with the highest value features _first_ , we can meet any deadline by cutting
features as we get there.

Throwing intense estimation at a giant list of features to see if you can hit
that "finished in time" point is simply waterfall - and studies have shown
that (for most software projects) waterfall cannot predict _a priori_ whether
or not you will hit a deadline.

Thus, if you're working on a project where TIME and SCOPE are fixed, you're
boned to begin with. Spending a few weeks waterfalling won't get you out of
it. If anything, you burned precious developer time in the process.

~~~
michaelt

      That assumes that software development is infinitely scalable.
    

I don't see how overgryphon's post assumes that; whether the project can meet
the required deadlines could be a go/no-go decision rather than an assign-
more-resources decision.

~~~
MattRogish
But even assuming you do Waterfall - which has super strict requirements
gathering and estimation - you still don't get a very high degree of
confidence that you'll hit some arbitrary date in the future. Again, if your
software has a go/no-go based on hitting some date many months in the future,
you're in a no-win situation. Odds are astronomically small you'll hit it if
SCOPE, TIME, _and_ EFFORT (people, and adding people to a late project only
makes it later) are fixed.

Again, to paraphrase DeMarco, if the project is that sensitive to time/cost,
it's probably not delivering enough value.

~~~
michaelt
What would you say to a games company that wants to get their triple-A title
out in time for the holiday season, instead of releasing it in January?

~~~
MattRogish
I've never worked in the games industry, but it's my understanding (from the
sidelines, reading about EA Spouse and all that) that games company execs
decide on a release date and then throw everything they've got at it to make
it happen (long work hours, working weekends, etc.).

And given how often games get delayed (Unreal, Team Fortress, Duke Nukem etc.)
it seems they're not very good at predicting software delivery dates either.

I'm not stating that software developers shouldn't estimate at all - sometimes
considering the angles helps you design it better - just that estimation in
pursuit of nailing down a delivery date weeks/months in advance is a bad idea.

~~~
michaelt
We all know the quote that adding extra developers to a late project makes it
even later. So if you have a market-imposed deadline you've got to get the
number of developers correct from the beginning of the project. That's hard to
do without a fair amount of estimation.

------
timr
Where's the box where the project manager, given an otherwise realistic
estimate of project time, decides that the number is too high, and cajoles the
developer into cutting everything by half (or more)? Because that happens _all
the damned time_. I've even got a catchy headline: _"Why do managers always
want to squeeze blood from a stone?"_

Granted, newer programmers are bad at estimating these sorts of systematic
overheads. But people get better at estimating as they gain experience. One
root problem that _never_ goes away is that the people who want the estimates
don't often want to hear about a cost that can't be broken down by line-item
and individually justified. An answer of _"it's going to take twice as long as
we think"_ , however true, rarely satisfies a manager.

------
boomzilla
Simple: programmers' productivity has very high variance, especially on team
projects. It's not because programmers are unpredictable; it's the nature of
the work. Some main reasons are:

1) Programming (by that I mean R&D, not configuration/deployment) is, by
definition, creating new stuff. So unless the new project is very similar to a
previous project, even experienced developers wouldn't be able to provide good
estimations.

2) There's always a degree of tradeoff between reliability and new features in
every design decision. Should I use the new version of this library that has
more functionality but hasn't been tested with the current code base? Should
there be an incompatibility, do I have the time and expertise to work it out? etc.

3) Dependency on other team members and/or other teams.

------
hxf148
I am bad at not lying about it. To myself and anyone who asks. Mostly
forgetting that in reality I can't ignore life's other functions and loops.

It's going to take longer than I said. But it'll be better than I described.

------
sbjustin
I live my life on the Scotty Principle...
[http://www.urbandictionary.com/define.php?term=Scotty%20Prin...](http://www.urbandictionary.com/define.php?term=Scotty%20Principle)

------
markokocic
Most errors in programmers' time estimates come from the assumption that the
requirements are final. The requirements are never final, and never will be.

------
paulsutter
Evolution. Our ancestors were the optimists who underestimated the difficulty
of capturing that horse, crossing that ocean, turning that land into a farm.
Those who could accurately estimate the difficulty wisely avoided it, didn't
go hunting, didn't reproduce.

If it were really just a matter of uncertainty, we would overestimate the
difficulty as often as we underestimate it. But we don't. Humans consistently
underestimate. I really think it's evolution.

------
slogmen
What makes it really hard to estimate time well: programmers often have to
solve problems they have never solved before. You rarely solve exactly the
same problem multiple times. And what makes it even more difficult: the
environments and tools you use to solve these problems keep changing. So you
don't really have that much experience to rely on.

------
dugmartin
I think what makes most of us bad at estimation is that we estimate based on
effort and not duration. A good example of that is the first item on the chart
- the 30 second code change that takes an hour. As a solo consultant I've
stopped giving estimates for both and only give duration estimates without the
explanation of the difference.
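The effort/duration gap dugmartin describes can be sketched with a focus
factor; both parameters here are hypothetical illustrations, not figures from
the article:

```python
# "Effort" is heads-down work; "duration" is calendar time. Converting one
# to the other needs a focus factor for meetings, interrupts, and the other
# projects competing for attention.

def duration_days(effort_hours, hours_per_day=8.0, focus_factor=0.5):
    """Calendar days needed to deliver a given number of effort hours."""
    return effort_hours / (hours_per_day * focus_factor)

# The "30-second code change" from the chart: the edit is trivial, but at
# half focus even one effort-hour of surrounding ritual costs a quarter day.
print(duration_days(1.0))   # 0.25 calendar days
print(duration_days(40.0))  # a "one-week" task -> 10 calendar days
```

Quoting only the duration figure, as dugmartin does, avoids the losing
argument over why a one-week task takes two weeks on the calendar.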

------
ww520
Ask a house contractor for an estimate and you will find out how off he is
when the house is being built.

Ask a bridge builder for an estimate and you will find out the project is
delayed for years (see Bay Bridge retrofit).

------
known
It's not possible to estimate _debugging_ time

~~~
JoeAltmaier
Kind of. Debugging a serial protocol takes time to set up the line monitor,
run the tests, validate the behavior against the spec, diddle the state
machine to bring it into conformance. Actually pretty easy to estimate that
process.

But other code, it's different. Especially tinkering with a big ol' dinosaur
of a codebase to add/change a feature. Debugging becomes almost the whole
effort there.

------
dstroot
Easy - the requester asks for "time," not "duration." The programmer responds
with "time."

------
digisth
Great comments in this thread so far. Let me summarize a few and add some of
my own:

1) Some people do not want estimates. They want unrealistic promises for
purposes that have less to do with the project, and more to do with making
themselves look good in the short term (for a promotion, bonus, whatever.) If
you're in this situation, you need to go out of band with your communication
about the project (i.e., above someone's head) or find a new position/manager.
This is not a good place to be.

2) As some have mentioned, when people ask for when it will be "done",
qualification is needed. Very often you may be asked for "calendar time", not
"perfect engineering days" or their equivalent. Make sure you figure out which
is being asked of you; not doing so can end badly.

3) Some estimates are well within the range of the possible, others are truly
just not easy to come by. Figuring out the difference between these and
communicating them is extremely important. "Build a perfect X system for our
Foobarbazio engine." "GiantCo spends millions a year on this problem and still
hasn't solved it. No one has. More basic research is needed (on AI, ML, NLP,
whatever) to even begin to do this." "I don't believe you! Bilk-us Consultants
says they can build it in 3 months! What do we pay you for!" "..." They aren't
all so easy to tell apart.

That all said, there are many good rules of thumb to get you at least a basic
estimate, many of which are covered in the above comments. If I had to pick a
few favorites, I'd pick:

1) Ask someone who has done it already. If no one has done the (almost) exact
thing, ask someone who has done something roughly similar. If you can't find
that, ask someone who has done something /vaguely/ similar. Try to transpose
the specifics (they used Oracle, we're using Postgres; they used a great
library, we have to write our own.) If this is truly green-field stuff, try
to:

2) Build a prototype. No, you will not get the one million tiny details that
can bite you during real development, nor the unknown unknowns that can crop
up while trying to solve truly hard(er) problems, but it can be a helpful way
for getting your mind wrapped around the problem. If this isn't enough, you
can also:

3) Build another prototype (it's the second, so maybe we should call it a
deuterotype.) Flesh it out. Make it bigger than the original, but ignore the
(currently) irrelevant details. Yes, this takes longer, but it's better than
getting halfway through full-time development and realizing it's not going to
work.

4) If after doing the above you feel like you're no closer to an estimate,
give up and just develop it; you'll eventually get to the point where you
might be able to estimate the rest. This can of course blow up if you hit an
impassable snag; that's why it's a last resort.

------
IndianGuy79
my manager is worse, and he doesn't even program!

------
executive
because they don't account for the 8 hours a day they spend on HN

------
logjam
I've long maintained that time estimates should not be a programmer's task, at
least in most shops. That's one of the few things that a "manager" should do
and do well: insist on realistic and as-precise-as-possible requirements,
collect metrics over time to make rational estimates, and shield the team from
eternal corporate stupidity.

------
paulhauggis
99% of the time, I'm bad at estimating time because my boss or manager decides
to keep changing the requirements.

------
blindhippo
And managers are bad at understanding how long it takes to do pretty much
anything.

Most effective flow I've found is: 1) ask programmer how long, 2) drop that
number by 1/3 and give them a deadline, 3) indicate that the deadline needs to
be adjusted if anything unforeseen appears.

Adjust ratio in item #2 as the team grows the relationship.

