
Software Effort Estimation Considered Harmful - MattRogish
http://mattrogish.com/blog/2012/08/16/software-effort-estimation-considered-harmful/
======
akeefer
There are two serious problems with this post, and it really saddens me that I
see these sorts of posts so frequently here, with so many concurring voices.

First of all, cost absolutely 100% has to factor into prioritization
decisions. That doesn't require absolute estimation, but it does demand
relative estimation (which he mentions tangentially at the end of the post).
If Feature A will deliver $100,000 in revenue but take 18 months and Feature B
will deliver $10,000 in revenue but take 1 week, the choice is pretty obvious.
What matters is never "return" but always "return on _investment_." If you
don't know anything about the I side of the ROI equation, you're doomed to
make bad decisions. With no estimate at all, and just a snarky "it'll take as
long as it takes, shut up and let me work" response, you'll inevitably focus
on the wrong things.
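That back-of-the-envelope comparison is easy to make concrete. A toy sketch using the figures above (the four-weeks-per-month conversion is an assumption):

```python
# Toy ROI comparison using the figures above: relative effort is
# all you need to see which feature wins per unit of investment.
# (4 weeks/month is an assumed conversion.)
features = {
    "A": {"revenue": 100_000, "effort_weeks": 18 * 4},  # ~18 months
    "B": {"revenue": 10_000, "effort_weeks": 1},        # ~1 week
}

for f in features.values():
    f["roi_per_week"] = f["revenue"] / f["effort_weeks"]

best = max(features, key=lambda n: features[n]["roi_per_week"])
print(best)  # → B ($10,000/week vs ~$1,389/week for A)
```

Only relative effort matters here; no dates or absolute commitments are required.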

Secondly, many of us do in fact have deadlines, and they're not just total BS.
If you have a customer that you're building something for, they have to be
able to plan their own schedule, and just telling them "well, maybe we'll ship
it to you in 10/2012 or maybe in 6/2013, you'll get it when you get it"
doesn't fly. And it's totally reasonable that it doesn't fly: if they have to,
say, install the software, or buy hardware to run it on, or train users, or
migrate data from an old system, or roll out new products of their own that
are dependent on the new system, they clearly can't plan or budget those
activities if they have no clue whatsoever when they'll get the new system.

And if you do have a deadline, you kind of want to know, to the extent that
you can, if you're going to make it or not so you can take corrective action
(adding people, cutting scope, pushing the date) if need be. You can't do that
if you have no estimates at all.

Relative estimation of tasks with empirical measurement of the team's velocity
works pretty well; it doesn't work in all cases, but it's pretty much always
better than absolutely nothing.
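A minimal sketch of what that looks like in practice (all the point and velocity numbers here are hypothetical):

```python
import math

# Relative estimates (story points) for the remaining backlog,
# and measured points completed in past sprints.
backlog_points = [3, 5, 2, 8, 5, 3, 13]
completed_per_sprint = [18, 22, 19, 21]

velocity = sum(completed_per_sprint) / len(completed_per_sprint)  # 20.0
remaining = sum(backlog_points)                                   # 39
sprints_left = math.ceil(remaining / velocity)

print(f"~{sprints_left} sprints at ~{velocity:.0f} points/sprint")
```

The points are relative guesses; only the velocity is an empirical measurement, and the forecast improves as more sprints are measured.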

There's a huge, huge difference between doing relative point-based estimation
and date-driven, pointy-haired-boss estimation, and it's a total disservice to
the software community that so many engineers seem to not really understand
that difference, and seem to think that the only two options are "unrealistic
date-based estimates" and "no estimates."

TL;DR - Don't just rant for 3000 words about how estimation is horrible and
then add in one sentence about relative estimation. You'll do the world much
more good if you just help educate people on how to do things the right way
and spare the ranting.

~~~
jacques_chester
> _There's a huge, huge difference between doing relative point-based
> estimation and date-driven, pointy-haired-boss estimation, and it's a total
> disservice to the software community that so many engineers seem to not
> really understand that difference, and seem to think that the only two
> options are "unrealistic date-based estimates" and "no estimates."_

This was my beef with the article too. Basically on the one hand he proposes a
strawman composed of known-worst practices (estimate-by-deadline, estimate-by-
gut, ad hoc estimation and so on) and thereby tars _all_ estimation with the
same brush ... except for the one alternative he approves.

This is the fallacy of false dichotomy.

The root problem is the concept that estimates have to be accurate. Well, duh,
they can't be. The bigger the project, the more people, the longer the
timeframe, the less likely the project is to meet the estimate.

 _That's why you don't perform one estimate_.

 _That's why you have confidence bands on estimates_.
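For instance, the old three-point (PERT-style) formula turns optimistic / most likely / pessimistic guesses into a mean and a rough confidence band (the task numbers below are made up for illustration):

```python
# PERT-style three-point estimate: a mean plus a rough ~95%
# confidence band, derived from three guesses per task.
def pert(optimistic, likely, pessimistic):
    mean = (optimistic + 4 * likely + pessimistic) / 6
    sd = (pessimistic - optimistic) / 6
    return mean, (mean - 2 * sd, mean + 2 * sd)

mean, band = pert(5, 10, 30)  # days
print(mean, band)  # → 12.5 (~4.2 to ~20.8 days)
```

Reporting the band instead of the single number is the whole point: the estimate is honest about its own uncertainty.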

The whole blog article feels like a pastiche of criticism cribbed from agile
books and not from a direct, thoughtful engagement with the primary or
secondary literature on software project estimation.

I'm only 31. By any measure I'm still a young man. Why do I feel like such a
curmudgeon all the time? Because apparently nobody reads _books_ or _papers_
any more. It's all blogs citing blogs who echoed the summary of the notes of
the review of a single book.

One more thing. There's a difference between a _plan_ and an _estimate_. Plan-
and-control is not the same thing as performing an estimate; DeMarco's self-
criticism is not directly applicable.

~~~
jwhite
I agree. What would you recommend as a good modern book on software
estimation?

~~~
jacques_chester
As usual, Steve McConnell has done the hard yards of turning literature and
research into something readable and instantly applicable.

[http://www.amazon.com/Software-Estimation-Demystifying-
Pract...](http://www.amazon.com/Software-Estimation-Demystifying-Practices-
Microsoft/dp/0735605351/)

Every time I estimate for clients I always talk about the Cone of Uncertainty.

~~~
reedlaw
This is why I agree with the original article over the parent comments. I
work in a small software agency for clients who would never grasp the Cone of
Uncertainty. They are much closer to the pointy-haired boss type than the type
of person who appreciates the finer points of software project estimation.
While reading the literature is good, the average developer will seldom find
the time to do so. And even if they do, an off-the-cuff estimate is often
better than carefully planned specification documents that no business
stakeholders will ever read.

Of course accurate estimates have tremendous business value. But in reality
they often come at the expense of what the client really needs which is
delivery of features. I have seen estimation and tight project control taking
up substantially more time than delivering actual features. And it was exactly
as the OP stated:

> Software projects that require tight cost control are the ones delivering
> little or no marginal value.

The lower the project value, the tighter the control, leading to a vicious
cycle of developers cutting corners and increased micro-management.

~~~
jacques_chester
> I have seen estimation and tight project control taking up substantially
> more time than delivering actual features.

It sounds to me like what you saw was a conflation of _estimates_ and
_plans_. Which is a common error.

~~~
reedlaw
Clients sometimes want an estimate of how long a single feature or fix will
take, even when it will take only 15 minutes. The communication overhead and
time spent estimating easily outweigh the time to implement.

~~~
jacques_chester
I'm ... not sure what this proves?

~~~
reedlaw
Nothing, just an explanation of what I meant by estimate.

------
btilly
He's sometimes right. There are lots of good reasons why you might need
software schedule estimation. When you do, there is no point in throwing up
your hands and saying, "You can't do that, everybody knows it." Instead get
[http://www.amazon.com/Software-Estimation-Demystifying-
Pract...](http://www.amazon.com/Software-Estimation-Demystifying-Practices-
Microsoft/dp/0735605351/ref=sr_1_1) and teach yourself how to do it.

Why would software estimation matter? For many companies it matters a lot if
you can announce something in time for Christmas. Or you might have to hit a
deadline for compliance with a legal contract. Or you may be trying to
prioritize two different possible products - without a sense of the effort
involved you're lacking one of the parameters you need for such an estimation.

That doesn't mean that you always estimate. If the value proposition is clear
enough, you're going to work on it until you're done and then you will be very
happy. But the real world does not usually work like that.

~~~
MattRogish
"For many companies it matters a lot if you can announce something in time for
Christmas."

It will be done by xmas or it won't. I don't recommend "no estimation", but
intense estimating won't make you hit xmas. Not estimating may give you
several weeks that you could be coding.

Either way you're taking a mighty big risk when you publicly commit to that.
Welcome to hell, population: Dev Team. I know of no major software project
with a public date that hasn't resulted in major delays, feature cuts, or
massive overtime to try and hit that date.

"Or you might have to hit a deadline for compliance with a legal contract."

That is also a suitably terrible position to be in. I recommend you don't take
such contracts. Again, your estimates will be wrong. Even if you spend months
doing them. Then you're in the same spot.

"Or you may be trying to prioritize two different possible products - without
a sense of the effort involved you're lacking one of the parameters you need
for such an estimation."

Which delivers the most business value? I find it far-fetched that both are
of equal importance.

~~~
squirrel
I have just moved from B2B, where many "deadlines" were largely artificial and
driven by sales targets rather than real business need, to online retail,
where the deadline is often concrete, immovable, and external (trade show,
Olympics, Christmas). In both environments I found it most useful to encourage
my team to commit to completion of _the best solution they can manage_ by a
business-meaningful date (picked by the team after suitable consideration, of
course). In this model the team do not agree to a specific solution or scope,
though to pick the target date at the beginning they usually have an initial
solution in mind; as they work toward the date, they (working with the
customer representatives in the team) have the freedom to cut features or
components, or to add new ones, as they grow in their understanding of the
feature and the limitations on delivery (legacy code, lack of experience,
unclear business needs). I have generally found that in an environment where
development teams are trusted (yes I know such environments are far too rare)
this produces results that are as good or better than the recipients expected,
and almost always on time or nearly so.

~~~
Domenic_S
_driven by sales targets rather than real business need_

If you're in the business of selling software, sales targets _are_ a real
business need.

~~~
squirrel
I should have said "sales quotas" rather than targets. Your target to sell £x
million in June reflects a real revenue need for the business, but the
deadline of 30 June this implies is artificial, and the business would (in
most cases) do just as well if you delivered the feature and took the revenue
on 1st or 10th July.

------
tmurray
First, continuous deployment works fine for web projects, but it's not an
option if you've got a packaged product that you have to ship at some point.
Agile is not a panacea for these environments.

It's important to differentiate between the two kinds of software that you can
write. This post is dead-on for developing truly new software (new
features, stuff that fundamentally hasn't existed before, etc). However,
estimation (even the kind of project-manager-driven estimation that most
engineers, myself included, generally hate) can work really well for software
where you're doing basically the same thing you've already done in the past
such that the risk of unknown unknowns is extremely low. I've seen it in
practice, and when you have a checklist of stuff to do that you've already
done with very minor changes in the past that can't be automated for whatever
reason (e.g., support a new piece of hardware, except that the new piece of
hardware is basically the same as the old piece of hardware with very minor
well-documented changes), you can estimate with surprisingly good accuracy.

~~~
pnathan
Yes, but a decent amount of software is doing things that are relatively
unique to the time, place, and organization creating it.

For those sorts of projects, you might as well use this equation for time to
completion: sum of (number of lines in the spec/request * d4) * 3 = days to
complete.
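Taken literally (reading "d4" as one roll of a four-sided die per spec line, which is my interpretation), the formula simulates like so:

```python
import random

# Tongue-in-cheek estimator: roll a four-sided die (d4) per line
# of the spec, sum the rolls, multiply by 3 to get "days to complete".
def estimate_days(spec_lines, seed=None):
    rng = random.Random(seed)
    return 3 * sum(rng.randint(1, 4) for _ in range(spec_lines))

# For a 40-line spec: anywhere from 120 to 480 days, expected ~300.
print(estimate_days(40, seed=1))
```

Which is the joke: the expected value is 7.5 days per spec line, with a huge spread either way.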

------
gpcz
I'm fascinated to know the author's opinion about Joel Spolsky's "Evidence-
Based Scheduling" approach. The author mentions that breaking the schedule
into the smallest possible pieces can lead to "overfitting," but Evidence-
Based Scheduling uses previous programmer estimates vs. actual time to give a
probabilistic schedule that changes with new events. Instead of a rigid ship
date, Joel's method can give you a probability of shipping on a given date
with the current information. Is Evidence-Based Scheduling still harmful, or
is it basically the equivalent of the relative points-based estimation the
author brought up at the end of the article?
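For reference, the core of EBS can be sketched as a Monte Carlo over historical estimate-vs-actual ratios. This is only a rough approximation of Joel's method, and every number here is invented:

```python
import random

# Sketch of the EBS idea: sample historical (actual / estimated)
# ratios to turn new estimates into a distribution of totals,
# rather than a single ship date.
history = [(4, 6), (2, 2), (8, 17), (3, 4), (5, 5)]  # (estimated, actual) hours
ratios = [actual / est for est, actual in history]

new_estimates = [6, 3, 10, 2]  # hours for the upcoming tasks
rng = random.Random(0)
totals = sorted(
    sum(est * rng.choice(ratios) for est in new_estimates)
    for _ in range(10_000)
)

# The 90th percentile: finish within this total ~90% of the time.
print(round(totals[9000], 1))
```

The output is a probability of shipping by a given date, which is exactly the framing the comment above describes.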

~~~
jacques_chester
EBS looks to me like an elaboration of the old-school PERT 3-point estimation
technique. By the Laws of Agile, everything invented before 2001 is busted and
useless, so I guess EBS would go out with the bathwater for being tainted by
association.

------
tmoertel
Software effort estimation causes three problems. First, most estimation
processes model the uncertainty about the effort required for tasks but fail
to model the uncertainty about the tasks themselves, leading to unreliable
estimates. Second, people are overly confident of the estimates, in any case,
because the estimation process looks impressive and produces impressive
looking artifacts. Third, the estimation artifacts obscure the coupling
between the stakeholders and the implementers by not transmitting how the one
group’s decisions affect the other; the estimates, in effect, form a barrier
that makes it harder for people on opposite sides of the estimates to take
shared responsibility for business goals.

If you understand these problems and can solve them for your projects,
estimation can help you to make better decisions and to allocate your
resources more effectively. But, in a lot of organizations, these problems
have no good solutions (for cultural reasons), and there you might be better
off not sharing your estimates if you do them.

------
drawkbox
Hofstadter's Law: It always takes longer than you expect, even when you take
into account Hofstadter's Law.

or

Parkinson's Law: Work expands so as to fill the time available for its
completion.

There is always some probability that a software estimate is wrong, no matter
how well planned out. The greater the size and scope, the more people
involved, and the longer the timeline, the greater the chance it will be late.

As Valve Time shows
(<https://developer.valvesoftware.com/wiki/Valve_Time>), sometimes the product
trumps the time you put in, and making games is hard. Sliding deadlines are
the most successful because they take into account the reality of changing
scope during development. So are short sprints, like week-long or month-long
product delivery windows with complete task freedom in between.

Software estimation is hard because there are so many factors, and it is a
constant balance of shipping versus quality.

------
iandanforth
The flip side of lie-based estimation is arbitrary deadlines.

I'm not the first to notice that work expands to fill the time allotted. There
can be real value in setting a hard deadline with almost no regard for
difficulty. This eliminates the 'process' overhead and often produces amazing
amounts of work in a shorter period of time than anyone would have estimated.
I really like working to deadlines because my motivation is inversely
correlated to the time till deadline.

~~~
freshhawk
It might work for you but it's a horrible way to manage people in general.

Motivation by arbitrarily imposed deadline is a standard dev nightmare, isn't it?
I've watched two companies destroy themselves with this tactic.

Personal motivation and goal setting and even team based motivation and goal
setting is important, sure, but you better have other motivation techniques
ready when your "hard deadline with almost no regard for difficulty" gets
completely missed and the team is depressed and bitter about missing it even
after all the effort that was put in.

Also, your team isn't stupid (hopefully). So when they bust their asses to hit
that deadline and the next day nothing happens, because it was arbitrary, they
learn not to listen to your deadlines.

~~~
iandanforth
Good point, I was assuming this was done transparently. In the same way that
contests are run, a date is chosen, announced and then met with the full
knowledge that pride and perhaps a reward are on the line.

Also this can't be the _only_ way you set deadlines as any team would rapidly
burn out.

~~~
freshhawk
Ah, gotcha. That type of push or stretch goal is something different. I've
never tried it though, anyone have any experience with this type of
motivation?

------
logn
I think estimation can mostly be forgotten. The planning part which the author
identifies is always helpful. Sure, get the team together to plan features and
get a feel for the goals. Even get biz analysts together with folks to
determine req's.

But do we really have to tell you how long it will take? When a lot of
managers just take the estimate and multiply by three, can these estimates
really be trusted? What's the end game? Is it any different than just building
it out? If you have a hard set of features then it will get done when it gets
done. Just predict which half of which year. If you're an agile enterprise, then
timebox the release and build out from there. Why must we all do this absurd
dance of time estimating?

~~~
sliverstorm
The goal is to be able to effectively load-balance and prioritize. If something
won't take long, for example, it can be deferred (if needed) if it depends on
something else that will.

Additionally, depending on the market and how important schedule is, it lets
you know if the project will be hopelessly late and as a result whether it
should be canned.

~~~
mattvanhorn
Any project whose only value comes from being there first is a loser and
should be canned before the estimating. "Hey Sergei, Larry says Alta Vista is
on track to beat us to market, so let's scrap this search thing and invest in
new DRM schemes"

~~~
btilly
Time to market sensitivity does not always mean "loser".

An example that comes to mind (because a story about it was on HN in the last
day) is that the first edition of Warcraft was aimed for the 1994 Christmas
season. If it had arrived 2 months late, they would have missed that season,
and technology improves quickly enough that they would have been unlikely to
be as successful if they delivered the same product in the 1995 Christmas
season.

Do you think that Warcraft is a product that is a loser and should have been
canned immediately because they had a schedule that they really needed to
deliver it on?

~~~
j-kidd
That sounds like a weird example. I have never associated the gaming
demographics with Christmas shopping.

~~~
sliverstorm
How about the gaming demographics' parents?

------
abhimishra
A good read, and more-or-less in line with my observations from working at a
couple big software companies. I think there is generally less value in
estimating versus doing, especially for tasks/projects that are inevitable.

------
X4
"The roughness of the fractal dimension of a problem that needs to be solved
can be calculated more easily in my opinion than with classical estimation
techniques."

We're always applying the Roman "divide and conquer" strategy without thinking
about it. It wouldn't make sense to apply this, or any other strategy
ignorantly! The D&C strategy works because a naive solution to count the
fingers in this picture without knowing the fractal dimension is: "divide and
conquer". <http://mark.rehorst.com/Bug_Photos/fractal_hands_c.jpg> (mirror:
<http://i.minus.com/ibz9NsZ6ET32aV.jpg> )

I think this is also the reason why some autistic people feel uncomfortable
when they don't know each detail of a not-yet-happened situation in advance.
Because communicating the fractal dimension, or "roughness" of a problem or
situation is the most time consuming and fragile phase in a project.

Here's an article about: "Roughness of fracture surfaces and compressive
strength of hydrated cement pastes", which appears to be completely off
topic. But I believe it's nearer to the best estimation technique than other
techniques. (Fig. 3)
[http://www.sciencedirect.com/science/article/pii/S0008884610...](http://www.sciencedirect.com/science/article/pii/S0008884610000311)

While you may critique that I've not contributed to solving the problem, you
may also notice that I've helped to shed some light on the roughness of the
problem to be solved :) (Am lazy, it's very late and I'm just back from
training to be honest=)

~~~
X4
Hey cool, I've found a solution :)

The Fractal Planning Solution – Jim Stone, PhD. PDF
<http://www.fractalplanner.com/clear_mind.pdf> He also offers a hosted
planning tool.

James Theiler has found a formula for estimating the fractal dimension. See
Google Doc: <http://tinyurl.com/cantjjp>

------
URSpider94
@akeefer nailed it. This is an essay about why project management is bad,
written by someone who, it seems, has never actually studied project
management.

Good planning and estimation is the tool of the worker, not of management. It
keeps pointy-haired bosses from coming in and asking "is it done yet?" When
done properly, it help justify the number of resources that will be needed to
deliver the project on time, and the amount of cash/resources that it will
take to complete the project. Good project management results in a not-to-
exceed date that is 90% confident (only 10% of projects will exceed this
date), so the team has time to tackle unknowns that pop up along the way.

And, yes, at the end of the day, project management is a tool for
accountability. Like it or not, everyone has to be accountable for their
performance at the end of the day, whether as parents, life partners, or
employees. Saying that you don't want to be held accountable by some stupid
boss is naive and unrealistic.

~~~
freshhawk
"the tool of the worker, not of management"

If you have a stereotypical corporate antagonistic divide between workers and
management then I completely agree. Good planning and estimation is about
managing upwards and managing expectations.

If you are in the context that this post was about, with no or very little
divide between management and workers, where blame for delays or praise for
being ahead of schedule is equally or nearly equally shared among everyone
it's quite a good essay imo.

------
mistercow
>In machine learning, overfitting generally results from having too much test
data or focusing on irrelevant metrics.

Huh? Overfitting usually happens when your training set is too small. The size
of the test set does not affect overfitting because the test set is, by
definition, only used to evaluate the accuracy of the final learned function.

In addition, overfitting doesn't happen because of "focusing on irrelevant
metrics". It happens because your data set is noisy, or because your learning
model is too simplistic to fully model the observed phenomenon (which is known
as deterministic noise).

If your model focuses on irrelevant metrics, that won't actually be a problem
as long as your training set is large enough to reveal their irrelevance.
After training, those metrics will not have much bearing on the output
function.

This misinterpretation of overfitting really hurts the analogy.
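A quick demonstration of the point, fitting polynomials to a few noisy samples of a sine wave (a numpy-based sketch; the degrees and noise level are arbitrary choices):

```python
import numpy as np

# Ten noisy samples of a sine wave: a small, noisy training set.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)

# Noise-free held-out points to evaluate against.
x_test = np.linspace(0.05, 0.95, 50)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):
    coeffs = np.polyfit(x, y, degree)
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_mse, 4), round(test_mse, 4))

# The degree-9 fit interpolates the noise (training error ~0) but
# typically generalizes far worse than the degree-3 fit.
```

The culprit is a too-flexible fit on too little noisy data; the size of the test set never enters into it.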

~~~
MattRogish
You're right, I had a brain hiccup with respect to the test/training sets (I
used it correctly later on). However, it was my understanding that too many
attributes can cause overfitting, and the wiki article suggests this, too.
Where am I wrong?

"Overfitting generally occurs when a model is excessively complex, such as
having too many parameters relative to the number of observations. "

<http://en.wikipedia.org/wiki/Overfitting>

~~~
mistercow
> I had a brain hiccup with respect to the test/training sets (I used it
> correctly later on).

Just to be clear, it's not just that you said "test data" instead of "training
data", but that you said that too much is a bad thing. More data is always a
good thing for ML.

[Edit: Actually, there are times where it may not be. If you're doing
something like image classification and your data is being created by hand
qualitatively, you can actually get overfitting from adding data. As far as I
understand, this is because the measurement based on fuzzy perceptual
qualities is biased, so the algorithm will overfit to that bias. Maybe this
applies with your analogy; I'm not sure.]

>"Overfitting generally occurs when a model is excessively complex, such as
having too many parameters relative to the number of observations. "

Well, that's in reference to overfitting as it applies to statistical models,
not machine learning. To apply that reasoning to machine learning you have to
look at the _output_ of the machine learning algorithm rather than the
parameters fed into the algorithm itself. That is, an overfit learned function
will often be characterized by excessive complexity, but this is not a result
of telling the ML algorithm to look at too many parameters. It's a
result of telling the ML algorithm to train for too long given the size of its
training set.

A key point to note is that an overfit function can be excessively complex
even based on very few input parameters if it builds the learned function out
of overly complex relationships between those parameters. Conversely, it can
build a very simple function, even if many of the parameters prove to be
irrelevant, by simply not making the learned function depend on those
parameters at all.

------
arkitaip
So what do you do as a freelancer working on projects that last weeks or
months? Tell your clients to go screw themselves, that you'll be ready when
you're ready?

~~~
mbell
You tell them to define a minimal feature set, and insist that it is actually
minimal. This feature set you estimate and give a hard deadline on. Then you
tell them that you will iterate based on feedback and add features as directed
until such a time as they choose to stop paying you. There is no such thing as
"done" in software development.

~~~
jasonhanley
'There is no such thing as "done" in software development.'

Exactly! That's what makes it so different from physical world projects. You
can keep adding/changing things forever.

------
emjimenez
In a 2001 article, J.P. Lewis used the Kolmogorov-Chaitin-Solomonoff
noncomputability theorem to demonstrate that there are fundamental limits to
software estimation:

<http://scribblethink.org/Work/kcsest.pdf>

Since algorithmic complexity is not computable:

1\. Program size and complexity cannot be feasibly estimated a priori.

2\. Development time cannot be objectively predicted.

3\. Absolute productivity cannot be objectively determined.

In fact, software estimation methods have an error margin of 100-400% (see
Kemerer, C., 1987, "An Empirical Validation of Software Cost Estimation
Models").

Software effort estimation is harmful because trusting anything with a 400%
margin of error is risky.

------
ljw1001
I've spent a fair amount of time criticizing the way estimates are done (for
example: [http://deathrayresearch.tumblr.com/post/4503505772/the-
patho...](http://deathrayresearch.tumblr.com/post/4503505772/the-pathology-of-
estimates)) but although I empathize, this post missed the flight deck
completely.

First, as others have pointed out, sometimes estimates are needed. (If the
guys at YouTube finish their work for the Olympics next month, they have a
problem.)

Second, if people only worked on software where there's a clear 50-to-1
payback (as the post suggests), there would be demand for about 10,000
programmers in the world. Competition forces most companies to fight for
small, temporary advantages in their products and for incrementally lower
costs with their backend systems. (And this ignores entirely the relationship
between time to market and returns. Sometimes software is worth a lot if you
could have it today, not so much a year from now.)

Third, story points? Not the panacea they're made out to be. I wrote a post on
that, too, but one self-referential link is enough for one comment.

Finally, if you're not going to estimate a completion date, why estimate at
all? The reasons he gives for estimation are actually reasons to spend time on
design. Estimation done that way adds nothing.

------
scotty79
Perfect system for me as a software developer:

1\. When considering what is to be done, ask me to assess how hard it is in
points. Also ask me to tag the task with words that I consider important like
"database schema change" or "attach legacy system" or "IE7" or "find third
party library". You can also ask me about additional things like, what
subsystems I think I'll have to touch or source written in what languages I
think I'm going to touch.

2\. Use the prior knowledge of how long similarly hard, similarly tagged tasks
took to guess how long this task will take, but do not tell me, because I'm
not really the one who is interested.

3\. If I'm given any input from people who want to get this done, ask me how
it changes the hardness of the task and the tags.

4\. After the task is done, ask me again about the same things, and update
your knowledge according to the measured time the task was actually worked on,
my initial predictions of hardness and tags, my final evaluation of actual
hardness and tags, and any other information you choose to gather.

The worst thing you can do to get an estimate is just ask me how long it will
take. I'd rather you, not I, make the WAG, because then everyone is aware of
the fact that it's a WAG.
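A minimal sketch of that system (the scoring rule, the tag names, and the two-most-similar-tasks heuristic are all my own assumptions, not part of the proposal above):

```python
from statistics import mean

# Remember (hardness, tags, actual time) for finished tasks, then
# guess a new task's duration from the most similar past tasks.
history = []  # (points, tags, actual_hours)

def record(points, tags, actual_hours):
    history.append((points, frozenset(tags), actual_hours))

def guess(points, tags):
    tags = frozenset(tags)
    # prefer tasks sharing the most tags, then closest in hardness
    scored = sorted(
        history,
        key=lambda h: (len(tags & h[1]), -abs(points - h[0])),
        reverse=True,
    )
    return mean(h[2] for h in scored[:2])  # assumes some history exists

record(3, {"database schema change"}, 10)
record(3, {"IE7"}, 25)
record(5, {"database schema change", "legacy"}, 18)
print(guess(3, {"database schema change"}))  # → 14
```

The developer only ever produces points and tags; the duration prediction belongs to whoever keeps the history, which is exactly the separation being asked for.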

------
dools
This is the best article on software I've ever read.

------
DanielBMarkham
As akeefer pointed out, this was a long rant which finally got to the point:
separate effort and scheduling. Take a quick chop at estimating effort, but
not in terms of duration/dates on a schedule. Relative size works fine. It's
better to take a quick hit at estimation over and over again as you begin to
see empirical results than it is to spend a lot of time up-front trying to do
the perfect estimate.

Software management is not like programming. It's also not like other kinds of
management. Any group of people needs some sort of management function. The
trick is to put in the most lightweight structure possible. For some reason
many technical people of great intelligence, when faced with planning
activities, begin to construct various kinds of paper and mathematical
wonderlands to live in. We naturally enjoy creating complex models.

Don't do that.

------
meej
I'm still reading the essay -- I had to stop and comment about the following
paragraph:

"This results in folks that think adding more developers to a project at the
beginning will allow them to hit an arbitrary date by spreading out the
workload. Unfortunately new software doesn’t work this way, either, as the
same exponential communication overhead occurs in the beginning, too. You
merely end up with a giant mass of incoherent code. And you waste a lot of
money employing people who don’t add much value, sitting on the bench waiting
for their part of the project to begin."

These must be folks who stopped reading The Mythical Man-Month after chapter
2? I would think the following chapter about The Surgical Team would counter
such thinking. Unless people are only taking away the lesson about the
programmer productivity gap?

------
sharingancoder
Agreed! There's no real point throwing out random estimates instead of
actually sitting down and writing the code.

------
thisisnotmyname
There is a great deal of philosophy here, but I'd prefer to see some data that
backs up his point. Can you show me a project in which investing in an
estimate harmed it?

~~~
MattRogish
Peopleware, DeMarco and Lister, pg 27-29, alluded to in the post.

Namely: "The most surprising part of the 1985 Jeffery-Lawrence study appeared
at the very end, when they investigated the productivity of 24 projects for
which no estimates were prepared at all. These projects far outperformed all
the others..."

The study referred to is Jeffery and Lawrence's 1985 study, for which I
cannot find the raw data, unfortunately.

~~~
MattRogish
I found the study:

[http://www.sciencedirect.com/science/article/pii/01641212859...](http://www.sciencedirect.com/science/article/pii/0164121285900068)

Jeffery, D.R. and M.J. Lawrence, "Managing Programming Productivity", The
Journal of Systems and Software 5, (1985), 49-58.

------
aut0mat0n1c
Using the phrase 'considered harmful' in the title of a blog post is
considered harmful

~~~
stcredzero
Comments using the phrase "using the phrase" considered as harmful as comments
using the phrase "considered harmful." :)

------
mille562
FYI: Flights are on-time if they are within 15 minutes of their scheduled
arrival. Arriving 16 minutes early is not an on-time flight.

~~~
MattRogish
"A flight is counted as "on time" if it operated less than 15 minutes later
than the scheduled time shown in the carriers' Computerized Reservations
Systems
(CRS). Arrival performance is based on arrival at the gate. Departure
performance is based on departure from the gate."

<http://www.bts.gov/help/aviation/index.html>

------
gadders
As a PM, the other reasons developers (or anyone else) don't like making
estimates:

1) it's hard

2) you are accountable for them

No "My dog ate my homework" type excuses, no leaving at 5pm for a week and
telling me on the last day your work will be late.

When you make an estimate, you are putting your credibility on the line.
No-one is 100% perfect, but you should at least give meeting the given dates a
solid try. Not "Whoops, didn't make it, can I have another week please?"

~~~
ken
I think there's more to it than that. There are other things I do which are
hard, and hold me accountable, but which are not painful in the way that
software estimation is.

The difference that I see is that software estimation has a third component:
lack of control. Every couple days Alice is going to stop by and say "you know
we need feature XYZ, too, right?" (which was never in the spec). Then I'm
going to overhear at lunch that Bob rewrote the account-management system so I
need to rewrite part of my code to integrate with that new interface. And
Charlie is going to walk into my office 3 times a day and say "hey, did you
see the new video game?". Oh, and this release we're also upgrading to a new
version of the foobar library, which works fine for everyone else but
mysteriously crashes on my machine, so I spend a day and a half fighting that.

I _love_ being held accountable for hard things. (2 quickdraws on the side of
the cliff that I need to retrieve? Great! I'm performing a solo next week and
I need to get up to speed on something I've never played before? Bring it on!)
But only if I'm being held accountable for something I have control over.
Accountability - control = pain.

See also: traffic jams, delayed flights.

