Software Effort Estimation Considered Harmful (mattrogish.com)
122 points by MattRogish on Aug 16, 2012 | 78 comments

There are two serious problems with this post, and it really saddens me that I see these sorts of posts so frequently here, with so many concurring voices.

First of all, cost absolutely 100% has to factor into prioritization decisions. That doesn't require absolute estimation, but it does demand relative estimation (which he mentions tangentially at the end of the post). If Feature A will deliver $100,000 in revenue but take 18 months and Feature B will deliver $10,000 in revenue but take 1 week, the choice is pretty obvious. What matters is never "return" but always "return on investment." If you don't know anything about the I side of the ROI equation, you're doomed to make bad decisions. With no estimate at all, and just a snarky "it'll take as long as it takes, shut up and let me work" response, you'll inevitably focus on the wrong things.
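To make the ROI point concrete, here is a minimal sketch using the hypothetical feature names and dollar figures from this comment (prioritize by return per unit of effort, not raw return):

```python
# Hypothetical figures from the example above: Feature A returns more in
# absolute terms, but Feature B returns far more per week of effort.
features = {
    "A": {"revenue": 100_000, "weeks": 78},  # ~18 months of work
    "B": {"revenue": 10_000, "weeks": 1},
}

def roi_per_week(f):
    return f["revenue"] / f["weeks"]

# Rank features by revenue per week of effort, highest first.
ranked = sorted(features, key=lambda name: roi_per_week(features[name]),
                reverse=True)
print(ranked)  # ['B', 'A'] -- B wins despite the smaller raw return
```

Without any estimate of the "weeks" column, this ranking is impossible, which is the point.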

Secondly, many of us do in fact have deadlines, and they're not just total BS. If you have a customer that you're building something for, they have to be able to plan their own schedule, and just telling them "well, maybe we'll ship it to you in 10/2012 or maybe in 6/2013, you'll get it when you get it" doesn't fly. And it's totally reasonable that it doesn't fly: if they have to, say, install the software, or buy hardware to run it on, or train users, or migrate data from an old system, or roll out new products of their own that are dependent on the new system, they clearly can't plan or budget those activities if they have no clue whatsoever when they'll get the new system.

And if you do have a deadline, you kind of want to know, to the extent that you can, if you're going to make it or not so you can take corrective action (adding people, cutting scope, pushing the date) if need be. You can't do that if you have no estimates at all.

Relative estimation of tasks with empirical measurement of the team's velocity works pretty well; it doesn't work in all cases, but it's pretty much always better than absolutely nothing.

There's a huge, huge difference between doing relative point-based estimation and date-driven, pointy-haired-boss estimation, and it's a total disservice to the software community that so many engineers seem to not really understand that difference, and seem to think that the only two options are "unrealistic date-based estimates" and "no estimates."

TL;DR - Don't just rant for 3000 words about how estimation is horrible and then add one sentence about relative estimation at the end. You'll do the world much more good if you just help educate people on how to do things the right way and spare the ranting.

> There's a huge, huge difference between doing relative point-based estimation and date-driven, pointy-haired-boss estimation, and it's a total disservice to the software community that so many engineers seem to not really understand that difference, and seem to think that the only two options are "unrealistic date-based estimates" and "no estimates."

This was my beef with the article too. On the one hand he proposes a strawman composed of known-worst practices (estimate-by-deadline, estimate-by-gut, ad hoc estimation and so on) and thereby tars all estimation with the same brush ... except for the one alternative he approves of.

This is a false dichotomy.

The root problem is the concept that estimates have to be accurate. Well, duh, they can't be. The bigger the project, the more people, the longer the timeframe, the less likely the project is to meet the estimate.

That's why you don't perform one estimate.

That's why you have confidence bands on estimates.

The whole blog article feels like a pastiche of criticism cribbed from agile books and not from a direct, thoughtful engagement with the primary or secondary literature on software project estimation.

I'm only 31. By any measure I'm still a young man. Why do I feel like such a curmudgeon all the time? Because apparently nobody reads books or papers any more. It's all blogs citing blogs who echoed the summary of the notes of the review of a single book.

One more thing. There's a difference between a plan and an estimate. Plan-and-control is not the same thing as performing an estimate; DeMarco's self-criticism is not directly applicable.

I agree. What would you recommend as a good modern book on software estimation?

As usual, Steve McConnell has done the hard yards of turning literature and research into something readable and instantly applicable.


Every time I estimate for clients I always talk about the Cone of Uncertainty.

This is why I agree with the original article over these comment parents. I work in a small software agency for clients who would never grasp the Cone of Uncertainty. They are much closer to the pointy-haired boss type than the type of person who appreciates the finer points of software project estimation. While reading the literature is good, the average developer will seldom find the time to do so. And even if they do, an off-the-cuff estimate is often better than carefully planned specification documents that no business stakeholders will ever read.

Of course accurate estimates have tremendous business value. But in reality they often come at the expense of what the client really needs, which is delivery of features. I have seen estimation and tight project control taking up substantially more time than delivering actual features. And it was exactly as the OP stated:

> Software projects that require tight cost control are the ones delivering little or no marginal value.

The lesser the project value, the tighter the control, leading to a vicious cycle of developers cutting corners and increased micro-management.

> I have seen estimation and tight project control taking up substantially more time than delivering actual features.

It sounds to me that what you saw was a conflation of estimates and plans. Which is a common error.

Clients sometimes want an estimate of how long a single feature or fix will take, even when it will take only 15 minutes. The communication overhead and time spent estimating easily outweigh the time to implement.

I'm ... not sure what this proves?

Nothing, just an explanation of what I meant by estimate.


I haven't read it myself, but http://www.amazon.com/Agile-Estimating-Planning-Mike-Cohn/dp... looks like a good description of the story points/relative estimation techniques. They're really not something that should require a whole book to explain, but I can't say I've found any one blog post or article-length writeup that does a good job at it. The summary at http://epf.eclipse.org/wikis/openup/core.mgmt.common.extend_... is pretty good (though I'd ignore the bottom section on "Estimation of Effort"), and the Wikipedia article on Planning Poker http://en.wikipedia.org/wiki/Planning_poker is a decent writeup as well.

It's unfortunate that so much of the literature on relative estimation/story points/velocity/planning poker ends up intertwined in agile-development-consultantese, so sometimes reading some of these things, you have to take it with a serious grain of salt and weed out all the BS and dogma to get to the useful and important bits. The important bits there are really pretty simple:

* Estimate in terms of points, not days, and estimate tasks relative to one another

* Use planning poker (or something like it) within the team to get more accurate estimates

* Empirically measure how many points you do in a given period of time to come up with your "velocity". To do that, you have to be pretty honest about things being "done done" before you move on to the next task; otherwise it's too easy to fudge and your numbers are worthless. "Oh, yeah, we did 10 points of work, well, except for writing the tests or doing the styling . . ."

Remember that velocity is NOT a productivity measure, it'll change over time, and it'll change if the team members change or if other external factors intervene, like an increase in support requests or something. So this technique kind of only works if your organization isn't already dysfunctional: as soon as velocity is seen as a productivity measurement, you're pretty screwed.

That's pretty much it. The relative estimates let you prioritize work appropriately (i.e. "I'd rather have these five 1-point stories than this one 5-point story, so we'll do those first"), and the velocity lets you track how fast you're actually going and about when you'll be done with a given chunk of work, so you can adjust plans as needed.
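The bookkeeping behind that tracking is trivial; a rough sketch, with an invented backlog and a velocity that would in practice be measured, not guessed:

```python
# Invented backlog of relatively-sized stories (in points), plus a
# velocity measured empirically over past iterations.
backlog = {"login": 1, "search": 5, "export": 1, "billing": 8, "audit log": 2}
velocity = 6  # points completed per iteration, from measurement

total_points = sum(backlog.values())
iterations_left = -(-total_points // velocity)  # ceiling division

print(f"{total_points} points left -> roughly {iterations_left} iterations")
```

The forecast is only as honest as the "done done" discipline behind the velocity number.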

Note that relative estimation doesn't work nearly so well for large-scale estimation, or for greenfield development where you don't know what you're doing yet. For large-scale planning, my company will generally just give a SWAG in terms of points (we think this feature is 100 points) to give us at least some initial idea of expected relative sizes of things, then we'll compare that initial SWAG to the actual points as we break things out into more bite-sized stories that we can estimate more accurately. If we feel like we're halfway through the feature and we've already done 70 points of work, that's a signal that we need to up our estimate for the remainder of the work.

Steve McConnell's book is good as well, though honestly we don't really do much in the way of confidence-based estimates at this point. My experience is that out of every 10 projects we do, 8 or 9 will be within 20% of our initial SWAG and 1 or 2 will blow out to 2 or 3x the initial estimate. Of course, we never know which ones will blow out, we just know something will.

In other words, we don't bother with confidence intervals at the individual feature level, we just do it at the macro level. So if a team has a velocity of 10 and we have 26 weeks to deliver a release, giving us a hypothetical total capacity of 260 points, we'll "commit" to maybe 2/3 of that. So we say "Okay, this first 170 points we're pretty sure we can get done. Anything after that will be a bonus."
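That macro-level arithmetic, sketched out (the 2/3 "commit" fraction is this comment's rule of thumb, not a universal constant):

```python
velocity = 10            # points per week, measured empirically
weeks = 26               # release window
commit_fraction = 2 / 3  # buffer for the 1-2 projects that blow out

capacity = velocity * weeks             # hypothetical total: 260 points
committed = int(capacity * commit_fraction)

print(f"capacity {capacity} points, committing to the first ~{committed}")
```

That rounds to 173 points; the "maybe 2/3 ... 170 points" figure above is the same calculation with a rounder number.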

I thought Cohn's book was good, but a bit padded out.

All that agile really changed is how often folks take a sounding. Velocity is basically a productivity metric; it's a first derivative of project size (hence ... velocity).

But I mean you could always do that with traditional estimates.

That's pretty much what an Earned Value chart does.

Don't get me wrong: agile is an improvement. But it is an improvement that is evolved from what came before; not a categorical change. Looking at the historical literature still pays dividends to thoughtful engineers.

> What matters is never "return" but always "return on investment."

You're right. This is one of the primary advantages of startups with technical founders who talk to customers. Among the many other flaws of the "Office Space" work environment, one of the biggest is the fact that no single person in the organization knows 1) what is possible to build, 2) how much it will cost to make, and 3) what the product is worth to customers.

At my last corporate job before I founded a startup, the CEO didn't know what software was possible to write, and didn't know which new features were easy vs. hard. The marketing people rarely told engineers what the customer wanted, and the engineers didn't talk to customers and didn't know the cost (or value) of their time.

If there are any well-tested rules for producing good software, they're 1) small teams, 2) technical people with authority and responsibility, 3) talk to the customer.

But it's already established that deadlines are moot. So giving a deadline for planning, training, and hardware merely reduces to a random guess.

They'd be better off telling the client the truth: "We don't know an honest date and we don't want to lie and come up with an arbitrary one. I can only say it's unlikely that the project will take more than six months, and you should at least be prepared to order the hardware by early summer. We'll keep you posted."

Emphasis on the last part.

Schedules are more accurate later on when some work has been done already. Instead of creating an arbitrary point in the future keep the customer posted with: "We've already finished X and Y in the first month, so we'll soon start working on Z which is a big task but on the other hand we noticed that we don't have to do Q at all since we can just build it in terms of X and Z combined which won't take more than a week instead of the month that was originally planned."

If you tell this every week then everybody has an up-to-date view of the runway ahead. You can also give a "deadline", like: "If you need a deadline for higher-ups or administrative planning, use October 31st, but tell them to be prepared to plunge in earlier. We all know we'll be done well before then."

Most customers that I've ever had have intuitively understood this is how it works, even if they had to have a "deadline". YMMV.

Are you agreeing with the premise of the article? (It's not clear if you are or if you're just disagreeing with the GP).

If you are then... "We'll keep you posted." "up-to-date view of the runway ahead."

Where do these things come from if you're not estimating effort?

Estimation becomes a lot easier a few weeks into any project. Once you have the framework and an overall story down, and you start getting your hands dirty with a few classes, the estimates start to carry more confidence and weight.

The only thing you can really confirm though, is what is already completed, and that is what you should keep the stakeholders posted on. And a description of work left, not necessarily the time it will take.

I don't understand, why not just be fully transparent?

This is the work left, we think it's this size, this is how much work we've been getting done. It looks like we might be done at this date, but that's dependent on a) our estimation being correct and b) our pace continuing.

I think it's more about managing expectations than it is about managing the information you let stakeholders have.

You took the words right out of my mouth.

Even if there are no external customers waiting for the release, a business' other departments have to plan their activities. When will marketing start their pre-release activities without engineering's estimates? When will sales start talking to customers about the new release? There are just too many things that need engineering's estimates.

He's sometimes right. There are lots of good reasons why you might need software schedule estimation. When you do, there is no point in throwing up your hands and saying, "You can't do that, everybody knows it." Instead get http://www.amazon.com/Software-Estimation-Demystifying-Pract... and teach yourself how to do it.

Why would software estimation matter? For many companies it matters a lot if you can announce something in time for Christmas. Or you might have to hit a deadline for compliance with a legal contract. Or you may be trying to prioritize two different possible products - without a sense of the effort involved you're lacking one of the parameters you need for such an estimation.

That doesn't mean that you always estimate. If the value proposition is clear enough, you're going to work on it until you're done and then you will be very happy. But the real world does not usually work like that.

"For many companies it matters a lot if you can announce something in time for Christmas."

It will be done by xmas or it won't. I don't recommend "no estimation", but intense estimating won't make you hit xmas. Not estimating may give you back several weeks that you could spend coding.

Either way you're taking a mighty big risk when you publicly commit to that. Welcome to hell, population: Dev Team. I know of no major software project with a public ship date that hasn't suffered major delays, feature cuts, or massive overtime to try and hit that date.

"Or you might have to hit a deadline for compliance with a legal contract."

That is also a suitably terrible position to be in. I recommend you don't take such contracts. Again, your estimates will be wrong. Even if you spend months doing them. Then you're in the same spot.

"Or you may be trying to prioritize two different possible products - without a sense of the effort involved you're lacking one of the parameters you need for such an estimation."

Which delivers the most business value? I find it far-fetched that both are of equal importance.

I have just moved from B2B, where many "deadlines" were largely artificial and driven by sales targets rather than real business need, to online retail, where the deadline is often concrete, immovable, and external (trade show, Olympics, Christmas). In both environments I found it most useful to encourage my team to commit to completion of _the best solution they can manage_ by a business-meaningful date (picked by the team after suitable consideration, of course). In this model the team do not agree to a specific solution or scope, though to pick the target date at the beginning they usually have an initial solution in mind; as they work toward the date, they (working with the customer representatives in the team) have the freedom to cut features or components, or to add new ones, as they grow in their understanding of the feature and the limitations on delivery (legacy code, lack of experience, unclear business needs). I have generally found that in an environment where development teams are trusted (yes I know such environments are far too rare) this produces results that are as good or better than the recipients expected, and almost always on time or nearly so.

Features, budget, schedule. The development team has to have control of one of those three.

Generally schedule is fixed by external factors. Budget is fixed by the existing team size times the schedule. Therefore the corner that makes the most sense for the development team to control is features.

driven by sales targets rather than real business need

If you're in the business of selling software, sales targets are a real business need.

I should have said "sales quotas" rather than targets. Your target to sell £x million in June reflects a real revenue need for the business, but the deadline of 30 June this implies is artificial, and the business would (in most cases) do just as well if you delivered the feature and took the revenue on 1st or 10th July.

Estimating may be a way to figure out what is plausible to do by Christmas. Which may help you scale your desires up or down to have the best deliverable product that you can.

Also in any contract negotiations, someone has to take the risk. The more risk you take, usually the higher margins you can get. (I am currently working on a contract where my willingness to take on all of the risk of not delivering value has improved my likely profit margins 10-fold.) Thus if you're willing to do fixed schedule and/or fixed price contracts, on average you can charge more. But it would be insane to attempt that without being able to estimate your likely costs and schedules.

Sure, estimation is an intrinsically hard problem. That is why you get to charge extra for having done it. But it is not trivially the wrong business decision to make.

As an extreme example, I offer you SpaceX. Which signs lots of fixed price bids on development tasks that make the average software project look like a cakewalk.

How does this apply in an agency context? I would like to see if this can be applied to my company, but I am a little fuzzy on how this theory can work with paying customers who want to get an idea of cost up front.

"your estimates will be wrong. Even if you spend months doing them. Then you're in the same spot"

If you spend months doing estimates, you will not have done "nothing else". At the very least, you will have learnt a lot about the problem domain, and thus you will have a better overview of the work to do. You likely also will have discovered a few pitfalls to avoid, and you will already have changed the requirements of the project at the moment when it is cheapest: before you have a pile of code.

The FSA (and other regulatory organisations) frequently gives a set date by which a feature (e.g. souped-up trade reporting or similar) needs to be in place.

> It will be done by xmas or it won't.

You claim to be talking about estimation, but I believe that you're actually talking about being given unrealistic, non-negotiable deadlines. (How did we know that the estimate was unrealistic, you ask? We did an estimate.)

I wonder what would happen if the rest of the world took this approach?

Customer: I was thinking of getting an extension built on my house

Builder: OK

Customer: How long would it take and what would it cost?

Builder: Sorry, but to tell you that would slow down the process of building your extension. It will be quicker if I just get on with it and tell you what you owe me at the end.

What would happen? The customer would know beforehand what is true, anyway.

Berlin Brandenburg Airport was estimated to open in October 2011, then 2012, and currently 2013. Costs were estimated at 630 million. The current estimate is 1.2 billion.

Not giving an estimate isn't the entire solution, either.

The problem with an airport is that every time a deadline arrives, all you can do is notice that you don't have an airport, yet. You must estimate again (and you're wrong again).

With software you can say: "We're going to give you a shippable version every week. You can cancel the project at any time and you keep what you've paid for so far. We can screw up completely and you still have last week's shippable product."

Personally, I'd love that pay-as-you-go approach for the rest of the world.

First, continuous deployment works fine for web projects, but it's not an option if you've got a packaged product that you have to ship at some point. Agile is not a panacea for these environments.

It's important to differentiate between the two kinds of software you can write. This post is dead-on for developing truly new software (new features, stuff that fundamentally hasn't existed before, etc.). However, estimation (even the kind of project-manager-driven estimation that most engineers, myself included, generally hate) can work really well for software where you're doing basically the same thing you've already done in the past, such that the risk of unknown unknowns is extremely low. I've seen it in practice: when you have a checklist of stuff to do that you've already done with very minor changes in the past, and that can't be automated for whatever reason (e.g., support a new piece of hardware that is basically the same as the old piece of hardware with very minor, well-documented changes), you can estimate with surprisingly good accuracy.

Yes, but a decent amount of software is doing things that are relatively unique to the time, place, and organization creating it.

For those sorts of projects, you might as well use this equation for time to completion: sum of(number of lines in the spec/request * d4)*3 = days to complete.
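Taking the tongue-in-cheek formula literally (interpreting it as one d4 roll per line of the spec; that reading is my assumption):

```python
import random

def days_to_complete(spec_lines, seed=None):
    # One d4 roll per line of the spec/request, summed, times 3.
    rng = random.Random(seed)
    return sum(rng.randint(1, 4) for _ in range(spec_lines)) * 3

# A 20-line spec "estimates" to anywhere from 60 to 240 days,
# which is about as wide as such projects deserve.
print(days_to_complete(20))
```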

> continuous deployment works fine for web projects, but it's not an option if you've got a packaged product that you have to ship at some point. Agile is not a panacea for these environments.

I think that misses one of the key insights of continuous deployment. The biggest gain in CD is not actually pushing code to production on every commit, but working as if you pushed to production on every commit.

This means that commits are small, contain tests, are delivered frequently and don't break the build. That workflow is a good idea, regardless of the final product or industry.

I practice CD (hell, I built a company to do CD), but if I ever had to go back to shipping desktop software or whatever, I'd still practice CD.

What is this "packaged" software you speak of?

I'm fascinated to know the author's opinion about Joel Spolsky's "Evidence-Based Scheduling" approach. The author mentions that breaking the schedule into the smallest possible pieces can lead to "overfitting," but Evidence-Based Scheduling uses previous programmer estimates vs. actual time to give a probabilistic schedule that changes with new events. Instead of a rigid ship date, Joel's method can give you a probability of shipping on a given date with the current information. Is Evidence-Based Scheduling still harmful, or is it basically the equivalent of the relative points-based estimation the author brought up at the end of the article?
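For readers unfamiliar with it, Evidence-Based Scheduling boils down to a Monte Carlo simulation over each estimator's historical estimate-to-actual ratios. A rough sketch of the idea (all numbers invented; this is not Joel's actual implementation):

```python
import random

# Historical ratios of estimate / actual for past tasks; a ratio of 0.5
# means a task took twice as long as estimated.
history = [1.0, 0.8, 0.5, 1.2, 0.6, 0.9, 0.4, 1.0]
remaining_estimates = [4, 8, 2, 16, 6]  # hours for the open tasks

def simulate_total(rng):
    # Scale each remaining estimate by a randomly drawn historical ratio.
    return sum(est / rng.choice(history) for est in remaining_estimates)

rng = random.Random(0)
outcomes = sorted(simulate_total(rng) for _ in range(1000))
p50, p90 = outcomes[499], outcomes[899]
print(f"50% chance of finishing within {p50:.0f}h, 90% within {p90:.0f}h")
```

The output is exactly the probabilistic ship date described above: a distribution you re-run as new actuals arrive, rather than a single rigid date.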

EBS looks to me like an elaboration of the old-school PERT 3-point estimation technique. By the Laws of Agile, everything invented before 2001 is busted and useless, so I guess EBS would go out with the bathwater for being tainted by association.
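For reference, classic PERT 3-point estimation is just a weighted average with a spread; a minimal sketch (the example task numbers are made up):

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    # Weighted mean leaning on the most likely case, plus a standard
    # deviation that widens with the optimistic/pessimistic spread.
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# A task guessed at 3 days, best case 2, worst case 10:
e, sd = pert_estimate(2, 3, 10)
print(f"expected {e:.1f} days, std dev {sd:.1f}")  # expected 4.0 days
```

EBS can be read as replacing these fixed weights with an empirical distribution of the estimator's own track record.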

Software effort estimation causes three problems. First, most estimation processes model the uncertainty about the effort required for tasks but fail to model the uncertainty about the tasks themselves, leading to unreliable estimates. Second, people are overly confident in the estimates in any case, because the estimation process looks impressive and produces impressive-looking artifacts. Third, the estimation artifacts obscure the coupling between the stakeholders and the implementers by not transmitting how one group's decisions affect the other; the estimates, in effect, form a barrier that makes it harder for people on opposite sides of the estimates to take shared responsibility for business goals.

If you understand these problems and can solve them for your projects, estimation can help you to make better decisions and to allocate your resources more effectively. But, in a lot of organizations, these problems have no good solutions (for cultural reasons), and there you might be better off not sharing your estimates if you do them.

Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.


Parkinson's Law: Work expands so as to fill the time available for its completion.

There is always some probability that a software estimate is wrong, no matter how well planned out. The greater the size and scope, the number of people, and the length of the timeline, the greater the chance the project will be late.

As Valve Time shows (https://developer.valvesoftware.com/wiki/Valve_Time), sometimes the product trumps the time you put in, and making games is hard. Sliding deadlines are the most successful because they take into account the reality of changing scope/product during development. Or short task sprints/windows, like week-long or month-long product delivery stretches with complete task freedom in between.

Software estimation is hard because there are so many factors, and it is a constant balance between shipping and quality.

The flip side of lie-based estimation is arbitrary deadlines.

I'm not the first to notice that work expands to fill the time allotted. There can be real value in setting a hard deadline with almost no regard for difficulty. This eliminates the 'process' overhead and often produces amazing amounts of work in a shorter period of time than anyone would have estimated. I really like working to deadlines because my motivation is inversely correlated to the time till deadline.

It might work for you but it's a horrible way to manage people in general.

Motivation by arbitrary imposed deadline is a standard dev nightmare isn't it? I've watched two companies destroy themselves with this tactic.

Personal motivation and goal setting, and even team-based motivation and goal setting, are important, sure, but you'd better have other motivation techniques ready for when your "hard deadline with almost no regard for difficulty" gets completely missed and the team is depressed and bitter about missing it even after all the effort that was put in.

Also, your team isn't stupid (hopefully). So when they bust their asses to hit that deadline and the next day nothing happens, because it was arbitrary, they learn not to listen to your deadlines.

Good point, I was assuming this was done transparently. In the same way that contests are run, a date is chosen, announced and then met with the full knowledge that pride and perhaps a reward are on the line.

Also this can't be the only way you set deadlines as any team would rapidly burn out.

Ah, gotcha. That type of push or stretch goal is something different. I've never tried it though, anyone have any experience with this type of motivation?

In one of DeMarco's other books -- I can't now recall if it was PeopleWare or The Deadline -- he explains how arbitrary deadlines come unstuck.

Because people learn that they are arbitrary. When the Super Serious Urgent Red Alert All Hands Man Battle Stations deadline whistles by without much more than a frown from management, the team quickly realises that it was total horseshit to begin with.

Software engineers are, it has been observed, a smart bunch. If deadlines are used as a "motivational" technique, they quickly degrade into meaninglessness.

Which will hurt the company when an actual hard deadline pops up.

Boy who cried "Wolf!" and all that.

The cost of those hard deadlines is often extra bugs that delay the actual launch date. But they make everybody feel like they're doing some serious management. ;)

I think estimation can mostly be forgotten. The planning part which the author identifies is always helpful. Sure, get the team together to plan features and get a feel for the goals. Even get biz analysts together with folks to determine req's.

But do we really have to say how long it will take? When a lot of managers just take the estimate and multiply it by three, can these estimates really be trusted? What's the end game? Is it any different from just building it out? If you have a hard set of features, then it will get done when it gets done. Just predict which half of which year. If you're an agile enterprise, then timebox the release and build out from there. Why must we all do this absurd dance of time estimating?

The goal is be able to effectively load-balance and prioritize. If something won't take long, for example, it can be deferred (if needed) if it depends on something else that will.

Additionally, depending on the market and how important schedule is, it lets you know if the project will be hopelessly late and as a result whether it should be canned.

Any project whose only value comes from being there first is a loser and should be canned before the estimating. "Hey Sergei, Larry says AltaVista is on track to beat us to market, so let's scrap this search thing and invest in new DRM schemes."

Time to market sensitivity does not always mean "loser".

An example that comes to mind (because a story about it was on HN in the last day) is that the first edition of Warcraft was aimed for the 1994 Christmas season. If it had arrived 2 months late, they would have missed that season, and technology improves quickly enough that they would have been unlikely to be as successful if they delivered the same product in the 1995 Christmas season.

Do you think that Warcraft is a product that is a loser and should have been canned immediately because they had a schedule that they really needed to deliver it on?

That sounds like a weird example. I have never associated the gaming demographics with Christmas shopping.

How about the gaming demographics' parents?

A good read, and more-or-less in line with my observations from working at a couple big software companies. I think there is generally less value in estimating versus doing, especially for tasks/projects that are inevitable.

"The roughness of the fractal dimension of a problem that needs to be solved can be calculated more easily in my opinion than with classical estimation techniques."

We're always applying the Roman "divide and conquer" strategy without thinking about it. It wouldn't make sense to apply this, or any other, strategy ignorantly! The D&C strategy works because a naive way to count the fingers in this picture, without knowing its fractal dimension, is "divide and conquer": http://mark.rehorst.com/Bug_Photos/fractal_hands_c.jpg (mirror: http://i.minus.com/ibz9NsZ6ET32aV.jpg )

I think this is also the reason why some autistic people feel uncomfortable when they don't know every detail of a situation that hasn't yet happened. Communicating the fractal dimension, or "roughness", of a problem or situation is the most time-consuming and fragile phase in a project.

Here's an article about "Roughness of fracture surfaces and compressive strength of hydrated cement pastes", which appears to be completely off topic. But I believe it's nearer to the best estimation technique than other techniques. (Fig. 3) http://www.sciencedirect.com/science/article/pii/S0008884610...

While you may object that I've not contributed to solving the problem, you may also notice that I've helped shed some light on the roughness of the problem to be solved :) (Am lazy, it's very late, and I'm just back from training, to be honest=)

Hey cool, I've found a solution :)

The Fractal Planning Solution – Jim Stone, PhD. PDF http://www.fractalplanner.com/clear_mind.pdf He also offers a hosted planning tool.

James Theiler has found a formula for estimating the fractal dimension. See Google Doc: http://tinyurl.com/cantjjp

@akeefer nailed it. This is an essay about why project management is bad, written by someone who, it seems, has never actually studied project management.

Good planning and estimation is the tool of the worker, not of management. It keeps pointy-haired bosses from coming in and asking "is it done yet?" When done properly, it helps justify the number of people that will be needed to deliver the project on time, and the amount of cash/resources that it will take to complete the project. Good project management results in a not-to-exceed date with 90% confidence (only 10% of projects will exceed this date), so the team has time to tackle unknowns that pop up along the way.
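One common way to arrive at that kind of 90%-confidence date is a quick Monte Carlo over per-task three-point estimates; a minimal sketch (the task numbers below are made up purely for illustration):

```python
import random

random.seed(1)

# hypothetical tasks: (optimistic, most likely, pessimistic) days
tasks = [(2, 4, 10), (1, 2, 5), (3, 6, 14), (2, 3, 8)]

def simulate_total():
    # a triangular distribution per task is a cheap, common model
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

totals = sorted(simulate_total() for _ in range(10_000))
p90 = totals[int(0.9 * len(totals))]  # only ~10% of simulated runs exceed this
```

The P90 date lands well above the sum of the "most likely" values, which is exactly why quoting single-point estimates as commitments goes wrong.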

And, yes, at the end of the day, project management is a tool for accountability. Like it or not, everyone has to be accountable for their performance, whether as parents, life partners, or employees. Saying that you don't want to be held accountable by some stupid boss is naive and unrealistic.

"the tool of the worker, not of management"

If you have a stereotypical corporate antagonistic divide between workers and management then I completely agree. Good planning and estimation is about managing upwards and managing expectations.

If you are in the context that this post was about, with no or very little divide between management and workers, where blame for delays or praise for being ahead of schedule is equally or nearly equally shared among everyone, it's quite a good essay imo.

>In machine learning, overfitting generally results from having too much test data or focusing on irrelevant metrics.

Huh? Overfitting usually happens when your training set is too small. The size of the test set does not affect overfitting because the test set is, by definition, only used to evaluate the accuracy of the final learned function.

In addition, overfitting doesn't happen because of "focusing on irrelevant metrics". It happens because your data set is noisy, or because your learning model is too simplistic to fully model the observed phenomenon (which is known as deterministic noise).

If your model focuses on irrelevant metrics, that won't actually be a problem as long as your training set is large enough to reveal their irrelevance. After training, those metrics will not have much bearing on the output function.

This misinterpretation of overfitting really hurts the analogy.

You're right, I had a brain hiccup with respect to the test/training sets (I used it correctly later on). However, it was my understanding that too many attributes can cause overfitting, and the wiki article suggests this, too. Where am I wrong?

"Overfitting generally occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. "


> I had a brain hiccup with respect to the test/training sets (I used it correctly later on).

Just to be clear, it's not just that you said "test data" instead of "training data", but that you said that too much is a bad thing. More data is always a good thing for ML.

[Edit: Actually, there are times where it may not be. If you're doing something like image classification and your data is being created by hand qualitatively, you can actually get overfitting from adding data. As far as I understand, this is because the measurement based on fuzzy perceptual qualities is biased, so the algorithm will overfit to that bias. Maybe this applies with your analogy; I'm not sure.]

>"Overfitting generally occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. "

Well, that's in reference to overfitting as it applies to statistical models, not machine learning. To apply that reasoning to machine learning you have to look at the output of the machine learning algorithm rather than the parameters fed into the algorithm itself. That is, an overfit learned function will often be characterized by excessive complexity, but this is not a result of telling the ML algorithm to look at too many parameters. It's a result of telling the ML algorithm to train for too long given the size of its training set.

A key point to note is that an overfit function can be excessively complex even based on very few input parameters if it builds the learned function out of overly complex relationships between those parameters. Conversely, it can build a very simple function, even if many of the parameters prove to be irrelevant, by simply not making the learned function depend on those parameters at all.

So what do you do as a freelancer working on projects that last weeks or months? Tell your clients to go screw themselves, that you'll be ready when you're ready?

You tell them to define a minimal feature set, and insist that it is actually minimal. This feature set you estimate and give a hard deadline on. Then you tell them that you will iterate based on feedback and add features as directed until such a time as they choose to stop paying you. There is no such thing as "done" in software development.

'There is no such thing as "done" in software development.'

Exactly! That's what makes it so different from physical world projects. You can keep adding/changing things forever.

In a 2001 article, J.P. Lewis used the Kolmogorov-Chaitin-Solomonoff noncomputability theorem to demonstrate that there are hard limits to software estimation:


Since algorithmic complexity is not computable:

1. Program size and complexity cannot be feasibly estimated a priori.

2. Development time cannot be objectively predicted.

3. Absolute productivity cannot be objectively determined.

In fact, software estimation methods have an error margin of 100-400% (see Kemerer, C., 1987: "An Empirical Validation of Software Cost Estimation Models").

Software effort estimation is harmful because trusting anything with a 400% margin of error is risky.

I've spent a fair amount of time criticizing the way estimates are done (for example: http://deathrayresearch.tumblr.com/post/4503505772/the-patho...) but although I empathize, this post missed the flight deck completely.

First, as others have pointed out, sometimes estimates are needed. (If the guys at YouTube finish their work for the Olympics next month, they have a problem.)

Second, if people only worked on software where there's a clear 50-to-1 payback (as the post suggests), there would be demand for about 10,000 programmers in the world. Competition forces most companies to fight for small, temporary advantages in their products and for incrementally lower costs with their backend systems. (And this ignores entirely the relationship between time to market and returns. Sometimes software is worth a lot if you could have it today, not so much a year from now.)

Third, story points? Not the panacea they're made out to be. I wrote a post on that, too, but one self-referential link is enough for one comment.

Finally, if you're not going to estimate a completion date, why estimate at all? The reasons he gives for estimation are actually reasons to spend time on design. Estimation done that way adds nothing.

Perfect system for me as a software developer:

1. When considering what is to be done, ask me to assess how hard it is in points. Also ask me to tag the task with words that I consider important, like "database schema change" or "attach legacy system" or "IE7" or "find third party library". You can also ask me about additional things, like what subsystems I think I'll have to touch, or what languages I think the source I'll touch is written in.

2. Use prior knowledge of how long similarly hard, similarly tagged tasks took to guess how long this one will take, but don't tell me, because I'm not really the one who is interested.

3. If I'm given any input from people who want to get this done ask me how it changes hardness of the task and the tags.

4. After the task is done, ask me the same things again, and update your knowledge using the measured time the task was actually worked on, my initial predictions of hardness and tags, my final evaluation of actual hardness and tags, and any other information you chose to gather.

The worst thing you can do to get an estimate is just ask me how long it will take. I'd rather you, not I, make the WAG, because then everyone is aware of the fact that it's a WAG.
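A toy version of steps 1-2 and 4 above, with made-up history; `predict_hours` is a hypothetical helper that falls back from tag-overlap matches to same-points matches:

```python
from statistics import median

# hypothetical record of finished tasks: (points, tags, actual hours)
history = [
    (3, {"database schema change"}, 10.0),
    (3, {"database schema change", "IE7"}, 16.0),
    (5, {"attach legacy system"}, 40.0),
    (3, {"find third party library"}, 8.0),
]

def predict_hours(points, tags):
    """Guess duration from similarly sized, similarly tagged past tasks."""
    # prefer tasks with the same points AND overlapping tags
    similar = [h for p, t, h in history if p == points and t & tags]
    if not similar:
        # fall back to all tasks of the same point size
        similar = [h for p, _, h in history if p == points]
    return median(similar) if similar else None

estimate = predict_hours(3, {"database schema change"})  # → 13.0
```

Step 4 would then just append `(points, final_tags, measured_hours)` to `history`, so the predictor improves without the developer ever being asked for a date.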

This is the best article on software I've ever read.

As akeefer pointed out, this was a long rant which finally got to the point: separate effort and scheduling. Take a quick chop at estimating effort, but not in terms of duration/dates on a schedule. Relative size works fine. It's better to take a quick hit at estimation over and over again as you begin to see empirical results than it is to spend a lot of time up-front trying to do the perfect estimate.

Software management is not like programming. It's also not like other kinds of management. Any group of people needs some sort of management function. The trick is to put in the most lightweight structure possible. For some reason many technical people of great intelligence, when faced with planning activities, begin to construct various kinds of paper and mathematical wonderlands to live in. We naturally enjoy creating complex models.

Don't do that.

I'm still reading the essay -- I had to stop and comment about the following paragraph:

"This results in folks that think adding more developers to a project at the beginning will allow them to hit an arbitrary date by spreading out the workload. Unfortunately new software doesn’t work this way, either, as the same exponential communication overhead occurs in the beginning, too. You merely end up with a giant mass of incoherent code. And you waste a lot of money employing people who don’t add much value, sitting on the bench waiting for their part of the project to begin."

These must be folks who stopped reading The Mythical Man-Month after chapter 2? I would think the following chapter about The Surgical Team would counter such thinking. Unless people are only taking away the lesson about the programmer productivity gap?

Agreed! There's no real point throwing out random estimates instead of actually sitting down and writing the code.

There is a great deal of philosophy here, but I'd prefer to see some data that backs up his point. Can you show me a project in which investing in an estimate harmed it?

Peopleware, DeMarco and Lister, pg 27-29, alluded to in the post.

Namely: "The most surprising part of the 1985 Jeffery-Lawrence study appeared at the very end, when they investigated the productivity of 24 projects for which no estimates were prepared at all. These projects far outperformed all the others..."

The study referenced is the 1985 Jeffery and Lawrence study, for which I cannot find the raw data, unfortunately.

I found the study:


Jeffery, D.R. and M.J. Lawrence, "Managing Programming Productivity", The Journal of Systems and Software 5 (1985), 49-58.

Using the phrase 'considered harmful' in the title of a blog post is considered harmful

Comments using the phrase "using the phrase" considered as harmful as comments using the phrase "considered harmful." :)

FYI: Flights are on-time if they are within 15 minutes of their scheduled arrival. Arriving 16 minutes late is not an on-time flight.

"A flight is counted as "on time" if it operated less than 15 minutes later than the scheduled time shown in the carriers' Computerized Reservations Systems (CRS). Arrival performance is based on arrival at the gate. Departure performance is based on departure from the gate."


As a PM, the other reasons developers (or anyone else) don't like making estimates:

1) It's hard.

2) You are accountable for them.

No "My dog ate my homework" type excuses, no leaving at 5pm for a week and telling me on the last day your work will be late.

When you make an estimate, you are putting your credibility on the line. No one is 100% perfect, but you should at least give meeting the given dates a solid try. Not "Whoops, didn't make it, can I have another week please?"

I think there's more to it than that. There are other things I do which are hard, and hold me accountable, but which are not painful in the way that software estimation is.

The difference that I see is that software estimation has a third component: lack of control. Every couple days Alice is going to stop by and say "you know we need feature XYZ, too, right?" (which was never in the spec). Then I'm going to overhear at lunch that Bob rewrote the account-management system so I need to rewrite part of my code to integrate with that new interface. And Charlie is going to walk into my office 3 times a day and say "hey, did you see the new video game?". Oh, and this release we're also upgrading to a new version of the foobar library, which works fine for everyone else but mysteriously crashes on my machine, so I spend a day and a half fighting that.

I love being held accountable for hard things. (2 quickdraws on the side of the cliff that I need to retrieve? Great! I'm performing a solo next week and I need to get up to speed on something I've never played before? Bring it on!) But only if I'm being held accountable for something I have control over. Accountability - control = pain.

See also: traffic jams, delayed flights.
