Tasking developers with creating detailed estimates is a waste of time (2020) (iism.org)
499 points by lauriswtf 65 days ago | 271 comments

At the end of the day, it doesn't matter what process is used for coming up with estimates and delivering.

Regardless of what you think your process is, somewhere near the top of the leadership pyramid, it all boils down to a customer promise upon which hinges your organization's reputation.

In non-dysfunctional organizations people at all levels can understand and adapt to curveballs that cause deadlines to slip. In such places you make estimates to the best of your ability with limited time and resources and communicate constantly with stakeholders on how things are going.

The hard thing is it all requires honesty.

Probably the angriest I ever got at work was once when, at the end of a project, I was on one of those bullshit status update conference calls with 20 people on the line. Throughout the project, the team I was on was led to believe that we were constantly behind; the deadline was impossible to meet from the beginning, yet magically got pushed out several times to give us "grace time". We cut corners like madmen, amassed insane levels of tech debt, and all but put a midget in the machine to make it work. What happened on the call? The product management team congratulated the PM for delivering so far ahead of time. I was so angry I couldn't even talk. Basically, the PM got accolades, the customer got a shitty product, and we hated our jobs.

I agree with you on bullshit dates. But I don't think this part is true:

> Regardless of what you think your process is, somewhere near the top of the leadership pyramid, it all boils down to a customer promise upon which hinges your organization's reputation.

First, I see the same sort of dysfunctions driven by The Date™ even when no customer promise is involved. Even when no customers are involved.

Second, if honoring customer promises were the number one priority, organizations wouldn't make them so casually and without figuring out what is possible.

Third, they would use management approaches that make it maximally likely to really hit customer promises, not just in terms of date but features and quality. If I have a date that really matters, I'll aim for first MVP by 50% of the schedule. Then we use customer feedback to iteratively improve every week, so at the deadline we have something polished and proven to satisfy.

Fourth, they'd try to hit dates in ways that build capacity to make good on future promises. Instead, the panics I see around The Date™ mean technical debt, strained processes, and burnt-out staff, all of which diminish odds of satisfying promises in the future.

First, there's always a customer, and so there is always a promise. "Customer" can mean you, your boss, an external client, the general public, or the janitor whose keycard doesn't work on the ladies' bathroom at 2am when no project managers, business line owners, or even ladies are around to watch him clean.

Second, profitability is usually the number one priority.

Third, they DO use management approaches that make it maximally likely... ...that they achieve profitability.

Fourth, yes, they would and they do. Unfortunately most don't understand that it is impossible to estimate a fixed period of time in which an unknown problem can be solved.

This always comes full circle to Steve McConnell's book Software Estimation. In my 34 years in IT, across three continents, in industry sectors ranging from insurance and telecoms to aerospace and defence, and Nokia Maps, that book will take you closer to the grail than anything else on earth. And you'll still remain light years from an accurate estimate until the day you ship.

That first one is stretching the definition of "customer" quite vigorously. Certainly beyond the bounds of the post I was replying to. A company's reputation does not hinge on whether the janitor's keycard works on the ladies' bathroom at 2am by a made-up date.

Long-term profit is not the number-one actual priority of American management methods. Not even close. As an example, look at Toyota versus the big 3 American car manufacturers. Toyota is much more profitable [1] and has been for decades. That's because for Toyota profit is an outcome, not a holy grail. The actual priority of American management methods is making executives look/feel smart, in control, and dominant so they can justify extracting lots of cash.

I do agree that they use management techniques likely to achieve those feelings. And that includes making up bullshit dates and then insisting everybody make them happen. But that's part of the problem.

[1] https://www.detroitnews.com/story/business/autos/2015/02/22/...

Toyota is still pursuing profit. They are just operating within a larger scope of time. Profitable this quarter vs this year vs this decade are all very different goals.

Edit: And that definition of customer is not uncommon in the industry. It helps to know who your customer is.

Doesn't seem to me like you contradicted your parent poster. They said that profit is not number one and that it's an outcome, not the one and only metric. Also you didn't address their claim that US companies value good PR and executive bonuses above everything else, even if the project fails miserably -- which matches my observations from 20 years of career as well.

Beware confirmation bias. There are certainly companies like that, but I bet there are many more that aren't, ones that simply don't rise to your attention.

Profit is the holy grail for all companies. It's not an accidental outcome that Toyota is profitable. The quest for profit is the basis for everything they do, even if it doesn't seem like it. They just picked their heads up a little compared to their US peers so they can see farther down the road.

You are absolutely incorrect about Toyota. And most Lean companies. Their management philosophy is fundamentally different. If you'd like to learn more, maybe start with Rother's "Toyota Kata". Maybe along with the This American Life episode "NUMMI".

Can you provide a thesis so I know what I'm looking for if I choose to do the research you've laid out for me? What is Toyota's goal, if not profit?

This whole reductionist notion that an 80-year-old company with 300,000 employees has a single goal is part of your problem. Embrace complexity. But if you're looking for their take on the basics, start with their website: https://search.newsroom.toyota.co.jp/en/all/search.x?tag=Vis...

I've read War and Peace, I get that you can model the goals of an organization as the integral of the goals of its constituents. I do embrace that complexity.

But reductionism is helpful sometimes. The integral of a complex equation can add up to an integer. Modeling planetary movements in a way that's similar to a spherical cow in a vacuum may not be perfect, but it is possible, and tells you quite a bit.

Reductively, companies pursue profit, in the same way that people pursue money. It's not all-consuming, but if you want to model behavior, that's probably the place to start.

A fine argument for you clinging to your too-simple model and your ignorance. And a good sign that I wasted my time giving you citations you were never going to read.

Profit not being the metric and profit not being the goal are two completely different things.

I'm not sure what you're trying to suggest. But in the case of Toyota and most Lean companies, profit is a metric, but is not the goal.

Unfortunately development does not exist in a vacuum.

The Date(TM) can have a slew of dependencies that chain-react down the line. Perhaps your client's training was booked months in advance, with flights and stand-in schedules all based on some dev feature being available in class. Or hard-to-reschedule subcontractors were booked six months ago for pen-testing and The Date(TM) is slipping. Things like this have costs and consequences beyond technical software debt, and in many cases come with enforceable penalty clauses which you don't want to trigger.

Yes, there is always a tension between the hands down best guess of what was possible when you made the commitment vs the actual critical path as revealed by experience over time, but you live with those unknown risks and degrees of certainty and proceed. You Meet The Date(TM).

I also think Radical Transparency plays a role here that was lost on the 'successful' PM referenced in GP's story. If you cancel weekends and duct-tape together things you need to fix later to meet a date, then everyone involved should be 100% clear on why it is so important. And agree. There is a reason some activities are called Sprints. You can't do them all the time. A Sprint is a defined burst of energy, maximally applied. Anything else is a different kind of race; Business as Usual, for instance, is a marathon. But, occasionally, you need to produce a Sprint to Meet The Date(TM). Point is that the GP's PM-in-question might have accomplished more with honest relationships, including a well-thought-out and generous TOIL [1] program to compensate for the off chance that some institutional shortsightedness at an earlier planning stage -- where commitments were made that may have seemed like a good idea at the time -- proved more difficult than anticipated and resulted in a need for Sprints.

[1] https://citrushr.com/blog/leave-absence/what-is-toil-and-how...

I agree that The Date™ is sometimes driven by legitimate factors. And that other things often become tied on to The Date™.

But The Date™ also is a BFD when none of that is true. And if things like client training and penalty clauses were all that important, people would be extremely careful about picking dates that are realistic. Instead, frequently The Date™ is made up and people get fetishistic about Meeting The Date™ in ways that are untethered from actual circumstances or consequences.

That's a sign to me that the real purpose of The Date™ is not the nominal purpose.

> There is a reason some activities are called Sprints. You can't do them all the time.

This is a perfect example of how distinct the nominal purpose and the actual behaviors are. In the Scrum framework, things are structured as an unending series of Sprints, back to back from here to eternity. Whereas in reality, a sprinter like Usain Bolt runs, what, 10 or 20 officially timed races per year at under a minute each? And the rest of the time recovering, prepping, and training.

>Second, if honoring customer promises were the number one priority, organizations wouldn't make them so casually and without figuring out what is possible.

I can't tell you how many times I asked, "who came up with this deadline?" and I get "someone in the sales team," or something like that.

You are flying a plane over the Atlantic and you realize that you may not have enough fuel to make it to shore. Your options are:

1. Focus on flying the plane in a manner that uses as little fuel as possible, not bothering even to glance at the fuel gauge, watching the air currents and gliding where possible.

2. Turn on autopilot and spend time and effort trying to calculate the fuel you have to see if you make it to shore before going back to flying as conservatively as possible.

Number 2 feels better. Number 1 is more likely to end with a safe landing.

If you have a deadline and won't be getting new resources any time soon, what purpose does an estimate serve if you are already committed?

> If you have a deadline and won't be getting new resources any time soon, what purpose does an estimate serve if you are already committed?

I didn't commit; the sales person did. I'm not in trouble, he and the company are. I can just leave; it is their fault if they make bad deadlines. The only reason they can continue is that engineers don't push back and quit when it happens. You work hard so that the sales person can get his juicy bonus; why should you do that?

The problem is sometimes we wouldn't get a customer unless we meet the deadline the customer requires. I'm fine with that. Most of the time, it's the sales team promising the world so that they get more bonus without having to pay the price for guaranteeing those promises.

There's a solution to that. Internal debiting between departments. Without it (as is usually the case now), sales is always seen as a net plus to the company, because they bring in new revenue, and can do no wrong.

With it, sales departments can spend more than they have, because they sell for X and need to pay production, IT and project management Y1, Y2 and Y3. An unrealistic deadline would put them in the negative due to overtime payments, attrition, etc. So would unclear requirements, failing to do their due diligence, selling science fiction...
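As a sketch of how such internal debiting could flip a deal's sign on the sales department's own ledger -- all figures below are invented for illustration, not taken from any real company:

```python
# Sketch of internal debiting: sales "buys" delivery from the other
# departments, so a bad deal shows up as a loss on sales' own books.
# Every number here is a made-up illustration.

def sales_margin(deal_price, dept_costs, overtime_penalty=0.0):
    """Net result for the sales department on one deal, after paying
    production/IT/PM their internal charges plus any overtime fallout."""
    return deal_price - sum(dept_costs) - overtime_penalty

# A realistic deadline: sales keeps a healthy margin.
ok = sales_margin(100_000, [40_000, 20_000, 10_000])           # 30_000

# An unrealistic deadline: overtime, attrition, and rework charged
# back to sales push the same deal into the red.
bad = sales_margin(100_000, [40_000, 20_000, 10_000], 45_000)  # -15_000
```

Under this kind of accounting, "selling science fiction" stops being free for the sales team, which is exactly the incentive change the comment argues for.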

You’ve just made an analogy to a situation that ~never happens, in a complex profession which ~none of us has direct experience in at a professional level. That’s fine, but it doesn’t really help with reasoning.

An estimate serves the purpose of allowing you to say “well, the deadline is in 3 weeks and our best estimate is that this will take 9 weeks. Let's save ourselves the effort, cancel the project, and do something more profitable with our time.”

In your analogy, that would be the equivalent of… jumping out of the airplane mid-flight, confident that you can just ride brooms to Iceland because thinking of this as a life-or-death decision is incorrect.

You don't really have a deadline in 3 weeks if you can make the choice to not do it. I was speaking to work that has already been committed to. Think things like changes in the law that you must adapt to or close the business.

In that case, 99% of deadlines are not what you'd call real deadlines. Even in your fantasy case, what often happens in the real world is that regulators give more time to companies working to comply.

So if it helps you, just imagine here that we're talking about the 99+% case, rather than the extremely rare "real" deadline.

You could make a compromise. If you have a rough idea where you'll ditch, you can radio that to air traffic control, and ships can be sent ahead of time so that when / if the ditching happens, they can try to rescue people. Ships can act as bases for helicopters as well.

What would the business version of that be? Update your Linkedin, do some LeetCode?

In business, life goes on even if the project is late, there just has to be some accommodations. It's less painful the earlier it is known.

The sales team works closest with people who will become customers, so it’s not crazy for them to be plugged into a customer’s needs or priorities. That said, lots of sales people make promises first and figure out the details later, which isn’t good business.

For sure. Any good product balances desire and possibility. Salespeople have a lot of data on desire; engineers on possibility. The problem isn't involving sales. It's not involving the engineers!

Heavily sales-promise-driven product development is also bad for the company long-term. Saying yes to everything (for even small values of "everything") creates products that are just feature swamps, and not especially compelling at any one thing.

Depends on the company and sector - companies that aren’t flexible enough during the sales process fail to win business (particularly on the enterprise side) so there is clearly a trade off here.

> First, I see the same sort of dysfunctions driven by The Date™ even when no customer promise is involved. Even when no customers are involved.

I believe it, but I've also seen the reverse too. In a situation where no customers are involved, at a non-profit enterprise. When we have a launch date, it helps provide focus on distinguishing what's truly necessary from what's not, and using our development resources responsibly. The launch date needs to be reasonable, and it needs to be understood that it can slip (especially if new requirements show up), but it provided a lot of focus. Having no launch date, sometimes everything but the kitchen sink ends up being thrown in, anything that anyone ever thought was a good idea, and we lose focus on the mission or the end-user, and the project can stretch on forever never being done.

Good point, I think the problem is not having The Date but as you allude to, having a top-down fearful environment of missing The Date. In an ideal world we would be able to communicate The Same Date all the way up and down with an understanding that personnel issues come up, sometimes estimates aren’t accurate, requirements change, etc. Maybe we expect The Date to only be hit 50% of the time and we’re all chill with slipping a few weeks/months/quarters depending on the size of the project.

This allows you to set dates that force you to prioritize (don’t delay shipping forever to add random features) without burning people out or shipping software that’s not ready for prime-time.

One problem in large organizations (especially ones that grow fast) is there is no common understanding of what deadlines/targets/The Dates mean. New engineers and management come in from environments where missing The Date is met with unpaid overtime or being fired, getting yelled at, etc. This causes problems when people with the opposite understanding - The Date is flexible and just a loose target - set Dates without padding or fully scoping something, or the opposite when those people become beholden to dates set by people who take them extremely seriously.

But this is what (for instance) Scrum and XP should give you. The Date is two weeks from now. What do we intend to ship?

Did we do that? Shall we do it again? We've got another The Date in two weeks. What do we intend to ship?

Repeat until everyone gets acquihired and vanishes into the bowels of AmaMetaFlixgle.

Sure, that's the intent. I trust we don't need to reiterate all the ways it can go wrong, with a management culture that is dysfunctional. (Really, so much of software engineering methodology can be understood as "how do we compensate for dysfunctional management culture, or, at best, what practices can help and support our management to learn to be more functional.")

In some cases, especially in the "enterprise", it's literally not possible to have something you can actually deploy in production (not just a proof-of-concept demo) after a single two-week sprint.

In some cases, you only get so many chances to get press and public attention for something, and have to decide at what point it's going to be sufficiently impressive to do so.

I agree that focusing on getting something live as soon as possible is a good idea, in almost all circumstances. There may be some stakeholders whose idea of "as soon as possible" is not as soon as yours.

In some cases stakeholders deciding "shall we do it again" are basically always going to say "yes", until someone says "enough" and sets a date at which it will be cut off -- which then forces management to be really serious about figuring out the most important thing to go in every sprint until then.

It is always very important to realize "you can have a fixed date OR a fixed feature/requirements set, not both." In some cases, the focus of a fixed date and "now what's really important to get in there if we can't get everything in" is more useful than a fixed feature set "and let's see how long it takes" stretching on forever with diminishing returns and nobody saying "you know, actually, it would be more valuable to the organization to work on something else now?"

In my experience, working in certain kinds of organizations!

Exactly. Releasing early and often to an ever-widening circle of stakeholders provides the same discipline with much less pathology.

Organizations make promises to close deals. Customers usually don't care how long it takes as much as they care about when they need it. There is variability in the bandwidth of those constraints (and in price paid) which is why we have a sales process. At the end of that process, making changes is really hard. So here we are.

> At the end of the day, it doesn’t matter what process is used for coming up with estimates and delivering.

Yes, it does, because:

> Regardless of what you think your process is, somewhere near the top of the leadership pyramid, it all boils down to a customer promise upon which hinges your organization’s reputation.

Right. So if the process of coming up with those promises – of what to deliver at what cost in $ and time – isn’t aligned with what goes on below, there is a problem. So, the process below matters intensely.

> In non-dysfunctional organizations people at all levels can understand and adapt to curveballs that cause deadlines to slip.

“Understanding and adapting to curveballs” in a large organization is itself a matter of process. And a big factor in it is identifying those curveballs with time to understand and adapt. Which is, also, a matter of process, and particularly the processes of estimating, re-estimating, and delivering, exactly the processes you said don’t matter.

> The hard thing is it all requires honesty.

That is among the hard things, sure. But even before honesty, it requires openness to accepting facts, including facts that have been established fairly solidly about what does and doesn’t work when it comes to estimating intellectual labor instead of implementing rote project management processes derived from physical infrastructure work where the bulk of work is easily-quantified construction, not intellectual labor.

I think the point of the OP is that people and culture matter more than the process.

Whether that is estimation or popular ways of organizing: Scrum, kanban, waterfall.

If you have good, honest people who care more about the project goal than their personal position, it works; if you have a culture where "my position" means more than the end goal and the people I work with, nothing works.

Dev projects often overrun their deadlines because customers cram additional requirements into the mix if the relationship isn't carefully managed. I have witnessed several projects that stay in a constant state of billing and extension because everyone is simply comfortable with stable paychecks.

An experienced solutions architect and a technically experienced project manager are the key to success in mission-critical and time-sensitive projects. Company cost cutting is the enemy: roles go under-funded, and interviews are not accountably performed by knowledgeable recruiters and staff. You won't be able to hire and retain efficient and effective staff if you do not properly fund your roles. Attrition is your fault as a manager, not a betrayal by an employee, as it's often portrayed. Employees are not indentured servants; they are free people who can and should always be able to make decisions in their best interests.

A company should survive based on its reputation for delivery and quality, NOT based on its ability to "underbid" everyone else.

Projects fail a lot these days because so many tedious activities to maximize billing hours are conducted and woven into daily processes, and because too much attention is paid to padding individual egos and to the ideal of phony "corporate culture" kool-aid. Honesty goes a lot further toward meeting goals of project success; dishonesty is often a costly "hole in the bucket" in your delivery budget.

Hire experienced people with a track record of success, who communicate well, to lead and mentor teams. I have seen entire IT companies overrun with people in management positions that don't have a clue about development... They often win contracts based on connections to friends elsewhere, but they frequently miss the mark in business, because the staff dedicated to actual delivery is too often a poorly-funded afterthought.

Stop hiring people just because they're "your buddy"... If they aren't qualified, competent for, and educated on the requirements of their role, they're "dead weight" towards meeting your project's success goals.

Pay each employee well, and hold them properly accountable for their delivery and their attitude, but don't frustrate or work to control/restrict them beyond what threatens the project and productivity of others.

Plan religiously for everything to be done within business hours, quit imploring of your employees to put in extra hours. Make sure your contract bid covers proper staffing to not let burnout and attrition become a factor.

If delays arise, tell customers quickly, and let them negotiate trade-offs or extensions early and decisively. Hold customers responsible for being reasonable, stop the trend of putting your teams at the mercy of abusive customers.

You cannot force a team to subscribe to and uphold Agile Methodology while you yourself as the program director do not uphold your role and responsibilities as well.

A manager's primary role is to remove all the barriers to team success.

Everyone should be accountable for their role in a project/company setting... Everyone. A vast majority of project and delay-related problems occur because accountability and physical work too often happen only at the ground level of a company, while the top stays isolated, only making decisions and measuring results.

> Basically, the PM got accolades, the customer got shitty product, and we hated our jobs.

The incentive for a PM or tech lead isn't to help developers do their job well but to make their boss happy. When developers are not closely involved with product owners themselves, using a PM as an interface and shield from the rest of the company, they give sole individuals the opportunity to crack the whip for fun and profit. If a PM is particularly good at what they do, they make everyone believe they are a hero and that nothing would get done without them.

Guess who's getting a raise for the project getting done ahead of time? It probably ain't you.

Just put that "project delivered ahead of time" on your CV and go find the raise elsewhere. That's how the industry works for devs anyway.

..."ahead of time" does not mean much to me, just that someone at some point was off, maybe some hero projections.

On the other hand, "on time and on the budget" means a lot, that all lies got aligned at the end to make it all look like truth - that's a success!

That is how it goes in the end with any estimate. If it is overestimated the product is delivered with high quality. If it is underestimated the product is delivered with low quality. You can always buy time with technical debt. Our industry always does.

I'm in an org that took on a great deal of technical debt; I no longer refer to it as such. The term seems to create a false sense of security. Instead I simply call the software what it is: incomplete or unfinished.

"We took on some technical debt in Q1."


"We didn't finish the project in Q1, but we kinda have something in place that sort of works, as long as you don't look at it funny."

I like the term "debt" because it makes clear it is something that needs to be paid off and is accruing interest: the longer we don't pay it, the more expensive it gets, until almost no development time goes to the "principal".

Unfinished does not communicate the same thing. You can have an extremely clean and polished product that is unfinished. You missed the deadline, but you're in a good place to deliver future requests.

I have found that the sort of "technical debt" that affects customers has another word for it: a bug. Those issues tend to get prioritized quickly. The other kind of technical debt, the kind that has you working nights and weekends because the system falls over in a light breeze, doesn't have a special name. I consider "the system is relatively stable" an implicit requirement of anything you build; if that requirement isn't hit, it simply isn't done. It should probably go back to the team that failed to finish the work, and the PM involved shouldn't get kudos for its completion, as it isn't completed.

I think a lot of the problems with software dev could be resolved by being more honest with ourselves.

Understood, but in reality the PM is going to get kudos and your complaints that things that were delivered and paid for are "unfinished" are going to be ignored.

Devs are pretty honest, management simply does not listen.

"makes it clear"

You assume a lot about management/product there. The term -should- make it clear that carrying debt has a cost. But the thing is, -they aren't paying that cost-. From their perspective, they're getting free money while the devs pay the interest.

The one side of technical debt that the term doesn't capture is the quality of life issues involved. It's just misery and pain to work in an org with high levels of debt, even if everyone involved is comfortable with timelines necessary to accommodate it.

It's like having a factory with poor lighting or unmaintained equipment that breaks down a lot. There are metaphors that capture that aspect of technical debt better, imo.

That's "paying the interest". Let's say you rush a component of your software out. Every time you make a change, it causes massive headaches and issues. Let's say this decreases developer productivity by 5%.

Then any new feature that touches that component increases this "interest". You're now losing 6%, 7%, etc., purely on interest.

The longer this goes on and the more debt you accrue the worse it gets. Getting debt to have something now that you fully intend to pay back is not a problem.

Constantly increasing your debt until you go bankrupt (total rewrite) is, though. The problem is that management is blind to tech debt. It doesn't show up in their calculations; it's just that thing the devs always complain about, but somehow the product still gets delivered, just with more "irrelevant" complaints each time, until half the team has quit.
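The compounding described in this thread can be sketched as a toy simulation. The 5% drag per rushed component and the every-other-week cadence are invented numbers, not measurements; the point is only the shape of the curve:

```python
# Toy model of tech-debt "interest": each rushed component adds a fixed
# productivity drag. All numbers are invented for illustration.

def effective_velocity(base_hours_per_week, debt_rates):
    """Hours of real progress per week after paying interest on each debt."""
    interest = sum(debt_rates)  # e.g. 0.05 of capacity per rushed component
    return base_hours_per_week * max(0.0, 1.0 - interest)

weeks = []
debt_rates = []
for week in range(1, 11):
    if week % 2 == 0:  # assume a new rushed component ships every other week
        debt_rates.append(0.05)
    weeks.append(effective_velocity(40, debt_rates))

# Week 1: 40 productive hours. Week 10: 30 -- a quarter of the team's
# time now goes to "interest", and none of it pays down the principal.
```

Under these made-up rates the drag only grows, which is the thread's point: the interest payments never appear in management's calculations, but they are real in velocity.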

That makes it sound like technical debt is a financial matter, that it's reasonable to bargain about. It isn't - it's crappy code that is a pain to work with, that will make devs quit, and will cause bugs.

I've worked on a project with 200 devs, and one year of rushed history. The entire project was technical debt, which was never "paid off". When I said it needed refactoring, I was told to refactor it myself. After two weeks' work, my effort failed, I got dinged, and my reputation suffered.

We didn't have CI, or a test-suite, not even a decent RCS. I didn't have a chance. I should have kept my mouth shut.

Code being a pain to work with, causing devs to quit and constantly causing bugs is costing massive amounts of money in lost productivity.

Every hour the Senior Engineer wastes putting out fires is $100+ down the drain. Management simply refuses to include these costs in their calculations.

We should start tracking this time expenditure very accurately and repeatedly show it to higher management: this is how much tech debt has cost this project this month.

Unfortunately, they still won't listen. We're just whiny little babies in their eyes.
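One hypothetical way to do that bookkeeping: log firefighting hours as they happen and roll them up into a monthly dollar figure. The log entries and the $100/hour rate below are made up for illustration:

```python
# Sketch: tally logged "firefighting" time into a monthly cost of tech debt.
# The rate and the log entries are invented examples.

HOURLY_RATE = 100  # the $100+/hour senior-engineer figure from the comment

fire_log = [
    # (month, hours lost to debt-related breakage)
    ("2020-03", 6.5),
    ("2020-03", 4.0),
    ("2020-04", 12.0),
]

def monthly_debt_cost(log):
    """Sum hours per month and convert to dollars."""
    costs = {}
    for month, hours in log:
        costs[month] = costs.get(month, 0.0) + hours * HOURLY_RATE
    return costs

# monthly_debt_cost(fire_log) -> {"2020-03": 1050.0, "2020-04": 1200.0}
```

Even a rough tally like this turns "the devs are complaining again" into a line item management can compare against feature work.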

With the word "debt", smarmy PMs love to say we're "leveraging debt" for some great purpose, as though it's a net neutral or better

Calling it incomplete or unfinished may hide the fact that it is broken, maybe beyond repair.

I've had things like that. Before me, someone put a large amount of relational data in an object store. It's dog slow and stupidly complex. I call that not fit for purpose, and make it clear that it requires a total rewrite.

It depends on how often this purpose is needed or called by the system. Tackling debt that is rarely executed can cause other debts to mount.

The most bitter pill I have swallowed in my career is seeing the extent to which executive compensation (i.e., bonuses) drive almost everything.

Ironically understanding that was the gateway to a zen like state of, "Sure but it'll impact our ability to succeed in next Quarters objectives"

They do think about those bonuses longer than the next quarter.

> put a midget in the machine

Please don't use this phrase.

I'm sure you don't mean it badly, just a reminder.

Zero trolling. I'm not here to start a Wokeness Holy War. I understand and appreciate the sentiment of your post.

Would it be OK to say "put a little/small person in the machine"? If not, please kindly suggest a better replacement so that we can learn. :)

> The word “midget” was never coined as the official term to identify people with dwarfism, but was created as a label used to refer to people of short stature who were on public display for curiosity and sport. Today, the word “midget” is considered a derogatory slur. The dwarfism community has voiced that they prefer to be referred to as dwarfs, little people, people of short stature or having dwarfism, or simply, and most preferably, by their given name.

> When we surveyed our community about the usage and overall impact of the word “midget”, over 90% of our members surveyed stated that the word should never be used in reference to a person with dwarfism.

From https://www.lpaonline.org/the-m-word which is a dwarfism organization. But as other comments stated it's best to not hinge your metaphors on ridiculing others.

For many people there is one characteristic that strangers always notice or call them out on and seeing that trait brought up in odd metaphors or jokes would likely feel off.

I think the solution is to find an analogy that doesn't hinge on using a person with a disability.

I don't think any meaning or impact of your post is lost if you just swap in "hamster wheel".

I’ve personally never heard the phrase before, so I would question whether it’s so crucial to the conversation that it needs a replacement. It’s not even clear what it means. From context I can gather that it means “do something desperate to try to complete a software development task before a deadline.”

I took it as a reference to the Mechanical Turk, https://en.wikipedia.org/wiki/Mechanical_Turk, cheating by including a clever human literally inside your machine in that case...

Explicitly referencing the mechanical Turk would be better, and a bit more explicit on the approach.

I think the actual reference might be to a Calvin and Hobbes strip, but adding that it's a little person in the ATM vs. just a person is an unnecessary addition.


A demon? Nobody ever accused Maxwell of being unwoke (yet).

Now that's technical debt!

I got less than I would have expected searching on that phrase. Is 'midget' the problem word here?

I've known a few people less than 5' high and it is never an appreciated word.

The whole phrase is the problem. Implying that it's okay to put a "midget" in a bad situation to cover your own failings.

I thought the point of the phrase was that you had fucked up?

Yes, I did not intend to be offensive, but you're right. I regret that now.

> Regardless of what you think your process is, somewhere near the top of the leadership pyramid, it all boils down to a customer promise upon which hinges your organization's reputation.

Yep, this is exactly how I think about it as well. No matter how hard you try to train people to adapt to scrum with story points, at some level it turns into time estimates and deadlines anyway, so it's always going to be more or less pointless. Even if you do scrum with story points and estimates perfectly, the fact that they're detached from time makes them useless; it's just wasted paperwork that nobody wants to see.

you just described pretty much the perfect exemplar case for why/when estimates and deadlines are bad. an anti-pattern to be avoided.

Welcome to corporate America!

An estimate is usually the sum of a bunch of smaller estimates. Those estimates have a probability distribution that is asymmetric. While tasks can sometimes be quicker than anticipated, the lowest they can go is zero (turns out we don't need it, cut it), and that kind of thing is relatively less common than a blowout the other way. Those blowouts are uncapped: zero time is a minimum, but there's no maximum. The unknown unknowns are the things that get you. It seems like it will take X, but the frobnicator was broken in ways we couldn't have anticipated, so we had to get a different thing and then hack it to make it do what we actually needed, etc. Each individual item has a low probability of that happening to some extent. When you have a bunch of things in sequence, one of them probably will, if not more.

IIRC each item is described by a Poisson distribution. (Have I got that right? Low mean, long tail.) Nobody models estimates as a sequence of dependent, Poisson-distributed events, that I've heard of at least. If you did, you might at least get a range on the estimate: between 2 and 20 days. That would be more accurate. Useful to anyone much? Unclear.

And one of the classic problems with estimates is that common processes tend to bias estimates.

For example, what (I think) McConnell calls "picking the first non-impossible date". An exec asks how long project Magic Pony will take. A manager asks the team and the team comes up with a number. If the number is higher than what the exec imagined, or even higher than what the manager imagines the exec imagines, pressure is applied on the number. "Are you sure? That seems like a lot." A negotiation ensues, and often the number lands on the first date that engineers can't absolutely prove is impossible. Or the first date that engineers won't quit their jobs over.

For an estimate, we in theory want one where the team has a good chance of hitting it. Say, a 75% chance of success. But iterative pressure from the powerful will shift the distribution to 25%, 5%, 1%.

The classic cognitive trick: people, unless specifically trained not to, will always estimate the best possible scenario, even if past experience has told them that the best possible scenario never happens.

I'm dealing with this at my company right now. Our clients, and even our internal teams, will always refuse to estimate according to our past behavior.

Us: "We estimate your build will be functional in prod in five months."

Client: "Unacceptable. That's way too long. We need it in three."

Us: "Okay, we technically can do that, but that relies on a big internal release that we think is risky to base estimates on."

Client: "All I'm hearing is that you can do it in three."

Then we get to three months and hit delays over and over again until finally, five months later, the build is functional in prod.

The thing that really frustrates me is that when we deliver the initial estimate, the clients always refuse to accept our medium-case estimate, with the implication that they'll drop us as a vendor if we follow that. So we give them our best-case estimate, fail to hit it, and then they buy more from us anyway, complaining the entire time.

Why don't we all just look at the past 10 builds we delivered for them, admit that the best-case never happens, and stop pretending like it's unacceptable to take longer? The point here is that they keep accepting it. Everyone complains, but we all keep selling and buying to and from each other. Let's just admit it's harder than we're saying to get software built and accept longer estimates, given that we're all going to keep selling and buying anyway.

> Why don't we all just look at the past 10 builds we delivered for them, admit that the best-case never happens, and stop pretending like it's unacceptable to take longer? The point here is that they keep accepting it.

It's theater, mostly. I think some people think they have to do this. How did they get you to keep delivering when you did? It was their threats/complaining! If they stopped doing that... there's no telling how long it might take you to deliver.

Given that every project you've called at 4 months has delivered in ... 4 months, for the past 7 years... your estimating/delivery ability is still tangential to their managing of the delivery by regular barrages of threats, complaints and haranguing.

Wow, great insight.

It reminds me of the dynamics with walking down the sidewalk past a dog. If I walk too close to his yard, he decides I'm a threat. He barks! I walk away. It's easy for him to do post hoc ergo propter hoc and conclude that his barking chased away a dangerous threat. In reality I didn't do a thing different, but his lesson is that barking works!

It's a great point, and one that could be rectified by their teams actually having standards. Threats can't be empty. "Deliver early, we'll give you a bonus. Deliver on-time, we'll pay. Deliver late, we'll hold you accountable and pay less according to this attachment. Deliver really late, and you get a strike. Three strikes and we drop you."

Of course, that would require backbone, fortitude, accurate forecasting, integrity, and management support.

I've had arguments with PM's about this. After a few years of doing similar projects you can sort of squint sideways at something similar and have a gut feel (if it's completely different from anything ever done, that doesn't work of course).

But every PMP (tm) certified PM I've worked with thinks that if you can just break the project into granular chunks, estimate those, then sum the estimates, there's your final deadline. The depressing part is that the fact that it never works doesn't dissuade them on the next project; it's all just theatre for the execs.

Totally. In most places what people call "estimates" are just wishful thinking.

And that client behavior you describe is what I think of as "Kirk-style management". An executive thinks that the way to get technical things done is to be demanding and shouty toward the technical person, insisting that the number given isn't good enough. Scotty's pathological response to the pathological situation was to lie his ass off about estimates: https://wiki.c2.com/?ScottyFactor

Thanks for that article--bookmarked.

All I'm hearing is that the client signed the contract, which means you won.

Whether your company makes money on the change order is a matter for your boss.

One technique is to ask for 3 point estimates. Get everyone involved to estimate each item in three different ways:

1. How long it will take if everything goes well and according to plan.

2. How long it will take if everything goes wrong and we uncover further problems along the way. Probably getting those risks listed.

3. How long it is likely to take.

I imagine we tend towards doing just one of the first two, but having to do both should lead to more agreement on the third.

That said - it's been about 30 years since I was taught this as part of critical path analysis, and I've never seen it used in the wild.
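For what it's worth, the textbook way of combining the three numbers from critical path analysis is the PERT formula; a minimal sketch (the 1:4:1 weights and the example numbers are the standard convention, not something from this thread):

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT point estimate with a rough standard deviation."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    stdev = (pessimistic - optimistic) / 6  # common rule of thumb
    return mean, stdev

# A task estimated at 2 days best case, 5 likely, 20 worst case:
mean, stdev = pert_estimate(2, 5, 20)  # (7.0, 3.0)
```

Note how the pessimistic outlier drags the combined estimate (7 days) well above the "most likely" 5, which is exactly the asymmetry the parent comments describe.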

The problem even with that method is that if everyone agrees on "most likely", and that is perfectly correct, and you use it, you'll still be wrong more often than right.

The reason is the underlying distribution: because the probability distribution is usually heavily skewed to the left, with a long tail to the right and lots more room for things to extend than to contract, the day which is "most likely" when looked at in isolation is actually well to the left of the median date, the date with a 50% probability of being before or after completion.

If you want an 80% chance of being right, say, you need to do some more maths with the three numbers to figure out what the right date to use is.
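The gap between "most likely" and the median can be made concrete with a lognormal distribution (the parameters here are illustrative, not from any real project):

```python
import math

# For a lognormal with parameters mu, sigma:
#   mode   = exp(mu - sigma^2)    <- the single "most likely" duration
#   median = exp(mu)              <- the 50/50 completion date
#   mean   = exp(mu + sigma^2/2)  <- the long-run average
mu, sigma = math.log(10), 0.7  # hypothetical task: median 10 days
mode = math.exp(mu - sigma ** 2)      # ~6.1 days
median = math.exp(mu)                 # 10 days
mean = math.exp(mu + sigma ** 2 / 2)  # ~12.8 days
# Planning for the "most likely" day means finishing late far more than
# half the time: P(done by mode) is only about 24% here.
```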

I would very much like to see this done. The problem is that estimation isn't seen as work. It's seen (or, more accurately, framed) as something devs should be able to just magic from thin air with perfect accuracy, so why would we bother adding complexity? Very few organisations bother tracking estimation precision, I think because the incentives are aligned such that it's actually to the advantage of the people writing the cheques that developers under-estimate. I think the power over the developers that a missed estimate gives to an adjacent organisation is a real factor in ensuring this never gets improved.

> you need to do some more maths with the three numbers to figure out what the right date to use is.

We did this all the time at a previous job. We had a spreadsheet in which we put in the three estimates. They were combined in some ratio (I think it was 1 part each for the outliers and 2 parts for the most likely). Then some time was added on for testing, test remediation and project management at a set additional percentage. Worked out pretty well, we seldom ended up with issues when using this method.

Tom DeMarco describes a method in Waltzing With Bears that's quite interesting: use a triangle defined by the three points as a rough approximation for the probability distribution, and then cumulative distribution function calculations are just juggling quadratics that you can plug into Excel.

Mind you, these days Excel has the distribution functions you'd need to do it "properly" with a lognormal distribution, but I like the way the three estimation points have a direct and obvious link to the result with the triangle method.
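A sketch of the triangle method (these are my formulas for the standard triangular distribution, not copied from DeMarco's book): with minimum a, most-likely m, and maximum b, the CDF is piecewise quadratic, and inverting it gives the date for any desired confidence level.

```python
import math

def triangular_cdf(x, a, m, b):
    """P(done by x) for a triangular distribution on [a, b] with mode m."""
    if x <= a:
        return 0.0
    if x < m:
        return (x - a) ** 2 / ((b - a) * (m - a))
    if x < b:
        return 1.0 - (b - x) ** 2 / ((b - a) * (b - m))
    return 1.0

def triangular_quantile(p, a, m, b):
    """Date by which we are p-confident of finishing (inverse CDF)."""
    split = (m - a) / (b - a)  # CDF value at the mode
    if p <= split:
        return a + math.sqrt(p * (b - a) * (m - a))
    return b - math.sqrt((1 - p) * (b - a) * (b - m))

# Estimates of 2 / 5 / 20 days: the 80%-confidence date lands around
# 12.7 days, far to the right of the "most likely" 5.
d80 = triangular_quantile(0.80, 2, 5, 20)
```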

For sure. I know plenty of ways to deliver solid estimates and hit dates. I think the problem is execs mostly prefer having estimates they like over accurate estimates.

> A negotiation ensues

... and then the estimate is no longer an estimate, it's the outcome of a negotiation.

> Are you sure? That seems like a lot.

"No, I'm not sure, boss; it's an estimate."

Lognormal distribution is always a good one.

It's applicable where the randomness has a multiplicative rather than additive effect. For example, the work may double or be halved. Lots of natural processes work this way.


A useful property is that the Law of Large Numbers is of little practical help with it: the tail is heavy enough that averages converge painfully slowly, so all of your estimate history is basically useless. Also, when you add enough estimates, the total time to completion is mostly determined by the error in one or two of them.

I've written about this before more extensively, but my previous company tracked time vs predictions very rigorously and extensively, and they always fit a lognormal distribution shockingly well.
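A quick way to run the same check on your own data, using only the standard library (the ratios below are made up for illustration): if actual/estimated ratios are lognormal, their logs should look normal, and two exponentials recover useful planning factors.

```python
import math
import statistics

# Made-up actual/estimated ratios from past tasks (1.0 = on target).
ratios = [0.8, 1.1, 0.9, 1.5, 2.2, 1.0, 3.5, 1.3, 0.7, 1.8]
logs = [math.log(r) for r in ratios]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)

# If the ratios are lognormal, the typical (median) overrun factor is
# exp(mu), and the factor to budget for 80% confidence is roughly
# exp(mu + 0.8416 * sigma), 0.8416 being the 80th percentile of N(0, 1).
median_factor = math.exp(mu)
p80_factor = math.exp(mu + 0.8416 * sigma)
```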

I tried to do that! Didn't use Poisson, used normal distribution, but I think that's not so relevant - at the end of the day it's still just more elaborate guesswork.

I think what's useful is thinking about the problem, and communicating in group/ disseminating the information (this is required in order to do the typical "planning poker" or whatever planning technique the team happens to use). Whether you end up with L/ S /M or 7/13/21 points or whatever else you use - it's irrelevant, so no need to sweat it; what is relevant is that you discussed the tasks and everyone has some understanding, and maybe they're a little bit better spec-ed now.

The normal distribution is deeply inappropriate for time estimation, because it assigns the same probability to taking 3x the estimated time as to taking negative-1x the estimated time(!).

Your second point rings closer to what we should be doing instead. We should put more effort into analysis and refinement. In my experience, estimation meetings where all tasks got estimated were almost useless in the end; the most valuable meetings were the ones where we didn't finish estimating, but instead gained a deeper understanding of the requirements and of the domain.

Product and sales always want exact dates from tech but always seem to be a lot more vague when it comes to customer numbers and revenue numbers.

I've found the following system to work well. Estimates are always given as one of the following: hours, days, weeks, months, quarters. That's it, no numbers attached. It means "some small number of _______". This allows for discussion without everyone feeling trapped. If a feature takes "weeks" and Product thinks it should take "hours", then that is a discussion that can allow for a change in scope or requirement clarification or just education about how the system works. Of course, PjM systems only work if everyone buys in, and that is always the bigger challenge.
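One way to enforce the units-only rule is a simple bucketing function; a sketch where the thresholds are my own invention, not part of the system described above:

```python
def magnitude(hours):
    """Map a raw hour count onto an order-of-magnitude bucket."""
    if hours <= 8:
        return "hours"
    if hours <= 40:       # up to one work week
        return "days"
    if hours <= 160:      # up to one work month
        return "weeks"
    if hours <= 480:      # up to one quarter
        return "months"
    return "quarters"
```

The point of the coarse buckets is the same as in the comment: "weeks" vs. "hours" is a conversation about scope, not a number to be negotiated down.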

For projects with a deadline that matters, I've found that explicitly estimating to "80%" likelihood has helped me hit deadlines much more consistently. I used this to hit a deadline on a 4 month software project only 10 days out, which I think is good for software.

I think implicitly most estimates are the 50/50 case. "I'm 50% sure I can do it in this time". Much of the time I don't think this is very useful, and isn't very well thought through.

You have to assume you're going to hit a brick wall at least a few times in any given project, and that something you thought would be a snap turns out to require days of refactoring. Especially if there's some aspect of business logic that wasn't clearly communicated beforehand and/or the customer hadn't considered until it came to light as a code problem. This should be built into the estimate, but it always comes as a surprise gift to the customer when you still manage to deliver on time.

In design and code, customers need to be consulted and asked what they wanted and made to feel important but not about anything actually mission critical. That's what you're hired for. At least a few times in any long project you need to come up with a meaningful sounding but essentially empty aesthetic decision for the customer to make. This allows you to both show progress and let them feel they're in control. It also allows you to extend the deadline if you feel yourself behind.

50% is optimistic. Some (ok, many) estimates are actually 1% projections. The soonest we could possibly be done.

This, too, is not very useful.

The Poisson distribution is quite light-tailed. With a Poisson distribution, the probability of completing the task per unit time is constant.

Real world tasks are likely to take longer if they have already gone on for longer; maybe a Pareto distribution would be more appropriate.

I think you meant the exponential distribution? Poisson is a discrete distribution (the number of task-completion events occurring in a given time interval, if task completion is exponential).
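The distinction matters because the exponential is memoryless while heavy-tailed distributions are not; a small simulation (hypothetical parameters, stdlib only) compares the expected remaining time conditional on how long the task has already run:

```python
import random

def mean_remaining(sample, waited):
    """Average additional time, given the task has already run for `waited`."""
    survivors = [x - waited for x in sample if x > waited]
    return sum(survivors) / len(survivors)

random.seed(1)
N = 200_000
expo = [random.expovariate(1 / 10) for _ in range(N)]        # mean 10
pareto = [10 * random.paretovariate(2.5) for _ in range(N)]  # heavy-ish tail

# Exponential (memoryless): expected remaining time stays ~10 whether
# you've waited 5 units or 30.  Pareto: the longer the task has already
# taken, the longer the expected remaining time grows.
```

That second behaviour ("it's late, so it will be even later") matches project experience far better than any constant-hazard model.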

Exactly. One of the fundamental tenets of science is reporting estimates with appropriate precision, e.g. significant digits. If the probabilistic interquartile range spans multiple months, why would I report the mean of the estimate in "hours of effort"?

What's frustrating is I've articulated these thoughts clearly to managers in the past. They nod their head and agree and then ask for the hourly estimate again and we all collectively share our disappointment when our estimate is inaccurate after the fact.

Nothing like running head first into a wall and expecting a different result.

My management suggestion? Frequently identify and advocate for simplifications, and modulate the degree of simplification based on progress to date. If people enjoy their work, they will work hard regardless of timeline hawking. So the variable factor is really the volume and complexity of work, not worker intensity or motivation.

Yes that would mean managers have to do something, and while some are incidentally useful and productive, there are few external incentives for them to be so. Yet there are many incentives to let you burn out after hitting a wall a hundred times.

So a Poisson distribution is for count data, not continuous like time. A gamma distribution is a continuous distribution that have the shape you want. But, this approach should involve getting data and plotting it to see empirically what sort of distribution it might have.

I'm also unclear how the dependency you mention is going to come in. That said, I agree with you in general that some statistical reasoning should be brought to this problem. Maybe just showing people the way variances are additive, and the impact that has on the overall variance of a project made of many small projects. You could pick a plausible gamma distribution for individual task duration and play around with seeing what the sum of a number of IID gammas looks like: the sum is a gamma distribution also (see Wikipedia for the formula).
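The additivity is easy to check by simulation (shape and scale below are illustrative): the sum of n IID Gamma(k, θ) tasks is Gamma(nk, θ), so its mean and variance follow directly.

```python
import random
import statistics

random.seed(0)
k, theta, n = 2.0, 3.0, 10  # each task ~ Gamma(shape=2, scale=3): mean 6
project = [sum(random.gammavariate(k, theta) for _ in range(n))
           for _ in range(50_000)]

# Theory: the sum of n IID Gamma(k, theta) is Gamma(n*k, theta), so the
# project should have mean n*k*theta = 60 and variance n*k*theta^2 = 180.
m, v = statistics.mean(project), statistics.variance(project)
```

A standard deviation of sqrt(180) ≈ 13 days on a 60-day project is the kind of spread that single-number estimates quietly hide.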

> Useful to anyone much? Unclear.

If you're the leader/manager/person in charge, do you want to see reality, or do you want to keep your head in the sand so you can have a simpler planning process? If you want reality, and this is the most accurate view of reality available, then by definition it's more useful than anything else.

Is there a way to calculate this in Excel without using a Monte Carlo simulation?

Years ago, when I was working at a company, I did a detailed estimate of a large project.

I broke it down into components and individual tasks, then put estimated hours against each one, then put it all into a giant spreadsheet, to track planned vs actual times.

If I remember rightly, it totalled about three months of elapsed work time. And it took me over a week to compile the estimate.

I also remember that I hit the predicted end date to within two days.

But the interesting bit was each individual task was inaccurate - lots were wildly underestimated, balanced by those which were wildly overestimated. Stuff like things I thought would take a day taking 15 minutes, other things I thought would take an hour taking a week.

Back in the first dotcom boom, we started tracking the rough swags that a dev would throw out on gut instinct vs. the detailed estimates we put together for actual customer projects. We found the swags were more accurate, exactly for the reasons you stated: we would over- and under-estimate details, but they balanced out and the scope of the overall project was correct. We then got into talks with other dev shops, with more experience than our young group... and they had the same result. Young or old, experienced or not, a rough swipe at the scale of a project is usually accurate enough to know what it will take.

I've seen the same thing play out in Agile processes - we are asked for story points, and while many are accurate, some are way bigger than we thought, some smaller. But the overall time to complete a project tends to fall right about where the devs thought it would before the pointing exercises.

All the estimation in the world will still put the same answers into the hands of your project managers - "Pick a date, or pick your scope. Not both."

Interesting - and no one tried to beat you down on the end date when a task went quickly, or berated you when one overran?

I truly believe that developing software is a creative process that you are always doing for the first time (unless you did it wrong the first time). Why does everyone expect us to know how long it's going to take?

The basic rule of thumb I use is to take my best guess and multiply by PI. (answer: because 3 isn't usually enough).

I basically believe that building a house is a creative process that you are always doing for the first time.

As a general contractor, you can't expect me to provide a timeline of when it will be done. This house has never been built this way before.

That way of speaking would fly in no other engineering discipline. Software, while it possesses some interesting differences from other fields (e.g. mechanical, electrical), has some massive growth to do in terms of actually developing engineering skills.

The biggest problems I see with sw engineers who complain about estimating work are (1) lack of experience (2) lack of rigor and (3) lack of effective teamwork.

A mature sw team with good discipline around getting user input and estimating work is totally possible. I've seen it.

Those who think software is un-estimable compared to other engineering fields are just giving themselves an excuse to avoid accountability.

"This house has never been built this way before." That implies someone's already designed how it's going to be built?

What I often see in software that you're expected to give estimates on the spot before there's a design. You're expected to be the designer and implementor, and you're expected to say how long it'll take before you've started designing.

"I need features X, Y and Z" isn't a design.

And if you think other engineering disciplines are different, please go ask an EE how long it'll take -- or what it'll cost -- to make a board with a 2 GHz SoC and 8GB of LPDDR4 and PCIe. (That's not a design, and if you don't get laughed at, you'll learn that the answer depends massively on the design)

And just to add to all the other responses pointing out why this is a bad analogy, estimates on building a house assume fixed dependency costs. You need wood? You know where to get it, how quickly it will arrive, and what it will cost you. Software, you need some data? You have NO IDEA what the cost of getting it will be; who owns that system, how quick they are to respond, is there an API already in place, is the data in a format you can work with, etc. What was a "yeah, we've got an API you can use" "great, a day" turns out "Oh, that API isn't actually able to provide you data at the scale you want, and it also only allows querying by ID; if you want ALL the data we'll need to build a new API, and you'll need some sort of caching layer, and we need to discuss the ways we invalidate that cache to make sure it's sufficiently consistent for your needs and etc etc"

It's like what happened to the home building industry with the pandemic. Suddenly the cost of materials has shot up, the availability has dropped, and everything went up in the air. That's simply the default state in software, because every bit of software -is- unique. It would be like if every housing project was using unique materials. Literally a new material, that has to be fabricated, whose properties are unknown. Sure, it might be an alloy of something that previously existed, but it still brings in a bunch of unknowns.

The design of a house doesn't change every 3 weeks while it is getting built.

House construction is usually only started after a massive amount of detailed design and planning is done. This is priced in. This is almost never true for software.

Even with the above houses often take longer to build than expected and unforeseen issues can arise. Even the weather can screw things up.

The amount of unexpected issues that can come up in software is huge. Everything from a bug on a library that you use to differences in the exact machine that the software will run on vs where it was developed.

You say this like engineering is well known for accurate time estimates (especially for design). I haven't seen it. Even very big projects which are well understood overrun all the time in civil engineering, electronics engineering, aerospace engineering, the works.

I would say, firstly, the construction industry has a reputation for going over-budget and over-time. The difference is they are much less likely to have stuff that's not fit for purpose at the end of it (although there are exceptions).

The reasons for that?

There's thousands of years of experience in humankind of various types of construction and in-depth knowledge of the materials. Whereas in software we've only got 60-odd years of experience and the materials we use are generally untested.

On top of that, when they use new materials in construction, there is a legal and engineering requirement to test those materials before they are used generally. In software we just go "ooh, look new Javascript framework" and dive on straight away.

To add to the house analogy...

Someone is given a house building estimate of 6-8 months.

Then they proceed to have business cards printed up, and start ordering materials to be shipped to their house for their new home business, and they slated everything for month 5. And they've already invited their family to come stay with them on month 6 day 2 to stay in the guest bedroom in the basement. ("Yes, I told you I needed a basement last week! Someone on your team was in the room and they didn't say no!").

The house building analogy is pretty far off in many many respects.

There are codes and inspections you have to comply with. You want to change X? That will mean new inspections and new codes to follow. There are essentially no codes to follow in software (I wish there were).

Regarding houses, an estimate is an estimate. Unfortunately for the builder, though, it is usually treated as a quote. IME the work is never completed on-time, but the estimate (rather than the time) is still the basis for the invoice.

It is different with software estimates; I've never given an estimate to a customer, it's always been to a PM or a salesman, who then imagines a quote to give to the customer.

In the early days, devs were "given away" by the salesman as part of the hardware deal. As a consequence, dev work was a cost-centre. I'm glad that stopped happening!

As far as I'm aware, the construction industry uses different formulas for calculating how long typical classes of work will take to accomplish. They can plug the square footage of some task into a spreadsheet and have a reasonable estimate for building time.

From what I understand, a construction "blue book" would give work rates for common tasks. How long it takes one man to dig a trench 10 yards long, or lay a 1000 bricks. From this you could calculate how long the entire job would take.

When I first encountered "Project Managers" in the 80's they were still trying to fit this model to software development.

i've heard business leaders express the suspicion that sw eng teams are trying to avoid accountability by claiming estimating is too hard.

and every time i've seen people try to correct for this it results in different layers of management adding arbitrary fudge factors to the time estimate which inflates the actual time that it takes to do something. these corrections build inefficiency into the beginning of the project and bake them in.

it's an answer to "how do we not be wrong about when things will be done?" over "how do we move faster?"

> i've heard business leaders express the suspicion that sw eng teams are trying to avoid accountability by claiming estimating is too hard.

If we say that we don't know how long it's going to take or how much it's going to cost, can you blame them?

Lots of devs like me have a comms issue. We would love to deliver a stunningly brilliant complete package on the first attempt, and we come across as secretive. We need to be made to think about delivering an MVP and building a product up incrementally. That way, those nervous bosses and investors can see that something is happening. I think the question is often not "how long is it going to take?", but rather "will you ever deliver something I can sell?".

> I truely believe that developing software is a creative process that you are always doing for the first time

This is absolutely true. And that's because when we find ourselves doing the same thing repeatedly, we abstract it into a library, a framework, or a service so that we don't have to do it again.

The easiest thing to predict is something that has been done a zillion times. So if you're building 100 houses in a subdivision, you can get really good at estimating and hitting those estimates. But the more novel something is, the harder it is to predict. And software by its nature is novel. If it isn't, we're doing it wrong.

I think it’s because of perpetual instability of the tools we use. There are always new libraries and apis to learn. I think a lot of it is self-inflicted. If we built stuff the way we built it 10 years ago, we would still get the same product finished but having done the same thing in the same way, estimates would be more reliable.

But boy did we succeed at allocating a heap of VC cash into devs pockets.

For me and dev work it's generally about misunderstanding the requirements.

They ask for X, I think X involves A, B and C, when actually it involves A, D, E, F, G and H.

Yep. And that’s on you and the product owner to get aligned on. Fix that alignment and you fix the problem.

Time needed to fix that alignment is impossible to estimate, and also impossible to prove you have done sufficiently.

Not to mention that often they ask for X while actually meaning Y.

A relative error factor of 40 sounds quite high. I also often overestimate task complexity but not to that degree, except maybe in research-like projects.

I think often these over-estimates are caused by tasks where you just know the requirement but not yet how to realize it. For example, estimating the requirement "setting up TLS certificates" might yield an estimate of one week if you have never done that before. After researching how to do it you might end up with one hour of work, e.g. because you learned that you can use ACME to auto-generate certificates. Does that sound right?

This was dev work - and generally when something goes over by that amount it's because my understanding of the requirements and the client's understanding of the requirements were very very different.

Years ago, I was given training in an estimating technique called Function Point Analysis (yes, yet another estimating technique).

You broke the task down into (I think) five elementary components: inputs, outputs, processes, inter-process exchanges, something. You rated each component out of 5 for difficulty. Each type of component has a weight, so you can get a sum of the weighted scores of the components, apply some kind of fiddle-function to the sum, and that's your function-point count.

You can then use intrusive management to measure the function-point output of each developer per day, so that after some months, you can make accurate estimates. Because the task estimate is denominated in function-points, not days, you can use the FPs-per-day rates of individuals or teams to estimate the elapsed time for a task.
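The arithmetic behind this is easy to sketch. Below is a minimal Python version; the component names, weights, and counts are illustrative stand-ins, not the official IFPUG tables:

```python
# Illustrative function-point tally: weight each component type,
# sum the weighted counts, then convert to days using a measured rate.
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

def function_points(counts):
    """counts maps a component type to how many of that type the task has."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

def estimated_days(counts, fp_per_day):
    """Convert an FP total to elapsed days using a team's historical FP/day rate."""
    return function_points(counts) / fp_per_day

task = {"inputs": 3, "outputs": 2, "inquiries": 1, "files": 1, "interfaces": 0}
fp = function_points(task)  # 3*4 + 2*5 + 1*4 + 1*10 + 0*7 = 36
print(fp, estimated_days(task, fp_per_day=6.0))  # 36 points, 6.0 days
```

The appeal is exactly what the parent describes: the estimate is denominated in points, and the points-to-days rate comes from measurement rather than guesswork.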

I tried to use it a few times. What I learned is that it's not possible to make accurate estimates.

Actually, that was one of the observations of the FPA training: that you are supposed to keep on doing it all through a project. Well, you really needed a whole corporation to commit to supporting something like that. I'm not surprised it was no use to me.

This all depends on being able to make that detailed task breakdown at the _beginning_ of the project. In my experience software development is an endeavor where that level of understanding is seldom present at the outset.

Typically, at least in the fields I've worked, at the beginning you may have answered the question "is this even possible"? (but not always), but your task breakdown consists of a small set of "figure out how to approach X" tasks, and a scattering of "well-understood thing we know we need to do sometime". In this context, it's impossible to estimate the project. The best you can do is to plan roughly what work to do in the next few days/weeks, then re-group to see what comes next. I believe Agile incorporates some of these insights.

Did you use three point estimation: https://en.wikipedia.org/wiki/Three-point_estimation

I didn't know about that - but I suspect (it was a while back) that I did two points - best and worst case.
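For reference, the PERT flavor of three-point estimation linked above adds a "most likely" value between the two and combines all three into a weighted mean plus a spread. A minimal sketch:

```python
def pert_estimate(optimistic, likely, pessimistic):
    """Classic PERT: beta-distribution weighted mean and standard deviation."""
    mean = (optimistic + 4 * likely + pessimistic) / 6
    stdev = (pessimistic - optimistic) / 6
    return mean, stdev

# e.g. a task guessed at 2 days best case, 4 days likely, 12 days worst case
mean, stdev = pert_estimate(2, 4, 12)
print(f"{mean:.1f} ± {stdev:.1f} days")  # 5.0 ± 1.7 days
```

The heavy weight on the middle value is what distinguishes it from simply averaging best and worst case.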

Years ago, I also needed to make an estimation, so I simply described which functionalities will be available in each week. I also added a 2 weeks margin to polish everything at the end of the planned deadline, and it worked great. The client was happy because he could see the progress every week, and I was happy because I was on time with everything. During the polishing phase, we were meeting every day to make sure everything is adjusted or fixed properly.

> But the interesting bit was each individual task was inaccurate - lots were wildly underestimated, balanced by those which were wildly overestimated.

This is called random luck. Something tells me that if we repeated the same experiment with different projects, that distribution of errors wouldn’t cancel out most of the time.

Not all the time, but most of the time, yes it cancels out. If you work in a business like consulting, you'll quickly learn that you make more money by being able to give estimates in this manner. And when it's part of your job, it becomes a skill that you need to train and become better at, and your estimates get more consistent and more accurate.

Yeah, I’ve done it myself.

I’ve given price estimates for complex PCBAs, before designing them, that came in within 0.5%, but I don’t attribute that precision to talent; I probably get the estimates into the +/-10% range and got lucky in that case.

Ah, sorry, I misread your original statement then. Were you trying to say that getting an estimate right within 2 days was lucky, but something like 1 week would've been totally normal? It's hard to tell the percentage, since OP just said "large project".

>There is back-and-forth as the estimates are questioned for being too high, almost never for being too low.

I believe this has only happened to me once in my career. My employer had recently switched from flat rate project bids to time and materials. A data transformation project that looked like it would take five or six weeks turned out to have repetitive tasks that could be automated easily. My estimate of three days, which was already padded, wasn't appreciated by the PM, who wanted a month of revenue. After being told several times to account for contingencies and not budging much on my estimate, the project was taken from me and given to another engineer who gave the PM the estimate she wanted. While this was going on, I had already written the necessary code. My coworker had a month of mostly relaxed days as I turned the code over to him and he got busy finding ways to look like he was doing work for a month. Maybe I should have done that instead.

To be clear, if I was your PM I wouldn't have done it, but... You should've done that instead because likely not even your customer cared. They okayed a month and would've been unhappy if it was late, but otherwise they usually don't care. Being predictable is more important than being cheap.

This is important to understand. When you agree to a contract, do not renegotiate the offer by lowballing yourself because you found a way to do it faster. This is unlikely to be appreciated by management.

I can see how this makes people cynical, it does to me to some extent.

But now I view it as: your goal isn't to perform the best on your terms. It's to perform the best on the terms of whatever your management says you should do. Hopefully, that's in alignment. If it isn't, then keep on searching for a job while working there, or simply make peace with it.

Personally I view it as a successful project for the client.

The client wanted something done but they only had some time/budget to work on it. The developer looked into it and was able to get the project done. It's great.

Let's not forget that the job done is not done for myself, but for the company which employed me to do the job. So if it takes less time, I'd report it and if the others refuse to budge, I will use that extra time to do some courses or such - also very useful to myself AND the employer. Win win and no need to get cynical.

That’s one way of looking at it.

The other way is: imagine how much your client is going to trust you forevermore if you tell them you did it in half the original estimate. The next time you slip they will happily pay you because they know you are honest. They might like you so much that they will do referral calls for other clients. Etc.

Reputation matters.

Yeah, let your competitor do it for you and drive you out of business.

You did the right thing. If it were a client, I would have said hey, I wrote this amazing way to automate your job and I'm going to save you half the time I estimated.

Half being contingent on how much originality went into the automation process. If you just did away with 300 hours of work for yourself in one hour, charge 150 hours.

[edit] How many hours they were expecting is also key here.

[edit2] but you morally did the right thing, and your colleague who took the summer off is a jerk. Hopefully you spent the time doing something more useful than repetitive work, and that is a reward in itself.

Looking at it from the PM's perspective they may well have been burnt in the past by developers giving optimistic estimates. I've done a few data transformation projects and they rarely go as planned.

Morally certainly the right thing, unfortunately morality isn't what keeps business and career afloat.

I ended up quitting that job over a dispute on double billing clients. If they had changed my timesheet entries after I made them, I would never have known but they insisted that I enter incorrect inflated hours. They justified it as recovering PM time the client wasn't willing to pay for which only made me less willing to falsify my time. Scummy company but this was during the dot com bust and they did survive when many of our competitor's died.

If you're consulting, you absolutely need to have estimates -- because that constitutes the bulk of the Statement of Work (SOW), that is, the contract.

And if that estimate isn't within 10-15% (the usual error bars) of the actual project duration/billable hours, there's going to be a problem.

What's more, while the developer(s) involved should have some input into that SOW, they shouldn't be writing that document. Rather, they should be writing code for other projects already estimated and sold.

As for in-house projects, that's a whole different story and is really dependent upon the processes of that particular organization.

All that said, developers should be spending their time developing, and their managers/managing consultants/salespeople should be doing all the other non-development tasks.

Edit: Fixed grammatical errors.

> And if that estimate isn't within 10-15% (the usual error bars) of the actual project duration/billable hours, there's going to be a problem.

Alternatively, if you’re doing project-based billing instead of hourly billing then getting your estimates wrong can result in a lower effective pay rate for the job.

In other words, bad estimates can cause the consultant to lose money.

Doing freelancing work is one of the quickest ways to force yourself to learn how to do good (and fast) estimation of projects. Estimation skeptics turn into believers very quickly when it’s their own money at stake.

>Doing freelancing work is one of the quickest ways to force yourself to learn how to do good (and fast) estimation of projects. Estimation skeptics turn into believers very quickly when it’s their own money at stake.

Freelance work has never once worked for me like this. Coz it's my own money at stake I bill by the day.

Even if I could provide 100% accurate estimates (impossible) the following assumptions never hold true:

* The client has a clear picture of what they want up front.

* The client won't change their mind about what they want along the way.

* Circumstances won't force the client to change their mind about what they need from you.

* Hidden traps won't suddenly spring up (regulatory, technological, etc.).

I quickly learned in my early 20s that offering a client a fixed price for almost any kind of project was sheer insanity as A) you inevitably end up taking on risk that belongs to them and B) they'll be forced to lock in their requirements from day 1.

I always tried to pressure clients to get to MVP and then iterate on as quick a cadence as possible. Unfortunately it's rare that clients fully embrace this, and there's always a tendency to fatten up MVPs and create "plans" 9 months out for a project that hinge upon a nest of assumptions, many of which will almost certainly be invalidated.

Unfortunately the most common outcome is some sort of uneasy truce where I give wildly inaccurate estimates which I say are probably wildly inaccurate.

>> Coz it's my own money at stake I bill by the day.

That's orthogonal to what the parent was talking about.

You can't just say "I bill by the day" until the MVP is complete. Whoever is contracting you out will want a ceiling on how many days it's going to take. They don't have unlimited money or time. And once you give them that initial number of days for the MVP, congrats, you just made an estimate.

The client should put their own cap on how much money they're willing to spend before cutting funding.

If the MVP is truly an MVP and the client isn't chronically short of cash it should be at least an order of magnitude below what their initial budget is.

I tend to find that when a strong emphasis is put on estimates it's because:

* The client simply can't conceive of reality in a non-waterfall way. This is extremely common, but in that case they're putting themselves at a competitive disadvantage in the software business to those who can. If you've ever wondered why big business can enter the tech market and spend 100x as much as a scrappy startup and still get completely thrashed in the marketplace, well, this is a large part of why.

* There has been some breach of trust.

Unfortunately, I find a breach of trust caused by missed estimates tends to spiral into an even greater emphasis on estimates in a kind of negative feedback loop ending with big balls of mud, stressed developers, development velocity that grinds to a halt, buggy releases, etc.

On the other hand, you can cause a positive feedback of trust and decreased reliance on estimates by delivering reliably.

>> And if that estimate isn't within 10-15% (the usual error bars) of the actual project duration/billable hours, there's going to be a problem.

>Alternatively, if you’re doing project-based billing instead of hourly billing then getting your estimates wrong can result in a lower effective pay rate for the job.

Exactly. That's certainly a problem, just not one that the client is going to complain about.

>Estimation skeptics turn into believers very quickly when it’s their own money at stake.

Except that's not the only reason why. As a consultant you're more likely to be allowed to pick your tools to fit you, reducing the number of unpredictable gotchas, plus you usually have a lot less red tape than an FTE.

Not really. It goes both ways. Working as a consultant in many companies will add additional layers of red tape because consultants aren’t granted the same level of trust, access, and internal communication as full time employees.

It’s one of the biggest variables that have to be considered when scoping out consulting projects.

While I hate doing estimates, mostly because customers can’t tell estimates from promises, you’re right that they are needed for consulting. It’s unreasonable to expect a customer to sign off on some potentially endless number of hours. At the same time, customers also need to understand that tasks which have not been clearly defined, or cannot be clearly defined, can mean that the entire project might result in wasted effort that they will be required to pay for. Estimates exist as an emergency brake for runaway projects; they should not and cannot be used as deadlines.

>Estimates exist as an emergency brake for runaway projects; they should not and cannot be used as deadlines.

You are absolutely right. The caveat there is that a SOW (which includes an estimate) is the meat of the contract between the client and the consultant.

Assuming that the SOW clearly defines the scope and functionality, if the consultants can't meet the terms of the contract, they screwed up.

Alternatively, if the client changes the requirements or doesn't provide clear guidance as to what exactly it is they want/need, then it's the client's fault. That said, it's the consultants' responsibility to make sure everything is clearly defined (they are, or are at least supposed to be, the experts).

There are certainly circumstances where, even with clearly defined tasks/goals, the project can't be completed within the strictures of the contract. At which point the contract needs to be renegotiated.

Which is generally bad for everyone.

If you're consulting I would suggest not billing by time but by value provided. I'm not a developer so I don't know how impossible that is to convince clients but this is something I do as a marketing consultant and it works extremely well.

> If you're consulting I would suggest not billing by time but by value provided.

If you're doing that then accurate estimates become even more important. If you are billing by time and things take longer than you expect then after a while your customer is going to get angry and that's going to have consequences. But if you are billing by value then if things your employees are working on take longer, you're eventually going to make a loss on the project as a company. So you need to price the value right, and that involves knowing how long things will take in advance.

>If you're consulting I would suggest not billing by time but by value provided.

That's certainly possible, and many contracts are done on that basis. Others are done on a time and materials basis.

Generally (as I'm sure you're aware), that's negotiated by the parties involved. I'd posit that while it may be a good idea to perform some services at a flat rate, it's not always the right call and is highly dependent on the work being performed.

Edit: Fixed awkward usage.

You hit the nail on the head. Developers are underpaid.

> you absolutely need to have estimates

'How many fingers, Winston?'

'Four! Stop it, stop it! How can you go on? Four! Four!'

'How many fingers, Winston?'

'Five! Five! Five!'

'No, Winston, that is no use. You are lying. You still think there are four. How many fingers, please?'

See, I knew this comment would show up here, as it does on any discussion of software estimates. It goes like this:

"Accurate software estimates are impossible, here's empirical proof and reams of evidence."

"But we need estimates, therefore they are possible. I win."

I wrote:

   If you're consulting, you absolutely need to have
   estimates -- because that constitutes the bulk of
   the Statement of Work (SOW), that is, the contract.

Please don't quote me out of context. That fairly screams bad faith.

Context matters. That you chose to ignore it in this case says more about you than the topic at hand, IMHO.

If you disagree with what I actually said, please feel free to make a relevant argument.

In fact, please do explain exactly how one might draw up a legal contract for services that contains no goals, milestones or time/cost estimates. At least one that any client with half a brain would sign off on.

This is a very exciting prospect, and I'll be awaiting your response with bated breath, as you could revolutionize the consulting business with that.

>"But we need estimates, therefore they are possible. I win."

You seem to be under the misapprehension that I'm somehow trying to one-up some unknown other person or persons. Nothing could be further from the truth.

I merely shared my (decade plus) experience providing professional services. If you are uninterested, that's fine. If you disagree, that's fine.

However, your comment didn't add anything to the discussion, nor did it provide useful information of any kind. Please try again.

Edit : Fixed formatting.

The article was doing pretty well until they brought up historical data. The exact same software project done twice can take wildly different times because the team and/or the requirements are different.

That's the whole point of Agile, you need regular interaction with a customer to slowly build the software to a state they are happy with. And they should keep paying for the work until it is done or accept whatever state was delivered by the time the budget ran out.

If that is not how you are doing Agile you have missed the point. It is not possible to predict software development because unlike a bridge the requirements are never fixed and there are too many unknowns.

If you get your requirements fixed at the start, they never change, and you are not going to suddenly have to deal with some library/framework change in the middle of everything, then you can estimate, but you are not doing the software development 99% of the world is.

Just to clarify what I mean by "same project, different requirements": you deliver a dashboard to customer A, who is quite helpful and clearly communicated what he wanted. It took 6 months.

Customer B asks for a dashboard like that of customer A. However, unlike customer A he keeps nitpicking the design to the pixel, and forgets to mention the company is in the process of migrating database providers.

Meanwhile the senior dev who put out all the fires that allowed the first dashboard to be done on time quit due to burnout. The team has no idea why the Jenkins machine keeps crashing.

Good luck with your estimate.

I like doing detailed estimates, because it helps me decompose the problem. This gives me a chance to get a good understanding of the problem, and get up-front clarification from the customer on even the smallest details. The side benefit is that it also gives me an idea of how long the project will take, with regards to the knowns. Then I triple that as my estimate to cover the unknowns, with a bit of wiggle room to spare. Then I stick to my guns with management (a luxury, I'm aware), and the customer's expectations generally get set accordingly. So, while it may not be an exact estimate, I have an accurate understanding of the requirements up front, and the customer usually ends up happy on the back end.

I think this is key. I often struggle to communicate that estimating certain tasks essentially means starting them, and that giving a proper estimate might require a day of work (or more), e.g. when getting familiar with a new API or working towards some performance criteria. However, this is rarely taken to heart, so most of our estimates are more "if you actually want an estimate, it will be essentially random, from the top of my head", which satisfies the product owner for now and defers the dissatisfaction that comes when said estimate turns out to be inaccurate.

On my current project we don't estimate. It might not be suitable for most projects, but we just pick up tickets off the kanban board; instead of wasting days every month arguing whether a ticket is a '5' or an '8', we just do the work that needs doing the most.

This works well with senior developers who share a background with the people driving the business decisions; otherwise the developers' understanding can be completely different.

I think you need to discuss everything that is going to be done as part of a ticket anyways. Asking at the end "Is everyone ready for estimation? 3, 2, 1 go" takes an extra minute.

We do this sometimes and it is interesting to see how big the range can be. The motivated new developer gives a 3 for reasons, while the senior who's been burned by the big ball of mud too many times gives a 13, and the PO somehow thinks this means it's an 8. For me it is a wasted minute (times tickets), but you gotta pick your battles.

Last time I was in a team that did this, it was pretty useful.

A wide range would spark a brief discussion (the senior would bring up the points that the junior is missing - yay knowledge sharing) and high estimates would usually result in breaking the task down.

The key is to keep it short, if need be timebox.

That, and time estimates are an output. Estimate by story points or T-shirt size, track how long tasks take to complete, and you can trivially derive a mapping from one to the other. Revise during implementation, too, if need be. Something that was estimated as a 3 and turns out to be a 1 gets re-pointed, and vice versa.

This helps in all kinds of ways. Engineers don't have to think about time when estimating, which tends to make estimates freer and thus more accurate. PMs/POs don't have to deal with too much uncertainty, because the distribution of time to completion over a given estimate is right there. It's easier to know how much can fit into a given sprint, and as soon as all the tickets in a given epic are pointed, you can know about how long the epic will take to get done.

Engineer time remains nonfungible, of course, but if the team is well balanced that seems to average out, and I don't (yet) know a better method of squaring a team's need for looseness with an organization's need for legibility.
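The points-to-time mapping described above really is trivial to derive from ticket history. A Python sketch, with the data shape made up for illustration:

```python
from collections import defaultdict
from statistics import mean

def points_to_days(history):
    """history: list of (story_points, days_to_complete) pairs from
    finished tickets. Returns the average days observed per point value."""
    by_points = defaultdict(list)
    for points, days in history:
        by_points[points].append(days)
    return {p: mean(ds) for p, ds in by_points.items()}

history = [(1, 0.5), (1, 1.0), (3, 2.0), (3, 4.0), (5, 6.0)]
print(points_to_days(history))  # {1: 0.75, 3: 3.0, 5: 6.0}
```

In practice you would also track the spread per point value, not just the mean, since the distribution of completion times is what gives PMs the uncertainty picture.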

> Estimate by story points [and] track how long tasks take to complete, and you can trivially derive a mapping from one to the other.

I like this approach because time is such a weird thing to try and figure out beforehand. You either overpromise and look bad, or over-deliver but see the thing you're delivering get delayed because another team isn't ready. That other team will likely not be developers either; it could be a timeline imposed by the product team.

Time still has importance tho because something can be easy but still take a decently long time. You could rate something 1 story point but it could still take 3 hours all-in from dev time to do because it involves making a very straight forward change to 8 services which entails maybe creating an epic Jira ticket to explain the situation and then 8 individual tickets (1 for each service repo) + 8 PRs + 8 code reviews + updating 8 release docs.

For most purposes we don't try to be more granular than days, rounding up. When hours do matter, it's usually in a context where estimation isn't as important because whatever it is needs to be done ASAP.

True. Our default is to go with the higher estimate, especially when it comes from a senior. A 13 should be broken down tho.

I know that this is a commonly held belief, but: why does it need to be broken down? Can a 13 not just be a 13?

In our system it implies a major piece of work that can almost always be split down into multiple tickets that can usually be worked on in parallel.

Second: splitting it into smaller parts forces you to examine all the parts and inner workings. Sometimes you will realise an entirely different solution is needed, sometimes that there is an API call that will incur costs… and some of these things definitely have to be communicated to stakeholders, ideally before you spend significant time in development.

It works well with junior developers too.

The problem isn't really seniority or juniority it's that doing this usually requires buy in all the way up to the CEO.

If it doesn't, then the CEO pressures the CTO for estimates, the CTO badgers the middle manager for estimates, the middle manager badgers the project manager on your team for estimates, and he comes to you nervously asking if you could please give him something resembling a date while wondering how much to pad it.

Each one has their asses on the line for the date.

You can be as "agile" as you like on a team but this wave of estimate badgering reliably turns on the waterfall every time.

You don't need to discuss everything.

The oh-so-common 2-hour-plus full-team scoping session, where half the team doesn't care what the other half is talking about since it's irrelevant to them - it tires everyone out and produces very little value beyond the impression of "we're now aligned".

Things only need to be discussed if and when needed, and only by as many people as required; everything else can be followed up on later, it's really okay. Personally, I've always found things like the three amigos format to be much more time-effective.

The 5s and 8s are like the Holy Grail. The Grail isn't the point - the quest is.

We don't do detailed estimates, but we do some estimation. Often the discussion around the estimation leads to a change in the task, which drastically impacts the estimate.

That is, by thinking about how to solve it, in order to estimate time taken, one can come up with alternatives which might achieve the same or a similar goal with less work involved.

Maybe the customer doesn't need such a bespoke solution after all? Maybe we can ask the customer if a slightly different solution is suitable?

For example, recently a customer asked if we could make a new web API that they could query for some data. While estimating, the point was brought up that just transmitting an XML/JSON file with the data once a day might be just as good and less error-prone for the customer. After all, the underlying data doesn't change more than once a day anyway.

So we asked the customer, and once they thought about it they agreed it was a better solution all around. So they got what they wanted for a much lower cost, and we didn't have to commit a lot of resources to making a new API.

An estimate is a guess but is never treated that way.

Budgets are built on it, deliveries agreed. It doesn't matter how much you decompose the task - unless you've coded that exact same thing many times before, in the same way, under the same conditions, it's just a guess. I've been estimating for industry for about 20 years (although not the last 6, because we're agile in the truest sense) and most of the estimates have been wrong - sometimes over, sometimes under. Tech estimates are a lie that loads of people buy into.

> An estimate is a guess

It took me a long time to internalize that. See, there's an implied threat - "estimate correctly or we'll replace you with somebody who will." And believe me, given the mentality of your average project manager, it would be "we'll have you executed first" if executing people who didn't do what you told them to do weren't against the law. What I finally realized was that the implicit threat was a hollow one - there's nobody out there who can do it either, and they know that, as much as they ball up their fists and gnash their teeth when you deliver "late".

You're right that there is an implied threat. I've seen work go offshore because there was a dev team to the east that said yes. They said yes to everything. They said yes even though they didn't understand the problem. Yes yes yes. A year later the project was dropped because all that yes was just another lie. Lots of managerial egg on faces and the customers went elsewhere. That kind of management just wants a lie so that they can apportion blame. They're not actually interested in getting a good bit of kit out the door.

My God I remember the days of coming up with estimates for like 40 tasks, each a few days, say, then "negotiating" with sales/PM where we could shave a day off here, a half day there.

And lo and behold, we price for a 30 day project that ends up taking 50...

So ridiculous.

And never once did they learn a thing from the "told you so". Sales is ridiculously decoupled from all the chaos they cause in most companies. Must be an easy job, selling something a company cannot offer and then not having to bear responsibility when you can't deliver.

We did actually make some progress on this in one place - if sales wanted a 50-day project priced at 30 days, they had to get the executive team to approve a discount rather than sell an artificially low estimate.

Happily those days, for me, are long ago :-)

> For many of us a dearly loved date has already been selected, which makes our estimation efforts really interesting as we endeavor to shove more and more "stuff" into that bag.

The problem is not asking for estimates; companies need to budget, plan hiring, and inform clients. The problem is that many companies do NOT ask for estimates but instead create delivery dates out of thin air.

So, I agree that estimates are a waste of time if they are not used. But they should be used; they are part of any reasonably managed engineering project.

Estimates are only useful if accurate, but creating accurate estimates is hard - my standard line is “if you want a better estimate I need to spend about a quarter of the work time on estimating - so if I think it will take a day, I should spend a couple of hours on it; if I think it will take ten days, then I need two and a half days to estimate it - do you want me to do that?”.

Every.single.time - the answer comes back as “no, don’t worry, just a best guess” aka made up useless estimates which are invariably wrong.

I don’t disagree but that probably means we as an industry need to figure out how to get better at this. It’s not unreasonable for stakeholders to want to have a rough idea of how long a project is going to last.

Maybe this is something software engineering professors can research.

I think the methods are well known, they just take time. Rough estimates do have their value (is this a 1 day feature, 1 month feature, 1 year feature?), problem is when someone starts adding rough estimates to project a date.

To get a real date, you run a feasibility study - you do PoCs, you deep dive on features etc. Then, you freeze the requirements and implement based on the study.

This will probably take around a quarter of the total time of the project.

Alternatively, you stick to rough estimates, and you will get roughly the features you asked for in roughly the timeframe estimated.

You could do both. Begin with a rough estimate, preferably a range (order of magnitude). If this gets greenlit, do all the things you mentioned but also just start on the project. Iterate on the estimate like you iterate on the deliverables, the estimation will get increasingly accurate as you develop.

There's a bit more upfront work here that goes into the estimation, but it won't be like 25% of it is pure waste. Maybe 5 to 10%, depends on what the quality of the estimate must be.

Somehow this never works in practice though, I suspect the initial guesstimate will prime the expectations of everyone involved and will be seen as a target. Clients will be disappointed if the project takes longer and frame it as such (it's late, takes longer, over budget, etc.). I suspect that in order for this to work, the initial range must be large and the adjusted estimates in the project as well as the delivery date must fall within this range.

I think this is a basic function of trust: will these people do as they say or not?

The point is, it is hard to get to a rough idea, when you don't do the work needed to get to that rough idea. Even worse, many times the stakeholders don't really know what they want, and you are faced with unstable requirements; this means more uncertainty, worse estimations, etc.

Whenever my boss asks for estimates (which thankfully is very seldom), I always give ranges, and I always make sure to highlight the ones I feel confident in and especially the ones I feel less sure about.

Last time I broke it down into about four pieces, two of which I had a firm grasp on and could give a fairly solid estimate, "about half a day", and one part which I said "this might take a day, it may take a week, I'll know once I've worked on it for a bit".

I'll state what my assumptions are for the estimate so he can red flag things if the customer changes their mind, and I might offer some alternative estimates for different scenarios if I sense the goal is not entirely set in stone.

This gives my boss what he needs, which is a rough idea on the scope of the project and whether it's a slam dunk or "here be dragons".

> It’s not unreasonable for stakeholders to want to have a rough idea of how long a project is going to last.

That is not unreasonable. What is unreasonable is falling into the same trap every single time, and then doing it one more time expecting a different result.

> Maybe this is something software engineering professors can research.

Good idea.

We should also ask them how long that research will take so we can have a rough idea of how long we'll have to wait.

AFAICT academics never have deadlines to produce anything specific. Sometimes they’ll have requirements to publish X papers in Y years, but never to solve a given problem by a certain date.

Which makes sense for open-ended research projects; I don’t have any sympathy for a CEO complaining that no one can tell him how long until fully autonomous cars. It’s less fine for build-me-a-website type projects.

And they should just give us that estimate quickly before they even figure out how they're going to carry out the research.

Some managers will still use that best guess as a stick to beat you with. The reality is most developers are powerless to fix these issues. It is a management problem.

I very recently joined a new large multi-disciplinary engineering team after running my own product-engineering org as a single-threaded leader for a few years in a related field. I'm absolutely shocked by the product-engineering dynamic here; PMs pull a date from thin air as you say, plop it in a document, and then bypass TPMs and engineering managers and take it straight to their favorite engineer to have the work started mid-Sprint.

I can't wait to fix this mess. PMs should never put down any date themselves unless the task has 0 engineering dependencies.

I see it as estimating a project vs estimating individual tasks.

The former is a business need. The latter is not.

After some high level analysis and decomposition, I can say a project will take 3 months. But there’s no need to spell out every minor widget or method with hours.

And this is separate from any t-shirt sizing a team does internally to plan their own time. This should be strictly internal and used as much to generate design discussion as anything else.

I work with utility companies on multi year migration projects to move their entire tech stack. This includes, a different database, different data schema across thousands of tables, new front end components, desktop and mobile applications, multiple third party integrations and the training and documentation for the whole thing.

The utilities are heavily regulated, often beholden to taxpayers, lawmakers and public utility boards and working on use it or lose it budgets with hard cutoff times for delivery and go-live dates.

It's not just budgets and time constraints that make it impossible to do flexible estimates. Their internal personnel all have their regular work duties to attend to while supporting the migration and they can't drop everything for years at a time to dedicate support for the project. Add in the coordination with multiple vendors and you need to be hitting your time estimates, planned years in advance, within a week. Sometimes budgets don't matter that much, once they are a year into a three year project, they will find the money if it's needed, but the coordination alone requires this kind of accuracy.

It's amazing what necessity does to your estimates. We have become really good at it. This includes things like "Oh we're working with Oracle, add 3 weeks to that integration just because" or "this is a mobile app for the service techs who tend to be resistant to change, add another week for training and a month for revisions". Yes this is just "padding" but it's very specific padding that's tailored to the type of work being done and has so far been accurate and continually getting better for us.

edit: I should add that our estimation process involves multiple week long workshops with all parties. We go over every tool, process, integration and technology currently in use and then write a detailed design document that goes through 2 rounds of review with the customer before being signed off on. These design documents become the basis for a secondary contract to do the actual work and the customer understands that if it's not in the document, it's not getting built or migrated. Any additions require a contract change order.

Just do an aproximate estimate, multiply by two, and add 30% ... it won't be enough anyway.
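Taken literally, the rule of thumb above is a one-liner; as an illustrative sketch:

```python
def padded_estimate(raw_days: float) -> float:
    """Apply the half-joking rule of thumb from the comment above:
    double the raw estimate, then add 30% on top of that."""
    return raw_days * 2 * 1.3

# A 10-day gut estimate becomes a 26-day quote.
print(padded_estimate(10))  # 26.0
```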

Decomposing projects before you even start coding is everything.

The bigger the project, the more likely it is that some small thing - something in the original spec, maybe, or more likely, an unforeseen interaction of its pieces - will be missed and will take an inordinate amount of time to deal with. All it takes is one of these to thwart the entire estimate.

Going too deep on any estimate is actually a bit of a pitfall. We see this in my world (large-scale project management), where some folks want to plan activities down to the tiniest level, and it just doesn't pay off.

In software, it's hard to do estimates at all if you're in a blue-sky environment. Once you have a codebase you're working on, it can be possible to give a better estimate of what it will take to implement a new feature, but the more elaborate the feature, the softer the estimate.

We have just shipped a very big change to our product, and we thought it would be a 3-6 month effort to do so. It took 3 years, because it was VERY invasive and VERY complex, and we just didn't understand the underlying challenges enough when we set out to do it.

> an agreed upon estimate

That's not an estimate; that sounds like the outcome of a negotiation. If someone demands an estimate from me, they get one - if it's not the one the manager wanted, he's free to substitute his own.

I'm accustomed to managers upping my estimates by 10%, and I've known them to increase them by 100%. For small jobs, requiring an estimate instantly doubles the estimate, because it takes longer to produce a good estimate than it does to do the work.

If you reduce my estimate, or try to talk me down, the new estimate is your estimate, not mine. And if you think it's fair to try to nail me to my estimate, then you obviously don't know what "estimate" means, and I need a new employer.

> If someone demands an estimate from me, they get one

See, you're missing the point of the estimate game. It's not to figure out how long it's going to take - they already know nobody knows the answer to that (and they've already decided how long it's going to take anyway). The point of the estimate is to bully you into making a completely unrealistic promise and then use that promise to bully you into working nights and weekends to keep it (one of the reasons they love work visas that are tied to a specific employer). It usually works on young naive developers for a while, until they either ulcerate themselves into an early grave or develop a healthy cynicism about estimates.

Actually trying to deliver quality software is, incidentally, never one of the goals, just getting promoted by abusing people below them.

The article jumps from detailed hour-driven estimates, to just duking it out. Why does everything have to be so black or white? Why does it have to jump from hour-level to prophecy?

Here is what works at a project level:

* When estimating, never go any step beyond the Feature level (don't split tasks, most of the time not even stories – just Epics or milestones are enough)

* Do RELATIVE COMPLEXITY estimate. Not time. If <epic1> is a medium, then relative to it, is <epic2> large or small? Stop at that level. Don't split it down any further.

Now compare just one of the epics to past history. That's all you need to estimate the rest of the scope, as it's all relative. It takes not more than a few hours for due diligence.
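A minimal sketch of that calibration step, with assumed size weights and hypothetical epic names (neither is prescribed by the comment above):

```python
# Relative-complexity estimation: size each epic relative to the
# others, then calibrate the whole scope from one epic whose actual
# duration is known from past history.
SIZES = {"small": 1, "medium": 2, "large": 4}  # relative weights (assumption)

def estimate_scope(epics, reference, actual_days):
    """epics maps epic name -> t-shirt size; `reference` is an epic
    comparable to past work that took `actual_days`."""
    days_per_point = actual_days / SIZES[epics[reference]]
    total_points = sum(SIZES[size] for size in epics.values())
    return total_points * days_per_point

epics = {"auth": "medium", "billing": "large", "reporting": "small"}
# If a past medium-sized epic took ~20 days, the whole scope scales to:
print(estimate_scope(epics, "auth", 20))  # 70.0
```

The point is that only one comparison against history is needed; everything else is relative sizing.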

This jargon suggests that the scope and complexity of your "epics", "milestones" and "features" are abnormally unsurprising and consistent. Is it because whoever defines them is very good at detailed estimates, or because the work is repetitive and predictable, or because the deadlines are weak and elastic?

I have heard these terms (stories, epics, milestones) a lot. Is there any standard definition for these terms or do they differ from project to project? Any good literature on how to think about them?

A co-worker of mine had a good idea: require management to seal within an envelope the cutoff value that would determine the outcome of the decision that required the estimate (e.g. if Estimate > X then Do A; if Estimate < X then Do B). When the estimate comes back, open the envelope.

So often, management wants a certain outcome but needs the estimates as cover for making the decision. Just demand a detailed estimate, fool around with the estimate's details (no matter what the estimate ends up being), and finally say "Estimates indicate that we should do this," which is what they wanted to say all along.

I've tried to get into the following routine for my team:

1. A daily standup bot pings us with the tickets assigned to us. We respond with a gut-level "percent complete" number for each ticket.

2. The bot tracks these estimates and over time, each team member can see whether they tend to over or under estimate.

The point is that we don't try to cram accuracy into developers, we just let them guess and let them see over time how good they are at gut-level estimation. The hope is that they'll eventually improve their estimations, but we're not going to tie it to performance or anything.
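The bot's over/under tracking could be sketched roughly like this (the data model here is an assumption, not the actual bot):

```python
# Given past (estimated_days, actual_days) pairs per developer,
# compute a personal bias factor that shows whether that developer
# tends to over- or underestimate at the gut level.
def bias_factor(history):
    """history is a list of (estimated_days, actual_days) pairs.
    Returns the average ratio actual/estimated: > 1 means the
    developer tends to underestimate, < 1 means overestimate."""
    ratios = [actual / estimated for estimated, actual in history]
    return sum(ratios) / len(ratios)

history = [(2, 3), (5, 5), (1, 2)]   # illustrative data
print(round(bias_factor(history), 2))  # 1.5 -> tends to underestimate
```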

After a couple decades working at companies and projects of all sizes, I always cringe when I hear prescriptive ideas about some abstract approach working or not working. In my experience, lots of different things can work, but there are more ways to fail than to succeed. When we're talking about large scale software projects with fluid requirements ultimately success hinges primarily on having competent individuals and trust between them.

Without trust everyone is trying to cover their own ass, and in the case of large projects most of them will easily succeed since you only need one scapegoat. This is the type of environment where detailed estimates are demanded so that management had a paper trail, or where engineers implement the letter of a PRD and never propose changes to inconsistent or awkward requirements because it's too much energy and they'll be gone before they have to deal with the tech debt anyway. Often times in these type of environments someone will propose a process such as scrum to address particular pain points, but layering a process on a dysfunctional team doesn't address the core issue; at best the routine can shield individuals from chaos, but it won't actually improve throughput in any meaningful way.

At the end of the day, the best you can do with large scale estimation is get a handful of your best engineers who can roughly envision what needs to happen and have them chalk out a rough plan at a very coarse granularity and with key assumptions enumerated. Then with your best product people chalk out a strategic roadmap showing where they believe the product should go over the next 5 years so they can take that as input into account for architectural strategy. The key thing is that everyone understands that all long-term plans are subject to unknowns and change for all sorts of reasons—the point is not to pin people down but to leverage individual expertise to develop a best guess at what is possible. This is where trust is at its most tenuous and stands on a razor's edge; all it takes is one bozo to treat these things as guarantees and start throwing people under the bus when things go wrong, and before you know it trust is gone and everyone is in cover-your-ass mode. Now the group has lost the ability to accomplish the most ambitious goal of which they would otherwise be capable.

Estimates for tasks/apps/games/projects that have been done before, or known from previous work are usually pretty close. Like for instance making another version of a puzzle game with changed assets and maybe a few new features.

Estimates for tasks/apps/games/projects that have NOT been done before, or known from previous work are usually wildly incorrect. Like for instance making the first version of a puzzle game with new everything. Even more so for a new game type that you haven't done before.

Software estimation is usually only about 65% accurate (roughly two-thirds). There are so many internal, external, and just plain unknown areas that estimates are usually greatly understated. The error grows when there are third parties involved, or components/frameworks that get you 90% of the way but sometimes make the last 10% more taxing than building custom.

The nature of software design and development is usually creating new value, in that case lots of those projects are unknown or the first time through something, estimation is almost useless in those areas. It is better to do prototypes to help refine and break it up into parts that can better be estimated.

Anyone doing an estimate on a new area that hasn't had a prototype will always be wrong. Estimates more than a month out are also wildly wrong. When you are asked to estimate something big, always just do an estimate for a prototype first before you ever begin to try to estimate the rest.

This appears to be based on a strange assumption that developers' estimates can't take historical work into account, but dev managers' can; and that estimates coming from dev managers and based on historical work will not be questioned for being too high even in orgs where the devs' estimates will.

I'm not sure the prescription given fits the disease described; it feels more like passing the problem on to a different role than actually changing the approach.

To me, detailed estimates can make sense if you're working in a factory/conveyor-belt environment - where the products are very similar, and you (the company) have shipped hundreds to thousands of such products.

At least then you should have some data to look at - but when it comes to boutique products, with new clients, new teams, etc. who knows - you could easily get stuck on something for weeks to months, with no obvious resolution.

In my essay "The worst project manager ever" I talk about some of these same flaws, but I also talk about the best project manager that I ever worked with, Sonia Bramwell, and I recount some of the lessons she taught me about great project management:


About this:

>There is back-and-forth as the estimates are questioned for being too high, almost never for being too low.

Sonia did not allow us (the engineers) to talk to upper management, so she handled the translation herself. In some cases she was worried about macho engineers who competed on how fast they could do something:

"I can do that in a day"

"Oh yeah? Well, I can do that in 4 hours!"

"Ha! You two suck! I can do it in 2 hours!"

Perhaps Sonia's greatest ability was to figure out exactly how much each engineer tended to overestimate or underestimate tasks, and then to weight their answers accordingly. For the upper leadership, she was the only one who continuously offered accurate estimates of how long big new features would take.

> Sonia did not allow us (the engineers) to talk to upper management, so she handled the translation herself. In some cases she was worried about macho engineers who competed on how fast they could do something:

That strikes me as incredibly condescending. Normal estimates and processes already do quite a bit to erode developer agency, and seems like this attitude just doubles down on that dynamic.

It would have sounded that way to me, too, a decade or so ago. I've learned a lot since then about the internal politics of organizations, and these days for most purposes I'm quite happy to leave them in the hands of a skilled specialist when available.

From that very same article:

"There are moments when it is useful to have the engineers (or any kind of staff with specific skills) talk to upper management and talk to outside clients. But those discussions need to go through a specific process, they can not be allowed to happen randomly."

She was right about “macho engineers” in my opinion. They can be extremely disruptive for both a team and an org.

"What a useful thing a pocket-map is!" I remarked.

"That's another thing we've learned from your Nation," said Mein Herr, "map-making. But we've carried it much further than you. What do you consider the largest map that would be really useful?"

"About six inches to the mile."

"Only six inches!" exclaimed Mein Herr. "We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all ! We actually made a map of the country, on the scale of a mile to the mile!"

"Have you used it much?" I enquired.

"It has never been spread out, yet," said Mein Herr: "the farmers objected: they said it would cover the whole country, and shut out the sunlight ! So we now use the country itself, as its own map, and I assure you it does nearly as well."

from Lewis Carroll, Sylvie and Bruno Concluded, Chapter XI, London, 1895

from Wikipedia: https://en.wikipedia.org/wiki/On_Exactitude_in_Science#Influ...

When I was working at a big online travel site, we used to have a requirements document template which started with a "flexibility matrix". It was a two-by-two grid with "scope" and "date" along the y-axis and "most flexible" and "least flexible" along the x-axis. The stakeholders were supposed to indicate to us whether the scope or the date was "flexible". Of course, we got an unending series of requirements where the date was "least flexible" and the scope was "most flexible", never ever ever the other way around.

It got to the point of such ridiculousness that we finally started trying to flex the scope because the dates were so unrealistic.

Management's response? Add another row and column in the matrix of "resources" (that is, it's ok to add people if you need to) and "somewhat flexible". So after that all of our requirements were "date least flexible", "scope somewhat flexible" and "resources most flexible".

I recently left a company that had settled on using Monte Carlo estimation - ignore the ticket sizes and monitor the spread of ticket estimates in order to provide a % estimate to management about when something could be expected.

It was pretty crushing to have to constantly explain that my tickets were bigger than the average to a guy who obviously only cared about getting his KPIs down. I left.
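The Monte Carlo forecasting idea mentioned above can be sketched as resampling historical ticket durations; the data and parameters here are purely illustrative:

```python
import random

def forecast_days(cycle_times, n_tickets, percentile=0.85,
                  trials=10_000, seed=42):
    """Simulate completing n_tickets by repeatedly sampling from
    historical per-ticket durations, then read off the requested
    percentile of the simulated totals."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.choice(cycle_times) for _ in range(n_tickets))
        for _ in range(trials)
    )
    return totals[int(percentile * (trials - 1))]

history = [1, 2, 2, 3, 5, 8, 1, 2]  # past ticket durations in days
print(forecast_days(history, n_tickets=20))  # days needed at 85% confidence
```

The weakness the commenter ran into is visible in the model: it treats every ticket as a draw from the same distribution, so a backlog of unusually large tickets breaks the forecast.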

Among other things, Extreme Programming says that estimates increasingly diverge from reality if they're longer than three weeks. So: Do you need accurate estimates? Then you need detail. The fuzzier the estimates can be, the less detail you need.

Do you need developers to do the detailed estimates? Yes, for two reasons. Politically/socially/culturally, having someone else telling you how long something is going to take you to do is... not received well, given normal human nature. Functionally, the developers have to be the ones to do the detailed estimates, because they're the ones who actually know what the details are.

All that said... overly detailed estimates are a waste of time. Don't break it down into a series of tasks, each of which take one hour or one day.

>> ?+?+?+Contingency™=The Date™.

I've always had a strong feeling that my strong feeling about when something will be done is fairly accurate. Decomposing that into ?+?+?+Contingency for the client tends to be the hard part.

Reading a book where Tom Sheppard (former RAF pilot, solo desert adventurer) breaks down the fuel estimation process as:

mpg * terrain factor * safety factor

it's pretty important to him to get his estimates right, so he is rather serious about this.

Assuming your 'gut feeling' is pretty accurate (I suspect it is), you could probably break it down into:

- these components will take X days,

- getting them to work together is an extra Y,

- contingency factor is Z
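As illustrative arithmetic (all numbers invented), Sheppard's fuel formula maps onto that breakdown directly:

```python
# mpg * terrain factor * safety factor, translated to software:
components_days = 10   # X: sum of the individual components
integration_days = 4   # Y: getting them to work together
contingency = 1.5      # Z: safety factor, like a fuel margin

estimate = (components_days + integration_days) * contingency
print(estimate)  # 21.0
```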

I honestly don't understand how your gut feeling could be accurate unless you had a well developed sense of those factors, or some similar breakdown of what it takes to deliver a project and where the complexities/unknowns are.

It's like any other skill you develop from watching something carefully enough times, long enough. I can tell if an edge in a graphic is off by one, two or three pixels without needing to measure it. Mechanics can look at a screw and know what size wrench, and whether it's metric or imperial. Estimating times and prices does involve juggling a lot of variables and potential for overrun or mission creep... but it's certainly not just true in the code world. The design world is worse because the deliverables are so much harder to nail down. If you play "price is right" enough times you just develop a feel for eyeballing it that tends to be accurate. In the end, the most sophisticated system in the world for estimating a project is a black box AI that's constantly adjusting its parameters and weights, and that's basically what an experienced brain does without breaking it down into steps. You can of course break it into steps, but they're all hypothetical and only you understand how one can be extended and another reduced to meet the same target.

It's like cooking.

Okay... yes, ingredients, preparation, bake time... you have to take each into account. It can be hazy when you're estimating 6 months or 8 months, because you're not sure how it will come together after the first 4. That's true, for something that large.

We humans work by analogy. Experienced people think, probably unconsciously, "this is similar to those tasks that took about this long." No breakdown required.

I think I know my requirements are so fluid and badly defined that the estimate will never be anything other than 100-1000% contingency * ???

I learned very early on in my career that the right way to do estimates is to do multiple estimates and cross-check them. If you have multiple estimates using different methodologies and they all are fairly close then you know you have a good estimate.

Time estimates are still useful even if the range is large. If a high-bound estimate is still acceptable - great, ship it. If a low-bound estimate is barely good enough, that is a huge risk. If you get multiple estimates from multiple developers and they are wildly different, there's a conversation (i.e planning poker)
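One concrete way to cross-check estimates and fold a range into a single number (an assumption, not either commenter's stated method) is the classic PERT three-point formula:

```python
def pert(optimistic, likely, pessimistic):
    """Blend a three-point estimate, weighting the most-likely
    value four times as heavily as the extremes."""
    return (optimistic + 4 * likely + pessimistic) / 6

# Compare developers' independent three-point estimates; a wide
# spread between them is the cue for a planning-poker conversation.
print(pert(2, 5, 14))  # 6.0
```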

It depends on the developers' process for coming up with the estimates. If they're breaking down the tasks, thinking of all the things that can go wrong and factoring in contingencies, sequencing them with an understanding of dependencies, and accounting for work other than delivering code - data migrations, support, monitoring, and so on - then the estimate could be thrown out without anyone ever looking at it, and the developers would still have a much better understanding of the project and a good sketch of a plan of attack, making the time spent estimating worthwhile.

The iron rule is that the person doing the work also supplies the estimates; theirs are the only ones with any chance of being right. This article suggests the developers have not been properly trained in how to give accurate estimates.


And from my experience, one meeting with the PM asking how long it will take... is not going to cut it either. Until I really dig into the requirements and dependencies, I will probably fall outside the estimate.

The biggest bottleneck for me is usually other stakeholders.

I would strongly disagree, and Joel offers no evidence in defense of that statement. I wouldn't expect junior employees to have a strong understanding of task complexity, their own ramp-up period, available resources, unforeseen impediments, the experience necessary to estimate accurately, etc.

Estimates are best done by the team as a whole as it is the only way to rely on everyone's collective judgment ("I don't think we need this requirement," "we are going to run into this problem," "this is similar to task X we completed 2 years ago"). If done in this manner, the process also serves as training opportunities for the more junior employees.

I have read the Joel link, but I actually didn't need to read it to know what it was going to say because it says the same thing every "you just need to learn how to estimate" article ever says: "First, write down everything you're going to do. Second, write down how long each of those things will take. Third, add them all up and voila, you have a 100% accurate estimate! Easy!"

Estimates are mostly wrong. Retrospectives always have hard data. Many projects seem not to have the time for retrospectives, and there are few other places in project-management overhead to find that time. So cut the time spent on estimation, and reduce the focus on it. If you use retrospectives well, you may find you do not need to spend that much time on estimates, and can reduce their use even further.

I think this article misses the point. Estimation should never be a goal. It should merely be a tool that helps with identifying risks early on and with identifying the minimum viable product (MVP). The term MVP was never mentioned in this article, and I don't see how you can have a serious discussion about project estimation and dismiss the benefits of agile without even mentioning it...

It's surprising that developers and business people still haven't found a way to come together on this stuff and explain to each other why our views on estimates don't line up.

At the end of the day a business is always going to need some sort of idea of what's being built and when it might be ready so that they can get a rough schedule together. No matter what games developers and project managers play with story points, or t-shirt sizes, or the myriad of other coping strategies they come up with, someone in the chain is going to try and map those to time scales.

In my experience, a reasonably senior developer can give a rough estimate of how long something is going to take in any case that isn't a complete unknown; if they really can't, then you need an investigation project, and that can be given a set time. But for almost everything else you can roughly say whether it's an hour, or a day, or a week, or a month. As long as everyone accepts that sometimes that will be over, and sometimes under. If it's an hour, that doesn't mean you can do eight of them in a work day, but it means you can prioritise work.

That's the key I think, if you have a fixed date or a fairly fixed date, then it's often a good constraint. What you need to do is prioritise the work, and the only way you can do that is if you know that the things in your list could reasonably be done in that time. You can't have no estimates, and task three means your entire team needs to invent an AI or something stupid that will burn the whole timeline down.

If you've put a rough estimate of a week on something and you're reaching the end of week two, it's a great time to reassess. What technical people often don't get told, or don't understand is that sometimes something is only valuable if it can be done in a certain time. The business might want a feature that could be done in a week, but have no interest in it if it'll take six months.

If both sides can admit to their fears then it shouldn't really be too complex. Business is scared you'll get to the deadline and have 20% of the work done, with a good prioritised list they'll probably be happy if you're 80% of the way there. Developers are scared you'll hold it over them and berate them if their one day estimate becomes two days, but they probably know their one week estimate is bullshit and they just want to cover their ass a bit.

So incredibly true. I used to be staunchly in the "estimates are empty promises made by business people" camp. The last couple of years I've worked in an environment where both engineering and product understand that estimates are not created out of thin air and they have real consequences on both sides of the coin.

Our estimates have become far more accurate not only because we all understand that they _are_ estimates but because engineering is able to creatively come up with solutions that meet the business needs, not just the outlined "requirements." Most importantly, any potential shifts in our estimates are communicated every week and we have built in float time to account for these shifts. We haven't missed an estimated date in over a year.

Discussing the effort required for each task/story is crucial to generate alignment within a team and to better determine the scope of a task/story - and to decompose it, if it appears that the effort would be so big that tracking what has been done and what remains would be intractable.

The by-product of these discussions may be summed up as a number. But oh yes, its unit must not be time.

I think it's fun to read all the stuff about how it is impossible to estimate what software costs.

What is so magically different about it? All other professions can do it. And not just the other professions, our company is one of many software shops that sells projects. We need to be able to make good estimations to be profitable, and we are.

Maybe it's all the VC and BigCorp money that is stopping you.

> What is so magically different about it? All other professions can do it. And not just the other professions, our company is one of many software shops that sells projects. We need to be able to make good estimations to be profitable, and we are.

And all other professions regularly overrun costs and delivery dates, just like software. Let's stop acting like other engineers actually hit their deadlines regularly - just last year there was news about a miraculous Swiss tunnel project which, as a very notable exception, actually hit its target deadline.
