Hacks for Engineering Estimates (shubhro.com)
107 points by shbhrsaha on May 10, 2022 | 39 comments



One estimation trick that I've found effective is the following: (1) determine the smallest number that you're sure is larger than the true answer. (2) determine the largest number that you're sure is smaller than the true answer. (3) take the geometric average of the two (i.e., sqrt(a * b)).

The reason this does well is that, oftentimes, (1) overestimates the true answer by roughly the same multiplicative factor as (2) underestimates it. So the geometric mean cancels the over- and under-estimates to get an estimate that does pretty well.

I find that this works remarkably well for estimating the dimensions of buildings, trees, etc.
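A minimal sketch of this trick in Python (the bounds below are made-up example values, not from the comment):

    import math

    def estimate(lower_bound, upper_bound):
        # Geometric mean of a confident lower bound and a confident upper bound.
        return math.sqrt(lower_bound * upper_bound)

    # Example: a building you're sure is taller than 10 m and shorter than 90 m.
    print(estimate(10, 90))  # 30.0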


So basically, if there's a distribution of possible outcomes (with estimated duration plotted on the x-axis and probability on the y-axis), then we put bounds on either side of the distribution and take a 'central' point on the x-axis between them. But instead of a simple average, we can use a log scale on the x-axis, so that we represent estimates of different magnitudes well. Now the simple halfway point on the x-axis is on a log scale, and is (I think) equivalent to the approach you described.


The geometric mean being equivalent to the arithmetic mean of values transformed under a log scale is an interesting perspective that I haven't considered, but appears to be well known. https://en.wikipedia.org/wiki/Geometric_mean
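A tiny check of that equivalence, reusing the same hypothetical bounds as above: the geometric mean equals the exponential of the arithmetic mean of the logs.

    import math

    a, b = 10, 90  # hypothetical lower/upper bounds
    geometric = math.sqrt(a * b)
    log_space = math.exp((math.log(a) + math.log(b)) / 2)  # midpoint on a log axis
    print(geometric, log_space)  # both 30.0, up to floating-point error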


Thank you. I love this.


I think the biggest thing holding engineering estimates back is that the people asking for them are actually not interested in accurate estimates; instead they are looking for inputs to be used in various games of corporate politics.


This is precisely the issue from my perspective as well.

Most project managers I've worked with either have a desired estimate already in mind or they don't care about any of the extenuating circumstances.

On one hand, the desired estimate is often based on the knowledge that projects estimated to take more than a quarter aren't going to get a green light.

On the other hand, it's ridiculous how many projects blow through estimates when external dependencies are ignored, newly-hired engineers create a burden on the project, and de-scoped work turns out to be necessary.

Those project managers also pursue the same estimation agenda even after several projects turn out the same way.


And as someone asking developers for those estimates, I often see developer-side equivalents of the political games we complain managers play.

Devs over-engineer, add far more padding for refactoring and cleaning out tech debt than is necessary, devs engineer solutions for resume padding, devs like playing with cool tech or trying new tech instead of just using "the boring old thing", they over-engineer (saying this one twice), they get it wrong, devs over-compensate because they got burned previously, they over-compensate because they got negotiated down and then it went badly, they want to impress their peers or whoever they report to, they get bullied by end users who somehow get access to them, etc., etc. Yes, a lot of those are avoidable, but we don't live in an ideal world.


> Those project managers also pursue the same estimation agenda even after several projects turn out the same way.

This is an aspect of the https://en.wikipedia.org/wiki/Planning_fallacy.


Correct. The good answers assume the askers actually want ACCURATE estimates. In many, many cases this is not true at all.

I've worked for guys that shop around and give the work to the lowest estimator, even when they have a track record of low-balling and then running 5x over their estimates.

In other scenarios, to your point, optimistically low estimates are used as a political tool by product/management to wrestle some task/responsibility from some other team in the org.

Inevitably what I see again and again is that everyone takes (and fights devs for) low-ball estimates, which assume the happy path where "nothing can go wrong". They are then happy to hear and communicate to clients the various excuses when each "downside surprise" is discovered during the development process. Of course, the estimate high-baller has built in time for these, as there are rarely positive surprises that make tasks faster, and few tasks are surprise-free.


Yeah. I am going through this right now. And they have no interest in increasing productivity. Instead of looking at slow processes that can be improved, management wants to add more process to get "better" estimates. These estimates will fall apart quickly because of constant scope changes. It's really infuriating. I understand that there is a need for budgeting and stuff but I also know the only way to get things done is by doing them. No amount of estimation helps if your processes are inefficient.


Wow this is so true. Never thought about it like this, but as a product manager myself you are absolutely right.


"Project Management Theater"


My go-to heuristic is three-point estimation, basically a weighted average of the best, worst, and most likely case [0].

(Best + Worst + 4 * Most Likely) / 6

One nice property is that it imposes a distribution that adjusts for longer tailed risks.

https://en.wikipedia.org/wiki/Three-point_estimation
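A minimal sketch of that weighted average (the inputs are hypothetical):

    def three_point(best, most_likely, worst):
        # PERT-style three-point estimate: the most likely case gets 4x the weight
        # of the best and worst cases.
        return (best + 4 * most_likely + worst) / 6

    print(three_point(2, 5, 20))  # 7.0 days -- pulled above the most likely case by the long tail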


So one question here is: Why reduce the distribution (with long tail or whatever) to a single estimate number? If the distribution represents the range of possible outcomes well, then the single number throws away most of the information in the distribution.


I strongly agree; giving people the distribution conveys a lot of information, especially if everyone is clear on what the parameters of that distribution mean (i.e., what does the low estimate mean?).

At the same time, there are occasions where it can be useful to collapse a distribution for some types of reports, or for quickly looking across estimates.


Things, IMO, which are spoilers:

1. Starting with an end date and creating a plan from there. Some top leads want to get promoted and want to achieve X by this quarter or the next.

2. With so many people leaving, there isn't enough time or resources to onboard new folks, who are nonetheless counted on for deliveries.

3. Too many parallel initiatives

4. Unstable production taking daily attention

5. Not being able to prioritise tech debt over business features.

6. Decision makers at the top don't have grass-roots visibility, or don't want to have it.


I just finished a big project and those indeed were big spoilers. But also:

7. Difficulty in getting clearer specs when it's discovered the original specs are not detailed enough.

8. Decision makers at the top (or middle?) having too much grass-roots visibility and micromanaging the project.


Interesting article. Our PO often all but demands estimations from us. Usually I already respond in a best/worst-case fashion. In the end, the PO only seems to remember the best case and takes it as a commitment. Since I was fooled by this a few times, I am now collecting a paper trail and am quite reluctant when giving "just a ballpark figure". My key takeaway was that estimations mostly aren't about accuracy or getting a value, but rather about managing people's expectations and navigating corporate politics.


Agreed. I'm much less liberal with my estimates with external stakeholders than I am with close associates, purely for political reasons. If I say 8 weeks to my team, I'll say 11 to management/others. I get no benefit out of delivering on time, a little for delivering early, and a massive loss for delivering late, so I have zero incentive to give them an "early" estimate. Under-promise and over-deliver is corporate strategy 101.


When estimating a software project of any size, try to imagine the most optimistic number of hours needed for each activity you can think of.

Then multiply by 4.14.

This provides room for

Phase 1: First version of the deliverable: x * pi/2

Phase 2: Trying to work around all the cases where the initial design was bad: x * pi/2

Phase 3: Refactor from scratch (with the same team): x * 1

Sum of phases 1-3: x * (pi + 1)

This is for facing the customers/stakeholders. When facing the team, only present them with Phase 1, with the estimate of x * pi/2.

Otherwise, phase 1 alone will take x * (pi + 1).
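The 4.14 falls out of the arithmetic above: pi/2 + pi/2 + 1 = pi + 1 ≈ 4.14. A tiny sketch, assuming a made-up 8-hour optimistic estimate:

    import math

    optimistic_hours = 8  # hypothetical most-optimistic estimate
    phase1 = optimistic_hours * math.pi / 2   # first version
    phase2 = optimistic_hours * math.pi / 2   # working around the bad initial design
    phase3 = optimistic_hours * 1             # refactor from scratch
    total  = phase1 + phase2 + phase3         # = optimistic_hours * (pi + 1), ~4.14x
    print(round(total, 1))  # 33.1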


The best estimation tactic I've found in practice is ROPE, which works because it helps different kinds of stakeholders understand different kinds of estimates and ranges.

R = Realistic estimate. Based on work being typical, reasonable, plausible, and usual.

O = Optimistic estimate. Based on work turning out to be notably easy, or fast, or lucky.

P = Pessimistic estimate. Based on work turning out to be notably hard, or slow, or unlucky.

E = Equilibristic estimate. Based on 50% probability suitable for critical chains and simulations.

https://github.com/sixarm/rope-estimate
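A hypothetical sketch of carrying the four ROPE values around as a record; this is just an illustration, not the rope-estimate repo's actual API:

    from dataclasses import dataclass

    @dataclass
    class RopeEstimate:
        realistic: float      # typical, reasonable, plausible, usual
        optimistic: float     # notably easy, fast, or lucky
        pessimistic: float    # notably hard, slow, or unlucky
        equilibristic: float  # 50% probability, for critical chains and simulations

    task = RopeEstimate(realistic=10, optimistic=5, pessimistic=30, equilibristic=12)
    print(task)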


I usually give a range as an estimate.

The issue I've had is that most "stakeholders" just want it boiled down to one number and don't care about any nuance. If I say it will take 2-4 weeks, they will assume 2 weeks if it's convenient. If there's a list of tasks, they will add up the lower bound of each task and discard anything they don't deem necessary.

It seems to me that any estimation tactic works well if you assume fair intent.


When I make an estimate, I just double what I think it is, and this is usually pretty accurate. I keep underestimating; sometimes, after doubling, I wonder myself: will it really take this long? 99% of the time, in the end, the answer was yes. So this is my go-to method. People still think I finish stuff quickly, even if I myself think it took too long.


My favorite estimation hack / joke is similar: take your best estimate, double it, and move up to the next higher time units.

So "oh, an hour or so" becomes 2 days. A week turns into two months.

I don't usually express those estimates, but it gives a good check on an initial, usually optimistic guess.
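A tongue-in-cheek sketch of the "double it and bump the unit" rule; the unit ladder here is my own assumption:

    UNITS = ["minutes", "hours", "days", "weeks", "months", "years"]

    def pessimize(amount, unit):
        # Double the number and move up to the next larger time unit.
        bigger = UNITS[min(UNITS.index(unit) + 1, len(UNITS) - 1)]
        return 2 * amount, bigger

    print(pessimize(1, "hours"))  # (2, 'days')
    print(pessimize(1, "weeks"))  # (2, 'months')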


I like your method. A 1h task usually isn't finished until two days after it's started anyway, due to things like testing, reviews, or whatever.

The main feature is that you can report more work than there are hours of work, e.g. complete 8 days of work in two days by doing four 2-day tasks.

I find it way easier to estimate when something is done rather than how much time it will take to do.


This sounds about right. I worked with a guy who never gave an estimate longer than "an hour" or "I can bang this out in a weekend". Of course, quite often he'd end up working on the feature for 3 months...


This is exactly my hack too! Especially when there are outside dependencies that can easily stall your progress.


Engineering projects often already encompass this whole philosophy by building estimates that roll up to a P10 and a P90 outcome.

P10 = 10% chance of occurring (optimistic), P90 = 90% chance of occurring (conservative).

How it's typically done is that standard scheduling packages, like Primavera, allow you to specify a band or range of duration/effort for individual tasks/activities.

This, when combined with task dependency information (which you must give it in the form of a PERT chart or similar; it accepts a few different data formats), means it can calculate the critical path across the whole range of activities for an overall outcome and yield the project P10/P90.

Then you can run sensitivity analysis, identify key pivot points, look at assigning more resources to certain efforts, etc., and optimise the schedule, plus track actual progress as you go and make forecasts.

But this is all based on the premise of doing the kind of engineering where you have some reasonable idea of what your actual goals and methods are before you start, so if you are running under agile you are probably screwed, because even if you tried this planning, your planner(s) could probably never keep up with actuals.

To understand the difference between an engineered project and an agile one see my comment https://news.ycombinator.com/item?id=31299834#31301616
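For a rough feel of how per-task ranges roll up to a P10/P90, here is a minimal Monte Carlo sketch over a simple serial chain of three hypothetical tasks (real tools like Primavera also handle dependency networks and critical paths):

    import random

    # (low, most_likely, high) duration ranges in days -- hypothetical tasks
    tasks = [(2, 3, 8), (5, 8, 20), (1, 2, 6)]

    totals = sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(10_000)
    )
    p10 = totals[int(0.10 * len(totals))]
    p90 = totals[int(0.90 * len(totals))]
    print(f"P10 ~= {p10:.1f} days, P90 ~= {p90:.1f} days")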


Using Steve McConnell's terms, I think of things playing out as Plan, then Estimate, then discussing the Target, and then cutting features before Commitment.

Having multiple people independently estimate a given problem is a more robust way to estimate, in general. Other than a lead and a dev each sizing something up, I don't think I've seen that in practice. It feels a little weird for developers to be estimating each other's work. I think it's interesting if and only if Commitment is truly separated from estimation.


I feel like "Note the Precision" is one that I most see missed.

Partially, this is on engineers. They'll say "That'll take 36 days" and not realize that they're implying a higher level of precision than they intend.

Partially, this is on consumers of estimates (managers, etc). They'll hear "It'll be about 36 days, but that's just a super rough estimate, we haven't planned it out yet, it could be way more..." but they stopped listening and wrote down "estimate: 36 days."

My current eng team has fixated on two distinct levels of eng estimates. The first is super high level: "minutes to hours", "hours to days," "days to weeks", "weeks to months," or "months to quarters." The second is a number of hours. We give out the high-level estimates freely - they're super helpful for project planning. We give out the second number only when we have a pretty solid plan with estimated tickets.

It's worked pretty well because engineers can always be clear on which estimation type is called for. It's also helpful because non-engineers can get used to hearing the high-level estimates pretty quickly and know to treat them as super vague.

* We actually deliver all hour estimates as 30/60/90 estimates: "we're 30% sure it'll be done in 36 hours, 60% sure it'll be done in 50 hours, and 90% sure it'll be done in 80 hours". There's still a tendency for people to just use the 60% estimate as "The Estimate," but it's better than nothing.
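If you have historical actuals for comparable work, the 30/60/90 framing is just three percentiles of that data; a minimal sketch with invented sample hours:

    import statistics

    # Hours that ten hypothetical comparable tasks actually took.
    actuals = [30, 34, 36, 40, 44, 50, 55, 62, 70, 95]

    q = statistics.quantiles(actuals, n=10, method="inclusive")
    p30, p60, p90 = q[2], q[5], q[8]
    print(f"30% confident: {p30:.0f}h, 60%: {p60:.0f}h, 90%: {p90:.0f}h")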


I really like "3 point estimates". Rather than a single estimate you give three - one that's your expected timing, one for the best case if everything goes perfectly, and one absolutely worst case scenario time. The difference between "best case and expected", and "worst case and expected", indicate the risk factor. If best case and expected are similar then it's a low risk feature - you understand the complexity and there are few unknowns. If the worst case and expected are similar then it's a high risk feature that you don't expect to go to plan.

I've never been in a position to actually use this approach well, but I like the idea of it a lot.
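One way to turn the three numbers into a quick risk read along the lines described above; the 0.5 threshold is arbitrary and the inputs are hypothetical:

    def risk_read(best, expected, worst):
        # Per the comment: expected near the best case suggests low risk,
        # expected near the worst case suggests high risk.
        position = (expected - best) / (worst - best)  # 0 = at best case, 1 = at worst case
        return "low risk" if position < 0.5 else "high risk"

    print(risk_read(best=4, expected=5, worst=15))   # low risk
    print(risk_read(best=4, expected=13, worst=15))  # high risk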


Always keep in mind that estimate errors follow a long-tailed distribution. That means that if you estimate enough things, your total error will be defined by one or two tasks; it doesn't matter how you did on 95% of them.

Thus, if your goal is to lie to management by stating that "we met 98% of our estimates last year", implying that because of this there is no problem, then yes, go work on improving your estimates.

If your goal is to get things done so you can make some real progress, go set realistic targets on time or ROI and learn to throw away tasks that fail them.

And if your goal is to never get an estimation wrong, because they are commitments that you can never get free after you made them, go practice your interview skills and move to a better environment.
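A quick simulation of the long-tail point: with multiplicative, lognormal-style overruns (parameters invented), the worst couple of tasks tend to carry a disproportionate share of the total overrun:

    import random

    random.seed(0)
    TASKS, ESTIMATE_DAYS = 20, 10

    # Hypothetical multiplicative overrun per task: usually near 1x, occasionally huge.
    factors = [random.lognormvariate(0.0, 1.2) for _ in range(TASKS)]
    overruns = sorted(max(0.0, (f - 1) * ESTIMATE_DAYS) for f in factors)

    total = sum(overruns)
    share_of_worst_two = sum(overruns[-2:]) / total
    print(f"worst 2 of {TASKS} tasks carry {share_of_worst_two:.0%} of the total overrun")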


There are no hacks. Making estimates is difficult; the more information you can feed into your equation as early as possible, the better a prediction you will get.


Let's go with "making estimates is _impossible_", at least in the vast majority of cases.


I usually go for a scale like hours, days, or weeks. It communicates the accuracy of the estimate. If something more accurate is required, I like thrashtester's method of the 4.14x.


Some things can be estimated, others not so much.

Often folks don't know whether what is being estimated is actually knowable.

Building something new, pre product-market fit, is a crapshoot until it's not.


Yes, this is one of the agile issues.

We try to tightly manage estimates and timelines on small iterative tasks, breaking up any larger work items into smaller and smaller stories, in beautiful hierarchies of tickets...

The only innovative work happens out of sprint, on nights and weekends, as unapproved stuff that would have been nickel-and-dimed into 27 stories across 10 sprints if it had gone through the agile process.

In a greenfield project, these sorts of estimate-driven iterative work methods are innovation killers.


It's shocking when so much time is wasted on daily and weekly meetings and the project is still a year late or fails.


That's shocking?

It seems incredibly ironic that one would be shocked by this.



