Show HN: I built a free app to help devs create more accurate software estimates (estimatrapp.com)
99 points by davzie on April 2, 2016 | 55 comments



I have a theory about that too. In a software development project, 80% of the work takes 20% of the time and 20% of the work takes 80% of the time. Let's call the former noise and the latter the bottlenecks.

A beginner estimates everything like noise, so he's dramatically wrong. The novice, though, estimates everything like bottlenecks, so he's dramatically wrong too.

This is actually why, in the industry, those agile methods work quite well. They handle that without people realizing it.

The next key is to understand that estimates never determine the deadlines. What you can tell the stakeholders does.

So, the only way to succeed is to get the best dates you can from the stakeholders, then build the scope of the app accordingly. Then start treating everything like noise. Find the bottlenecks, focus on them, solve them, repeat.

Lastly, my manager set up a meeting with the engineers to estimate the tasks in order to determine the launch date of our app. Before the meeting, he was saying that the date was currently May 15th. Then we had the meeting, and the estimate came to 750 days of work. Two days later, we learned that the launch date is May 15th.


It's more like: The first 90% of the code takes the first 90% of the time, the remaining 10% of the code takes the other 90% of the time. [https://en.wikipedia.org/wiki/Ninety-ninety_rule]


This theory is called the Pareto principle and dates back to 1896. The Wikipedia article is quite an interesting read: https://en.wikipedia.org/wiki/Pareto_principle



Don't "beginner" and "novice" mean the same thing? I am curious how you are differentiating between the two.


The problem with software estimates is that you're being asked to estimate something unknown, for the first time.

Somebody asks you to walk a certain route through a city and give them an estimate of how long it's going to take. This is the first time you've ever been asked to do this. The route has stop lights, paved walkways, thick forest where you have to trample your own path, crowded areas, construction, and all sorts of other characteristics.

Think you're going to estimate that perfectly the first time? Nope. But guess what, you'll estimate it pretty well the next time you're asked to do the same route.

Therein lies the problem with software and estimates. Many times what engineers are asked to estimate is an unknown or has portions of unknowns. Without having solved the problem prior to being asked it's going to be real tough to estimate it properly. Second time around, your estimate is going to be much much better.

Most software work is _not_ something you've done once, twice or thirty times before. It's solving problems for the first time, so estimates will be wrong.


Yep. I like the idea of an agency or internal team that does the following:

Come up with risk classes and complexity classes, with objective measures for what is or isn't in each class. ("Class X risk involves a library/technique/platform with which we are not yet familiar.")

Measure time spent on all tasks.

Collect statistical distributions for each portion of the risk/complexity grid.

Use those distributions and Monte Carlo simulation to estimate future projects to an appropriate degree of confidence (e.g. 95%).

This probably means scheduling lots more time than you would have before, often more than you will actually need; use the rest for refactoring, training, completing unforeseen and therefore unestimated tasks (bug fixes?), maintaining the FOSS you depend on, etc.
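A minimal sketch of the Monte Carlo step above, assuming you've already bucketed historical actuals by risk/complexity class (the class names, sample data, and 95% target below are made up for illustration):

    import random

    # Hypothetical historical actuals (hours), bucketed by risk/complexity class.
    history = {
        "low-risk/simple":   [3, 4, 5, 6, 8],
        "high-risk/complex": [10, 18, 25, 40, 90],
    }

    # The new project, expressed as a count of tasks in each class.
    project = {"low-risk/simple": 12, "high-risk/complex": 4}

    def simulate_once():
        total = 0
        for cls, count in project.items():
            # Resample each task's duration from that class's historical actuals.
            total += sum(random.choice(history[cls]) for _ in range(count))
        return total

    runs = sorted(simulate_once() for _ in range(10_000))
    print("95% of simulated outcomes finish within", runs[int(0.95 * len(runs))], "hours")

With real data you'd bootstrap (or fit a distribution) per cell of the risk/complexity grid rather than two coarse buckets, but the shape of the calculation is the same.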


Estimates are, by their nature, uncertain. The problem is less about uncertainty and more about having punishment or reward tied to them, implicitly or explicitly.

Too often the first estimate turns into the plan. That's what makes us all so gunshy of estimation.

That's not a problem with estimation, though.


Please, please, stop adding background music to these sorts of videos. It adds nothing but noise, and makes it hard to hear what the person is saying.

If for some inexplicable reason you feel compelled to add music to an informational video, turn the volume way down and pan it left or right so those of us without perfect hearing can have some idea what's being said. Better yet, add subtitles.


I have good hearing and the music bothered me too. Way too loud.


Noted and fixed ;)


Here's one way: double the estimate and then cut the scope in half.


Or the other standby: take the dev estimate, double it, and bump the units up a notch:

  Dev Estimate     Quote
  ------------     -----------------
  2 hours          4 days
  1 day            2 weeks
etc...
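Tongue in cheek, here's the rule as code; the unit ladder is just my assumption about the intended progression:

    # Joke rule: double the number, bump the unit up one step.
    UNITS = ["minutes", "hours", "days", "weeks", "months"]  # assumed ladder

    def quote(value, unit):
        bumped = UNITS[min(UNITS.index(unit) + 1, len(UNITS) - 1)]
        return f"{value * 2:g} {bumped}"

    print(quote(2, "hours"))  # -> 4 days
    print(quote(1, "days"))   # -> 2 weeks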


There's only one problem with that rule: This is not done to dev estimates, but to the estimate of the next lower level of management. If your project has 5 levels of management before all participants are under one umbrella, team estimates in days will lead to a project taking years.

Claiming that the organization is flat is no way to cheat, though: create phantom managers so that every 6 people have a manager, and those managers have managers, until the structure has a root. 300 programmers without managers would get 50 phantom first-level managers, who have 9 phantom second-level managers, 2 phantom third-level managers, and a single phantom project manager. If an organization has more management layers than that, it will be even slower.
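The phantom-manager counts are just repeated ceiling division by 6; a quick check:

    import math

    n, layers = 300, []
    while n > 1:
        n = math.ceil(n / 6)   # one phantom manager per 6 people
        layers.append(n)
    print(layers)  # [50, 9, 2, 1]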


Hah, this was a running joke at my last place of work ('Double it, change the units'), where we were developing for clients and so were often publishing estimates externally.

More recently they moved into the world of science and actually fed the results of previous estimates back into the estimation process, to try to address systematic mis-estimation. Each individual estimate may not be right, but it's surprising how close you can get if you average them out.


I used to suggest this method of estimating at my previous job. People thought I was crazy. I was right every single time.


This comment and the comment you replied to are probably the most useful things to be read on HN in a long time. ;)


It surprises me how well this worked every time I applied it.

It stops working so well when you get to planning in months and years, however :(


From experience, you're probably not wrong.


anddd get the best date you can. If you don't get it from stakeholders, it doesn't count. Get it from stakeholders.


Engineer's Savings Time: triple the time estimate.


I'd like to give this a try, if only because I am not sure I believe anyone can accurately estimate non-trivial programming tasks. However, there doesn't appear to be a way to get any information without connecting your app to my GitHub account. Seems like that process ought to go the other way around.


The thing that helped me the most personally with my estimate accuracy is estimating times beforehand, and then tracking the actual time and evaluating the accuracy of my own estimates. This is partly to get out of the mindset of trying to explain why past estimates were over/under and focus on being more accurate in the future.


Did it help? How often did you end up repeating the same task?

(Personally, I have this feeling that my tasks tend to be varied enough that 'number of times done' would almost always == 1; however, I don't have any actual data to back that assertion up. And maybe you used this to home in on the "true hour cost of a 2 story point task"???)


I am probably thinking at a different level of granularity than you are; I'm talking about days to weeks here. I can't recall ever repeating the same task. It helped by letting me notice systematic errors and by helping me calibrate optimism / pessimism.

In my estimates I was focused too much on development time and not providing adequately for testing and deployment. On an existing project with the procedures already well in place it was fine, but for new or new-ish projects I was always under-estimating.

The other thing was calibrating my margin of safety. I now typically estimate at 2x the time that I think it will take, to allow for unexpected issues and the unrelated tasks that always pop up. That also works for me with the expectations of the people that I work with -- the estimates are used for planning and coordination and aren't effectively deadlines, but if I underestimate too often it causes coordination problems with the business people.


Surely you're getting a sense of the patterns in the things you're doing.

Maybe something like "that's probably 3 loops and an email"

That's a couple hours for the loops... A couple more for the email, assuming HTML has to be dynamic based on a model with 5 values... Etc...

Sure it's not identical every time, but the patterns are the same.


Sure, sometimes. Between that, the story point ideal of "... relative to other tasks completed in the project", and a long enough term project, things may start to look the same _in that project_.

It also depends on how much leeway your estimates can have: if it turns out to be waaaay more than two loops and an email, what happens? Does management just nod, or do they start "negotiations" about not paying for the work it took you over the "estimate"?


It's funny: we've got the ability to do that at work; we track our time in 15-minute blocks and break each task down.

As an agency (maybe it's the same everywhere), I think there is wild external and internal pressure against giving accurate estimates.

When you get in the room with the project managers and the sales people, you can see their eyes rolling as the costs escalate.

"How long is implementing Google analytics going to take?" Maybe 3 days? "3 days really? That seems high, can we write in 2 days there?" Well we can, but it's still going to take the same amount of time.


I've seen this too. I'm lucky to work at a place where engineering estimates are taken seriously.

Most of the objections I see to estimates are really objections to their abuse.


"Tasks" in Intellij is great for this. You can hook it up to you project management tool (github, JIRA, trello etc) then checkout an issue and it will create a branch for that issue and track the time you spend on it.


This classic is still relevant today:

"Evidence Based Scheduling" by Joel Spolsky

http://www.joelonsoftware.com/items/2007/10/26.html


Definitely do the video again without background music. It is hard to hear you.


You're right! That's my job for Monday now that I have a proper audio unit setup. Thanks for the feedback ;)


I spent a lot of time messing around with doing something similar and not succeeding[1].

I wanted to do full-on decomposed 3-point estimates. Which means representing something that is not quite a tree and not quite a table. Turned out to be harder than I was smart. I got the underlying calculation code to work fine, but never worked out how to get HTML to play along.

I did find reading about estimation to be quite enlightening though. My suspicion is that most of the improvement seen in 3-point estimates comes from decomposing the elements, not from the PERT formula.

[1] http://confidest.com
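For anyone unfamiliar, the PERT part of a 3-point estimate is just a weighted average of the three points; a minimal sketch (the sample numbers are made up):

    # Classic PERT / three-point estimate for a single task.
    def pert(optimistic, most_likely, pessimistic):
        expected = (optimistic + 4 * most_likely + pessimistic) / 6
        std_dev = (pessimistic - optimistic) / 6
        return expected, std_dev

    e, sd = pert(2, 4, 12)
    print(f"expected {e:.1f} days, std dev {sd:.1f}")  # expected 5.0 days, std dev 1.7

Which is consistent with the suspicion above: the formula itself is trivial; the work is in decomposing the project into pieces small enough that the three points mean something.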


Please, please, please remove the background music -- it's annoying and overly loud.

The product looks very interesting, and it's nice that you offer a free tier. I'd hate to see your pitch hampered by the music.


I appreciate more attention being given to software estimates, but it seems like the research in this area of software engineering gets overlooked. Most people have a single hocus-pocus equation that generates numbers they find convincing. Furthermore, it's often the case that these numbers are never changed over the life of the project.

That would be like predicting the weather with a single model, once, for the next three months. Of course that isn't going to work.

There are a few key points I think are often overlooked:

* Estimating is less about understanding time, and more about sizing a project and effort -- a study in software economics.

* Bottom-up estimates need to be based on historical performance. They should always be ranges and should include a notion of confidence. They should be democratically created if you're estimating for a team. You can do something simple (like Estimatr) or use a tool with a lot of data behind it (like Personal Software Process) - I do both.

* Top-down estimates should also be used. I often use COCOMO and COSYSMO, with Monte Carlo risk calculations (http://csse.usc.edu/tools/COCOMOII.php) -- there's a rough sketch of the core formula after this list.

* The two approaches will give you two answers (both in ranges), with confidence intervals, which provides you a better sense of the size of the effort.

* Generating and publishing an estimate has a psychological effect on the team - consider that wisely (read Peopleware).

* Estimates are good for about two weeks; after that, they've become stale.

* You're not done with a project until you've recorded your performance data (for future bottom-up estimates).

* If a project is going to last three months or less, the research shows that nothing matters at all -- estimates, process, etc. -- do whatever keeps you/the team motivated.
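To make the top-down bullet concrete, here's a rough sketch of the COCOMO II effort equation. The A and B defaults are the commonly cited COCOMO II.2000 calibration values, and the scale-factor / effort-multiplier inputs below are placeholders; real ratings come from a calibrated tool like the one linked above.

    # Rough sketch of the COCOMO II post-architecture effort equation:
    #   person_months = A * size_ksloc ** E * product(effort_multipliers)
    #   where E = B + 0.01 * sum(scale_factors)
    def cocomo_ii_effort(size_ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
        E = B + 0.01 * sum(scale_factors)
        em = 1.0
        for m in effort_multipliers:
            em *= m
        return A * size_ksloc ** E * em

    # Placeholder ratings -- not from any real project.
    pm = cocomo_ii_effort(50, scale_factors=[3.7, 3.0, 4.2, 3.3, 4.7],
                          effort_multipliers=[1.0, 1.17, 0.87])
    print(f"{pm:.0f} person-months")

For the Monte Carlo part, you'd run this over sampled ranges of size and cost drivers rather than point values, which is what gives you an effort distribution instead of a single number.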


Do you have a source on the last point here? Hadn't heard that before but seems pretty noteworthy.


It's a finding from a Capers Jones report (I believe on the analysis of agile vs waterfall projects of varying sizes and durations). I'll see if I can dig up the exact report for you...


Love the idea.

But, please, if you do a demo video please include an alternative way to learn about your product. I recommend either a list of features and/or screenshots.

I can't always watch video:

1. Sometimes I don't have the time

2. Sometimes I don't have audio

2a. My headphones are packed away

2b. I'm using my cell phone in a public place

3. You have 30 seconds to sell me or at least get me to learn more. If your video is 2 minutes, make sure you don't bury the lede. I have no idea if this is the case here because I couldn't watch the video (see: 2a) :(

I'm using "I" and "my" here but I'm sure I'm not alone. Same goes for the trend in news sites now to only have a video on their website and no summary.


Thanks for making this, it's definitely useful and the UI is very clean.

Bug Report: If you enter a non-integer value for one of the fields, it returns NaN -- e.g. I entered 1.5 for a worst-case estimate and it did that.


Reminds me somewhat of this[1] article.

[1] http://xkcd.com/1658/


Cool idea, and this is probably the first time I've seen Vue.js used in production (that I'm aware of). I'm in the US, so I think changing the currency symbol (as a profile setting) would be a nice feature. Very clean design!


Thanks man! I really enjoy Vue. Multi-currency is definitely on the road-map for sure. It's all baked in really except for the user settings area.


That seems pretty useful, thanks!

Would be perfect with an export function to send it to the client.


Is there any more to this app than a spreadsheet/data input form?


Not yet. I wanted a prettier way to use standard deviation for assessing the most likely value for estimate line items as well as saving those in a nice convenient way. So I built this. Eventually this will branch out to have more features, specifically for agency-style businesses where they may have several worker types with associated internal / external rates etc. For now though, I'm just gathering feedback as to whether this is a problem for people or not.


Nitpick: the term you want where you've been using "standard deviation" is one of:

Distribution - general term for the shape of the relationship between probability and task length.

Triangular distribution or PERT - the specific shape of least/greatest/most likely you are using

Risk or Probability - the general concept that things are not definite and involve chance.

And where you are using "Total" for each item, you want "Mean" or "Median" depending which it is.

And I don't care to know the total mean or median time for scheduling, because the project will exceed that time half the time if it's the median (and still a large fraction of the time if it's the mean, with long-tailed distributions). I would instead want some high quantile such that the chances of exceeding the schedule/budget are low (and which quantile I pick depends on my project and what's at risk if the schedule is exceeded).
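A small sketch of that point, using a triangular distribution over a made-up least / most likely / greatest range:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative least / most likely / greatest estimate for one task, in days.
    least, mode, greatest = 3, 5, 20

    samples = rng.triangular(least, mode, greatest, size=100_000)

    print("mean:", round(float(samples.mean()), 1))              # ~9.3 days
    print("median:", round(float(np.median(samples)), 1))        # ~8.7 days
    print("p90:", round(float(np.percentile(samples, 90)), 1))   # ~15 days

Scheduling to the mean here gets blown roughly 45% of the time; scheduling to the 90th percentile costs five or six extra days on paper but rarely gets exceeded.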


Just spent 5 minutes playing with it, looks very nice .. I especially like the visual cues about what is and isn't editable, and the whole app flows pretty nicely .. will come back and have another 5 minutes play with it in a month or so, in case you're going to push things forward .. I'd love to see some graphics in there, rendering the time-range data and showing relevant gravity between them, somehow, akin to some sort of waterdrop diagram .. anyway, good stuff, well done.


Thank you for being honest! Upvoted.


    (let [estimate XXX]
      (* estimate 2))


My formula:

bestcase * 1.3 + (worstcase - bestcase) * 0.5


So worstcase * 0.5 + bestcase * 0.8?


So if you think it might take 2-4 weeks, use this calculation to arrive at 3.6 weeks?


Indeed. Better safe than sorry and we're all happy when we finish earlier. It works well for me ;).


what is it called, "multiply by three"?



