A beginner estimates everything as if it were noise, so he's dramatically wrong. The novice, though, estimates everything as if it were a bottleneck, so he's dramatically wrong too.
This is actually why, in industry, agile methods work quite well: they handle this without people realizing it.
The next key is to understand that estimates never determine deadlines. What you can tell the stakeholders does.
So the only way to succeed is to get the best dates you can from the stakeholders, then build the scope of the app accordingly. Then start treating everything like noise: find the bottlenecks, focus on them, solve them, repeat.
Lastly, my manager set up a meeting with the engineers to estimate the tasks in order to determine the launch date of our app. Before the meeting, he was saying the date was currently May 15th. Then we held the meeting, and the estimates came to 750 days of work. Two days later, we learned that the launch date is May 15th.
Somebody asks you to walk a certain route through a city and give them an estimate of how long it's going to take. This is the first time you've ever been asked to do this. The route has stop lights, paved walkways, thick forest where you have to trample your own path, crowded areas, construction, and all sorts of other characteristics.
Think you're going to estimate that perfectly the first time? Nope. But guess what, you'll estimate it pretty well the next time you're asked to do the same route.
Therein lies the problem with software and estimates. Much of the time, what engineers are asked to estimate is an unknown, or has portions of unknowns. Without having solved the problem before being asked, it's going to be really tough to estimate it properly. The second time around, your estimate is going to be much, much better.
Most software work is _not_ doing something you've done once, twice, or thirty times before. It's solving problems for the first time, so estimates will be wrong.
Come up with risk classes and complexity classes, with objective measures for what is or isn't in each class. ("A class-X risk involves a library/technique/platform with which we are not yet familiar.")
Measure time spent on all tasks.
Collect statistical distributions for each portion of the risk/complexity grid.
Use those distributions and Monte Carlo simulation to estimate future projects to an appropriate degree of confidence (e.g. 95%).
This probably means scheduling a lot more time than you would have before, often more than you will need; use the rest for refactoring, training, completing unforeseen and therefore unestimated tasks (bug fixes?), maintaining the FOSS you depend on, etc.
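The measure/collect/simulate loop above can be sketched in a few lines. This is a minimal, hypothetical version: the risk classes and their historical task durations are invented, and real data would have many more samples per class.

```python
import random

# Hypothetical historical task durations (hours), bucketed by risk class.
history = {
    "low":  [2, 3, 3, 4, 5],
    "high": [8, 10, 14, 20, 35],
}

def simulate(project, trials=10_000, seed=42):
    """Monte Carlo estimate: resample historical durations per task,
    then take a high percentile of the simulated project totals."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.choice(history[cls]) for cls in project)
        for _ in range(trials)
    )
    # 95th percentile: the schedule you'd beat in ~95% of simulated runs.
    return totals[int(0.95 * trials)]

# A project of four low-risk tasks and two high-risk ones.
print(simulate(["low"] * 4 + ["high"] * 2))
```

The point of taking the 95th percentile rather than the mean is exactly the "more time than you will often need" above: most runs finish early, and the slack absorbs the unlucky ones.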
Too often the first estimate turns into the plan. That's what makes us all so gun-shy about estimation.
That's not a problem with estimation, though.
If for some inexplicable reason you feel compelled to add music to an informational video, turn the volume way down and pan it left or right so those of us without perfect hearing can have some idea what's being said. Better yet, add subtitles.
| Dev Estimate | Quote   |
|--------------|---------|
| 2 hours      | 4 days  |
| 1 day        | 2 weeks |
Claiming that the organization is flat is no cheat, though: create phantom managers so that every 6 people have a manager, and those managers have managers, until the structure has a root. 300 programmers without managers would get 50 phantom first-level managers, who get 9 phantom second-level managers, then 2 phantom third-level managers, and a single phantom project manager. If an organization has more management layers than that, it will be even slower.
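The layer counts above are just repeated division by the span of control, rounding up at each level. A quick sketch, using the comment's span of 6:

```python
from math import ceil

def phantom_layers(people, span=6):
    """Phantom managers needed per layer until the tree has a single root."""
    layers = []
    while people > 1:
        people = ceil(people / span)  # one manager per `span` reports
        layers.append(people)
    return layers

print(phantom_layers(300))  # → [50, 9, 2, 1]
```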
More recently they moved into the world of science and actually fed the results of previous estimates back into the estimation process, to try to address systematic mis-estimation. Yes, each individual estimate may not be right, but it's surprising how close you can get if you average them out.
It stops working very well when you get to months and years planning however :(
(Personally, I have a feeling that my tasks tend to be varied enough that 'number of times done' would almost always == 1; however, I don't have any actual data to back that assertion up. And maybe you used this to home in on the "true hour cost of a 2-story-point task"?)
In my estimates I was focused too much on development time and not providing adequately for testing and deployment. On an existing project with the procedures already well in place it was fine, but for new or new-ish projects I was always under-estimating.
The other thing was calibrating my margin of safety. I now typically estimate at 2x the time I think it will take, to allow for unexpected issues and the unrelated tasks that always pop up. That also works for me with the expectations of the people I work with -- the estimates are used for planning and coordination and aren't effectively deadlines, but if I underestimate too often it causes coordination problems with the business people.
Maybe something like "that's probably 3 loops and an email"
That's a couple hours for the loops... A couple more for the email, assuming HTML has to be dynamic based on a model with 5 values... Etc...
Sure it's not identical every time, but the patterns are the same.
It also depends on how much leeway your estimates can have: if it turns out to be waaaay more than two loops and an email, what happens? Does management just nod, or do they start "negotiations" about not paying for the work it took you over the "estimate"?
At an agency (maybe it's the same everywhere), I think there is wild external and internal pressure against giving accurate estimates.
When you get in the room with the project managers and the sales people, you can see their eyes rolling as the costs escalate.
"How long is implementing Google analytics going to take?" Maybe 3 days? "3 days really? That seems high, can we write in 2 days there?" Well we can, but it's still going to take the same amount of time.
Most of the objections I see to estimating are objections to their abuse.
"Evidence Based Scheduling" by Joel Spolsky
I wanted to do full-on decomposed 3-point estimates. Which means representing something that is not quite a tree and not quite a table. Turned out to be harder than I was smart. I got the underlying calculation code to work fine, but never worked out how to get HTML to play along.
I did find reading about estimation to be quite enlightening though. My suspicion is that most of the improvement seen in 3-point estimates comes from decomposing the elements, not from the PERT formula.
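For reference, the PERT part of a 3-point estimate is just a weighted average of the optimistic, most-likely, and pessimistic values, with a standard deviation derived from the spread. A minimal version:

```python
def pert(optimistic, most_likely, pessimistic):
    """Classic 3-point (PERT) estimate: beta-distribution mean and stdev."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    stdev = (pessimistic - optimistic) / 6
    return mean, stdev

# e.g. a task estimated at 2 / 4 / 12 hours:
print(pert(2, 4, 12))  # → (5.0, 1.6666666666666667)
```

Note how the long pessimistic tail pulls the mean above the most-likely value, which is consistent with the suspicion above: the formula itself is trivial, and the real work is in the decomposition.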
The product looks very interesting, and it's nice that you offer a free tier. I'd hate to see your pitch hampered by the music.
That would be like predicting the weather with a single model, once, for the next three months. Of course that isn't going to work.
There are a few key points I think are often overlooked:
* Estimating is less about understanding time, and more about sizing a project and effort -- a study in software economics.
* Bottom-up estimates need to be based on historical performance. They should always be ranges and should include a notion of confidence. They should be democratically created if you're estimating for a team. You can do something simple (like Estimatr) or use a tool with a lot of data behind it (like Personal Software Process) - I do both.
* Top-down estimates should also be used. I often use COCOMO and COSYSMO, with Monte Carlo risk calculations (http://csse.usc.edu/tools/COCOMOII.php).
* The two approaches will give you two answers (both in ranges), with confidence intervals, which provides you a better sense of the size of the effort.
* Generating and publishing an estimate has a psychological effect on the team - consider that wisely (read Peopleware).
* Estimates are good for about two weeks, after that, they've become stale.
* You're not done with a project until you've recorded your performance data (for future bottom-up estimates).
* If a project is going to last three months or less, the research shows that nothing matters at all -- estimates, process, etc. -- do whatever keeps you/the team motivated.
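To give a flavor of the top-down, size-based approach mentioned in the list: COCOMO II (linked above) really needs a calibrated tool, but the original basic COCOMO '81 equations are simple enough to sketch. The coefficients below are the published "organic mode" constants; treat the output as a rough sizing exercise, not a schedule.

```python
def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Basic COCOMO '81, organic-mode coefficients by default."""
    effort = a * kloc ** b       # person-months
    schedule = c * effort ** d   # calendar months
    return effort, schedule

effort, months = basic_cocomo(10)  # a 10-KLOC project
print(round(effort, 1), round(months, 1))
```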
But, please, if you do a demo video please include an alternative way to learn about your product. I recommend either a list of features and/or screenshots.
I can't always watch video:
1. Sometimes I don't have the time
2. Sometimes I don't have audio
2a. My headphones are packed away
2b. I'm using my cell phone in a public place
3. You have 30 seconds to sell me, or at least get me to learn more. If your video is 2 minutes, make sure you don't bury the lede. I have no idea if this is the case here because I couldn't watch the video (see: 2a) :(
I'm using "I" and "my" here but I'm sure I'm not alone. Same goes for the trend in news sites now to only have a video on their website and no summary.
Bug report: if you enter a non-integer value it returns NaN. E.g. I entered 1.5 for a worst-case estimate and got NaN.
Would be perfect with an export function to send it to the client.
Distribution - general term for the shape of the relationship between probability and task length.
Triangular distribution or PERT - the specific shape of least/greatest/most likely you are using
Risk or Probability - the general concept that things are not definite and involve chance.
And where you are using "Total" for each item, you want "Mean" or "Median" depending which it is.
And I don't care to know the total mean or median time for scheduling, because the project will exceed that time 50% of the time (if it's the median; more often than that if it's the mean with long-tailed distributions). I would instead want some high quantile, chosen so that the chances of exceeding the schedule/budget are low; which quantile I pick depends on my project and on what's at risk if the schedule is exceeded.
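A quick sketch of why the median is optimistic under long tails. The lognormal here is just an assumption, picked as a convenient right-skewed shape, and the parameters are made up:

```python
import random

rng = random.Random(1)
# Simulated task durations (hours) with a long right tail.
durations = sorted(rng.lognormvariate(2.0, 0.8) for _ in range(10_000))

median = durations[len(durations) // 2]
p90 = durations[int(0.9 * len(durations))]

# Scheduling to the median means blowing the schedule ~50% of the time;
# a high quantile like the 90th percentile is the number to commit to.
print(round(median, 1), round(p90, 1))
```

On this distribution the 90th percentile comes out well over twice the median, which is the gap between "half our projects are late" and "one in ten is late".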
(let [estimate XXX]
(* estimate 2))
bestcase * 1.3 + (worstcase - bestcase) * 0.5
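Both rules of thumb in this thread, written out as functions. These are pure heuristics, not derived from anything:

```python
def double_it(estimate):
    """The Clojure snippet above: just multiply the estimate by 2."""
    return estimate * 2

def weighted_range(best_case, worst_case):
    """Pad the best case by 30%, then add half the best-to-worst spread."""
    return best_case * 1.3 + (worst_case - best_case) * 0.5

print(double_it(5))            # → 10
print(weighted_range(10, 20))  # → 18.0
```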