
Interestingly, Thoughtworks have observed that over a project they get the same consistency from simply counting the number of tasks as from summing effort estimates for those tasks. The key for project management is therefore to focus on maintaining a backlog of manageable tasks and on throughput, not on estimating each task individually.
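
A rough sketch of that effect in Python (invented task sizes and noise, not Thoughtworks' data): once the backlog is large, forecasting total effort from a count of completed tasks lands about as close as forecasting from summed estimates:

  # Hypothetical backlog: each task gets an estimate and a noisy actual effort.
  import random

  random.seed(1)
  tasks = []
  for _ in range(200):
      estimate = random.choice([1, 2, 3, 5])
      actual = estimate * random.uniform(0.5, 2.0)
      tasks.append((estimate, actual))

  total_actual = sum(a for _, a in tasks)
  done = tasks[:100]  # pretend the first half is finished

  # Forecast 1: effort per estimate point, scaled by total estimated points.
  rate_by_points = sum(a for _, a in done) / sum(e for e, _ in done)
  forecast_by_points = rate_by_points * sum(e for e, _ in tasks)

  # Forecast 2: effort per task, scaled by total task count.
  rate_by_count = sum(a for _, a in done) / len(done)
  forecast_by_count = rate_by_count * len(tasks)

  print(f"actual total effort: {total_actual:.0f}")
  print(f"forecast via points: {forecast_by_points:.0f}")
  print(f"forecast via count:  {forecast_by_count:.0f}")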



We're finding this out in a lot of places. The key issue is variability. Larger sample sizes usually lead to less variability, which means flow-based systems will hold up better.

If you've broken your work out into tasks instead of stories, presumably you have a lot of them, and they're mostly the same size across a large set. So sure, that should work fine.

If, however, you have a small number of highly variable chunks of work, then flow-based systems fail. It all depends on the nature of the item pool.
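
A quick simulation of that point (made-up distributions, not real data): a count-based forecast is stable when the pool is many similar-sized tasks, and much noisier when it's a handful of lumpy items:

  # Compare relative forecast spread for two hypothetical item pools.
  import random, statistics

  random.seed(2)

  def forecast_spread(pool, sample_size, trials=1000):
      """Forecast total effort from a sample's mean item size; return the
      standard deviation of the forecast relative to the true total."""
      total = sum(pool)
      forecasts = [statistics.mean(random.sample(pool, sample_size)) * len(pool)
                   for _ in range(trials)]
      return statistics.pstdev(f / total for f in forecasts)

  many_similar = [random.uniform(1, 3) for _ in range(200)]   # lots of small tasks
  few_variable = [random.uniform(1, 40) for _ in range(12)]   # a few big lumpy items

  print("spread, many similar tasks:", round(forecast_spread(many_similar, 50), 3))
  print("spread, few variable items:", round(forecast_spread(few_variable, 3), 3))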


Summation can appear to reduce error without actually reducing it. I wrote an article about different ways of measuring estimate accuracy [1] that explains the difference.

It's a question for each business as to whether that matters.

[1] http://confidest.com/articles/how-accurate-was-that-estimate...
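
A toy illustration with invented numbers: summing lets over- and under-estimates cancel, so the aggregate looks accurate even though no individual estimate got any better:

  estimates = [5, 3, 8, 2, 13]
  actuals   = [7, 2, 6, 3, 12]

  per_task_abs_error = sum(abs(a - e) for e, a in zip(estimates, actuals))
  total_error = abs(sum(actuals) - sum(estimates))

  print("sum of per-task absolute errors:", per_task_abs_error)  # 2+1+2+1+1 = 7
  print("error of the summed estimate:  ", total_error)          # |30 - 31| = 1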


That's an interesting finding. Do you have a link to the research?






