Thinking Journeys rather than Estimates (riskfirst.org)
38 points by bobm_kite9 39 days ago | 15 comments

> Eventually, I decide I’ve done enough planning. How? I stop at the point where I’m happy with the risks I’m taking, or unable to mitigate them further.

There are other important places to stop. An important one that planning-eager folks neglect is complexity.

The high-level estimation needs to be light enough that you will redo it, perhaps entirely, if the facts on the ground change. You will need to assign some risk to the plan itself since it will cost some time, energy, money, communication bandwidth, political capital, etc. to change the plan.

How much risk the plan itself carries depends on a few things, but how widely its details are exposed is an important factor. If you've signed contracts with 1000 people promising that you will complete step M on exactly the week of April 8 and it will be entirely free, you have made it hard on yourself to change many things in your plan.

If the prevailing culture interprets estimates as plans and interprets plans as commitments written in blood, nobody will actually be making estimates or plans.

I do agree that estimation comes at a cost, but I suspect that tool support could make that go down a lot.

The last sentence of this article rings true. Do the riskiest parts first, essentially. The rest feels prescriptive and way too rigid, and I have a hard time believing teams would want to stick to it, or would benefit from doing so.

Eliminating risk is awesome though, and even that practice is compatible with estimates if that's what your team is doing. Simply asking the question “what's the riskiest part?” prompts some healthy discussion.

I would qualify that idea.

Momentum is important and you can lose it by tackling risk too late or too early.

When a new team or project is forming, doing the riskiest thing first can become a self-fulfilling prophecy. I suspect that's a big part of why so many of us get wedged trying to start personal projects or businesses. We go toward the risk too quickly in an effort to save time, and then we save loads of it by giving up.

You do want to build toward the risky things, both as a team and individual contributors. Some teams fail to thrive because the senior members continue to take on the bulk of the risk, while new or younger members are forever stuck at mid-level until they quit and join another team.

I wish as an industry we practiced quantitative risk management more. I run a small training workshop about producing better estimates, and the single idea that I want people to take away is that a single-valued estimate is a really terrible shorthand for describing a probability distribution.

If you describe estimates as ranges or distributions, a whole world of risk-analysis tools opens up - PERT, Monte Carlo, etc. More crucially, it allows you to start communicating and managing risk openly with everyone involved, instead of pretending it doesn't exist.
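As a minimal sketch of the Monte Carlo approach (task names and three-point estimates here are made up, and `random.triangular` stands in for whatever distribution actually fits your data):

```python
import random

# Hypothetical tasks with (optimistic, most likely, pessimistic) estimates in days.
tasks = {
    "api":    (2, 4, 10),
    "ui":     (3, 5, 12),
    "deploy": (1, 2, 8),
}

def simulate(tasks, trials=10_000):
    """Sample each task from a triangular distribution and sum the totals."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in tasks.values()))
    totals.sort()
    return totals

totals = simulate(tasks)
p50 = totals[len(totals) // 2]        # median project duration
p90 = totals[int(len(totals) * 0.9)]  # 90th-percentile duration
print(f"50% chance of finishing within {p50:.1f} days")
print(f"90% chance of finishing within {p90:.1f} days")
```

The point is that the output is a distribution, not a number: you can read off whatever confidence level the conversation needs instead of defending a single date.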

Well I’m not going to disagree, obviously. The hard part is figuring out what these distributions look like.

For example, in the summer I got an intern to build an app. It demoed well and people naturally asked when it would be in production.

Working in quite a regulated environment, I knew this wouldn’t be as simple as just firing up a couple of EC2 instances, so I said a couple of months to get it live on the ‘internal cloud’.

It still isn’t live. Maybe someone more experienced than me could have predicted that.

The smaller the team (and by team I mean everyone on the project, not just programmers), the easier estimation is and the more thought you can put into it. In projects as big as the two (simultaneous) ones I'm on, estimation becomes random guesswork. In one project they defined 1400 or so user stories, a portion of which I had to estimate in less than a day, which is basically dice throwing. But this is a huge company, and the ultimate money involved is also huge in both cost and revenue. I wish we could spend more time thinking of it as a journey instead of launching off a cliff.

At the scale of 1400 stories, Little's Law becomes tractable as an estimation tool. As you proceed through the stories you accumulate an average latency and an average throughput; Little's Law says their product is the average number of stories in flight. Since you also know the size of the backlog, you can solve for when the remaining work reaches zero, and that gives a fairly decent estimate of the 50% completion time.

What you couldn't do well is say when particular stories would be completed, or what the 80%/90% etc times would be. You'd need to use something more sophisticated.

I think I understand Little's Law, but I'm struggling to see how it works here - are you saying the backlog size would be predictable?

More that since you know the backlog size, you can use figures on either average latency or average throughput to calculate when the backlog will be drained.
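As a sketch of that arithmetic (all numbers hypothetical, and assuming throughput stays roughly stable):

```python
# Little's Law: L = lambda * W, where L is average work in progress,
# lambda is throughput, and W is average latency. For draining a known
# backlog, the expected time is simply backlog / throughput.

completed = 120              # stories finished so far (assumed)
weeks_elapsed = 10           # observation window (assumed)
backlog = 1400 - completed   # stories remaining, queued or in flight

throughput = completed / weeks_elapsed    # stories per week (lambda)
weeks_to_drain = backlog / throughput     # W = L / lambda

print(f"Throughput: {throughput:.1f} stories/week")
print(f"~{weeks_to_drain:.0f} weeks until the backlog reaches zero (50% estimate)")
```

If you only have latency figures, throughput can be derived the other way around as work-in-progress divided by average latency, which is the symmetry the parent comment is pointing at.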

Serious question, what is the difference between software development and alchemy?

I guess alchemy eventually grew up and turned into chemistry. I really hope the same happens to dev but I’m not expecting it in my lifetime

Quite a lot of difference. Alchemy is science and software development is art

From my dictionary:

Alchemy: the medieval forerunner of chemistry, concerned with the transmutation of matter, in particular with attempts to convert base metals into gold or find a universal elixir.

Cool, so alchemy was like medieval chemistry! But I guess the motive was to make a quick buck by making gold! :)
