
Forecast, don't guesstimate your software projects - casseys
https://www.reaktor.com/blog/forecasting-method/
======
idoby
TFA uses a lot of words to say very little.

I don't care if your estimate is drawn from the hip or projected using a state
of the art Monte Carlo or machine learning model. It's still an estimate.
Still, any number of things that weren't in your model could shift the
deadline: people getting sick for a long time, people quitting, people getting
promoted out of their critical role, organizational dependencies not
delivering on time (other dev teams, legal etc), all of these things have
happened to me, and when they do, they can throw off your project for
_months_.

No model can take these things into account, and if one did, it would yield an
estimate like "three weeks up to a year", which is useless, and I didn't need
your SOTA model to get that answer. Unless you're really only doing cookie
cutter stuff, the best form of estimate I've seen used is continuous
estimation + being willing to cut features to make it to deadlines with
something usable, even if incomplete (build a bike, not half a car). This
isn't always possible, but when it is, it saves a lot of headache and makes
everything run smoother. But it starts with accepting the fact that you don't
know everything from the start.

~~~
erpellan
The article is about forecasting not estimation. That's the point. Don't
estimate. Measure.

It usually goes without saying that most forecasts do not include provisions
for black swan events. It's generally assumed that going bankrupt or other
project externalities will have an impact on the delivery.

~~~
dirklectisch
Author of the article here.

I agree with this response. We are normally not asked to predict for
situations where something big changes in the team. But of course I
acknowledge that these things do happen. When you have a stable team, the
numbers that this method yields are also very stable.

~~~
qhalCAZ
I agree with the top comment. The "method" is basically:

 _Instead you can look at the team’s historical data and apply statistical
techniques._

Except that is already what every experienced developer is doing, albeit in an
intuitive way.

Intuition is superior here, because statistical models don't work for creative
domains, and anyone who claims otherwise has something to sell.

~~~
bumby
My experience is that most project managers take a non-probabilistic approach.

Say you have your usual list of breakdown tasks and assign a time/budget
estimate for each in terms of “low”, “most likely”, and “high”. The intuitive
answer is to sum up the “most likely” for your total estimate. However, this
ignores the probability that a delay in one task affects others.

Instead, if you take into account the covariance relationship between tasks
(using historic or simulated data) you often find that “most likely” summation
has quite a low probability of being met. For the org that applied this, there
was less than a 20% chance we’d meet or beat that intuitive estimate. No
wonder we were chronically over budget and over schedule!
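
A minimal sketch of why this happens (all numbers are hypothetical three-point estimates, and the correlation between tasks is induced crudely through a shared per-project shock rather than fitted to real data):

```python
import random

random.seed(0)

# Hypothetical three-point estimates (low, most likely, high) in days.
tasks = [(2, 4, 10), (3, 5, 14), (1, 2, 6), (5, 8, 20)]
most_likely_total = sum(m for _, m, _ in tasks)  # the intuitive answer: 19 days

def triangular_inverse(u, low, mode, high):
    """Inverse-CDF sample of a triangular distribution at quantile u."""
    c = (mode - low) / (high - low)
    if u < c:
        return low + (u * (high - low) * (mode - low)) ** 0.5
    return high - ((1 - u) * (high - low) * (high - mode)) ** 0.5

n = 20_000
hits = 0
for _ in range(n):
    # A shared "project-wide" shock crudely models the covariance between
    # tasks: when one runs long (sick leave, unclear specs), others tend to.
    shock = random.random()
    total = sum(
        triangular_inverse(0.5 * random.random() + 0.5 * shock, lo, m, hi)
        for lo, m, hi in tasks
    )
    hits += total <= most_likely_total

print(f"P(total <= sum of most-likely) ~ {hits / n:.0%}")
```

With any meaningful positive correlation between tasks, the chance of hitting the sum-of-most-likely figure lands well below 50%, which matches the experience above.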

~~~
myth2018
I've been reading "Software Estimation: Demystifying the Black Art" by Steve
McConnell.

He introduces a distinction that, at least for me, has been instrumental:
estimates and plans are different things.

Estimates are honest, based on past performance data, and probabilistic by
their very nature.

Plans are, on the other hand, built with a target date in mind, taking into
account the estimate previously made, desired delivery dates from customers
and everything we are so used to.

By planning the fulfillment of tasks closer to the estimates, you decrease the
risk of the plan failing. You can build a shorter schedule and assume that
staff will work overtime, assume more optimistic estimates, and so on, but
then the risk of failure will be higher. Such a risk will, of course, never be
zero.

It's a simple distinction, but it has important implications. We no longer
feel pressure to make pessimistic, and therefore dishonest, estimates out of
fear of being pressed to cut the schedule. It also gave us a better
argumentative tool for negotiating schedules with our clients.

I think it's also useful for making all the probabilities a bit clearer to
project managers. It's like "OK, I know that you need me to commit with a
delivery date, but I'm also going to make clear to you that there are some
risks involved and I wanna make everybody aware of them"

~~~
bumby
That’s an important distinction. The way we handled it was by letting managers
define their acceptable level of risk and then use the model to define the
estimates in that context.

For example, if they were ok with a 60% chance of making or beating a cost
estimate, the forecast could be much more aggressive than with, say, a
management expectation of a 90% chance of being on budget.
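
For a concrete sketch (the lognormal samples here are a stand-in for real simulated project data), pulling an estimate at a chosen confidence level is just a percentile of the simulated cost distribution:

```python
import random

random.seed(1)

# Stand-in for a simulated cost distribution (e.g. the output of a Monte
# Carlo over the task breakdown); lognormal gives the usual right skew.
simulated_costs = sorted(
    random.lognormvariate(mu=4.6, sigma=0.3) for _ in range(20_000)
)

def budget_at(confidence):
    """Cost you'd come in at or under with the given probability."""
    index = int(confidence * len(simulated_costs)) - 1
    return simulated_costs[index]

# A 60% risk appetite allows a much leaner budget than a 90% one:
for confidence in (0.60, 0.90):
    print(f"{confidence:.0%} chance of making budget: {budget_at(confidence):.0f}")
```

The gap between the two numbers is exactly the negotiation space between an aggressive and a conservative forecast.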

~~~
myth2018
Thanks for sharing this. I think I'll experiment with presenting the situation
to a customer using such a model as soon as I have an opportunity. Sounds good.

~~~
bumby
This might be helpful:

[https://www.nasa.gov/pdf/741989main_Analytic%20Method%20for%...](https://www.nasa.gov/pdf/741989main_Analytic%20Method%20for%20Risk%20Analysis%20-%20Final%20Report.pdf)

It’s a straightforward enough primer that it can be done in Excel, including
simulating the data if necessary.

Even if this type of model is too simple for actual estimation, it’s a useful
(and sobering) tool to help managers understand why their intuitive estimates
can so often be incorrect.

------
kmclean
> And here’s the brutal question: what good are estimates if they hardly ever
> align with reality? You could have spent that time on building software.

No one has ever been able to answer this question for me.

In my experience the accuracy of estimates is highly variable. Experienced
developers who know each other and the system they're working on well tend to
offer more realistic estimates, but even on the most smoothly run teams it's
still fundamentally guesswork.

From my perspective it seems like the only real effect of estimating units of
work is making developers resent people outside their team. Sure, it gives
product people a number they can say when they get harassed about when
something will be done, but it's no more accurate than one they could have
just made up on their own.

I don't think I see any fundamental difference between estimating and what
this author calls forecasting in this respect. It does seem like generating
these metrics could consume less time than meeting to make up estimates, but
it's not obvious to me that it always would be.

What real value does this add to any business? Any product or business people
here? I'm genuinely curious what the purpose of estimating is. It feels like
nobody is winning. Product and business people get annoyed by missed
"estimates" because what they really want is to know the future, which is
impossible. Developers resent being asked to predict the unpredictable. Not to
mention I don't know any programmers who prefer talking about doing stuff over
actually doing stuff -- all the meetings that come with management styles that
include things like estimates feel like an enormous waste of money.

Who is winning here?

~~~
hn_throwaway_99
There are two very important reasons for estimation:

1\. Coordination with other teams or real-world events. In most decent-sized
companies there are many people involved in a product or feature launch:
product marketing, brand marketing, support, not to mention other dev teams.
Many times these teams will need to broadcast external, hard dates (e.g. "our
PR firm lined up exclusives with journalists on date X"). Being substantially
late can affect lots of other people. However, importantly, this is not always
the case. If you're estimating for the benefit of coordinating teams, _be very
explicit about which other teams depend on your estimates, and why_.

2\. To prioritize and decide which features to build. Prioritization is solely
about balancing expected benefit with expected cost, so if you're wildly off
about the cost, your time may have been better spent building something else.
Again, though, if you underestimate by a bit, but in the end come up and say
"I wouldn't have really worked on things in a different order anyway" then
nothing was really lost. I can definitely think of cases, though, where if I
really knew how long something would take at the start I would have chosen to
do something else, or at least do it in a different way.

~~~
pif
> our PR firm lined up exclusives with journalists on date X

That is the problem.

Wait for the product to get to an acceptable form, and then call the press in
for tomorrow!

~~~
szaroubi
Getting a PR or marketing campaign up must be done in parallel with the dev
work. It can take a few months to find the right journalists at the right
publications, get their attention, and get their time. On another note, some
product launches are also timed to fixed-date events (back to school in the
media industry, CES in consumer electronics, any other expo in any other
industry), so it is key to time the dev deliverables. And if you need to
choose between two features to develop, well, you need estimates as inputs.

Estimates are not perfect, and you should not spend 5 person-days to plan 2
months of work. Planning can be done with guesstimates and best effort, and
when done right, it is super useful.

------
hliyan
The disdain for estimates in some of the comments here indicates an
organizational problem in the way those estimates are solicited, rather than a
problem with the idea of estimating work itself.

I used to work for an organization where engineers so universally loathed the
idea of estimates that I was convinced that it was just a necessary evil. But
a few months ago I joined an organization where the relationship between
engineering (which I'm in) and business is not very adversarial. Here,
estimates are a tool to answer the question "how much work should we take up
in the next sprint without burning ourselves out?". The estimation process is
not loved, but it's not hated either.

~~~
mdorazio
1) Most of HN is young and has never worked on the business side of an org
where they had to do things like lose a customer because the feature they
wanted is taking too long to finish or repeatedly push back multiple expensive
dependent non-engineering projects because engineering keeps hitting
development roadblocks.

2) A lot of developers really hate committing to any kind of estimate because
they're really just not that good at estimating for a variety of reasons and
getting stuck in a commitment that turns out to be unrealistic really sucks.

3) In my experience a good portion of orgs really are bad at business-
engineering interactions and end up in adversarial relationships like you're
talking about where engineering is seen as "those guys that refuse to commit
to any deadline and suck at communicating status I can plan around" and the
business owners are "those guys who don't understand what they're asking for
and are constantly trying to micromanage to get blood from a stone". Also in
my experience, neither side is entirely innocent in these situations.

~~~
pif
> A lot of developers really hate committing to any kind of estimate

In development (as opposed to production) I personally fight against any
commitment if it comprises the "what" as well as the "when".

You fix the date of the next release and provide me with a prioritized list of
bug fixes and features to work on? Nice, let me start to work!

You think that a specific new feature could improve our sales and you let me
drop anything else and call you back as soon as I have any news? Wonderful,
I'm already analysing the issue in my head!

Anything else, and I'll let you know that your dreams are not likely to come
true.

~~~
sooheon
The iron triangle[1] is a well known model. If the "business" side insists on
"engineering" breaking it, they are not serving their function.

[1]:
[https://en.wikipedia.org/wiki/Project_management_triangle](https://en.wikipedia.org/wiki/Project_management_triangle)

------
randomsearch
I thought that we’d solved this problem - don’t do estimates but rather adjust
what we build to fit the available time. And estimates of what is going to be
shipped get more and more accurate as time goes on (because you can ship what
you’ve already built), whereas estimates of time remaining can continue to be
wildly inaccurate for as long as you like (scope creep etc).

Then, at the end of the allotted time, you have a product. Maybe it’s not good
enough. So you either give up or you allot more time to it. But the product
should be usable as-is and solve some of your problems.

The problem I have with the above is that it requires developers to be honest and
put in consistent effort, but that’s more of a leadership and motivation
issue.

~~~
idoby
I agree, but that's still a form of estimation: you're just estimating that
you'll be able to deliver your list of features in the time you have. You
could be wrong, and then end up with fewer features, or be forced to spend
more time.

~~~
waynesonfire
I don't think the claim was that we're no longer estimating. The author was
attempting to articulate the process of estimation, where initially one
promises the world in N amount of time. As N draws near, you reduce features
and scope as needed, and perhaps when the deadline hits and all you've got is
int main(void) { printf("hello world"); return 0; }, you figure out what you
can make do with where you're at. I like this approach, and it resonates with
a lot of other great comments in this post.

------
dpenguin
99% of teams that need estimates are not in a phase where they are designing
something that is hard to estimate. And those that are, are usually prudent
enough not to bother R&D with project planning (yet).

99% of engineers are working on things that CAN actually be estimated fairly
accurately. New CRUD APIs? New ETLs? New React component? New integration with
a product that has a published API? New service? New message format? New
protocol? Whatever. Just break it down enough until it’s very clear what it
takes to get it done. There will be a few unknowns but you can call them out
as risks. Try to eliminate risks first and raise a flag so your manager/PM can
adjust his forecast.

The thing is, everyone wants to be “working on the next big thing” so much
that they actually believe they are working on something groundbreaking that
cannot be estimated. Either that or they are not qualified for the job yet
(hence they need to learn something new, which can be unpredictable). They
just need to be grounded, and it’s a fairly easy journey. They are probably
all emulating the symptoms of the 1% by believing they cannot estimate their
work.

~~~
quadrifoliate
> Either that or they are not qualified for the job yet (hence they need to
> learn something new, which can be unpredictable)

IMO this attitude is pretty prevalent, and _extremely_ harmful to software
engineers. Software frameworks, methodologies, and techniques change very fast
[1], and if you think that someone who needs to learn something is unqualified
for the job, you are going to have an inherently adversarial relation with the
engineers on your team.

Learning is and should be a normal and expected part of the _day-to-day_ job
of a lot of software engineers today. I think this contributes a fair bit to
the problem of estimation.

[1] In contrast, I haven't seen a ton of novel developments in project
management since the early 2000s; we are still beating the drum of "Do Agile
Not Waterfall" almost 20 years later.

~~~
Groxx
Yeah, the status-quo for me for several years has been "I don't know" because
_most_ things I build end up needing to debug something I don't have an
immediate answer for, or construct something new, or talk to a new team, or
find a new lib, or change a new setting, or migrate to a new platform. If I'm
repeating work I'd done many times before, that's something that should be
made more flexible and automated so I don't need to do it any more. (these of
course exist, but you're pretty easily replaceable if this is a super-majority
of what you do daily)

So my estimates are pretty much always "if it's well-built, documented, and
fits what I expect, a week or less. If not, up to 2 months or more, but I
should know more in 2 weeks". Getting a more accurate estimate quite often
means spending as much or more time investigating than it would take in the
optimistic case, so usually the answer becomes "give it a shot and we'll
decide again later". Sometimes I have good news, sometimes not.

\---

We sit on top of _millions_ of lines of code, changing faster than we can
_read_ much less _understand_. You cannot know it all, nor is it worthwhile to
try in the vast majority of cases. Poking into new territory is common, and
the ability to do so effectively is a super important skill.

~~~
dpenguin
Code organization (part of architecture) matters a lot here, especially in
large code bases. Software is meant to be “soft”, i.e. easily pliable. To
achieve that, one should be able to alter a module without having to
understand the whole system. If a small task takes too long, there’s a problem
with the software architecture or code organization.

~~~
sooheon
I think GP's "millions of lines of code" references the code that runs all the
systems outside of your particular org's architecture.

~~~
dpenguin
Got it. Better to stick to tried and tested “boring” stuff and actually code
small functionality yourself than to chase a thousand-line library for trivial
stuff.

At work, someone used a 3rd-party library to send stats to a server in JSON
format and “standardized” on it. Hundreds of developers then had to learn this
new library and its APIs, and one bug caused everyone a bit of hassle. It
probably made the original dev’s job a little easier but made it a bit harder
for everyone else. These things do happen. The point is to learn from this and
make better decisions over time, not give up and say we can’t get better.
Trying to provide an estimate forces you to think more carefully and become
better over time.

~~~
sooheon
A poor library choice or two is not even 1% of the systemic dependencies any
system has if it's working at a reasonable level of abstraction.

> "not give up and say we can’t get better"

Not sure why you feel the need to state this straw man? That is not my
opinion, nor do I think it is the GP's.

~~~
dpenguin
It was not a counter argument to anything per se. Just pointing out that even
with episodes of failed estimates, there is reason to continue estimating.

------
steve_taylor
The purpose of providing an estimate is to get the person asking off your
back, so you can go back to getting work done. Beyond that, there’s little to
no value. People ask for estimates because lies are strongly preferred over
uncertainty.

~~~
harryh
Estimates are useful in software businesses for a wide variety of important
reasons:

\- so you can tell the customer when they can reasonably expect to get
something they need

\- so you know when you will need to be ready with the next project on the
agenda

\- so that you will know when you might consider adding resources to a project
to make it go faster (yes, this is not always possible, but it sometimes is)

\- so that you will know if your funding runway is sufficient for your R&D
needs

\- so that you can compare two or more different projects when deciding what
to work on; time to completion can be an important consideration in these
decisions

------
hrktb
> For example, the historical throughput of your team can be used to run a
> simulation of future throughput. Some outcomes will be more likely than
> others and you can use these numbers to come up with statements like: “41
> items or more can be completed in 30 days with a certainty of 85%”.
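
A minimal sketch of that kind of simulation (the throughput history is invented for illustration; the technique is bootstrap resampling of past days):

```python
import random

random.seed(42)

# Hypothetical history: work items completed per day over the last month.
daily_throughput = [0, 2, 1, 3, 0, 1, 2, 4, 1, 0, 2, 3, 1, 2, 0, 1, 2, 1, 3, 2]

def simulate(history, days=30, runs=10_000):
    """Bootstrap: resample past days to build possible futures."""
    return sorted(sum(random.choices(history, k=days)) for _ in range(runs))

outcomes = simulate(daily_throughput)
# The 15th percentile: 85% of the simulated futures finish at least this many.
with_85_certainty = outcomes[int(0.15 * len(outcomes))]
print(f"{with_85_certainty} items or more can be completed in 30 days with 85% certainty")
```

The quality of such a statement, of course, depends entirely on how representative the history is of the future, which is what the rest of this thread is about.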

As usual, there are two base assumptions to this:

\- your work environment didn’t and won’t have any significant change for a
long time (tools, requirements, team composition etc.)

\- your team’s output is somewhat stable, with no months of 300 tickets and
others of only 50.

In my experience, these two assumptions are surprisingly rarely justified in
any decently growing company.

In particular as company grows, tools change, organization and processes
change, people change. On top of that there is often seasonality.

We can still throw around forecasts, but IMO in these conditions they have no
more legitimacy than random guesses.

~~~
dirklectisch
Author of the article here.

Significant changes will indeed make your forecast not come true. But mostly
people are not asking to predict for situations with significant changes.

Incremental changes will just be incorporated in the forecasts over time. The
forecast continually adjusts to new information becoming available.

It's interesting to know that after about ten completed work items the numbers
become pretty stable. So it's easy to reset or adjust the forecast in case of
a big change in the team.

I was personally surprised to learn how stable the output of a team is if its
composition doesn't change. Tools have some impact on productivity, but not so
big that you have to throw away your predictions. And no one complains anyway
if you over-deliver a little.

~~~
hrktb
> people are not asking to predict for situations with significant changes

That’s for upcoming changes. Your predictions will also be of poor quality
after a significant change, as you lose the link with historical data.

In the end there might be only a small window between two changes where they
are worth anything.

To be clear, I am not saying you shouldn’t make predictions anyway, just that
the effort to come up with “a system” is not worth it in a lot of situations.

For context, the tech industry has one of the highest turnover rates [0]; a
team losing or gaining a member is not some rare event. A new boss coming in
to change processes or teams isn’t either.

[0] [https://business.linkedin.com/talent-solutions/blog/trends-and-research/2018/the-3-industries-with-the-highest-turnover-rates](https://business.linkedin.com/talent-solutions/blog/trends-and-research/2018/the-3-industries-with-the-highest-turnover-rates)

------
cromulent
Also see the classic Joel on Software article from 2007 - _Evidence Based
Scheduling_

[https://www.joelonsoftware.com/2007/10/26/evidence-based-scheduling/](https://www.joelonsoftware.com/2007/10/26/evidence-based-scheduling/)

It's still in FogBugz.

[https://www.fogbugz.com/Evidence-Based-Scheduling](https://www.fogbugz.com/Evidence-Based-Scheduling)

------
harikb
The only projects I can accurately forecast are the ones I have already
implemented elsewhere, meaning 90% is a rehash of something done before.
Companies do this because... well... reasons. Anything else
novel/unique/innovative by definition can’t be predicted well, because if you
knew everything from the start, there wouldn’t be any innovation being made.
Many of the good companies that exist today pivoted from a broken prior idea
or discovered a market while experimenting. This applies to every part of
software development.

~~~
dpenguin
How often do you _really_ do novel/unique/innovative stuff? Honestly.

~~~
brazzy
There's pretty much always something novel or unique in the way things are put
together. I recently had a project that was at first glance a pretty trivial
"enter data, verify it, package as XML and send to a backend". Until we
realized that the combination of privacy requirements and an industry spec
would have required us to do schema validation of the XML in the browser. And
apparently the only Javascript library which does that is machine translated
from C, not very stable and doesn't support UTF-8.

~~~
tannhaeuser
O/T but was it XML Schema specifically, or just XML DTDs that you needed to
validate against? For the latter case, there's [1] (my proj), plus the option
to rewrite XSDs into DTDs (under natural assumptions eg no local element
redefinitions) without having to wrestle with Xerces and/or libxml (which
hasn't formally completed XSD support anyway) and emscripten. You might also
have luck by transpiling Xerces/Java to JavaScript using Google's
j2cl+closure-compiler (which works very well if you can swallow bazel and/or
isolate the actual j2cl Java app from its bazel wrapper).

[1]: [http://sgmljs.net](http://sgmljs.net)

------
Tade0
_Some outcomes will be more likely than others and you can use these numbers
to come up with statements like: “41 items or more can be completed in 30 days
with a certainty of 85%”._

I did something similar recently:

I started by taking the 20 tickets we closed and plotting story points vs. the
actual time they took. Results were as follows:

\- 8s were mostly garbage, but consistently took more than one sprint.

\- 1s and 2s were also garbage but always took less than a 5.

\- 3s and 5s were fairly consistent, but at the same time could take up to
100% more.

Conclusion: split up all 8s if possible, treat 1s and 2s as 3s.

Second, I removed the outliers and created a distribution of point-per-sprint
rates.

The result: 5.6 points per sprint with a standard deviation of 2.5 - pretty
bad, but from that I could tell what our chance of delivering more than a
given number of points per person per sprint was (assuming a normal
distribution):

85% for 3 points. 60% for 5 points. 17% for 8 points. 4% for 10 points.

Conclusion: per sprint we should either take one 5 point task or two 3 point
tasks, but avoid 5 + 3 and never take on a 5 + 5. Also take a 3 if we didn't
deliver the previous sprint.
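
Those percentages are just normal tail probabilities; a quick check with the stdlib (using the 5.6 mean and 2.5 standard deviation above) reproduces the figures to within a point of rounding:

```python
from math import erf, sqrt

MEAN, SD = 5.6, 2.5  # points per person per sprint, from the history above

def p_at_least(points):
    """P(delivering >= points) under the normal assumption: 1 - CDF(z)."""
    z = (points - MEAN) / SD
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

for pts in (3, 5, 8, 10):
    print(f"P(>= {pts} points) = {p_at_least(pts):.0%}")
```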

Last point: some outliers were estimated somewhat highly (5), but took much
less. One feature they had in common was the word "analysis" in the ticket
title.

I guess the lesson here would be to ask: are we really planning on taking a
whole sprint(two weeks) to analyse something?

------
monoideism
I appreciate new approaches to software estimation. But while this is an
interesting approach, it fails to adequately address the primary issue:
defining a "unit of work".

If it were possible to predict, with any degree of confidence, that task A (to
do) is approximately equal to, or a 1/x fraction of, task B (completed), then
the estimation problem would not be nearly so difficult to begin with.

Yet that's what this approach requires.

So cool idea, but not terribly applicable to actual projects in my opinion,
unless you're able to break down projects into easily definable and
predictable units of work (and yes, I know that there are various frameworks
that attempt to do just that, with mixed results).

~~~
dirklectisch
Author of the article here.

Unit of work is explicitly vague because it refers to the units you are using
in your project, be it user story, requirement, epic, bug etc.

You do not need to know the size of each unit. In fact the method I describe
acknowledges that there is variance in the size of each unit.

Some people advocate "same-sizing" units of work. I have always thought that
was a weird idea because it would imply making small units bigger. How would
you make the unit that describes changing the color on a button the same size
as integrating with an API? You couldn't, even with perfect knowledge of the
amount of work required.

~~~
lordlic
From TFA:

> Throughput is the amount of work that comes out of your project by unit of
> time. It’s up to you to decide what units make sense for your context. Days,
> weeks, sprints, stories, bugs, epics – anything goes as long as you’re
> consistent.

Using days/weeks/sprints as your unit of work for determining throughput seems
circular. If you want to know how many weeks of work your team can produce per
week then you don't need a monte carlo simulation to tell you that.

Using stories/bugs/epics is flawed too, I think. You can have a fantastic
model for your team's throughput in stories per week, but that doesn't tell
you anything about when the project is going to be done unless you know how
many stories there will be. There are two variables here (throughput and
quantity) and you can't get useful information out of the product of them for
free.

To see why, imagine that you take only the minimum amount of effort to very
roughly divide the project into sensible-seeming chunks. In that case, your
throughput in chunks per week will be meaningless (i.e. your model will have a
confidence window which is uselessly wide) because the chunk division will
have barely any relationship at all to the amount of actual work in each
chunk. Now imagine that you're a bit more diligent in your project planning
and look in a bit more detail at the work that will be involved. You've done
some work to clarify what code will need to be written, and your confidence
window will narrow accordingly. Now imagine that you're even more diligent.
And so on, and so on. You eventually end up with zero error in your model, but
in the process you've completely determined what code will need to be written,
and the project is finished! Congratulations, you've invented waterfall.
There's no free lunch here as the intro paragraphs of the blog post promise.

I assume that in practice you're stopping at some point in the middle of the
extremes of doing nothing and planning out every line of code, and then
applying your statistical model at that point, but I think you'll still be
thwarted by the (at that point) _partial_ disconnect between the chunk
divisions and the actual amount of work in each chunk. We've all experienced
innocent-seeming tasks that end up consuming vast amounts of time
unexpectedly, and various degrees of this phenomenon are what fundamentally
tie the amount of error in any pure "stories per week" model to the amount of
effort you spend planning and estimating the stories. No free lunch.

~~~
monoideism
Exactly. I read the article fairly carefully, but the logic seemed circular to
me. For a novel project, the problem of how to divide it up into predictable
units of work remains. It's always the black swan little subprojects that
destroy an estimate, and this approach doesn't help with those unforeseen
events. If I could avoid those reliably, I'd be able to estimate projects
with a high degree of accuracy, even without using this approach.

In my experience, only increased subject matter expertise and a solid team
with lots of experience can minimize the black swan events, and even then not
100%. They still happen with disheartening frequency.

And as you say, it doesn't buy you much if you want to measure how much work
your team does per day/week/month, either, for the reasons you describe.

------
mbeex
As a freelancer, I often do not see the problem in the estimate or forecast as
such, but in the willingness to accept an honest / realistic one. This of
course has a direct impact on my sales position. To make a long story short:
Most people want to be lied to as long as time is what is being sold.

------
jt2190
There are a lot of responses here that boil down to: “Just use your
intuition.” This _is_ a legitimate way to estimate, however, like any
estimation technique, it has weaknesses that make it inappropriate for many
situations.

What’s most helpful is to abandon the idea that there’s “one true way” of
estimating and instead educate yourself about the many estimation techniques
that exist, and when and where to use them.

Also keep in mind that estimates are inherently political, that they can be
used to guide or to destroy, and that may be the dominant factor in the choice
of technique to use.

For a very good introduction to all of these factors (and others), and a
survey of numerous estimation techniques, I recommend reading “Software
Estimation: Demystifying the Black Art” by Steve McConnell.

------
_gtly
FYI: Scrum.org changed its Scrum Guide in 2011 to use the term "Forecast"
instead of "Commitment":

source: [https://www.scrum.org/resources/commitment-vs-forecast](https://www.scrum.org/resources/commitment-vs-forecast)

------
gitgud
Meteorologists have trouble predicting if it will rain in 2 weeks. And that's
with a highly sophisticated model of the earth's weather.

How can a team predict when various ideas will be implemented in a software
system... Too many interdependent things going on in software...

------
qznc
> you can just start by writing down start and end dates of each chunk of work
> in your process

End date is usually easy: "when someone closes the ticket" but maybe "when it
was released" is better?

I have trouble determining the start date though. Some ideas: "when the
customer reported it", "when the ticket is created", "when someone planned a
release date for the ticket", "when the ticket was committed to in a sprint
backlog", "when the ticket is assigned", "when the ticket was assigned to the
one who finished it", "when someone clicked on 'start progress'".

Each possibility comes with its own kind of noise, and some require additional
discipline.

~~~
dirklectisch
Author of the article here.

When starting out, I normally recommend taking a small segment of your process
first and then widening it as you gain more experience.

For the start point I usually try to think of these aspects:

\- Can work still be cancelled at this point in the process? If so, it's
probably not the right spot to measure the start date.

\- Can work still be reprioritized at this point in the process? If so, that
will significantly widen the range of the predictions that the forecast will
make.

Also note that setting the start date at the moment of commitment still gives
you valuable insight into how much work you will be able to take on next.

For the end date, I think the moment the software is released is the most
interesting point to measure. As always, it depends on context: if you do a
yearly release this doesn't make any sense, because the granularity of all
your predictions will be a year.

------
namenotrequired
In short: don't try to estimate how much time each item takes. Calculate the
baseline of past tasks and extrapolate into the future.

It's a good start on the path to more realistic estimations. But it does not
yet satisfy what has in my career always been the main purpose of estimates:
_relative_ cost analysis.

When colleagues ask me how much time a task will take, responding by "well,
tasks take on average 8 days" is not helpful. They need to know _which tasks
are worth the cost_.

How could we come up with estimates that are both evidence-based and relative?
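The baseline-and-extrapolate idea summarized above can be sketched in a few lines. Everything here (the sample cycle times and the `percentile` helper) is hypothetical, not taken from the article:

```python
import math

# Hypothetical cycle times (in days) of recently completed tasks.
cycle_times = [3, 5, 2, 8, 4, 6, 13, 4, 5, 7]

def percentile(data, pct):
    """Nearest-rank percentile: smallest value covering pct% of the sample."""
    ordered = sorted(data)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# The forecast is a range, not a single number:
# "half of tasks finish within p50 days, 85% within p85 days."
p50 = percentile(cycle_times, 50)
p85 = percentile(cycle_times, 85)
print(f"50% of tasks done within {p50} days, 85% within {p85} days")
```

One way toward the _relative_ question might be to compute these percentiles per task category, so that different kinds of work each get their own evidence-based baseline.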

------
lifeisstillgood
Forecasting time to move an item through dev cycle is best done this way,
using historical data.

But if you want to know how long it will take to build "new system we have
never done before", then we are back to estimation, because I have to build
a model of what the system will be - and although I can think I'm accurate, I
won't be ...

In the end, forecasts and estimations are just able to answer two things:
when _won't_ it be ready (i.e. it's definitely going to take longer than two
months) and _have you thought of all the foreseeable risks?_

------
vaidhy
Reading through the comments, it strikes me that we are all talking about
things at many levels of abstraction. Forecasting should work fine for tasks
defined at a fine granularity - we mostly take 5 days when we estimate 3 days.
However, it fails fast when you are talking about a 6-month project. There
are more variables and dependencies involved.

It also seems like the author is not from the US, and their definition of a
stable team might be very different from what I have experienced :)

------
RNeff
I used to say the software project would be done two weeks after the last
change to the requirements and specifications. No more "just one more thing".

------
waynesonfire
I kinda started thinking about how project estimation and deadlines fit into
our process of software development. For example, say you're building some
feature and you guesstimate it'll take you 3 weeks to implement; maybe it's
non-trivial but not too hard. How do you convince yourself, or your
stakeholders, that spending an extra 2x-3x of that time would result in
delivering a better product?

In other words, if all software were built with the bare minimum amount of
time to ship, I think we'd miss out on a lot of quality and a bunch of
features that weren't thought out enough. So there are people who are somehow
able to stop this rushed process of development and posit that the solution
requires additional thought and planning. They go slower to move faster, and
knowing how to do this, I think, is the mark of an excellent engineer.

Another interesting element to this is that when one is granted the additional
time, the outcome progresses our field. As an example, someone spent _a lot_
of time thinking about how to implement a concurrent wait-free list data
structure. How are you to convince your PM that we have to delay the
"onboarding popup" feature because you'll be spending the next 3 months
investigating wait-free data structures?

------
kipply
There's an entire book about forecasting which may also be relevant (not just
to time estimates)

[https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_...](https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction)

~~~
qznc
I would love to have a prediction market for the plans at work. I fear
management would mostly not appreciate such honest assessments, though, and
nobody is prepared to think in probabilities.

------
ssss11
Good estimates come from experienced estimators, and to me that is pretty much
"guesstimating". No one is ever completely correct though; the estimate needs
to acknowledge risks and how they may blow out time, cost or quality, and then
mitigate or manage those risks.

------
robador
Unrelated: why is there no RSS feed for this blog? I can't find it. It seems
they have interesting topics and I'd like to stay posted when there's new
content. Do they expect me to go to their website every day to check?

~~~
dpatty
Here is the RSS feed:
[https://www.reaktor.com/feed/](https://www.reaktor.com/feed/)

~~~
robador
Thanks! Couldn't find it on /blog/feed; it's also not in the source.

------
k__
Everything based on historical data doesn't work for me, because there are too
many variables.

------
choward
Guesstimate? There's already a real word for that (estimate). Is it automagic
too?

------
spaetzleesser
Isn’t this basically what Scrum and story points do?

~~~
ssorallen
Although wait, what is this?

> In one project my team decided to start pair programming when the time it
> took to complete a task was in danger of violating our forecast.

They tried to hurry up to stick to the "forecast"? Okay no thanks, not for me.

~~~
dirklectisch
Author of the article here.

The thinking here was that work that reaches a cycle time that is higher than
85% of the other work we completed is probably worth some extra attention. Is
the person working on it stuck? Is the problem so complex that it might be
useful to sit down and have a look at it together?

We all felt it was a good motivator to work together on a regular basis. Or at
least step up the amount of communication around that unit of work.
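The 85% trigger described here could be computed along these lines. This is only a sketch with made-up ticket names and numbers, not the author's actual tooling:

```python
import math

# Hypothetical data: cycle times (days) of completed work, and the current
# age (days) of each in-progress ticket.
completed = [2, 3, 3, 4, 5, 5, 6, 7, 9, 14]
in_progress = {"TICKET-101": 4, "TICKET-102": 10}

def p85(data):
    """85th-percentile cycle time, nearest-rank method."""
    ordered = sorted(data)
    return ordered[math.ceil(0.85 * len(ordered)) - 1]

# Work older than the 85th percentile of past cycle times gets extra
# attention, e.g. a suggestion to pair on it.
threshold = p85(completed)
flagged = [t for t, age in in_progress.items() if age > threshold]
for ticket in flagged:
    print(f"{ticket} exceeds the {threshold}-day 85th percentile: consider pairing")
```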

~~~
ssorallen
My intuition would be: our "forecasting" solution is never going to be 100%
right. Here's an example where it's completely wrong, scrap the "forecast" and
get the work done. Don't sleep in the office overnight, don't throw more
developers at it; acknowledge no forecasting system ever will get creative
work predictions/forecasting correct.

