
Mathematical Limits to Software Estimation (2013) - lifeisstillgood
http://scribblethink.org/Work/Softestim/softestim.html
======
ktRolster
His point is that _"objectively estimating program size/complexity is not
possible."_ The 'objectively' qualifier is important.

And while that's true, it's not a very useful result. There is a large subset
of programs that _are_ easy to estimate. If someone says, "write 20 standard
CRUD services," most people can accurately estimate how long that will take.
There is nothing tricky there. If someone says, "write a program to solve
Goldbach's conjecture," then it is true, you can't estimate how long that will
take (maybe someone can, I don't know how).

In practice, for the kinds of tasks programmers usually do, experience shows
that 90% of the time we can estimate correctly.
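The Goldbach example can be made concrete: _checking_ the conjecture up to any fixed bound is a bounded, easily estimated task; it's the open-ended proof that nobody can schedule. A minimal sketch (my own illustration, not from the comment):

```python
# Verifying Goldbach's conjecture (every even n > 2 is a sum of two primes)
# up to a bound is a small, estimable program. Proving it for ALL n is an
# open problem -- which is why "solve Goldbach's conjecture" can't be
# estimated, while "check it up to 10^6" can.

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return primes (p, q) with p + q == n, for even n > 2."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None  # a counterexample -- none has ever been found

# The bounded version is a perfectly ordinary, estimable task:
assert all(goldbach_pair(n) for n in range(4, 10_000, 2))
```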

~~~
abainbridge
Imagine you've lost your keys. You've already looked everywhere for them. Now
how long will it take you to find them? This is what software estimation feels
like to me. Simple tasks regularly turn into "find out why this simple code
doesn't work". From there, you have no idea whether the solution will come in
minutes, weeks, months or never.

It is still usually necessary to at least try to estimate as well as we can.
And the higher level the task, the more the "detailed crap" averages out. But
it is still not easy. And delivering software on time is usually as much about
changing the spec as it is about doing the coding.

~~~
adrianratnapala
I don't even know that "detailed crap" averages out, because crap happens at
all scales.

For example: we launched into an embedded project and were ticking along
nicely, and suddenly discovered that we actually had to program a whole extra
device, which we had thought came with all built-in firmware.

------
psyc
Estimation is a problem of horizons. Before you begin, the solution is N
horizons away. You don't know what N is, and you don't know what's over the
first horizon.

~~~
joe_the_user
That is the problem with any old estimation problem.

I think that the way a software engineering estimation problem _feels_ is that
you begin with N components that will combine to create the cost. You add or
multiply the components together to get your estimate. It seems reasonable.
Then suddenly one of those components turns out to contribute ten times as
much to the cost, and your estimate is doubly shot: shot in terms of time and
shot in terms of what you will be spending your time on. These two things
together tend to make bosses and customers unhappy.
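With hypothetical numbers (a toy sketch, not real project data), the blow-up looks like this:

```python
# Five components, each estimated at one week. If a single component turns
# out to cost 10x, the total nearly triples -- even though the other four
# estimates were exactly right.

planned = [1, 1, 1, 1, 1]   # weeks per component, as estimated
estimate = sum(planned)      # 5 weeks

actual = [1, 1, 10, 1, 1]    # one component blows up by 10x
spent = sum(actual)          # 14 weeks

overrun = spent / estimate   # 2.8 -- the estimate is "shot"
```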

It would be nice if one had mathematical insight into such a process.
Unfortunately, I'm not sure how much big-O-type analyses give us, since
they're worst-case scenarios. By the halting problem, we know there's no
general way to determine whether a given program is going to crash. Yet we
produce mostly bug-free programs, at least sometimes.
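The halting-problem claim is the classic diagonal argument. Sketched in Python purely as an illustration (the `halts` oracle here is hypothetical and cannot actually be implemented):

```python
# Suppose halts(f, x) could decide, for any function f and input x, whether
# f(x) eventually returns. The function below would then contradict the
# oracle on itself, so no general halts() can exist.

def halts(f, x):
    # Hypothetical oracle -- by Turing's argument, it cannot be written.
    raise NotImplementedError("no general halting decider exists")

def paradox(f):
    if halts(f, f):        # if the oracle says f(f) halts...
        while True:        # ...loop forever,
            pass
    return None            # ...otherwise halt immediately.

# paradox(paradox) halts if and only if halts(paradox, paradox) says it
# doesn't: a contradiction either way, so halts() is impossible in general.
```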

------
heisenbit
Software creation is a process in which humans constantly make decisions. Each
decision has the potential to impact the scope. A lot of decisions have
limited impact and may be estimated. Some decisions at the margin have the
potential to significantly alter and blow up the scope. Not all critical
decisions can be identified in time and there is a good chance of unforeseen
consequences. Containing those is vital.

For large-scale software projects, management and governance are often the
determining factors for quality, cost, and timeliness.

------
DonaldFisk
I posted a link to this yesterday in the thread Project delays: why good
software estimates are impossible (2015):

[https://news.ycombinator.com/item?id=12711732](https://news.ycombinator.com/item?id=12711732)

