The work is never just “the work” (2022) (davestewart.co.uk)
166 points by davestewart on June 1, 2023 | 75 comments



Why is estimating hard?

1. You haven't done it before. Not exactly the same thing. You think you've done it before, but you haven't. If you had actually done it before, you would probably be faster this time, not slower.

2. Other things besides you have changed, or will change. Coworkers, company, customers, tools, economic realities, the weather. Either it's already changed, or it's going to change. That has an impact on deliverability, even for the exact same work. (but it's not the same work, because you haven't done it before)

3. You have changed, or will change. You're literally a different person since the last time you did it, and you haven't taken that into account, and now things take a different amount of work/time.

4. You're fallible. You just remembered wrong, guessed wrong, fucked up, forgot, etc.

5. Estimating is a skill. After a long career, you can look back at what you did and what happened, and look at what you've got now, and make an estimate. The more experienced you get, the better your estimates get. That doesn't mean they're accurate, because you can't control #2-4. But they'll certainly be closer.

My favorite is when I actually have a very strong feeling about an estimate, and I tell my boss, and he doesn't want to believe it, so he arbitrarily tries to find a way to get a better estimate. I end up being right, and he conveniently forgets my original estimate.


I have a story about a boss: years ago the boss walks over to me and describes a straightforward change, with the priority that it should start right away. I said it was a two or three day job, and that, considering testing and other internal processes, it could be confidently delivered in two weeks. "That's too long, never mind". I then overhear him asking a coworker about the same project. The coworker says: "Easy, that is a two or three day project". "Great!"

What happens next? The project was delivered after two weeks due to back and forth testing and integration, debugging, etc.


Had a boss that did this, but always picked the overconfident, unreliable, under-bidder. He was incapable of bidding anything in increments of days/weeks. Everything was "an hour" to "an afternoon".

So I'd say "2 weeks to complete end to end".

The other guy would say "I'll crank it out this afternoon". Other guy would deliver code that wouldn't run, and then require 3 weeks from me, QA, and him to actually massage into something that works the right way.

So instead of 2 person-weeks, it would take like ~5-6 person-weeks all in. Great stuff.


Congratulations! You've just gone through a crash course of Bidding On Enterprise Projects, For Dummies.


I’ve been there many times too.

My conclusion in former years was that I had won, as I’d been right. But my conclusion now is that the other employee won, as they get to deliver the software, increase their “power” as they now know about one more piece of software or feature (on account of having written it). The overrun will be forgotten, but “this person delivered the feature” will be remembered.


>1. You haven't done it before. Not exactly the same thing. You think you've done it before, but you haven't

Not to mention, even if you have done the exact same thing before it's unlikely you'll be satisfied doing it the same way. What % of projects are finished with everyone thinking "Yeah that was done perfectly there's nothing I would change if I could start from scratch"?


Not to mention, you're still just learning. You don't get proficient in anything by doing it once. After you do something a few times (in relatively short succession - enough for the sequence of actions to stick around in your memory; compare: spaced repetition), only then you can have a good feel for how long it takes (and you have a baseline against which to compare your ideas to improve the process).


Note that asymptotically, believing that work is worthless is the same as ignoring it. I think this is interesting because in theory there is an infinite amount of potential work that everyone is ignoring at any given point.


Thanks. I just woke up and you're telling me I might as well go back to sleep since statistically I will make zero progress today.


Fantastic points!


This is actually a really good summary of how projects should be estimated. A lot of people I know just focus on the core task or the minimum viable product, ignoring everything around it.

I especially find it frustrating that many PMs think that the instant a system goes into production, the project is finished and everyone should be putting tools down and walking away. In reality, the post-go-live support can take a surprising amount of time, but is rarely accounted for.

One particular customer, a telco, had PMs that started in the telephony space and were especially stuck in this mindset. Imagine digging a trench and laying down some fibre for a customer. The second the hole is filled and the fibre is lit up, the project is done. There is zero ongoing maintenance! Fibre doesn't need monthly patching, or regular replacement. It lasts decades. Sure, it might be cut by a backhoe, but that's break-fix, not project work.

Now picture the same mentality applied to general IT projects. Build it, mark the project as closed, and walk away. Never patch anything. Never upgrade. Never consolidate. Just leave things precisely as-built forever or until it breaks...

... or gets hacked and makes national headlines.


> I especially find it frustrating that many PMs think that the instant a system goes into production, the project is finished and everyone should be putting tools down and walking away.

This mentality and Agile mixed together creates monster technical debts. A team is rushed to create an MVP. Since it’s an MVP, things are skipped, rushed and riddled with edge cases. This is fine if the team can now use the customer feedback and deeper understanding of the domain to iterate and polish the product. Write some documentation, refactor rough edges etc.

But of course, this often doesn’t happen and the team begins sprinting to deliver on yet another “top priority”. After a while, management starts to wonder why doing anything takes forever and devs are leaving.


Scrum is so anti-agile it's not even funny. Just TRY to convince a scrum lord that maybe we shouldn't put process before people. It's a treat.


As a PM I feel seen. So many past experiences in companies with poor product culture where this happens. Companies like to say they're hot on agile and have a strong product culture involving MVP, test, learn, iterate frameworks, but in reality so many are just: MVP, test, learn, release, move on to the next different MVP.

Leadership often say they want agile processes and iteration, but at the end of the day they just want releases of new features.


> but at the end of the day they just want releases of new features

I do not understand how this reality could surprise any professional developer.

Code quality, clean architecture, elegant algorithms, technical documentation, unit tests... they do not sell. Whatever my process is, if it is not aimed to a constant output of new features and bugfixes, then my process is not good, and I am an idiot to insist on respecting it.


> In reality, the post-go-live support can take a surprising amount of time, but is rarely accounted for.

Really the most frustrating part of PM. I have never had a PM that understood "we could implement features faster if we took a little time to go back and tighten up the code". Instead it's just a constant stream of one feature after the next with whatever got written down the first time by a fresh college grad being the golden solution.

It's particularly frustrating because PMs tend to measure success by the number of features produced. That alone incentivizes a fair number of devs to just pump out shit without thinking about what they are producing.


It's like running a marathon and you keep falling on your face because your shoes are untied, but you "don't have time" to stop and tie them, so instead you just keep tripping over and over and over while your competition recedes into the distance.

Bonk.


In the limit, you reach the level of that joke I remember hearing as a kid (I think it dates to post-WWII rebuilding / the Soviet era). One possible retelling:

An outside inspection comes to a busy construction site. As they watch the workers running with wheelbarrows, back and forth between material storage and active construction, they notice that all the wheelbarrows are empty. They flag a foreman, and ask him, "why are all these workers pushing empty wheelbarrows around?", to which the foreman replies, "we're so swamped with work, there's no time to load them!".


I also heard this story in my childhood. In the version I know, the reason they keep doing it is that the loader guy is missing, and they won't get paid unless they can show some work.


From my experience, it's the business pressuring the PM into situations like this. If PMs want to keep their jobs, they're often just as helpless as the devs having to execute the work.


Sounds like Xtreme Go Horse Methodology. It is the fastest and therefore best. https://github.com/Brunomachadob/xgh


As a PM of a large ML team, articles like this and comments like yours keep reminding me why I should push back when I get asked for estimates, and, if I really need to give one, then at least I can empathise with my team and ensure that any estimate is large and roomy.


I’ve come to the recent conclusion that estimating is not so much hard, but uneconomical.

The amount of time/effort required to create a reliable estimate would probably double the cost of the completed project (ie, even after taking into account the overshoot on the informal estimate). The number of failed projects as a percentage of all projects would fall, but fewer projects would even start.

We conflate “can’t estimate” with “not willing to do all the work necessary to create an accurate estimate” because we know that it’s easier and more economical to just jump into the code, than it is to do a deep, formal analysis.

What’s really happening is that we are giving up accurate estimates to reduce overall cost.

Unfortunately, businesses only see the overshoot on time (which they incorrectly equate with project cost) and not the actual undershoot on budget relative to a project that has been fully planned.

(Note that I’m not really talking about waterfall. More “plan to build it twice, because you will anyway”)


A few years ago, our management summoned some contractors to enlighten us on how to properly estimate a project. They taught us The Way (TM), and proudly stated that they were renowned for their respect of the deadline.

I wasn't snarky enough to remind them that shipping is extremely easy when you take no responsibility for maintenance, but I expressed my curiosity about how much time would be devoted to the preparation of the estimation itself.

I appreciated their honesty when they replied that, for a typical 1000-hour project, 250 would be devoted to the estimation.

I then strolled back to my boss, thanked him for letting me enter the Illuminati circle, and expressed my impatience to show him my progress on the next project. I promised him the best estimation he would ever get, and just warned him: "It usually takes us two years to develop a new machine. Next time, just let all of us (~100 people) disregard any other task and focus entirely on the estimation, and in six months tops you will know whether the new machine is feasible."

He was not thrilled.


> I appreciated their honesty when they replied that, for a typical 1000-hour project, 250 would be devoted to the estimation.

I did not expect that plot twist in your story. Sounds about right. Proper project management takes a lot of time and needs to engage the actual implementers.


> Proper project management takes alot of time

Typical proper management is based on estimations given in front of the coffee machine.

Exceptional projects have the coffee machine substituted with a wine bar.


Never have I gotten more honest feedback from a manager than after a split bowl of punch and three espresso martinis


Exactly. And even then.. We had a consulting company come in and do an entire paid discovery & project planning phase. They came back with a proposal for phase 1, which they overran by 100%.

The entire project ended in acrimony since it was a statement of work, not time & materials, so at some point they are calculating how much of a loss the project is causing them to incur.

Change requests, threats of litigation, and meticulous reading of specs to see what we could try to wring out of them left us with a product that never went to production.


Without a pretty good understanding of what a project or activity has to deliver, you cannot provide a good estimate, regardless of time spent.

Also, if you don't know WHO will build it, and their capabilities, any accuracy is impossible. When people approach the limit of their abilities, time spent on tasks goes up exponentially.

However, if someone has a very good understanding of what they're supposed to build, and of the team that will build it, reasonable estimates can often be produced quite quickly. Obviously, there can be risk factors, such as customers or managers with unreasonable expectations, but those can typically be managed by experienced teams.

Anyway, raw estimates should never be seen as budgets. Budgets should be much larger, especially if they can be kept hidden from the developers, since budgets need to take all sorts of risks into account. But most teams, if they know what the budget is (and it's big enough), may tend to relax a bit too much early on in a project.

Most software projects go a bit (or sometimes quite a lot) over estimates. Which is often fine. The estimate may still have been a useful exercise, in that it forced someone to think about most details of the deliverable early on, and also provided something to track progress against.


> When people approach the limit of their abilities, time spent on tasks goes up exponentially.

This is an excellent observation.


> The amount of time/effort required to create a reliable estimate would probably double the cost of the completed project (ie, even after taking into account the overshoot on the informal estimate).

I've seen it done, and it's more than double. You have to have most of the work done before you deliver the estimate. The thing is, you have to deliver an estimate that is at least as long as the time you spent producing the estimate, but you're already mostly done, so towards the end of a project there's a lot of gold plating and general screwing around. It's a silly way to work.

And everybody knows what's going on, because people talk. You can't keep it secret. You can only do it as long as upper management thinks it benefits the company. The customer-facing side of the business (sales, customer success) makes a very strong case that faster delivery means happier customers, and more optimistic timelines mean more sales. As much as accurate estimation is an eternal management dream, there's not much benefit to it other than a little bit of dubiously valuable trust and credibility with customers. The idea that "if only we could estimate accurately, we could execute grand strategic visions" is mostly bullshit. You're slowing yourself down, and the only thing you get out of it is that engineering management gets to pat themselves on the back for hitting estimates.


> The idea that "if only we could estimate accurately, we could execute grand strategic visions" is mostly bullshit.

10000x this. It’s complete bullshit.

And the bullshit is often used to shift blame to the dev team in order to cover up for a lack of management competence.

I started to think about how to describe the problem in business planning terms (ie, money) because I needed to come up with language that senior management and non-software CEOs could understand, for a series of meetings I recently attended. I realised that just saying “you can’t do it” is unsatisfying, and just leads to arguments.

“Yes you can have accurate estimates but it will >2x the project cost” seemed to resonate.

(And yes I think it’s much more than 2x too)


> The amount of time/effort required to create a reliable estimate would probably double the cost of the completed project (ie, even after taking into account the overshoot on the informal estimate). The number of failed projects as a percentage of all projects would fall, but fewer projects would even start.

Another way of looking at it is that for most projects, the uncertainty in the estimate only reaches zero at the end of the project (when you know, hopefully, how much was actually spent). If a project is lucky, the initial uncertainty is low and reduces monotonically over time as the project progresses, so that the estimates gradually converge on the real cost.


And the unlucky projects are the ones so poorly planned that the uncertainty actually grows over time as new issues are uncovered.


Yes.

It’s like the old saying:

“Plans are nothing. Planning is everything”.

The work of estimating the effort for a project is worthwhile, even though the resulting estimate is worthless.


> We conflate “can’t estimate” with “not willing to do all the work necessary to create an accurate estimate” because we know that it’s easier and more economical to just jump into the code, than it is to do a deep, formal analysis.

True. This is often justified as "we don't do requirements/design because we are agile". Which is BS. If you have no big-picture design, you just end up with a patchwork of "features".


I would agree if the distribution of errors in estimates were normal, with the mean estimate matching reality.

But if estimates are consistently wrong in one direction, even accepting greater variance, that says the process is broken, probably missing a feedback loop to learn.


I’m not sure I agree, but I think it’s an interesting point.

Since management people seem to like construction analogies, I’ve started to compare software projects with tunnel projects. I understand that tunnels are notoriously late, because you really can’t tell what you’re going to be digging through - until you’re digging.

Of course, you could estimate more accurately by digging, say, a 1 meter pilot tunnel before you start the full size tunnel.

But now you’re digging two tunnels. And you still won’t be able to estimate the first one…

My point is that Hofstadter's law will apply to estimating, just as it will to the main project.


High is the number of freelance gigs I've foregone, because they asked me to estimate the work ahead of time on my own dime.


The article description is "A deep dive on why projects always overrun and a framework to improve future estimation"

This was a personal investigation into my faulty estimation skills, off the back of a small project which became a medium project, which became a large project with no shortage of surprises, overruns and pain.

The blog post was born from an honest and thorough postmortem where it turns out most of the work was simply not expected – and so wasn't accounted for.

I then went a stage further and attempted to outline more general reasons for this, and to visualise how it might look in terms of time and effort. It's not meant to be scientific, but it's certainly an interesting way to look at things.

Anyway, I hope someone finds it useful.


Golly gee, someone that realizes software engineering isn't entirely just sitting down and pumping out code? Why aren't you in management? (don't tell me... it's because you're honest with yourself)

I do wish this becomes more popular (and the graphic does help for those without imagination ;).

All venting aside, if you're familiar with a system, its development and deployment environment, and all the quirks of the tools you're using, you can easily "feel" around for how much "dark matter" work you're going to have to do -- and your estimates become a lot tighter, and closer to reality. That is, once you throw away -- as you said -- "happy path" optimism and get down to business. An extra padding of cynicism will make sure you never miss a deadline; and if it turns out you were too cynical, you can always under-promise and over-deliver and wow everyone.

But padding is necessary, and I wish it didn't have this air of dishonesty associated with it. Realistically, shit happens. Stakeholders can huff and haw because it gets in the way of their aggressive plans, but it's not dishonest to say it's going to take another 3 months, because you're asking me to dig you a hole with a plastic sand-castle shovel. Yes, I can actually pound Red Bulls and wear myself down to the bone digging that hole as fast as humanly possible -- but I'm not going to.

If that's not good enough, the next best thing is to ship in phases (what little-"a" agile tries to do, but fails at in practice): a group of features will be guaranteed to ship at some date. At that point, we'll reassess and estimate the next batch -- until the project is done. We can even do soft estimates on all the batches, but using ranges (e.g. 3-6 months, rather than something concrete -- because they'll become de facto deadlines), to give a rough "total project estimate."

I like the month-by-month "sprint" model more than the 2-week model. Analysis and planning isn't free. It costs time, effort, and mental resources. Doing it every 2 weeks is absolutely ridiculous. Monthly strikes a nice balance between giving you enough time to actually do "the work" and giving stakeholders a feeling that they're "on top of things."


> Golly gee, someone that realizes software engineering isn't entirely just sitting down and pumping out code? Why aren't you in management? (don't tell me... it's because you're honest with yourself)

My ironic situation right now is that I AM management in my company, and I'm spending a lot of time shifting the culture of our Engineers away from "just pumping out code" to shipping and supporting products. I'm literally being asked questions about why we need logging when we can just use a debugger, and I'm fighting the "that'll only take 1/2 day" (which it never does!) estimates from the team.

It feels like bizarro world to me. I had a pretty long IC career before moving into management, and felt like I was fighting the exact same battles against management.


I find that even though I'm typically pretty spot-on about the amount of actual design and coding time a feature requires, the actual wall-clock time can be way off. Ad hoc meetings, days off for holidays/corporate events, waiting on external feedback, etc. are all very hard to account for.


Something I realized while skimming this thread, and moments before opening the article itself - I'm definitely underestimating things by a factor of 2x, because I keep forgetting about the denominator - that my one work day isn't really 8 people-hours of project work!

In reality, it's closer to 3-4 people-hours on average, after I account for time consumed by team meetings, corporate paperwork, 1-on-1s, code reviews, lunch break, help requests from teammates, IT/devops doing maintenance on infrastructure, requests to opinionate on / get involved with some discussions about new projects or with new customers... - a lot of work that's mostly necessary, but isn't relevant to the particular thing I'm focusing on (or estimating).

So on top of the excellent framework from the article, I'm going to keep reminding myself that, for the purpose of estimation and project work, I'm working 3-hour days, not 8-hour days (and that's before we factor in any fuzzy human stuff like kids getting sick, becoming burned out, etc.).
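That denominator effect is easy to encode as a throwaway helper (a sketch only; the function name and the 3.5 hours/day default are my own assumptions, not from the article):

```python
import math

def calendar_days(project_hours, focus_hours_per_day=3.5):
    """Convert hours of actual project work into calendar working days,
    assuming only ~3-4 focused hours survive a nominal 8-hour office day."""
    return math.ceil(project_hours / focus_hours_per_day)
```

A nominally "one week" (40-hour) task comes out at 12 working days, i.e. roughly two and a half calendar weeks.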


I have the same 3-4 hour estimate, but I think this stood out to me because of working a labor job as my first:

- 8.5 hours on site

- 1 hour of breaks + lunch

- 30 minutes of getting into/out of lunch (tidying up + restarting)

- 30 minutes of gathering tools/supplies in the morning

- 30 minutes of putting stuff away at close

- 30 minutes walking around/getting stuff you didn’t expect to need

- 30 minutes of talking to boss/coworkers about their problem or yours

So you’re talking 5 hours of “real” labor in an 8.5 hour day — and that’s on days you have a consistent project. Days where you have a bunch of small tasks all over the apartment complex might be 3 hours of “swinging hammers”.

That’s normal — and I think people tend to forget process when estimating things.


> and a framework to improve future estimation

The best framework I've heard of and used in practice is to simply ask one question: "How long have similar projects taken in the past?"

Chances are this one is no exception. It will take about the same time as similar past projects.
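That question can be answered mechanically, which is essentially reference-class forecasting. A minimal sketch (the function name and sample figures are mine, not from the thread):

```python
import statistics

def reference_class_estimate(past_durations):
    # "How long have similar projects taken in the past?" answered
    # literally: the median of actual past durations, not a fresh guess.
    return statistics.median(past_durations)
```

If five similar projects took 10, 12, 15, 20, and 30 weeks, the estimate is 15 weeks, regardless of how fast anyone hopes this one will go.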


Thanks Dave, as a PM this is super useful to help me empathise with my team.

I love your illustration of the work involved in getting a project live. I think I might bookmark this for when I encounter that pushy stakeholder who wants a fixed estimate.


You're very welcome!


I really like this article. I think it's impressive how informative it manages to be in such a short space, and how the visual work helps tell the story. I'm definitely going to use this (with full credit) from time to time when discussions come up on how a team needs to work with estimations.

What I wish it also included was the not-so-academic parts of estimation. Like how we're often operating in systems where it is much better to overestimate than to underestimate, because it's much better to never let a client know that you didn't actually spend that X time than to let them know that you need X more. Or how estimating things you've done before can also be hard, because people don't actually deliver Y "story points" every week; sometimes their children get sick, they sleep poorly, whatever. Or how even the most senior developer can spend 3 hours on something silly.

But I guess a lot of that plays into a different sort of discussion, one where we talk about whether a lot of the "control" parts of the work process are actually necessary or just wasted resources. And I don't want to sound like I think it's all wasted resources, because it can be both, and often it's a combination. Personally I tend to avoid working in areas with too much "control": if you estimate by anything less than a day, then I'd likely not want to work for you. Not because I can't do it, but because I hate spending time on things that aren't "the work". I'm not sure how to include that sort of metric in the discussion, but I think it may be relevant. Because I doubt I'm the only one, and, while completely anecdotal, I wouldn't want to invest in any of the places I wouldn't want to work, because they tend not to do so hot in the long term. So maybe looking at overhead is important? But my sample size is way too small to mean much.


A nag popup where the dismiss button just changes itself instead of dismissing the window? Oof, that's a new one.


I didn't even realize that "Whatever" was the dismiss button.

The Kill Sticky bookmarklet is your friend: https://alisdair.mcdiarmid.org/kill-sticky-headers/

There's a newer version, but it breaks a number of sites for me, so I don't use it: https://github.com/t-mart/kill-sticky

The internet is a hostile place without Kill Sticky, uBlock Origin, and uMatrix.


I customized the first one (or one very similar to it) to remove the "sticky" elements as well, and also to remove the page's ability to stop you from scrolling, as some overlays do. I think it's a large improvement, and there's nothing stopping you from having multiple of them (just copy and paste it as a bookmarklet):

  javascript:(function () {
    for (const el of document.querySelectorAll('body *')) {
      const pos = getComputedStyle(el).position;
      if (pos === 'fixed' || pos === 'sticky') {
        el.parentNode.removeChild(el);
      }
    }
    document.body.style.overflow = 'auto';
    document.body.style.position = 'static';
  })()


Yeah, as if the popup spam wasn't enough, now it asks for confirmation to close...


I immediately closed the tab away as soon as it asked if I was sure.


Fair point! Consider it fixed


Great article!

Microsoft Press' "Software Estimation: Demystifying the Black Art" is a good deep dive into the subject.

Found it via a talk by Jakob Persson at DrupalCon many years ago. He recommended a template-based spreadsheet approach where you estimate a project by splitting it into components and giving each a "confidence" level from 1-5, then adding fixed padding for various items like project management (20%), deployment (5%), a buffer for unforeseen problems (20%), etc. It essentially came down to "2-3x the initial estimate", but with some calculations to back it up, and it made estimation systematically easy.
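The arithmetic behind such a template is simple to sketch. The confidence-to-multiplier mapping below is my own guess at the kind of scaling such a sheet might use; only the overhead percentages come from the comment above:

```python
# Assumed mapping: lower confidence inflates the raw guess more.
CONF_MULTIPLIER = {1: 3.0, 2: 2.0, 3: 1.5, 4: 1.2, 5: 1.0}

# Fixed padding items quoted above: PM 20%, deployment 5%, buffer 20%.
OVERHEADS = {"project_management": 0.20, "deployment": 0.05, "buffer": 0.20}

def padded_estimate(components):
    """components: iterable of (raw_hours, confidence 1-5) pairs."""
    base = sum(hours * CONF_MULTIPLIER[conf] for hours, conf in components)
    return base * (1 + sum(OVERHEADS.values()))
```

For example, two components of 40 h (confidence 4) and 20 h (confidence 2) give a base of 88 h and a padded total of about 128 h, already more than double the naive 60 h sum.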

His slides are here: https://www.slideshare.net/jakobpersson/the-science-of-guess...

The sheet template is here: https://docs.google.com/spreadsheets/d/13MGHIxFOtbJ2Qxygc_Gx... - have used variations of it with great success for many years beyond when I shifted away from Drupal work.

Noting that his more recent blog articles now recommend a value-based pricing approach for the consulting model (which I agree with and also shifted to in my freelance days).

Good to see more writing on estimation for engineers. Agile's story-points model is nice for teams doing the work, but in many kinds of work, when you're running a team, there's someone above you who signs the checks and wants to know what something is (approximately) going to cost before they sign off on it.


I wish companies would do this kind of analysis within their own projects. Probably not every project since doing this analysis would add overhead, but perhaps a small sampling of arbitrary projects.

How are we supposed to optimize things if we don't know where the time and effort are going? How are we supposed to estimate, especially when a huge chunk of the extra time may be caused by an inefficient part of the process, by specific people (say, who like to schedule useless meetings), or by external factors?

Instead we see people choosing arbitrary things to blame, and business goes on as usual.


Here are the factors proposed by Code Complete (2nd edition):

- baseline: a program, designed by and destined for the same entity (programmer = customer)

- a product (x3): designed by one entity, destined for another entity (a customer)

- a system (x3): two programs communicating with each other

- a system product (x3 x3 = x9): a system of two products
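The way those multipliers compound is easy to see in a toy lookup (a sketch; the labels are mine, the factors are Code Complete's):

```python
# Code Complete's rough effort multipliers relative to a standalone program.
KIND_MULTIPLIER = {
    "program": 1,          # programmer = customer
    "product": 3,          # built for someone else
    "system": 3,           # programs that must communicate
    "system product": 9,   # both at once: 3 x 3
}

def scaled_estimate(base_hours, kind):
    return base_hours * KIND_MULTIPLIER[kind]
```

A quick 100-hour "program" that has to ship as a system product becomes a 900-hour effort.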


It's kind of ridiculous that people still believe that they can get better at estimating work they've never done before.

The amount of hours lost by teams estimating work is insane.


I have some thoughts on the vast differences between working on a team delivering firmware that is indefinitely the lifeblood of the hardware, vs working on a team writing hardware test tools that get used by people for one cycle (1-2 years) and nobody bats an eye when they're broken/unusable or difficult to extend to the next generation of hardware...

Clean, readable, beautiful C with well thought out descriptive names for everything, comments that get anyone up to speed on the code section. Self-descriptive, pleasant to work with.

VS

MVP Python repo that has a super wide directory structure with poorly thought out grouping that takes 5x as long to memorize. When trying to understand a code section, told: "Just hit tab complete on the command line". Okay, that tells me the relationships between objects, classes, methods, containers, etc... But how does it work? Obfuscated not just from the users but the devs themselves.

Yeah.


The author seems to argue that estimates are off because of "unknown knowns", and can be improved by adopting an estimation framework that makes these items more explicit.

While this is certainly valid, I would add that in my experience the most influential causes of major schedule disruption were the "unknown unknowns", which are inherent in any new creation, but which are exacerbated in software because of the extremely high and fractal dimensionality of digital assets. The smallest deviation from assumption can have an unbounded impact on the plan, often popularly referred to as a 'butterfly effect'.


Yes! I have raised several times across several teams that task estimation is a variant of the coastline paradox[0]. You'd do well to consider determining the fractal dimension of your problem domain and "sandbagging" according to the dimension.

0: https://en.wikipedia.org/wiki/Coastline_paradox
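The coastline analogy can be made concrete: for a coastline of fractal dimension D measured with a ruler of length s, the measured length behaves like L(s) = M·s^(1−D), so finer rulers find more coastline. A hypothetical sketch of the same effect in estimation, where the fractal dimension and task sizes are made-up illustrative numbers:

```python
# Toy illustration of the coastline paradox applied to estimates:
# the finer you decompose the work, the more total work you find.
# D = 1 means estimates are scale-invariant; D > 1 means they grow
# as granularity shrinks. Both D and the effort constant are invented.

def measured_effort(task_size_days: float, base_effort: float = 100.0,
                    dimension: float = 1.2) -> float:
    """Total effort as measured at a given task granularity,
    by analogy with coastline length L(s) = M * s**(1 - D)."""
    return base_effort * task_size_days ** (1 - dimension)

coarse = measured_effort(5.0)    # estimating in week-sized tasks
fine = measured_effort(0.125)    # estimating in hour-sized tasks
print(coarse < fine)             # the finer pass finds more work
```

"Sandbagging according to the dimension" then just means padding more aggressively the rougher (coarser) your decomposition is.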


The author covers "unknown unknowns" explicitly:

> It’s interesting to note: (...) the amount of work “outside” the project, aka unknown unknowns

If I understand correctly, most of those fall under the blue "Problems" category, aka. "the work outside the work" - though there may be some bleed-through to the green "Iteration" category, aka. "the work between the work", in which the author includes "debugging, refactoring, maintenance, tooling".

This is valid by my experience too; most of the schedule-breaking "unknown unknowns" fit into the "Problems" category.


Love the graphic, mind sharing how you made it?


Sure! It's in Sketch, and the assets are here if you want to play:

https://github.com/davestewart/davestewart-site/tree/main/co...



Thanks!


Not the OP or author but you can knock that out easily with Sketch app or Figma.


Good related Freaknomics podcast about the planning fallacy:

https://freakonomics.com/podcast/why-your-projects-are-alway...


Timeboxing (simplifying designs and eliminating requirements) is a far more important skill than estimation. Shrinking a schedule has much more impact than predicting it correctly.


Why


I always use the 80 20 rule for estimating stuff:

1. Everything takes at least 80 days (or hours if I‘m generous).

2. Multiply the result from step 1 by 20.


Clients will ignore high estimates and take the unrealistic one.


How do I get into these odd jobs as a professional developer?


I was asked to estimate how much time remained to complete all the components for a 3d game engine to make it perform excellently according to the design in my head. This was back in the days when you rendered every pixel on the CPU using every clever technique you could think of for performance, and you also needed similar geometry calculations as you do nowadays for higher level modelling and physics.

All the components, math, data structures, algorithms and APIs fitted neatly together but the big leaps in performance would only come when they were mostly done. One of those things where it's fast when all the parts are in harmony and slow otherwise. But hard to explain convincingly to others (especially those lacking algorithm or low-level performance experience) without a working demo. Even the APIs for other developers using the engine made little sense to them, until built, and then they liked them a lot. Sometimes you've just got to show rather than tell.

So I sat down and estimated systematically, going through each little item on the list as I imagined doing them: X hours here, X days there. Just wrote down everything that came to mind, and estimated each one independently. Let's just grind through this estimate one item at a time. I didn't pad it or try to make it fit any overall expectation. I just did the honest exercise of writing down each item to build and each X that seemed to fit that item.

To my surprise that was an intensive exercise that took 2 weeks full time, just to write down all the moving parts in a list, which, somehow, all fitted together as a "simple" design in my head. It never seemed so large that it could possibly take that long to write down a list of one-liners. No wonder I had trouble explaining the whole scheme.

Was that a fantasy? Turns out it was accurate.

The sum of those independently estimated tiny parts came to about 2 years. Ouch! A lot longer than the design seemed in my head, which seemed closer to a few months, but somehow always slowed by life experiences, tired weeks, distractions from other projects' requests and trade show demos. Because of all those "unpredictable" factors I didn't believe my estimate was really going to be accurate, and figured that I must just be overestimating, adding too much padding, or not combining overlapping parts of items enough. I wasn't very experienced with large projects!

And then I went ahead and worked on the game engine until it was working and performing well. I buried and ignored that document and just got on with each thing that made sense to me to work on as I saw fit.

The real surprise for me, the lesson which has stuck with me ever since is, that estimate and plan proved to be reasonably accurate even though I didn't believe it, didn't think hard about it when I wrote the items one after another, and completely ignored the document after writing it.

It took 2 years to get the components working really well together as my design intended, and by then just about everything on the list was done. I was unhappy it had taken so long, but had to admit that all the small items really did end up adding up as I'd written down earlier, and the design really did work and perform well when the parts came together.

The list of things to build proved reasonably accurate, despite surprises on the way such as complete changes of target platform and new hardware coming out.

I learned so many lessons from that exercise and seeing it play out. One of them is, sometimes an honest but low level "add up the parts" estimate can turn out surprisingly accurate but is a huge grind to produce. Another is, perhaps I'm not as bad at estimating as it seems when I'm asked to throw out an estimate quickly without giving it that kind of detailed examination. So now, I'll consider doing an exercise like that, if there's time, and if I don't like the answer, I should treat it as a harsh fact rather than something malleable by wishing.

Another is, like many programmers, I'll sometimes imagine a design that seems simple to me on the face of it, imagined almost magically fast visually, yet it turns out to have a huge number of details when examined properly that take weeks just to list and years to implement. Yet marvellously, something so complex does end up working pretty much as conceived in the first place, including big leaps of performance, good features and useful APIs. That ability to conceive of something in a flash that actually works well yet is vastly more complex than it seems on the surface is a kind of superpower, if you're looking to push the state of the art or solve stuck problems, but it's also a problem for what are hopefully obvious reasons.

I have to be careful not to answer questions like "can we do X" with "[thinks, then] sure, just do Y and Z this way and it will work", because it never sounds like years of work when summarised, but it can be. And I have to be really careful with side projects, where simple ideas correspond to months of work and a few favourite ideas correspond to centuries as I zoom in, like a macro that contains much more when expanded than you'd expect. Life is literally too short to implement most of the things I think of that I'd like to build, and I have a disappointed feeling that some of them would actually work, but I'll never be able to implement them to find out for sure.

Another lesson from that particular project was: I was asked if I could add a significant capability to the engine: change it from quasi-3d, a bit like Wolfenstein/Doom, to full 3d like Descent, but including surface physics. Having just done that estimate I was loath to rip up a perfectly good plan that already seemed daunting, and add a huge, onerous change, breaking countless invariants and assumptions. So I said no. Years later, after leaving the job, I looked back with new insight and realised I should have said yes, taken two weeks of low-pressure vacation to let my imagination conceive how to make the more versatile thing work (at every level, from rendering up to physics), and I would have ended up with a simpler design than the original that would have shipped more quickly. The more powerful, general solution was much simpler to reason about, just as performant, and had more value to the business long and short term, as well as being more satisfying to use.

All I needed to do was rip up some working assumptions and take a look with fresh eyes, to get something simpler and more powerful at the same time. So now, if I'm asked, I remember that "no" as a mistake. I think it's often best to evaluate "go bold" changes with an open mind.



