The deadline for project completion is 1 October. The due date is approaching quickly. Four weeks ago, the other teams were telling me their work was mostly done and that all that remained was tidy-up and bug fixes (which sounded dubious). Now, however, they're all working extremely long days and weekends just to finish.
So we went from almost being done four weeks ago, with the finish line just days away, to clearly being under a lot of pressure to deliver, with obviously still a lot to do.
The other thing is, people seem to be spending time on things that aren't really important and not tackling the actual work.
Maybe a lot of this comes down to stress and fatigue from their expectations not being met which ends up affecting performance.
Are the individual metrics/incentives aligned with progress on the project? Is anyone panicking? Is anyone trying to appease someone else who's panicking, by giving an inaccurate appearance of progress/work? Is anyone stalling for time, while they try to move elsewhere?
Is there a project plan with a work breakdown (and interdependencies, and preferably resource allocations) that shows what you've done and what you still have to do? Are the incomplete tasks broken down to a resolution of 1-2 days, or to hours? Is the plan complete, or is there substantial work to do that doesn't appear in the plan? Is the completion status on the plan accurate? How frequently is slippage being checked, and how does that happen? Does anyone have an incentive for the plan to be inaccurate at this point?
Are people working on unimportant things because they're blocked on important things by dependencies on other people, but don't want to say it?
Is everyone still putting in full effort, and committed to the project success? Or have some given up on the project, and are focused on shielding their careers?
(Even if the project effectively isn't working from a plan, has bad morale and panicking, there are conflicts of interest, etc... it might not be too late for a good manager to rally everyone around an achievable new plan, possibly including revisiting the requirements. Given that the project is in trouble, I suspect it would need believable buy-in from upper management and the "customer" for this project, or people will still feel doomed, rather than focused on making it work.)
Instead, what I've mostly seen is attacking a problem head-on with a really inordinate amount of planning, more planning, some execution, and then more planning. So much planning that timelines must be altered to make room for more planning.
That's been my experience though.
I believe the desire to blow through timelines planning instead of executing is born out of fear and what I call the sensation of movement. Fear, because management doesn't want to mess up and miss something important. The sensation of movement, because it causes management to incorrectly perceive that work is being done.
At the end of all of this, nothing is done because no one has done any of the work to accomplish it, they've only been planning.
At some point you have to just say, "Fuck it. It's good enough" and leave it as the terrible flawed pile of crap it looks like at the detail level you're at.
Often once you step back and look at it from a customer perspective, you had just gone in too deep and they didn't need half of what you were preparing for anyway.
It seems more fruitful to talk about gradients and equilibria of desired changes.
I chose the exponential distribution because it's the maximum entropy distribution for a positive number with known mean: https://en.wikipedia.org/wiki/Maximum_entropy_probability_di...
But power-law distributions show up over and over in things that people here care about: file size, network traffic, process lifetimes, etc etc. In these cases the exponential will drastically underestimate the fat tail.
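To make the fat-tail point concrete, here's a small comparison with illustrative numbers of my own (not from the thread): an exponential and a Pareto distribution with the same mean, where the power law's tail eventually dominates by orders of magnitude.

```python
import math

# Illustration (assumed numbers): an exponential and a Pareto distribution
# with the same mean of 10, compared on tail probabilities. The Pareto's
# fat tail dominates at large t, which is the underestimation described above.
mean = 10.0
alpha = 2.0                       # Pareto tail index
x_m = mean * (alpha - 1) / alpha  # Pareto minimum, chosen so alpha*x_m/(alpha-1) = mean

for t in (20, 50, 100):
    tail_exp = math.exp(-t / mean)     # P(T > t), exponential
    tail_pareto = (x_m / t) ** alpha   # P(T > t), Pareto
    print(f"t = {t:3d}: exponential {tail_exp:.1e}, power law {tail_pareto:.1e}")
```

At t = 100 the exponential gives a tail probability around 5e-5 versus 2.5e-3 for the Pareto, a factor of roughly fifty.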
The important question you bring up is which is more accurate, and I don't have an answer for that. But perhaps a reader has data on this. I will note that I compared the exponential against some FOIA request processing data a while back and thought it was okay, though I don't remember anything quantitative; see here: https://news.ycombinator.com/item?id=21032750
I think it's likely that something with more parameters like a log-normal distribution would be better than either, but intuitively I doubt you'd be able to get simple equations for the mean remaining time out of that.
One problem with the power law model is that the expected duration at t = 0 is 0. The exponential model does not have that problem. You could fix this for a power law by not having power law behavior for short times.
She reported the Pareto distribution of process times based on data first collected in 1997.
For example, suppose a project is meant to complete in a week. It has now been two weeks, so it is one week over budget. Most people would think that finishing is right around the corner, but the expectation should instead be that it will take another week or two. If you get to the end of a month, the same applies: the expectation should be that it will take another month, not that the end is right around the corner.
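A sketch with made-up parameters (mine, not the commenter's): assume task durations follow a Pareto distribution with tail index alpha = 2 and a minimum of 1 day. Then the conditional expectation E[total | survived to t] = alpha*t/(alpha-1) = 2t, so the expected remaining time equals the time already elapsed, which is exactly the "a week over means another week" intuition.

```python
import random

# Assumed model: Pareto durations, tail index alpha = 2, minimum x_m = 1 day.
# Closed form for a Pareto tail: E[T - t | T > t] = t / (alpha - 1).
alpha, x_m = 2.0, 1.0

for elapsed in (7.0, 14.0, 30.0):
    expected_remaining = elapsed / (alpha - 1)
    print(f"elapsed {elapsed:4.0f} days -> expected remaining {expected_remaining:4.0f} days")

# Monte Carlo sanity check on the conditional *median* (robust to the fat tail):
random.seed(1)
samples = (x_m / random.random() ** (1 / alpha) for _ in range(1_000_000))
survivors = sorted(d for d in samples if d > 14)
mc_median_total = survivors[len(survivors) // 2]
print(f"median total given 14 days elapsed ~ {mc_median_total:.1f} days"
      f" (theory: {14 * 2 ** (1 / alpha):.1f})")
```

The mean remaining time doubles with elapsed time, while the conditional median also grows in proportion to it.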
You may get lucky, and one of the new additions throws out the current plan, freeing you from the sunk costs. Probably, though, that will be its own trap: you'll think you understood your success for the next time.
With a power law there's no mean or median starting at t = 0, so this sort of comparison isn't possible given the FOIA request data available. I'd be interested in seeing the data on the tails to see if those are power law distributed.
Edit: I'll do a quick quantitative check. Using data from the CIA's 2018 FOIA report (p. 18): https://www.cia.gov/library/readingroom/foia-annual-report
Mean: 32.21 days
Median: 12 days (exponential prediction: 22.33 days, 86% error)
Mean: 368.49 days
Median: 306 days (exponential prediction: 255.42 days, 17% error)
Not the best fit but could be worse.
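For what it's worth, the exponential prediction above is just median = mean × ln 2; a quick script reproduces the quoted numbers.

```python
import math

# Check the quoted predictions: for an exponential distribution,
# median = mean * ln(2). Data pairs are (reported mean, reported median).
for mean, observed_median in [(32.21, 12.0), (368.49, 306.0)]:
    predicted = mean * math.log(2)
    error = abs(predicted - observed_median) / observed_median
    print(f"mean {mean:7.2f} -> predicted median {predicted:6.2f}, "
          f"observed {observed_median:6.2f}, error {error:.0%}")
# -> predicted medians 22.33 and 255.42, errors 86% and 17%
```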
I'm pretty sure orgs also have this problem.
For some people that's really intuitive; for others, it can serve as a more practical stepping stone on the way to understanding what happens in projects.
In the project example, not only does the total project time increase with spent time, but the time left increases with spent time.
I think this holds for most human endeavors that have intellectual property as the end result: software projects, books, doctoral theses, etc.
Two questions -
1- I would really like to understand why this happens.
2- I have always thought lean is the answer to the above in a startup context, but I'm really curious to hear what others think.
I have a book called Industrial Megaprojects by Merrow, which gives a lot of examples of financially disastrous multi-billion-dollar projects. His conclusion? It's not really the doing of the projects that was wrong; it's that they shouldn't have been done in the first place. The estimates and preliminary investigations were underdeveloped, which typically leads to overoptimism.
Bent Flyvbjerg has also done a lot of work on megaprojects and his basic conclusion is that well-estimated projects don't get built, because almost no megaproject is ever viable or cost-effective in itself. So there's a survivor bias towards "bad" projects, ones that are poorly planned in the first instance, magnifying the effect.
In terms of software, agile/lean has the advantage of limiting commitment. It's easier to terminate something that's cost very little and not gotten far than to terminate something that's several years and millions of dollars into going nowhere. The former can be dismaying and annoying. But by the time serious time and money have been spent, there's a sunk-cost fallacy and often, personal pride or status of powerful individuals is involved.
So, if the only information we have is that the project has gone on for 2 years, we expect that to be the midpoint, and that the project will continue for another 2 years.
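That "midpoint" reasoning can be checked by simulation (a sketch with an arbitrary, invented prior over project lengths): if the moment we observe the project is uniformly random within its lifetime, then the remaining time exceeds the elapsed time with probability 1/2, whatever the prior.

```python
import random

# Simulation of the "no other information" argument: sample a total project
# length from an arbitrary prior, observe it at a uniformly random moment,
# and look at the ratio of remaining to elapsed time. Its median is 1
# regardless of the prior over total lengths.
random.seed(0)
ratios = []
for _ in range(200_000):
    total = random.lognormvariate(0, 2)   # arbitrary prior over total length
    fraction_done = random.random()       # uniformly random observation moment
    elapsed = fraction_done * total
    remaining = total - elapsed
    ratios.append(remaining / elapsed)

ratios.sort()
print(f"median remaining/elapsed = {ratios[len(ratios) // 2]:.2f}")  # ~ 1
```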
In particular, if the project manager has lost control of the project (e.g., no scheduled contingency, missed requirements, etc.), then there is no reason to believe that anyone knows what % of the project is done. So assume 50%, because that is the Most Likely Estimate. And be surprised at how often that is the right guess, in my cynical and not inconsequential experience :p.
If the project manager is in control (usually evidenced by people opining that the project will finish early) then expect the project to finish exactly on time when something unexpected goes wrong.
The expected duration converges to exactly double your wait time so far. I've found the "unreliable friend" distribution to be very useful in my modeling!
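If that doubling rule holds exactly at every wait time, one distribution that produces it is a Pareto with tail index 2 (my assumption; the comment doesn't name a specific form).

```python
# Sketch under an assumed model: for a Pareto tail with index alpha,
# E[T | T > t] = alpha * t / (alpha - 1) once t exceeds the minimum, so
# "expected total = double the wait so far" corresponds to alpha = 2.
def expected_total(t: float, alpha: float = 2.0) -> float:
    """Expected total duration given a wait of t so far (Pareto tail, t >= x_m)."""
    return alpha * t / (alpha - 1)

print(expected_total(10))   # waited 10 -> expect 20 total, i.e. 10 more
print(expected_total(100))  # waited 100 -> expect 200 total
```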
(I say 10%, but my estimates are usually 100% under, even if I added 100%)
So the challenge for organizations who want predictability is to set up their system to balance the rewards and penalties to get the desired result.
Some estimating techniques: https://www.simplilearn.com/project-estimation-techniques-ar...
For a team’s ongoing everyday tasks, I’ve used parametric estimation with some success: define the team’s weekly capacity for productive work, then sum the parametric estimates for all backlog tasks. You can then get a rough idea of the number of weeks needed to complete a bigger, more complex task.
You can even build a model to automate this estimation.
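As a sketch of what such a model might look like (all task names, point sizes, and calibration numbers here are invented):

```python
# Parametric estimation sketch with made-up calibration constants:
# each backlog task gets a size in points, points convert to hours via a
# calibrated rate, and total hours divide by the team's weekly capacity.
HOURS_PER_POINT = 4          # calibration: hours of work per "size point"
WEEKLY_CAPACITY_HOURS = 25   # team hours/week actually available for this work

backlog = [
    ("api endpoint", 3),     # (task, size in points)
    ("db migration", 5),
    ("ui form", 2),
    ("load testing", 8),
]

total_hours = sum(points * HOURS_PER_POINT for _, points in backlog)
weeks = total_hours / WEEKLY_CAPACITY_HOURS
print(f"{total_hours} hours of estimated work -> ~{weeks:.1f} weeks")
# -> 72 hours of estimated work -> ~2.9 weeks
```

The value of automating this is mostly in recalibrating HOURS_PER_POINT against actuals as completed tasks accumulate.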
This is why features get cut and crunch is a thing. There are only so many ways to prevent the finish line from falling off the horizon.
Complexity grows exponentially (a graph of inter-relationships) and so even cutting a feature isn't enough to curb the ballooning complexity of an over-scoped or poorly estimated project.
I could be projecting from past experience ;)