We are clueless about how long things should take (kyleprifogle.com)



Great article. When I worked at $OldJob, the leadership wanted an unsolved, research-grade problem solved in a few months. They demanded estimates, then refused to accept estimates beyond their timeline, then tried to hold people "accountable" for missing the estimates. Of course it was a mess, and of course we failed to hit our goals, and of course many many people burned out.

But I noticed that some people handled the situation fine. They stayed on management's good side, even though they were failing to deliver along with the rest of us. I will try to distill what I observed them do:

1) They did not fight on the estimates. If a manager forced them into a certain timeline, they registered their disagreement and just accepted the new timeline. I think they realized that fighting would just make the manager judge them less capable. (Pick your battles, eh?).

2) When the schedule slipped, they would communicate it in a way that made them seem more competent instead of less. The explanation usually had three parts: unforeseen events kept us from hitting the deadline, we accomplished some great things in the meantime, and here's why we're in great shape to hit the next deadline! For example: "When we made this schedule, we did not realize that AWS nodes were so unreliable! Despite this, our team has made incredible progress on implementing a fast method of storage and solid compression! We have reworked the schedule to reflect this new information, and we are already on track for the first milestone!"

It's possible that this trick just worked for the specific situation at $OldJob, but I really enjoyed learning it. They seemed to understand that certain explicit rules were not important, but that other unspoken rules needed to be followed. Are accurate estimates important? It depends on the situation! Sometimes wrong estimates can be more valuable than correct ones. Construction companies give wrong estimates all the time in order to win projects. Is staying on good terms with your boss important? Yes! Even if they are total shitbags, being adversarial won't help, only leaving will. These people demonstrated that there are often ways to fix a bad situation by breaking some explicit rules and carefully following implicit ones, and I wish I had the acuity to see these possibilities on my own.


> 1) They did not fight on the estimates.

> 2) When the schedule slipped, they would communicate it in a way that made them seem more competent instead of less.

In other words: Enable the bad estimates, then externalize accountability when they can't be achieved. In the past, I called these people "blameless go-getters" because they were always first to volunteer to take on work, but somehow it became everyone else's fault when they failed. If management is asleep at the wheel, it's a win-win situation for them.

You're exactly right that this method works in many companies. Like you said, it's all about understanding what the company actually values. When unrealistic schedules are forced on teams, the exact date might not be important. Instead, it might be leadership's dysfunctional way of emphasizing focus, urgency, and quick iteration. The savvy engineers and managers know how to put on a show that hits these key points while steadily delivering progress in the background.

Still, there's no escaping the fact that this is dysfunctional. More importantly, it doesn't have to be like this. It's eye-opening to move from a dysfunctional company like the one you described to a company that values honest communication and understands engineering project management at the executive leadership level. When hiring someone out of that kind of environment, it can take some time to break them of bad habits around schedule misdirection and estimation dishonesty.


If you look at the examples, they clearly follow the format "get the unavoidable problem -> understand the root cause of the problem -> fix the root cause -> tell your manager what you did".

Those people are basically doing their managers' jobs, and telling them so. If the managers aren't technical, they would be completely unable to do it themselves, and are probably very afraid of somebody finding that out; having somebody do it for them inspires a lot of confidence.


> ... there's no escaping the fact that this is dysfunctional.

It may be "dysfunctional" but it is how things get done in many places.

Even in places where people pretend to follow a strict "Agile" methodology, there's often a level of management that is brokering deadlines and promises for the "completion" of the whole damn thing.


This strikes me as an understandable but pathological response to managerialism. Managerialism being our current dominant business philosophy.

To see it in contrast, think for a moment about a well-run hospital. At a hospital, the people who make the key decisions about cases are medical professionals, not managers. If a surgery was supposed to take 4 hours but takes 12, well, that's how long the surgery took. Everybody recognizes that the 4-hour number was an estimate, and estimates are not commitments. The most important thing is patient health, not manager feelings. Managers help organize the work, but they do not control the work.

I would love to see software development become a true profession, where stroking manager egos by making them always feel correct and in control is not the most important thing.


Yeah, this. At my current company, which I would consider large (20k+ employees) and bureaucratic, we've actually gotten this part right. Managers in no way control what the team is working on or deadlines for that work. They behave more like career coaches and therapists. They DO ultimately have control over your career, and I feel like that's the part where having a good manager matters. But it's very rare for them to step in and shape the team's sprints, etc. This is all in my own experience, so it probably depends to some extent on the department you're in and your manager in particular, but I can say this is all true for every team in my department.

The managers are held accountable by their bosses, but usually what this boils down to is teams not doing what they said they would. Which of course happens from time to time because as the article mentions we're all pretty terrible at estimating and shit happens sometimes. It makes me nervous to think about changing jobs because it sounds like this is NOT the way it is in most other places...


> Managers in no way control what the team is working on or deadlines for that work. They behave more like career coaches and therapists. They DO ultimately have control over your career,

I work in a place like this. It's a horrible idea. It's politics 100% of the time for managers, because there's no other way to climb for them. Welcome to half-brained initiatives and goalpost technologies being championed rather than ROI exploration and derivation, because the managers that do get stuck on projects that cost money don't want to talk about that.


Same in the arts, until you get a Heaven's Gate or another big artist-driven film that flops at the box office and eats up the studio's budget and then the company itself, as happened to UA. The studio model is savagely more effective at survival.


Catering to delusions is very dangerous in a hospital!


This works because no-one is demanding estimates because they want to know the answer, they are doing it to apply pressure to get it done. The project manager is being subject to the same thing, and if her minions are savvy enough to provide a story that she can take to her management, and if she's smart enough to realize that's the most she can realistically hope for, a sort of equilibrium is achieved.


> there are often ways to fix a bad situation by breaking some explicit rules and carefully following implicit ones, and I wish I had the acuity to see these possibilities on my own.

Now extrapolate that to working in BigCo, which has tens of thousands of employees worldwide, with each country having its own unique unspoken rules and hidden undercurrents. The greatest lesson I learnt is the importance of giving people the benefit of the doubt unless they've really proven they are a bad actor, because over and over again problems turn out to be cultural at base.


Stakeholder management is an important but often forgotten art. Keep them up to date on what they can expect. Don't fight them, just inform them. If a deadline is unrealistic, don't fight it, just let them know you cannot commit to that deadline with any certainty. Once it's clear you're not going to meet that deadline, let them know in advance.


You are describing veterans in large corps.

- CYA above all. The chances of a significant project succeeding in such an environment are pretty slim. Prepare your umbrella from the get-go for when the inevitable hits the fan. Some heads will have to roll, and you will not be easy pickings, as you have been prepping for this since day 1.

- Greenshift like there is no tomorrow. Your manager is going to anyway, until 80% of the budget is spent, and the last thing she wants is someone pooping on her parade. Always be positive, but make sure not to get caught in factual lies. Remember, you don't want to be the one easily thrown under the bus when she needs to find a scapegoat.

I chose not to stay in such environments.


I think your two points are really important soft skills or work politics skills that apply transferably to any job.

Your points illustrate:

- communicating your concerns clearly and in a timely manner,

- while also committing to do the work as it has been agreed, to the best of your ability,

- and also communicating your progress clearly and regularly.

This ought to be valued by any employer.


I find it is sometimes useful to distinguish between the "hypothetical estimate" and the "practical estimate".

Task X will take 2 weeks.

Task X will take 2 weeks of development time and 6 weeks to test, validate, productionize and roll out.

Task X would take 2 weeks if we drop everything else we're doing, but because we can't, it will take about 12-18 weeks to finish.
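A back-of-the-envelope sketch of how those three numbers can relate (throwaway Python; the multipliers are made-up illustrations, not prescribed values):

    # Rough sketch of the three estimates above; every number is an
    # illustrative assumption, not a measurement.
    def practical_estimate(dev_weeks, rollout_weeks, focus_factor):
        """Calendar time = total effort divided by the fraction of the
        team's time actually available for this task."""
        return (dev_weeks + rollout_weeks) / focus_factor

    print(practical_estimate(2, 0, 1.0))   # 2.0  -> "Task X will take 2 weeks"
    print(practical_estimate(2, 6, 1.0))   # 8.0  -> dev + test/validate/productionize/roll out
    print(practical_estimate(2, 6, 0.5))   # 16.0 -> because we can't drop everything else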


Appeasing management (as you described) might preserve your job, but it doesn't fix a fundamentally broken process.


All the research indicates that software developers are terrible at estimating time, but are fairly accurate at estimating relative effort. The only reliable method is to break down projects into the smallest deliverable "chunks" that you can, and then estimate the relative effort of those "chunks" compared to actual, concrete tasks that the team has completed in the past. After doing this for a while, you can use these relative effort estimates to project completion dates for new projects. This is what agile "story points" are supposed to represent, but many teams unfortunately end up with things like "2 points = 1 week", which just doesn't work.

If you use this method, it's absolutely critical that the team always focus on relative effort, and not on time. If you want it to continue to work, you also have to be very careful how you communicate with team members regarding their progress and estimated completion dates - don't tell them that their component is "late" - this will make them think in terms of time next time you're estimating something, and will ruin your estimates.
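For what it's worth, the projection step is mechanical once you have history. A minimal sketch (the point values, dates and "days per point" figure below are hypothetical; the key idea is that points are only compared to past work, and time enters only through the long-run throughput):

    # Sketch: project a completion date from relative-effort points plus
    # historical throughput. All numbers are invented for illustration.
    from datetime import date, timedelta

    # (points, calendar days) for chunks the team has actually finished
    history = [(8, 14), (5, 9), (13, 30), (3, 4)]

    days_per_point = sum(d for _, d in history) / sum(p for p, _ in history)

    new_chunks = [5, 8, 3, 13, 5]        # relative-effort estimates for the new project
    projected_days = sum(new_chunks) * days_per_point

    print(f"~{projected_days:.0f} calendar days, landing around "
          f"{date.today() + timedelta(days=projected_days)}")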


>After doing this for a while, you can use these relative effort estimates to project completion dates for new projects.

This doesn't work. The problem is that you get periods where the work is relatively easy overall so 5 points ends up being tasks that take 2 days. Then you pick up new difficult features and estimate everything relative to all of the new hard tasks and you end up with 5 points meaning 5 days.

Sure you can gauge relative difficulty of tasks between each other, but in a regular developer's day-to-day life the range should really be 1-100 or so rather than making stuff fit into 12 points or less.


1. Keep the baseline tasks you're comparing against consistent for as long as you can (as long as the team remembers them and agrees on the effort they required). Don't use a task from week 2 to estimate week 3, etc.

2. Don't use a short time interval or moving average to determine your points->time mapping. You're looking for the long-run average mapping, not a specific estimate for this developer today.


+1, this guy knows what's up. Relative estimates work really well, in my experience!


I think it's common to use the Fibonacci sequence as the difference between each valid weight increases pretty fast.
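For reference, the scale usually looks something like this; a trivial sketch just to show how quickly the gaps widen:

    # Typical Fibonacci-style point scale; the growing gaps are the point:
    # they discourage false precision on bigger items.
    scale = [1, 2, 3, 5, 8, 13, 21]
    for small, big in zip(scale, scale[1:]):
        print(f"{small} -> {big}: a jump of {big - small} points")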


I like the 1-100 idea. So often the same points come back (a world of 1- and 3-pointers), but have we really broken that down?

I think we should always have reference tasks in front of us.


> break down projects into the smallest deliverable "chunks" that you can

Step 1: Figure out how to do this

Step 2: Do it


"Figure out how to do this" (aka create a high-level design) is a task, and you should be treat it the same way as you treat implementing a "chunk": estimate the effort it takes, prioritize it, and then work on it. The only wrinkle here is that you should not try to estimate any "chunks" of the implementation until you've finished your initial high-level design - there's no point estimating when you don't know what the "chunks" are. For big systems you may need to iterate on this a couple times to design sub-components of the larger system and create prototypes for systems that are critical to the project but might not work, etc.

Estimation is incremental - sometimes you need to spend some non-trivial effort gathering the information you need before you can estimate the relative effort in building a system.


> Figure out how to do this is a task

This is the crux of the issue.

On average you can estimate six months of work if you spend two weeks chopping it into tiny bits. Rarely does management allow that to happen, and developers are rushed in with aggressive randomness.

these "developers are bad at estimating" pieces are mostly people that try to estimate effort from complexity which is the wrong approach a priory because complexity hides the project unknowns, and this methodology, along all other comparative methods, is just a Gaussian curve built out off gut feelings: sometimes converges to a realistic estimate for large enough values, but more often than not the center just shows the developer team bias.


I wish I could transplant this insight into the mind of every project manager.

Throughout my career, tasks which can be estimated relative to similar or identical tasks from recent history are the exception, and the rest is indeed gut feelings on complexity.


Step 3: Discover that what you did was wrong (bad implementation or not what they wanted)

Step 4: Go to step 1


> All the research indicates that software developers are terrible at estimating time, but are fairly accurate at estimating relative effort.

I'd be interested in reading the research you're referencing, if you have something specific in mind.


I don't have a specific online reference to cite, but the Software Engineering Institute has a lot of research in this area.

Specifically, the PSP (Personal Software Process) suggests using LOC as a proxy for size. e.g., a Small task might be 50 LOC, 200 LOC is medium, 700 is large. Then you can take those T-Shirt sizes and compare them to tasks you have done in the past and use that historical data to predict completion times.

I'm simplifying it somewhat (e.g., there are also Complexity inputs), but the process does work if the organization is disciplined enough to follow it.
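If it helps make the idea concrete, here's a minimal sketch of that approach; the LOC thresholds and the historical hours are hypothetical numbers I made up for illustration, not anything taken from the actual PSP materials:

    # Sketch: bucket tasks by estimated size (LOC as a proxy), then predict
    # time from your own history for that bucket. Thresholds and data are
    # invented for illustration.
    SIZE_BUCKETS = [("S", 50), ("M", 200), ("L", 700)]   # upper LOC bound per size

    def size_of(loc_estimate):
        for name, bound in SIZE_BUCKETS:
            if loc_estimate <= bound:
                return name
        return "XL"

    # your own past tasks: (size bucket, actual hours)
    history = [("S", 4), ("S", 6), ("M", 18), ("M", 25), ("L", 70)]

    def predict_hours(loc_estimate):
        samples = [h for bucket, h in history if bucket == size_of(loc_estimate)]
        return sum(samples) / len(samples) if samples else None

    print(predict_hours(150))   # falls in "M" -> mean of past M tasks, 21.5 hours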


LOCs don’t work either for more situations than I can count in my experience. I’ve had 50ish LOC stuff that implemented a tricky algorithm that was core to a system and required careful design, implementation and testing, right next to 700ish LOC stuff that was super straightforward and basically just laid itself out.


There is a huge difference between "research" and "here's another idea" — I would personally love to hear about some research backing any estimation methodology.

FWIW, I "feel" the big issue is with loosely defined requirements which shift as development progresses.


PSP/TSP isn't "here's another idea." It's a research-backed software development process.

And yes, while unmanaged, changing requirements is a major problem, even with proper requirements management you still have the time estimation problem.


I think the author is trying to say that we should call a spade a spade and say that by story points they really mean time anyway so might as well just ask for a deadline. I've never worked at a company where points didn't have an equivalent time translation.


The article didn't mention points at all. It said "agile", but only, like, once.

It also didn't mention Evidence-Based Scheduling[1]. Also, the evidence-based-scheduling article doesn't mention points either.

[1] https://www.joelonsoftware.com/2007/10/26/evidence-based-sch...


Except it fails at step 1. If you have to break down an "Ajax photo editor" into 16-hour-max chunks, as stated, you need a detailed design. And that takes time in itself. The problem is, oftentimes the manager wants the estimate before you've taken the design time because, to them, it should be included in the estimate.


Yeah, if you have a non-trivial big project that you don't know how to do yet, then the only thing you should be estimating is "Create a design for project X". After that, you might need new tasks "Create a design for sub-systems Foo and Bar". Until you have spent time on design, there's often no point estimating the total project.


The relative effort estimation seems to be more-or-less accurate for small chunks. However, at the beginning of a project, you don't know how many small chunks there are. In a small project, you can reasonably plan and estimate most of the chunks ahead of time and (eventually) plot out on a burn down chart that shows roughly when the project will complete.

However, in a large project, you can really only plan the beginning of the project this way. Eventually there are too many unknowns and you risk all the downsides of waterfall design. In my experience, a project like this has a burn down chart that goes UP over time (i.e. chunks are added faster than they are completed). This makes estimating the project impossible, especially if the chart is still concave-up (i.e. the rate at which it increases is increasing).

If you estimate large chunks, then you are less likely to add them over time, but you also won't be as good at estimating them. I think this is a good method for determining if a project is closer to a 1 year project or a 10 year project though. E.g. if you have 10 large equal chunks in a project and it ends up taking you 6 months to complete the first 1, you probably aren't going to finish by the end of the year, and you should evaluate if you are happy spending 5 years on it before you spend any more time.
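The "burn down chart that goes UP" symptom is easy to spot mechanically; a tiny sketch with invented weekly numbers:

    # If chunks are discovered faster than they're completed, remaining work
    # grows and no completion date can be projected. Numbers are invented.
    added_per_week     = [10, 6, 8, 9, 7, 9]   # chunks discovered each week
    completed_per_week = [ 4, 5, 5, 6, 5, 6]   # chunks finished each week

    remaining = 40                              # initially known chunks
    for week, (added, done) in enumerate(zip(added_per_week, completed_per_week), 1):
        remaining += added - done
        print(f"week {week}: {remaining} chunks remaining")
    # the trend is upward, so the projection diverges instead of converging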


In the "agile" practices I believe it comes from, I think it was actually basically only intended to answer "How much will we probably get done in the next week" or two or three at most, never to estimate an entire 'project'. Matching your perception of where it works and doesn't.

i think? Anyone confirm/deny?


It’s a much bigger problem than this, sadly. The fact that you are even saying “If you want it to continue to work, you also have to be very careful how you communicate...” is the problem. The problem is that the way we manage things causes engineers and management to be against each other in principle, so that management is pushing one way and engineering is pushing the other way. Your advice is kind of like marital counseling, “let’s agree to not label Tom the person as an inconsiderate asshole but just talk about how he makes you feel,” “OK, well Tom, I am frustrated by your daily behavior that I perceive as assholish and inconsiderate” and I am sure it helps a little but I mean if you think Tom’s feelings aren’t going to be hurt you’re not giving him much credit.

In fact if we were being honest with our managers we would be scandalized. We base our estimates on something like an 80% chance of getting something done (which already improves on the 50% chance by roughly doubling our estimate) and then every engineer I know has some informal rule about doubling or tripling the estimate before they hand it off to their manager. Those managers often add their own 20%. Then their managers see that something is up so they slash 50%. More than half of any project—perhaps even three quarters or more—is estimated as safety-factor. You would expect that if we were getting it wrong, we would be getting it wrong the other way, delivering early. The fact that we are late so often is a scandal in its own right. Adding more time buffer is clearly not helping much, since we have such a surfeit of it already. We must have wasted that safety buffer; there is no other explanation for how it disappeared.
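Taking the percentages in that paragraph at face value (they're rhetorical, not measured), the multipliers stack up quickly:

    # Back-of-the-envelope compounding of the buffers described above;
    # all numbers are illustrative.
    raw_50pct  = 10                 # days: the "coin flip" gut estimate
    to_80pct   = raw_50pct * 2      # aiming for ~80% confidence roughly doubles it
    engineer   = to_80pct * 3       # engineer doubles or triples before handing it off
    manager    = engineer * 1.2     # manager adds their own 20%
    upper_mgmt = manager * 0.5      # upper management slashes 50%

    print(upper_mgmt)                    # 36.0 days quoted for a 10-day "coin flip" task
    print(1 - raw_50pct / upper_mgmt)    # ~0.72: roughly three quarters is safety factor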

And once you have set up these luxurious margins of safety it is already too late. Why is the engineer adding that extra factor 4x or whatever it is? It is because management is already at cross-purposes to engineering. That engineer is adding that safety because they do not feel safe without it. Any insufficiently safe ski-lift in active use will see hundreds of buggy ill-specified implementations of safety harnesses: I will go on this ride but I am taking some precautions.

The starting assumption has got to be that it is OK to fail on the deadline. This cannot be because of hidden information but must come as a result of trust: “we are eating into the safety buffer, that is totally OK and that is what it is there for, but I want to do whatever I can now to get you out of meetings and approach-planning and to fill out your timesheets for you and heck, even plan an amazing weekend between you and your fiancee to alleviate relationship stresses—and take any other distractions and loads off of your plate so that I can get your single-pointed focus on this feature and we can finish this.”

If management makes themselves that safety harness, then there is no cross-purpose.


I think a fair summary of your comment is "If your management doesn't buy-in to the process, and treats estimates as deadlines, then it won't work." This is true. You need management to be on board for this to work.


Can we enumerate the places that have this level of management buy-in? Is that a long list? Because they sound like unicorns to me. In saying that, I think that I may have even seen it once but it didn't last very long.


I wonder to what extent this problem motivates folks to branch out and do their own thing.

Or conversely, stay in a boring-but-successful role too long at a company because they're able to play the game leisurely.

The problem is ultimately that the management buy-in opens management up to getting taken advantage of. It's hard to balance accountability when management, via HR, holds all the power. Peer reviews are rarely more than a formality, used to reinforce the manager's personal assessment/feelings.


Personal experience is that this shows up in a lot of places doing real R&D work because upper management has a lot of researchers, and researchers understand just how (unexpectedly) long it takes to solve hard problems.


> We must have wasted that safety buffer; there is no other explanation for how it disappeared.

Or you just accept the research that people aren’t able to estimate time, and so it wasn’t a safety buffer on top of a real estimate so much as trying to compensate for the gut understanding that the estimate is pulled out of the air.


"You would expect that if we were getting it wrong, we would be getting it wrong the other way, delivering early."

How likely is it that a surprise works in your favor? In my experience what you don't know rarely does. So my mental model is that completion times aren't normally distributed; they have a long tail. Additionally a task can't take negative time. This makes estimation tricky because a task can take much longer than the average. At my company we typically report an average and then explain why things took longer.
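A quick simulation of that intuition (log-normal is just one convenient stand-in for a "never negative, occasionally much longer" distribution; the parameters are arbitrary):

    # Simulated completion times with a long right tail: the mean sits well
    # above the typical case, and the surprises are all on the "late" side.
    import random, statistics

    random.seed(0)
    days = [random.lognormvariate(2.0, 0.6) for _ in range(10_000)]

    print(f"median: {statistics.median(days):.1f} days")   # the typical task
    print(f"mean:   {statistics.mean(days):.1f} days")     # dragged up by the tail
    print(f"p90:    {sorted(days)[9_000]:.1f} days")       # the surprises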


> All the research indicates

Which research would that be?


How it often works:

Manager says I want an A that does B? How long?

Most of the time estimate is N days, actual is N +/- 20%.

Sometimes it's N days, and you get lucky it's N/3. Instead of credit, you now have a reputation of padding estimates.

Sometimes it's N days, and you go oops, and it's 10*N. Now you are a slow incompetent.

This is why estimation is such a no-win for developers.

For areas that come closer like construction, they have centuries of prior experience and often the contractors are big enough to push back.

To use the example of this article, go to a boatyard and say I want a 150-foot yacht with a hot tub and a 20-seat theater, and my budget is $10,000. You would get laughed out of the office.


> "you now have a reputation of padding estimates."

> "Now you are a slow incompetent."

This is also symptomatic of organizations that behave adversarially and assume your intentions to be malicious. In that dynamic, even if you perfectly estimated things, you would still get in trouble for delivering what was agreed on instead of what was "meant to be agreed"...

Usually this is more the case in orgs who view software engineering purely as a cost center, instead of as a business differentiator. When considering a position at a company, this is a good thing to pay attention to.


> orgs who view software engineering purely as a cost center

This is spot on! I've heard this phrase before and it's never made any sense to me. Any business concerned about profit should never have a "cost" center. If that unit is not returning a positive ROI it should not exist. Of course software development costs money just like all other labor, building and materials but it's assumed the value generated by the work product is worth the investment.


Cost centers are generally unavoidable, they're aspects of the business where there is a well-defined target past which there's no meaningful ROI.

Regulatory compliance is a good example of this. Both you and your competitors must be compliant, but past reaching that state there's no additional value add. As a business, you're thus interested in keeping the cost of compliance as close to zero as possible, all cost cutting that does not break compliance is good for the company.


Keeping the cost of compliance small isn't the same as putting as little work into compliance as possible. More often than not, it's the other way around.

Corner cutting doesn't work on any of the usual "cost centers", and most of them can be transformed into a differentiator.


But even your example can be viewed as an unavoidable cost of doing business where the alternative is to not be in business. In other words, there's a distinction to be made between regulatory compliance, where you have no choice, and automating payroll processing, where you do have a choice. I think it's pretty clear we're talking about the latter, and I would not categorize an internal software development team whose job is to make the business run more efficiently as a cost center, even though there are no external sales of their work products.


All too often "cost center" is used to mean "anything except sales and marketing".


If tech is seen as a business differentiator, wouldn't the opposite be true? It's so important that they want estimates so they can deliver more value. If it's a cost center, whatever... we don't care that much.

I think my point is, the cost center vs business differentiator is not a good indicator of the root cause of this problem.


If it's a business differentiator, there's usually a better understanding from upper management that these things are hard, that there's trial-and-error, that some of this is R&D which will not have immediate value but strengthen the long-term position (or buy a patent moat)... If not, then that business is probably about to be in trouble.


> Usually this is more the case in orgs who view software engineering purely as a cost center

In my experience, this is more of a cultural issue, with it being pushed by engineering managers and tech leads who insist that everyone can be a 10x engineer if they just throw away their life.

When they’re past a deadline, it doesn’t matter why, because they were clearly just being “lazy”.

Estimates truly are a lose-lose for software engineers and they have probably caused me more grief than any other aspect of this field.


Or almost as bad, you are a nay-sayer. Why are you so negative?

Asks the person who punishes people for being wrong.


Yeah, if you don't trust the people that you hire to fight in the trenches, get the f--k out of this business.


And I think the iterated version of the game is worse. Once you give estimate N₁, they don't like the answer and ask again. Maybe they use different words, or ask different people, or make small changes. Maybe they just imply it seems high. So now we have N₂ < N₁. Repeat until the manager is happy with Nₓ.

Nₓ is of course about 1 day past the "absolutely impossible" time. So now we have gone from 50% likely to hit the date to 5% or less. So at best we get yelled at for being "late" to a number that was about what the manager wanted, not reality. Or worse, we finish the visible work on the day of, while leaving a lot of hidden work (e.g., technical debt) for the future, increasing the size and volatility of future estimates.

And that's not even mentioning that all of the conditions of the original estimate (e.g., scope won't change) are breezily ignored even when the date is remembered.

It's a stupid game, and I've stopped playing it.


> It's a stupid game, and I've stopped playing it.

Care to share the secret of how you did that? :)


Basically instead of giving them a sense of control, it's to give them actual control. What I promise to do is release a new version of whatever they want at least once a week, and hopefully more often. If every week (or every day) they get to decide what that is, they stop caring about estimates. More here: https://news.ycombinator.com/item?id=21071374

And I also wrote something up in response to a previous HN discussion: http://williampietri.com/writing/2014/simple-planning-for-st...


An observation here is that a house or a boat is something that people can see; we've all seen a building go under construction, whether it's a single-family home or an office building. It takes time, and we know you can't rush the process without risking the building being unstable.

With software, many parts of the business can't "see" the development happening, so they don't know why it would take so long. At the same time, unlike a boat or building, software is more likely to be easily modified, so there is less of a risk in not getting it all right initially. I do not condone said behavior; I am merely pointing out a possible scenario in which it could happen.

It is also true that a developer is more likely to be a pushover than a general contractor. Source: 11 years as a software engineer, and my family has been involved with land and real estate for a long time.


I’ve found it really helpful to communicate and train funders to learn how to watch software being built. It takes time to teach someone to look at builds and tests and growing code and activity and demo releases. It also takes time to teach developers not to game this.

It’s amazing how much this helps build trust. It’s hard when there truly are big giant components that take a long time between commits and builds. But I think those are much rarer than programmers think and say.


>It also takes time to teach developers not to game this.

That's a massive ask unless management stops using their old style of management immediately while learning, instead of slowly transitioning. There is zero reason for employees to stop protecting themselves from their incompetent managers. This is especially true in software, where the average job length is much shorter, so you get less of a benefit from investing time into helping train a boss.


The reason is trust. That’s nonzero but really hard to establish.

I’ve seen projects with untrustworthy devs and untrustworthy managers. Oddly sometimes that worked. But I’m too old to work in hellish environments for pay.


Sometimes the manager just doesn't believe you at all.

Learned that the hard way on my first lead role.

Him: How long will this work take?

Me: A year with this team.

Him: A year? How is that possible? I was thinking six months. <starts going down the entire list line item by line item dickering on every little thing>

Me: <after 15 minutes completely checked out of this conversation.>

When you don't have time to do something right you have time to do it over. It ended up taking us 18 months. To this day I firmly believe that because we tried to do it in half the time, it ended up taking us 50% longer. Accounting for our tendency to procrastinate, I believe if we'd gone in saying 12 it would have taken 14 at the outside.


> Sometimes it's N days, and you get lucky it's N/3. Instead of credit, you now have a reputation of padding estimates.

Or you just have a reputation of not having the foresight to pad the release based on your estimate.


I agree with everything except after doing this for 20+ years, I think of my estimate and then multiply by 3. I've been much better at hitting my estimates since doing that (but I still miss 75% of the time).


Yeah for a basic rule of thumb, I ask myself:

How long should it take?

Then I take that number and I ask myself:

How long will it actually take?

Then I double that number and add 30%.


If you are not embarrassed by your estimate it’s probably too low :-)


If you double a number and add 30% on top, you could just add 160% to start with, or multiply by 2.6. Since we are talking estimates, you would be similarly well served with just 3x :)


Yes, you're correct. I'm also, like many of the people on HN, familiar with the associative property of multiplication. It's just a humorous way of saying that taking your estimate and doubling it isn't gonna be enough.


I apologize for not seeing the joke: humour is a strange beast. :)


As a former developer, and as a manager of development teams, I feel that engineers tend to underestimate how long a task or a project will take them. This is due to several factors, including good developers having big egos about their capabilities and focusing only on the straightforward solution, without giving weight to the inevitable (and expensive) edge cases.

My formula for getting a better estimate from engineers (call it "padding" if you wish), is to take their estimate, double it, and then round up to the next largest time unit.

For example, if they say a task will take 3 hours, I double it to 6 hours, then round up to a full day. If they estimate it at 2 days, I double it to 4 days, then round up to a full week.

This sounds like a joke, but I've found that in the real world, these types of estimates are closer to what ends up happening.
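Just to pin down the rule as stated (the unit ladder of 8-hour days, 5-day weeks and 4-week months is my simplifying assumption):

    # Sketch of the padding rule above: double the estimate, then round up
    # to one whole unit of the next larger time unit.
    UNITS = [("hour", 1), ("day", 8), ("week", 40), ("month", 160)]  # sizes in hours

    def padded_estimate(hours):
        doubled = hours * 2
        for next_name, next_size in UNITS[1:]:
            if doubled <= next_size:
                return f"1 {next_name} ({next_size} hours)"
        return f"{doubled} hours"   # beyond the ladder, just report the doubled figure

    print(padded_estimate(3))    # 3 hours -> 6 hours -> "1 day (8 hours)"
    print(padded_estimate(16))   # 2 days  -> 4 days  -> "1 week (40 hours)"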


Agreed. The problem is that software really falls into 2 main categories:

1) doing something that has already been done in a very similar way. This type of software is relatively predictable and estimates are generally reliable, assuming it is being done by the same team that worked on the prior implementations.

2) doing something new where NO ONE actually knows what the specification really is or how hard it will be to implement. Most of the time, people don't even know who all the constituencies are that will have opinions or requirements to contribute to the specification. Estimating this type of work is largely a guess in the dark, and there are no reliable ways of producing valid estimates, so it is all based on prior interactions with the group and the experience of the senior developers with this type of project.

Most managers think they are in camp 1), but in practice if that was true they'd just buy off the shelf software. You're doing your own because something is different.


When's the last time a construction job was done on time? Or even within +/- 20%?


Happens every day all over the world. It's just such a mundane event no one writes newspaper articles about it.


Well, I've never heard about it.

Both of my parents worked their entire lives in the construction industry. It's a running joke that this never happens.


Software people tend to assume that the rest of the world is far more organized than it actually is. In particular, they seem to think that construction projects tend to be well-oiled machines with perfectly followed specs written in advance.


I work in civil engineering so I'm fully aware of what monstrous cluster fucks some projects turn into. Doesn't change the fact that the majority of projects (by number) are simple, small, unambitious projects that basically go as expected.


So, just like software projects.


It happens when people really care. I think there’s a lot of corruption and incompetence in construction so unnecessary overages are tolerated.

Atlanta rebuilt its 8 lane highway segment early [0].

[0] https://www.ajc.com/news/local/bridge-fast-construction-safe...


After the Northridge earthquake in the Los Angeles area, many freeway interchanges were rebuilt within months. But there was extreme urgency here.

General Leslie Groves was chosen to administer the Manhattan Project because he got the Pentagon built on time and on budget, and was viewed as someone of superior skill at these things.


Here in Germany we often jokingly say that the Berlin Wall was the last big public construction project that was completed within time and budget.

;-)


I suppose a good solution would be to evaluate the person on the accuracy of the prediction :)

Of course that's a nice dream in a management driven environment.


I moved away from web development after many years of full-stack development. It almost totally destroyed the joy I had in programming.

- Happy Agile team? That alone will cost you 2 days of work per week due to meetings, planning, etc.

- JavaScript? No, it has to be TypeScript for even the most trivial websites nowadays, and yes, TS adds another 20% of workload.

- TDD, with a dedicated testing engineer? No, of course not, you do it yourself! Another 20% of added workload.

- A shitload of tooling, from linters to bundlers and whatever else, that always needs some attention.

- Deployment, done by a devops engineer? You must be joking, right? That's also the work of the full-stack dev.

Now try to estimate how long it will take to implement a simple feature request from the PO. I always did my rough estimation, times 2. And even that was often not enough, because all kinds of urgent issues needed to be resolved, so you're kind of lagging behind all the time, which is a great recipe for burnout. I'm done with it.


If Typescript is adding 20% to your workload, I suggest spending a little more time getting used to it. When I first started writing TS, I agree it felt a bit cumbersome and it took some time to understand the nuances, but the time it's saved me in refactoring and catching errors I would have made without it has more than made up for it. I know this isn't really your main point, but I don't want anyone on the fence about TS to be scared off by that 20% figure.


Using static typing on anything takes more time at first, but I agree that it saves time over time.


Everything that you mentioned is supposed to make things easier, safer and more importantly faster to change.

However it seems like developers everywhere are just adding these things because they heard that facebook or whoever was doing it without actually stopping to think if it makes sense for their team and their product. Before you know it you're left with a ball of tool/process mud where every little change requires passing a resolution at the UN to implement and deploy.


I’m now of the opinion that using typescript, react and Webpack every time is the right thing to do! Even for a tiny project.

Typescript has won the well-typed JS war and offers the most practical solution. There are Haskelly and Lispy alternatives that are sexier, but Typescript feels closer to the metal and working with JS interop is sublime, while getting the benefits of types for refactoring, documentation and robustness. Always use it.

React is the best front end paradigm. It provides an excellent way to reason about the front end and avoid messy state and event handlers or bindings going around cascading shit into your ui.

Webpack is cool. You’ll need something to bundle and I feel it’s a good choice. I’ve had it singing some interesting tunes! It’s very flexible.

Also, git goes without saying. And the other implied tool goes literally without saying :) and no, it's not yarn!

Once you get used to these tools it’s just not worth not using them. It’s a one time investment like learning touch typing.

Yes, in JS you just need a script tag, but most languages and platforms outside the web have a bundle/build process, UI toolkit and typed language, so I don't see the big deal in learning these for web dev.


They make a lot of technical sense to most teams, not just to Google/Facebook. The problem is that there's a cost to everything but people don't take that into account.

You don't buy a car and say "I want those fancy wheels, yeah!" without considering the cost. But in IT, sure, management thinks most of that is easy or barely costs anything. I think that's the whole point of the article.


- Don't default to "Agile" (or be fully "Agile" for that matter)

- Don't default to Typescript (or any technology for that matter)

- Don't default to TDD (or any dev methodology for that matter)

- Don't default to linters and bundlers (or any intricate tooling for that matter)

- Don't default to complicated infrastructure

A friendly PSA that you can still, in the year of our lordt 2019, create a static site with an HTML file deployed to shared hosting over FTP. You could also create a dynamic site with some PHP thrown in there.

Reach for the minimal process, tech, methodology, tools and infrastructure you need to get the next X pieces of work done, and complicate things only if the pros outweigh the cons.

I'm not picking on you specifically blobs, I'm sure you walked into an organization that employed all these things with seemingly no good reason. The thing is, there's always a reason, it just might not be a great one!

That said, if your goal is to estimate things more accurately, you might need to spend a little more time with your coworkers to define as many unknowns as possible.


> A friendly PSA that you can still, in the year of our lordt 2019, create a static site with an HTML file deployed to shared hosting over FTP. You could also create a dynamic site with some PHP thrown in there.

Honestly that sounds like hell, we may live in a spiderweb of complexity but it's still a long way from the primitive web of the 90's which provided every conceivable kind of way to shoot yourself in the foot.

The hidden issue is that new web products are becoming increasingly complex due to tight competition, but businesses somehow do not expect that increased complexity means there is a need for larger teams and budgets. All this cruft in the modern web build pipeline is stuff made by developers, for developers, to help us navigate the impossible tasks handed to us in impossible timeframes. Some of these additions are very welcome, but the proliferation of complexity in development in general is mostly due to teams and budgets and time estimates not scaling accordingly.


A lot of that looks less like issues with full-stack development, and more with start-up culture where everyone thinks they're google.

Scrum is pretty gross, I agree.

Typescript I'm unsure about, I typically stick to vanilla javascript with a polyfill for the fetch API. I kind of doubt it adds another 20% of workload.

Tooling, linters, and bundlers I mean. That's just programming in general tbh. Cmake is probably my greatest headache with C++.

> Deployment, done by a devops engineer? You must be joking, right? That's also the work of the full-stack dev.

I don't know about fancy Kubernetes setups and what not. If you've scaled to that level then yeah, you probably need someone dedicated to things like deployment.

But for the most part I just use shell scripts, git, and make files for deploying our websites. Works pretty well.


I don't agree with everything, but a lot of this resonates with me. There are a ton of go-to approaches and utilities, all the hipster shit, that people easily reach for and then integrate badly. Not talking about technical integration, but cultural/social integration. If someone rubs X in your face, they should at least be able to transfer their passion for it and care about how it is applied. Committing to a new X is a giant effort for small teams if done right, but usually it's just quickly added and then sticks around somehow. This is just one point of nonsense.

Scrum and devops are good examples of how things with good initial intent get corrupted. Hey, I'm already doing devops, so why call it out if there won't be serious support from the company? Scrum is just pure cancer in its current state. It literally hasn't changed since its inception, but back then scrum cut down infinite cycles to something like quarterly. Now we are doing 2-week sprints with the same method and the same overhead? Smart. Of course scrum is just for the tech people, so there are still 3 other layers of planning and a couple of meetings that you have to attend.

In the end, there is no reflection on the things we're not doing great at.


The thing I find blows out estimates more is complexity in the codebase.

Typescript can be learned to a sufficient level that it has zero drag and saves you time when you need to refactor or understand someone else’s code.

Webpack is a pain to learn but like git you learn it once and benefit again and again.

But codebase complexity can knock estimates for six. Someone's crappy spaghetti design can blow the assumptions in an estimate by an order of magnitude. And it could be a detail buried somewhere that wasn't easy to discover. Like finding asbestos in your ceiling as you are about to remodel your house. Except a lot harder to explain to management!


One of the blind spots we have here is that I can't get people to think about the question "how long would it take to get a one-line code change into prod?"

That would mean looking at some things that people don't want to think about. So we try to feature toggle everything instead, and we dedicate extensive resources to keeping the old version of code as hot standby at all times.

These aren't bad solutions, it's just that they should be something we do in addition to being able to deploy quickly, not as opposed to.


I moved to desktop Windows development and couldn't be happier. People are more realistic, everything is more thought out and not a feather in the wind, and there's respect for time.


TDD is not when you have a dedicated testing engineer. The whole idea is to write tests before developing even the smallest of units (functions, methods): unless everyone had their own test engineer to pair program with, just imagine the friction if someone else was writing tests for you.

If you are talking about decent test coverage, and explicitly about decent system test coverage, then yes, a dedicated test engineer can help a lot.


I'm not sure how to avoid answering these loaded questions though. In my experience trying to 'negotiate' with people like that: they will just ask you the same question in 5 different ways and it will eventually get so uncomfortable you'll tell them anything to move on.

Another option might be to never do business with cash-starved companies. Founders and execs will be constantly worried about running out of money, acquiring customers, and pulling off miracles, and all that stress will trickle down on you probably for no real benefits.

A key question to ask an employer, then, is how much 'runway' / money they have left, or whether they have a revenue source. It seems fairly probing, but realistically, not everyone wants to invest heavily in a company that might not even be around in 6 months.


> I'm not sure how to avoid answering these loaded questions though. In my experience trying to 'negotiate' with people like that: they will just ask you the same question in 5 different ways and it will eventually get so uncomfortable you'll tell them anything to move on.

Once you realize what's going on, you can turn that into a game.

Do you know the game where somebody asks you a bunch of questions, and you are allowed to answer with anything except "yes" or "no"? It's very hard to do, because it's such an ingrained habit.

If you really want to not give an estimate, make it a game to not give one.

But, give them something else instead. Work out a bunch of questions about use cases / data volume / whatever, and say "if those were answered, we could build a prototype in a few days that would let us make a more reliable estimate" or something like that.

Another comment: coming up with good estimates is work. The other day somebody asked me if I could come up with a rough estimate for a (poorly specified, IMHO) project, and my answer was: no, I don't have time. If you need it anyway, formulate it as a task in Jira, so that it gets prioritized along with all my other work.

(Fun fact: we estimate our tasks in story points, so then we estimated how much time it would take us to come up with an estimate... :D )


"and they you allowed to answer with anything except "yes" or "no"?"

None of the Celtic languages have an equivalent of yes and no. Just make a habit of speaking, say, Irish or Welsh at work. (To an English speaker, that sounds weird at first, but you get used to it quickly.)


On the flipside, necessity is the mother of invention. How many behemoth companies do we see now that sit on their hands and don't bother innovating at all? There's surely a balance to strike here between desperate and panicking and fat and lazy.


I think what we're looking for here are healthy buffers.

Otherwise the engineer is relying on (possibly blind) trust that management won't try to "motivate" him/her into performing miracles (ie. breaking the project management triangle).

https://en.wikipedia.org/wiki/Project_management_triangle


My understanding is that it is harder for engineers to estimate how long a job will take than to do the job. That is to say, the complexity of the time estimation task is higher than the complexity of the task you are estimating. I read this in the book "Making Software: What Really Works and Why We Believe It" [0]. The assumption in the article is that an engineer can produce reliable estimates for complex work and appropriately guide their managers, and I don't think that's true.

[0] https://www.amazon.co.uk/Making-Software-Really-Works-Believ...


Estimates are hard because you are literally making something that has never been made before. Yes, you have experience with sub parts and similar patterns but not the situation of this timeline, this team, this technical debt, etc.

Too often the request is along the lines of "How long does it take to build a house?" not "How long will it take to build the house in these blueprints with these systems on top of a mountain that is also impervious to mudslides?". People can generally estimate the former but the latter is all about the unknown details.


I don’t think the article assumes an engineer can produce reliable estimates for complex work; in fact it claims the opposite:

> But the reality is that if you can make a probabalistically accurate estimate, then its likely that the task should have been automated by some other means already. In other words, its easy to estimate a task that essentially amounts to copy and pasting some well known CRUD API end point patterns, but any even remotely creative or novel work is almost guaranteed to be totally unknown.


> I don’t think the article assumes an engineer can produce reliable estimates

Well, he sort of goes back and forth, but he includes this:

> The engineer comes back with this simplified description and says he can get a first version produced, but it will take a month instead of 2 weeks.

Which I see some variant of every time I see somebody rail against the unrealistic expectations of software estimation (which, by the way, I’ve been seeing people rail against since the late 80’s to no avail). The implication here is that if the manager had just listened to the developer and accepted his initial estimate of one month, the software would have been done in one month: the developer could estimate with precision, but the manager bumbled along and screwed everything up by trying to negotiate it down.

This is a dangerous position to take unless you’re absolutely sure about your one-month estimate: if you say one month, he says two weeks and you look him in the eye like the alpha wolf and say, “no, one month, and no sooner”… you had damned well better be able to deliver in exactly one month. The reality is there’s probably no way to tell, _especially_ if other people are involved, so you’re better off shrugging your shoulders and saying, “yeah, sure, two weeks”, doing as much as you can, and preparing your story ahead of time.


I recently estimated a task would take 30 minutes.

It took 40 minutes to do that estimate.

But, TBF, several important decisions were made during the process of estimating, such as what should be excluded from the task, basic organization, and some research.

BTW the task being estimated was doing time estimates for a project (which came out to be 2-3 weeks).


> I recently estimated a task would take 30 minutes.

My boss at my last job told me that he had observed that if he asked somebody “can you get this done by the end of the day”, he would get an accurate answer (either yes or no). Any further out, there was no correlation between what they said and what they actually delivered.


Obviously we now need an estimate for how long it will take to make an estimate.


Often it seems that by the time you've properly evaluated the code and the changes needed, you are nearly done.


This is often overlooked and undervalued — upfront time spent on understanding the problem and designing before getting down to the actual work ends up saving a lot of actual time. But most people don't have patience for that and think it is wasted time.


Yeah I've only been coding professionally for a year now and I do SO MUCH MORE reading/researching, planning, annotating, and pseudo coding than I ever did when I started.

The outcomes are so much more predictable / better quality.


In that case it seems like it would be better for PM's to take the role of task estimation. Of course this only works if they are also the ones accountable for inaccurate estimates.


This sort of thing is why engineers are paid way too much, and way too little, and why so many see the profession as a young man's game. You're just not the master of your own fate. Sure, the computers are predictable (for some value of "predictable"...) but management is not---even if they try to be.

I've been mentally moving away from being paid to write software. I can see writing it for my own use, even professional use---I just don't want to be on the hook for ever-changing requirements, decided by people who are often kind, but not, in the end, competent. The best managers understand this and will give you leeway, but this is not a sustainable, repeatable thing. It lives and dies on one relationship.

I wonder sometimes: what if we just all stopped writing software one day, and started just using it? Writing software is a bad deal in a lot of ways---hard, socially isolating, etc---while using it is amazing---the computer does the work for you!

Don't get me wrong, I spent an hour at work today presenting on Lisp macros and loved every minute of it. But a dev career, for many, means a capped income and a razor's edge of apparent competence.


What jobs don't have drawbacks though? I have been thinking a bit about this:

1. Doctor: In many cases, throw away your life and work 60+ hours and also you need to specialize and study long and hard.

2. Lawyer: I don't know enough about the profession. The job doesn't transfer well to other countries.

3. Consultant: 60+ hour days are the norm, and interviews tend to be based on quick thinking in the high school arena. To pass the interview, you need excellent and super quick high-school-level knowledge of math, logic, and social and political skills. This sounds denigrating, but I found it tough to do quickly.

4. Investment banker: 100 hours

5. Construction worker: your body will thank you later (/sarcasm).

Programmers have a certain set of advantages and disadvantages (I agree with your disadvantages), but how is it worse than other white collar jobs?


You're not wrong. The labor market is relatively efficient.

Go on a reasoning chain with me:

- what solves the problem nicely is to sell software. Selling good software can be one of the easiest and most lucrative jobs in the world. In practice no particular employee gets that---the company pays enough to motivate, but takes the rest. The solution, then, is to be the software company.

So...start a startup? "What a novel idea Dropit, on HN of all places!" I have actually "done this" (or thought I was doing it) multiple times (failed every time), but looking back I can see a lot of trivial mistakes I made. But at least I can find some---with many code-for-hire fiascos, the mistake was taking the job in the first place.

So my conclusion: accept the job you have, for now, while saving money and trying to have a good, normal life, and put some effort into seeking out new opportunities. FU money is a thing, as is FU market position.


A strategy I thought out today was as follows (note: I'm based in The Netherlands):

1. Get a job for 32 hours per week.
2. Work 10 hours on weekdays and 8 hours on one weekend day (take the other weekend day off), so that you clock 58 hours per week.
3. Don't take your vacation days; save them.
4. Take the other 6 months off.
5. Oh, and pay less tax: you're handing in 20% gross, but net you're only handing in 15%.

I think a schedule like this works for people like me, because I like to work hard and earn my freedom and then relax and doodle around for quite a while (i.e. 2 to 4 months) and then do a small side project and then work hard again.

Since I'm at the beginning of my career, it sounds like an interesting experiment.

Not for my situation, but for others: geo-arbitrage becomes interesting, as you can literally fly to Thailand for 6 months and come back (a cheap return ticket can be found for around 400 euros).


> 3. Consultant: 60+ hour days are the norm

Dang, I thought 12 hour days were bad.


You might consider whether you are ready to go into management. We know the trope of the engineer forced into management, who did not want that job. But I think the opposite is just as common -- great engineers, who would also be great managers, that never (want to) make the transition. Understandable -- a good management day seems less enjoyable than a good code writing day. But I bet most companies, careers, and products would be the better for it.


I have wanted to make that transition for years now, but the opportunity never presented itself, and I can't even find a way to force my way into the opportunity. It seems like all the managers in every company I've worked at were coworkers of the founders at a previous company.


> The best managers understand this and will give you leeway, but this is not a sustainable, repeatable thing. It lives and dies on one relationship.

This is one of the reasons you often hear about an entire team moving to a new company. They have a dynamic that works, and they do not want to risk it.


"this is not a sustainable, repeatable thing. It lives and dies on one relationship."

That can be quite sustainable; that one relationship (or those few) can be long-lasting. If good managers leave and are replaced by bad managers, quite often team members will soon leave to join that manager at their new company - it's a well-known fact that the direct manager matters more for job satisfaction than the particular company.

I know people who over the years have worked at 3-4 companies for the same boss (not continuously), and if I was unhappy at my current position, I remember a few previous managers whom I'd call to ask 'are you hiring?' and in the tech field the answer pretty much always is positive.


> I wonder sometimes: what if we just all stopped writing software one day, and started just using it? Writing software is a bad deal in a lot of ways---hard, socially isolating, etc---while using it is amazing---the computer does the work for you!

What's the difference, though? It's insane how much of my work the computer does when I write code these days.


The difference is it never actually gives you more time. The expectations increase in lockstep with the efficiency.


A few years ago I was brought in to lead a team in a division of a public company. Our mandate was to replace a major component of an enterprise system that you couldn't quite call legacy, because they'd never actually gotten around to replacing the real legacy system with it.

My initial proposal was to build it incrementally: tackle the obviously critical functionality to get something working and ship it, then define and implement additional features on an ongoing basis. I had hoped this would fly, since the company talked a big game about being agile.

But no dice. We had to have a complete plan for all the features currently supported (some of them of dubious value, many of them incomplete and inconsistent in specification), and it had to come with a specific timetable. Under protest, the team and I worked hard to produce high-level estimates, and ended up basically guessing that the thing would take six months.

No dice, it had to be delivered in three. I made a suggestion along the lines of the strategy suggested in the article -- we could identify a subset of features that the team would be comfortable could be delivered within the deadline. It would probably be stronger, I argued, since it wasn't clear that all the added weight added much value.

No dice: everything was priority one. I pointed out something like, "you can't fight the laws of physics". I apparently gave the impression that we'd get it done anyway, though my memory of the conversation was that I was sternly disagreeable.

Our team goes ahead and starts implementing items from a value-prioritized backlog. Fast forward three months, and we have a working system that supports the most important use-cases. We considered it past ready to ship, knowing we'd need to keep iterating. The response from management, predictably, was frustration at the missing features, despite the advance warnings.

Management goes into "high-pressure" mode, and for the next three months I do my best to keep the team insulated. After more or less six months of total development time, we finally replace the prior component. All the users agree that it has far fewer bugs. I and many of my team members grumble that it has far too many, on account of the fact that we weren't given the chance to ship the minimum viable product when it was ready.

I'm not really sure what the moral of the story is.


The moral of the story is that hard-to-please idiot customers/employers don't want to pay more than realistic, normal-to-please customers/employers. So there is absolutely no reason why anyone would work for the first when the second is available. And believe me, the second is out there.

I've seen plenty of (experienced) businesses and individuals that refuse to work for toxic clients, because there is plenty of other work that pays exactly the same but without all the shit.

P.S.: According to your story, you sound like a good manager :).


Did it result in burnouts? I was part of such a mess and two out of four team members got burnout, myself one of them. It did teach me to get the fuck out early the next time I feel the same emotions of inadequacy and pressure on me. And in the very next project after a very sudden ramp up of backlog and increased pressure I looked in the mirror and got out, cleanly and without burnout or bad feelings.


Helps to be on one death march project in your life to know what it looks like. Don't do more than one. It can damage you, give you long-term health problems, lead to substance abuse, etc. When I see a thing like that 10 miles out, I go "Nope, nah-ah - I am out of here".


Thanks for sharing; to me it sounds like a successful project which management/customer made feel like an unsuccessful one due to unreasonable demands. Way to ruin morale!

Well done for at least attempting to introduce some flexibility. One lesson for the future might be: how to more forcefully stand up for what you believe is the right way to go - but of course this is easier said than done!


"I'm not really sure what the moral of the story is. "

To me, it's - never waver from your experience/gut estimates, and a no is a no.


Incredibly hard to do when you have the full weight of management on your shoulders. Especially if they back you into a corner in a meeting where you are the only technologist facing a whole hierarchy of management types.


I completely agree on how hard this is. However, I've built the best relationships with managers where I put my foot down once in a while. I guess they like the struggle and the honesty in the end.


I've run a business longer than a decade so I know how hard it is and I've burned out several times over. But I just don't give up and a no has certainly become a NO.


I feel like this is why lead technologists turn over so often. Because they say the harsh reality that no one wants to hear, and no one cares that they are right, they just care that they are not delivering to their expectations. Maybe the issue is the expectation.


A lot of managers think it's their job to decide what the quantity of things is. Like in an assembly line, we need 30 turbo encabulators at X price and it's your job to make it.

Software has an infinite quantity of numbers to use as tools to build an end product. A programmer's job is to decide what general ideas end up as what specific quantities. A manager cannot decide this, lest he do the programming job himself.

Money is a great tool for measuring quantities of material goods, or the value of material properties. But electricity provides infinite quantities, exchanging money for programmer time is closer to exchanging two forms of currency (binary numbers and dollars) for one another. Code is a constantly transforming river of numbers that we make a draw-down into a bucket that we call an end product. That is not a quantity of material good like iron ore, at all.

The factory model simply doesn't work, and producing code should be treated closer to the stock market than the factory model. Early access/kickstarter/pay-over-time business models in video games represent an example of the constant transformation model working successfully. Business apps should be built by crazy people who will build them anyway, and money holders should bet on and invest in them like stocks that rise and fall in value over time, rather than as promised end goods.


I’ve gotten plenty of requests like this:

‘we just need a rough estimate, we won’t hold you to it’.

‘Ok based on the 2 minute conversation we just had I think about 3 months’

‘What! That seems way too long’

Other times I’ve been asked for estimates on features even though there is a hard deadline due to some external factor. I really fail to see the point of estimating anything when there is already a decided-upon end date. Anyway, I usually point out that they will need to put the features in order of importance and I’ll work down the list, and that they should start thinking about what can be cut. This usually leads to protests of ‘we have already cut everything we can’. But I have to laugh as we get closer to the deadline and suddenly not every feature is as important as originally thought, and features magically get cut.


I always talk them into making an ordering, and it's not difficult. If my managers fail, they escalate it to me. It usually goes something like this:

   Me: You need to prioritize the items.
   PO: I cannot, they're all important.
   Me: If you do not, we will make them by the order we want, 
       possibly coin flip, but probably in order from easiest to hardest.
   PO: Fair enough, you get them all done anyway.
   Me: That is not a given and you know that, but I will send you an 
       email for confirmation that any of them can be dropped to
       meet the deadline, okay?
   PO: Hold on, can I at least pick a subset that you know will be done?
   Me: Sure, and don't stop until you have roughly 3 equally sized, 
       by estimate, categories: Must-have, Ought-to-have, Nice-to-have.
By the time we're heading into the "Ought-to-have" category I tell them to do it again. I fear that some day I might be in a position where I do not have the weight to do this, but as it stands right now, not a single developer produces a line of code if someone waltzes in and tries to decide both scope and deadline.


You know, I've seen a number of Uncle Bob talks where he emphasizes professionalism, and every time he does I envision interactions like the one you've described. But every time I hear this I think to myself, well, he's in a unique position because of his brand and companies seek him out for his skill set; he has the luxury of being professional.

You've added the qualifier he never seems to add, that someday you might not be in the position to act professional. This is a sad but accurate commentary on the current state of things in our "profession".


I guess also some people find out by trying. It's scary when you don't know if the response will be "fine, ok then" or "hm, we'll talk about this later", and you find yourself demoted or encouraged to leave (Europe) or fired (more a US thing I guess).


If you're punished for professionalism, you know it's time to find a different place to work. If you don't, it's a one-way ticket to stress and depression.


Then when the deadline does whoosh by - nothing changes and the sun rises just as always. It turns out very many deadlines are not in fact hard, they were just thrown on the table in some meeting and everybody starts to act like it is the end of the world if that date is passed.


Very true. Plenty of projects I've worked on have had immovable dates that suddenly become movable once you get close enough to them.

On the flipside though I do think a deadline is necessary for everybody involved, developers and managers alike. It really helps to limit feature creep. And it forces people to think about what they really want or need.


My last conversation like this:

Manager: ‘we just need a rough estimate, we won’t hold you to it’.

Me: ‘Ok based on the 2 minute conversation we just had I think about 3 months’

Manager: ‘Ok, I guess then we need to outsource it.’

Me: ‘Ok, then I need 3 months for guiding them through it, and 3 months for fixing their bugs.’


If managers come to developers with problems that are NP-complete, then no matter how well they transfer the idea/vision or set time expectations, the problem cannot be solved efficiently. And unless the developers have been trained in CS/math, they may not even understand that what they are being asked to do cannot be done.

For example, say a manager has an idea to find the largest group of friends in a massive social network. He wants a developer to write an app for that and has a 20K budget and 2 months. You could not write this app with 10 times that budget or time.

How can you determine which group of friends is the largest?

https://en.wikipedia.org/wiki/Clique_problem

https://en.wikipedia.org/wiki/List_of_NP-complete_problems
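
To make the blow-up concrete, here's a minimal brute-force sketch (the friendship graph and function names are invented for illustration; none of this comes from the article). Smarter exact algorithms exist, but they all degrade to something exponential in the worst case, which is the point:

    from itertools import combinations

    def is_clique(graph, group):
        # every pair in the group must be friends with each other
        return all(b in graph[a] for a, b in combinations(group, 2))

    def largest_clique(graph):
        # try every subset of people, biggest first: ~2^n subsets to check
        people = list(graph)
        for size in range(len(people), 0, -1):
            for group in combinations(people, size):
                if is_clique(graph, group):
                    return set(group)
        return set()

    friends = {
        "ann": {"bob", "cat"},
        "bob": {"ann", "cat", "dan"},
        "cat": {"ann", "bob"},
        "dan": {"bob"},
    }
    print(largest_clique(friends))  # {'ann', 'bob', 'cat'}

Fine for four people; for a real social network the subset count dwarfs any budget, which is exactly the estimation conversation the manager doesn't want to have.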


The thing is, there are almost no cases where a problem must be solved in some strict mathematical sense. A huge part of project management is reformulating the problem to fit an existing solution to the initial requirement: what do you mean it is not a clique? It's still a group of friends, 80% of whom know all the others, with an empirically tested chance of less than 5% that any person knows fewer than half of the others.
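
As a rough sketch of what that reformulation buys you (the 80% threshold comes from the sentence above; the code and its behavior are my own illustration, not a known-optimal algorithm), a greedy pass that keeps dropping the least-connected person runs in polynomial time instead of exponential:

    def dense_friend_group(graph, min_fraction=0.8):
        # relaxed clique: drop the least-connected person until everyone
        # left knows at least min_fraction of the rest of the group
        group = set(graph)
        while len(group) > 1:
            degree = {p: len(graph[p] & group) for p in group}
            worst = min(group, key=lambda p: degree[p])
            if degree[worst] >= min_fraction * (len(group) - 1):
                break
            group.remove(worst)
        return group

It won't find the mathematically largest such group, but for the product requirement it is usually good enough, and crucially, it can be estimated.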


There is also a sort of "corporate interaction" NP-complete game. It's hard making department- or corporate-wide changes.


There's no substitute for technically competent management, period. If you don't have it, you will be limited, and it's something you just have to accept -- as in, make that a high-priority question to ask about when interviewing for your next job. Further, it's not that software developers are bad at estimating. It's that software estimation is only useful when taken as a very rough guide for strategic planning. You need to know if something will take 2 weeks or 2 years. You don't need to know if it will take 2 weeks or 4 weeks. If you think you do (as most shops I've worked at do) -- you are micromanaging your team. The sooner we start calling it that, the sooner we can all accept it's as much an anti-pattern in software development as it is in every other facet of business.

In short, estimate roughly. Agree to high level deliverables. Assert that technical management needs some control over strategic business decisions, and flexibility in implementation and timelines. If you can't get those things, temper your expectations, or look for a job with more competent leadership. The last point has become my guiding principle. Stop thinking you can "fix" an org from the bottom. You can only fix an org if they hire / promote you to do so, and empower you as such. That's what you ask for (or look for in your managers).


When my boss asks for estimates, I break down the work into rough chunks and estimate a range for each chunk. Added up, I might end up with an estimate of between 4 days and two weeks. I also explain my reasoning, telling him which factors have the largest variance, whether due to uncertainty or to which route we need to take depending on client wishes.

This is usually good enough for him to give a decent estimate to the clients for when they can expect fixes and features. Regardless of my estimates, it's not seldom that other, more important stuff comes up, pushing things out a week or more.


I dunno, some of the worst managers I’ve had have been great engineers and fully technically competent. It doesn’t always translate, and even during estimates or sprint planning I’ve seen them make these same mistakes.


I agree and should have clarified -- technical competence is _a_ requirement for good management, but you still also need to be(come) a good manager in the traditional sense. It's a different job than writing code, and not everyone will be good at it (and arguably, by definition, of the relatively small number of good engineers, even fewer will also be good managers on top. I would expect quality technical managers to be relatively expensive).


> I would expect quality technical managers to be relatively expensive

Now that I can fully agree with. I can count on one hand the number of technically competent _good managers_ that I know. They're few and far between.


I think it was being advanced as necessary but not sufficient. Certainly being technically competent does not qualify one for a management position. It requires an entirely new skillset to be developed and a shift in mindset that many developers aren't going to be willing to make (i.e., despite being elevated to a management role, you are decidedly not the "star" of the team and must delegate the most interesting work to your team members).


The point is that it is generally easier for an engineer to learn business skills than a non-technical background person to learn engineering.


Can you share any specific thoughts on how to look for this when you're interviewing?


Project plans and estimates are usually fantasies made only to give executives a sense of control. For people who are actually trying to get something done in the world, I think the better thing to do is give them actual control via short-cycle processes and frequent delivery.

At my last startup, we got very good at releasing early and often. What would have been projects elsewhere got broken down into very small releasable units; our average story size was under a day. Very occasionally my cofounder would have us ballpark estimate two different batches of work using arbitrary points, just so he could get a sense of the relative cost of two paths. But we never estimated in terms of dates.

This worked for us because he really took advantage of the speed of iteration. Almost everything we built came with a question. In the next user test, would users understand it? Would they want to use it? Did they react as expected? When we sent some traffic to it, did people engage? Did the right people engage? Did they return to use it later, indicating value? Etc, etc.

The answers to those questions would drive what we did next. Because our goal wasn't to build features, but to make things happen in the real world. And once you get used to continuous learning, it's obvious that planning too far in advance is wasteful. All of the brilliant ideas you had a month ago weren't based on what you learned in the last month. Eventually, sensible people learn to stop producing a lot of plans that never get fully used. And, of course, to stop asking for estimates on them.


I understand that sometimes a project 'takes as long as it takes', but how does one deal with extrinsic deadlines then? For example, software that needs to be 'on the shelves' by the next Black Friday, or the next tax reporting season.

Estimates can't predict the future, of course, but good management will know that 'make the deadline, no matter how' is not the only option - changing scope is another popular option, for example.


The solution is exactly as you say: being realistic about scope.

If we absolutely have to have something by, say, Jan 1, then the first thing to do is figure out the absolute minimum for "something". The way I'll usually explain it: "Put yourself mentally on January first. If there is a feature whose absence would make you delay shipping despite the consequences, put it in the minimum set. If you'd ship without it, then leave it out."

Then we start building, measuring completion toward goal as we go. If the project is in good shape, we pretty quickly should be able to say, "Yes, we'll hit that months ahead of time," or maybe, "The date is at risk, but here's what we can do." As a bonus, we will also quickly have something we can put in the hands of users. Maybe for validation, maybe for early feedback, maybe for revenue!

Then as the date comes, we're just getting the nice-to-haves. We know we can ship any time, because we've been shipping frequently for a while now.


That's absolutely fine and a good way to work. However, I do still believe that plain old estimation has its place. Consider that you are part of a bunch of executives deciding in January whether to build a product A, which would have to be done by Black Friday, or product B, which would have to be done by Christmas.

Even after such a high-level decision is made, a broad scope estimate must also sometimes be made. For example: "would we have time to add a chatbot to this product?". If so, you could need to hire people with the relevant experience and have them ramp up. But if you have an estimate in the spirit of "well, must-haves A, B and C are probably going to take no less than X months, and the chatbot would take just as long if not more", then it's better to drop the chatbot idea and invest its budget elsewhere.


I do agree that broad relative estimates for different paths can be useful once the project is underway, and in fact said so.

But I just don't believe that the high-level decisions you describe can be effectively estimated in calendar terms. Even if you get the execs to cough up sufficient details for real estimates (which they won't), those decisions are being made at the point in the project lifecycle when people know the very least. Any user-focused team will learn a ton along the way that shapes the product, and presumably neither the competition nor the market is standing still. Better decisions driven by learning means scope volatility, which means there's a hard limit on the utility of estimates.

I think the best one can do on day 0 is give reasonable sanity checks and the haziest of ballpark numbers. But that's fine, because execs are making ROI calculations, and there's no point to making your I more precise than your R. And we all know how much of a SWAG business value estimates are early on.


I think that a lot of people who work on deadlines like this have learned how to incorporate this into their products overall structure. Think of games that have to be released on a proper timeline for a specific holiday season. Now instead of trying to meet those deadlines, game developers are putting out a minimum product and releasing updates, patches, and missing portions of the game in chunks after the official purchase/release date. Whether this is successful or not is up for question, but it seems to be one strategy for coping with amorphous deadlines.


This sometimes works for games, but not for everything. Consider, for example:

* Tax-reporting software

* Adding a coupon system to your ecommerce software in time for Black Friday

* etc


> but how does one deal with extrinsic deadlines then?

Deliver working software one chunk at a time, however you have to break it down to make it so.


After you're set on the general product you are building, that's fine. But consider you're trying to decide whether to build product A or B for deadline X. Which one has the largest probability of finishing on time?

I'm not advocating having engineers estimate the completion date of their daily tasks, I'm just saying estimates aren't totally useless.

For another viewpoint that's not just mine, consider point 6 of the Joel Test, or the methodology of Evidence-Based Scheduling, also by Joel Spolsky (https://www.joelonsoftware.com/2007/10/26/evidence-based-sch...).
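
For anyone who hasn't read it, the core of Evidence-Based Scheduling is small enough to sketch (this is my paraphrase of the idea with made-up numbers, not Spolsky's code): replay new estimates through your historical estimate-to-actual ratios many times, and report completion as confidence levels rather than a single date.

    import random

    def ebs_forecast(history, remaining, rounds=10000):
        # history: (estimated_hours, actual_hours) pairs from finished tasks
        velocities = [est / actual for est, actual in history]
        totals = sorted(
            sum(est / random.choice(velocities) for est in remaining)
            for _ in range(rounds)
        )
        # e.g. "50% chance we're done within X hours, 95% within Y"
        return {p: round(totals[int(p * rounds) - 1], 1) for p in (0.5, 0.8, 0.95)}

    print(ebs_forecast(history=[(4, 6), (8, 8), (2, 5), (16, 20)],
                       remaining=[8, 12, 3]))

That's a broad estimate in exactly the sense above: useful for the A-versus-B decision, not for promising a day.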


One of the most painful types of people to work with is the CEO / founder who got lucky with a product early and came to believe they're a "product guy".

As in: My early product hit whatever trend / wave / need therefore I must be a product guy (and not just lucky).

The product guy has a preternatural ability to understand what the masses want. Watching them work -- witnessing their process -- is something to behold. They will steer products in a direction regardless of cost, complexity, or likely outcome.

The outcome, quite often, is to tank their company. Since they don't understand why they were successful in the first place, it's very likely that their success won't last.

But if you're along for the ride, wow, expect the following:

1. You're the greatest (available) engineer we've ever encountered, building super-complicated XYZ is going to take this company to the next level!

2. This is taking much longer than expected and isn't matching up with our expectations but I'm 100% sure of my vision because I'm a product guy.

3. We're running out of money (because the market conditions that gave us early success have changed) and super-complicated XYZ isn't going to rescue us -- because you're a worthless piece of shit of an engineer!

See what happened there?

They're sometimes hard to distinguish from a vanilla bullshit artist. The bullshit artist will tell you how well capitalized he is, tell you he only wants the best (meaning he thinks you're expensive) and then try to slowly whittle your sense of self worth down until you get "the offer":

The offer is game-changing, life-altering for you: Instead of continuing to pay you with money, they're going to start paying you with magic pixie dust. The magic pixie dust will make you rich "when everything comes together."

When you tell bullshit artist that you don't work for magic pixie dust, that's when you learn that, in fact, you're a worthless piece of shit of an engineer.

I actually respect the bullshit artist more: They're bullshitting other people but they know they're full of shit. Product guys, depressingly, bullshit themselves.


It baffles me someone can be a product guy without understanding the market. Surely where, how & why the product sits where it does in the marketplace is intrinsic to the value of the product to the customer to begin with.


Makes me glad I'm not a product guy. As an engineer you only risk your own hide or the work you do. As a product manager, you risk the entire enterprise.


Fantastic article. The diatribe against estimation reminds me of this old article from 1996: https://web.archive.org/web/20140604112011/http://www.thomse...

> It is our belief that over the 30 plus years of commercial computing has developed a series of sophisticated political games that have become a replacement for estimation as a formal process.

Unrelated: can we not do this thing with the whole left side of the screen being one static image? It's really distracting.


> can we not do this thing with the whole left side of the screen being one static image? It's really distracting.

Agreed, I've been meaning to rework this


Timelines kill me. When I'm halfway through a project I can tell you exactly when it's going to be done. Or I can plan the shit out of it upfront and give you an accurate timeline, but that doesn't fly because you need the timeline upfront. I like working - can't I just work till it's properly done?


My usual way of doing this is by spitballing an estimate with the team, multiplying it by 2.5 or 3, inventing some meaningless milestones and Gantt charts, and presenting it to an outraged management. Then we haggle down to something realistic + a thin buffer. The management has a sense of control, we have realistic deadlines.

Caveat: I work in the financial industry, where the complexity of writing a compiler and whipping up a Tableau dashboard are perceived to be equal.


I’ve been participating in build pipeline work for quite a long time now, and fairly early on I noticed a pattern. If the build takes ten minutes on your local box then the tempo of code-build will tend to be 20 minutes, even when you are just fixing typos.

This is partly why some people are obsessed with very short builds. Some said seven minutes was ideal to avoid the developer getting preempted. Then it was three. Now some shoot for one.

I would bring this up and others would say they had noticed the same thing, but none of us knew why this phenomenon happens.

Then it hit me: it’s just Hofstadter’s Law playing out in the small. If you think you have five minutes, you will start something that you estimate will be five minutes, but it will take you ten, either because you were wrong or because you get distracted.

There aren’t a lot of tasks that take one minute, and even if you’re wrong you only lose one minute.


> If engineers stop giving estimates for their work and simply ask for deadlines then it changes the dynamic of the conversation.

If there's a single line you want to take away from this wonderful essay, it would be this.

Parkinson's law usually takes care of the rest once the resources have been agreed upon.


When I was a young programmer in the early nineties, an old IBM programmer told me 'Don't accept the invitation to fail.' Good advice, but what if the goal posts are moved? Best then to keep positive and communicate early and often about how the changes will affect the schedule. Work together as a team and get your manager to help you, and that'll go a long way rather than keeping 'obvious' things to yourself and catching management off guard when work slips.


Web designers: Thin sans-serif body font means you hate your readers' eyes.

I'm grateful for "reader mode".

- - - -

There's a very interesting book, "Hollywood Secrets of Project Management Success", that details their system from an IT point of view. Here's a decent review: https://community.dynamics.com/nav/b/navigateintosuccess/pos...

> Inside this system lurks the biggest difference between the IT and Hollywood: movie industry exists for more than a hundred years already, they had enough time to develop and establish best practices and to prove them in practice to such an extent as to tell everyone: this is how you need to do movies; and everyone can trust it works, because it has worked for the whole industry for decades already. IT industry is very young, and exists for a few decades.

(Although, IT is technically as old as the cuneiform-inscribed clay tablet, eh?)

Specifically to this discussion, they (Hollywood) can set expectations reliably because they have enough shared baseline experience of how long things take.

One way to interpret this is to ask yourself, of some new change in process or technology, "Will this stabilize delivery times?" But to even begin to answer that kind of question you have to establish reasonable ways to map between work done and results delivered, and then set up and track metrics. (Which is easier said than done.)

- - - -

One problem unique to IT is the "interpretability" of our end products. Anyone can watch a movie, but most software requires at least some training to understand and use.


Good article. I thought about this problem quite a bit. I've been on both sides of the question.

I thought I was brilliant when I came up with the solution of fixing the time frame and estimating the work that can be done in that time frame. Turns out I came up with sprints, 60 years after they were invented.

The fun solution for this would be to give 'hit dice' estimates for tasks. Assign a type of dice, and a number of them, to each estimated task.

How long will this take?

About 1d20 days.

Nobody will be happy with this, but it is the most realistic one. Cause tasks do have that variability to them.
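
A tiny sketch of what a dice-denominated backlog would actually tell you (the dice assignments are invented for the example): sample the whole backlog a few thousand times and you get a distribution of finish times instead of a single date.

    import random

    def roll(n, sides):
        # "2d6"-style estimate: sum of n rolls of a sides-sided die
        return sum(random.randint(1, sides) for _ in range(n))

    backlog = [(1, 20), (2, 6), (3, 4)]   # 1d20 + 2d6 + 3d4 days
    runs = sorted(sum(roll(n, s) for n, s in backlog) for _ in range(10000))
    print("median:", runs[5000], "days; 90% of runs finish within:", runs[9000], "days")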

The wisest thing said here is: "the reality is that if you can make a probabilistically accurate estimate, then it's likely that the task should have been automated by some other means already."

Is there an answer to this problem? Maybe abandon long-term estimates entirely, and have really short-term estimates with frequent updates.


Agile has this concept of a "horizon of predictability". Basically, you are semi-accurate within the two-week range. After that, the accuracy nosedives. At 6 months, you could be off by, well, 6 months.


Great article. This is very true for game development as well.

But serious question - for live products especially, you do need to have some sort of schedule where you are launching new features every X weeks. So it's important to know how long your features will take, so you can have a constant cadence of updates. Plus a lot of times you will have marketing initiatives or other things that need to be coordinated with your releases.

So my point is that you cannot just remove estimates. There is a need for knowing when the current sprint / feature will be completed, and being somewhat accurate about it.

I do really like the point about re-framing the conversation to start by asking the manager how long they want the engineering team to spend on the new feature. That will definitely change the dynamic and hopefully should encourage a conversation about what is realistic to do in the timeframe that the manager has in mind, and how the feature needs to change in order to achieve it.

But after that, the engineer still needs to go through and create estimates to make sure what they just agreed on is actually possible, and then those same estimates are necessary to plan out the development to make sure you are on track. So yeah, you can never really remove estimates.

Am I wrong?


> you do need to have some sort of schedule where you are launching new features every X weeks. So it's important to know how long your features will take, so you can have a constant cadence of updates.

The only way you can have new features every X weeks, is if those features take no more than X weeks to develop. You can estimate a new feature to take 2X to complete, but that would still mean it can't be released "in time".

Of course, you might still want to have a regular cadence of feature announcements, or at least be able to plan them in advance. But I feel like the best way to do that is to decouple finishing a feature from releasing it.

Estimates are primarily useful in deciding what tasks to pick up first. Luckily, that usually needn't be that exact - you don't need to know to the hour how long some development is going to take, just how much faster one thing will roughly be compared to the other. A manager can then decide whether it's worth it to risk picking up a larger task that might provide substantially more benefit than the smaller task.


> The only way you can have new features every X weeks, is if those features take no more than X weeks to develop.

You can still have releases every 2 weeks, where each feature takes 1-2 months, if you have multiple small teams each working on a different feature at the same time. That is how we do it.

But of course, it's typical that each feature takes an extra 1-2 weeks of development time and many times other devs are pulled off their own projects to help out, so then those other projects are even more delayed.


That's completely true.


Estimating knowledge work is always more unwieldy, and in this context you are often working towards outcomes which ultimately have no precedent (although they may be made up of known components).

It is also more difficult to initially assess the skills fit of candidates for knowledge based work, especially those that require creative problem solving, and unlike other engineering disciplines past outputs are opaque and hard to rely on as simple markers of past performance.

For a project, in order to produce a good estimate you need to understand scope, then align it with precedent, adjust for your resources and productivity profile and then view all of that through a risk lens to set probable outcome ranges.

For a programme, in order to produce a good estimate you need to understand and manage the risks, constraints and dependencies across all your projects and ensure that the projected benefits (both hard and soft) are still net positive, meaning the investment makes sense.

From my observations at least it doesn't look like the idea of development as "investment" in a product or service is very common. I'm assuming because time to market is often the ultimate driver rather than cost, in which case, congratulations, you should increase your costs on more numerous and productive resources whilst aligning your strategy and risks to iterate on smaller scopes faster so that dead ends can be quickly parked.

The problem isn't so much the estimation process as it is more generally poor portfolio/programme governance and management practices, and more specifically a lack of risk management and understanding of contingency at those levels. I find IT, and Software Development more particularly, to be some of the worst offenders, but that is because the risk profiles of such projects are vastly different to the risk profiles of other types of work. The productivity of your resources is difficult to discern, and there is a lack of precedent for similar-enough projects (and of knowledge of what their variables were), all of which means you really can't produce a reliable estimate with incomplete information.


From my experience (3 technical co-foundings, 1 additional top-tier startup, other companies) it’s rare for management to incorporate our estimates. I’ve been coached by a VP Eng on how to blow off actual estimating, in that top-tier startup. Sometimes, it may be quasi-rational in that the apparent business constraints just don’t care about the estimates, in which case, yeah, just cut to the deadline and budget and we’ll see if we can do anything that isn’t embarrassing. Management usually dictates rather than engages, that’s how you get to be CEO in the first place.

“We” are usually not at all clueless. “They” tend to be.


The original title was closer to what you are alluding to here. Idk why, but Hacker News changed it to say "We".


On HN, moderators edit titles that are misleading or linkbait, as the site guidelines ask: "Please use the original title, unless it is misleading or linkbait" (https://news.ycombinator.com/newsguidelines.html).

It's a linkbait trope to use "you" in a title, because it grabs attention whether the topic has anything to do with the reader or not. That's why "you" is headline writers' favorite pronoun. Combining it with a pejorative ("you have no idea") makes it even more sensational. That's definitely the sort of title we edit.

When we edit a title, we look for a representative phrase in the article itself that expresses its point in a more neutral way. That's what a moderator did in this case. The language comes from the article's own summary of itself: "After all of these years, I finally came to one simple conclusion. With all due respect: we are completely clueless about how long things should take." Reading the article text to find how it states its own conclusion, removing any residual linkbait (such as the superlative "completely"), and making that the title instead is the best way we've found to correct titles that break the site guidelines.


It's part of the line before the picture of "The Parsimonious Yachtman".


At certain points, this article touched on the commoditization of engineers and engineering skills. Beyond extremely simple things, it's never going to happen. Especially as you build a company and end up with a few monoliths through various acquisitions, all while concurrently running hundreds of microservices. Things get too intricate and too hairy between systems. There's no amount of handwaving that will convince me that machines will replace Software Engineers anytime soon. Solving valuable, enterprise-scale problems will never be as simple as a Wix drag-n-drop solution.


I'm not so sure. I think this is something that's happening right under our noses -- it's just easy to miss it if you're not looking at the right thing.

Software engineers aren't being commoditized by being replaced by machines that write software. They're being commoditized by their own frameworks, libraries and tools.

Take the game industry as an example. Twenty years ago, your game company needed a big team of software engineers employed to write a game engine with advanced graphics capabilities (let's assume you want advanced graphics). Today, a single developer can just download Unity or Unreal Engine and have at least the technology available to them immediately (art is different but in many ways similar; automation and process improvements are coming for those jobs too, I'm sure).

So you don't need the same number of engineers for the same result. Sure, you have a big team of engineers employed at Unity Technologies or Epic Games, but that's now a shared resource. That employment is no longer duplicated at the companies that decide to use those engines.

Another example is the push for 'DevOps' and 'Cloud'. Think of all those system administrator jobs and IT departments being made smaller because now you can just spin up a server on AWS or have your CI infrastructure managed by BitBucket.


> Another example is the push for 'DevOps' and 'Cloud'

It's been my experience that delivering business value is taking more time because of this, not less. It's hubris to believe that a single person can be competent enough in all these domains to replace multiple people who focus on specific domains.

By distilling DBAs, Configuration Management Engineers, System Administrators and Software Developers into single people, business is getting shittier products, less frequently, which incur more operational overhead.


Death by a thousand cuts sounds more believable. I concur with your position: instead of it being black-and-white, it could simply lead to less demand for engineers as the "building blocks" -- really common open source technologies with near-ubiquitous presence -- are essentially commoditized.

On the other hand, I could also see a reality where, since many low-level problems are solved for you, management expects more out of you, so the number of engineers stays about the same, but you get a higher level of productivity.


Agree with your second paragraph and that's actually been my real-world experience. I also think there's another effect where commoditizing a technology leads to the creation of jobs that specialize in that technology. With easy-to-obtain game engines, suddenly more companies are interested in using them, which itself leads to an increase in demand for engineers, just with different skillsets.

I wouldn't put money on this balance lasting forever though. To me, that's too close to dismissively saying "it's different this time", with regards to our profession.


So who is it building the next Unreal Engine?


Nice article, but what's with the web page design? On desktop, the left half of the screen is just the blog title, while the article is squeezed in the right half. Safari on Mojave if it matters.


And it breaks keyboard scrolling (at least on Firefox) which pisses me off more than anything else about it.


LOL I came here to see if someone else had noticed it's the worst mobile-first layout of all time. At least it's mobile-first I guess. :)


I'm reading The Moon is a Harsh Mistress right now (fascinating, bizarre, and highly enjoyable thus far), and there's a moment where the main characters discuss a timeline described as "on the order of 50 years."

This is still a fairly common phrasing, and in general, people around me generally take it mean "roughly" or "plus or minus a few," which is how one character interprets it. Another character then explains that its actual meaning is, paraphrased, "probably not less than 5 years or more than 500 years."

While I had a vague awareness that "on the order of" was not mathematically equivalent to "roughly", I don't remember learning the concept in such a memorable way. Similarly, an order of magnitude is commonly understood as "a lot more than", when it also has a precise mathematical definition.

I think the reason we are terrible at estimating the time something will take is that humans struggle to think in terms of timespans that are actually representative of the variance in an estimate, not to mention the fact that as additional variables are introduced, that variance will almost certainly increase (even if the midpoint decreases). I'm not sure of the causal direction here, but our misunderstanding of terms meant to succinctly express "somewhere between" sure doesn't help things.

(side note: there's also a discussion about designing a revolutionary organization that really clarified my rudimentary understanding of circuits. The book is truly stunning at times.)


Wow, lots of great opinions here I'll look forward to reading and learning from. My very brief take is:

Estimates should be for "making the best decision now" and not judging performance. Those are orthogonal concerns.

It's very fair for the business to need to weigh "well if X takes 10 days and Y takes 20 days, and X makes more money, of course we'll do X". That is what estimates are for.

However, if X ends up taking 15 days, the business (CEO, product, even CTO depending on their closeness to the solution) shouldn't/can't decide if that is a performance issue--only the engineer/the engineer's manager/etc. can make that call.

And maybe it is, maybe it isn't.

Granted, pulling this off is really hard; tangentially, I'm now working in the construction industry, which has the very same problem: this home should take 9 months. It took 10! Whose fault is that? Well, that's the wrong question. The right question is how we could have known about that delay sooner/better, and mitigated it if possible this time and, more importantly, next time.

If you're interested in working on a "humane" project management system (I just made that term up and it's super early, so disclaimer/etc), reach out </shill>.


I've always had strangely accurate time estimates. The basic idea is to divide up the work, and make a fair estimate each part, with special attention to unknowns (which get much more time).

Then double everything.

The individual task estimates are often out, but the overall estimate is close... as if, with a population of tasks, there's regression to an accurately estimated mean.

(An alternative explanation is that I over-estimate, and Parkinsonianly, work expands to fill time.)


Or you've baked in the appropriate level of risk for the work at hand. You might want to look into Reference Class Forecasting. Essentially an "outside view" of past performance which is added to provide an uplift adjustment to some measure of your estimate (time, cost, benefit).


Some of my colleagues would 3x their estimates and come in ahead of the schedule, looking like heroes.


I think time estimation can be learned to some degree. As an anecdote a couple of years ago I had some free time and my sister was renovating her home. I volunteered to tear down all the wallpapers (basically an entire apartment). I had no idea how long it would take so I took a kitchen timer, set it to 25 minutes and started chucking away. After a couple of these 25 minute runs, I could predict fairly well how long the task would take for a certain room. I also found it interesting that it was very easy to adapt the estimate to changing conditions. For example in the kitchen the wallpaper was soaked in fat and there were 4 layers glued over one another. After working on this for about 5 minutes to see how much harder it is (answer: a lot) I was able to estimate fairly accurately how long it would take.

Granted this is for manual labor where you know exactly what will happen. Estimating more uncertain environments is a lot harder. Nevertheless, I have grown used to using this time-chunking approach for all sorts of new tasks. I think shorter timespans (10 minutes) work better for estimation but for actual work, 25 minute "sprints" are good.


What you're describing is known in Manufacturing Engineering as a Time Study: https://en.wikipedia.org/wiki/Time_and_motion_study

Basically someone stands around with a stopwatch and monitors a process a few times and then gets a feel for the average time the process takes.

You can do the same thing in software with the caveat that you must be doing pretty much the same thing.

e.g., if the last 5 times you had to build the basic framework for a CRUD program in Ruby it took 10 hours, then it's likely that the 6th time it will also take roughly 10 hours. However, doing the same thing in C means that the estimate goes flying out the window.


I wonder if this works a bit better where things stay the same. E.g. "I need to perform a minor tweak to a feature in repository X in language Y and Z and release process Q." It'd take considerable time to build a dataset of timings, and since the software itself evolves over time (developer tools, build process, language, etc.) any data recorded is almost immediately irrelevant. Fun!


Either they have to stay the same or you need good data on the time impact of changes.

None of this is rocket surgery. It's "just" that most software teams won't do it for one reason or another.


I don't think estimates in software development are hard (unless you're asked to find an algorithm; that's impossible to estimate). But manual labor and knowledge work (not counting repetitive, easily automated stuff) are pretty much completely different, and it's dangerous to make analogies between the two.


There is something to be said about pushing engineers to tighten their deadlines. It's not necessarily out of a desire to shift blame. Instead, it's often to encourage them to work hard for the same amount, and to think up novel solutions to problems where others might not have. In other words, it's an attempt at maximizing productivity. Start ups and some teams are running on legitimately small budgets and squeezing every moment of the process will net a large outcome if done consistently.

Having said this, the idea that you can ask for something in 2 weeks when your eng estimated 4 weeks is ridiculous. I would automatically add 50% to my eng's estimates because their estimates are probably too ambitious to begin with (I fall prey to the same problem when estimating my own work).

If you consistently get undercut in your estimates, get out! I've seen this behavior most often in marketing agencies and game companies. I would not work for these types of people regardless of the money they are paying.


This is talked about in the book "Peopleware", which states the opposite. Imposing tight deadlines often stresses people out, causing them to miss creative solutions that would save time. Instead they choose the obvious, longer, safe route.

That's what the data in the book seemed to point to. Programmers with no deadlines were more productive than programmers in the control group. However, there might be data showing otherwise; I would be open to it.


I have read that book, and having worked in some companies, I can agree that tight, unrealistic deadlines are detrimental to outcomes. However, working in Silicon Valley, I can say that there may be a need to push people to work more efficiently. It's a fine line.


In my experience the deadlines are not the big problem; the communication about them is, though. I've worked with PMs that try to attack the issue by shooting into micromanagement mode (symptom: the project management system gets filled up with literally 1000s of tiny points with random deadlines, in order to try to manage the fact that they don't get how software dev works); this drives most devs mental, to the point they burn out. There are only so many exchanges like 'why is task xyz3835433 not done? It was set to be done yesterday?' 'Yes, because it was set by you and task xyz3835433 cannot be done in that time' 'Why did you not tell me?' 'Because I do not have time to respond to 9000 points every day; they are not relevant to my work' etc. that anyone can take. Usually these PMs come from another discipline, like management of marketing or PR campaigns, where breaking deadlines is not an option. But complexity there is far easier to manage and oversee, and they do not understand (even after many metaphors) why our estimates are not spot on. They will communicate with the client (internal/external) in massive MS Project files and accompanying docs that no one will ever read, and berate the team for being so much off in the estimates.

Then there is the other type of PM who will continuously manage back the client expectation. To the layman, this person seems to tell the client (internal or external) bad news on a daily basis; it'll take longer, cost more and you're getting less.

It's the latter PM + team that actually will get the flowers and the cake on launch day and have the happy devs that did not have to sleep under their desks, while the former team will be near burnout and the client unhappy, even though they probably technically did deliver more (but also too late). Usually that former team won't get more money for it either...

I have seen both in startups and fortune x companies; I have seen both as contractors and as internal teams. For the anecdotal part here: the companies with a PM like the first type in an internal product setting all failed, at least the ones that I have seen/worked with. For contracting work, it can work, but it's a stressful and panicky way of working which often results in one-off contracts.

You inevitably come to something like sprints and good, timely bad news talks. People who have the micromanagement type need for control are just not going to survive those as their struggle breaks down even inside a 2 week sprint period.


It strikes me that the issue is, to some degree, rooted in the transfer of the vision from one person to another. I have struggled with this as well, on the side of the 'visionary' rather than the developers.

A quick search for 'how to communicate what you want with software developers' turned up what I expect is at least modestly helpful: https://www.entrepreneur.com/article/224816 (2012) but I'd venture that the topic really is worthy of much more in depth treatment.

A key element missing, at least from that post, is: how to confirm that the developer does understand what the vision is? If that can be confirmed, they would then be in a position to modulate the vision vs. reality, and even contribute some of their own creativity. I suspect that this modulation frequently starts from a misunderstanding. The divergence can be dramatic.


Posting one more time: David Packard's (of HP fame) address on "How to be a Manager in a Technical Company" -

https://gizmodo.com/the-hp-way-how-bill-hewlett-and-i-built-...

One thing I noticed in discussions like this one is the number of excuses made for management's lack of technical proficiency. This has to stop. If you are in a technical domain and leading/managing technical people, you have to know the domain decently well. There is no alternative; otherwise you are reducing the effectiveness of your engineering team, and the company as a whole, to your own level of incompetence. The resulting effects may not be visible immediately, but sooner or later you will drag the company down to oblivion (e.g., see what happened to that same HP).


Related article (albeit a more mathematically focused analysis of the same problem): https://erikbern.com/2019/04/15/why-software-projects-take-l...


Why does everyone accept that time estimates are accurate outside of software? Has no one hired a contractor to build out a bathroom or paint a few walls? Commissioned an artist for a sculpture? In my experience, anything that can run over schedule will... so why do developers get so much flak for it?


I work at a company with dozens of hardware and analysis teams, and software is one small group making up not even 5% of the company, but we are never the bottleneck. Our estimates are usually the most accurate of anyone. If we have a problem in a design, we don't have an entire supply chain to modify.


How good are coracles, though? Not much good compared to a yacht, but then you can't carry a yacht on your back and walk it upstream, can you?

https://en.wikipedia.org/wiki/Coracle


IMO if you're familiar with the language and libraries, it's pretty easy to predict how long writing some feature will take (provided there aren't bugs in anything you're working with, you don't hit any weird emotional hangups, etc.; those are fairly rare).

What's hard to predict (at least for me) is writing non-trivial config files, using some new library or language, or finding a new algorithm. For some of those you might be able to estimate how long it will take just to produce an estimate, but other times you have to say "I don't know" and time-box it.


This is amazing. I work in a law office and have nothing to do with tech, but change the examples and he has just described my typical day, replayed over and over in matters both large and small.


I have been having the exact same thoughts, except at a public accounting firm. A lot of the comments here describe what is typical in public accounting as well. It seems the moral of the story is that managerialism is hard to make work. Perhaps we should try something else?


I'm intrigued, could you elaborate more?


Reminds me of this article [1] from Michael Wolfe. I've sent it to several PMs over the years and they've all begrudgingly agreed with me once I applied it to the current project. The beer probably helped too :D

[1]: https://www.quora.com/Engineering-Management/Why-are-softwar...


I think the commonly claimed rules of thumb, like bumping the estimate up to the next unit of time or multiplying by some factor x, are kind of childish and wasteful. Estimating time to complete is difficult because individual items have high variance. If you have 3 items that are each likely to take 2 hours +/- 2 hours, estimate 10 hours for the lot. The relative uncertainty of the total shrinks by a factor of n^(-1/2).

Explain why you do this. Flag when variance is screwing you anyway.
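
A minimal sketch of the arithmetic behind that, assuming the three items are independent and the "+/- 2 hours" is one standard deviation (toy numbers, not anyone's real data):

    import math

    # Toy numbers: 3 independent tasks, each with mean 2 hours and
    # standard deviation 2 hours (the "+/- 2 hours" above).
    n, mean, sd = 3, 2.0, 2.0

    total_mean = n * mean          # 6 hours expected in total
    total_sd = sd * math.sqrt(n)   # ~3.46 hours; independent SDs add in quadrature

    # Relative uncertainty shrinks by a factor of n**-0.5:
    relative_single = sd / mean                 # 1.00 for one task on its own
    relative_total = total_sd / total_mean      # ~0.58 for the batch of three

    quote = total_mean + total_sd  # ~9.5, round up to 10 hours for the lot
    print(f"quote ~{quote:.1f}h, relative uncertainty {relative_single:.2f} -> {relative_total:.2f}")

The point being that one buffer on the batch is usually tighter than padding each item by the same factor.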


Everyone on my team knows I can guess when they are going to finish, usually to the day. They don't know how, but it's pretty easy: measure biases.
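
A minimal sketch of what "measure biases" might look like in practice (the names and numbers are hypothetical; the idea is just a per-person multiplier derived from past estimate-vs-actual ratios):

    from statistics import median

    # Hypothetical history of (estimated_days, actual_days) per developer.
    history = {
        "alice": [(2, 3), (5, 8), (1, 2)],
        "bob":   [(3, 3), (4, 5), (2, 2)],
    }

    def bias_factor(records):
        """Median ratio of actual time to estimated time."""
        return median(actual / estimated for estimated, actual in records)

    def calibrated_estimate(dev, estimate_days):
        """Scale a fresh estimate by the developer's historical bias."""
        return estimate_days * bias_factor(history[dev])

    print(calibrated_estimate("alice", 4))  # alice runs ~1.6x over, so expect ~6.4 days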


The article completely misses the point. No amount of reasoning will change a manager's mind. They do not push people because they actually think it can be done faster; they push because software developers are pushovers, and because they want THEIR project delivered as soon as possible so they can boast in front of the other managers.


The author illustrates how easy it is to be clueless about how long things should take with his parenthetical about how trivial it is to make a Wix website. Yes, you can get the “used sailboat” class of website in Wix in an hour, but that’s probably not what you need or had in mind. The fact that the tool seems to do so much of the work for you will arguably just make the expectations even more unreasonable.


> that’s probably not what you need

99% of Wix users need a website that's just a big phone number and street address that gets indexed in Google. A "used sailboat" is exactly what they need.


That's only true if you assume that 99% of Wix users are Wix users because they correctly picked a simple tool for a simple need. But that's a tautology. The problem I describe arises when someone looks at Wix, gets the impression that it is a tool that magically makes it trivial to create a great website, and then assumes they can get the yacht they have in mind for the price of a used sailboat. Most of the work involved with creating a good website isn't technology, but rather strategy, writing, design, and other things where Wix won't help you or even gets in the way.


Or they are Wix users because that's all they want to pay for.

If I tell somebody that a fully custom site costs, I don't know, 20 times more than a Wix site, they might not want to pay for that.


That's because developers shouldn't be making estimates in isolation; they only know what they know about themselves. People who watch developers across many projects can better judge the 'likeness' of similar projects and similar teams, and then talk to the developers about how a given project differs in order to arrive at a better estimate.


Quote from the article: "Engineering is not getting simpler, its getting more and more complex, because we are solving harder and harder problems." I totally disagree with that, at least for my work environment. We are over-engineering stuff. It all has to be new and shiny. Lots of sites can still be built with a small framework, HTML5, and jQuery.


> I found myself absolutely astonished that tech founders could be so clueless as to assume that a simple rest api integration should take the same amount of time as a real time transactional distributed ward’s clustering implementation for peta bytes of data, or a highly available complex distributed metastore.

I would love to know the context behind this. Beautiful website by the way.


It's honestly so nice when you have a project manager who understands that being Agile(tm) means the schedules and deliverables react to shifting realities, not just the engineering team. I guess too many PMs don't understand that they shouldn't stick their necks out with what they don't have yet and paint themselves into a corner with promised deadlines.


Sorry for the earlier post; a lot of this stuff still hurts.

I'm in Poland. My kind of manager would be a person who accepts me, gives me a clear target and constraints, and leaves me alone; a person who does his job, who doesn't bullshit me and tells the truth about whether or not we're in the shit, and who realises that we are all in the same boat, often waist-deep in it, so there is no point in fighting each other.

Of the 16 IT companies I have worked in, there were only 2 such people. That's 12.5%. Shouting and batshit-crazy behavior are normal.

Someone here wrote that some things are cultural. A lot of this is cultural in Poland. It's a Catholic fundamentalist country, so anyone who solves problems is a problem. People here had nothing like the French Revolution or the Industrial Revolution; basically it's feudalism in modern dress.

The last guy I worked for was freestyling everything. He had no plan. He told me he uses his imagination, tells people what he has imagined, and then they have to do it. I found out about this when I was trying to resolve communication issues I had with him. He often imagined new things, forgot to tell me about them, and then blamed me during code reviews for not doing what he wanted. I ditched the guy because he didn't want to do anything about that problem. He sabotaged himself the whole time, and sometimes he had strange outbursts.

There are a lot of such people in Poland. A lot of office workplaces are like kindergartens: constant chaos, no plan, big ideas but nobody wants to wait. People who want things done ASAP are usually abusing others, not to mention that they are total morons, because abusing people takes time they don't have. Most companies are micromanaged.

The thing that keeps me going is knowing that, as someone said here, this stuff is cultural and that there are better places. Although after all those companies I'm kind of like a dog from the pound: I don't know how to trust people. The word "nation" is, for me, the smallest possible joke. I'm afraid of Polish-speaking people.


Doesn't address how highly technical people dealing with new systems are also wildly inaccurate in their estimates.

It's like an expert in tech, business, and computer science trying to predict stock prices. I mean, that's the expert of experts failing.


This post is assuming that all startup founders are non-technical people...


Honestly, I've seen highly technical managers make this mistake worse than the non-technical ones. Non-technical people usually ask sincere questions before shoving an unrealistic deadline in your face. Technical managers may have already told someone in the C-suite "ohh yeah, that's easy" before talking to the guy who has to do the work.

I've had ex-programmer managers say "I just want you to add this graph to the app. I could do it in 20 lines of python." In reality, the request is more like "Add in this plot which is the result of a long running calculation. The calculation has to run in a background thread so the app is still responsive, even though the whole program has been architected with a single-threaded design. It needs some mechanism in the UI to indicate it is making progress. We'll also want a way to cancel the task. Half the datasets are using a different sign convention, so you'll need to automatically handle that. Actually, the data is polluted with garbage, you'll need to spend an unknown amount of time debugging a legacy system to figure out where the bad inputs are coming from in order to understand what can be done to filter that out...so we can have this plot by end of day, right?"

Apologies for ranting about my "internal software" days.


Many are, you just don't hear about them, because, well... yeah.


"Bro, I'll just get some other nerd to do it. Instagram was built by 12 guys and WhatsApp was acquired for $19B with only 50 engineers. Can't ship if you whine; so shape up and stop wasting time!"

/s


Not sure if it was mentioned, but I'll throw out another issue not often talked about: external dependencies/vendors.

"integrate with external API" - had a project where there were several external data sources (financial service providers) to import on a regular basis. One had an actual API, commercial service, good docs - took a few weeks.

The others are...

1. "hey, we'll send you a nightly file, except it's not always nightly, because it depends on someone running the job and if they're not here, you won't get it".

2. "here's our SOAP WSDL" - "this doesn't work" - "oh... try this other one" - "that doesn't work" - "try this one, but just don't use some of the endpoints cause they don't work" - "OK, but this doesn't really work either". Now... intersperse those sentence fragments with a minimum of 4 business days via email (sometimes going for a couple weeks because 'vacation' time).

3. X was working for 7 months, then stopped. "Oh, we changed the file name and format of what we push to you yesterday." No warning, no documentation on what changed. Just... pissed-off end users now saying "my data is wrong!"

Figuring out how to take data from a file or SOAP or REST endpoint and process it - that's not terribly hard. Figuring out how to deal with more than half a dozen vendors who are not 'really' in the business of providing data, but do it half-assed anyway - there's no end to 'figuring it out', because it's a moving/changing target.

I'm not naming any negative names, but I'll mention that quovo.com was comparatively pleasant to work with - they're an actual commercial service. However, less than a year after we converted a system to use them, they were bought out, and some of the useful functionality seems to have been sunsetted already. I'm not on that project directly anymore, but I talk to colleagues still involved in it.

From the client's standpoint, it's all "integrate with external data providers". "You did one, the others can't be that hard". But each provider is a completely separate island of functionality, documentation, responsiveness and professionalism.

For the record, no, you shouldn't be providing me with client SSNs as their identifiers (quovo didn't, but I'm surprised at others that do; in at least one case that's the only way they provide client identifier data at all).


This is basically the whole point of correctly applied Scrum.


Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.


Most managers are drug-and-alcohol-infused, money-hungry 10-year-olds with no fucking idea what they are doing, who only know how to abuse people the way their fucking abusive authority figures called parents did.


The Content-Type header of the article is set to the empty string. It still renders, but I'm guessing the author hand-coded a website and screwed it up somehow. He probably should have used one of those Wix-style solutions he mentions in the article.
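
If you want to check that kind of thing yourself, a quick sketch using only the standard library (the URL is a placeholder, not the article's actual address):

    from urllib.request import urlopen

    # Placeholder URL; substitute the page you want to inspect.
    with urlopen("https://example.com/") as resp:
        content_type = resp.headers.get("Content-Type")
        # An empty or missing value means the server isn't declaring a type,
        # leaving the browser to sniff how to render the body.
        print(repr(content_type))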


I only hire people who can reliably estimate. How can you call yourself a pythonista if you can't reliably estimate? Incompetent estimators have no place in my organisation.


These estimators don't exist. Such a reliable estimator would have more incentive to use that ability to estimate tech stock prices and get rich than to work for you.

In fact, on hearing that line, no one will work for you. Frame it, make it your motto, and see what happens.


They sort of do though. As hyperpallium alluded to above, the trick is to vastly overestimate the time required, then let Parkinson's law take up the slack.


Doesn't that prove my point? They use overestimation to hide the unpredictable nature of the project itself.



