Dear Client, Here’s Why That Change Took So Long (simplethread.com)
464 points by jetheredge 40 days ago | 273 comments



What gets me is that software has been a mainstay of modern business for _at least_ 30 years. And this whole time, every single professional software developer has been telling every single non-software developer the exact same thing, over and over: this takes longer to do than you think. If, say, 80% of developers were knocking things out, problem free, in an hour or two and the other 20% were hanging back like a ’50s union boss saying, “yeah, that’s going to be an all-day job easy”, then maybe I could understand why they STILL think we’re lying. Even if 20% of the devs could get things done in the time they seem to think it takes and the other 80% were hemming and hawing I could still understand this perspective. But that’s not the ratio. 0% of devs can reliably complete tasks in the time that MBAs seem to think it should take, and 100% of devs take longer than they “wish” it would take, and THEY STILL AREN’T PAYING ATTENTION.


On the flip side, I've seen far too many teams move at the speed of molasses, for reasons unrelated to the intrinsic complexity of the problem. Bureaucracy, analysis paralysis, no automated testing, accidental complexity, tech debt, poor retention of experienced developers, poor compensation resulting in sub-par hires, insufficient training and mentoring for new hires etc etc. I wouldn't be so quick to assume that every single development team is operating at their most ideal.


> Bureaucracy, analysis paralysis, no automated testing, accidental complexity, tech debt, poor retention of experienced developers, poor compensation resulting in sub-par hires, insufficient training and mentoring for new hires etc etc.

Most of these look like MBA problems.

That said, maybe more experienced coders should be doing MBAs (which isn't just about attending classes but about networking, acquiring a wider, shallower knowledge base, etc.) so the dynamics of tech debt are clear to the people who are in charge of making debt decisions.


As someone who has spent time as a project and product manager, I will say a lot of it comes down to a point of view that 'tech will make it happen'.

Many times I've seen long time estimates for something, because C level was requesting "X feature". But when you sit down with C level and understand what they want to achieve with X feature, many times it is possible to sit down with tech and figure out a way to get what was wanted without the feature, generally by adapting some other request or feature.

The problem is that most managers are looking to suck up to C level guys and 'deliver'. And many C level guys won't stand the insubordination of someone telling them 'maybe that thing you guys thought of isn't the best way to achieve what you want', no matter how much you work on the techniques from How to Win Friends and Influence People. Some guys are just dictators, and some are suck-ups, and when the two meet, as they often do, it means tech will work 10x what is needed to get where the company needs to be, because the focus will be on what someone requested, not what we are looking to get at.

Also, many developers get stuck in the mindset of places like that, where they assume they have no voice and everything they say will be used against them. So you ask them as a manager how they would achieve X and they just say "tell me what you want in the spec sheet"... people get weirdly conditioned when in negative reinforcement environments.


> because C level was requesting "X feature". But when you sit down with C level and understand what they want to achieve with X feature, many times it is possible to sit down with tech and figure out a way to get what was wanted without the feature, generally by adapting some other request or feature.

In the best functioning development organizations I have experienced or seen, nobody should be asking for "X feature", they should be explaining what their actual needs are, what they need to get out of the software and why. And how important it is to them, or even what their "budget" for it is.

And a technical designer (who could be a developer who has some design skills perhaps by experience, or a designer who understands the tech) _designs a solution_.

This requires someone(s) on the "development" side who can do needs/requirements analysis leading to design (i.e., "UX"). And it requires the stakeholders to trust that the capacity is there, and that it will turn out all right if they don't try to micromanage the feature -- not just "all right", it'll be SO MUCH BETTER.

This works so well when the organization staffs itself to make it possible and the people in charge allow it to happen, that it seems kind of insane that so few organizations do so.


You just described product managers.

They solicit needs from customers, support conversations, etc. and translate those into future development. They combine those needs into major product directions and balance them with an internal compass for where the company wants the product to be (i.e., which jobs they want to solve for which users).

They are technical people with an eye for how something should be built, working closely with designers (visual, UX, etc.)

Excellent companies figure out how to deploy these types of thinkers -- with appropriate mandate -- throughout the organization.


Though you know, sometimes organizations fall into this issue in other areas.

I once worked with a company that had that attitude toward accounting/compliance - so much extra work, all because the primary shareholders were not in agreement, so they kicked the corporate-structure can down the road.

Same goes for CS, etc.

I guess the only solution is a holistic approach, but even then sometimes someone will have to bear the brunt of being the area where people figure 'we'll handle it'; at some point a CEO will decide that is what is needed, and it will be needed, even in a perfect organization.


I am pretty sure that my Product Manager has no technical experience at all. And I assume that it's not mandatory for them.

If they are supposed to be technical, then we hired wrongly.


Technical enough to understand, but probably not technical enough to build (but maybe was at some point). That's where they need to live. Their job is to align the devs and the roadmap with the business needs, prioritize what bugs to fix, and decide which enhancements are worthwhile. They also need to know when the team is bullshitting them and when a tech design is too complex or simple.

I can imagine a non-technical person could end up in this role and succeed, but someone with a technical background and some business acumen will have a much higher probability of success.


Well, because stating needs and not implementations disempowers managers and means they're left with no feeling of grip. It makes it hard to imagine what you'll actually get and whether it'll be useful, which is scary. Plus, what people need is often so vague that literally anyone could have come up with it and it's embarrassingly obvious. If a manager goes to his tech team and says, "I need to make our business more efficient. Think of something!" then this may be the best way to get results because often the programmers understand the business as well as or better than the managers and they understand the codebase, but who would respect a manager who said that? It'd be Dilbert cartoons all over.

This is one reason why tech firms win. Their senior management are made of [former] engineers who are much more aligned with the workers around what implementations make sense.


That's why it's an actual skill to translate needs to implementation. It's not one most managers have either, which is why it does not work when managers just hand you an implementation.

The skill includes working with the stakeholders to develop actionable needs (not just "make our business more efficient"; the obvious place to go from there is: okay, what do we know about where the greatest bottlenecks/inefficiencies are now, and if we don't know enough, what can we do to learn?).

Also you don't just take needs and come back with a finished feature, you iterate with feedback from the stakeholders. The first iteration might just be a textual description of the feature, to make sure it makes sense to the manager/stakeholder.

It's a skill, and it involves building trust.

But translating needs into features that will successfully meet those needs is not a skill most managers/stakeholders have either, which is exactly why it doesn't work to just take detailed micro-managed implementation directions from managers/stakeholders. (Or customers!)


My boss/friend just sat (passed) his PMP [0] exam and was talking to me about it as he learned. The main thing that I learned is that the amount of time they say you need to spend planning a thing is waaaaaaaaay longer than we’ve ever spent planning a thing. Orders of magnitude longer. And you plan it multiple times over, with rounds of stakeholder engagement, rounds of risk analysis, rounds of breaking it down and analysing each component and figuring out what it is and how long it will take and what can go wrong and the dependencies and so on and so forth ... and then going back and doing it again until you’re sure it’s correct.

Why do we, as an industry, expect to be able to run a project to schedule when we often give it the most cursory of glances in the planning stage? I work for a very large integrator and we don’t plan at all. We jump in and start while we’re ‘planning’. It’s absurd.

He and I have long said that if the people who built [hospitals|bridges|aeroplanes] behaved the way we did, the world would look very different [1]. Now, thanks to my recent PMP-by-proxy, I’m starting to understand.

[0]: Project Management Professional. One of the main accreditations for project managers, I think PRINCE2 is the other one.

[1]: I’ve seen the project schedule for a railway line extension. “You really plan a concrete truck coming three weeks from now down to the half-hour?”, I asked my friend. She does.


> the amount of time they say you need to spend planning a thing is waaaaaaaaay longer than we’ve ever spent planning a thing. Orders of magnitude longer. And you plan it multiple times over, with rounds of stakeholder engagement, rounds of risk analysis, rounds of breaking it down and analysing each component and figuring out what it is and how long it will take and what can go wrong and the dependencies and so on and so forth ... and then going back and doing it again until you’re sure it’s correct.

Yuck. I think the agile approach might be better here. It's better to expose your plan to reality ASAP and adapt along the way.

Except, that's not how agile really works. How agile works is "bosses" have an hour planning meeting, and then act as though they have a veeeeery robust plan (like you talk about) and get upset if things don't go according to their quick one-hour plan. So much for adapting along the way.


So I just said that in order to get a project planned properly you need to spend way more time planning it, and you literally said "yuck", let's use agile instead. Which is ... not planning it properly.

And anyone wonders why we're in this situation? We dug this hole, kids. (And to be clear, I'm guilty. I'm not trying to be better than anyone. I am not! I'm just attempting to explain it.)


Agile works well for redecorating a house (“let’s paint the walls first, and worry about the couch later”), less so for building one. That requires more foresight, larger-scale planning, and writing things down.

With agile, building houses still somewhat works because people will do that planning in their heads. So, knowledge will end up _only_ in people’s heads, not even in everyone’s heads, and will deteriorate there, even if those people do not leave the project.

So, by the time it’s time to do a large-scale update to the house (add a room, update ventilation to modern standards), nobody knows where the ventilation ducts run, why one of them is so much larger, that there’s asbestos in the ceiling, etc.

And of course, some people try to use agile not for building houses, but for building apartment blocks.


It's been many years since I read this, but I seem to remember the original impetus for agile was an internal software development team having a mix of longer-term goals plus lots of reactive work/support, possibly with the latter making up the majority.

It was a way to bring some sense of order to what was chaos: new requests coming in all the time, work being dropped partway through to deal with the new requests, little progress on any front.

I think agile works really well in this kind of context by bringing in a sense of discipline and keeping new work at bay until the beginning of the next cycle[1]. It can also help to keep teams delivery focussed in other contexts but, key point, does not obviate the need for additional planning outside the sprint/iteration framework.

The problem really occurs when people treat agile as sufficient on its own, or as the one project management tool to rule them all. Total nightmare, especially with multiple teams involved.

[1] How workable this is in practice is open to question: I can tell you it's not always a great approach in an environment where client projects typically last a few days and on time delivery has been known to depend on a critical bug fix for a legacy codebase. Especially problematic when requests come in either during the launch phase, because there's usually a fairly hard window, or during the reporting phase, because there's a delivery deadline looming.


Agile doesn’t excuse you from having to do the work of resourcing, budgeting, business alignment, goal setting, sprint planning, acquisition, and so on. For bite-sized projects with just a few developers, it’s less of an issue, but for a project covering PMO, design, QA, change management, and multiple technical workstreams you have to have — and need to communicate to the business — at least a reasonable idea of what you’re getting into.

You ain’t wrong about how most businesses implement “agile,” though. :)


Not that it's the best or only methodology to project planning in a given scenario, company, or industry, but I do think there's significant value in PMP training. I completed the PMP exam in 2012 after being encouraged by a coworker, and I see it as both useful and cheap enough to suggest it to most software/hardware developers.

Learning and applying "how to plan a project" has pretty universal career relevance. I suggest 30-40 hours of study time, and it's around $600 to buy the book, take the exam, etc.


Many times, they are indeed MBA problems. VPs micromanaging instead of delegating, not investing enough money into hiring, retaining and developing their talent, etc.

Other times, it is a problem with the existing developers. Choosing not to learn/follow industry best practices. Creating an over-engineered solution instead of a simple one. Skimping on automated testing and spending more time on manual testing instead. Making their colleagues wait an inordinate amount of time for a simple question, code-review or approval. There are many things developers can do that would slow down their team's productivity.

I don't think tribal finger-pointing is very productive. It really does vary case-by-case.


Ugh. MBAs? No thanks. Usually it's interactions with MBA-holding folks that give me heartburn. They try to mechanize everything. I agree that an MBA education is probably fairly useful for developers, as long as you can retain your perspective and balance the two worlds--they are very different after all. I'm _very_ business-minded (though I don't have an MBA) and I still run into extensive frustration when I get these kinds of queries from the business: why is this taking so long? As this article laid out well, the change or new feature is often conceptually simple but there can be _so_ much required to make conceptually simple things actually happen. Unless you have real development experience before becoming "management", I doubt it's possible for a developer to truly convey that complexity to you. Ultimately it ends up becoming a matter of trust, and frequently competent managers realize over time something along the lines of "well, if every developer I've ever had has taken a long time to deliver conceptually simple things, then perhaps there's a lot to do to deliver things I think are simple." Sadly, some managers remain convinced, in the face of all evidence to the contrary, that all developers are lazy, slow, and just need to be whipped more and harder.


My own anecdata: integrating a default proxy into our existing "choose proxy" workflow takes about 2 weeks, and I'm currently on my 3rd month of those 2 weeks :)

Reasons for the delay:

1) Proxy is a stand-alone application. It needs its own deployment and build configuration.

2) Main app was running on Node 6.x. High time to upgrade to 10.x as 6.x LTS is running out.

3) Upgrading to 10.x breaks some modules we depend on (Google Cloud Datastore), and that module has been deprecated in Node 8.x+... time to refactor that...

4) Main app is in a mono-repo with backend systems still running Node 6.x. We don't want to spend the time upgrading everything to Node 10, so we need to split the main app into its own repo, decoupling it from the existing mono-repo codebase.

5) Since we are upgrading the main app to Node 10, it's best to remove dependencies on the old DefinitelyTyped TypeScript typings (upgrade to npm @types definitions), and as such refactor to use the latest version of all modules...

6) Security: the new default proxy should only allow the main app. Code a solution for this using keypairs...

7) Docs, sample code for users, announcement mail.

Thankfully I'm currently on step 7. Usually I expect to be off on my estimates by a factor of 4x. This is excessive :)


But in a case like yours it sounds like steps 2 through 5 aren't strictly relevant to the change you're making, so it's no wonder that your estimate is blown out of the water. (Sorry if I'm wrong, you've obviously got more context on your own platform than I do!)

It does make me wonder though if there is a trend of 'hiding' necessary maintenance work in larger tasks - if so that seems to me to be indicative of a larger problem around not being given/making the time to do those jobs as their own tasks.


Yes, there is. On probably the best four projects I worked on, there was collusion to spread out the cost of necessary maintenance across all work items, because nobody will agree to doing maintenance stories (this is how Scrum hijacks developer ethics). Never ask permission to do something that must be done.

Or to look at it another way, if no stories are maintenance stories, then all stories are maintenance stories.


I used to be all about jumping on the next fancy thing as soon as it was (pre)released. Having been burnt and learned my lesson, now I delay until it's blindingly obvious it needs to be done.

But yeah, if it needs to be done, do it.


Version x.y.0 of most software is "public beta" quality. x.y.1 is "release candidate". x.y.2 might be production-ready.


You need to get an agreement with the team that you're going to amortize future upgrade treadmill activities across all work that is done. You can't mortgage future productivity to get short term gains when your project is supposed to run for 5, 10, or 20 years. You'll spend most of your time working slowly if you work that way.


Yours and a lot of other comments seem to hint that this work may not be strictly required? I'm a bit surprised.

At least in my case, as a public-facing SaaS I don't think running Node 6.x past its LTS window is a valid choice, so better to bite the bullet now. If I went about doing the minimum work, it would just add more code/complexity that needs to be refactored when upgrading Node.


People are hinting that it is not strictly required to get what was initially desired done.

And that seems 100% true.

You do not have to do everything at one time. You can easily do one thing, and then right after that, literally right after, do another.

You could have gotten the proxy up and going, and then upgraded node.

It's not about "doing the minimum work" it's about understanding the problem space, what can be split up, and how to attack the surface area efficiently.

You're also introducing other problems that can occur by doing multiple things at the same time.

Generally speaking I encourage my developers to be really good at separating problem spaces and attacking accordingly. It's really hard to take on multiple issues at one time and actually have high confidence in not breaking many systems along the way.


Okay, thanks for the tips, you and everyone else :)


No, it has to be done. But some people won't believe it until they've had a couple tough upgrades that show the cost of laxity.

On one project there were CERT advisories out, and we had deferred upgrades due to breaking changes in those upgrades. All of a sudden we had to deal with the upgrade and a security issue at the same time. It was ugly. After the second time, we started putting at least one upgrade story per month on the work queue. Sometimes we let the engineer pick what they wanted to upgrade (just upgrade something!). Other times we picked the engineer for the work that needed to be done.


You either need to upgrade from Node 6 or you don't.

That's nothing to do with adding a new feature, and should be planned accordingly. It's unfortunately very common for developers to hide maintenance tasks in other work, but clearly not ideal.


I agree. However, to be devil's advocate - quite a lot of developer teams aren't allowed to migrate to new systems until way too late, so it's not surprising they do it in unrelated feature requests/bug fixes.


While all platforms have some measure of this phenomenon, the update treadmill problem does seem to be especially acute in the javascript ecosystem.

I update Java dependencies maybe once a year; it's usually painless and incompatibilities are highlighted by the type system. If you let your JS dependencies get stale by a few months, you're looking at hours or days of work. And that doesn't count the massive shifts in build systems and frameworks that seem to happen annually.


I've spent the last 5 years on Node, and honestly most modules are fine. The only ones I've had serious problems with are those created by Google. They keep taking huge, complex dependencies on things like gRPC that break between Node versions, and keep churning their other library signatures (or deprecating entire modules).

Maybe in the last year they've settled down, maybe not. I learned my lesson and don't take dependencies on them anymore. Much easier.


I just spend maybe half a day a month updating everything to the latest and testing. There are the occasional gotchas, but it's not so bad... When you leave a codebase to rot for a few years, it doesn't surprise me that it's more painful.


My own 'magnification' index is as such:

Take whatever time I think it should take. Then double that number and go up one time unit.

E.g.: I think it should take me 3 hours this afternoon to do this -> 6 days.

I think it should take me 2 days to do this -> 4 weeks.

I think it should take me 1 week to do this -> 2 months.

I think this should take us one month to accomplish -> 2 years.
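
For what it's worth, the rule mechanizes nicely. A minimal TypeScript sketch of it (the unit ladder and the function name are mine, not the parent's):

    const unitLadder = ["hours", "days", "weeks", "months", "years"] as const;
    type Unit = (typeof unitLadder)[number];

    // Double the amount and move up one time unit (capped at the largest unit).
    function magnify(amount: number, unit: Unit): { amount: number; unit: Unit } {
      const i = unitLadder.indexOf(unit);
      const bumped = unitLadder[Math.min(i + 1, unitLadder.length - 1)];
      return { amount: amount * 2, unit: bumped };
    }

    console.log(magnify(3, "hours"));  // { amount: 6, unit: "days" }
    console.log(magnify(2, "days"));   // { amount: 4, unit: "weeks" }
    console.log(magnify(1, "weeks"));  // { amount: 2, unit: "months" }
    console.log(magnify(1, "months")); // { amount: 2, unit: "years" }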


Such an approach assumes that literally all tasks are minefields full of unpredictability. But are they really? In my experience such reasoning and 'magnification' applies only to a minority of cases, while most of the time you hopefully deal with a well-enough understood domain and codebase to accurately estimate the effort based on the initial judgement.


For me, it's more of a guesstimating rule. Sure, some tasks take less time than I'd expect them to. Sure, some take a LOT longer than even my guesstimate. But this helps me 'plan for drought and pray for rain', so to speak.


To clarify why this strategy is not as bad or egregious as it seems, one need only ask: what are the consequences of overestimating work (that then gets delivered early)? And what are the consequences of underestimating work?

Would you rather deliver a product with finished surfaces or cut corners? Proper design and good overall test coverage, or zero budget for maintenance?

It also helps to understand that management will ultimately decide that we need to do the project in half the time, and with these extra features you didn't ever plan on building.

My budgeting strategy is to always assume that for whatever scope you have planned, 50% of the complexity that you will need to deliver in order to reach the finish line, remains unknown at any given stage of planning. Compensate upward if there still remain known unknowns.
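
Worked out: if a fraction u of the final complexity is unknown at planning time, a known scope of S implies a real scope of about S / (1 - u). At the assumed u = 0.5, that's simply 2S - double whatever you can currently see - before compensating upward for any known unknowns.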


The consequence of overestimating is you don’t get the project. Occasionally that’s the consequence of proper estimation too, as people tend to underestimate when selling the project to get it through. Once I estimated a project at 3 months for a team of 5. They told me it should be 3 months for a team of 2. “We are doing sales, mate,” they told me. “You are; I am doing delivery,” I responded, and then did neither sales nor delivery...


Estimates for an internal client are a different ballgame than estimates for sales, it would seem.

My approach to the same project would be different if I know that I get to walk away when the project is over, versus if I know I'll be the one to maintain the project when this phase is over and it goes into production. I work for internal clients lately, and I can't honestly say it's my way or the highway, but if you ask for an estimate, I'm going to give you an estimate.

If you tell me the deadline then we're having a different conversation entirely. (And that's OK! Deadlines are better than estimates in my book, especially if the consequence of missing the deadline is some unfinished garbage goes to production, or doesn't ship at all as we had to move on to the next thing. If you asked me for an estimate and told me "that's too much, do it in less" then you didn't really want an estimate, did you? Just tell me the deadline and I'll deal with it, in that case, let's not play games.)

Here's another approach: tell me how much time you want to spend on this project, then I'll put a pen to paper and tell you what part of the scope we can deliver given the resources you've allocated. Don't like that very much? OK, you write the plan and I'll say "yes sir." But that's not why you pay me, is it?

Estimates are a function of scope, time, and cost. If you tell me to do it in less time, I'll do it in less time, but it's going to come at the expense of one of either feature scope or quality. This is not a negotiating tactic, it's just a statement of fact. If you tell me I have to shave off a bunch of time without sacrificing anything, then we're not having an honest conversation and we're not going to have a good time. I'm sorry you had a bad experience, mate.


For me it works pretty well if I imagine someone else (another team member who is less experienced) doing the job, and myself reviewing the code. It turns out my estimates about other people's work are much more accurate... :)


This is brilliant.


The real trick is being able to tell the difference between a normally slow process and a pathologically slow process, on non-trivial scales. I’m 20+ years in and I’m just getting a handle on it in the last 5 years or so.

Meanwhile, I see non-software people putting their complete trust in people with less than half my experience, and getting burned.


Limited Work in Progress tries to address this. The longer a task takes the more scrutiny it gets from hopefully sympathetic individuals. If you are chasing your tail, after a week or two someone who understands what's going on will unwind you and set you up to complete the task.


That’s irrelevant, as each estimation is based on that team and its situation. If I am going to spend 20 hours a week stuck in meetings, that’s going to impact my estimates.


It shouldn't do. Estimations should be based on FTEs (full-time equivalents) for the broken-down tasks. An estimation shouldn't be guesswork about a delivery date based on your current team's situation, but rather a calculated value of the number of days of effort (eg it would take 1 person 82 days if they worked on it for 7.5 hours a day), with a little margin added for any unforeseen technical hurdles that will inevitably crop up.

If estimations are based on FTE then you can factor meetings, holidays, and even other projects that might suck up resources ("resources" in this case being your engineers' work time) into your delivery date. You can detect very early on if your deadlines are slipping (assuming your team are logging their hours against tickets or a timesheet) and you can also make informed judgements ahead of time about whether you need more resources (where that's an option / the work can be scaled across more engineers).

Obviously this is harder if you're tackling a significantly larger "green field" project - you might have to start making some educated guesses in those instances. But most of the time you should have some idea about the work involved.
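
To make the arithmetic concrete, here's a rough TypeScript sketch of that calculation. The 7.5-hour day is from above; the names, the meeting-hours modifier, and the sample numbers are invented for illustration:

    interface TeamMember {
      fteFraction: number;         // 1.0 = full time on this project, 0.5 = half time
      meetingHoursPerWeek: number; // time lost to meetings, made a visible modifier
    }

    const HOURS_PER_DAY = 7.5;
    const DAYS_PER_WEEK = 5;

    // Effort is estimated up front in person-days of focused work,
    // independent of who does it or how busy the current team happens to be.
    function weeksToDeliver(effortPersonDays: number, team: TeamMember[]): number {
      const personDaysPerWeek = team.reduce((sum, m) => {
        const focusedHours =
          m.fteFraction * HOURS_PER_DAY * DAYS_PER_WEEK - m.meetingHoursPerWeek;
        return sum + focusedHours / HOURS_PER_DAY;
      }, 0);
      return effortPersonDays / personDaysPerWeek;
    }

    // 82 person-days of effort; two full-timers each losing 10h/week to meetings.
    const team: TeamMember[] = [
      { fteFraction: 1.0, meetingHoursPerWeek: 10 },
      { fteFraction: 1.0, meetingHoursPerWeek: 10 },
    ];
    console.log(weeksToDeliver(82, team).toFixed(1)); // "11.2" weeks, vs 8.2 with no meetings

Holidays, part-timers, and people split across projects drop into fteFraction the same way.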


Accurate time estimates need to be associated with specific individuals. Someone who is familiar with the relevant code is simply faster than someone who is not. Further, interruptions don’t just cost time, they also reduce productivity around them.

Now, you can try and do effort estimates using hypothetical people. But, at that point you have already given up the possibility of an accurate answer.


> Accurate time estimates need to be associated with specific individuals. Someone who is familiar with the relevant code is simply faster than someone who is not.

This is one of the many reasons why I disagree with giving projects to specific individuals. Not only do you end up with silos of knowledge (which is a risk if that engineer should leave / get fired / die in a bus accident) but you also make it harder for yourself to make estimates, for the reasons you've described. Sure, there will always be variance from person to person, but you stand a greater chance of that averaging out if you discourage engineers from "owning" code bases.

> Further, interruptions don’t just cost time, they also reduce productivity around them.

You missed my point regarding meetings. I'm saying if you estimate a project based on FTEs (different methodologies and frameworks will have different terms but they usually amount to the same concept) then time spent in meetings becomes a modifier you can easily adjust for, rather than a hidden time sink that you can't account for.

> Now, you can try and do effort estimates using hypothetical people. But, at that point you have already given up the possibility of an accurate answer.

You don't need to make the estimate on hypothetical people. You just need a system of tracking and reporting the hours people spend in a working day. For example in JIRA you can log time against a ticket and you can create generic tickets for meetings. Therefore after a week / sprint / arbitrary point in time you can view where your engineers have spent their time and if it's below the allocated time for that project you can then either:

- enforce a new policy (eg are all the engineers going to every meeting when just one or two representatives would suffice? Are some meetings just duplicates or redundant? etc),

- inform project managers that there will be a deadline slippage due to resource constraints

- hire more resource to compensate

- or all of the above

I agree estimates are never going to be 100% accurate but that doesn't mean there aren't better ways to estimate than just applying guesswork based on your current team's circumstances.


> I agree estimates are never going to be 100% accurate but that doesn't mean there aren't better ways to estimate than just applying guesswork based on your current team's circumstances.

The entire dev team walks out - and yes, this actually happens surprisingly often and could happen to your team tomorrow. Now, what happens to all your estimates?

Well clearly anything short term is now worthless. You can build a new team and use older effort estimates as a guide, but with different skill sets and a massive learning curve they in no way translate into FTE hours.

Effort estimates can survive those kind of transitions. But, time outside of a functional team is not meaningful.


> The entire dev team walks out, and yes this actually happens surprisingly often and could happen to your team tomorrow. Now, what happens to all your estimates?

The estimate is still the same because, at risk of repeating myself, you estimate on FTE and not wild guesses at a project end date. Thus you then go back to your project managers following the steps I outlined in my previous post.

Granted, in the most extreme of situations you would need to factor in some upskilling time - maybe even get the team to re-groom the tickets (if you're following the agile methodology) - but your method of making wild guesses wouldn't put you in any better a position should your edge-case example happen. So you're not exactly winning any arguments by raising this point.

When running projects, estimations are based on the work required, not the team itself. Teams can fluctuate (as you keep pointing out) whereas the work required should be closer to a constant. Thus any feature creep that happens - as often does happen in software projects - gets captured and costed beforehand, so budget holders aren't surprised by hidden escalations in costs. The delivery date is then derived by totalling up the required work.

If you have a hard end date for delivery then it's up to management (eg yourself as a hiring manager, the company's board and the project's PM) to decide whether you reduce the complexity of the project, do a staged release, hire more staff or even argue whether the requested delivery date can be extended.

The above is how software projects should be run when they're managed properly. Other places might do things a little more adhoc but in my professional experience that almost always ends up being a worse way of managing a project, budgets and teams.


Hiring more staff always slows progress in the short term. It has a habit of making late projects later. https://en.m.wikipedia.org/wiki/Brooks%27s_law

I understand your desire for an FTE to be a meaningful measure, but empirically that's not true. As I said several times, effort estimates can be used, but after deciding to add staff, progress slows down for a while. So, using effort estimates, you would reduce progress over the next 3 months before expecting a longer-term increase in the rate of accomplishment.


> Hiring more staff always slows progress in the short term. It has a habit of making late projects later.

Brooks law applies to late projects, not new ones. If you have good estimates you can determine if you need resources early on.

Also, a lot does depend on the size of the project. If it's something that will take 12 months or longer then extra hires definitely wouldn't have that effect. Depending on the hire and the work required, you can shorten that time frame significantly too.

However I do agree that hiring isn't a silver bullet. This is why I suggested "hiring" as one of many outcomes that can be considered rather than the preferred outcome in all situations.

> I understand your desire for an FTE to be a meaningful measure, but empirically that's not true. As I said several times effort estimates can be used, but after deciding to add staff progress slows down for a while. So, using effort estimates you would reduce progress over the next 3 months before expecting a longer term increase in rate of accomplishment.

This is where another piece in the jigsaw comes into play - an employee might not be 1 FTE. They might be part time, might have leave booked, might have commitments on other teams (effectively part time from your perspective) or might be a new hire so need upskilling time. Those are just a few common examples - I'm sure you can think of others.

This is why I keep reiterating my point about calculating your figures based on work required from FTEs. By talking about new hires as you are, you're again thinking about "teams" rather than "work required". When you look at work required then you can use your team as a variable in the calculation and instantly estimations become easier.

I'm undoubtedly explaining the process poorly but I do strongly recommend you read some books or articles online about managing projects and teams using (for example) agile methodology - even if you've already worked in places that employ scrum (again, for example). You could potentially really improve how you estimate work which in turn will improve how you manage your team. From personal experience, I've been a manager for a number of years and have found my skills really improved as I've adopted those lessons too.


I’m dealing with this right now. A team that is downright arrogant about reducing their well-padded schedule, taking months to do a job that should take weeks, at most. Getting them to do anything is like pulling teeth, and accompanied with lots of arrogant lectures about how hard it is to estimate, how you can’t rush quality, and so on.

It’s mostly bullshit. This team has historical reasons for their bloated schedules, but at root, they’ve simply been coddled, and never forced to justify their behavior.

I’ve been on both sides of the table now. Developers like to gripe and moan about “unrealistic” schedules, but without aggressive pushback from management, a huge number of programmers will simply never ship.


That knife cuts both ways. I am fundamentally a clever lazy person. I'm always looking for ways to reduce the amount of accidental complexity I have to deal with, and to reduce the number of trivial interruptions I get by documenting the situation a little better every time someone asks me about it, until the rate drops below my pain threshold.

If it weren't for schedules I'd have no defense at all for 'wasting time' on something that saves each of my teammates an hour of pain per week. (I've had a lot of shitty managers. Stuff like this shouldn't need a defense).


>"...taking months to do a job that should take weeks, at most..."

Well, this is basically a lack of trust between you and your devs, or a lack of common understanding of priorities. I'm not sure if you're a project mgr or a client, but clearly some expectations are misaligned. Perhaps you don't see their barriers, or they don't agree with your priorities.

The existing contract doesn't work, so some mediation is due to close that distrust before any damage is done. After all it's the team that you currently consider yours.


”Well, this is basically a lack of trust between you and your devs”

It’s pretty amusing how many definitive responses I’m getting from devs who are diagnosing my problem without any knowledge. It’s also telling how different they are.

Like I said, there are reasons, and I’m aware of them. I’m also fully aware of the technical barriers.

The devs are padding their schedule, and think they’re being clever about it. They are not. Some of it does boil down to trust (i.e. bad history with previous managers), but a lot of it is just that they don’t want to do the thing because it’s less fun than other things, and nobody has ever held them to account in the past. A sufficiently lazy developer will find endless Legitimate Technical Justifications for not doing something he doesn’t want to do.

I’m not trying to generalize to all developers here, but I do know the people and problem I’m working with.


Are you saying there is trust between you and developers? Cause you surely don't trust them and the way you describe their communication sounds like they don't trust you.

Also, if they give you a non-padded estimate, that is the one they will occasionally miss; will the typical manager at your company then force them to work weekends and evenings or otherwise punish them? Or will they typically be ok with it and accept it as a risk factor?

I am asking because the one manager who complained about padding was the one who took pride in manipulating people into working weekends - including by claiming they had promised it. I do tight estimates when I know I can be late, and generous ones when I assume the estimate will be treated as law or I don't trust management.


Rather than second-guessing a total stranger on the internet - concerning a situation about which you know nothing - you would do well to question why you insist on assuming that everyone is as bad as the worst manager you’ve had.

To answer your question: no. I don’t do any of that. Also, this situation has nothing to do with aggressive schedules.


So then what is the issue? Is the company losing money on the project? Do you want more efficiency? An estimate is just that - a padded one makes it safer to plan. A padded one is also a trust signal - people can be lazy and make tight estimates they will then fail to meet. The estimate and when the work actually gets done are two different categories.

That particular manager was not the worst.


Sounds like a hiring problem. Start hiring some new devs and keep them separate so as to not contaminate them. As they come up to speed, let one of the other devs go. Wash, rinse and repeat.

Note: I'm a dev and I can tell you I very much dislike working with prima donnas.


> Sounds like a hiring problem.

Yes, but not the way I think he’s thinking (and maybe not the way you’re thinking either). If you were actually in a room with him and said, “ok, let’s just walk in there right now, fire these clowns and replace them with somebody better” he’d immediately hold up his hand and say, “but I can’t find anybody better…”. So yes, it’s probably a hiring problem but the problem is that he’s putting unreasonable expectations on the people he’s hiring.


Oh yes, keep them separate from the people who know the application: how it works, what it integrates with, all the code, all the issues. Brilliant!


Right!?


Perhaps shipping simply produces more value for you than it does for them?


Aggressive pushback usually ensures the other side will treat you as the enemy. Which sounds like what happened there - typical management hates developers and vice versa.

Are they giving bloated estimates, or not shipping within estimates? Why are there such large features anyway? Why is the manager not a team member but instead someone who doesn't belong?


All the more reason to expect it to take longer.


There is a small subset of devs that will rush and get the work "done" in the underallocated time. The catch is in, of course, the Definition of Done. This is why we spend time aligning teams on exactly this.

Sure I can fire off that one line change and tell you it's done. But, does it work? Is it right? Do you care?

Typically this sub-par work is done by inexpensive outsourced development shops being managed by a client rep without the capacity to see through the lies until the project goes sideways in the future. The developers who rushed the work are paid and out of the picture and never have to fix the problem or touch that code again, so they don't care.


Handyman Contractor mentality vs Civil Engineer mentality in a nutshell

Unfortunately not everyone cares about resilience and certain performance thresholds when it comes to construction, especially when budgets become involved.

Some customers are happy to cheap out on a hack-job remodel in the hopes that they can flip their home and run off before the bagholder realizes they got a lemon.


The handyman spends 20min thinking about how to solve the problem.

The civil engineer spends 20min thinking about how to solve the problem and 20-days making sure there's no way whoever signs his paycheck is going to be told by a court to pay out a bunch of money if something goes sideways.

A professional engineer is basically just a lawyer for the laws of physics. You're not paying for his ability to come up with a solution. Anyone can read the books and do that. You're paying for the fact that other people take that solution seriously.


There was some CivE professor that people liked to quote who talked about how he had no qualms flunking people from his classes because a degree in Civil Engineering was a license to kill.

If you let dumbasses through the system, they will build a pedestrian bridge in a hotel lobby that fails and kills a hundred people at a party.


For those who didn't catch the reference : Hyatt Regency Walkway Collapse

https://www.youtube.com/watch?v=VnvGwFegbC8


A lot of the most serious software engineering work that I have done had much the same ratio: coding a solution that mostly works was easy. Making sure that it is robust against all kinds of failures (CPU, RAM, bus/external device failure, random bit flips etc.[0]) and does the right thing was a lot of work.

[0] The system I worked on was hardened against all of that and some more failure modes.


"handyman vs. civil engineer." I'm going to use this the next four or five times and see if the analogy helps.

I mean both are valid depending on what you want. Are you asking me to whip up a script to answer a one-off question or a pipeline to answer that question for every customer every quarter?


Spot on.

I was the 1st SE at the company I am at now. Previous applications were developed by outside contractors. I tried making some changes to that code base and found massive issues. The source code was older than the compiled application. Massive methods that were 1k+ lines long. Business rules all over the place in IF statements. Barely any documentation/comments. The only way forward is a complete rewrite.

I'm a relatively fresh SE, but I read books and research good practices because I'm trying to avoid major mistakes. Also, it is hard to explain to people who even know some code how difficult it is to make what seems like a simple form.

At this point I would refuse to work without time for unit testing/refactoring/research.

Interestingly, a few days ago I made a very minor change in an app I developed straight out of school. The app had 0 unit testing and minimal integration testing. The change created a bug, because I had allowed the specs to change weekly and I just coded it without thinking. Many lessons were learned that day.


I've been a stakeholder in many a "rewrite is the only way forward" project, though I have almost never actually advocated/voted for a "from-scratch" rewrite. I obviously say this having none of your domain-specific context -- so grain of salt -- but, in my experience, a total 'scratch' rewrite is rarely the right answer.

Furthermore, it's often nearly impossible to even be able to ascertain the 'right answer' until you've gained significant exposure to both the application & the business needs; in my experience, you'll have a far better understanding & appreciation of the architecture after a year (or three) of exposure. At that point you are much better positioned to objectively understand the potential ROI (or lack thereof) on a rewrite project vs a more conservative but concerted effort towards incremental improvement over time.

When mentoring developers on this general topic, one of the key things that I emphasize is that a functional application (even if substandard in architecture) is already solving a business need and often generating revenue/profit/positive ROI (as the case may be) for what was probably a "[re]write" project at some point in the past. Rewriting is a large undertaking with many unknowns & high costs, often higher than anticipated, and with no guarantees of reaching full functional parity in a given stated timeline. That results in difficult budgeting & ROI calculations (read: risky), and typically means the project itself is risky -- meaning the potential reward would need to be quite large to be worth it & offset the risks. I find that to rarely be the case when you already have a functional application, even if substandard. ;)


Also keep in mind that often the existing application is the only real artifact you have capturing all of the complexity and corner cases your rewrite will have to deal with.


This is a very good point and it's probably far more common than it is given credit for.

Sometimes you manage to catch some of these early on, but often they aren't caught until later in the process -- at which point the cost for the change (and the impact on budget/timeline, and potentially even the very architecture you set out to fix!) is drastically higher.

I've seen it happen a number of times where the 'spec' seemed simple enough at commencement & throughout most of the project; then you'd have each department test & the reports would start coming in... and the result would inevitably be a multiplier on the project scope.

Not from missing features mind you, but via the omission of 'complexity handling' / business logic which was documented -- but the code itself was the documentation, which isn't overly unusual (and IMO isn't even a particularly bad thing in many cases, though not all).


It's important that documentation be clear, discoverable, and up to date.

Code-as-documentation is best-in-class at staying up to date, when it comes to documenting what a system does and how. Often the relevant pieces are very discoverable as you're going to make a change. For other goals, it's much less discoverable. Clarity depends on both how the software is written and who the audience is.

Tests-as-documentation are arguably a special case of code-as-documentation. Outside of the case of well maintained tests-as-documentation, code-as-documentation often has a hard time expressing "why" and distinguishing between what has to be a certain way and what just happened to be that way.

It also has some trouble expressing aspiration - "we've decided it should be this way, but it's not yet".
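
As a sketch of what that special case can look like (Jest-style TypeScript; the VAT example and every name in it are invented, not from this thread):

    import { test, expect } from "@jest/globals";

    function orderTotal(subtotal: number, destination: string): number {
      // EU orders include 20% VAT (an invented business rule for the example).
      return destination === "EU" ? subtotal * 1.2 : subtotal;
    }

    // Documents "why" and "has to be this way": a compliance requirement,
    // not an accident of implementation.
    test("EU orders include VAT, because invoicing without it is non-compliant", () => {
      expect(orderTotal(100, "EU")).toBeCloseTo(120);
    });

    // Documents "just happened to be this way": pinned so any change is deliberate.
    test("non-EU orders currently get no tax added (incidental; revisit if rules change)", () => {
      expect(orderTotal(100, "US")).toBe(100);
    });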


Late to the thread, but just to chime in, briefly - I would agree unless the underlying platform has been deprecated.

If we're talking about something in Java that could be better, in the abstract, but it still works, agreed: it might not be worth the effort to redo. Java isn't going anywhere, and versioning differences aren't always critical.

If we're talking about deprecated front end frameworks that no longer have any LTS (Angular 1 comes to mind), moving over to a framework that does have LTS is a pretty smart move, if the code or app is that important and valuable.


In my experience it takes months of looking at existing code before you see the good architecture under the superficial mess.

In other words, work in the current code for at least a year before proposing a re-write.

"Source code older than the application" doesn't mean much as a statement - any good process builds from a CI system, which only ever takes already-committed (older) source code. The question is how different the application the code generates is from what exists, and what to do about the differences.

Massive methods are sometimes a sign that the good architecture missed something. Refactor them of course, but that doesn't mean the architecture is bad, just that it needs to change to fit current requirements.

Documents/comments lie. Their value needs to be weighed against the risk that they mislead you. I'm not saying you shouldn't have them, just not as much as you think.

The problem with books and research is knowing when to apply them. The rules exist for good reason, but they are really guidelines, not rules, and sometimes they need to be broken. I'm considering that now in my application: at first glance the GUI depends on the business logic, which depends on the network, but looking closer I'm debating calling it fine because there is no real business logic (the network sends 5, the business logic changes the type from int to mm/s, the GUI displays 5 mm/s), so the complexity of making the business logic the thing everything depends on doesn't seem worth it. Maybe - I'm still trying to figure out where we go next, and different guesses result in different optimal architectures: if we were really sure they wouldn't be guesses.

I do agree that you shouldn't do anything without creating a test. Note that test is singular: don't try to create exhaustive tests; not only will that take too long, but you will often waste time on a test that is incidental to the implementation and not required. Create a test or two around something you want to change, and then make your change. You get some assurance you didn't break anything, and if you were wrong and broke something, you at least get another test out of it.
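
That "test or two around something you want to change" move is often called a characterization (or pinning) test. A minimal Jest-style TypeScript sketch, with an invented stand-in for the legacy code:

    import { test, expect } from "@jest/globals";

    // Invented stand-in for the legacy code you're about to change.
    function legacyShippingFee(weightKg: number): number {
      if (weightKg <= 0) return 0; // surprising, but pin it anyway
      return weightKg < 5 ? 4.99 : 9.99;
    }

    // Record what the code *currently* does, right or wrong, so the upcoming
    // change can't silently alter behavior somebody may depend on.
    test("pins the current fee for light parcels", () => {
      expect(legacyShippingFee(2)).toBe(4.99);
    });

    test("pins the current fee at the 5 kg boundary", () => {
      expect(legacyShippingFee(5)).toBe(9.99);
    });

    test("pins the odd zero fee for non-positive weights", () => {
      expect(legacyShippingFee(-1)).toBe(0);
    });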


> it takes months of looking at existing code

All the while listening to management say, “this should be a one line change, why should it take more than a few hours?”


> this should be a one line change

Oh great, you do it then.


Sometimes it's completely obvious that a design decision is flawed.


And when that's true, the design decision in question is often flawed.


I disagree with the term obvious.

Sometimes it only seems completely obvious until you discover an unexpected constraint that the design decision address well. Sometimes those constraints are no longer a limiting factor and you can safely rethink the design, sometimes they are an elegant way to prevent a specific real problem and you shouldn't.


Even though the code you are interacting with appears awful, you may want to reconsider the rewrite strategy. This post may be helpful: https://www.joelonsoftware.com/2000/04/06/things-you-should-...

Consider rewriting only a handful of small parts that are causing problems critical to the success of the product, and making small improvements to the rest of it as needed over time.


This is impossible in my case: the source code is outdated. We do not have the source code of the version of the project that we run in production. The dev lost it.

I have tried working with the source code we do have, but it has many bugs and appears to be two years out of date. This leaves us with an application frozen in its current state, with no way of making any changes.

I may be able to extract pieces of information, reuse some stored procedures and so on.


At least you have something. You need to tell your boss that all features in the last 2 years need to be rewritten from scratch, with no lessons learned to speed up the effort, but that is still faster than starting over (particularly if the boss can say some things didn't turn out useful).


Would you take this approach even if the framework is outdated? Part of rewrite is to switch to a newer framework and make general improvements in maintainability, reliability and speed.


Yes. By the way - "Working Effectively with Legacy Code".

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

If you start from scratch you may bump into the same edge cases that the original writers bumped into, and end up with code that is not much better than the original - even if the original is 2 years out of date.

I’m sure there were cases when writing from scratch was a good call, but I don’t remember hearing about it.


I guess if I do rewrite, I shall write about it as I go. If I fail it will make for a good story.


I'd emphasize that failure on a project like this may not be what you'd traditionally have in mind when thinking 'that project failed', though it happens and it could be that bad in the absolute worst case.

The issue is primarily that of the reward versus cost -- especially the opportunity cost.

When the system is rewritten, will the business have increased revenue or decreased cost? Will it do so significantly, surpassing at least the cost to rewrite (salaries, etc) -- that's the absolute minimum bar, but then you have to consider the opportunity cost which is the real concern:

If you had instead spent the same amount time adding new features, implementing an A/B test suite to increase conversions, improving marketing capabilities, retention mailers, or really any other activity that could positively impact the company business metrics -- would the impact be better than the impact of the rewrite?

In most cases the customers (internal or external) don't really know or care how good/bad the underlying code is, as long as the product serves their needs. When that's true, even partially, the value of 'rewrite' almost never exceeds the opportunity cost alone, let alone the absolute cost (and that's to say nothing of the risks).


Yes. A framework is a detail; the business logic shouldn't care. Even if the framework is tied in, I'd keep using it while moving newer stuff to something else. I've been in big rewrites to change the framework and everything else; in hindsight I believe I could have done an in-place refactor of everything to the new framework and kept working the whole time, at less cost.

That isn't hypothetical: in the above rewrite we took one part that was considered too essential to wait for, brought it in, wrapped it, and used the new framework. It was working much sooner than the other code. It still isn't completely rewritten - but it doesn't need to be, as the core business logic is known to work.


If it's a language that compiles to bytecode, like Java or .NET, you might be able to decompile the production binary.


Even if it's not, decompilation might be useful. What I'd try is decompiling both the running version and the build of the available source code, and see if the diff is informative.


Last time I checked x86 decompilers were not that useful.

They produced code that wouldn't compile, often failed completely on more complex functions, and translated compiler boilerplate into spurious code.

This was a while ago so maybe things improved since then.


The goal here is some window into "what changed". For that, we don't need "compilable", and boilerplate won't be an issue provided it's stable (by no means guaranteed, to be sure).

If I had to bet at even odds, I'd bet against it being useful. But the win is potentially large and the effort isn't.


Good suggestion.

I have decompiled the version in production. I will admit that reading it is extremely frustrating, as there are no objects and the fields are named field1, field2, field3.


Still, you can compile + decompile the code you have, and compare that to the decompiled production app. This might allow you to apply any changes back to the code.
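
For what it's worth, here's a minimal sketch of that compare loop in Python, assuming a .NET binary and the ilspycmd decompiler - the tool, its flags, and all the paths here are assumptions, so substitute whatever matches your runtime:

  import difflib
  import subprocess
  from pathlib import Path

  def decompile(assembly: str, out_dir: str) -> Path:
      # Decompile a .NET assembly into a tree of C# files.
      # ilspycmd is an assumption; use whatever decompiler fits your runtime.
      subprocess.run(["ilspycmd", assembly, "-p", "-o", out_dir], check=True)
      return Path(out_dir)

  def diff_trees(old: Path, new: Path) -> None:
      # Print a unified diff for every source file present in both trees.
      for old_file in old.rglob("*.cs"):
          new_file = new / old_file.relative_to(old)
          if not new_file.exists():
              print(f"only in old build: {old_file}")
              continue
          for line in difflib.unified_diff(
                  old_file.read_text().splitlines(),
                  new_file.read_text().splitlines(),
                  fromfile=str(old_file), tofile=str(new_file), lineterm=""):
              print(line)

  # Hypothetical paths: the binary built from the 2-year-old source
  # versus the binary running in production.
  diff_trees(decompile("old_build/App.exe", "decompiled_old"),
             decompile("production/App.exe", "decompiled_new"))

Decompiler output is noisy, so skim the diff for its shape rather than reading it line by line.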

Btw, it's not the dev who lost the code, it's the company and its inadequate development process. It should never have been a single dev's responsibility to make backups.


Definitely did not mean to only blame the dev. The company knows that errors were made on both sides, hence why we are working on improving the process.


Slightly relatedly, we bundle our source code into our deployable artifact along with the revision. A bit silly and it increases the artifact size, but it's just another layer in the Docker image and so it's not that painful.


At the end of the day, GP is claiming zero confidence in being able to maintain this piece of code. It's come out of their mouth as "I want to rewrite this", but what it means is "I think it would be less painful to rewrite this in a way I can support than to bumble around like an idiot in it for 3 years."

There are a lot of things you can do with that input but you really have to acknowledge that there's a pretty big problem here, and some empathy is going to be part of our path out of this mess.



Don't take the Joel On Software article as gospel (not saying that anyone is). There are other models out there as well: https://medium.com/@herbcaudill/lessons-from-6-software-rewr...


Sounds like you don't even have source control and a CI system.


You are correct.

We have implemented source control and CI is the next obvious step, but those things were not in place 10 years ago when the application was developed.


On the contrary, there are plenty of devs who won't take on any work until it's been designed, scoped, run through management, prioritised and scheduled, when the actual fix is a one line change that would have taken less effort than the meeting where it was prioritised.

There's a happy middle ground between the two, and assuming developers who will deliver quick fixes are all hacks who don't care is counterproductive.


> There's a happy middle ground

Yeah, the happy middle ground is a management infrastructure that doesn't insist that you say exactly how long everything is going to take before you start doing it and accept that there ARE unknowns in software development.


Those people got burned by managers who like to think that all of their problems are one line fixes that can be rushed out.

They think that 'simple' from the user's perspective is 'simple' in an engineering sense, and usually those are inversely correlated.

Essentially that engineer has grounded the management team. They can't be trusted to behave so their toys have been taken away.


Ha, hilarious. I have had changes that were done as soon as they were mentioned, but that still required several more meetings to ensure everyone was aware of the upcoming change, had signed off on it (even though it was a severe bug), and had discussed it ad infinitum - including how to do it, even though I had already done it at the start of the meeting. Approximately 100x the time it took to do the work.


Well, sure. I've had those too. The trick is making sure some of that time is spent validating the assumptions you made explicitly or subconsciously.

Doing the change right means making sure that a) your change actually meets the request as the requestor understands it and b) doesn't break other features that already exist.


A universal Definition of Done (DoD) does not exist. It is whatever the stakeholder needs it to be. The stakeholder might be delusional or not; it's your job to educate them or, if that's impossible, to avoid working in the toxic conditions the future will bring.

Everything is a feature:

- Automated tests - a feature. You can buy them if you want, or you can accept accidental bugs at any time - even for stuff that worked before - or even a complete meltdown. You don't have to buy them (I personally usually stop working here, as this is professionally unacceptable to me; there are tons of substandard shops that will do it), but then accept my warnings in writing, so I can later ignore your anger with full confidence.

- CI/CD - you can buy it. It means we are agile and fast, doing 20 deployments a day instead of 2 per week, some of which may fail because of someone's insulin spike at that moment. Maybe you don't want it, and snail speed is acceptable for the timeframe/budget, or is the least evil.

- Epic docs - you can have them or not. Again, this determines how many people you will eventually need on the help desk team and in the local IT team, the perceived quality of the system, etc...

- Metrics - yes, we can make nice dashboards so you are the FIRST to know when anybody gets an unknown exception or the CPU goes higher than 90%. But maybe you don't care, or don't have the budget - and then maybe we will spend time doing the wrong things... because we don't know how often features are used...

- Full auditing - maybe you need this going 10 years back in full detail for legal reasons, or NOT, because you don't give a damn as you plan to sell the thing in 3. Your decision.

and so on and on...

Everything is a feature. I won't accept work without some features - I can be realistic if needed, but we need to mutually understand and agree on what that means for the system, and have it written down in a public place (for example, the company ticketing system).


I'm not sure how any of that relates to you suggesting "DoD does not exist" above. What do you mean?

The DoD is not a singular ruleset, but something defined by the project team, and it can evolve as the needs of the project require. It may or may not involve some or all of the components you've mentioned.


OK, bad wording, I guess. What I meant is that there is no universally acceptable DoD, even if you pin down most of the project decisions.

It exists, but it's dynamic and context/stakeholder/implementer/task/feature specific.


> There is a small subset of devs that will rush and get the work "done" in the underallocated time. The catch is in, of course, the Definition of Done.

For these devs the Definition of Done is handing it to QA to debug. They do not include all the QA tickets their rush job creates as part of their DoD, and neither does their bad manager. Therefore, it does, in fact, look like they're way more productive than the rest of the team.


I've worked with devs who were so lacking in self-awareness that they thought they spent less than 10% of their time fixing bugs in their code.

In the end they were right. Because they refused to own their own bugs, everyone else was cleaning up after them.

The real problem was when they started using this bullshit ratio to inform their opinions on development processes, pushing back on attempts to mature our process and tools.


What's wrong about this is that, from the client's point of view, the Definition of Done ends up depending on who they're talking to more than on their situation.

Hence the sibling comment of "Handyman Contractor mentality vs Civil Engineer mentality." It becomes a question of identity rather than a question of what the situation demands.

I'm not sure how to fix this, but I don't think the problem is as bad in practice as it is in discourse. I don't think people who see themselves on the "Civil Engineer" end of things would do the equivalent of providing CAD drawings and structural analysis for someone who asks them to replace their mailbox. On the other hand, it's still a problem if they talk as if they would.


We do.

That is why any proper RFP answer already provides a broad overview of how the problem would be tackled, and possible solutions to the described problem.

It's also why, during project development, at various delivery phases, artifacts like architecture diagrams, documentation, and UAT from the customer team come into play.


It sounds like you're only selecting jobs that require the approach you're accustomed to, which is fine when you get to pick your jobs, but it isn't an option for internal dev teams that work on a product. The jobs come at whatever size they are. If the job is to change a word on a web page, there isn't going to be an architecture diagram, and UAT is going to mean someone reloaded the page and messaged "looks good, thanks!" to the developer.

For a consumer product with millions of users, there's probably a checklist of device platforms, screen sizes, browsers, and screen orientations to check that one-word change, but for a SaaS offering with less than 100k users, probably not.


Three months ago I found myself in yet another discussion with a PM about time estimation. I gave him an analogy I use for non-developers: you try to estimate how long it will take to pack a kitchen into boxes for moving. The catch, I said, is that there's a significant chance that each time you open one of the cupboards or drawers, there's a whole other kitchen or even a house behind it.

He responded with, "well, we just have to figure out which doors have kitchens behind them, that's all!"

But during this discussion, I could feel the weight of the countless times I had already had a similar discussion in the past and it was heavy on my soul. Seriously, it feels like we've made no progress at times.

Even here at HN, I've brought up the paper "Large Limits to Software Estimation" several times and been downvoted as not getting it.


I like to use civil engineering analogies too. This cupboard one is great, I'll have to remember it.

Like, for people who want a 50% demo at the 50% timepoint: asking if they'd drive a truck halfway across a bridge when it was half completed. I mean, you could build a bridge that way, but it would be way easier to lay all the foundations first, then build the support structures, then finally pave it at the end.

Another one, for when someone asks to tack a feature onto a system that really can't support it, is comparing it to asking to build a second story on a tent. They wanted the cheapest, quickest-to-set-up, lightest-on-resources system, and that required certain tradeoffs. To build their new feature, it would often be easier, quicker, and safer to rewrite the whole thing from scratch.


The problem here is that the PM does not see himself as part of the team - your team - but as being on an 'other' team driven by other gears. The estimates are just an input for his process flow.

What's missing is another stream of input that externalizes the constraints and barriers in the flow of both your team and his. Even then, as long as the PM does not see himself as part of your team, all this is in vain; you'd be speaking foreign languages no matter the analogies.


I've always wondered why we don't provide estimates with low, median, and high scenarios. It'd create a lot less disappointment and more accuracy.


Depending on the PM I've found giving a confidence level on an estimate to be helpful. For example on a task I've done a million times I might estimate that it will take me 1 hour with 95% confidence. Other tasks I might estimate at 2 weeks with 50% confidence, same task 3 weeks at 85% confidence. As with all things communication, the audience matters and it's incumbent on the speaker to communicate in a way the audience can understand. So that might not work with everyone.

Some of it I think is also personal confidence. If I tell someone it'll take me two weeks, I never budge unless new information becomes available. If you budge, you are welcoming being pressured/bullied into working overtime and/or delivering poor quality all the time to meet infinite demands. Also, we must accept that most customers are willing to accept lower quality/higher risk than we want to deliver. In this case what's important is to state the risks as plainly as possible and set boundaries on what you're willing to do before those risks come back at you.


I think the problem there is that the uncertainty bounds can be massive. Almost every task will have the form:

"This task should take a day. But if the feature of the framework I plan to use turns out to be buggy, it might take a month. A bug is unlikely but not impossible, I handwave a guess at 10%."

This sort of estimate might be reasonable, or not, depending on how fractal you want to get with the backup plan - a month of tasks which themselves have wide uncertainty estimates. But it's also quite useless for planning. You end up with a Gantt chart that says "this project might take 3 months, or 5 years", which helps nobody.
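
To make that spread concrete, here's a toy Monte Carlo sketch in Python - every number in it is invented - of a project made of ten such tasks:

  import random

  def task_duration() -> float:
      # 90% chance the happy path holds (1 day),
      # 10% chance we hit the buggy-framework case (30 days)
      return 1.0 if random.random() < 0.9 else 30.0

  def project_duration(n_tasks: int = 10) -> float:
      return sum(task_duration() for _ in range(n_tasks))

  samples = sorted(project_duration() for _ in range(100_000))
  print("median:", samples[len(samples) // 2])                 # ~39 days
  print("90th percentile:", samples[int(0.9 * len(samples))])  # ~68 days

Ten "one-day" tasks, and the schedule you can actually commit to is somewhere between five weeks and three months.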

Hence #NoEstimates


I don't think people are good at that. I've been asked before to give a low and a high scenario, but all I can really do is tell you how long I think it will take (I did +/- 50% of the estimate, and explained what I did).

Sometimes there's a specific uncertainty that radically changes the estimate. But remove that uncertainty and we're back to estimating and doing +/- 50%.


It's the unknown unknowns that really make estimates inaccurate. If someone were an expert in one exact thing and that's what they worked with day in and day out, then those unknowns would be small and the areas around them well defined.

It is relatively easy for someone to become a maintainer of one specific thing as long as that's the only thing they're doing for a long time.

Then you look at a software developer, sysadmin, or anyone in the technical industries, and quite often it's not one specific thing, it's LOTS of specific things. Each with thousands, maybe millions, of human-hours in their development and in the development of the things they depend upon. Every layer of hardware and software has its own quirks and wrinkles.

It's like trying to be the god of a small solar-system of interacting planets each with plate tectonics and life-forms.


A beta distribution is a bit better, and is basically what you are talking about:

https://www.isixsigma.com/methodology/project-management/bet...


Yep, PERT is what we use internally for estimating client work.

https://en.m.wikipedia.org/wiki/Three-point_estimation?wprov...
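
For reference, the three-point (PERT) calculation itself is tiny: the mean is a beta-inspired weighted average that counts the most likely value four times, and the standard deviation is roughly a sixth of the range. A sketch with illustrative numbers:

  def pert_estimate(optimistic: float, most_likely: float,
                    pessimistic: float) -> tuple[float, float]:
      # Weighted mean (the most likely value counts 4x) plus a rough
      # standard deviation of a sixth of the optimistic-pessimistic range.
      mean = (optimistic + 4 * most_likely + pessimistic) / 6
      std_dev = (pessimistic - optimistic) / 6
      return mean, std_dev

  # e.g. 1 day if all goes well, 2 days most likely, 10 if it doesn't
  mean, sd = pert_estimate(1, 2, 10)
  print(f"estimate: {mean:.1f} days +/- {sd:.1f}")  # 3.2 days +/- 1.5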


That model is too complicated for tiny minds. Just tell me when it will be done, man!


I've been developing for clients for 20+ years and 9 times out of 10 they do underestimate, but I think it has to do with a lack of knowledge rather than anything malicious.

I find myself doing it when I get quotes for home repair or construction. My mind immediately goes to "I don't think it should take that long", even though I have no idea how to estimate those types of projects.

I wonder if it's because, with something I'm not knowledgeable about, I'm only estimating what I can see, while there are 5-10 tasks that have to happen in the background to make the thing I can see work properly. I just have no idea what those background tasks are to begin with, or how many there are.


Please don't excuse this. When management over-promises and devs "under"-deliver, the axe never falls on management. The best work environments I've seen are ones where devs and management work together to supply estimates of a reasonable scale, and where time is taken in project planning to chop big goals up into small tickets in a developer-meaningful manner.


Absolutely, it makes no sense for a manager to set timelines without conferring with devs.

Devs should set timelines, and business people should set "how much benefit will this bring". The combination of these two things yields priority.


I agree, and the biggest difficulty that comes out of this is that things like paying down tech debt are hard to justify from a business perspective. This is a legitimately hard problem that everyone who ends up in project planning will have to deal with. When you hit it, try to rely on talks by experts in the field and other external resources to show the business side the value that paying down tech debt will give you.


Paying down technical debt should be exactly as easy/hard to justify, as paying down monetary debt. They behave the same and have the same effects over time; that's why debt is such a good analogy.


I disagree: technical debt is a much more liquid quantity that can be paid off in as small or as large a chunk as you wish. If you as a company offered to let everyone pay off tech debt with 10% of their time, but forced it to be 10% on a daily basis, you'd likely end up accumulating more debt - whereas paying off monetary debt a little at a time, by diverting x% of revenue to it, is a particularly good way for a private company to handle debt, as it ensures there's plenty left to reinvest and the cost the debt imposes is well known.


So tech debt is harder to subdivide arbitrarily, I'll buy that. And it's harder to quantify in the first place, which seems to be one of the theses of this "dear client" letter.


Not only harder to quantify. "Tech debt" is frequently used by devs as an excuse to make changes that they'd personally like to make, but which aren't necessarily improvements, e.g. rewriting from one language to another, or rewriting code they find hard to understand because there are no comments (instead of understanding the code and adding comments).


Tech debt is manageable if you start out with the right culture.

If you start out with an environment where devs feel rushed, you will run up tech debt very quickly, and it will be harder to get on the same page later, when dev slows to a crawl.


> I found myself doing it when I get quotes for home repair or construction.

My quite recent conversation with a contractor about estimating our home renovation project went more or less like this: "Demolition: 2 days, foundation: 3 days, ground zero: 3 days, exterior walls: 3 days, floors: 1 day, ceilings: 2 days, attic and chimneys: 2 days... so about five months in total". I felt right at home and wanted to hire them. The jump from how much time each task seems to require in pure work effort to how long it may realistically take - given the unknowns, downtimes, logistics, and various overhead - seemed similar to the way I do estimates. An hour here, a day there, another four hours for that... yeah, I need two weeks for this task.


You're missing a critical part of the equation: the inexperienced software engineer.

Inexperienced software engineers are less expensive, so you can hire more for your payroll budget. These juniors tend to think "yeah, that sounds pretty straightforward. I can do that in a week." It actually takes two months. It takes experience to estimate even nearly correctly. And MBAs love to buy some less expensive, less experienced engineers with their money.


Or even worse, they actually do knock it out way faster than expected, but it's buggy and unmaintainable.

I was that inexperienced software engineer once. At the time a grizzled veteran tried to drop a knowledge bomb on me and said "A good programmer can write about 14 lines of code per day". It took me a really long time to figure out that he had mis-quoted something. It should have been "A good programmer only writes about 14 lines of code a day, they're just the right 14 lines."


You should also get credit for the number of lines of code you read every day!

Some days you read pages and pages of code, but don't write a single line.

However, that makes it possible to figure out just the right 14 lines of code to write the next day, by reusing instead of duplicating code that's already there.


That is an average. On a good day you can write 100, but then you need a week and a half to figure out how to write the next line. New projects are easy: I can write 500 a day for several months, until I get to those last hard features that slow me down to 1 a day for a while.


This is highly codebase-dependent, in my experience. The less you have to read and think about wtf is going on, the more you can write.

And generally, the stronger the coupling, the more you have to read.


Juniors are usually terrified by the prospect of being responsible, so they freeze or get stuck.

Estimating should be optional for juniors by default, but self-tracking of progress should be mandatory. That way it yields facts without guilt or overpromising. The team should estimate; the team should deliver.


This isn't all on the non-developers. I've walked into two rooms this week that contained several senior developers, and when I asked them about non-functional requirements, they had to ask what they were and why they were important.

I've also seen teams where the Definition of Done doesn't include any steps at all towards deployment. Done is when someone approves the pull request.

Not surprisingly, it takes longer for those teams to 'complete' a change. In the former case, they are continuously surprised, and angered, by 'requirements' that 'no one' told them about. In the latter, they stop halfway and wonder why everyone is waiting on them, because it's 'development complete'.


I agree with your comment about senior devs. It seems that as devs climb the ladder, many get a superiority complex, believing they could do quickly what others cannot.

I was once a tech lead of a team where the architect had been berating and criticizing a team member for weeks over something he thought should take a day.

I suggested he should take over and complete it. It took him weeks to complete.


> the architect had been berating and criticizing a team member for weeks

If berate is truly the appropriate word (upon investigation), and I had any say in the matter, I would fire his toxic ass immediately.


"I was once a tech lead of a team where the architect had been berating and criticizing a team member for weeks over something he thought should take a day"

As I've gotten more experience as a developer, I've realized that most tasks take longer than I used to estimate... usually because I want to come up with a long-term solution and not just hack something together with no testing.


I'm sure I still have far to go, but currently, when I see someone quote a time frame for a change, I am more concerned when it seems too short than when it seems too long. Especially when the dev is questioned and the estimate doesn't include time for documentation, deployment, etc.


To be fair: non-functional requirements as a separate entity are pretty much wank.

If you need a line in a document to tell you that exposing privileged information to the outside world is a bad idea, or that you should make sure the solution you're designing has a reasonable chance of servicing the expected load, then you probably shouldn't be a developer.

The last time I saw explicit non-functional requirements, they had something like "the solution should have 99.99% uptime." I realised that our downtime for releases (old school, I know) was already more than that, and promptly ignored them.


>when I asked them about non-functional requirements, they had to ask what they were, and why they were important.

Based on my own experience with NFRs, these are perfectly reasonable questions.


We're missing a lot of context about this situation to tell if anyone was being unreasonable. A reminder for everyone: unless you're all in on waterfall with complete specs, user stories are placeholders for a conversation. Talk to each other and assume good intent on all sides until proven otherwise. You'll have a much more enjoyable work experience than if you dig in and make things adversarial.


I think this is a parse issue -- is this

> I asked them about [the non-functional requirements for this project] and they had to ask [please give me details on those requirements]

or is it

> I asked them about [non-functional requirements as a general concept] and they had to ask [what are these new things you speak of, I have never heard of such a thing]

?

The former is reasonable, the latter less so.


For me, it was more "These things you call NFRs, I do not understand the purpose of them, especially given how they are decided."

Requirements like "The system must be secure.", "The system must not go down.", or "The system must not have performance issues."

I'm lost as to what the purpose of such statements is, as they don't tell me where the real focus needs to be. Not every system is mission-critical and deserves equal resources devoted to ensuring uptime.


As developers, the most useless bug reports we get are "It's broken". The NFRs you describe sound like "The system must not be broken", which probably makes a lot of sense to someone who's likely to submit a useless bug report.


The purpose of those statements is to uphold contracts and compensation clauses when things go wrong and end up in court due to refusal of payment for delivering broken software (which might be debatable if that was indeed the case).


See, that would make sense if it was something like 'our system must have 99.9% uptime'. But to say it cannot have any downtime, while our releases require us to take it down, and while the business contracts allow downtime for releases, means that the NFRs contain no realistic requirements and are thus useless.

It's like saying mobile devices must be supported. Does that mean only top of the line recent releases? Does that mean 8 year old smartphones? Does that mean the internet browser on the Nintendo DS?


It's a failure of education. This is one reason why everyone should have a basic understanding of code: not so they can all become programmers, but so they can get a mental handle on these systems and the people who do program them.

It's the auto-mechanic problem; if you don't understand the thing yourself, you have no way to know whether or not you're being taken advantage of. So you tend to just split the difference and maintain a constant, moderate skepticism.


My worst managers have been ex-programmers who got promoted to management (too) early.

One always seemed to think solutions were quick and easy, but that's because he never got to the stage where he could write reliable, maintainable code. We were constantly fighting fires and he seemed to think that was normal.

I am not convinced that giving people enough knowledge to be dangerous is a good thing.


Why is 'code' different from any other skilled profession - medicine, plumbing, quantity surveying, piano tuning, speech therapy?

Why is it essential that 'everyone' knows about code? Why draw the distinction?


Because like writing and basic arithmetic, it's relevant to some degree in the majority of other fields. Code is used in medicine, quantity surveying, manufacturing, film, oil drilling. It's a field that cross-cuts fields, and decision-makers who don't have a basic understanding of it are going to make worse decisions no matter what their organization's primary focus is.


Taylorism still exists. The employees in the lower rungs of the organization must be managed by people higher in the org. But when you hire a manager who doesn't understand the work his employees are doing, you get into this situation.

The only way to fix this is to have someone who understands the work at the highest levels of the organization. But that is impossible, because the organization is designed to hire people without a dev background at the executive level.


I bet those MBAs get mad at their mechanic and the guy rebuilding their kitchen every time they go 'over budget' too. The thing is they don't have much power over those people, but they do over us unless we are working as consultants.

Some of us wish there were safety codes for software like there are for plumbing and electrical. A lot of arguments reduce to "because what you're asking is illegal".


I see this everywhere when one person doesn’t see the value in the work they’re asking for. It’s not that it takes 4-5 hours instead of 1, it’s that it costs more than they want it to cost, because they only see the output and not the process.

Same as you get with designers and artists when a client cheaps out and says their 4 year old kid could do better, or they’ll pay in something other than money.

I don’t think explaining it works. I think that’s almost empowering them because they’ve put you on the defensive, justifying your time and expense when it needn’t be justified.

Better to get them into the habit of managing their expectations instead of trying to people please.


Software prices have not done anything to help with this.

Entire operating systems are “free”. Software that takes tons of effort may be $.99 on an app store. We have been programming non-programmers to undervalue everything for those same 30 years.

Probably, software packages should never have fallen below $100. The minimum price on any app store should probably have been $10. Then we might see a valued industry.


Well, the business side wants to squeeze down the cost at any chance they get. I had a client who would say something like "it's a simple change" every single time he made a feature request, no matter how big the change was - and he's been saying the same thing for years. It used to rile me up enough to go into detail on why it would take weeks to get some huge feature done. Now I just double whatever time estimate I give him.


Is this unique to software?

Our shiny new tunnel took 3 years longer than planned. Our new aircraft are being accused of crashing due to engineering shortcuts to get them in the air faster. A tower crane fell across one of our main streets, killing 4 people, so even if that building was on schedule before, it's sure not going to be now.

Is there an industry which can complete all projects on-time and on-budget, with no catastrophes? I'd love to see it.


>software has been a mainstay of modern business for _at least_ 30 years.

No it hasn't. I've worked many jobs where software is still an afterthought. There might be a few software packages that the business licenses for use, but they are far, far, far from ever having any software developers on their staff.


Well, what I mean here is - you can’t do business without software (or rather, you can’t do business without computers). I suppose if you’re managing a convenience store in central Alabama you might, but otherwise, software IS the business.


My former company (valuation ~$100M) was not a software company. It was a services company, and most of the business could run off of paper.

We were not a convenience store in central Alabama.


> " then maybe I could understand why they STILL think we’re lying."

They think that because, occasionally, something that sounds (to them) like the same simple request does take an hour or two. That's all they remember. That's all they want to remember.

Frankly, short of critical bug fixes, is there ever really a business need to deploy such updates almost immediately? Why not say, "We expect to have it coded and tested in time for our next scheduled update", or something like that? This way they see the big picture and don't get to know - because they don't really need to know - the nitty-gritty details.

I'm not against transparency. But the fact is, clients need to be managed, as do their expectations.


So far, the analogy that has worked for me is the house-building one.

Something along the lines:

If you expect to build a cottage and it actually turns out to be a cathedral, that is the source of the extra time or rework.

Or, imagine you build a wall inside a house. But once the wall stands, you suddenly remember that you wanted to have a window there. Or an electric socket. Imagine what extra work that will be.

It is easier for clients to imagine that, and they understand the necessity of planning, and why some changes, although they seem small, may actually take a lot of time.

I have the feeling that software is too abstract for non-developers to reason about.


In my experience, well in excess of 20% of devs would make such claims and/or behave like this if given free rein, perhaps even a majority. This is especially true if you take "dev" to mean "anyone who commits code". The exact percentage will of course depend on the nature of the organisation and the incentives set by it.

I will give you that eventually they will take a bunch of time on something and have to explain furtively "it's more complicated than you think" but they may not even realise at that point why it's getting harder.


> And this whole time, every single professional software developer has been telling every single non-software developer the exact same thing, over and over

What? No, not at all.


In my opinion, based on limited experience, this happens because of unrealistic expectations set by weak CIOs who don't have the spine to stand tall. Most CIOs leave in less than 3 years, so promising the moon and delivering the Mariana Trench is not a problem.

IT people are treated as glorified technicians who should live in the basement. Business is not willing to share power with IT. It's an unwritten class system.


I disagree. Sometimes a task requires more time than someone with no knowledge of the system would expect; sometimes it can take much less. I always thought that a big part of my job is to make tomorrow's changes less expensive than today's. If things are constantly costing more than expected, there is a problem.


What surprises me is not that non-developers don't understand that simple changes are not that simple and will take longer than they think to implement.

It's that former devs who are now managers don't remember this when they are in a new role. Maybe it's the pressure from their superiors to do things quickly, or maybe they no longer have the developer mindset.


More likely those who were paying attention and learned the lesson were promoted, fired, or age-discriminated out of it.


I blame HTML and things like Dreamweaver, Flash, and MS Word in making a whole generation of non-technical folks believe that software development is trivial text-editing and drag/dropping. If "my nephew in high school makes webpages|games|apps" then it can't be all that hard to do, right? Better hire some code monkeys so I can kick back and relax. Hey, why is my assembly line so slow? Yah, mules!


If you think software developers have it this bad, try being another kind of professional, like marketing or legal.


The idea is sound, but this letter is truly surprising to me. Instead of a succinct summary of what goes into the change, we get a long-winded narrative that reads to me as full of excuses. I think a more useful email would be a breakdown of the time actually spent, and a proposal for improvement, something closer to this (obviously sent after the feature has been deployed, and obviously including additional items for documentation, code review, whatever else takes time in your development process):

  Summary
   - 0.5 hours investigation and planning
   - 0.5 hours feature development
   - 4.5 hours manually testing
   - 0.5 hours deployment
   
  Proposal
   - 8 additional hours now adding automated testing
Obviously, depending on your client, they may prefer a more verbose format or need some more explanation, but probably 3-6 sentences at most.


Pro-tip for anyone who needs to internally sell getting the automated testing in. Feel free to gently inflate the manual testing and deflate the automated testing numbers, and make it clear that the automated testing will reduce the cost of the next ticket.

Once a code base has decent test coverage the time it takes to add additional tests is pretty reasonable and most of that manual testing time goes away.


Do you think a housing contractor is "making excuses" when they explain the rationale for a request to take down a wall in the house by saying: 1) we need to investigate its condition, 2) check if it's load-bearing, 3) check if it carries electrical or plumbing, 4) check if it is to code?

No, they're telling you the rationale of what goes into making a change. I agree that this isn't something you should send to clients. But even if you send a summary, someone who asks this question will ask the same thing about any item you summarize. The next question will be, "Why does it take 4.5 hours to manually test?"

So explaining the rationale, at least once, will help the customer understand the process, and lets them contest specific elements rather than having everything hidden in vagueness.


Really, I don't think that the developer is "making excuses" here. Which makes it all the more unfortunate that the letter as written sounds like excuses.

> The next question will be, "Why does it take 4.5 hours to manually test?"

I'm not sure it will. In my experience, clients are usually surprised about how long something takes not because they think specific tasks are taking too long, but because they aren't really aware of what the tasks are nor of their tradeoffs.

"You're right, it took longer than expected. I chose to spent some time setting up automated tests. That took extra time now, but will save much more time later."


I disagree with you. I don't think the article is "full of excuses." The details are essential to help non-developers understand why things take longer than expected; giving them a list of hours does not help them understand what goes on.


Agreed. This post is one long excuse.

If I was the client and you sent me this blog post as an answer to "why is it taking too long?", I'd fire you.

Because you spent more time writing the post than it'd take to update the code.


No client would ever read this, and much of the language in it wouldn't make sense to the kind of client that most needs the explanation.

You're better off avoiding getting to this point in the first place. Maintain a good relationship with cooperative clients and they won't (usually) complain, because they value your work. You should fire uncooperative clients and let them be someone else's headache, when possible.

Adding a little more detail to line items in invoices helps a lot too. "Fix bug report #493" should usually be, "Investigate report of incorrect discount calculation (#493), modify 1 file, review code, deploy to test, test for regressions (all tests passed), deploy to production."

It seems dumb and repetitive to us, but one of those descriptions looks like 4 hours' work to the person approving the invoice, and the other doesn't.


I worked in software development within companies for 20+ years. The "why does it take so long" conversation has come up a lot.

So on one hand, I see the argument. Simply opening unknown code and making a change, no matter how small, is a risky game. You need to research the impacts, test, and walk slowly through a deployment you haven't done in ages. I totally get that.

But I also see the other side. Why DOES it take 8 hours to do a simple one-line code change? That's ridiculous. Somehow, we've developed these fragile systems. We've trapped ourselves in processes that add 7 hours to any change we make, no matter how small.

The status quo does not need to remain. It should be easier to make small changes. It should be cheaper to respond to simple requests. The client is actually right to question us when they just want an email to appear a day earlier and it costs them $1500.


I'm not sure what companies you've worked with, but the answers to your questions are very easy and come with experience. There's a universe of testable things. You'll never have 100% coverage (this is a software development law). Even if you were to do everything right, and build incredible automated deployment and testing systems, at best you'll end up reducing but never entirely eliminating fragility. To even get to that level of perfection would require a tremendous amount of overhead and time that most enterprises don't want to invest in the systems they build.

It's akin to trying to constantly remodel/add additions to a building. You may decide that you want new floors, but when you tear up the carpet you realize there's tons of water damage that was being covered.


Very much agree -- this is a good chance to step back and ask how software practices can improve. It feels like we are a long way from optimal.


Our needs are complex, and constantly changing. If our needs weren't complex, it would be easier to make a change.


>But I also see the other side. Why DOES it take 8 hours to do a simple one-line code change? That's ridiculous.

You might be right, but if you're not in a position to help make it take less time, that's not a helpful discussion to have. The reality is that it takes that long, and complaining doesn't make it faster.

The procedures that take time exist for a reason, and I'm sure in every organization there are opportunities to make the process faster. But when somebody who wants a change says "why does it take that long", they aren't looking for helpful ways to improve the process; they're looking for an exception to the process so their specific change gets out faster.


What's crucial here is that someone previously set the expectation of a "one-line change".

Maybe a savvy client, maybe a happy dev. Most likely this is not the first time this situation has happened.


It seems like most of the comments here either didn't read the article or missed its point. Yes, this is a simple change. Yes, pushing a fix out in a day is actually pretty fast. But the author is using a simple example to illustrate a point - better than using a complex example that takes 5 paragraphs to explain. Moreover, you can easily extrapolate all the parameters in the article to complex examples.

The primary point of the article is that there are very common inefficiencies in our industry that, if we tackled responsibly, would greatly reduce the turn around time of producing changes to the code base.

The point I took away most is how much having a single point of change for a single concept, and periodically cleaning the code base to keep it that way, can dramatically increase productivity.


I read the whole thing, and if the author was trying to use this as a simple example to illustrate a more general point, they did not communicate that effectively.


I thought the following line: "Below we have a letter that we have written variations of numerous times over the years. " made it clear that the letter is not real, and is therefore a semi-fictional example.


I agree, I thought it both overly long-winded and unclear. If I were the guy he was addressing I'd suspect he was flannelling me.


There were a bunch of corollary questions about user scenarios that came up while I was trying to read it. I think it was a poor way to write it out - going through an actual real-world issue, touching on the problems you foresaw and those you missed, and the costs they all totaled to, would be far more interesting.


When the customer demands a written explanation of exactly why a small change took 4 hours, an engineer has to investigate that completely and write up an analysis in language and sufficient detail for the customer to understand. The given letter would take me a couple hours to revise and get the wording just right and diplomatic. Since the customer is challenging the billing and there is the subtext of fraud, the letter needs to be approved by management and reviewed by legal. The engineer and others are also taken off other tasks to work on this letter project. Providing a legally sound and technically accurate letter of this nature likely costs around $1000 to provide, plus lost opportunity cost.


Exactly! When requests like this start popping up, it signals some ongoing distrust or latent disagreement between the devs and the client.

If I were that client, a response of that kind would rather infuriate me, as muddying the "clear" picture I see.

I believe, in this case, there's a bigger issue to address - the issue of trust and responsibilities. If the client's expectations about how things should be done overpower their understanding of what the devs do, then the project/contract assessments probably missed the target audience. To realign such expectations, the team needs more than a bark-back email.


I do not think that is a good email for discussing a single, small feature. No client is going to want to read it. It's too long and detailed (1855 words!) and will probably only _increase_ their frustration.

It is a good idea to discuss with stakeholders, at a high level, why things _seem_ to take long, but I find that works best as a conversation. No one wants a tome like this in their inbox.


To speak popular manager jargon.

There are some 2D conversations that are necessary. Dates, times, places. That sort of thing.

This is solidly 3D communication. It absolutely needs to be face to face. Otherwise the person receiving that e-mail will (a) not read it, and (b) take it as a passive-aggressive way to get out of work.

Oh, and (c) see the length and completely unnecessary detail of an email that should have been a meeting as one more reason to accuse you of wasting time.


I really doubt this is an actual email sent to a client. Obviously it’s too long.


I'm not a dev, but this feels like arguing with a strawman. There are really customers who complain that a change to their codebase took a single day to implement? I would be utterly thrilled if I could get a vendor to turn around anything that quickly


> I'm not a dev, but this feels like arguing with a strawman.

It's not. I make the same complaint about developers I manage and it's so prevalent across the industry, I'm surprised you think it could be fiction. Of course, in large companies with a glut of process, making a change on a production system without testing is basically unthinkable. The reality is that most businesses have nothing more than production to work with.

People STOP saying things like this, when they are berated and/or overwhelmed and/or discouraged by details, as outlined in this letter...enough times.

Interesting fact: I worked on DrLaura.com's forum website in the late 90's, when the naked college photos of her came out. They were constantly deleting posts and threads regarding the photos. Then they ROUTINELY deleted entire forums. I put 7 (!) popup warnings in front of this action, and the owners kept complaining that the moderators were still doing it "accidentally" too often.

Never underestimate the tenacity and willful blindness of customers.


Interesting story. The 7 warning dialogs are funny. I guess they were under so much stress to delete the nude pictures that their minds automatically filtered out any obstacle.


I've worked in a web agency that billed by the hour.

Clients would ask for justification on why something would take 5h of support instead of 2.5h because the previous similar request was done in half the time during the development of the app.

What they don't understand[1] is that if they ask for an urgent fix after deployment, they're going to pay for a developer who had free time at that very moment to pull and install the project. That dev then needs to read and understand the codebase, make the fix, test it, make a pull request, ask for someone else's billable time to approve the pull request, merge any conflicts, schedule a release, push the release, test again, and document the new feature. And that's if the request is in fact as simple as it looks.

Often, requests look very similar but are in fact very different. "Why does it take 3 hours to add a button to the sidebar when it takes 1 hour to do it in the content of the FAQ page?" Well, one is done by the content team and requires no code or release; the other needs to be done in the code. Etc.

[1] They are told in very simple words that this is how it works.


> Often, requests look very similar but are very different to each other. "Why does it takes 3 hours to add a button on the sidebar while it takes 1 hour to do it in the content of the FAQ page?"

There is a good XKCD on this topic: https://xkcd.com/1425/


It's not a strawman. It tends to be an issue if you work somewhere where the core business is not software, and client managers are less used to dealing with software or SaaS products and make commitments based on what they are more used to.

If the bulk of the businesses that clients deal with are happy to make ad-hoc changes and updates to delivered materials on a per-hour basis, they can often turn those around in a few hours. Then they ask for the same from a computer system and get a shock when they are quoted 10 times the price for a change to a "cheaper" thing. They're generally paying more for the manual work, yet they see the computerised system as the "cheap option".


A day would be incredible. We have a show-stopping issue with a vendor's software. Several reports we have to run daily take 6-12 minutes each, when they should take seconds at most.

I've been communicating with their lead reports developer. The fix is done, tests are written, and QA has given it the rubber stamp, but he's simply not allowed to push it to our instance until their next code release on the 22nd. It makes no sense, and it's certainly not how I run our side of things.

In the meantime, my users will lose 6-12 hours of productivity this month (15 days, 4 reports, 6-12 minutes each.)


I am guessing they want to pay $50 for 10 minutes of work instead of $2400 for 8 hours of work.


You're most likely correct, and it's a thing that has bugged me on and off ever since I started realizing that some of us programmers are not very well suited to linear working (for lack of a better word), the way, for example, accountants or lawyers are. Sometimes you spend an entire day trying to fix an issue/bug with no luck, until the following day, when all of a sudden a brilliant idea comes to you and you solve everything in 15 minutes. Also, should we bill our clients for the time we spend in the shower? Because for me that's when most of the "eureka!" moments happen (that, and when washing dishes by hand).

I don't know of any solution for this issue; I just wanted to point out that this perspective tends to be ignored when discussing how to pay for programming work. Also, programming is hard - not so much the languages or the frameworks themselves, but the human relations that any software program ultimately ends up resembling.


Sometimes it's not a brilliant idea but actually something you just kept overlooking....


This simple change and the recommended fix have another issue that the post missed or dismissed as a non-issue. What if the deadline is pushed back after the email reminder is sent? Do some users rely entirely on the email reminders, and might they need a follow-up email saying "Just kidding, actually you've got more time"? Assuming you resolve that question: when the due date comes up again, should you send another email? What should happen (with these daily emails) if a task is pushed back an hour without being pushed out of the 24-hour bound? What should happen if it is pushed out of the 24-hour bound, but just by a little bit? Should we encode some sort of grace period for ignoring the change? If a reminder is scheduled and due to go out immediately, should we allow a grace period for canceling it?

The technical questions behind why a change takes a long time are legion, but so are the UX questions that all need to be accounted for. In software there is no intelligent actor who can solve the edge cases you missed as they come up; instead you're building a system that must handle all of those cases (maybe by occasionally crashing, granted), so user stories need to be vetted and resolved.


The post covered that, though not the specific details you mention.


That's why I mentioned it: actually taking the time to see what a change can affect itself takes time, and if changes are pushed through in an hour, then no one is stopping to think about which systems you might be breaking.


Too much protest here. I feel you'd eventually be judged on those numbers as well. Are you a time & materials shop? The fact is that either you start a clock when you begin working on something and stop it when it's done, or you have standard times that things should take (like an auto shop does). If you're working efficiently, you can tell your client that you work efficiently and this is simply how long the change took. If you're procrastinating and charging the client for the time, you should fix that.


I don't think four hours is unreasonable for any type of shop (with 4-8 being CYA estimating). One or more resources likely have to do most of the steps below.

* Document the customer request

* Investigate the customer request

* Create and document the requirement

* Assign the task

* Switch current context

* Deploy a dev environment

* Make the change

* Submit the change to QC

* Test the change

* Deploy the change

* Clean up dev environment

* Document the change

* Bill the customer

Ideally, you would encourage the customer to bundle multiple requests, so that most of the tasks could be shared among them. However, if they insist that they need it now, then they need to pay accordingly.

This story seems like it could be solved with improved up-front communication.


Totally agree it's a reasonable time. I think that it's a waste of time to argue why it takes time to do your work. You'd only write that letter because you feel bad about charging your client for your time.

A better letter if you wanted to write one would be about the benefits you provide and the completeness of your work. I wouldn't even mention the time it takes.


If you're charging T&M money, and the client comes back with "why does it cost that much?", what would you suggest? Assuming burning the bridge isn't an option.


Itemize the bill: break it down relentlessly into tiny components, show how long the different facets of the feature build took, and ideally pin some of the time on them where appropriate ("Remember when you guys were a bit wobbly on the background color? Well, we had quite a few meetings, and dev was tied up while we were talking; it ended up taking us about three hours between those two meetings, and then an additional thirty minutes to redo the change three times - thankfully we held off on testing until we'd all gotten on the same page.")

The more explanation you get out there, the better; if you're honest (or a good faker), people will be reassured by your explanations. But accept that not all clients are profitable ventures: a customer who can't pay for the time it'll take isn't one you should take on. Just try to leave things in an amiable state.


> Totally agree it's a reasonable time. I think that it's a waste of time to argue why it takes time to do your work. You'd only write that letter because you feel bad about charging your client for your time.

> A better letter if you wanted to write one would be about the benefits you provide and the completeness of your work. I wouldn't even mention the time it takes.

I understand your response, but the (grand?) parent seems to be indicating you don't do that, and just explain that you do good work.


That change may not be worth the cost, or even just the opportunity cost, of doing it.

Someone presumably thought it was a good idea to make the change and assumed it would take only a few minutes of work. I think the scenario is that they are given an estimate of up to a day and can't understand how a developer could spend that much time on it.

For me it's a little too close to the memories of an old job. I tried in vain to argue with the owner of the company that having a 15-minute estimate category was pointless, because the change itself would be drowned out by the fluff (creating branches, setting up data for a manual test, creating pull requests, etc).


My favorite idea here is that it's a developer's job to articulate the issues in a client's code base to the client themselves. Some clients just want the problem to go away, but smart ones recognize the importance of a high-level understanding of the state of their code base.

If you work for a cheapskate who's just nickel-and-diming about billable hours, the "Why does this take so long" question is nothing more than the rhetorical whining of a bad client. But in a healthy client-developer relationship, the question is important. It's a less-articulated, unfamiliar version of the following, which any good developer would find agreeable:

"I, as someone invested in the success of my company, am seeking insight into a part of that success that only you have. I trust that you're not playing Minesweeper on our dime, and yet our current process isn't delivering the returns we expect. As someone who knows better than I do, are my expectations unreasonable and in need of adjustment, or are there investments that can be made to get us there?


I believe I understand what the author of this post is trying to accomplish, but for virtually every "client" I have ever worked for, their eyes would have glazed over by the time they'd read 25% of this response (if they could even get that far).

It is worth it to try to help your client understand why seemingly simple changes take a long time to implement. I'd just be surprised if the written response in this post would be of much help to most clients.

The face-to-face has a much better shot at being helpful, but, of course, that depends a great deal on the client also.


Part 4: learn to bill by the day and never again have to worry about justifying how long a simple-sounding code change took to make.


So would you bundle up a series of these into a day? Or just assume the client is cool with this taking a day?

I believe you're also assuming you're servicing a client who's got a lot of budget to blow on a lot of small changes.


Servicing a client with a very tight budget is a colossal pain. It might be inevitable early in your career but try to get past it as soon as you can. If your client has more time than money they will waste a lot of your time quibbling over price.

The best clients have more money than time and will pay you well to make their problems disappear. They pay you to think about the software for them, so they can spend their valuable time thinking about the business instead.


Yeah, I need to get comfortable enough to become a contractor - at the moment, as an employee, you go wherever they tell you.


Great comment!


"Of course, you can’t just deploy a change to production without, at least, running it locally or on a test server to make sure the code executes correctly"

Dear client, of course we can. Just be prepared for a commit history like:

    "That simple fix" (6h ago)
    "Fix the fix" (3h ago)
    "Really fix" (1.5h ago)
    "Really really fix the fix" (10min ago)

If all goes well enough, we won't have to craft a "Customer data fix" SQL commit the next day, just because someone forgot about that totally hidden stored procedure that wasn't included in the "Really fix". That one takes another 6 hours to prepare on the next business day, and what could have been a 3-4 hour invoice for a proper fix of the "one liner" turns into a bill for 12 hours and 2 days of lost production time.

Yours, humble developer.


To me the main concern is that this shouldn't even be a code change: this stuff should be a matter of configuration. In a well-written system the dev time for it should be zero.

But in general, the whole letter feels wrong: either the complications of such a trivial change stem entirely from legacy code that isn't the responsibility of the current maintainer, in which case the client knows well how hard and expensive even simple changes are (and this should have been made clear when the project started); or the maintainer is asking the client to pay for work (refactorings, tests, or a different, more general implementation of the feature) that wasn't agreed on beforehand.
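
A minimal sketch of what I mean (the file name and key are hypothetical): if the reminder offset lives in configuration, moving the email a day earlier is a data edit, not a code change.

    import json
    from datetime import date, timedelta

    # config.json (hypothetical) might contain: {"reminder_days_before_due": 2}
    with open("config.json") as f:
        config = json.load(f)

    def reminder_send_date(due: date) -> date:
        # Fall back to one day before the due date if the key is unset.
        return due - timedelta(days=config.get("reminder_days_before_due", 1))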


I've actually come around to the opposite conclusion over the course of my career: everything should be hard-coded by default (tho using a well-named variable/constant/identifier). Making code data-driven (e.g. driven by configuration data) is always more work and, unless you have strong evidence otherwise, [You Aren't Gonna Need It](https://martinfowler.com/bliki/Yagni.html).
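
For contrast, the hard-coded version of that same hypothetical reminder logic is shorter, greppable, and still trivial to change the day someone actually asks:

    from datetime import date, timedelta

    # One well-named constant instead of a config file, a parser, a default,
    # and documentation for an option nobody has requested yet.
    REMINDER_DAYS_BEFORE_DUE = 1

    def reminder_send_date(due: date) -> date:
        return due - timedelta(days=REMINDER_DAYS_BEFORE_DUE)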


> Changing the tasks due email to be delivered a day earlier should be a one-line change. How could that take 4-8 hours?

This is what's in dispute? Something else in the relationship is wrong.


On the flip side, when a change doesn't take long to implement because of a successful campaign waged against technical debt, it's important to communicate that to the stakeholder too, even though they won't ask. I implemented such a change on Friday afternoon and made sure to talk about why it went well at sprint review and at sprint retro.


Our internal model of the world is always simpler than the world itself (otherwise it wouldn't be a model).

This whole article is just saying "Dear client, here's why your model is insufficient".

Which is a much nicer email to write than "Dear client, here's why my model was insufficient". Which is what happens when the estimates go south.


I'd almost argue that, in the example, the client's model wasn't insufficient. If you want to include "implement automated testing" in a small feature request without telling your client, then your client is probably right to be surprised!


Perhaps I misread it, but I think the only testing being included was for the specific change. The bit at the end was "if we had the time to do a lot of general refactoring and testing not tied to the particular feature being done right now it would save time in the future".


Software development is magic to most people. You have the same laptop as them, and somehow you make stuff that works and makes them money by flexing your fingers.

They don't know how to internalize that - they are dealing with a magician that turns dirt into gold and gets paid by the hour.

It's totally irrational. If you were a plumber then there would be a physical manifestation of the work.

Software managers who have never written code are also often suspect, in my opinion. They can be professionals of magic management without understanding, liking, or even appreciating their field.


I'm a non-technical founder who generally doesn't bug my team about why a change took so long, but that's because they communicate with me.

Thing is, working with a developer as a non-technical team member can be a frustrating, opaque experience. Communicating progress is eye-opening for non-technical colleagues, but when a programmer does not communicate, the non-technical members obviously have no idea what's going on.

Developers can forget, too, that one small change might be holding up marketing, sales, and customer support, all of whom are themselves getting flak from above about why customer X is still angry, or why the press release hasn't been sent out yet, etc. "Waiting on a dev" isn't an answer that reflects well on anyone.

The "Dear Client" letter wouldn't be necessary if there was more communication. It can even be automated. Here's what I see in a Slack channel with my colleagues:

    github APP [8:52 PM]
    New branch "fix-password-recovery" was pushed by xxxx
    [yyyyyy] Pull request submitted by xxxx
    #466 Improve password recovery
      • Fix styling
      • Ensure the visitor is signed out of all sessions
      • Redirect to sign in instead of 404 when an old recovery link is visited

    semaphore APP [9:05 PM]
    xxxx's build passed — d61157d Improve password recovery on fix-password-recovery

I never need to doubt xxxx when I can see the myriad small tasks, failed builds, the commits etc.


> Thing is, working with a developer as a non-technical team member can be a frustrating, opaque experience.

And vice-versa. Imagine knowing something very well, something quite complicated. Then not only knowing how to fix the issue, but having to explain it to a child. Now imagine constantly having to handhold that child through every step, even when they don't need to know, and being slowed down by having to do it. And maybe not even knowing the solution yet, but trying different things, and having to explain each attempt to that child.

What you have set up sounds awful. Learn the technical side, or let them get on with the job. The updates you need should come from the daily standup.

> then obviously the non-technical members have no idea what's going on.

You still don't know what is going on, you just pretend you do.

> Developers can forget too, that the one small change might be holding up marketing, sales and customer support

Then make this clear during standup.


> we always include time for writing automated tests in our estimates. It can slow down initial development, but it greatly improves the efficiency of operating and maintaining a software system. It isn’t until a system grows that you truly start to feel the pain of not having tests, and by that point it can be a monumental task to work tests back into the system

Not only this, but also: I've recently come to understand that allocating enough time for writing unit tests and integration tests tends to uncover design issues. If a test is taking longer to write than it seems like it should, or if a particular method or bit of functionality seems harder to test than it was to write, that is a signal that you may have a design issue!

Writing the automated tests will help you find out when you have written some code that stinks, if you know what to look for and have the time to step back and think about it. It's called "Red/Green/Refactor" for a reason; it's not "Red/Green/Red/Green/Red/Green". If your estimates only allocate enough time to write the feature and nominally prove that it works, you're missing a critical part of the TDD pattern and you're not getting nearly as much long-term value out of your tests as you could.

If you don't write automated tests at all, it's even worse, because those design issues will only show themselves when it's time to make a change, and your bad design is now standing in the way, preventing you from iterating quickly when you really need it.
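
A contrived illustration (names are hypothetical): when the scheduling rule is a pure function, the test is a couple of asserts. If exercising it had required standing up a mail server and a database, that friction would itself be the design signal.

    from datetime import date, timedelta

    # The decision, separated from email and database I/O, so a test can
    # pin the date arithmetic down directly.
    def is_due_soon(task_due: date, today: date, days_before: int = 1) -> bool:
        return task_due == today + timedelta(days=days_before)

    def test_is_due_soon():
        assert is_due_soon(date(2019, 6, 2), today=date(2019, 6, 1))
        assert not is_due_soon(date(2019, 6, 3), today=date(2019, 6, 1))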


> but it could provide a benefit that greatly reduces the effort to make a similar change in the future.

Even worse, if we don't invest that time, the effort to make a similar change in the future will keep increasing. We need to invest that time just to keep it from getting worse.
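
As a rough illustration of the compounding: if skipping that investment makes each later change 10% slower, the twentieth change takes about six times as long as the first (1.1^19 ≈ 6.1).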


I work on a project with tens of devs that has continuously delivered applications for almost a decade; I joined the team 3 yrs ago. At some peaks, there were almost a hundred people. We still continuously deliver new features to the project every day.

We bill by the hour, as a lot of people suggest in this thread, which works pretty well. We also have flexible iteration plans; the clients can prioritize whatever feature is important to them. If a feature isn't worth it, it will likely stay in the backlog forever.

Most things are really smooth IMO, though explaining why technical work is costly/mandatory is really hard. Because both of us want the project to stay evergreen, we need technical advancement and architectural extensibility. It would be hard to convince even myself if I were a non-technical person. Why do you need to split into services? Why aren't things immediately synchronized after the split? Why do you need a job here, and what's a job, exactly?

At the end of the day, when you are convincing people about things they have no idea about, you really need to be trusted -- just as you trust a doctor but are suspicious of a witch doctor. To achieve this, it seems better to first be business-focused and solve problems, building trust and reputation as a kind of credit.


Software programming and maintenance is mainly a mind game. The change requestor won't see you take out a bunch of tools to get ready, and they won't see you make the changes, so it's hard for them to see why changes take the time they take. Also, the time it takes to modify code is relative to the codebase, your understanding of it, and the changes needed. I've had situations where I've made changes in minutes, but other changes have taken days. There's no way to standardize code changes. Other professions can set a time on changes because there's plenty of past data to reference.

So given this, it is reasonable for requestors to feel like a change should be immediate because it seems simple relative to the way they themselves make changes when they need to. They are trying to use their past experience to come up with the time it should take to make a small change. So many things seem easy when you don't have the right experience.

The only way to counter this is to spell out very specifically why it will take a day versus whatever time they think it should take. Even then it's a hard sell, since the requestor will feel that a lot of what you are doing is a waste of time, which in their mind equals a waste of money, since time is money in business. This is how things have been, are, and will be.

I think part of a programmer's training should include how to deal with customers, since we all have had to deal with this situation and will continue to do so. It's part of being a programmer for hire.


> I know that investments like this can be hard to make, precisely because there aren’t any new visible rewards

Taken individually yes, there's no new 'visible' reward.

Looking at the larger picture, say, 3 months from now: changes that used to take 3 days now take 3 hours, and there are fewer outages, less downtime, and fewer (or no) hair-on-fire-we-lost-customers issues.

Establishing both short-term metrics (request turnaround time) and long-term metrics (uptime, data loss, security breaches, customer satisfaction, team satisfaction, employee retention, etc.) will help justify the effort.


That's an excellent set of explanations. One way to forestall most of them is the magic phrase "change order".

Anytime you get into a conversation where the customer starts off with "Can't you just...", it's your responsibility to let them know a change order needs to be filed so you can estimate the new impacts it will have, including cost.

Some of the excellent techniques in OP's article will still be employed, but "change order" terminology and workflow minimize these conversations after you've stepped the customer through the process once or twice.

