Most of these look like MBA problems.
That said, maybe more experienced coders should be doing MBAs (which aren't just about attending classes but about networking, acquiring a wider, shallower knowledge base, etc.) so the dynamics of tech debt are clear to the people in charge of making debt decisions.
Many times I've seen long time estimates for something because the C level was requesting "feature X". But when you sit down with the C level and understand what they want to achieve with feature X, it's often possible to work with tech and figure out a way to get what was wanted without the feature, generally by adapting some other request or feature.
The problem is that most managers are looking to suck up to the C level guys and 'deliver'. And many C level guys won't stand the insubordination of someone telling them 'maybe that thing you guys thought of isn't the best way to achieve what you want', no matter how much you work on the techniques from How to Win Friends and Influence People. Some guys are just dictators, and some are suck-ups, and when the two meet, as they often do, it means tech will do 10x the work needed to get where the company needs to be, because the focus will be on what someone requested, not on what we are trying to achieve.
Also, many developers get stuck in the mindset of places like that, where they assume they have no voice and everything they say will be used against them. So you ask them as a manager how they would achieve X and they just say "tell me what you want in the spec sheet"... people get weirdly conditioned in negative reinforcement environments.
In the best functioning development organizations I have experienced or seen, nobody should be asking for "X feature", they should be explaining what their actual needs are, what they need to get out of the software and why. And how important it is to them, or even what their "budget" for it is.
And a technical designer (who could be a developer who has some design skills perhaps by experience, or a designer who understands the tech) _designs a solution_.
This requires someone(s) on the "development" side who can do needs/requirements analysis leading to design (i.e., "UX"). And it requires the stakeholders to trust that the capacity is there, and that it will turn out all right if they don't try to micromanage the feature -- not just "all right", it'll be SO MUCH BETTER.
This works so well when the organization staffs itself to make it possible and the people in charge allow it to happen, that it seems kind of insane that so few organizations do so.
They solicit needs from customers, support conversations, etc. and translate those into future development. They combine those needs into major product directions and balance them with an internal compass for where the company wants the product to be (i.e., what jobs they want to solve for which users).
They are technical people with an eye for how something should be built, working closely with designers (visual, UX, etc.)
Excellent companies figure out how to deploy these types of thinkers -- with appropriate mandate -- throughout the organization.
I once worked with a company that had that attitude toward accounting/compliance - so much more work and all because the primary holders were not in agreement, so they kicked the corporate structure can down the road.
Same goes for CS, etc.
I guess the only solution is a holistic approach, but even then, sometimes some area will have to bear the brunt of being the one that 'will handle it'; at some point a CEO will decide that is what is needed, and it will be needed, even in a perfect organization.
If they are supposed to be technical, then we hired wrongly.
I can imagine a non-technical person could end up in this role and succeed, but someone with a technical background and some business acumen will have a much higher probability of success.
This is one reason why tech firms win. Their senior management are made of [former] engineers who are much more aligned with the workers around what implementations make sense.
The skill includes working with the stakeholders to develop actionable needs (not just "make our business more efficient"; the obvious place to go from there is: okay, what do we know about where the greatest bottlenecks/inefficiencies are now, and if we don't know enough, what can we do to learn?).
Also you don't just take needs and come back with a finished feature, you iterate with feedback from the stakeholders. The first iteration might just be a textual description of the feature, to make sure it makes sense to the manager/stakeholder.
It's a skill, and it involves building trust.
But translating needs into features that will successfully meet those needs is not a skill most managers/stakeholders have either, which is exactly why it doesn't work to just take detailed, micro-managed implementation directions from managers/stakeholders. (Or customers!)
Why do we, as an industry, expect to be able to run a project to schedule when we often give it the most cursory of glances in the planning stage? I work for a very large integrator and we don’t plan at all. We jump in and start while we’re ‘planning’. It’s absurd.
He and I have long said that if the people who built [hospitals|bridges|aeroplanes] behaved the way we did, the world would look very different . Now, thanks to my recent PMP-by-proxy, I’m starting to understand.
PMP: Project Management Professional. One of the main accreditations for project managers; I think PRINCE2 is the other one.
I’ve seen the project schedule for a railway line extension. “You really plan a concrete truck coming three weeks from now down to the half-hour?”, I asked my friend. She does.
Yuck. I think the agile approach might be better here. It's better to expose your plan to reality ASAP and adapt along the way.
Except, that's not how agile really works. How agile works is "bosses" have an hour planning meeting, and then act as though they have a veeeeery robust plan (like you talk about) and get upset if things don't go according to their quick one-hour plan. So much for adapting along the way.
And anyone wonders why we're in this situation? We dug this hole, kids. (And to be clear, I'm guilty. I'm not trying to be better than anyone. I am not! I'm just attempting to explain it.)
With agile, building houses still somewhat works because people will do that planning in their heads. So, knowledge will end up _only_ in people’s heads, not even in everyone’s heads, and will deteriorate there, even if those people do not leave the project.
So, by the time it’s time to do a large-scale update to the house (add a room, update ventilation to modern standards), nobody knows where the ventilation ducts run, why one of them is so much larger, that there’s asbestos in the ceiling, etc.
And of course, some people try to use agile not for building houses, but for building apartment blocks.
It was a way to bring some sense of order to what was chaos: new requests coming in all the time, work being dropped partway through to deal with the new requests, little progress on any front.
I think agile works really well in this kind of context by bringing in a sense of discipline and keeping new work at bay until the beginning of the next cycle. It can also help to keep teams delivery focussed in other contexts but, key point, does not obviate the need for additional planning outside the sprint/iteration framework.
The problem really occurs when people treat agile as sufficient on its own, or as the one project management tool to rule them all. Total nightmare, especially with multiple teams involved.
 How workable this is in practice is open to question: I can tell you it's not always a great approach in an environment where client projects typically last a few days and on time delivery has been known to depend on a critical bug fix for a legacy codebase. Especially problematic when requests come in either during the launch phase, because there's usually a fairly hard window, or during the reporting phase, because there's a delivery deadline looming.
You ain’t wrong about how most businesses implement “agile,” though. :)
Learning and applying "how to plan a project" has pretty universal career relevance. Budget 30-40 hours of study time and around $600 to buy the book, take the exam, etc.
Other times, it is a problem with the existing developers. Choosing not to learn/follow industry best practices. Creating an over-engineered solution instead of a simple one. Skimping on automated testing and spending more time on manual testing instead. Making their colleagues wait an inordinate amount of time for a simple question, code-review or approval. There are many things developers can do that would slow down their team's productivity.
I don't think tribal finger-pointing is very productive. It really does vary case-by-case.
Reasons for the delay:
1) Proxy is a stand-alone application. Needs its own deployment and build configuration.
2) Main app was running on Node 6.x. High time to upgrade to 10.x, as 6.x LTS is running out.
3) Upgrading to 10.x breaks some modules we depend on (Google Cloud Datastore), and that module is deprecated on Node 8.x+... time to refactor that.
4) Main app is in a mono-repo with backend systems still running Node 6.x. Don't want to spend the time upgrading everything to Node 10, so the main app needs to be split into its own repo, decoupling it from the existing mono-repo codebase.
5) Since we are upgrading the main app to Node 10, best to remove dependencies on old DefinitelyTyped TypeScript typings (upgrade to npm @types definitions), and as such refactor to use the latest versions of all modules.
6) Security: the new default proxy should only allow the main app; code a solution for this using keypairs.
7) Docs, sample code for users, announcement mail.
Thankfully I'm currently on step 7. Usually I expect to be off on my estimates by a factor of 4x. This is excessive :)
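For the curious, step 3 is roughly this shape of change; a minimal sketch, with the project id and entity kind made up:

```typescript
// Hypothetical before/after for step 3: off the deprecated all-in-one client,
// onto the standalone Datastore package that runs happily on Node 10.
//
// Old (deprecated bundle):
//   const gcloud = require('google-cloud');
//   const datastore = gcloud.datastore({ projectId: 'my-project' });

// New (standalone client):
import { Datastore } from '@google-cloud/datastore';

const datastore = new Datastore({ projectId: 'my-project' }); // illustrative project id

async function saveTask(name: string, done: boolean): Promise<void> {
  const key = datastore.key(['Task', name]); // 'Task' is a made-up entity kind
  await datastore.save({ key, data: { done } });
}
```

The call sites barely change; the time sink is chasing every import and retesting around it.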
It does make me wonder though if there is a trend of 'hiding' necessary maintenance work in larger tasks - if so that seems to me to be indicative of a larger problem around not being given/making the time to do those jobs as their own tasks.
Or to look at it another way, if no stories are maintenance stories, then all stories are maintenance stories.
But yeah, if it needs to be done, do it.
At least in my case, as a public facing SaaS I don't think running Node 6.x past its LTS window is a valid choice, so better to bite the bullet now. If I went about doing the minimum work, it would just add more code/complexity that needs to be refactored when upgrading Node.
And that seems 100% true.
You do not have to do everything at one time. You can easily do one thing, and then right after that, literally right after, do another.
You could have gotten the proxy up and going, and then upgraded node.
It's not about "doing the minimum work" it's about understanding the problem space, what can be split up, and how to attack the surface area efficiently.
You're also introducing other problems that can occur by doing multiple things at the same time.
Generally speaking, I push my developers to be really good at separating problem spaces and attacking them accordingly. It's really hard to take on multiple issues at one time and actually have high confidence that you're not breaking many systems along the way.
On one project there were CERT advisories out, and we had deferred upgrades due to breaking changes in those upgrades. All of a sudden we had to deal with the upgrade and a security issue at the same time. It was ugly. After the second time, we started putting at least one upgrade story per month on the work queue. Sometimes we let the engineer pick what they wanted to upgrade (just upgrade something!). Other times we picked the engineer for the work that needed to be done.
That's nothing to do with adding a new feature, and should be planned accordingly. It's unfortunately very common for developers to hide maintenance tasks in other work, but clearly not ideal.
I update Java dependencies maybe once a year; it's usually painless, and incompatibilities are highlighted by the type system. If you let your JS dependencies get stale by a few months, you're looking at hours or days of work. And that doesn't count the massive shifts in build systems and frameworks that seem to happen annually.
Maybe in the last year they've settled down, maybe not. I learned my lesson and don't take dependencies on them anymore. Much easier.
Take whatever time I think it should take. Then double that number and go up one time unit.
E.g.: I think it should take me 3 hours this afternoon to do this -> 6 days.
I think it should take me 2 days to do this -> 4 weeks.
I think it should take me 1 week to do this -> 2 months.
I think this should take us one month to accomplish -> 2 years.
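As code, for fun (a sketch of the exact rule above):

```typescript
// Double the number and bump the unit up one level.
const nextUnit = { hours: 'days', days: 'weeks', weeks: 'months', months: 'years' } as const;

function padEstimate(amount: number, unit: keyof typeof nextUnit): string {
  return `${amount * 2} ${nextUnit[unit]}`;
}

console.log(padEstimate(3, 'hours'));  // "6 days"
console.log(padEstimate(2, 'days'));   // "4 weeks"
console.log(padEstimate(1, 'months')); // "2 years"
```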
Would you rather deliver a product with finished surfaces or cut corners? Proper design and good overall test coverage, or zero budget for maintenance?
It also helps to understand that management will ultimately decide that we need to do the project in half the time, and with these extra features you didn't ever plan on building.
My budgeting strategy is to always assume that for whatever scope you have planned, 50% of the complexity that you will need to deliver in order to reach the finish line, remains unknown at any given stage of planning. Compensate upward if there still remain known unknowns.
My approach to the same project would be different if I know that I get to walk away when the project is over, versus if I know I'll be the one to maintain the project when this phase is over and it goes into production. I work for internal clients lately, and I can't honestly say it's my way or the highway, but if you ask for an estimate, I'm going to give you an estimate.
If you tell me the deadline then we're having a different conversation entirely. (And that's OK! Deadlines are better than estimates in my book, especially if the consequence of missing the deadline is some unfinished garbage goes to production, or doesn't ship at all as we had to move on to the next thing. If you asked me for an estimate and told me "that's too much, do it in less" then you didn't really want an estimate, did you? Just tell me the deadline and I'll deal with it, in that case, let's not play games.)
Here's another approach: tell me how much time you want to spend on this project, then I'll put a pen to paper and tell you what part of the scope we can deliver given the resources you've allocated. Don't like that very much? OK, you write the plan and I'll say "yes sir." But that's not why you pay me, is it?
Estimates are a function of scope, time, and cost. If you tell me to do it in less time, I'll do it in less time, but it's going to come at the expense of one of either feature scope or quality. This is not a negotiating tactic, it's just a statement of fact. If you tell me I have to shave off a bunch of time without sacrificing anything, then we're not having an honest conversation and we're not going to have a good time. I'm sorry you had a bad experience, mate.
Meanwhile, I see non-software people putting their complete trust in people with less than half my experience, and getting burned.
If estimations are based on FTE then you can factor meetings, holidays, and even other projects that might suck up resources ("resources" in this case being your engineers' work time) into your delivery date. You can detect very early on if your deadlines are slipping (assuming your team are logging their hours against tickets or a timesheet) and you can also make informed judgements ahead of time about whether you need more resources (where that's an option / the work can be scaled across more engineers).
Obviously this is harder if you're tackling a significantly larger "green field" project - you might have to start making some educated guesses in those instances. But most of the time you should have some idea about the work involved.
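To make that concrete, here is a minimal sketch of the calculation; the field names and the overhead figure are illustrative, not from any particular framework:

```typescript
// FTE-based delivery estimate: effort from groomed tickets divided by the
// team's effective capacity, plus known absences.
interface Engineer {
  fteFraction: number; // 1.0 = fully allocated to this project, 0.5 = half, etc.
}

function estimateCalendarDays(
  effortDays: number,   // summed estimates from the tickets
  team: Engineer[],
  overheadFactor = 0.2, // meetings, support rota, other known time sinks
  leaveDays = 0         // booked holidays across the project window
): number {
  const effectiveFte =
    team.reduce((sum, e) => sum + e.fteFraction, 0) * (1 - overheadFactor);
  return Math.ceil(effortDays / effectiveFte) + leaveDays;
}

// 60 days of ticket work, two full-timers and one half-timer, 5 days of leave:
// ceil(60 / (2.5 * 0.8)) + 5 = 35 calendar days.
console.log(estimateCalendarDays(60, [{ fteFraction: 1 }, { fteFraction: 1 }, { fteFraction: 0.5 }], 0.2, 5));
```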
Now, you can try and do effort estimates using hypothetical people. But, at that point you have already given up the possibility of an accurate answer.
This is one of the many reasons why I disagree with giving projects to specific individuals. Not only do you end up with silos of knowledge (which is a risk should that engineer leave / get fired / die in a bus accident) but you also make it harder for yourself to make estimates, for the reasons you've described. Sure, there will always be variance from person to person, but you stand a greater chance of that averaging out if you discourage engineers from "owning" code bases.
> Further, interruptions don’t just cost time, they also reduce productivity around them.
You missed my point regarding meetings. I'm saying if you estimate a project based on FTEs (different methodologies and frameworks will have different terms but they usually amount to the same concept) then time spent in meetings becomes a modifier you can easily adjust for, rather than a hidden time sink that you can't account for.
> Now, you can try and do effort estimates using hypothetical people. But, at that point you have already given up the possibility of an accurate answer.
You don't need to make the estimate on hypothetical people. You just need a system of tracking and reporting the hours people spend in a working day. For example in JIRA you can log time against a ticket and you can create generic tickets for meetings. Therefore after a week / sprint / arbitrary point in time you can view where your engineers have spent their time and if it's below the allocated time for that project you can then either:
- enforce a new policy (eg are all the engineers going to every meeting when just one or two representatives would suffice? Are some meetings just duplicates or redundant? etc),
- inform project managers that there will be a deadline slippage due to resource constraints
- hire more resource to compensate
- or all of the above
I agree estimates are never going to be 100% accurate, but that doesn't mean there aren't better ways to estimate than just applying guesswork based on your current team's circumstances.
The entire dev team walks out, and yes, this actually happens surprisingly often and could happen to your team tomorrow. Now, what happens to all your estimates?
Well clearly anything short term is now worthless. You can build a new team and use older effort estimates as a guide, but with different skill sets and a massive learning curve they in no way translate into FTE hours.
Effort estimates can survive those kind of transitions. But, time outside of a functional team is not meaningful.
The estimate is still the same because, at risk of repeating myself, you estimate on FTE and not wild guesses at a project end date. Thus you then go back to your project managers following the steps I outlined in my previous post.
Granted in the most extreme of situations you would need to factor in some upskilling time - maybe even get the team to re-groom the tickets (if you're following the agile methodology) but your method of making wild guesses wouldn't put you in any better a position should your edge case example happen. So you're not exactly winning any arguments by raising this point.
When running projects, estimations are based on the required work, not the team itself. Teams can fluctuate (as you keep pointing out) whereas the work required should be closer to a constant. Thus any feature creep that happens - as often does in software projects - gets captured and costed beforehand, so budget holders aren't surprised by hidden escalations in costs. The delivery date is then derived from totalling up the required work.
If you have a hard end date for delivery then it's up to management (e.g. yourself as hiring manager, the company's board and the project's PM) to decide if you reduce the complexity of the project, do a staged release, hire more staff, or even argue whether the requested delivery date can be extended.
The above is how software projects should be run when they're managed properly. Other places might do things a little more adhoc but in my professional experience that almost always ends up being a worse way of managing a project, budgets and teams.
I understand your desire for an FTE to be a meaningful measure, but empirically that's not true. As I said several times, effort estimates can be used, but after deciding to add staff, progress slows down for a while. So, using effort estimates, you would reduce progress over the next 3 months before expecting a longer-term increase in the rate of accomplishment.
Brooks law applies to late projects, not new ones. If you have good estimates you can determine if you need resources early on.
A lot does depend on the size of the project. If it's something that will take 12 months or longer, then extra hires definitely wouldn't have that effect. Depending on the hire and the work required, you can shorten that time frame significantly too.
However I do agree that hiring isn't a silver bullet. This is why I suggested "hiring" as one of many outcomes that can be considered rather than the preferred outcome in all situations.
> I understand your desire for an FTE to be a meaningful measure, but empirically that's not true. As I said several times, effort estimates can be used, but after deciding to add staff, progress slows down for a while. So, using effort estimates, you would reduce progress over the next 3 months before expecting a longer-term increase in the rate of accomplishment.
This is where another piece in the jigsaw comes into play - an employee might not be 1 FTE. They might be part time, might have leave booked, might have commitments on other teams (effectively part time from your perspective) or might be a new hire so need upskilling time. Those are just a few common examples - I'm sure you can think of others.
This is why I keep reiterating my point about calculating your figures based on work required from FTEs. By talking about new hires as you are, you're again thinking about "teams" rather than "work required". When you look at work required then you can use your team as a variable in the calculation and instantly estimations become easier.
I'm undoubtedly explaining the process poorly but I do strongly recommend you read some books or articles online about managing projects and teams using (for example) agile methodology - even if you've already worked in places that employ scrum (again, for example). You could potentially really improve how you estimate work which in turn will improve how you manage your team. From personal experience, I've been a manager for a number of years and have found my skills really improved as I've adopted those lessons too.
It’s mostly bullshit. This team has historical reasons for their bloated schedules, but at root, they’ve simply been coddled, and never forced to justify their behavior.
I’ve been on both sides of the table now. Developers like to gripe and moan about “unrealistic” schedules, but without aggressive pushback from management, a huge number of programmers will simply never ship.
If it weren't for schedules I'd have no defense at all for 'wasting time' on something that saves each of my teammates an hour of pain per week. (I've had a lot of shitty managers. Stuff like this shouldn't need a defense).
Well, this is basically a lack of trust between you and your devs. Or a lack of common understanding of priorities. I'm not sure if you're a project mgr or a client, but clearly some expectations are misaligned. Perhaps you don't see their barriers, or they don't agree with your priorities.
The existing contract doesn't work, so some mediation is due to close that distrust before any damage is done. After all it's the team that you currently consider yours.
It’s pretty amusing how many definitive responses I’m getting from devs who are diagnosing my problem without any knowledge. It’s also telling how different they are.
Like I said, there are reasons, and I’m aware of them. I’m also fully aware of the technical barriers.
The devs are padding their schedule, and think they're being clever about it. They are not. Some of it does boil down to trust (i.e. bad history with previous managers), but a lot of it is just that they don't want to do the thing because it's less fun than other things, and nobody has ever held them to account in the past. A sufficiently lazy developer will find endless Legitimate Technical Justifications for not doing something he doesn't want to do.
I’m not trying to generalize to all developers here, but I do know the people and problem I’m working with.
Also, if they give you a non-padded estimate, which is the one they will occasionally miss, will the typical manager at your company force them to work weekends and evenings or otherwise punish them? Or will they be OK with it and accept it as a risk factor?
I am asking because the one manager who complained about padding was the one who took pride in manipulating people into working weekends - including by claiming they had promised it. I give tight estimates when I know I can be late, and generous ones when I assume the estimate will be treated as law or I don't trust management.
To answer your question: no. I don’t do any of that. Also, this situation has nothing to do with aggressive schedules.
That particular manager was not the worst.
Note: I'm a dev and I can tell you I very much dislike working with prima donnas.
Yes, but not the way I think he’s thinking (and maybe not the way you’re thinking either). If you were actually in a room with him and said, “ok, let’s just walk in there right now, fire these clowns and replace them with somebody better” he’d immediately hold up his hand and say, “but I can’t find anybody better…”. So yes, it’s probably a hiring problem but the problem is that he’s putting unreasonable expectations on the people he’s hiring.
Are they giving bloated estimates or not shipping within estimates? Why are there such large features anyway? Why is the manager not a team member but instead someone who doesn't belong?
Sure I can fire off that one line change and tell you it's done. But, does it work? Is it right? Do you care?
Typically this sub-par work is done by inexpensive outsourced development shops being managed by a client rep without the capacity to see through the lies until the project goes sideways in the future. The developers who rushed the work are paid and out of the picture and never have to fix the problem or touch that code again, so they don't care.
Unfortunately not everyone cares about resilience and certain performance thresholds when it comes to construction, especially when budgets become involved.
Some customers are happy to cheap out on a hack-job remodel in the hopes that they can flip their home and run off before the bagholder realizes they got a lemon.
The civil engineer spends 20 minutes thinking about how to solve the problem and 20 days making sure there's no way whoever signs his paycheck is going to be told by a court to pay out a bunch of money if something goes sideways.
A professional engineer is basically just a lawyer for the laws of physics. You're not paying for his ability to come up with a solution. Anyone can read the books and do that. You're paying for the fact that other people take that solution seriously.
If you let dumbasses through the system, they will build a pedestrian bridge in a hotel lobby that fails and kills a hundred people at a party.
 The system I worked on was hardened against all of that and some more failure modes.
I mean both are valid depending on what you want. Are you asking me to whip up a script to answer a one-off question or a pipeline to answer that question for every customer every quarter?
I was 1st SE at the company I am at now. Previous applications were developed by outside contractors. I tried making some changes to that code base and found massive issues. Source code was older than the compiled application. Massive methods that were 1k+ lines long. Business rules all over the place in IF statements. Barely any documentation/comments. Only way forward is complete rewrite.
I'm a relatively fresh SE, but I read books/research good practices because I'm trying to avoid major mistakes. Also, it is hard to explain to people who even know some code, how difficult it is to make what seems like a simple form.
At this point I would refuse to work without time for unit testing/refactoring/research.
Interestingly, a few days ago I made a very minor change in an app I developed straight out of school. The app had 0 unit testing and minimal integration testing. It created a bug, because I had allowed the specs to change weekly and I just coded it without thinking. Many lessons were learned that day.
Furthermore, it's often nearly impossible to even be able to ascertain the 'right answer' until you've gained significant exposure to both the application & the business needs; in my experience, you'll have a far better understanding & appreciation of the architecture after a year (or three) of exposure. At that point you are much better positioned to objectively understand the potential ROI (or lack thereof) on a rewrite project vs a more conservative but concerted effort towards incremental improvement over time.
When mentoring developers on this general topic, one of the key things that I emphasize is that a functional application (even if substandard in architecture) is already solving a business need and often generating revenue/profit/positive ROI (as the case may be) for what was probably a "[re]write" project at some point in the past. Rewriting is a large undertaking with many unknowns & high costs, often higher than anticipated, and with no guarantees of reaching full functional parity in a given stated timeline. That results in difficult budgeting & ROI calculations (read: risky), and typically means the project itself is risky -- meaning the potential reward would need to be quite large to be worth it & offset the risks. I find that to rarely be the case when you already have a functional application, even if substandard. ;)
Sometimes you manage to catch some of these early on, but often they aren't caught until later in the process -- at which point the cost for the change (and the impact on budget/timeline, and potentially even the very architecture you set out to fix!) is drastically higher.
I've seen it happen a number of times where the 'spec' seemed simple enough at commencement & throughout most of the project; then you'd have each department test & the reports would start coming in... and the result would inevitably be a multiplier on the project scope.
Not from missing features mind you, but via the omission of 'complexity handling' / business logic which was documented -- but the code itself was the documentation, which isn't overly unusual (and IMO isn't even a particularly bad thing in many cases, though not all).
Code-as-documentation is best-in-class at staying up to date, when it comes to documenting what a system does and how. Often the relevant pieces are very discoverable as you're going to make a change. For other goals, it's much less discoverable. Clarity depends on both how the software is written and who the audience is.
Tests-as-documentation are arguably a special case of code-as-documentation. Outside of the case of well maintained tests-as-documentation, code-as-documentation often has a hard time expressing "why" and distinguishing between what has to be a certain way and what just happened to be that way.
It also has some trouble expressing aspiration - "we've decided it should be this way, but it's not yet".
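To illustrate, a small Jest-style sketch of a test carrying the "why"; the function and the rule are hypothetical:

```typescript
// The test name records intent that the implementation alone loses.
function roundForLedger(amount: number): number {
  return Math.round(amount * 100) / 100;
}

test('amounts round to two decimals (the ledger export format requires it)', () => {
  // The parenthetical is the "why" - plain code-as-documentation can't say this.
  expect(roundForLedger(3.456)).toBe(3.46);
});
```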
If we're talking about something in Java that could be better, in the abstract, but it still works, agreed: it might not be worth the effort to redo. Java isn't going anywhere, and versioning differences aren't always critical.
If we're talking about deprecated front end frameworks that no longer have any LTS (Angular 1 comes to mind), moving over to a framework that does have LTS is a pretty smart move, if the code or app is that important and valuable.
In other words, work in the current code for at least a year before proposing a re-write.
Source code older than the application doesn't mean much as a statement - any good process will build off a CI system, which only ever builds from checked-in source code. The question is how different the application that the code generates is from what exists, and what to do about the differences.
Massive methods are sometimes a sign that the good architecture missed something. Refactor them of course, but that doesn't mean the architecture is bad, just that it needs to change to fit current requirements.
Documents/comments lie. Their value needs to be weighed against the risk that they mislead you. I'm not saying you shouldn't have them, just not as much as you think.
The problem with books and research is knowing when to apply them. The rules exist for good reason, but they are really guidelines not rules and sometimes they need to be broken. I'm considering that now in my application, the GUI depends on the business logic depends on the network at first glance, but looking closer I'm debating calling it fine because there is no business logic (network sends 5, business logic changes the type from int to mm/s, GUI displays 5 mm/s), so the complexity of making the business logic the thing everything depends on doesn't seem worth it. Maybe, I'm still trying to figure out where we go next, different guesses result in different optimal architectures: if we were really sure they wouldn't be guesses.
I do agree that you shouldn't do anything without creating a test. Note that test is singular, don't try to create exhaustive tests, not only will it take too long, but you often will waste time on a test that is incidental to the implementation and not required. Create a test or two around something you want to change, and then make your change. You get some assurance you didn't break anything, if you were wrong and broke something, you at least get another test out of it.
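Something like this, as a minimal Jest-style sketch (module and function names hypothetical):

```typescript
// Pin down the current behaviour of the one thing you're about to change.
import { applyDiscount } from './pricing';

test('current discount behaviour, captured before the change', () => {
  // Values observed from the running system, not derived from a spec.
  expect(applyDiscount(100, 'GOLD')).toBe(85);
  expect(applyDiscount(100, 'NONE')).toBe(100);
});
```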
All the while listening to management say, “this should be a one line change, why should it take more than a few hours?”
Oh great, you do it then.
Sometimes it only seems completely obvious until you discover an unexpected constraint that the design decision addresses well. Sometimes those constraints are no longer a limiting factor and you can safely rethink the design; sometimes they are an elegant way to prevent a specific real problem and you shouldn't.
Consider rewriting only a handful of small parts that are causing problems critical to the success of the product, and making small improvements to the rest of it as needed over time.
I have tried working with the source code but it has many bugs and it appears to be two years out of date. This leaves us with an application only in its current state with no way of making any changes.
I may be able to extract pieces of information, reuse some stored procedures and so on.
If you start from scratch you may bump into the same edge cases that the original writers bumped into, and end up with code that is not much better than the original - even if the original is 2 years out of date.
I’m sure there were cases when writing from scratch was a good call, but I don’t remember hearing about it.
The issue is primarily that of the reward versus cost -- especially the opportunity cost.
When the system is rewritten, will the business have increased revenue or decreased cost? Will it do so significantly, surpassing at least the cost to rewrite (salaries, etc) -- that's the absolute minimum bar, but then you have to consider the opportunity cost which is the real concern:
If you had instead spent the same amount time adding new features, implementing an A/B test suite to increase conversions, improving marketing capabilities, retention mailers, or really any other activity that could positively impact the company business metrics -- would the impact be better than the impact of the rewrite?
In most cases the customers (internal or external) don't really know or care how good/bad the underlying code is, as long as the product serves their needs. When that's true, even partially, the value of 'rewrite' almost never exceeds the opportunity cost alone, let alone the absolute cost (and that's to say nothing of the risks).
That isn't hypothetical: in the above rewrite we took one part that was considered too essential to wait, brought it in, wrapped it, and used the new framework. It was working much sooner than the other code. It still isn't completely rewritten - but it doesn't need to be, as the core business logic is known to work.
They were producing code that wouldn't compile, often failing completely on more complex input and translating compiler boilerplate into code.
This was a while ago so maybe things improved since then.
If I had to bet at even odds, I'd bet against it being useful. But the win is potentially large and the effort isn't.
I have decompiled the version in production. I will admit that reading it is extremely frustrating, as there are no objects and fields are named field1, field2, field3.
Btw, it's not a dev who lost the code, it's the company and inadequate development process. It should never have been the single dev's responsibility to make backups.
There are a lot of things you can do with that input but you really have to acknowledge that there's a pretty big problem here, and some empathy is going to be part of our path out of this mess.
We have implemented source control and CI is the next obvious step, but those things were not in place 10 years ago when the application was developed.
There's a happy middle ground between the two, and assuming developers who will deliver quick fixes are all hacks who don't care is counterproductive.
Yeah, the happy middle ground is a management infrastructure that doesn't insist that you say exactly how long everything is going to take before you start doing it and accept that there ARE unknowns in software development.
They think that 'simple' from the user's perspective is 'simple' from an engineering sense and usually those are inversely correlated.
Essentially that engineer has grounded the management team. They can't be trusted to behave so their toys have been taken away.
Doing the change right means making sure that a) your change actually meets the request as the requestor understands it and b) doesn't break other features that already exist.
Everything is a feature:
- Automatic tests: a feature. You can buy it if you want, or you can accept accidental bugs at any time, even for stuff that worked before, or even complete meltdown. You don't have to buy them (I personally usually stop working here, as this is professionally unacceptable for me; there are tons of substandard shops that can do this), but then accept warnings and put it in writing, so I can later ignore your anger with full confidence.
- CI/CD: you can buy it; it means we are agile and fast, we do 20 deployments a day vs 2 per week where some may fail due to an insulin spike at the moment. Maybe you don't want it, and snail speed is acceptable for the timeframe/budget or is the least evil.
- Epic docs: you can have them or not; again, this will determine how many people you eventually need on the help desk team and the local IT team, the perceived quality of the system, etc.
- Metrics: yes, we can make nice dashboards and you can be the FIRST to know when anybody gets an unknown exception or CPU goes higher than 90%. But maybe you don't care or don't have the budget, and maybe we will spend time doing the wrong things... because we don't know how often features are used.
- Full auditing: maybe you need this 10 years back in full detail for legal reasons, or NOT, because you don't give a damn about it as you plan to sell in 3. Your decision.
and so on and on...
Everything is a feature. I won't accept work without some features - I can be realistic if needed, but we need to mutually understand and agree on what it means for the system, and have that written down in a public place (for example the company ticketing system).
The DoD is not a singular ruleset, but something defined by the project team, and it can evolve as the needs of the project require. It may or may not involve some or all of the components you've mentioned.
It exists, but it's dynamic and context/stakeholder/implementor/task/feature specific.
For these devs the Definition of Done is handing it to QA to debug. They do not include all the QA tickets their rush job creates as part of their DoD, and neither does their bad manager. Therefore, it does, in fact, look like they're way more productive than the rest of the team.
In the end they were right. Because they refused to own their own bugs, everyone else was cleaning up after them.
The real problem was when they started using this bullshit ratio to inform their opinions on development processes, pushing back on attempts to mature our process and tools.
Hence the sibling comment of "Handyman Contractor mentality vs Civil Engineer mentality." It becomes a question of identity rather than a question of what the situation demands.
I'm not sure how to fix this, but I don't think the problem is as bad in practice as it is in discourse. I don't think people who see themselves on the "Civil Engineer" end of things would do the equivalent of providing CAD drawings and structural analysis for someone who asks them to replace their mailbox. On the other hand, it's still a problem if they talk as if they would.
That is why any proper RFP answer already provides a broad overview how the problem would be tackled, and possible solutions to the described problem.
Also why during the project development, at various delivery phases, artifacts like architecture diagrams, documentation and UAT from customer team take place.
For a consumer product with millions of users, there's probably a checklist of device platforms, screen sizes, browsers, and screen orientations to check that one-word change, but for a SaaS offering with less than 100k users, probably not.
He responded with, "well, we just have to figure out which doors have kitchens behind them, that's all!"
But during this discussion, I could feel the weight of the countless times I had already had a similar discussion in the past and it was heavy on my soul. Seriously, it feels like we've made no progress at times.
Even here, at HN, I've several times brought up the paper "Large Limits to Software Estimation" and been downvoted as not getting it.
Like, for people who want a 50% demo at the 50% timepoint, asking whether they'd drive a truck halfway across a bridge that was half completed. I mean, you could build a bridge that way, but it would be way easier to lay all the foundations first, then build support structures, then finally pave it at the end.
Another one is when someone asks to tack on a feature on top of some system that really can't support it, comparing it to asking to build a second story on a tent. They wanted the cheapest, quickest to set up, lightest on resources system, and that required certain tradeoffs. To build their new feature, it would often be easier, quicker and safer to rewrite the whole thing from scratch.
What's missing is another stream of input that should externalize the constraints and barriers in the flow of both yours and his teams. Even then, as long as PM does not see himself part of your team, all this is in vain, you'd be speaking foreign languages no matter the analogies.
Some of it I think is also personal confidence. If I tell someone it'll take me two weeks, I never budge unless new information becomes available. If you budge, you are welcoming being pressured/bullied into working overtime and/or delivering poor quality all the time to meet infinite demands. Also, we must accept that most customers are willing to accept lower quality/higher risk than we want to deliver. In this case what's important is to state the risks as plainly as possible and set boundaries on what you're willing to do before those risks come back at you.
"This task should take a day. But if the feature of the framework I plan to use turns out to be buggy, it might take a month. A bug is unlikely but not impossible, I handwave a guess at 10%."
This sort of estimate might be reasonable, or not, depending how fractal you want to get with the backup plan of a month of tasks which themselves have wide uncertainty estimates. But it's also quite useless for planning. You end up with a Gantt chart that says "this project might take 3 months, or 5 years", which helps nobody.
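Run the numbers and the problem is obvious; a back-of-envelope sketch, assuming ~20 working days in that month:

```typescript
// Expected duration of "1 day at 90%, a month at 10%":
const expectedDays = 0.9 * 1 + 0.1 * 20; // = 2.9
// The mean is ~3 days, but no single outcome lands anywhere near it,
// which is exactly why the resulting Gantt chart helps nobody.
```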
Sometimes there's a specific uncertainty that radically changes the estimate. But remove this uncertainty and we're back to estimating and doing +/- 50%.
It is relatively easy for someone to become a maintainer of one specific thing as long as that's the only thing they're doing for a long time.
Then you look at a software developer, sysadmin, anyone in the technical industries, and quite often it's not one specific thing, it's LOTS of specific things. Each with thousands, maybe millions, of human-hours in their development and the development of the things they depend upon. Every layer of hardware and software having it's own quirks and wrinkles.
It's like trying to be the god of a small solar-system of interacting planets each with plate tectonics and life-forms.
I find myself doing it when I get quotes for home repair or construction. My mind immediately goes to "I don't think it should take that long", even though I have no idea how to estimate those types of projects.
I wonder if it's because with something I'm not knowledgeable about I'm only estimating what I can see, but there are 5-10 tasks that have to happen in the background to make that thing I can see work properly. I just have no idea what those background tasks are to begin with or how many.
Devs should set timelines, and business people should set "how much benefit will this bring". The combination of these two things yields priority.
If you start out with an environment where devs feel rushed, you will run up tech debt very quickly, and it will be harder to get on the same page later when dev slows to a crawl
My quite recent conversation with a contractor about estimating our home renovation project went more or less like this: "Demolition: 2 days, foundation: 3 days, ground zero: 3 days, exterior walls: 3 days, floors: 1 day, ceilings: 2 days, attic and chimneys: 2 days... so about five months in total". I felt right at home and want to hire them. The jump from how much time each tasks seems to require in pure work effort to how long it may realistically take given the unknowns, downtimes, logistics and various overhead seemed similar to the way I do estimates. An hour here, a day there, another four hours for that... yeah, I need two weeks for this task.
Inexperienced software engineers are less expensive, so you can hire more for your payroll budget. These juniors tend to think "yeah, that sounds pretty straightforward. I can do that in a week." It actually takes two months. It takes experience to estimate even nearly correctly. And MBAs love to buy some less expensive, less experienced engineers with their money.
I was that inexperienced software engineer once. At the time a grizzled veteran tried to drop a knowledge bomb on me and said "A good programmer can write about 14 lines of code per day". It took me a really long time to figure out that he had mis-quoted something. It should have been "A good programmer only writes about 14 lines of code a day, they're just the right 14 lines."
Some days you read pages and pages of code, but don't write a single line.
However, that makes it possible to figure out just the right 14 lines of code to write the next day, by reusing instead of duplicating code that's already there.
And generally, the stronger the coupling, the more you have to read.
Estimating should be optional by default for juniors, but progress self-tracking mandatory. This way it yields facts without guilt and overpromising. The team should estimate; the team should deliver.
I've also seen teams where the Definition of Done doesn't include any steps at all towards deployment. Done is when someone approves the pull request.
Not surprisingly, it takes longer for those teams to 'complete' a change. In the former case, they are continuously surprised, and angered, by 'requirements' that 'no one' told them about. In the latter, they stop halfway and wonder why everyone is waiting on them, because it's 'development complete'.
I was once a tech lead of a team where the architect had been berating and criticizing a team member for weeks over something he thought should take a day.
I suggested he should take over and complete it. It took him weeks to complete.
If berate is truly the appropriate word (upon investigation), and I had any say in the matter, I would fire his toxic ass immediately.
As I've gotten more experience as a developer, I've realized that most tasks take longer than I used to estimate... usually because I want to come up with a long-term solution and not just hack something together with no testing.
If you need a line in a document to tell you that exposing privileged information to the outside world is a bad idea, or that you should make sure the solution you are designing has a reasonable chance of servicing the expected load then you probably shouldn't be a developer.
The last time I saw explicit non-functional requirements, they had something like "the solution should have 99.99% uptime." I realised that our downtime for releases (old school, I know) was already more than that, and promptly ignored them.
Based on my own experience with NFRs, these are perfectly reasonable questions.
> I asked them about [the non-functional requirements for this project] and they had to ask [please give me details on those requirements]
or is it
> I asked them about [non-functional requirements as a general concept] and they had to ask [what are these new things you speak of, I have never heard of such a thing]
The former is reasonable, the latter less so.
Requirements like "The system must be secure.", "The system must not go down.", or "The system must not have performance issues."
I'm lost as to what the purpose of such statements are as they don't let me know where real focus needs to be given. Not every system is mission critical and deserves equal resources devoted to ensure uptime.
It's like saying mobile devices must be supported. Does that mean only top of the line recent releases? Does that mean 8 year old smartphones? Does that mean the internet browser on the Nintendo DS?
It's the auto-mechanic problem; if you don't understand the thing yourself, you have no way to know whether or not you're being taken advantage of. So you tend to just split the difference and maintain a constant, moderate skepticism.
One always seemed to think solutions were quick and easy, but that's because he never got to the stage where he could write reliable, maintainable code. We were constantly fighting fires and he seemed to think that was normal.
I am not convinced that giving people enough knowledge to be dangerous is a good thing.
Why is it essential that 'everyone' knows about code - why draw the distinction?
The only way to fix this is to have someone who understands the work at the highest levels of the organization. But that is impossible because the organization is designed to hire people without the dev background at the executive level
Some of us wish there were safety codes for software like there are for plumbing and electrical. A lot of arguments reduce to "because what you're asking is illegal".
Same as you get with designers and artists when a client cheaps out and says their 4 year old kid could do better, or they’ll pay in something other than money.
I don’t think explaining it works. I think that’s almost empowering them because they’ve put you on the defensive, justifying your time and expense when it needn’t be justified.
Better to get them into the habit of managing their expectations instead of trying to people please.
Entire operating systems are “free”. Software that takes tons of effort may be $.99 on an app store. We have been programming non-programmers to undervalue everything for those same 30 years.
Probably, software packages should never have fallen below $100. The minimum price on any app store should probably have been $10. Then we might see a valued industry.
Our shiny new tunnel took 3 years longer than planned. Our new aircraft are being accused of crashing due to engineering shortcuts to get them in the air faster. A tower crane fell across one of our main streets, killing 4 people, so even if that building was on schedule before, it's sure not going to be now.
Is there an industry which can complete all projects on-time and on-budget, with no catastrophes? I'd love to see it.
No it hasn't. I've worked many jobs where software is still an afterthought. There might be a few software packages that the business licenses for use, but they are far, far, far from ever having any software developers on their staff.
We were not a convenience store in central Alabama.
They think that because occasionally something that sounds (to them) like the same simple request does take an hour or two. That's all they remember. That's all they want to remember.
Frankly, short of critical bug fixes, is there ever really a real biz need to deploy such updates almost immediately? Why not say "we expect to have it coded and tested in time for our next scheduled update" or something like that. This way they see the big picture and don't get to know - because they don't really need to know - the nitty-gritty details.
I'm not against transparency. But the fact is, clients need to be managed, as do their expectations.
Something along the lines:
If you expect to build a cottage and it turns out to be actually a cathedral, that is the source of the extra time or rework.
Or, imagine you build a wall inside a house. But once the wall stands, suddenly you remember, that you wanted to have a window there. Or an electric socket. Imagine what extra work that will be.
It is easier for clients to imagine that, and they understand the necessity of planning and why some changes, although they seem like small ones, may actually take a lot of time.
I have the feeling that software is too abstract to reason about for non developers.
I will give you that eventually they will take a bunch of time on something and have to explain furtively "it's more complicated than you think" but they may not even realise at that point why it's getting harder.
What? No, not at all.
IT is just glorified technicians who should live in the basement. Business is not willing to share power with IT. This is an unwritten class system.
It is that former devs who are now managers don't remember this when they are in a new role. Maybe it is the pressure from their superiors to do things quickly, or maybe they no longer have the developer mindset.
- 0.5 hours investigation and planning
- 0.5 hours feature development
- 4.5 hours manually testing
- 0.5 hours deployment
- 8 additional hours now adding automated testing
Once a code base has decent test coverage the time it takes to add additional tests is pretty reasonable and most of that manual testing time goes away.
No, they're telling you the rationale of what goes into making a change. I agree that this isn't something you should send to clients. But, even sending a summary to someone who invokes this question will ask the same thing for any item you summarize. The next question will be, "Why does it take 4.5 hours to manually test?"
So, explaining the rationale, at least once, will help the customer understand the process and lets them contest specific elements, rather than having it hidden in vagueness.
> The next question will be, "Why does it take 4.5 hours to manually test?"
I'm not sure it will. In my experience, clients are usually surprised about how long something takes not because they think specific tasks are taking too long, but because they aren't really aware of what the tasks are nor of their tradeoffs.
"You're right, it took longer than expected. I chose to spent some time setting up automated tests. That took extra time now, but will save much more time later."
If I was the client and you sent me this blog post as an answer to "why is it taking too long?", I'd fire you.
Because you spent more time writing the post than it'd take to update the code.
You're better off avoiding getting to this point in the first place. Maintain a good relationship with cooperative clients and they won't (usually) complain, because they value your work. You should fire uncooperative clients and let them be someone else's headache, when possible.
Adding a little more detail to line items in invoices helps a lot too. "Fix bug report #493" should usually be, "Investigate report of incorrect discount calculation (#493), modify 1 file, review code, deploy to test, test for regressions (all tests passed), deploy to production."
It seems dumb and repetitive to us, but one of those descriptions looks like 4 hours' work to the person approving the invoice, and the other doesn't.
So on one hand, I see the argument. Simply opening unknown code, and making a change no matter how small, is a risky game. You need to research the impacts, test, and walk slowly through a deployment you haven't done in ages. I totally get that.
But I also see the other side. Why DOES it take 8 hours to do a simple one-line code change? That's ridiculous. Somehow, we've developed these fragile systems. We've trapped ourselves in processes that add 7 hours to any change we make, no matter how small.
The status quo does not need to remain. It should be easier to make small changes. It should be cheaper to respond to simple requests. The client is actually right to question us when they just want an email to appear a day earlier and it costs them $1500.
It's akin to trying to constantly remodel/add additions to a building. You may decide that you want new floors, but when you tear up the carpet you realize there's tons of water damage that was being covered.
You might be right, but if you're not in a position to help make it take less time, that's not a helpful discussion to have. The reality is that it takes that long, and complaining doesn't make it faster.
The procedures that take time exist for a reason, and I'm sure in every organization there are opportunities to make the process faster. But when somebody who wants a change asks "why does it take that long", they aren't looking for helpful ways to improve the process; they're looking for an exception to the process to be made so their specific change gets out faster.
Maybe a savvy client, maybe a happy dev. Most likely this is not the first time this situation has happened.
The primary point of the article is that there are very common inefficiencies in our industry that, if we tackled them responsibly, would greatly reduce the turnaround time of producing changes to the code base.
The point that stuck with me most is how much having a single point of change for a single concept, and periodically cleaning the code base to keep it that way, can dramatically increase productivity.
If I were that client, a response of that kind would rather infuriate me as muddying the "clear" picture I see.
I believe, in this case, there's a bigger issue to address - the issue of trust and responsibilities. If the client's expectations about how things should be done overpower their understanding of what the devs do, then the project/contract assessments probably missed the target audience. To realign such expectations, the team needs more than just a bark-back email.
It is a good idea to discuss with stakeholders, at a high level, why things _seem_ to take long, but I find that works best as a conversation. No one wants a tome like this in their inbox.
There are some 2D conversations that are necessary. Dates, times, places. That sort of thing.
This is solidly 3D communication. It absolutely needs to be face to face. Otherwise the person receiving that e-mail will (a) not read it, and (b) take it as a passive-aggressive way to get out of work.
Oh, and (c) see the length and completely unnecessary detail of an email that should have been a meeting as yet another reason to accuse you of wasting time.
It's not. I make the same complaint about developers I manage and it's so prevalent across the industry, I'm surprised you think it could be fiction. Of course, in large companies with a glut of process, making a change on a production system without testing is basically unthinkable. The reality is that most businesses have nothing more than production to work with.
People STOP saying things like this when they have been berated and/or overwhelmed and/or discouraged by details, as outlined in this letter... enough times.
Interesting fact: I worked on DrLaura.com's forum website in the late 90's when the naked college photos of her came out. They were constantly deleting posts and threads regarding the photos. Then, they ROUTINELY deleted entire forums. I put 7 (!) popup warnings in front of this action, and the owners kept complaining that the moderators were doing it "accidentally" too often.
Never underestimate the tenacity and willful blindness of customers.
Clients would ask for justification on why something would take 5h of support instead of 2.5h because the previous similar request was done in half the time during the development of the app.
What they don't understand is that if they ask for an urgent fix after deployment, they're going to pay for a developer who had free time at this very moment to pull and install the project. That dev then needs to read and understand the codebase, make the fix, test it, make a pull request, ask for someone else's billable time to approve the pull request, merge any conflicts, schedule a release, push the release, test again, and document the new feature. And that's if the request is in fact as simple as it looks.
Often, requests look very similar but are very different from each other. "Why does it take 3 hours to add a button to the sidebar when it takes 1 hour to do it in the content of the FAQ page?" Well, one is done by the content team and requires no code or release. The other needs to be done in the code. Etc.
 They are told in very simple words that this is how it works.
There is a good XKCD on this topic: https://xkcd.com/1425/
If the bulk of the businesses that clients deal with are happy to make ad-hoc changes and updates to delivered materials on a per-hour basis, they can often turn those around in a few hours; then the clients ask for the same from a computer system and get a shock when they are quoted 10 times the price for a change to a "cheaper" thing. They're generally paying more for the manual work, yet they see the computerised system as the "cheap option".
I've been communicating with their lead reports developer. The fix is done, tests are written, QA has given it the rubber stamp, but he's simply not allowed to push it to our instance until their next code release on the 22nd. It makes no sense, and it's certainly not how I run our side of things.
In the meantime, my users will lose 6-12 hours of productivity this month (15 days, 4 reports, 6-12 minutes each.)
I don't know of any solution for this issue; I just wanted to point out that this perspective is often ignored when discussing how to pay for programming work. Also, programming is hard - not so much the languages or the frameworks themselves, but the human relations that any software program ultimately ends up resembling.
The technical questions behind why a change takes a long time are legion, but so are the UX changes that all need to be accounted for. In software there is no intelligent actor that can solve edge cases you missed when they come up, instead you're building a system that will handle all of those cases (maybe by occasionally crashing, granted) so user stories need to be vetted and resolved.
* Document the customer request
* Investigate the customer request
* Create and document the requirement
* Assign the task
* Switch current context
* Deploy a dev environment
* Make the change
* Submit the change to QC
* Test the change
* Deploy the change
* Clean up dev environment
* Document the change
* Bill the customer
Ideally, you would encourage the customer to bundle multiple requests, so that most of the tasks could be shared among them. However, if they insist that they need it now, then they need to pay accordingly.
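To put rough numbers on why bundling pays off, here is a back-of-the-envelope sketch in Python; the hour figures are illustrative assumptions, not data from anyone's shop:

    # Fixed overhead per release (context switch, dev env, QC, deploy,
    # documentation, billing) versus the per-change work itself.
    OVERHEAD_HOURS = 6.0  # assumed, for illustration
    CHANGE_HOURS = 1.0    # assumed, for illustration

    def total_hours(n_changes: int, bundled: bool) -> float:
        # Bundled: pay the overhead once; unbundled: pay it per change.
        if bundled:
            return OVERHEAD_HOURS + n_changes * CHANGE_HOURS
        return n_changes * (OVERHEAD_HOURS + CHANGE_HOURS)

    print(total_hours(4, bundled=False))  # 28.0 hours as four urgent one-offs
    print(total_hours(4, bundled=True))   # 10.0 hours as one scheduled release

Under those assumptions, the client pays nearly three times as much for the same four changes shipped one at a time.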
This story seems like it could be solved with improved up-front communication.
A better letter, if you wanted to write one, would be about the benefits you provide and the completeness of your work. I wouldn't even mention the time it takes.
The more explanation you get out there the better; if you're honest (or a good faker), people will be reassured by your explanations. But accept that not all clients are profitable ventures: a customer who can't pay for the time it'll take isn't one you should take on. Just try to leave things in an amicable state.
I understand your response, but the (grand?) parent seems to be indicating you don't do that, and just explain that you do good work.
Someone presumably thought it was a good idea to make the change and assumed it would take only a few minutes work.
I think the scenario is that they are given an estimate of up to a day and can't understand how a developer could spend that much time on it.
For me it's a little too close to the memories of an old job. I tried in vain to argue with the owner of the company that having a 15 minute estimate category was pointless because the change itself would be drowned out by the fluff (creating branches, setting up data for a manual test, creating pull requests etc).
If you work for a cheapskate who's just nickel-and-diming you about billable hours, the "Why does this take so long?" question is nothing more than the rhetorical whining of a bad client. But in a healthy client-developer relationship, the question is important. It's a less-articulated, unfamiliar version of the following, which any good developer would welcome:
"I, as someone invested in the success of my company, am seeking insight into a part of that success that only you have. I trust that you're not playing Minesweeper on our dime, and yet our current process isn't delivering the returns we expect. As someone who knows better than I do, are my expectations unreasonable and in need of adjustment, or are there investments that can be made to get us there?"
It is worth it to try to help your client understand why seemingly simple changes take a long time to implement. I'd just be surprised if the written response in this post would be of much help to most clients.
The face-to-face has a much better shot at being helpful, but, of course, that depends a great deal on the client also.
I believe you're also assuming you're servicing a client who has a lot of budget to blow on a lot of small changes.
The best clients have more money than time and will pay you well to make their problems disappear. They pay you to think about the software for them, so they can spend their valuable time thinking about the business instead.
Dear client, of course we can. Just be prepared for these commits in the history:
"That simple fix" 6h ago
"Fix the fix" 3h ago
"Really fix" 1,5h ago
"Really really fix the fix" 10min ago
If all goes well enough, we won't have to craft a "Customer data fix" SQL commit the next day just because someone forgot about that totally hidden stored procedure that wasn't included in the "Really fix". That one takes another 6 hours to prepare and invoice on the next business day, so instead of 3-4 hours for a proper fix, the "one liner" turns into a bill for 12 hours and 2 days of lost production time.
But in general, the whole letter feels wrong: either the complications of such a trivial change depend entirely on legacy code that isn't the responsibility of the current maintainer, and therefore the client knows well how hard and expensive even simple changes are (and this should have been made clear when the project started); or the maintainer is asking the client to pay for developments (refactorings, tests, or a different, more general implementation of the feature) that weren't agreed beforehand.
This is what's in dispute? Something else in the relationship is wrong.
This whole article is just saying "Dear client, here's why your model is insufficient".
Which is a much nicer email to write than "Dear client, here's why my model was insufficient", which is what happens when the estimates go south.
They don't know how to internalize that - they are dealing with a magician that turns dirt into gold and gets paid by the hour.
It's totally irrational. If you were a plumber then there would be a physical manifestation of the work.
Software managers who have never written code are also often suspect, in my opinion. They can be professional managers of magic without understanding, liking, or even appreciating their field.
Thing is, working with a developer as a non-technical team member can be a frustrating, opaque experience. Communicating progress is eye-opening for non-technical colleagues, but when a programmer does not communicate, the non-technical members obviously have no idea what's going on.
Developers can forget, too, that the one small change might be holding up marketing, sales and customer support, all of whom are themselves getting flak from above about why X customer is still angry, or why the press release isn't being sent out yet, etc. "Waiting on a dev" isn't an answer that reflects well on anyone.
The "Dear Client" letter wouldn't be necessary if there was more communication. It can even be automated. Here's what I see in a Slack channel with my colleagues:
github APP [8:52 PM]
New branch "fix-password-recovery" was pushed by xxxx
[yyyyyy] Pull request submitted by xxxx
#466 Improve password recovery
• Fix styling
• Ensure the visitor is signed out of all sessions
• Redirect to sign in instead of 404 when an old recovery link is visited
semaphore APP [9:05 PM]
xxxx's build passed — d61157d Improve password recovery on fix-password-recovery
I never need to doubt xxxx when I can see the myriad small tasks, the failed builds, the commits, etc.
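If your tooling doesn't post these updates for you, a thin script can. Here's a minimal sketch using a Slack incoming webhook (the SLACK_WEBHOOK_URL variable and the message are placeholders, not anything from my actual setup):

    import os
    import requests

    def notify(message: str) -> None:
        # Slack incoming webhooks accept a JSON payload with a "text" field.
        webhook_url = os.environ["SLACK_WEBHOOK_URL"]
        resp = requests.post(webhook_url, json={"text": message}, timeout=10)
        resp.raise_for_status()

    # Called from a CI step or a git hook, for example:
    notify("Build passed on fix-password-recovery (d61157d Improve password recovery)")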
And vice-versa. Imagine knowing something very well, something quite complicated. Then not only knowing how to fix the issue, but having to explain it to a child. Now imagine constantly having to handhold that child through every step, even steps they don't need to know about, and how much that slows you down. And maybe not even knowing the solution yet, but trying different things, and having to explain each one to that child.
What you have set up sounds awful. Learn the technical side, or let them get on with the job. The updates you need should come at a daily standup.
> then obviously the non-technical members have no idea what's going on.
You still don't know what is going on, you just pretend you do.
> Developers can forget too, that the one small change might be holding up marketing, sales and customer support
Then make this clear during standup.
Not only this, but also: I've recently come to understand the idea that allocating enough time for writing unit tests and integration tests tends to uncover design issues. If a test is taking longer to write than it seems like it should, or if a particular method or bit of functionality seems harder to test than it was to write, that is a signal that you may have a design issue!
Writing the automated tests will help you find out when you have written some code that stinks, if you know what to look for and have the time to step back and think about it. It's called "Red/Green/Refactor" for a reason, it's not "Red/Green/Red/Green/Red/Green" – if your estimates only allocate enough time to write the feature and nominally prove that it works, you're missing a critical part of the TDD pattern and you're not getting nearly as much long-term value out of your tests as you could.
If you don't write automated tests at all, it's even worse, because those design issues will only show themselves when it's time to make a change, and your bad design is now standing in the way, preventing you from iterating quickly when you really need it.
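A tiny illustration of that signal, in Python; the billing reminder is hypothetical, invented just for this sketch:

    import datetime
    import smtplib
    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        email: str
        invoice_due_date: datetime.date

    # Before: the message logic is welded to I/O, so testing it
    # means standing up a live SMTP server.
    def send_reminder(user: User) -> None:
        body = f"Hi {user.name}, your invoice is due on {user.invoice_due_date:%Y-%m-%d}."
        with smtplib.SMTP("localhost") as smtp:
            smtp.sendmail("billing@example.com", user.email, body)

    # After: the message is a pure function, trivially testable; only a
    # thin shell (not shown) touches the network.
    def reminder_body(name: str, due: datetime.date) -> str:
        return f"Hi {name}, your invoice is due on {due:%Y-%m-%d}."

    def test_reminder_body():
        due = datetime.date(2024, 3, 1)
        assert reminder_body("Ada", due) == "Hi Ada, your invoice is due on 2024-03-01."

The pain of testing the first version is exactly the design feedback: the hard-to-test shape was the bad shape.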
Even worse, if we don't invest that time, the effort to make a similar change in the future will keep increasing. We need to invest that time just to keep it from getting worse.
We bill by the hour, as a lot of people suggest in this thread, which works pretty well. We also have flexible iteration plans; the clients can prioritize any feature that is important to them. If a feature isn't worth it, it will likely stay in the backlog forever.
Most things are really smooth IMO, though explaining why technical work is costly/mandatory is really hard. Because both of us want the project to stay evergreen, we need technical advancement and architectural extensibility. It would be hard to convince even myself, if I were a non-technical person: Why do you need to split into services? Why are things not immediately synchronized after the split? Why do you need a job here, and what's a job, exactly?
At the end of the day, when you are convincing people about things they have no idea about, you really need to be trusted - just like you kind of trust a doctor while being suspicious of witch doctors. To achieve this, it seems better to first be business-focused and solve problems, building trust and reputation as a kind of credit.
So given this, it is reasonable for requestors to feel like a change should be immediate because it seems simple relative to the way they themselves make changes when they need to. They are trying to use their past experience to come up with the time it should take to make a small change. So many things seem easy when you don't have the right experience.
The only way to counter this is to define very specifically why it will take a day versus whatever time they think it should take. Even then it's a hard sell, since the requestor will feel that a lot of what you are doing is a waste of time, which in their mind equals a waste of money, since time is money in business. This is how things have been, are, and will be.
I think part of a programmer's training should include how to deal with customers, since we have all had to deal with this situation and will continue to do so. It's part of being a programmer for hire.
Taken individually, yes, there's no new 'visible' reward.
Looking at the larger picture, say, 3 months from now: changes that used to take 3 days now take 3 hours, and there are fewer outages, less downtime, higher uptime, and far fewer (or no) hair-on-fire-we-lost-customers issues.
Establishing both short-term metrics (request turnaround time) and long-term metrics (uptime, data loss, security breaches, customer satisfaction, team satisfaction, employee retention, etc.) will help justify the effort.
Anytime you get into a conversation where the customer starts off with "Can't you just...", it's your responsibility to let them know a change order needs to be filed so you can estimate the new impacts it will have, including cost.
Some of the excellent techniques in the OP's article will be employed, but "change order" terminology and workflow minimize these conversations after you've stepped the customer through the change order workflow once or twice.