'Most people assume technical debt is there because of time constraints.'
As opposed to what? He describes three personality traits, only one of which (hubris; not entirely sure why that's the term used given how close it is to arrogance) actually leads to additional technical debt. Both fear and arrogance (as he describes it) merely lead to technical debt remaining unaddressed; they don't create it in the first place.
I'd posit that technical debt gets created because of time pressure, and that it stays because of continued time pressure.
Almost the entirety of my experience with technical debt has been "Hey, this code kinda sucks" "Yep. We just haven't had time to make it better/it's never been enough of a priority to make it better". Not "Don't touch that, you might break it!" (caveat: I've run into this on long lived, gigantic defense contracts) or "No, it's good the way it's written" (never run into this). I can't even imagine a developer who is any good saying that latter one; you -always- realize ways you could have made it better after the fact.
It may instead be a manager calling the shot that refactoring isn't necessary/worth it (the arrogance he mentions), and we can debate whether it's fair for management, rather than developers, to decide when refactoring is necessary, but it's still based on time/money constraints; I've never met a manager who would say no to a refactoring attempt whose cost was nil (i.e., the dev, QA, and anyone else who'd need to be involved, offered to do it in their free time).
EDIT: And even technical debt incurred because of changing requirements comes about because we didn't take the time initially to fully understand the requirements. This isn't a bad thing, it's what agile is predicated on, that we don't need a full top to bottom understanding of the problem space to deliver something useful. But it still can be expressed as a time constraint, one on the time it would take to expand our understanding, rather than one on the time it would take to code.
And with methods like TDD: "don't touch it or else you will break 1500 pointless unit tests that don't correspond to any actual functionality and will have to be rewritten". But you can, with great confidence, make changes which, so to speak, shuffle the same technical debt among different accounts.
Not specifically to TDD, but tests are one of the things that helps keep that particular complaint from being used. You -can- refactor with some assurance that you won't break anything (as the tests will either continue to pass, or they'll break, and in investigating them you will confirm that your changes broke only the things you intended to change). Without tests, refactoring becomes a "Well, we don't -think- we broke anything..."; you have no reassurance that you didn't.
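For what it's worth, here's a minimal sketch of that kind of safety net (the `slugify` function and its tests are hypothetical, not from the article or any tool mentioned here): the assertions pin observable behavior only, so the internals can be rewritten freely and the tests still tell you whether anything you actually care about broke.

```typescript
// Minimal sketch: behavior-level checks that survive refactoring.
import { strictEqual } from "node:assert";

// Hypothetical function under test.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse punctuation/whitespace into dashes
    .replace(/^-+|-+$/g, "");    // trim leading/trailing dashes
}

// These say nothing about *how* slugify works, only what it produces,
// so a rewrite of its internals either keeps them green or fails loudly.
strictEqual(slugify("Hello, World!"), "hello-world");
strictEqual(slugify("  Technical Debt 101  "), "technical-debt-101");
console.log("slugify behavior preserved");
```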
Ideally, tests should only exercise high-level features and their invariants, so that a new implementation of some part only breaks them if the new implementation is actually broken. Unfortunately, many unit tests are written with a lot of assumptions baked in, essentially testing that you have kept the existing implementation; unit tests written in a naive way are especially prone to this. In that case you basically have to rewrite both the implementation _and_ all the unit tests, and the unit tests are in fact making change harder.
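A sketch of that failure mode (the service and hand-rolled spy below are hypothetical): the test asserts how the result gets computed rather than what it is, so almost any refactor of the implementation breaks it even when behavior is unchanged.

```typescript
// Hypothetical example of an implementation-coupled unit test.
import { strictEqual } from "node:assert";

interface TaxApi { rateFor(region: string): number; }

class PriceService {
  constructor(private taxes: TaxApi) {}
  total(net: number, region: string): number {
    return net * (1 + this.taxes.rateFor(region));
  }
}

// Hand-rolled spy standing in for a mocking library.
const calls: string[] = [];
const fakeTaxes: TaxApi = {
  rateFor(region: string) { calls.push(region); return 0.2; },
};

new PriceService(fakeTaxes).total(100, "EU");

// Brittle assertions: they pin the collaborator and the call pattern, i.e.
// the current implementation. Cache the rate, batch the lookups, or inline a
// tax table and these fail even though total() still returns the right number.
strictEqual(calls.length, 1);
strictEqual(calls[0], "EU");

// The behavior-level assertion is the only one a refactor should have to keep:
strictEqual(new PriceService(fakeTaxes).total(100, "EU"), 120);
```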
Unfortunately, much of the current software development literature more or less encourages writing this kind of unit test: it emphasizes the importance of tests ("the definition of legacy code = no unit tests", "code without 100% test coverage is by definition low quality") while saying little about what a good unit test looks like. This leads to brittle unit test suites full of mocking that are nearly impossible not to break if you change the code at all.
Of course good unit tests are not like this, but it means that both you and the GP can be right at times.
>I'd posit that technical debt gets created because of time pressure, and that it stays because of continued time pressure
I think you could boil everything down to "time pressure" and not gain any good insight into the problem. For us, technical debt came in the form of resource constraints, namely not having enough money to hire someone who could do a task in the most optimal way. Sure, we didn't have years and years to learn a new codebase, but I don't consider that "time pressure" in the same sense.
I do think in general, though, that if you are "hacking" on a product, you are almost by definition not building it in the most optimal way. In general things are left out or made overly complex, whether it be documentation, streamlined code, or building with a non-native SDK instead of native development; technical debt sneaks in there.
Or how about developer maturity? Looking back at some of the projects I worked on when I was just starting out as a professional developer, I can tell you without a doubt that I introduced technical debt. It wasn't because of time pressures, I had plenty of time, it was simply because I was so green that I didn't know any better.
For example, I built a distributed video transcoding system a few years back, and for the use case at the time it worked great. But let's say people want a new video format? That's a large rewrite. Want to add more servers into the worker pool? That's a large rewrite (the code was written to assume a 1:1 ratio of certain servers, which was dumb).
Basically anything that you might want to change was made harder by the code that I wrote to get the job done initially. Over the years you start to recognize that, and write your code a little smarter and more change tolerant (hopefully introducing less technical debt).
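A toy illustration of that kind of baked-in assumption (names and numbers are made up; this is not the actual transcoding system): the first shape hard-codes the 1:1 pairing, so adding workers or formats means restructuring code, while the second treats both as data.

```typescript
// Baked-in assumption: exactly one transcoder per ingest node, mp4 only.
const pairs = [
  { ingest: "ingest-1", transcoder: "worker-1" },
  { ingest: "ingest-2", transcoder: "worker-2" },
];

// More change-tolerant: a pool of workers with format support as data, so
// adding a server or a format is a data change rather than a rewrite.
interface Worker { host: string; formats: string[]; }

const pool: Worker[] = [
  { host: "worker-1", formats: ["mp4"] },
  { host: "worker-2", formats: ["mp4", "webm"] },
];

function pickWorker(format: string): Worker | undefined {
  return pool.find((w) => w.formats.includes(format));
}

console.log(pairs.length, pickWorker("webm")?.host); // 2 worker-2
```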
Agreed, though I would distinguish tech "debt" from using your same tool to solve different problems. In general I would view tech debt as something that you know you are incurring, e.g. this situation: I know that this hack I am doing will need to be fixed later to implement Y, but since we need to implement X right now, we have to just get it done.
Yeah, my point wasn't that you can't drive deeper into the problem and come up with something more specific, but that the statement this article leads with ('Most people assume technical debt is there because of time constraints') is misleading. Of course people assume that, because most of the time it's true.
If you don't know what technical debt is... If you hate making changes in your code, then you have it. If changes are hard to make, then you have it.
I just spent the past 4 hours making a breadcrumb layout in CSS. I hate making changes to fancy layout CSS. Does that mean the CSS language itself is technical debt? :P
(Yes, I know about and heavily abuse less/sass and associated libraries.)
I don't think many people would disagree that CSS/HTML have certain encapsulation deficiencies. But I think that less/sass just make you more efficient at writing unmaintainable stuff.
Shadow DOM and WebComponents solve this better by encapsulating styling and markup concerned with the same widget or object. Without those you still have styling concerned with different markup in the same place. You can use less/sass but it's not true encapsulation - you still have to pay attention that you don't break styling for unrelated stuff when changing styling. Thus the "unwanted pain".
*shrug* I've gotten good at avoiding encapsulation problems by blowing my architectural foot off with CSS preprocessors a couple of times. :) I still don't have a clear system, but I've learned how to decompose a layout over time in ways that let me reuse effectively. Usually this is by extracting emergent structure as soon as the CSS starts to look like a tangle. Once done, it's more clear where to throw it or fold it into existing stuff.
I've been loving React for the past 6 months, but yeah Shadow DOM and WebComponents have exactly the right idea WRT encapsulation.
My main CSS gripe is purely layout. I want to do relatively simple things like make responsive multi-column layouts using relative widths where one of the columns is fixed. Turns out that's hard and needs javascript. Flexbox is a huge step forward for making interesting layouts that were difficult in CSS and trivial in UI framework layout systems.
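A rough sketch of both points, assuming a browser with Custom Elements and Shadow DOM support (the tag name and markup are invented): the styles are scoped inside the shadow root, so unrelated page CSS can't break them, and the flexbox rules give one fixed column next to a fluid one without any JavaScript measurement.

```typescript
// Sketch of an encapsulated two-column layout (hypothetical component).
class TwoColumnPanel extends HTMLElement {
  connectedCallback(): void {
    const shadow = this.attachShadow({ mode: "open" });
    shadow.innerHTML = `
      <style>
        .row     { display: flex; }
        .sidebar { flex: 0 0 240px; } /* fixed-width column */
        .content { flex: 1; }         /* takes the remaining width */
      </style>
      <div class="row">
        <div class="sidebar"><slot name="sidebar"></slot></div>
        <div class="content"><slot></slot></div>
      </div>`;
  }
}

customElements.define("two-column-panel", TwoColumnPanel);
// Usage: <two-column-panel><nav slot="sidebar">...</nav><main>...</main></two-column-panel>
```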
I still think the whole metaphor of technical debt is borderline worthless. Taken to its extreme, the simple act of using a programming language is akin to taking a loan against the expertise of other workers.
That is, "technical debt" should be regarded as normal debt, if the metaphor is to be worth anything. As such, so long as you are making more progress due to the debt than you would in paying it back, it is probably wise to keep it.
Consider, nobody would say you should put every dime you own into paying off a mortgage. How would you eat? Now, they will say to be wise and pay attention to the amount of debt you take on.
But realize the main problem with this is looking at personal/consumer debt as at all analogous to business debt. They are not as comparable as basic intuition would lead you to believe.
It is debt though, because as I understand technical debt, it is something that needs to be fixed on a fairly short timeline (6-18 mos) without breaking some major process.
> As such, so long as you are making more progress due to the debt than you would in paying it back, it is probably wise to keep it.
I think this makes sense too, because you can fix hacked code as you go to "pay down" technical debt.
Right, but now the metaphor is so malleable as to be worthless. It is almost literally "you should pay down debt, except when you shouldn't", with very little literature talking about the latter.
Seriously, the rhetoric in use is "Living with technical debt is like living with a hole in your roof - you clean up the rain water the first couple times but in the long run you want to fix the damn hole." That isn't living with technical debt. That is living in a bloody broken home.
Instead, living with technical debt is living in a home that still has low-efficiency windows. Driving a gasoline car. Not having the latest heat-exchange technology. Having AC power run through a house where nearly everything in it just converts it to DC.
Sure, in the future things will look different. Odds are high that it makes zero sense for you to force these changes today.
Eh, I don't know if one or the other is necessarily correct, I think it is a spectrum and depends on an individual project's situation.
So for one company tech debt is something that will cripple them next week and for another it is something that they can manage over a long period; that still matches how we discuss financial debt so I see no conflict.
I think my problem comes down to my point about living in a broken home. If you have broken windows, fix them. If you just have windows you don't like, do a cost/benefit on replacing them.
And realize that replacing things piecemeal may not be a good idea. Running with the window analogy: if all you did was replace one window, sure, in some way you are better off than if you had not replaced it. But odds are high it will actually hurt the valuation of the home if it doesn't match the look of the other windows.
Same for the car analogy. Driving a low-MPG car is ultimately expensive. But unless you have done a cost-benefit analysis on getting a high-MPG one, it is likely not a good idea to upgrade.
This is especially true for most nonsense articles on paying down technical debt. They almost always involve becoming an early adopter of a technology, something which is known to be expensive elsewhere. Consider: the number of folks who saved money by buying a first-generation Prius is probably zero.
At work, this is especially poignant. The folks who were most into paying down technical debt have managed to land us with no fewer than 4 dependency frameworks. Sure, I hate Struts as much as the next person, but I would welcome it over the Frankenstein we have wound up with.
Here's an alternative point of view - Technical Debt is the way that software grows when people take the easy route. If it's easier to add code to an existing method than it is to add a new method to a class, then it shouldn't be a mystery why there are so many large methods.
The reason software goes bad is because bad is easier than good.
The reasons for taking shortcuts always seem good at the time, but regrettable in hindsight. As the saying goes: "Broken gets fixed. Shitty lasts forever."
>it can be shortly summed up as inadequate technical choices that incur maintenance costs
I'm not sure I like that summing up. Inadequate for what? If anything, "good" technical debt is making perfectly adequate choices (but no more) for your short-term needs with the full knowledge that you are incurring acceptable long-term cost.
A poorly written application does not need to suffer bit rot. A well written application can suffer bit rot. The reason?
Applications are stories, and bit rot comes from having to alter the story without understanding it. Easy-to-read stories are easier to alter, but they can still be hard.
An application has a history, and it changed over time to accommodate the people and pressures it had to deal with. People who understand the story can maintain even a poorly written application, to the extent that a poorly written app doesn't really suffer from bit rot.
Well written applications are great, but even they have a story, and if you don't understand the story then a good architecture won't save you from bit-rot. Indeed, foisting "small projects" on unprepared juniors, and due to time constraints, just allowing the code in without review, is the seed of a form of bit rot even on well architected projects.
Over time, we see the story of the application grow to include its data and its host(s). Indeed, one could say the "devops" trend is primarily driven by the definition of application as story. Extrapolating, we could say that applications are pushing at their boundaries: the build is part of the story, the push is part of it, and now we might add "building the datacenter" or "router configuration" to the story. At some level, for some larger companies, this is certainly the case.
The arrogance that the OP talks about is real, but it comes in two places: when a junior just doesn't want to learn the story, and when a senior thinks his story is so good, so transparent, that it "speaks for itself".
This is not to say that we shouldn't strive to write better applications, and create and use better architectures, but rather to emphasize the fundamental subjectivity of the bit-rot problem, and that it's a very human, very story-telling sort of problem at its root.
I've seen technical debt accrued without regard at a company once - just before leaving, they had to scrap their whole prior frontend web app and rewrite it from scratch, about a year after they started.
Avoid technical debt at all costs - spending the extra effort on good engineering is worth it to avoid an awful situation. Managers of course bear the most responsibility, but developers can do their part too, much of the time.
But think about what you just wrote! "Avoid technical debt at all costs" --- of course you don't mean that literally.
What you mean is "avoid technical debt at a price at which paying down the debt is good value", which is...totally banal.
Technical debt is half of a complicated trade-off. You can't say anything more useful than "make good trade-offs" in the abstract...Which is why I hated this article, and others like it.
Well, "at all costs" is used as a colloquy to mean "as much as possible". Sometimes it isn't possible of course, but managers and developers owe it to themselves to make better decisions many times.
I think the worst offender I see is in developers when it comes to this - many tend to cut time and be afraid of exploring something new.
> they had to scrap their whole prior frontend web app and rewrite it from scratch
How do people decide to do rewrites rather than incrementally improve an existing app? Unless there's a language change involved, I would almost always go with the incremental refactor... as long as someone's committed to actually doing the work.
If I can't properly determine the intent of code, and there are no tests to ensure that refactoring is not breaking expected behavior, then it becomes difficult to work with that code in the future and I may end up creating workarounds and bypasses instead of simply updating the code.
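One common way out of that bind is a characterization test: before touching the code, record what it currently does (right or wrong) so the refactor has something to be checked against. A minimal sketch with a hypothetical legacy function:

```typescript
// Characterization test sketch (hypothetical legacy function).
// We don't claim the current behavior is correct; we just pin it so a
// refactor that changes it does so on purpose rather than by accident.
import { deepStrictEqual } from "node:assert";

// Legacy code whose intent is unclear: are empty entries deliberate?
function parseTags(raw: string): string[] {
  return raw.split(",").map((t) => t.trim());
}

// Feed in representative inputs and record today's outputs.
deepStrictEqual(parseTags("a, b ,c"), ["a", "b", "c"]);
deepStrictEqual(parseTags(""), [""]);            // surprising, but it's what it does
deepStrictEqual(parseTags("a,,b"), ["a", "", "b"]);
console.log("current parseTags behavior pinned");
```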
By the time technical debt becomes evident enough for it to be noticed as a problem, it is most often too late to do anything about it. For those that don't believe me, show me enough businesses that have successfully refactored their way out of meaningful technical debts. The majority of case studies I know of prove me right on this point.
Usually when someone argues to refactor a system, one of two things happens. The developer loses the argument and is left frustrated. Or the developer wins the fight but finds that the cost is too high to successfully dig out of the hole.
When you're building a house, you've got to keep its structure sound as you go. If you violate architectural principles along the way, you will find it prohibitively costly (impossible) to revise your prior choices.
As long as we think bad architectural decisions are justifiable -- a mentality reinforced by the idea that sound architecture comes at the cost of delivery time -- then we will forever be flailing.
The fundamental problem is that industry has failed to deliver good architects. The hacker culture and desperation of industry for coders has produced an ignorance, disinterest, and almost philistine disdain for the architectural skillsets. (Note: by architectural skillsets, I mean the ability to keep your technical components cohesive and decoupled with minimal interfaces.)
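To make that parenthetical concrete, a tiny sketch (hypothetical names): the report code depends on one small interface rather than on a storage implementation, so either side can change without dragging the other along.

```typescript
// Tiny sketch of "cohesive, decoupled, minimal interface" (hypothetical names).
// The report module knows only this one narrow interface...
interface InvoiceSource {
  unpaidTotal(customerId: string): number;
}

function overdueReport(customers: string[], source: InvoiceSource): string[] {
  return customers
    .filter((id) => source.unpaidTotal(id) > 0)
    .map((id) => `${id}: ${source.unpaidTotal(id)}`);
}

// ...so the storage side can be a database, an API client, or this in-memory
// stub, and can be swapped without touching the report code at all.
const inMemory: InvoiceSource = {
  unpaidTotal: (id) => (id === "acme" ? 120 : 0),
};

console.log(overdueReport(["acme", "globex"], inMemory)); // [ 'acme: 120' ]
```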
A skilled architect knows that solid design is a function of skill, not of time. A skilled architect knows that there's no tradeoff between time and sound design. The compromises in the face of time pressure are not to be made on the architecture. The concessions to be made are on how generalized you'll make a technical component or which features are most important to deliver today (think Minimum Viable Product, a notion consistent with good architecture, not opposed to it).
But a majority of practitioners in our profession never learn the architectural skillsets and how to apply them. We learn how to hack, to code, to deliver -- this is all good -- but we don't learn how to build sound architectures.
Academic environments have trouble teaching it because it's a skillset acquired through years of practice in the real world. Industry has trouble teaching it because businesses can't afford to send their juniors off to go learn it. They need the juniors to execute, not go through the hard labor of learning architectural skills.
Our industry needs to figure out a way to produce workers with the right skillsets. Then we can avoid the major technical debts in the first place, without thinking we have to make a tradeoff on time. It is too prohibitive to dig yourself out of an unsound architecture. You have to build the house right. That it is "soft"ware doesn't change the game. The house analogy applies.
We have to produce good architectures to begin with. There's no other way. And that requires skill in architecture, a skill that is far, far too rare in this industry, and unnecessarily so.
Good architecture is a windmill. To do that right, you have to know how your requirements will change and grow over time, and put effort into being flexible and extensible in those feature areas.
But seeing into the future is risky and imperfect. So code bases are full of clever features and libraries that are used exactly once. And where rapid growth happened but the code wasn't prepared, sketchy chains of conditionals and run-on code paper over the gaps.
I have to disagree. It doesn't take prescience to build solid architectures. It takes skills.
Prescience (understanding your future requirements) can help dictate what your priorities are as a business, what features and flexibility to deliver today, but it has little to do with architectural soundness, which is simply a property of a system (independent of and not relative to the future).
This is my point -- to argue that an architecturally sound choice is too expensive today is right only if the right skills aren't present. This is a problem for our industry to solve.
There are so many decisions to make, and they depend upon knowing where you're going. The 'soundness' of the system is a cool property, but it doesn't help make your code nimbler unless the right flexibility is in place. What other purpose is there to architecture?
Architectural flexibility and architectural soundness are orthogonal properties. An inflexible system that is architecturally sound may obligate you to build more than its more flexible counterpart when novel use cases come around, but it still allows you to build when those use cases arrive. An architecturally unsound system makes it prohibitively expensive to solve future use cases.
So the purpose of architecture, I'd say, is to first ensure soundness (which should cost no more than building an unsound system so long as you have the right skill set) and only then strike the right level of flexibility in the architecture (another, separate skill set that is valuable but less of a current industry problem, imo).
You can evolve the flexibility of a sound architecture, but you can't do much with an unsound one. Inflexibility in a system is generally a tractable "problem". Unsoundness is not (unwinding coupling, e.g., is rarely tractable).
I can see the merit in dropping titles, but not the idea. There are most definitely "junior" developers and "not-junior" developers (call them "senior", "journeyman", "expert" or whatever). Experience is a real, actual thing, and it's reflected in the practice of every industry, including software development.
I think experience is greatly overrated. The main reason is that languages and libraries and frameworks change so quickly in software development. How much does 10 years of experience in C++ development help in JavaScript development? It's difficult to say, but someone who has 2 years of JavaScript experience can easily be at the level of someone with 10 years of C++ and 1 year of JavaScript.
It's not really difficult at all to say for somebody who actually has experience: were I time constrained (within reason), I'd take the guy with 10 years of experience over the guy with 2, all other things being equal (exceptions being, e.g., that the 2-year guy is a specialist in a specific use/development case for JavaScript, or that the time constraint is so restrictive that the day or week the 10-year person might need to come up to speed isn't available).
Experience is not necessarily a big factor, and what qualifies as "experience" is highly dependent on context. However, the experienced person generally has just seen more than the inexperienced person. They've had a chance to deal with stuff the inexperienced person just hasn't. That counts for something; in some (most?) cases, quite a lot. It's a bit silly to dismiss experience out of hand, or to minimize the role it plays.
Also, "languages and libraries and frameworks" don't really change all that often in practice, in most places where software has been employed as a solution. That's a very "Web Dev is All Software Dev" (read: myopic) Valley-centric belief. You're referring to very narrow, niche instances where startups "pivot" frequently (or can't make up their mind, or are run by fad-chasing hipsters).
I completely agree. I've been employed as a software engineer for about 2 months now and I would most definitely consider myself "junior". To put myself on the same level as the people at my place of employment who have been SEs for 20 years would be a disservice to them.
What nearly all discussions (every?) of technical debt miss is technical debt interest rate, product value and product interest rate. It's not a problem to carry debt if the interest rate on it is lower than that on your product and its level is lower than your product's.
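A back-of-the-envelope sketch of that comparison (every number below is invented for illustration, and the "interest" framing is simplified to a recurring drag on delivery versus what shipping sooner earned):

```typescript
// All numbers invented for illustration.
const monthsEarlier = 2;      // how much sooner the hack let us ship
const valuePerMonth = 10_000; // what the feature earns per month
const dragPerMonth = 1_500;   // recurring "interest": extra maintenance/slowdown
const payoffCost = 8_000;     // one-time cost to clean the hack up later

const gainedByShippingEarly = monthsEarlier * valuePerMonth; // 20,000

// The debt keeps paying for itself only while cumulative drag plus the
// eventual payoff stays below what shipping early earned:
//   months * 1,500 + 8,000 < 20,000  =>  worth carrying for about 8 months.
const breakEvenMonths = (gainedByShippingEarly - payoffCost) / dragPerMonth;
console.log(breakEvenMonths); // 8
```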
I like to build cleanly, but sometimes a feature needs to be in production TODAY, so we get it into production TODAY. Or sometimes it's useful to get something into production/usage before investing too much into building it. In fact, I'm about to commit a cleanup to a "get it into production TODAY" bit of code that I wrote 3 months ago and I'm very appreciative of the fact that I got a second crack at the problem: the revision is much more general, has better test coverage, is extensible because the feature has been in production for 2 months and I understand it better now that we've been using it. I would have wasted time and money building it more cleanly 3 months ago.
As parts of the product get larger and more complex, we carve them off into their own products/services/systems so that we can more easily reason about the value, debt and interest rates. To be sure, all of these measures of debt, value and interest rates are very subjective (...in the early days ...perhaps they can be measured in various ways later on), so I'm not suggesting that we sit around saying "the debt on that would be $4,729 and the interest rate would be 39.16%". But they're useful analogies and are helpful to keep in mind and to guide discussion. And, besides, you probably don't know your exact credit card balance and interest rate at this exact moment* but you probably have a decent estimate/feel of/for it.
The problem with technical debt is not technical debt; it's that most people are blind to it. Just as with a credit card, they just swipe the card and, hey!, new stuff!; they just commit the code and push it, and, hey!, new feature! When the bill arrives, they're surprised. Just as with financial debt, technical debt is a tool. Just as with a physical tool, you can use it to hurt yourself.
I run the product/dev group for a startup and we are constantly accruing and paying off technical debt. We do so very consciously and very openly, while constantly thinking/asking: if we do this quickly and accrue debt, when will it need to be paid?; will it slow development of other parts of the system?; did we accrue a debt without noticing it?; how much debt are we paying-off/cleaning-up this week? And I'm clear with the rest of the management team about where and when we're accruing debt and they use that knowledge when considering product/dev requests. IMO, it's critical that your company and team have a philosophy about technical debt recognition, accrual and payment.
* 3... 2... 1... A pedant just checked their balance and interest rate...