First make the change easy, then make the easy change (2021) (adamtal.me)
296 points by kiyanwang on Oct 2, 2022 | 88 comments



> In a way, this quote is saying “first do what you should have always been doing; being organized” and “then do what you came here to do in the first place (add a feature, fix a bug).”

This is half the meaning of the quote, but only half.

Even if you've been as organized as can be, business requirements change and suddenly your code organization is wrong. A good codebase can turn bad fast when people try to retain their original code model in the face of a changing real-world model. Changes become harder and harder, the code becomes more and more complicated, and you end up with a big ball of mud.

I suspect this is especially likely to happen when the original architects of the system are long gone. They knew why the system was structured the way it was and would have been able to recognize when the requirements evolved to the point where that design stopped making sense. But subsequent maintainers don't have that picture and either are afraid to make large structural changes or they don't know where the inflection points are that allow adaptation.


Technical debt is often a result of competent engineers dealing with uncertainty, changing requirements or a bad engineering culture, but it is sometimes just a result of unskilled engineers, and I think as a field, we like to pretend that's not true.


I agree that much of the time that is accurate. I think that it is helpful, however, to avoid jumping to that conclusion about any given codebase.

It's often a source of tension when someone joins a new team and races in to fix all the "technical debt" whose origins they don't really understand. Little is gained by assuming that technical debt stemmed from bad engineers, and a lot can be lost if you misjudge.


If you can't tell the difference between decisions made by a poor engineer or decisions made by a good engineer, then it's very possible you're a poor engineer.

I shit you not, I once saw the following in an actual codebase.

    var len = Int32.Parse(myString.Length.ToString());

I can maybe understand that code evolving to that over time, no way in hell is it a good engineer who put the final adjustment in and thought that was fine. Don't even get me started on systems that are overly stringly typed.
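
(For readers who don't write C#: myString.Length is already an int, so the whole round trip through ToString and Parse collapses to

    var len = myString.Length;

with no parsing step left to fail.)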

What the other poster said is absolutely true: a good design can turn bad if it doesn't adjust as the business changes. Too many developers think the point is that the initial design should survive first contact or it was bad code, but that thinking is too static. As Mike Tyson likes to say, everyone has a plan until they get punched in the mouth. There's nothing wrong with evolving a design over time, and doing so helps others learn to tell good code from bad code.


I agree with the overall spirit of your reply. I'd just add that framing "technical debt" with "bad code" is, ironically, what bad engineers tend to do.

Easy-to-solve debt tends to be about "bad code". Fun debt is usually about code design. Real fun, grown-up technical debt is mostly (but not always) about architectural characteristics. Code evolution has to go hand in hand with an evolving architecture; otherwise you're just cleaning the kitchen floor while everything's on fire.


Technical debt is a metaphor about intentionally making tradeoffs. If engineers are unskilled, it's unlikely that they can do that.

Not all problem code is technical debt.

https://www.castsoftware.com/blog/ward-cunningham-capers-jon...


If you are hiring unskilled engineers to push the product out faster, isn't that just management intentionally making a trade off for technical debt?


Right but that starts to sound like organizational debt, which I think is reasonable to treat differently.


Also a management / pacing issue. I've never tried it, but maybe allocate one day every two weeks for a codebase cleanse, or a clear-headed review. And accept that coding requires messy exploration, and therefore there's always a need for post-hoc cleanup work.


For me the hardest part of a codebase cleanse is not the refactoring. It's the thorough testing, and ensuring that the modification didn't break any existing functionality.

One day every two weeks is not enough. Better a week every 2 months.


I never did it and was mostly speculating, but I was afraid that N months would make the amount of changes to clean up too large. It would be nice to ask people at various companies how they pace it.


I'd call that a bad engineering culture


I think "undisciplined" is more specifically accurate.


Hmmm, this is definitely true on the face of it. Less skilled engineers don't see when they are implementing functionality that couples a bunch of logic together that will be difficult to disentangle, or duplicating logic, or adding a bunch of very specific logic that will be hard to abstract, etc.

Very good engineers are those that have come to understand how and why they have done that in the past and what the options are, or those that are shown, through mentoring or code review or seeking out instruction on their own, what the different patterns and tradeoffs are.

--

But that means little on its own. Every amazing engineer did those things at one point; there are plenty of things for "less skilled engineers" to do that need to be done, and that they can do very well while also being helped to grow and learn. And for all of us, even 10/20/30 years in, we are still facing not only the limitations of what we didn't know we didn't know in the past, but also the limitations of the languages and frameworks we are using, while trying to make those better.

--

I could also say that technical debt is the result of non-engineers prioritizing short-term features and bugs over long-term code quality, with everyone eventually suffering for all the corners cut along the way. That's a popular one with the dev team, but what does it mean other than that the people keeping the business going need this thing to happen and they don't care where you put the logic? And if you want perfect code, please find someone to pay you to write it...

--

This conversation happens so often in so many ways, and I'm actually in a situation right now as a technical lead where my very real ask and complaint is: we don't actually have enough deeply competent coders who have done this enough times to make this work. We need a few more people with those deep architecture eyes, and we need to trust that what they see as priorities actually are priorities, not for this feature, but for every feature we might create 2/3/4 years down the road as we scale.

There is value in the simple truth that inexperienced or lazy or crappy coders write bad code. But if we take it out of the realm where that means those people are the problem, all that means is that to collaborate on a large codebase that is doing something in the world, we all need to figure out ways to put soft fences around people's weak spots, help them grow, and try to keep the balance of business requirements and technical debt and scaling and growing a team and hero coders and lazy coders and great product people and unreasonable product people etc etc etc in some sort of productive tension that never collapses under its own weight.

--

The most beautiful code I've ever seen didn't last beyond the MVP. The worst and the best code I've ever written happened in the past, and I don't understand how I could write such things.

--

Personally I think all teams could benefit from one day a week/sprint/month where every eng on the team just goes and improves something they know could be better. Cause everyone sees something ugly, but no one sees the same things, and we'll never agree on the priority. Regular time to go do things that are just "this is kinda sketchy and hacky or causes me pain every day, Ima go fix it".


This is also the reason that most heavy abstractions fail. Abstractions often define the rules for one very specific universe. When the next CEO comes in and decides to toss that universe, it can become almost impossible to reason about our new reality in the context of the old abstractions.

For this reason, I tend to steer away from OO design patterns in domain logic. I've found that simple function-based code written around algebraic data types tends to be much more fluid and able to conform to changing worlds.

Ironically, as I've gotten more senior, the code I produce tends to be much simpler...


Yes, object oriented design promised a lot of re-use that never happened (at least not like that).

Instead in the real world, re-use happened via libraries. And very much along the lines you sketched.


> I've found that simple function based code written around algebraic data types tends to be much more fluid and able to conform to changing worlds.

How do you explain that?


Because the ADTs mostly just describe the shape of data moving through the system. Less control flow and logic is tied up in abstraction. Functions are typically small, pure, and either "composable" or "disposable".
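
A minimal sketch of that style in C#, with a hypothetical Payment type (not from this thread):

    // The ADT just describes the shape of data moving through the system.
    public abstract record Payment;
    public sealed record Card(decimal Amount) : Payment;
    public sealed record Invoice(decimal Amount, int DaysOutstanding) : Payment;

    public static class Fees
    {
        // Small, pure, composable: no hidden state, no inheritance hooks.
        public static decimal Fee(Payment p) => p switch
        {
            Card c    => c.Amount * 0.02m,
            Invoice i => i.DaysOutstanding > 30 ? 5m : 0m,
            _         => 0m
        };
    }

When the business model shifts, you mostly add or reshape records and edit a few small functions, rather than untangling behaviour baked into a class hierarchy.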


Like everything else in software development, there are different valid approaches, and it's up to us (the developers) to use them in ways that suit us the most.

I've come to realise that all coding, regardless of the language, comes down to two things:

  1. defining what the data we're working on looks like
  2. defining what we want to do with that data
Different languages and "paradigms" approach these "things" in different ways: OOP revolves around the idea of the operations and the data being hauled around together, in a single "object"; functional programming strictly separates the two into functions and data types; and so on.
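
A toy contrast between the two, in C# (hypothetical Circle example):

    using System;

    // OOP: the data and the operation are hauled around together.
    public class Circle
    {
        public double Radius;
        public double Area() => Math.PI * Radius * Radius;
    }

    // Functional style: the data type and the function are kept separate.
    public record CircleData(double Radius);

    public static class Geometry
    {
        public static double Area(CircleData c) => Math.PI * c.Radius * c.Radius;
    }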

In your case, the approach you're describing works great, and fits your use case perfectly; in some other cases, a more object-oriented approach is a better fit. There is no silver bullet.


> business requirements change and suddenly your code organization is wrong

Agreed, and another important case is when you just learn something. The beginning of a project is when we know the least about the domain. Some early assumptions will be wrong! And that's great as long as we've optimized our team's process for learning. Indeed, that can become a strength, where we release new things specifically to learn. Learn about our users, about our domain, about our technical notions.


As Sandi Metz put it: You will never know less than you know now.


That seems overly optimistic. Knowledge can get lost. Both on an individual level and on an organisational level.


Having worked a lot on a 20+ year old code base, I definitely appreciate your point. I think the time frame in the context the quote came from is more limited, though, the point being that if you can delay an important design decision/lock-in till next sprint or month, do it.


Oh, definitely. And I appreciate the sentiment.

Also you might want to fight against forgetting by writing things down. Both individually and as an organisation.


And you'd think someone with three decades of experience would have seen the cycle of software "novelty", ignorance, and rediscovery turn a few times.

https://sandimetz.com/about/


Given the context this came from, I think it was about how to model a particular problem in the context of a current project, not an absolute statement.


> when we know the least about the domain

Strategic domain modeling (DDD) is a method to avoid much of the trouble before coding starts, and to steer in a timely way on an ongoing basis as knowledge of the domain matures. Not suitable for all types of systems, of course, but is anyone using this methodically?


I'm a big fan of Domain-Driven Design. But I think "before the coding starts" is a bad way to look at it. To be ready for a week's worth of coding, all we need is either a small amount of stuff we're sure about or something that we can learn about via shipping something. Delaying for perfect knowledge is a very dangerous habit to get into, so I'd rather avoid it entirely.


Yes, that's a bad formulation on my end. Knowing the domain when coding is what I meant.


If you get organized about building the domain, such as using an ontology, this can be mitigated.


Maybe? It depends a lot on the domain. If it's stable and well-understood, pre-development research can help, both the academic kind and the anthropological. But otherwise, the fastest way to understand the domain may be to ship early and often, while doing lots of user and use observation. And that latter approach teaches you things no amount of pre-development research can, like the right intermediate concepts as you turn human concepts into things that work well for the computers, the users, and the developers.

As an example, consider the social media domain when Facebook launched. No amount of ivory-tower ontology-building would have given them the right conceptual model up front, because the domain co-evolved through interactions between users and platforms.


Yeah, it seems to be a rule that once something is in the hands of users, things that sounded great in theory turn out to be bad in practice.

Or things work so well that new (previously unimagined) improvements become obvious.


The original architects are also well positioned to understand the underlying pivot points of the design, the places where the design can most naturally be shifted towards a new direction rather than hacking around the edges.


And here I thought it meant: convince the stakeholders that the change should be different, in a way that fits the existing system. Shepherding.

I can hear what I want I guess, like most maxims.


This is absolutely critical to achieve sustainable development in large code bases. However, it is also important to know how to pick your battles.

Make the change easy for the critical path and core of the system. But sometimes, especially along the periphery, the right choice is to just make the change and be okay with perpetuating a bad pattern, if it is going to bring disproportionate business value quickly. The key is knowing when to take one approach or the other.


Human communication is a funny thing. We use phrases like “technical debt is bad” and everyone in the room will nod their heads and be in total agreement, but every single person in that room has a different definition of what technical debt is.

To some people it means “buggy code”. To others it means “not the way I would have written it”. Yet to others it means “code that is hard to understand”.

What it means to me could be any one of those things, but the defining factor is that there is an ongoing ‘cost’ to the code in question. What kind of cost? Usually time-based cost. It takes someone’s time to manage and handle the situation.

It could be that the code causes data inconsistencies that have to be managed by hand occasionally. It could be that frequently updated code takes 100 times as long to change as it should. It could be that the code has grown in complexity to the point that I can’t put a junior developer on it, requiring expensive personnel to maintain. It could be that it compromises the user experience, hampering user adoption.

In all of these cases, the underlying point is that it costs money, like true debt does. Which is why using the term “debt” is a perfect description. But if a piece of code is clunky, buggy, or poorly written, yet has no clear impact on the business in any way, is it truly technical debt? One might argue that the person who decided to write it that way made a pretty valuable business decision. It is like a 0% loan. Until that loan starts costing you money, you would be a fool to spend time paying it off.


> But sometimes, especially along the periphery, the right choice is to just make the change and be okay with perpetuating a bad pattern, if it is going to bring disproportionate business value quickly

Agreed. Also tests: if the tests are catching the cases they are supposed to but have anti-patterns in the way they are written, it's not always worth it to fix them, so long as your new test cases can be easily covered.


I would add one more conditional to this:

> just make the change and be okay with perpetuating a bad pattern, if it is going to bring disproportionate business value quickly

... and if it gets cleaned up eventually.

Technical debt should be managed like credit card debt. Absolutely take it on in emergencies, but build the discipline of paying it down when the emergency has passed.


> Technical debt should be managed like credit card debt. Absolutely take it on in emergencies, but build the discipline of paying it down when the emergency has passed.

Technical debt is a misnomer. It's a bit more like 'technical equity'.

It's more equity-like in that it doesn't have a fixed interest rate. And if the project your 'technical debt' sits on becomes less valuable, so does the value of dealing with the tech debt.

But I am not sure comparing code smells to concepts in finance is necessarily a good idea.


No metaphor is perfect.

Note that a lot of debt doesn't have fixed interest rates. And the value of dealing with regular debt also declines when the asset price drops, which is one way we get corporate bankruptcies.

Possible imperfections aside, the question is whether the metaphor is useful. I've found it very helpful dealing with business-educated but non-technical stakeholders, because it can get them from the "just keep cramming features in" mindset to understanding that code quality has business implications, and isn't just devs being prima donnas.

If you have other ways to get that mentality shift, do tell!


I would agree that it doesn’t have a fixed interest rate. But you might say it has a variable interest rate. The trick is to accurately gauge the true interest rate and only work on the technical debt that costs you over time. The real tricky technical debt is the one that has a small cost, but accrues frequently.

At any given time it is easy to tell yourself “this doesn’t cost much and would take me 10 times as long to rewrite than to just make this quick fix”. But if you (or more likely other people) have to keep paying that cost dozens, hundreds, or even thousands of times in the future, then the right decision is to schedule some time to take care of the debt.


I disagree. It should only be cleaned up if nothing else is more important.


On the one hand, that's trivially true and I agree. In which case, I'm saying that keeping debt low is pretty important.

On the other, it's a statement that executive priorities mostly trump technical priorities, which I deeply disagree with. Executives generally have a poor understanding of how technical debt works over the long term: https://web.archive.org/web/20190709091156/http://agilefocus...

Week to week, many will say, "OMG, this feature is way more important than technical debt". But year to year, they will say, "Why is it so hard to get anything done here?!? Why is our stuff so buggy?!? We must have terrible developers!" Which is both a management failure and a failure of developers to maintain professional standards. The first is mostly out of our control, but the second is up to us.


I just do not think that there is anything magic about what some people call "technical debt" (I prefer to call it a "code smell", since how debt fundamentally works is quite different from people's typical conclusions about "technical debt"). Eliminating a particular code smell is like any other improvement to an application, but with a special focus on its lifecycle and the associated costs and benefits.

And there are opportunity costs: fixing a particular code smell means that future features of the application will be delayed. The question is not only at what point in the future (in some YAGNI cases, perhaps never) our now-improved code quality lets us catch up with the feature implementations we deferred: the extended time-to-market for our new features also comes at a price.

So to decide what is actually best, it is not enough to look only at the technical side of a software project. The time-to-market (or time-to-use) cost, which has nothing to do with engineering per se, must also be taken into account.

When deciding when a particular code smell should be fixed, both technical and business aspects are important: Technical effort and benefit must be evaluated and weighed against the aforementioned opportunity costs of feature delays.

That is why good cooperation and open communication between engineers and business people within a company is important. If this is not the case, well, this is another smell that perhaps needs to be refactored first. If it is worthwhile ...


I disagree that technical debt should be viewed through the lens of code smells - those are specific, narrow concerns and if they are not burdensome then they do not constitute debt. There exists debt-ridden code with zero code smells (because its domain model is flat out wrong) and smelly code without debt (because it’s simply not of tangible impact).

Technical debt is, to me, usually half-finished work. Instead of completing the job, we accepted an 80% solution that is designed in such a way that makes the 100% solution impossible without a complete rewrite. Eventually new features always break something or take a year each. It is this drag on speed that is the interest on the debt - we pay it with our time, endlessly.


I agree with you that open communication is vital to correct prioritization. But one good way to resolve that problem is to agree up front that there's a bunch of small stuff that's just not worth getting business stakeholders educated about, and that they should trust professional technical judgment as to how to spend some of the project time.

And I think technical debt is a fine metaphor, because the term gives non-technical people useful intuitions about what is normally invisible to them. E.g., they can take it on when needed, pay it down as available, declare "bankruptcy" via rewrites. And if they don't pay it down, it will consume more of the "income", constraining future choices.


I think there is a minimum floor that should be allocated to this kind of work.

It doesn’t make sense to clean everything to a perfect condition, but there should be some ‘keep your room clean’ level of hygiene that’s maintained.


Exactly. Professional kitchens are very busy, time-sensitive places. But most of the good ones work pretty clean. Not because they're aesthetically fussy, but because they understand that being pretty fast requires being pretty clean.

There's a bit from Bourdain's Kitchen Confidential that really resonates with me along these lines:

Mise-en-place is the religion of all good line cooks. Do not fuck with a line cook’s ‘meez’ — meaning his setup, his carefully arranged supplies of sea salt, rough-cracked pepper, softened butter, cooking oil, wine, backups, and so on. As a cook, your station, and its condition, its state of readiness, is an extension of your nervous system... The universe is in order when your station is set up the way you like it: you know where to find everything with your eyes closed, everything you need during the course of the shift is at the ready at arm’s reach, your defenses are deployed. If you let your mise-en-place run down, get dirty and disorganized, you’ll quickly find yourself spinning in place and calling for backup. I worked with a chef who used to step behind the line to a dirty cook’s station in the middle of a rush to explain why the offending cook was falling behind. He’d press his palm down on the cutting board, which was littered with peppercorns, spattered sauce, bits of parsley, bread crumbs and the usual flotsam and jetsam that accumulates quickly on a station if not constantly wiped away with a moist side towel. “You see this?” he’d inquire, raising his palm so that the cook could see the bits of dirt and scraps sticking to his chef’s palm. “That’s what the inside of your head looks like now.”

Having a reasonably tidy codebase that the whole team understands well is very much like that for me.


Also discussed at length in the book Work Clean by Dan Charnas


I think "cleaning up x" should just be handled as any other to-do-list item. That means evaluating its cost and benefits in context with the other items on the list. Then do what is on position one, then on position two, then on position three, ... Re-evaluate the list from time to time.


Fine in theory, but often bad in practice, because in a lot of places prioritization is done by people with little or no technical background and lots of incentive to show progress.

There are a lot of ways to do this, but my general preference is a black budget plus explicit "credit card" usage. E.g., the team has high standards for any new feature work and quietly spends 15% of their time every week on continuous technical improvement. If the product manager wants to break the normal standards and take on technical debt (e.g., rush to have feature X ready for a trade show), then you break the work into "rush to add feature X" and "clean up feature X mess". The first gets done before the trade show, the second after.

And personally, if a product manager doesn't honor the deal, I say they get their credit card taken away for a while, because they've proven they can't be trusted to do right by the team and the company.


Actually, I have an explicit agreement that I can spend around 10% of my time on issues of my choice, without having to check with management. And sometimes, for bigger refactoring work, I have negotiated explicit to-do-list items. On the other side, in the process of re-evaluating the to-dos, I myself often recommend postponing refactoring when I see the need to implement a certain feature fast, or when I have the impression that the code quality is "good enough" for the moment. Since management trusts that I can appreciate the business perspective as well, I actually have little trouble asserting myself when I think a particular refactoring issue should be prioritized.

So in my case, it works in practice.


No, treating as "any other to-do list item" does not work fine in practice. And you know it, because you spend 10% of your time on work specifically kept out of the normal to-do list flow.

And I'm glad you have a very good relationship with your business stakeholders, but please recognize that's not the modal case, and that giving advice as if it were is going to be bad for people with other circumstances.


Something I think hasn’t been discussed much in these comments is what the alternative would look like, and why that is undesirable.

If you only perform the refactoring after making a change (eg. adding a ticket for cleanup later), then (a) you made it hard to do the change, but also (b) it’s now far less vital to perform the refactoring - as the desired outcome has already been reached.

With refactoring for refactoring's sake, it can be hard to figure out what the goal or end-state is. Making a refactoring in order to facilitate a particular desired change acts as a great forcing function, helping you decide what to change and why.


Organized developer:

Find clean plate and utensils. If there are no clean plates and utensils, then wash some. Then eat putting the food on the plate, using the utensils. Then wash your dish and utensils when you are done.

Most developers:

Eat a little bit of food using any object from the kitchen as utensils, then discard all the rest by throwing it on the floor, because your task was only eating that one bit of food. Bugs and rats will infest the kitchen and eat the leftovers.

Then the organized developer will be put in a performance improvement plan, because fuck logic. There's no time to be clean because our performance is being measured on how much food we eat.

Company interviews will be about eating a 10-course meal using a Victorian cutlery set with 120 utensils. On the first day, the developer will ask where the forks are, and the team will reply "we know that forks are the proper utensils for this, but our policy is to eat all food using spoons".


> Organized developer: Find clean plate and utensils. If there are no clean plates and utensils, then wash some. Then eat putting the food on the plate, using the utensils. Then wash your dish and utensils when you are done.

And in almost every organisation I've worked for, someone outside engineering (or frequently inside) will complain about wasting time on clean plates when there are perfectly good scraps of food in the bin.


> In a way, this quote is saying “first do what you should have always been doing; being organized” and “then do what you came here to do in the first place (add a feature, fix a bug).”

Maybe it's just me, but that's not really how I understand it.

Perhaps it's because I generally run into this when I'm thinking about accomplishing something new with a hack. Specifically, I want to do something that was unanticipated in the original design, and "goes against the grain" of what is there now.

One solution, as suggested by the above interpretation, is to refactor your codebase in a way that the addition now fits in smoothly. That's a great thing to do a lot of the time. But beware: there isn't a "best" or "perfect" factoring that you can ever reach, no matter how much refactoring you do. There may be some things that are strictly improvements over what came before. But beyond that, a codebase has to optimize for a subset of the possible uses and dimensions of flexibility: what do you want to be easy to change?

Your new feature or usage may not be the thing to orient the codebase around. Sometimes, it really is just a one-off, or an experiment, or temporary. In that case, refactoring everything to make it easy is just wrong.

The original quote still applies, though. If you inject a hack here, a hack there, you'll quickly end up with a mess. Very often, the better thing to do is to do whatever refactoring is necessary in order to add a general extension mechanism that permits the hack you need, and only then use it to implement the hack. The difference is that the hack will still be messy, it may have hardcoded values, and it may not get along with similar hacks. But its scope is limited and the amount of damage or distortion it can cause to the surrounding design is bounded.
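
As a hypothetical sketch of what "a general extension mechanism that permits the hack" can look like (names invented for illustration, in C#):

    using System;
    using System.Collections.Generic;

    public record Order(int CustomerId, decimal Total);

    public class OrderPipeline
    {
        // The general mechanism: an ordered list of adjustments.
        private readonly List<Func<Order, Order>> _adjustments = new();

        public void AddAdjustment(Func<Order, Order> f) => _adjustments.Add(f);

        public Order Process(Order order)
        {
            foreach (var adjust in _adjustments)
                order = adjust(order);
            return order;
        }
    }

The hack itself stays messy and hardcoded, but its blast radius is a single registration rather than edits scattered through the core:

    // One-off special case, quarantined behind the hook.
    pipeline.AddAdjustment(o =>
        o.CustomerId == 42 ? o with { Total = o.Total * 0.9m } : o);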

I guess fixes are more likely to require refactoring. Features or extensions should be considered very carefully before adapting a large chunk of the system to suit them, which can come at the cost of making it harder to adapt to the original purposes.


Your understanding may fit, actually. If the one-off hack is easy to add, it doesn't need refactoring to add it. Maybe ten or twenty one-off hacks can be added before anything gets difficult. It's only later that things become difficult and the code requires refactoring.

Or, put another way: you are taking something that's difficult to add, and making it easier to add by giving up clean integration with the existing design (i.e. making it a hack). Maybe this means future cross-cutting updates don't automatically work, or you don't get some functionality out of the box. But if you've decided that's acceptable, it lets you put off the extra effort of making things easy by extending the design. Until the cross-cutting updates or extra functionality become needed too often :)


This is basically the point of Martin Fowler's seminal book Refactoring.

The book struck me as a book on unit tests much more than a book about refactoring itself, for exactly the same reason as described in the article.


This makes more and more sense to me the longer I program, and the few projects where I've been able to do it, are the few projects whose code still seems delightfully elegant to me. It is basically taking care of your "technical debt" before it becomes debt at all.

(And I agree with other comment threads that it isn't always because of a mistake earlier, it's because the program has been called upon to do new things that weren't anticipated before, it's not so important to decide if they _should_ have been or not).

But to do this requires a few things which can be rare:

1. Sufficient "Ownership" of the code to be permitted to do the refactoring. Like if you show up with a PR on an open source project that is new to you, to refactor to first make the change easy... it's going to go nowhere. Internal projects then will depend on the culture and social organization...

2. Time. If it's essentially eliminating your technical debt before it's acquired... well, the reason technical debt is acquired is because for better or worse, choices are made not to spend time on it, whether wisely or unwisely.

3. Skill. It's a craft. The more you do it, the better at it you get. Ideally learning by observing people do it who have already done it more than you. And it also requires having a general mental model of the existing architecture you are changing, which, first see #2 above, and second, can be hard to do if it's a mess. (Also, and this may be the most challenging -- with insufficient skill and sensitivity, your refactorings to "make the change easy" can actually make things worse, make your immediate change easy at the cost of a complicated spaghetti architecture which actually just over-complicates everything).


Re: 1, the refactor is fairly likely to be ignored until the functional change is also proposed, but that's OK; review tools should be able to deal with a patch series two deep.


Even with the functional change proposed, I would say your chances of getting such a relatively "complex" change actually reviewed and merged are lower than if you just proposed the smallest change that gets the functionality working; and your investment is higher, with a higher risk of it just sitting there.

I think for this reason and similar, open source code especially can end up subject to accretion of things people are reluctant to change or seriously refactor. Committed non-burnt-out steward(s) can mitigate.


I don't think it's particularly related to the original framing, but practising this makes for much better commits IMO.


The same concept applies for system administration, your environment (read codebase) eventually gets so complex that it accumulates odd infrastructure bugs that pile up over time exactly like tech debt. Sometimes you must just spend time figuring out how this infra piece (read code block) that was implemented by someone long gone from the company even works. This usually involves having to fix it as well.

You and your team must piecemeal the long term fixes if you want to make any progress towards understanding, scalability and reliability.


I generally like this approach. Typically my workflow will be to prototype a feature, see where the existing API needs to be extended, and then carve out a two-PR sequence; I) refactor the code to make the new change easy, II) make the change.

One issue with just submitting the refactor PR on its own is it can be hard to tell if the refactor is sensible without seeing the new usage. I like stacked PRs for this case.


This is the opposite of TDD's "Red, Green, Refactor".

https://www.codecademy.com/article/tdd-red-green-refactor

The TDD version protects against making useless changes.

The "make change easy" version protects against management stopping work as soon as the tests pass.


TDD applied twice is Red, Green, Refactor, Red, Green, Refactor.

The only difference here is where the start/stop boundary is: Refactor, Red, Green, Refactor, Red, Green.

Combining the two: (optional Refactor), Red, Green, Refactor each time.


I've noticed with this approach it's important to try not to rush. If you're like me, you see the deadline or the triviality of the bug and think "I can't charge two days to fix this". But the average time to fix bugs in the code base will go down if you fix these things.


Don't change it, replace it! As in CRUD without the U, only CRD; as in append-only event sourcing; as in blockchain. The code represents your business value, much like the bookkeeping ledger that records your monetary value.


Refactor then functional changes is the right play where reviewers are involved. It separates moving things around and renaming, which is really noisy in a diff tool, from the functional change which hopefully changes much less code.

The first review is looking for any functional change in the noise, the second to assess a functional change without any noise. Adding tests in the first phase is great but existing ones shouldn't change. Maximises chance of review finding mistakes / minimises cost to review.


I agree with this, but I phrase it differently. I refer to eliminating technical debt relative to the change before making the change.

More details in [1], where I explain my software development process.

[1]: https://news.ycombinator.com/item?id=32210402


I've always been curious, is this how atomic changes are accomplished in databases?

I always figured if you changed a table, you would set up the data post-change in a separate place, then switch a pointer to point to the new data as the very last operation.


What's "this" in your question? I don't see the immediate connection between a programmer's workflow and a database row update.

But your understanding of what happens inside the database engine is too simplistic: first of all, the data does not necessarily exist as only one copy because there may be multiple indexes or materialized views covering that data. And each copy needs its own pointer switch as the very last operation.

Secondly, the "atomic" change you mention is only atomic from the application point of view. Multiple clients may each have their own isolation levels and execution state, which means that the in-memory "switch" to the new data may happen at different times for different connections (especially for isolation level repeatable-read).
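
The naive single-copy version of that idea does show up in application code, though. A toy sketch in C# (assuming one reader-visible array; nothing like a real engine):

    using System;

    public class CopyOnSwapTable<T>
    {
        // Readers always see some complete version of the data.
        private volatile T[] _rows = Array.Empty<T>();

        public T[] Snapshot() => _rows;

        public void Rewrite(Func<T[], T[]> transform)
        {
            var next = transform(_rows); // build the post-change data elsewhere
            _rows = next;                // switch the pointer as the last step
        }
    }

A real database multiplies that by every index and materialized view, and layers MVCC on top so that different transactions can see different versions at the same time.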


...unless every time you try to make meaningful changes, people try to urge you to hack in stuff quickly so the code only gets messier to the point the maintainer gets a mental breakdown ;)


Question for everyone: do you refactor in a separate pull/merge request, or just a separate commit within the same pull/merge request?


I don’t know about everyone else, but for me it depends on the scale of refactoring.

If I myself think that receiving such a mixed PR would be troublesome to properly review, then the refactoring gets to be a PR of its own.

If the changes are small enough to all keep in your head at the same time… Then it’s one PR.


I think separate is ideal. But sometimes I only realize the refactor is needed halfway through my changes; at that point I might try to redo a chunk on a clean workspace so it can be moved to a separate PR, or I'll just proceed and give a generous explanation of all the changes in the PR with inline comments (comments in the PR, not in the code) and offer to walk reviewers through it.


Does anyone have the original quote by Kent Beck handy?

I feel like this is one of those things that seems obvious but is a bit more deep than that.


A 2012 tweet said, “for each desired change, make the change easy (warning: this may be hard), then make the easy change”, and clarified in a comment, “Glad you found it helpful. Sometimes when the work is hard it signals that we're doing it wrong. Sometimes it's just hard.” [1]

A 2020 tweet claims to have an older form of this insight c. 2000 which would be written in DOT as

    digraph {
      "straightforward?" -> "add feature" [ label = "Y" ];
      "straightforward?" -> "isolate change" [ label = "N" ];
      "isolate change"
        -> { "refactor" "create" }
        -> "straightforward?"
    }
See [2].

So the claim is that new feature development per se should always be “easy” (in the Rich Hickey sense, "adjacent" or "in nearby reach"), but that there should exist potentially a lot of hidden work which is not feature development.

For example, you build a log ingester which runs super fast, it buffers the logs and batch inserts them into a NoSQL store, gets very nice throughput... But then you want to add a feature for which your NoSQL storage just doesn't index right or so. The advice here is to abstract the problem first, figure out where it creates friction. Are these for example ad hoc user searches? Or do we know them ahead of time? Those create new structural constraints, for some work that is not directly tracked... Perhaps just adding some appropriate index, computing periodic summary rows, streaming the recorded logs into a MapReduce type cluster, switching to a relational database that better matches your domain, or whatever.

None of this work directly yields value to your customer, indeed you are attempting to do a pure refactor with no observable changes to the clients. But, because you abstracted the problem you now have the ability to easily solve a bunch of nearby problems at once, and you use this to solve that target problem.

Why would Kent Beck want this? Because he is talking about extreme programming and agility and changing the software in response to user feedback and all that. He needs that freedom to solve nearby problems easily, because the core tenet of Agile is that you ship ≈crappy software and let your customers tell you what's crap about it, rather than pretending to know yourself. “Hey the colors are awful but here's a first pass.” “I don't care about the colors, I care that you think every purchase tracked by the system belongs to a contract.” “Uh. What. I thought you said contracts had the purchases underneath them? We had a long conversation about this!” “Yeah but just use your head! Sure we usually purchase things for contracts. But we also purchase things for bids, or for demo products, or to upgrade our tools...”—that is the Agility problem in a nutshell, I want to find this out before the codebase is 30,000 LOC+ and fixing the broken data model requires three months’ rework!

So Kent Beck needs to be able to offer a client a “change/feature” with the understanding that they might say “What. On. Earth. No. Just, no. What?” and say “oh ok my mistake I can tweak this to do what you want, give me a day or two.”

My own version of this is the aphorism, “Magic is just cleverly concealed patience.” What I mean is that if you dive into what magicians do, you will often discover that it just required a lot of prep work: and the reason that you assume magic is you can't imagine someone spending that much time on that stupid of a thing. “Who would buy 52 decks of cards just to assemble one that is ENTIRELY made up of the Jack of Spades?” Well, a professional magician would. Some sleight of hand, they fan out a normal deck, swap the decks so that you shuffle the Jack of Spades deck and pick any Jack of Spades you’d like, how about that. Same happens with professional kitchens prepping everything before service. If you cleverly conceal the patience, you get a reputation for working magic at your job.

[1]: https://twitter.com/KentBeck/status/250733358307500032

[2]: https://twitter.com/KentBeck/status/1218307926818869248


This post is like 12 words long. How is this of value?


The value is the quote itself. To me, its resonance is much wider than the scope of the post (refactoring). It reminds me of LessWrong’s removal of trivial inconveniences.

The post is just an opportunity to bring the quote to our attention, and it can be safely disregarded or interpreted more widely.


Ah Kent Beck, where would the industry be without him?


Probably making snarky comments about some other programmer whose book(s) they never read and whose contributions they don't understand.


I mean without Kent Beck we would not have made much progress as an industry. We wouldn't have agile or scrum for example!


Was it snarky? Personally, I think he has made a number of really important contributions to our field, and so I read that as sincere.


Maybe I was being ungenerous, but most comments that are one-liners like that, "Where would we be without <Person>?", when it relates to individuals like Beck or Fowler or others here seem to be sarcastic, not sincere. I suppose the original commenter could clarify that.


Given the lack of apologetic reply, it looks like you were right!


Also assuming my comment is snarky and making a snarky comment back is pretty cool.



