It sounds like most of your pain came from the code not being DRY. That is, this data size constant was duplicated in many places, rather than defined in one central place.
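A minimal sketch of that distinction, with a hypothetical record-splitting function (the names and the size 64 are invented for illustration):

```python
# Non-DRY: the record size 64 is a magic number repeated at every use
# site, so changing it means hunting down every copy.
def split_records_non_dry(data: bytes) -> list[bytes]:
    return [data[i:i + 64] for i in range(0, len(data), 64)]

# DRY: one central definition, so a size change is a one-line edit.
RECORD_SIZE = 64

def split_records(data: bytes) -> list[bytes]:
    return [data[i:i + RECORD_SIZE]
            for i in range(0, len(data), RECORD_SIZE)]
```

Neither version speculates about future features; the second is just cheaper to modify, which is the "effort to make the software easier to modify" that the Fowler quote below carves out of YAGNI.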
Unless I'm misreading you, that's not an appropriate YAGNI case, as Fowler writes:
"Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify"
That's a very convenient distinction. It lets you No True Scotsman anyone who challenges your position, yet provides little if any practical guidance about the best thing to do in the real world.
That is an argument for refactoring only immediately prior to implementing a new feature in order to support development of that feature. In itself that is reasonable enough, but it becomes less effective as a strategy if the cost of just-in-time refactoring prior to implementing each new feature turns out to be significantly higher than the cost of setting up the same design at an earlier stage.
When to refactor/clean up code is an interesting topic. My rule is to only refactor old code when the bad design gets in my way. If we have some bad code that just keeps working, there is not much reason to clean it up.
New code I try hard to factor into tip top shape.
This is entirely separate from YAGNI in my dictionary.
That all sounds perfectly reasonable, but please answer me this: how do you decide what "tip top shape" is for your new code?
If YAGNI is an argument for not making any sort of advance judgement about future requirements until it's clearly necessary, then it is necessarily also an argument that as soon as any code meets its immediate requirements you should stop working on it and move on to the next sure requirement, without wasting any time on refactoring that might never prove useful for future development.
I suspect that many here who would say they agree with YAGNI do in fact keep editing their code past the point where it merely works, rather than leaving it as spaghetti. In that case I would argue that the difference between our positions is merely a matter of degree, not a difference in the underlying principle.
Yeah, some/many people forget about the Ruthless Refactoring part of XP. Or they're just not good at it. Like how some decide to not write documentation and declare themselves "agile".
The successful XP teams I've been on probably spent 1/4 of their time refactoring. Once your code works, you clean it up, and refactor anything it touched. THIS IS THE DESIGN PHASE! Without it, you're just another pasta merchant. What truly blew my mind was that designing/architecting the code after you write it is so much easier and more effective.
> If YAGNI is an argument for not making any sort of advance judgement about future requirements until it's clearly necessary, then it is necessarily also an argument that as soon as any code meets its immediate requirements you should stop working on it
That is not the YAGNI I know. It applies to external requirements only. Keeping your code base well designed, readable and bug free is an entirely separate concern.
> it becomes less effective as a strategy if the cost of just-in-time refactoring prior to implementing each new feature turns out to be significantly higher than the cost of setting up the same design at an earlier stage.
Or rather, the cost of setting up the best design you could at an earlier stage, knowing what you knew at the time, and then of modifying that design to be the design you want now.
But yes, if that turned out to be cheaper than just-in-time refactoring then that would be a better way to proceed. (IME it never is cheaper though).
This is always the danger of proof by anecdote or personal experience in a field as large and diverse as software development. I could just as well tell you that I have seen numerous projects get into trouble precisely because they moved ahead too incrementally and consequently went down too many blind alleys that were extremely expensive or even impossible to correct later.
It's true that I have not often seen this in something like the typical CRUD applications we talk about a lot on HN. However, if for example you're working on some embedded software with hard real time constraints, or a game engine that will run on a console and has to give acceptable frame rates with the known processing horsepower you have available, or a math library that needs to maintain a known degree of accuracy/error bounds, or a safety-critical system that requires a provably correct kernel, I suggest to you that trying to build the whole architecture up incrementally is not likely to have a happy outcome.
You missed the point that I probably made too subtly, that it went from being a set size to a variable size. Yes, we did have some terrible non-DRY code and it would have been a little easier to make the change if not for that. But going from same size all the time to different size every time was never going to be as easy as changing the value of a constant.
>It sounds like most of your pain came from the code not being DRY.
Exactly this. Yagni does not preclude following generally good design principles, which almost always saves future pain; unlike attempting to predict and design for the future.