Why is YAGNI a good rule? (lionelbarrow.com)
29 points by lbarrow on Aug 24, 2013 | 54 comments



YAGNI is well and good for certain types of problems, especially prototypes and proofs-of-concept--essential to succeeding, even.

However, the very second you know for sure that you'll need to expand the system, you need to stop and pull on your big-kid pants and do some design. Any fool can keep throwing code at slight variations of a problem until they're buried in spaghetti (and many do!), but people that actually know what the hell they're doing will bust out the abstraction hammer when it becomes clear that it's prudent (and not a moment before).

Generally, once I've had to solve a similar problem three times it's time to refactor. Once is happenstance, twice is coincidence, but three times suggests that things need to be designed.

EDIT:

The author's example is particularly poor. A dog, chicken, and cat all need to move, follow a player, interact with the game world, update, spawn, and die. It makes perfect sense to have a Pet class handle this entity-lifecycle glue code, and then to specialize a subclass to override the rendering or update code.
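Something like the following is all I have in mind--a rough sketch in Java, where World, Player, and every method name are stand-ins I made up, not actual Minecraft code:

    // Lifecycle glue lives in Pet; subclasses only override what differs.
    class World {}
    class Player {}

    abstract class Pet {
        protected double x, y, z;

        void update(World world, Player owner) {
            follow(owner);             // shared follow-the-player logic
            interact(world);           // shared spawn/despawn/world handling
        }

        void follow(Player owner)  { /* shared pathfinding */ }
        void interact(World world) { /* shared collision and lifecycle */ }

        abstract void render();        // each animal draws itself differently
    }

    class Dog extends Pet {
        @Override void render() { /* dog model and animation */ }
    }

The subclass carries only what actually differs; everything else is the glue I'm talking about.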

There are cases where YAGNI is alive and well, but this example simply isn't it.


> The author's example is particularly poor.

I thought the author's example was reasonably good, but it doesn't capture the spirit of YAGNI as I see it.

Instead, the way I'd phrase it is: we're introducing pets to Minecraft. For now, this means a dog. Maybe we'll add chickens and cats later, but our first release of this feature will only have dogs. I could create a Pet class and a Dog class that inherits from it, but maybe I won't. Creating a class hierarchy is only going to slow me down, so I'll skip it for now and just create Dog.

Fast forward a couple months, and the feature is a huge success. But, instead of an interest in cats, the users are now clamoring for different breeds of dogs. Good thing I didn't waste time creating a class hierarchy rooted at Pet, because then I'd have to juggle Pet -> Dog -> {Shiba Inu, Golden Retriever, etc.} (although I'd personally implement dog breeds with new attributes on Dog, but you get the point).

Or, the feature is a flop, and is going to be cut. Good thing I didn't spend time implementing a class hierarchy because that would've been even more wasted development time. I'm glad I got the feature into market as quickly as possible to validate it.
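To make the breed point concrete, here's roughly what I mean by attributes on Dog instead of a deeper hierarchy (all names are hypothetical, nothing here is a real Minecraft API):

    // Breeds as data on Dog rather than a Pet -> Dog -> ShibaInu class tree.
    enum Breed { SHIBA_INU, GOLDEN_RETRIEVER, HUSKY }

    class Dog {
        private final Breed breed;

        Dog(Breed breed) { this.breed = breed; }

        void render() {
            // pick the texture/model per breed instead of per subclass
            switch (breed) {
                case SHIBA_INU:        /* shiba model */     break;
                case GOLDEN_RETRIEVER: /* retriever model */ break;
                default:               /* generic dog */     break;
            }
        }
    }

Adding a breed is a one-line change to the enum, not a new class.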


Sure, sure, but A/B testing the feature is going to be a hell of a lot easier when you can easily switch out dogs for cats for chickens for optimized dogs.

You don't want to get stuck in a local design minimum. Adding a class hierarchy here would've been basically free (subclass overrides for two or three behaviors, tops).

Again, there are good examples out there of when YAGNI is important--this wasn't one of them.


If it's so little effort to add a layer of abstraction, then it will be just as easy to add it later. If you end up needing to add a feature that the abstraction would make easier, you can still go ahead and add the abstraction first. You won't have wasted any time; it's just that you added the abstraction after, rather than before, a different piece of work. If you don't end up needing it, then your code is simpler and you haven't wasted any time.


Sir or madam, the road to hell is lined with classes whose interfaces were defined after the fact. Usually you'll end up with a bunch of special snowflake classes and some classes which actually have thought-out interfaces, just enough to make you angry.

Software engineering through accretion and backporting is not, by definition, engineering.


Perhaps a better way to put it is that your code base develops through assimilation rather than through predicting abstractions and then shoe-horning features into them. In my experience the most elegant interfaces consistently emerge from the repeated process of adding features and refactoring the code to best fit them at that time. I can't tell you how many awkward abstractions I've had to work around at earlier points in my career because the project lead was trying to get everything to fit into their grand vision. Projects developed through assimilation mature far more smoothly because the design emerges to match reality. When you try to predict abstractions, projects tend to leap and lurch. The repeated cycle is prediction, stretching, breaking, and finally rewriting.


Of course, at the other end of the spectrum, you have ridiculously over-designed systems whose class hierarchies stand as a testament to the hubris and folly of the developers who created them.

I've dealt with both, and—more frighteningly—systems that veer from one extreme to the other depending on how far behind schedule the project was at the time that particular piece of the system was implemented.

And then, of course, there's lots of code that lives somewhere in a happy middle.


Your example of three times is another common refactoring heuristic called the "rule of three" (I think Don Roberts coined the term). It's not incompatible with YAGNI; rather, by the time you've got three examples you have proof that you do need something.

Arguing about YAGNI is sort of hard because developers fall into two camps: those who have spent more time in systems that were too abstract, and those who have spent more time in systems that were not adaptable enough. Your feelings on YAGNI are going to be heavily biased by which camp you're in.

I fall into the very pro YAGNI camp, because I think it is very, very hard to "design" for an unknown future. You can guess, but that's all it is, a guess. Why not wait until you can know what the abstraction should be?

Finally, introducing a class hierarchy for code reuse is almost always a bad idea, YAGNI or not.


"Finally, introducing a class hierarchy for code reuse, is almost always a bad idea, YAGNI or not."

Agreed, which is why component architectures are awesome. :)

That said, in certain languages and with certain system philosophies, it's the Right Thing to do--especially when you know that the feature will be needed. I'm not advocating that we abstract prematurely out of ignorance, but rather out of convenience, when we already know the feature is coming.
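For anyone unfamiliar, a bare-bones sketch of the kind of component setup I mean (names are mine and purely illustrative):

    import java.util.ArrayList;
    import java.util.List;

    // Behaviors are components attached to an entity instead of baked into
    // an inheritance tree; a "cat" vs. a "dog" is mostly a different mix of parts.
    interface Component { void update(Entity owner); }

    class Entity {
        private final List<Component> components = new ArrayList<>();
        Entity add(Component c) { components.add(c); return this; }
        void update() { for (Component c : components) c.update(this); }
    }

    class FollowPlayer implements Component {
        public void update(Entity owner) { /* walk toward the player */ }
    }

    class BarkAtStrangers implements Component {
        public void update(Entity owner) { /* dog-only behavior */ }
    }

    class Demo {
        public static void main(String[] args) {
            Entity dog = new Entity().add(new FollowPlayer()).add(new BarkAtStrangers());
            Entity cat = new Entity().add(new FollowPlayer()); // swap parts, no hierarchy
            dog.update();
            cat.update();
        }
    }

Swapping dogs for cats (or "optimized dogs") is just a different mix of components, which is exactly what makes the A/B-testing scenario upthread painless.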


The problem is that the design will often prove to miss things or be broken in ways you did not anticipate because people are poor at anticipating future needs. This adds cost whenever someone needs to read/understand or modify the code in the future.

YAGNI is not an excuse for "throwing code at slight variations of a problem until they're buried in spaghetti".

It does, on the other hand, mean that introducing the Pet/Dog separation before you have chickens and cats may not be a good idea.

As you say, dogs, chickens and cats all need to move, and that's not likely where YAGNI will benefit you. Where it will benefit you is that in a lot of cases it will turn out that the extensions you thought would get added won't, but others will. Say you believe all pets will walk, and someone decides to add eagles. And what about fish--do you treat them as moving? They certainly won't follow the player around.

Or maybe no other pets than dogs will get added at all. This latter point is important, as a huge contributor to code complexity in many projects is simply unnecessary structure added because people want to "be prepared". That makes sense for closed-source libraries written for external consumption, where you need to predict your users' needs, and providing extension points etc. is crucial to making your software usable. But for open source, or internal projects, it makes more sense to instead wait until you can get actual feedback from someone who wants to use the code.

When you add another type of pet, you should still do it cleanly rather than hack it in. The point is not to defer proper structure, but to implement only the structure that is justified directly by the current code.

At the same time, it is worthwhile to keep this intent of future restructuring in mind--you should keep coupling down anyway, but if you apply YAGNI, loose coupling becomes even more important as it keeps refactoring costs down. E.g. if you refer to the Dog class by name all over the place, that's not a great idea.
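E.g., a minimal sketch of what keeping the Dog name out of the rest of the code might look like (all names here are hypothetical):

    // The rest of the game talks to an interface and a factory, so adding
    // Cat later means touching one place instead of every call site.
    interface Companion { void update(); }

    class Dog implements Companion {
        public void update() { /* dog behavior */ }
    }

    class CompanionFactory {
        // Today this always returns a Dog; tomorrow it can branch on kind
        // without the callers knowing or caring.
        static Companion create(String kind) {
            return new Dog();
        }
    }

    class Game {
        public static void main(String[] args) {
            Companion pet = CompanionFactory.create("dog");
            pet.update();
        }
    }

When Cat shows up, the factory grows one branch and the call sites don't change.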


I chose that example precisely because it makes sense to have a Pet class that holds common code between Dogs, Cats and Chickens. When the code only contains Dogs, though, the Pet class is unnecessary, even if you plan to introduce Cats at a later date.


Plus, the generic behaviors of a Pet (say, life and death) are all logically separate from dog-specific things. Even if the world only contains dogs, having a Pet class might lead to more organized code.


But you knew right then that you were going to implement the other two classes, and that you would need the code. It's not like you aimlessly added abstraction to cover your lack of design and grasp of the problem space--you had done some back of the envelope design and knew you needed it.

That's not premature at all.


Your code shouldn't be designed to reflect what you believe is going to happen to the program eventually; it should be designed to fit the requirements as they are right now. If a Pet class makes sense later on, add it later on.


The example stated "when we add chickens and cats", not "if we decide to add chickens and cats". It was clear that the requirements at the time included 3 types of animals--if it wasn't clear, you should've written the spec better.

Look, I'm on your side about YAGNI and shipping products, not code, and whatever other stuff we probably both genuflect at. I'm actually dealing right now with a lot of fallout from overengineering a project that should just have been kicked out the door and set to work.

That all said, too much shoddy engineering and cowboy-coding is done with YAGNI given as a hollow excuse. There is a lot of pushback on doing upfront design work, but even a cursory consideration of the problem domain and code structure will oftentimes yield better code than what would otherwise get waved away under the banner of "we aren't gonna need it".

EDIT:

Hell, I'll take it one step further: when developers do decide to be lazy and claim YAGNI, they often aren't lazy enough, and instead write hacks that are too large to just get something done. If you are going to half-ass something, by God, half-ass it all the way.


If I had a dollar for every time clients have said "when we X" and then later went on to insist we need to do Y instead, that is completely opposite to X, and counter to everything agreed in specs, I'd be happily retired by now, living off my fortune.

It'd be great if we could all know in advance what the requirements will actually be, but that requires us to be able to visualize exactly how everything will actually work in advance. That is rarely the case.

YAGNI is not about avoiding design (though I agree it might be used as an excuse), but about deferring decisions that do not need to be locked in yet, until they need to be. That does not mean you should not design. Just don't go into unnecessary levels of detail, and iterate when more detail is needed.


> if it wasn't clear, you should've written the spec better.

This is basically a spec-writing problem in the first place, not a code problem. OP is wearing the spec-writing hat and the coding hat at the same time. But it doesn't change the question at hand. Should the spec for 1.0 include hooks and scars for features you expect to be in 2.0, or should you wait until it is confirmed with 100% certainty that they will exist?


A lot of the time when people say they know, they only think they know. And even if they do know, chances are they don't know how they will do it, and so which functionality will actually be shared.

Even if you insist it isn't premature, the question I'd ask is: for what purpose are you adding it? If no code currently needs it, it's just extra complexity at this point, taking time away from writing code that is needed right now.

Now, if you're going to add the Cat class tomorrow, and it will happen 100% guaranteed unless you get hit by a bus, then sure - that's still in keeping with YAGNI, I'd argue.


So what happens down the road when you do need it? Seems to me it violates DRY and borks your serialization for save states. Maybe it's just in this particular example though, or maybe it should be viewed not on a code level but more on a feature set level.


So would your Dog class have both dog stuff (e.g., code that models dog psychology) and game stuff (code to deal with animation loops, collisions, and such)?


I've worked on some big, mature code bases, and I've always appreciated it when other people believe in YAGNI. It tends to result in relatively simple code with just enough complications to support the features which actually exist.

In theory, it would be nice if the code included appropriate hooks and abstractions. But in practice, it's often quite hard to guess where those hooks should be, and which aspects of the code should be abstracted. And since the abstractions are based on hypothetical designs, they're very often useless.

Basically, I'd rather maintain over-simplistic code that implements a few features well than code that has been prematurely generalized to support half a dozen features which don't exist yet. All too often, the generalizations are wrong, or simply get in the way of other features I'm trying to add.

Unfortunately, YAGNI is one of those rules that requires common sense to apply well. It works best as a sanity check for an already-experienced programmer: "Really? Do I actually need a whole new set of abstract interfaces here? Or can it wait?" If a programmer is totally lacking in taste and judgement, YAGNI won't save them by itself.


Maybe YAGNI should be applied to generalizations as well. Don't make one until it becomes hard to add something to your code, but if you do - generalize as little as possible.


I'll confess, I don't understand YAGNI.

I think I've been tainted by too many counter-examples: seeing a piece of code that was specially written for a very narrow and specific use case, and needing to extend it to some scenario that the original author did not envision. When the author hasn't been thinking in terms of re-use and extensibility ("because they ain't gonna need it"), that gets frustrating and regression-prone.

In other words, you ain't gonna need it, until you eventually do. Lazy programmers shouldn't be excused for dismissing maintainability with a simple catch phrase.


I understand it. I was formerly employed at a BigCo during a six-month period when the entire IS department was in the throes of architectural analysis paralysis. If you've never seen an abstract factory factory factory, then you could be forgiven for thinking the only people who invoke YAGNI are slackers who want to be lazy.


I've worked at Microsoft, so maybe I do know what a slow-moving BigCo is like. Yet for all the cruft I did see, I never once recall thinking, "if only these people had YAGNI..." Usually the problem was just the opposite, that something was built that was overly specialized and not re-usable and somehow managed to spread its implementation details all over the place.

The factory-factory-factory example reminds me of a certain brand of OO spaghetti, very typical of people who are applying ideas from OOP but don't have a firm grasp of what problem those ideas were meant to solve. I see that problem as distinct from YAGNI; it's just a problem of bad design.

My own advice would be: it's fine to make some shortcuts in the interest of productivity, but please try to keep them isolated. The details of your shortcut shouldn't leak heavily into your design, and the next person to maintain it should be able to swap out pieces without fearing the rest of the thing will fall over.


Agreed. Most of the hand-wringing I see against overengineering is just people trying to justify bad programming practices.


YAGNI is not about being lazy, it's about being smart about the code being written, about the craft.

Over-engineering and architecture astronauts hurt a system a lot more than someone following YAGNI, producing systems that are a big mess of hard-to-understand, hard-to-modify code (which is even worse). I've seen this happen again and again.

Architects sitting in ivory towers pump out Word documents which result in overly complicated systems: large, slow, and a waste of people's time and money. The whole idea of "oh, in 10 years we may want to handle this and that" adds unnecessary code which then needs to be tested and creates many more points of failure. And did I mention slow? These hairy-balled monstrosities are terribly slow.

If over-engineers got paid by the line of code, they'd be millionaires. If they got paid for writing elegant, short and easy-to-maintain code, they wouldn't have two pennies to rub together--the YAGNI developer would be the one sitting on his yacht.

You Ain't Gonna Need It!


I'm not so convinced that YAGNI generally leads so directly to "elegant, short, and easy to maintain code" as you say. Certainly, with a very keen sense of when to make the transition from Ain't to Are, it can lead to time-savings and simpler and more focused code, but with less discipline or tight deadlines or (typically) both, it can also lead to repetition and the complexity of maintaining many one-off features, even if each is relatively simple in itself. Something very similar can be said for a more architectural philosophy - with a very keen sense of when a design is liberating and when it is burdensome, it can lead to less repetition and targeted maintenance, but with less discipline it can lead to analysis-paralysis and over-engineering. Like most things, the best answer is somewhere in the middle.

My preference is YAGNI while keeping a sharp eye on things that are relatively hard to make extensible later and relatively easy to make extensible now. To tack on another not-great example to the not-great Dog example - if I find myself writing 'dog' a bunch of times in a bunch of ways, I may well pull that out into some sort of `type` variable even if I-ain't-gonna-need-it.
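Something as small as this, say (the constant and class names are made up):

    // One named constant instead of the literal "dog" scattered through
    // spawn code, save files, and logging; a rename later touches one line.
    final class EntityTypes {
        static final String DOG = "dog";
        private EntityTypes() {}   // no instances; this is just a namespace
    }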


You're correct that YAGNI doesn't necessarily lead to "elegant, short, and easy to maintain code". That is, in the end, up to the developer's skill.


Well, I agree with you because you're agreeing with me, but I'm not sure your comment that I replied to agrees with you. From my reading, it definitely implies that YAGNI leads to good code and architecture astronautics doesn't, entirely ignoring developer skill. Perhaps you just oversold your point.


It would be much easier to discuss issues of design patterns and when to apply them if we weren't constantly being bombarded with straw men. Ivory towers and word documents? Come on now, this adds nothing to the discussion and yet this is basically the entirety of the argument against anticipatory structural code.


I divide YAGNI into two cases: security and bloat.

Bloated features are what you just described--more often than not, it's unproductive to code in extra things you don't need when you're on a deadline. Your time is better spent elsewhere. And yet, I think you're right that a lot of developers will eventually need/want what they're putting in anyway.

On the other hand, YAGNI has significant security implications. I posted an example downthread about how a database could be compromised by adding in a back-end feature for lazy admins. In that case, the code you ain't gonna need is really something you want to avoid.

A lean system is an easier system to plug up. But I also understand that's a bit outside the scope of strict features.


For another example where YAGNI is not a good idea, see https://news.ycombinator.com/item?id=6268480 about apps not handling HTTP response code 410 (resource is gone) correctly.
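For example, distinguishing 410 from 404 takes only a couple of lines with Java's built-in HttpClient (the URL here is a placeholder):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    class GoneAwareClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://example.com/resource/42")).build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            if (response.statusCode() == 410) {
                // Gone for good: delete the local copy, stop polling.
            } else if (response.statusCode() == 404) {
                // Not found: might reappear, reasonable to retry later.
            }
        }
    }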


Maybe it shouldn't be seen as a hard rule but as some heuristic in the back of your mind, to avoid over-abstraction, digression and never-shipped depression. A counter-balancing notion.

And most of the time people mean YAGNI for the first expression of need-vs-implementation. Surely they know there's a good chance that one will need a more solid abstraction later, but it's just to avoid digging too deep into the wrong hole. Somehow it's like http://en.wikipedia.org/wiki/Simulated_annealing . Avoid being stuck in a local maximum.


> seeing a piece of code that was specially written for a very narrow and specific use case, and needing to extend it to some scenario that the original author did not envision

There's pain in both options: making something specific that has to be made more general later, and making something overly general and never actually having a use for the additional complexity.

The former, though, is solving problems you have, and the latter is solving problems you often don't end up having. Given the choice, I would always prefer the former.


Thinking about reuse and applying YAGNI are not opposites.

On the contrary, I find that to apply YAGNI successfully, you must think about reuse, or you'll waste a ton of time refactoring and the cost of adding the features you do end up needing will be massive.


I can't really agree, although it's hard to argue against a contrived example and a general rule that is almost a tautology (don't write code you don't need).

A lot of times "architecture astronauting" is simply good separation of concerns. To take the Dog example: there is a lot of code necessary to get a Dog into the world of Minecraft that isn't specific to the concept of Dog at all, such as drawing itself, keeping track of its state in the world, etc. These are the types of concerns you would want in a base class of Dog and potential Cats and Chickens. This is something that should be factored out, not because you may eventually make a Cat, but because those concerns have no connection to the concept of Dog at all. Keeping such code together will just make comprehending the actual Dog-specific code harder.
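In code, the split might look something like this (illustrative names, not anyone's real engine):

    // World/rendering/state concerns live in Entity; Dog keeps only what is
    // actually about dogs. The split is justified with zero Cats in sight.
    abstract class Entity {
        protected double x, y, z;
        void tick() { /* position updates, world-state bookkeeping */ }
        void draw() { /* generic render plumbing */ }
    }

    class Dog extends Entity {
        void bark()       { /* dog-specific */ }
        void fetchStick() { /* dog-specific */ }
    }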


(OP here.) I agree that code examples would be better than intangible statements; I included a few samples in early drafts of this post, but ultimately concluded they made it too long.

I'm not saying we shouldn't try to make components reusable or that we shouldn't use abstractions. Rather, we should introduce reusable components when they're actually reused, not when they've only been used once. Component re-use should be introduced into a program lazily.

In the Minecraft example, once I add the Cat class, I would go back and refactor the Dog and Cat classes to inherit from Pet. The point I'm trying to make in the blog post is that I wouldn't do that until I had actually written at least two classes that I knew would be inheriting from the Pet class.


Yes, it was clear what your point was. My point is that there is code in your Dog class that would do well on its own, irrespective of any supposed Cats or Chickens. Separating positioning/movement code and barking code makes real design sense. Separation of Concerns is at times the antithesis of YAGNI.


You are assuming that you know upfront which of that code actually will be separate.

Some of it may seem very obviously separate, but that does not mean it is clear what the generic version will look like for all practical use cases.

E.g. a generic "move" method may look substantially different if it only has to deal with pets that walk on the ground vs. also handling pets that fly.

How much do you generalize it in advance before it is too much?

If the code really is becoming so large it is a maintenance issue, then sure, yes, separate it anyway. But do it explicitly for that reason.


A thousand times yes. I totally agree but this is one of those practices that's really hard to teach. A big part of the job for a software engineer is balancing abstraction with specificity. The argument from the architecture astronauts is always that once the system gets big, we'll be glad to have the AbstractServiceFactoryFactoryManagerFactory. My argument is that unless we're shipping the "big system" on this release cycle, we should do what's best for the small system. In all likelihood the system will never get big and by the time it does the requirements will change. In all likelihood useful components will get rewritten many times anyway.

I think YAGNI is a really good rule of thumb when you define "need it" as "need it for the current release".


There are three reasons why YAGNI works so well.

First, people systematically underestimate uncertainty. The human brain is good at recognizing patterns and making plans but not so good at dealing with randomness and uncertainty. When talking about software development and maintenance we're talking timescales of years. Companies go out of business, projects get cancelled, code gets thrown out and rewritten. It's easy to paint a pretty picture about how the Cat class will get added to the Pet hierarchy but there is tons of uncertainty involved in even the simplest project. Because the scenario is specific the brain overvalues its likelihood compared to the nebulous unknown alternatives.

Second, keeping code as simple as possible is almost always for the best. YAGNI forces smaller simpler code. More complexity and more code means more room for bugs. All the studies I know of consistently show that more code means more bugs.

Finally, YAGNI is a principle that forces priorities to be in the right place. The question is never between spending time making something that might be useful in the future and doing nothing; it's between doing one particular useful thing or another. I can't think of a single case where the right priority is to do something that MIGHT help in the future. You start by doing things that DO help now, then things that WILL help in the future.


It is even simpler than this.

There is a non-linear relationship between number of requirements and the complexity of the resulting software. The exact relationship is hard to nail down - I've seen some data suggesting that complexity goes as the cube of the number of requirements - but it is clearly non-linear.

This holds whether the requirements are handed down by a product manager or generated by a developer who mistakenly believes that future types of requirements are predictable. The impact of additional requirements on the complexity of our software is all out of whack with how many requirements you added.

Here is food for thought for those who are not convinced. Suppose that a developer who wants to abstract things doubles the set of requirements and the relationship is the cubic one that I suspect. Then your task is 8 times as hard. And you don't have working software until it is done.

Suppose that 2/3 of those requirements turn out to be needed eventually and, horror, there is no way to do it without a complete rewrite. Then you spend the full effort now on what you actually need, 4x as much work on the rewrite later, and your total effort eventually is 5x what it takes to produce what you really needed right now, AND you get to enjoy a working partial solution before you're done.

Even a worst case scenario for YAGNI (yet) comes out better than trying to aggressively plan for the unknown future.


The problem with YAGNI as a development slogan is that it can be too broadly applied.

I think a better wording might be, "don't increase complexity writing code you may not need". The issue is usually not the presence of additional code. Rather, it's making design decisions based on requirements that may never in fact come to pass.


I'll join the chorus. If you're not going to need it, don't do it; but if you will need it and you don't do it, you're going to pay dearly. "Software Engineering" is about being able to tell the difference, vs. just throwing something together.


No, I wouldn't think so at all. A little foresight and modular design is not _that_ hard. Sure, after you implement the Dog class you know better what the Pet class may need, but you can most likely already have 75% of that Pet class ready beforehand.

If the OP's software design (for a particular subset!) is so complex he can't imagine it in his head, then either the design is too complex or the programmer is not very good, imo.

And why the hell do we need the next acronym?!


I was nodding along, but man was that example painful. I'm sure it was made in good faith...but it really didn't do justice to why ignoring YAGNI is bad.

I'll bite: code you don't need is really terrible for security purposes. Aside from the typical YAGNI feature examples, there are plenty more in application maintenance and back-end development.

Let's say you have a database with several layers of privileges. You frequently have to perform actions that require higher privileges than your user has, and you're a lazy dev. So you try to automate this process by adding further code to your system, expediting the entire process at the cost of some authentication.

Bam, huge red flag and a half. Yes, you can have this and defend it at the same time, but in most cases, it's silly to try. Adding code just adds another attack vector and a half.
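Roughly the shape of what I'm describing (everything here is made up):

    // A "convenience" that quietly sidesteps the privilege model. Any caller
    // can now do admin-level work, so a bug or injection anywhere becomes a
    // full compromise: the feature nobody needed is the attack surface.
    class Database {
        void execute(String sql, boolean asAdmin) { /* run the query */ }
    }

    class AdminShortcuts {
        static void runAsAdmin(Database db, String sql) {
            // no authentication check, no audit trail: "we'll need it later"
            db.execute(sql, true);
        }
    }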

That said, yes of course adding features you don't need will bloat the system. But that pet example kinda sucked, I'm sorry.


Design Patterns are one of the biggest culprits in large, over-engineered systems. http://37signals.com/svn/posts/3341-pattern-vision


Perfect post about one of the most important principles in programming. Nine times out of ten, a concise design with fewer lines of code is a better design than over-engineered abstractions over abstractions over abstractions over...


If you are going to use a rule like this, you really need agile project management to guide what you are going to need. If you are doing Scrum properly, you plan the features (and internal architecture needed for those features) for up to a month at a time, but you don't architect for purely internal things more than a month in advance (the YAGNI principle) since one of the Scrum principles is to limit work in progress.

If you are doing a waterfall PM process, it doesn't apply so much.


I totally agree! The problem is that sometimes you just can't convince your coworkers. They'll say this XYZ feature is coming sooner or later, and they'll argue that abstraction means cleanness. And as the future is so uncertain, arguing about what might happen in the future always ends up with no conclusion, and they still think their future-proof approach is worth it.


But the truly future-proof approach is not to add the code, but to write the code you do write in a way that makes it easy to support the expected changes anyway.

E.g. for the pet example: avoid hard-coding the class name all over the place, so that if you do end up having to add a Pet superclass and add Cats in the future, you don't need to make changes in more than one or two places.

Writing loosely coupled and reusable code is good practice anyway, and also makes testing easier, but if you practice YAGNI it becomes extra essential: You know your code almost certainly will need to change in the future, so you should structure your code accordingly to make extension and refactoring easy.


I'd like to see some analysis on the real "cost" of a line of code.


This sort of reminds me of Minimum Viable Product methodology (MVP) but I think YAGNI is at a later stage in the development cycle.



