
Yagni - petercooper
http://martinfowler.com/bliki/Yagni.html
======
eridius
Something this article largely ignores is that often you want to build the
support for the presumptive feature now because the act of implementing it
affects architectural decisions. It came close to this by saying

> _it does not apply to effort to make the software easier to modify_

But there's a difference between making software easier to modify (where the
same effort could be expended later to retrofit the software) versus making
architectural decisions. Oftentimes, trying to implement something teaches you
that your architecture isn't sufficient and you need to change it. If you can
learn that while you're building the original architecture, then you've
avoided a potential _huge_ amount of work later trying to retrofit it.

To use the example from the article, what if supporting piracy risks reveals
that the fundamental design used for representing pricing decisions isn't
sufficient to model the piracy risks, and the whole thing needs to be
reimplemented? If you can learn this while implementing the pricing in the
first place, then you haven't lost any time reimplementing. But if you defer
the piracy risks by 4 months, and then discover that your pricing model needs
to be re-done, now you need to throw away your previous work and start over,
and now your 2-month feature is going to take 3 or 4 months instead.

~~~
saganus
As I get older, both as a person and as a dev, I have come to the same
realization as you: that a lot of the time I end up building things that I
don't need "right now", but which I have to at least plan for at the
architectural level, lest it be a nightmare later on.

And then it hit me... THIS is one of the areas of software development where I
don't think reading more articles, techniques, "mantras", etc. will help. Only
more practice will help. It ends up boiling down to how much experience you
have as a developer making those particular decisions.

I.e. It's a craft, not a science.

So as you grow as a developer you suddenly find that you start to have a
certain "intuition" as to why you should actually plan for something you don't
need, vs. ignoring something (a real "yagni"), but you just can't explain it
in terms of generalizations. You end up saying, "well... it just feels like I
should do this, because I have been burned in the past when I ignored this
intuition."

And I don't think I'm alone in this train of thought.

~~~
lowbloodsugar
This sounds like the journey from "realization of the cost of hacking" to its
"cure", the grand solution.

I had the same epiphany in my 14th year of software development when I
realized that programmers Must Be Given Solid Platforms of Well Architected
Object Oriented Code or they will Make A Mess.

However, now in my 34th year, my view is more "Plans are useless, but planning
is indispensable".

I have "grand" architectures, but my architectures and their implementations
are designed with the assumption that they will be wrong in a week's time.

And my advice to anyone reading this is YAGNI. More practice helps. But
practice the right thing.

I.e., it's war, not craft.

~~~
saganus
This is actually very interesting.

However, I still am left with the feeling that you have reached this
conclusion by means of more than three decades of practicing both the right
thing and the wrong thing as well (we all make mistakes, etc).

So basically, how did you realize that "Plans are useless, but planning is
indispensable" if not precisely by trying to plan, and then observing its
uselessness?

I could almost bet that you now assume the architecture will be wrong in a
week's time and yet you always find a way to make it resilient to this and
make it work; otherwise I would have to wonder how you ever accomplished
anything if indeed 100% of your architectures were wrong in a week's time.

I.e., this is your experience in your craft. Flowing like water, Bruce Lee
would say.

Now, the point I did not get, though, is: what exactly is the difference
between it being a craft and it being war?

If it's a craft I kind of assume, like you, that I'll never be able to design
the perfect architecture, but only the best architecture that I could have
produced at the time. And just strive to get better every time.

If it's war, then it basically means that at any time anything can explode,
but then I would be filled with despair all the time, I guess. Maybe not. Is
this what you refer to? If it's war, you have to design assuming it will break
very shortly and then work around that? Or "make do with what I have", as they
say?

Again, very interesting points and I appreciate the discussion since this has
been haunting me almost my entire career.

Edit: By the way, I hope you don't mind if I quote you. Those are some nifty
points right there.

~~~
lowbloodsugar
I have done my fair share of over-engineering. =) It's not over-engineering if
you need it, but it turns out, most of the time, you ain't gonna.

A team that just hacks and does YAGNI by accident, or a team that does YAGNI
but doesn't plan, often has a choice of how to build the feature right now.
With a little bit of planning, we can see past the first few meters and
choose the first step to be in the direction of the bigger plan. Without
planning, the first step might be in the wrong direction. The hard part for my
teams is to plan, and then still only _implement_ the first step. They're
getting better =)

It's war because if you're doing something worthwhile, there's someone else
trying to get your customers. I'm not suggesting your stuff should break. The
opposite. Combine YAGNI with TDD and push working, tested features to
production ASAP.

~~~
ScottBurson
_With a little bit of planning, we can see past the first few meters and
choose the first step to be in the direction of the bigger plan._

I had a suspicion that despite the black-and-white way you have expressed your
opinion elsewhere, particularly in your exchange with eridius, there probably
was actually some common ground here. I don't believe it's possible to avoid
the failure modes eridius describes without thinking ahead to some extent;
this passage suggests you actually agree.

I can't speak to your experience with your team, but I can say that I've spent
a lot of time fixing or working around other people's poorly-considered design
decisions. In many cases I understand there wasn't time to do it better; the
product had to be shipped. But in some cases I think a little reflection would
have shown a better way which wouldn't have taken any longer to implement.

------
krupan
I dealt with this recently. We wrote a system that had a hard-coded data size
in it. Later, that data size had to change to a flexible value chosen on the
fly, and I was the one that had to go through all the code to make all the
changes so we could handle that. It took a long time. Partly because we had no
unit tests (a whole 'nother discussion), partly because I wasn't familiar with
the area (all the areas) of the code that needed to change, and partly because
we had said "YAGNI" and not made our code configurable enough from the
get-go.
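
The retrofit described above usually amounts to threading a size parameter through code that assumed a constant. A minimal sketch of the difference, with all names invented for illustration:

```python
# Hypothetical sketch of the retrofit: a size that was hard-coded
# everywhere vs. one chosen on the fly by the caller.

RECORD_SIZE = 512  # the original baked-in assumption

def read_record_fixed(buf, index):
    # Every call site silently depends on the 512-byte layout.
    start = index * RECORD_SIZE
    return buf[start:start + RECORD_SIZE]

def read_record_flexible(buf, index, record_size):
    # The size is now a runtime value; callers decide per data set.
    start = index * record_size
    return buf[start:start + record_size]
```

The painful part of the chore is that a constant like `RECORD_SIZE` leaks into arithmetic all over a codebase, and without unit tests every leaked site has to be found by hand.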

I started to curse YAGNI in the middle of that chore, but then I paused and
thought about the many months of productive use of this code that we had been
enjoying before this point. And even though it took me significant time to
make that change, the production code was still running along just fine during
all that time, still bringing us value. I decided that I was glad we had said,
"YAGNI" at the start.

~~~
BurningFrog
It sounds like most of your pain came from the code not being DRY. That is,
this data size constant was duplicated in many places, rather than defined in
one central place.

Unless I'm misreading you, that's not an appropriate YAGNI case, as Fowler
writes:

"Yagni only applies to capabilities built into the software to support a
presumptive feature, it does not apply to effort to make the software easier
to modify"

~~~
Silhouette
_"Yagni only applies to capabilities built into the software to support a
presumptive feature, it does not apply to effort to make the software easier
to modify"_

That's a very convenient distinction. It lets you No True Scotsman anyone who
challenges your position, yet provides little if any practical guidance about
the best thing to do in the real world.

~~~
BurningFrog
That might be a way to "win" internet debates. But I use it to write software.

It's actually quite simple. YAGNI applies to features/functionality only. Not
refactorings.

~~~
Silhouette
That is an argument for refactoring only immediately prior to implementing a
new feature in order to support development of that feature. In itself that is
reasonable enough, but it becomes less effective as a strategy if the cost of
just-in-time refactoring prior to implementing each new feature turns out to
be significantly higher than the cost of setting up the same design at an
earlier stage.

~~~
BurningFrog
When to refactor/clean up code is an interesting topic. My rule is to only
refactor old code when the bad design gets in my way. If we have some bad code
that just keeps working, there is not much reason to clean it up.

New code I try hard to factor into tip top shape.

This is entirely separate from YAGNI in my dictionary.

~~~
Silhouette
That all sounds perfectly reasonable, but please answer me this: how do you
decide what "tip top shape" is for your new code?

If YAGNI is an argument for not making any sort of advance judgement about
future requirements until it's clearly necessary, then it is necessarily also
an argument that as soon as any code meets its immediate requirements you
should stop working on it and move on to the next sure requirement, without
wasting any time on refactoring that might never prove useful for future
development.

I suspect that many here who would say they agree with YAGNI do in fact keep
editing their code beyond the point where it merely works, rather than leaving
it looking like spaghetti, in which case I would argue that the difference
between our positions is merely a matter of degree, not a difference in the
underlying principle.

~~~
BurningFrog
Yeah, some/many people forget about the Ruthless Refactoring part of XP. Or
they're just not good at it. Like how some decide to not write documentation
and declare themselves "agile".

The successful XP teams I've been on probably spent 1/4 of their time
refactoring. Once your code works, you clean it up, and refactor anything it
touched. THIS IS THE DESIGN PHASE! Without it, you're just another pasta
merchant. What truly blew my mind was that designing/architecting the code
_after_ you write it is so much easier and effective.

> If YAGNI is an argument for not making any sort of advance judgement about
> future requirements until it's clearly necessary, then it is necessarily
> also an argument that as soon as any code meets its immediate requirements
> you should stop working on it

That is not the YAGNI I know. It applies to external requirements only.
Keeping your code base well designed, readable and bug free is an entirely
separate concern.

------
rsp1984
In my years of experience (both in big corporate R&D as well as startups) I've
_never_ seen a project fail because of the end product being too simple or not
having enough features.

However, on the other hand, I've seen _countless_ projects suffer delays,
staff departures, emotional team arguments and eventually a bad end product
because of too-complicated software architecture that overwhelmed its creators
and because of obscure features that well-meaning engineers built in because
"it'll save time in the long run".

Btw this well known XKCD sums it up just perfectly:
[http://xkcd.com/974/](http://xkcd.com/974/)

------
beat
As always, it's a joy to read Martin Fowler.

It's so, so hard to resist the siren song of premature optimization, though.
It's even harder in a corporate environment where "Well, why didn't you plan
for that?" is a question lurking behind every failure or delay. Overplanning
isn't punished.

~~~
briandear
I disagree. If overbuilding or overplanning is wrong, then the question
becomes why you spent $200k building a feature that just got thrown out. Or
worse, why they try to use an unneeded feature just to justify the fact that
it was built.

~~~
beat
I didn't say it isn't wrong. I said it isn't _punished_. That's a very, very
different thing.

Enterprise environments encourage toxic behavior in numerous ways. This is one
of them. Risk aversion is generally more important than cost control in the
enterprise. Conway's Law in action, if you think about it.

------
jonahx
These days, I think "Yagni" is often used as an excuse for sloppy code and as
a justification to avoid thinking about design.

I think the principle as Fowler describes it is valuable, but it can be easily
abused. Taking the application of "Yagni" at the micro level to its logical
conclusion can be used to justify a mindless "just move on and fix it later"
style of coding into which good architecture cannot be retrofitted.

~~~
lmm
I have yet to see up-front "architecture" add value, and I have seen it sink a
company.

~~~
jonahx
Hi lmm, perhaps the word "architecture" has bad associations for you that I
don't intend to imply. If you code anything non-trivial, you are doing
architecture whether you like it or not. The only question is how well you do
it.

Over-engineering is a real danger, to be sure, and terrible ideas can be
justified in the name of "architecture," just as they can in the name of
YAGNI.

Fwiw, I have personally seen costly decisions made in the name of YAGNI,
literally. So I don't think it is essentially the "safer" default.

My view is that you can't always wish away complexity with heuristics. Often,
you have to think through your special situation, weigh the factors, and make
tough judgment calls without knowing if they'll be right.

~~~
lmm
I'd be interested to hear the specifics, or examples of the kind of
"architectural decision" you're talking about. Let me tell my story of how
architecture went wrong:

* We defined the data structures early on in a generic way, when we thought we were building a product for multiple regions. In fact we tried two regions and discovered it only sold in one of them, so we were producing a product that only actually needed to support a single region, with a massively overengineered generic data structure

* We defined component boundaries, saying that certain functions would live in certain modules. In fact these were not the correct boundaries, and simple operations involve several rounds of back and forth, exacerbated by:

* We built a "SOA" style system with multiple components on the grounds that we would need horizontal scalability. We never got to the level where we needed that kind of scale, and the architecture massively slowed down development/debugging.

* We decided that certain interfaces would use Java datatypes because we thought the system would primarily be written in Java. As we built the system it became clear that Scala was a better choice for many components, so we ended up with a lot of code that was converting scala objects to java just to send them through an interface, and then the system on the other side was converting them back to scala objects.

* We tried to make a choice of RPC system early on. We ultimately went through three iterations of different RPC systems, as the various approaches failed.

You could say these are just wrong choices, and there's an element of this.
But I think in all of these cases we'd have made the correct decision had we
been driven by use cases (YAGNI style) and deferred making decisions until we
actually needed them.

------
endymi0n
We have one rule when planning product features: anyone may shout YAGNI at any
time. It has saved us so much time and complexity, it's incredible.

On the other hand, I have a love/hate relationship with YAGNI, as it requires
careful weighing. Additional features that need to be reflected in
architectural changes, which would require not only refactoring code but API
and data model changes as well, possibly with data migrations coordinated
across multiple teams, are a completely different picture. The cost of
building it now may be an order of magnitude cheaper than doing it later.
Further down the road, startup mechanics and traction apply as well. If you
will have an order of magnitude more resources to fix YAGNI later, it might be
worth the additional speed right now. I feel that putting all of these into
context and doing something that makes sense is just freaking hard every
single time.

------
bsder
Except that Fowler used YAGNI to produce the "Chrysler Comprehensive
Compensation System" that _FAILED MISERABLY_. I still don't understand why
people continue to listen to him--he has yet to demonstrate a large project he
was in charge of that succeeded.

YAGNI gets you the easy 80%. The problem is that YAGNI means you didn't
_prepare_ for the hard 20%, and now you're going to get killed.

Of course, YAGNI is really good if you're an ambitious manager since you'll be
gone when the 20% comes home to roost.

~~~
lmm
Most C3-like projects fail after a decade and millions of wasted dollars.
Failing faster and cheaper provides real value.

~~~
bsder
Except that there was already a working system. So, the C3 system was
_totally_ wasted money, and never achieved any significant subset of
functionality of the old system.

A "proper" use of agile would have been to gradually convert chunks of the
legacy system to a unit-tested system. Use bug reports and feature requests to
prioritize the chunks which need to be tested. Once that happened, you could
begin evolving the old system and adding features without worrying about
breaking it.

But _that_ wouldn't get noticed. Much better to have a _BIG, NOTICEABLE_
failure that you blame others for than a less noticed success. That's how you
get promoted, don'tcha know.

See:
[https://en.wikipedia.org/wiki/Putt%27s_Law_and_the_Successfu...](https://en.wikipedia.org/wiki/Putt%27s_Law_and_the_Successful_Technocrat)

------
chris_va
I think this is trying to simplify an already simple idea, succinctly:
"Optimize your time intelligently".

Rules of thumb (like YAGNI) may not be correct, depending on the situation.
Estimating time complexity is hard, so deferring development defers mistakes
in estimation. Great.

However, if you know that you have a 90% chance of needing a feature in 6
months, and it will be 2x easier to build now when you have a team actively
engaged on a similar project, then YAGNI is the wrong choice. Counterexamples
exist, like when you are in such an extreme environment (say, the first month
of a startup) that the opportunity cost of those programmers is very high.
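
The arithmetic behind that claim is easy to make explicit. Using the comment's hypothetical numbers (90% likelihood, 2x cost later), and normalizing today's cost to 1:

```python
# Expected cost of each option, with the comment's hypothetical numbers.
p_needed = 0.9    # chance the feature is needed in 6 months
cost_now = 1.0    # normalized cost of building it today
cost_later = 2.0  # retrofit cost if deferred (2x harder later)

# Build now: the cost is paid with certainty, even in the 10% of
# worlds where the feature turns out to be unnecessary.
expected_cost_build_now = cost_now

# Defer: the (higher) cost is only paid if the feature is needed.
expected_cost_defer = p_needed * cost_later  # 0.9 * 2.0 = 1.8
```

Under these numbers, deferring is expected to cost 1.8 vs. 1.0, so building now wins. What the normalization hides is exactly the counterexample's point: the opportunity cost of what those programmers could have built instead.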

Regardless, I would ask "Is this the best use of time now" rather than
"YAGNI".

~~~
pbreit
The whole point of the article is that you're wrong. And I tend to agree. Not
trying to predict feature needs down the road is extremely empowering.
Building your unnecessary feature now is just as likely to increase the
complexity of everything in the future as it is to be 2x easier to build now.
YAGNI dramatically simplifies the answering of your question: if it's not
needed now, don't do it.

~~~
chris_va
Using your example: let's say I know it will cause a complexity burden in the
future. Shouldn't we ask whether that complexity burden and opportunity cost
are outweighed by the expected benefits, rather than simply relying on a
simplistic rule of thumb of always deferring?

People are capable of (which, to your point, is not the same as being
consistently good at) making good calculations as to what is a waste of time.

~~~
pbreit
Quite a contrived scenario, but no. Just build the feature you need and move
on. No need to over-analyze.

See? I didn't have to get trapped in your pretty much un-answerable pondering.

Following the rule crystallizes a lot of decision-making. It's very empowering
when you don't have too many objectors.

~~~
chris_va
And yet, you might take longer to finish the project than someone else. Just
because something is freeing doesn't mean it's a good idea. Nor does it mean
it is a bad idea, it just means it's an uninformed idea.

Obviously it's suboptimal to ponder an un-answerable question. That doesn't
mean you shouldn't spend one minute thinking about it.

------
jmadsen
I think it is important to differentiate between "early building" of features
you don't yet need, and building in a way that doesn't require much
refactoring in order to extend or add those features later.

More common than overbuilding features is overbuilding function or library
capability. The right balance, IMHO, is _thinking_ about the extendability &
designing for it but leaving out the actual details. This tends to lead toward
good, SOLID designs that are easy to work with later rather than a blind alley
you need to spaghetti-mofongorate to get to work.

"Refactor later" is fine for folks like Martin Fowler who know _how_, and who
haven't painted themselves into a corner with bad early decisions. It is a bit
dodgy to tell that to the average mid-level dev who just wants to "get it
done" so his Project Manager can tick the on-time box in an Excel spreadsheet.

~~~
moron4hire
Extensibility is itself a feature, so YAGNI would imply it shouldn't be
assumed until proven necessary.

I personally use the "zero, one, many" rule (i.e. those are the only counts of
things, there is no such thing as 2 or 3, etc.) as my proof of necessity. If I
ever get to the point that I need two of something, I make it useful for many
of something.
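
A sketch of the rule as described, with an invented example: the first time a second address shows up, the field becomes a collection rather than an `address2` slot.

```python
# Illustrative "zero, one, many" sketch: there is no count of "two",
# so the moment a second item is needed, generalize to many.

class Order:
    def __init__(self):
        # A list from the moment "one" became "two"; no special
        # second field, no third field later.
        self.shipping_addresses = []

    def add_address(self, address):
        self.shipping_addresses.append(address)
```

The proof of necessity is the second concrete use, not a guess about the future.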

~~~
beat
I've found that designing for extensibility is wrong more often than right.
The code doesn't stretch the way you expect, and then it strains against other
boundaries you didn't realize you were setting.

~~~
jmadsen
that's not quite what I mean.

saying, "I think it will need feature X, which will require parameter x_id, so
I'll add x_id = 0 now" is wrong.

writing (just as an example) a function that takes a config array as its
param so you can later build it out, instead of passing a_id, b_id, c_id...
"oh, crap, how long a list will this be?" in the early build, is the type of
thing I mean.
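
A small sketch of the two signatures being contrasted, with names invented for illustration:

```python
# Rigid: each new id forces a signature change and an edit at every
# call site.
def build_rigid(a_id, b_id, c_id):
    return {"a_id": a_id, "b_id": b_id, "c_id": c_id}

# Extensible: a config mapping with defaults, so later options merge
# in without touching existing callers.
DEFAULTS = {"a_id": None, "b_id": None}

def build_flexible(config=None):
    return {**DEFAULTS, **(config or {})}
```

The flexible version costs almost nothing up front, which is what distinguishes it from speculatively building the feature itself.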

Writing extensible code more often than not causes you to write smaller,
tighter, more testable components. Writing in this style is what I'm
advocating. Many less experienced devs would not do this and end up with long
procedural code because they feel it is wasted time for something they aren't
anticipating.

------
phamilton
I'm always a bit surprised when we treat "cost" as something binary.

Successful people, in my experience, are those who take calculated risks and
are right often enough to differentiate themselves. Software is no different.

When faced with a YAGNI situation, you need to make a judgement call. There
will be costs in building it now. Those costs might be less now vs later
(sometimes just because the context is fresh). The cost may vary depending on
how much work you do.

There will also be "expected benefits": the estimated (judgement-call)
likelihood of actually needing it, times the cost savings (future cost minus
current cost). Compare the expected benefits between projects and pick the one
you expect to add the most value.
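
That judgement call can be written down as a tiny expected-value comparison. All numbers and project names below are invented for illustration:

```python
def expected_benefit(p_needed, cost_later, cost_now):
    """Judgement-call likelihood of needing the feature, times the
    cost saved by building it now rather than retrofitting later."""
    return p_needed * (cost_later - cost_now)

# Compare candidate investments and pick the highest expected value.
candidates = {
    "draw the abstraction now": expected_benefit(0.6, 5.0, 1.0),  # 2.4
    "build the full feature":   expected_benefit(0.2, 6.0, 4.0),  # 0.4
}
best = max(candidates, key=candidates.get)
```

Which matches the comment's conclusion: cheap abstractions with plausible payoff beat expensive speculative features.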

As pointed out by others, this often means simply drawing the abstraction in
the right spot to make future development easier. Even if future development
never happens, it's likely you built a good abstraction that was well thought
out. The cost is low (you had to frame the abstraction somewhere) and the
expected benefits are high.

------
angersock
_Yagni only applies to capabilities built into the software to support a
presumptive feature, it does not apply to effort to make the software easier
to modify._

And here is where the _actual_ frontlines of the battle are fought--because
it's needlessly verbose and abstract architectures (or what people think those
are, anyway) that are railed against by folks chanting "yagni yagni yagni".

This article neatly sidesteps the actual thorny issue of "well, when is
designing extra actually a good idea?" by ignoring one of the biggest
use-cases of yagni.

~~~
apalmer
I hear you. I think this article is really good for someone who isn't familiar
with the term YAGNI. However, as the above poster mentioned, it sidesteps a
big issue, which is when YAGNI is wrong.

Generally I feel like YAGNI is appropriate for features but not appropriate
for architecture. I feel like the purpose of architecture is to try to plan
ahead, or to make the determination of how far ahead one is attempting to
plan. It's perfectly fine to architecturally choose not to support something,
but it should be a conscious decision.

~~~
Jare
If I may extend your metaphor, the problem then becomes the scaffolding that
lies between architecture and features. Another attempt at a rule of thumb is
to ask the question: can I describe how this likely-but-not-yet-needed change
would be addressed in the code? If there's no answer (or the answer is not
satisfactory), the question becomes: what do I need to do to have a good
answer to the previous question? And so on.

------
webtards
I think the ability to use YAGNI scales with the experience and abilities of
the developer and the team. I have walked into wonderful lean codebases where
an experienced hand has kept featuritis down to a minimum, but I have also
walked into codebases where multiple kitchen sinks were not only present and
plumbed in, but a pile of new ones were ready and waiting by the side to join
them, and it was a massive distraction.

------
addisonj
These comments seem to illustrate something I have felt for a while: yagni is
an incredibly divisive topic, and for every person praising it there is
another who is at the very least cautious of its application.

It makes me wonder where the divide is.

Perhaps it is the large 'enterprise' shops, where projects can spin out of
control into what they 'must have', versus the small shops, where you need to
get a product out ASAP?

Or maybe the age of the developer, with the young eager guy confident he can
whip out all n features in 3 weeks (so we might as well add that feature too)
vs. the experienced dev who knows better and says we will add it once it is
needed?

I have put some time and effort into asking people about this very question,
but still am no closer to understanding why some really appreciate it and
others hate the phrase.

Personally, my guiding principle after being bitten by yagni is that it is a
wonderful principle to use when making _product/feature_ decisions but should
be exercised very carefully when making architecture decisions. Which makes
sense: features come and go, but architecture is generally something you need
to live with. In other words, whenever yagni is used as an argument against
what are good coding and design practices rather than against practical
problems, it may be more a case of 'no really, we are going to need it'.

~~~
mtVessel
Funny, I think of the age divide running the other way. Younger devs, who came
up during the age of agile, take YAGNI as a given, while older devs have seen
the results of too many hasty decisions.

The advice I was given when I started out was, "it's cheaper to fix something
upstream". Catching a potential problem during design is always cheaper than
catching it during development, which is cheaper again than catching it after
release.

This argument always seems to get lost in the YAGNI discussion.

------
robmcm
The use case in these discussions is never about frameworks or shared code.

Taking the example of the lookup tables for error messages, imagine if you
shipped your framework and lots of teams started using it, then you iterate to
change your APIs in a breaking way and suddenly the cost of that change is
multiplied by all the teams refactoring.
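
In shared code, a little indirection is cheap insurance against exactly this multiplied cost. A sketch (the table contents and function name are invented, not the article's actual example):

```python
# Error messages behind a lookup keyed by stable codes. Consuming
# teams depend only on message_for(), so the table's internal layout
# can change without a breaking API change rippling through every team.
ERROR_MESSAGES = {
    "E_STORM": "Shipment delayed: storm risk.",
    "E_PIRACY": "Shipment flagged: piracy risk.",
}

def message_for(code):
    # Unknown codes degrade gracefully instead of crashing callers.
    return ERROR_MESSAGES.get(code, "Unknown error.")
```

The stable surface (`message_for` and the codes) is what keeps an internal iteration from multiplying into refactoring work across every consuming team.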

It is also worth considering the cost of changing existing production code,
this is typically where most bugs are introduced, especially when someone else
is implementing the fix without the domain knowledge of the original author.
Unit tests won't help if they also have to be re-written to match the new API,
as they are by definition new/different tests.

I think writing malleable code is the key takeaway from this article, and if
thinking about future possibilities helps with this, then it should be
encouraged.

Being unable to predict the future doesn't mean you shouldn't anticipate and
prepare for change.

------
anigbrowl
There's an extensive literature on _opportunity costs_ in economics, but you
wouldn't know it from reading this article, even though that's basically what
it's about. Discussions of programming methodology often seem to involve
reinvention of the wheel. Has there been any work done to quantify these
theories?

------
lowbloodsugar
I used to play Age of Empires. It's a real-time strategy game where a common
match is two teams of three trying to kill each other. A feature of the game
is that each player can spend resources (gold, wood, etc.) to reach new
"ages". Players start in the stone age, and advance to the tool, bronze and
iron ages. Each new age offers better weapons, better technology, greater
efficiency.

Consider two games. In the first game, we observe that the players take on
average 30 minutes to get to tool age, and only one player ever makes it to
iron age before a team is victorious after about two hours.

In a second game, we observe that the players average ten minutes to _bronze_
age and all but one player makes it to iron age before the game is won after
about an hour. All players invest in technology before fighting.

Who are the better players?

~~~
pdpi
I can only assume that the point you're trying to make is that there's not
nearly enough information for a reasonable answer. It's certainly hard to come
up with solid ideas without a clear understanding of the game's pacing.

In RTS games, teching and building armies compete for resources, so I'd
hazard a guess that Game 1 happens because somebody opted for early
aggression, which forces everybody to spend money on units to fight off the
early threat. It looks like a well-balanced game where both teams kept the
aggression level high, which explains the long duration and the low-tech
end-game.

Second game looks like there was little to no early-game aggression, which
meant that everybody got to funnel resources into teching up early. I'm not
sure if late-tech battles are more all-or-nothing, or whether one team was
just clearly stronger than the other -- though if that's the case, why didn't
they push for an advantage earlier? AoE is much more symmetrical than, say,
Starcraft IIRC, so one team having a weaker early game but a stronger late
game doesn't seem likely.

I'm curious, though: how does this relate to the article?

~~~
lowbloodsugar
Thank you for biting =)

The players in the first game are likely experts, and if a team from the first
game plays against a team from the second, the game will be over in less than
ten minutes. It's not merely that "someone opted for early aggression"; it's
that when playing at the elite level _all_ players opt for aggression. It's
not rock, paper, scissors. It's a game of rock, paper.

The point is that deployed features trump undeployed "it'll be great when it's
ready" technology.

I'm happy for my competitors to opt for spending money attempting to achieve
"good architecture" over "working code".

Ok, so the analogy is kind of shitty. Unlike in Age of Empires, spending money
on "good architecture" is actually less effective at creating good
architecture than writing working code.

------
mc808
Other than the navy wiping out all the pirates, I foresee a couple of other
unforeseen scenarios:

1. Within the next few months, a new risk emerges with higher priority than
piracy. Implementing this feature will delay the piracy addition, but only by
1 month because much of the basic "support multiple risks" problem will be
solved by this other feature.

2. Within the next few months, a new product or tool will become
available/known that makes it trivial to support storms, piracy, and a dozen
other risks, making this whole effort redundant. It may be impossible to
migrate to the superior alternative if all the resources are already tied up
in a half-finished project that has generated no benefits to date.

------
pbreit
I love how these types of articles always bring out the architecture
astronauts with all their pronouncements of "it depends" and whatnot. How hard
is it to understand the notion of "building what you need, and not what you
don't"?

~~~
asgard1024
> How hard is it to understand the notion of "building what you need, and not
> what you don't"?

Very hard, because it's in fact a tradeoff. Let's say you are building a
house. Will you have a wife and kids in the future? YAGNI says you can always
expand your house once you have kids. But do people really build houses that
way?

To take a more extreme example: the first thing you expect from housing is
protection from the elements. So according to YAGNI, you should build the roof
first, and see if it's good enough for you (never mind, for instance, that you
can only stand in the middle of the room). Again, no one really does that with
real houses.

The whole art of engineering is to pick reasonable tradeoffs, and this
includes hedging against the risk of future expansion. That's why simple
pronouncements like YAGNI are not of much help.

~~~
pbreit
Extreme examples are not that interesting (to me) since it's far more
important to optimize what is not extreme.

A better house example is to go ahead and build the 2-4 BR house (like pretty
much everyone does) but wait on the pool and jungle gym.

~~~
asgard1024
The extreme examples are just to illustrate the tradeoff.

Some architectural decisions have to be made in advance, because they are hard
to take back. In your example, size of the property (if any) on which your
house is standing is such a decision. Good luck building a pool in a 4-bedroom
apartment.

(Interestingly, you said "like pretty much everyone does" - isn't that
actually sounder advice than YAGNI?)

~~~
pbreit
I don't put much stock in extreme examples. Much more useful to review common
examples.

