This article reminds me of the adage about ops people which goes, "You know you're doing a great job in operations when nobody thinks you do anything."
The problem with concluding from experience that "future proofing" is never worth it is that the conclusion suffers from survivorship bias, which is to say that when you did it and it made the future event a non-event, it didn't even register in your brain. If you practice more mindful engineering you can teach yourself to recognize the things you did in the past that made what you are doing today easy or even possible.
That is not to say that all future proofing is good, especially if it ties up the group in knots trying to predict all possible futures. That is where the mantra of keeping it simple really helps. Simplicity is the greatest protection against the future precisely because it has the smallest disruption surface. The simpler a process or product is, the more difficult it is to find a change that would invalidate its effectiveness.
So, to me, "future proofing" has some very specific connotations. It doesn't mean building for eventualities that you know will happen and already know how to build for (though, if it takes extra time to do so, you do need to be mindful of the cost of delay), and it doesn't mean things like writing well factored and loosely coupled code (within reason) so that you get a system that's generally easier to modify.
It means going out of your way to try and handle scenarios that you don't know will occur, or whose details you don't really understand yet, with clever one-off facilities.
E.g., I've come into a couple projects now, where I'm supposed to add in some new feature, and I discover that the original author has thoughtfully added a bunch of extension points to try and help me with that. Both times, I also discovered that building in those extension points accounted for a large amount of the pre-existing complexity in the code, even though they weren't actually being used for anything yet. And both times I also found that none of them actually addressed my needs, so all that extra effort and complexity was a waste. One time I was unfortunate in that the existing code was of the "600 line functions and a whole mess of mutable state and temporal coupling" variety, so changing it would have incurred excessive risk, and I opted for just jamming my square peg into that round hole. The second time it was well-factored code, so it was easy enough to rip the round holes out to make room for square ones. But it would have been even less effort if I could have skipped that cleanup step and just jumped straight to adding the extension points.
Exactly. When people talk about "future proofing," they're imagining the future as a predictable, linear progression--which it definitely isn't. The people best equipped to deal with the complications of the future are those who have direct experience with it ... in the future.
"This code works for now, but if I move this part into a factory, and create an interface for these methods, it'll also support all these future cases I can think of!"
They've read all the articles on how to structure code. There's a lot of patterns out there for object-oriented (or functional!) code. Their code has all sorts of useful interfaces, abstraction layers, factories, extension methods, data structures.
Things break down when the project needs to move in an unexpected direction and it turns out that implementing the change requires touching a lot of the codebase, because there were a few too many abstractions that ended up creating hidden dependencies across the code.
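To make it concrete, here's a minimal sketch of the kind of speculative structure I mean (all names invented): the plain function already does everything needed today, while the "flexible" version adds an interface and a factory for formats nobody has asked for.

    from abc import ABC, abstractmethod

    # Simple version: all that is actually needed today.
    def export_report(rows):
        return "\n".join(",".join(str(v) for v in row) for row in rows)

    # Speculative version: extension points for formats that may never exist.
    class ReportExporter(ABC):
        @abstractmethod
        def export(self, rows): ...

    class CsvExporter(ReportExporter):
        def export(self, rows):
            return "\n".join(",".join(str(v) for v in row) for row in rows)

    def exporter_factory(kind="csv"):
        # Only one real implementation, but a registry "just in case".
        return {"csv": CsvExporter}[kind]()

    print(exporter_factory().export([(1, "a"), (2, "b")]))

Both print the same thing; only one of them has to be understood and maintained while waiting for a future that may never arrive.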
Yep. Some of my favourite programming advice is that the easiest code to change is code you haven’t written yet. In other words short, straightforward code is always the easiest code to adapt to new requirements.
Simple systems are easy in nearly every dimension (extension, debugging, performance), and they are naturally at odds with the prevailing state of affairs. There are hordes of programmers waiting to do a tech-debt drive-by to "look good" for having bolted a feature on in minutes to days.
Simple systems require extreme vigilance to remain simple.
The simpler your codebase is, the easier it is to adjust to fulfill a different purpose.
I think this is the most important sentence in the article. I've worked on a few extremely abstracted and horribly-indirect codebases that happened to be so very flexible and extensible in exactly all the wrong directions, so much that it made it much harder to change in the way that was actually required.
Agreed 100%; I’d go as far as to say this is all that matters. It’s also a key lesson in the disposable-code shift that serverless is enabling as well.
As Martin Fowler correctly explains, YAGNI is primarily about end-user features, not the flexibility of the code and the toolchain. If you don't keep your tech stack (code, infrastructure, people) flexible, then iterative development becomes impossible. The assumption behind YAGNI is that 'you can't predict requirements accurately'. Which is true, so don't build them ahead of time. However, the same assumption requires you to be flexible, modular, etc.
Specific example from TFA: distributed app vs. large server. A distributed app might give you better scalability, which you might not need because large servers are cheap. However, a distributed app also forces you to think about state, so if you screw something up and have to fix it in the middle of the day, you can blue-green deploy the app instead of having a 4 minute outage. It forces you to modularize the logic and think about interfaces, so if you need to add, remove or change functionality, it'll likely be faster.
Etc.
What this article forgets to say is: "Stop future-proofing your software if you are still looking for a good product-market fit. As soon as you know what your business is going to be, make sure your technology stack is flexible."
"As soon as you know what your business is going to be, make sure your technology stack is flexible."
As a form of devil's advocacy - why? Just improving the codebase under the hood won't bring any new users in the initial market segment where your product-market fit is proven. Focusing on marketing will...
Growth comes with technical issues. As a rule of thumb, a two-orders-of-magnitude increase in traffic requires significant refactoring or a change in architecture. A one-order-of-magnitude increase in team size requires new forms of management. Etc.
This is a good question. The answer is so that we can amortise the expense of feature development. Side note: This includes bug fixes. A bug is just a feature that you expected already worked. A bug is different from a "software error", in that such errors will impact you when you next work on the code. It is often possible to fix bugs without fixing the underlying software error and vice versa.
Having a "flexible" code base (to me) means being able to change the code easily. That change could be to add new functionality, or to refactor the design, or both. "Flexible" (to me) does not necessarily include having actual facilities to do anything.
As an example, your normal "Hello, world!" C program is very flexible. It's easy to modify. A similar "Hello, world!" written using a large framework is less flexible because you have more constraints on what you can do, and how you can change the design, even though it has more facilities.
Adding flexibility does not mean adding more facilities to the code. Often it means removing unused facilities from the code in order to make it simpler. IMHO, this is the true distinction with YAGNI. We remove YAGNI code in order to increase the flexibility of the code. Sometimes people are tempted to write wonderful elaborate designs for their code in order to ensure that it's easy to do something in the future. This is a classic case of YAGNI. We want to remove that code in order to increase flexibility (because often new requirements move in a direction that is opposed to the original design).
However, I sometimes wish that there was a cute acronym for "I Actually Need It Now" (suggestions welcomed :-)). In this case, let's say we've written some code and it's awkward, but it does the trick. A little while later, we end up doing the same thing. It's relatively easy to copy and paste our previous awkward code. Then we have to do it again. In those cases, you've found that you do need it, and you're adding complexity to your code base every time you copy and paste the same awkward code. It's beneficial to build a nice system to make it easy to do it. Next time, not only will it be quick to add, but it won't add unnecessary complexity to the code base.
You've probably heard of the "rule of 3". Of course, it's a rule of thumb, so you have to use your own judgement, but the idea is that if you have to do it once, then just do it. You can't generalise what problem you are actually solving because you just did it once. There's not much sense in agonising over the "ultimate" design, because you are likely to get it wrong anyway. But by the time you've done it 3 times, you've got a pretty good idea of where this is going. At that time you should invest in an appropriate solution. The solution will likely increase the architectural complexity, but it will simplify the interfaces and will slow down the increase in overall complexity.
Again, 3 times is a rule of thumb. Sometimes you know right away that you need something more complex. Sometimes it takes you a lot more than 3 times before you can really wrap your head around what direction is best. So a better rule is: delay making design decisions until you have enough data to answer your questions.
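As a minimal sketch of that third-time investment (names invented): the awkward retry-and-log snippet that's been pasted twice gets pulled into one small helper, and every future call site becomes a one-liner.

    import time

    # After copy/pasting this retry-and-log pattern a third time, it gets
    # promoted to a small helper instead of another paste.
    def with_retries(fn, attempts=3, delay=0.1):
        for attempt in range(1, attempts + 1):
            try:
                return fn()
            except Exception as exc:  # real code would narrow this
                if attempt == attempts:
                    raise
                print(f"attempt {attempt} failed: {exc}; retrying")
                time.sleep(delay)

    # Call sites shrink back down to one line each, e.g.:
    # with_retries(lambda: fetch_orders(customer_id))
    # with_retries(lambda: push_invoice(invoice), attempts=5)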
And to roll it all back to the beginning, YAGNI happens when you make decisions before you have the data to support your decision making. This can be both with user features and with design. By delaying these decisions until we have good data, we keep our options open and the system remains "flexible". But if we do not make the decisions when we do have enough data, the system becomes complex because it lacks cohesion. We have to act promptly to address those issues, or else we will have a very uneven development experience. "Hard" things may take a short amount of time, because we didn't build all the infrastructure we needed. "Easy" things may take a long amount of time because we suddenly need to build the infrastructure that we didn't build before.
And that's what I mean by "amortising" the expense of feature development. We don't spend a lot of time up front to build a complex, but inflexible system. However, we spend time intermittently, to maintain flexibility and to keep the overall cost down. Even more importantly, businesses depend on the predictability of development. By maintaining a high degree of flexibility, we allow the cost of solving problems to approximate the complexity of the problem. Without having a flexible code base, the cost of solving the problem is often dominated by the complexity of the code base instead of the complexity of the problem. This leads to management not being able to "trust" development and often leads to project cancellation.
Another aside: I often say that upper management is the most dangerous part of the team for a project because they are the ones that cancel the project. If they do not have a good feel for a project, they may very well cancel it for extremely poor reasons. Thus, aligning the project to the expectations of upper management is one of the most important parts of software development.
I personally don't like doing development work that isn't tied to direct economic value. This includes refactoring. Refactoring may contribute to indirect economic value (by reducing development time in the future), but it is risky work. As I mentioned in the previous paragraph, upper management depends on the predictability of development. If you randomly (from their perspective) say, "We're going to do some feature-sized work, but it's not going to result in any different functionality", it separates them from the logic of the development. They may tolerate it, but it makes planning difficult.
Instead, I prefer to spread that kind of work out and to "same size" the work. This is harder to do from a design perspective because you have to find small goals to achieve and to work piecemeal. You need to have good communication with your team and good buy-in for the process, so that each team member takes the appropriate opportunities every time they touch the code. In short, it makes the development process more difficult, but the advantages are many. In that way, I do exactly what you suggest: I only work on code that is adding customer value. But I intentionally take a little bit of extra time in each story to find ways to improve the flexibility of the code. I also spend time each story to add facilities (which I've called "capabilities" previously) where appropriate. Occasionally you get the, "OMG, we need to do a big refactor", but if you have been diligent in maintaining flexibility, then it should limit the amount of time that is necessary to move in the correct direction.
Ultimately, even code and toolchain flexibility comes at a cost - for example, should you go for a non-relational database because of potential future scalability needs, or cross that bridge when you come to it? Or should you make your code internationalizable, even though the first release is in English only? The right answer may depend on the probability of the business need occurring in some foreseeable time frame.
Unfortunately, the OP's point is too black and white, where the reality is much more grey. In my view, decisions in a project need to be subject to a cost-benefit trade-off, and future-proofing is no exception.
Exactly. The same “end user” problem repeats recursively. Just think of other (or future) developers as the end users: since you can’t predict the actual user requirements, you also can’t predict what flexibility those developers will require.
You end up “making your codebase flexible” in all the wrong ways by introducing abstractions, interfaces, and extensibility designs that end up being the wrong tool for the job once the future requirements are known.
There may still be a simplistic, high level where you can have certainty about how to factor code. E.g. separating front-end and back-end, using tests as scaffolding for helping verify deployments and facilitate changing code, using a few simple tricks here and there to reduce boilerplate or make sections of code reusable.
It’s totally fine to pursue these optimizations, especially if they are part of a healthy backlog process to prioritize them.
But the problem creeps in when principal developers or “philosophizing architects” take it as a goal unto itself and begin trying to mandate it all the time, especially with the anti-pattern where huge architecture discussions become synonymous with routine “just get something done that works and refactor later” code reviews.
But you can’t make it equally easy to add in every type of functionality. For any type of extensibility design you bake in, some other types of extensibility now become harder.
So “making it flexible” hinges entirely on whether or not you can correctly forecast exactly which types of flexibility you’ll need. But this is no easier, and often harder, than predicting how end user requirements might change in the first place!
FWIW, maybe I'm living in a bubble, but I don't seem to run across unnecessary "future proofing" very often. In fact, I see the opposite problem much, much more frequently. I work in a boring corporate job though, not a hip startup, so nobody at any of my jobs is trying to be the next Google or Facebook - that's probably the main difference.
> We need to hire a team of developers and build in-house software, despite wordpress and shopify being a much easier alternative, because when our customer base grows to 100 times what it is now, it will make our lives easier.
I think this comes from "not invented here syndrome" rather than the assumption you'll need to support hundreds of orders a minute in the future. Also job security.
I used to see future proofing all the time in boring corporate jobs. One example I saw multiple times was complicated "generic form builders" where the dream was users would be able to add their own forms quickly and easily. Then 5 years later the whole app would be say 10 screens, but maintenance was a nightmare because under the covers everything was poorly abstracted and generic.
I also saw lots of "split it into microservices so we can scale out for load" on systems that years later still only saw minimal use.
My mantra is - never assume features, but always assume change.
As others have mentioned - it's orders of magnitude easier to add a layer of abstraction to a simple system as requirements evolve, than it is to remove an unneeded one down the line. The latter is often impossible.
I thought this was really spot on, especially the point about the glorification of people working on really hard problems at scale. I just read the Google SRE book, and while there are definitely lessons I can use at the company I work for (~5 devs), it definitely isn't a straightforward "do what worked there".
I couldn't find hard numbers from the Bureau of Labor Statistics, but this Stackoverflow survey [0] from 2016 points out that 50% of developers work at companies with less than 100 employees (that's employees, not developers). So many many software jobs are at smaller companies. Why don't we hear from them about their trials and tribulations?
Here are some ideas I've had (off the cuff), would love to hear other thoughts.
1. Big companies invent the future because they have the resources to do so. That wisdom then gets rolled out often in the form of software (Hadoop, k8s, React).
2. Small companies are boring. They are still struggling with product market fit or are stable and boring. They are not sexy, especially if they aren't growing quickly or are a services company.
3. Bigger companies have a bigger platform to announce work and/or support people to write and speak. Software they develop touches more folks.
4. Smaller companies are not doing anything interesting to the wider development community. They are focused on niche problems that aren't really interesting.
5. Folks at smaller companies don't write/speak as much.
6. We are hearing from smaller companies, just not as much/it's categorized as 'startups'.
7. Consultants, who tend to do a lot of proselytizing of new software trends, work for larger companies because "that's where the money is".
8. The scale of small business is such that software quality doesn't matter as much.
I don't really know what the reason is. I've primarily worked for small companies my entire career and I think the impact that software can have on a small business is transformative and well worth writing about (as I have for years and years [1]).
> We need to use a kubernetes & docker based solution for our infrastructure, despite a single large server being a much easier alternative
It's a half-truth that one doesn't have to do it from the very beginning.
You don't have to set up a whole cluster where a single host suffices for a long while. You do have to plan that your project may eventually become big enough to require a cluster - and consider doing yourself some design favors here and there to make that transition easier. Or knowing that you're going to spend a while redesigning stuff, while being pressed by the growth.
I mean, I've regretted not making certain design decisions (e.g., about storage) from the start more than once or twice.
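To make that concrete, here's the sort of small design favor I mean, as a minimal sketch with invented names: a thin storage seam so call sites never touch the filesystem directly, which keeps a later move to object storage contained in one class.

    import os

    # Code talks to put/get, not to the local filesystem directly, so
    # swapping in S3 or a blob store later touches one class instead of
    # every call site.
    class LocalStorage:
        def __init__(self, root):
            self.root = root
            os.makedirs(root, exist_ok=True)

        def put(self, key, data: bytes):
            with open(os.path.join(self.root, key), "wb") as f:
                f.write(data)

        def get(self, key) -> bytes:
            with open(os.path.join(self.root, key), "rb") as f:
                return f.read()

    store = LocalStorage("/tmp/app-data")  # any writable path works
    store.put("report.txt", b"hello")
    print(store.get("report.txt"))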
I find using Docker (or any container) actually simpler than maintaining a pet server, even if you never need a cluster or Kubernetes. Just the ability to develop & test in an environment identical to production is a huge benefit on its own.
The fact that we’re having these discussions shows me how immature our industry still is. People have figured out pretty well by now how to design, build, maintain and use aircraft; not so much spacecraft, though. Hardware is improving faster than anything, but I wouldn’t say the same about software, unfortunately. I’m still puzzled by what a CS education does: those graduates seem incapable of doing real coding jobs. And the bootcamp phenomenon is a different story altogether. As a result, we keep guessing and experimenting on every project, everywhere. Just imagine the same in the aerospace industry (not that they don’t experiment, just in a completely different manner). I don’t know what I’m doing wrong, but it seems everything I built or coded wasn’t great: under-engineered, over-engineered, took too long, used the wrong platforms, etc. There are certain minimums below which software is crap, but we never know what they are, and there are no standards that last long enough, as they lack value and have to be rewritten. The big software companies don’t lead, and they are actively opposed by the general industry public; the Open Source community sounds great, but I think the Every Man for Himself state of the industry is still predominant. So, lots of things to work on and improve; got to sound positive at the end.
I just stick to the rule of three: if I see the same pattern three times I consider the abstraction I’m missing.
This surprisingly leads to “future proof” code. I didn’t anticipate the needs of the business; I just wrote what needed writing. Yet over time I end up building in the abstractions the code is actually using.
I say surprising because code written this way looks obtuse and... messy. You get very senior and very confused developers asking you why you didn’t apply The Singleton Factory Pattern or use the XYZ architecture. So you say, because I didn’t need it and they just don’t get it.
But that’s part of the reason why I like Haskell so much these days. And why I liked Python for so many years. I like languages that bake in good ideas and abstractions for you. Python baked in the iterator pattern, the decorator pattern, etc so that you could use them in the language with little ceremony. Haskell bakes in the most powerful abstractions of all: type theory and algebra.
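For example (a trivial sketch), both patterns are essentially zero ceremony in Python: a generator is the iterator pattern, and a decorator wraps behaviour around a function.

    import functools

    # The iterator pattern with no ceremony: a generator is the iterator.
    def countdown(n):
        while n > 0:
            yield n
            n -= 1

    # The decorator pattern with no ceremony: wrap behaviour around a function.
    def logged(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            print(f"calling {fn.__name__}{args}")
            return fn(*args, **kwargs)
        return wrapper

    @logged
    def add(a, b):
        return a + b

    print(list(countdown(3)))  # [3, 2, 1]
    print(add(2, 3))           # logs the call, then 5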
And still, the idea of writing only the code you need to solve the problem, nothing more and nothing less, stands up. People may look at you like you’ve lost your marbles. But if you can get them to stick around in the codebase for a year or two, they’ll start to see it too.
I don't have a problem with future-proofing. I do have a problem with half-solutions (which is mainly what I see in the industry). E.g. if you need to run some code before or after a transaction, make a generic way to make this happen.
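A minimal sketch of what I mean by generic (sqlite3 is used only to have something runnable; the hook names are invented): register before/after hooks once, and every transaction runs them.

    import sqlite3
    from contextlib import contextmanager

    # Generic extension point: anything registered here runs around every
    # transaction, instead of ad-hoc calls sprinkled around each commit.
    before_hooks = [lambda conn: print("before txn")]
    after_hooks = [lambda conn: print("after txn")]

    @contextmanager
    def transaction(conn):
        for hook in before_hooks:
            hook(conn)
        try:
            yield conn
            conn.commit()
        except Exception:
            conn.rollback()
            raise
        for hook in after_hooks:
            hook(conn)

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (x INTEGER)")
    with transaction(conn) as c:
        c.execute("INSERT INTO t VALUES (1)")
    print(conn.execute("SELECT * FROM t").fetchall())

The same idea scales up to whatever event/hook mechanism your framework already provides; the point is that it's one seam, not a half-solution repeated in every caller.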
The problem is that devs do not create enough leverage in their solutions, so they're never that useful in the future. And devs tend to overestimate what is actually re-usable. The second point may be the most important: too often, I see solutions that are half-way reusable, which means they're not re-usable at all.
In the end, building out a solid foundation is key. The problem is, this is pretty hard and takes a lot of experience to know how to do well.
The way I have approached this problem is to look back at previous large code bases where I and others inevitably built frameworks and solutions to the problems we ran into. They were created because we either couldn’t find a reasonable alternative or our needs were simple enough that we didn’t need to introduce a lot more complexity to our app. It’s nice to take a step back every once in a while, revisit those architectural problems as toy problems, and see how you would do it differently using today’s existing solutions. I find balance in using really simple and boring tech while taking advantage of some frameworks that make life easier (depending on preference).
I saw startups struggle with ops and run out of runway implementing trendy microservices from day one. But then I saw successful startups flourish on unfashionable CLI-generated monoliths and no-brainer deployment templates. MVP FTW!
Better idea: don't pretend you are about to be the next Google. Worry about that when it happens.
I see too many startups burning time/money on scalability they don't need. If you have 1000 customers, and your rack can handle 20000, don't talk to me about future-proofing against the day you are building your own datacenter. Keep your code working for today's customers. Worry about scale when it happens ... IF it happens.
IMHO, limit your perspective and planning to double your current customer base (good-day scenario) and half (bad-day scenario). Any day beyond those bounds will require a rethink. A rethink THEN, not today.
If you're optimizing for investment, I wonder how much of potential funding is tied to your ability to communicate that you have thought the scaling problem out. If your investors are more technical, service oriented architecture and orchestration tools (like k8s) may serve as signifiers that you have some sense of what you're doing.
Depending on your target audience, if your product becomes viral you may need to be able to scale to 100x in a very short time. That's too late to rethink stuff.
Of course this depends on whether your product CAN become viral in the first place, but I think many people are just hoping it will, even if it never does and it looks like just a waste of time.
> We need to use an inheritance based design for our types, despite composition being a much easier alternative, because after 5 years of codebase growth, it will make our lives easier.
And an equally future-proofy statement:
"We need to contort our code to avoid our implementation language's inheritance features to instead use composition because after 5 years of codebase growth it will make our lives easier"
There's a reason why the GoF says "prefer composition over inheritance" and then spends half the book covering inheritance.
Does it really? The bulk of the patterns are about composition and delegation which is why wags using languages less restrictive than C++ have been able to do things like show 47819.73 patterns are variants of one or two things.
I admit being confused about this one. Composition or inheritance isn't harder or more work, and neither provides velocity or maintainability benefits. The argument is about how to model complex behavior -- they have tradeoffs in duplication and statefulness. And you can mix both in the same code base with ease.
Here's another one though. We need to use Promises despite callbacks being trivially understandable and supported everywhere in arbitrary JavaScript versions because in 5 years everyone will probably always use Promises for everything.
I think the real takeaway is that composition is much more likely to model the semantics you have, even though inheritance feels like it gives you better code re-use.
I admit to being a composition bigot when greenfielding some functionality, and I've never felt like code reuse was a thing that mattered for either. I like composition because it's easier to create small testable component contracts. But I maintain plenty of inheritance-based code and, while it's less testable, it's... fine. Not really a big deal to use either or both.
Edit: to be clear, when I hear "code reuse", I think "using a component outside its original intent". Composition does impose some duplication sometimes, but it's braindead plumbing code, not buggy logic code, that gets duplicated. I generally think that cost is worth it to gain more statelessness. Hence my greenfield bigotry
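Here's a tiny sketch of what I mean by a small testable contract (names invented): the composed collaborator's surface is one method, so a test fake is a few lines.

    # Composition: the collaborator's contract is tiny and easy to fake in a test.
    class Notifier:
        def send(self, message: str) -> None:
            print(f"sending: {message}")

    class OrderService:
        def __init__(self, notifier):
            self.notifier = notifier  # any object with .send() will do

        def place(self, order_id: str) -> None:
            # ... business logic would go here ...
            self.notifier.send(f"order {order_id} placed")

    # In a test, the contract is trivially replaced:
    class FakeNotifier:
        def __init__(self):
            self.sent = []
        def send(self, message):
            self.sent.append(message)

    fake = FakeNotifier()
    OrderService(fake).place("42")
    assert fake.sent == ["order 42 placed"]

Doing the same against a base class would mean either inheriting from it in the test or pulling in a mocking layer.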
> But I maintain plenty of inheritance-based code and, while it's less testable, it's... fine.
I'd wager that it's rather good code then. There's also the inheritance-based code that isn't, though, which is slowly driving me to the conclusion that "unsupervised" inheritance is some sort of petri dish for cthulhu-esque architectures.
That doesn't mean inheritance is bad, per se -- just that we should stop and think much more about using it. Composability, on the other hand, isn't nearly as "invasive" since the whole point of it is the interaction of uncoupled things.
This somehow reminds me of the promise of fusion vs. fission energy, where it's argued that in fusion you don't have to constantly prevent your reactor from exploding... :)
> I like composition because it's easier to create small testable component contracts.
You don't need testability or 'greenfielding' for this. 'Design by contract', as you mention, is a useful way to think about this, without attaching value judgments to composition or inheritance. Inheritance is a much stronger and therefore burdensome contract. It's much less likely bits of your model can really adhere to it. The advice to lean towards composition is a function of that - few models can meet that high bar.
Right. Inheritance implies contracts that include behavior, composition implies contracts that don't.
I talk of greenfielding because it's rare that the inheritance alone holds a codebase back so much that it needs to be rewritten. But code I originate is always composition based
This may be true while searching for product-market fit, but once you hit a certain (small but stable) size, your ability to identify what your existing and future customers want now, and will want in the near to mid term, goes up.
There's a difference between future proofing to be able to deliver a roadmap or using your market research to get the next X customers and trying to guess at what some as yet unknown market might one day want.
We built an SQL compiler in Haskell, and this has been our model from day 0 (back in 2010)...
It's really easy to follow these rules with Haskell. We develop our syntax to be easily extensible.
Being strongly and statically typed allows us to extend our parser/compiler with relative ease, and not think too much about how the future will look.
Like so many things written about software, this ignores everything in the process that isn't a computer program. Future-proofing is a natural consequence of a management style that doesn't clearly set goals and boundaries. Engineers wind up future-proofing because their goalposts are constantly moved. Picture this scenario:
Engineer: I can build it in different ways. This simple way will accommodate N users, this more complicated way will accommodate M users but will take three times as long. I recommend we do the first option, because it's unlikely we'll see more than N users.
Management: no, we want to get to M users, build it the second way
That sort of situation is exceedingly common. The reality is that the vast majority of engineers don't get to set the product development road map based entirely on engineering concerns.
My understanding is: keep code modular; if something repeats 3 times then 'maybe' abstract it; and no matter what you do, in time you'll have to rewrite parts of it, if not all of it, from scratch.
There is no future proofing, as there is no way to predict (or time to spend predicting) how business needs are going to change, how the APIs you talk to are going to change, and how the technologies you use are going to change.
It's easier to just write simple functions that you can come back to later, quickly see what they do, and change them.
The only ongoing problem I do have is dependencies: I have to check that everything that used the said function continues to work.
You still somehow have to think about the future though, but you gotta make sure to keep it simple as well. It's all about juggling different needs vs available resources.