
Ask HN: How to avoid over-engineering software design for future use cases? - h43k3r
I have worked at Microsoft and Google in multiple different teams.

One thing I realized is that engineers sometimes go to extremes designing things/code for future cases which are not yet known. Many times these features never see any future use case and just keep making the system more complex.

At what point should we stop designing for future use cases? How far should we go in making things generic? Are there good resources for this?
======
mabbo
> engineers go extreme in designing things/code for future cases which are not
> yet known

They're afraid.

Fear: If I _don't_ plan for all these use cases, they will be impossible! I
will look foolish for not anticipating them. So let's give in to that fear and
over-architect just to be safe. A bit of the 'condom' argument applies: better
to have it and not need it than to need it and not have it.

But the reality is that if your design doesn't match the future needs really
well, you're going to have to refactor anyway. Hint: there will always be a
future need you didn't anticipate! Software is a living organism that we shape
and evolve over time. Shopify was a snowboard store, Youtube was a dating
website, and Slack was a video game.

So my answer: relentlessly cut design features you don't need. Then
relentlessly refactor your code when you discover you do need them. And don't
be afraid of doing either of those things because it turns out they're both
fun challenges. The best you can do is to try to ensure your design doesn't
make it really hard to do anything you know or suspect you'll need in the
future. Just don't start building what no one has asked for yet.

~~~
flak48
>> engineers go extreme in designing things/code for future cases which are
not yet known

>They're afraid.

In many companies (think FAANG), engineers, especially senior engineers, are
incentivized/forced to show fancy design docs as part of the annual appraisal
process. The more complicated the design, the more 'foresight', the better.

If it sounds kind of ridiculous (TPS reports from Office Space anyone?) it is.
But on the other hand, taking a slightly less cynical look: the more massive a
company gets, the louder the voices demanding objectivity in all these
promotion/bonus multiplier processes. So a kind of obsession with such weird
'measurable' metrics gradually builds up in the name of objectivity.

And as soon as there are metrics, you can bet everyone in the system will do
their best to game them (it makes all too much sense to do so).

And thus you end up with over-engineered systems all over the place.

A lot of the 'not-invented-here, let's reinvent it' style culture also
develops similarly - you have too many smart people in a room where the work
is just not that demanding. Even if you were to get over your personal
existential crisis (why am I writing yet another crud app?!), if you're the
type that wants to see a promotion every other year, you're forced to invent
work this way.

~~~
ex_amazon_sde
> In many companies (think FAANG)

> The more complicated the design, the more 'foresight', the better.

Certainly not at Amazon. Surely there can be exceptions, but in general the
company has a culture of simplifying stuff wherever possible.

~~~
numlock86
>simplifying stuff anywhere is possible

[https://raw.githubusercontent.com/aws-samples/aws-refarch-wordpress/master/images/aws-refarch-wordpress-v20171026.jpeg](https://raw.githubusercontent.com/aws-samples/aws-refarch-wordpress/master/images/aws-refarch-wordpress-v20171026.jpeg)

Ironically, that's how you are supposed to run Wordpress on AWS.

~~~
ex_amazon_sde
You seem to be confusing Amazon's internal infrastructure and its development
model with how AWS is used by customers.

~~~
regularfry
That's the result though, right? Radical internal simplicity forces incidental
external complexity.

~~~
ex_amazon_sde
No, I'm talking about internal services that aren't directly exposed through
the AWS API. These represent the large majority of the internal codebase.

------
kqr
Something I've recently realised after having listened to Kevlin Henney talk
about software engineering is how much of the existing knowledge we ignore.
The early software engineers in the '60s and '70s were discovering pattern
after pattern of useful design activities to make software more reliable and
modular. Some of this work is really rigorous and well-reasoned.

This is knowledge most engineers I've met completely ignore in favour of the
superstitions, personal opinions, and catchy slogans that came out of the '90s
and '00s. It's common to dismiss the early software engineering approaches
with "waterfall does not work" – as if the people in the '60s didn't already
know that?! Rest assured, the published software engineers of the '60s were as
strong proponents of agile as anyone is today.

Read this early stuff.

Read the reports on the NATO software engineering conferences.

Read the papers by David Parnas on software modularity and designing for
extension and contraction.

Read more written by Ward Cunningham, Alan Perlis, Edsger Dijkstra, Douglas
McIlroy, Brian Randell, Peter Naur.

To some extent, we already know how to write software well. There's just
nobody teaching this knowledge – you have to seek it yourself.

~~~
whytheplatypus
Couldn't agree with this more.

A current favorite of mine is "The Emperor's Old Clothes", the Turing Award
lecture given by C.A.R. Hoare. In particular his line "The price of reliability
is the pursuit of the utmost simplicity", which applies equally to
maintainability and extensibility (I think these are part of what it means for
a system to be reliable).

An idea I try to keep in mind while working is not to plan or build for future
features but simply leave room for them (meaning don't actively prevent their
eventual existence through complexity). It has taken some practice, but it
helps guide me toward a simpler implementation.

~~~
kqr
> maintainability and extensibility (I think these are part of what it means
> for a system to be reliable).

Frequently forgotten is the duality of extensibility: subsettability, or
contraction. Being able to remove or disable code without rewriting large
parts of the application is just as important as being able to extend it!
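
Parnas's extension-and-contraction idea can be sketched with a toy registry (all names here are invented for illustration, not from Parnas): features attach themselves at one seam, so contracting the system means deleting one registration rather than hunting through callers.

```python
# Minimal sketch, assuming a hypothetical export subsystem. Each export
# format registers itself in one place; removing a format is deleting
# its registration, with no edits anywhere else.
from typing import Callable, Dict

EXPORTERS: Dict[str, Callable[[dict], str]] = {}

def exporter(fmt: str):
    """Register an export feature under a format name."""
    def register(fn: Callable[[dict], str]):
        EXPORTERS[fmt] = fn
        return fn
    return register

@exporter("csv")
def to_csv(record: dict) -> str:
    return ",".join(str(v) for v in record.values())

@exporter("kv")
def to_kv(record: dict) -> str:
    return ";".join(f"{k}={v}" for k, v in record.items())

def export(record: dict, fmt: str) -> str:
    # Callers depend only on the registry, never on a concrete exporter,
    # so subsetting the system leaves this function untouched.
    return EXPORTERS[fmt](record)
```

Dropping the "kv" format is a one-line deletion; nothing downstream needs rewriting.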

~~~
whytheplatypus
yes!

------
jjav
The junior developer sees a pattern and thinks why not make this a bit more
generic so it handles similar cases. Oh but these cases fit a larger pattern..
and eventually you'll have written yourself an entire framework. This is very
valuable, for the experience. Not for the framework, which will never be used.
But you should go through it.

The mid-career developer knows this from experience and sticks to the minimum
necessary to meet the requirements. Fast and efficient in the short term.

The senior developer mostly agrees with that but also knows that there is
nothing new under the sun and all software ideas repeat. So from experience
they can selectively pick the additional implementation work to be done ahead
of time because it'll save a lot later even if it's not needed yet.

As an aside, this is one of the many reasons why I interview based on
discussing past projects and don't care for algorithm puzzles. Unless I
specifically need an entry-level developer, I'd prefer to have the person who
has written a few silly (in hindsight) complex frameworks and has painted
themselves into a corner a few times with overly simplistic initial
implementations. That's the person I know I can leave alone and they'll make
sane decisions without any supervision. The algorithm puzzle jockey, not so
much.

~~~
InvOfSmallC
Personally this is not my experience. Plenty of seniors spitball unhelpful
solutions, premature optimizations of sorts, etc.

The mid-career approach is perfectly fine, given you are also good at
refactoring when needed.

Also, another thing that I'm not seeing stressed enough on this topic is:
measure.

~~~
alexbanks
One thing I've found throughout my career is having Senior in your title is
sometimes totally unrelated to your ability to write code or design solutions.

------
chooseaname
It really depends on what you're doing.

If you're writing code for a device that's going to hang out in the forest for
10 years with no updates, write just enough code to solve the problem and make
it easy to test. Then test the fsck out of it.

If you're writing code for a CRUD web service that you _know_ will get
rewritten in 10 months, write just enough code to solve the problem and make
it easy to test. Then, test the fsck out of it.

If you're writing an Enterprise app that will be expanded over the next half
decade, write just enough code to solve the problem and make it easy to test.
Then, test the fsck out of it. You simply _cannot_ know how your code will
have to change so you _cannot_ design an architecture to make any arbitrary
change easy. Accept that. The best you can do is write clean, testable code.
Anyone... anyone who thinks they can design a system that can handle any
change is flat wrong. And yet... time and again, Architecture Astronauts try
it.

~~~
wry_discontent
I don't really want to disagree with anything in here, this all looks good.

But I think it's silly to imagine that there's nothing you can do to
anticipate changes and make your code more amenable to change.

~~~
AnimalMuppet
The problem is the changes you anticipate often are not the changes that
happen. Those anticipated-but-not-happening changes waste your time, and also
clutter up your code, getting in the way of the changes that _do_ need to be
made.

------
Nursie
When I was young and fresh, I wanted to cater for all sorts of future
possibilities at every turn. But what if things change this way? What if
things change that way? I came to realise that when things change, unless the
design change is trivial (allow use of database Y as well as Z, etc) then you
probably haven't anticipated the way in which it is going to change anyway.
Better to have clean, straightforward code than layers and layers of
abstraction and indirection, to the point where it's hard to tell what's
actually going on. You'll probably find that when change does come, your
abstractions are an annoyance and a straitjacket.

A junior engineer recently presented me with some completed work. There was a
data schema change on one interface and an API version change on another. He
had created a version of the microservice he was working on which could be
configured to work on either the old or new version of each of these, with
feature flags, switching functionality and maintaining the old and new model
hierarchies for both. He viewed this as an achievement. While technically it
was, the result was code bloat and unnecessary complexity. The API upgrade and
schema change were happening at the same time and would never be rolled back,
so his customisability was a net negative.

Code is a liability. Be sparse. Build with some flexibility but overall just
build what is required and when the future comes, change with it. Don't build
to requirements you can imagine, because the ones you don't will kick your
ass.

~~~
qes
> The API upgrade and schema change were happening at the same time

Atomically? What if your DDL times out?

What you describe is, in my experience, extremely standard process for rolling
out breaking changes (you do of course remove the switch and old version
support after everything is rolled out).

~~~
Nursie
> Atomically? What if your DDL times out?

Effectively yes, during a defined outage period (don't ask, this is finance),
on any error anywhere in the platform during rollout, the entire platform
would be reset to its prior state.

> What you describe is, in my experience, extremely standard process for
> rolling out breaking changes

But entirely unnecessary here. There was no requirement for a single version
of the microservice to be able to address multiple versions of either
dependency. It was a net negative to have that support, added significantly to
the software complexity and the raw LoC.

------
donw
The short answer is: don't design for future use-cases.

Period.

Instead, only build what you need to solve the problems you have in-hand, but
do so in such a way that your software and your systems are as easy to change
as possible in the future.

Because you very, very rarely know what your future use-cases really are.
Markets change over time, and your understanding of the market will also
shift, both because some of your assumptions were wrong, and because you can
never really have a total understanding of such a complex phenomenon.

Your business needs will change as well; your top priority from 2019 is likely
very different than it is today.

That is why you build to embrace change, rather than barricade against it.

As to how, I'd start with Sandi Metz's 99 Bottles of OOP:
[https://www.sandimetz.com/99bottles](https://www.sandimetz.com/99bottles)

Learning to write readable code is also pretty important; Clean Code is a good
starting point ([https://amzn.to/3168z3A](https://amzn.to/3168z3A)), but I'd
be keen to know of any shorter books that cover the same material.

Growing Object-Oriented Software Guided By Tests (GOOSGT) is a good read as
well: [https://amzn.to/3du1sEL](https://amzn.to/3du1sEL)

~~~
silveroriole
Reading Sandi Metz is like walking into a beautifully tidy room. I can’t
explain it any better than that. +1 on that book and POODR.

------
daniel-levin
32 comments so far and no mention of the word budget. There's a great analogy
between software engineering and construction. Does your organization build
skyscrapers and gorge-spanning bridges? Or does it build driveways and
swimming pools? Commercially developed software consumes capital to get
something in return. Are the people spending capital budgeting for a driveway
or for a skyscraper? Must it be done this month or in two years? Sure, those
are false dichotomies, but they illustrate the point: it is desirable for the
bosses to clearly define engineering spend.

Over-engineering can be avoided by carefully sticking to a budget.

For a lot of developers, there's a trade-off between rationality (in the sense
of ROI) and feeling good. It doesn't feel good to make every engineering
decision against a budget. My dopamine levels [0] skyrocket when I visualize
making some component generic, or future proof. I come crashing back to earth
when I realize the budget is for a driveway. Budgeting earns the bosses /
customer / capital a better return. Engineering for future use cases or making
things generic (or the mere anticipation thereof) is an easy way to get a
massive hit of neurotransmitter that makes you feel good.

[0] Not a neuroscientist. But I think it's useful to label that spike of feel
good and motivation. It may have nothing to do with dopamine.

~~~
Nursie
> My dopamine levels [0] skyrocket when I visualize making some component
> generic, or future proof.

I think I've successfully rewired that impulse in myself now, at least
partially. I do get that feeling from imagining great systems, but I also get
that dopamine hit from deleting old stuff and simplifying things as much as
possible. We can ditch compatibility for API v8? Awesome, I just made my code
smaller and we can scrap that entire abstraction layer! The service just lost
20% of its LoC :)

~~~
saalweachter
Honestly I think this may be a component: besides code shame (my code is bad
and I don't want other people to see it) a lot of programmers also feel
protective of their code. They want it to last forever, a paean to their
skills. Writing simple code to throw away and refactor feels like a failure --
it should be perfect and eternal.

~~~
psahgal
I totally understand the sentiment of writing elegant code that lasts forever,
but I think developers need to manage their expectations a little better.

If you're writing foundational code for an operating system, then yeah,
whatever you write is going to stick around for a while. (Which means you need
to be a LOT more careful about what you write.)

If you're writing code for a mobile app or website, expect your code to get
thrown out in a few months due to shifting requirements. I'm sure the Facebook
app has been completely rewritten more than once, since they switched to React
Native at one point.

If you're writing a JavaScript library, expect your code to be replaced next
week. :D

------
mcv
Don't design for future use cases unless they are known in detail (in which
case they're not really future use cases anymore). Things you don't know, will
change. Your extra effort may actually hurt you in the future.

Better is to design for change. Keep everything modular. Keep your concerns
separated. When something needs to change, you can just change that thing.
When the whole basis of the system needs to change, you can still keep all the
components that don't need to change. The only future use case you should
design for is change, because that's the only thing you can be certain of.

So don't make things more generic than you need today, but make sure it can be
made more generic in the future. And in that future, don't just add new
features all over the place, but reassess your design, and look if there's a
more generic way to do the thing you're adding.

I do this sort of stuff all the time on my current project, and it works quite
well. There's no part of the system we didn't rewrite at some point, but all
of them were easy to rewrite.

~~~
antris
Additionally, even if you _know_ the use cases, I often find it more
productive to form the code of the application one use case at a time. This
allows greater focus on the task at hand. People cannot effectively multi-task
so breaking focus into different use cases at the same time is slower and
causes more mistakes than just coding one use case at a time.

------
mmxmb
The chapter _The Second-System Effect_ from _The Mythical Man-Month_
(Frederick P. Brooks, Jr.) talks about this problem:

“An architect's first work is apt to be spare and clean. He knows he doesn't
know what he's doing, so he does it carefully and with great restraint.

As he designs the first work, frill after frill and embellishment after
embellishment occur to him. These get stored away to be used "next time."
Sooner or later the first system is finished, and the architect, with firm
confidence and a demonstrated mastery of that class of systems, is ready to
build a second system.

This second is the most dangerous system a man ever designs. When he does his
third and later ones, his prior experiences will confirm each other as to the
general characteristics of such systems, and their differences will identify
those parts of his experience that are particular and not generalizable.”

The overall advice is to practice self-discipline:

“How does the architect avoid the second-system effect? Well, obviously he
can't skip his second system. But he can be conscious of the peculiar hazards
of that system, and exert extra self-discipline to avoid functional
ornamentation and to avoid extrapolation of functions that are obviated by
changes in assumptions and purposes.

A discipline that will open an architect's eyes is to assign each little
function a value: capability _x_ is worth not more than _m_ bytes of memory
and _n_ microseconds per invocation. These values will guide initial decisions
and serve during implementation as a guide and warning to all.

How does the project manager avoid the second-system effect? By insisting on a
senior architect who has at least two systems under his belt. Too, by staying
aware of the special temptations, he can ask the right questions to ensure
that the philosophical concepts and objectives are fully reflected in the
detailed design.”

~~~
JoeAltmaier
Yet 'spare and clean' can mean breaking the problem down into primitives. And
primitives can be reusable.

I agree, contriving baroque features and APIs is usually wasted time. But
planning simple basic operations that others are built upon is good sense.
When possible.

~~~
mason55
Yeah we call them "building blocks." Instead of building specific individual
features we create building blocks that we can use to implement features. The
key difference is that by exposing the building blocks to customers they can
use the building blocks in ways that we didn't imagine.

The most important reason for us to do this is so that our customers can
always accomplish what they need, even if it's somewhat manual or painful. By
exposing our building blocks we never have an emergency where we need to
implement the feature "copy foo to bar so that customer X can accomplish
important thing Y." Instead we can say "you're able to do that manually and
we'll talk about a feature that will help you automate if the manual process
is too painful."

Sometimes you just need to write some straight business logic, don't get me
wrong, and it takes experience to know when to create a new concept and when
to just bang out some code. But exposing primitives or building blocks reduces
the amount of "critical" development substantially.
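
A toy sketch of the building-block idea (every name below is invented for illustration): two exposed primitives, and a "feature" that is nothing more than their composition, so a customer with an urgent need could have done the same thing manually before the feature shipped.

```python
# Hypothetical sketch: primitives first, features as compositions.
def list_items(store: dict, prefix: str) -> list:
    """Primitive: enumerate keys under a prefix."""
    return [k for k in store if k.startswith(prefix)]

def copy_item(store: dict, src: str, dst: str) -> None:
    """Primitive: copy a single item."""
    store[dst] = store[src]

def mirror_prefix(store: dict, src_prefix: str, dst_prefix: str) -> None:
    """'Feature' built purely from the primitives above -- anything it
    does, a customer could already do by hand, just more painfully."""
    for key in list_items(store, src_prefix):
        copy_item(store, key, dst_prefix + key[len(src_prefix):])
```

The design point: because `list_items` and `copy_item` are exposed, "copy foo to bar" is never an emergency feature, only a convenience one.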

~~~
valand
I found out about primitives/building blocks after 2 years of dealing with a
foreign discipline, but I have a problem sharing the idea with people because
I just don't have the credentials.

Is there a well-known book/resource that I can share with people about this?

------
rkangel
I am a firm believer that the most important superpower a software engineer
and their team should have is refactoring.

By refactoring what I mean is continually revisiting the architecture of your
code, identifying common functionality, better organisation of code, better
abstractions. The right decisions for your codebase change as it evolves and
so you need to keep reexamining these (implicit) decisions.

Continuously incrementally refactoring your code enables so many things to
work better. Your example here is a great one - don't implement something you
don't have a direct need for. Don't design for it. If you have a culture of
refactoring then you can be confident that in the future if that need crops
up, you can implement it with appropriate refactoring that the result will
look at least as good as if you designed it in up front. If you don't refactor
as a team, then you _have_ to do it now, or putting it in later will result in
a mess.

~~~
talkingtab
This. I did not decide to do this, but over time I refactor things. As I gain
experience doing that, I realize I have started to write "refactorable code"
(you heard it here first). In JavaScript land (I'm a fan, by the way) this is
inevitable: jQuery, React, hooks, context, promises, async, ES6, on and on.

The churn in JavaScript is painful, as many people note, but the code I
started with is now smaller, clearer, and easier to refactor. So yes to
refactoring.

~~~
rkangel
I was lucky to work with a great and experienced SW engineer early in my
career. Among other things we were working on a Bluetooth stack of
approximately 1 million lines of C.

I had assumed in my naivety that as I got more experienced I would get better
at doing things right 'first time' and wouldn't need to keep going back and
fiddling with code I'd already written because it now wasn't quite right. What
I learned instead was that he spent at least 50% of his time reworking
existing code.

There are two important things. The right design changes as you add more code.
You learn more about what the right design should be as you work on the code.

You're never going to get it right first time. So instead concentrate on
continuously making it better.

------
SkyPuncher
Data is the only thing that I personally care about "making future proof".
Migrating data is a pain. Most everything else can easily be feature flagged
or coded around.

Even then, I don't dig too deeply into things. I look for two things:

* What a migration path to realistically expected features _might_ look like. If I'm debating between two column types or table setups, go with the more flexible option.

* What scenarios will be extremely hard to get out of. Avoid those when reasonable.

\----

In other words, instead of proactively planning for some feature to evolve
into the unknown in some way, I'm looking to make sure I'm keeping flexibility
and upgradability.
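
A toy illustration of "go with the more flexible table setup" (the schema is invented here): a join table costs almost nothing today, and it turns a plausible future requirement ("documents can have several owners") into a data change instead of a schema migration.

```python
import sqlite3

# Hypothetical schema. Instead of an owner_id column on documents,
# ownership lives in its own table from day one.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE documents (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE document_owners (
    document_id INTEGER REFERENCES documents(id),
    user_id     INTEGER
);
""")
db.execute("INSERT INTO documents VALUES (1, 'design doc')")
db.execute("INSERT INTO document_owners VALUES (1, 42)")

# The future "multiple owners" feature needs no ALTER TABLE, just a row:
db.execute("INSERT INTO document_owners VALUES (1, 43)")
owners = [r[0] for r in db.execute(
    "SELECT user_id FROM document_owners WHERE document_id = 1"
    " ORDER BY user_id")]
```

This is exactly the first bullet point: nothing speculative is built, but the migration path to a realistically expected feature stays cheap.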

~~~
mason55
Really agree with your two bullet points. When I'm doing design reviews with
people those are always the things I bring up. We're not going to actually
design or build for requirements that we don't have yet but we can take a
guess at what future high-level requirements might look like and use that as
an input when talking about the current design.

An example of this conversation might look something like "This design is nice
and simple but here are the possible scaling bottlenecks. If we need to scale
X then what would be the path for that?" or "There's no reason to make this
configurable now but if we needed to in the future how would we do that?"

Those sorts of questions shouldn't drive the design but if you're trying to
decide between two designs that feel roughly as good as each other then it's a
good way to tip the scale. I also find it a good way to introduce junior devs
to thinking through those kinds of ideas.

One thing I had to learn to be careful with was to make sure it's really clear
we're talking about future hypotheticals. I've had cases where the
conversation about future design ideas ended with "Ok I'll go get started on
that" and then we have to talk about how we weren't discussing real
implementation changes, just hypotheticals about how the design might work in
the future.

------
picometer
Remember that design is about making choices that actually limit the
possibilities in design space, rather than extending them.

Building something “generic” to anticipate “future use cases” is not design;
it’s the postponement of design.

If you anticipate some unpredictable future feature, then either you do not
understand the design space yet, or you are following the directive of a
business which does not understand it.

Startups (prior to product/market fit) have a legit reason for not
understanding their design space; they’re still exploring it. That makes
design pretty hard. Instead of creating The One Generic System To Rule Them
All, I’d recommend small, less-generic, low-risk prototypes, that can be
easily replaced or refactored. Keep them uncoupled. Meanwhile, use those
prototypes to build up your understanding, so that you will be able to make
informed design choices at a more mature stage.

I’m speaking in broad strokes; reality may apply.

------
ss3000
> How far should we go in making things generic?

My rule of thumb for this is to repeat yourself at least 3 times before trying
to build an abstraction to DRY them up.

3 use cases is obviously not always sufficient to inform the design of a good
abstraction, but IME:

\- abstracting at 2 use cases is very often premature and results in
leaky/brittle abstractions where the harm from added coupling between
fundamentally incompatible usages far outweighs whatever little harm can be
introduced by keeping the 2 usages as repeated code, and

\- with any more than 3 use cases, the maintenance cost of keeping the usages
in sync starts to scale non-linearly, so at the very least thinking about a
design for a potential abstraction at that point becomes a worthwhile
exercise, even if we end up deciding it's not yet appropriate (hence the "_at
least_ 3 times").

~~~
jaredsohn
>My rule of thumb for this is to repeat yourself at least 3 times before
trying to build an abstraction to DRY them up.

Also known as WET (Write Everything Twice).

------
okaleniuk
This is an interesting question. Intuitively, there should be some sweet spot,
some golden middle between going 100% ad-hoc and writing for the future with
no real use cases. I'm not sure anyone in this world knows where this sweet
spot is, though.

In 2017 I started
[https://wordsandbuttons.online/](https://wordsandbuttons.online/) as an
experiment in unchitecture. There are no generic things there whatsoever.
There are no dependencies, no libraries, no shared code. Every page is self-
sustaining, it contains its own JS and CSS, and when I want to reuse some
interactive widget or some piece of visualization somewhere else, I copy-paste
it or rewrite it from scratch. In rare cases when I want multiple changes, I
write a script that does repetitive work for me.

This feels wrong, and I'm still waiting for it to blow up. But so far it
hasn't.

Sure, it's a relatively small project; it has about 100K lines along with the
drafts and experiments. It is, though, inherently modular, since it consists
of independent pages. But still, this kind of design (or un-design) brings
more benefits than I could ever have hoped for.

Since I only write the code that I want on a page, every one of my pages is smaller than
64KB. Even with animations and interactive widgets.

Since all my pages are small, I have no trouble renovating my old code. Since
there is no consistent design, I forget my own code all the time, but 64 KB is
small enough to reread anew.

And since there are no dependencies, even within the project, I feel very
confident experimenting with visuals. Worst case scenario - I'm breaking a
page I'm currently working on. Happens all the time, takes minutes to fix.

I still believe in the golden middle, of course, I'll never choose the same
approach in my contract work; but my faith is slowly drifting away from
"design for the future" in general and "making things generic" specifically.
So far it seems that keeping things simple and adoptable for the future is
slightly more effective than designing them future-ready from the start.

~~~
syats
Thanks for sharing wordsandbuttons, it is full of very interesting content,
which is, after all, the only thing that humans consume from it. It is a very
good answer to OP. Furthermore, the site is very satisfying to navigate,
especially with the dev console's network tab on. I'd just love it if the whole
web was like this again. Keep up the good work!

------
rosshemsley
I think a common conflation is seeing "making something future proof" as
"making it more generic".

IMO, good future-proof design is about putting in place good components and
system boundaries.

Those components and boundaries can be highly specialised and have as few
options as possible - it's much easier to make a system boundary more complex
than to make it simpler. So start as simple as possible!

Now, with those boundaries, you can easily write tests, and iterate on the
different parts of the system. Bad code in one component doesn't "infect" bad
code in another part of the system.

Most "balls of mess" systems that I have seen came down to not having clear
boundaries between components of the system, rather than being too generic or
not generic enough.

~~~
kqr
David Parnas makes the distinction between general software and flexible
software. General software runs without modification in a variety of
environments. Flexible software can cheaply be modified to run in a variety of
environments.

When you cannot predict the future, flexible is often more efficient than
general.
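
One way to read the distinction (a contrived sketch, not Parnas's own example): the "general" version absorbs every environment as a parameter and its configuration surface grows with each one, while the "flexible" version stays small enough that rewriting it for a new environment is cheap.

```python
# Contrived sketch. "General": runs unmodified in several environments,
# at the cost of a growing parameter/branch surface.
def storage_url_general(backend: str, *, path: str = "",
                        bucket: str = "") -> str:
    if backend == "file":
        return f"file://{path}"
    if backend == "s3":
        return f"s3://{bucket}"
    raise ValueError(f"unknown backend: {backend}")

# "Flexible": does exactly what today's environment needs; when the
# environment changes, this one small function is cheap to modify.
def storage_url(path: str) -> str:
    return f"file://{path}"
```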

------
flyinglizard
1\. Embrace refactoring/rewriting as inevitable;

2\. Understand it's very often easier, faster, and better to write something
trivially but twice than to write it well once;

3\. Realize no matter how hard you try, unless this is something you've
already done once (which brings you back to #2 above), then your information
about the problem is incomplete and therefore any all-encompassing solution you
may have will essentially be partial and based on extrapolation of your
knowledge and guesswork.

The above also deals with a significant cause of procrastination - an internal
subconscious feeling we can't do what we're about to do well, therefore we'd
rather not even start it. Just sit down and tell yourself, "today I'm going to
write a bad module X", and with enough time you'll be able to sit down again
and write a good X.

The one thing you should spend a bit more time on is interfaces and
decoupling, but as for internal implementations of things, or even entire
system blocks that you can swap later on - don't bother too much. It'll be OK
at the end, and you'll get there faster. Even if you need to rewrite the whole
god damn thing.

~~~
d0m
^ This

Also:

1\. cut features and enjoy saying no.

2\. set deadlines and ship (if not enough time, see 1 above)

3\. don't skip tests; makes refactoring/rewriting a breeze

------
otikik
Two things in combination help me fairly well in that regard.

1\. Keep It Super Simple (KISS).

Implement the solution which works the easiest first. Copy-pasting code is ok
at this point. So if an "if" will do it, use an "if". (Don't start with a
BaseClass with an empty default method and a specialized class which overrides
it).

Once you have something working (hopefully with some automated tests?) you are
allowed to refactor and abstract. But see next point.

2\. The Power of Three!

You are only allowed to abstract code once you have copy-pasted it in at least
three places. If you only have two, you must resist the urge and move on.
Maybe add a comment saying "this is very similar to this other part, consider
abstracting if a new case appears".

After abstracting stuff, run tests again (even if they are manual) to make
sure you have not broken anything.
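A toy Python sketch of that progression (all names are invented for
illustration):

```python
# Step 1 (KISS): copy-paste is fine while there are only one or two call
# sites -- a stripped, non-empty check written inline each time.
def validate_username(s):
    s = s.strip()
    if not s:
        raise ValueError("username required")
    return s

# Step 2 (Power of Three): the same shape showed up a third time, so it has
# now earned a shared abstraction.
def require_nonempty(s, field):
    s = s.strip()
    if not s:
        raise ValueError(f"{field} required")
    return s

def validate_email(s):
    return require_nonempty(s, "email")

def validate_city(s):
    return require_nonempty(s, "city")
```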

Be warned that this method is not guaranteed to produce the "best possible
code for future you". If you keep doing this long enough, you might get stuck
at "local maxima" in design. Future you might need to do big refactors. Or
not. That is the nature of programming IMHO.

------
chasd00
Knowing the point where you pass good design and start over-engineering is an
art developed through wisdom and experience. Sort of like the art of making
good LOE estimates given incomplete requirements and an unknown team.

I would look to senior people who have boots-on-the-ground experience
delivering and maintaining projects/products, ask them if they think you're
overdoing it.

edit: I want to add, get input from non-engineers too. Ask them "in your
experience, how far have projects diverged from the initial
requirements/purpose? I'm trying to plan for the future but not over-
complicate things"

------
imtringued
The stupidest over engineering pattern I've seen is having an interface for
every single class in a Java project. This may make sense if you are building
an API within a framework or library that people other than you are allowed to
implement but when the interface is only used within the project it was
created in then it is absolutely meaningless to add the interface before you
need it. You can always add a necessary interface later by editing the code.
Doing a search for "ClassName" and replacing it by "InterfaceName" isn't
exactly difficult.

------
mschaef
Some of this is the experience to know what parts of a system are likely to
matter in the long run, and some of it is the discipline to hold yourself and
your team accountable.

If I had to boil the first part down, I'd say you need to focus your
engineering on the interface points - network protocols, schemas, API sets,
and persistence formats immediately come to mind. It's a truism of the
profession that those are the most expensive aspects of a system to change,
and therefore they're the least likely to be mutable down the road.

That's really the easy part... the harder part is maintaining the team and
self discipline to keep things simple. For better or for worse, engineering
job placement (and oftentimes personal satisfaction) is highly resume-keyword
driven. Organizations and people all tend to chase the next resume keyword on
the assumption that it'll help them deliver more efficiently or get the next
job placement or write the next blog post or whatever else. The net of this is
that there's a very strong built in tendency for projects to veer in the
direction of adding components, even before considering whether or not they're
appropriate for use. So keeping your eye on actual, real project requirements
over all else is both important, and can require convincing and political work
throughout a team and organization.

------
dgb23
Disclaimer: I have never worked in large teams but this problem arises
everywhere with the caveat that in smaller teams and solo development the
coordination requirements are much simpler.

That said, I think there are two steps in designing abstractions:

The first is to split up and isolate both code and data into small, simple
pieces. These can easily be non-DRY (structurally) and often are. You'll have
cases where you often (always?) pass things or do things in sequence in the
same way.

The second is to merge the pieces with parametrization (or DI in OO), defining
common interfaces, polymorphism etc.

The first part is very often beneficial and makes code more malleable, which
makes future features/changes less cumbersome.

The second part however is the dangerous but also powerful one. It can lead to
code that is much more declarative, high-level and adding new features becomes
faster. But the danger is that a project isn't at the stage where these
abstractions are understood well enough, so you end up trying to break them up
too often.

I try to default to writing dumb, simple small pieces and deferring
abstractions until they become provably apparent. Fighting the urge to
refactor into DRY code.

Now there is also another issue with writing "future proof" code, which
doesn't involve abstractions but rather defensive programming, which is an
entirely different issue.

~~~
kls
I would agree with the parent here, try to not go into abstractions until they
are apparent but there are some pretty solid patterns that have worked thru
the years and still hold up today.

The first being pub-sub. This can be an event system or an observer pattern,
but it is good to have loose coupling: when something happens, a component
can loosely announce it out into the ether and not care whether anyone is
listening.
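In-process, that loose coupling can be as small as this (a sketch, not any
particular library; the `EventBus` name and topic strings are invented):

```python
from collections import defaultdict

# Minimal pub-sub: publishers emit events "into the ether" and never know
# whether anyone is listening.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload=None):
        for handler in self._subscribers[topic]:  # zero listeners is fine
            handler(payload)

bus = EventBus()
seen = []
bus.subscribe("order.placed", seen.append)
bus.publish("order.placed", {"id": 42})
bus.publish("order.cancelled", {"id": 42})  # no listeners -- no error
```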

The second being sole responsibility of change, this does not have to be
something as formal as a Redux or some other library if you are using a back-
end language but there should be one function that changes one segment of data
and that is the one function that owns the writing of that data.

The third is more optional, but in certain places it really shines and has
become kind of a lost art: plug-in style interfaces. Say you have a portion
of your app that parses text files and stores them in a common data
structure, and you know that there will be future text file formats but you
do not know what they will be. A plugin architecture is a natural fit here:
you define the interface and the data storage side, then you only need to
implement a plugin for the parser side for each new file format. I always
write these as true plugins, where you can drop a new one in a plugin folder
and the application picks it up - no recompiling, no configuration, no
restarting, just a black box that is self contained.

~~~
dgb23
I fully agree with the first. You usually need some designed way of handling
communication and/or IO in a sufficiently large application. Whether these are
event queues, channels, commands, pipelines or w/e depends. But thinking about
this up-front is the key here.

I would describe the second as a constraint rather than an abstraction. And I
fully agree with this. The most obvious and probably most talked about benefit
of FP would be avoiding state management complexities.

The third one is neat. But I would say we're already in danger territory here.
Just writing a function and naming it properly is the minimal step required to
make refactoring into a strategy or plugin pattern later fairly
straightforward.

------
msluyter
One heuristic I've adopted: when faced with a design decision, I ask the
question "what's the simplest possible way to do this (that meets the
requirements)?" So, do you need a fully event driven Kafka based system, or
would a cron job running a python script be sufficient?

The followup is: if we have to change/replace said system, how difficult
would that be? If scaling the simplest solution would be difficult/painful,
then one can start looking at higher-complexity solutions. The idea isn't
necessarily to always _choose_ the simplest solution, but having it in mind
can be helpful. It crystallizes one of the endpoints of the simple/complex
spectrum and helps weigh the pros/cons of various approaches.

As a side note, it's sort of amusing that a lot of design focused interview
questions are around things like "design a twitter/instagram like system
that'll scale to billions of requests per day." I've never had to do anything
like that IRL, but no interviewer has ever asked me to design, say, an
invoicing system that gets called rarely. So perhaps one of the reasons the
arc of software engineering bends towards complexity is that we're
continuously rewarding a "build massive scalable systems" mindset?

------
zzzeek
If you ask an ice skater how not to fall, they will likely tell you that they
have fallen tens of thousands of times and continue to fall all the time. But
they do know how not to fall, because they are intimately aware of what it
feels like to fall and as a result know how to avoid it.

I don't know that there is any readable wisdom that will teach you how not to
overengineer or underengineer, such that with that knowledge, you
automatically know how to achieve the right balance. It is likely a necessary
part of the process to build out software projects that are in fact
overengineered or underengineered and to intimately learn from that process as
well as the aftermath how to tune one's own process to strike an artful
balance between the two extremes.

put another way, the MS and Google teams you worked with were screwing up, but
screwing up in a way that is necessary for people to learn, if they do in fact
learn (they might not learn).

all of that said, starting out by intentionally underengineering, if you know
that you are one to start overbuilding, might be a good strategy. but you
might have to do some really huge refactorings subsequent to that when you
learn your architecture is insufficient.

------
sam_lowry_
This is by far the main reason to have people with decades of experience on
engineering teams.

~~~
rramadass
Well said! This is why older engineers should be prized and actively
recruited. When I see engineering teams staffed solely with "Young'uns" I
know there are hard times ahead.

Enthusiasm and Energy needs to be controlled and directed by Experience and
Good sense.

------
throw1234651234
Make your code modular. Use dependency injections. Write short, clear
functions.

When everything is DRY and functions follow SRP, it's hard for your code not
to be "future-proof."

Tests only when they make sense, not when you are mocking half the app.

------
lonelyasacloud
> At what point we should stop designing for future use cases?

Guessing the future and engineering to cope with it is a risky, error-prone
business. Good engineering practice should always seek to minimise reliance
on unreliable data (predictions) when creating future-proof designs.

So I'd go with stopping as soon as it meets current use cases AND then shift
gears to make it easy for someone else to pick-up in the future, e.g. write
whatever tests are necessary and document thinking.

If it's easy to extend now, it will still be so in the future. Plus, there
will be whatever learning has occurred since to make an even better job of it.

> How far should we go in making things generic?

Only if: 0) It's not only about guessing future needs. 1) The maintainers are
over 90% certain it will make it easier for them to maintain and test now.

> Are there good resources for this?

There's a mountain of media on "Agile" software development and its different
flavours. It's not particularly Software Engineering focused, but I enjoyed
"The Lean Startup" by Eric Ries.

Good luck and have fun.

------
sobellian
This tendency has been noted since at least the Manhattan Project, described
by none other than Richard Feynman. The prescription for oneself is pretty
simple - just say no! It is relatively easy to catch yourself whenever you
think about some improvement that's irrelevant to delivering so long as you
have convinced yourself that you need to wholeheartedly focus on delivering.

It is more difficult to persuade others not to over-engineer. I have tried and
failed many times to do so. In fact, if you try too hard you may just make
them hate their job. Folks get into software development for a whole host of
reasons and only one of them is shipping. They may not be satisfied with a job
where they funnel requests from a PM directly into code with little creative
input. I'm not sure I can give good advice on this front, other than to look
out for certain red flags during hiring.

~~~
dragonwriter
> The prescription for oneself is pretty simple - just say no! It is
> relatively easy to catch yourself whenever you think about some improvement
> that's irrelevant to delivering so long as you have convinced yourself that
> you need to wholeheartedly focus on delivering.

That's fairly true if you intend a monomaniacal focus on delivery and you are
dealing with improvements that are truly irrelevant.

If other concerns like code hygiene have nonzero value, and/or if improvements
have some potential relevance to delivery but are not unmistakably essential,
then things get more interesting. Doing the least work possible to get by
with a work item may be good for initial velocity but comes at a long-term
cost.

It's probably easier in an environment where things like code hygiene
expectations and review standards are well-known from a combination of express
standards and team experience, but not all environments are like that.

~~~
sobellian
Two points here. First, I think the primary trap is that developers think they
can judge what code is hygienic. This is the evil twin of the notion that
developers can judge what part of a system is slow without profiling.

Developers regularly hold religious wars on basic matters of code style. While
you may profit from 'hygiene' there be dragons - developers often consider as
hygienically critical such tasks as refactoring React class components into
functional components with hooks. The industry is not full of Edsger
Dijkstras. In practice the sub-optimal greedy algorithm gets the job done
since it encourages a straightforward approach to the problem at hand.

The secondary trap is that developers think they can amortize up-front
development costs over a long period of time. Projects get the axe if they do
not deliver. The longer you spend writing a feature, the longer you wait to
test your hypothesis that it is valuable. In many cases the odds that the
feature is valuable run below 50%. You may be laying a solid foundation when
all that was asked of you in the first place was a shantytown.

Everyone thinks they are the reasonable person with discerning taste who only
refactors code as necessary to optimize total cost. In practice that person is
as mythical as a unicorn. It is very difficult to convince developers this is
true, but true it remains.

~~~
dragonwriter
> First, I think the primary trap is that developers think they can judge what
> code is hygienic.

If software engineering _is_ engineering, they can. It may be a skill that is
stronger in some engineers and weaker in others, varying particularly with
relevant experience, and it may require complex, context-sensitive
evaluations (including of the present development team), but it is a real
skill that exists.

> This is the evil twin of the notion that developers can judge what part of a
> system is slow without profiling.

Well, it's not, because there is no analog to the “without profiling” part.
It's true that there is an analogous tendency of _prejudging_ problem code,
but certainly, at a minimum, hygiene issues can be discovered by experience of
problems of development/maintenance stemming from, e.g., code duplication for
shared functionality instead of use of shared abstractions.

> The secondary trap is that developers think they can amortize up-front
> development costs over a long period of time.

Note that when I said long term I don't necessarily mean a “long period of
time” but “some period longer than the minimum development time of the present
item”...

> Projects get the axe if they do not deliver.

Yes, they do, but plenty of real projects aren't, especially at initiation,
delivering actual value every iteration, just progressively refined
demonstrations (this is particularly common on replacements that, for
whatever reasons, can't go the strangler/ship-of-Theseus approach, and on
those projects the team often has an idea of where things need to be for real
delivery). It's quite possible for code that makes subsequent tasks more
costly, even though it saves time this iteration, to delay real delivery.

> Everyone thinks they are the reasonable person with discerning taste who
> only refactors code as necessary to optimize total cost.

No, in my experience that's not even approximately true. _Most_ developers
I've encountered think that, in their current team environment, they
individually have a natural tendency either to excessively favor direct
solutions that produce ugly code with outsized downstream cost or to go a few
steps too far with overengineering abstraction (and most of them also
recognize that that overall tendency is only on average, and that they also
miss on the other side sometimes.)

My point is that when you get beyond the simplistic rejection of things which
have _no_ relevance to the task at hand (which isn’t a scenario that occurs
all that much), evaluating what is the right balance is nontrivial.

~~~
sobellian
> If software engineering is engineering, they can.

Therein lies the question :)

> Well, it's not, because there is no analog to the “without profiling” part.
> It's true that there is an analogous tendency of prejudging problem code,
> but certainly, at a minimum, hygiene issues can be discovered by experience
> of problems of development/maintenance stemming from, e.g., code duplication
> for shared functionality instead of use of shared abstractions.

Even in this case we are limited. I've seen many cases where someone correctly
identifies a piece of code that is painful to maintain and then dives in only
to multiply the problem. We are even better off in performance land because we
can test our code immediately after we write it to verify we haven't left
things worse.

> No, in my experience that's not even approximately true. Most developers
> I've encountered think that, in their current team environment, they
> individually have a natural tendency either to excessively favor direct
> solutions that produce ugly code with outsized downstream cost or to go a
> few steps too far with overengineering abstraction (and most of them also
> recognize that that overall tendency is only on average, and that they
> also miss on the other side sometimes.)

I don't want to wade into the thicket on this one except to note that it's
possible for someone to articulate something they may even believe on some
superficial level without truly internalizing that belief and that IMO I
observe this behavior a lot.

I don't want to invalidate your point outright because I think it's valuable
and I even agree that it's rare to encounter a code base that would not
benefit at all from 'code hygiene.' All models are wrong but I think an
aggressive focus on delivering functionality is a useful one in most contexts.
Put another way, a good but imperfect rule of thumb is to refactor only in
order to gain something tangible for end-users.

------
nhoughto
Summed up as YAGNI

You aren’t gonna need it.

As with anything tho, ruthlessly cutting stuff or rounding corners is in
tension with supporting future capabilities; sometimes, it turns out, you do
need it.

------
stephc_int13
Over-engineering is the curse of smart guys without enough experience.

I believe this is well documented today, but maybe not that much in academia.

What they need to avoid this trap is to talk with people who got burned
earlier.

As a rule of thumb, don't design for future use cases, ever.

Future-proofing is a fool's errand.

------
TYPE_FASTER
It depends on the impact of the change to add flexibility in the future. A
schema change to a table with many rows that could involve re-indexing usually
makes me think harder about the initial data model and schema design.
Refactoring code usually involves less cost and risk, especially with modern
CI/CD practices, so I'm less likely to add a layer of abstraction if it's not
required.

I've also found that designing for unknowns can sometimes be resolved by
asking questions about the unknowns until they're known. Sometimes, business
stakeholders have an answer, they just weren't asked.

------
WalterBright
Designing for the future rarely works out very well. What does work, however,
is encapsulation.

For example, in the 80's the linked list was my go-to data structure. It
worked very well. But with the advent of cached memory, arrays were better.
But my linked list abstractions were "leaky", meaning the fact that they were
linked lists leaked into every use of them.

To upgrade to arrays meant changing a great deal of code. Now I
encapsulate/abstract the interface to my data structures, so they can be
redesigned without affecting the caller.
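A sketch of that encapsulation in Python (the original story is presumably
C/C++; the `TaskQueue` name and backing store here are invented for
illustration):

```python
# Callers see only the interface; the backing store (a linked list yesterday,
# an array today) can be swapped without touching any caller.
class TaskQueue:
    def __init__(self):
        self._items = []          # today: a cache-friendly array

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop(0)

    def __len__(self):
        return len(self._items)

q = TaskQueue()
q.push("compile")
q.push("link")
assert q.pop() == "compile"   # callers never learn how items are stored
```

Because nothing outside the class can reach `_items`, upgrading the internal
data structure is a change to one file rather than to every use site.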

------
drol3
In my mind it's surprisingly simple: you VERY honestly ask yourself whether
you know exactly what you are building. If you do, then don't be afraid to
plan ahead. If you don't, then ship the smallest thing that works :)

If you are doing a rewrite of an existing system or have many years of
experience with a similar product, then thinking ahead can save time.
Otherwise you are probably better off not trying to be too clever.

The difficult part for most people is actually being honest in the process :)

Also, Kent Beck's timeless advice is good to keep in mind:

1) make it work

2) make it right

3) make it fast

In that order. You might not need all three :)

------
est31
I have been in this trap myself. I tried writing only perfect code. I ended up
thinking more about the design than actually progressing on the issues. The
solution I found was to write explicitly hacky solutions first, then improve
them as you go forward. Either hacky by being slow, or by not having all the
features you want it to have eventually. Not hacky by having huge bugs,
though; don't do that, as bugs cost more the later you find them! Put TODOs
Also, as you write the first implementation, you'll gain more knowledge about
the problem domain than any non-coding research could give you. Maybe you
don't need that one feature after all, or maybe your approach was totally
wrong and you should do a different one. It's better to find that out when you
only have invested time into a prototype instead of a full generic solution!

And this is not an obligation. If you are really sure about the design of some
component, you can also do it right the first time. But in most places, you
usually don't know the design well enough.

Also, the imperfect solution can help as a validator. Have a problem that has
a unique solution? Make a slow algorithm for it which you are sure is correct,
then build a faster version and use the slow algo to compare for correctness.
You can also use the slow algorithm to tell you about non-output values that
both algorithms compute implicitly or explicitly.
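A small Python illustration of that slow-validates-fast pattern, using
maximum-subarray-sum as a stand-in problem (the problem choice is mine, not
the commenter's):

```python
import random

# Slow but obviously correct: brute-force every subarray.
def max_subarray_slow(xs):
    return max(sum(xs[i:j])
               for i in range(len(xs))
               for j in range(i + 1, len(xs) + 1))

# Fast version (Kadane's algorithm), checked against the reference.
def max_subarray_fast(xs):
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

# Randomized cross-check: the slow algo acts as the validator.
for _ in range(100):
    xs = [random.randint(-10, 10) for _ in range(random.randint(1, 20))]
    assert max_subarray_fast(xs) == max_subarray_slow(xs)
```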

------
outime
It's a difficult battle in many teams where some people will just go overboard
for multiple reasons, usually in this order: fun, fear of being seen as short-
sighted and/or promotions. It mainly happens in big orgs and your examples
definitely fit the bill, although I haven't worked there specifically. I'm
not that old (very late 20s) but I've seen over-engineered code (or infra)
end up in the trash so many times... many people could see it coming and
warned about it, yet it still happened.

I usually try to support the opposite idea by bringing agile to the table.
Agile doesn't say "please don't overengineer" but rather "this can change a
lot in a couple of sprints; requirements may be different; there are more (or
no more) use cases now" - so why spend so many resources on something that
not only doesn't cover all potential unknown use cases but also adds overhead
when reading the code, for no benefit? For some teams this has clicked very
well. Appropriate agile training can help a lot.

However, it doesn't always work, and if the person(s) doing this have
higher-ups backing them up (even if unintentionally) you're fighting the
wrong battle. If you can't change it, move teams or to a company where money
is a bit tighter and/or where these behaviours aren't tolerated - in other
words, where things need to get done and there's no time for experimenting
with things that most likely won't yield returns.

------
gonzo41
Solve the problem directly in front of you.

Then understand a new problem.

Repeat.

If you then start doing things like externalizing inputs and writing tests,
then when your past solutions become future problems you will have created
the guide rails for solving the past problem and the future problem together.
That is the future-proofing you should be aiming for. Make sure you write at
least a few unit tests that mock out the important logic parts, and write a
short one-page document about the intent of the program, maybe with a
picture, and drop it in the README.

------
addictedcs
One of the most challenging things in growing a software product is managing
complexity. If you are designing a product for future use cases, you add
complexity upfront. To keep a healthy balance, I try to follow simple
guidelines:

* Focus on the current assignment. Implement it using clean code principles, don't overthink the problem.

* Rather than spending time on design decisions, allocate time to handling edge cases. These will save you from PagerDuty alerts.

* Plan extensively for future use cases only around the data models you write to and read from the database. Data migration is super painful. A more generic design around your database is almost always preferable.

* Have a feature toggling[1] service in your codebase. It will give you a better understanding of how to implement new features alongside the existing codebase in the same branch. Releasing features from long-running separate branches is almost always the wrong decision.

* Always keep in mind time constraints and how urgent the requested functionality is. Don't let perfection be the enemy of productivity.

* Have a process in place that allows for the technical debt tasks to be tackled independently. It helps fix some bad design decisions, which become apparent in light of new requirements.

[1] [https://martinfowler.com/articles/feature-
toggles.html](https://martinfowler.com/articles/feature-toggles.html)
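At its core, a toggling service is just a guarded lookup. A minimal Python
sketch (real services of the kind Fowler describes add runtime updates and
per-user targeting; the flag name below is invented):

```python
# Minimal feature-toggle service: new code paths ship dark on the main
# branch and are flipped on without long-lived feature branches.
class FeatureToggles:
    def __init__(self, flags=None):
        self._flags = dict(flags or {})

    def is_enabled(self, name):
        return self._flags.get(name, False)   # unknown flags default to off

    def set(self, name, value):
        self._flags[name] = value

def render_invoice(toggles):
    if toggles.is_enabled("new-invoice-flow"):
        return "new"
    return "legacy"
```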

------
cjfd
You should immediately stop designing for future use cases.

Use TDD and write only just enough code for the currently known required
functionality. When you get to know more required functionality, the tests
protect you from breaking existing functionality and you can extend your code
to support the new use cases as well. At this point you should make your code
just generic enough to support all known use cases without code duplication or
too much boilerplate code. If you can support functionality in more than one
way you can decide what way to choose based on what you expect in the future.
But choosing the most simple solution trumps attempting to future proof your
code. It turns out that predicting the future is quite hard and there will be
new feature requests that nobody had foreseen and code that has been made as
generic as possible will not handle this well.

Spiderman says that with great power comes great responsibility. The converse
also holds true. With great responsibility comes great power. You cannot just
pick the easy part of TDD, be irresponsible and expect to have any power. The
less easy parts of writing a test for every use case and of refactoring all
the time make the practice of not attempting to guess the future possible. If
you leave out the prerequisites the end result will not be so very pleasant.

~~~
throw10382
TDD has costs. It's expensive and only works on certain types of systems. It
also makes exploration of the problem domain _extremely_ costly and makes
refactoring a nightmare. I dislike TDD full stop but there are some domains
where it's unambiguously a bad idea.

It implies:

\- Behaviour can and should be compartmentalised

\- Certain types of efficiency are ignorable

\- Data structures are better off being relatively simple

\- Behaviour is able to be understood before the system emerges

\- The system doesn't fundamentally need a lot of mutable state

\- Dispersing functionality across small, atomic functions (that obscure
sequential flow and state mutation) is good for the code

\- It is easy to extract the functionality of this system into pure functions

\- A high-level view of the system is unnecessary (!)

\- In this system, most bugs will come from small units, not interactions
between units

\- Behaviour of units is likely to be relatively unchanging

\- Refactoring primarily happens between interfaces, not to them

\- Test rigging is cheap and easy at every level of abstraction, or at least
that it's better to contort your system into a structure where that's true

None of these things are a given.

~~~
cjfd
The only thing that somewhat makes sense is that during exploration TDD may
not always be the most practical. It immediately starts to be extremely
practical once the exploratory phase is over.

I kind of feel you live in some kind of alternate universe. None of these
sound true to me, at all. The most important misunderstanding that seems to be
going on here is the assumption that in TDD it is a given that one is testing
single classes and/or methods. This is not the case. In fact, in most cases it
is much more beneficial to test a set of classes/methods at the same time in a
way that is representative of something that the customer values.

Honestly, I have seen TDD work so well in so many different circumstances
that I have started to consider people who do not write tests for most
possible scenarios as being on the not-so-very-professional side of things.

------
vs4vijay
Follow YAGNI:
[https://www.martinfowler.com/bliki/Yagni.html](https://www.martinfowler.com/bliki/Yagni.html)

------
ericelliott
There are a few first principles from which we can design code.

The first is: Requirements change.

For many reasons. User needs change. Business processes change. Technology
evolves.

If you're going to keep up, you must design code that is easy to change.

Code that is easy to change tends to be code that can work with lots of
different code.

Code that can work with lots of different code tends to lean towards the
general end of the spectrum, rather than being too specific.

Designing code that is modular does not mean that you need to over-engineer,
and it doesn't even mean that it needs to be used more than once.

It should mean that the code has locality: The ability to understand the full
effect of the code without also understanding the full context of the code
around it or the full history and future life of every external variable it
uses.

It may sound to new developers like I'm talking about something complicated,
but it's the opposite:

Code that can be easily adapted to future use-cases tends to be simpler. It
tends to know the least about its environment. It tends to do only one thing,
but do it so well as to be perhaps the optimal solution.

What follows from the first principle is perhaps the most important principle
in software development. Remember this and you'll find yourself needing to do
a small fraction of the work you once did to produce the same value:

A small change in requirements should lead to only a small change in
implementation.
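That locality principle can be sketched in a toy example (names and numbers invented for illustration): the version that takes everything as arguments can be understood, tested, and changed without tracking any surrounding context.

```python
# Low locality: behavior depends on external mutable state defined elsewhere.
# To predict this function's output you must also know the full history and
# future of TAX_RATE across the codebase.
TAX_RATE = 0.08

def total_with_tax_global(prices):
    return sum(prices) * (1 + TAX_RATE)

# High locality: everything the function needs arrives as an argument, so
# its full effect is visible right here.
def total_with_tax(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

# A new requirement ("support per-region tax rates") is now a small change
# at the call site rather than a refactor of hidden globals.
print(total_with_tax([10.0, 20.0], tax_rate=0.08))
```

A small change in requirements (a second tax rate) stays a small change in implementation.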

------
tmaly
I personally like the simplicity of designing for change. Simple rules like
TDD help you to think about the design up front.

I was talking to a new guy that just joined my team yesterday on this subject.
You really cannot predict the future.

You could also look at it from one other angle. If you are only building the
bare minimum to satisfy the requirements, that is a lot less code you are
writing. If you need to replace the system, that is a lot less work to go back
and rework.

------
dukoid
By being aware of the true costs: It's not the cost of making the code more
generic (typically relatively cheap) -- but refactoring costs when it turns
out that the code actually needs to be more generic, but in a different way.
It's easy to refactor something simple into something more complex/generic,
but it's hard to refactor something complex into something that's still
complex, but in a different way.

------
watwut
When you have good refactoring tools, a lot of this is significantly less of
an issue. So, design for easy refactorability, lack of repetition, and
readability.

I don't like the "use the simplest solution possible" advice, because people
who claimed to do that in real life tended to produce an unmaintainable
spaghetti mess. It was "simple" in the sense of not having abstractions or
nested function calls, but hard to read and understand as a big picture.
Sometimes generic is a way to untangle such a previous spaghetti mess. As in,
it is the second step on a road that requires two steps before it is really good.

Understand politics. A lot of those "future cases" are things that analysts or
management required, initially or indirectly. They are also often the result of
trying to hit vague requirements - you don't know what the customer really
needs in enough detail, so you make it configurable in the hope of hitting the
right spot too. They also come from situations in which people got burned in
the past, or from special cases of special cases.

Otherwise said, the complicated thing is often a requirement, initially.
Someone had a reason to ask for it, or thought they did. It gets forgotten and
ignored after a while, in which case it is ok to cut it off.

------
dmoy
Invest in very good refactoring tools.

Invest in very good testing, especially higher level integration / system
testing.

Invest in a good dev/staging setup for your production environment, and also
try to make rollouts and rollbacks automated and as painless as possible.

There will always be the need to change stuff, so get the pieces in place to
make changes easier to code, easier to test, easier to deploy, and easier to
back the fuck out of when you inevitably cause something to burn.

------
bhouston
I find the best designs are simple ones that reflect the underlying concepts.
Most designs that lead to complexity are based on the wrong abstraction. I
have found this to be true in nearly all cases.

Of course, the issue you run into is that the problem the software is trying
to solve changes over time, and then the original abstractions which were
correct become wrong. When that happens, you should advocate refactoring to
the new abstractions if possible.

------
ateng
Disclaimer: the following method may not work if your library is public
facing, as breaking changes to public API is usually a terrible idea.

Ask yourself the following questions:

* Are you an experienced developer in this particular problem domain?

* Do you have a good understanding of the future use cases, and of the data structures once the new feature is added?

* Will the new feature make business or technological sense?

Unless you have a concrete “yes” on all the questions above, your design
should go minimalistic. Without a good understanding of the new feature, the
code base will likely need refactoring when the feature is implemented. Your
effort of going the extra mile (which definitely should be applauded!) is
better spent on making the code easier to read and refactor: better tests,
better documentation, and code review to propagate knowledge and catch bugs.

Another thing: extensibility is sometimes in conflict with readability! Here’s
a hilarious example:
[https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...](https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition)

------
cbanek
> At what point we should stop designing for future use cases?

Immediately. Never design for a future use case until it's a present use case
and you're implementing it right then.

> How far should we go in making things generic ?

It depends on what it is. By not designing a thing to be generic up front, you
have to figure out what n=2 looks like. Is that a function? A class? Copy a
little bit of code? Then n=3. Once n=10, I feel like I have a good idea of the
problem and how to make it generic, and it's rarely what I would have thought
at the beginning.

Sometimes n never reaches 2. Then you've saved a lot of time. Also, you
realize when you have to change things - maybe it's once a release, or very
frequently. Things that are touched frequently probably need a refactor.

My rule of thumb is: never make tomorrow's possible problem today's
complexity. If you design for future use cases, not only will it take you
longer, but your code will inevitably be more complex than it needs to be, and
therefore have more bugs at the very least due to that complexity.

------
theshrike79
If there is an actual customer, try to get the scale of users/data the system
is expecting. A system handling 1/1k/10M things every day will be quite
different and need different solutions.

Problems arise when people reach for the Cool Tools and start building
Webscale things that can handle 100M operations every second... but the
customer only needs the system to handle 4 users who type everything in by
hand. But hey, it has a clustered database and Kafka and Kubernetes, and looks
REALLY good on your Resumé.

When the scale is determined, I personally like the mantra of "Make it work
first, then make it pretty". First build an MVP (or an ugly Viable Product),
that proves your plan actually works, then you can iterate over it to make the
implementation cleaner or faster or have better UX.

If you get stuck making everything super-generic and able to handle all
possible cases, you'll spend time bikeshedding and never get anything
deployed. Just Repeat Yourself with wild abandon and copy/paste stuff all over
until your Gizmo actually works. You can spend time figuring out which parts
are actually possible to make generic later.

~~~
badams2527
How much time do you spend "making it pretty" after you've got it functionally
working? Interested to hear your experiences on that.

For a long time we would "pretty it up at the end", in one of my coworkers'
words, which led to a ton of horrible UX decisions early on that required
major legwork later. We switched to doing UX-driven development from the start
and it's saved us a ton of time and saved us from poor decisions haha.

~~~
theshrike79
I make it pretty until I hit a deadline or the customer runs out of money :D

It depends on the customer and project which parts I focus on making "pretty".
If it's a data-intensive thing, I might spend time optimising the protocol,
compression and making the data pipeline robust. For a web app I'll spend more
time making it more usable by streamlining the most relevant flows (which I
know of since the client has been testing the (M)VP already).

------
aabbcc1241
I usually take YAGNI further and avoid using a database (SQL or NoSQL) at all
(at least in the beginning).

My typical approach is to log all the commands to file(s) and keep the system
state in memory. When the server restarts, it simply replays most of the
commands (skipping those intended to cause side effects).

This is like simplified event sourcing, or command sourcing.

I'm not relying on any framework to build the backend; a simple library
handling REST and websocket APIs is enough.

I'll open source it soon, but it's mostly as simple as it sounds, with some
optimizations to reduce disk space consumption after it becomes a problem in
practice.
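A minimal sketch of what the comment describes - a simplified command-sourcing loop. The file name and command shapes here are invented for illustration, not taken from the commenter's (not-yet-open-sourced) library.

```python
import json
import os

LOG_PATH = "commands.log"  # hypothetical log file
state = {}

def apply(cmd):
    # Pure state transition: no side effects here, so replaying is safe.
    if cmd["op"] == "set":
        state[cmd["key"]] = cmd["value"]
    elif cmd["op"] == "delete":
        state.pop(cmd["key"], None)

def execute(cmd):
    # Append to the log first, then apply, so an applied command is never lost.
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(cmd) + "\n")
    apply(cmd)

def replay():
    # On restart, rebuild the in-memory state by replaying the log.
    if os.path.exists(LOG_PATH):
        with open(LOG_PATH) as f:
            for line in f:
                apply(json.loads(line))

replay()
execute({"op": "set", "key": "user:1", "value": "Ada"})
```

Commands with external side effects (emails, payments) would be tagged and skipped during replay, as the comment notes.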

------
mishaker
This is a typical case I have been struggling with throughout my whole career
in software. It is very hard to fight against because it is counter-intuitive.
Saying that "let's put aside all those designs and let's do a minimalistic
version to satisfy current needs and then we move from there" is always less
attractive than "let's do a great architecture which can support our
'exponential growth'".

In my experience, the following categories of people tend to fall into the
trap easily (and it is really hard to convince them):

\- Corporate people.

\- Business school people.

The following can amplify it:

\- Fund raising... you were successful in raising money, so that means there
is a market and people love your product... well, guess what? You are wrong.

If you are in such a situation, debating is not useful. Changing this mindset
is only possible when you hit the wall at least once in your life. BUT... you
can accelerate the process of realizing it (before it is too late). Try to
find a way to make people realize that all those bells and whistles and
designs were not necessary because nobody actually asked for them, and we are
changing it anyway now.

------
rramadass
There can be no definitive answer to your question, but of course some rules
of thumb apply.

* The nature of your "system" defines how much attention you should pay towards futureproofing. If it is an end-user app/feature only do what is required and nothing more. You are solving one instance of a (maybe) small class of problems. The future will dictate what needs to be added/modified/refactored when you reach that point and not before i.e. no predictions unless you know the domain well.

* If you are building an "architecture" component like OS/Framework/Library etc. then you need to pay attention to generality and extensibility and design with futureproofing in mind. Use standard best practices like data/api versioning, narrow module interfaces, information hiding etc.

* Always focus first on Readability and Maintainability to the exclusion of everything else. Only when you hit a wall with respect to aspects like Performance etc. do you go back and redo/refactor the code for the newly prioritized requirement.

------
namelosw
Don't design for future use cases, unless it's a library that may be extended.
Instead, write concise code.

Modify the code when the use cases change. Most of the time, the open-closed
principle is a trap.

However, over-engineering is not only a technical problem. It's more of a
problem on project / people / finance, here are some examples but it would
definitely not limited to these:

1\. The margin is big and the team is big, so we need to keep these people
busy.

2\. The current stakeholder will cover the budget for these iterations until
we finish these features. After that, the maintenance cost may partially fall
on us, or there will be no budget at all. The fewer problems in the future,
the better.

3\. The current budget is for functionality A, and someone pays for it. And we
plan to implement another feature B that is similar, and that budget comes out
of our own pocket. Better make the solution generic so we can reuse it almost
for free.

4\. The list goes on...

Better fix those root problems first.

------
nate
I think most of the time this is just Parkinson's Law playing out: "work
expands so as to fill the time available for its completion"

Things get over-engineered because too much time is allowed for the task.

The forcing function that's helped me is simply giving me and my teams small
but meaningful time boxes to get a feature done. Sort of like "sprints", but
with more teeth. E.g. an important feature is going to get announced in a
newsletter every 2 weeks. Sprints too often don't seem to focus on the
shipping-to-customers part.

So we focus on shipping the smallest thing that could possibly work in an
arbitrary time box. You know users need X. So you make X or X' or X'' \- some
whittled-down version of X that'll relieve user pain. The time box works
wonders at keeping over-complexity from getting out of control.

------
talkingtab
Demo-driven software development, oddly enough.

* A demo is a thing you can show. The first demo is usually
    
    
       - the program will compile, run and print "hello world"
    

* A demo is a contract between stake holders

* Demos happen frequently. For a developer each day, for a team each week, etc.

Why does this help? A demo, actually showing something, is the only
enforceable token that commits both parties to the contract. In that way it is
almost a currency. No stakeholder, whether manager, sales, or developer, can
argue - either the demo was met, or it wasn't, or it was not clear.

This is much like sprints, test-driven, but a demo has the contract aspect.

Demo-driven reduces many of the causes/motives for over-engineering.

* Is the over-engineered bit in the contract? Why not?

* New function requires a new contract, so no more last minute changes. (Been there, done that.)

* Stakeholders gain experience designing demos.

* Demos are adaptive. They provide tactile feedback.

My 2 cents

------
lbriner
Historically, this made much more sense. RAM was incredibly scarce, as were
CPU cycles, and the hardware was often tied intrinsically to the software, so
a modification later down the line was a really big deal.

With modern higher-level languages and scalable cheap hardware, this
motivation should have gone away and we should be writing code that is
relatively easy to re-factor.

If I don't _know_ that we will need this thing in the future, it doesn't go
in. Simple. If 6 months down the line, we now need to add the new feature, I
like to think that my code is largely maintainable enough to refactor it to
add the feature.

The only exception I can think of is where something is designed to be
extensible, e.g. Instagram filters, where you might have 10 when you launch
but you know that you will have more in the future, so you write your code to
allow additional filters to be plugged in relatively easily.
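One common way to build that kind of plug-in point is a small registry. This sketch (with invented filter names and toy pixel math) shows how new filters get added without touching existing code:

```python
# Registry mapping filter names to functions; new filters register
# themselves, so the launch code path never changes.
FILTERS = {}

def register_filter(name):
    def wrap(fn):
        FILTERS[name] = fn
        return fn
    return wrap

@register_filter("grayscale")
def grayscale(pixel):
    # Average the channels (a crude grayscale, for illustration only).
    r, g, b = pixel
    v = (r + g + b) // 3
    return (v, v, v)

@register_filter("invert")
def invert(pixel):
    r, g, b = pixel
    return (255 - r, 255 - g, 255 - b)

def apply_filter(name, pixels):
    # Shipping with 10 filters or 50 exercises the same code path.
    return [FILTERS[name](p) for p in pixels]

print(apply_filter("invert", [(0, 0, 0)]))
```

Adding filter number 11 is a new decorated function, not a change to `apply_filter`.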

------
econcon
By having a time limit. When I was a new and over-excited young guy, I started
with the assumption that we'd be working on it for a really long time, but
that rarely proved true.

Truth is, most projects will be shelved before they are even complete.

You'll be taken off some projects because of financial or political BS.

So start with the assumption, there is limited time to deliver - now ask
yourself - how can I maximize the effectiveness of that time? How can I do
what truly matters without going into the rabbit hole of optimizations and
using the best thing possible everywhere.

And once you've done that, you can always come back and improve the thing if
the project is still around - but most likely it will not be. Either you'd
have switched the company, or the company would have switched you, or the
project would have switched the company.

------
hackerman123469
Break the problem you're trying to solve up into simple steps. Each step
should be nothing more than a single task.

Ex. if you have a problem to solve that goes like: Must create store with
products that are blue only.

Then you'd break it up like:

* Create store

* Filter products (Blue only)

* Create products in store

Then when you start coding you solve nothing more than what you put down as
each task.

Ex. you don't do anything more than what's required. You might think you need
to create a store and stores are businesses so you need to create some
business wrapper etc. but nope. You don't need that until a client comes one
day and requests it. Right now all you need is a store with blue products and
that is all you're going to solve.

Often when you over-engineer something then you never need the extra
abstractions you created.
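A sketch of solving exactly those three tasks and nothing more (the data shapes are invented for illustration); note there is no business wrapper in sight:

```python
# Task 1: create a store. A plain dict is enough for today's requirement.
def create_store(name):
    return {"name": name, "products": []}

# Task 2: filter products down to blue only.
def filter_blue(products):
    return [p for p in products if p["color"] == "blue"]

# Task 3: put the blue products in the store.
def stock_store(store, products):
    store["products"].extend(filter_blue(products))
    return store

store = stock_store(create_store("Blue Shop"),
                    [{"name": "mug", "color": "blue"},
                     {"name": "hat", "color": "red"}])
print([p["name"] for p in store["products"]])
```

If a client later asks for a business wrapper, that is the day to write one.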

~~~
kqr
Though this "implement one box of the flowchart at a time" approach can lead
to inflexible software due to insufficient modularization.

------
1penny42cents
We overengineer when we overestimate how hard it is to modify details. As
juniors, we wrongly learn that changing code is hard and must be avoided at
all costs. As mid-level and seniors, these refactors are much simpler, but the
painful memories stick.

Architectural boundaries are hard to change later. Drawing the dependency
graph and isolating nodes that can change independently is where the majority
of our effort should go (imo). Even still, simple is less risky than
complicated. Anytime there are more moving pieces than necessary, there is a
risk of an unexpected requirement blowing a hole in a design. So identifying
those dangling pieces and spending a lot of thought and energy in removing
them is where I've found it to be rewarding to "overengineer".

------
SonOfLilit
It's a balancing act.

You need to always ask yourself "What are the reasonable future use cases? How
much will it cost to add infrastructure now to enable each of them, and how
much will completing the feature in the future cost? How much will it cost to
refactor my code later to add it if I don't make preparations for it now?" and
only prepare now for things where not preparing would cost a lot more.

I'm finding the game of Go (Weiqi, Baduk) to be a great way to train in this
skill, because it's all about seeing potential moves and deciding not to play
them _yet_ , and judging if a move should be played sooner or later and how
much of a shape should be built now to enable it to be built later without
wasting resources on building all of it now.

------
lumost
I like to build a forward-looking document for a team's/group's software which
extends out to cover the company's goals + N months/years. The goal of the
document is to help define the core components, their roles, and integration
points, such that everyone can reason about forward-looking tradeoffs. Teams
can then use the document to see how their specific roadmap fits into the
broader picture. This also helps frame use cases in terms of current needs,
goals, and dreams. If you're building for a use case beyond the team's dream
features, then you're probably over-engineering the solution.

In practice, goals start to move after horizon/2, and a new document needs to
be created to capture where the business is doubling down and where it is
trimming.

------
hakunin
I wrote about this a few years ago.[0] The gist of avoiding premature over-
architecting is to stick with behavior that is as static/hardcoded as possible
while still meeting the requirements. Just make sure it’s well organized and
cleanly written in your language of choice. Then, over time, add
configurability to that behavior as the specs change. Architecture is easy to
change not when it correctly predicts future change (almost impossible), but
when it is straightforward enough to follow and reshape.

In my experience, keeping behavior static/hardcoded is the architectural
equivalent of avoiding premature optimization.

[0]: [https://max.engineer/cms-trap](https://max.engineer/cms-trap)
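A tiny illustration of that progression, with invented article data: version 1 hardcodes the behavior, and a knob is added only when a real spec demands it.

```python
# Version 1: the sort order is hardcoded - trivial to read and reshape.
def list_articles_v1(articles):
    return sorted(articles, key=lambda a: a["published"], reverse=True)

# Later, a real spec arrives ("editors also want alphabetical order"),
# so exactly that one option is added - not a generic query engine.
def list_articles_v2(articles, order="newest"):
    if order == "alphabetical":
        return sorted(articles, key=lambda a: a["title"])
    return sorted(articles, key=lambda a: a["published"], reverse=True)

articles = [{"title": "Beta", "published": 2021},
            {"title": "Alpha", "published": 2022}]
print([a["title"] for a in list_articles_v2(articles, "alphabetical")])
```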

------
a_imho
Imo it is an organizational thing. Big is good, powerful, and important. The
more subordinates, the more powerful a manager is. Allocated resources (e.g.
dev hours) must be spent on something, and more often than not must fall just
a little bit short in order to justify a bigger allocation next time.
Over-engineering is rewarded: it creates more tickets to work on and keeps
people busy. Then on the other end there are the resume-driven folks, who are
mutually interested in piling up complexity. Still, invoking Hanlon, it cannot
be ruled out that some people just do not know better. Sometimes I think KISS
is directly at odds with enterprise development.

------
phendrenad2
Intelligence signaling. Programmers love to show off to their colleagues how
smart they are, so they try to anticipate all possible upgrade paths. "What if
the user doesn't have an email address, betcha didn't think of that. That's
why you hired me, the tech master, with over 9000 confirmed code commits".
This leads to code that is overly generic, but still a big ball of mud. What
you want, instead, is for people to be humble and accept that future changes
won't be something anyone can anticipate now, and instead adhere to principles
that, in general, lead to modular codebases which can respond to changes
flexibly.

------
CyanLite4
Management overvalues complex systems and undervalues more elegant approaches.
Architects are often more than willing to oblige mostly to live out their
fantasies.

Thus, we have every company it seems these days running around trying to
implement microservices because “our architecture is like Netflix’s”. No, that
LOB CRUD application isn’t like Netflix and you don’t need polyglot storage,
Kubernetes, and microservices to capture input from a web form. However, one
of the managers between the architect and the CEO (see: Peter Principle) is
extremely impressed by this idea and can use it to advance their personal
career.

------
SergeAx
On a module or service scale I recommend using TDD [0]. It takes some time
before writing the actual code, so I can use that time to think about a clean
and effective architecture. After that, I have less time to write the actual
code, so most of my effort goes into just turning red tests green, not
over-engineering or thinking about phantom future use cases. After all, I end
up with code AND tests, which is always nice to have.

[0] [https://en.m.wikipedia.org/wiki/Test-
driven_development](https://en.m.wikipedia.org/wiki/Test-driven_development)

------
wiseleo
Make code easy to read and extend without breaking it. By now, many of us are
aware of the importance of writing tests. However, there is a problem in
making them readable.

Here's a Kevlin Henney lecture [1] that crystallizes it. It took me a long
time to find it, so you are welcome.

Once you start naming things like this, adding future use cases becomes far
less risky and thus you don't need to waste time on them.

[https://www.youtube.com/watch?v=tWn8RA_DEic](https://www.youtube.com/watch?v=tWn8RA_DEic)

------
BareNakedCoder
No one has added my favourite quote yet, so here goes: "Perfection is
achieved, not when there is nothing more to add, but when there is nothing
left to take away." - Antoine de Saint-Exupéry

------
mrbonner
My observation is that as I'm getting older as an SDE, the number of classes I
create in a project goes down! Like a lot of people in this thread point out:
most of your code has to be changed in a refactor anyway, no matter how clever
and abstract the design is, so what's the point in over-engineering and
thinking too far ahead of success? I would rather focus on my data structures
and their flows. I know that if I screw up the foundations of the data, I
could be in a lot more trouble compared to just bad code organization.

~~~
smabie
Yeah, the organization of data (and the data itself) is really the only thing
that matters. Code itself is cheap and disposable. Just write the minimum
number of lines of code required to get the job done, but spend time on the
data model. And because you're not anticipating any future use cases, your
code is often so short it can just be thrown away in the future.

A larger codebase that solves the same problem as a shorter codebase is
_almost_ always worse, has more bugs, and is harder to maintain. Code is the
enemy: the only good line of code is the line that doesn't exist.

------
aszen
There's a fine balance between over-engineering and under-engineering. Right
now I'm working on an old piece of software that has been designed without any
significant engineering process. While things remain easy to understand and
fix, there's a deep sense among us that simple, dumb, repetitive code has
brought duplication, inconsistency, and bugs that are easy to fix in one place
but impossible to fix everywhere. I think most software outside of Silicon
Valley may well be under-engineered.

------
exabrial
Simple: don't. Do the simplest possible thing that can work. Only code for
today's requirements. You can't predict the future. Creating extension points
without a past pattern of data on how the system is being extended is just
guesswork, not engineering.

Once you have established a pattern of how the system is regularly extended,
you can use that to make predictions about the future. Keeping your codebase
well-tested, small, and light will do far more to help you respond to change
than guessing.

------
iabacu
Sometimes the source of the problem is political: one trying to make a more
general framework to deprecate another team, or to avoid being deprecated by
generalizing in a slightly different direction.

How do people “defend” that in big Corp?

Bloat the technical roadmap and requirements with fantasy future features, so
they can say "well, that framework doesn't support X, and our framework will
support X". It doesn't matter whether X is useful in practice or not, since
the managers don't always have the power to make that call.

------
gwbas1c
1: Simple code is easier to refactor than complicated code. Try to keep your
design as simple as possible.

2: It's easier to refactor code with good automated tests, like unit tests,
because you can push a button and know that it works.

3: Make sure that your startup order is well-defined. It's easier to debug a
well-defined startup order than a startup order where everything is implicit.

4: Know the difference between a design pattern and a framework. Frameworks
don't replace knowing how to use a design pattern correctly.

~~~
spaetzleesser
"2: It's easier to refactor code with good automated tests, like unit tests,
because you can push a button and know that it works. "

One thing to keep in mind is that the tests should also be kept relatively
simple. I have seen codebases where there were so many, and often complex,
tests that modifying the actual code was easy but changing the tests would
have been prohibitively expensive.

------
alkonaut
Don't design your software for future features you might need. Design it to be
easy to change. This means it has to be decoupled and simple. The trick is to
make the software decoupled without making it complex. Adding layers and
interfaces and abstractions is the easy way to achieve loose coupling, but it
also adds complexity. Making software that's _simple_ while still being easy
to change is much harder than making some layer lasagna.

~~~
tartoran
> The trick is to make the software decoupled without making it complex

Any insights into how to do exactly this? Any rules of thumb or guidelines?
Also what is not considered complex to some adds a cognitive burden to others.

~~~
alkonaut
No, I can’t think of any simple guidelines. It’s a tingling sense you get
after 10-20 years of doing it that says “I should probably make this simpler
and direct” or “I should probably make this more general/layered”.

I think the sense is mostly developed not by successful designs but by
mistakes and the subsequent refactorings.

The only “simple” advice I have is that in FP the simple and decoupled seems
to happen without added work, while in OO it’s quite a cognitive overhead to
avoid tangling things up. So my only insight is perhaps “make everything
simple functions and shy away from state whenever possible”.
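A small illustration of that last point (an invented example): the stateless function is decoupled by construction, with no hidden ordering requirements.

```python
# Stateful/OO style: callers are coupled to mutable internal state, and
# calling apply() before set_rate() silently gives a different answer.
class Discounter:
    def __init__(self):
        self.rate = 0.0

    def set_rate(self, rate):
        self.rate = rate

    def apply(self, price):
        return price * (1 - self.rate)

# FP style: everything is explicit, so the function can be tested,
# reused, and composed with no setup at all.
def discount(price, rate):
    return price * (1 - rate)

print(discount(100.0, 0.1))
```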

~~~
tartoran
Yes for FP. It seems that when I join complex OOP projects/systems, it is much
harder to grok the codebase if the original creator isn't around to tell me
how it works and explain the intricacies. FP, on the other hand, makes that
part a whole lot easier to digest.

------
Alex3917
Best practice is to follow the example of the subway system and build short
spurs at the end of each line in the directions you might want to continue
expanding. That way you're not incurring any real additional cost upfront, but
you're saving a ton of money in the future if you decide to keep expanding.

Don't do work before you need it, but also don't give up optionality unless
you're getting commensurate benefits from doing so.

------
hamandcheese
There’s a lot of folks in here shouting “don’t develop until people ask for
it!”

But there is a certain joy that comes when people ask “but what about XYZ” and
you can respond “Yup, we thought of that!”

Granted, I work mostly on developer facing tools and services, which makes it
much easier to anticipate the needs of my customer since I too am a developer.

And even with that caveat it’s not always a slam dunk... but it certainly is
possible to anticipate requirements in a useful way.

------
joe8756438
It is different if you are working on a team or as an individual. On a team
it's important to build ways for people to work autonomously because once
that's done the team as a whole will have more throughput. For an individual
those same separations might be a drag on productivity. Regardless, careful
consideration for what is _needed_ at every step of the way is important.
Extreme Programming is worth a look.

------
thdxr
Although software development tends to come across as deterministic with rules
on what to do when, a lot of it is up to developing good judgement.

The more time you spend thinking about the business and how what you're
building will support it the better judgement you'll develop. You might not be
able to articulate or argue it but you'll have an instinct on when an
abstraction is going to be useful vs brittle.

------
atsaloli
See [https://www.codesimplicity.com/post/the-accuracy-of-future-predictions/](https://www.codesimplicity.com/post/the-accuracy-of-future-predictions/)
for some thoughts on the accuracy of future predictions. I've found Max
Kanat-Alexander's writings on software to be very workable; I've had many
successes applying them.

------
echlebek
Understand and implement hexagonal architecture.
[https://en.wikipedia.org/wiki/Hexagonal_architecture_(softwa...](https://en.wikipedia.org/wiki/Hexagonal_architecture_\(software\))

If you build systems with these principles in mind, you can create systems
that are extensible, without creating technical debt and YAGNIs.
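A minimal ports-and-adapters sketch (the names are invented for illustration): the core logic depends only on an abstract port, while storage details live in swappable adapters.

```python
from abc import ABC, abstractmethod

class UserRepository(ABC):
    """The port: an interface the core logic depends on."""
    @abstractmethod
    def save(self, user): ...
    @abstractmethod
    def get(self, user_id): ...

class InMemoryUserRepository(UserRepository):
    """An adapter: handy for tests and demos; a SQL adapter could
    implement the same port without touching the core."""
    def __init__(self):
        self._users = {}

    def save(self, user):
        self._users[user["id"]] = user

    def get(self, user_id):
        return self._users.get(user_id)

def register_user(repo: UserRepository, user_id, name):
    # Core application logic: it knows the port, not the database.
    user = {"id": user_id, "name": name}
    repo.save(user)
    return user

repo = InMemoryUserRepository()
register_user(repo, 1, "Ada")
print(repo.get(1)["name"])
```

Swapping the adapter later is extension, not refactoring, which is the extensibility-without-debt the comment is pointing at.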

------
sys_64738
Never design for future use cases. Design only for the use case in front of
you by developing a point solution. Accept the tech debt and move on.

------
sidchilling
Read this post a month ago about future welcome architecture and found it
quite practical (with some philosophical musings though), hope it helps -
[https://www.twowaystolearn.com/posts/future-welcome-
architec...](https://www.twowaystolearn.com/posts/future-welcome-
architecture/)

------
zhoujianfu
Just another vote that the right answer is do NOT add any code/plans/schema
for “future” functionality.

It’s actually not that hard to refactor things when you do need a new
feature... and who doesn’t love refactoring? It’s fun!

It’s also easier to refactor when your code is smaller and simpler because
it’s only been coded to do the things you actually wanted it to do now.

------
fsloth
Unless mandated, _don't_ design for _unknown_ future use cases. Keep the code
as simple and dumb as possible. Cater only to those use cases that are known.
Strong-type everything. Lots of unit tests.

If new requirements come in it's _anyway_ lots of typing. You might as well
spend the effort only when the full situation is known.

------
jl2718
Always complete each use case from scratch in the simplest and most efficient
non-abstract form. Then try to merge use cases into an abstract framework.

Accept the abstractions if:

\- the stack trace depth is never more than doubled

\- the total code size is decreased

\- the performance/memory hit is less than 10%

Thank you for bringing this up.

------
msigwart
As Kent Beck says [0]: "for each desired change, make the change easy
(warning: this may be hard), then make the easy change"

[0]
[https://twitter.com/KentBeck/status/250733358307500032](https://twitter.com/KentBeck/status/250733358307500032)
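
A tiny hypothetical Python illustration of the quote: to add a new discount rule, first refactor so the rule is easy to change, then the change itself is trivial:

```python
# Before: the change is hard because the rule is inlined.
def price_before(amount: float) -> float:
    return amount * 0.9  # 10% off, hard-coded

# Step 1: make the change easy -- extract the rule into a parameter.
def price(amount: float, discount: float = 0.10) -> float:
    return amount * (1 - discount)

# Step 2: make the easy change -- a new rule is now just a new argument.
def price_vip(amount: float) -> float:
    return price(amount, discount=0.25)
```

The refactor in step 1 is the part that "may be hard"; the new behavior in step 2 falls out of it.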

------
davidajackson
Big companies like to brag about operating in 'agile' teams like startups, and
in startups you can't afford to over-engineer: it kills lots of startups. So
pretend you're running a startup that just happens to have a solid amount of
cash? Not sure if there's an exact analogy for what you're doing...

------
maps7
I don't mean to be controversial, but I think planning like that shows a lack
of experience and, more than likely, a problem with how work gets done. The
developers might feel they'll have no chance to develop the software further
after the first release, which means they have to capture all scenarios
straight away.

------
pedro1976
If possible I try to apply the rule "it's not a problem, if it's not a
problem". It's much easier to write new code than to get rid of old code. It's
really helpful to have a strong feedback cycle that challenges the amount of
time you want to spend on something (a budget or time scope).

------
tmaly
Seeing the forest here: if you learn to ask better questions, you will get a
better idea of what the business side is trying to accomplish.

It is really hard to know what other people want or what they mean. If you can
really understand what someone wants, perhaps you can avoid writing 50% of the
system you initially imagined.

------
ible
Trying to design for many unknown futures is expensive.

Changing in the future costs something.

Designing for future changes up front makes sense if cost(future change) >
cost(future proofing)

SaaS? Do virtually no future-proofing.

IoT? Do some.

Space probe? Do lots.

Also, if you haven't built quite a few relatively similar systems, don't do
future proofing without talking to people who have.

------
pkrumins
You should never design for the future use case. You don't know what the
future will be. You should only design for the current use case you have today
and deploy. When the future comes, you return to the code that you wrote and
update it to fit the new needs.

------
ChicagoDave
It’s hard to flip your brain, but abstractions are bad. Copying code for
different business purposes is good. Simple patterns are vastly better than
complex frameworks, even if you think it improves unit or integration testing.

~~~
Wilem82
> abstractions are bad

They're as good as you make them. Abstractions aren't bad by themselves.

> Copying code for different business purposes is good.

Only when it makes sense, i.e. when you can't make a good abstraction.

~~~
ChicagoDave
Nope. Different domains shouldn’t share code or data. One domain may have a
customer record with address info and another domain may have a customer
record with past purchases.

Abstractions end up wiring things together that blur business rules.

------
throw10382
IMO if designing for future use cases is hard, you probably don't understand
the problem well enough to be designing for future use cases. Any design you
make will be wrong. Write what works and budget in the rewrite.

------
joaogfarias
1 - Define an executable specification of the next smallest thing you can do
to move towards your goal;

2 - Write the simplest code to fulfill this specification;

3 - Improve on what you wrote so it will express better exactly the
specification you have so far.
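
A hypothetical run of that loop in Python, where the "executable specification" is just an assertion and the code is the least you can write to satisfy it (the slugify example is invented for illustration):

```python
# Step 1: the executable specification -- the next smallest behavior we need.
def spec(slugify) -> None:
    assert slugify("Hello World") == "hello-world"

# Step 2: the simplest code that fulfills the specification.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# Step 3: improve expressiveness only as far as the spec so far demands --
# no Unicode handling, no punctuation stripping, because nothing asks for it.
spec(slugify)
```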

------
de_watcher
Sometimes designing for hypothetical cases makes software simpler. The general
case is often easier to understand.
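
A contrived Python sketch of that first point: the version written for the general case can read more simply than the one special-cased to today's inputs:

```python
# Special-cased: handles only the two report types that exist today.
def total_special(daily: list, weekly: list) -> float:
    t = 0.0
    for x in daily:
        t += x
    for x in weekly:
        t += x
    return t

# General: handles any number of report types, and is shorter and clearer.
def total_general(*reports: list) -> float:
    return sum(sum(r) for r in reports)
```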

Other thought: good engineers can guess future cases more accurately, that's
why they're good engineers.

------
knackundback
YAGNI
[https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it](https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it)

------
poletopole
I have been learning Haskell and Gluon. 100% worth the time investment. In
functional programming, the over-engineering costs one or two lines of code
instead of hundreds.

------
codenesium
I lean towards building it to the exact known specification and don't worry
about how it's going to change. If you write tests you can refactor anything.

------
JohnBooty
Best way, IMO?

A well-maintained, well-pruned test suite.

Writing tests forces you to clarify -- to yourself and to others -- precisely
what this piece of software is and is not going to handle.

------
fyfy18
I was brought in to help at an early-stage startup a few years ago. The
company was building an e-commerce platform and the product owner had this
idea of 'attributes' that could be attached to any kind of entity in the
system (e.g. product, category, order, customer). If they needed a new
attribute they would be able to simply set it up through the admin UI without
any developer intervention (because developers are expensive!).

When I joined, the attribute system had been built with a beautiful UI, and the
backend was mostly working for managing attributes, but that was pretty much
it. The first feature I was working on was showing products on the store, and
for this the idea of attributes made sense. If you are selling a product in
the "OLED TV" category you probably want a "Screen Size" attribute, and to be
able to use it to compare against different products in that category. Through
the platform we had maybe 500 product level attributes, with more being added
all the time, so having them hard-coded wouldn't have been manageable. That
was about the only place it worked well, though.

Sellers needed to be able to manage their stock through the system, so on the
warehouse entity there were attributes describing the number of products in
stock, lead time, how often they restock, etc. The attributes didn't really
have validations, but they had types that described which UI element should be
displayed when they were entered. However, all of the validation around that
was at the whim of the front-end, and in some cases it would send what you
would expect to be a numerical type as a string (and if you then tried to
change it, something else would break because it expected a string), so doing
any kind of calculation or logic on the attributes was basically impossible.
In the end I just added db-level fields to the stock entities, with
validations in the backend to make sure these were as expected. The backend
was a Rails app, so this took 10 minutes vs days trying to coerce the
attribute system into doing what I needed.
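
To illustrate the contrast (a hypothetical sketch in plain Python rather than Rails, with invented field names): stringly-typed attribute bags versus typed fields with one validated conversion boundary:

```python
from dataclasses import dataclass

# EAV-style: every value is a string, so calculations and logic are fragile.
stock_attrs = {"units_in_stock": "12", "lead_time_days": "3"}

# Typed fields with validation -- roughly what a db-level column plus a
# backend validation buys you.
@dataclass
class StockLevel:
    units_in_stock: int
    lead_time_days: int

    def __post_init__(self) -> None:
        if self.units_in_stock < 0 or self.lead_time_days < 0:
            raise ValueError("stock fields must be non-negative")

def parse_stock(attrs: dict) -> StockLevel:
    # One explicit conversion boundary instead of whim-of-the-frontend types.
    return StockLevel(int(attrs["units_in_stock"]),
                      int(attrs["lead_time_days"]))
```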

As it was a Rails app, we couldn't actually name the model Attribute, so we
had to give it another name, and whenever someone new joined (it was an
early-stage startup, so it had high turnover) we had a 30-minute discussion
explaining this. I never got an explanation of how the product owner expected
logic to be attached to these attributes without a developer doing any work,
but I'm sure they had an 'ingenious plan' for that too.

Needless to say, the startup burned through all its funding without even
launching, then managed to convince the investors to give them a little bit
more, launched a half-working product, and it turned out nobody wanted it.

------
dustingetz
Minimize LOC; that points true north. Everything else is fake and results in
absurdities like AbstractProxyFactorySingleton.

------
Aqueous
Are they doing something besides making the software composable and modular?
If that's all they're doing, the point of those patterns is to expand the
possibility space of what you can do down the line by making the software easy
to change. As long as the software follows principles of composability, it is
not over-engineered; it is just good design.

------
iceman_w
If you have too many engineers on a problem, they will overengineer stuff.
This is a management problem.

------
tydok
> How far should we go in making things generic ?

Never make things generic a priori.

------
slipwalker
my rule of thumb is:

requirements -> tests -> specific code

if I reach the same code more than once, refactor and bother to generalize...

~~~
alunchbox
Can you explain this a little more in your own words? I've tried to read
through TDD material and talked with co-workers but never actually seen this
in the wild.

How do you go about planning your tests / separation of concerns? As in, do
you only write tests for your service layer? I find I'd be wasting time
writing them at the controller/route level.

What about the DML schema?

Personally I always start at the database layer and work up because then I
know at least what data and models I'll be working with

~~~
d0m
Here's how I look at it: when writing code, you need to run it to try it.

Often people refresh the browser or re-run their CLI program until their
feature is finished.

But if you think about it, every "refresh to check if it works" is just a
manual test. TDD is just making that manual test automated.

1. Write the test (the one you'd have to run manually anyway)

2. Write code until the test passes

3. Repeat from 1 until done.

Code that isn't designed with a test-first mentality is _often_ really hard to
test and requires complicated tools or mocking out the whole world.

For the examples you've mentioned:

- I'd unit test the db service layer (i.e. the functions that fetch from the
db; make sure the schema is valid)

- I'd unit test the various API queries (i.e. filtering, pagination, auth)

- At the controller level, I'd just unit test the business logic and the
data-fetching part.

- Then I'd add a few E2E tests for the UI and user interactivity.

But if you think about it, any of these tests would have had to be run
manually anyway. I.e. you'd probably have queried your API with various
options and refreshed the page(s) a few times to make sure data was fetched
correctly.
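
A sketch of the service-layer case with hypothetical names, using only the standard library's unittest:

```python
import unittest

# A tiny fake data source standing in for the db.
FAKE_DB = [{"id": 1, "paid": True}, {"id": 2, "paid": False}]

def fetch_orders(paid: bool) -> list:
    # The kind of "service layer" function you'd otherwise verify
    # by refreshing the browser over and over.
    return [row for row in FAKE_DB if row["paid"] == paid]

class OrderServiceTest(unittest.TestCase):
    # Service-layer tests: data fetching and filtering.
    def test_filters_paid_orders(self):
        self.assertEqual([r["id"] for r in fetch_orders(paid=True)], [1])

    def test_filters_unpaid_orders(self):
        self.assertEqual([r["id"] for r in fetch_orders(paid=False)], [2])

# Run with: python -m unittest <this file>
```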

~~~
tmaly
I think I would also add: try to find a way to make your tests run fast. If
you have a huge system and it takes 3-4 minutes to run all the tests, that is
really slow. Your feedback loop gets limited.

------
pictur
Today I think it has become a disease: simplicity is disliked, and people see
complexity as more useful.

------
nybblesio
Preface:

Once upon a time, I worked for a company who rents movies on DVD via kiosks.
When I joined the team, pricing was _hard coded everywhere_ as a one (1),
because YAGNI. The code was not well factored, because iterations and
_velocity_. The UI was poorly constructed in WinForms, and the driver code for
the custom robotics was housed inside a black box with a Visual Basic 6
COM component fronting it. It was a TDD shop, and the tests had ossified the
code base to the extent that even simple changes were slow and painful.

As always happens, the business wanted _more_. Different price points! (OMG,
you mean it won't always be a one (1)!!?) New products (OMG, you mean it won't
always just be movies on DVD??!) And there were field operational challenges.
The folks who stocked and maintained the machines sometimes had to wait for
the hardware if it was performing certain kinds of maintenance tasks
(customers too). Ideally, the machine would be able to switch between tasks at
a hardware level "on the fly". Oh, and they wanted everything produced
_faster_.

I managed to transform this mess. Technically, I would say it was (mostly) a
success. Culturally and politically it was a nightmare. I suffered severe
burnout afterwards. The lesson I learned is that doing things "right" often
has an extremely high price to be paid, which is why it almost never happens.

On "over-engineering":

I find this trend fascinating, because I do not believe it to be an inherent
issue. Rather, what has happened, is that "engineering" has moved ever closer
to "the business", to the point of being embedded within it. What I mean by
"embedding" here is structurally and culturally. [Aa]gile was the spark that
started this madness.

Why does this matter? Engineering culture is distinct and there are lessons
learned within we ought not ignore. However, when a group of engineers is
subsumed into a business unit, their ability to operate _as engineers_ with an
_engineering culture_ becomes vastly more difficult.

The primary lesson I feel we're losing in this madness is the distinction
between _capability enablement_ and the _application of said abilities_.

Think about hardware engineering: I do not necessarily know all of the ways
you -- as the software engineer -- will _apply_ the _abilities_ I expose via
my hardware. Look at the amazing things people have discovered about the
Commodore 64 _years_ after the hardware ceased production. Now, as Bob Ross
would say, "Those are Happy Accidents." However, if I'm designing an IC, I
need to think in terms of the _abilities_ I expose as fundamental building
blocks for the next layer up. Some of those abilities may never be used, or
only rarely, but it would be short-sighted not to include them at all. I'm
going to miss things, that's a given. My goal is to cover enough of the
operational space of my component so it has a meaningful lifespan; not just
one week. (N.B. This in no way implies I believe hardware engineers _always_
produce good components. However, the _mindset_ in play is the important take
away.)

Obviously, the velocity of change of an IC is low because physics and
economics. This leads everyone to assume that _all_ software should be the
opposite, but that's a flawed understanding. What happens today is we take C#,
Java, Python, Ruby, etc. and start implementing business functionality _at
that level_. To stretch my above hardware analogy, this is like we're taking a
stock CPU/MCU off the shelf and writing the business functionality in assembly
-- each and every time. Wait! What happened to all that stuff you learned in
your CS undergrad!? Why not apply it?

The first thing to notice is that the "business requirements" are extremely
volatile. Therefore, there must be a part of the system _designed_ around the
nature of that change delta. That part of the system will be at the highest,
most abstract, level. Between, say the Java code, and that highest level, will
be the "enablement layers" in service of that high velocity layer.

Next, notice how a hardware vendor doesn't care what you've built on top of
their IC component? Your code, your problem. Those high-delta business
requirements should be _decoupled_ from software engineers. Give _the
business_ the tools they need to solve _their own_ problems. This is going to
be different for each business problem, but the pattern is always the same.
The outcome of this design is that the Java/C#/whatever code now has a much
lower change velocity and the requirements of it are _future_ enablement in
service of the tools and abstraction layer you've built _for the business_.
Now _they_ can have one week death march iterations _all they want_ : changing
colors, A/B testing, moving UI components around for no reason...whatever.
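
A toy sketch of that split (hypothetical, not the kiosk system's actual design): the code layer exposes a stable _ability_, and the high-velocity pricing decisions live in data the business can edit without a developer:

```python
# Enablement layer: stable, low-velocity code exposing a capability.
def rent(product: str, nights: int, price_table: dict) -> float:
    return price_table[product] * nights

# High-velocity layer: data the business edits without touching the code.
# Yesterday everything cost a one (1); today there are price points and
# new product types, and the rent() code never changed.
PRICES = {"dvd": 1.00, "bluray": 1.50, "game": 2.00}
```

The point isn't the three-line function; it's that the change delta (prices, products) was _designed into_ a layer the business owns, so the engineering layer keeps a low change velocity.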

There are real-life examples of this pattern: Unity, Unreal Engine Blueprints,
SAP, Salesforce. The point here isn't about the specifics of any one of these.
Yes, a system like Blueprints has limits, but it's still impressive. We can
argue that Unity is a crappy tool (poor implementation) but that doesn't
invalidate the pattern. SAP suffers from age but the pattern is solid. The
realization here is that the tool(s) for _your_ business can be tailored and
optimized for their specific use case.

_Final thoughts_

Never forget that the C3 project (where Extreme Programming was born)
was written in Smalltalk, with a Gemstone database (persistent Smalltalk). One
of the amazing traits of Smalltalk is that the _entire environment_ itself is
written in Smalltalk. Producing a system like I describe above, in Smalltalk,
is so trivial one _would not notice it_. Unfortunately, most business
applications are not written in environments nearly as flexible so the pattern
is obscured. I've held the opinion for a long time that XP "worked" because of
the skills of the individual team members _and_ the unique development
environment in use.

As I stated at the beginning, this path is fraught with heartache and dragons
for human reasons.

