
Software should be designed to last - kiyanwang
https://adlrocha.substack.com/p/adlrocha-software-should-be-designed
======
d_burfoot
I don't think anyone who is serious about software craftsmanship would dispute
the basic idea here ("you should be tasteful and minimalist when choosing
dependencies").

The problem is that the economics of software development don't really ever
require craftsmanship. There are two basic modes of development. 1 - You are a
startup, struggling to survive; you have to ship features as fast as possible,
with many fewer engineers than you really need to do things right. 2 - You are
a huge billion dollar corporation, probably a monopoly, with an enormous moat
protecting you from serious competition.

Neither situation prizes craftsmanship. The startup just needs to glue
together a hack to raise the next funding round or solve the immediate user
problem. The gigacorp has more money than it knows what to do with, so they
"solve" every software problem by hiring more engineers. There's actually not
many companies who exist in the middle ground, where it might be important to
produce high quality software. I believe this is also the reason we don't see
more uptake of high concept languages like Haskell and Lisp.

~~~
forgotmypw17
I'm writing something designed to last, and here is how I am doing it:

1) I am doing it by myself, without any corporate sponsors or financial pay.

2) I am writing it in such a way that it could have already been working for
25 years.

~~~
thundergolfer
I like that 2nd criterion. Seems like a smart way to avoid components that are
much more likely to suffer software decay.

Have you been programming for 25 years?

~~~
tluyben2
> Have you been programming for 25 years?

I have been for longer than that, and it is really scary to see that software
I wrote in the early 90s is still used. I thought, at the time, it would be
rewritten in something cooler every few years. But nope. It does make me pick
tech that can or will last; not many dependencies, open source, jvm, .net core
or C/C++, so it is possible to revisit it in 20 years and not be completely
lost (like I imagine it would be with js frameworks/libs/deps at the moment).

------
stupidcar
What this article really boils down to is the usual coder arrogance: Thinking
that most problems are inherently simple[1], thinking they are a better coder
than average, and thinking that the answer is always to do more yourself, so
that more of the codebase will benefit from your superior skills.

The truth is, there are no easy problems in programming. There are always a
thousand corner cases and unexpected complexities. And all those buggy,
unmaintained libraries you find in your language's package repository are
still probably better than anything you're going to roll yourself. Look into
the source code of such libraries and alongside the hacks and outdated code,
you'll find hundreds of subtle problems avoided, because the author spent a
long time and effort on the problem you've spent ten minutes thinking about.

The answer to quality problems in software engineering is not fewer libraries,
it's more and better libraries. In the same way, the high-quality civil
engineering the author admires is achieved not by having one person do
everything themselves, but by the successful collaboration of a large number
of skilled and specialised subcontractors.

Right now, we're still in the infancy of our industry. This, combined with
the continued evolution of hardware capabilities, has meant that software
engineering has never had the long-term, stable base on which to build
repeatable, reliable routes to success. That's fine. If we're in the same
position in a couple of centuries, it's a problem, but right now we should be
experimenting and failing.

Even today, the situation is not as bleak as is often portrayed in articles
like this. There are a huge number of robust and high-quality libraries and
frameworks providing extremely complex capabilities in an easily reusable way.
If a programmer from the mid-nineties were to be transported to the present, I
suspect they'd be amazed by what off-the-shelf libraries enable even a single
programmer to do. The undeniable quality problems that do affect much modern
software often have more to do with the hugely increased scope of ambition in
what we expect software to do for us.

[1] Outside a handful, like encryption, that are acknowledged as hard for the
purpose of being the exception that proves the rule.

~~~
tomlagier
> The answer to quality problems in software engineering is not fewer
> libraries, it's more and better libraries. In the same way, the high-
> quality civil engineering the author admires is achieved not by having one
> person do everything themselves, but by the successful collaboration of a
> large number of skilled and specialised subcontractors.

I think one of the fundamental problems with our industry is that there is no
funding mechanism for those subcontractors. The "base" that we build on is
either unpaid volunteers, or tailored to a specific company's needs and
goodwill.

If free software was not the norm, I think we'd see a lot of investment and
enterprise around producing high quality libraries.

~~~
anchpop
I recently wrote about this problem here [1]. I think "free" software should
still be the norm, but companies who use it should have to pay into a fund
that distributes the money to where it can be used most productively to make
new and more useful libraries. Otherwise there's no sustainable way for most
people to contribute to open-source and be paid for it.

[1]: [https://blog.andrepopovitch.com/complement-collution-paradox...](https://blog.andrepopovitch.com/complement-collution-paradox/)

------
GnarfGnarf
I would like to answer the question "Why is software so frequently
disappointing and flawed?"

As a developer, there is nothing I would like better than to turn out the
highest quality software, as long as I could be compensated for it.

Does anyone really think that the millions of lines of code that went into
Windows are worth the piddling $100+ the consumer pays for it?

Let's start with a basic premise: quality is worth money. I'm sure we agree
that a Hyundai (or whatever passes for a cheap car in your neighbourhood)
costs less than a Porsche or BMW because the Porsche is better designed and
better built. Better design means more experienced and brilliant engineers,
more talented designers; better built means more skilled and dedicated
assembly-line workers. All these people demand more money. Hence the Porsche
company commands a higher price for their Carreras, 911s, whatever.

The same should be true of software. If a company invests great care in
designing a better operating system or word processor, one that never
crashes, always has helpful help, and gives meaningful error messages, how much do
you think that would be worth? I'll give you a hint: the military does in fact
get top-quality software for their jets and rockets. They get software that
almost never fails, and does exactly what it is designed to do.

Do you know how much this software costs? $50 per line of code.

Translating into everyday terms, a bullet-proof operating system would cost
you, at a rough guess, $5,000 per copy.

Now I have no doubt some people would be happy to pay $5,000 for a stable OS.
However, there are many people who couldn't afford this amount.

So what would happen? In any other field (automobiles, stereos, TVs,
restaurant meals, housing) people who can't afford quality just put up with
less and shut up.

But in the software field... well, they just make a copy of someone else's
software, and enjoy the full benefit of top-of-the-line quality, without
paying for it. I wager even you couldn't resist obtaining a $5,000 OS for
free.

How long do you think a software company would last if their product cost
millions to make, and they only sold a few copies at $5,000? Why, they would go
broke, of course.

This is the crux of the software dilemma: except in a few specialized cases
(commercial or embedded software), the maximum price for software is the
monetary equivalent of the nuisance value of duplicating it.

In consumer software, this is in the range of $19-$29.

The digital world turns the economics of quality upside-down: in traditional
models where quality is supported by price, the market pays the price if it
wants the quality.

In the digital model, a perfect copy of the merchandise costs virtually
nothing, and undercuts the legitimate market, putting a cap on the maximum
that can be charged for a product.

There is a built-in limit to how much time, effort and expense a company can
invest into a mass-produced product. This cap is equivalent to the "nuisance
value" defined above. It is not reasonable for the consumer to expect
warranties and liabilities that go way beyond what the manufacturer receives
from sales of the product.

The music and movie industries are wrestling with the consequences of easy
digital duplication. They have taken a different route to protecting their
intellectual property.

I challenge anyone to come up with a business model where the software
developer that invests great expense in building a quality product, can obtain
full compensation from the market segment that values his quality.

Whose fault is it anyway? Simple: it's the consumer who copies and pirates
software that forces the price down and therefore the quality to remain low.
Any analysis that does not take this into account is simplistic.

It is naïve to think that most developers are not struggling to make ends meet
and stay in business (Microsoft notwithstanding).

~~~
justinclift
What are your thoughts on where OSS fits into this?

~~~
GnarfGnarf
[https://progenygenealogy.blogspot.com/2018/01/the-open-sourc...](https://progenygenealogy.blogspot.com/2018/01/the-open-source-mirage.html)

------
SigmundA
I want to build software that lasts, but I don't think you really can at this
point.

We haven't even fully figured out how we should represent a string or date or
number in software, let alone have an enduring language or ABI.

I feel like I am building everything on a foundation of sand and it will need
to be rebuilt every 5 years, 10 if you're lucky.

I do think it will change, but it will be a while, and by then it will probably
just be a few big players making software anyway, kinda like car companies.

~~~
cmroanirgo
I've got the same exe running in client installs since the late 90's. It's had
a tweak or two here and there, but largely unchanged. I last recompiled it
about 2yrs ago and had a fresh install of it about 6 months ago.

It's not the prettiest thing and could have certain upgrades in light of new
security practices (eg use new encryption algos & go thru an in-depth security
audit). But to the clients, it's unbeatable and there's nothing remotely close
to migrate to.

Unbelievably, it's a vb6 app that I never bothered porting to dotnet. Even
more unbelievably, it's a port of its predecessor, written in turbo pascal. As
long as I can continue to find the dev environment installer, it's good to go.

So. I think a large part of the problem is that people don't respect their dev
environment enough to keep an install disk (if one was available in the first
place).

A second large problem is that modern development relies heavily on 3rd party
library use, which means your software is reliant on more than one company for
your binary.

So. Find an environment that you can archive/keep, and ignore the "not
invented here" rule to a large extent.

~~~
user5994461
>>> I've got the same exe running in client installs since the late 90's.

That's on Windows, which has a stable API and ABI. The 90's is probably the
limit because older software is 16-bit and doesn't work on current 64-bit
Windows.

On Linux, all software breaks with each distro release because "core" libraries
are unstable. I've got executables and libraries compiled on RHEL 6, and most
of them fail to load on RHEL 7 with a something.so not found error.

~~~
laumars
You can bundle the .so files on Linux just like you would bundle .dll files on
Windows. Or statically compile if you don’t want dynamic libraries. Linux and
Windows aren’t really any different there, aside from Windows having more in
the way of novice-friendly tools to create redistributables.

It’s also worth noting that you’re comparing apples to oranges in that Visual
Basic 6 is a very different language to C++. VB6 has its own warts when it
comes to archiving, such as its dependence on OCX controls and how they
require registering for use (they can’t just exist in the file system like
DLL and SO libraries; OCXs require their UUIDs loaded into the Windows
Registry first).

To further my previous point, if you wanted to use another language on Linux,
maybe one that targets the OS ABIs directly (eg Go), then you might find it
would live longer without needing recompiling. Contrary to your statement
about user space libraries, Linux ABIs don’t break often. Or you could use an
interpreted language like Perl or Python. Granted, you are then introducing a
dependency (their runtime environment being available on the target machine),
but modern Perl 5 is still backwards compatible with earlier versions of Perl
5 released in the 90s (the same timescale as the VB6 example you’d given,
except Perl is still maintained whereas VB6 is not).

~~~
PaulDavisThe1st
_Linux_ ABIs might not break often.

But GTK, Qt, libwhatever? More often than one would like.

~~~
laumars
I’d discussed that problem in the first paragraph of my post.

~~~
user5994461
For reference, the C++ ABI is changing with every minor version of gcc, making
compiled libraries like glibc or Qt incompatible. Major distro releases
upgrade gcc so all packages are incompatible.

I agree on compiling statically to avoid DLL hell. However, it is fairly
difficult in practice because software rarely documents how to statically
build it, and it very often takes a dependency on some libraries on the
system. All it takes is one dynamic dependency to break (libstdc++ is not
stable, for example).

~~~
laumars
> _For reference, the C++ ABI is changing with every minor version of gcc,
> making compiled libraries like glibc or Qt incompatible. Major distro
> releases upgrade gcc so all packages are incompatible._

It’s actually not as dramatic as that, and you can still ship libc as a
dependency of your project like I described if you really had to. It’s vaguely
equivalent in that regard to a Docker container or chroot, except you’re not
sandboxing the application’s running directory.

This is something I’ve personally done many times on both Linux and some
UNIXes too (because I’ve had a binary but for various different reasons didn’t
have access to the source or build tools).

I’ve even run Linux ELFs on FreeBSD using a series of hacks, one of them being
the above.

Back before Docker and treating servers like cattle were a thing, us sysadmins
would often have some highly creative solutions running on our pet servers.

> _I agree on compiling statically to avoid DLL hell. However, it is fairly
> difficult in practice because software rarely documents how to statically
> build it, and it very often takes a dependency on some libraries on the
> system. All it takes is one dynamic dependency to break (libstdc++ is not
> stable, for example)._

There are a couple of commonly used flags but usually reading through the
Makefile or configure.sh would give the game away. It has been a while since
my build pipelines required me to build the world from source but I don’t
recall running into any issues I couldn’t resolve back when I did need to
commonly compile stuff from source.

------
hyko
The idea that removing external dependencies necessarily simplifies your code
and leads to a maintainable system is a fallacy. Use external dependencies
appropriately, follow SOLID principles and a clean architecture and you can
leverage the benefits of other people’s code without losing the
intelligibility of your system.

In the example the author gives, the real mistake is allowing the web
framework to define a system that you then try to customise to produce your
application. The battle is already lost because your business logic is now a
dependency of someone else’s (rotting) generic web app template. This may be
appropriate for you if this is a proof of concept or spike but for something
with a longer lifetime, any time you save will be paid back with interest when
the host framework diverges from your needs.

A JSON parser is not an equivalent class of dependency _if_ you keep it at the
periphery of your system. There’s not much point in writing a JSON parser
unless you have a particular requirement that cannot be satisfied by a third
party library. You should be able to swap it out for a different JSON parser
in a few hours.
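A minimal sketch of what keeping the parser at the periphery can look like in
Python (module and function names are illustrative, with the stdlib `json`
module standing in for any third-party parser): the rest of the system calls
only your own facade, so swapping the underlying library is a one-file change.

```python
# json_codec.py -- the only module allowed to import a JSON library.
# Everything else calls parse()/serialize(), never the library directly,
# so replacing the parser (json -> some other library) touches one file.
import json

def parse(text: str):
    """Decode a JSON document into plain Python data."""
    return json.loads(text)

def serialize(data) -> str:
    """Encode plain Python data as a JSON string (stable key order)."""
    return json.dumps(data, sort_keys=True)
```

Callers never see which parser is behind the facade, which is what makes the
"swap it out in a few hours" estimate plausible.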

80:20 seems like a ratio that was just pulled out of the air; I have no data
to hand either but for most user applications I would suspect the real ratio
is more like 99:1.

Edited to add: I’m still in agreement with a lot of what the author says here,
such as: evaluate your dependencies seriously, understand them, and be mindful
of their impact on things like binary size.

------
zokier
For truly long-lasting software, I feel all the code that makes up a system
should be treated equally, i.e. not divided into "own code" and "third-party
dependencies". Basically an extreme form of dependency vendoring.

But it is a very expensive way of doing things, and would not work well with
modern constantly-updating libraries. I do personally consider software
maintenance to be an antipattern: I'd rather have my software be correct and
eventually replaceable than constantly changing and "maintainable".

------
msla
In order to have software that lasts, you need environments that last.

The classic 1950s Big Iron software, now run swaddled in layer upon layer of
emulation on current mainframes, is software which lasts because people will
recreate its environment again and again, and ignore the people who are ill-
served by it because they don't fit into that environment neatly. Oh, your
name doesn't fit into an all-caps fixed-width EBCDIC form? Go fold, spindle,
and/or mutilate yourself, NEXT! (This happens to me. Over and over again.)

On the opposite extreme, unmaintained Internet software rots:

[https://utcc.utoronto.ca/~cks/space/blog/tech/InternetSoftwa...](https://utcc.utoronto.ca/~cks/space/blog/tech/InternetSoftwareDecay)

Or, to be more precise, software is built with assumptions underpinning it,
and those assumptions change out from under it more quickly on the Internet
than in other contexts. Software can go from being secure and completely above
reproach to being a major factor in DDoS amplification or spam runs because
the world changed around it. Software that lasts is like ships that last:
Replace the hull, the mast, the sails, the cabins... same ship, neh?

------
_def
It's very sad for me to see that software in consumer products isn't that
reliable or performant anymore, just because vendors can "push updates", which
seems to result in crappier software overall.

~~~
crazygringo
Why do you think consumer products at a similar price point used to be more
reliable or performant in the past? I don't think that's true at all. In the
90's, software was riddled with bugs: you saved files every five minutes
because your word processor regularly crashed, you had to restart your
computer multiple times a day when the OS hung, and starting Photoshop
easily took a full minute.

Also why do you think the ability to push updates results in crappier
software? In the 90's, if your copy of CorelDraw or Windows had a bug, you
lived with that bug for years. Today, if it's a common showstopper, it gets
fixed quickly.

To my eyes, everything's gotten _far_ better.

~~~
Nextgrid
I don't think he's talking about the 90s. But the late 2000s were definitely
better than now. Computers were powerful enough to do most things we do today,
and yet both OSes and applications seemed to be much better. Compare the
quality and stability of Windows 7 (or even Vista) and their macOS
counterparts to today's versions.

~~~
crazygringo
I guess I'm not seeing that either.

Compared to the late 2000's, today's computers have SSD's and retina displays
and play 4K movies. They're _vastly_ faster, with typography you can't see
pixels in.

And OS's and applications are basically the same. My macOS is no less stable
than it was a decade ago. The main difference is that my Mac is far more
_secure_ , so I trust third-party software much more.

I'm just not seeing how the quality of OS's _or_ applications has gone down. I
think what would be more accurate to say is that both OS's and applications
have added more _features_ , and that otherwise quality has remained basically
the same.

Sure, I still have finicky Bluetooth issues today. But I had finicky Bluetooth
issues 10 years ago too. It's certainly no _worse_. But now Bluetooth gives me
AirDrop too.

~~~
Nextgrid
> today's computers have SSD's and retina displays and play 4K movies

And yet, something as basic as a Slack client now requires gigabytes of RAM,
and microblogging on Twitter loads a monstrosity of a webapp that immediately
makes the fans spin up to display 240 characters, in the rare case it actually
loads without errors that require refreshing the page. Modern entry-level
laptops have the processing power of a decent machine from a decade ago, and
yet they appear just as slow at the same computing tasks we did 10 years
ago.

My 2017 Macbook lags and stutters when loading a YouTube video page. YouTube
used to load fine and not stutter in 2009 on a laptop with a third of the RAM
and CPU that I currently have, and yet the task at hand didn't change at all,
it still just needs to display a video player and some text.

Windows 10 broke _start menu search_. Come on, this problem was solved a
decade ago.

Every large website's login flow now involves dozens of redirects through
various domains which can break in all kinds of interesting ways leaving the
user stranded on a blank page in the middle of the flow. I know the reasons
behind them (OAuth, OpenID Connect, etc.), but as a user I don't care; this is
a major UX downgrade and the industry should've done better.

We've replaced offline-first applications with cloud-first. Nowadays even
something that should work fine offline will shit itself in all kinds of
unexpected ways if the network connection drops or a request fails.

~~~
blackrock
It’s not the software. It’s all the ads that load up in the background and
track you, even when you’re browsing incognito.

~~~
ric2b
Slack doesn't have ads.

------
zmmmmm
The kicker these days is that "standing still" is not enough. Software that
"lasts" has to constantly change, because externalities force it to -
primarily, security.

Whatever library, language or foundation you built on, you can guarantee
whatever version you used is going to stop getting security updates in a
couple of years. And then there will be a whole lot of baked in dependencies
you didn't know about - URLs (like XML schema URIs that were never meant to be
resolved but lots of libraries do), certificates, underlying system libraries
etc.

So designing software "to last" now is designing software to be constantly
updated in a reliable and systematic way. And the interesting thing about that
is from what I can observe, the only way to achieve that currently IS through
simplicity - as few dependencies as possible, those that you do have very well
understood and robust, etc etc. So it sort of comes back to the author's
argument in the end, though maybe through a different route.

~~~
fulafel
It's possible to engineer for this. The mental model: imagine you are
equipping a time travel mission decades into the future, to integrate with the
future internet, and to return home with acquired results.

A TLS client is not possible to future proof like this of course, but you
could have a roster of approaches. For example, try getting your hands on the
latest curl / wget builds (try each with a few different approaches), or the
latest JVM/Python/Powershell & use those TLS APIs.

A fun game would be to try this with tools available 20 years ago, and then
redo it to make a sw time capsule from present day to 20 years into the
future...

~~~
mjevans
A better approach is to have the "TLS" library accept either an already
connected socket (and host-name to validate) OR return something that behaves
like one after dialing and validating an address. That way when the underlying
library is re-linked to something modern it'll still have the same API / ABI
but the underlying security implementation will belong with the updated
library.
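A rough Python sketch of the interface shape described above (the function
name and parameters are hypothetical): callers depend on one tiny entry point
that either wraps an already-connected socket or dials itself, so the TLS
implementation behind it can be swapped without changing the API.

```python
import socket
import ssl

def secure(sock=None, host=None, port=443):
    """Return an encrypted channel over TCP.

    Either wrap an already-connected socket (pass sock, plus host for
    certificate validation) or dial host:port ourselves. Callers never
    touch the TLS library directly, so relinking against a newer
    implementation keeps the same API while updating the security.
    """
    ctx = ssl.create_default_context()  # current best-practice defaults
    if sock is None:
        sock = socket.create_connection((host, port))
    return ctx.wrap_socket(sock, server_hostname=host)
```

The point is the seam, not this particular implementation: only the body of
`secure()` needs to change when the underlying security machinery does.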

~~~
fulafel
That could indeed be an idea, if you can assume the SW can be rebuilt/linked
against API-compatible libs and you can rely on the API existing far into the
future. The automatic CLI-tool-scouring option would let you hedge your
bets across several implementations because of the looser coupling.

------
marcinzm
As a counterpoint to the author:

* Half baked in-house implementations of things are often filled with bugs and broken edge cases. Especially when it comes to security and concurrency.

* In a larger code base, you won't understand everything whether it's in-house code or third party code. The in-house code was written by someone who quit 5 years ago and is now a monk in Tibet.

------
humanfromearth
Software should be built to accomplish its purpose. It's not a special
snowflake, it's a tool to do some specific job. To think otherwise is to set
yourself and your company/team up for disaster.

Kernels, compilers, language runtimes, databases? Sure, build them to last.
Web pages that will not be used in 2 weeks? Don't waste your time with "build
to last".

The problem is that some of us think that our software should be built to last,
but in reality it's just some mediocre thing that needs to get in front of
the clients as fast as possible, and some bugs are "OK" to live with.

~~~
leonidasv
That would be fair if all those "pages that will not be used in 2 weeks"
actually ended up not being used after just 2 weeks. Instead, they will
probably continue to be used for 2 months or 2 years.

------
ocdtrekkie
I try to use no dependencies beyond my database connection, the one area I
feel unprepared to handle myself. For anything else, I try to code minimally
viable functionality that meets my needs. As a bonus, I have learned what all
these things do and how they work while fiddling with implementations, and if
I want to customize their functionality in any way, it's generally pretty
easy.

I'd take a pasted tutorial over a robust dependency almost every time; the
former is more reliable. Any dependency I do take on is something whose
changes and project status I have to watch like a hawk.

~~~
PetahNZ
That doesn't work in most cases though. You are not going to reimplement JPEG
encoding, MPEG, etc.

~~~
ocdtrekkie
If the language/framework you are using handles it already, you're probably
golden: Your language/framework is functionally the one dependency you always
have, and features implemented in the platform tend to be fairly stable. Pick
a platform that supports the most difficult things core to your application.

I'm most comfortable in .NET or PHP, both of which have extremely large, long-
lived built-in function sets that are generally non-breaking over a span of
many years. Meanwhile, I generally don't add anything to them via
NuGet/Composer/etc. They both operate almost opposite to the JavaScript world,
where everything is built by random third parties. If a need is popular
enough, it tends to be added to the base package.
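To illustrate the parent's point about leaning on a platform's built-in
function set, here is a small Python sketch (stdlib only, no package manager
involved) of a task that often pulls in a third-party dependency:

```python
# Stdlib-only URL handling -- a need popular enough that it was
# "added to the base package" long ago, no third-party library required.
from urllib.parse import urlparse, parse_qs

url = "https://example.com/search?q=durable+software&page=2"
parts = urlparse(url)            # split scheme, host, path, query
query = parse_qs(parts.query)    # decode the query string into a dict

print(parts.netloc)      # example.com
print(query["page"][0])  # 2
```

The URL and field names here are illustrative; the point is that the one
dependency you always have, the language runtime, already covers the need.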

------
rado
Pure HTML with progressive enhancement has worked fine for 30 years and will
continue so for many more. But a good, lasting product is incompatible with
the current "take the money and run" business strategy and analytics for
management. If it works well for so many people, what can we do?

~~~
aboringusername
That's all well and good, but go and visit some random websites and see how
many of them depend on sites like unpkg or jsdelivr.

Every time you load a resource from another domain you're introducing a
_huge_ risk by letting them be judge, jury and executioner.

Domain gets hacked and injects malicious scripts or collects user data? Domain
fails entirely (no-renew) or in 10 years just stops responding?

A lot of web pages today have built in self-destruct features purely based on
the JS/resources they're loading.

For 0 benefit, as well, since the amount of bandwidth saved is so marginal.

~~~
rado
Yes, security is the most sensible reason we use mega frameworks, because FB
and Google will patch any issues instantly, for free and forever.

------
cable2600
Software is not always designed to last. Take video games that use servers:
the video game company can take down the servers and the video game no longer
works. You'd have to set up your own server to emulate what the video game
company's server did.

All those mobile games that require a server, once the server goes down the
game no longer works.

In the old days you set up Doom as a server on your PC networked to other PCs
for multiplayer. Those can last and be remade for new platforms.

~~~
abj
LAN-hosted games are the golden era. Online game servers are a really
interesting example of brittle software.

When the server is shut down, the game breaks and the software _is_ unusable.

This often happens because of the misaligned incentives of the business to not
release the server code. This disappoints both the original developers and the
consumers.

I'd argue that this is not only a software problem, but mostly a problem of
copyright distorting the incentives of publishers.

One could imagine a client-hosted game, or an alternative architecture that
removes the central dependency on the company, as a way to build software
that lasts longer.

In the online game I'm developing - I make sure that every client can also run
as a game server. This way if my servers are shut down players can play
without depending on me.
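A toy Python sketch of that architecture (all names illustrative, a loopback
echo standing in for real game logic): the same program can either host a
session or join one, so the publisher's server is never a hard dependency.

```python
import socket
import threading

def serve_one(srv):
    """Act as the session host: accept one peer and acknowledge its move."""
    conn, _ = srv.accept()
    move = conn.recv(1024)
    conn.sendall(b"ack:" + move)  # the host is the authority on game state
    conn.close()

# Same binary, either role: here we host a session in a background thread,
# then join our own session -- exactly what another player could do if the
# publisher's servers were shut down.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

host = threading.Thread(target=serve_one, args=(srv,))
host.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"move:e2e4")
    reply = client.recv(1024)

host.join()
srv.close()
print(reply.decode())  # ack:move:e2e4
```

The design choice is the one the parent describes: hosting lives in the
client, so removing the vendor removes nothing the players need.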

My decision to remove myself as a dependency directly goes against the
incentives I have as a publisher. I slightly narrow the possible monetization
options for the game.

Mostly I'm trying to explain why developers/publishers decide to make brittle
software, even if I don't agree personally.

------
AYBABTME
More often than not, the code I have to write needs to be written fast, and I
(mostly) won't have time to look at it again once deployed. I've mostly worked
at fast growing companies, so that's been the case everywhere.

What I've learned is that making stuff as simple and stable/low maintenance as
possible is the only way to be sane. I can't afford to deploy code that then
requires me to hold its hand with a team of 3-4 engineers for the upcoming
years. It needs to run itself and be forgettable. And it needs to be written
fast.

This has become much harder in recent years with the switch to things like
Kubernetes. Kubernetes moves so fast and needs constant nurturing, even for
casual users. Running old versions of it is painful and dangerous, so you're
forced to update. And the ecosystem is still so early in its life that all the
paradigms change and flip on their head every year. Odds are that something
you deploy in it today, will need to be looked at in 18 months. And the whole
thing will need a team dedicated to keeping it healthy.

Anyhow, that's my angle on the author's rant. Things need to last, IMO, mostly
because I don't have time to go back to them later.

~~~
vbezhenar
I'm taking the completely opposite approach. I try to define strict
requirements for a given task and then code against those requirements. If
any small detail changes, my code will break. So I try hard to make it break
as soon as possible (using asserts and similar patterns), with enough
information to understand the reason for the breakage.
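
The fail-fast style described above might look like this. A generic sketch: the order fields and the 50% cap are invented examples, not anyone's real business rules.

```python
def apply_discount(order: dict) -> float:
    # Encode the agreed requirements directly. If any upstream detail
    # changes, crash immediately with enough context to trace the cause.
    assert order["currency"] == "EUR", f"unexpected currency: {order['currency']!r}"
    assert 0 <= order["discount_pct"] <= 50, f"discount outside agreed range: {order}"
    return order["total"] * (1 - order["discount_pct"] / 100)
```

When the input format drifts, the assertion message plus the stack trace point straight at the assumption that no longer holds, which is exactly the behaviour the comment is after.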

So my code definitely requires maintenance in a changing world. But more than
once those breaks actually identified a bug somewhere else, and fixing that
bug was essential. If I tried to silently fix wrong data, those kinds of bugs
could be missed. Sometimes I need to fix the code to adapt to a changed format
or something like that. But that's not a big deal, because the change is
simple and something like an exception stack trace instantly points to the
code that needs fixing.

------
boznz
If you want your software to last, remember that you also have to have a
working and up-to-date development environment, i.e. your code may still work
fine, but if you cannot compile it for the correct target it is useless.

I found this out the hard way when I had to recompile some FPGA firmware for a
$100K piece of equipment which used a programmer not supported by Windows, a
chip and code not supported by the current IDE, and a binary blob which we did
not have the source code for and which would not link. Luckily we winged it by
replacing the FPGA with a hardware-compatible replacement... We now preserve
the whole development environment in a VM and save it with the source code.

------
magwas
If you write your code properly, you will have tests which break on every
library incompatibility, and a tiny wrapper around your libraries. (If you do
TDD properly, you will have it.) And that's all you need to keep your code up
to date. When upgrading a library, you will of course build and test your code
with it, and that will show you any problems, which you can fix in the library
interface layer before they have any chance to reach production.
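
The "tiny wrapper" idea can be sketched as follows. Here the stdlib `json` module stands in for any third-party dependency; the names `parse_config` and the config shape are invented for illustration.

```python
import json

# The rest of the application imports parse_config, never json itself,
# so an upgrade or wholesale replacement of the library touches only
# this file and its test.
def parse_config(text: str) -> dict:
    data = json.loads(text)
    # Pin down exactly what the rest of the code relies on, so behavioural
    # drift in the library fails loudly here rather than deep in call sites.
    if not isinstance(data, dict):
        raise TypeError("config must be a JSON object")
    return data

def test_parse_config():
    # The test that breaks on any library incompatibility.
    assert parse_config('{"retries": 3}') == {"retries": 3}
```

Swapping the underlying parser then means changing one import and re-running one test, not auditing every call site.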

~~~
Silhouette
_If you write your code properly, you will have tests which break on every
library incompatibility, and a tiny wrapper around your libraries. (If you do
TDD properly, you will have it.)_

In general, I'm not sure it's possible for all of these things to be true.

Many libraries are useful because of their side effects. Depending on what
those side effects are, it may simply not be possible to integration test your
entire system in a way that would satisfy all of the above claims.

The alternatives tend to involve replacing the parts of the code that cause
real side effects with some sort of simulation. However, you're then no longer
fully testing the real code that will be doing the job in production. If
anything about your simulation is inaccurate, your tests may just give you a
false sense of confidence.

All of this assumes you can sensibly write unit tests for whatever the library
does anyway. This also is far from guaranteed. For example, what if the
purpose of the library is to perform some complicated calculation, and you do
not know in advance how to construct a reasonably representative set of inputs
to use in a test suite or what the corresponding correct outputs would be?

------
drol3
I think we can all agree that software should be designed to last :)

My experience tells me that when this fails to happen it's usually not
explicitly due to too many dependencies. Rather, it is because the developers
have a poor or partial understanding of the problem domain.

External libraries help with this problem a bit because they allow developers
to offload tasks onto other, more experienced developers. They can (and
occasionally will) misuse those solutions, but that is not the dependencies'
fault. On the contrary, skilled developers will produce quality solutions with
or without dependencies.

The signal seen (poor software has many dependencies) does not mean that
software with dependencies is poor. It means instead that developers who have
a poor domain understanding but good business skills are disproportionately
successful compared to their peers that have good domain understanding but
poor business skills.

Make of that what you will :)

------
AshamedCaptain
Isn't the inevitable conclusion "you should design your interfaces so that
they last" (and/or choose existing interfaces which look likely to last)
rather than "you should avoid dependencies"?

------
kristopolous
It's all an illusion. Software kinda isn't lasting; some things merely have
competence behind them.

Look at the GNU coreutils, things you probably think shouldn't have changed in
a while:
[https://github.com/coreutils/coreutils/tree/master/src](https://github.com/coreutils/coreutils/tree/master/src)

There are actually frequent edits. The legendary disk destroyer (dd) was just
updated last month. The difference is the competence.

------
TravelPiglet
Building things to last has a really high cost and is usually not worth it.
The vast majority of the buildings and infrastructure humans have constructed
have been torn down, completely rebuilt, or abandoned. What remains are
usually extremely expensive monuments to religion, culture, or governmental
constructions.

Most software is not worth preserving, the same way most buildings are not
worth preserving (except for perhaps the aesthetically pleasing facade).

------
leothekim
"...bridges and civil engineering projects are designed to last, why can we
also design software for it to last?"

Some software has lasted for many decades. This isn't necessarily a good
thing. [1]

[1] [https://arstechnica.com/tech-policy/2020/04/ibm-scrambles-
to...](https://arstechnica.com/tech-policy/2020/04/ibm-scrambles-to-find-or-
train-more-cobol-programmers-to-help-states/)

~~~
ryukafalz
On the other hand, there’s lots of software that’s been around for decades
that still works quite well today. Most GNU software for example, and
particularly Emacs.

------
jakuboboza
90-day certs that need renewal via an external service API that can change
say no.

Software in hardware that is designed to do a certain job should indeed last.
Telecom towers, for example, should work fine "forever".

But software as in APIs and services constantly has to evolve and be
expanded. Yeah, that is a harder sell.

We live in a crypto world where each advancement in computing, or new ways of
solving problems, could render algorithms "bad", which means we have to
update and adapt.

------
thdrdt
Software that lasts is software that is a joy to use.

Imho the tech, and how good you are at programming, are not very relevant
when the user doesn't like to use your software.

I have written a small CMS for a company; it was replaced twice, but every
time they ditched the replacement to return to my old simple version.

This is just one example but I think that should be in people's minds when
they create software that lasts: people must enjoy using it.

This can even apply to a command line program.

------
gentleman11
If somebody is a solo developer working on a huge project, it would be lovely
if the software lasted... but the big fear is that it won't get users. Can't
really spare the time to worry about 10 years from now when you are thinking
about how you have to get a regular job if you can't get X users in the next 6
months

------
geofft
Civil engineers and spacecraft designers build their systems to last because
they don't have a choice. You can't deploy fixes to a production bridge ten
times a day.

There might be good reasons to build software to last, but "These other
engineering fields build their things to last" isn't one of them. I have no
doubt that if you _could_ reliably send fixes to bridges and spaceships,
people would do it - and that many lives would have been saved.

My personal opinion is you should design the larger ecosystem around your
software to last and be robust to the replacement of any individual part.
Building on Go's standard library HTTP server would have seemed needlessly
cutting-edge about five years ago. Building on Apache and mod_cgi would be
considered dated today (even though it would work). The stack you should be
using _will_ change. Build your system so that, when change comes, you're not
afraid to write new software.

In practice, this means making good designs to avoid hairballs of complexity,
having deployment pipelines that let you roll out canary versions (or better
yet, send a copy of production traffic against a sandbox environment), keeping
track of who's using APIs internally so you can get them to change, having
pre-push integration testing per the Not Rocket Science Rule so you can be
confident about changes, etc. You should think about your choice of web
framework or JSON library up front, sure, but more importantly, if a
sufficiently better web framework or JSON library comes along tomorrow, you
should be able to switch. As a principle - any time you see something that you
think you'd be afraid to change in the future, don't ship it if you haven't
shipped it yet and figure out how to build the right abstraction around it if
you can.

------
Ericson2314
Reusable abstractions are the foundation of what makes programming
productive. This person does give libp2p its due respect, which is good, but
clearly they've never seen a small abstraction that is still immensely
valuable.

------
mehh
The software (as in the actual code for an implementation), should be
ephemeral in my opinion.

The data model, protocols and knowledge base (docs, requirements etc) should
be designed to last and be extensible.

------
jonwalch
Constructive criticism if the author is in here, I find multiple bold
sentences per paragraph to be taxing on the eyes.

------
dqpb
> _If all three are in agreement, they carry out the operation. If even a
> single command is in disagreement, the controller carries out the command
> from the processor which had previously been sending the correct commands._

By this definition, wouldn't all three processors be the last ones to have
sent the correct commands?

~~~
MintelIE
Actually the way it usually works is a voting system, where if one component
disagrees, and two agree, the majority vote is upheld.
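
The two-out-of-three scheme described above reduces to a majority vote. A toy sketch (the command strings are invented; the quoted fallback rule for a three-way disagreement is not modelled here):

```python
from collections import Counter

def vote(commands):
    """Return the command a majority of the three processors agree on."""
    winner, count = Counter(commands).most_common(1)[0]
    if count < 2:
        # Real systems apply a tiebreak here, e.g. trusting the processor
        # with the best track record, as in the quoted passage.
        raise RuntimeError("no majority: all three processors disagree")
    return winner
```

With this framing, dqpb's question dissolves for the common case: as long as two processors agree, the vote decides, and the "previously correct" rule is only a fallback.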

------
alliao
for some reason, when I saw the headline I went over to check up on grc.com.
that guy's obsessive coding of everything in assembly gives me comfort

------
andrewstuart
>> Software should be designed to last

Not necessarily true.

If the requirements of the project are to be secure or fast or easy to
maintain or built quickly or have beautiful user experience or be maintainable
or be designed to be robust & failure resistant, then that is what it should
be.

And if the requirements of the project are "the software must be designed to
last", then the software should be designed to be designed to last.

But if it's not a requirement, then no, it does not matter if it lasts.

The point being that blanket statements about what software should be are
missing important context about the purpose of the software and the
constraints under which it was developed and the specified requirements.

~~~
mcavoybn
Couldn't you have just said "Not all software must be designed to last"?

Also, the author said software _should_ be designed to last not that it
_must_.

Lastly, you didn't address the core argument of the article which is that you
should carefully select and manage dependencies. Reading your comment, I
wonder if you even read the article...

~~~
andrewstuart
True. Somewhat long-winded; edited, thanks.

------
MintelIE
This is why I like long-established technologies. You can build most anything
with typical POSIX utilities and time-worn libraries. And they will be there
for you in 25 years. Who knows where Rust and Go and Node will be in a quarter
century?

~~~
_pdp_
You can apply the Lindy power law :)

~~~
MintelIE
Looked it up finally, I agree. It's why we have steering wheels instead of
joysticks (except for the people who have a leg-related disability, they've
been driving with joysticks for like 100 years).

------
LoSboccacc
no, software should be designed to work for as long as necessary, because
we're responsible professionals and don't want to squander other people's
money on infinite perfection

the whole bit is nonsensical, including the parallels with civil engineering:
bridges are built with tolerances, maintenance schedules, and a disposal plan
for when maintenance costs become higher than replacing the bridge, because
nothing in this world actually lasts.

heck, our internet is built around operating systems that didn't exist 25
years ago. imagine investing capital and sinking opportunity costs into
delivering the perfect VAX typing program...

mind, this is not to say that all software should be quick and dirty. but the
conscientious engineer knows whether they're building infrastructure to last
decades or a temporary bypass to support some time-limited traffic spike.

the important bit is to ask the stakeholder what's more appropriate.

I also want to see people build their own json parsers at an acceptable level
of performance, possibly as streaming parsers instead of just gobbling
everything into memory. imagine wasting a month on building that and then
having to defend your choice every time a bug or a malformed input causes
silent data corruption somewhere in the backend.
[http://seriot.ch/parsing_json.php](http://seriot.ch/parsing_json.php)
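
for scale: even a shortcut version of the streaming shape asked about above, built on the stdlib's incremental `raw_decode` rather than hand-rolled, takes care to get right. a hedged sketch (the function name and chunk protocol are invented):

```python
import json

def iter_json_values(chunks):
    """Yield complete JSON values from an iterable of text chunks,
    without buffering the whole stream in memory."""
    decoder = json.JSONDecoder()
    buf = ""
    for chunk in chunks:
        buf += chunk
        while True:
            buf = buf.lstrip()
            if not buf:
                break
            try:
                value, end = decoder.raw_decode(buf)
            except json.JSONDecodeError:
                break  # incomplete value; wait for the next chunk
            yield value
            buf = buf[end:]
```

and even this has the classic streaming pitfall the linked article catalogues: a top-level number split across chunks ("12" then "3") parses as two values, so a real streaming parser needs an explicit end-of-value signal.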

