The problem is that the economics of software development don't really ever require craftsmanship. There are two basic modes of development. 1 - You are a startup, struggling to survive; you have to ship features as fast as possible, with many fewer engineers than you really need to do things right. 2 - You are a huge billion dollar corporation, probably a monopoly, with an enormous moat protecting you from serious competition.
Neither situation prizes craftsmanship. The startup just needs to glue together a hack to raise the next funding round or solve the immediate user problem. The gigacorp has more money than it knows what to do with, so they "solve" every software problem by hiring more engineers. There actually aren't many companies in the middle ground, where it might be important to produce high-quality software. I believe this is also the reason we don't see more uptake of high-concept languages like Haskell and Lisp.
No inspection. They don't even see the plans, just an exterior render before they greenlight. Imagine how dysfunctional architecture and construction would be in that world.
Software is uniquely intangible. Collectively we spend billions developing software every year, but often the decision maker never sees the codebase, only the UI.
And you're right, the problem is multiplied at companies that consider software a cost.
This is why projects like Jepsen are so powerful. Or Acid3 back in the day. They let a non-technical audience see the underlying implementation quality of different software efforts.
We need a lot more where those two came from. Basic example. Why can't I go to any app in the app store and see the average hours between crashes?
Software craftsmanship will suck until we make it visible.
...encourage it to "grow up" and mature.
I really like the Postgres example because it was initially just a bunch of academics who refused to implement new SQL features until they could do it The Right Way. See how long it took them to get upsert.
I really hope that GNU Guix takes off in this way - just gradually doing things The Right Way until the product is just the default institution.
The web killed serious desktop development of these outside of Microsoft. Google's services are good enough for everyone else, and run pretty much everywhere that has a reasonably modern web browser. As a desktop Linux user for the past ~2 decades, I'm mostly happy about this.
Building things well so they may serve you for years and years is a form of investment in the long term.
There is a business philosophy credited to Japanese business but evident elsewhere too that thinking, planning and strategy should be as long term as possible, to a ridiculous extent.
I read the biography of Akio Morita, the founder of Sony and while I'm sure it was somewhat biased to laud his achievements, even from the early post-war era manufacturing recording equipment for high end use, in what can be called "start-up mode" the emphasis was on building things of very high quality that would last.
Truly this long term planning mindset is a manifestation of DRY, don't repeat yourself, and is crucial to the survival of a nascent business.
The whole disposable-software, "we'll revisit once we ship" attitude is sinking you before you ever get off the ground.
With long term investment comes a certain kind of risk appetite. It's the same reason banks haven't jumped into crypto, or why so many enterprises continue using Java.
But I'm glad you used Nintendo as an example, since it is also a software company. How about SNK or Capcom or Konami? You know what it's like getting blacklisted for jumping ship from one of these companies?
Like in North America you could work for Microsoft and then Google but that sort of thing doesn't always fly.
Here is one article: https://arstechnica.com/information-technology/2017/06/konam... but I have heard similar stories across industries
1) I am doing it by myself, without any corporate sponsors or financial pay.
2) I am writing it in such a way that it could have already been working for 25 years.
In 2010, Microsoft published the PST file format, and the latest documentation was updated this year
That code was started in 1992, and first shipped in 1995 - 25 years in the wild!
Have you been programming for 25 years?
I have been programming longer than that, and it is really scary to see that software I wrote in the early 90s is still used. I thought, at the time, it would be rewritten in something cooler every few years. But nope. It does make me pick tech that can or will last; not many dependencies, open source, jvm, .net core or C/C++, so it is possible to revisit it in 20 years and not be completely lost (like I imagine it would be with js frameworks/libs/deps at the moment).
Perl as the primary language
HTML/CSS for front-end
txt as the base data format
SQLite for cache and indexing
PHP for optional server connectors
PGP for the identity system
SQLite is a great example that was recently discussed on HN.
There are things we can do (and I'd argue we should) systemically to incentivise building for the long term but I think the cultural mindset needs to change first. I hesitate to lay all the blame on consumerism, but it certainly seems like a big part of the problem. I hope that projects like The Long Now and Artemis signal a shift toward thinking more about future generations.
You need to make a chair so your client can sit down and work at their desk. While waiting on this chair, your client needs to stand awkwardly at their desk. So they care about how long it takes for you to make this chair, as well as how much it costs and all the normal stuff.
You can build a very fancy premium chair custom fit to your client's body-shape, but it's extremely expensive, wouldn't work as well if the client ever gained or lost weight, and took forever to build, with your unhappy client standing around the whole time yelling at you to make it faster.
Maybe you can build a super cheap chair real fast, but then this chair falls apart all the time and you need to keep on fixing it. Client got to sit down fast, which is great, but they keep on needing to get up so you can glue on more balsa wood to try to hold it together.
You could get a stack of crates for them to sit on. Not really a chair, but it works, and you could then work on a nicer chair that won't fall apart instantly in the meantime. Or maybe they're happy enough with the stack of crates and say don't bother with the chair, I don't want to pay for it.
And then maybe your client moves cross-country every year, and so will just toss the chair when they move, and find someone else to build them a new chair when they get to their new apartment.
1. Everybody is swamped. Sales is now scaling up, which means new customers, new demands, and new fires every day.
2. The product's complexity has now grown tremendously. No single person, not even founding engineers, can fit the entire system in their head now. Very few people are equipped to even dig in to the more esoteric pieces and understand how and why they work, because enough time has passed that any context that was undocumented is either forgotten or hidden deep in a random person's head, and very little was documented (not necessarily the wrong choice - the time saved on not documenting things likely contributed to the company moving quickly enough and surviving to this stage).
3. Hiring has been approximately solved, and occurs on an approximately predictable cadence. Your engineering team has a small but predictable stream of newcomers. Every year or so, the team doubles.
In this region, you have some interesting constraints to deal with:
1. Building new features is necessary to increase revenue, but each new feature has rapidly increasing marginal cost (due to overall system complexity). Therefore, we need more throughput.
2. Adding new engineers is necessary to increase throughput, but each additional engineer now provides rapidly diminishing marginal throughput.
There are many tactics for succeeding in this region. The general focus here is "increase marginal throughput per engineer". Some not-writing-code tactics include investing in solid onboarding, developing effective documentation at the system level, and narrowing focus on product initiatives (becoming more deliberate with "we will try experiments A, B, and C in this order" as opposed to "we will throw the kitchen sink at the problem and see what sticks").
From a "writing code" perspective, I think this is where craftsmanship really shines. Constructing abstractions that dramatically increase the productivity of each marginal engineer provides an enormous pay-off in this region. Of course, the correct engineering abstractions must also be coupled with the correct engineering team structure. The effects of Conway's Law in this region are felt very, very strongly.
Unfortunately, I think it is rather unlikely for someone to just be able to drop in to a company in this region and begin working on this kind of neat problem. I think the most likely ways to get to work on this are:
- Be there from early on (arguably, first 5 or 10 engineers). Having domain experience is extraordinarily helpful in understanding which systems will be force multipliers.
- Be very, very experienced. I can see a role for a senior staff engineer to be hired in at this point to help build these force multipliers. I think this person would need to have previous experience in companies of this size to correctly value domain experience and judge the right pieces to build.
- Join in this region, and work on the team of one of the two people above.
This is basically what I specialize in, albeit mostly accidentally. It's actually not as hard as you'd think, because almost every company that gets to 2.5M ARR has already gone through at least one and sometimes two complete rewrites, that have solved some but not all of their issues. So at this point they already understand that their marginal cost of adding new features is increasing, they know what their problems are, and they understand the value of software architecture.
If you want to do this though then you can't really just apply for a job at the company, the technical co-founder has to kind of invite you to come in to work on this stuff.
The truth is, there are no easy problems in programming. There are always a thousand corner cases and unexpected complexities. And all those buggy, unmaintained libraries you find in your language's package repository are still probably better than anything you're going to roll yourself. Look into the source code of such libraries and alongside the hacks and outdated code, you'll find hundreds of subtle problems avoided, because the author spent a lot of time and effort on the problem you've spent ten minutes thinking about.
The answer to quality problems in software engineering is not fewer libraries, it's more and better libraries. In the same way that the high-quality civil engineering the author admires is not achieved by having one person do everything themselves, but by the successful collaboration of a large number of skilled and specialised subcontractors.
Right now, we're still in the infancy of our industry. This, combined with the continued evolution of hardware capabilities, has meant that software engineering has never had the long-term, stable base on which to build repeatable, reliable routes to success. That's fine. If we're in the same position in a couple of centuries, it's a problem, but right now we should be experimenting and failing.
Even today, the situation is not as bleak as is often portrayed in articles like this. There are a huge number of robust and high-quality libraries and frameworks providing extremely complex capabilities in an easily reusable way. If a programmer from the mid-nineties were to be transported to the present, I suspect they'd be amazed by what off-the-shelf libraries enable even a single programmer to do. The undeniable quality problems that do affect much modern software often have more to do with the hugely increased scope of ambition in what we expect software to do for us.
 Outside a handful, like encryption, that are acknowledged as hard for the purpose of being the exception that proves the rule.
Simple != easy. It's much more difficult and more time-consuming to identify the simple solution than to implement the easy solution.
I think that a diverse selection of libraries and tools exploring new problem spaces is good - but once the problem space is understood, we should replace them with something closer to the fundamental truth of the problem.
I think one of the fundamental problems with our industry is that there is no funding mechanism for those subcontractors. The "base" that we build on is either unpaid volunteers, or tailored to a specific company's needs and goodwill.
If free software was not the norm, I think we'd see a lot of investment and enterprise around producing high quality libraries.
I've come to think of my work as figuring out ways to think about problems so they become easy.
Not quite the same thing, I know.
As a developer, there is nothing I would like better than to turn out the highest quality software, as long as I could be compensated for it.
Does anyone really think that the millions of lines of code that went into Windows is worth the piddling $100+ the consumer pays for it?
Let's start with a basic premise: quality is worth money. I'm sure we agree that a Hyundai (or whatever passes for a cheap car in your neighbourhood) costs less than a Porsche or BMW because the Porsche is better designed and better built. Better design means more experienced and brilliant engineers, more talented designers; better built means more skilled and dedicated assembly-line workers. All these people demand more money. Hence the Porsche company commands a higher price for their Carreras, 911s, whatever.
The same should be true of software. If a company invests great care in designing a better operating system, or a better word processor, that never crashes and always has helpful help and meaningful error messages, how much do you think that would be worth? I'll give you a hint: the military do in fact get top-quality software for their jets and rockets. They get software that almost never fails, and does exactly what it is designed to do.
Do you know how much this software costs? $50 per line of code.
Translating into everyday terms, a bullet-proof operating system would cost you, at a rough guess, $5,000 per copy. (An OS runs to tens of millions of lines, so at that rate the development cost is in the billions; spread over the number of copies a premium product could realistically sell, you land in the thousands of dollars per copy.)
Now I have no doubt some people would be happy to pay $5,000 for a stable OS. However, there are many people who couldn't afford this amount.
So what would happen? In any other field (automobiles, stereos, TVs, restaurant meals, housing) people who can't afford quality just put up with less and shut up.
But in the software field... well, they just make a copy of someone else's software, and enjoy the full benefit of top-of-the-line quality, without paying for it. I wager even you couldn't resist obtaining a $5,000 OS for free.
How long do you think a software company would last if their product cost millions to make, and they only sold a few copies at $5,000? Why they would go broke, of course.
This is the crux of the software dilemma: except in a few specialized cases (commercial or embedded software), the maximum price for software is the monetary equivalent of the nuisance value of duplicating it.
In consumer software, this is in the range of $19-$29.
The digital world turns the economics of quality upside-down: in traditional models where quality is supported by price, the market pays the price if it wants the quality.
In the digital model, a perfect copy of the merchandise costs virtually nothing, and undercuts the legitimate market, putting a cap on the maximum that can be charged for a product.
There is a built-in limit to how much time, effort and expense a company can invest into a mass-produced product. This cap is equivalent to the "nuisance value" defined above. It is not reasonable for the consumer to expect warranties and liabilities that go way beyond what the manufacturer receives from sales of the product.
The music and movie industry are wrestling with the consequences of easy digital duplication. They have taken a different route to protecting their intellectual property.
I challenge anyone to come up with a business model where the software developer that invests great expense in building a quality product, can obtain full compensation from the market segment that values his quality.
Whose fault is it anyway? Simple: it's the consumer who copies and pirates software that forces the price down and therefore the quality to remain low. Any analysis that does not take this into account is simplistic.
It is naïve to think that most developers are not struggling to make ends meet and stay in business (Microsoft notwithstanding).
Software piracy, by the consumer, at scale, is historically negligible. It's even more negligible now.
How can one pirate software like Netflix or Google Docs, where there's a centrally hosted server that runs the actual software?
Apple's App Store is full of software that is incredibly difficult to pirate. How does a consumer duplicate a copy of a paid iPhone app? Yet, the cost of an iPhone app isn't significantly more.
I don't think consumers pirating software has as significant an impact on the price of software as more normal market forces. If I want to manufacture a paperclip and sell it, I can't charge a thousand dollars, even though the factory is pretty expensive to build, because others are able to charge less and profit.
If I want to make an iphone app to share short videos, I can't charge $1000, even if it's perfect code, because my competition can do the same thing for free. And if I'm sending video to other users, well, even perfect software will hit network errors.
The reality is, many software companies are profitable right now while selling software for 10s of dollars, so if I charge more, I won't be competitive.
Consumers don't pay for software that's better than a certain amount. Once it's "good enough" to solve their problems "well enough", that's it.
Curiously, that is the same industry that is well known for pumping out some of the worst/crappiest software in existence, so I seriously doubt your assertion that software piracy is at the core of the problem.
I think the main reason is that most people don't really care about "bulletproof" software, they'd rather have something with more features, or cheaper, or easier to use or more visually attractive as long as the bugs aren't too annoying.
We haven't even figured out fully how we should represent a string or date or number in software let alone have an enduring language or ABI.
I feel like I am building everything on a foundation of sand and it will need to be rebuilt every 5 years, 10 if you're lucky.
I do think it will change, but it will be a while, and by then it will probably just be a few big players making software anyway, kinda like car companies.
It's not the prettiest thing and could have certain upgrades in light of new security practices (eg use new encryption algos & go thru an in-depth security audit). But to the clients, it's unbeatable and there's nothing remotely close to migrate to.
Unbelievably, it's a vb6 app that I never bothered porting to dotnet. Even more unbelievably, it's a port of its predecessor, written in turbo pascal. As long as I can continue to find the dev environment installer, it's good to go.
So. I think a large part of the problem is that people don't take their dev environment seriously enough to keep an install disk (if one was available in the first place).
A second large problem is that modern development relies heavily on 3rd party library use, which means your software is reliant on more than one company for your binary.
So. Find an environment that you can archive/keep, and ignore the not invented here rule to a large extent
And what I like most about such success cases is that users mostly feel pleased using them -- those systems basically do the job, fast. For me, that says a whole lot about how wrong some of the philosophies going mainstream are, with their focus on great aesthetics, mobile-first and, in my opinion, a very mistaken sense of user-friendliness.
A coworker has written a tool that edits the registry such that you can do a fresh install of our compiler/IDE (Delphi), run the tool, and then load up our project and it'll compile. The tool gets updated when needed, usually when we change dependencies. All dependencies are kept alongside the source in the same repo, which makes it easy to ensure you got the right stuff available for compilation, as well as being able to make patches if we really need to fix an issue ourselves.
Need to compile an old version? Check out the right branch, run the tool and start the IDE version used for that branch. Easy.
That's on Windows, which has a stable API and ABI. The 90's is probably the limit because older software is 16-bit and doesn't work on current 64-bit Windows.
On Linux all software breaks with each distro release because "core" libraries are unstable. I've got executables and libraries compiled on RHEL 6, and most of them fail to load on RHEL 7 because some something.so is not found.
It’s also worth noting that you’re comparing apples to oranges in that Visual Basic 6 is a very different language to C++. VB6 has its own warts when it comes to archiving, such as its dependence on OCX controls and how they require registering before use (they can’t just exist in the file system like DLL and SO libraries; OCXs need their UUIDs loaded into the Windows Registry first).
To further my previous point, if you wanted to use another language on Linux, maybe one that targets the OS ABIs directly (eg Go), then you might find it would live longer without needing recompiling. Contrary to your statement about user space libraries, Linux ABIs don’t break often. Or you could use an interpreted language like Perl or Python. Granted, you are then introducing a dependency (their runtime environment being available on the target machine), but modern Perl 5 is still backwards compatible with earlier versions of Perl 5 released in the 90s (the same timescale as the VB6 example you’d given, except Perl is still maintained whereas VB6 is not).
But GTK, Qt, libwhatever? More often than one would like.
I agree on compiling statically to avoid DLL hell. However it is fairly difficult in practice because software rarely documents how to build it statically, and it very often takes a dependency on some libraries from the system.
All it takes is one dynamic dependency to break (libstdc++ is not stable for example).
It’s actually not as dramatic as that, and you can still ship libc as a dependency of your project like I described if you really had to. It’s vaguely equivalent in that regard to a Docker container or chroot, except you’re not sandboxing the application’s running directory.
This is something I’ve personally done many times on both Linux and some UNIXes too (because I’ve had a binary but for various different reasons didn’t have access to the source or build tools).
I’ve even run Linux ELFs on FreeBSD using a series of hacks, one of them being the above.
Back before Docker and treating servers like cattle were a thing, us sysadmins would often have some highly creative solutions running on our pet servers.
> I agree on compiling statically to avoid DLL hell. However it is fairly difficult in practice because software rarely documents how to build it statically, and it very often takes a dependency on some libraries from the system. All it takes is one dynamic dependency to break (libstdc++ is not stable for example).
There are a couple of commonly used flags but usually reading through the Makefile or configure.sh would give the game away. It has been a while since my build pipelines required me to build the world from source but I don’t recall running into any issues I couldn’t resolve back when I did need to commonly compile stuff from source.
Indeed. I do not understand why dynamic linking is still routinely used as part of so many software deployments. A lot of the arguments for using it that once made sense are now obsolete; some have been for a very long time. It's not that it's never useful, but it seems to be used almost by default in a lot of situations, even when a simple, static link would result in software that is both more efficient and more reliable.
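To illustrate how cheap the "simple, static link" option has become, here's a trivial Go example (standard toolchain, Linux); with cgo disabled the result is a single statically linked binary with no libc dependency. This is just a minimal sketch of the build step, not a claim about any particular project:

    // hello.go - a deliberately trivial program. Built with cgo disabled,
    // the standard Go toolchain emits a statically linked binary:
    //
    //   CGO_ENABLED=0 go build -o hello hello.go
    //   file hello    # reports "statically linked" on Linux
    //   ldd hello     # "not a dynamic executable"
    package main

    import "fmt"

    func main() {
        fmt.Println("hello from a static binary")
    }

The resulting file can be copied to another distro (or another release of the same distro) and still run, which is exactly the failure mode the RHEL 6 to RHEL 7 example above describes.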
This is exactly how I feel. I'm hoping at some point, the amount of churn will decrease, the community will figure out what problems it actually needs to solve, and come up with standardizedish practices for this. Obviously not all software can fit into this mold, but a lot of it probably can.
To do that I think an important part is to "move fast and break things" to figure out what works and doesn't work in practice (in practice including how to actually manage people to build your software). I do think for that to happen we need to retain more knowledge between generations; I see a lot of rediscovery and reinvention of the wheel.
If Java were made today it would probably be UTF-8 instead; like I said, we are still figuring strings out. I hope UTF-8 is it.
What do you mean? UTF-16 works well with all unicode characters. Some characters are encoded by two code units, but that's true for UTF-8 as well. ASCII strings are stored using 1-byte encoding on modern JVMs. I would prefer UTF-8, but hybrid ASCII/UTF-16 string works fine too.
you have to go to great lengths to get a broken character out of a valid string, including casting and cutting it intentionally
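For anyone curious what "two code units" means in practice, here's a quick sketch (in Go rather than Java, but the encoding arithmetic is the same): a code point outside the BMP takes two UTF-16 code units and four UTF-8 bytes, and slicing at the wrong byte is one way to get that "broken character".

    package main

    import (
        "fmt"
        "unicode/utf16"
    )

    func main() {
        r := '😀' // U+1F600, outside the Basic Multilingual Plane

        // UTF-16 needs two code units (a surrogate pair) for this rune.
        units := utf16.Encode([]rune{r})
        fmt.Printf("UTF-16: %d code units %x\n", len(units), units)

        // UTF-8 needs four bytes.
        s := string(r)
        fmt.Printf("UTF-8:  %d bytes %x\n", len(s), []byte(s))

        // Cutting at an arbitrary byte index splits the rune and yields
        // an invalid sequence, i.e. the "broken character" case.
        fmt.Printf("truncated: %q\n", s[:2])
    }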
In the example the author gives, the real mistake is allowing the web framework to define a system that you then try to customise to produce your application. The battle is already lost because your business logic is now a dependency of someone else’s (rotting) generic web app template. This may be appropriate for you if this is a proof of concept or spike but for something with a longer lifetime, any time you save will be paid back with interest when the host framework diverges from your needs.
A JSON parser is not an equivalent class of dependency if you keep it at the periphery of your system. There’s not much point in writing a JSON parser unless you have a particular requirement that cannot be satisfied by a third party library. You should be able to swap it out for a different JSON parser in a few hours.
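To make that concrete, a thin wrapper is usually all it takes; a rough Go sketch (package and names invented) in which swapping the parser really is a one-file change:

    // Package codec is the only JSON surface the rest of the app sees.
    package codec

    import "encoding/json"

    type Codec interface {
        Marshal(v any) ([]byte, error)
        Unmarshal(data []byte, v any) error
    }

    // stdlibCodec adapts the standard library; a faster third-party parser
    // could implement Codec instead without touching any callers.
    type stdlibCodec struct{}

    func (stdlibCodec) Marshal(v any) ([]byte, error)   { return json.Marshal(v) }
    func (stdlibCodec) Unmarshal(b []byte, v any) error { return json.Unmarshal(b, v) }

    // Default is what callers import and use.
    var Default Codec = stdlibCodec{}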
80:20 seems like a ratio that was just pulled out of the air; I have no data to hand either but for most user applications I would suspect the real ratio is more like 99:1.
Edited to add: I’m still in agreement with a lot of what the author says here, such as: evaluate your dependencies seriously, understand them, and be mindful of their impact on things like binary size.
But it is a very expensive way of doing things, and it would not work well with modern, constantly updating libraries. Still, I do personally consider software maintenance to be an antipattern: I'd rather have my software be correct and eventually replaceable than constantly changing and "maintainable".
The classic 1950s Big Iron software, now run swaddled in layer upon layer of emulation on current mainframes, is software which lasts because people will recreate its environment again and again, and ignore the people who are ill-served by it because they don't fit into that environment neatly. Oh, your name doesn't fit into an all-caps fixed-width EBCDIC form? Go fold, spindle, and/or mutilate yourself, NEXT! (This happens to me. Over and over again.)
On the opposite extreme, unmaintained Internet software rots:
Or, to be more precise, software is built with assumptions underpinning it, and those assumptions change out from under it more quickly on the Internet than in other contexts. Software can go from being secure and completely above reproach to being a major factor in DDoS amplification or spam runs because the world changed around it. Software that lasts is like ships that last: Replace the hull, the mast, the sails, the cabins... same ship, neh?
Also why do you think the ability to push updates results in crappier software? In the 90's, if your copy of CorelDraw or Windows had a bug, you lived with that bug for years. Today, if it's a common showstopper, it gets fixed quickly.
To my eyes, everything's gotten far better.
Compared to the late 2000's, today's computers have SSD's and retina displays and play 4K movies. They're vastly faster, with typography you can't see pixels in.
And OS's and applications are basically the same. My macOS is no less stable than it was a decade ago. The main difference is that my Mac is far more secure, so I trust third-party software much more.
I'm just not seeing how the quality of OS's or applications has gone down. I think what would be more accurate to say is that both OS's and applications have added more features, and that otherwise quality has remained basically the same.
Sure, I still have finicky Bluetooth issues today. But I had finicky Bluetooth issues 10 years ago too. It's certainly no worse. But now Bluetooth gives me AirDrop too.
And yet, something as basic as a Slack client now requires gigabytes of RAM, and microblogging such as Twitter loads a monstrosity of a webapp that immediately makes the fans spin up to display 240 characters, in the rare case it actually loads without errors that require refreshing the page. Modern entry-level laptops have the processing power of a decent machine from a decade ago, and yet they appear just as slow to do the same computing tasks we did 10 years ago.
My 2017 Macbook lags and stutters when loading a YouTube video page. YouTube used to load fine and not stutter in 2009 on a laptop with a third of the RAM and CPU that I currently have, and yet the task at hand didn't change at all, it still just needs to display a video player and some text.
Windows 10 broke start menu search. Come on, this problem was solved a decade ago.
Every large website's login flow now involves dozens of redirects through various domains which can break in all kinds of interesting ways leaving the user stranded on a blank page in the middle of the flow. I know the reason behind them (oAuth, OpenID Connect, etc), but as a user I don't care; this is a major UX downgrade and the industry should've done better.
We've replaced offline-first applications with cloud-first. Nowadays even something that should work fine offline will shit itself in all kinds of unexpected ways if the network connection drops or a request fails.
An android phone from 5 years ago is pretty much unusable on the other hand.
One may argue phones are a platform and complex. Fair enough. So then compare to something like a Sonos player.
I can pull up speaker setups from the 80s 90s and even earlier and have no issues with them working. But something like Sonos may not last 3 years.
And talking about pure software, I am not sure what the definition of consumer is, but a lot of software from the 80s and 90s is going strong and is extremely usable.
Not sure what you mean? It might not get updates anymore and you may not be able to install the latest versions of most apps but the same is true about iPods.
Now, as you say, the common showstoppers get fixed quickly, but they're probably a lot more common. Meanwhile the overall cohesion and well-thought-out feeling of a lot of older software is largely gone.
Whatever library, language or foundation you built on, you can guarantee whatever version you used is going to stop getting security updates in a couple of years. And then there will be a whole lot of baked in dependencies you didn't know about - URLs (like XML schema URIs that were never meant to be resolved but lots of libraries do), certificates, underlying system libraries etc.
So designing software "to last" now is designing software to be constantly updated in a reliable and systematic way. And the interesting thing about that is from what I can observe, the only way to achieve that currently IS through simplicity - as few dependencies as possible, those that you do have very well understood and robust, etc etc. So it sort of comes back to the author's argument in the end, though maybe through a different route.
A TLS client is not possible to future proof like this of course, but you could have a roster of approaches. For example, try getting your hands on the latest curl / wget builds (try each with a few different approaches), or the latest JVM/Python/Powershell & use those TLS APIs.
A fun game would be to try this with tools available 20 years ago, and then redo it to make a sw time capsule from present day to 20 years into the future...
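A sketch of what that roster might look like: a small Go wrapper that shells out to whichever TLS-capable fetcher still works on the machine (the specific commands are purely illustrative, and the fallback list would be whatever you expect to be installable in the future):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // usage: fetch <url>
        url, out := os.Args[1], "download.bin"

        // Roster of fetch approaches, tried in order. The bet is that at
        // least one reasonably current TLS stack will still be around on
        // whatever machine runs this years from now.
        attempts := [][]string{
            {"curl", "-fsSL", "-o", out, url},
            {"wget", "-q", "-O", out, url},
            {"python3", "-c",
                "import sys, urllib.request as u; u.urlretrieve(sys.argv[1], sys.argv[2])",
                url, out},
        }

        for _, a := range attempts {
            if err := exec.Command(a[0], a[1:]...).Run(); err == nil {
                fmt.Println("fetched with", a[0])
                return
            }
        }
        fmt.Fprintln(os.Stderr, "all fetch approaches failed")
        os.Exit(1)
    }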
* Half baked in-house implementations of things are often filled with bugs and broken edge cases. Especially when it comes to security and concurrency.
* In a larger code base, you won't understand everything whether it's in-house code or third party code. The in-house code was written by someone who quit 5 years ago and is now a monk in Tibet.
Kernels, compilers, language runtimes, databases? Sure, build them to last.
Web pages that will not be used in 2 weeks? Don't waste your time with "build to last".
The problem is that some of us think that our software should be built to last - but in reality it's just some mediocre thing that needs to get in front of the clients as fast as possible, and some bugs are "OK" to live with.
I'd almost always take a pasted tutorial over a robust dependency; the former is more reliable. Any dependency I do take on is something I have to watch like a hawk, tracking its changes and project status.
Every time you try to load a resource from a domain you're introducing a huge risk by letting them be the judge, jury and executioner.
Domain gets hacked and injects malicious scripts or collects user data? Domain fails entirely (no-renew) or in 10 years just stops responding?
A lot of web pages today have built in self-destruct features purely based on the JS/resources they're loading.
For zero benefit, as well, since the amount of bandwidth saved is so marginal.
All those mobile games that require a server, once the server goes down the game no longer works.
In the old days you set up Doom as a server on your PC networked to other PCs for multiplayer. Those can last and be remade for new platforms.
When the server is shut down, the game breaks and the software _is_ unusable.
This often happens because of the misaligned incentives of the business to not release the server code. This disappoints both the original developers and the consumers.
I'd argue that this is not only a software problem, but mostly a problem of copyright distorting the incentives of publishers.
One could imagine a client-hosted game, or an alternative architecture that removes the central dependency on the company, as a way to build software that lasts longer.
In the online game I'm developing - I make sure that every client can also run as a game server. This way if my servers are shut down players can play without depending on me.
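Concretely it's just one binary with a host/join switch. A stripped-down Go sketch of the shape of it (not the actual game code; the "protocol" here is a placeholder):

    package main

    import (
        "bufio"
        "flag"
        "fmt"
        "net"
    )

    func main() {
        host := flag.Bool("host", false, "run as a game server instead of joining one")
        addr := flag.String("addr", "localhost:4000", "address to listen on / connect to")
        flag.Parse()

        if *host {
            serve(*addr)
        } else {
            join(*addr)
        }
    }

    // serve turns this same binary into the authoritative game server.
    func serve(addr string) {
        ln, err := net.Listen("tcp", addr)
        if err != nil {
            panic(err)
        }
        fmt.Println("hosting on", addr)
        for {
            conn, err := ln.Accept()
            if err != nil {
                continue
            }
            go func(c net.Conn) {
                defer c.Close()
                fmt.Fprintln(c, "welcome, player") // stand-in for the real protocol
            }(conn)
        }
    }

    // join connects to whichever server a player chooses, official or not.
    func join(addr string) {
        conn, err := net.Dial("tcp", addr)
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        msg, _ := bufio.NewReader(conn).ReadString('\n')
        fmt.Print(msg)
    }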
My decision to remove myself as a dependency directly goes against the incentives I have as a publisher. I slightly narrow the possible monetization options for the game.
Mostly I'm trying to explain why developers/publishers decide to make brittle software, even if I don't agree personally.
What I've learned is that making stuff as simple and stable/low maintenance as possible is the only way to be sane. I can't afford to deploy code that then requires me to hold its hand with a team of 3-4 engineers for the upcoming years. It needs to run itself and be forgettable. And it needs to be written fast.
This has become much harder in recent years with the switch to things like Kubernetes. Kubernetes moves so fast and needs constant nurturing, even for casual users. Running old versions of it is painful and dangerous, so you're forced to update. And the ecosystem is still so early in its life that all the paradigms change and flip on their head every year. Odds are that something you deploy in it today, will need to be looked at in 18 months. And the whole thing will need a team dedicated to keeping it healthy.
Anyhow, that's my angle on the author's rant. Things need to last, IMO, mostly because I don't have time to go back to them later.
So my code definitely requires maintenance in a changing world. But more than once those breaks actually identified a bug somewhere else, and fixing that bug was essential. If I tried to self-fix wrong data, those kinds of bugs could be missed. Sometimes I need to fix the code to adapt to a changed format or something like that. But that's not a big deal, because the change is simple, and something like an exception stack trace lets me instantly find the code that needs fixing.
I found this out the hard way when I had to recompile some FPGA firmware for a $100K piece of equipment. It used a programmer not supported by Windows, a chip and code not supported by the current IDE, and a binary blob which we did not have the source code for and would not link. Luckily we winged it by replacing the FPGA with a hardware-compatible replacement... We now preserve the whole development environment on a VM and save it with the source code.
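In code terms it's the difference between silently patching bad input and failing loudly at the exact spot that needs a change. A tiny Go-flavoured sketch of the habit (package and names invented):

    package payments // hypothetical

    import (
        "fmt"
        "strconv"
    )

    // parseAmount refuses to guess: a bad value surfaces as an error with
    // context instead of silently becoming 0 and hiding an upstream bug.
    func parseAmount(s string) (int, error) {
        n, err := strconv.Atoi(s)
        if err != nil {
            return 0, fmt.Errorf("parseAmount: unexpected value %q: %w", s, err)
        }
        return n, nil
    }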
In general, I'm not sure it's possible for all of these things to be true.
Many libraries are useful because of their side effects. Depending on what those side effects are, it may simply not be possible to integration test your entire system in a way that would satisfy all of the above claims.
The alternatives tend to involve replacing the parts of the code that cause real side effects with some sort of simulation. However, you're then no longer fully testing the real code that will be doing the job in production. If anything about your simulation is inaccurate, your tests may just give you a false sense of confidence.
All of this assumes you can sensibly write unit tests for whatever the library does anyway. This also is far from guaranteed. For example, what if the purpose of the library is to perform some complicated calculation, and you do not know in advance how to construct a reasonably representative set of inputs to use in a test suite or what the corresponding correct outputs would be?
My experience tells me that when this fails to happen, it's usually not explicitly due to too many dependencies. Rather, it is because the developers have a poor/partial understanding of the problem domain.
External libraries help this problem a bit because they allow developers to offload tasks onto other, more experienced developers. They can (and occasionally will) misuse those solutions, but that is not the dependencies' fault. On the contrary, skilled developers will produce quality solutions with or without dependencies.
The signal seen (poor software has many dependencies) does not mean that software with dependencies is poor. It means instead that developers who have a poor domain understanding but good business skills are disproportionately successful compared to their peers who have good domain understanding but poor business skills.
Make of that what you will :)
Look at the GNU coreutils, things you probably think shouldn't have changed in a while: https://github.com/coreutils/coreutils/tree/master/src
There are actually frequent edits. The legendary disk destroyer (dd) was just updated last month. The difference is the competence.
Most software is not worth to preserve the same way most buildings are not worth to preserve (except for perhaps the aesthetically pleasing facade).
Software in hardware that is designed to do a certain job should indeed last. Telecom towers, for example, should work fine "forever".
But software as in APIs and services constantly has to evolve and be expanded. Yeah, that is a harder sell.
We live in a crypto world where each advancement in computing, or new way of attacking a problem, could render algorithms "bad", which means we have to update and adapt.
Imho the tech and how good you are at programming are not very relevant when the user doesn't like to use your software.
I wrote a small CMS for a company; it was replaced twice, but every time they ditched the replacement and returned to my old simple version.
This is just one example but I think that should be in people's minds when they create software that lasts: people must enjoy using it.
This can even apply to a command line program.
Some software has lasted for many decades. This isn't necessarily a good thing. 
There might be good reasons to build software to last, but "These other engineering fields build their things to last" isn't one of them. I have no doubt that if you could reliably send fixes to bridges and spaceships, people would do it - and that many lives would have been saved.
My personal opinion is you should design the larger ecosystem around your software to last and be robust to the replacement of any individual part. Building on Go's standard library HTTP server would have seemed needlessly cutting-edge about five years ago. Building on Apache and mod_cgi would be considered dated today (even though it would work). The stack you should be using will change. Build your system so that, when change comes, you're not afraid to write new software.
In practice, this means making good designs to avoid hairballs of complexity, having deployment pipelines that let you roll out canary versions (or better yet, send a copy of production traffic against a sandbox environment), keeping track of who's using APIs internally so you can get them to change, having pre-push integration testing per the Not Rocket Science Rule so you can be confident about changes, etc. You should think about your choice of web framework or JSON library up front, sure, but more importantly, if a sufficiently better web framework or JSON library comes along tomorrow, you should be able to switch. As a principle - any time you see something that you think you'd be afraid to change in the future, don't ship it if you haven't shipped it yet and figure out how to build the right abstraction around it if you can.
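For the "copy of production traffic" part, even a crude mirroring proxy goes a long way. A rough Go sketch (hostnames made up, error handling trimmed) of shadowing requests to a sandbox while users are still served from the primary:

    package main

    import (
        "bytes"
        "io"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        primary, _ := url.Parse("http://primary.internal:8080") // real backend
        shadow := "http://sandbox.internal:8080"                // candidate build

        proxy := httputil.NewSingleHostReverseProxy(primary)

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            body, _ := io.ReadAll(r.Body)
            r.Body.Close()

            // Fire-and-forget copy to the sandbox; its response is ignored.
            // We only care that the new code gets exercised on real traffic.
            go func() {
                req, err := http.NewRequest(r.Method, shadow+r.RequestURI, bytes.NewReader(body))
                if err != nil {
                    return
                }
                req.Header = r.Header.Clone()
                if resp, err := http.DefaultClient.Do(req); err == nil {
                    resp.Body.Close()
                }
            }()

            // Users are served from the primary as usual.
            r.Body = io.NopCloser(bytes.NewReader(body))
            proxy.ServeHTTP(w, r)
        })

        http.ListenAndServe(":8000", nil)
    }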
The data model, protocols and knowledge base (docs, requirements etc) should be designed to last and be extensible.
By this definition, wouldn't all three processes be the last ones to send the correct commands?
Not necessarily true.
If the requirements of the project are to be secure or fast or easy to maintain or built quickly or have beautiful user experience or be maintainable or be designed to be robust & failure resistant, then that is what it should be.
And if the requirements of the project are "the software must be designed to last", then the software should be designed to be designed to last.
But if it's not a requirement, then no, it does not matter if it lasts.
The point being that blanket statements about what software should be are missing important context about the purpose of the software and the constraints under which it was developed and the specified requirements.
Also, the author said software should be designed to last not that it must.
Lastly, you didn't address the core argument of the article, which is that you should carefully select and manage dependencies. Reading your comment I wonder if you even read the article.
It would be interesting to hear of software that served its purpose well for a year or two and then was no longer needed. I'm sure these projects exist, but I'd imagine they aren't very common given how expensive software development is in the first place.
I feel they will continue to be "supported" for a long time, like Perl 5 surely will be. (Though I'd avoid starting anything new in it, there are stable mature things written in it that, like with mainframe stuff mentioned elsewhere, might just end up being encapsulated.)
The whole bit is nonsensical, including the parallels with civil engineering: bridges are built with tolerances, a maintenance schedule, and a disposal plan for when maintenance costs become higher than replacing the bridge, because nothing in this world actually lasts.
Heck, our internet is built around operating systems that didn't exist 25 years ago. Imagine investing capital and sinking opportunity costs into delivering the perfect VAX typing program...
Mind, this is not to say that all software should be quick and dirty. But the conscientious engineer knows whether they're building infrastructure to last decades or a temporary bypass to support some time-limited traffic spike.
The important bit is to ask the stakeholder what's more appropriate.
I also want to see people build their own JSON parsers at an acceptable level of performance, possibly as streaming parsers instead of just gobbling everything into memory. Imagine wasting a month on building that and then having to defend your choice every time a bug or malformed input causes silent data corruption somewhere in the backend.