
The software industry is going through the “disposable plastic” crisis - ColinWright
https://lwn.net/Articles/829123/
======
spamizbad
People blame developers but it's all driven by a product mentality that favors
rapid iterations and technical debt to run business experiments on customers.
Slow-and-steady, carefully written software isn't tolerated within many
product orgs these days.

~~~
papito
I am really hating working in the current state of the industry. I am used to
writing robust, clear, well-tested, and _easy to maintain_ systems.

The job right now seems to be just stitching AWS services together, and
spending the rest of your time debugging and putting out fires.

~~~
jmcqk6
The lie we tell ourselves is that the quality of code matters to non-
engineers. It seems it doesn't.

The most uncomfortable truth of our field is that there is _no_ floor for how
bad code can be, yet still make people billions of dollars.

Because that's the outcome everyone else is seeking - making money. They don't
care how good the code is. They care about whether it's making money or not.

~~~
mint2
Haha the non-engineers at my place don’t care at all about code quality.

The non-engineers also decided to write their own internal tool, with no
review and zero attention to quality. It’s been outputting wrong results that
will cost them a few million at least.

~~~
WorldMaker
Reminds me of the job where we'd have huge slowdowns in our massive SQL
Cluster because some non-engineers in "R&D" "had" to run unoptimized, barely
legible Access queries against it.

We eventually had to create a replica of Production specifically for such
nonsense.

Well, then we moved a lot more of the business logic they "helped develop"
onto a distributed platform closer to "real time" for the application, because
it was very dynamic in nature and couldn't be denormalized back into the DB
and remain performant. So I had to build a new system for some of their ad hoc
queries and teach them how to use it.

What eventually got to me was realizing that the "R&D" person I was "training"
on the system wore a different three-piece suit every day and, at least by
fashion signals, was probably making an order of magnitude higher salary than
me for slower, more wrong answers than any system I worked on, but was
probably great at throwing it (very slowly) into very fancy graphs in Excel
and useless salesmanship.

There were a lot of reasons I already wasn't long for that job at that point,
but that was a lasting lesson that quality and performance won't matter to
certain styles of upper management.

------
caconym_
Something like four years ago I wrote a command-line tool in Rust to interact
with a service I wrote (not in Rust). Then I left that company for a new job.

At a recent happy hour, I learned that despite nobody there having much of a
clue about Rust, they were able to clone the repo, build it, and run it based
on a short wiki page I left behind. I thought that was really cool, and a
great validation of the care the Rust project takes around dependency
management and backwards compatibility.

~~~
edflsafoiewq
One of the more attractive things about Rust for me is that I can easily write
portable code. I distribute Windows binaries for a certain Rust project and
they've kept working despite the fact I haven't tested on Windows in years.

I'm not sure I would even know how to _build_ any of my C/C++ projects on
Windows.

~~~
pansa2
> they've kept working despite the fact I haven't tested on Windows in years.

How do you know? :)

Cross-compilation, avoiding the need to _build_ on many different systems, is
great - but I’d still want to _test_ on all those systems.

~~~
steveklabnik
As a Windows user, I see the other side of this, and it is usually true. It is
easy to run in CI, maybe that’s part of it.

I’m not sure exactly how it’s the case, but I’ve been thinking about it a lot
lately; I had a convo on HN last week about this...

------
at_a_remove
I am reminded of the profession "programmer–archaeologist" in Vinge's _A
Deepness in the Sky_. In roughly 2010 I reminded my grand-boss that a great
deal of our important business processes consisted of largely uncommented
C and various mishmashes of shell scripts.

It's true, there's something about all of the frameworks and the fads and such
which causes so many projects to need, well, continual attendance to the
dependencies. One proposed platform at a previous job bothered me for reasons
I could not explain, so I mapped out the major dependent technologies, pointed
out that we had nobody with even a passing familiarity with them, and also
pointed out how quickly some of them were moving.

There's something to be said for keeping your tower of dependencies short.
It's a tradeoff, to be sure.

~~~
caconym_
I absolutely love the vision of software technology in the _Fire_ / _Deepness_
books. The "zones of thought" concept dovetails nicely and lets him expand on
those ideas, too. Highly recommended reading (for that and a bunch more
reasons).

~~~
arvinsim
I have put off reading this book for so long. Now I have a reason to pick it
up!

------
reissbaker
I've found that the further back in time you go, the harder it is to run
programs without encountering odd bugs, until you hit a certain inflection
point where the systems are so old that they're relatively simple to emulate,
and for popular systems we often have high-quality emulators that are capable
of running the software reasonably well. For example, try running Diablo 2
(released in 2000!) on modern Windows, or modern macOS: it's not
straightforward or bug-free, in my experience. It's reasonably good on Wine,
ironically — because the Wine developers have built a really good emulation
layer.

I don't think it's (application) developer fads; how would that even make
sense? Either the OS broke userland, or it didn't; if it didn't, then the same
binary should run just as well today as it did then. If it _did_ break
userland, that's not the fault of application or framework developers.

~~~
jiggawatts
I agree to a point, but there are harsh realities to this.

To name just a few off the top of my head, where there is no _choice_ in
whether the userland changes or not:

\- Cipher suites and crypto protocols changing often end up "cutting off"
older applications because they used an embedded SSL library that can't do TLS
1.2. Turning the old protocols off is required because vulnerabilities keep
being discovered in them, so they _must_ be disabled.

\- Changes in the CPU instruction sets. For example, it is not possible to run
16-bit Windows binaries in a 64-bit host OS, but there are no 32-bit supported
versions of Windows. Even if the majority of the code is 32-bit, a tiny 16-bit
component can be a showstopper. I had to support a large (20K concurrent
users!) environment that ran a 16-bit DOS application. On Windows 2008 32-bit.
In the year 2015. Really.

\- I've seen apps that just can't handle high-performance CPUs or multiple
hardware cores, simply because they were written in a 100 MHz single-core era.
I've seen all sorts of timing issues, deadlocks, etc. For example, one
particular release of World of Warcraft simply couldn't patch the game on
4-core CPUs, because one task would race ahead, time out, and then deadlock
the downloader utility. That persisted for about a year, because Blizzard
thought it was "okay" to support only 99% of their userbase that had 2-core
processors that were common at the time.

\- If your app has _any_ external dependencies of any kind, its days are
numbered. License servers. Update checkers. Content providers. Cloud services
of any kind. Whatever. It's going to be turned off eventually, and the app
will stop dead.
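The cipher-suite point can be seen in miniature with Python's ssl module. This
is a minimal sketch (not taken from any particular app) of a modern server-side
context that refuses anything below TLS 1.2, which is exactly the setting that
strands a client whose embedded SSL library tops out at an older protocol:

```python
import ssl

# A hardened server-side context: protocols older than TLS 1.2 are
# disabled because of known vulnerabilities, so a client whose embedded
# SSL library only speaks TLS 1.0/1.1 can no longer connect at all.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

The old app on the other side of this handshake has no recourse short of
shipping a new binary with a newer SSL library.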

~~~
wheybags
> there are no 32-bit supported versions of Windows

Windows 10 has a supported 32-bit version.

EDIT: It supports 16-bit apps too:
[https://www.groovypost.com/howto/enable-16-bit-
application-s...](https://www.groovypost.com/howto/enable-16-bit-application-
support-windows-10/)

~~~
jiggawatts
There's no server version with 32-bit support, though, and this application
required server components not in the desktop edition.

------
Hokusai
Problems linked to this:

\- Developers wanting the latest shinny programming language instead of
mastering one.

\- Developers wanting the latest shinny framework instead of sticking to one
and learning it inside out.

\- Developers wanting to start projects from scratch instead of learning to
refactor. The new code is full of tech debt half a year later, and some
developers already want to move on to new code, because if you don't know how
to refactor and evolve architecture, your new shinny product stays clean for
only a few months.

\- Developers who, in the hiring process, care more that you know the latest
version of the framework/platform/library than whether you understand
algorithms, data structures, and other basic stuff. They think that functional
programming is something new that started 2 years ago, because it is the first
time they have heard of it.

In my experience, businesses pushing for new features are only half the
problem; approaching development as fashion instead of engineering is the
other half.

~~~
brianmcc
Very much agree (friendly tip: it's 'shiny' with just one 'n', btw)

------
dreamcompiler
"You wanted a banana but what you got was a gorilla holding the banana and the
entire jungle."

When Joe Armstrong said this he was talking about object-oriented programming,
but it applies to dependency hell as well.

------
ssivark
The rich irony is that the problem is probably due to our reluctance to throw
away old code and rewrite better software with improved domain understanding.
But that requires thoughtful and disciplined stewardship. Instead we
recklessly patch leaky abstractions on top of each other because we’re too
lazy to work from scratch and are focused on “shipping” instead (cue:
Titanic).

We are so afraid of reinventing the wheel and actually having to understand
something that we will happily refactor out a car wheel and use it for a
bicycle (hey, “code reuse”!) and play the charade of “best practices”.

Software is one context where we don’t have to worry about “disposing of”
waste; we ought to make much better use of that.

~~~
sseagull
Actually I feel it is the opposite of that. Existing code can be battle tested
and reliable, with lots of unforeseen bugs encountered and fixed.

But those bugfixes made the old code difficult to work with, and rather than
take the time to understand it, new code is developed instead with a whole new
set of bugs (many having been solved previously). Working with old code is
unexciting and unsexy anyway, and it is unlikely to advance your career. And
so the cycle continues.

~~~
ssivark
The contrast between the two perspectives depends on the relative emphasis
placed on human understanding of the domain versus the domain knowledge
accumulated in pre-existing code.

Biased by the domains I have worked in, I feel that we undervalue the human
ability to figure things out and create good solutions, and overvalue code
that is available for “free” even though it addresses a problem only
tangentially related to what one is doing. My comment pertains mostly to such
situations. In situations where the opposite is true (the domain is mostly
static, the old code is a good fit, and we don’t trust human intelligence to
figure things out and solve problems), “stable” code is certainly valuable.

~~~
Falkon1313
I agree with both sides to a degree. The tendency to use monster truck wheels
on a shopping cart or vice-versa (because reinventing the wheel is considered
evil) makes software a mess, but so does throwing out years of debugging
subtle/rare conditions.

Code reuse, however, also tends to add tons of bloat, and potentially bugs
caused by things that aren't even relevant to the current usage. Not to
mention all the added complexity of having to add another layer on top to
translate the current terms into the terms the reused code understands, and
then override what it does and translate the output.

There are some cases where it would be preferable to preserve the embedded
domain knowledge, but in many cases the cost of 'reinventing the wheel' would
be less than adapting an unfit wheel.

------
rhacker
Pretty much any Electron app you download is probably tied to a backend and
will fail if its servers go offline. It's kinda sad. Electron apps stay usable
only as long as the company really does keep iterating forever.

Is the real problem that we're overemphasizing UI and React and UI frameworks
so much that we're now just really focused on building really pretty apps that
have little or no long-term usefulness? At this point it feels like entire
companies are just apps, and those companies make something that fills exactly
one need, like renting a broom from your neighbor. And it's really pretty, and
they had the best lawyers in the world thinking about all the potential legal
nightmares of neighbors renting brooms, so it really is a great app, should
you ever need to rent your neighbor's broom. Or you could just walk over to
your neighbor and borrow it.

~~~
dreamcompiler
Wait, what??

Electron apps contain a local web server. (That's part of the reason they're
such resource hogs.) In principle at least they work offline. Github's desktop
app, for example, works fine on a disconnected machine. You obviously cannot
push or pull, but you can do local git operations with the GUI.

~~~
svnpenn
Electron apps contain _an entire headless browser_. How anyone ever thought
this was a good idea blows my mind.

~~~
dreamcompiler
I actually think it would be a great idea iff they would use a webview, which
every OS provides nowadays. Apparently I'm not the first person who's thought
of this.

[https://blog.stevensanderson.com/2019/11/01/exploring-
lighte...](https://blog.stevensanderson.com/2019/11/01/exploring-lighter-
alternatives-to-electron-for-hosting-a-blazor-desktop-app/)

~~~
userbinator
That strategy of "web applications on the desktop" was actually pioneered over
20 years ago by Microsoft:

[https://en.wikipedia.org/wiki/HTML_Application](https://en.wikipedia.org/wiki/HTML_Application)

Even in the Windows world, they were never all that popular.

~~~
dreamcompiler
HTAs were not cross-platform; Electron is. Furthermore HTML5 and JS are much
more capable now than they were prior to Windows Vista.

That's why Electron is popular and HTA wasn't. But Electron is still the worst
possible example of the resource inefficiency of static linking.

------
scottrogowski
How closely can we link the rise of SaaS (and corresponding disposable
frameworks) to an effort to find a stable revenue model in a world that had
problematic levels of software piracy? I remember that when I was a teenager,
you might be looked down upon if you obtained your copy of Windows XP legally.

There are a lot of posts like this that bemoan the current state of [insert
social ill] but don't care to explore what structural forces contributed to
this state.

~~~
csharptwdec19
Actually a very salient point.

Still, I don't like the reality because it obscures the bad parts of where we
are.

I've seen a _slew_ of software licensing models in my travels. I hate where we
have wound up.

Some of my more-appreciated ones:

\- Not-Crappy-Hardware-Lock:

\- When I was in the RF/HFC drafting industry, a very entrenched program was
used for most signal level calculations. This company used a solution that
involved a parallel or USB device that took a vendor-provided Dallas one-wire
key. You paid $X for the software, and you got updates up to a certain date
with that key. They did have a way to 'upgrade', but I don't remember what
that involved, as we never did it. The system seemed fair; I'm guessing it was
a fair PITA to hack (remember, in this industry you have access to a decent
number of people with good signal analysis equipment) but also had a fair cost
standpoint.

\- It was nice that "who can use it" was just a matter of tossing a key
around.

\- Usage + Audit:

\- Same industry. Market forces led us to also have a product from a vendor
that competed with the biggest CAD vendor out there. This company essentially
had a multi-stage setup: the systems running the software would check in with
our locally-installed license server running on a box, and the license server
would periodically check in with the vendor's home-base server (uploading our
usage data and downloading some response, I'd assume). If there were any
issues with our usage, the vendor would call us and inquire as to what was up.

\- If your machine was not going to be able to check in to the license server
(i.e. field use; this was also before mobile hotspots were cheap, and not too
many corps had proper VPN setups) you could 'check out' a license and generate
a time-expiring key.

\- They were WAY more than fair. Our usage was balanced out based on some
algorithm on their end. I don't know quite how it worked, but for example: if
you had 3 licenses, didn't use any on Monday or Tuesday, used 2 on Wednesday,
but then had 4 people simultaneously using it on Thursday/Friday, _they
didn't care_.

\- When somebody who may have been me left the app up for a month _and_ had a
checked-out license for a laptop, while we were in crunch mode and others were
using the application, we eventually _did_ get a phone call. But even during
the conversation they discussed the amount of logged time, and it didn't
result in any penalties or fees.

\- We got updates as long as we paid a fairly low maintenance fee. If we
stopped paying, we could only use up to a certain version.

\- Seriously, maintainers of the DGN format, you guys were amazing about
licensing and just great to deal with.

\- Per-seat licenses:

\- Arguably the most fair.

\- You know what you're getting.

\- Can be done with either a central registration or reporting server.

\- Possibly more balanced towards the vendor than the above; definitely a
lower maintenance level.

\- JetBrains-style licensing:

\- Similar to Usage + Audit, but only the usage part.

\- Works well. I dig it.

\- But seriously, what is up with the awkward 2-factor setup?

\- SAAS:

\- How would you like to be charged today?

\- Vendor lock-in.

\- Change friction.

\- You don't own the hardware.

\- Offloading of liability.

Unfortunately, SAAS wins out because it's the best way to make money,
especially when one considers that _last_ bullet point. But also, when you
consider 'base' charges versus 'overage', as an industry we might be diving
head-first into the same malarkey that we have complained about in places like
this, or forums, or even usenet groups: the forced minimum pricing/bundling of
services.

What's ironic here (for me, anyway) is that cloud computing used to be
different; you used to just be able to buy a hosted server. Somehow we jumped
from that straight to the current model instead of the more consumer-friendly
concept of burstable VMs.

~~~
fnord77
SAAS is also the easiest to maintain, because at any one time there's
(usually) only one version of the software running on one OS & hardware
version. And the vendor is running it, so access to logs and core dumps is
effortless. It's a big problem getting logs, etc. from customers running on-
prem software.

------
crazygringo
I'm very confused. What does it matter, for running old software, whether a
framework was a fad or not? And what does a "brittle" dependency even mean in
the context of preserving software?

If you're trying to preserve software, you've got two options.

One, you can preserve the binary and run it with emulation. For webapps, this
means preserving a deployable image. Your faddish/brittle dependencies are
100% irrelevant, because your image includes them already, right?

Or two, you can preserve the source code. Which for _any_ project is going to
involve a mess of complicated tooling that dates to when the project was last
compiled. But if you're intending to preserve something for the long term,
you should preserve the dependencies too -- e.g. node modules or whatever. Or
else you're gonna have to track down the packages used on a certain historical
date, just like you might have to track down some random specific version of a
Borland compiler that ran on a certain version of an OS.

I don't really see what's all that different.
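The "preserve the dependencies too" step can be sketched in Python; `freeze()`
here is a hypothetical helper (not a library function), and a Node project
would vendor `node_modules` to the same effect:

```python
import importlib.metadata

# Snapshot the exact version of every installed package, so rebuilding
# the environment later doesn't require hunting down "the packages used
# on a certain historical date".
def freeze() -> list[str]:
    return sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in importlib.metadata.distributions()
    )

# Write the pins next to the source being preserved.
with open("requirements.lock", "w") as f:
    f.write("\n".join(freeze()) + "\n")
```

Checking that file (or, better, the package archives themselves) into the same
repository is what makes the source-preservation route reproducible.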

~~~
Polylactic_acid
My favourite iOS game from when I was a kid is no longer actively developed,
and as a result it was removed from the App Store because it doesn't work
properly on new devices.

It's perfectly understandable why this happens, but it's a shame there is no
way at all to play this game other than finding an old iPad that still has it
installed, whereas old Windows games still work fine.

~~~
social_quotient
Unrelated...

“My favourite iOS game from when I was a kid”

I’m 40 and maybe this is the first time I feel really old on HN. Cheers!

~~~
Tainnor
I'm 32 and even I felt old reading that. :D

------
zdw
Some people question when I've decided to only use a handful of dependencies
in a project - usually "You could do it so much faster if you used X, so why
not?"

The problem is that every dependency is a possible source of failure. For most
languages, you can do a whole lot if the standard library is well-written -
Python 3.6+ is particularly good for this, PHP if you don't mind API
strangeness, whereas Go without modules is less convenient, and outliers like
node.js are basically unusable.

Also, frequently you don't need the popular "everyone uses it" libraries. For
python, requests isn't needed anymore, as built-in urllib is quite good now
and lacks a lot of "smart" functionality that causes strange breakage. pytest
has more rough edges than built-in unittest. The only things that I use
regularly that legitimately add functionality are pyyaml and tox (only when
testing!).
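As an illustration of the urllib point, here's a minimal stdlib-only sketch of
the common case requests usually gets pulled in for (`get_json` is a
hypothetical helper, not part of any library):

```python
import json
import urllib.request

# A plain GET that decodes a JSON response, using only the standard
# library - no third-party HTTP client required.
def get_json(url: str, timeout: float = 10.0) -> dict:
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

No sessions, retries, or connection pooling, but for a script with one or two
HTTP calls that's often the point.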

~~~
classified
Funny that you should call node.js an outlier when it's all the rage these
days. Node does, however, go to show that short-term convenience trumps
reliability and long-term stability by a wide margin.

------
sjy
I wonder if the comment was intended to be mainly about the newer mobile OSes.
I don't think I'd have too much trouble finding a way to run 10-year-old
Windows or Linux desktop apps, and it would be easier than running Mac apps
from the 1990s, but I didn't even get the opportunity to save copies of the
iOS binaries I used 10 years ago. Even if I did, I doubt I'd be able to do
anything with them.

~~~
mylons
they're talking about web apps and mostly javascript.

~~~
emodendroket
Doesn't make much sense then. Your browser will run those.

~~~
Polylactic_acid
I have seen web apps that simply can't be deployed anymore because their Ruby
version includes a RubyGems version so old it can't use TLS 1.2. It's probably
a good thing and will force upgrades eventually, but old web apps can't always
be used.

~~~
mcguire
mod_perl 1. Oy.

Never say you can't rewrite a project. I've done it too many times.

------
systematical
Funny, I upgraded to Ubuntu 20 from 18 last night, and Ruby Sass wouldn't work
anymore. Why? Well, just read what the maintainer said...

"When Natalie and Hampton first created Sass in 2006, Ruby was the language at
the cutting edge of web development, the basis of their already-successful
Haml templating language, and the language they used most in their day-to-day
work."

The page continues...

"Since then, Node.js has become ubiquitous for frontend tooling while Ruby has
faded into the background."

[https://sass-lang.com/ruby-sass](https://sass-lang.com/ruby-sass)

So I installed the nodejs version. And in a few years I'll move over to the
next fad....

~~~
batsigner
consider not using ubuntu

~~~
JanisL
What would you suggest as a replacement?

~~~
batsigner
arch linux, artix linux

~~~
systematical
and, more importantly, why?

------
marcus_holmes
As commercial software developers, our job is to write profitable code.

Not bug-free, elegant, perfect code. Not buggy, unusable, dependency-riddled
unmaintainable code either.

It's an art getting that balance right. Not everyone does.

~~~
teddyh
Profitable, yes, but considering what time scale? Profitable for six months?
Profitable for two years? Five years? Ten to twenty?

~~~
bluejellybean
Ideally from day 1. The sooner you have the option of profitability (whether
or not the company decides to take profit or reinvest), the better your
chances of not wasting a ton of money. Even if the actual tech takes a year or
two to develop, your chances should be better if you have a paying customer
from day 1.

~~~
marcus_holmes
Depends on the business.

For a bootstrapping startup, yeah, day 1. It has to be. Otherwise you don't
eat.

For an Enterprise business with 6 or 7 figure revenues, a lot longer. Possibly
decades. If it gets political, never ;)

------
macjohnmcc
I worked on my previous project for 22 years (yes, 22, not a typo). It changed
a lot over the years, but there was still code that was recognizable from the
early days. At my new job I work on projects that are designed as minimum
viable products. Basically throwaway.

------
x87678r
I also feel old fashioned imperative programming is much easier to debug than
modern event-driven frameworks.

Technically unit tests should prove a system is modular and testable but I
just keep seeing huge dumpster fires.

I'm not sure if I'm just old or it really is a problem.

------
iamjohnsears
When I started out of college as a SWE at HBO in 2009, there was a ton of code
written in like 1999-2002 still in use. It was cool to see well-written and
documented code live on in real perpetuity. Also, most of the engineers who
wrote that code were still there! I think things have both changed a lot in
the last ten years (power of iteration), and also things in a stable,
profitable company are just a lot different than in a startup.

------
Const-me
I mostly use Windows. Most software written after 2005 just works.

Not everything, e.g. I have a flatbed scanner in the attic which requires me
to fire up a WinXP virtual machine (I use it a couple of times a year),
because there are no drivers for Win10. Many low-level OS-related utilities
are broken as well.

However, the majority of software works. For lulz, I once tried to run the
pinball binary from Windows NT4; it worked fine on modern Win10.

------
aninteger
I don't know... Can I run Netscape Navigator on Ubuntu 20.04? How about Gtk
1.2 programs or old Qt3 programs? Can I run Opera 12.x?

~~~
kikokikokiko
It's easily doable by running a virtual machine. If you have a compatible OS,
it will run out of the box. Now try to do that with an app that depends on one
of the cutting-edge development stacks of, say, 2011... Good luck telling me
how you can run it as easily as I can tell you how to run the last version of
Netscape, for instance: just run a Win98 VM and voila, you're set.

------
melindajb
Cleaning up unicorn poop has been an excellent business. Nice to see it will
continue for a while.

~~~
celim307
My buddy has built an entire career hopping from late-stage startup to
late-stage startup and refactoring rails into java.

~~~
phkahler
>> and refactoring rails into java

Oooh planning ahead for round 2.

------
shivekkhurana
I think some technologies, especially the ones related to the web, cause this
fatigue more than others.

It's probably because web devs build and switch stacks at a higher velocity.
I'm in my mid-twenties and have seen 4 monumental changes in frontend
development alone.

------
bschne
For an interesting vision of how software could be constructed to give users
more agency, ownership, and longevity without ditching the benefits of SaaS,
see Kleppmann et al.'s "Local-First Software: You Own Your Data, in Spite of
the Cloud" [1]. From a technical standpoint, anyway; what they don't really
get into is how subscriptions are just too damn attractive as a business
model. If anyone has interesting approaches on that front, I'd love to learn
more!

\- [https://www.inkandswitch.com/local-
first.html](https://www.inkandswitch.com/local-first.html)

~~~
ColinWright
Submitted and discussed at length a few weeks ago:

[https://news.ycombinator.com/item?id=24027663](https://news.ycombinator.com/item?id=24027663)

------
nix23
One comment states:

>Given full blueprints, you can reasonably rebuild any machine from 19-th
century from scratch, no matter how complicated. Even if you start from raw
materials.

Which is totally untrue: it wasn't the blueprint that was important but the
handcraft knowledge, and that is often forgotten. We even had to relearn many
details of how Apollo landed on the moon because of lost knowledge, and that
thing was pretty well documented.

Just one of many examples:

[https://www.sciencealert.com/why-2-000-year-old-roman-
concre...](https://www.sciencealert.com/why-2-000-year-old-roman-concrete-is-
so-much-better-than-what-we-produce-today)

------
WealthVsSurvive
It's because software design is guided by individualized, balkanized,
competitive, hierarchical human incentives, rather than public ones. One might
define modern computing as the formal, ever-evolving definition of all human
interaction and knowledge; software design produces extraneous branching and
churning of that definition. To make the definition properly abstract and
infinitely useful, branches must be deleted and unified over time. Branches
are useful in uncertainty, in areas where discoveries are still being made. In
settled areas, branches become impediments, Babel-tower formation begins, and
brittleness increases. It's the human-knowledge equivalent of Keynesian
hole-filling, but the price one pays is software duplication, increasing
computational complexity, and increasingly less interoperable systems for like
things. Sometimes lightning strikes and we have to build all the rooms again,
but that happens from the bottom of the tower up. I think the role of the
State should be to unify these conditions over time, so as not to create
extraneous noise, confusion, human churn and suffering, and material resource
waste.

------
manmal
People are using lots of external dependencies (which expire over time) for a
reason. They allow a single software developer to build a social network of
very high quality, or hardware that would have taken a whole department to
build 20 years ago. The pace is so fast that the system vendors can't keep up
with providing all this functionality in their native frameworks, which IMO is
the primary problem here.

------
Ericson2314
[https://github.com/nixos/nixpkgs/](https://github.com/nixos/nixpkgs/)
Everything I care about, fully reproducible.

I don't mean to "hawk my wares" in what feels like many threads, but
seriously, this is a solved problem, despite the continued mess-making of
people that don't use it.

------
teknopurge
2013-2019 had a lot of "re-discovering the wheel", especially wrt programming
languages and frameworks (read: JS).

------
dusted
I blame the consumers. They're being lazy and shortsighted. Computer programs
where half of the program runs on some remote server and the graphical
frontend is a web page running in a browser are plain stupid for all reasons
except "easy to update" and "easy to control".

I absolutely refuse to use "web applications" for anything productive. I run
my computer programs on my computer, and if they don't work without an
Internet connection, I'm not using them either.

My Nintendo Wii has never been online, it's only been used with physical
media, so I know it will continue to work as long as the hardware does, and
when that no longer works, the software on those DVDs still work. The same
goes for my Playstation 1,2 and 3, and my xbox and xbox 360.

~~~
tarsinge
How can the blame be on the consumer? They don’t even have a clue what a front
end is. Just look at Adobe customers to see whether they are happy with the
cloud for their apps.

~~~
dusted
I blame the consumer because we all vote with our money. Sure, technical
proficiency in mainstream users has probably fallen somewhat since the 80s,
but people should care about the huge difference between buying a product for
their computer, or merely renting access to a product. They should understand
that they're giving up the choice to stay with a version that works for them
and their computer, one that won't force them to upgrade hardware or other
software to keep going.

------
pmarreck
Just wait until all these SPAs stop working without constant maintenance.

Data: Always readable, assuming it sticks to some discernible format.

Data generated by code: Inherits both disadvantages: the changing of data
formats AND the ephemeral nature of old code.

------
maerF0x0
I expect that somewhere, someday, there will be a company that focuses on
buying failing SaaS products, using their code as an asset to quickstart a
real market solution.

kind of like buying a bankrupt generator company as the basis of starting a
car company. You need a liquid fuel engine design anyways...

(Edit: don't get pedantic about the analogy's shortcomings)

------
ricksharp
Perhaps the issue is that it is far cheaper and easier to start from scratch
and create a small app these days.

As we innovate exponentially faster and have better tools and languages, it is
far easier to tear the building down and start over.

Also, I don’t know of a single app I use regularly that is older than a few
years.

~~~
qes
> I don’t know of a single app I use regularly that is older than a few years.

What did you use to visit this website? Check your email? Open a spreadsheet
last? If you're a software developer, what do you edit code in? Your operating
system?

Hard to believe you're using apps less than a few years old for any of those,
much less all of them - or even apps that are older in name but have been
through a Big Rewrite in the past few years.

~~~
iso1210
Firefox - July 2020

Mutt - July 2020

Gnumeric - August 2020

Vim - December 2019

OS - Ubuntu, April 2020

They might not have had big rewrites, but they are still under active
development (I actually run slightly older versions, but nothing older than 5
years)

The code I use which isn't under active development is stuff I wrote years
ago, but it's showing its age -- one program uses Flash to deliver video, for
example. The last update was a specific view for a BlackBerry, which gives an
indication of its age. That view didn't support video as we didn't have 3G in
the fleet.

------
thethethethe
I don’t see why this is a problem; I actually think it’s a feature. Do you
really want to be using 20-year-old software? I don’t. Furthermore, not being
able to run old code has never impacted me in a meaningful way, and I’m part
of the small minority of people who write software for a living. I reckon if
you go ask pretty much anyone out on the street about this “problem”, they
wouldn’t even care.

I think it is fair to say that unchanging software systems can be harmful. The
unemployment benefits system in the United States is a good example. They
built those systems in the 70s and never touched them again. Now that the
requirements have changed due to the Covid-induced economic collapse and the
systems are overloaded, there isn’t a single person who has a damn idea how to
fix it. If there were people iterating on and improving the system this entire
time, things would be much easier to fix, or maybe there wouldn’t have been a
problem at all.

Data is forever, not code. Code should be fickle. Changing code is progress.

~~~
AnimalMuppet
Changing code _may_ be progress. Replacing working code with code that works
less well is definitely not progress, though.

The problem is that working old code often has far more corner cases and bug
fixes in it than you think. It's easy to "replace" it with something that is
buzzword-compliant and that uses the new shiny, but that doesn't cover all the
cases and doesn't have all the needed bug fixes. That's _not_ a step forward.

~~~
thethethethe
> It's easy to "replace" it with something that is buzzword-compliant and that
> uses the new shiny, but that doesn't cover all the cases and doesn't have
> all the needed bug fixes. That's not a step forward.

This is a false equivalence, imo. If you build a replacement system that is
buggy and sucks, you did a bad job; it has nothing to do with the tools that
you are using. This seems like more of a criticism of management to me

~~~
MattGaiser
> If you build a replacement system that is buggy and sucks, you did a bad job

Or you just didn't know about all the edge cases, niche ways of using it, and
the regulation on page 475 of some arcane law code that requires that
something be so.

~~~
thethethethe
> Or you just didn't know about all the edge cases, niche ways of using it

Sure but this goes both ways. Developers cannot know all future requirements
when writing the software. SaaS models largely alleviated this problem because
people are continuously working behind the scenes adding new features and
fixing bugs

------
domano
A bit oversimplified, in my opinion. Sure, SaaS only works as long as it is
online, but with reproducible builds and versioned dependencies most software
can still be built for a very long time. So I think for OSS the argument is
not really valid.
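
Versioned, checksummed inputs are the core of that argument. Here is a
minimal lockfile-style sketch in shell (the file names are hypothetical):
record a hash of every vendored dependency at release time, and verify the
copies still match before any future rebuild.

```shell
# Snapshot the exact build inputs (a stand-in file plays the dependency).
mkdir -p deps
echo 'pretend source of libfoo 1.2.3' > deps/libfoo-1.2.3.tar.gz
sha256sum deps/libfoo-1.2.3.tar.gz > deps.lock   # the "lockfile"

# Years later: check the vendored copies are byte-identical before building.
sha256sum -c deps.lock && echo "build inputs intact"
```

Real tools (Cargo.lock, Gemfile.lock, Nix) do the same thing with more
bookkeeping: the point is that a build pinned to exact, hash-verified inputs
can be repeated long after the upstream hosting has changed or vanished.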

------
wscott
The graph of software dependencies pulled from various sources is fragile, but
that is exactly the sort of issue IPFS was designed to solve.

If something like node.js stored all sources in IPFS, then the data would
remain available as long as someone still builds the software.
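
The mechanism behind IPFS, content addressing, can be sketched in a few lines
of shell (the package name is hypothetical): the hash of the archive becomes
its permanent name, so any copy anywhere can serve it, and retrieval verifies
integrity for free.

```shell
# A stand-in source archive takes the place of a real npm tarball.
mkdir -p store
echo 'left-pad source, v1.0.0' > left-pad-1.0.0.tar

# The content hash IS the address: independent of any hosting location.
addr=$(sha256sum left-pad-1.0.0.tar | cut -d' ' -f1)
cp left-pad-1.0.0.tar "store/$addr"

# Fetching re-hashes the bytes and compares against the requested address.
sha256sum "store/$addr" | grep -q "^$addr" && echo "verified"
```

Because the name commits to the bytes, a mirror cannot silently serve a
modified copy; a registry outage only matters if no one at all still holds
the content.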

------
agentdrtran
I really, really doubt I can run most 1980s software on a modern machine
without effort.

~~~
Spooky23
Folks I work with support some Clipper apps written in the late 80s.

It’s getting harder now with 64-bit Windows not supporting 16-bit apps -- they
are living on a “legacy” Linux VM tenant in Azure now, and will probably
continue until around 2030. Possibly longer, as they are on emulated DOS now!

~~~
jandrese
DOSBox has been something of a godsend for getting a lot of 1980s software
working again. Not so great if you also need to interface with proprietary ISA
cards, but lots of software can be brought back to life.

~~~
msla
Not that this is relevant to DOS software, which needs whole-system
heavyweight emulation right from the start these days, but...

It would be interesting to have a solution which allowed you to progressively
wrap software to allow it to run in just as much isolation as it needs, from
the ultra-lightweight scripts that just futz with LD_PRELOAD (when most of the
environment is still there) to actual Docker images (when the OS is still
there) to images to run in a hypervisor (when the _hardware_ is still there)
to full-on heavyweight system emulation like SIMH, for when even the hardware
has gone.

It's all very Tao:

When the userspace libraries have gone, there is LD_PRELOAD

When the filesystem hierarchy is gone, there is Docker

When the OS is gone, there is Xen

When the hardware is gone, there's SIMH. Hi, SIMH!

[https://www.youtube.com/watch?v=Vkfpi2H8tOE](https://www.youtube.com/watch?v=Vkfpi2H8tOE)
Laurie Anderson — O Superman
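
The lightest rung of that ladder can be made concrete. The following is a toy
sketch (both programs are hypothetical, not any particular old binary): a
stand-in app asks libc for the current time, and an LD_PRELOAD shim pins the
answer to the year 2000 -- the kind of trick that revives software which
misbehaves when run past its era, while the rest of the environment is still
intact.

```shell
# app.c: stands in for an old dynamically linked binary that calls time().
cat > app.c <<'EOF'
#include <stdio.h>
#include <time.h>
int main(void) {
    printf("%ld\n", (long)time(NULL));
    return 0;
}
EOF
gcc -o app app.c

# shim.c: interposes libc's time() and always reports 2000-01-01 00:00 UTC.
cat > shim.c <<'EOF'
#include <time.h>
time_t time(time_t *t) {
    time_t fixed = 946684800;  /* 2000-01-01 00:00:00 UTC */
    if (t) *t = fixed;
    return fixed;
}
EOF
gcc -shared -fPIC -o shim.so shim.c

# The dynamic linker resolves time() to the shim before libc is consulted.
LD_PRELOAD="$PWD/shim.so" ./app
```

This only works while the binary is dynamically linked and the surrounding
userspace still exists; once that is gone, the heavier rungs (containers,
hypervisors, full emulation) take over.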

------
DangitBobby
As long as you vendored your dependencies, it all should actually still run
fine. Bitrot is not a new phenomenon. But it is incredibly frustrating.

~~~
Polylactic_acid
Most languages do this pretty well. The Ruby and Rust package managers store
all past versions of packages, so it's trivial to get the same setup again.
It's C where the problems come from. Finding the right packages to build
something is quite a task, since every distro calls them different things, has
different versions, and only keeps one version at a time.

~~~
zelly
To be fair, you could track down the git repositories or tarballs and compile
and install the right versions. If the author used autotools, as is standard,
the version number should be listed.

It's not ideal of course. I wish it were standard to build all dependencies
from source and statically link a big binary like Go does.

------
leafmeal
Why 2005?

~~~
pdw
That's about the start of the Ruby on Rails hype.

~~~
Polylactic_acid
If you are trying to run a Rails app from 2005, your biggest issue is the
trillion security holes in your app, not getting it running, which is usually
not too hard.

~~~
nirse
That is absolutely true, and probably equally true for any other web
application that wasn't maintained for 10-15 years. But this article is
talking about the ability to run older applications, not about safety
concerns about doing so. I just wanted to point out that, while Rails is far
from perfect, from a software-archaeological perspective there's not really
much ground to pick on it. The Ruby gem ecosystem (again, not perfect) is at
least pretty stable, in the sense that 10+ year old libraries are readily
accessible (as they should be!).

------
x87678r
Surely this has a lot to do with browsers and JS frameworks always changing.
Everything non-browser-related is probably fine.

~~~
Falkon1313
A lot of desktop software has in recent years moved to assume an always-on,
high-bandwidth internet connection and/or moved to a subscription model.

------
tomerbd
"The developer guide to handling agile" handbook could help if it existed.

------
jancsika
What was it like setting up CI in the 80s or 90s "without too much hassle?"

~~~
Falkon1313
You typed the compile/build command and it compiled/built. Then you had an
executable you could test and/or deploy. Or you got errors, which you resolved
until it did compile.

~~~
jancsika
In other words, you had one physical machine per platform, and at least one
developer manually running each build on each platform. Or perhaps one
developer per platform, if you've got an even bigger budget.

I'm just imagining a quasi-Luddite hacker collective that eschews automation
and does their own builds directly on the physical hardware to keep themselves
"in touch with the hardware." That actually sounds pretty fun and social. Plus
I wonder what the effect would be on catching regressions/issues before
release compared to modern CI.

------
RunawayGalaxy
It's more like fossil remains from the Mesozoic than litter.

------
daxfohl
To further the analogy, blockchain is the reusable metal straw that costs 100x
the energy to produce, lasts forever, but mostly gets shown off a couple of
times before being thrown away anyway.

------
jimmaswell
For that to be true, there would have to be no actual domestic crisis: poor
countries with lax oversight responsible for 99% of the damage, but
politicians passing laws making our lives worse at home because it makes
uninformed people feel good.

I'm stocking up on plastic straws and bags while I can.

