The beauty of finished software (josem.co)
203 points by josem on Oct 31, 2023 | 152 comments



>Let me introduce you to WordStar 4.0, a popular word processor from the early 80s. [...] I love how he puts it: It does everything I want a word processing program to do and it doesn't do anything else. [...] remember that sometimes, the best software is the one that doesn’t change at all.

If we take your WordStar 4.0 example literally, it's not a good example of the concept I think you're trying to convey. WordStar did change for 7+ versions after 4.0: https://en.wikipedia.org/wiki/WordStar#Version_list

WordStar didn't freeze at 4.0. Instead, what happened was that George R. R. Martin's idiosyncratic usage of word processing froze at WS 4.0. That's a very different concept.

If we define "finished software" as determined by the end user instead of the developer, then any software of any version that doesn't force auto-updates (e.g. not Chrome, not Skype) can also be "finished" if the particular user is satisfied with that old version. E.g., in that mental framework, Adobe CS6 from 2012 and Windows 95 can also be "finished software" because the user doesn't need/want Adobe CC 2023 and Windows 11.

Maybe a better example of "finished software" from the developer's perspective would be a one-off game where the programmer never had intentions of making franchise sequels out of it.


> any software of any version that doesn't force auto-updates (e.g. not Chrome, not Skype) can also be "finished" if the particular user is satisfied with that old version

That’s obviously not true. WordStar 4’s output is interoperable with newer software, it runs on today’s hardware with a little work, and a user won’t be under constant threat of getting hacked because they’re running an old version. It is safely “finished” in a lot of ways the other software Martin was using when he settled on WordStar is not.


Having written printer drivers for MS-DOS, I doubt WordStar will work properly with modern printers.


I have a few thoughts on how I'd try to make it work, but it does look like WordStar 5, which came a year later, would be an easier one to try, with its Postscript support.


Maybe, except that it would require passing that Postscript file to another application, or getting a laser printer still able to plug into MS-DOS infrastructure or FreeDOS.

Anything that requires moving the Postscript file via other applications or operating systems is working around the problem of WordStar not really working on modern infrastructure.


I think you’d probably use vDos (or something similar) today if you tried to run old WordStar on modern hardware. I gather there are some features available that help with printing.

I actually used Borland Sprint in Windows at one time during the 32 bit era in the manner you described, “passing that Postscript file into another application,” and it worked great. That was a DOS word processor that invited a little hacking, as I recall.


Yeah, but all of those are ways to work around the fact that it doesn't actually work without issues, and some clever ideas are required to use it like in its MS-DOS heyday.


Or some older hardware, yeah, which is incredibly easy to keep running because of how much of it was made. But the original question was about whether the software was "finished." It wouldn't be worth the effort if it wasn't in some meaningful way "finished."


> any software of any version that doesn't force auto-updates (e.g. not Chrome, not Skype) can also be "finished" if the particular user is satisfied with that old version.

I suppose. But the issue is that software releases used to have defined points where it was complete. With modern software, there is often no such point at all. Every "release" is just a snapshot of a continuous development stream, and is effectively a beta. There's a real difference between the two approaches. The latter is better for the devs, the former for the users.


A release is a release, regardless of the frequency. Quality control or lack thereof is an orthogonal issue.

Don't presume to know what users want. Early adopters often prefer to get new features as soon as possible even if that means working around defects and usability problems. Conversely, some enterprise software vendors still release on a very slow cadence because their customers don't want to do frequent acceptance testing or user retraining.


> A release is a release, regardless of the frequency. Quality control or lack thereof is an orthogonal issue.

I'm not sure it's possible to separate quality from release schedule, when comparing to the old ways.

Back in the distant, ancient past, software had to reach a level of quality where it could be released by writing it to disks (or CDs, or engraving it into stone tablets) then sending it to stores, no patches possible.

Meaning there was Office 95 and it'd better be stable and reliable because the next release will be Office 97. Which will be a paid upgrade, so it'd better have enough new features that users are willing to pay.

This meant that there was no such thing as a bug or performance issue that could be dismissed with "we'll do it later" - it either got fixed before the release, or never. Modern software development techniques have eliminated any such deadlines.

And because in the past a company could be completely wiped out by a bad release, there were dedicated testing teams and suchlike, as by the time users started reporting bugs and complaining, it was too late. Good luck finding such a team these days!


Of course it's possible. I've done it. This requires a pretty high level of discipline and near 100% automated test coverage. Test scripts have to be written concurrently with product code, or at least prior to merging any code to the release branch.
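As a concrete illustration, here is a minimal sketch of the kind of check that ships alongside every change under that discipline; the function under test (clamp) and the checks are made-up examples, not from any particular codebase:

    #include <assert.h>
    #include <stdio.h>

    /* hypothetical product code: clamp a value into [lo, hi] */
    static int clamp(int v, int lo, int hi)
    {
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

    int main(void)
    {
        /* written concurrently with clamp(); runs before any merge
           to the release branch */
        assert(clamp(5, 0, 10) == 5);   /* in range: unchanged */
        assert(clamp(-3, 0, 10) == 0);  /* below range: clamped to lo */
        assert(clamp(42, 0, 10) == 10); /* above range: clamped to hi */
        puts("all tests passed");
        return 0;
    }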

I don't believe that software was actually higher quality in the distant past. That seems like selective memory. I've been in the industry long enough to have used a lot of products that shipped on physical media. There were many defects that we just lived with. Vendors frequently issued patches or dot releases on additional discs to fix defects.


I'm also an old-timer, from slightly before personal computers were a thing.

> There were many defects that we just lived with.

There were, indeed! What the most common defects look like is a bit different now than then, but I don't think software now is generally any less buggy than software then. The main difference is that now software is increasingly unreliable in the specific sense that it updates so frequently. Very frequent updates mean that things are constantly in flux. In my view, that alone is a decline in software quality.


Not just 100% unit test coverage but e2e integration tests too. My company really struggles with the latter because they take forever to run and they are flakey as heck. Most of our bugs are from badly mocked services.


> Don't presume to know what users want.

Fair. I should have said "me as a dev" and "me as a user".


2048 comes to mind as an example of such finished software


I played it and hacked it to 4096 with a 5x5 layout so I could play longer. :)

EDIT: I think I took this project:

https://github.com/gabrielecirulli/2048

and modified it


As someone who gets paid to find ambiguity in language, as opposed to being paid to write broken software, IMO the term "finished software" does not leave much room for competing interpretations. It seems quite clear.

As a non-graphical software user, I have been using finished software for many years and I actually prefer it to the broken on arrival variety that has become increasingly common today. The type that can be a Trojan Horse for pre-approved remote access in the form of "software updates".


The fact that new versions with new features come out doesn't mean they're improvements on the previous ones. At some point it's just bloat that slows down the core functionality.


even "ls" and "cd" are still getting updates:

https://github.com/coreutils/coreutils/blame/master/src/ls.c


Let me explain. The clearly superior definition of finished software is the business one: software is finished when the cost of additional development exceeds any projected additional profits.


Finished Software is a beautiful thing, and personally I use a bunch of software I wrote back in the 90's and after.

BUT (and it's a fairly big But) software does not work in a vacuum. If the software does any interactions with other software, think any kind of networking like email, ftp or whatever, think printing, and so on, then you should expect it to stop working (in those areas at least) at some point.

All network protocols get tweaked from time to time - TLS code changes - Authentication schemes change, I could go on but you get the point.

I've lost count of the number of people who complain to me "but it worked yesterday" - and sure it did - BUT the "world changed and you didn't keep up."

I love finished software - but not all (categories of) software will run forever. Once there is other software involved, then sooner or later it'll likely "stop".


For the reason you mentioned, there is no such thing as finished software, because no software runs forever without maintenance: nothing works in a vacuum. You're either at the mercy of an API, an operating system, a BIOS, etc.

The amusing thing is that for game development, the most stable target platform is old proprietary video game consoles like the Genesis and Neo Geo simply because there are dozens of emulators for them that run on hundreds of platforms. "Write once, run anywhere" is more true if you program a new Genesis game than make a native cross platform PC game.

Make a PC game, and there's a decent chance it'll stop working after a couple of decades (most Win95 games are already dodgy). People will be making Neo Geo emulators after you're dead. If you make a Neo Geo game, it's going to run on virtually every computer for a very long time.

The real power of it is that it gives you the ability to latch onto the fame of (and interest in) a popular platform. Even if you make an open source game, the problem is that if you didn't program Doom, there's a good chance that there isn't going to be enough interest in it for large groups of people to maintain it over long periods of time.

All things will break in time, though. Eventually, people will probably lose interest in maintaining emulators for arcane systems like that, and your software will stop working then.


You can spin up Windows 95 and its games today!

But, for a long while now, the forecast has been looking increasingly grim for emulation and software preservation.

That said, if you're targeting the performance envelope of 1990s games, there are options without resorting to anything so esoteric as emulated consoles.


Sure, nothing runs "forever", but on Windows it's not uncommon to use utilities compiled for Windows 95 or so. Basically anything 32-bit will still run. I use a bunch of command-line and Windows-based tools from that era in my daily workflow. (Hello GREP.)

Once it's interacting with more than just Windows though then things get less rosy.


That's more of an argument for decoupling though. Why can't I have a local, permanently updated "fetch mail via IMAP" tool and then a mostly dumb GUI mail client that does not care about cipher changes for 10 years?

If someone's about to comment with fetchmail, shoo :P

I've never seen that work properly on Windows; it's not end-user friendly, and I'm not sure how to put "decoupling" better. All I'm saying is that, for how email works at least, no, Thunderbird wouldn't necessarily need the means to talk to servers; something local via socket/named pipe/filesystem might be enough. But this would mean more of a paradigm shift, and yes, it would maybe also be bad for end-to-end crypto (depends how you define the end; it's still your machine...)
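To sketch what the "dumb" half of that split could look like: the client below never talks to a server; it only reads whatever files a separately maintained fetcher (the part that keeps up with TLS and auth changes) has dropped into a local spool directory. The directory name and one-message-per-file layout are assumptions for illustration, not any real client's format:

    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        const char *spool = "mailspool"; /* hypothetical spool directory */
        DIR *d = opendir(spool);
        if (d == NULL) {
            perror("opendir");
            return 1;
        }
        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue; /* skip ".", ".." and hidden files */
            printf("new message file: %s/%s\n", spool, e->d_name);
        }
        closedir(d);
        return 0;
    }

A client like this could stay byte-for-byte identical for a decade; only the fetcher on the other side of the directory would ever need updates.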


Check mbsync + msmtp + Mutt.


Not only that, imagine you release something for some Mac, and it goes from PowerPC to Intel; now you adapt your OpenGL code in OSX; now OSX doesn't work in 32-bit; now you need to use Cocoa, Quartz, UIKit or whatever. Hey, we now named it macOS, what about moving your OpenGL to Metal? And we are not x86 now, we are Apple Silicon. This is just one platform; there are several that changed and required adaptation in the software. Some also died, so if you produced great software for Silicon Graphics, Sparc, Amiga or whatever, well, you will have to update the software to port it over to somewhere else too.


Unpopular opinion, but software should not have to keep up with this treadmill. Especially since the treadmill is a deliberate choice the OS vendor is making, and not some fundamental attribute of software. Bits don't tend to simply rot. The foundation they are planted on deliberately gets broken. The vendor could choose to allow backwards compatibility, but often chooses not to.

Sorry, but when I write a program in 2010, I kind of do expect it to still work on the exact same hardware in 2020. I don't think that's an unreasonable expectation.


> The vendor could choose to allow backwards compatibility, but often chooses not to.

This too entails a tradeoff: either the old software sits around in the system collecting defects, or it ends up supported indefinitely and the amount of work required to cut a new OS release increases without bound.

> Sorry, but when I write a program in 2010, I kind of do expect it to still work on the exact same hardware in 2020. I don't think that's an unreasonable expectation.

Unreasonable compared to what? In the abstract it’s not unreasonable but computers are quite an unreasonable environment to begin with. Their capabilities change dramatically every few years, and computers on the internet especially have to withstand essentially constant attempts at intrusion and abuse.


Apple is a particularly bad example of this kind of churn. They have an iron grip on the entire ecosystem and can get away with it, but you have to wonder at what cost.


I believe they lose the goodwill of their users over time. At some point they get pissed off and move on to a cheaper, more stable option. I am a historical Mac user (started using System 9) and I am disillusioned about Apple. They change things too much, too often, for what are generally very little benefits at a tremendous cost.

Everybody is over the moon about Apple Silicon, but so far what I see is that we completely lost Windows/other-system capabilities, got GPUs even more uncompetitive than they used to be (if losing Windows compatibility wasn't enough, they had to make sure you couldn't play a single game properly), and prices have skyrocketed. For what? Better battery life, and a lot of marketing on how things are supposedly faster, when outside of video processing they are definitely not if you make an equivalent-pricing comparison. Power is cheap and ubiquitous; mobile high-performance computing makes no sense at all; external batteries are cheap if they are really needed.

And now you have to deal with annoying software incompatibility, updates of all kinds and re-purchasing stuff that was working just fine before. Some software didn't even get updated and will never because companies have decided it wasn't worth the trouble (they are right).

Sometimes I see people complain about Windows updates and whatnot. Well, compared to Apple shenanigans, I really do not mind them. It's like they don't even register on the annoyance scale compared to Apple bullshit.


Obviously at the cost of having a weaker gaming ecosystem than Linux.

They showed off the powerful GPU of their new M3 yesterday and chose... Myst. Now that is almost comical.


> If the software does any interactions with other software, think any kind of networking like email, ftp or whatever, think printing, and so on, then you should expect it to stop working (in those areas at least) at some point.

This is the beauty of line-delineated (ha) text-based pipes. I wonder if some of those unix tools would be considered in any sense 'finished'.


Text based pipes only work well for a very limited set of use cases. In the real world we often have to deal with binary data, objects, and structured documents.


When I use a TLS forward proxy, networking software from the 90's works fine. For example, I still use original netcat from 1996 every day, as well as tcpclient from 2000.

Personally, I actually trust the "TLS code" in the proxy, e.g., to be up-to-date and correct, more than I trust the "TLS code" in today's "apps". It's easier to update the proxy, a single program, at the user's discretion, than to worry about many different applications, all expected to be free of mistakes and to update prudently themselves.

With the proxy, it's open source so I can do a diff with the version I am using before deciding whether to compile and use a new version.


I think this is a big part of why some people complain about programming languages and environments seeming to tend towards shorter periods of backward compatibility.

It's frustrating when a piece of software or a library has been functionally "finished" for a long time but still periodically needs maintenance and updating to accommodate changes that really have nothing to do with the purpose of the program itself. This is probably the one aspect of the Python 2->3 complaint that I sympathize with.


Dropbox and Evernote could've been perfect finished software in 2012.

"Finished software" doesn't mean that developers will spend 0 time on it.

"Finished software" is in the sense of no major user-facing changes (i.e., new product features, experimental features...). To ensure "finished software" continues to run in the coming years or even decades, developers may need to keep up with operating environment changes (e.g., OS, hardware...) & continue to upgrade infra software / libraries for bug/security fixes & performance improvements.

Imagine that a tech company lays off all software engineers except for a few devops.


Would probably be a smart move for a lot of companies. If it's profitable it's profitable. Milk it for as much as possible by cutting costs, not adding useless features.


What this is talking about is tools.

If I have a 17mm combination wrench, that will work exactly the same way indefinitely. Nobody, hopefully, will upgrade "17mm bolt" to "17mm bolt v2.0" that is actually 17.1mm and make my wrench not fit.

Old software that does a specific thing, simply and standalone, can be a tool. Modern software, with a mindboggling variety of external dependencies, takes real effort to maintain in the "tool" model as dependencies change or retire.

Take, for example, an old video game that was fully debugged and released on physical media: it can still be played (without even a physical machine, on archive.org) the same way. Versus a no-longer-maintained but treasured Android app or device, which falls behind and becomes unusable. Example: an old Android tablet that the kids use was able to play YouTube videos. Then, somehow, it updated itself to a new YouTube version that's no longer compatible with it, and now it can't, because Chrome has the same problem, and the native browser on there is too old to play web YouTube. Yet the device is still sound, still holds a good battery charge and so on.


So you have a 17mm wrench. It's awesome, feels great in the hand, solid on the nuts/bolts. It's a keeper and a winner.

Then one day you need a 17mm socket wrench.

A year later, you find yourself needing a 17mm deep socket wrench.

The year after that, you've got a scenario where the deep socket wrench would work, but requires a cheater bar, and there's no space, so you need a drive adapter to connect it to an impact wrench.

The following year, you start working on a vehicle where torque matters, and you need all of the above tools to work with a torque wrench.

6 months later, you realize you got a torque wrench that only goes up to 80 (units-of-torque) and now you need 100.

----

The difference with physical tools is that nobody will raise an eyebrow at you having all these variants of your 17mm wrench. By contrast, having all these variants of (to use TFA's example) a word processor would seem quite odd.


I'm not clear what point you're trying to make here. The original point is describing the interface between the 17mm tool and the 17mm bolt, not all of the other tools that could possibly interface with a 17mm bolt.

Your other examples are a bit weird, because you're changing it from a wrench to a socket wrench. The interface is quite different:

Wrench -> Bolt

Driver -> Socket -> Bolt

Not to mention, all of your other examples illustrate that there's a perfectly fine interface between the 17mm socket and all of the various ways to drive it. The 17mm socket is complete software.


I don't agree with this characterization. The GP said "that will work exactly the same way indefinitely." which is correct, but doesn't capture the many subtleties of where and how you need to interact with a 17mm nut or bolt.

The owner of a 17mm combo wrench starts out with what appears to be the perfect tool for the job, but then comes to understand that the scope of interacting with a 17mm nut or bolt is wider than originally understood. They end up with a toolbox that is much more complex when it comes to "interacting with 17mm nuts and bolts" than they originally expected.

And so it goes with software too, not always, but extremely often.


yeah!

that one time my toilet stayed the same for 5 WHOLE years, but you know, there are totally toilets with bidets, toilets that are motion sensored, tall toilets, short toilets, toilets meant to squat over, ad nauseam.

so for this reason it's totally ok for YOUR toilet to need to be replaced every few months! yeah, that conclusion absolutely follows from the premise.

or not, and maybe the other poster's point is that there being other needs and variations doesn't mean a specific tool for a specific need must always change.


I'm the lead author of a cross-platform digital audio workstation. We have a saying: every user needs just 20 features, but the problem is that all but 2 of the features are unique to that user. Obviously it bottoms out at some point, but 100 users means 1800 features, and 100 more users means adding at least another 1500. And people ask why "a specific tool for a specific need must always change" - it's because we are serving a constantly moving user target. [EDIT: the numbers are just BS, but the principle seems to be real-world]

More than that, most users of most sophisticated creation software have their own evolving needs, unlike GRR Martin and his word processing requirements. The program they needed last year, before their understanding of their own process and their own aesthetic goals expanded, isn't the program they need this year.


You choose to do that; the point of the article is that you don't have to, and there's something to be said for that.


At least the toilet:plumbing interface is standardized though!!


>If I have a 17mm combination wrench, that will work exactly the same way indefinitely.

I had to transition my tools from imperial to metric in the early eighties.


Even that happens. Consider for example the shit-show that is JIS vs Phillips vs Pozidriv for plus-shaped screw heads.


>Take, for example an old video game that was fully debugged and released on physical media

A bug free game? I wish.


I deeply crave the ability to create finished software like the author talks about, but he skips over a major cause of never-ending updates: business model.

The web and mobile both enable a cycle of extreme optimization where ad-based or subscription-based models are continually squeezed for every last dollar of possible value. This is why if you’ve worked at a tech company that is thriving, a ton of your work just goes poof over time.

The best modern pieces of finished software I use are either free, donation-based, or a one-time paid price. My favorite modern app like this is a little utility called Simple Pacer, which solves distance, pace, or time calculations for runners. https://apps.apple.com/us/app/simple-pacer/id1273474255 It’s perfect and the version history on iOS is sparse. I’d bet it makes only a little money though.
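The arithmetic an app like that solves is tiny, which is part of why it can be finished; a rough sketch of the idea (the units, names, and helper functions here are illustrative assumptions, not taken from Simple Pacer itself):

    #include <stdio.h>

    /* pace (min/km) = time (min) / distance (km) */
    static double pace_from(double time_min, double distance_km)
    {
        return time_min / distance_km;
    }

    /* time (min) = pace (min/km) * distance (km) */
    static double time_from(double pace_min_per_km, double distance_km)
    {
        return pace_min_per_km * distance_km;
    }

    int main(void)
    {
        /* e.g. a 10 km run finished in 52 minutes */
        double pace = pace_from(52.0, 10.0);
        printf("pace: %.2f min/km\n", pace);                    /* 5.20 */
        printf("5 km at that pace: %.1f min\n", time_from(pace, 5.0));
        return 0;
    }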


Even better than finished software is modifiable software;

open-source-available minimalist zero-dependency projects that you can fork, fix and compile in 5 minutes.


Although open source and communities are fantastic things, there is just something about "finished" software that hasn't been modified in decades.

Sometimes I play old PC games with Wine that are the exact same executable they were 20 years ago.

It is hard to describe but it feels strangely calming, you just know everything is still the same as it always was.


I'm just as OCD about changes as the next competent programmer, but when it comes to, say, StarCraft, which is a beautiful one-person creation (like most software that reaches a large audience for a long time), it now makes me uneasy as the ground is shifting fast underneath it.

Emulating x86 on ARM/RISC-V is an abstraction tree I don't want to climb, for example.

Wine hides that power consumption.

It's a false sense of security.


Why not? Emulation is a beautiful thing. It makes one realize that CPU instructions are just another form of abstraction. Providing the old APIs/ecosystem is usually harder than just emulating the binary.

And running old software on modern hardware often uses only a fraction of the power the old hardware used to need. If one cares about power consumption, modern software has become so much worse.


Well, if you include the power it took to get to the modern hardware: no; then you are indebted forever by running StarCraft under emulation.

We often discount progress as obvious because we never paid the price of photosynthesis for millions of years before our tiny part in existence.

My problem is specifically how we got to the consensus we have today: we were too slow. The 6502 (8-bit) freed humans in 1982-85; the 68000 (16-bit) and ARM1 (32-bit) came in 1985. So we had the instructions (in both senses) for the final hardware then.

40 years, and X trillion barrels of oil, later we are still in complete hardware and software confusion.

64-bit is a big mistake, because nothing scales to infinity; and since nothing is free, we are about to pay the price now!

Indebted to the past and the future to max!

I say roll back to C and Java and rebuild from there on RISC-V (even if it has 128-bit addressing, in case you run out of your 18.4 exabytes of 64-bit RAM/disk).

But it might already be too late.


You should try working in embedded software; it has all the C code and 32-bit addresses you could ever wish for.


I would if they made microcontrollers that had better hot-reload.

Also would need more cores, SRAM and some OpenGL ES GPU! ;)


C is crap that only exists because it has always existed. Why would you want to reboot the world on C? And why Java as the only alternative rather than something more different?

>64-bit is a big mistake, because nothing scales to infinity; and since nothing is free, we are about to pay the price now!

What?


> but when it comes to say Starcraft, which is a beautiful one person creation

Wait, huh? Are we thinking of the same Starcraft? Is there a different program or utility with the same name as the videogame? Because the videogame was absolutely not a single-person creation.


There were a bunch of side people. But the main engine was one dude.


With all due respect to Bob Fitch, I think you're overly romanticizing here. He wasn't starting from absolute zero. He was heavily refactoring the existing engine and bringing in improvements from the Diablo engine, which was being finished up at the same time.

It's still a very impressive technical achievement to rewrite a game engine like that, I just don't think it qualifies as "finished software" in the way this thread intends that phrase to mean...

edit: in fairness your post wasn't talking about finished software per se; I still feel it's inaccurate to call starcraft.exe a one-person creation.


yeah, Blizzard made it. I used to hang out with him in #vb6


I understand your feeling: I have a vague uneasy feeling when I see DOSBox or other emulations running. It is a great achievement to allow legacy software to remain alive and executable... but still it seems fake. Nothing to do with the real software running on real antique hardware.


Pretty irrational honestly. The input and output devices make more of a difference than digital software being emulated or not. I'd rather play old arcade games in an emulator connected to a CRT arcade monitor than real arcade PCBs connected to an LCD using some scaler. The former is going to be a more authentic experience.


I think it's related to latency. There is no way to fake low latency and for that you often need the complete chain of hardware.

I'm the opposite of you though; I'd rather have an original C64 on modern capture (low-latency GV-USB2) and (S)NES Classic controller input.


Sample size one, but I actually found that the iOS ecosystem works surprisingly well for having a finished app. I have one, created over six years ago with almost no updates over the past years. I intentionally built it in a way that does not require any kind of maintenance: It works offline, i.e., does not require a backend; it's a one-time payment through the App Store (no subscription -> no expectation of continuous updates); the free and paid versions only differ in how many items you can put in the database, i.e., everyone can try the full feature set before buying -> reduces complaints / refunds; etc.

While these are deliberate design decisions, a nice benefit of the iOS ecosystem is that the app has kept working on every new iOS version without me updating anything in the code. Things like installation flow, payment processing, etc. are definitely components that would fall apart a lot sooner when not offered by the platform itself.

Sure, platform lock-in and all that stuff, but so far the app just keeps generating happy users every month (and some pocket money).


Even better, I have a hacked-together-in-a-weekend-for-a-class app for Windows on the Microsoft Store, uploaded eight years ago using a school account I can no longer access.

I have received literally daily warnings that the app will be delisted if I don't fix a laundry list of issues including security vulnerabilities, outdated libraries, new EULAs, Icons in different sizes, etc. Despite all of that the app is still there and able to be downloaded (just checked).


TeX and METAFONT are also "finished software."

Their version numbers converge to π and to e respectively when bugfixes are delivered, and will become precisely π and e when Knuth passes away.

https://texfaq.org/FAQ-TeXfuture


Are there no other people who can work on TeX and METAFONT after Knuth? What if someone else wants to create bug fixes for these projects? How would they version TeX? Or do they have to fork TeX and adopt a new versioning system?


Straight from Knuth's paper: "anybody can make use of my programs in whatever way they wish, as long as they do not use the names TeX, METAFONT, or Computer Modern. In particular, any person or group who wants to produce a program superior to mine is free to do so. However, nobody is allowed to call a system TeX or METAFONT unless that system conforms 100% to my own programs."

https://tug.org/TUGboat/Articles/tb11-4/tb30knut.pdf


I have to say that I agree with GRRM's point about not wanting software to "correct" you. I seem to spend half my time while typing on my phone trying to convince it that what I typed is really what I wanted to type, and not what it thinks is what I wanted to type. Microsoft Office is also bad at this.

Also, whoever decided that highlighting something should try to guess the whole words you wanted and when copying should throw a space on the end for no apparent reason should be, well... I shake my fist at you. I spend a lot of time re-highlighting something either to avoid the space on the end or the beginning or both or to get something out of the middle of the word and it's incredibly frustrating. Just stay out of my way. I've got this.


These (and other examples) are of the same category: Computers should obey the user's commands, not ignore the user, and not try to second guess the user. Computers felt a lot more reliable (and a lot more fun) when they were essentially REPLs: Read a command from the user, start a process, execute the command, print the results, end the process, and repeat. Now, they're running hundreds of processes doing god knows what, trying to do things that you don't want them to do, nagging/notifying you constantly, suggesting this and nudging that. It's gone from something that executes the user's commands to something that commands the user.


> I seem to spend half my time while typing on my phone trying to convince it that what I typed is really what I wanted to type

I'm curious why you don't turn those features off


Laziness, I guess. I haven't wanted to scour the settings for my keyboard and/or system settings for it. But, prompted by your question I did go root through the settings and got it all turned off.


Clojure has a lot of these “finished” libraries, and another functional language like Elm really never gets any updates because it’s considered “finished”. Maybe it’s a functional programming language thing.


I've heard this claim a lot but don't find it true; there is plenty of abandonware with lots of open issues.


> This program embodies the concept of finished software — a software you can use forever with no unneeded changes.

I think it's important for people to realize this is totally untrue. You can check the Wikipedia article for WordStar[1], and the first paragraph says, "...originally written for the CP/M-80 operating system, with later editions added for MS-DOS and other 16-bit PC OSes."

Software needs to have the ability to change because the devices we run it on keep changing - either because we make new ones or the old ones wear out. This is why open source is so critical - because there is no such thing as "software you can use forever," there's only "software you can use right now and modify to continue to use in the future."

That said I also appreciate software whose feature set is frozen - allowing it to quickly and easily be made available on many platforms and be immediately usable by anyone who encountered it before.

[1] https://en.wikipedia.org/wiki/WordStar


100%.

Need to differentiate between finished FEATURES and finished software.


Emulation and compatibility layers seem to cover this just fine.


Yes - one way to deal with the fact that "software you can use forever" doesn't exist is to write more software whose source code you can update and just update that. However, that really just confirms what I am saying.


Most programmers never get to experience the amazing feeling of being truly done with a software project. I used to work on video games in the 90s which would go into a box and onto a shelf. We did not have updates or patches. The starting, creating and finishing of a concrete and complete project is not available for most programmers today.


I'm a bit envious of authors in that respect. Once their work is complete and sent off into the world they can stop thinking about it and move on to the next thing.


This sparked a shower thought. Instead of developing infinitely long software that perpetually updates, can we create many tiny products that just persist?


You mean the decades old concept of all that software I use every day, the GNU operating system tools? All the command line tools like 'grep', they are basically done, stable and reliable. And we have one tool for one purpose.

All these ideas have been there for a long time. But they work only if nobody wants to earn money with the software development.


We really take GNU for granted. What an incredible achievement and legacy for all of us in the industry. Say what you might about Stallman but he deserves massive credit and respect for it.


This precisely. There are so many perfect little tools in the *nix environment, stable as rocks. There are one-character tricks in text editors that net what you want from MSO or 90% of what people generally want from PS/Gimp/Krita, all while using the same text encoding that's been in use since 1974.

The big gap in users is between "computer familiarity" and "computer proficiency". People associate environments with home, and the older the enterprise user base is, the more they want their mouse menus and clicker buttons. Introduce a text interface and say so long to 90% of the enterprise users... even if the windowed equivalent is a hellish nested circular menu labyrinth no one can use the same way twice running.


Looking at the git logs (https://git.savannah.gnu.org/cgit/coreutils.git/log/) that doesn't seem to hold true.

While the days around 2003 when the project had +2500 commits per year are long gone, coreutils is still fairly active, with ~200 commits per year.

Even looking at really simple tools like 'yes', git blame shows a bunch of fairly recent changes.

On commands with a specific repository like grep (https://git.savannah.gnu.org/cgit/grep.git/log/), the project actually peaked not too long ago, in 2010, with 355 commits in a year. And the current activity level (82 commits in 2023) is not that far off this high.

Of course, commits are not everything. I expect the actual code changes to be smaller (lots of 1-5 line fixes vs ~100 lines of new functionality). But these tools are still evolving quite significantly.


Interesting shower thought.

I propose calling this idea Uniquely Minimal Individual Extensions, aka UMIX philosophy, and creating a suite of small self-contained programs that can be flexibly composed to create more complex ones.


"The thing that hath been, is that which shall be, and that which is done is that which shall be done; and there is no new thing under the sun."

- Ecclesiastes 1:9


How about calling it mature software? There comes a point in the lifecycle of certain software products where adding new features or minor improvements does not necessarily provide enough value to justify an update for everybody. I can see beauty in that. Often times the upgrade also comes with drawbacks such as increased loading times, new bugs or additional hardware requirements. It can be quite satisfying to realize that what you have does the job and there is no need to take the risk of an update. It makes you feel good about the product and adds to the respect you have for its devs. All in for mature software.


I get your point but "mature" is already used for "not new" software a lot.


I couldn't agree more. It's a real shame that the standard is that software is never "finished" anymore. Using a work-in-progress is always problematic.


Factorio is a common example of a video game that is very polished. It could be frozen in time and be considered finished.

Yet it receives consistent updates finding smaller and smaller fixes and refining the game even more. How do you reconcile those ideas?


What's to reconcile? If there are releases that go beyond bug fixing, then it's not "completed"; it's a work in progress.


Respectfully disagree, I think getting value out of a work in-progress is preferable to the alternative and provides vital feedback to the software builders.


As a dev, I get that. As a user, I very much dislike that software is often in a perpetually unfinished state.


I agree with both. "Finished" software, I think, can happen only with small pieces of software; every medium-size+ project needs continuous development.


Now we know why George R. R. Martin has such infamously low productivity.

It's possible I've got the wrong takeaway from this article.


Now imagine if he were to use Microsoft Word and had to manually change all the uppercase letters to lowercase.


Why would his productivity be higher if he used a modern bloated word processor? Is fighting with Word's formatting some form of productivity that I'm not aware of?


Imagine how much quicker he would be with the latest AI extensions to Word.


Why’s that?


Unrelated, but WordStar reminded me of the time when I used a computer for the first time in 1993. The only thing I knew about computers was

    cd ws 
    ws


It's quite funny to think that back in the eighties ordinary (i.e. not technical) people were navigating the command line on their PCs.


Pushing forward the state of what software we consider "old Unix standbys" like `cd` and `ls` is part of why I started maintaining my shell-bling-ubuntu repo. I would love to see a day where ripgrep, find, fzf, etc. become so ubiquitous they start making their way into the "standard" builds of user-facing distros.


Nice in theory, not in practice.

YouTube was already a great video sharing and recommending website in 2005, but if it still looked like this, it would be dead.

https://www.versionmuseum.com/history-of/youtube-website


YouTube without the algorithm? Sign me up.


Hah, I've tried to find some of the videos featured on the homepage:

https://www.youtube.com/watch?app=desktop&v=C-JrfjVjHAY

It has crazy stats: 18 years old and only 3.7K views!

I wonder: were the compression artifacts this bad when the video was uploaded, or has it been recompressed a few too many times? Still groovy tho


I think the quality was that bad already.

I uploaded my first YouTube video in October 2006. The quality is still as it was on day one.

However, some comments seem to have disappeared. I can only see my reactions to those comments.


This version had literally everything I need, haha. Wish it had stayed like this.


> When we buy a physical product, we accept that it won’t change in its lifetime. We’ll use it until it wears off, and we replace it.

Not true. We accept that physical products WILL change due to wear and tear, repairs, and even deliberate modifications. It's true that car pedals will stay in the same position, and a table will keep having four legs, but their parts WILL be replaced. Physical products can also have bugs, hence the recall campaigns that happen now and then.

Software just makes it easier to replace and modify, and open source allows more people to do that. It just happens that software runs on a physical product (a.k.a. hardware) that has a much lower need for maintenance than a car or a piece of fabric, so you can use it for longer if that pleases you.


This software design principle is conspicuously absent at this point in time. Sure, Agile is important, and Agile processes do generate good products. But those processes have also come to introduce a cyclic development model that's permeated back into the tools, and larger tool developers (like those in the React/NodeJS class in terms of user-base size) have abused this good intent.

Software can still be developed iteratively. That's not the problem. With CMake, for example, I don't 'fear' upgrades, because that team values the idea of 'finished' software. As does Microsoft.

On the other hand, the NPM and Apple dev teams do not cherish this idea. And in turn, both their user and developer communities suffer in the long run.

That's how I've come to see it recently.


Microsoft will bend over backwards to maintain compatibility but at a huge cost to themselves. Can you imagine trying to fix a bug or vulnerability in some thirty year old Windows code without breaking anything. It must be like wading through treacle.


Yeah, but to be honest it is worth the cost because it benefits them in the long term. This sends a good signal to potential partners/investors (as in software developers/companies) that it is worth investing your time/resources into the platform, because you can expect things to stay stable enough to build a vision/roadmap/future. In the case of Apple, your software had better make money in the next 3 years, because after that you can expect to rewrite a lot of it, if said software is still possible at all... There is not a lot of 3D software on macOS (especially CAD), but you cannot blame the devs. Macs were already pretty anemic when it came to GPU power, but if you had an OpenGL codebase you would now need to rewrite it all in Metal, even though it can only work on an OS with one of the smallest market shares.


I find this hard to believe.

> cd command implementation does not change.

I'm pretty sure the cd implementation needs to interface with different file systems. Those interfaces change or at least new interfaces appear when new file systems emerge.

> The word processor does exactly what it needs to.

What we use a word processor for changes over time. Does the print function still work for my inkjet printer, in A4 format? Can I write emails with it - and why should I even use the word processor if my email client has an embedded word processor? Can I embed images, or only TIFF, maybe?

Unfortunately there is no such thing as forever software, unless you don't change the circumstances in which it is used, which is very unlikely.


Rather than "finished software" (which I consider to be impractical), I'm a fan of "converging software" - it has a clear goal, concept, direction. The large features are already there, they're being just refined. The software increasingly makes only small changes - fixing bugs, refining existing features, then refining those refinements.

TeX would be one such example which reflects its converging nature in its versioning scheme. But I think it applies to a lot of other software - e.g. basic GNU utilities like grep, awk etc. are like that.


I wrote a finished software once. It printed “hello world” to the console.


Are you sure it is linked against the latest glibc?

Also, how do you handle locales - "Bonjour le monde" and "Hola Mundo" should be table stakes for an actually finished hello world program.


Are you sure it's finished? Did it handle the case where the output stream couldn't be written to, for example?


That's outside of scope. Most programs should fail when faced with situations outside their purview.


Yes, but if you don't check the printf return code (in C, say) the program won't fail. It'll return 0 (i.e. report success) even though it did not do what it promised to do on the tin. And if you only check its non-negativity, you might have written only "Hell" while the program merrily carries on and says "all OK: 0!"
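A minimal sketch of the checking being described; the return-value semantics are printf's (characters written, or a negative value on error), though treating a short or failed write as exit code 1 is just one reasonable choice:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *msg = "hello world\n";
        int n = printf("%s", msg);
        if (n < 0)
            return 1; /* write failed outright */
        if ((size_t)n != strlen(msg))
            return 1; /* short write: only part of the message went out */
        /* stdout is buffered, so the write can still fail at flush time */
        if (fflush(stdout) != 0)
            return 1;
        return 0;
    }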


Likely no, just like any other software ever written.


Unless you wrote it in HQ9+ [1], I wouldn't be so sure that it's finished.

If you wrote it in C, I wouldn't be surprised that in a few years time you'd have to check a few security policies before you can actually write to the stdout file descriptor. And in JavaScript, the top level "console" object will probably be removed to avoid possible namespace clashes.

[1] https://esolangs.org/wiki/HQ9%2B


Did it compile to wasm and run at the edge at scale?


but didn't you feel the urge to add an option specifying which font to use?


I figured this 1994 paper would be relevant to the discussion about finished software: "Software Aging" by David Parnas. I read it for a software engineering course where the prof railed against the assumptions underlying the notion of finished software.

https://www.cs.drexel.edu/~yc349/CS451/RequiredReadings/Soft...


Software is never finished, it can only get abandoned.


Any long-lived application is like a species, and every release is like a fossil. I see beauty in both the static and the dynamic.


Really depends on target audience and environment. The closer you are to users/environments that are constantly in flux, the more you are pressured (both socially and technically) to keep updating.

Software being finish-able shouldn’t be a fringe belief. The fact that it is really explains a lot about tech.


Does finished mean it doesn't get updates at all or just no new features? Because most software today has some kind of communication / network connection and therefore could have security vulnerabilities. If these are not patched then I'd rather not use this finished software.


Unfortunately, neither finished software nor durable hardware are compatible with the imperative for growth that firms operate under in a capitalist system with unlimited marketing. Once the market is saturated, next year's revenue must come from a product that is different from what the consumer already owns. It must either actually be better, or marketed in such a way as to be perceived to be better. Usually, the marketing option is easier and more reliably reproduced year after year.


It is more accurately the pipe dream of finished software. In general the environment in which a piece of software is used constantly changes and so the software has to change. Or be abandoned and replaced with something new. And yes, there are some exceptions.


I've mentioned before that my wife was going through stuff her mom sent her from the attic when they sold her childhood house and she found an SNES from 1991 and a bunch of old games, and they still work. She's been playing Legend of Zelda pretty regularly for years now on that thing. There's no reason it can't work at least until the material physically degrades to the point of being unfixable (which is not forever, but longer than most software otherwise lasts). Even if something about the regulatory environment changes, nobody is going to issue a recall on a 30+ year-old gaming console forcing you to turn it in.


If you look at this from that angle, every piece of software can be considered finished and the term becomes pretty meaningless. I can use it and it satisfies my needs, so it is finished. But what really happened here is that the entire system got abandoned - hardware, firmware and the games running on top of it - and replaced with a new system and a new version of Zelda. They could in principle have maintained the game, ported it to new hardware, and improved the graphics, sound and gameplay to modern standards.


Pretty easy to fix if the powers that be wished it: finish the environment.


Depending on what kind of software we are talking about, all kinds of things might change. Hardware and software environments evolve, legal requirements change; it really is an endless list of things that you cannot control. Could you have written an image editor 30 years ago that would be usable today, when you had less RAM than the size of an image coming from a digital camera today, and in an image format invented years later? Could you have written a text editor useful today before Unicode was invented? Could you have written a tax software that anticipated all relevant taxation changes 20 years into the future? Could you have handled touch input well before smartphones were widely used, and dealt with the differences between desktop and mobile in general? Could you keep up with the changes to HTTP, HTML, CSS and JavaScript without updating your browser?


I wrote a little command line calculator once. Main purpose was to learn about parsing with Bison/Flex.

Now, I'm using it every day and haven't changed it in years. It compiles everywhere, even on my M1 with Asahi Linux.

All my other software is not beautiful. It requires changes.


This is a funny example.

From the WordStar Wikipedia article:

> no current version of WordStar is available for modern operating systems

https://en.wikipedia.org/wiki/WordStar


[1] Run it in a DOS box; I list a whole bunch here:

https://www.theregister.com/2022/06/28/friday_foss_fest_runn...

[2] Run WordTsar:

http://wordtsar.ca/


I knew a philosopher who was mad for WordStar; he ran it on a DOS install in VMware.


Can't that run in DOSBox?


Possibly, I don't use it myself, strictly LaTeX ...


I agree. I even wrote about this. [1]

[1]: https://gavinhoward.com/2019/11/finishing-software/


How about the V2.00 ROM in my 1987-dated ADA MP-1 guitar pre-amp.

It does what it's supposed to. Receives MIDI messages; changes programs; lets you edit and save; does the SysEx dumps and restores.


"Finished" is practically a synonym for "source code not available" (proprietary or lost)


Or as I like to say, Stability is a feature - a very important feature.


Finished software? Never heard of it


It’s a shame Mr Martin is less good at finishing his books.


I get the downvotes, it's kinda snarky but I think it's a legit point. If he had more modern software, would he get more done?


Finished software is not feeding any developer's children. I hate it.


That logic is like breaking windows to feed the glazier's family... Or puncturing tires to feed the garage owner's family.

It does not make any sense: it is fake work while there is so much real work to do.



