No. This is rose-colored glasses. Maybe the author doesn't remember the blue screen of death, or reflexively hitting ctrl-S every five minutes to save because the software might crash, or clicking on menus in Word and they wouldn't open because you had a macro virus. Maybe the author doesn't remember broken software that just was not fixed, ever, because there were no software updates. Maybe the author is just too young to remember those things. Well I remember them.
Software simply does a lot more than it used to. A lot of people are complaining about chat clients. Today's chat client is a lot more complicated. Sure, go back and use IRC from thirty years ago. Do you remember all those commands? slash-something?
And all that old, simple software is still there if you want it, better than ever. Go get Emacs, or vim, or mutt, or your old IRC client. Go download MP3s and put them in your folders and use a command-line program to play them back. Nobody has taken any of that software from anybody.
And if I want to develop that simple software, it's easier than ever. A Common Lisp compiler used to cost big money, and even a C compiler came in a shrink-wrapped box, unless you had heard of GCC - which was hard because there wasn't much of an Internet. Now I can download that stuff for free. On my Mac I can get the same tools pros use, for free. Microsoft used to charge big bucks for comparable tools.
Software is absolutely getting worse. I think you need to go back in time and really consider whether the software is "doing a lot more."
The valid excuses for some of the footprint these days are bad protocols (but you pretty much have to use them - JSON is an abomination for resource use), Unicode, and higher resolution assets. There is actually very little else that's driving it other than horrible coding.
One way you can tell is that performance-critical software hasn't really bloated the way desktop and mobile software has. Postgres? Still pretty tiny, and it scales as you'd expect. Compilers? Same story. Even Excel isn't a huge pig the way the Electron apps are.
Slack is so bad it is actually hard to think of a worse software application, and it barely does anything more than most other, lighter-weight applications of the same sort.
> Slack is so bad it is actually hard to think of a worse software application, and it barely does anything more than most other, lighter-weight applications of the same sort.
MS Teams comes to mind. It's more bloated and it lacks some of the features of Slack, or implements them in a user-hostile way locked into other MS products.
Huh? While I'm sure there are isolated examples of software getting worse, nearly all software is better than it was even, say, 5 years ago. Slack is great, with an incredible number of features and functionality.
I was back "in time" and I'm much happier with software these days on all levels.
> Software simply does a lot more than it used to. A lot of people are complaining about chat clients. Today's chat client is a lot more complicated. Sure, go back and use IRC from thirty years ago. Do you remember all those commands? slash-something?
So you completely forget AIM/ICQ/MSN existed? They did basically everything modern clients do in a couple megabytes of RAM. Jabber even more so.
In the AIM/ICQ/MSN days, the user base was much more niche because the software was not user-friendly.
Simple changes like responsiveness or aesthetics help less adept users adopt and use technology.
Furthermore, technology is now global. You will need to support more fonts and languages. You will need to test for various different localizations. You will also have to add support and localizations for the less physically able.
That seems like a baseless critique. In what way were any of these less user-friendly than what we have today? I suspect you haven't actually used them?
I think if anything their interactions were more clearly designed. Have you tried the likes of Snapchat where you have to just paw at the screen until you learn what your pawing does?
OS level text rendering was basically solved by this time in the early 2000s. I definitely spoke to people in Japanese on ICQ in 2000/2001.
> OS level text rendering was basically solved by this time in the early 2000s
Not for all languages.
For starters, my ancestral language's Unicode support didn't arrive until 2008. In that same Unicode proposal, support for a language spoken by 22 million people was added as well. Additional blocks for languages like Laotian, colloquial Khmer, etc. weren't added until the 2000s.
Also, plenty of languages (e.g. Burmese) don't use Unicode fonts and instead use localized, non-standardized encodings such as Zawgyi.
The internet in the 1990s was designed with North America, Europe, Japan, South Korea, Taiwan, Hong Kong, and Singapore in mind.
The majority of the world wasn't on the internet yet and hadn't even used a computer. The form factor, usability, and documentation needed simply didn't exist in the 1990s and early 2000s yet.
The Internet didn't truly "democratize" until low-cost smartphones from Samsung, Oppo, and Xiaomi began entering the market in the early 2010s.
> Have you tried the likes of Snapchat where you have to just paw at the screen until you learn what your pawing does
Yes. I was the target age demographic for Snapchat. For people whose primary experience with computers is smartphones (aka the majority of the world), an app like Snapchat or WhatsApp or WeChat or TikTok is easier to use than folders in Windows, macOS, or Linux, let alone a terminal or IRC.
> Software simply does a lot more than it used to. A lot of people are complaining about chat clients. Today's chat client is a lot more complicated. Sure, go back and use IRC from thirty years ago. Do you remember all those commands? slash-something?
True enough. Modern software does a lot of extra stuff that the user neither asked for nor wants. This is a double hit to software quality. Not only is there the added complexity of all that crap in the code base, but also the intentional "crapification" even when everything is working as intended. A recent example: https://www.tomsguide.com/news/assassins-creed-in-game-ads-a...
There are multiple dimensions of software quality, and by some metrics (such as performance efficiency) things are notably getting worse. Some would also argue that fewer features (such as those IRC chat commands) takes power away from the power users, a sentiment I have held. Though I agree that at an overall level, the quality has improved.
The incentives to write software may have shifted over time as well. I seem to recall a lot of good quality, older software being written as a passion project. While those still exist and are being written today, the stuff that gets shown to you in a Google search (usually) isn't that. That takes away the incentive to build anything that isn't backed by an investor looking for quick returns.
> Today's chat client is a lot more complicated. Sure, go back and use IRC from thirty years ago. Do you remember all those commands? slash-something?
Not a lot. A tiny bit. They are still displaying text and a few images, and struggling to do even that (I wonder if Slack still needs 20% CPU to display an animated GIF). And yeah, slash commands are still there.
> Go download MP3s and put them in your folders and use a command-line program to play them back.
Yes. We can play them back with Winamp, which has all the bells and whistles that modern players have, for a fraction of the resources.
> Software is not getting worse.
Current software struggles to do many of the same things that software of 20 years ago was already doing, even though modern software has several orders of magnitude more resources.
I don't think it's a tiny bit at all! Compared to IRC, chat services nowadays have:
- server-saved history
- custom user-uploaded emotes
- threads and channels
- native GIF and video support
- image embedding
- voice and video streaming
- complex automation APIs
- detailed and comprehensive moderation systems
None of these are optional or superfluous. If you wanna go back to everyone having to run their own bouncer, approximately 5 people will join you. Tell them that they can't even have emojis and you're probably down to 0.
Slack is expensive software. I expect performance, and yet I can't find messages, I have a hard time running Slack and IDEs at the same time, and when they change the interface I lose 2-3 days until I adapt.
If I could save some bucks by removing videos, emojis, and voice and video streaming, I would not hesitate to do so.
You've got a fair point on how things work: it's all about communication. You want to stay where everybody is, so the incentive is to capture the market, not to make fast software.
I think that most of the innovation effort in the last decade went into how to deliver fast.
Even your phone is an order of magnitude or two faster than the computers that ran IRC.
Do you really, truly think that all you listed is, or somehow should be, difficult on modern hardware? Or that the truly abysmal performance that modern chat apps exhibit can somehow be justified because they embed things, many of which are either handled by hardware (images and video) or by the server (state, history, automation)?
> Maybe the author doesn't remember the blue screen of death, or reflexively hitting ctrl-S every five minutes to save because the software might crash, or clicking on menus in Word and they wouldn't open because you had a macro virus.
This was more of a problem with Microsoft Windows products before they universally adopted the NT kernel across their product line. The instability of the 16-bit kernel and its 32-bit extensions was a constant point of ridicule from users of other architectures and operating systems at the time.
You can write buggy application software on any platform. But if the operating system and system architecture provide sufficient separation and protection, it won't crash other processes or the entire operating system, as it did for consumers getting their first Windows PCs in the 90s and having to be paranoid about random crashes after hitting the wrong button.
> And if I want to develop that simple software, it's easier than ever.
I had a hearty chuckle at this bit. It’s completely out of touch. Modern software development is far from simple. It also requires an Internet connection for getting packages.
Ah, yes, software sucks because <spins wheel> the internet exists.
You know, you don't need the internet to develop software, you can still do it exactly like we used to in the 80s. Download gcc and start reading man pages.
I think it is more distribution than development that is the big difference between now and pre-internet times as far as things that affect software quality go.
When I worked at places making software that would be sold to users on floppy or CD in stores like Best Buy and CompUSA, back when very few of those users had internet connections, these things were true:
1. If a user needed help beyond what was provided by the printed manual that came with the software and whatever help system we built into the software itself they would call our toll-free support number. The typical cost of such a support incident was more than our profit on selling the software to that user.
2. To distribute bug fixes to existing users we'd have to mail them floppies or CDs.
3. The only future revenue we would get from existing users would be from them buying more of our products.
These factors made management place a heavy emphasis on not shipping software until it was good enough that very few people would need support, that whatever bugs we missed were not severe enough or numerous enough that we'd have to ship updates, and that users would be happy enough with it that when they saw other products from us they'd want to buy them.
You released your product when it was done, and then you moved on to work on your next product.
When the internet became the main means of both support and distribution, much of that changed. Support became a lot cheaper for developers for a couple of reasons. First, users could turn to community forums that would often solve their problems without the company even getting involved. Second, if they did contact the company it would be via email or a web form rather than an expensive toll-free phone call.
Distributing updates to users became routine, and even automated. Accordingly, the number and severity of bugs that can be in the first release have gotten higher. It just has to be good enough to not piss the user off too much before you can release an update.
The net (no pun intended!) result is that nowadays people release at a stage when they have features and bug levels that in the floppy/CD retail days would have been considered beta or often even late alpha.
So it is not really that software sucks nowadays compared to back then. It is that back then users didn't even see the software until it was finished. Nowadays they get the software while it is still being actively developed.
> Maybe the author doesn't remember the blue screen of death, or reflexively hitting ctrl-S every five minutes to save because the software might crash, or clicking on menus in Word and they wouldn't open because you had a macro virus. Maybe the author doesn't remember broken software that just was not fixed, ever, because there were no software updates. Maybe the author is just too young to remember those things. Well I remember them.
I remember all of this, and it was a long time ago. XP was more stable than Vista. Then Windows 7 arrived and became the most stable Windows version ever. Windows 10 and 11 turned out to be overbloated and problematic. I remember using uTorrent, which, after Azureus, felt like a breath of fresh air. It was written in C, super fast, small, and efficient. Now it too has become an overbloated mess. In the last 10 years, the huge demand for software has led to many people entering the field without adequate understanding and knowledge of software engineering, resulting in a market flooded with buggy, overbloated software.
Complicated is worse, not better. IMHO as end user.
Hardware has gotten better. Much, much better.
But I still like to use old software. Because I find it's the only way I can appreciate the new hardware. New software is far too unilaterally manipulative and exploitive of both the user and the user's hardware resources, often for ridiculous, unnecessary purposes such as data collection, surveillance and advertising services.
"I was there Gandalf. I was there three thousand years ago ..."
I was there too and I think your glasses are a bit foggy.
I can give you one example. Windows had a bug where a timer rolled over every 49.7 days, causing some minor issues with applications. Today Windows reboots your machine every month! I think about this often. You should too.
To me, this is the main problem. Most software now has so many features that the owning company can't possibly keep eyes on it all anymore, or just chooses not to as a cost cutting measure.
Simple example that has hit me lately: simplified view in Chrome on Android does not work anymore. It just shows a loading spinner forever. I would bet money Google has no idea or just doesn't care (surely they collect exception metrics?). I hit stuff like this all the time now. I just fully expect, say, 10-20% of features in a modern app to just flat out not work.
> Maybe the author doesn't remember the blue screen of death, or reflexively hitting ctrl-S every five minutes to save because the software might crash, or clicking on menus in Word and they wouldn't open because you had a macro virus.
That was my first thought too. My second thought was that that was 30 years ago. Different people may be using a different era as a reference.
When it comes to software reliability, the industry appears to go through a push every once in a while. In the mid-1990s there was a push to introduce memory protection into personal computer operating systems. Software became more reliable because it was protected from other software, and because it became more difficult for software to trash the operating system's memory. In the early 2000s security was a much greater issue, so we saw another push that affected software quality. Well, several pushes. So yes, there has been a definite improvement in software reliability in some respects.
Yet software reliability is also about the people sitting behind the keyboard and writing the code. Better operating systems and better development tools may help to catch some of their errors, but they won't catch all of them. If they try to (recursively) calculate the Fibonacci sequence with `f(n-1) + f(n-1)`, the computer will gladly do it and gladly produce the wrong result. Of course, development tools aren't infallible either, and what they catch depends not only upon which tools you use but how you use them. For example: C won't detect integer overflow. Rust appears to be somewhat better: it will detect overflows in debug builds, though it will not detect them by default in release builds. It also offers functions to handle overflows in various ways. But as soon as you depend upon a developer changing a default or making an explicit function call, you are bumping into the same issue as the Fibonacci example: the computer is going to gladly do what the developer told it to, even if it is not correct.
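To make that concrete, here is a minimal Rust sketch (my own illustration, not from the thread): the miscoded Fibonacci compiles and runs without complaint, and the overflow-handling functions only help when the developer explicitly calls them.

    // Logic bug: this compiles fine and happily returns the wrong sequence,
    // because f(n-1) + f(n-1) should have been f(n-1) + f(n-2).
    fn fib_wrong(n: u32) -> u64 {
        if n < 2 { n as u64 } else { fib_wrong(n - 1) + fib_wrong(n - 1) }
    }

    fn main() {
        // Prints 1, 2, 4, 8, 16 instead of the correct 1, 1, 2, 3, 5.
        for n in 1..=5 {
            println!("fib_wrong({}) = {}", n, fib_wrong(n));
        }

        // Overflow: plain `+` on a u8 holding 255 panics in a debug build and
        // silently wraps in a default release build. Explicit handling is opt-in:
        let x: u8 = 255;
        assert_eq!(x.checked_add(1), None);       // overflow reported as None
        assert_eq!(x.wrapping_add(1), 0);         // wraparound on purpose
        assert_eq!(x.saturating_add(1), u8::MAX); // clamp at the maximum
    }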
Even though software reliability is tremendously better than it was 30 years ago because of technical improvements in the operating system and development tools, I suspect it also ebbs and flows on shorter time scales simply because there are entire classes of errors that are affected by the people who develop (or fund the development of) software.
(Edit: in a twist of irony, HN was not responding when I initially tried to post this. Another point is that the reliability of a lot of modern software depends upon external services.)
There was also fantastically complete, fully featured software that was stable and whose bugs were all roughly documented and known by users. Today you get a lot of unfinished software with the attitude that we can patch it if it matters. This is exemplified by video games.
Yes. We are increasingly working with a model where machine resources are treated as free, and the most valuable resource happens to be developer time.
If our tooling and efficiency are getting better, why would we not be able to automate a wide variety of efficiency improvements as mere compiler tasks and get away with it?
It's not. The life cycle prioritizes process over product and real-life constraints, thereby making software engineering a drive-by operation.
Many of our software engineering methods today are discardable and thus unsustainable as well.
We need to bring back the notion of developer attention and stop relying on mere tooling for improving our software. Our tooling is only as good as we are, and tools are also written by oblivious software developers, after all.
We need to have ways for software developers to look to perfect their art, understand the basics, dive in bit by bit, and get back to improving their applications, as opposed to simply complaining that their Electron-based application is slow.
They have to work with employers and teams who allow them to do that, at least once in their careers. That may get them to understand a lot of issues better and will lead to better outcomes for themselves and for everyone they write software for.
There are many things a disciplined person could do, yet discipline is hard: picking a few places to do this sort of work is a great way to start this transformation.
Is there any indication that any actual consumers of software care enough to justify the cost of increased quality? The market seems to reward features more than reduced resource consumption, so it seems that that's what users want. I get that we as software developers want to use all the tools of our craft and build something beautiful and elegant, but it seems that hardly anyone outside our craft cares how the car looks under the hood.
Sustainability and maintainability are a huge issue, though, that can kill products as development gets slower and slower. However, that is at best orthogonal to things like resource consumption.
I think consumers are trapped in a feedback loop precipitated by the rise of ad-driven software. From the perspective of an engineer, "consumers don't care about quality" — but what's really happening is that consumers have no reason to trust the quality of the software they use.
Grab a person off the street and ask them about how they feel about the quality of the software they use at work, at home, on their phones, and I bet they would be quite unhappy with it, but feel that this is just the way software is. What consumer would risk paying £100 for software when they expect it to suck anyway?
Without a demand for good software, suppliers are constrained to turn to alternate value streams like retention or data collection, creating worse experience for the users, and perpetuating the cycle.
So all this is to say that you can't look at the market at a surface level and immediately draw conclusions about what people fundamentally value, because the structure of the market is path-dependent. What may seem like end-users caring or not caring about something might be more of a reflection of choices made within systemic constraints.
If you're driving customers away, very few will come back and tell you they are leaving. This is often why developers seldom get, let alone embrace, that signal.
More features do not necessarily translate to better outcomes. Features are often measured at a point in time. A product has to be good enough for people to use it, and not inconvenient enough to make them run away.
More development does not mean more usage. Imagine someone at the utility company connected your home to a power supply and then decided to change it periodically, just to show that they work and not to be useful to you. Would you like that? That sort of active development is seldom found outside the subscription-based SaaS business.
Case in point: AWS. Dozens of services are launched every quarter, and maybe the developers there get bored and launch new ones, yet most people use a few key services a lot and are happy to stick around as long as things don't break terribly.
The issue is that we've come to equate features and bloat with usage. Then we assume our customers want that bloat and can afford it. At that point, companies are mostly optimizing for their own development over their customers' needs. Over time we delude ourselves that our bloated software is what our customers need. Then, when that doesn't work well, we try drastic measures such as platform obsolescence, leading to a loss of sustainability.
It really is the "how do you pay for it" question, always, no?
How do you incentivize quality? We've seen two approaches that definitely don't work super well to deliver consistent quality. One is "proprietary, make people buy per unit" and the other is "free and open source."
(And note: I say the latter as a huge FLOSS fan. I'll still push it all day -- but only because I think freedom is more fundamental than quality in this regard; preserve freedom first, then we can think about quality control.)
If we're talking about "how do you pay for it," then there are not just two models.
Closed source products can be sold outright, rented, subsidized via ads, some combination of those, and more. They can even be given away as tax write-offs or loss leaders or somesuch.
All of those models also exist for open source products! It turns out that customers aren't paying for source access so much as they're paying for working software.
No one can agree about what "quality" actually is. Mostly, it's a word used as a weapon whenever someone wants to bully change into happening. Or not happening, for that matter. But a weapon either way.
Of course, I didn't mean to suggest that there were only two at all, just that "here are two that DON'T work for this."
Good point on "quality," it makes a lot of sense to try to look at the thing we're talking about here without talking about "quality" or "software."
There are problems and opportunities in this technological space; what good ways are there to handle those, noting that much of the work that needs to get done for them isn't likely to occur without payment? Something like that.
Also: quality for whom? What Eloi consider quality and what Morlocks consider quality are, if not opposed, at least in strong competition due to opportunity cost. (if it's not obvious, I spend my efforts on quality for Morlocks, and am willing to leave the much-more-time-intensive quality for Eloi up to "the market")
The DEV environments I crafted at work all have constrained CPU, DB backend, and memory. After all, if you need gobs and gobs of CPU and RAM for a single user loading a web page, what on earth will PROD be like, with countless simultaneous page loads??
No, it doesn't slow down DEV work, coding takes place elsewhere.
Shockingly (sarcasm), we've caught many resource issues, e.g. poor code, poor schema, lack of indexes or DB optimization, before they hit prod and became disasters.
AWS is hellishly expensive; having a bill 10x what it could be is suboptimal. And the best way to prevent that is resource constraints from day 1.
Most people and companies write software to make money.
If you are trying to optimize for this goal, does it really matter whether your software is as efficient as physically possible?
There's a certain threshold of performance your customers implicitly expect you to reach. If you're within this boundary, is it really worth spending days coming up with a new inverse square root optimization instead of bringing more value to your customer with new features?
It's a balance. But I would say it's important to not view performance as binary (acceptable or not acceptable), and also to not view features as inherently positive.
Couldn't agree with you more. I spent the last 6 months leading a web performance optimization effort of one very popular site.
The dichotomy between upper management and ICs in the weeds doing the actual work is mind-boggling at times. ICs would understand the spectrum, and desire to create baseline targets across the service for the best experience we could muster. Meanwhile, management only cared whether the core funnel met a specific target and viewed all other efforts as distractions from the bottom line.
This narrow-minded thinking directly results in a subpar experience for customers. It's a shame too, because product quality is critical at this organization.
You must be talking about personal chat clients. Surely you don't think changing enterprise chat clients, which don't use standardized protocols, is easier than changing a house? The average employee has no say in the matter.
You picked an example of software that is famously not easy to switch. Regardless, "if people didn't like them, there would be a better one" is a confusing stance, unless it wasn't intended literally. There are of course a ton of variables that go into what software gets built, maintained, and distributed. "Is it better than the others" is not even a particularly prominent one, frankly.
100%. I used to be rather obsessed (can't think of a better term) with making my code performant, but as I've gotten on in years in my career I've moved more towards "just ship it and don't break anything".
I 100% get this point of view. I don't really care about performance more than I need to to ensure stuff doesn't grind to a crawl, but... I can't help but feel something has gone wrong in software where things are, by default, very slow.
I've spent time optimizing code in the past (mostly Python) and it's not hard to speed things up a lot. It feels at times, though, that the ecosystem is working against you.
Tools like Electron encourage you to ship many, many times more download size than is needed. Python loop speeds are getting better, but they've historically been insanely slow.
I don't think we need to individually obsess over speed, but as an industry we need to give people the tools so that code is performant by default. I think we're moving away from that end.
I don't blame LLMs for being big, because it's needed for now. We theoretically don't need an individual Electron runtime for each application. There should be a common webview runtime, like Microsoft WebView2.
As one develops as a programmer, the performance floor of your code should increase. In those early years, you might waste inordinate time squeezing out a 3% performance improvement in an existing feature when you could instead focus on building the next thing that might potentially double the product market. As long as you eventually see the light, I don't think this time is wasted because in chasing those small efficiencies you will learn the high level patterns that lead to good enough code as well as the evil of premature optimizations.
Also, you will never get a bonus in proportion to the cost savings of an optimization.
> I don't think this time is wasted because in chasing those small efficiencies you will learn the high level patterns that lead to good enough code as well as the evil of premature optimizations.
How many times in your career has a PM agreed to make room in the backlog to prioritize performance work? It very rarely happens, and the reason is that customers rarely care. It's catnip for developers and I love doing it, but it's rarely something that matters enough.
Software is getting worse. Agile and product management lead the regression. There’s no time and little incentive to build quality when you can ship crap and ask for a pay raise. The low-quality stuff will be replaced by other low-quality stuff and no one will notice. The only thing that should work is taking payments; erring on the side of charging users twice is acceptable.
One thing I don't see mentioned anywhere is security. Software in the past was highly insecure; today the number of checks and countermeasures is very high. By itself each check adds an almost unnoticeable amount, but it adds up. Not saying it isn't necessary, though (even if sometimes it shouldn't be: you don't have anything to secure if you don't hold data at all, but ads, tracking, and the like need it).
This is similar to other things, like household appliances. They now also seem less "reliable" than in the past, but take an old simple toaster, and it is probably a fire hazard.
* Buy-once model. Trial + buy is best. Some YouTube videos of people using the software are often good enough. If you offer a free ad-supported version and a buy-once version that removes ads, I will almost always buy the no-ads version.
* Local saves. If you want to sync with the cloud that's fine, but I want to do read/writes from my local device whenever possible.
* Local computation. I have fast chips, I don't mind using them.
* Privacy. Do your competitors track me but you don't? Let me know.
* Modifiable appearance. I really put value in being able to change fonts and colors. I think this is a bit niche.
The main problem is that we are okay with shipping unfinished barely-beta software because we can always force the upgrade/update/change. You didn't have that luxury when you were shipping software on a floppy disk/CD-ROM, and that was it: all the bugs and issues were there, forever.
I definitely feel that modern software is more frustrating, and thinking about it, my number one complaint is the use of mouse-over to trigger things. In late 80s/early 90s software, I could roam the mouse cursor over the screen and _nothing would change_. Now, moving the mouse is like navigating a minefield. Two examples:
- I want to select a bit of text in Microsoft Teams (yeah, I know - it's not my choice), and as I move the cursor to the desired line, it moves over the line below, which causes a context menu to pop up, which obscures the text I'm trying to select, so... swear, back off, and try again more carefully.
- I switch applications on macOS with the command-tab keys, but I'm also anticipating where I want the mouse to be, so I start moving the mouse too. Sometimes, the mouse moves over the row of icons, which changes the selected icon, and I end up in a different app to the one I want.
Software is fast enough for most users not to complain. And as long as users are happy, the business doesn't care about speed.
The way to break the loop is to target an audience with lower lag tolerance. Once users experience a fast product, everything else becomes annoying to use.
Most software has always been really crappy, sometimes it's actually good in some sense and if you're a happy path user maybe you don't notice how crappy other aspects of it are. Used to be that hardware was comparatively crappy and more work went into that than into e.g. UI: not much point in worrying about color with monochrome displays.
The question which is the lede maybe states it better than the title which precedes it: "With all the advancements in software development, apps could be much better. Why aren't they?"
On that basis I think the article is ok although I didn't vote it up.
Reminds me of Niklaus Wirth's 1995 piece "A plea for lean software":
About 25 years ago, an interactive text editor could be designed with as little as 8,000 bytes of storage. (Modern program editors request 100 times that much.) An operating system had to manage with 8,000 bytes, and a compiler had to fit into 32 Kbytes, whereas their modern descendants require megabytes. Has all this inflated software become any faster? On the contrary. Were it not for a thousand times faster hardware, modern software would be utterly unusable.
My answer to whether "software is getting worse" is it depends on the age of the software. I can't think of a single body of software that's been under continuous development over the last few decades that hasn't got markedly more stable. The Windows kernel, the Linux kernel, Android, iPhone, gcc, web browsers, web servers, Office software, and even RAID off the top of my head are all vastly better than they were. (I've lost more data to early hardware RAID and IBM SAN's than to failing HDD's).
But that old software is swamped by new software - web pages in particular, but also IoT, Tesla self-driving, smart TVs, 737 MAXes, my robot vacuum, and my stepdaughter's desk (it refuses to go up unless you reset it). As predicted, software is indeed eating the world, so new software is everywhere. A day doesn't go by without me having to work around the deficiencies of some piece of crap.
So you could be forgiven for thinking that software is getting worse if all you are doing is comparing mature software to new stuff. But remember that a long time ago (well, not that long, as I was a programmer back then too), that mature software was just as new and unreliable as today's new software. With very rare exceptions like the space shuttle software (because it requires a metric f*k ton of money to get bugs per LOC in new software down to 5 or 6 nines), all new software is shit.
If the customers of the software written today cared about quality -- and they _should_ care about quality -- they'd demand a higher standard of quality for the money they are paying.
They don't, though. Firstly, the users aren't the customers. The customers are the advertisers or corporate buyers. Secondly, the users don't _want_ to be customers. They'd rather have whatever they can get for free than anything that costs money, even if the thing that costs money could be orders of magnitude better.
The first thought I have: I think quality is inclusive of size, speed, and reliability. Quality is _also_ features and functionality. It is also user experience. Probably many other things too, but this is just off the top of my head.
I'm also not sure users would come rushing to the side of programmers if _they_ became the customers.
I am the customer of companies that make many other things in my life. The quality probably isn't what it could be if that consideration was put much more front and center. For many things, I am _fine_ with that.
For the same reasons, I don't think the users of software would be obviously wrong. Certainly I don't think they're obviously wrong enough for me to become preachy. If writing software that isn't to the level of quality I demand or need causes me existential dread, I am now willing to say that might be a "me" problem.
> The last two decades have been dedicated to making software development faster, easier, and more foolproof. And admittedly, we’re creating apps faster than ever, with more features than ever, using less experienced developers than ever.
The only part here that seems to be true in my experience is “using less experienced developers”.
I don’t see that apps are developed faster nor that they have more features than ever. Quite the contrary.
I'm not sure they get developed faster either. I was just talking to a friend about how quick it used to be to make a usable desktop app with Visual Basic or Delphi. Meanwhile, I spent an insane amount of time making a table of kana in an iOS app and getting it to resize properly to the available screen size. This would have taken minutes in Delphi or VB, and admittedly took me <30 minutes in the first draft of my app that used React.
I would highly recommend that developers take a bit of time to look into tinyapps.org and try to figure out what made those applications so great at the time, and even now. Instead of a hastily built Electron app, one should always consider alternatives where the software is built with much more thought on the technical side, not just looks.
Things like RAM, disk space, and bandwidth have become so abundant that optimizing for them doesn't really matter. Especially when their availability will have doubled within a couple of years for the high-value customers.
Slack is a resource optimization absurdity. But it serves its target users well as they have plenty of resources anyway.
In my organization (around 1k employees) Slack’s resource hogging is a very frequent complaint and many would rather use different software. (I don’t know what is better).
Slowly but surely the pressure is building up to change the software.
This works to the advantage of a handful of large tech companies. Google, Facebook/Meta, Twitter/X, Apple, Microsoft, and some Chinese companies like Tencent, have the scale and market position to give away at least some of their respective applications. They provide what the article describes as a "joyless revenue machine that exploits your users’ attention and privacy at every turn", and do so with sufficient volume and market dominance to be profitable.
In a world where almost anyone can whip up an app, the inability to charge for it, and the difficulty of growing large enough to profit off user attention and data, act as a barrier to entry.
Yeah it’s getting worse. As a commenter points out, we now have fucking chat and email clients that take up literal gigabytes of RAM and who knows how much on disk, and despite orders of magnitude more computing power are slower than ever, which is idiotic. And self-serving as it may be, I don’t believe it’s because of unskilled, lazy or noncommitted devs. Most devs do the best they can within the constraints imposed on them. When a PM comes with the latest idiotic pet feature that isn’t so much for actual users as to pad their resume and enable them to climb the next step on their career ladder, devs have to make do, and often that amounts to mild protest, being overruled, and then throwing their arms up. And I don’t even blame PMs - likewise, they’re just playing the hand they’ve been dealt. No PM or developer gets promoted for increasing reliability or speed, or for reducing RAM or space on disk. Often, you get promoted for adding bloated, buggy junk that no users actually use. How is that possible in this age of data-driven decision making? Oh my friend, if you only knew the ability of people to come up with fake and misleading data to justify even the least used, most pointless features.
Fair enough, but even if you’re lazy, it’s legitimately difficult to design a chat or email app that takes gigs and gigs of RAM. You really have to try hard to bloat it as much as possible. But that bloat comes naturally with the incentive structure of the modern software industry.
At least for email, you don’t have to use the bloatware. Mutt still exists and works just fine. If there are features you want in the bloated clients, that’s on you as the end user to make that choice.
I do agree with your rant in principle and doubly so with chat clients.
> How is that possible in this age of data driven decision making?
Probably because reducing RAM usage has even less potential than a feature nobody uses. Someone might use a feature, so you can take a risk on it. Who, among those who might otherwise buy Slack, wouldn't buy it because it uses 1-2 gigs of RAM?
A lot of decisions are like that - individually, they’re the right one to make at the time, but they add up, and when there isn’t any serious framework for looking critically at their place in the whole, you end up with what we have, which is apps with a handful of core features people actually want and use, drowning in an avalanche of pointless bloated junk. And those apps in turn add up; even a latest-gen M2 Pro will struggle running Slack, Zoom, Excel, IntelliJ, Mail, Chrome, etc., which REALLY shouldn’t be the case. (AND it proves that talk of “we don’t need to optimize, these days computers can easily run whatever” simply isn’t true - resources are like a vacuum: junk expands until it fills the void, when the space could in fact have been better used by actually useful things.)
It is a tremendous amount of work to tread a new path. Although at times exciting, it's often just scary, and you have a huge downside risk and only a minor upside benefit. Breaking with even one widely held assumption creates a huge burden, and breaking with more than one, which I think is necessary, is exponentially harder. And there is no single project which benefits from this extra overhead. The impetus has to come from within, and it's very personal, like an artist feeling a need to express themselves no matter the cost.
> The same general public who will pay $15 for a sandwich or a movie ticket—and then shrug and move on if they didn’t like it—are overcome by existential doubt if an app they’re interested in costs one (1) dollar.
Software is like sex. Good requires effort. You have every right to demand money for it but not many people will appreciate it if you do. On the other hand if you use it skillfully you can use it to take oh so much of people's money.
Software these days is far worse: it uses bloated web technology, isn't optimized, and follows the terrible trend of smartphone user interfaces on desktop operating systems. I keep archives of all the old software since everything is being replaced with dreadful, subscription-only web applications. There aren't any new features that I find appealing anyway.
I don't know if it's worse overall, but it seems a lot more disrespectful. I think our devices spend a lot more compute time for the benefit of someone other than the user now.
My one remaining Windows machine seems like it's always spun up doing something in the background, and I'm skeptical that it's for my benefit.
It is one hundred percent the state of technology management. Feature delivery is everything, and reliability and performance are nothing - in fact, they might get you fired if you focus on them.
Baloney. The market is a collection of people. I am a person and I care. The next time someone foists junk on you, tell them it is junk. Show them a better way. For example, I had the displeasure of using Teams to communicate with some contractors recently. I had not used Teams in a while, but knowing MS well I knew just what to expect, and I was not disappointed. Getting onboarded and authenticated was like pulling teeth, and when I joined a meeting with my speakers the participants complained that I was causing an echo. They did not know that echo cancellation, as implemented by any tool fit for the job, was not active. Nobody thought to ask, "Huh, that's strange, we don't have this problem with Meet or Zoom." This is not something users should have to think about or configure in 2023.
It comes down to recognizing junk, caring that it is junk, and having the option to choose something better.
The people using Teams are doing so by IT department mandate. They don't have the option to choose something better, nor is it worth trying to fight the corporate political battles to change the vendor selection decision.
That's understood. I'm not going to insult the workers' intelligence by assuming they willingly chose it. But a company that doesn't let its workers pick their tools is probably a dysfunctional company that doesn't value productivity.
And even the ones holding the purse strings are under pressure to grow ARR and few to no users will pick the faster software, but the one that has the features they need and is good enough.
We have games that run really well and are responsive, since that's where it matters. We can still do it; it just doesn't matter most of the time to anyone but developers who enjoy performance optimization (like I do!).
1. Why treat software as a special product of deteriorating quality? Other classes of products are also getting worse, e.g. food.
2. Start by blaming yourself, then blame others. Can you prove to your manager why spending a week on performance tuning is the best use of your time? If you can, they will let you do it. If you cannot, it is your fault, or maybe even your mistake for thinking that optimising your app's performance is a good idea.
3. A very good chunk of developers don't even know how to measure performance, let alone tune it (a minimal timing sketch follows this list).
4. If technical debt is becoming a problem, it is your responsibility to fix it as part of your next bout of code. Did your manager ask for feature X? Tell them it takes Y amount of time and bake in a slight refactor around X that will make your software better. Don't even ask them or let them know. Every feature will come with some slight tech debt resolution.
5. The economic issue of software is not a problem of what people perceive as being worth paying for. The economic problem is software's inherent incompatibility with how the capitalist market functions. The history of software's monetization is the history of businesses desperately trying to hack around this core incompatibility - which we haven't fully realized yet.
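On point 3, the most basic form of measurement is just timing the code you care about. A minimal Rust sketch (my own illustration; the workload and numbers are hypothetical, and for anything serious you'd reach for a profiler or a benchmarking harness):

    use std::time::Instant;

    // Hypothetical hot path standing in for whatever you actually want to measure.
    fn expensive(n: u64) -> u64 {
        (0..n).map(|i| i.wrapping_mul(i)).sum()
    }

    fn main() {
        let start = Instant::now();
        let result = expensive(10_000_000);
        let elapsed = start.elapsed();
        // Print the result so the work can't be optimized away entirely.
        println!("result = {}, took {:?}", result, elapsed);
    }

Even this crude wall-clock approach answers "where does the time go?" far better than guessing.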
It completely depends on which platform you're writing for.
Single Platform Native Code is Easier than ever:
In the 1990s, you could write a CRUD program in a few hours in VB6, Access, or Delphi. It would run on most computers worldwide, and was likely small enough to fit on a 1.44 MB floppy disk, with help files, an installer, and everything.
This is the software model that the Internet is optimized to deliver. Native code, direct from a programmer. It still works in many cases. GitHub has even made it easier than ever.
However, this is not the model the "market" has been pushed into.
--
The app store - a horrible compromise
If you can stomach the restrictions, you can still write native code, subject to review and restrictions, to be deployed through an "App Store".
I've never done this, but I do know that many things just don't align with their business model.
So, if you're going down this route, you have to price your software, and meet the arbitrary restrictions of the walled garden it's going to serve. You will, at least, be able to run native code directly on the device, so that's nice.
However, when the owners of the garden change their mind, your program may disappear, or you may be forced into changes that are otherwise unacceptable.
--
The web - a descent into madness
If you're unfortunate enough to not be able to write native code, directly deployed, the environments in which software is expected to work have become abysmal. For you, it was unavoidable that things got orders of magnitude worse.
Now, you're expected to put the interface of a program through the internet into a web browser that might be on a desktop, laptop, tablet, or smartphone, or even perhaps a smartwatch.
There's no consistent interface. Thanks to a lack of secure general purpose computing, nobody is willing, nor should they be, to run code from somewhere else, unless it's from an "app store".
--
I don't see things getting better for another decade or so. Eventually a consistent set of GUI widgets and interfaces will be found, and a consensus will be reached about how things should act on a Desktop vs Tablet vs Watch, etc.
The shift to open source software was a great advance, but the shift to installing/downloading dependencies from the internet at install time was a horrible choice. There needs to be a way to lock all dependencies, "security" be damned. You should be able to run a program written a decade ago without having to rewrite it to adapt to all the breaking changes that seem to happen so quickly in open source.
Security is part of the OS, and any application libraries shouldn't be able to make the situation worse. Only capability based Operating Systems can resolve that dilemma.
Software is not getting worse.