Leaner, cleaner, less buggy, more secure, more performant, longer-lived code is obviously entirely possible. If people managed to do it at the dawn of the information age, surely they can do it today, with multiple decades of massive experience, not to mention the incredibly powerful tools developed in the meantime.
If it's not done, it's because there is no money in it. In fact, the opposite.
The counter-incentives to spending time on high-quality software are numerous and affect all sorts of teams. VC-funded startups must get to market first or die. Fake-it-till-you-make-it is their religion. For more mature organizations, too, cost and bloat are not an issue; they are a feature. The bigger the team, the more prestige for the managers, and so on. The costs are passed on to clients anyway.
How come "ruthless market forces" don't rectify this wastefulness? You'd think that codebases of superior quality would earn the keys to the kingdom. They might, eventually, in a competitive environment that is less prone to pathologies, hype, etc.
Kagi is not VC funded, yet we have code that is sub-optimal, a ton of bugs, and odd performance issues here and there. Knowing what I know about software development, I do not think this is avoidable, regardless of the source of funding or the size of the company. It is a function of the complexity of the software, the resources available, and the incentives in place.
What we can do, though, that perhaps VC-funded companies cannot do as easily, is allocate time to refactor code and deal with tech debt. In fact, that is what we are currently doing: we basically pulled the handbrake on all new feature development for 45 days to deal with our main tech issues. The ability to do this requires a long-term thinking horizon. It is very difficult to make that kind of investment if you expect to get acquired next year and tech debt to become somebody else's problem.
Also worth noting: as long as the product is being actively developed, it will always have new bugs and issues. 'Perfect code' is achievable only in a closed-context scenario, where new features are no longer added. (Which randomly brings this weird thought to my head: the only human who no longer makes any mistakes is a dead one; perfection in human actions is only achieved in the absence of life... ok, need to stop there.)
> Which randomly brings this weird thought to my head: the only human who no longer makes any mistakes is a dead one; perfection in human actions is only achieved in the absence of life... ok, need to stop there
Love a good philosophical tangent! Wish you'd expanded on it :)
> If it's not done, it's because there is no money in it. In fact, the opposite
This bears repeating. It's the disease that has consumed software and is making all modern software the worst possible version of itself.
I think it's the VC funding model which has driven the industry this way. Startups get millions in funding, then it's a race to make enough money to pay back those investors which leads to this. The companies have to squeeze dollars from their app as fast as possible which means anything that doesn't have a ROI metric attached to it will not get a second of anyone's attention.
Honestly, there are many reasons for our current situation. One is that companies aren't (usually) run by engineers; they're run by product or business people. Those types don't care about performance, website footprints, smooth scrolling, etc. They care about adding new features, getting users, and doing so as fast as possible. Another reason is that many web developers were taught that software engineering is mashing together a mixture of Node, Ionic, Bootstrap, Vue.js, Angular, jQuery, etc. to quickly make a website. No one was taught how to do things on their own so they just bundle framework after framework into their projects just to do simple things. Finally, it's not like people built highly performant software in the 90s because they genuinely embodied this article's spirit; they did so out of necessity. As soon as computers got fast enough, we stopped having to focus on micro-optimizations just to get our products to run.
> Those types don't care about performance, website footprints, smooth scrolling, etc
they don't care because their _users_ don't care.
I find these discussions are always led by engineers, shaking their fists at clouds. Nobody cares! It doesn't make any money, so you're just whining into the void.
That's like saying only engineers care about appliances that work long past the warranty expiration; users don't so business people value engineer it and everyone except the engineers are happy.
Except that's not true. Users do care; they just don't have a choice in the matter. I have listened to many, many laypeople who have expressed frustration with software. They may not be able to articulate it to the extent that the quote does, but when they're stuck on 3G and they need to load a webpage that keeps timing out, they get frustrated even if they don't know it's because of the footprint, a poorly made SPA, or whatever.
> they don't care because their _users_ don't care.
I respectfully disagree. I think the users care, but they don't make their own choices - their choices are made for them by people who don't care!
Was MS Teams chosen by their end-users? Nope.
Come to think of it - was Slack chosen by their end-users? No, again.
End-users aren't given an option, usually:
1. For B2B the choice rests with one (or a few) people.
2. For B2C the choice is made purely because some product got some traction for reasons unrelated to its quality, and that was enough to force the rest of the users to follow or be left out of the network (Slack, Facebook, major shopping sites, etc).
The majority of end-users did not exercise any choice.
Pretty sure we had people spin up a Slack instance at our company while we were still on Hipchat officially. Slack was a big improvement over what we had before that IMO.
I think often users do care but they have no meaningful way to tell anyone this or otherwise cause change. I hate the banking web app (from a major US bank) that I have to use. It is incredibly slow, buggy, poorly laid out, and occasionally just decides to put up a spinner forever. Who do I complain to about this? If you call and tell a customer service rep, they'll just tell you to refresh and try again, often sympathetically. They know it's terrible and hear many complaints themselves, but they also have no power to do anything about it.
This gives engineers a bit too much credit. We have a tendency to heavily over index on last year's problems. That or exploring edge cases that also make no bloody sense.
Indeed, the very existence of so many frameworks is also very easy to blame on errant engineering.
> This gives engineers a bit too much credit. We have a tendency to heavily over index on last year's problems. That or exploring edge cases that also make no bloody sense.
When I was in university (a long time ago, shortly after the big bang :-)) the informal motto of the computer science faculty and students was "Computer Science: Solving Yesterday's Problems, Tomorrow!"
Now that I think about it, I don't find it that funny anymore.
"companies in an economic environment of sufficiently intense competition are forced to abandon all values except optimizing- for-profit or else be outcompeted by companies that optimized for profit better and so can sell the same service at a lower price."
I don't think software used to be more secure. As computer users we were more trusting, if not to say naive. As more machines connected to networks, we learned a thing or two.
Open access by default. No passwords [1] or short passwords. Then insecurely stored passwords. Everything in plaintext. Input sanitization? Why bother; only I can input data, and I trust myself. Don't get me started on telnet.
I suspect that's another reason why software is more bloated. We started noticing things, how they interact with each other. And once you see something, you can't unsee it. The edge cases we have to account for are growing, not decreasing. There's more hardware to support too.
I'm sure the whole process of writing performant code can be improved. At the same time the bar is being raised faster than we can (or want to) reach it.
1 - And now we're inching towards no passwords again.
Security is indeed an important dimension because it holds the key (pun intended) to ever more important applications. I agree that a more detailed, like-for-like comparison of software qualities across decades needs to be very careful. Applications have exploded in all domains. But then the ranks of software engineers, their tools, and their ability to exchange best practices have also exploded.
The trouble with exponential curves is that a small difference in rates can create dramatic discrepancies.
> If it's not done, it's because there is no money in it. In fact, the opposite.
Exactly my thought. Incentives are not aligned. There are industry sectors where performance and correctness have value. If you care about the software craft in the same way as the author (as I do!) then the best way to enjoy work is to move to such industry sectors.
Not GP, but IME researchers often run big, complex simulations, and often have to care deeply about performance. Correctness, less so, unfortunately. A lot of research software is written by researchers themselves with very little understanding of good software development principles and practices, and it ends up messy, overly complex, and buggy. That said, being a software developer within research can be the best of both worlds. You should be able to demonstrate (using perf tools, and comparing with existing comparable software) that you're squeezing a lot out of the hardware, while being able to construct maintainable, tested software.
The only one I have hands-on experience with is financial trading. My workplace is expanding into a new asset class and we're using Haskell for this. The company next door from us is a crypto-trading company using Rust.
I can only speculate about other sectors where it might make sense:
- Hardware synthesis (i.e. the software used to design silicon chips);
- Aerospace;
- Defense;
- Smart city devices.
Not PP, but I used to work for an automotive subcontractor company, and I've heard a few stories about fatal car accidents that led to lawsuits which proved that the car design was the cause of the accident, yet the car manufacturer just paid "damages" to the relatives (or settled out of court, maybe) and never bothered to change the design. Apparently, it was more expensive to reconfigure their production pipelines than to pay for an occasional death.
That said, this is probably what any big enough company would do. So your point still stands, maybe car manufacturers are no different.
Quite simply: you don’t ship code, you ship features. You don’t ship automated dependency injection. You don’t ship elegant abstractions. You don’t ship cool compiler tricks. You ship new stuff for customers that they’ll pay for. Your high unit test coverage that makes any refactor a painful slog gets in the way of shipping features far more often than the TDD zealots are willing to admit. Most of the “best practices” in the industry are unrealistic dogma created by people at post-PMF, entrenched companies that no longer need to do much to print money.
Only when your refactor only touches business logic underneath the public API. If you do a true rewrite that breaks tests, getting your coverage back up to what it was before the refactor just delays your ship date. And many engineers WILL get green-bar syndrome once that coverage % goes down, losing sight of real business goals (staying alive and making payroll). I’ve seen lots of codebases where obsession over unit tests led to tests that were orders of magnitude more complex than the system under test. This is incredibly common in multi-process or multi-device software, where there’s some sort of custom protocol over an IPC or network or RF layer. You’ll invariably see massive amounts of effort put into mocking complementary parties inside a unit test framework, when the sane thing to do is to just write a 5-line bash script for an integration-level test. Or — brace yourself — just do a manual test and ship it today.
Eh, not breaking APIs other teams or customers depend on is very very important. I think broad test coverage is justified there.
Of course there are always exceptions, and testing philosophy should bend to the goals of the organization, not the other way round.
And if a five line bash script is a more suitable test than a unit test in the same language as the application, then that's what you should do. Assuming you have a good way to automate it as part of an automated build and deploy process, of course.
Engineers understand that the entire discipline is about tradeoffs. It would be extremely easy to build reliable, secure, performant, robust, etc software given infinite time and infinite budget. Engineering is about working in the real world, where you don't have infinite time or budget.
Is it correct to build a piece of software that runs 50% slower, but can be built in 6 months instead of 12? The answer is "it depends" and good engineers will seek to understand the broader context of the business, users, and customers to make that decision.
A conflict as old as time. Unfortunately, it's the MBA thinking that pays the company's bills.
An ideal world isn't one in which either the MBAs or the engineers win. It's one in which they coexist and find a reasonable balance between having more useful features than the competition and not expending too much effort to build and maintain those features.
No, it’s how anyone that’s actually worked in an SMB / non F-1000 / non household name company talks. Most “regular” companies need to focus on getting features out the door.
You are not anything until you act as something. Acting like an MBA makes you virtually indistinguishable from one. I don't care what you studied, it's what you're doing (or not) with it.
Your entire stance is no true Scotsman with ad hominem. You realize almost every YC company operates in the way I described until they become entrenched in their domain, right? Do you think pre PMF companies are bickering about unit test coverage? If they are, they’ll fail
It's really a basic point that I can call myself whatever I want, but if I walk and quack like a duck, I'm a duck.
YC is a VC firm which is a very specific context that demands things like revenue and growth to be so prioritized as to be implicit and core features of the working ideology. That's not really a good model for software engineering when it comes to social benefit -- it prioritizes something else. It can demonstrate incidental social benefit but that's not actually an incentive that's built into the system that YC operates in and reflects internally.
There are Scotsmen out there, they're just not part of this discussion. That doesn't make anything I said a "no true Scotsman" argument.
Shall I prepare the postage for the letter in which you'll call John Carmack an MBA? Should we send another to Chris Sawyer? I heard he didn't even write a formal design doc for Roller Coaster Tycoon!
You've either intentionally cast my argument as something it's not or you need to read a little deeper before replying.
Nowhere did I say sloppy code is the problem, what I said was justifying it in terms of profitability is the problem. There's a difference between cranking something out because you're excited to show it and rushing through an implementation because of this or that externally defined money-based KPI.
i think ordinary people frequently feel frustrated with low-quality software, but it seems to me not necessarily in a way they can consciously articulate. that is, they probably can tell when software is frustrating to use, but they don't notice a clear difference between:
- slow, high-latency software
- poor resource usage slowing down the whole machine, including from unrelated software
- bad user interface design
- bugs
- intentionally bad/manipulative patterns
if the customers can't even really perceive what it is they are buying, it's not surprising that market forces aren't solving the problem.
i'm not a user interface designer or researcher so of course this could be totally wrong -- just an impression from informal observation
your comment made me think that despite the main thrust of my comment, this trajectory is not completely money driven [1]. the incredible journey of the hardware side of technology (cpu speed, memory size, network bandwidth etc) has also played a role in this profligacy because it can cover for a lot of less optimal design
this is something that happens more widely in the use of resources: you build more highways and instead of relieving congestion you get more people commuting
[1] my open source and volunteer-built linux distribution (will not name names) routinely (like almost daily) prompts for GB sized system updates.
I'm not sure how well "ruthless market forces" are allowed to operate in software. You definitely get some disrupters but for the most part I think a lack of competition is part of the problem.
Microsoft Teams is one of the most buggy, unreliable pieces of software I know, but it's still the market leader in terms of share because of Microsoft's near monopoly of the office suite software world.
It may be worth considering that the market is not for messaging and workspaces, but mainly for integrated business communications systems. If the integration is more valuable to customers than Slack or Zoom's smooth operation, Teams is going to win out.
At a data-centre scale they might still. Even marginal gains in efficiency translate to large savings in power consumption, heat management, and waste. If tech companies are properly taxed and regulated for these externalities, like say consuming billions of litres of fresh water during a drought in order to train a ML model, there would be a lot more pressure for this sort of efficiency.
This is where I normally end up in the software is way too slow today discussion.
Everyone claims to care about the environment until it impacts them even slightly. It's not even hard/time consuming to write good code. People are just overwhelmingly selfish and lazy.
Software devs consistently contribute to the environmental problems that many software devs then complain about and pretend is only a redneck issue. The reality is that we have an incredibly serious idiot issue and there isn't a solution due to the scale of the problem and the corruption preventing meaningful change.
An article the other day pointed out the problem well, too. Though it was related to the increasing lack of people who understand the code underneath the current flavour-of-the-month bullshit framework.
> Software devs consistently contribute to the environmental problems
It doesn’t help that schwabists try to provide indexes such as “Java consumes more than JS”.
This kind of sentence, measuring the immeasurable (as if a square meter took more energy than a liter), communicates “We’ll dominate you with power while understanding nothing of what you do, and tax you for using Java”. I bet it’s Facebook’s PHP lobbyists who came up with that.
If heating is a problem, then by all means, tax CPU time. But I already know the tax won’t matter because software provides such an immense value to society. We already pay average engineers $800 a day! But we shouldn’t try to get rid of a language.
If they knew what they were doing, they’d certainly ban NPM. But of course there are already GitHub actions and Sonar extensions publishing the CO2 consumption index of Java programs...
I wouldn't take the approach of banning languages.
But certainly Microsoft shouldn't be allowed to carelessly use a public, scarce resource like freshwater on such a scale as they did during a drought in order to train a model[0] they didn't have a use-case for yet? They should at least pay for the use of that resource to help the local community that now has to deal with the consequences.
The company claims it will "replenish more fresh water than it consumes by 2030," somehow... but that's too little, too late. And who's to say they will keep that promise? There's no consequence to them if everyone forgets about it and they never follow through. No better time than the present to course-correct.
Resource-extraction companies are like this too. They'll exploit the environment and pollute like crazy unless they are forced not to. They will fight tooth and nail to avoid regulation since that slows or limits profits. But they don't care about people or the environment. Neither do technology companies.
Small gains in efficiency at the level of user-space code have been shown to translate into marginal gains in profits in domains such as ads and securities trading. Data centres also look for efficiency gains here to reduce their operating costs where possible. However, I suspect we're getting close to the limit here in terms of gains and motivation. There is still a significant amount of "throw more machines, air conditioning, and fresh water at the problem until it goes away" thinking, since the environmental impacts of those decisions have no consequence for the ones making them.
There's no need to measure the energy usage of the JVM if it's doing useful work. However there probably should be more guidelines for where it's appropriate to set up data-centres and taxing them for the resources they use and pollution they create.
yep. but also mundane and cyclical factors like the much higher cost of money after decades of ZIRP are likely to "encourage" people to do more with less.
There's hardware bloat too. Old fart example: In the days of analog SD TV, the "foolproof" way to feed video into your TV was an RF modulator. The "proper" way was via direct video input of some sort. The "even more proper" way was an RGB type interface.
Except by the end it hardly mattered any more. RF modulators and tuners had gotten so good, that perfectly adequate video resulted from the RGB -> composite -> RF -> composite -> RGB chain. Bloat, but who cares?
In an automobile, the "proper" way to charge a phone is to have a 12VDC->USB type of adapter plug. The "bloat" way is to have a 12VDC->120VAC inverter and then plug the phone's existing wall charger into that. More circuitry, but it gets the job done and it's cheap.
If you like designing electronic gadgets, the "proper" way to flash an LED is two transistors and a couple of resistors and capacitors to build an astable multivibrator. The modern way is to program up a small 8-pin microcontroller. A CPU running thousands of lines of code just to blink an LED? Who cares.
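(A minimal sketch of what that "modern way" looks like at the source level, assuming an AVR-style 8-pin part built with avr-gcc and avr-libc; the pin and clock values here are placeholders, not from any particular project:)

    #define F_CPU 1000000UL          /* assumed 1 MHz clock; needed by util/delay.h */
    #include <avr/io.h>
    #include <util/delay.h>

    int main(void) {
        DDRB |= (1 << PB0);          /* configure placeholder pin PB0 as an output */
        for (;;) {
            PORTB ^= (1 << PB0);     /* toggle the LED */
            _delay_ms(500);          /* roughly half a second between toggles */
        }
    }

A dozen lines of C at the surface, with a whole compiler, runtime, and clock configuration riding along underneath.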
If your computer/tablet/phone are reasonably recent it's the same for software. It's only when your gadget is a few years old that you really see the bloat as formerly performant software "degrades" in later versions.
"Except by the end it hardly mattered any more. RF modulators and tuners had gotten so good, that perfectly adequate video resulted from the RGB -> composite -> RF -> composite -> RGB chain. Bloat, but who cares?"
I think software is so complex that the house of cards eventually falls and ruins the company. Then everyone is mad and the workplace becomes toxic. Most companies fail so eventually the market does rectify it but it can be a long slog until that happens, at least when there was so much money being thrown around.
But the quality of the codebase definitely had less effect on the company’s success than you’d expect, like you said. The costs are just shuffled down to the customers and developers while everyone else gets rich.
> ...with multiple decades of massive experience...
You're assuming that the people with this experience have a) successfully passed it on, and/or b) that they're still coding.
We don't have an apprenticeship model in software development that would lend itself to such a thing. Each new crop of developers has to relearn the lessons in whatever new stack they happen to encounter the issues. It would be like each new generation of carpenters having to relearn their trade because the wood had entirely different characteristics every time someone went to use it.
That experience you mention applies to the general cases, but again, the people that have it may not be the ones doing the work hands-on any more.
> VC-funded startups must get to market first or die. Fake-it-till-you-make-it is their religion.
Sure, that's valid. Though most of the world is not comprised of startups. That's mostly a USA thing with a few small exceptions out there as well.
> How come "ruthless market forces" don't rectify this wastefulness?
They do, though people mistake those forces for "incumbents with enough power to influence the market forces". So we're witnessing the reality which they dominate.
I had plenty of examples in my career where better code helped the company long-term, but as we all know, there must be a leader somewhere who understands the tradeoffs and gives the green light every now and then.
Ask yourself whether you want to be remembered for selling millions of copies next year, or leaving behind something that in time might be cherished by generations.
Look outside software and see what things have been deemed quality and why.
Usually the people doing quality make very few compromises, and often they don't do it on purpose.
Quality solely starts with yourself. Only you can guarantee within your own merits and experience what quality is.
Explaining quality is therefore difficult, because it is so determined by personal traits and experience.
Many of us have asked ourselves that, and the answer is clear: we want something that will truly last.
But short-term interests go explicitly AGAINST that. If I start demanding +20% more time to deliver quality the company will likely start a campaign to replace me in 3-6 months.
From a certain age (or quantity of burnout) onward, we prioritize stability. Ironically, it's only then that we can truly provide good quality, but by that time we're much more risk-averse.
And the answer is the same as the answer has always been. When the market fails, you need to look to law and policy. It's really that simple.
We've done it a bit, Apple's little wrist-slap for slowing down the iPhones, for example.
We just need more of it. I know it's all anti-Libertarian or whatnot, but "more regulation" has worked quite a bit in the past and present. Just do that.
If the goal is to make money, you will make money. If the goal is to make quality software, you will make quality software. Sometimes those two goals are in alignment, most of the time they are at odds.
In my experience, when it comes to picking two out of quality, cost, or speed to delivery, businesses always choose speed and cost.
I don't like it, but I just grew to accept that.
The reason why physical engineering seems focused on delivering quality is because of strict regulatory oversight, which for better or worse, we lack in software.
good quality software (say, based on well designed, documented, tested etc. building blocks) can lower costs and improve speed to delivery in the longer run (less need to refactor, fewer bugs, easy to reuse, extend etc. etc.)
the trouble, empirically speaking, is that this "longer run" is not close enough to weigh on decisions :-)
Nah, for 98% of my career it has been managers. I was not always a good programmer, obviously, but even with that, most of the crappy code I wrote was attributed to having to rush.
I have a still barely usable HP MS200 all-in-one machine. I got it cheap at a garage sale in 2017. It wasn't fast, but with Linux on it, once Google Chrome was finally loaded, it was OK, even to the point of running the web version of Skype for fullscreen video chats, certainly for watching fullscreen Youtube. It went off to the in-laws as a Youtube watching and Gmail station.
Recently it came back to me. And with the old, 2017 vintage software on it, still worked as it did then. But before making it a kiddie computer, I installed FC38 and the current version of Chrome.
But Youtube videos were now "slide shows". No amount of fiddling with the settings made them play right. Finally gave up and changed the RAM from 2GB to 3GB - I just happened to have the right (laptop) memory card to do that.
And that brought it back to the old, barely adequate (720p fullscreen without noticeable skips) performance. 1.5x as much memory, an extra gigabyte, to do the same thing as six years ago.
"So get with the program and buy an adequate machine! Don't you know that 16GB is the absolute minimum to get anything done these days?" Sure - I have machines with 16GB+ in them. Even on the crappy machine though, Google Chrome is showing a memory footprint on the order of 30GB. I'm sure most of that is mmap'ed files; it sure isn't RAM. But 30GB. For a mostly idle web browser.
Youtube is a great example of the performance difference. Videos have risen in quality because of modern codecs, but old or cheap machines need to decode them in software (sometimes in Javascript, which is terribly wasteful but works on every machine) because they lack a modern GPU. On the other hand, for most devices, the increased battery life, lower temperatures, and smaller RAM/disk space requirements are obvious improvements. Youtube could leave duplicate files with the old encoding on their servers (if you use an alternative frontend for Youtube, you can often see the old file formats still being available for old content!) but that's just wasted space with most of the world visiting from more recent devices.
Most RAM usage increases in Chrome have come from advancements in the sandboxing architecture. Shared memory and process space could be used to attack and bust out of sandboxes, so more isolation was added. I'm sure there are some more useless features leading to a jump in RAM usage as well, but the really big one always seems to be the fact that Chrome spawns a new, independent process for every tab/extension.
I've also noticed how much impact ad blockers have these days. With the web becoming ever shittier, effective adblockers have become harder to make and need more resources to do their job.
Software doesn't get slower just to make your day harder. In many cases, slow software is a result of replacing dangerous hacks with good implementations, and of changing requirements. FLV videos just don't cut it anymore, and I doubt h264 will be around in five years with h265 and AV1 making their way to more and more devices.
Excellent remark. For each "I needed an extra gigabyte of RAM in my computer to do the same thing as six years ago" complaint there is normally a valid reason like this. It's never exactly the same thing as six years ago, really.
Oh, and the same applies to the original blog post, too.
AV1 is a massive flop in my opinion. What was supposed to be the be-all-end-all open-source video format of the future has only just recently started to pop up here and there. There are tech demos and talk of Youtube etc. using AV1, but in the same breath they say it's mostly for really low bitrates, where AV1 gains the most. Also, I'm sure no one is looking forward to burning CPU time (or, very lately, GPU time) on encoding 4k+ streams. Is that even financially viable? You save 30% on bandwidth and spend 10000% more on server HW time. (I'll admit that these numbers are pulled from my ass, but they're based on a few-years-old knowledge of AV1 encoders being something like 350-3500 times slower than h264 encoders, depending on optimization.)
The entire "there was no usable encoder and no hw decoder for years" thing didn't help, and even now it's all still slow and complex. Who has time and money for that?
AV1 is just too complex for what it produces in comparison with the good old h264 or vp9.
AV1 performs just fine; the problem is that it started gaining popularity at a time when new hardware exploded in cost, especially for computers. h265 has been in computer hardware for a while, but as I found out trying to play some special h265 stream on my laptop on the train, software decoding will easily peg the CPU as much as AV1 does.
h265 hardware decoding came out in 2015 with Nvidia's 9xx series, while AV1 took until the unaffordable Nvidia 30xx/Intel Xe/RDNA 2 generation to become available. Encoding support took even longer (RDNA3/Xe 2/Arc/40xx). In a few years, I'm sure it'll be as popular a format as h265 is today.
Web browsers in particular have become operating systems unto themselves. The "modern web" has so many features, often implemented in half a dozen competing ways... Just text rendering on modern hardware can be a HUGE lift.
Modern toolchains are all about "not reinventing the wheel" -- but when each dependency has picked a different version of said wheel, just pulling in a few dependencies leads to 6 different implementations of the same low-level features.
This is one of the reasons the announcements about WebGPU made me cringe. Yeah, just what I need, more unnecessary complexity added to webpages. Especially when there's seemingly a chance people will use them to mine crypto on my GPU while I'm on their page.
I forgot to mention this: Every time I install a new Linux system, I give the "factory" UI a chance before, inevitably, giving up and switching to MATE. Well, Gnome Shell was actually pretty crisply responsive on this clunker. And it has an "app store". And that has Chrome in it. Well well! But that installed a FlatPak. Bloat, bloat, bloat. Luckily Chrome is still directly "natively" installable as an RPM that actually uses the OS's shared libraries.
Sure it’s using system ones and not the vendored ones? I mean chrome as packaged by Google, not, say, Fedora’s Chromium (which is bent and coerced to use system libraries as much as possible).
Good point, so I checked. It has a few private libraries but for the most part uses the system ones. I think I can paste this output without leaking anything personal...
Um, okay then. It's just that Chrome's source tree has a hefty 3rdparty/ directory with everything in it, and it's not easy to say when it falls back on rolling its own when building.
I remember needing 8MB to run a graphical web browser with Linux/X11 (I believe it was Netscape Navigator). 8MB was sufficient, 16MB a bit more comfortable.
This article doesn't even touch on the main pain point: bugs. Virtually all software is just as buggy as can be. I dread every time I need to take a piece of software down a non-happy/non-common path. It almost always fails. Working around and dealing with bugs is just a normal, every day part of modern society now.
Simple example, I sold my car to Carvana the other day and just baaaarely pulled it off using Chrome and Firefox. In Chrome the upload image wizard would get a JS exception. That part of the app miraculously worked in FF, but virtually the entire rest of the site was mired in issues as it's obvious Carvana devs don't test in FF. I pulled off the transaction by bouncing between the two.
Even worse, most non-technical people think they did something wrong when they encounter a bug.
Software that is bloated and slow but stable and rock solid? I'd gladly take it at this point.
On one hand I cannot believe what we have actually works as well as it does. Duct tape and bubblegum everywhere at all levels, and it actually still works. I'm amazed every day at what humanity has accomplished with that in mind. On the other hand, I can't help but notice just how damn buggy everything is, and I'm not sure if everything is actually more buggy or if I'm just losing patience with big-business software development as I age, having seen how the sausage is made. I can't help being angry at some illusory product person telling people to ship feature X or else, even when it has a glaring, easily caught bug in the UX.
It can be hard to tell what is "duct tape" and what is "speed tape" in software these days. An aircraft mechanic doesn't need to know the structural differences between the two - just use speed tape to be safe. Similarly, a pure programmer doesn't have to know the difference between ufw and Windows Firewall - just use the latter and move on.
However, an engineer (mechanical or software) had better understand the differences in both situations.
You're 100% right. Ever since watching the Jonathan Blow talk on the end of the world I have started paying attention to it and it's amazing how many absolutely shitty software experiences we just accept on the daily.
I understand all the points about financial costs and opportunity costs and pragmatism and the rest, and I partially agree with it, but it's hard not to sometimes feel like we've accepted living in a half built world.
Agreed. I remember as a teenager when the first generation Macbook Airs came out, my friend realized you could reliably crash them (brand new, the demo models sitting out at the mac store) just by opening all the apps on the dock in quick succession. Took just a few seconds of clicking and the machine would crash and reboot.
And it's not like things have gotten any better. Now I'm an adult with young children, and if one of them gets their hands on my phone or laptop, they seem to be able to lock up, crash, or freeze modern devices just as reliably. These kids are not physically damaging the phone in any way; they're just pushing buttons too quickly or in unexpected orders. That's the state of modern tech that we're in.
To be honest, I can understand why this hasn't been fixed. As mentioned in the article and throughout these comments, we've just come to expect to need to restart every now and again. In this case, people are likely to blame such a problem on the child, and a restart fixes the issue anyway. I just would've hoped for better by now in the lifecycle of these operating systems.
When the networking stack got worked out by Windows 98, everything was pretty smooth sailing. Until the drive-bys started. Then we moved to Windows NT, and complexity (albeit also security) increased significantly. Windows XP was stable as fuck until the rootkits started flowing. We moved through Vista, hopefully avoiding it altogether, until we landed at the fucking dream that was Windows 7. I would have stopped right there and been happy.
Then something got pushed over the cliff. Windows 10 and the UWP...like seriously WTF!? Satan's very own dumpster fire.
That may be true. But software is also much more ubiquitous and essential these days. At least in the 90s you could be confident your car wasn't buggy.
Yeah, and I shit you not, many vehicles have had software updates to solve oil consumption issues. People walk away from those service encounters just shaking their head wondering how in the fuck software was causing their car to lose/burn oil. Truth is, several different reasons.
The number of vehicle software updates is staggering.
American cars in the 90s were terrible. The domestic automakers wouldn't be in business today if people actually bought based on reliability. Now you can buy almost any brand and reasonably expect it to last over 100k miles.
But if the software keeps getting modified and expanded, it will not be stable. Bugs will be added. And that also causes bloat. I think they go very much hand in hand (and correlate very much with the frequency of upgrades... usually upgrades that no user asked for anyway).
Not sure I agree with that. It's possible to add new features with minimal or even no bugs. It requires good engineering, good test coverage, and often good manual QA and even alpha/beta rounds. But it can be done. Is it done today? Sure, some organizations absolutely follow this approach. I would guess, especially in the commercial space, these things are often lacking.
I disabled my ad blocker and cleared out all caches and it still persisted. Even if it was, the JS exception was unhandled. Carvana is built in React, the absolute bare minimum would be an ErrorBoundary at the top of all flows (in this case, a modal).
I didn't say it in the post I wrote about C [1], but this is a big reason why I use C: I will have a hard time bloating my software. I can add features, yes, but adding a singular feature struggles to add even 100 kb to the executable.
I don't work for anyone right now, but I do have a "work" machine. This machine is beefy.
But I still run Neovim and tmux instead of an IDE. [2]
I don't run a typical Linux distro; I use a heavily-modified Gentoo, and that includes using OpenRC over systemd. [3]
I don't use a full desktop; I run a TWM called Qtile. [4]
All of this is so my machine is not bloated. When my machine boots up and I've just barely logged in, it is running only 40 processes (including the terminal and the htop I use to check).
As of right now, I'm engineering software. Truly engineering; I am spending the effort to effectively mitigate all of C's problems, while keeping the software lean and fast. I hope to someday build a business on that software.
I guess I'll see if there even is a market for non-bloated, sleek software anymore.
>, but this is a big reason why I use C: I will have a hard time bloating my software.
Your perspective is interesting because I'm old enough to remember when the C Language was considered bloat compared to just writing it in assembly language.
Examples of 1980s programs written in assembly were WordPerfect, Lotus 123, and MS-DOS 1.0. SubLogic Flight Simulator (before Microsoft bought it) was also in assembly.
Back then, industry observers were saying that MS Word and MS Excel being written in "bloated" C was a reason that Microsoft was iterating on new features faster and porting to other architectures sooner than competitors WordPerfect and Lotus 123 because they stayed with assembly language too long. (They did eventually adopt C.)
I see this "bloat-vs-lean" tradeoff in my own software I write for my private use. I often use higher-level and "bloated" C#/Python instead of the leaner C/C++ because I can finish a particular task a lot faster. In my case I'm more skilled in C++ than C# and prefer leaner C++ executables but those positives don't matter when C# accomplishes a desired task in less time. I'm part of the bloated software problem!
Is there not a minor hint of irony here, in that when I think of bloated software my mind goes almost immediately to everything Microsoft creates (except VSCode, which is somehow the most performant software they've created despite being written in a language that is theoretically slow)?
> Your perspective is interesting because I'm old enough to remember when the C Language was considered bloat compared to just writing it in assembly language.
Both perspectives are correct for their respective times. Compilers were much dumber in the '80s. These days, your GUI desktop program written in assembly language would probably run slower than one written in C and compiled with modern gcc at -O2.
C probably was bloated then. But today it’s the backbone of everything, whereas python will never be that, at least on hardware at the levels we can foreseeably build.
Languages like C brought a massive benefit to accessibility. Devolving “software is slow” to “yeah but C vs assembly” is such a ridiculous crutch argument. Assembly is not remotely approachable to the majority of programmers. C, rust, zig, c++, Java, C# are all approachable languages that are fast and have great fast libraries and frameworks to work with.
All I can see in the “I can finish the see sharp program faster” argument is that “python vs c++.jpeg” from the 2000s, where half the python was importing libraries but they wrote the C++ from scratch, and everyone who knew nothing about C++ passed the image around like it was some hilarious joke at C++'s expense.
I would say truly engineering would be contextualizing these decisions along a spectrum of tradeoffs and positioning your project on that spectrum according to the constraints of its creators and users. That may put you to one extreme of the continuum, but that doesn't mean people whose constraints land them elsewhere are not doing "true engineering."
"I am going to build the strongest, lightest bridge in the world" is a marvel. "I am going to build a light-enough strong-enough bridge for a cost my client can afford" is engineering.
Hear! Hear! Engineering is a practical discipline - and it is a discipline. It's building for purpose. I appreciate Tonsky's point, and agree with much of it, but engineering means turning out into the world things that are fit for purpose, however that's defined. Purpose is +100 and all the rest is something of less than 100. Mathematics is mathematics, science is science and engineering is engineering.
I agree, except that engineering is "fit for purpose at the smallest cost."
For me personally, I am banking on having multiple business customers for my software. If I consider the cost amortized, I can spend a little more to engineer something better.
But yes, it still has to be fit for purpose. My customers will provide a document that explains what they want to do with my software and on what platforms. If I sign a contract, that means I am committing to supporting their purposes on those platforms.
Trade-offs also get a bit meta with "standardization" one way or another. Sure, you could get peak performance if you made custom-length and custom-width screws, but it boosts costs and makes things harder to maintain with the one-offs. Going with standardization is itself a tradeoff, which means a monoculture breeds a pattern of ignoring the tradeoffs.
The ethical considerations and professional obligations that go into creating a bridge aren't exactly translated into the act of creating software. If we mirrored those in a bridge-building analogy, the "cost my client can afford" would mean the engineers would often find themselves greenlighting a bridge that would fail in sensible conditions in our current world.
No. A civil engineer's response would (should?) immediately be that the client cannot build the bridge for that cost, simply because it will kill someone as a first-order effect. That's the ethical consideration. The downsides of janky rendering or poor memory management aren't quite the same unless you actually add up the power wastage and apply it as an impact on global warming. But those are not first-order effects, and although they should be, they don't have the same influence on the solution.
There's plenty of software that can kill someone as a first-order effect. If you allow second-order, there's even more.
Avionics. Air traffic control. Industrial control. Automotive software. 911 dispatch. Medical instruments. Medical information. Military command and control. Mechanical engineering analysis software.
That's not what I'm saying. If a web page takes 2s to render rather than a theoretical minimum of "immediately", for most software that most people write, no one will die, or be hurt, or even feel too bad as a first-order effect. Obviously, if one is writing flight control software, or somesuch, then the stakes are higher, and one ought to be aware of that. That's the practicality calculation. If you're going to kill/hurt/really upset someone, then what you're doing is not fit for purpose (unless that is the purpose :) )
> As of right now, I'm engineering software. Truly engineering; I am spending the effort to effectively mitigate all of C's problems, while keeping the software lean and fast.
Can you expand on that?
Which exact problems of C's are you working on solving? Do you mean the language itself (writing a new dialect of C), or the ecosystem (e.g. impossibility of static linking with glibc)? Or something else entirely?
Since you care about these things, why not try a language that has scoped threads and bounds checking built in, like Rust or Ada?
Actually, I had written more about Rust and Ada, but then I read your blog, and it says you like C better anyway because you find it more fun, so nevermind.
You might also like to use D in its "Better C" mode, which at least offers bounds-checked arrays and slices, as well as some other features, while being very similar to C.
C++ is huge and highly interconnected. No matter whether you like it or dislike it, and how experienced you are with software development, you either invest years into becoming good at it, or stay away from C++ jobs.
If I were to go back in time and offer my younger self career advice, I would probably say "just stick with C++". It was the most popular programming language when I started my career. It will be the most popular programming language when I end my career.
Think about it. Is there any widely-deployed OS not written in C/C++? Any web browsers? Spreadsheets? CAD software? Text editor?
Having as much experience as you do and still conflating "widely used because it's super good" and "widely used because of network effects and inertia" is really puzzling.
Surely you understand that your argument is disingenuous, right? Nobody wants to start over on a new OS or a browser in another language due to the huge costs that no corporation nowadays is willing to shoulder (due to their own interests). Otherwise a lot of people would very definitely write OSes and browsers in Rust, D, even V, Zig, Nim, and a good number of others.
C/C++ were there first, that's all there is to it, it's that simple.
I think there might be a market mostly with other engineers like you. Lots of people seem to really love this stuff, look at the forth crowd and how much they adore the language.
For me... I don't notice any slowness or excessive battery drain in VS Code, so I just use that. Perhaps there's something deeply wrong with my psychology, but I'm just not very interested in simplicity for its own sake. It's cool, but not an approach I'd want to use every day in real life.
I wonder how much of this is just because when a project manager comes along and says "as a system administrator, I want to be able to log all of the keystrokes of the users and reports who's slacking to the boss", you say, "ok, that feature will take about two weeks to implement" and they can't argue with you because it's C so they just go away and leave you alone.
Besides, I struggled with sysadmin-type tasks until I bit the bullet and installed Linux from Scratch and then Gentoo. It was one of my best educational investments.
Not to pick on you—this is something I and most other engineers also struggle with—but I think this sums up the reason for most of the complaints in this thread. That is: builders who prioritize marketing outcompete those who prioritize engineering quality.
Making something people want enough to pay for is very difficult; if you are trying to do that while also imposing a bunch of other constraints on yourself that have no clear marketing benefits, your odds of success go down a lot.
Engineering excellence can sometimes be used successfully as a form of marketing in itself and you could well pull this off, as your content is super engaging and well-written.
But I would suggest to every engineer to consider Chesterton’s fence. Why is all the winning software in every category slow and bloated and seemingly unconcerned with engineering excellence? Is it because everyone involved, the engineers that build it and the customers that buy it, are all technical philistine morons? Or is it because the products that win in the market prioritize… winning in the market, and all their competitors that don’t prioritize that for whatever reason fall by the wayside?
> But I would suggest to every engineer to consider Chesterton’s fence.
I agree.
In fact, I believe I have considered Chesterton's Fence; I have a plan.
I'd like your opinion on my plan.
> Making something people want enough to pay for is very difficult.
Yes, absolutely.
To fight this, I have spent years, 3.5 years, figuring out what people hate about build systems, and I have designed mine to address those.
While that doesn't guarantee success, I think it may improve my odds. Do you agree?
> That is: builders who prioritize marketing outcompete those who prioritize engineering quality.
Yes, and it sucks.
I hate marketers. I hate marketing. I hate the fact that I'm going to do it. But I have to.
So my marketing will be focused on giving people something for just paying attention.
> Engineering excellence can sometimes be used successfully as a form of marketing in itself and you could well pull this off, as your content is super engaging and well-written.
Thank you for your compliment! I hope so, and my plan is to emphasize this.
I'm going to post four articles on HN, each a week apart, and each will be designed to give readers something substantial, with a blurb about marketing at the end that will be clearly marked as marketing.
* The first will be new FOSS licenses designed to better protect contributors from legal liability.
* The second will be a post on language design and psychology.
* The third will be a deep dive into Turing-completeness and what it means.
* The fourth will be the source code as a Show HN, along with an offer for early adopters.
> Or is it because the products that win in the market prioritize… winning in the market, and all their competitors that don’t prioritize that for whatever reason fall by the wayside?
It's awful, but you are right.
So this marketing push is all I will do for four weeks: preparing and posting, and responding to comments. This is when I will "prioritize winning in the market."
It's a start for sure. But personally I'd suggest changing your mindset a bit from the idea that you'll come out of the code cave for a month to grit your teeth and do marketing. Granted, this is much better than just staying in the cave. But really to run a successful business I think you need to accept at a deep level that marketing, making money, and growing the business is now your main job, and you will permanently need to spend at least as many thought cycles on that as you do on programming.
To wit, while posting those articles and a Show HN sounds like a good plan and you should definitely do that, what will you do if they all fall flat and get no traction? It's a distinct possibility, and I hope you won't just give up.
I'd think more about what you're going to do every week for the next year to get users rather than putting all your chips on an HN launch that may or may not pan out. Even if you do rock the HN launch, you're probably going to have the "trough of sorrow"[1] to contend with after, so I'd think more about how you can make marketing a repeatable part of your rhythm in the long run.
> while posting those articles and a Show HN sounds like a good plan and you should definitely do that, what will you do if they all fall flat and get no traction?
You got me. I was planning on giving up. :P
I am in a place where the only cost to switching projects and trying again in three years is the opportunity cost, so because I'm bad at constant marketing, that's what I was going to do if I got zero traction.
If I got only some traction, I'd weigh my options.
I was only going to worry about long-term weekly marketing if the launch went well enough.
Because I'll be frank: I have no idea how to do constant marketing that doesn't bother people or waste their time. If I would waste their time, I'd rather just throw my own time away and switch projects.
> Even if you do rock the HN launch, you're probably going to have the "trough of sorrow"[1] to contend with after
Good blog post, and yes, I agree. I am expecting the trough as I build up the MVP.
The issue with your strategy imho is that failing to get traction with an HN launch is not that much of a negative signal. Getting to the front page is a bit of a crapshoot and not achieving it doesn't necessarily mean no one is interested in your product--it could mean you got unlucky or just need to iterate on your messaging.
If you haven't gotten it in many users' hands yet, it might be a good idea to try recruiting like 50-100 users first, either one-by-one through email reachouts or in smaller communities where it's less hit-or-miss, like niche subreddits. If some of these users like the product and stick with it, start giving you feature requests, etc., that tells you that you're on to something. Conversely, if you can't get even a small group of users to try it and stick with it using that approach, it's much more of a negative signal than a failed HN launch and probably indicates that something needs to change.
Whatever route you decide to take, I wish you the best with it!
Also:
"I have no idea how to do constant marketing that doesn't bother people or waste their time. If I would waste their time, I'd rather just throw my own time away and switch projects."
That's noble of you, but I would cut yourself some slack. Ideally you would market in a way where you don't bother people or waste their time, but getting users often requires trying things where you risk getting close to that line. Sometimes you might cross over it, but that's just something to learn from.
Trying to market a product while never bothering anyone the least little bit is a bit like trying to be a comedian without offending anyone or to find a romantic partner without enduring some awkward dates. I think it just goes with the territory.
I think if you succeed, it will be because other developers are investing in you personally. Because they like what you write and how you think, and thus trust you to write good software.
I think patio11 is someone who has used a similar marketing strategy to great success. People hire him because of the quality of his writing demonstrating his understanding.
> While that doesn't guarantee success, I think it may improve my odds. Do you agree?
Not your parent commenter, but I wouldn't agree, no. Build systems are one of those things most companies want to set up once and never deal with again, and they only revisit them if they have zero choice.
`earthly.dev` had an article some weeks ago that addressed why one of their products never took off even though it's really good and addresses grievances in the space. The TL;DR was that existing customers of other CI/CD tools did not want to migrate to a new system, especially since it doesn't offer a migration path from their existing solution.
> Why is all the winning software in every category slow and bloated and seemingly unconcerned with engineering excellence?
1. They are extremely aggressive about coming to market fast. That's a good thing, yes.
2. They then let the project ossify because they view development as a cost center. Which is a bad thing.
3. Their endgame: once they're successful enough and can afford enough lawyers, they start proactively buying up or shutting down potential competitors.
The "marketing wins over quality (and slower) engineering" does apply, yes, but not throughout the entire lifecycle. It applies only in the very first phase.
Here is how the main factor incentivizing this sloppy engineering behavior is justified: everything is cast in terms of making money. Revenue is good -- fine. Therefore no revenue is bad -- doesn't follow and you need to justify that somehow.
No revenue generally means the project eventually gets abandoned or relegated to spare time because people gotta eat. So yeah I'd say it's a bad thing.
I'm all for quality engineering. I'm just saying that if your whole plan for a new product is "quality engineering", then you have no plan at all. The horse needs to come before the cart.
Even couching the output as a "product" begs the question. It comes with a whole set of connotations around what's expected in and from such an effort.
What you're describing is a tragedy of the commons from the perspective of social and civilizational benefit. I agree that's what it is, but I think we should be more careful before justifying it.
That's fine for theoretical discussions. The poster I was replying to implies that their goal is to sell a product and make money, so I was speaking to that.
Using Arch with GNOME and Vim/Neovim: a happy balance between usability and resource usage. Programmers on Linux usually care about efficiency. Christian Hegert has recently been improving a lot of things in GNOME. This allows me to use a ThinkPad X220, which runs circles around some modern laptops with Windows. I understand that people prefer the small footprint of C. I like C. My personal preference is C++ (low- and high-level, more safety and flexibility).
The problem is greed, i.e. capitalism. The blog mentions Electron, and therefore Chrome and JavaScript. They are an awful combination, but they allow companies to save on programmers. Programmers who can handle C, C++, Rust or Python are a small group. Java, C# and JavaScript consume more resources, let you use a lot more stuff more quickly (like a drug) and, most importantly, are forgiving of mistakes. So the industry decides to waste the resources on our computers, because they don't need to buy and maintain them. The customer pays twice: for the software and for the next computer with more soldered main memory. The key point is that the software companies don't pay for the hardware or the environmental damage! I do a little JavaScript and some Java myself. Efficiency? Nobody asks for that, and I'm not supposed to spend time on it. There is no law stating that managed languages need to waste resources; it's just a side effect.
Remember Steve Jobs?
I don’t appreciate what Apple does. But he banned Flash for a reason: resources! Okay, also bugs. And for the same reason they should ban Electron. Apple devices run faster with fewer resources because Apple saves on hardware. Why build devices with huge batteries when you can achieve more runtime with less material and charge the same money? The EU needs to enforce side-loading on the iPhone. But I also think Apple should be allowed to keep Blink (and therefore Electron) banned from the App Store. I don’t want to see my battery dying because some corporate manager decided to drain it. But if you need Blink? Side-load.
Jobs also said native apps were unnecessary and we can all just use web apps. (Despite working on an app store at the time.) So even the prophet himself isn't such a great example.
My usage is heavily keyboard-focused, centered on the application window. I specifically like being able to type just “terminal” and have the terminal open, either as a new window or by switching to one that is already open. I also Alt+Tab, but with many open applications, searching is more convenient. Another thing is that it is also easy to use with the mouse, and the overview makes that pleasant (good for novice users).
They eliminated the inappropriate “desktop metaphor” of Windows 95 and the “system tray”.
The negative side is that GNOME often removes options or hides them, which drives away experienced users (the people needed to pull in new users) and pushes some developers to forks. They are right not to support every bewildering option, but the needed ones must be in place even if the UX people don't use them themselves.
Some of this is right: do things automatically and correctly, and don’t provide unneeded options, to avoid complexity. But some of it isn’t! You also have to provide the required preferences, e.g. “Do not suspend on lid close”, because some people don’t want that behavior! The GNOME people assumed the option was only wanted because of problems with suspend (S3) in the past. Maybe people did use it to work around those issues, but that isn’t the actual use case of the preference. Another thing: when you have five clock applets, I want to know what is missing from the first one that made all the others necessary. GNOME learned that people don’t need an “Emacs” as a UI shell, but went somewhat to the other extreme.
PS: And GNOME just looks good by default. I don’t need themes because it is fine.
Admittedly, I stopped reading just over half way, but the crux of the argument appeared to be that software should be fast, though I don't recall any justification for that philosophy other than suggesting that it is basically a truth. The article did acknowledge the counterpoint that in some cases the efficiency gains will never make up for the time spent chasing them, but it hand-waved that away without engaging with the argument.
The other key trait of the article was cherry-picking data and oversimplifying domains. Several times the article alluded to the emotional plea of "what could the software possibly be doing that takes up that time/space" (my paraphrase), but it made no serious attempt to answer that question, treating the absence of an answer as if it were proof that no valid answer exists, and it compared various bits of software that do not have feature parity as if their only practical difference were performance. (Edit: Updated wording of previous sentence to be more clear.)
There's definitely a lot to be said about how software could be more efficient, as well as the social, environmental, and business costs of inefficiency, but there is also much to be said about how modern software empowers people that otherwise might not be able to write anything to write something "bad" that does what they need, or about how modern software developers tend to aim to be "fast enough" in the way that a structural engineer would choose "strong enough."
There's much rich debate to be had, but this article didn't include it, instead going for an emotional rant, and failing to engage with actual reason. There is, in my opinion, truth to parts of the argument, but the article made itself clear that it didn't want a discussion.
>Admittedly, I stopped reading just over half way, but the crux of the argument appeared to be that software should be fast, though I don't recall any justification for that philosophy other than suggesting that it is basically a truth.
But it is the truth, and all we have to do is simply look at software from the point of view of users.
What device did you use to write this comment? An iPhone 4, or a 14?
What device do you use as your workstation, some Pentium with 4 GB of RAM?
Heck, even going outside the software itself to a related area: what network did your device use to talk to the servers?
5G/fiber measured in hundreds of Mbps, or 2G measured in a few kbps?
At the end of the day, actions speak louder than words.
And you can pretend all you want that "wanting faster software" is an unproven axiom, but if the axiom is followed by literally all of society, it might as well be taken as truth.
...especially since it originates from the exact same place as the "developer time is worth more" one.
The only difference is whose time we are saving.
There is no evidence whatsoever that “performance trades with developer time”. 99% of the time, reasonable performance is a skill issue, not a developer time issue.
In actual fact, if you look at “clean abstractions” and other nonsense, you can see that higher developer time actually seems to equate to lower performance. As we all also know, the best indicator of bug count is lines of code, so adding in all these abstractions that add lines of code not only makes code slower, but also results in more bugs.
That is to say, all current evidence points to the exact opposite of their claim:
Higher dev time = lower performance and more bugs (assuming higher dev time is coming from trying to abstract)
> There is no evidence whatsoever that “performance trades with developer time”. 99% of the time, reasonable performance is a skill issue, not a developer time issue.
I think this is reductive. There are lots of things people can do in Python that would be slower to write in C, and no amount of skill is gonna close that gap.
> In actual fact, if you look at “clean abstractions” and other nonsense, you can see that higher developer time actually seems to equate to lower performance. As we all also know, the best indicator of bug count is lines of code, so adding in all these abstractions that add lines of code not only makes code slower, but also results in more bugs.
I understand the argument against bad abstractions, but what abstraction is adding lines of code? I think we can agree that most of the time abstractions are to reduce lines of code, even if they may make the code harder to read.
> Higher dev time = lower performance and more bugs (assuming higher dev time is coming from trying to abstract)
I'm sure there are lots of projects that are buggy with little abstractions.
>I think this is reductive. There are lots of things people can do in Python that would be slower to write in C, and no amount of skill is gonna close that gap.
Write python without import statements and then try to make this claim again.
C using libraries is often as easy, or easier than python with imports. You don’t get to let python use someone else’s prewritten code while C has to do it from scratch and then say “see”. It’s a false equivalence.
>I understand the argument against bad abstractions, but what abstraction is adding lines of code? I think we can agree that most of the time abstractions are to reduce lines of code, even if they may make the code harder to read.
I don’t agree with this claim. I’d say go check out YouTube video from people like Muratori who tackle this claim. We have no measurements one way or another, but if I was putting money down, I would bet that nearly all abstraction that occurs ends up adding lines instead of saving them.
>I'm sure there are lots of projects that are buggy with little abstractions.
I’d tend to agree that the evidence shows this, given that the vast majority (and it’s not even close) of bugs are measurably logic errors.
But we're not talking about Python using libraries and C using libraries. We're talking about Python's basic language features versus what C patches in with libraries. Which makes a huge difference to me.
> I don’t agree with this claim. I’d say go check out YouTube video from people like Muratori who tackle this claim. We have no measurements one way or another, but if I was putting money down, I would bet that nearly all abstraction that occurs ends up adding lines instead of saving them.
We may not have measurements of the trend, but we can measure it on each abstraction fine. How many lines of code is your abstraction and how many lines would you have to duplicate if it wasn't for your abstraction? If you're not saving that much, don't make the abstraction.
Again, this isn't talking about maintainability. I still question the claim that fewer lines of code mean fewer bugs, but I know it's a popular one. In my experience, it's the terse code that's doing a lot that winds up being buggy. I prefer things to be longer and explicit, so you're talking to someone who doesn't even like abstractions that much unless they're necessary.
> I’d tend to agree that the evidence shows this, given that the vast majority (and it’s not even close) of bugs are measurably logic errors.
I don't think this premise you propose is obviously false, but we have lots of evidence that memory bugs are the cause of most bugs.
The standard library is still a library. Python doesn’t provide very much as a language feature.
There are pros and cons to standard libraries. If we were talking about brand new C with not a massive community of quality libraries, then yeah. But we’re not. We are talking about a language with a half century of great libraries and frameworks being provided.
>memory bugs cause most bugs
Yeah no. Not even close. You’re thinking of Microsoft’s citation that 50% of their security bugs are memory safety, but that’s not 50% of all bugs.
Basic standard logic bugs are far and away the largest contributor to bugs.
The way we know this is that language choice virtually doesn’t matter. On the whole, developers write 20-30 bugs per 1,000 lines of code regardless of language, so memory errors simply cannot be the largest contributor to bugs.
No, it's completely different. When a programming language provides a solid way to do something, that becomes a standard and accepted API for doing that thing, and standards are really nice when working with others.
> Python doesn’t provide very much as a language feature.
Uh what. I don't even like Python that much but this is a ridiculous statement. If we're talking C vs Python, Python is able to deal with lists in extremely terse and memory safe ways and C has a bunch of error-prone ways to do that in 10x the amount of code. If you care about LOC so much, is this not a useful language feature? Aren't you essentially saying C is inherently buggier than Python?
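To make the "terse and memory safe" point concrete, here is a small illustrative sketch (mine, not from either commenter): filtering and transforming a list takes a line or two of Python, with allocation and bounds handled for you, whereas the C equivalent needs manual allocation, length bookkeeping, and error handling.

    # Keep the even numbers, square them, take the three largest.
    # No malloc, no length tracking, no off-by-one risk.
    values = [7, 2, 9, 4, 11, 6, 3, 8]
    top_three = sorted((v * v for v in values if v % 2 == 0), reverse=True)[:3]
    print(top_three)  # [64, 36, 16]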
> Yeah no. Not even close. You’re thinking of Microsoft’s citation that 50% of their security bugs are memory safety, but that’s not 50% of all bugs.
I wasn't, but it was 70% for the record.
> Basic standard logic bugs are far and away the largest contributor to bugs.
BS. Citation? By what metric? I would wager that almost all C programs have dozens of memory bugs in them whether the developers want to admit it or not.
Ever read a security audit of software before? It's never "Oh you forgot to filter this array." It's buffer overflows and poorly allocated variables.
> The way we know this is that language choice virtually doesn’t matter. On the whole, developers write 20-30 bugs per 1,000 lines of code regardless of language, so memory errors simply cannot be the largest contributor to bugs.
Again, citation? You dismiss a Microsoft security report because that's "just one company" (the biggest company) and then you say this?
It's almost like you're saying "There's more logic bugs than memory bugs in memory safe languages."
I kinda agree, but a skill issue can easily be extended into a time issue.
Because it takes time to skill up.
And also, if you put a higher bar on the required skill level, you inherently get fewer developers, which means more work per developer, which then converts into a time issue.
Obviously it's not one to one, because a great dev will spend less time doing a good job than a clueless one will making a mess, but still.
It's only truth because no great alternative is provided. My best devices are ones from the past. I gave them up only because software rendered them obsolete, not because I wanted the newer model.
If half of society is speaking a new language AND an old language, it doesn't really matter if the old language is superior. You need to be able to speak the new dialect just to navigate society, even if the new language only exists as a mechanism to differentiate the new generation from the old.
Actually? An old iPad on WiFi that is stationed on the other side of several walls.
Honestly, a lot of people don’t mind waiting a second for actions to complete. (For some of us, it’s the only pause we get to take during the day.)
There are enough low-end phone sales that we should be able to accept that some people really don’t mind taking an extra moment if it saves a dollar.
This really isn’t to say that software should be slow, but I think we should acknowledge that speed is not the sole value that users concern themselves with when using software. Many times it isn’t even a primary value.
> to be that software should be fast, though I don't recall any justification for that philosophy other than suggesting that it is basically a truth
The gist is that if your code takes just a couple of seconds to complete on your fancy M1 Mac, there will be a pretty big chunk of your potential audience who will have to wait for minutes (and there's that surprising character trait in many non-technical users that they simply accept such bullshit, because they don't know that performance could be drastically improved without them having to buy new hardware).
But unless devs test their code also on low-end devices they will be completely oblivious to that problem.
And the actual problem isn't even the technical aspects, but that some devs get awfully defensive when confronted with the ugly truth that their code is too slow, and start arguing instead of sitting down with their product manager and making time for some profiling and optimization sessions to see what can be done about the performance problems without having to start from scratch.
What percentage of your target customers are going to have low end devices? And what is their estimated value compared to people who upgrade their tech (like most people with money would)?
Is it a problem or are those people simply not worth serving?
I liked the emotional rant. Sometimes data just obscures what you know to be true. I don't need data to tell me that I should try my best to make the best software, and that's what this article reminded me.
> how modern software empowers people that otherwise might not be able to write anything to write something "bad"
What's so special in the modern software? How do you tell if software is modern?
A few points to illustrate the difficulties with your descriptions: BASIC and SQL were meant to empower people ... to write something "bad" a very long time ago. So did Fortran, as well as some other languages / technologies that didn't survive to the present day.
Python or Java can be called "conservative" if you are very generous, but really, in truth, they should be called "anachronistic" considering the programming language development that happened in the '70s. Languages like J or Prolog are conceptually a lot more advanced than Rust or Go, but were created much earlier. Many languages are actually collections of languages that have been created over time, e.g. C89 through C23 -- does this make C a modern language? Only C23? Is there really that much of a difference between C89 and C23?
Is there some other way to define modernity? I.e. not based on time of creation nor based on some imaginary evolutionary tree?
I’ll be honest here, time got away from me. I had Node and Python in mind, simply having forgotten how old Python is (and Node not exactly being the new guy anymore). :p
This thread is ample evidence that not everybody is ok with it. Also, note that "being ok with it" and "seeming to be ok with it" are two extremely different things.
Surely you can see that this top list is optimized for features, not for speed. So yeah, people can complain here on HN all they want, in the end the 'evidence' of which software is preferred is pretty clear, even in the programmer demographic.
Going by number of search queries is such an obviously flawed assessment method. These results could just as well mean that the users of VS Code experience the most problems with their IDE and thus need to search for answers more often. I don't know when I last searched for Vim on a search engine; stuff just works for me. :)
I didn't know how to put my thoughts about the article into words so decided not to, but thankfully this comment does it better than I could have.
Really annoying how the author brings up counter points and comparisons but does not deeply engage with them, as if they're absolute truths and only rhetorical, when they're far from that.
Huh, the author feels the pain of endless crappy and slow software. If you feel all his points are either superficial or wrong, you could easily come up with counter-arguments better than the author's. But it seems all you are saying is "I don't like it because I don't like it."
> Huh, the author feels the pain of endless crappy and slow software.
If that was all, it would've been fine... if he was only listing bugs and saying he's tired of crappy software. But he has framed the article in a way as if he's describing the "why" and how the problem can be solved. In fact I've been following the author and he has a place where he lists all bugs he finds in software, and that in my opinion is a great initiative, but this article just opens too many threads and tangents and only appears to explain the root cause.
I could come up with a few examples like:
- trying to compare cars, buildings, and planes with software and not diving deeper into how much different they are and why.
- or saying everybody seems to be ok with inefficient software without diving deeper into why everyone's ok with it.
- or mentioning a tweet about a guy who spent more time trying to make something faster than he will ever gain back without going further into if it's a good/bad thing and why.
> If you feel all his points are either superficial or wrong, you could easily come up with counter-arguments better than the author's.
I never said his points were superficial or wrong, just that he does not deeply engage with the threads he opens up. For that reason, I have no counter arguments to come up with, since he is just complaining about his issues and also does not come up with a good solution as such. Of course everyone would like nicer software, that's not new. Why is software different than other industries in the first place? Is it worth improving it? What are the trade-offs? What is the effort required at scale? What would we lose if we made software more like manufacturing? What would we gain? If he knows a path forward, does he have a better way to express it than the last "manifesto" paragraph? Etc.
> Admittedly, I stopped reading just over half way, but the crux of the argument appeared to be that software should be fast, though I don't recall any justification for that philosophy other than suggesting that it is basically a truth.
Are you suggesting that maybe users (including you and me) should have to put up with painfully slow, stuttering software? I don’t see why his claim requires any justification - to my mind, it should be just as self evident as the fact that inflicting physical pain upon others should be avoided.
> […] modern software developers tend to aim to be "fast enough" in the way that a structural engineer would choose "strong enough."
But they don’t aim to make software fast enough. My experience last week: Windows 10 file Explorer took ~2.5 seconds to open. Close it and re-open it: same thing. Open it, right mouse click another folder to open a new Explorer window: same thing. Fresh install of Windows 10 on a top of the line workstation-type laptop with 64gb of RAM.
Not all, but a frustratingly large proportion of modern software is dog shit slow.
The reason doesn’t need to be spelled out in the article: if the developers of these slow apps cared, they could figure it out. From my experience, it usually looks something like this:
Some developer has an emotional attachment to Protocol Buffers, and will stop at nothing to see its adoption within the org. But their software is pretty heavily invested in JSON. So they rewrite the software to read the existing JSON files from disk (or REST web service response bodies, whatever), and reserialize them to protobuf in memory. Tada! Now we’re using protobufs, great. Of course, nothing meaningful was actually achieved here - they already had a perfectly fine, ready to use, deserialized, in-memory data structure before they added protobuf to the mix. Oh, and that plain struct in memory was faster than traversing a protobuf: the former had small substructures laid out in the same allocation as the parent, whereas substructures in protobuf involves multiple allocations and chasing pointers. Next step: realize that REST is lame, and gRPC is hip. But it’ll be practically impossible to rewrite everything from REST to gRPC, so they do the only reasonable thing: create proxies that sit between the client and server that translate REST requests/responses to/from gRPC! Now that we have that in place, we can add an additional proxy to the mix: Envoy. Envoy is a super popular layer 7 proxy, so it’s gotta be good. What functionality will be used? Any load balancing? RBAC policies? TLS termination? Nope. None of it. But because Envoy is “good”, adding it to the stack with no justification must also be intrinsically “good”, right? Right!
(Edit: do you see how long winded and boring this example is? This is precisely why the author shouldn’t expound on the “why” - anyone involved in the development of needlessly slow software (who isn’t blind to the problem because they are part of it) can recount similar craziness. Adding this to their blog would make for boring reading, and distract from the aim of their article.)
Buzzword/resume driven development, unwarranted layers of indirection (for no gain), absolution of responsibility via appeal to authority (if the top 10 software companies created and/or use some software, then surely we can blindly use that software too and enjoy the same success, despite not giving any consideration to whether it’s even remotely the right tool for the problem at hand), cargo culting, etc.
The reason software is painfully slow usually boils down to lack of critical, rational thought: either out of laziness and/or deferral of responsibility, or because of some emotional attachment to some type of software component.
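To put a rough number on the kind of redundant round-trip described above, here is a hedged sketch (not the actual stack from that story; the standard-library pickle module stands in for "the second wire format" purely to show the shape of the waste):

    import json
    import pickle
    import time

    # A response body we already parsed from JSON: a perfectly usable dict.
    payload = json.dumps({"user": {"id": 42, "name": "alice"}, "items": list(range(1000))})

    def use_directly():
        data = json.loads(payload)      # already an in-memory structure...
        return sum(data["items"])       # ...so just use it

    def reserialize_first():
        data = json.loads(payload)
        blob = pickle.dumps(data)       # re-encode into a second format "because reasons"
        data2 = pickle.loads(blob)      # decode it again before doing the same work
        return sum(data2["items"])

    for fn in (use_directly, reserialize_first):
        start = time.perf_counter()
        for _ in range(10_000):
            fn()
        print(fn.__name__, f"{time.perf_counter() - start:.2f}s")

The extra hop buys nothing; it only adds allocations and CPU time, which is the point of the anecdote.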
> Are you suggesting that maybe users (including you and me) should have to put up with painfully slow, stuttering software? I don’t see why his claim requires any justification - to my mind, it should be just as self evident as the fact that inflicting physical pain upon others should be avoided.
I'm suggesting that the blanket assertion that slow software is painful software doesn't hold true. Software can be so slow that it's painful, but being "slow" in an absolute sense does not immediately make something painful.
A counter-example that the author used when describing people with a pride in inefficiency was to quote:
> @tveastman: I have a Python program I run every day, it takes 1.5 seconds. I spent six hours re-writing it in rust, now it takes 0.06 seconds. That efficiency improvement means I'll make my time back in 41 years, 24 days :-)
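(As an aside, the payback arithmetic in that tweet does check out; a quick sketch using only the numbers quoted:)

    # Figures from the tweet above: 6 hours invested, one run per day,
    # each run goes from 1.5 s to 0.06 s.
    invested_s = 6 * 60 * 60             # 21,600 seconds spent on the rewrite
    saved_per_day_s = 1.5 - 0.06         # 1.44 seconds saved per daily run

    days_to_break_even = invested_s / saved_per_day_s
    print(days_to_break_even)            # 15000.0 days
    print(days_to_break_even / 365.25)   # ~41.1 years, matching "41 years, 24 days"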
I'm also asserting that slow and painful software that does what I want is better than fast software that doesn't or that I can't afford.
Heck, I empathize with the author: when I run Slack there is a perceptible delay when I type. Do I want them to fix it? I mean, no, not really. I'd personally rather they provide an offline search mechanism or a way to write direct CSS for theming. It's something that is annoying, but it is less annoying than missing new features or the price going up. Likewise, I could use Vim for editing if I really wanted to, but I'd rather have the featureset of IntelliJ.
> I'm suggesting that the blanket assertion that slow software is painful software doesn't hold true. Software can be so slow that it's painful, but being "slow" in an absolute sense does not immediately make something painful.
I agree with this. Though my interpretation of the blog post was that the complaint was directed at the subset of "slow" software that is specifically "perceptibly slow, to a frustrating degree".
> A counter-example that the author used when describing people with a pride in inefficiency was to quote:
>> @tveastman: I have a Python program I run every day, it takes 1.5 seconds. I spent six hours re-writing it in rust, now it takes 0.06 seconds. That efficiency improvement means I'll make my time back in 41 years, 24 days :-)
Yeah, that quote (and attached commentary) did seem a bit out of place.
> I'm also asserting that slow and painful software that does what I want is better than fast software that doesn't or that I can't afford.
I agree here.
I suppose whether the blog post is likely to resonate with someone comes down to one's prior notion of why frustratingly slow software is the way it is.
Some may believe that such software is developed by experienced, motivated developers who want to do their best work, but run up against a hard choice between allocating time to important features vs clever/tricky/novel optimization.
Others, like myself (based on my firsthand observations), believe that software can be just as feature complete while also being "fast enough (to not be frustrating)" all while using the same resources (dev time, money, etc). Developers just choose not to. It's not that anyone consciously thinks "hey, I think I'll go out of my way to write slow software" -- in the same way that (almost) no one thinks "hey, I'm going to strive to be morbidly obese"; rather, it's the lack of a conscious decision to employ self discipline and:
- Do not add layers of indirection because it "feels" right (I've seen countless changes that, upon questioning, can't be motivated beyond "I dunno, it 'felt' right"); instead, think through what you're doing. Just because you've thumbed through a design patterns book, or saw a blog post about dependency injection, or recently discovered runtime type reflection, doesn't give you license to cargo cult any of these things. If it was discovered that a bridge was constructed out of some material because the engineer(s) thought it "felt right", because the raw materials were "pretty" and "sparked joy", or maybe even "tasted good", I'm sure almost everyone would be horrified at such spectacular negligence. Somehow, similar behavior in the software world is perfectly acceptable.
- Do not add libraries and frameworks and orchestrators and pipelines because your ADHD compels you to play with new, shiny things. I have ADHD, and the compulsion is there, but I choose to ignore it.
- Do not use some tech just because it's popular, with zero critical thought, thus applying the wrong tool to the problem at hand. That's a recipe for slow software that requires more development time, as whatever time you save by not engaging your brain will almost certainly be eclipsed by the time wasted trying to contort the tool you're abusing.
- If you're about to spend the next hour working on something, first spend 5 seconds to see if the standard library has a ready-to-go, optimal solution to your problem. An example from firsthand experience: early in my career, I worked on a desktop GUI application, using Windows Presentation Foundation (WPF). One of our devs needed to add an underline to a text label. At our standup, he reported that he expected to spend the rest of the day working on that one change. He and the lead developer discussed how he could develop a new, custom control, and how the custom rendering routine would work, and gotchas to watch out for (naturally, you'd need to gather subtle font/layout metrics to know where / how wide to draw the underline), etc. I mentioned that an underline decoration is something that the standard WPF text label control already supports out-of-the-box -- the change would be something as simple as adding underline="true" to the respective XML element that declares that control. That was uncritically shot down with "No... no. I don't think that's a thing". The only reason we didn't lose 8 hours+ of developer time that day was because I did a 5 second Google search after that meeting, found the MSDN documentation page for that "underline" setting, and sent it to the developer, and he chose to implement that 5 second change instead of creating a custom component. Of course, the lead developer rejected the changes as "this wouldn't compile", though I had already compiled and spun up the app; because I'm diplomatic, I asked if he could look at something, and inquired "so, I know this underline probably isn't quite what we need, but I was curious if you could give some pointers on how our custom implementation would need to differ?" -- he was stunned, finally realized that the existing, bog standard control already had this functionality, and then approved the change.
I would bet anything that 99% of painfully slow software could be pretty snappy (or at least "snappy enough") if the developers thereof wouldn't create more work for themselves by doing inane things. No hyper optimization, bespoke algorithms, nor heavy R&D required.
I mostly see bloat created for other reasons. Think about situations like Docker containers replacing single applications (and dragging entire Linux userspace installation with them). Deploying in Kubernetes with its own CNI as well as DNS server and a bunch of other networking-related stuff while the network in your datacenter already does all of that. Packaging the whole Python virtual environment into a DEB or RPM package instead of shipping just the library you want users to install.
This bloat is harder to deal with, on organization level, because people creating it justify it by saving on development effort necessary to make the product leaner. There's no financial incentive to not use Docker for deployment (and spend developer's time ensuring the code works on different platforms with different libraries).
And the software industry isn't the only victim of this situation. The first time I ever encountered this was in a... church. American missionaries coming to the former Soviet republics would bring with them pocket Bibles for free handouts. Since I studied printing, this pocket Bible struck me as strange in many ways. It was printed on paper lighter than 20 g/m^2. This was unheard of in the Soviet printing industry: had it ever tried to produce such paper, the paper would simply have fallen apart, because the industry didn't have access to the technology needed to produce the plastics that held this paper together. Because the paper was so thin, it required a lot of "filler" (again, more plastic), and that made it worse for recycling. It was printed on an offset machine. The Soviet industry didn't print literature on offset machines; they didn't have the technology for making the precise high-resolution plates necessary for such printing, so letterpress printing would have been the way to do it. But letterpress leaves a noticeable texture on the page, and it also pretty much prevents you from using "unorthodox" font sizes, ensuring that the font's author could see exactly how the letters were going to look on a page, which made the overall experience much more pleasant.
All in all, it was kind of a technological marvel I knew I couldn't achieve with what I had / knew, on the other hand, all this technology was intended to decrease cost at the expense of marginal drops in quality. In truth, at the time, I didn't think this way. I saw the technological marvel part, and didn't notice the drops in quality. The realization came a lot later.
The truth is that a lot of conveniences we take for granted have a cost that adds up a lot. A 4K screen has about 17 times more pixels than an 800x600 one, and uses 32-bit color instead of the 8-bit color that was once typical. So the raw size of graphics made for a modern display is around 68 times bigger.
Where before a static picture was acceptable now the norm is a high quality, high framerate animation.
Arial Unicode is a 15 MB font, which wouldn't even fit in the memory of most computers that used to run Windows 95.
Spell checking everywhere is taken for granted now.
And so on, and so forth. That stuff adds up. But it makes computers a whole lot more pleasant to use. I don't miss 16 color video modes, or being unable to use two non-English languages at once without extremely annoying workarounds.
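(A quick sanity check on the pixel math above, assuming a 3840x2160 panel and an 8-bit-color baseline for the 800x600 era:)

    pixels_4k = 3840 * 2160           # 8,294,400 pixels
    pixels_svga = 800 * 600           # 480,000 pixels
    print(pixels_4k / pixels_svga)    # ~17.3x more pixels

    bytes_4k = pixels_4k * 4          # 32-bit color: 4 bytes per pixel
    bytes_svga = pixels_svga * 1      # 8-bit color: 1 byte per pixel (assumed baseline)
    print(bytes_4k / bytes_svga)      # ~69x more raw bytes, i.e. the "around 68 times" figure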
It does have a cost. But maybe the cost isn’t worth it sometimes. I’ll take the 4K screen please. I’m willing to pay for those pixels.
But animations for no reason other than to make it seem like waiting for something is less of a chore? Nope.
Unicode is something I’m willing to pay for.
But we should be able to draw the glyphs on the screen in single-digit ms like we did in 1981. Yes more pixels and more glyphs but it’s possible just not a priority.
> But we should be able to draw the glyphs on the screen in single-digit ms like we did in 1981. Yes more pixels and more glyphs but it’s possible just not a priority.
In 1981 we drew fixed-width bitmap fonts at low resolution. In 2023, a font is a complex program that creates a vectorial shape, which is carefully adjusted to the display device it's being rendered on for optimal graphical results and antialiasing. That said, performance isn't bad at all.
Just resize the comment field back and forth while having a bunch of text within, and you'll see that text rendering performance is perfectly fine. I see no slowness.
So okay, all the artifacts take much more resources, but even after consuming 100 times more compute, why is software still excruciatingly slow?
This comment is a bit like a BMW engineer telling the user that, even after paying 100K for a performance car, it will take 30 seconds to go 0-60 mph. And since the user is not a perf expert, they have to take it at face value.
> So okay, all the artifacts take much more resources, but even after consuming 100 times more compute, why is software still excruciatingly slow?
It's not?
Software used to be way, way slower. I had a 386. I experienced things like seeing the screen redraw, from top to bottom in games running at the amazing quality of 320x200x8 bits. I've waited hours for a kernel build. I've waited many seconds for a floppy drive to go chunk-kachunk while saving a couple pages of Word document. I've waited minutes for a webpage to download. I remember the times when file indexing completely tanked framerates.
I've noticed a similar trend in my current organization. In the beginning, when we were smaller, the details mattered. Things couldn't be slow, animations had to be smooth, scrolling speed mattered, loading times mattered. We aimed to build the most efficient product, anything to increase user efficiency.
Then as the team grew, the values changed to favor anything that improved developer efficiency. More abstractions, more layers, more frameworks. The tradeoff of saving one day of developer work was worth the cost of millions of user seconds collectively. I think the difference was just the visibility - management can see the costs of things on the development side, but they can't see the benefits of a slightly faster launch time, or better caching, or smoother scrolling. They aren't measurable, and once an org gets to the point where all it cares about are measurable numbers, I think this is a natural course.
Now just wait for the deplorable impact of LLM on code quality and performance in the years ahead. There's a new wave of programmers who are "GPT whisperers" and spend most of their time hooked to a chatbot (sometimes right in their IDE) that programs for them.
Of course, that's until AI gets to the point where it can fix everything we (or it) programmed wrong, including AI itself which is highly inefficient just like the OP predicts.
I don't think it's a given that LLM-driven coding is a negative for code quality and performance. In my experience coding with ChatGPT, it often reminds me to think about error handling, edge cases, and performance issues.
It also over-prioritizes readability if anything, using descriptive variable names and documenting every line with a comment. It's maybe not as good as the best engineers at considering all these factors, but I'd say it's better than the average developer, and may be better than the best engineer too when that engineer is tired and in a hurry.
It's not just about being measurable, it's about being profitable. Capitalism results in efficient production, not efficient products, because it's only interested in extracting profit from the production system.
Cars were optimised only after external (oil) shocks were applied, and even then very unevenly (American cars continue to be very inefficient). In fact, inefficient products are often more profitable in practice, as customers have to replace them more often; the crappy iPhone cable that splits after a year means Apple can charge you again (and again) for its replacement. White goods now break more often, but they're built more cheaply, so per-unit profit is higher than it would otherwise be. What matters is efficiency in production, not in after-sale use.
Capitalism in software means churning out new automation as quickly as possible, letting consumers pick up the resulting waste of energy and time. The production chain gets more and more standardized and optimized: you can now swap React developers, or Kubernetes admins, like you can swap warehouse workers, with all that it entails in terms of salary pressure. Some of that automation is effectively self-justifying, in the same way accountants make accountancy terms obscure so they can justify their jobs; but that's about it. Everything else is about profit.
Everything has a cost. In the cases where the computing cost or degraded user experience is high enough efficiency is optimized for (see ML models for example). In other cases it's not because the end user doesn't actually want that at the cost of fewer features. Cars used to be gas guzzlers until fuel costs and environmental concerns caused customers to want something different.
A lot of engineers forget that they are in fact paid to build a product for customers.
edit: It also seems OP has never had to wait for older Windows or Linux machines to fully boot. Modern versions boot much faster because customers wanted that. Phones are on 24/7 so customers don't care if it takes longer to boot once every few months.
I don't know where the POST code runs, maybe not on the CPU? Also, I'd imagine this sort of code doesn't parallelise the checks because it tries very hard to be bug-free, so we are looking at a piece of code scanning the whole RAM linearly; I don't think you get anywhere near the max bandwidth of your stick of RAM in this scenario.
The problem here is that it does this at every boot when it should only do it once. Maybe you have disabled the fast-boot option (or equivalent) in the BIOS?
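(For a rough sense of scale, and these numbers are pure assumptions rather than measurements, a naive single-pass scan would look like:)

    ram_gb = 64              # assumed installed RAM
    scan_gb_per_s = 5        # assumed throughput of a simple, non-parallel test loop,
                             # well below the theoretical bandwidth of modern DDR
    print(ram_gb / scan_gb_per_s, "seconds per boot just to touch every byte once")  # ~12.8 s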
A huge amount of boot time seems to be because BIOS waits several seconds(!) by default for various hardware to get into a stable state after powering up.
Since when customers have any meaningful choice or way to express preferences? It's long been a supply-driven market: vendors make what they want, customers buy what's being put on the market.
I feel like it's gotten even worse with pervasive telemetry and A/B testing. Now customers get more of what they "engage" with even if that's not what they want.
>Since when customers have any meaningful choice or way to express preferences? It's long been a supply-driven market: vendors make what they want, customers buy what's being put on the market.
By your logic Tesla and Apple shouldn't have grown like they did, since customers wouldn't pick them over existing incumbents. Customers do have a choice in aggregate and they express that choice. Some people who don't agree with that choice try to say customers have no choice, but in the end they do, and the person saying it is upset that they're in the minority.
That's nothing. I remember when booting a game required loading it from cassette tape. Me and my friends would go round to each other's houses, start the computer loading a game, go outside to play a football match and when we got back it'd be nearly ready to start.
I mean, there's booting, and then there's loading a program...
My Commodore 64 from the early 80s booted to a full BASIC REPL in around 1 second. Yes, loading from removable media was slow, but the computer was ready to fully use in less time than it takes "modern" systems to even POST. Totally ridiculous.
> A lot of engineers forget that they are in fact paid to build a product for customers.
In a way, yes, but I think software became much more than that. In many ways it became as crucial a part of human lives as electricity or medical services. And you can say doctors/engineers are paid to develop new medicines or new power sources, but those fields have many, many levels of government control simply because of how sensitive and important the matter is, which is not the case with software (yet?).
Partially true, but let's not pretend that when trends are set by companies the size of Google or Apple, they are still following the customers' will the same way a small, or even medium-sized, company does.
They have enough power to shape the landscape itself, and as such they can drastically steer the outcome of "customer needs".
Neither of them used to be the giants they are but overall they made solutions that their customers (who are, btw, advertisers in Google's case) wanted and bought over larger existing competitors. Apple's market cap has increased 500x since the late 90s.
Obviously they entered the market with simply good products, but that doesn't change the fact that now they have a lot of power that comes simply from their size.
I'm not saying it's their only advantage, just that it plays a big role.
But also, come on now. Today's platform lock-ins are much more impactful than anything we had in the '90s. And you don't compete with Google's product anymore; rather, you compete with an ecosystem that half of the world has bought into. Which makes it practically impossible to be a viable alternative.
It's important to remember that as developers, we do have a choice. Not about everything, but there's an option to choose the less-sucky alternative.
You don't have to use Node. You can write good, backwards compatible software just fine on the .net or JVM ecosystems and you can know that it will still run without modification in 10 years.
You don't have to write single-page apps. Old-fashioned HTML that completely reloads the page on each click works just fine and is probably lower latency at this point.
You don't have to write desktop apps using Chromium. Getting started with a UI framework is a little more work but the quality is worth it.
The decision isn't always yours. But when it is, opt out of the suck.
The "everything must be a SPA" mentality these days just saddens me. I get that GMail and similarly complicated apps get benefits from being SPA, but I've worked at too many places that just insist on a mess of complexity on the frontend when basic CSS / HTML (and maybe a sprinkling of JQuery) would give them all the same features in _WAY_ less time (and with significantly fewer bugs).
> You can write good, backwards compatible software just fine on the .net ... ecosystems
(Cough)
The .net core initiative broke A LOT of code. Microsoft obsoleted a lot of libraries. (Some for very good reasons, too. A lot of legacy .net libraries had rather poor design choices.)
You can only pick two out of the three from the triad of optimization (time, monetary cost, quality).
Industry prioritizes functional solutions (requirements) over efficiency. If efficiency is one of the requirements, it will be addressed (e.g. video games). Optimizing for efficiency takes additional effort. The article argues that the software industry is stuck with inefficient tools and practices. Engineers can and should do better, aiming for better apps delivered faster and more reliably with fewer resources. However, economics dictates that as you optimize two variables from the triad, the third gets de-prioritized.
Edit: I can see the downvotes but no idea why. Would you care to explain?
>I can see the downvotes but no idea why. Would you care to explain?
I dunno, but this has been taught in PM since PM. I remember learning it in the 90s and it still holds true. If you want it fast and a large scope, you have to throw bodies and planning at it, costing money. If you want it cheap with a big scope, you have to wait for that small team to finish it (time). If you want it fast and cheap, you have to limit your scope.
Time/Cost/Scope, pick 2. Perhaps people are taking issue with quality vs scope, but quality is a part of scope for certain.
From what I've seen the faster solution is also usually the simplest, which makes it faster to deliver and easier to maintain. We're far away from what Knuth warned about (micro-optimizing assembly). It's more at the level of high-level design, technology choice, etc.
Like at my last job, we had something like 50 microservices, Kafka, rabbit, redis, varnish caching, etc. For under 1k external requests/second and some batch processes that ran for a few million accounts. If we cut out all of the architecture to "scale", you could've run the whole thing on a laptop if not a raspberry pi. And then a real server could scale that 100x.
The company was looking at moving to a "cloud native" serverless architecture when I left.
Agreed. My argument is that software, from the market/customer perspective, is more like a movie, which is supposed to delight them. And the industry rightly prioritizes the capability/delight factor over mere efficiency for efficiency's sake. Only engineers are interested in efficiency for efficiency's sake (also: will the engineers pay for more efficient software? Nope. Why did engineers abandon Sublime for vscode? So when anyone wears a customer hat, they tend to prioritize factors other than efficiency/correctness, unless we are talking about life-saving medical devices or such).
I don't think this can even be blamed on what the industry prioritizes, but on what customers reward. How many users pick one solution over another because it's quicker? Outside of very specific use cases it practically never happens. These types of complaints ultimately come down to wanting users to care about different things. It's not dissimilar from complaints about nobody having fashion sense anymore or buying too much processed food.
> How many users pick one solution over another because it's quicker?
Isn’t that what drives the most sales in upgrading a phone? Idk many people that care about the microscopic improvements to the camera or UI, most of the time it’s something along the lines of “my iPhone 8 doesn’t run fast enough anymore, time to upgrade”. Same with consoles. I remember one of the big pitches of next gen consoles being “look! No more loading screens!”. And even ChatGPT 3.5 vs ChatGPT 4. There are tons of people that will use crappier output because it’s faster. Speed is still absolutely a selling point, and people do care.
> I remember one of the big pitches of next gen consoles being “look! No more loading screens!”. And even ChatGPT 3.5 vs ChatGPT 4.
Those are the very specific use case I mentioned.
How many users are gonna swap to a different chat client because of this, or to a different word processor because it starts 1s faster? As a parallel comment points out, even developers swapped away from the very quick Sublime to Atom and now VSCode. At work, hiring at some point became a huge pain because we swapped from Greenhouse to the recruiting tools built into Workday. It was super slow. I hated it, our recruiting team hated it, but it was purchased because it checked all the boxes and integrated with all our other stuff in Workday. The comparison to engine efficiency made me think that if we really want faster software, we need the equivalent of a gasoline tax for software, but what's the negative externality we are preventing?
Yes, the engineers who verbally bat for efficiency/correctness in these threads have no qualms about practically abandoning the efficient Sublime for the less efficient vscode (in general, of course; not that there aren't exceptions) :)
How much programmer time has been wasted by gdb's byzantine UX? In a rational world, would we collectively invest time into building extremely good debuggers?
It's a tragedy of the commons, where everybody would benefit from better tooling, everybody wastes time dealing with poor tools, but nobody is willing to put time/effort/money into making better tooling.
I think you’re right that there’s a fundamental tension between those three factors but it’s begging the question of whether we’re seeing those fundamental limits versus something else. For example, I’ve seen multiple teams deliver apps using React frameworks which have no advantages in any of those points - they’re notably slower than SSR, use more resources on both the clients and servers, and don’t ship noticeably faster because while some functions are easier that’s canceled out by spending enough time on toolchain toil to make a J2EE developer weep.
That suggests that while those three factors are part of the explanation they’re not sufficient to explain it alone. My theory would be that we’re imbalanced by the massive ad-tech industry and many companies are optimizing for ad revenue and/or data collection over other factors such as user satisfaction, especially with the effects of consolidation reducing market corrective pressure.
The impossible trinity (also known as the impossible trilemma or the Unholy Trinity) is a concept in international economics and international political economy which states that it is impossible to have all three of the following at the same time:
- a fixed foreign exchange rate
- free capital movement (absence of capital controls)
- an independent monetary policy
This is why I'm an embedded programmer on small MCUs. Give me C99 and a datasheet and I'll give you the world in 64kb.
Though times are changing in that world too. Sometimes you have to use a library. And more and more those libraries require an RTOS. Just about to make the plunge into Zephyr so I can use the current Nordic BLE SDK.
Having hard limits on RAM and flash is a great way to prevent bloat. Management is happy to let engineers grind for a month reducing code size if it means the code will fit into a cheaper MCU with less flash and save 10¢ on the BOM. Pure software has no such incentive to minimize resources because the user buys the HW separately. If anything, some SW companies have an incentive to add bloat if they're the same company that sells you a new phone when your old one becomes too slow.
I have little to add to this before comments start going below the fold. Other than to say that I had this realization in the late 1990s as 1 GHz processors were coming online and software was still as slow as ever. Today we have eye candy, but tech has mostly abandoned its charter of bringing innovation that increases income while reducing workload. Like phantom wealth that fixates on digits in a bank account instead of building income streams, today we have phantom tech that focuses on profits instead of innovation which improves the human condition.
We used to have a social contract between industry, academia and society. Company invents widget, it pays into a university's endowment, student goes on to start the next company.
Today that's all gone. Now company invents widget, billionaire keeps the money, student gets forgotten as university is defunded and discredited through various forms of regulatory capture. Often to thunderous applause.
The stagnation of tech and the subjugation of the best and brightest under the yoke go hand in hand. Your disempowerment is a reflection of how far society has fallen. Loosely that means that even though we know how to make programming better, we will never get the opportunity to do so, because we'll likely spend the rest of our lives making rent. Which is the central battle that humanity has faced for 10,000 years. Like the tragedy of the commons, we work so hard as individuals that we fail at systems-level solutions.
Programming won't get fixed until we stop idolizing the rich and powerful, and get back to the real work of doing whatever it takes to get to, say, UBI.
This is why we must question the term "engineering" in the title "software engineering". Most engineering disciplines care about optimization and correctness orders of magnitude more than the software discipline does. Software is perhaps better seen as mass-market movies or music. Most software is less concerned with hard reality and is constantly struggling to keep up with the intangibles of the human mind and psychology. Put another way, software addresses subtler aspects of reality (the human mind, psychology, etc.) rather than the hard realities of the world. And the human mind is mostly a black box, quite dynamic and random. As the fancies of the market shift, software shapes itself to satisfy them.
In physical engineering, if a mistake is made, the bridge collapses, lives are lost, and therefore there is a deterrent against mistakes. But in software/movies, nobody cares if there are 10 flop movies/software, as long as one works/pays off.
I don’t think it’s a coincidence that so many of their examples were Google products. If they use the Apple Mail client on the same device, email opens in hundreds of milliseconds (measured from depressing the key to finished rendering a complex HTML message just now).
This isn’t to say Apple doesn’t have their own problems but I think it’s not an accident given that Google’s focus is on showing ads while Apple’s is maximizing device value, minimizing energy usage, etc.
The web is frustrating because it’s so easy to hit high frame rates at low power usage in a modern browser, but everyone internalized that “move fast” BS focused entirely on developer experience and half the industry consolidated on a slow framework from the IE6 era rather than learning web standards. It makes me wish browsers had an energy-usage gauge so you’d have to own that decision publicly similar to a fast food place having to list calories in the menu.
It's worth noting here that part of the problem is that the web is ideologically hostile to anything native.
Consider: people complain a lot about Electron app bloat. Why can't Slack optimize a bit? Well, they have a lot of Mac users. One obvious way to optimize would be to incrementally port the most performance sensitive parts of their app to use AppKit so Apple's highly hardware-optimized UI toolkit handles those parts, and Electron handles the rest.
Problem: doesn't work. You can't embed an NSView into a Chrome renderer. This was once possible using the Netscape plugin API, but it was removed many moons ago and now you have to use HTML for everything. Electron is popular, plugins were popular, and Chrome could add these capabilities back and even do them better, but they don't because if your app is pure HTML then this maximally empowers the Chrome team. They can do lots of stuff to your app, and add value in various ways, and if the price of this is less efficiency or that some features become less consistent or even harder to implement then this is a sacrifice they are willing to make.
The result is that a lot of these discussions go circular and just become moanfests, because the ideological constraints of the platforms are taken as unarticulated givens: immovable objects that are practically laws of nature rather than things that can be changed.
There is no specific technical reason why you can't have apps that start out as web apps but incrementally become native Mac or Windows apps as user demand justifies it, it's just not how we do things around here.
There’s definitely a lot of friction there but I think it also hits the cost shifting aspect pretty hard: it wouldn’t be that hard for, say, Slack to use a native Mac app which embeds WebKit but they want to save money by only supporting Electron. For a small startup that makes sense but given the collective petabytes of RAM and power used, it feels like the balance should have shifted at some point.
I don't think WebKit can embed NSViews either? At least not unless you modify the source code, iirc there are still some bits of the old Netscape plugin code still there.
The hard part isn't native embedding web, it's that if you start with web you then can't easily stick a native view into e.g. an iframe.
> Your desktop todo app is probably written in Electron and thus has a userland driver for the Xbox 360 controller in it, can render 3D graphics and play audio and take photos with your web camera.
Can Electron not "tree-shake" parts of the browser engine that are unused? Either by static analysis or manual configuration? Seems like a real missed opportunity to trim down on bloat...
The "build" step of an Electron app doesn't build Chromium, so this wouldn't be very feasible. Building Chromium requires an insane amount of computing power.
According to Google, a Chromium build on normal hardware takes 6+ hours.
And alas, even if it was feasible to custom build based on what you need, it would have to be done via configuration--since there's no way to know at compile time which language features will be used, since your app could (and probably does) include remote scripts.
Yeah that sounds about right - I use a chromium on Android fork and according to the lead developer, it takes about 3 hours for a release to compile and that is after optimizing the process as much as possible.
Nope, it doesn’t do this. A while back I looked into doing a “hello world” version of that - trying to remove the Print feature. To do this, I had to build Chromium, which did actually take like 5 or 6 hours on an M1 MacBook.
From what I could tell of the config files, Chromium is pretty modular. It looks like you could just delete a few lines and have it avoid compiling entire subsystems. But I didn’t ultimately achieve my goal because I couldn’t get it to compile with those changes. IIRC it hit a linker error and I couldn’t figure out what to prune next, and I wanted to get back to actually building my product. (I ended up switching to Tauri anyway)
Part of me wants to revisit that project though. It would be so great if there were custom minimal builds of Chromium for Electron apps.
I'm not familiar enough with Chrome's architecture to say, but I would be surprised if all these capabilities get paged in and initialized. There definitely has to be some work to set up the web platform bindings, to let JS think this stuff is ready to go, but I hope it's backed by late-bound code.
And in the web browser, I believe v8 is using snapshots of the js runtime to avoid much of the work of initializing the js side of things: it just forks a new copy of itself.
This is one of the prime strengths of alternatives like Tauri that use a shared-library model rather than a static library. With Electron you have a ton of initialization to do for a very excellent runtime, but then you never get to reap that existing work again. Whereas on the web, we open new pages and tabs all the time, avoid the slow first load, and get to enjoy the very fast later instance loads. That multi-machine-capable VM pays off! With Tauri, the shared library may well already be in memory and initialized. Only the first consumer of the shared library has to pay the price.
No, it can't. Chrome is deliberately designed to be entirely non-modular. The codebase itself is somewhat modularized in the sense that related functionality is grouped into different directories and build targets, but the Chrome team have no interest in allowing it to be cut down to just the needed functionality.
The main reason for this is politics. The Chrome team is ideologically wedded to the idea that everything should be a web app running on Chrome. They see desktop and mobile apps as the "enemy" to be wiped out by making the web do more and more stuff, until Chrome is the universal OS. Classical ChromeOS is the pinnacle of this vision - a computer that only runs a web browser and nothing else.
Chrome's architecture reflects this, um, purity of vision. It is not reusable in any way and the Chrome team do not care to make that easier. Projects that make it embeddable or reusable are all third party forks that have to maintain expensive patch sets: CEF, Electron, etc, all pay high maintenance costs because they have to extensively patch the Chrome codebase to make it usable in non-Chrome apps. The patches are often not accepted upstream.
This problem also affects V8. Several projects that were originally using V8 are trying to migrate to JavaScriptCore or other JS engines because V8 doesn't have a stable API and building it is extremely painful. It's a subsystem of Chrome, so you have to build Chrome to get V8.
This is a pity. There's a ton of great code in the Chrome codebase, and it can be built in a modular way (with many small DLLs). It's slower to start up when you do that due to the dynamic linking overhead, but it does work. Unfortunately, for as long as the Chrome team sees native apps as a bug to be fixed and not a reality to be embraced, we will continue to have dozens of apps on our laptops that statically link an Xbox 360 gamepad driver.
There are a few possible solutions for this.
One is to not write Electron apps. Java went through a modularization process in version 9 and since then you can bundle much smaller subsets of the platform with your app. I guess other platforms have something similar. Obviously, native apps also don't have this issue both because the platform comes with the hardware you buy and because they tend to be more modular to begin with. But, people like writing Electron apps because it gives you the benefits of web development without many of the downsides (like the ultra-aggressive sandboxing). The nature of the web platform is that it always lags what the hardware can actually do by many years due to the huge costs of sandboxing everything so thoroughly, whereas Electron apps can just call into native modules and do whatever they need, but you can still use your existing HTML/JS skills.
Another would be for an alternative to Electron to arise. There are experiments in using system WebViews for this. That doesn't make the web platform more modular, but it at least means a single install is being reused. You could also imagine a fork of the web platform designed for modularity, for example one in which renderer features are compiled out if you don't need them, or even one bringing back renderer plugins.
Another is to just tackle the issue from an entirely different angle, for example, by opportunistically reusing and merging Electron files on disk and in memory between different apps. If you ship your Electron app to Windows users using Conveyor [1] (or in the Windows Store using MSIX) then this actually happens. Windows will reuse disk blocks during the install rather than download them, and if two apps are using the same build of Electron the files will be hard-linked together so only one copy will be in memory at once at runtime.
But fundamentally the issue here is one of philosophy. Chrome wants to rule the world and has a budget to match, yet their approach to platform design just does not scale. For as long as that is the case there will be lots of complaining about bloat.
>The Chrome team is ideologically wedded to the idea that everything should be a web app
I'm not surprised, and doubt the Firefox team is any different in that regard.
So far at least the Chrome team hasn't removed from Chrome the ability to go to a new web page in response to code external to Chrome, so we have that to be thankful for at least. Yay?
(The desktop code I maintain achieves that effect by invoking /opt/google/chrome/google-chrome with the desired URL as an argument.)
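For concreteness, here is a minimal sketch of that technique in C++ (the binary path is the one from the comment above; everything else, including the example URL, is hypothetical rather than the commenter's actual code):

    // Ask the user's installed Chrome to open a URL by invoking the binary
    // with the URL as an argument (Linux path, as mentioned above).
    #include <cstdlib>

    int main() {
        // Trailing '&' so the call returns without waiting for the browser to exit.
        return std::system("/opt/google/chrome/google-chrome 'https://example.com' &");
    }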
The comment I wrote at a higher level in this same thread is, ultimately, a criticism of NOT using a statically compiled, statically linked language (because a form of tree shaking like the original commenter suggested is already part of the linking step there, except for DLLs).
And yet, full disclosure and admitting to cognitive dissonance, for a (hobbyist) C++ game engine I'm currently working on that targets Emscripten for a web build and native for Debug build, I'm considering not even having a native-Release build at all.
The idea being if it's web-first and the native build being only for developer use for debugging, I could do things like supporting only one desktop graphics API (e.g. just DirectX) and optimizing the native graphics pipeline for simplicity over performance. End users would/could just use the web version.
Granted this is a bit different because I wouldn't be distributing a browser too a la Electron; it would just use the browser the end user already has. Just thought it's interesting that it's easy for me to criticize others, but with the choice of how to spend my own limited developer time (my free time) it's looking like this way of doing it makes the most sense.
I kind of at least partly agree with Chrome. My problem with it is that web apps are missing a lot of features you would want in a non-cloud environment for offline use.
But if something is proprietary and cloud linked anyway I would much rather it all go through the web. That way open platforms still have a chance.
If banking apps and Google payments and proprietary IoT device controllers were all on the web, then a Linux phone might actually be viable!
How would it be a "Linux phone" if the only software you could run was web apps? That which defines an OS is its APIs and unique capabilities, anything can act as a bootloader for Chrome.
Unless they actually convince Linux to block non web apps, you could run whatever you want on a FOSS platform, you'd just be able to run proprietary web stuff in addition to native free software.
I'm guessing eventually the native free software would move more and more into the browser too, but that's fine as long as you can still run the stuff that hasn't been moved yet or isn't interested in moving.
This is an important distinction between statically compiled and interpreted languages which is often lost in discussions that focus on developer usability.
For many years I worked on my own game engine which combined a C++ core with an interpreted scripting language. I had developed a system of language bindings which allowed the interpreted language portion to call functions from C++. The engine quickly grew to a multiple-gigabyte executable (in the debug build), and no matter how much I tried to optimize the size it was still unconscionably huge.
One of the reasons I eventually gave up on the project was I realized I was overlooking a simple mathematical truth. The size was NxM, where N is the number of bindings and M the size of each binding. I was focusing on optimizing M, the size each binding added to the executable, while not just ignoring N but actually increasing it every time I added bindings for a new library I wanted to call from the game engine.
There were diminishing returns to how much I could improve M because I was relying on compiler implementations of certain things I was doing (and I was using then-new next generation C++ features that weren't well optimized); it would be a lot easier to simply reduce N. And the easiest way to do that would be some sort of tree shaking.
Unfortunately due to the nature of interpreted code it isn't known at compile time which things will/will not be ultimately called. That determination is a runtime thing, by calls via function pointer, by interpretation of scripts that start out as strings at compile time (or even strings entered by the user during runtime).
From a compile time perspective, static usage of every bound function, feature or library already exists - it is the C++ side of the cross-language binding. That's enough to convince the linker to keep it and not discard it.
In fact, the mere presence of the bindings caused my game executable to grow more per included library than a similar C++-only, all-statically-linked program would. If a library provided 5 overloads of a function to do a similar thing with different arguments, an all-C or C++ application that uses only one of them would only need to include that version in the compiled executable; the others would be stripped out during the linking step.
Since I don't necessarily know ahead of time which overload(s) I'm going to end up using from the interpreted language side of the engine, I would end up binding all 5. Then my executable grew simply from adding the capability to use the library, whether or not I make use of it, but moreover if I did use it my executable grew even more than an equivalent C/C++ - only user of the library because I also incur costs for all the unused overloads.
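A toy C++ sketch of that effect (all names here are hypothetical illustrations, not code from the engine described above): because the binding table takes the address of every overload up front, the linker has to treat all of them as used, regardless of which ones the scripts actually call.

    #include <iostream>
    #include <string>
    #include <unordered_map>

    // Pretend these are three overloads pulled in from a third-party library.
    int blur(int radius)                { return radius; }
    int blur(int radius, float sigma)   { return radius + static_cast<int>(sigma); }
    int blur(const std::string& preset) { return static_cast<int>(preset.size()); }

    // The interpreter side stores opaque callables keyed by name, so the address
    // of every overload is taken here and none of them can be dead-stripped.
    using Callable = void (*)();
    const std::unordered_map<std::string, Callable> kBindings = {
        {"blur/radius", reinterpret_cast<Callable>(static_cast<int (*)(int)>(&blur))},
        {"blur/sigma",  reinterpret_cast<Callable>(static_cast<int (*)(int, float)>(&blur))},
        {"blur/preset", reinterpret_cast<Callable>(static_cast<int (*)(const std::string&)>(&blur))},
    };

    int main() {
        // A pure C++ caller of one overload would let the linker drop the other two;
        // the binding table above makes that impossible.
        std::cout << "bound " << kBindings.size() << " overloads\n";
    }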
You can see why something like Electron would have the same problem. Unused functions can't be automatically stripped out because that information isn't known at compile time. To do it by static analysis the developer of the Electron app would have to re-run the entire build from source process of the Electron executable to combine that with static analysis of the app's Javascript to inform the linker what can be stripped out of the final executable.
And it bears mentioning neither such a static analysis tool for Electron app Javascript nor the compiler/linker integrations for it currently exist. In theory they could exist but would still have trouble with things like eval'd code.
Manual configuration would be possible but necessarily either coarse-grained or too tedious to expect most developers (of Electron itself or users of Electron) to go into that much detail. That is, you may have manual configuration to include or not include Xbox 360 controller, but probably not for "only uses the motion controls" while not including other controller features.
Either way, you wouldn't be able to add back support if JavaScript written after build time turned out to actually need the function or feature after all, unless you distributed a new executable. If you're building so much from source with configuration and static analysis, at that point why not write your whole application in a statically compiled language in the first place?
My thesis here is not that we should accept things like Electron being bloated because they cannot be any other way. My point is that (as happens time and again in Computer Science) we already had certain things (like tree shaking and unused-symbol stripping during the linking stage of statically compiled languages), and then, in the name of "progress", either let them be Jedi-mind-tricked away or the people developing the new thing didn't understand what was being left behind.
They're mostly independent components, controlled by software, talking over a shared bus to other components. It's already bad and has been for some time.
> I have a Python program I run every day, it takes 1.5 seconds. I spent six hours re-writing it in rust, now it takes 0.06 seconds. That efficiency improvement means I'll make my time back in 41 years, 24 days :-)
The key here is "I run every day" - if you're only saving CPU for yourself once a day then, well, fine. But if the improvement is about something that runs millions of times a day, or is run once a day by a million people, then that's something completely different!
I'm so tired of these rants. They're always very unoriginal and cover the same points of waaaah software is slow, software is bloated. And they never offer solutions or analysis beyond "(almost) everything sucks".
Y'know why software sucks? People. People are hard to manage, hard to incentivize, hard to compensate. Focusing on the slow loading web page is neglecting the economic incentives to bloat the bundle with trackers, to ignore perf work in favor of features, to not compensate open source developers.
The point about engineers in other domains doing it better, well I'm not exactly convinced about that (look at the bloated cost of building in America). But taken as true, they're doing better because there are economic, legal, and social incentives to do better. Not because they're just better at engineering.
I think there's also some big conceptual distinctions between making software and the other engineering practices it gets compared to in these articles.
If you're making a building or a bridge, it may have some aesthetic or small functional qualities that make it unique, but 99% of the job is to make it with the same success criteria as every other building or bridge. The building needs to stand up, hold people, have X amount of floors, and not fall over. A bridge has to get people/vehicles over it and not crumble.
Pretty much every piece of software is expected to do something new and innovative. It starts off with an abstract idea and the details are filled in as you go. You stumble into some roadblock because of a design conflict that wasn't obvious until you were implementing. If the software is being made at the request of a client or your company, they probably gave you success criteria full of inherently contradictory ideas that they'll need to compromise on, because users don't actually know what they want until they're using it. You finish the first version, they identify all the things that aren't working, and now there are new success criteria to implement on a foundation that wasn't prepared for them. No architect ever finished erecting a skyscraper only to be told it needs a major change that will involve reworking all the plumbing.
That's the inherent difference. Software is a game of constantly trying to chase a moving target, and most of it is being built on a stack that is trying to chase moving targets at every level.
I’m absolutely, unequivocally, happiest at my job when I’m simply making things better.
I loathe building new features, adding new functionality, starting yet another product launch, etc. Yet it feels like 80% of my time at my job is doing the above, probably more.
nobody will complain if your software is faster, or simpler, or just plain better. And I can happily spend the remaining years of my career making that happen.
Software developers need to quit building yet another front end framework, and get back to the actual engineering aspects of making things more performant. IMO, it’s far more rewarding.
I feel the hard-fought lessons are learned again and again, and people have their own mental models of how things work (and how they should work), so the software industry is really fragmented, repetitively building the same systems over and over. We're not all improving the same system, so reliability and scalability don't improve over time, whereas improvements to gcc/clang, CPU hardware and Chromium benefit many organisations and applications. Many software systems live in only one company. The cost of targeting Windows, Mac and Linux is high, so we have Electron and React Native.
The number of times I've seen an endless spinner or a button that doesn't work.
Getting people to use (and improve) "your way" of doing things is difficult. (Web frameworks and javascript frameworks)
For me it's even worse. Not only have I lost a lot of the passion that I used to have for programming when I started 30 years ago, I've lost my enthusiasm for software AS A USER.
Programming for me was always about connecting to a machine. I started writing games and native desktop applications in C/C++.
My first job was for a dot-com start-up that was a bit ahead of its time. Basically it made web-based office productivity software before we had terms like "the cloud", when Google was just getting off the ground and there was no GMail or Google Docs etc. That was how I got started in web application development and for many years it was grand. I was a "full stack" developer before "frontend" and "backend" were concepts. We needed to know how to optimize our SQL queries, avoid SQL injection vulnerabilities, and we even developed our own dynamic templating language using C++.
We offered a "hosted" version of our product but also a "self-hosted" one. Bare metal servers were the thing. We needed to be able to port our software to multiple operating systems and have it run on all sorts of hardware.
One of the most significant differences in the customer relationship was that if we wanted more business from our customers, we needed to convince them that upgrading to the newest version would be worth their time and money.
These days everything is SaaS, "cloud" and CI. As a user I need a fucking user account for every single piece of software I want to use. All of my data is hosted on someone else's computer which isn't even a computer. It's some virtual data centre hosted inside of an AWS data centre. The product owners of the company will use me as a perpetual beta tester, throwing any shit at the wall that they can think of hoping something will stick. The products constantly degrade in terms of performance and UX and I have no say other than to stop using computers and software all together.
Which is starting to happen.
I'm a software engineer that avoids using software in my personal life as much as I possibly can.
The article gets into what I'll call correctness, near the end, but doesn't highlight that in the lede.
I think correctness is often much more important than efficiency, elegance and other things that can sound like conceits or misalignments.
I don't mean some academic or niche formal proofs notion of correctness. I mean ordinary everyday minimal level of competence of systems. This includes that the system exists in the actual real-world environment of normal users, as well as the actual real-world environment of attackers. We often fall flat on both aspects.
It's one of the biggest problems in software. On average, our field behaves like criminally irresponsible, incompetent clowns, who casually pull prop clown horns out of our posteriors (and StackOverflow and ChatGPT), commit them as part of our sprints, and call it done. And we ship frequent "security updates" and "bug fixes" for what should be considered irresponsible failures needing deeper corrective action.
Personally, I'm considering next working on things that have a high priority to work correctly. Which by default is humbling, but it becomes downright scary, when you consider that very little of the software, infrastructure, and personnel ecosystem on which we depend has been developed with a sufficiently responsible and competent mindset.
Programming languages. Developer tooling. Debuggers. Cross-platform libraries. Everything could be orders of magnitude better than what we have today. But our world is ruled by pragmatism, and duct-taping mediocre technologies together results in barely "good enough" solutions that make money and solve real-world problems.
Even open source suffers from the same problem. Making really high quality software takes 10 times the effort (if not more) and nobody is willing to make that investment up front. And so half-baked software becomes popular and now we spend years or decades and untold hours trying to turn bad software into pretty good software without completely breaking backwards compatibility.
The root cause is probably the incrementalist approach we take to developing software. We start with something small, because of time constraints or because we don't really understand the problem yet. Once the software has users who demand bug fixes and feature additions, it grows organically instead of being intelligently designed for a purpose. As a result, you get stuck in a local maximum.
And that's how we spend 500,000 hours debating strategies for removing the GIL (global interpreter lock) in Python when the first version of Python was made 30 years ago as a side project during a holiday break.
> The root cause is probably the incrementalist approach we take to developing software.
I would blame the "unix philosophy" and "worse is better" approaches of the past, but I bet they were more symptomatic than causal, and their equivalents in other digital realms pop up all the time: IBM vs clones, unix wars, protocol wars, at various times its fights between 'official' (described as stuffy) vs 'pragmatic' (described as lax/crappy) definitions of stuff.
I'd hazard a guess (as someone young and therefore spewing confident-sounding incorrect speculation like this comment) that we still carry so much of the old 'it works, ship it' hacker culture of the 70s, then went through the overreach of the CASE / UML / XML fever of the late 90s, which we have in turn overreacted against by going too far in the 'look its a containerized k8s pod running behind a reverse proxy that runs some react and uses leftpad to graphql your (must always be online) user information record in this headless electron because SHIP IT' direction.
PS: our historical 'it's good enough' precedent didn't help; we've been trapped on 'very fast PDP-11s' for decades. Even BWK and the other forerunners of our modern C + unixlike stack weren't able to get us un-stuck from that stack, and so Plan 9 etc. failed to catch on. The Lindy Effect is a double-edged sword for sure.
I feel this so much. The statement "Recently, our industry’s lack of care for efficiency, simplicity, and excellence started really getting to me, to the point of me getting depressed by my own career and IT in general." hits home so hard.
I feel like every application I purchase or download or try to setup is unsatisfying and flawed in some way. I paid $30/mo for access to the EastWest Composer cloud and was excited because I had spent most of my music production career hearing people talk about this near-infinite library of amazing sounds that professional producers use and I was about to have access to it. Then I downloaded their plugin and it was dreadful. Crashed regularly, I never actually got it fully working even after working with support for a month or so.
It feels like everything is like that nowadays. Most pieces of software that I use, I just immediately learn to smooth over the errors in one way or another. A reboot before creating each new project, etc.
It even extends to a lot of hardware, because hardware needs software too. I paid $500+ for a GoPro MAX and the GoPro app has been the biggest nightmare in trying to use that device. Plus the firmware is dreadful and the wifi tech they use to connect is clunky.
It depresses me too. I feel like I can't even spend money to find a solution anymore, and more often than not I end up throwing my money away: I buy something, it's clunky and awful, but I can make it work, so I use it sometimes, when I could be getting so much more for that money if they had spent time refining the software.
GoPro is a special kind of awful when it comes to software. Almost like they want to botch it on purpose: They distribute their software only via the Microsoft store. No chance to get a regular .exe installer. I heard there is some beta group on Facebook (!!!) where you can get some beta installers.
For this kind of wifi usage, Wi-Fi Direct or Wi-Fi Aware would be great: the connection to the cam would be more reliable and it could exist alongside another access point. But no.
It seems to me as if GoPro is basically a shell company consisting of a lot of marketing people and MBAs without technical understanding beyond using iPhones just contracting tech work out to off-shore.
That's entirely accurate! I think marketing is the only way they got to the status they have, because while the cameras take decent video/images, the entire rest of the experience is miserable. I almost returned my GoPro MAX because of the phone app being garbage. I almost wish I had. That's ~$500 I didn't need to spend, because it hasn't gotten a ton of use to begin with, and partially that's because the software was so clunky I felt like I couldn't really get anything usable out of it.
The experience was better when I tried downloading Insta360's 360 video editor! Still not sure what to do with the 360 camera anymore lol but that's my own problem not gopro's.
Funnily enough, the author is the creator of DataScript, a client-side database library which has been the source (directly or indirectly) of some of the biggest bloat/performance problems I've encountered in client-side programming.
It will grow without bound because software is a stack. It grows higher and higher and the bloat is from all the layers underneath constantly trying to adapt the "cruddy, ugly, gross" stuff below into "nice and clean" abstractions for above.
Which of course would be good, if that's what the stack does. But the stack doesn't do that. The stack constantly introduces new, crappy abstractions that eventually no one wants to use directly anymore, and people build new ones on top. Eventually, the new ones on top start to resemble the ones below, until we end up with a cycle. This mislayering is known as "abstraction inversion"--the abstractions at the top end up being crappy and slow reinventions of abstractions deeper in the stack.
It won't ever stop because people keep fancying themselves the most awesomest library designers ever and keep coming up with new crappy ways that clearly not everyone likes. It's an endless game of promising the world but delivering a spray-paint job over someone else's world.
It's not just an aesthetic choice. You have people not understanding the layers below on a technical level. It's partly a lack of talent, partly a lack of willingness to make the effort, and in large part just a lack of visibility into anything 2 or more layers down.
I'm on a very lean Linux on a fast machine. It boots near instantly: I think one second between entering the encrypted disk's password and then the login screen.
... $ time emacs -Q -eval '(kill-emacs)'
Takes 86 milliseconds I think. That's launching graphical Emacs on a 4K display (well nearly: 3840x1600) ("-nw" / "no window" would be even faster), running elisp code, and exiting back to the terminal.
I'm running the very small "Awesome WM" tiling WM with 12 virtual workspaces and switching between them is ultra snappy.
Now, sure, countless websites using way too much unnecessary monothreaded JavaScript and calling way too many microservices have insufferable network and redraw latency (even though I've got a fat and low-latency fiber pipe to the home), but my computer is still incredibly snappy.
My Ryzen 7000 series CPU is so quick I actually set it to always be in "eco mode" (in the BIOS/UEFI) and don't bother to run it at its max speed.
I'd say hardware is amazing compared to what we had in the past and software, if you're careful, can be very snappy.
That thing will reach months of uptime if I want: not just the OS but the software as well. I'll reboot for critical security updates mandating a reboot (extremely rare) and when I go on vacation but that's about it.
I know, I know, I should turn it off: but the thing is so stable I don't even bother. At night I switch to a virtual workspace that puts the CPU in "powersave" mode (actually, of my 12 virtual workspaces, only the one where I "dev" turns the CPU to performance mode; as soon as I switch to another workspace, I put the CPU back to powersave, hence broken JavaScript pages won't make my fans run loud).
I just told a colleague how the UX of Concur — the de facto enterprise expense report system that surely employs hundreds of software folk — keeps getting slower and crappier with every update. The number of new modals that show up that I DO NOT READ is crazy.
Needless to say I was primed for this article and it is a thoughtful and apt read.
The biggest gut punch of it all is that this article is _from 2018_.
As I get older (I'm 38), I'm embracing as much dumb hardware as I can. I smarted up our house with HomePods and Siri started not recognizing the sound of my voice. JFC.
As many others have mentioned here, the backwards incentive structure is absolutely part of the problem -- i.e. businesses don't care to invest the time and money in making things efficient and performant, because the goal is to get something out the door as quickly as possible.
And, the fact that software can be continuously updated in the field makes it inherently different than other fields, which have to try to get something “right” the first time.
But I also think that we have a responsibility as engineers not to thrust technology out into the world without considering its impact, and that includes things like Electron. At its inception, I'm sure it was a hilarious and insane tech demo, but that's all it ever should have been.
The sad fact is that Electron solves a real problem, though, which is that developing per-platform native apps takes an enormous amount of effort. To me, it's a huge failing of the software world that we've not yet provided an alternative that's just as easy to deploy as Electron, if not easier, but with performance and efficiency built in from the start.
> It also animates empty white boxes instead of showing their content because it’s the only way anything can be animated on a webpage with decent performance.
That's not the only reason, although it is one. Two of the other big reasons for this are:
1. The content is often loaded async while the boxes render, and this hides the latency of doing a network fetch.
2. Animating text looks terrible. Even if you had a perfectly fast computer, you wouldn't want to animate boxes filled with text and images, because it looks really bad.
We presume that what software does is far more important than how.
There's a cost to waiting for a feature to be done your ideal way to your ideal standards - that's the cost of all the people not being able to use it while you add perfection. It's money left on the table but also just efficiency in the world economy that's being delayed.
Programs were not that great in C or C++ either, in terms of being able to run for a long time. I fought a once-every-2-weeks crash bug in a C++ program - a single double-free of memory under certain uncommon conditions. Finding problems like that was very, very unproductive.
Cars, to give an example, are often made with space for all the possible optional bits even when that model will never have them - because it's more cost efficient to have a common design among several models than to redesign each one.
One take I have when this topic comes up is that older, more efficient programs and assembly language instructions are the nanotechnology and atomic level of software. We are just making the same progression, backwards, that hardware technology made from bulk materials to computing with the smallest number of individual atoms. So when you see a 500 MB app doing the same thing that used to be done with 50 MB, 5 MB, or 500 KB, it's kind of like looking at a 10-micrometer-process CPU from the 1970s and marveling at how many atoms it wastes to make a single gate, when we could do the same thing with just a few atoms or fit many gates into the same footprint with 2020s-era tens-of-nanometer processes.
Of course "progressing in reverse" is the same thing as saying we're regressing.
You could say that writing software at all is regressive and we should implement everything in gates at the hardware level - it would be incredibly efficient. We do this with ASICs for bitcoin etc., so why not for everything?
The hardware guys are working away to make hardware better all the time - whatever software people are doing. We could tell them to stop doing that, get other jobs so we could just write more efficient software . . . . but why?
When they run out of things to do we will be forced to concentrate on using what we have more effectively. Before then however, the most stupid thing to do would be to leave that capability they're providing unused.
As an example: I use MS Teams a lot - it's horrendous and ridiculously inefficient and I would love to have some stern words with the development team. OTOH I get lots of use out of it. It may not be the best of its kind but it's what my company chose and it does a lot of very useful things. Better to have it than not, on the whole.
I remember a few years back when a hacker decompiled the GTAV PC loading sequence and found the software loading a ~1 GB JSON file into memory to read a single param, which added an additional 2-3 minutes to the load.
Let’s keep the standard for performance high and continue to highlight performance issues.
I'm not sure the car comparison is a good one, given that their inflated sizes prevented us from actually saving fuel... Most of our modern industrial complex is by and large wasteful on resources, computing is no different.
Zen and the Art of Motorcycle Maintenance's entire thesis is "What is Quality?" How do you define it? How does it come about?
You can still get software quality but you have to be willing to devote time and effort to it. The binary for my modern, commercial background job engine written in Go, Faktory, is 5MB in size.
I'd like it if things took five seconds instead of ten... But mostly they work. It's fast enough. CPU load is generally low enough for good battery life.
It's hard to complain when everything is so cheap and easy. 40 years ago we might have literally written letters for everyday utilitarian purposes. Even if you added a limit of 2 characters per second that you could type, I'd probably get used to it and say "Still faster and easier than a pen".
And on top of that, stuff seems to get faster all the time. The new Ubuntu is no doubt packed with more code than ever... But it feels faster than Kubuntu. I assume they fixed some bug in allocating resources or something.
Every generation of software is more secure even if less private, subjectively I notice more reliability every year.
These days we have watchdogs to restart broken apps. Maybe every day it's down for a minute. Back then, people didn't always add that, they just "Tried to get the code right"... And it would be fine for a year and then be down for hours.
Modern software is more predictable. Overall quality might be less, it might crash weekly instead of monthly, but a reboot is far more likely to fix it, and we have fewer places where your app writes a bad setting and now it never opens again till you manually edit something.
We can always do better. We very much should. But I'm generally happy with the current state of software, and most consumers seem to be too.
I've been working on an idea recently: "low interest rate architecture."
High complexity, ugly / unplanned, high rent (cloud costs, etc.), bloated, only polished on the surface, and designed to gather as much market share as possible as fast as possible and we'll fix it "later."
Examples: Electron, Kubernetes, Helm, expensive managed cloud, lots of domain specific languages and other high cognitive load systems demanding high staffing requirements, total abandonment of labor saving tech like WYSIWYG design, etc.
> Modern cars work, let’s say for the sake of argument, at 98% of what’s physically possible with the current engine design.
If a major automobile manufacturer is able to get away with still using its 50 year old engine block design based off of valve technology that's a century old at this point, and crowbar it into everything from pickup trucks to supercars with success, I can guarantee you that it's not operating at the cutting edge of material and thermal engineering.
I agree with the sentiment of the article, but I'd advocate a balanced approach. Sometimes inefficiency is perfectly fine.
> Would you buy a car if it eats 100 liters per 100 kilometers? How about 1000 liters? With computers, we do that all the time.
A more apt analogy would be, "would I buy a car that gets 100,000 km/liter over a nicer car that gets 1,000 km/liter?".
Seeing as I drive about 12k km/yr, the difference in cost in absolute terms is negligible for me (it would be about a 25-euro difference), even if it's a huge improvement percentage-wise. Sure, there are no cars that get this kind of performance, but for computers, the cost of running my laptop for an entire year is less than what an hour of my work time is worth. If energy were 10 times the price, it would still be negligible.
Just look at the quoted example:
> I have a Python program I run every day, it takes 1.5 seconds. I spent six hours re-writing it in rust, now it takes 0.06 seconds. That efficiency improvement means I'll make my time back in 41 years, 24 days :-)
So this person took six hours to write code that saves 1.44 seconds/day. So it will take 15,000 days to make back that time (I know, I know... they probably did it for some intellectual satisfaction rather than to actually save time).
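The arithmetic checks out; here is the back-of-the-envelope version in C++ (a sketch of the calculation only, not the original poster's program):

    #include <iostream>

    int main() {
        const double invested_s      = 6.0 * 3600.0;  // 21,600 s spent on the rewrite
        const double saved_per_run_s = 1.5 - 0.06;    // 1.44 s saved, one run per day
        const double days = invested_s / saved_per_run_s;           // 15,000 days
        std::cout << days << " days, or about "
                  << days / 365.25 << " years, to break even\n";    // ~41 years
    }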
But as I said, I still agree with the sentiment of the article. There are so many cases where making code more performant wouldn't even mean a significant amount more work... it would just mean watching out for very heavy libraries, doing some negligible performance tweaks, etc.
"As engineers, we can, and should, and will do better. We can have better tools, we can build better apps, faster, more predictable, more reliable, using fewer resources (orders of magnitude fewer!). We need to understand deeply what we are doing and why. We need to deliver: reliably, predictably, with topmost quality. We can—and should–take pride in our work. Not"
The problem is there are engineers who will take your paycheck and build crappier software more quickly, because there's barely any quality regulation for software. That's the difference between software and other traditional engineering disciplines. An electrical engineer has their license at stake.
Right. Many performance enhancements added by caring devs are kept secret from the PMs, because they are out of scope, don't have tickets, and/or may increase complexity.
I’ve been programming for 15 years now...There are no additional features. They are not faster or more optimized. They don’t look different. They just...grow?
I've been programming for over 40 years. OP's take is a classic underestimate of how many features new versions of software have added.
My first computers in order had:
64k of RAM
128k of RAM
640k of RAM
Those first two booted up literally instantly as they copied firmware into RAM. That last one booted up in about 10 seconds from an unbelievably slow C: drive.
What did all three have in common? They did not even have basic networking built in. Yes, systems are bloated today, but they do so much more than the systems you're comparing them to. Yes, they could be rewritten from scratch to remove this bloat, but we knew for 8 years before OP started coding that that is not the right answer either.
This is one of the major gripes I have with modern software too. It is becoming overly inefficient. I personally attribute this to two reasons:
1) Product managers have more influence and decision-making power in prioritizing what should be delivered. Everywhere I've worked, I have routinely seen any work related to improving efficiency, or time spent on non-business-related work, get constantly de-prioritized. Because programmers' time is controlled this way, certain system bugs get lower priority, junk accumulates over time, and the software gets bloated. There are code paths which are not used anymore, but no one has bothered to clean them up because they have not been instructed to do so yet.
2) I find the explosion of javascript in popularity is a major reason of this inefficiency. Imagine the compute and electricity wasted globally because certain developers don't want to build multi-threaded applications and prefer dynamic typing.
Oftentimes it is make or break for a company/project to get the programmers to follow YAGNI. Cleaning up old code can very much offer no value (short or long term) to the product, the company or the user. And a programmer who is in the zone on a task or story should not need to pull himself out of that.
I learned to program at the dawn of the PC age when memory was measured in KB and CPUs in MHz; so I learned to write fast, efficient code because that was a basic requirement. Modern CPUs, super fast memory, and Gen 5 SSDs can certainly mask inefficiencies in poorly written code; but data can always expand enough to bring even supercomputers to their knees.
My expertise was centered around data management; particularly file systems. So when hard drives got big enough to hold more than 100 million files; I realized that the decades-old file system architectures could no longer handle them efficiently enough for my taste. So I set out to build a better type of file system (a data object store).
I spent weeks (sometimes months) fine tuning algorithms and code paths to make it as efficient as possible. I regularly ran my code on old, outdated hardware to make sure it ran fast in low-resource environments. I tried to take advantage of parallel processing and every other trick I could think of.
But it seemed that no matter how much faster I could make the code run; I had trouble getting any potential users to care about that aspect of the code. I could make it 10x faster than their existing solution; but if it was missing even one feature their old one supported, they would reject it even if that feature was not that important.
The code was especially efficient at handling unstructured data (since that was its primary purpose), but the metadata management features I built in turned out to be extremely efficient at managing structured data as well. When I started benchmarking it against major databases it was performing very well (e.g. see it compared to SQLite: https://www.youtube.com/watch?v=Va5ZqfwQXWI). Still, it has been very slow to attract attention in spite of its speed.
We can complain about slow, buggy software all day; but we will be stuck with bloated, inefficient code if we don't put our money where our mouth is and support code bases that focus on small, fast code.
I’ve written this elsewhere, but I think a big part of it is that as software developers we love solving puzzles.
Figuring out a quick bugfix is like solving a puzzle, so is figuring out the right combination of flags for a cli command to do what it needs to. We subconsciously enjoy creating puzzles for other devs to solve, and when they figure out how to solve it, they feel good and like the tool we developed. It’s a vicious cycle of puzzle making and puzzle solving.
We make candidates solve puzzles to get a job!
But interesting puzzles aren’t usually the best solution to a user need. Until we break out of the puzzle-solving mindset I don’t think software will get better. Using a simple tool to get a job done simply scratches a different itch than the one we’re used to.
My more positive take on this: our runtime environments are bloated because we have ways to enable trust, stability, and iteration speeds that people wouldn't have dreamed of in years past.
Your Notion desktop app and Google Chrome both support embedding & displaying multimedia content that's controlled by people that you may not trust, but they can draw on decades of engineering to sandbox that content. They can independently be updated without worrying about a centralized `flexbox.dll` that may or may not be the right version. They do not require building a new executable to make the vast majority of UI changes. And the cost is simply storage space and initial download bandwidth.
We have to compare apples to apples - the abstractions we have today would not prevent such a piece of software from being built, and indeed would allow us to build that exact software, even bit-for-bit the same, much more easily due to abstractions on our tooling itself. We have not departed a world where, given a nation-state budget, one could pay for 1400 person-years of work and create the AGC (though one might make arguments about the distraction levels of modern society, but that's a different thing entirely).
But we also exist in a world where I can build and ship a cross-platform video chat application in an afternoon (well, not counting app store approvals) and be reasonably confident that my app will be compatible with, and secure on, practically any computer or mobile device sold in the past half decade, regardless of how many other apps may have been installed on each device. I'd venture to say that Apollo engineers would, and do, find this aspect of our world fascinating, too.
This is a perfectly fine take, but it seems to me there is something wrong when our everyday workhorse tools are painful to interact with despite providing the same basic functionality as they did 20 years ago, adequately, on much slower hardware. It doesn't necessarily mean that the tooling improvements you're praising should be abandoned, and it doesn't mean that none of the new features are worthwhile. But it does in my view mean that the priorities are hostile to me as a user, and I feel justified in being disenchanted by that fact.
A perfectly valid opinion! As a user, I like that my apps keep getting feature improvements without needing to pay $100/user/mo for them. I like that they can support all different kinds of text rendering without relying on shared system libraries that might be clobbered when I upgrade my operating system or install Qt for a side project. I like that they have multiple levels of sandboxing, and that someone sending me a message isn't likely to cause RCE at a level that can wipe my drives. I don't like that there's a few milliseconds of delay as I type this in Google Chrome, but I like the other stuff even more.
This makes me wonder, having seen another thread on this recently, how much of this is due to layer after layer of abstractions piled onto one another. Given the demand for backwards compatibility, & the never-ending efforts that show up to port everything old to any new platform (with all the support that has to be baked in to make that happen in many cases), how likely is it that this can be avoided, or prevented from getting worse in the future?
You can run DOS applications on any computer architecture and on any operating system. Modern software is much worse at backwards compatibility than software from the past.
The reason for that of course is that DOS stopped changing almost 30 years ago. But, yes, Dosbox and other DOS emulators are great virtual machines if you do not want to have to rewrite your software every six months because some API changed, and they run almost everywhere.
> Modern cars work, let’s say for the sake of argument, at 98% of what’s physically possible with the current engine design. Modern buildings use just enough material to fulfill their function and stay safe under the given conditions. All planes converged to the optimal size/form/load and basically look the same.
Cars cost around X * $10,000 each. Houses cost Y * $100,000 each. The author is ignoring the economics of producing real world goods.
I don't get the emphasis on bloat and speed. Those sacrifices let us create a TON of software really fast.
What does disappoint me about this industry is that we haven't found or invented some low-code app builder that is actually good enough for most use cases. I feel like we really should be spending most of our time as logicians who encode business logic, but we spend way too much time churning through supporting technology.
But every app builder I've seen runs into the same problem: complexity scales exponentially in a way that code doesn't have to. So it quickly becomes better to have built the dang thing from scratch.
And it's not like I can think of anything better. So it feels like we're missing something fundamental, probably to do with how we imagine user interfaces. Hopefully LLMs open the door to something better here eventually, and I'm not at all saying that chat views or plain speech are the best user interfaces we can do.
Making some piece of software quickly and with little effort is great for one-off projects, or something that is just for me, etc. What sucks is stuff that I have to use many hours a week, with more or less the same requirements over many years, where we still accept the same quality bar as we would expect from a weekend project. Almost nobody uses "a ton of software" day to day.
Low code approaches have limits because the real problem has never been the code but the analysis and specification that has to be done (explicitly or not) before code can be written. The reason most people can't / don't want to develop software isn't because they don't know the syntax of some programming language (that's the easy part) but because they can't / don't want to learn how to break down their problem into small parts and precisely explain to the machine how to solve it.
The only real way to "fix" that is for the tool to already know a lot about the problem domain, but that means the tool is limited in scope and things become a lot harder once you try to go outside the things it's designed to do.
Executable specifications face the same problem. It seems like a great idea in theory to get the client / stakeholders to write a specification in a form that can be executed to show the system does what's wanted. But it is very difficult to get them to do it. They come up with all sorts of complaints about the tooling, but the real problem is that they don't want to precisely specify what their requirements are, including edge cases; they would rather be vague and handwavy...
I am getting really tired of these extremely low effort "manifestos". They complain about the state of software, but make no attempt at understanding why it is that way or what tradeoffs had to be managed. There are just these vague claims that "we should be better" because the problem is that we're just "lazy". It's the writing that is lazy, not the industry. The software industry is arguably one of the most productive spaces that the world has ever seen.
If you're going to criticize the state of software, you have to:
1. Demonstrate you understand the systemic incentives in place that encourage this behavior,
2. persuasively argue why this condition needs to be improved,
3. and propose concrete policies or incentives that counteract these tendencies.
Otherwise it's just whining about something you don't really understand, and the proposed "solution" is to just start all over because this time it'll be different I swear.
The author appears to raise valid points. There are explanations that they did not visit, which is okay by me. It does read like a manifesto but gets the conversation started.
> The software industry is arguably one of the most productive spaces that the world has ever seen.
I would argue, based on personal experience, that the majority of software that comes out of the industry is unmaintainable garbage. The reason it is acceptable in most cases is that the barrier to entry is low and there are few consequences of things going wrong. Quality goes up significantly when consequences become real. At any rate, I would not call the software industry "productive".
> The reason it is acceptable in most cases is that the barrier to entry is low and there are few consequences of things going wrong
Then it is acceptable. You're making a moral value judgement because it's not up to your personal standards, but that's not relevant for whether or not the software accomplishes what it set out to do.
> At any rate, I would not call the software industry "productive".
This is indefensible, given that basically the entire world runs on software these days. Again, I think you're making some sort of subjective value judgement about the perceived "quality" of the software, while ignoring what it has enabled.
We need more of these articles. Teams has no business being as slow and awful as it is. There is no reason it should take multiple attempts to do a simple task in any OS or app. The UX of Sony's camera desktop software is inexcusable. Overall software quality is dismal. And society is ever more dependent on it.
> Teams has no business being as slow and awful as it is
Why not? This is part of the problem I'm talking about: we have too many armchair software engineers that know nothing about extremely complex codebases making sweeping generalizations about how they "should be". Their solutions are invariably "let's get rid of most features" or "let's just start over and Do It Better This Time."
Whining about a problem comes before fixing the problem. Demanding people must come up with a solution on their own before they share a problem doesn't help fix problems.
And my point is that this problem has been whined to death, with absolutely no concrete suggestions on how to fix it. All I've ever read is complaints about how some-or-another software should be better, and surprise! the author knows absolutely nothing about the codebase or the tradeoffs being managed. It's lazy armchair software engineering.
It obviously hasn't been; there is still so much low-hanging fruit everywhere.
And don't think this problem is unfixable; we will fix it at some point via standardization, and each whiner is a data point that helps us get closer to standardization. It happened to cars, planes, boats, tanks, etc.: those fields started out with a lot of wacky ideas that didn't fit, but then designs got standardized and the only thing left to do was optimize those standard designs.
Edit: For example, whining gave us git, a fast standard version control system that everyone now uses. When enough people whine about something, it gets fixed; bigger problems just need more of it.
The problem is that the whining on display in the post is completely rudderless. It has no idea what it's actually complaining about, just a general feeling of "wow things are complicated aren't they?". Then it asserts a priori that it could be simpler. Out of all the different examples that it complained about, not once did they propose a solution in any of those scenarios.
If your argument is that things are more complex than they need to be, prove it. Because it's like an old man looking at a Honda and going "things were so much simpler back in my day!!"
You are 200% correct. But - what are you going to do about it?
Communicating it was step 1
Now what do you expect to happen?
You can wait for others to step up, and help you make your vision happen, but the call is this - have a goal, fight for it, but make no mistake - you will always be choosing between your family, and your goal, and yourself - this is the true conflict of every moment.
RALEMOI - Read - and loved every moment of it!
Point being, you are living your OWN life, and in the end, you will want to be satisfied with what you did, and to a much lesser extent, who you feel you are now.
I'd say kudos, godspeed, etc., but since I personally believe: God bless, and I thank you for your work!
I think the major reason is that software is largely invisible. If the normals could SEE the fractal Rube Goldberg machines made out of Rube Goldberg machines they would have the expected reactions.
So many people shut off their critical thinking skills as soon as you rub a computer on it. (Whatever "it" is: cars, ponzi schemes, whatever.)
- - - -
The second major reason is that the IT industry is fashion-driven. We are more fashion-driven than the actual fashion industry.
It took fifty years for type-inference and -checking to break into the mainstream, not because it didn't work or wasn't useful but just because it wasn't fashionable.
- - - -
I think the answer is, in a word, convergence. We have to converge on simple, reliable, proven systems (or watch the world melt into unbounded complexity and unpredictability.)
I'm reminded of these issues when I think about export controls on processors sold to China: I don't think it will have any effect. US-based programmers burn resources thoughtlessly. From what I've seen, Chinese developers often don't. Compare Gitlab to Gitea.
I read halfway through, enough to get the sense. What the guy is missing is the difference between "it bothers me" and "it matters".
He uses a car analogy (how much efficiency we squeezed out of cars vs software) but misses the point - if a car was available that consumed half the gas or went twice as fast, I’d buy it!
If a computer / OS was available to me that was so much faster / more efficient than what I use… turns out I don’t really care and neither does anyone.
I am writing this on a 6 year old iPhone X that I got after my wife upgraded to the iPhone 15 at the same time my toddler drowned my Pixel 6 pro. The iPhone X is noticeably laggier than the 15 or the pixel but it doesn’t matter enough to go upgrade. And ditto for my 10+ year old PC. Point being, if I as a consumer am not bothered by the perf I get to even just drive down to the store, manufacturers would be dumb to over index on it
Obviously this is "just me", but if people really cared about a phone that boots in one second instead of 30 (his example), there would be a market for it; the fact is nobody reboots their phone and it doesn't matter. On the other hand, if I had to wait 30 seconds to drive my car and another manufacturer reduced that to 1, I'd switch.
TLDR - yes it’s inefficient and no it doesn’t matter. Those old systems he’s talking about were “efficient” because they had to run on hardware of the day, and increased requirements meant decreased market size - ie it really mattered! I am sure someone bemoaned the bloat of win95 that couldn’t run on his beloved 286.
I believe what the author is bemoaning is not the business or customer realities, but the "art of engineering" being traded in for the bottom line. In many ways, this is simply the way all disenchantment works, the hard truth is that art doesn't really matter for its own sake. It needs to sell, and what makes it sell is not what makes one love the process of building. The elegance, the simple beauty, none of those "matter" to the market. Unfortunately, writing up articles stating how much you can't do what you love probably won't garner much sympathy (clearly), but I suppose that's why we have passion projects instead.
One of the key differences between art and engineering is that the former eschews compromise while the latter requires it. An engineer that over-indexes on a single dimension of what they are working on is seldom a good engineer.
Efficiency is one dimension. If I couldn't run calculator on anything but the latest Pixel / iPhone, that would be clearly insane. But it's not the sole dimension - if my ambition is to create something that works well for millions or billions users, hand-crafting Assembly may not be the right tradeoff.
People care but they have other concerns like compatibility that force them to surrender to bad software.
Look how bad the self checkout experience is. You might say “if people cared, they would shop elsewhere”. But people want to buy food more than boycott a store with bad software.
// Look how bad the self checkout experience is. You might say “if people cared, they would shop elsewhere”.
I would say that bad checkout matters - in the way that I am talking about mattering - because it does affect a non-trivial amount of behavior.
First, I am much more likely to opt for self-checkout at Whole Foods vs in my town's supermarket (that I otherwise like) because the WF experience is pretty smooth while at the other store, the machine blocks until help arrives every X items. Does that matter? Well I think so, because (1) some # of people will go to WF over this store just because of that (2) some # of people will stand in a cashier line - costing the store money - because of this (3) some people will opt for delivery because of this - undermining the "local store" value proposition in the long run.
The above impact exists because in the bad case, self-checkout is nearly unusable. If it were just a matter of every item scan taking 2 seconds instead of 1, or needing to get human help once a year instead of several times per checkout - nobody would care.
Until a critical mass of movers forms, no change will be made. And there's other inertia, including lack of signal from customers, existing practices, vendor monopolies.
Your reasoning assumes a fluid system with perfect observability into everyone's interests. Consumers can't even articulate their own interests fully -- it's impossible for vendors to respond properly.
Quality and high performing software exists when vendors elect to produce it. There are intrinsic motives and extrinsic.
Most quality products exist despite consumer demands not because of them. Don't fall for the "money incentivises everything" trap.
// Quality and high performing software exists when vendors elect to produce it.
Can you give me an example of where this exists separately from "and we can sell more units before it's performant"? I am open to that being the case, but wasn't able to think of a good example. Can you?
He lost me immediately with the car analogy, as if the auto industry, its products, and the ways they're used by consumers are in any way reasonable.
Everyone needs a car capable of reaching 120mph right? Moms and dads need giant utility vehicles to feel safe on the streets surrounded by giant pickup trucks. How is it "efficient" to drive trucks that are 2-3x the curb weight with less cargo capacity?
The problems occur due to business decisions not engineering
> Windows 10 takes 30 minutes to update. What could it possibly be doing for that long?
I recently updated Windows 10 in VirtualBox. According to htop, the process wrote over 17GB and I'm pretty sure it took more than 30 minutes using more than 100% CPU most of the time.
I think it is a consequence of reusability. Understandably, we do not want to write everything from scratch, but the issue is that any library you use will not have exactly the set of features you need, so a lot of stuff gets shipped regardless of how much of that reusable library you're actually using. I might be wrong on that part (maybe static linking does not work that way and does cherry-pick portions).
Or at least that's how it works for some languages. You cannot have just a portion of a jar included as your dependency. It comes all or nothing.
Tree shaking and the like do exist in the frontend world, but I am not so sure how efficient that is.
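As a rough illustration of that "comes with everything" effect (a minimal sketch in Python rather than the JVM or frontend world, simply because it's easy to demo): even one convenience import can drag a long tail of transitive modules along, whether or not you use them.

```python
import sys

# Snapshot of what's already loaded before the "one small dependency".
before = set(sys.modules)

import urllib.request  # an arbitrary stdlib module standing in for a library

# Everything the interpreter had to pull in to satisfy that single import.
pulled_in = sorted(set(sys.modules) - before)
print(f"one import dragged in {len(pulled_in)} extra modules, e.g.:")
for name in pulled_in[:10]:
    print("  ", name)
```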
"@tveastman: I have a Python program I run every day, it takes 1.5 seconds. I spent six hours re-writing it in rust, now it takes 0.06 seconds. That efficiency improvement means I'll make my time back in 41 years, 24 days :-)"
The example omits how long it took to write the Python script. Is that accounted for in the calculated result of "41 years, 24 days"?
It also assumes that memory usage, CPU usage and storage space are not relevant to efficiency.
There are other languages faster than Python that one could try besides Rust. Other languages might result in less memory usage, less CPU usage, less storage space and faster completion.
Also, if the script is used by a few million users, that's a lot of man-hours saved.
Plus, things in software tend to layer, so as the article points out, waiting a second for the script may be fine if it is the final, top-level task you are calling manually, but if it occurs in a long chain of calls it contributes to delay inflation.
To me, this is a result of products and companies being built on web stacks. It's easy to prototype, integrates with cloud services (mostly plug and play), and most developers know at least one stack that works in browsers. There's just not as much money in desktop/native application development - users are on the web and mobile. Of course I'd like to see a resurgence of desktop-native programs (not Electron), but the use cases are diminishing. It's harder to track users on desktop, harder to patch/update, and harder to hire people who know that stuff (always lean senior, meaning more expensive for startups).
One thing I'm finding... the browser is just a horrible, least-common-denominator environment for running an application.
I spent the 2010s in the desktop world. The Mac / Windows APIs are generally well designed. A lot of the abstraction and niceties comes from whatever frameworks are shipped with your application.
The browser forces you to use Javascript. Even if you use WASM, you have to use Javascript, because you still need to call back into Javascript to interact with many of the lowest-level APIs.
The browser UI is based on parsing complicated text; instead of the API and object-based models that we have on the desktop. It's really absurd, IMO.
It’s a trade off. Would you rather have more software with more features or less software that runs fast and efficiently?
The market has clearly spoken in favor of the former.
After all, the goal of engineering isn’t to produce some abstract beautiful art but rather to create something valuable for regular humans.
Also the car and building analogies are poor because they are essentially the same design that has been optimized for decades. Indeed the only “new features” in cars are usually software driven and probably just as inefficient.
Except, in my experience, they don't have more features. Has Slack fundamentally changed in the last few years, or Zoom, Teams, etc.? Why haven't these apps been optimized by now?
Same for web apps: why are Facebook and Gmail slow? I could accept it for new products, but not for nearly unchanged products after all these years.
Btw: Gmail's basic HTML view was fast, so it can't be an infra problem; it's bad frontend programming.
Has the market spoken? I agree that it's largely true but I am not aware of anyone selling software with an emphasis on performance. It's always been about new, shiny features. You can't really say the market decided if one half of the equation isn't even on the market
Demand software that has features and performs efficiently.
If anything, the last several years in the software industry have proven that hordes of mediocre developers really aren't a substitute for understanding how computers work.
The phenomenon we're seeing now is that VC money has dried up for all but those focused on efficiency, and the rats are scrambling to run to the other side of the ship before it sinks.
I looked up the Xi editor mentioned here and found that it has been deprecated. Looked at the article again and learned it's from 2018.
But I get the argument. Software should be fast and reliable, but making it fast is not what the customer pays for when it's just good enough.
Having to always start from the bottom is what discourages people from starting again. A company in the EU would be needed to build a smartphone and software on par with iOS and Android, but we'll never get there.
Arguably Xi was design bloat. I mean, it was a research project. Nothing wrong with that, but the community got super excited about it, ignoring... well, the pragmatic reality. Any time it was mentioned, the talk was about efficiently editing >1GB files. And a super over-flexible plugin system design.
For a lean and pragmatic text editor look at Helix, Kakoune, even NeoVim or Emacs.
> Modern cars work, let’s say for the sake of argument, at 98% of what’s physically possible with the current engine design.
I think this really highlights the problem. Would it even be possible to say this about software? Perhaps theoretically, but practically?
Thermal efficiency is well understood. Internal combustion engines are well understood. Every non-trivial software system, while using some common building blocks, is incredibly unique and decidedly not well understood.
The hardware/software industry needs to be regulated by levying a carbon tax and pollution tax.
The first iPhone/Android were great innovations. And subsequent iterations occasionally added better features -- but not each year.
Phones are one example, but all we are doing is shoving more carbon into the air, putting more trash on the streets and in landfills, using more water for datacenters, and putting more plastic and chemicals into the oceans and our bodies.
These days, my techno thrills come from using an 8-year-old, ancient 2-core (Celeron) notebook that can no longer run Win10 in acceptable fashion. Instead it runs (Mx)Linux for the desktop; it's imperfect but still obviously more performant than that notebook ever was with Win10, even when it was brand new, and sans telemetry for added benefit.
My middle finger to the sick joke the OP so elegantly describes.
I'm sitting on a macbook M1 with 3 different IDEs running, 3 chrome windows with a total of about 40 tabs open (a relatively small amount for me!), slack, iterm with 7 tabs, preview with some PDFs open, VPN, zoom, and a bunch of addons. I haven't seen a bug happen in weeks. The IDE is like magic compared to the 1990s (I graduated college in 92, so was professionally developing at that point) - faster, smarter, more well organized, and just vastly improved. In the 90s, you always saved multiple copies of everything because there were no versioning systems to speak of, and every application liked to randomly die, corrupting your file. It was a regular occurrence - as in weekly, sometimes more with MS Word. There were no significant help systems or context suggestions or autocompletions or spell checkers, and searching was a slow affair. It took minutes to boot up a significantly-loaded computer. Viruses were regular and nasty, and security wasn't a thing.
Today is infinitely better, albeit more complex. I couldn't stand even thinking of developing any of the software I work on today with a 1990s PC - the thought is just inane. Let alone developing without Stack Overflow, or even now, just getting CoPilot to do it for you.
Loading times and such can be a bit long from time to time, but that's just because wait times are keyed to how much people are willing to wait, so the waiting time distribution is centered on that value. We could have subsecond wait times for everything, but people are ok to sacrifice a bit of time for cheaper costs or more functionality or any number of options.
These kinds of articles have been popping up monthly on HN and I'm tired of them, but I get it - computers aren't perfect and the complexity of the world is insane. And while I wouldn't use a 90s computer, I would love to be working on 90s problems again - I spent almost all of my time writing new code, because the world needed that code. Now it is mostly just integrating and tweaking, and the big problems are figuring out what people want and need and whether there's a way to get it to them. You don't need to write a rasterizer or a 3D renderer or a database or any number of fun projects, because there are a million different versions of those out there that would take years or decades to replicate.
But the hardware and software we are using? Orders of magnitude better! And fast - holy hell is it fast. Just because you don't understand what it is doing doesn't mean it is slow.
There's everything to agree with in this article. I personally find it sad that a lot of people will still defend the current state of software because "features". Like the author, I feel the same disenchantment as microservice/devops/cloud evangelists keep telling me how my old services need to be redone from scratch with 3 more layers of abstraction to become state of the art.
Abstraction isn't the reason code is slow. Reducing abstraction is an utter micro-optimization. The idea that abstraction has to give way to performance is a fallacy in most cases.
The reason software is slow is because people write shit code, with or without abstractions. Too much db/api traffic, suboptimal algorithms, etc. Fact is, most developers just suck at programming. Which makes a lot of sense, at least where I am, because we don't have any rigor in our craft. People go through a bachelor's degree learning as little as possible and the bar for passing is embarrassingly low. They get a job and the employer just puts them to work; personally, I was placed alone on a maintenance project for a large application. Nobody reviewing my code, nobody else with any sort of domain knowledge to lean on.
In other professions, people have apprenticeships and such where experienced peers teach them the things they need to know and make sure they do things right. In software nobody gives a shit. Some QA person goes into the app and clicks a button and if it seems like it works they ship it.
People are stacking shit on top of shit, literally just writing legacy code. Because nobody teaches people how to do things right and nobody checks their work to see if they have done things right.
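To make the "suboptimal algorithms" point a bit more concrete, here's a minimal, hypothetical Python sketch (my own toy example, not anything from a real codebase) of the kind of accidentally-quadratic code that sails through review because it works fine on small test data:

```python
import time

ids_in_db = list(range(10_000))          # pretend these came back from a query
incoming = list(range(5_000, 15_000))    # pretend these came from an API call

# The version that "works on my machine": `x in list` scans the whole list,
# so this is O(n*m) and blows up as the data grows.
start = time.perf_counter()
slow_hits = [x for x in incoming if x in ids_in_db]
print(f"list lookup: {time.perf_counter() - start:.3f}s")

# Same logic with a set: O(n + m), typically orders of magnitude faster.
start = time.perf_counter()
id_set = set(ids_in_db)
fast_hits = [x for x in incoming if x in id_set]
print(f"set lookup:  {time.perf_counter() - start:.3f}s")

assert slow_hits == fast_hits
```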
Imho, it's because of "engineering". Which is exactly what we don't do. In engineering, there are by-the-book solutions for every established, well-known problem.
In software, doing that is boring. So we constantly invent new solutions. Because evaluating and reviewing products and maximizing quality is boring and hard, while hacking out new things is fun and cool.
The software industry is young. Using cars, planes, and freaking buildings as the comparison is a cheap shot; we have been making those for far longer, and even with cars I could argue the efficiency point. Besides, software's primary platform, the hardware, is also moving forward really fast. Why wouldn't something that's built on top of it be moving just as fast?
I think the article lost me when the author implied text editors were simple and easy. My understanding is that text editing uses sophisticated data structures and is actually a really difficult problem to solve, especially when combined with the lock-free replication used in multi-user editing. This is not easy or simple, and comparing it to a terminal editor felt disingenuous.
I can relate to the disenchantment but I found the argument here to fall flat.
Yes 100%. Plain text editing, especially with any sort of highlighting or parsing, is hard. But I'm also struggling to think of a common and modern "text editor" with high latency like that. When I think "text editor" I think of something only capable of saving and loading plain text, with some plugin support. Stock VSCode is exceptional at this. So is neovim. So is Microsoft Notepad. So is Sublime Text. So is Nova.app.
He has SOME editor in mind, I'm just curious what it is.
> lock free replication used in multi-user editing
I understand that author was referring to the concept of a minimal version that does just text editing and nothing more. Add features to anything and it will stop being simple and easy.
However, the author might also be generalizing and referring to "MS Word" as a text editor, which a lot of people unfortunately do. This is where I would disagree with the author's premise.
Personally, I’m seriously considering joining the monastery because software is just the worst today.
About the best you can hope for is get into a place that’s complacent because otherwise it’s daily wtf stories all the way down. The complacent place will be a daily wtf too but at least people will have bread and circuses
Art is wonderful when it fits its purpose. Modern art is garbage because since the beginning of the last century art became an opportunity for investment. Modern software is garbage-y for similar reasons. It could be wonderful. But to make it so you have to somehow overcome the corrupting forces.
Although I may never have time to actually work on it. Especially since it will be a complete waste of time unless I can get a huge number of people to adopt it.
The author misses the crucial cost/benefits equation.
How much is saved making the code more efficient? Now remember, their employer only cares about this in terms of their dollars.
Vs.
How much does it cost to make the program more efficient. Remember that every change risks introducing new bugs that could literally put a customer out of business.
As CPUs get faster, memory gets bigger, screens get bigger, networks get faster, and everything keeps getting better, one component of the system is not improving: the developer's brain. I think it just comes down to the fact that software is hard and bigger software is harder.
Such a naive point of view, implying software is somehow detached from reality. Software exists because of businesses, which have budgets and deadlines. Don't get me wrong, I love programming, but you don't just do it in a vacuum.
Users very rarely vocalize their need for speed, and even less their need for simplicity. A small app size might as well be associated with "low content", and therefore low value: a lot of effort just to seem less valuable in the end.
Is this the sound of alienation? This plaintive cry for the satisfaction of a craftsperson, for the feeling of a job well done and the knowledge that it could not reasonably have been done better?
Sorry, but housing should not be a model for anything... ever. The most ineffective and inefficient system out there is real estate and housing development... software is far better than those scumbags.
De-bloat may be an interesting AI opportunity because the existing software serves as a detailed spec (we have 40 years of evidence that humans won't suddenly start writing debloated code)
Honestly, if I could retire young or even just when I retire, my overall goal is to just work on free software that is simpler and faster for desktops, regardless of OS (though I would probably target Windows first). I'd happily work towards making better software more commonly available.
That would make an interesting retirement home, you're only allowed in if you're from the Software Engineering industry and willing to work on open source software.
> I’m dying to see the web community answer when 120Hz displays become mainstream.
Oh I can tell you the answer already. It's 'no'.
> Ever wonder why your phone needs 30 to 60 seconds to boot? Why can’t it boot, say, in one second?
Simple, run systemd-analyze blame, disable all services and boot in under a second. Of course now nothing works and you can't access the system to check how long it took...
But that's nonsense; the answer is bloat. You can run complex graphical OSs in a few hundred KB or, god forbid, a few MB, that boot in under a second but have everything. It's just a lot of work, and they lack the customisability modern people (seem to) want. QNX, BeOS, and, to me more impressive, something like SymbOS show you can build interesting and usable things that work and have tons of features people want. It takes more time, you cannot easily customise them, and there is nothing like the compatibility we're used to.
Your phone OS is just bloatware and you and everyone else knows it; no need to make it seem like it couldn't be different right now.
Well sure, I don't think anyone is denying that. And there have been absurd documented cases like babel or some other core JS library including a full photo of Guy Fieri for no apparent reason, increasing everyone's node modules by a few MB. Unlikely to be a lone case of these sort of shenanigans either.
But sometimes bloat is also good in a way? Micropython comes to mind. It may be the most laughably absurd way to destroy one's performance on a microcontroller but the abstraction it offers is really nice to work with. The problem is, I suppose, that we're in a recursive spiral of abstraction and each additional layer has to take into account all possible cases, drivers and whatnot for the next one. Hence the bloat. But there is definitely something hilarious about running an OS that runs an OS VM, that runs a container, that then runs another VM to run an interpreted language.
One day we'll be able to chatgpt ideas directly into machine code and get both speed and efficiency, but until then I don't see how one can avoid long dev times without working at a higher level.
My first computers didn't do anything well. You would reboot the entire machine to load a game from a floppy disk which took forever. If that game crashed you'd have to start all over again.
My current computer boots in seconds.
In my experience it seems a good deal of bloat has to do with resiliency. Most desktop applications don't completely crash the entire system when they contain a memory access bug. They just crash. That doesn't come for free.
Let's say you want to go outside and cross the street. In order to do that you need to put on your shoes and lace them up so that you don't trip and fall. You do this and while crossing the street one day you fall and discover that your lace came untied. In order to prevent this next time you decide you will look down after every step to make sure your shoes are still tied.
This is how a lot of modern software works: we check if our shoes are still tied.
One way to fight against this, I suspect, is that we have to think more clearly about correctness. Back when Multics and Unix were being developed under the same roof at Bell Labs the Multics folks were wringing their hands over error recovery: the kernel had several "rings" and handlers could be registered... but there were always edge cases. Unix on the other hand? Just let it crash. Since then there has been a lot of research and development into how to formally specify systems using precise, checkable proofs. I think there are certain kinds of performance gains we can only make when we don't have to keep checking if our shoes are tied and those can only come from languages and tools that let us formalize our programs.
This is not my idea, it's from Conal Elliot who's awesome and smart [0].
The other source of bloat is the kitchen-sink effect. Writing software is time consuming. Most businesses don't want their developers writing low level drivers and optimizing memory usage. They want to get a product in front of consumers and get paid because the business might not even be in business in 9 months if you don't start shipping and making revenue. Fortunately there are a great number of kitchen-sink software frameworks and stacks of libraries for most development tasks... which include more code and resources than your application will use... but again, convenience and speed dominate. The average phone in 2018 had something like 4GB of RAM and ~32GB of HDD space. If your app used up a gig of working memory and took up a gig of space but you got it out in a few months by hiring JS developers who knew react native for less than what it would cost to hire Swift/ObjC developers to write the perfect app from scratch? We know what most businesses will choose.
> In my experience it seems a good deal of bloat has to do with resiliency. Most desktop applications don't completely crash the entire system when they contain a memory access bug. They just crash. That doesn't come for free.
This is provided by hardware and basically free. Besides, computers aren't slow because of kernel bloat. Linux, OSX and Windows have serious issues but all major software problems are in user space.
> The other source of bloat is the kitchen-sink effect.
This is the "user space" bloat.
Resiliency is one of those things that's baked into every layer of the stack and seems to be cumulative. Research into lock-free algorithms in order to improve efficiency requires correctness, and there are still places at the OS level and in user space that would benefit from it.
Lock-free algorithms don't help much with performance. Locks are very cheap and your PC spends very little time spin-locking. Even interrupts and thread-switching (which is much more expensive) aren't that big of a deal nowadays.
I don't think performance is at odds with resiliency. You can have one, both, or neither. Resiliency you get by eliminating entire categories of problems with strong conceptual abstractions. As you pointed out, virtual memory that protects applications from interfering with one another is one such example. Other examples are transactional databases (ACID), and perhaps software transactional memory. Statelessness also has fantastic resilience properties and is a big driver behind the success of web applications.
Take sqlite for instance. sqlite is very resilient, gives you transactions that you can roll back, and on top of that, storing small files in a sqlite database can be faster than using the filesystem directly. You get resilience AND performance AND a boost in productivity.
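A minimal sketch of that resilience-plus-performance combination, using Python's built-in sqlite3 module (my own toy example, not something from the thread): a batch of small blobs goes in inside one transaction, and a failure rolls the whole batch back instead of leaving half-written files on disk.

```python
import sqlite3

# In-memory DB for the demo; a real application would point at a file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT PRIMARY KEY, data BLOB)")

small_files = [(f"icon_{i}.png", b"\x89PNG" + bytes(64)) for i in range(1000)]

# `with conn:` wraps the batch in a transaction: commit on success,
# automatic rollback if anything inside raises.
try:
    with conn:
        conn.executemany(
            "INSERT INTO files (name, data) VALUES (?, ?)", small_files
        )
except sqlite3.IntegrityError:
    print("batch rolled back; database left exactly as it was")

count, = conn.execute("SELECT COUNT(*) FROM files").fetchone()
print(f"{count} small files stored")  # -> 1000 small files stored
```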
I honestly just want some sane build and packaging tools. I will work on your pile of garbage, but please make it easy to setup and leave as soon as possible.
Lack of care, lack of creativity, profit over value... and yeah, those are far worse than poor performance. The poor performance is a side effect, not a cause...
TL;DR: where performance actually matters and is a real competitive factor, things are fine - say, video games. If someone created the same game giving half the FPS on the same hardware, it would be dead on arrival and the gaming community would pour buckets of shit on it the same day. Where it's not, it's not.
The disconnect here is between people who believe there's an inherent wrongness to things being worse than they need to be just because it's swept under a rug, versus those who see no problem if the market doesn't punish it. I realize those in the first camp have the burden of proof to make a cogent argument beyond "this offends me personally," but those in the second camp are arguing from their own conclusion.
Edit: it's like climate change: individual decisions that are all locally justifiable according to some decision framework, but that add up to a world which is objectively crappier than one in which we didn't collectively behave this way.
tldr: the foundational problem is lack of liability
The whole ecosystem of modern programming and software is just insanely opaque. As an end user you have very little clue who is to blame for an error or a slow machine.
Imagine your car were not built by a single company liable for the whole product, but instead you bought the individual components from twenty-plus different companies, ranging from IBM to small startups. No central planning. They all have their own take on it. They just agree on a few standards for how the things bolt together. It would be the same mess.
Like in the earlier days of analog tech, ransomware attacks today are blamed on bad luck. Just put up some Antivirus gemstones in your Outlook and don't forget to get your security christened with some certifications.
But to be fair, people seldom die from slow software.
> Windows 10 takes 30 minutes to update. What could it possibly be doing for that long?
We have a contractor who built a system on Azure that I'm about to take over. I was complaining about how slow RDP'ing into the Azure machine that runs Visual Studio and SSMS is. The contractor said it's quite fast for him, so I sent him a video of what I see. In that video, you can watch parts of the window as they are painted. Same for executing a query in SSMS. Now, this guy is a fantastic engineer but I had to chuckle a little when he saw the video and replied, "Yeah, that's how fast it is for me too."
After two weeks of fighting with SSIS and RDP speeds, I eventually gave up and rewrote it in Clojure in less than a day.
> Modern text editors have higher latency than 42-year-old Emacs.
Visual Studio Code is about the only Microsoft product I have any respect for yet, for the most part, I've gone back to Vim as my daily driver. Even the contractor I mentioned above said he uses it a lot of the time.
> The iPhone 4s was released with iOS 5, but can barely run iOS 9.
I have had so much respect for Apple in the past for saying openly, "This release is going to be lacking in features because we're going to try to make things run better." Yet, when was the last time they did such a release for any of their platforms? Given some of the oddities I'm having with Sonoma, I'd say it's time.
> Windows 95 was 30MB. Today we have web pages heavier than that!
I spent 2/3 of my career trying to make web applications as good as or better than native. I gave up on that about 10 years ago after realizing that a) the web platform was designed as a document sharing system, not an application platform, and b) as much as I love Javascript, Node and NPM are a dumpster fire. We need to call it a day and do something else. My vote is for something akin to Flutter.
Also, the sheer tonnage of web application frameworks is a clear sign to me that it's a problem that can't be solved. They all eventually devolve into a bloated mess.
Related: nobody thinks a compiler that runs for minutes or even hours is a problem.
Am I the only one who thinks compiling dynamic, scripting languages is insane? Jeff Goldblum called. He wants his Jurassic Park quote back.
It just seems that nobody is interested in building quality, fast, efficient, lasting, foundational stuff anymore.
50% of the blame needs to be put on the stakeholders. Too many of them don't understand (and, often, don't want to) and demand the product be delivered cheaply and done yesterday. The current state of software is what that attitude produces.