One eye-opening experience for me in this regard was when I installed an old version of Microsoft Excel--I think it was either the 2000 or the 2003 version--on modern hardware. The snappiness of every action was AMAZING. Every click, drag, keystroke felt instant.
It's hard to describe, because modern Excel doesn't feel slow at typical actions; before this experience I would have said it responded to my inputs immediately. But the difference was obvious and felt GREAT, even though it was probably only a handful of milliseconds.
Microsoft Excel is probably also a good example for another reason.
Microsoft Office 2003 Standard Edition cost $400 new, or $239 as an upgrade ($664 or $397 in today's money). If you wanted Excel alone, you could buy it for $229 ($109 as an upgrade from an earlier version), which works out to $380/$181 in today's money.
Keep in mind that new features arrived every two years at the very earliest. Two years of Microsoft Teams costs $96 and includes hosting.
The question of web vs native isn't "do you want fast software or slow software" but "do you want cheap software that brings lots of features or do you want expensive software that makes you pay again to keep up with the competition after two years".
Of course enterprise pricing was more generous and piracy was another factor that doesn't really come into play with Teams, but I'm not surprised all software has moved to quickly evolving web apps these days. In a time where paying more than $60 for a video game is considered controversial, $660 office suites simply aren't viable.
Another, smaller issue with even modern Excel compared to the old versions is that a lot of sandboxing has been added since, and even that is proving not to be sufficient by itself. The old happy-go-lucky Excels were a one-click way into infecting a whole network, because no time was wasted on boundary checks unless they were absolutely necessary to keep the program from crashing. Web apps especially have layers upon layers of security mechanisms to prevent viruses from spreading as trivially as they did twenty years ago.
Honestly, this was a big one for me too. I remember being a kid and using Office 2007 on a terrible Windows XP desktop, and things just worked. Now I'm on a "modern" machine with "modern" software, and right-click takes half a second to pop up the context menu.
In the grand scheme of things, maybe not a big deal? But that's also how things get worse.
I've had two examples of this myself. In both cases, it was because I was unhappy with existing libraries and decided to write my own.
The first was a simple text UI for a business application: in other words, forms rendered with a text UI instead of a GUI. Underneath, it was just laying out the screen with curses, reading the keyboard, moving between fields on the form, and doing basic jobs like text, number and date entry. I don't recall why we didn't use existing libraries, but it did mean we had something to compare it to. To this day I remember the end result. It was blindingly fast. So fast that it ran on an IBM PC running MMU-less Linux far faster than any web form you'll find today. I still have no idea what the other libraries could possibly be doing to make them so slow. If you want to get a feel for the difference in today's terms, compare vim to VSCode.
The other thing was a web framework. None of the other members of my team had formal programming qualifications. All of them struggled with any sort of concurrency, and I found myself forever fixing race conditions in their web apps. Whenever the "run the main loop until nothing changes" nature of modern web frameworks reared its ugly head, they were lost. So I wrote one that didn't need that loop. Instead, when you changed a property, it and everything that depended on it changed when you did the assignment, synchronously. It turned out you needed so much boilerplate that it was ugly to use. But again, it was fast. The first remark most people made when using an SPA built on my framework was how snappy it was. Everyone touts how simple and fast their virtual DOM is, but evidently that's only in comparison to other virtual DOMs. Compared to just manipulating the real DOM once and only once for each change, they are all dog slow.
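To make that concrete, here's a minimal sketch of the idea in TypeScript (not the framework I wrote, just an illustration, and the `Prop` class and click-counter usage are hypothetical): assigning to a property pushes the new value to its dependents immediately, and each dependent patches the real DOM exactly once, with no diff/render loop.

```typescript
// A minimal sketch of assignment-time, synchronous propagation.
// Setting a Prop pushes the new value to every dependent right away,
// and each dependent touches the real DOM exactly once -- no diff/patch loop.

type Listener<T> = (value: T) => void;

class Prop<T> {
  private listeners: Listener<T>[] = [];
  constructor(private value: T) {}

  get(): T {
    return this.value;
  }

  // The assignment itself is the update: dependents run synchronously, right now.
  set(next: T): void {
    if (next === this.value) return;
    this.value = next;
    for (const fn of this.listeners) fn(next);
  }

  // Dependents register a callback; it also runs once immediately to initialise the view.
  bind(fn: Listener<T>): void {
    this.listeners.push(fn);
    fn(this.value);
  }
}

// Hypothetical usage: the <span> is patched directly inside the set() call stack.
const count = new Prop(0);
const label = document.createElement("span");
document.body.appendChild(label);
count.bind(v => { label.textContent = `Clicked ${v} times`; });

document.body.addEventListener("click", () => count.set(count.get() + 1));
```

The cost is exactly the boilerplate problem mentioned above: every dependency has to be wired up by hand, but there is no render loop to wait for.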
Then there is the difference between a Linux desktop and the Windows 11 desktop. While I guess it happens less often than not, Windows 11 can take seconds to respond to a mouse click. It turns out that creates a cognitive load Windows users seem to be unaware of: they have to constantly poll the interface to verify it's ready for the next click or key press. You don't notice it until you move from a Linux windowing system to Windows. Then it's jarring, for a while.
The 2010 version of Excel is significantly faster than later versions even for pure number crunching, e.g. doing some calculation over a big sheet with a million rows and a thousand columns.
People used to be OK with Webpacker; then esbuild came along and they thought that was OK, until Bun showed what is really possible. The same with Atom, VSCode, and now Zed. (For a very long time, people on HN claimed VSCode is about as fast as Sublime.)
I am hoping Vision Pro, Meta, or VR/AR in general will finally provide enough incentive to solve low-latency computing. Most of today's computing is very slow, from the OS to apps and the web. And I value Steve Jobs' mentality that opening the lid of a MacBook should mean it is ready. Sometimes I wonder whether Steve Jobs was the yardstick for speed and smoothness of the Apple software platform, because it has gone downhill since. Especially Safari.
> For a very long time, people on HN claimed VSCode is about as fast as Sublime
The trouble is that "fast" is an insufficient word to describe a piece of software.
As everyone has certainly realized by now, software feels "fast" when it is very responsive, e.g. the latency is low, and Sublime Text certainly felt "fast". However, Sublime Text regularly ran into problems where it had too much algorithmic complexity, e.g. when dealing with pathological cases like extremely long lines or something like that. Basic editing could become slow or you might even need to force close it. Not great.
VSCode feels reasonably responsive, at least, it certainly feels more responsive than you'd expect what is effectively a web app strapped to a browser to feel. A lot of effort was put into the feel of Monaco and keeping the text editing relatively unblocked. But maybe most importantly, careful optimization made VSCode perform well in a lot of scenarios where even native code software like Sublime and Kate struggled, with very fast Find and Replace that could gracefully handle pathological cases and a document structure that was good at loading, displaying and editing large documents (and with some limits applied to try to prevent pathological cases from blowing up the user's editor window.)
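As a rough illustration of the document-structure point (this is not VSCode's actual implementation; real editors use piece trees or ropes, this is just a hedged sketch of the complexity argument): with a single big string, every keystroke copies the whole buffer, so a pathological multi-megabyte document or line makes basic editing cost O(document size) per edit, while splitting the text into chunks confines each edit to one small piece.

```typescript
// Naive buffer: one big string. Every insert rebuilds the whole thing,
// so editing a 100 MB document copies ~100 MB per keystroke.
function insertNaive(doc: string, offset: number, text: string): string {
  return doc.slice(0, offset) + text + doc.slice(offset);
}

// Chunked buffer: the document is split into fixed-size pieces, and an
// insert only rewrites the single chunk it lands in. Real editors go
// further (piece tables, ropes), but the complexity win is the same idea.
class ChunkedDoc {
  private chunks: string[];

  constructor(text: string, chunkSize = 4096) {
    this.chunks = [];
    for (let i = 0; i < text.length; i += chunkSize) {
      this.chunks.push(text.slice(i, i + chunkSize));
    }
    if (this.chunks.length === 0) this.chunks.push("");
  }

  insert(offset: number, text: string): void {
    // Walk to the chunk containing the offset.
    let i = 0;
    while (i < this.chunks.length - 1 && offset > this.chunks[i].length) {
      offset -= this.chunks[i].length;
      i++;
    }
    const chunk = this.chunks[i];
    // Only this one small chunk is copied, not the whole document.
    this.chunks[i] = chunk.slice(0, offset) + text + chunk.slice(offset);
  }

  toString(): string {
    return this.chunks.join("");
  }
}
```

A pathological line can still break other things (tokenization, highlighting), which is presumably why editors also cap line length for those features, as mentioned above.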
Zed is off to a good start because it takes a lot of learnings from VSCode and others. In my opinion, though, it doesn't really feel dramatically better than VSCode, as the day-to-day UX is just less polished overall. It is, however, very promising, as it certainly feels better than VSCode did in its early days.
Hard agree on VR / AR latency requirements. It's the first consumer use case in a while that's ahead of the average user's compute power, and I really hope it will mean good things for performant software.
> Most people are “okay” with waiting 500ms for a right-click context menu to appear,
Maybe it's that most people have never experienced the world "before", when everything UI-wise was a lot faster than today because we mostly used real native applications.
I often have this debate with friends who think VSCode and co. are awesome, when for me an editor with that much latency on every key typed is barely usable.
Same effect with apps like mail clients. When you're in Gmail and struggling to select hundreds of spam messages across multiple pages to delete them, compare that to the instant click/drag selection of emails with infinite scrolling in a native application like Thunderbird.
Yeah, it's definitely one of those things where your eyes are "opened" when you use something faster. Kind of like headphones, what you have is good enough until you try something better.
Slack is not a good example of a performant electron app (or, perhaps if it is, that is quite damning praise). I should never be able to type faster than it can put characters on screen...
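For what it's worth, this is easy to measure rather than just eyeball. A hedged sketch of checking keystroke-to-paint latency from a browser or Electron devtools console (the `#message-box` selector is hypothetical; `event.timeStamp` and `performance.now()` share the same clock in modern browsers):

```typescript
// Log how long each keystroke takes to reach the screen: the delta between
// when the key event was generated and the next frame paint after handling it.
const box = document.querySelector<HTMLInputElement>("#message-box"); // hypothetical selector

box?.addEventListener("keydown", (event) => {
  // requestAnimationFrame fires right before the next paint, so this
  // approximates input-to-render latency for the typed character.
  requestAnimationFrame(() => {
    const latencyMs = performance.now() - event.timeStamp;
    if (latencyMs > 16) {
      console.warn(`Slow keystroke "${event.key}": ${latencyMs.toFixed(1)} ms to next paint`);
    }
  });
});
```

At 60 Hz, anything over roughly 16 ms means the character missed the frame it was typed in, which is exactly the "typing outruns rendering" feeling described above.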
This is often explained as “cycles are cheap, programmers are expensive”. For the sake of argument, suppose LLMs are able to replace programmers for “laborious but straightforward work” like writing native versions of your app for every platform, and optimizing the hell out of it.
Do we then expect all the apps we work with to become snappy again? Or are there red-queen-like dynamics that will prevent it?
I think "software is a gas" is the better explanation. ("Software always expands to fit whatever container it is stored in")
Microsoft has enough engineer-hours on hand to make snappy UIs if they wanted to, but it's just not a priority for them. More/cheaper engineer hours doesn't really change that prioritization.
Microsoft would hire a ton more engineers and look very different if engineer price went to ~zero. So what does that look like? If you can make an app snappy with $100 in GPU time, why wouldn’t they?
Because they'd rather spend that $100 in GPU time on adding a new feature. Repeat that a few times and now your app is 10x more complex and costs 1000x more to perf-optimise.
The hypothetical is only interesting if we draw a distinction between easy and hard tasks. ("AI can do anything a human can do behind a keyboard" is AGI-complete.) For this hypothetical I asserted that performance optimization and porting were automatable, but that doesn't extend to arbitrary feature work.
If you think there’s a more interesting/realistic/well-defined assumption, please suggest it!
Snappy software requires the design of the system to be in sync with the engineering. Also the team needs to be aligned on snappiness being a priority.
Product designers not understanding how adding certain UI elements, animations, or features affects performance is the main cause, in my experience. The fastest way to speed things up in certain cases is simply to remove some elements.
I would expect the situation to get worse rather than better if LLMs can deliver any feature a product designer wants without limitation and the product designer doesn't care about "snappiness".
The assumption of the hypothetical is not that they can deliver any feature, which I consider (maybe you disagree) more open-ended and less automatable than porting and optimization.
If you think that’s an unrealistic assumption, please feel free to propose another!
I'm saying that it doesn't matter how much porting and optimization the LLMs do if the person designing the product doesn't care about efficiency.
Currently designers already have a pretty limitless ability to add features even without LLMs; LLMs will just compound that. The only way to get efficient code is to have the right mindset from the get-go, not as an afterthought that you hope the LLM will optimize.
That's one possible outcome; let's call it outcome A. The others that occur to me are that (B) software stays the same but gets cheaper, or (C) software stays the same and is still expensive but profits increase. I imagine companies would much prefer outcome C if they can get away with it, and will do everything in their power to achieve it. But they are subject to market pressures, so they may be forced into one of the other outcomes. Which outcome do you think the market will force? Personally I think we will end up with a mix of B and C.
Because features require more human overhead to understand and manage than porting/optimizing does, by assumption of this hypothetical. Product managers and high-level architects aren’t free.
There is a future where this is within the realm of the possible. All it takes is one good AI capable of doing this, which can then be duplicated many times to work on the task: an AI engineer that knows how languages compile down to machine code and understands the intent of the blueprint, which is what the slow app would be. Five to ten years, maybe?
I mean, we currently think in the context of programming languages as written text, but AI has shown that it's capable of doing speech-to-speech "tasks" (for lack of a better word), where tokens are not only present in the form of text tokens, but also audio tokens, which skips the TTS step. It might become able to do video-to-assembly tasks, where it sees what an electron app does, maybe also reads the code, and then knows what needs to be done to convert this into optimized assembly code.
The reason is that most developers have no clue about, and/or no interest in, performance, so they only optimize where there's a user-visible problem, which means the program will consume resources proportional to the speed of the developer's machine.
I think the right way to look at this is the trade-off space around performance. You can usually gain something by giving up some performance, so those who make that trade but don't go too far overboard on slowness will win.
I will say I share the author's sentiment that 9 billion cycles should be enough for anybody. Unfortunately for us, it's usually somebody else's computer that's holding us up.
Casey Muratori started a whole educational web series about this: the fact that hardware keeps getting faster and faster while the user experience seems to stay the same.
The lower your performance requirements, the more you can loosen up what kind of programmers you hire. A dev who can only use <insert high level GC'd language known for being easy to use/easy to hire for> is relatively quite cheap! The language isn't necessarily the problem, but at the very least it correlates with the problem due to pricing. The code may suck, but most companies don't care, because most clients either don't care or there's not an incentive structure in place to communicate the care back up to the corpo decision-makers.
I think a lot of it is that consumers don't care that much. A couple of seconds is still fairly quick regardless of how many cycles are going on in the background. With humans, you don't ask how they can take a couple of seconds to reply when they have 100 billion parallel neurons.
The only thing that really bugs me computer speed wise these days is connecting to wifi. Why does that take a minute while you just want to read something?
I'm finding it slightly ironic[0] that this site takes 162 MB RAM (according to hovering over the tab in chrome), scrolls incredibly slowly, and has hijacked the right-click menu so I can't open links in a new tab.
I definitely agree with the content, though. I'm so glad I don't do webdev.
[0] I think this is actually correct usage of "ironic."
Entirely possible, I'm on a work-issued laptop and I can't install extensions, because raw-dogging the internet is safer than letting employees block ads.
I'm guessing you're on Edge if you're in that predicament; in that case you should check if you can set tracker blocking to "strict". It doesn't work completely like an ad blocker (it doesn't hide broken elements for instance) but that may block loads of unwanted content without needing permission to install addons.
Firefox 115, tab takes 9.3MB with adblock disabled and Enhanced Tracking protection also disabled.
No problem with scrolling or right-clicking. Or any other kind of problem.
I couldn't upvote this enough. I find it utterly infuriating that across the world, we spend hundreds of billions (trillions?) of dollars on computing hardware. And almost all of the cycles (easily over 99% of them) are totally wasted doing stupid, totally unnecessary makework because programmers are too lazy.
Tauri (an Electron-like framework written in Rust) used to have a page where they touted how efficient it is, boasting that it "only" took 300MB of RAM to run a "hello world" app and started up in "only" 25,000 syscalls. 300MB of RAM is 2.4 billion bits. Just, how? The Tauri benchmarks page has since been taken down--perhaps out of embarrassment.
In comparison, the Super Nintendo had 128KB of RAM (roughly 2,000x less than Tauri's hello world needs) and it ran all sorts of incredible games, including the legendary Super Mario Bros. The Atari 2600 shipped with 128 bytes of RAM, which is less than one tweet. (Though cartridges could contain extra RAM chips.)
I got curious and it turns out both Tauri and Electron seem to have gotten much more bloated over time.
In 2021 they claimed a RAM usage on Mac OS of 13MB vs 34MB for Electron [0]. In 2022 the numbers (on Linux) were more than 10x as high, 180MB vs 462MB as well as a startup time of 0.4s vs 0.8s [1]. In 2023 the comparison is gone from the Readme.
The Super Nintendo didn't need to render mixed direction scripts from different languages at 120fps. It had zero mouse support, its keyboards lacked almost every key, and its audio output was simply terrible. I don't think it supported IMEs or screen readers either. When I'm doing desktop work, I'd take a 300MB browser application over a SNES any day.
The laudatory terms used in the Tauri benchmark were a tad silly (though it makes some sense, as Tauri competes with Electron, not with normal native toolkits), but modern GUIs require more processing power than those 160x120 GUIs from back in the day did. It also helps a lot that you can pay for a modern app subscription for half a decade before you've spent what software used to cost (software that would be outdated by next year, and you had to pay again to upgrade), and that old price didn't come with hosting of any kind either.
Programmers have always been lazy, but the modern laziness when it comes to performance isn't because nobody cares; it's because nobody is willing to pay for the kind of responsiveness of yore anymore.
This is the standard response, and I've given it myself, but I refuse to accept that we need all of the bloat of Chromium in order to do internationalization, accessibility, fast high-resolution rendering, etc.
I use a different retro point of reference than josephg, though it happens to involve the same microprocessor. One of my favorite programs as a child was Diversi-Tune, an early music program for the Apple IIGS. It could play and record MIDI, and also show song lyrics, karaoke-style. The total program size is just under 48 KB, including one file that appears to contain configuration data. Granted, that doesn't count the Apple IIGS ROM or the ProDOS 8 operating system. When you add those, the total comes to just under 200 KB (for a ROM version 01 Apple IIGS).
And Diversi-Tune didn't even use all of the ROM. I know this because one of the areas where Diversi-Tune fell short, even by Apple IIGS standards, was accessibility. The documentation for the Textalker GS screen reader specifically called out that the screen reader didn't work with Diversi-Tune because Diversi-Tune didn't use QuickDraw II for text output. Needless to say, Diversi-Tune didn't support modern internationalization either.
But how much code would it really take to implement a modern Diversi-Tune with accessibility, internationalization, and so on? It must be well short of the ~180 MB (and growing) for an Electron binary. Current work on Rust GUIs suggests that the binary size might fall between 5 and 15 MB, depending on which compiler optimizations one chooses. And that's with an advanced GPU-based renderer that compiles shaders at run time. I'm sure it's possible to do better on that front. Of course, that's on top of the OS, but one could easily imagine a sub-500-MB Linux-based OS as the platform. I don't know about memory usage.
Yeah the other reference point I sometimes think about is the old windows IRC client mIRC. mIRC was super fully featured - it supported downloads, scripting (I think it had its own embedded scripting language), colors, and all sorts of other wild things that I never learned to use. It had way more features than modern discord, although it was missing discord's voice chat, video chat and screen sharing. (And it obviously didn't have any of the modern encryption algorithms).
I downloaded it over my 56k modem and if I'm remembering right, the download was around 2-3mb.
Programs like mIRC benefitted a lot from the hundreds of megabytes of Windows libraries that the OS shipped with. These days applications all seem to ship their own copy of every system DLL so they can work on every platform without having to support different APIs. As a result, mIRC only worked on Windows.
If mIRC is all you need, mIRC still works just fine. It's still maintained, so unlike ancient versions of MS Office, you can be reasonably safe while using it!
> Programs like mIRC benefitted a lot from the hundreds of megabytes of Windows libraries that the OS shipped with.
Hundreds of megabytes is overstating it. Windows 98 was 175mb for the entire OS, including the kernel and all included applications. I doubt the windows UI library of the day (winforms, gdi, etc) totalled more than 20mb. (But please fact check!)
And I don't think browser engines like Electron ship their own system DLLs. DLLs are only relevant on Windows--they won't do you any good on Linux or macOS. And Windows always comes with those DLLs. I'm pretty sure Chrome on Windows still makes use of a lot of the underlying Windows APIs to render content on webpages in a system-native way. It just has its own layer of heavily customized UI code on top.
But yeah, it’s cool that applications like mirc still work. It would definitely be a fun project to make a Spotify client or something in zig using winforms. It would be a fun exercise to make something so small.
> The Super Nintendo didn't need to render mixed direction scripts from different languages at 120fps.
So what? None of that stuff necessitates such obnoxious computing requirements. How would you fill 2.4 billion bits of ram with the stuff needed to support a 120hz display? You can run quake 3 at 120fps on a potato, and I promise you a full multiplayer 3d game engine has much higher ram and compute requirements than a font engine with a single font loaded.
> it's because nobody is willing to pay for the kind of responsiveness of yore anymore.
Aren't they? With an attitude like that, how would you know?
Sometimes I imagine an alternate universe where computer hardware doesn't get faster every year. In this world, consumers only replace their computers and phones when they physically break. As a result, our software couldn't keep getting slower over time. Any increasing bloat & new features would usually need to be "paid for" performance wise with newly added optimisations.
For most consumers, everything would look more or less the same. All the software most people depend on (email, teams, the web, etc) would run the same as it does today. But miraculously, people would save thousands every year by getting off the computer hardware treadmill. Across the whole world, hundreds of billions of dollars per year of money spent on computers would be saved.
There's no way you could convince someone living in that alternate reality that our reality makes any amount of sense.
The cost would be that programmers would need to stop getting lazier every year. We'd just need to hold at the current laziness level. Can we manage that? Apparently not. Instead we have attitudes like yours: "My hello world app is actually similar to Crysis in its computing requirements. It somehow needs 300MB of RAM to be able to render text at 120fps, because that's how that works."
I'd love for every application to be a few megabytes in size like in the old days. I'm a big proponent of building applications using native code and OS native controls, as far as those still exist. Unfortunately, nobody sells that software anymore.
If you think you can make competing software for the Teams/Slacks of this world that runs the way programs written for Windows 2000 run today by just convincing a few developers not to be lazy, you stand to make a LOT of money. I'd welcome your attempt for sure.
Everyone knows there's no strict necessity to modern desktop bloat, but nobody seems to have come up with a solution other than "what if we all did a lot more hard work (mostly for free)".
This is like praising the reliability of modern automobiles while decrying the distance we have to travel for everything. You wouldn’t have fast computers without slow software.
That doesn't feel correct. I think we push the boundaries on hardware because we aspire to scale up computation. If all those aspirational computations were faster we'd still develop faster hardware and have bigger ambitions. Unnecessarily slow software has no redeeming features.
They are both cause and effect: we accept longer distances because cars are reliable and we insist on reliable cars because they are necessary. You can’t get one without the other.