Fully agree with the premise. Most youngish users have never even experienced true performance. Our software stacks are shit and rotten to the core, it's an embarrassment. We'll have the smartest people in the world enabling chip design approaching the atomic level, and then there's us software "engineers" pissing it away with an inefficiency factor of 10 million %.
We are very, very bad at what we do, yet somehow get richly rewarded for it.
We've even invented a new performance problem: intermittent performance. Performance isn't just poor, it's also extremely variable thanks to distributed computing, lambdas, whatever. So users can't even learn the performance pattern.
Where chip designers move heaven and earth to bring compute and data as close together as physically possible, leave it to us geniuses to tear them apart as far as we can. Also, leave it to us to completely ignore parallel computing so that your 16 cores are doing fuck all.
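For what it's worth, putting those cores to work isn't exotic. A toy Go sketch of the basic fan-out pattern (illustrative only, not anyone's production code):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    // parallelSumSquares splits the range 0..n-1 across one goroutine per core
    // and sums the squares. A toy workload, but it's the shape of work that
    // most desktop software never bothers to spread across your 16 cores.
    func parallelSumSquares(n int) uint64 {
        workers := runtime.NumCPU()
        chunk := (n + workers - 1) / workers

        partial := make([]uint64, workers)
        var wg sync.WaitGroup
        for w := 0; w < workers; w++ {
            wg.Add(1)
            go func(w int) {
                defer wg.Done()
                lo, hi := w*chunk, (w+1)*chunk
                if hi > n {
                    hi = n
                }
                var s uint64
                for i := lo; i < hi; i++ {
                    s += uint64(i) * uint64(i)
                }
                partial[w] = s
            }(w)
        }
        wg.Wait()

        var total uint64
        for _, s := range partial {
            total += s
        }
        return total
    }

    func main() {
        fmt.Println(parallelSumSquares(1 << 24))
    }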
You may now comment on why our practices are fully justified.
The reason is that the money people don't want to pay for it. We absolutely have the coding talent to make efficient, maintainable code, what we don't have are the available payroll hours.
10x every project timeline and it's fixed, simple as.
Granted, the big downside is that you have to keep your talent motivated and on-task 10x as long. That's like turning a quarter horse into a plough horse; it's not likely to happen quickly, if at all. You'd really need to start over with the kids who are in high school now writing calculator apps in Python, by making them rewrite those apps in C and grading them on how few lines they use.
In other words, it's a pipe dream, and will continue to be until we run out of hardware capability, which has been "soon" for the last 30 years, so don't hold your breath.
> 10x every project timeline and it's fixed, simple as.
I'm skeptical. I've seen too many developers write poorly performing code purely out of indifference. If you gave them more time they'd have a lot more Reddit karma but I'd still be finding N+1 problems in every other code review.
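For anyone who hasn't had the pleasure: the N+1 shape is one query for the list, then one more query per row. A Go sketch against a hypothetical users/posts schema (table names, columns and the "?" placeholder syntax are all illustrative and driver-dependent):

    package main

    import "database/sql"

    // N+1: one query for the users, then one query per user for their posts.
    // 1000 users means 1001 round trips to the database.
    func titlesNPlusOne(db *sql.DB) ([]string, error) {
        rows, err := db.Query(`SELECT id FROM users`)
        if err != nil {
            return nil, err
        }
        defer rows.Close()

        var titles []string
        for rows.Next() {
            var id int64
            if err := rows.Scan(&id); err != nil {
                return nil, err
            }
            // The extra round trip, paid once per user.
            prows, err := db.Query(`SELECT title FROM posts WHERE user_id = ?`, id)
            if err != nil {
                return nil, err
            }
            for prows.Next() {
                var t string
                if err := prows.Scan(&t); err != nil {
                    prows.Close()
                    return nil, err
                }
                titles = append(titles, t)
            }
            prows.Close()
        }
        return titles, rows.Err()
    }

    // The fix: let the database do the join, one round trip total.
    func titlesJoined(db *sql.DB) ([]string, error) {
        rows, err := db.Query(`SELECT p.title FROM posts p JOIN users u ON u.id = p.user_id`)
        if err != nil {
            return nil, err
        }
        defer rows.Close()

        var titles []string
        for rows.Next() {
            var t string
            if err := rows.Scan(&t); err != nil {
                return nil, err
            }
            titles = append(titles, t)
        }
        return titles, rows.Err()
    }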
Indifference... or lack of knowledge. I've conducted hundreds of interviews and it's interesting how many people know their O notation and how they can write code that theoretically is "efficient"... but performance is in the details and in knowing the stack top-to-bottom, and most people just don't know that. And the real problem is that they don't know what they don't know.
You'd think code review is where this stuff should be caught, but unfortunately code reviews are performative in most cases, or the reviewers don't know or care about these details either.
The real reason is the industry. We don’t reward deep knowledge of a particular language anymore. Things change too quickly.
Imagine a really good Angular dev who’s been using it for 7+ years. Sure, their code might be fast, but everything’s moving to React now.
Even backend languages change quite a bit. .NET Core is a big winner in performance compared to .NET Framework, but you can still write lousy-performing code in it.
From an architecture perspective, people seem to have a hard-on for serverless lately, which typically means considerably more HTTP requests, and that causes performance issues. On a simple test system things are fast enough, but once you scale, the latency starts to add up.
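Rough numbers, purely for illustration: if one user action fans out into five or six sequential internal HTTP hops at 20-40 ms each (network, TLS, auth, the occasional cold start), that's 100-240 ms of pure plumbing before any business logic runs, and it grows with every hop you add.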
Sorry to say, but it’s no different now than before in terms of skills and keeping up with “deep knowledge” to be successful in the way described above. Sure, things move quicker now… but pace of change was never the challenge for good vs. bad engineers. The good ones easily keep up now, and the bad ones still existed when pace of change was slower. In my experience it’s more about people’s willingness to invest in deep understanding. Bad engineers generally just don’t care to learn their stack deeply.
Most people don’t catch it in code reviews, and I don’t catch everything, but I catch a lot. I see knowledge transfer as the primary purpose of code reviews though.
Is it really indifference or are employees responding to incentives? Few places reward code performance. Companies are getting exactly what they pay for and missing a deadline to improve efficiency is irrational under current comp strategies.
The ratio of labour productivity to wages has doubled since the 1970s and the trend seems to be continuing. It was about 150% in 2000, so we can use the Excel of that era as the benchmark.
This means that an accountant today can wait for Excel to load for 2 whole hours of their 8-hour shift and still be as productive as an accountant from 20 years ago (6 productive hours at roughly 200/150 ≈ 1.33x the hourly output works out to about 8 hours of year-2000 output)!
Isn't that amazing! Technology is so cool, and our metrics for defining economic success are incredible.
Yeah, agreed ... but[1], in purely financial terms, we are leaving money on the table, right?
Just because an accountant can burn two hours a day waiting for the computer, doesn't mean that we should burn those two hours.
I think that most of the tech stack is there technology-wise, just not aesthetics-wise.
If you are happy with how UI widgets looked and behaved on Windows 7, you can have sub-millisecond startup times now for many apps. Trouble is, people would rather wait and look at pretty things than have an uninterrupted workflow.
[1] You knew there was gonna be a "but", right? Why else would I respond?
I mean, we don't have to work 8 hours a day, we can work 10 right? 12? 18? We don't need to work 5 days a week we can work 6, maybe 6.5? It's pretty common in Japan to work 6 days a week. People like pocketing productivity bonuses and they become baked into labor expectations. It's how living standards rise.
> Just because an accountant can burn two hours a day waiting for the computer, doesn't mean that we should burn those two hours.
I think the problem is there is no economic incentive to hire people that would make excel take less than two hours. If good enough gets you the sale, then why spend money getting anything more.
People give push back when I tell them they should drop PHP for Go or Python for Rust.
It doesn't matter that it would be better for everyone and the planet. It's a prisoner's dilemma: I only get rewarded and promoted for shipping stuff and meeting deadlines, even if the products are slow.
Thank God for open source. Programmers produce amazing libraries, frameworks, languages and systems when business demands and salary are out of the picture.
Meanwhile some people have been using C# and Java for decades which perform better than Go on many benchmarks and their software is still slow as shit.
I was a little skeptical of this claim, so I looked it up. C# does win in a handful of cases, but it's rare; Go practically always wins on memory usage, even when it doesn't win the CPU benchmark.
I spent years designing and implementing a new data management system where speed was its main advantage. I painstakingly wrote performant code that relied on efficient algorithms that could manage large amounts of data (structured or unstructured). I tried to take advantage of all the hardware capabilities (caching, multiple threads, etc.).
I mistakenly thought that people might flock to it once I was able to demonstrate that it was significantly faster than other systems. I couldn't have been more wrong. Even database queries that were several times faster than other systems got a big yawn by people who saw it demonstrated. No one seemed interested in why it was so fast.
But people will jump on the latest 'fashionable' application, platform, or framework often no matter how slow and inefficient it is.
The wonderful thing is that the hardware performance is there if you just dare to toss the stack. One of my long-standing projects is to build a bare-metal (at least as far as userspace is concerned) programming language called Virgil. It can generate really tiny binaries and relies on hardly any other software (not even C!). As LLVM and V8 and even Go get piggier and piggier, it feels more magical every day.
Name software that does nearly as good a job significantly faster. I think there is a huge difference between software that is slow but best in class and a calculator that takes 5s to open.
The most significant feature of Virgil that is not in other mainstream languages is the concept of initialization time. To avoid the need for a large runtime system that dynamically manages heap memory and performs garbage collection, Virgil does not allow applications to allocate memory from the heap at runtime. Instead, the Virgil compiler allows the application to run initialization routines at compilation time, while the program is being compiled. These initialization routines allow the program to pre-allocate and initialize all data structures that it will need at runtime
You pulled that from the UCLA site, which is the microcontroller version from 15 years ago. That version actually burned the entire heap into the binary and did not allow dynamic allocation. In that case, no GC, write barriers, allocation, etc. were necessary.
Today's Virgil (on GitHub) does allow dynamic allocation, but you can still initialize an arbitrarily large heap at compile time. It's not really related to ARC.
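Not Virgil itself, but for readers who want the flavor of "do the initialization before runtime": the closest everyday analogue in Go is baking precomputed data into the binary with go generate. A rough sketch (the file and table names are made up for illustration):

    //go:build ignore

    // gen.go: run via a `//go:generate go run gen.go` directive in the main
    // package. It writes table_gen.go, a precomputed sine table that gets
    // compiled into the binary, so program startup does zero work for it.
    package main

    import (
        "fmt"
        "math"
        "os"
    )

    func main() {
        f, err := os.Create("table_gen.go")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        fmt.Fprintln(f, "// Code generated by gen.go; DO NOT EDIT.")
        fmt.Fprintln(f, "package main")
        fmt.Fprintln(f, "var sinTable = [...]float64{")
        for i := 0; i < 256; i++ {
            fmt.Fprintf(f, "\t%v,\n", math.Sin(2*math.Pi*float64(i)/256))
        }
        fmt.Fprintln(f, "}")
    }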
Oh OK, thanks, sorry, I just googled the name trying to answer my GC-related question. Maybe I missed it when skimming the GitHub page; I couldn't see a mention of memory management.
Isn't LLVM being a piggy good? As in only pay a hefty computational time price once upfront to stuff as many optimizations as you can in your binary, and make it fast forever after.
It takes 21 minutes to build on a 32-core machine because it's so enormous. It's a thing that translates text into binary. I love compilers; I could go into a long, drawn-out technical explanation of what it does, but really, it's a monster of insane complexity that no one can grok anymore.
> It existed, a startup that went under, was fully macOS native. I think this was it, "quill".
I remember that. I wonder why it didn't take off ...
I wasn't intending to make an economically successful chat application. I just want something to point at whenever someone goes "Of course it has to be electron-like; how else are you going to make it cross-platform?"
I want to demonstrate that, yes, you can have fast cross-platform applications without much more effort than going with electron-type designs.
I want to show the source code for it, I want to have a reasonably easy to follow explanation of that source code, and I want it to be reasonably low-effort.
I won't be doing it anytime soon (working on an MVP at the moment, in between gigs).
But, if you send me your email (lelanthran at gmail dort com), I'll put it into my notes so that when I have something to demonstrate you'll be the first to see it :-)
I'm sure you could do this in a browser if there was actually desire. A browser may be many levels of abstraction up from a native app but it is still fairly easy to make apps that respond instantly (except maybe startup time). It's amazing how much extra current apps are doing to slow it down.
> Most youngish users have never even experienced true performance
Most of them have never experienced the level of instability and insecurity of past computing either. Those improvements aren't free. In the past, particularly with Windows (since that's what the videos recorded), it was normal for my computer to freeze or crash every other hour. It happens much less today.
I have absolutely no idea what you are talking about "true performance" - maybe you are from an alternate reality? I don't think of the 2000s as "true performance" but rather crashes and bugs and incredibly slow loading times for tiny amounts of data.
I can grep through a gigabyte text file in seconds. I couldn't even do that at all back in 2000.
It's one of the things that are infuriating for technical folks but meh for everybody else.
In the days of programs taking forever to load or web pages downloading images slowly we knew what technical limitations were there.
Now we know how much sluggishness comes from cruft, downright hostility (tracking), sloppy developer work, etc.. we know this is a human problem and that angers us.
But for non-technical people it hasn't changed too much. Computing wasn't super pleasant 25 years ago and it's not now. Instead of waiting half a minute for Word to load they wait 4-5 seconds for Spotify to load. They were never interested in Notepad or Cmd or Paint. It doesn't bother them that now they open slower than in 1999.
It's very surprising to me how much non-tech people are willing to put up with. They'll patiently wait while their phone freezes for a solid 5 seconds because they installed an antivirus on it, and then proceed to try to use it at 2 FPS.
The quality bar is so low now, it went underground. It's so easy to compete with the status quo. You just pretend that all the "progress" that happened in software development technologies over the last 15 years didn't happen. And, most importantly, you treat JS as a macro language for a glorified word processor and don't try to build actual applications with this stack.
Many non-technical users assume the blame for slowness and clunkiness. So many times I've heard self-blame when helping out family. "I shouldn't have so many pictures on my desktop" (there are 8 pictures). "I just can't figure out these phones", when the phone is full of dark patterns or hides functionality six menu items deep behind a swipe/long press. "I never should have let it update, it's never been the same", about a forced update.
When the reality is that more CPU/memory usually cleared things up. In the old times a good defrag did not hurt either.
People blame themselves when a lot of the time it is rubbish software/hardware.
Then you have the other side of that coin: the yes-clickers. If it asks a question, they just answer whatever it takes to make the thing they wanted work; a surprising number of dark patterns rely on that gem. Then they hand it to you and expect a miracle in 5 minutes. Sorry, this is the first time I have ever seen something like this; it may take me a few minutes to figure out what is wrong before I can even begin to fix it.
> Then they hand it to you and expect a miracle in 5 minutes. Sorry, this is the first time I have ever seen something like this; it may take me a few minutes to figure out what is wrong before I can even begin to fix it.
Oh yeah, like when I have to fix the smartphones of people who have been using them for years while I do not own or use a smartphone. I will fix it, but I've got to learn how to launch/exit an application first ;-)
Many people (hi mum!) are surprised that you may take a few minutes to calmly observe, probe, assess, read and think a bit before engaging in the actual fixing. It never occurred to them that one may do something else than clicking, more or less randomly and as fast as possible, when something shows up. And do not worry about your example altering their future behaviour, it will unshakeably go on as it always went :-)
Oh the "too many pictures" thing is a favourite of my older relatives. They think it must be "weighing it down" or something and they have to delete pictures to make it go faster.
On the flip side, I recently visited a local tourist attraction, where a visitor asked me to take a photo for him - and then promptly had to spend 2 minutes deleting files on his phone because he’d literally run out of storage space and his phone wouldn’t let him take another photo. Bit of an awkward moment!
My parent comment is based on real events. A relative did actually install an antivirus and a bunch of other crapware on her Android phone. The thing was so agonizingly slow I just didn't understand how she used it at all.
But that phone ran an Android version from before Google started seriously cracking down on apps running background services without restrictions. Modern Android is pretty robust in this regard — with a few exceptions, an app can't run in the background without showing a notification ("foreground service" is how the API calls it).
There wasn't much self-blame. She just assumes that technology is like that: complicated, unreliable, slow, and full of magic. It's nice when it works, but not a huge loss when it doesn't. She would also quietly put up with things not working indefinitely, until someone asked and found out; then I'd fix it.
I'm even more amazed at my IT colleagues (so "tech" people) putting up with Windows doing god knows what constantly. We have the exact same machines, yet mine somehow basically never has its fan on running Linux and compiling Rust, whereas theirs is often audible while running Windows with a couple of chrome tabs open and outlook.
They also don't bat an eye when lazy devs recommend we should reboot the servers daily. They put up with laggy VNC when they have a perfectly good remote desktop solution set up.
I guess, if it sorta kinda works, people just get used to it. The fans blowing like a jet engine. Apps taking forever to load. Windows getting stuck for no discernible reason.
> We have the exact same machines, yet mine somehow basically never has its fan on running Linux and compiling Rust, whereas theirs is often audible while running Windows with a couple of chrome tabs open and outlook.
I think some of this is just poor defaults. Every windows laptop I've had in the last 10 years has defaulted to a thermal management profile that ran the fans aggressively. Usually you have to install some crapware from the manufacturer (it's "Dell Optimizer" on my current machine) that allows you to change the thermal profile to quiet mode, and it's totally fine after that.
I can buy that, but, anecdotally, my machine is noticeably warmer when I'm under Windows [0], and it does blow its fan too, just like my colleagues'.
So, I don't think it's just a question of fan profiles. Plus, on Linux, I can't control the fan. I can't even read its speed. On Windows, they have all the "recommended" HP crap, but it's true that I didn't fiddle with those, and I doubt my colleagues did, either.
---
[0] I do use Windows rarely, but when I do, it's for fairly long stretches (multiple days on end) so it has the time to do its scanning, updating and whatever elsing.
The terrible thing about modern macOS and Windows is that they keep indexing your files in the background. Microsoft and Apple somehow consider that index-based "instant" file search better, yet it's actually extremely terrible and mostly useless. The file search in XP, that traversed the file system every time you searched something, worked so much better than anything modern. Right now I use a Mac app called "Find Any File" that does the same thing.
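The traverse-on-demand approach really is that simple; a minimal Go sketch of an XP-style name search (case-insensitive substring match, no index, no background daemon):

    package main

    import (
        "fmt"
        "io/fs"
        "os"
        "path/filepath"
        "strings"
    )

    // findByName walks root and prints every path whose base name contains
    // the query (case-insensitive). The cost is paid only when you search.
    func findByName(root, query string) error {
        q := strings.ToLower(query)
        return filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil {
                return nil // skip unreadable entries instead of aborting
            }
            if strings.Contains(strings.ToLower(d.Name()), q) {
                fmt.Println(path)
            }
            return nil
        })
    }

    func main() {
        if len(os.Args) != 3 {
            fmt.Fprintln(os.Stderr, "usage: findbyname <root> <query>")
            os.Exit(1)
        }
        if err := findByName(os.Args[1], os.Args[2]); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }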
On other stuff Windows does in the background, I managed to disable Defender and the updater on my VM by deleting their files via the recovery mode command prompt. That seems to be the only reliable way to do that.
On Windows with NTFS drives (networked or local), I use VoidTools Everything for instant as-you-type search results. It doesn't index the contents of files, and for networked drives it must periodically re-index them, at moderate cost (compared to the heavier cost of Windows Search when indexing /local/ drives).
Also unlike Windows Search, it doesn't default to "search a few of the most common locations, then either take a random number of seconds to realise that at least one of the search targets was already displayed in the folder where you started your search, or claim that nothing was found (when one or more matches actually exist)".
One of my favorite ways to 'fix' Vista-era machines was to turn off the indexer. That bad boy would randomly take over the machine and then rescan the files over and over. They fixed a lot of it in SP1 and Win7, but it is still terrible.
Latency is one of the things my iPhone-owning friends complain about the most when they try another phone (usually a Samsung loaded to the brim with bloatware).
This. My daily driver is a geriatric iPhone 7. When I bought a new Galaxy A33 a few months ago for my mum, I took it for a spin for a day. I was shocked by how much lag there was, for example when scrolling. I know the A33 is a low-maybe-mid-tier phone. But it was brand new, whereas I bought my iPhone used in 2017 when the 8 was already out and maybe even the X (if not, it was right around the corner).
It actually was a very weird feeling, because the scroll itself was way smoother than on the iPhone, probably thanks to the somewhat higher refresh rate. But there was a noticeable and weird lag between my finger movement and the screen contents moving. Also, scrolling would sometimes randomly glitch for no reason in basic apps, like the settings.
My understanding is that the A33 is the same phone, only with somewhat cheaper display and cameras, so it should be able to get there, too.
So, how do you get rid of that bloat? I've tried removing all the apps I could, but it didn't make any difference.
Before my iphone, I used to have the GS1, 3 and 5 with custom ROMs. My fondest memory is of Slim Rom on the GS3. It gave it back its life. Before the A33, my mom used to have a GS4 mini with LineageOS. It ran circles around the stock version, even though it was a newer Android version. I think the A33 wasn't supported back when I bought it, so I left it with its stock ROM.
Do you think they do (or at least used to do) some tuning of how the UI reacts to touch?
Before my iphone, I used to have a GS5 with LineageOS, too. I don't remember the iphone being noticeably better for touch-related lag, which is why I was so shocked when comparing with my mom's brand new phone.
My Xs is still fantastic except on battery life generally or if it gets hot at all (sun, charging, running shit apps). My main issue is Google Maps kills my battery life faster than anything else except maybe scrolling through Instagram reels.
What's the situation for new iPhones? Is the compute power of chips more powerful than my computer eaten away by software latency? I'm thinking about buying one. My iPhone 7 only started to lag very slightly after iOS 16 was released and its update lifetime ended. I'm still using it. I don't even notice the lag if I don't pay special attention.
And no, I'm not an Apple fanboy, rather a NixOS nerd but I need a phone that reliably works.
The lack of latency comes from decisions at every layer of the stack — apps don’t control scrolling or layout, the OS does. iOS is consistently responsive to user input throughout all versions and levels.
I agree though I’d say non-technical people are naive and so don’t know why the experience is not ideal or fun or smooth. I suspect if asked the right questions, non-technical people would also complain.
Computers have always been just useful enough for as long as I’ve used them (since the 80s). We’ve _always_ put up with a lot of nonsense and pain because the alternative is worse.
The thing is, users do care, they just don't understand how to articulate it. The Doherty threshold is real, and it's baked into our physiology as humans.
So it does bother them; it's not 'meh', it's just the status quo. Every once in a while you run across an application or website that's fast, and it's jarring how much better it feels to use. That's something worth striving for.
The strange thing for me is how much time we spent in the early 2000s discussing website responsiveness and quick loading times as ways to improve user engagement and productivity. Although I can't provide any statistics that I'm intimately familiar with, I recall reading numerous case studies where improvements in responsiveness resulted in significant productivity gains for end users. If I recall correctly, this wasn't just about dial-up connections and multi-second page loads. This belief was still prevalent even when discussing sub-second responsiveness.
Perhaps the direction of the case studies started to shift, and we stopped hearing about it. However, it seems to me more like we pushed hard to reach a certain level of speed in our computer usage, then became complacent, and have been regressing ever since.
I don't have any data on it but I think people gradually came to accept the slower loading times as a reasonable cost, in return they got true multimedia and the fat client experience (first with flash, then jquery and so on) which was impossible in hypermedia.
Currently node unifies (or seems to unify) the FE and BE into something that looks pretty worrying (or rather alien) to someone who grew up with LAMP stack and CGI for dynamic content.
I've literally started using Word and Excel less often because of just how long it takes them to start up. Like, I just want to write a quick document or edit my CV, why do I have to sit here and wait for 10-20 seconds??
Electron apps don't have to be shit; VS Code demonstrates this. But the VS Code people are also insanely performance-focused; your average app developer does not care.
There is no mention in this article of the effect of modern security measures on desktop app latency. I'm thinking about things like verifying the signatures of binaries before launch (sometimes this requires network round trips, as on macOS). Also, if there are active security scans running in the background, this will eventually affect latency, even if you have lots of efficiency cores that can do this in the background (at some point there will be I/O contention).
Another quibble I have with the article is the statement that the visual effects on macOS were smoothly animated from the start. This is not so. Compositing performance on Mac OS X was pretty lousy at first and only got significantly better when Quartz Extreme was released with 10.2 (Jaguar). Even then, Windows was much more responsive than Mac OS X. You could argue that graphics performance on macOS didn't get really good until the release of Metal.
Nowadays, I agree, Windows performance is not great, and macOS is quite snappy, at least on Apple Silicon. I too hope that these performance improvements aren't lost with new bloat.
I don't think all this should be relevant - once the app has been launched at least once, the OS should be able to tell that the binary hasn't changed, and thus signature verification is not necessary.
The comparison is using OS X 10.6 - I used to daily drive it and it was pretty snappy on my machine - which is corroborated by this guy's Twitter video capture.
As for Windows performance - Notepad/File Explorer might be slower than strictly necessary, but one of the advantages of Windows' backwards compatibility is that I keep using the same stuff - Total Commander and Notepad++, which I've used since the dawn of time - and those things haven't gotten slow (or haven't changed, lol).
Assuming the binary on the disk hasn't changed opens you up to time-of-check/time-of-use attacks, where an "evil disk" is substituted that changes the binary after the first load. (Or, more likely, some sort of rootkit.)
Unlikely, yes, but that sort of thing comes up in security reviews.
Check it with a cryptographic hash. With I/O as the bottleneck and assuming an SSD at 300 MB/s, hashing a roughly 1 MB binary brings the latency to about 3 ms, well below human perception. Even the 60 MB/s of slow HDDs should be more than fine.
Anyone who has worked on optimizing complex programs knows that you can do a lot with a computer. More than 100 ms of latency to open Notepad is just ridiculous.
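A minimal sketch of that check in Go, assuming the OS has stashed a trusted digest somewhere from the first full signature verification (the trusted-store part is hand-waved away here):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyUnchanged re-hashes a binary and compares it against a digest
    // recorded when the signature was fully verified the first time.
    // wantHex is assumed to come from some trusted local store.
    func verifyUnchanged(path, wantHex string) (bool, error) {
        f, err := os.Open(path)
        if err != nil {
            return false, err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return false, err
        }
        return hex.EncodeToString(h.Sum(nil)) == wantHex, nil
    }

    func main() {
        if len(os.Args) != 3 {
            fmt.Fprintln(os.Stderr, "usage: verify <binary> <sha256-hex>")
            os.Exit(1)
        }
        ok, err := verifyUnchanged(os.Args[1], os.Args[2])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("unchanged:", ok)
    }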
Even on an M2 Mac, Spotlight takes a second or two to appear after you hit the shortcut. Apple Notes takes an absurd 8 seconds to start. Apple Music and Spotify also take seconds to start. Skype takes 10 seconds.
I'm very happy with my M2 Mac. It's a giant leap forward in performance and battery life. But Electron is a giant leap backward of similar magnitude.
To fix the Spotlight delay, you have to disable Siri in the system settings.
If Siri is enabled, the OS waits until you release the space bar to determine if you performed a long or a short press, which causes Spotlight to be delayed.
I just tried all the examples you gave (on my M1 Max) except Skype, which I don't have installed here; they all load in 1 second or faster. Maybe it's something else on your Mac?
I'm using the Air M2 with 16GB and switched from Notion to Notes and Reminders; one of the reasons is that these apps are way faster. It also uses under 100MB of RAM.
Note: The main reason is that Notion is fantastic but too much for me. I'm not that organized...
The comparison between the Win32 Notepad and the UWP version is telling, though: same hardware, same security constraints. Similar for the old (Windows 7) calculator versus the newer one.
I'm glad you mentioned this in the comments - I was wondering if they were going to touch on how applications are sandboxed and everything. I would imagine that is a large part of the current 'sluggishness'.
If those mythical safety features actually make an impact then shouldn't they slow down everything, including a hello world program? Yet the performance gap between well-optimized and sluggish software only grows.
So, I had to try this. And look what happened on a 2015 Macbook running Monterey (edit: but check the thread below for possible explanation):
ojs@MacBook-Pro-4 /tmp % time ./a.out
Hello world
./a.out 0.00s user 0.00s system 1% cpu 0.268 total
ojs@MacBook-Pro-4 /tmp % time ./a.out
Hello world
./a.out 0.00s user 0.00s system 72% cpu 0.004 total
It's really that slow on the first try. The binary was compiled just before running it, and it's the simplest possible hello world using C++ std::cout, compiled with -O3. A C version with puts behaves just the same.
If applications were only slow the first time I wouldn't have any issue with it. But we all know that's not the case.
The difference you show depends on the internet connection; on slow wifi I've seen this delay go over 0.7s in the past. But again, it's just the first time, which is a problem for developers who recompile their code frequently, but not as relevant for the end-user experience.
Do you know why running the binary is slow the first time, and why it will be fast forever after it's been run once?
I thought it was common knowledge that starting apps for the first time after a reboot, or after doing something else, is slower, and usually it's explained by the app being in the cache for the next launch. But here it can't be the cache, because reading anything from an SSD can't be that slow, and it should already be in the cache anyway.
The first time you start a program on macOS, it'll contact an Apple server to see if it's known malware or something like that. So it doesn't happen the first time after boot, but the first time you ever run the program after installation or compilation.
Thanks for the explanation. I thought code signing was supposed to reduce the need for this kind of check. Still, stuff like this isn't helping with performance, and if the new binary is sent somewhere outside the developer machine, legal departments would be interested too.
Almost certainly not? Sandboxing and anything non-visual can happen at a ridiculously fast pace.
I'd suspect a lot of this comes from offloading so much of the graphical setup to something other than the application. It feels like we effectively turned window management into system calls.
I run a machine with lots of RAM and a hefty CPU and a monster GPU and an SSD for my Linux install and...a pair of HDD's with ZFS managing them as a mirror.
Wat. [1]
I also have Windows on a separate drive...that is also an HDD.
Double wat.
My Linux is snappy, but I also run the most minimal of things: I run the Awesome Window Manager with no animations. I run OpenRC with minimal services. I run most of my stuff in the terminal, and I'm getting upset at how slow Neovim is getting.
But my own stuff, I run off of ZFS on hard drives, and I'll do it in VM's with constrained resources.
Why?
While my own desktop has been optimized, I want my software to be snappy everywhere, even on lesser machines, even on phones. Even on Windows with an HDD.
This is my promise: my software will be far faster than what companies produce.
There is a Raymond Chen post (I'll come back to add the link if I find it) that explains how the developers working on Windows 95 were only allowed to use machines with the minimum specs required by the OS. This was to ensure that the OS ran well on those specs.
And, IMHO, that's the way it should be: I think it's insane(?) to give developers top-of-the-line hardware, because such hardware is not representative of the user population... and that's part of why I stick to older hardware for longer than others would say is reasonable.
> I think it's insane(?) to give developers top-of-the-line hardware, because such hardware is not representative of the user population
But a developer's needs are radically different from the user's needs. In typical web dev, the developer's machine is playing the role of the client machine, web server, and database server, all in one. On top of that you'll probably be running multiple editors/IDEs and other dev tools, which are pretty much always processor and memory hungry. Even for desktop development the dev needs to be able to run compilers and debuggers that the user doesn't care a thing about. If you truly care about the low end user experience then you need to do acceptance testing against your minimum supported specs. It's pretty crazy to intentionally hamstring the productivity of some of your most expensive employees.
Fair observation about web apps vs. desktop apps / systems. So maybe the browser should intentionally slow things down when using the device-specific previews to mimic real-world behavior, instead of just changing the window size? :)
In any case... the fact that you need such powerful machines to develop the app is also an indicator of waste at all layers. It hadn't crossed my mind to showcase how even CLI tools have gotten sluggish, for example, but they have... Running 'aws', or 'az', or things like that shows visible pauses at startup too. Furthermore, the tendency to depend on tons of services "for scalability" is also questionable in the majority of web apps.
It takes a lot of work to keep the development environment lean, because it's easier than ever to pull library and tool dependencies into it. It's doable, but usually also not a priority.
Edit: By the way, I do develop a few web apps on the side where I can be careful about dependencies and the like, and the resource needs are minimal. Running a database and web server on the local machine takes almost zero resources. The heaviest thing of all is the JavaScript toolchain, which is... quite telling. And I can run the hundreds of integration/unit tests for the apps in milliseconds. It has taken care to get to this point though, and I have never experienced this kind of "lightweightness" in a corporate project. Here is a post that provides some background on just one aspect (https://jmmv.dev/2023/06/iii-iv-task-queue.html), and I'm working on another one to describe the testing process in detail.
> So maybe the browser should intentionally slow things down when using the device-specific previews to mimic real-world behavior, instead of just changing the window size? :)
I pulled up Chromium's DevTools and I do see settings for throttling in the device-specific previews. Whether they're actually used by developers is a different story.
Merad's sibling post is exactly why my machine is beefy instead of weak: I still need that power.
Of course, I run Gentoo, so...
And I run my Gentoo updates off of a ramdisk, to save my SSD and for that much more speed.
But if I intentionally run my code in hamstrung VM's, it achieves the effect you are advocating, which is a good thing, by the way. I do agree with you.
Oh what ritualistic nonsense. Those developer machines will also have to compile the code and run the application in debug mode, which will make the app slower than on a regular user’s machine if we assume the same hardware.
Fast software that nobody uses helps nobody. Slow software that everyone uses is, well, slow. So I'm curious: how many people use your software? I get that this topic is like nerd rage catnip but if people actually want to help users, then they need to meet users where they're at. And if "marketing" is what's needed, then maybe it is. Software is generally built for humans after all.
Don't get me wrong, All of my servers at home run Void Linux and use runit. Pretty much anything that runs on them is snappy and they run on 10 year old hardware but still sing because I use software written in Go or native languages. But remembering the particulars about runit services and symlinks is something I forget every 3 months between deploying new services. Trying to troubleshoot the logger is also a fun one where I remember then forget a few months later. Using systemd, this all just comes for free. Maybe I should write all of this down but I'm doing this for fun aren't I?
The reason users don't care that much about slow software is because they use software primarily to get things done.
So my new projects are not out yet, but I will market them heavily once they are.
One of them is an init system specifically designed to be easier to use than runit and s6, while still being sane, unlike systemd. Yes, I'm going to focus on ease of use because, as you said, that matters a lot.
However, I do have a project out in the public now. It ships with the base system in FreeBSD and also ships with macOS. It is a command-line tool that lots of people may use in bash scripts.
Is that widespread enough?
Also, one reason people adopted mine over the GNU alternative is speed.
> However, I do have a project out in the public now. It ships with the base system in FreeBSD and also ships with macOS. It is a command-line tool that lots of people may use in bash scripts.
> Is that widespread enough?
> Also, one reason people adopted mine over the GNU alternative is speed.
> One of them is an init system specifically designed to be easier to use than runit and s6, while still being sane, unlike systemd. Yes, I'm going to focus on ease of use because, as you said, that matters a lot.
Ooh, can you link directly to this? I'd love to take a look at it. Runit is sharp and annoying.
> My Linux is snappy, but I also run the most minimal of things: I run the Awesome Window Manager with no animations.
Same: a very lean Linux install with the Awesome WM, now running on a 7700X (with an NVMe PCIe 4.0 x4 SSD in my case). I also use one of the keyboards with the lowest stock latency (and I bumped its polling rate because why not).
"A vibrant screen that responded instantly when you tapped replaced cramped keyboards."
I tentatively assume this results from a partial edit. Something to do with "replaced your dumbphone and tapped on cramped keyboards" or just maybe the slightly non sequitur "replaced erroneous characters on cramped keyboards"?
Parse errors aside,
"Design Tools - Users are consistently frustrated when Sketch or Figma are slow. Designers have high APM (actions per minute) and a small slowdown can occur 5-10 times per minute.
"
Haven't used those, but with some design tools, it felt like 5-10 times per second, especially when trying to get an idea on screen quickly!
Regarding when it's okay to be slow:
"When a human has to keep up with a machine (e.g. we slow down video game framerates, otherwise the game would run at 60x speed and overwhelm you).
"
I…wouldn't slow the framerate, unless you mean the game's speed is locked to the refresh rate (as it is in some ancient graphical games), so allowing frames to be generated faster causes the game world to change faster. Or unless an unrestrained framerate results in too much heat or power consumption.
I think there’s an important additional factor, which is how dynamic so much UI is these days. So much is looked up at runtime, rather than being determined at compile time or at least at some static time. That means you can plug a second monitor into your laptop and everything will “just work”. But there is no reason it should take a long time to start system settings (an example from the article) as the set of settings widgets doesn’t change much — for many people, never — and so can be cached either the first time you start it or precached by a background process. Likewise a number of display-specific decisions can be made, at least for the laptop’s screen or phone’s screen, and frozen.
The number of monitors was also checked at runtime in the '90s. The only difference is that back then it was checked only at startup, and now there's some asynchronous function that checks it all the time... which is to say that now it should be faster.
On any of the complex Linux DEs, the set of settings widgets was always set at startup, for as long as those complex DEs have existed. On Windows that varies from one interface to another (there are many), but at least for Win10, things have gotten much more static in the new interface. (I dunno about 11.)
Anyway, the amount of things you can read from the user in a Windows app's startup time is staggering. The applications in the article have many orders of magnitude less (relevant) data to deal with.
The "plug in a monitor" case is an example where computers are hilariously (and likely unnecessarily) slow to do something that should be simple.
Say I have two monitors plugged in and running. When I plug in a third one, here's what happens:
1. Monitor 2 goes blank.
2. Monitor one flashes off then back on.
3. Monitor 2 comes back on.
4. Both monitors go off.
5. All three monitors finally come back on.
Putting on my developer hat, I kind of know what's going on here. The devices are frantically talking to the device drivers, transmitting their capabilities, the OS is frantically reading config files to understand where to display the virtual desktops, everyone is frantically handshaking with everyone else. It's a terrible design and should not be excused. Putting on my end-user hat, what the fuck is this shit? I just plugged a monitor in. I'm not asking the computer to perform wizardry.
Even if the frantic handshaking is somewhat necessary, I don't think it's justified that it takes over 1 millisecond, let alone multiple seconds with monitors turning on and off in the meantime. Seems like shoddy work on every end.
> Even if the frantic handshaking is somewhat necessary,
I don't believe it is. Sure, it's probably necessary in these frameworks-of-frameworks systems we're running right now: drivers, kernel, privilege levels, userspace, whatever. But I believe that if we were to rewrite everything from scratch, without the '90s technical debt and with careful planning beforehand, it would be easy to have a dynamic monitor-connection algorithm that doesn't suck like that.
> Linux is probably the system that suffers the least from these issues as it still feels pretty snappy on modest hardware. […]. That said, this is only an illusion. As soon as you start installing any modern app that wasn’t developed exclusively for Linux… the slow app start times and generally poor performance show up.
This is not an illusion. Cross-platform programs suck, so everyone avoids them, right? Electron apps and whatnot are universally mocked. You would only use one for an online service like Spotify or something. The normal use case is downloading some nice native code from your repo.
> Cross-platform programs suck, so everyone avoids them, right?
The thing is, cross-platform tooling sucks. A plain CLI program is already bad enough to get running across platforms - even among Unix/POSIX-compliant-ish platforms, which is why autoconf, cmake and a host of other tools exist... but GUI programs? Those are orders of magnitude worse to get right. And games manage to blow past even that, as every platform has its own completely different graphics stack.
Electron and friends abstract all that suckiness away from developers and allow management to hire cheap, fresh JS coding-bootcamp graduates instead of highly capable and highly paid Qt/Gtk/SDL/whatever consultants.
No, they don't always suck. As an example, Qt is cross-platform, fast and complete.
But Electron is god-awfully slow, and Eclipse can be too. The difference is that Qt apps are compiled and generally written in C/C++, whereas Electron and Eclipse are interpreted / JIT'ed and use GC under the hood. As a consequence, they run at half the speed, use many times more RAM, and to make matters worse, Electron is single-threaded.
The problem isn't being cross-platform. It's the conveniences most programmers lean on to make themselves productive - like GC, or async so they can avoid the dangers of true concurrency, or an interpreter so they don't have to recompile for every platform out there. They do work - we are more productive at turning out code - but they come at a cost.
Hello, I'm a consultant. In practice it's just a fancy word for a temp. A company has money and needs some engineers but doesn't want to hire or take up the burden of employment, and / or needs some fresh bodies to wake up their in-house developers / make them fear for their own jobs because they've stagnated, so they call us.
But we're a relatively small consultancy (think hundreds instead of tens of thousands like the bigger players), so it's a bit more refined. We do have decent developers, but the issue with consultancy at that level is that they're too smart for their own good: they will suggest an over-engineered solution that can't be maintained by anyone BUT these consultants and / or hired self-employed people. It's a weird fallacy, a weird revolving door of consultants and cycles of development where every 10 or so years they do a full rearchitecture / rebuild of their back-ends, while some people just sit and maintain the existing - and working - core systems.
It's massively wasteful. But it usually pays better than a regular in-house developer, and it's lower risk than going self-employed because you still work for an employer who pays you per month. I'm just salty that said employer creams off 2/3rds of my hourly rate.
There's a ton of consulting companies which are humongous: companies like Accenture with 780,000 employees, Tata Consultancy Services with 488,000 consultants, Capgemini with 300k-ish... it's one of the largest businesses in the world.
I mean, I don't think that's exactly a fair comparison. These companies are focused a lot (and I mean a lot) more on enterprise software than your regular desktop software. I'm talking about Banking Systems, Automation Platforms, Government Software, stuff that has no use outside the corporate world. And they do genuinely do good work in these areas. Sure, they're not pioneers or doing some groundbreaking research, but they do their job and they do their job well (of course there are exceptions - but these aren't 100 billion dollar companies for nothing).
> but these aren't 100 billion dollar companies for nothing
Producing the best possible software doesn't produce the highest possible revenue. You'd want to produce something that works well enough that it's viable for the customer to keep hiring your consultants, but also makes it necessary to continuously keep hiring them.
So that translates to producing the worst possible software that barely works. So no, they're not $100B companies for nothing. They know exactly how to walk that line.
> And they do genuinely do good work in these areas.
I have seen genuinely passable. And lots of "good enough until we fix the underlying issue".
What I have never seen is offshored developers doing really good work. And I'm convinced their incentive structure will always punish anybody that tries.
I've seen turds as web pages at the Spanish Government, or even the healthcare systems.
Those are designed to deliberately harm the user experience, full of judicial jargon, so companies in the middle can make lots of profit handling the "legalese", such as filing the taxes.
If they can't afford to write a simple cross-platform Qt5-based application for actual computing (i.e. desktops, except for niche usage like art on an iPad for drawing and some audio production), they are doomed. Period.
Browsers are ridiculously fast. People just add more and more until they hit the wall where they need to start optimizing. For someone trying to get things done, that’s probably the right method. But it means that ultimately what determines how fast your program runs is not how fast your computer runs, but how much delay you’re okay with.
I think you’re saying that decades of r&d by the largest tech companies in the world can eventually produce a browser runtime that renders text about as fast as a couple people working in python and C.
"But that's haaaarddddddd for me as a lazy software developer who went to college and learned complex algorithms and calculus and physics and chemistry just because but can't seem to handle a few #ifdefs because they aren't in vogue anymore"
Except our organization builds python and java apps so all our shit is multiplatform by default or even by accident. Hell, I put together consistent, cross platform Swing UIs before I even understood what maven was.
I kinda disagree with this. I have definitely noticed that my cursor can be slow to respond when some linting/formatting is going on (I can't quite tell what is causing it, I just notice it). Totally usable, but I'm also coming from emacs, and it's not like emacs is known to be blazing fast, so I actually think my experience is sort of in line with the article's point.
Despite or because of emacs being older, it's more responsive than vscode in my usage.
Vscode is fast compared to most modern editors, but it's still twice as slow as Sublime in terms of input latency, UI responsiveness, text coloring etc.
An hour with Sublime/Vim followed by going back to Vscode is a painful experience...
It's not really a 100% replacement for all of VS Code's features, but there is a pretty big overlap in functionality that may meet most people's requirements: Notepad++
It's not just cross-platform programs that suck. Applications moving to gtk4 from gtk3 add an extra 100-200ms of startup latency every time because of OpenGL initialization and shader compilation (there is currently no cache), varying a lot depending on your CPU. Applications that used to open instantly now have a noticeable pause, even if it's still short compared to the worst Windows applications. It's been reported multiple times for well over a year now and no improvements have been made.
Not really sure about the cross-platform caveat... `dropbox-lnx` eats away 265MB of RES for its file-syncing job; I'm pretty sure what Dropbox was doing on my old laptop was very heavily 265MBly important. Also, `keepassxc` takes away 180MB of RES when you open it.
The first Linux that I used, Mandriva (KDE), ran smoothly on my Intel Pentium 4 PC with 256MB of RAM. I could even do some web browsing (Flash gaming) and music playing without it feeling slow. That's an entire OS, running on 256MB of RAM.
Regarding MDV, KDE and the P4: same here, with an Athlon 2000 and 256 MB of RAM running Knoppix and later FreeSBIE with XFCE, which ran much faster and snappier than even today's i7s with Plasma 5.
And if it's for an online service, why even have it as a separate app? Just run it in your browser. You'll get the exact same thing but in the one instance of Chromium that you already run anyway. You also get the added benefit that your extensions work with it.
These kinds of realizations have made me look into permacomputing [1], suckless [2] and related fields of research and development in the past few months.
We should be getting so much more from our hardware! Let's not settle for software that makes us feel bad.
Mind you, the software we had 20 years ago was fully featured, smaller and faster than what we have today (orders of magnitude faster when adjusted for the increase in hardware speed).
Suckless, on the other hand, is impractical esthetic minimalism that removes "bloat" by removing the program. I'd rather run real software than an art project.
If you want more from your hardware, the answer is neither the usual bloatware, nor Suckless crippleware.
st is actually one of the worst terminals available. For all their purported minimalism, it has abysmally bad performance. See https://danluu.com/term-latency/
Huh, interesting. Never noticed. Might have something to do with pretty regularly using actual serial terminals. I don't know how the testing under Mac OS X affects it -- I only run it under Linux and OpenBSD, on OS X I just use the OS X terminal. Anyway, `st` does its job for me!
"Worst" for me is stuff that grabs keys it shouldn't, scrolling that doesn't work right with `screen` and/or `tmux`, etc.
It's funny, the older I get the less I care about this stuff. I like to use technology to, well, live my life in a more effective, effort-free manner. I have lots of friends who aren't other nerdy techies. In high school I refused to use "proprietary, inefficient WYSIWYG garbage that disrespected the user" like Word and typed up all of my essays in LaTeX instead. Now I get accounting spreadsheets for vacations going on my smartphone using Google Sheets. I still love writing code but my code has become more oriented around features and experiences for myself rather than code for the sake of code.
I love exploring low-latency code but rather than trying to create small, sharp tools the suckless way, I like to create low latency experiences end-to-end. Thinking about rendering latency, GUI concurrency, interrupt handling, etc. Suckless tools prioritize the functionality and the simplicity of code over the actual experience of using the tool. One of my favorite things to do is create offline-first views (which store things in browser Local Storage) of sites like HN that paper over issues with network latency or constrained bandwidth leading to retries.
I find suckless and permacomputing to be the siren song of a type of programmer, the type of programmer who shows up to give a presentation and then has to spend 10 minutes getting their lean Linux distro to render a window onto an external screen at the correct DPI, or even to connect to the wifi using some wpa_supplicant incantations.
It's the cycle and not just in software. Younglings are idealists and try a thousand things, some of which happen to change and improve the ever-turning wheel of software, while elders turn the wheel and ensure the system, practices and knowledge continue and get transferred.
Fully agree. When I was younger I used to care more about privacy, and would use nothing but FOSS apps and OSes no matter how much it inconvenienced me.
There's a significant opportunity cost for zealotry.
OpenBSD is actually a good example of a system that envisions a holistic experience. Sure, it's an experience aimed at techie nerds, but it's still very experience-oriented. This is a pretty big contrast from suckless, which creates small tools that only kinda work together and treats the code as the documentation. OpenBSD docs are wonderful.
(Though in my experience modern Linux kernels have much better networking throughput than BSD kernels. It confounds the overall performance argument of this thread chain, but the overall OpenBSD experience leads to on average a more speedy experience.)
WTF happened to all the THOUSANDS AND THOUSANDS of machines I deployed to datacenters over the decades? Where are the F5 load balancers I spent $40,000 on per box in 1999?
I know that when we did Lucas' presidio migration, tens of million$ of SGI boxes went to the ripper. That sucks.
edit:
All these machines could be used to house a 'slow internet' or 'limited internet'.
Imagine when we graduate our wealth gap to include the information gap - where the poor only have access to an internet indexed to September 2021 - oh wait...
But really - that is WHAT AI will bring: an information gap. Only the wealthy companies will have real-time access to information on the internet, and all the poor will have outdated information that will already have been mined for its value.
Think of how HFT network cards with insane packet buffers and private fiber lines gave hedgies microsecond advantages on trading stocks...
That's basically what AI will power - the hyper-accelerated exploitation of information on the web via AI - but the common man will be relegated to obsolete AI, while the @sama people of the world build off-planet bunkers.
Occasionally I end up with a truckload of gear from things like that. The circumstances that saved it from shredding are usually something like Founding Engineer X couldn't stand to see all that nice workstation stuff go in the trash so he kept it in his garage for 25 years and now his kids are selling it.
I use Suckless terminal myself, but if I'm not mistaken it's actually not the fastest terminal out there, despite its simplicity[^1]. My understanding is that many LOCs and complex logic routines are dedicated to hardware/platform-specific optimizations and compatibility, but admittedly this type of engineering is well beyond my familiarity.
Also, OpenBSD's philosophy is very similar to Suckless. One of the more notable projects that come to mind is the `doas` replacement for `sudo`.
[^1]: This is based on Dan Luu's testing (https://danluu.com/term-latency/). I don't know when this testing was done but I assume a few years ago because I remember finding it before.
I'm somewhat offended on behalf of OpenBSD! doas is a good program written for good reasons (cf. https://flak.tedunangst.com/post/doas). Suckless sudo would ensure it was as painful as possible to use, so suckless fans feel like cool sysadmin hackers setting it up. (Just compile in the permitted uids lol!)
I totally agree with doas being a good program with really good rationale. I also get that suckless has a bit of a reputation that isn't for everyone (I've heard that it's basically impossible to make a PR to any of their projects). They're obviously not philosophically aligned (that was terrible wording), but I mentioned OpenBSD because it's an excellent project that deserves more mention, and a lot of the criticisms of modern software design in this post and thread are addressed by their design philosophy. That said, the compiling issue aside, I think the overall embrace of simple, self-contained C code is where you could compare the two.
Yeah, totally possible to get excellent results with older hardware, and really stellar results with very new hardware, if you're running stuff that's not essentially made to be slow.
I basically only upgrade workstations due to web browsers needs, and occasionally because a really big KiCAD project brings a system to a crawl. At this point even automated test suite runtimes are more improved by fixing things that stop test parallelization from working efficiently vs. bigger hardware.
I am constantly dumbfounded by the fact that the biggest consumer of memory on my machine is a FN web browser! (that said, I *do* have basically like 30 tabs open at any given time...)
(It would be cool if dormant tabs can just hold the URL, and kill all memory requirements when dormant after N period of time..)
But heck, even when I am running a high-end game on my machine, the memory consumption is less than when I have a bunch of tabs open displaying mostly text ...
I have a hundred tabs open in my Firefox. They are structured by Tree Style Tab, and most of them are offloaded by Auto Tab Discard.
This gives me a way to instantly switch between contexts (with a small delay if a tab has to be reinstated from cache). This is how bookmarks should really work.
This is why Firefox is not the biggest memory hog on my system. Most of the RAM is consumed by language servers and Emacs.
Indeed, tab-based navigation has less friction, no need to recreate some other order in another UI, and offloading should deal with resource use (workspaces in Vivaldi also helped reduce the clutter a lot along with tab groups)
I wish Android Firefox was as well behaved; beyond a certain number of tabs, new ones don't really load until I close some. It's been like that for years, on different phones and different Android versions.
The amount I can keep open seems to increase as phones get better.
Small tangent: I wonder what people are seeing when they say they have "too many" tabs open.
I had over 1000 tabs open in Firefox (macOS) and never noticed any problems. (Trying to clear them out. Down to 920 now.) 2.26 GB out of 32.00 GB is used by Firefox. I have about 70 open in Chrome (since only a certain number fit on the screen) and it's split across 12 processes which are between 12 MB and 158 MB each. I have nearly 500 tabs open (because that's the limit) on my iPhone and iPad each, in Safari.
Just curious: what kind of workflow results in having over 1000 tabs open? (Oldschool Unix hobbyist here, so I always feel like drowning when I'm nearing 20+ tabs. :)
Not OP, but opening almost every link as a new tab, and keeping old tabs around because you don't have a great bookmark organization system.
I am trying to break myself of these habits, but they do have some benefits. For instance when I do a web search on a complex topic, I often scan the results, opening the most promising looking results as tabs. Then I look through the tabs comparing them. The trouble is I never remember to close them when the task is done.
I'm not a 1000+ tabber, but I can relate to what you're describing. I'd add that it seems like it's getting harder all the time to find what you're looking for with the current state of web search, so when you come across a good page you don't want to "lose" it. Sure you could bookmark it, but you probably won't want it forever, just until you're done with whatever you were working on that prompted you to search for it in the first place. So bookmarking just moves the mess from the tab bar to the bookmark folder. Kind of like cleaning your room by stuffing all the mess in your closet.
That feeling completely disappeared with workspaces in Vivaldi (though for me the discomfort started at maybe around twice as many tabs; a vertical layout with a smaller tab height helped).
Mainly just not having a workflow of going back to close them. So it's just previous work like a view of some logs, a merge request, a link to a file in a repo, a monitoring dashboard, an alert in a monitoring system, etc.
In Chrome, I do end up closing them eventually because there's no more space to work.
For personal use:
tabs are more like bookmarks that I don't often review; I just open a new tab to research whatever I'm currently curious about.
Certain sites are absolute murder. A single wikia (now Fandom) tab or new reddit tab left open can sometimes bring a browser to its knees. I've seen google mail do the same when they screw something up.
edit: That is, I think people sometimes attribute the lag they're experiencing to the number of tabs they have open, when only a handful of their tabs are to blame.
I usually carry 3 sets of tabs, typically around ~10 each:
Personal info sites, like reddit and its links...
Tech Sites, like HN and its links....
Video Sites, like Netflix / Prime...
EDIT:
Also - what tab-mgmt extensions do you use? Like Tab Grouping etc...
Also - one of the things I attempted to do, but failed to do consistently, was to swipe to a different desktop between work, personal, and learning/browsing... instead of tab groups I had "workspaces", but that is a hard habit to build, for me, it seems.
I didn't answer before because the answer is none. Not sure if anyone will see this but today Firefox recommended some extensions and one was OneTab. That let me turn all my tabs into a list. Then I could search those and (manually, one by one) open all of the tabs from a given site as long as it's in the title. I'm already down to 264 tabs (from 900+) today in just a little bit of time, and still going.
Technically, all the bookmarks you have are hibernating tabs.
There's a whole spectrum of activity you can have on a tab, and I really see no difference between "unhibernating" a tab and just reloading it from scratch.
I would expect thawing a tab to present me with the tab as it was last time I saw it, no reload; whereas, being a link, I'd expect a bookmark to load the current version.
... Actually the latter is incorrect; in truth I'd expect bookmarks to present me with the exact version I had in front of me back when I bookmarked it, including if the site is temporarily down, has vanished, or if I had since navigated to a more recent version.
Same, I was around 1500 tabs across three windows when I finally moved from the HP Z420 workstation I'd been using as my primary sometime around JAN 2023. Machine was fine with it.
> (It would be cool if dormant tabs can just hold the URL, and kill all memory requirements when dormant after N period of time..)
The problem with this is that a lot of web pages don't handle this gracefully. You don't end up back where you left off!
It would be nice if all the details of the inactive web page were saved to storage, and then retrieved from storage when the tab is reactivated, without having to make any network calls at all.
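Browsers do expose a coarse version of this to extensions via tab discarding, though it keeps only the URL/title and reloads the page when you return, so in-page state is not preserved (which is exactly the complaint above). A minimal sketch, assuming a Manifest V3 extension with the "tabs" and "alarms" permissions and @types/chrome; the alarm name and idle threshold are made up:

```ts
// background.ts - minimal sketch, not a finished extension.
// Discards tabs that have been idle for a while; discarded tabs keep their
// URL/title but drop their renderer, and reload when activated again.

const IDLE_MINUTES = 30; // hypothetical threshold, tune to taste

chrome.alarms.create("discard-idle-tabs", { periodInMinutes: 5 });

chrome.alarms.onAlarm.addListener(async (alarm) => {
  if (alarm.name !== "discard-idle-tabs") return;

  // Only consider tabs that are not focused and not already discarded.
  const tabs = await chrome.tabs.query({ active: false, discarded: false });
  const now = Date.now();

  for (const tab of tabs) {
    // lastAccessed is only available in recent browsers; skip tabs without it.
    if (tab.id === undefined || tab.lastAccessed === undefined) continue;
    if (now - tab.lastAccessed > IDLE_MINUTES * 60_000) {
      await chrome.tabs.discard(tab.id);
    }
  }
});
```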
I turned off JavaScript one day and found my browser memory woes completely evaporated. Now I turn it on per site on a case by case basis and it's easy to figure out which sites leak a lot and prepare accordingly.
Browsers have gotten worse. In 2013, I believe, I regularly had 200+ Firefox tabs open on a laptop. It was a high-end laptop, with IIRC 16 gigs of RAM, but it was still a laptop. The number may have been much higher than that - I've hit over a thousand tabs before, but I can't recall if that was then or more recently.
Multi-process Firefox seems to have absolutely obliterated the performance of Firefox with large numbers of tabs until recently (it seems to have improved).
My impression of Suckless is that it’s “Unix philosophy” software where you edit the code and recompile instead of using dynamic configuration like all those config files. And while there are way too many ad hoc app-specific config systems out there, I don’t see how Suckless makes a huge difference for simplifying things.
As noted by another thread, the Notepad example is surprisingly telling.
My initial gut reaction was to blame the modern drawing primitives. I know that a lot of the old occlusion-based ideas were somewhat cumbersome for the application, but they also made a lot of sense for scoping down the work an app had to do.
That said, seeing Notepad makes me think it is not the modern drawing primitives, but the modern application frameworks. Would be nice to see a trace of what all is happening in the first few seconds of starting these applications. My current guess is something akin to a full classpath scan of the system to find plugins that the application framework supports but that all too many applications don't even use.
That is, writing an application used to start with a "main" where you did everything to set up the window and what you wanted to show. Nowadays, your main is as likely to be offloaded to some logic that your framework provides, with you supplying a ton of callbacks/entrypoints for the framework to come back to.
> Rumor 1: Rust takes more than 6 months to learn – Debunked !
> All survey participants are professional software developers (or a related field), employed at Google. While some of them had prior Rust experience (about 13%), most of them are coming from C/C++, Python, Java, Go, or Dart.
> Based on our studies, more than 2/3 of respondents are confident in contributing to a Rust codebase within two months or less when learning Rust. Further, a third of respondents become as productive using Rust as other languages in two months or less. Within four months, that number increased to over 50%. Anecdotally, these ramp-up numbers are in line with the time we’ve seen for developers to adopt other languages, both inside and outside of Google.
> Overall, we’ve seen no data to indicate that there is any productivity penalty for Rust relative to any other language these developers previously used at Google. This is supported by the students who take the Comprehensive Rust class: the questions asked on the second and third day show that experienced software developers can become comfortable with Rust in a very short time.
It seems kinda useful to me. Like, the usual problem with these studies is that they look much less like ‘we took competent professionals over a long time’ and much more like ‘we took 10 undergraduates over a month or two’.
If you add up employed engineers at google-like companies (by which I mean large tech companies), you’ll quickly come to hundreds of thousands, and there are plenty of other companies who hire similar people. I think Google employees are reasonably uniformly distributed across that population (that is, it isn’t like Google has the best X% of them) and I think a study of such people is more interesting to me than a study of undergraduates (or indeed a study of programming language enthusiasts). I think an unrepresentative thing about Google is that they may use C++ more than many newer tech companies, and this might help with learning Rust.
This is also my experience. I was OK in Rust after using it for two weeks; I still had weird issues here and there, but after using it for as little as one month I started feeling confident enough that I could develop anything given enough time. Now, after almost a year, there is no language I'd rather use than Rust.
"macOS ... the desktop switching animation is particularly intrusive, oh god how I hate that thing"
Why oh why won't Apple let you turn this off? It annoys if not nauseates so many people.
This is just one example, but it's indicative of their mindset, of why I dislike Apple, and of why I use Linux whenever I can. It's such a calming joy to switch instantly between desktops without the rushing-train effect.
The problem with that is that it's global. It can even affect websites if they use a media query for it. I'm OK with / actually enjoy the other animations (I rarely see them, but appreciate them when they happen); the only animation I wanted to disable was the desktop switch one.
It was once possible, I think, but it seems to be impossible on new versions of macOS. As a programmer, Spaces are unusable for me because of that. I'd love to have a terminal in one space and a browser in the other, but the delay in switching between the two is very noticeable, and considering how many times a day I'd do that, it would probably cost me minutes.
Yes, it is global. Even running `matchMedia("(prefers-reduced-motion)")` in the browser console returns true. I see no way of disabling reduced motion only for Safari either.
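For context, here's a minimal sketch of how a page typically reacts to that query, which is why the OS-level setting leaks into every website that honors it (the "reduced-motion" class name is made up for illustration):

```ts
// Sites commonly branch on the media query and toggle a class; CSS then
// shortens or drops transitions/animations under that class.
const reduceMotion = window.matchMedia("(prefers-reduced-motion: reduce)");

function applyMotionPreference(reduce: boolean): void {
  document.documentElement.classList.toggle("reduced-motion", reduce);
}

applyMotionPreference(reduceMotion.matches);
reduceMotion.addEventListener("change", (e) => applyMotionPreference(e.matches));
```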
That being said, if you do decide to use spaces, I want to point out a MacOS setup that would help you to keep apps on different spaces and have an experience (slightly) closer to i3wm and other window managers.
First, you should create 10 spaces. Then go to Settings -> Keyboard -> Keyboard Shortcuts -> Mission Control -> Expand the Mission Control dropdown. You'll see options to set keyboard shortcuts for each workspace there. I've set it to Option+{1-9, 0 for 10}.
Then just open some of the permanent apps you use, and right click on their Dock icon -> Options -> Assign to this desktop. I keep the browser in workspace 1, and messaging app in workspace 10.
I know this isn't the best solution, but behind crazy-hidden settings, it is possible to get a pretty decent window management setup on macOS. Ohh also, I use Amethyst sometimes, for i3wm-like window layouts, and it allows you to set shortcuts to move apps from one workspace to another.
one workaround I use is not to use Spaces but custom Alfred hotkeys for the most popular apps you often switch between, such as: iterm, browser, xcode, vsc, file browser + rectangle.app for shortcuts to maximize/minimize apps (without the default animations). Takes some time to train muscle memory though - I use cmd+ctrl+(j or k or l or ;) to switch between the most frequent apps. I also use F1 as a hotkey for iTerm (quake style) and F2 for Dash.app
I think it still takes like half a second for focus to change, which is the thing I care more about. I would be fine if I could switch and then immediately start typing before the animation was finished.
I use TinkerTool to remove most of the animations from macOS, which makes it feel much snappier. That one, unfortunately, I haven't been able to disable.
Which is weird, because the launchpad page switching animation can be disabled and it is very similar to the Spaces animation.
It is amazing how much snappier the os feels by just removing those dock and "genie" animations from the interactions.
But I don't want to disable all animations, just this one.
More to the point, I should be able to pick and choose which animations I want, install my own or 3rd party ones, etc, ie as far as possible, my computer should be controlled by me, not Apple.
The desktop switching animation is the reason why I don't use Spaces at all. I just can't force myself to "be along for the ride" every time I switch, which I might want to do very often.
There's one situation where it's specially terrible: screen sharing. When I'm on a Zoom call, and somebody is sharing their screen, and begins switching spaces... I get dizzy (and the streaming compression goes crazy).
I LOVE the desktop switching animations. On Ubuntu Unity, when I send a window to another virtual desktop, it keeps the window still and slides the background, while on Windows, VirtualDesktopGridSwitcher just switches instantly. (The same goes for plain desktop switching: Windows just switches, Unity slides it. And plain switching is the more common case.)
On Ubuntu, I just know where am I and where are the things, while on Windows I have no idea.
On KDE my computer slides all the old windows and background away, and the new ones into the screen. Yes, it's a great thing that makes my brain know what I'm seeing.
It is also long finished by the time my finger is away from the switching key. And that is very relevant. I have no idea what Apple users experience, and "desktop switching animations" aren't all the same thing.
Mac OS takes about one second to switch desktops, with at least half that time used for the last fraction where the animation smoothly slows down. It's fine if you only need it now and then, but when switching repeatedly for some task it gets annoying quickly.
Much worse: Mac OS fullscreen mode works by moving the current window to a new virtual desktop while hiding the status bar etc., meaning you always have a one-second delay when entering and leaving fullscreen; with videos on Twitter and some other sites it's even slower. But the UI is responsive sooner than that, so recently I accidentally minimized all applications by clicking while the screen was still completely black.
Animations are difficult to get right, I always err on the faster side because waiting for an animation to finish is never enjoyable.
I haven't paid any attention to the robotics world in ages and in the last six months I've discovered a bunch of interesting things people have been doing with less instead of more. Particular standouts are tiny maze-running robots, and classifications of fighting robots by weight. There's a guy with a bot named 'cheesecake' that has some interesting videos.
I think we could all do with celebrating the small. ESP32 and STM32 have hit a point where you can do modest computing tasks on them without having to become an embedded hardware expert to do so. I'm at one of those crossroads in my career and I'm trying to decide if I double down on a new web friendly programming language (maybe Elixir) or jump into embedded.
I've done a reasonable amount of programming in the small, several times tricked into it, and while it's as challenging if not more so in the middle of doing the work, the nostalgia factor after the fact is much higher than for most of the other things I've done.
I hadn't heard of this bot before and honestly this video is fantastic. Battlebots writ small is just as if not more compelling than the original I think.
Not that I disagree with the core point that some software gets worse over time, but I don't think it is valid to say that users have not benefitted from the ease of development afforded by Electron. Spotify has maintained its nominal price at $9.99 for 12 years, meaning its real price has fallen by one third. I don't know anything about Spotify or why they spend over a billion dollars in R&D each year, but if lack of attention to UI performance has helped cut their development costs then users might be benefitting through lower prices.
> I don't know anything about Spotify or why they spend over a billion dollars in R&D each year
Standups, one-on-ones, team fika, NIH, retros, agile retros, incident post mortems, cross-team fika, town halls, pool/billiards, table-tennis, and testing-in-prod.
A few years ago, I was working with a team that was trying to convert an entire API for a fairly straightforward application into REST api microservices.
The architect wanted to break everything up into extremely small pieces, small enough that many were dependent on each other for every single call. Think "a unified address service" to provide a physical address via an identifier for anything that had an address (customers, businesses, delivery locations, etc).
The problem was that it turns out when you're looking up a customer or business, you always need an address, so the customer service needed to hit the address service every time.
Disregarding the fact that this whole thing was a stupid design, the plan was that when you hit the customer api, the customer code would make internal http calls to the address service, etc.
I pointed out that this was a ton of unnecessary network overhead, when all of this information was sitting in a single database.
The whole team's argument was effectively - "it's 2015, computers and networks are fast now, we don't need to worry about efficiency, we'll just throw more hardware at it".
The whole thing ended up being scrapped, because it was crippled by performance issues. I ended up rewriting the whole thing as a "macroservice" which was 60000% faster for some particularly critical backend processes.
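To make the contrast concrete, here's a rough sketch of the two shapes described above (the service names, schema, and client code are made up for illustration, not what we actually had):

```ts
// Names, endpoints, and schema below are hypothetical.
import { Pool } from "pg"; // assumes node-postgres; any SQL client would do

// Design 1: the "unified address service". Every customer lookup fans out
// into another HTTP round trip just to resolve the address.
async function getCustomerViaServices(id: string) {
  const customer = await fetch(`http://customer-svc/customers/${id}`).then((r) => r.json());
  const address = await fetch(`http://address-svc/addresses/${customer.addressId}`).then((r) => r.json());
  return { ...customer, address };
}

// Design 2: one round trip to the database that already holds both tables.
const pool = new Pool();

async function getCustomerViaJoin(id: string) {
  const { rows } = await pool.query(
    `SELECT c.*, a.street, a.city, a.postal_code
       FROM customers c
       JOIN addresses a ON a.id = c.address_id
      WHERE c.id = $1`,
    [id],
  );
  return rows[0];
}
```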
Anyway ... I think that mentality is prevalent among a lot of people involved in technology creation: technology has improved so much, Moore's law, etc etc etc.
So let's not worry about how much memory this thing takes, or how much disk space this uses, or how much processing power this takes, or how many network calls. Don't worry about optimization, it's no big deal, look at how fast everything is now.
> a lot of the Windows shell and apps have been slowly rewritten in C#
I worked on the Shell team until late 2022. There is very little C#, if any at all.
The vast majority of the Windows Shell is still C++ with a significant amount of WinRT/COM.
What I had in mind when I wrote this sentence though is how the modern apps that people seem to like /are/ C#, such as Windows Terminal and PowerToys, and these feel quite slow to me. But, yeah, calling those the shell is a stretch.
Bad memory. The idea that Windows Terminal was C# was in the back of my mind for some reason but I did not verify this fact. Thanks for keeping me honest; updated the text now.
PowerToys is mostly C# though, which I have also verified now.
> Also they didn't consider themselves to be unique and special snowflakes that needed non-standard widgets and theming.
Ugh, this hurts my soul. I've worked with way too many designers who simply had to use their own icons and menus and scroll bars and drop-downs and on and on. Yea, let's throw out the person-centuries of interaction design research that the OS vendor put into the standard controls so that you, 3 years out of design school, can impose your "improved" version of them.
This is the reason I like SumatraPDF. It has an old-fashioned user interface, a rather limited set of features, but boy it opens fast. I wish there were more apps in that mold.
In general Linux seems pretty snappy. I like evince, which is not particularly svelte (but it does color inverted rendering and reloads your PDFs when they change on the disk, so it is nice for writing at night).
He mentions Linux in the blog post. But then goes off on a tangent about cross-platform programs. I think he must be primarily a windows guy or something? In general cross platform programs suck, it is well known, so everyone avoids them.
I use a Linux desktop that is much snappier than how popular distributions like Ubuntu or Fedora feel, at least in their default configurations.
However, I have never analyzed what the exact causes are, because my configuration has a lot of differences and I am not sure which of them matter.
I suppose that it is important that I use neither Gnome nor KDE, but XFCE. Even so, some of the applications that I use are intended for KDE or Gnome, so they use the corresponding libraries, and I do not know if these have any reason to be more responsive under XFCE than under their native desktop environments.
I prefer to use a completely empty desktop, without icons or toolbars and having as background a neutral gray. I launch applications by right click on the desktop and I restore minimized windows from an auto-hidden taskbar.
I take care that there are no active services/daemons besides those that I really need. I do not use systemd or anything associated with it. I stopped using swap two decades ago (but I equip all my computers with generous amounts of DRAM, to avoid out-of-memory situations; my oldest computers have at least 32 GB and nowadays I would not buy one with less than 64 GB).
I use only 4k monitors with 30-bit color, so using lower resolutions is not needed for a snappy GUI.
I use the proprietary NVIDIA drivers or the Intel drivers on computers with the integrated Intel GPU. I use a customized Linux kernel, but I doubt that this can have any influence on GUI responsiveness, even if it ensures fast boot times. I do not use Wayland and I doubt whether it is the right replacement for X Window, because its initial design was very flawed, even if some of the original defects have been corrected meanwhile.
For viewing PDFs or EPUBs, I use MuPDF, which is much faster (especially on startup, which is instantaneous) than the other viewers that I have tried. It has some limitations, so I also keep around other viewers, e.g. Okular, for the cases when I need a feature not provided by MuPDF. As file manager, I use Xfe.
Thanks for all the tips. I’ve been looking for a good PDF viewer on the Linux side; ironically, Preview is one of the hardest things for me to replace from the Mac.
That’s interesting. So do you start the programs floating, and then put them in tiling mode sometimes? Or just avoid tiling completely? (It seems like that would be giving up a lot of the specific strengths of the window manager… but of course if it works for you, it works!)
I use the multiple desktops feature a lot, though. Some programs are pinned to a dedicated desktop and run all day. Others I start in (or move to, later) a numbered desktop, and switch to them by changing the desktop. Often, most of the desktops contain only one window.
I once used i3 and sway the right way, of course. I guess I had one too many of those programs that were not designed to work at arbitrary window sizes, added the above rule and called it a day.
I recently-ish switched to Ubuntu and tried to give their weird gnome a try, but it just became a nightmare of addons and then the addons started conflicting or something. Anyway I’ve come to the conclusion that I’m not smart enough to use a desktop environment.
For my laptop, I’ve used i3, polybar (to get pretty bar at the top), rofi (a fine launcher), autorandr (detect and switch to my monitor and turn off laptop screen when I plug in), and… a random Arch Linux forum post to get automatic accelerometer based screen rotation going.
I bet there’s a fancy tool out there but this script seems to work after a little customization, and it is only a couple lines, so it can be understood!
While I haven't used a full DE in years, if you want snappy going with a lightweight WM wouldn't hurt. I use StumpWM but I haven't had any issues with responsiveness using i3, awesome, or XMonad.
Personal anecdote: my laptop (6 years old and outdated long before it was manufactured) was thrashing with 99% ram used and disk bandwidth maxed out due to a combination of a 100-tab Chrome monster and doing pacman -Syu on my Arch Linux WSL (I think it might be fixed now, but some time ago WSL file cache would eat all your Windows ram). After accidentally clicking on a pdf document Sumatra somehow still managed to open it in roughly 100-200ms.
I think it might be the only Windows program where I've legitimately been surprised by its speed.
It's clear that the biggest problem is companies prioritizing their cost, at the expense of user experience.
But after that, the biggest problem is clearly the framework/language you use. The maxim "premature optimization is the root of all evil" has done damage here. The problem with frameworks/languages is that by the time you finish your features and profile your code, you're already doomed. There's no way to speed up the entire framework/language, because it's part of everything you do - death by a thousand cuts. Nothing you do can improve upon the fundamental fact of running in an interpreter, with a garbage collector, with complex layout calculations (a la HTML/CSS instead of Win32), or with major choices like processes over threads, sync IO over async IO.
Well, there is a step beyond framework hell that can work, which is "living inside your own black box"[0]. This strategy intentionally supersedes the lower-level abstraction layers with a higher-level, application-focused one that eases rewriting the underlying stack "sometime down the road". It's nearly the only way you can get that.
But it does require a good understanding of what the application is and does, and a lot of software isn't that: it's just more stuff that has a behavior when you click around and press keys.
Actually I suspect unnecessary use of async IO is what makes many Rust applications slow. It surely makes things slower to compile (+100s of crates dependencies for the Tokio ecosystem), it makes the binaries bigger, which in turn makes the application slower to cold start and download.
GIMP will stall for 10-15 seconds at startup looking for XSANE plugins. Apparently it's calling some external server, bad in itself, and that external server is slow. Worse, this delay stalls out the entire GUI for both GIMP and other programs.
There's no excuse for this "phoning home". Especially for XSANE, which is a rather bad scanner interface.
> GIMP will stall for 10-15 seconds at startup looking for XSANE plugins. Apparently it's calling some external server, bad in itself, and that external server is slow. Worse, this delay stalls out the entire GUI for both GIMP and other programs.
Do you remember where you came across that explanation?
I'd be very surprised if it weren't something like an mDNS query with a high timeout. Which is its own problem (ideally it'd be async), but a far cry from it trying to access something on the internet.
It absolutely requires a responsive UI, there are many studies that show that milliseconds of latency matter when it comes to engagement, and it is one of the most important criteria that Google uses for ranking. Of course Google is all about serving ads and content consumption.
TikTok success can be largely attributed to the uninterrupted flow of content it provides.
Same thing for audio and video compression, streaming is also about milliseconds, slowness breaks the experience on a subconscious level, and who knows the user may even take his eyes off the screen and get out or read a book, terrible!
OP was clearly being sarcastic, but I'm not convinced that this isn't how Microsoft et al actually see things. Latency is hard and not core to how they see the world, so might as well not invest in it. In reality, it's very important, but we're fine as long as we can ignore reality.
I have mostly old hardware (the best computers I have are a smartphone and a Mac mini from 2012). I'm disappointed when I load newer Linux distros and they appear slower, even though I now have SSDs. Browsers are really greedy. I have a machine with 2GB of ram and a single core CPU. It's a low power rig. But cannot run Windows 10, it just about did when I first installed it. Debian runs better, but a browser kills it. It's a one app only. We had someone donate us a Windows 3.1 rig once; at the time we had 600MHz Windows 2000 machines and ME, and we were gobsmacked at how fast the 3.1 machine was on some ancient hardware.
Windows always used to have that fresh lightweight install vibe. But as soon as it started indexing, doing updates, and virus scanning it would drag to nothing. Along with all those system tray apps that would take an age to fire up.
Partial suspend to disk made things boot faster. Like not doing the whole driver scan thing on boot. I don't know if Linux has ever taken this on board. It's a blessing and a curse as Windows doesn't like being ported to another machine, whereas my Debian disks I can swap between some desktops and laptops without much issue.
There's also that weird delayed animation thing, that is meant to feel like polish. But slows down desktops. Weird animation effects and what not. I tend to run XFCE and turn off any thing like that.
I'm using a Chromebook right now and this is an old machine, but still feels pretty snappy. Certainly weird and wonderful experiences between hardware. I have High Sierra on an SSD on my Mac Mini, and that's slower by far to boot than my Linux Arch box on the same age hardware. Having said that the UI always feels more responsive as it's tailored for that.
Linux suffers lots for me with kcompactd or whatever it is. Some weird memory/disk swap stuff. If I accidentally code an infinite loop my machine turns to complete mud, and takes about 5 minutes to recover. Whereas it boots to the browser in under 1 minute. Weird huh?
> I have a machine with 2GB of ram and a single core CPU. It's a low power rig. But cannot run Windows 10, it just about did when I first installed it. Debian runs better, but a browser kills it. It's a one app only.
I put Alpine Linux on my laptop that has 1 GB of RAM and I can actually have a lot of stuff running at once, especially in the terminal (Emacs, SBCL, w3m, Deno).
But yes, opening Firefox consumes all available RAM until I have to power off the machine, sadly. The lightest weight, but functional browser, I've found is Midori, but I probably wouldn't trust it for, say, accessing my bank account.
It's depressing because I was initially blown away at how fast (and productive) old hardware can be... until I tried to use the web.
"Retrofitting performance into existing applications is very difficult technically, and almost impossible to prioritize organizationally."
While this may not be the only explanation for why this trend continues, I think it's *spot on* to explain why this continues in commercially-funded software.
The prioritisation has changed. Opening apps is slower, starting new documents/views within the apps is fast. With enough RAM you just keep all the apps you are using running and switch between them.
I've not started an app so far today, not sure I did yesterday either for that matter. I certainly created plenty of tabs in chrome, and new terminal windows, but those just pop as instantly as far as I can tell.
Applications tend to stay open on Mac even with no windows open. As a KDE on Linux user, I tend to open and close apps as I use them. Very few applications take more than a second to load on a modern machine with an SSD, anyway.
I find that many desktop GUI programs are slightly misbehaved for whatever reason, and consume some small but real amount of resources when idle in the background. When a dozen are open response time will be poor.
It's not just that the current crop of systems is laggy; there aren't even clear paths towards recovery. Maybe xilem will get released and mature at some point for devs to rally around it and build new desktop experiences with great perf again, but the timeline for that feels more like the 2030s, with a great deal of uncertainty. In general I don't see any groups or projects taking a serious, principled look at making desktop performance great.
Root cause is that startup path is more crowded these days with JIT, dynamic linking, app initialization, filesystem/DB scans, sandbox initialization, subprocesses, etc. It's a natural consequence of the apps (or underlying frameworks) growing in size and complexity. Optimizing it all by hand is unreasonably expensive if not impossible.
What we need instead is a way to persist the process or even whole sandbox to disk and then fork off new instances from that. That's much less work and much more effective than piecemeal optimizations (AOT, native code, etc.). Unfortunately, existing platforms and frameworks do not support forking, let alone forking from persisted app image.
This sounds hilariously like how you make a lisp executable binary.
Looking back at an old discussion on emacs trying to abandon the "unexec" idea, it is notable that they saw a substantial speed benefit from the old method. (https://lwn.net/Articles/673724/)
On one hand, lag is real. I rebooted my M1 Mac mini yesterday running Ventura, and while it only took a few seconds to boot to the desktop, it took, like, two minutes for it to stop beach-balling and finish initializing. (It was most likely waiting on network I/O to finish, but still.)
On the other hand, machines are faster than ever before while doing more than ever before, and apps are more flexible and change more quickly (largely in part to super approachable cross-platform frameworks like Electron). A few milliseconds of UI latency will bother almost no-one.
On the other _other_ hand, Julio's demo of using Windows 2000 on a time-appropriate machine is a best case. I definitely remember waiting what felt like eons for lots of useful apps to load, like Office and Internet Explorer. (Remember when browsers had startup screens?)
He's definitely right about how far hardware has come since 1999.
The shift to Apple Silicon was even more transformational than the transition to SSD, IMO. Julio's right in that the shift to SSD was night-and-day, but going from an Intel Mac to an Apple Silicon Mac sped up literally everything you used to an insane degree AND you got 12+ hour battery life to boot.
This used to be an either-or kind of trade-off, and for Windows devices, it still is.
To wit: I used an Intel Mac Pro (the trash can) last week for some audio work, and that super expensive almost-server-class machine was _slower_ than my base model M1 Mac mini at home, while running (slightly) louder and hotter!
>On the other hand, machines are faster than ever before while doing more than ever before, and apps are more flexible and change more quickly
I suspect this would fall apart the instant you try to rigorously quantify it. Put an actual number on how much faster machines are, and put an actual number on how much more useful stuff is happening. I cannot imagine that all of the slowness could be accounted for. I doubt you'd even account for 10% of it.
>A few milliseconds of UI latency will bother almost no-one.
No, this mindset is a death sentence for good software. All it takes is a few people at each level of the stack making the same excuse. Five milliseconds here, three milliseconds there, ten milliseconds, surely nobody will notice. But it all adds up (or worse, it multiplies in some cases).
> No, this mindset is a death sentence for good software. All it takes is a few people at each level of the stack making the same excuse.
To pile on to this, it's not just five milliseconds here and there in your application but its position in the rest of the system. Your application's 5ms regression is tested on an unloaded machine (for example). That means you lost 5ms on an unloaded machine with all its considerable resources available.
That regression is very likely to be much larger in a loaded system where your process is contending for resources. On a loaded system your 5ms regression becomes a 20ms regression along with everyone else's 5ms regression. So now the user is sitting there wondering WTF is happening.
There's a pretty good chance your GUI application will be running on a mobile device of some sort. This means it's running on a device with power limitations. The CPU might be vastly underclocked to save power or you could be running on an efficiency core instead of a more powerful performance core. Now your 5ms performance regression is possibly 100ms. A couple such regressions that "don't matter" on the 3GHz test desktop become significant on the user's laptop with 15% battery remaining that's clocked the CPU down to 600MHz.
Performance regressions should always be a concern and testing should encompass the pathological worst cases. Even if there's nothing you can do about a regression (that 5ms of work is exploit mitigations or something), it should at least be quantified and understood.
> On the other hand, machines are faster than ever before while doing more than ever before, and apps are more flexible and change more quickly (...). A few milliseconds of UI latency will bother almost no-one.
On the forking hand (distinct from your "other _other_ hand"), the "doing more than ever before" neatly cancels out "faster than ever before", which prompts a question: how useful is whatever it is our machines are doing now? Because one thing is clear: most of that extra work is not visible to the user. The increase in software capabilities is much, much smaller - nowhere near enough to explain where all the performance goes.
We could break the extra work into several categories:
- Support for better hardware - e.g. higher-resolution screens require more RAM (somewhat compensated by GPUs), and require higher-resolution assets to maintain (not even improve on) decent clarity, which has second-order effects on storage use (partially offset by compression) and CPU use on decompression (the price of offsetting storage use), etc.
- Security features - cryptography, secure protocols, isolation, sandboxing, etc. - those all cost CPU and memory.
- "Security" features - "real-time" malware scans (bane of my existence), shady anti-virus/firewall software (i.e. almost all of it), and various other parasites living off people's fear (or need for regulatory compliance), and convincing users to pay them with money, time and compute. Not (x)or, and.
- Legitimate new features - handling more types and a greater variety of content, more advanced algorithms, etc. Continuous auto-saving and collaborative editing are two big and perhaps underappreciated generic advancements.
- ... ???
The first four items on the list don't seem to account for all the performance loss, especially for software that doesn't do things requiring modern security, and offers the same set of features as its equivalent from 10 or 20 years ago. Often fewer features, even.
That last point, the "dark matter" of computing, needs more detailed study. We can guess a chunk of it is developer convenience, but that's not all of it either.
WRT. developer convenience - as a developer, of course I like it, but it's really getting out of hand - at this point, whatever I buy for myself by externalizing the cost on users, I still lose because I'm also the user of development tooling, and those developers externalize on their users too...
> I definitely remember waiting what felt like eons for lots of useful apps to load, like Office and Internet Explorer. (Remember when browsers had startup screens?)
I agree completely.
It's definitely disappointing that simple programs like Notepad and Paint are now laggy (I can reproduce the exact same lag on my 2-year old 16-core i7), but computing in general was SLOW back then.
Booting a computer often took over 5 minutes. I'd turn it on, prepare a drink and come back to the desktop partially loaded with a laggy 'wait' cursor waiting for my startup programs to finish loading.
Launching larger programs such as Word took more than a minute in some instances.
You're playing music AND want to open another application? Well that application will open 2x slower now due to lack of RAM.
I'd love to see a comparison of a current version of Word on a current machine opening vs Word 2000 on a 2000 machine.
I know it's easy to point at Electron specifically for examples of poor performance (and the author acknowledged it) but Spotify isn't a great example - it's the CEF flavour of web based frontend.
Always reminds me of the YouTube interview of Todoist's CTO where he was asked by the JetBrains guys what is the architecture of the desktop, and CTO sheepishly said Electron and the JetBrains interviewers couldn't stop laughing.
Used to be you would click on a URL and the page would load immediately, now there's a half a second wait so you can be tracked before they feed you the page. It's very noticeable.
There was a period where getting a new computer or phone meant you’d viscerally feel the power in your hands and everything would be tangibly snappier.
Then the SW and Product teams would catch up, add more crap and it’d be time for a new machine.
It seems like that cycle has ended and HW will never catch up again at this point.
Windows 2000 was later corrupted by XP-ification. The primary difference was 2000 was "NT 5", where the Explorer UI appeared to be split between foreground and background components. XP threw this away and went with the Windows 9x/ME shitshow as late as Vista. Windows Server 2003 (Whistler Server) was better than XP (Whistler) and was mostly an improved Windows 2000 (originally named NT 5 in the source code). I would generally consider using Windows Server 2003 for workstations instead of Windows XP because of its close compatibility with the latter. Windows XP Professional x64 (amd64) and Windows XP 64-Bit Edition [ia64] existed but didn't find many takers except in retro systems and niches where they had HP Itanium machines.
One of the reasons I showed the Surface Go 2 tablet is because it is running the Microsoft out-of-the-box experience. So... I'd expect 1. for it to behave nicely and 2. for Microsoft to have profiled and optimized this. It is true that Windows 10 (the version that originally shipped with it) behaved better, but not by much. In any case, the Surface Go 3 is a contemporary computer with Windows 11 and people quote it as only being 15% faster than the Go 2, so it's not going to be substantially better than what I showed.
Apple has traditionally fared better in this regard. In general, any new computer model they launch runs the contemporary OS version perfectly well, even if the newer OS does more stuff.
I also question whether all of the "extra stuff we do today" is valuable though... but the answer will depend on who you ask.
> Nobody prioritizes performance anymore unless for the critical cases where it matters (video games, transcoding video, and the like). What people (companies) prioritize is developer time. For example: you might not want to use Rust because its steep learning curve means you’ll spend more time learning than delivering, or its higher compiler times mean that you’ll spend more time waiting for the compiler than shipping debugging production. Or another example: you might not want to develop native apps because that means “duplicate work”, so you reach out for a cross-platform web framework. That is, Electron.
Small but important disagreement IME: what companies prioritize is time to market which is often the same as developer time but importantly different in one key way: it means it's no longer an issue of "lazy developers" but it's a market question of "what will users tolerate."
I don't say that just to shift blame, but because it's really key to understanding what's going on - as the article notes, performance "where it matters" has gotten better and better over time.
We also have a few other improvements in the last two decades that have made things like "app startup time" less important for the market: OSes that don't need to be restarted every day, and enough RAM (and good enough memory management) to manage running a ton of open apps/windows/tabs without manually quitting and re-opening apps all the time.
Would you pay more for a Notepad that opened in 0.1s instead of 2.0s or whatever? Would you give up other features? Certainly a lot of people on HN likely would - and many probably already are running Linux (hell, I am on one of my machines as a daily driver) which hasn't had the same "ship features fast!!" pressures.
Maybe one year Apple will do another Snow Leopard, or MS will do something similar - https://www.macworld.com/article/191006/snowleopard-3.html - but right now most folks I know are enjoying the features, even the ones that we power users often find superfluous or downright annoying.
We need a way of benchmarking all the things super easily, so you can say, "version x of this app is 3.2 times slower than version y, and compared to xx software, it's 30% slower". Then the bean counters will have numbers to put in their powerpoint slides, and it can be a point of differentiation between companies.
Like a standard GUI test suite. One click and it's measured. It could ignore or use predefined times for each network query.
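As a rough sketch of what that "one click and it's measured" harness could look like (the binaries and flags below are hypothetical; launch-to-exit of a scripted, self-terminating task is only a stand-in for "time until the UI is interactive", which needs platform hooks like ETW or Instruments to measure properly, and network calls would be stubbed with fixed latencies as suggested):

```ts
// bench.ts - crude sketch: time repeated cold launches of two app versions
// running the same scripted task, and report the ratio.
import { execFileSync } from "node:child_process";
import { performance } from "node:perf_hooks";

function medianLaunchMs(cmd: string, args: string[], runs = 10): number {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const t0 = performance.now();
    execFileSync(cmd, args, { stdio: "ignore" });
    samples.push(performance.now() - t0);
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)]; // median resists outliers
}

const oldMs = medianLaunchMs("./app-v1", ["--open", "test.txt", "--quit-after-render"]);
const newMs = medianLaunchMs("./app-v2", ["--open", "test.txt", "--quit-after-render"]);
console.log(`v2 takes ${(newMs / oldMs).toFixed(1)}x as long as v1`);
```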
I strongly believe the technology becoming faster is itself the biggest problem around slow technology. The constraints of slow hardware forced developers to do things "the right way". Today, you can serialize a gigabyte of JSON before the user will likely notice. Really hard to not be tempted by this dark path of incremental performance cuts in favor of dev-time convenience.
Just look a bit further down the front page of HN today and you'll find a legitimately heated debate about whether ORMs bring value or not. But who can blame the developers? The pressure from management is eternal and some of us like to spend our time on non-computer problems too. Reaching for a JSON serializer or ElectronJS over Win32 and raw SQL is an understandable decision when presented with real-life pressures.
I think if we want to see good user experiences, the capitalists and management will need to start prioritizing them again (at their own expense). This is a very difficult narrative to sell, but I've had some traction in my company with "higher order consequences of good UX today means more sales in the future" kind of talk. There is also a story about pride in the product and the improvements this brings for internal operations. Imagine if your executive leadership came into the room, smack talked the electron JS pile and insisted it be rewritten from scratch while targeting native toolchains? Would that be enough of a fire to get the average developer to start thinking again or are we too far gone at this point? Clearly, we are mostly beyond the phase of "valiant developer works overtime to strive for better UX despite any and all mgmt pressures".
If my cofounder told me to translate our current Electron app to something else (maybe Rust), I'd jump on it. As it is, I've wondered if I have an obligation to do so anyway, on my own time, giving up all nights and weekends until it's done. But our actual users don't even seem to care, and some have been very positive about the product.
I've posted about this before, but this is a fantasy land of the "before times". Sure, a bare app on a bare system would open snappy, but who cares? Get any significant load on that machine and it begins to wither and die. It regularly took minutes to load large docs, large 3d files, large dev environments. Sure it could load a simple text file instantly, but try to do a search through a few 10,000s of lines of code and watch things slow.
Modern software does have bloat, and I think we need to fix it, but modern applications are doing far more than just slamming in a text file. In 2000, OSes and apps weren't connected to anything. Their functionality was minuscule compared to what we have today, and they crashed all the time. Losing your work was a constant, which is why people learned to make constant backups.
I loved win2000, but I'd never go back there (although I would like advertisement to stay the hell out of my OS!)
I agree with a lot of the points in this article but I think the ship has long since sailed, it seems very unlikely to me that companies will revert to native frameworks over electron/chromium, the incentives are not there.
"Native" is not really the right thing to look in here. As the article notes, even Windows built-in apps like calculator (/notepad/explorer/paint) are still "native" to some degree (certainly not electron based) and nevertheless dog-slow compared to their predecessors.
Any Windows user should have switched to something like Qalculate.
Or the calculator from ReactOS, which is virtually the same and runs and looks native under Windows.
There's a new wave of C++/Rust GUI libraries that are looking to be the frontier of bridging the gap from cross-platform -> native frameworks. If that bet pays off, we might have an alternative to Electron. Would be a couple of years down the road however.
Ok. Can we at least revert to sanity in the FOSS realm then?
My guess is no, since for many writing FOSS software is just a way to hone their saleable skills for BigCorp where they'll spend their days finding new creative ways to put ads in front of people's eyes.
Not all code is slow. I can now boot a FreeBSD kernel in under 20 ms, which I'm pretty sure beats a 20 year old FreeBSD (not to mention a 20 year old Windows).
Is this on real hardware? And does “boot” mean to a login prompt?
I ask because I've never had a computer (no matter how fast) boot in less than 10 seconds on Windows or Linux. Just getting to the boot loader takes a good 3 seconds or more. If I have to switch to FreeBSD to get a fast boot, I'll do it! (On an old laptop, at least).
That's in Firecracker, time it takes to boot the kernel aka before we start running init.
Including userland -- i.e. "boot to login prompt" -- it's around 450 ms on the same VM; I haven't optimized the userland bits all that much yet.
Real hardware will take longer and will vary of course; we can't do anything about the time between poweron and when the boot loader starts running, for example.
This is everywhere. Quickbooks Online just changed significant portions of their UI such that there are noticeable delays when typing in many input fields. It's hideous and a big slowdown when trying to get work done.
I don't think many of these programs are used by the people who write them.
When they refreshed the Calculator app in the same way as Notepad, I went on a bit of a rant and "took it apart" with tracing tools to see where the computer power was going.
The problem is that modern package management tends to "drag in" unnecessary dependencies, each of which has to be loaded, remapped, virus-scanned, CRC-checked, etc...
The new Calculator was loading NVIDIA libraries, Windows Hello 4 Business Password Recovery helpers, and a decent chunk of a web browser.
Why?
1. Calculator gained some basic graphing capabilities, so it calls some DirectX functions, which call NVIDIA functions, which... drag in hundreds of megabytes of game driver garbage, including Event Tracing for Windows (ETW) circular logs that NVIDIA uses to spy on you. Err.. I mean "improve software quality".
2. It uses web requests (e.g.: currency conversions), which might need to go through a corporate proxy, which might need authentication, which might need Windows H4B, which might need a self-service password reset.
3. The aforementioned web requests use HTTPS and HTTP/3, which then drags in QUIC, a bunch of cryptographic libraries, CRL checking, Enterprise PKI policy, etc...
The hiccup is that all of this "work" is being done by the Calculator app itself, not the operating system. The OS just provides the DLL files, it doesn't pre-cache them in any meaningful way any more. These are all user-mode libraries, so each application has to load and process them in full.
Even in web development, you get the same effect. A small web site in ASP.NET / C# might have been 500 KB in the past and would load in milliseconds and then take 50 MB of memory. Now? If you add the new "session state" plugin, it drags in Azure SQL support, which needs Azure Active Directory Authentication, which needs OAuth, which needs JWT, which needs a JSON parser, which needs the new UTF8 decoders, which needs the new Memory/Span code, etc... That 500 KB app instantly bloats to 350 MB and takes 10 seconds to restart!
In NT4 and 2000 authentication and web proxies were "ambient" services provided by the OS, and much of it was in the kernel or pre-loaded and shared by all applications.
Summary:
The transitive dependencies of user-mode libraries are always processed in full. There's no compile-time "trimming" at this level. There's no truly lazy dynamic loading any more either: "dynamic" libraries are loaded statically because of hash-verification and anti-malware checks. There's no sharing any more to avoid DLL hell. All of the above also breaks the older optimisations where code was paged in dynamically.
The result is that it is "so easy" to write code now, but impossible to make it launch fast.
Perhaps what's needed is hibernation for apps? Just let all apps run forever and swap them out to disk when not in use. Hibernate the whole system instead of shutting it down while at it.
I somewhat blame the internet. Before the internet it was unheard of to wait for an application or dialogue to load. The internet normalized this deviance, and here we are.
There's actually an answer to this: if you compare the Windows machine to the average spec from the time it was released, you'd probably find comparable performance. This is not a coincidence. What is happening is this: work is put in to attain the minimum level of performance a paying user will find acceptable. That threshold has not changed, so in general, neither has real performance. The places where modern computers actually strut their stuff are where the paying users care a lot. For example, a game pushes 120Hz at 4K because the paying customer cares. Video editors care about every last bit of performance because at scale each bit becomes seconds, then minutes, then hours pretty quickly.
(There's an averaging effect too. That game's paying customer might like it if Windows was snappier, but in that market they're just another voice in the crowd of people who, on average, are satisfied with the status quo.)
"Paying customer" is important too because you can almost measure how much social power the user has by how slow the software is. Why is the point-of-sale device in the checkout miserably slow? Why does the device the desk receptionist is using at your national-scale bank need 30 seconds just to pull up your appointment information? Those people have no social power, so they get to just sit and wait while the computer swaps things in and out for a truly simple task. Executives get machines that can switch between email and the browser in a heartbeat with the other 24 gigs of RAM and 6 cores do nothing. Using what machines the developers get as a metric for how much the company cares about them is something smarter developers have been doing for a while; if your developer interviewer uses the 8 minutes it takes their machine to boot to sing the praises of the engineering culture the company has, well... make appropriate judgments.
(In 2023 this almost impresses me. Setting up a computer that needs 30 seconds to switch between a calendar and an email is almost a challenge now. You have to harvest those last few precious $10 per unit by giving people really shitty hardware, and then combine that with an almost sociopathic dedication to layering on enough virus scanning and tracking software to slow even a cheap machine down. Even the cheap stuff shouldn't be that slow!)
If you work at it even a bit on Linux, you can set up a very responsive system. You've been able to for a long time, really. But you may find you don't get to use the latest and greatest of the heaviest-weight desktop environments of the day. It's a lot easier on Linux to use the stuff from years ago that flies now. Emacs used to be jokingly referred to as "Eight Megabytes And Constantly Swapping"; now it's a "lightweight" text editor. (Though it doesn't quite start instantly for me, it is under a second in my configuration. YMMV.)
>if you compare the Windows machine to the average spec from the time it was released you'd probably find comparable performance.
This is not the case; he specifically debunks it in the Twitter thread. He took a contemporary late-90s machine with modest specs, and it was still snappier than the modern PC with Windows 10.
And even if that were the case, it does not follow that we should accept modern machines not being able to load Notepad instantly. Modern machines are hundreds of times more powerful than 20 years ago, but Notepad has not become hundreds of times more featureful.
There is a lot of fuzz in that comparison. I used machines at that time; they were not snappy if you had the average specs, and I am not convinced he got that in his comparison. In particular, modern iterations of those old machines are usually stuffed with RAM by comparison, because RAM has since become so dirt cheap that it's not worth skimping on. I acknowledge he at least tried, but I'm not convinced he succeeded, and even by his own text you can tell he's not exactly putting a huge marker down on the older system being the exact right system. Average specs typically dipped into swap. (The supposed minimum specs were often a joke.)
"accept"
I'm a bit at a loss as to how anything I said sounds like we should accept this. Did I not call it almost sociopathic how IT departments manage to slow machines down for people who have no social power? Does that sound like I think this is hunky dory?
Not snappy on reading I/O but the UI was fast as hell.
With SSDs, a Pentium 4 with SSE2 support running TDE3 under Slackware Linux with Konqueror, Amarok and Kopete (Konqueror against a non-JS site) will run circles around lots of Core i7 setups running Spotify and Slack...
I am going to play devil's advocate here. What can an average person do? I eventually dropped Windows for a Linux distribution, and while it was a learning experience, I can kinda see why it is not the default approach for most people. It is hard for most to overcome inertia. Personal example: before I felt compelled to jump, I had a dual boot installed that I almost never used. Things simply have to get bad enough for people to stop accepting them. As long as they stay in the 'annoying' realm, most will put up with it.
It is a sad commentary about human nature, because it suggests a slow enough decline will be accepted.
Which features? For instance, Kopete had far more features than Slack and Discord combined while using 10X fewer resources: inline videos, videoconferencing, inline LaTeX rendering and who knows what else.
Ironically, Spotify and the rest of the Electron turds barely cover 10% of what we did with Amarok and KDE3 with KParts, which used 10% of the resources.
What we are seeing is people driving damn 18-wheelers with fewer features than a tricycle.
Opera, for instance, was proprietary, but it even had a torrent/email client while remaining usable at amazing speeds even on a Pentium 4.
Now try that today with Vivaldi.
More IM: Pidgin. Bloat it to the extreme with 4, 5 or even 10 protocols running at once, and pile on plugins (Google Translate services, inline videos, HTML parsing) until it uses ~512 MB of RAM. It still runs far snappier than Discord, it uses 4X less RAM, and a simple Athlon/Pentium III can drive literally thousands of conversations in parallel.
Which features did the slower Notepad gain? And just as importantly: which of those features could not have been added without a drop in startup time?
That makes me wonder, would a notepad executable from XP or 7 run on 10 and 11, and would it be faster?
On my Win 10 machine with an SSD and animations turned off, Notepad and cmd do launch almost as fast as in that NT demo, so maybe it wouldn't make much of a difference.
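If anyone wants to put numbers on "almost as fast", a hedged sketch: launch each binary and time how long it takes to reach an idle message loop. WaitForInputIdle is only a rough proxy for "visibly ready", and the paths are assumptions; point them at whichever old and new notepad.exe copies you actually have (the Store-app Notepad may hand off to a different process, which this won't catch).

    using System;
    using System.Diagnostics;

    class LaunchTimer
    {
        static void Main()
        {
            foreach (var exe in new[] { @"C:\old\notepad.exe", @"C:\Windows\System32\notepad.exe" })
            {
                var sw = Stopwatch.StartNew();
                using var p = Process.Start(exe);
                p.WaitForInputIdle();   // returns once the GUI message loop first goes idle
                sw.Stop();
                Console.WriteLine($"{exe}: {sw.ElapsedMilliseconds} ms to first idle");
                p.Kill();
            }
        }
    }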
I'm not sure this is a great example. A Formula 1 car is significantly larger and more complex than any go-kart and is 4-5x as fast. In fact, almost any car you can purchase will significantly outperform a go-kart.
I don't think adding features necessarily means software must be slower. I would venture to guess Emacs has more features than Word, especially when you factor in plugins, and it is significantly faster than Word.
The more I think about this, the less I am inclined to do any optimizations. This is an operating system problem. OSes are supposed to implement proper sleep modes for apps instead of mindlessly restarting them.
Don't throw this on devs. Attempting to fix the problem on the dev side would cost billions of dollars worldwide and would probably fail. Meanwhile, implementing proper app sleep modes across all the major OSes is a comparatively small, million-dollar project.
Personally, I am not going to invest a single second of my time into optimizing app launch if I can avoid it. If you are bothered by app launch performance, go hack app sleep states into Linux or some other free OS instead of shouting at devs.
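For what it's worth, half of the mechanism already exists on Linux in the cgroup v2 freezer; the missing piece is a desktop environment actually using it. A sketch of the idea, assuming a unified cgroup hierarchy at /sys/fs/cgroup and permission to create groups there (a real implementation would live in the compositor or session manager, not in each app):

    using System.IO;

    static class AppFreezer
    {
        const string Root = "/sys/fs/cgroup";

        public static void Freeze(string name, int pid)
        {
            string dir = Path.Combine(Root, name);
            Directory.CreateDirectory(dir);                                       // create the cgroup
            File.WriteAllText(Path.Combine(dir, "cgroup.procs"), pid.ToString()); // move the app into it
            File.WriteAllText(Path.Combine(dir, "cgroup.freeze"), "1");           // stop scheduling it
        }

        public static void Thaw(string name)
        {
            File.WriteAllText(Path.Combine(Root, name, "cgroup.freeze"), "0");    // resume
        }
    }

A frozen app keeps its memory, the kernel can swap that memory out under pressure, and thawing it is far cheaper than a cold start, which is most of what "hibernation for apps" would need without any dev changing a line.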