> ...software starts to demand more CPU cycles, since there's less of an incentive to optimize.
Node and Electron and crew would agree.
A "modern" chat client (Element, Signal, etc) will lag on a quad core 1.5GHz system in terms of "typing into the text input box" (RasPi4, yes, I've set the governor to performance), when a single core 66MHz 486 could manage that trick without any lag in a range of chat clients. Hexchat still works fine, at least.
What we need to do is stop letting developers use high end, modern systems, and put them on something a decade old for daily use. Then maybe they'll stop writing crap that only works well on a 1-2 year old Xeon workstation. Google, I'm looking at you. Buy all your devs a Pi4 and make them use it once a week for a day.
Some of it is explained by bloat but some of it is explained by things like 4K-5K displays, pretty antialiased fonts, support for multiple languages and unicode, support for inline images and videos, emojis, support for all kinds of inline objects like polls or HTML documents, transport encryption everywhere (sometimes multiple layers of it like Signal ratchet over TLS), scrollback history with search going back to the beginning of time, real time sync between devices, and so on.
High resolution displays, for example, impose a quadratic cost right out of the gate: doubling the linear resolution quadruples the pixel count, so even printing "hello world" touches far more pixels. That's more memory, more memory bus traffic, larger bitmaps, scalable vector graphics formats that take more processing power to draw, antialiasing, ...
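To put rough numbers on that quadratic growth, here's a quick sketch in Python (the 1024x768 baseline is my own pick for a 90s-era desktop, not something from the thread):

    # Pixel count grows with the square of the linear resolution increase.
    # Baseline 1024x768 is an assumed 90s-era desktop resolution.
    base_pixels = 1024 * 768

    for name, w, h in [
        ("1920x1200", 1920, 1200),
        ("3840x2160 (4K)", 3840, 2160),
        ("5120x2880 (5K)", 5120, 2880),
    ]:
        pixels = w * h
        print(f"{name}: {pixels:,} pixels, {pixels / base_pixels:.1f}x the baseline")
    # -> roughly 2.9x, 10.5x, and 18.8x the pixels to push for the same "hello world"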
Again I am not writing off bloat. We could probably shrink most software by at least 2X and most Electron stuff by 3-4X. I doubt we could shrink it 10-100X though without sacrificing some of the things I listed up there, especially all the eye candy and rich interaction media.
In my particular context, it's a Raspberry Pi on a 1920x1200 display, 1:1 pixel scaling, no antialiasing that I know of, and... the rest shouldn't matter that much in terms of how long it takes a key pressed at the keyboard to show up in the input text area. If it does, something is quite wrong in the architecture, IMO, and the problem is probably buried four layers down in nested libraries nobody actually understands.
I don't think pixel density has increased as much as you seem to think it has in the 1x scaling space.
I ran some very nice 1600x1200 21" monitors back in the day - ~95 ppi.
A 27", 2560x1440 monitor (my preferred native size) is 108 ppi.
The 24", 1920x1200, is 94 ppi.
I understand the issues with scaling a display, but those are more of an OS-level concern and shouldn't impact key-to-screen delays noticeably. They certainly don't on more recent Apple hardware, even if you're using some screwball non-integer scaling.
But the reality remains: I now have 6GHz of CPU cores (four at 1.5GHz), 8GB of RAM, and things are objectively slower than they were on a 66MHz machine with 24MB. That's not progress.
And don't forget accessibility. I don't know exactly how much accessibility contributes to the total size of Chromium (singling it out since Electron is everyone's punching bag), but it's something. In my AccessKit project [1], I'm probably going to spend at least a few weeks working on the accessibility of text editing alone. And that's just multi-line plain text; hypertext is way more complicated.