The efficiency difference between native and "modern" web stuff is easily several orders of magnitude; you can write very useful applications that are only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions.
Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should be wasting it to deliver the same or even less functionality than we had with the machines of 10 or 20 years ago.
For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II. Even the web stuff at the time was nowhere near as disgusting as it is today --- AJAX was around, websites did use JS, but simple things like webchats still didn't require as much bloat. I lived through that era, so I knew it was possible, but the younger generation hasn't, so perhaps it skews their idea of efficiency.
In terms of improvement, some things are understandable and rational, such as newer video codecs requiring more processing power because they are intrinsically more complex and that complexity is essential to their increase in quality. But other things, like sending a text message or email, most certainly do not. In many ways, software has regressed significantly.
Another program I use a lot is Blender (3D software). Compared to Spotify and Slack it is a crazy complicated program with loads of complex functionality. But it starts in a blink and only uses resources when it actually needs them (for calculations and your 3D model).
So I absolutely agree with you.
I also think it has to do with the fact that older programmers know more about the cost of resources than younger programmers do. We used computers without hard disks and with only KBs of RAM. I always have this in mind while programming.
The younger programmers may be right that resources don't matter much because they are cheap and available. But even so, I just had to upgrade my RAM.
Web apps masquerading as desktop apps are terribly slow and it's a surprise we've got so used to it. My slack client takes a few seconds to launch, then it has a loading screen, and quite often it will refresh itself (blanking out the screen and doing a full-blown re-render) without warning. This is before it starts spinning the fans when trying to scroll up a thread, and all of the other awkward and inconsistent pieces of UI thrown in there. Never mind having to download a 500MB package just to use a glorified IRC client.
I'm really enjoying writing code outside of the browser realm where I can care a lot more about resource usage, using languages and tools that help achieve that.
 - https://cancel.fm/ripcord/
"(ii) copy, adapt, modify, prepare derivative works based upon, distribute, license, sell, transfer, publicly display, publicly perform, transmit, stream, broadcast, attempt to discover any source code, reverse engineer, decompile, disassemble, or otherwise exploit the Service or any portion of the Service, except as expressly permitted in these Terms;" 
Given that the API is not public if you are not using a bot key, I would think that using it with a third party client would take some form of reverse engineering.
The devs also stated that other client modifications like betterDiscord are against the TOS.
https://discord.com/terms (Under Right To Use The Service)
But at least it's generally faster when you do it!
I'm not convinced it's the programmers driving these decisions. Assuming that it takes less developer effort - even just a little - to implement an inefficient desktop application, it comes down to a business decision (assuming these are programs created by businesses, which Spotify and Slack are). The decision hinges on whether the extra cost results in extra income or reduced cost elsewhere. In practice people still use these programs so it seems the reduced income is minimal. What's more, the "extra cost" of a more efficient program is not just extra expense spent on developers - it's hard to hire developers so you probably wouldn't just hire an extra developer or two and get the same feature set with greater efficiency. Instead, that "extra cost" is an opportunity cost: a reduced rate of implementing functionality.
In other words, so long as consumers prioritise functionality over the efficiency of the program, it makes good business sense for you to prioritise that too. I'm not saying that I agree with it, but it's how the market works.
And the kicker is, consumers don't have a say in this process anyway. I don't know of anyone who chose Slack. It's universally handed down onto you from somewhere above you in the corporate hierarchy, and you're forced to use it. Sure, a factor in this is that it works on multiple platforms (including mobile) and you don't have to worry about setting it up for yourself, but that has nothing to do with the in-app features and overall UX. Or Spotify, whose biggest value is that it's a cheap and legal alternative to pirating music. And that value has, again, nothing to do with software, and everything to do with the deals they've managed to secure with artists and labels.
I exercise my preferences wrt. Slack by using Ripcord instead of the official client. Most people I know exercise their preferences wrt. Spotify by using YouTube instead (which is arguably lighter resource-wise). And speaking of alternative clients, maybe that could be the way to go - focus on monetizing the service, but embrace different ways of accessing it. Alas, time and again, companies show they prefer total control over the ecosystem surrounding their service.
The consumer here is the business itself, not their employees.
I do webdev mostly, and there it's also a matter of management. I want to optimize applications to be less hungry; those are interesting challenges to me. But I've been told by management to just upgrade the server. Either I'd spend a day optimizing, and maybe fix the issue, or we just spend 50 euros a month more on a server.
Sometimes the optimization is not worth the effort. For applications like Blender? Optimization means a lot.
If a day of developer time costs roughly what that bigger server does over a year (50 euros x 12 = 600 euros), then, discounting additional effects like a more satisfied userbase, the optimization would pay for itself in about a year. And optimizations stack.
... articles with titles like:
“Slack in one Ruby statement” a la https://news.ycombinator.com/item?id=23208431
More seriously though, Spotify and Slack are optimised to intentionally be huge time wasters, so it makes sense the organisations that produce them don’t care about performance / efficiency.
On the topic of flagrantly unnecessary resource usage...
My first child was born six months ago. Newborns (we discovered) sleep better with white noise. So of course we found a three hour white noise track on Spotify and played it from our phones for every nap, never bothering to download it.
But I can't. My RAM is soldered on. How many tons of carbon dioxide should I emit so that you can use React? There are ways to do declarative UI/state management without the DOM...
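A rough sketch of what I mean, in plain C++ with made-up stand-in "widgets" (this is not any particular library's API, just an illustration of the pattern): the whole UI is a function you re-run over the state, with no retained DOM-like tree to build, diff, or garbage-collect.

    // Declarative UI without a DOM: the "view" is recomputed from state
    // each frame instead of mutating a retained widget tree.
    #include <cstdio>
    #include <string>

    struct AppState {
        int counter = 0;
        bool quit = false;
    };

    // Stand-in for a real toolkit's button; here a "click" is a key press.
    bool button(const std::string& label, char key, char input) {
        std::printf("[ %s ] (press '%c')\n", label.c_str(), key);
        return input == key;
    }

    // The entire UI, declared from scratch as a function of the state.
    void renderFrame(AppState& s, char input) {
        std::printf("\nCounter: %d\n", s.counter);
        if (button("Increment", '+', input)) s.counter++;
        if (button("Quit", 'q', input))      s.quit = true;
    }

    int main() {
        AppState state;
        renderFrame(state, '\0');                      // initial draw
        int c;
        while (!state.quit && (c = std::getchar()) != EOF)
            renderFrame(state, static_cast<char>(c));  // apply input, redraw from state
    }

Immediate-mode GUI libraries work roughly like this; swap the printf calls for actual drawing and you have a UI loop with no DOM anywhere.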
My computer still computes with 2 GB of ram. It’s just that developers are gluing more and more stuff together to do things we did on Pentium processors with 64 MB of ram.
If there were some modern tool like wxWidgets that supported modern APIs like the DOM, Android and UWP, would we see more use of native? Electron would then become pointless.
Qt is a modern toolkit with native-cross platform support, but costs money for commercial use, and businesses and software developers don't want to spend the money on it.
Not to mention, the HTML/CSS combo is possibly the best we've come up with for designing user interfaces.
I recently got a new PC myself and decided to go for 16GB, my previous one (about a decade old) had 8GB and I didn't feel I really hit the limit, but wanted to be future proof. Because as you said, a lot of 'modern' applications are taking up a lot of memory.
For each of your Electron apps there is a little compiler chugging away.
Spotify and Slack are not problematic as individual programs but since I have a lot of other programs open they are the ones that take up more memory than they should. I mean: Spotify is just a music player. Why does it need 250MB RAM?
This was meant to be sarcastic, but I'm not even sure how to continue. Maybe someone else can bulk up that list to get to something that requires 250MB. :)
Parenthetically, we do use Slack and I am double-dipping on a lot of heavy functionality by having both Spacemacs (which I use for code editing and navigating / search within files) and Visual Studio (which I use for building, debugging, and jump-to-definition) open at the same time.
Just had to upgrade to 32 GB myself.
What I remember from the time was how you couldn’t run that many things simultaneously. Back when the Pentium II was first released, I even had to close applications, not because the computer ran out of RAM, but because the TCP/IP stack that came with Windows 95 didn’t allow very many simultaneous connections. My web browser and my chat were causing each other to error out.
AJAX was not around until late in the Pentium II lifecycle. Web pages were slow, with their need for full refreshes every time (fast static pages an anomaly then as now), and browsers’ network interaction was annoyingly limited. Google Maps was the application that showed us what AJAX really could do, years after the Pentium II was discontinued.
Also, video really sucked back in the day. A Pentium II could barely process DVD-resolution MPEG-2 in realtime. Internet connections generally were not the several Mbit/s necessary to get DVD quality with an MPEG-2 codec. Increasing the resolution increases the processing power required roughly in proportion to the pixel count, which grows quadratically with the linear dimensions. Being able to Zoom call and see up to 16 live video feeds simultaneously is an amazing advance in technology.
I am also annoyed at the resource consumption, but not surprised. Even something “native” like Qt doesn’t seem to be using the actual OS-provided widgets, only imitating them. I figure it’s just the burden we have to pay for other conveniences. Like how efficient supply lines mean consumer toilet paper shortages while the suppliers of office toilet paper sit on unsold inventory.
Internet was slow, but that was largely because for most of the 90s I was stuck with a very slow 2400 baud modem - I got to appreciate the option that browsers had to not download images by default :-P.
But in general I do not remember being unable to run multiple programs at the same time, even when I was using Windows 3.1 (though with Win3.1 things were a bit more unstable, mainly due to the cooperative multitasking).
Very very few of the web and desktop applications of today are as snappy and user-friendly as classic Winamp.
I used to use it all the time. Now I use Spotify instead. I'm not sure I want to go back to curating my own collection of mp3's again.
People aren't using Spotify because the player is fantastic, they use it because Spotify has a huge library, is reasonably priced, and the player is sort of okay.
Like everything these days, it's barely good enough. And why bother implementing your DRM as a 1KB C++ library when you can use a 5MB Objective C framework instead?
Alright, the missing part of my story was that I was also using a proxy program to share a single connection with my brothers. NAT wasn’t widely available yet, and Winmodems were much easier to find than hardware modems. (And I hadn’t discovered Linux and the free Unixes yet.)
So, every TCP connection that AIM made was 2 connections in the proxy program. We quickly discovered that AIM on more than one computer at a time made the entire Internet unusable.
Every generation of developers decries the next generation for bloat, but Windows 95 had preemptive multitasking that made the computer so much snappier (plus other features), at the cost of multiple times more RAM needed than Windows 3.1. (16 MB was the unofficial minimum, and often painfully small. Microsoft’s official minimum was impractical back then.) Windows XP had protected memory that made it more feasible to run multiple applications, because they were much less likely to crash each other (plus other features, including a TCP/IP stack featuring NAT and a useful connection limit), at another several multiples more RAM needed.
There have always been tradeoffs. Back in the day, programs were small and developers focused more on making sure they did not crash, because they didn’t have lots of RAM and crashing would often require the computer to reboot. That developer focus meant less focus on delivering features to users. (Also, security has often meant bloat.) Now, you barely need to know anything about computer science, and you can deliver applications to users, at the cost of ginormous runtime requirements.
I used Visual Studio 6 for years, and yes, I can confirm, it was really that fast.
It's also not true that there were problems with running more applications etc., as "Decade" claims. Or to be more precise, there were no problems if one used Windows NT, and I've used NT 3.51, NT 4 and 2000 for development, starting with Windows development even before they were available. And before that, Windows 3.x was indeed less stable, but that is the time before 1995. Note that the first useful web browser was made in 1993; the internet as we know it today practically didn't exist. There were networks, but not the web.
Windows NT required a multiple more RAM to run than the consumer versions of Windows (oh no, bloat!), and was much more picky about what hardware it ran on. Starting with XP, the professional and consumer versions of Windows have merged. We are so lucky.
That explains your inaccurate perspective.
> Windows NT required a multiple more RAM to run than the consumer versions of Windows (oh no, bloat!), and was much more picky about what hardware it ran on.
Allow me to claim that that is also not true, in the form you state it. Again, I've lived through all this, and I can tell you what that was about. The "pickiness" of NT, even at the time, was not about motherboards and chipsets. It was about consumer hardware devices. Many things that probably don't even exist as products today, like a black-and-white hand scanner that scanned as you moved your hand over the paper and came with only Windows 3.x drivers on a floppy. There was never a problem having a developer machine running NT in any reasonable price range, with a reasonable graphics card, monitor, keyboard and mouse. And, at the start, a phone-line modem transmitting some kilobytes per second!
The RAM needs did exist, but again not such as they are made out to be by later distortions. If I remember correctly (it changed relatively fast), at the time NT was released, Microsoft had to deliver it claiming that it would run on 4 MB: the OS, the programs and the graphics all had to fit. Let me repeat, 4 MB. It ran, but not comfortably for bigger programs. But the point is: as soon as you had 8 MB at that time, you didn't have a problem. A little later, for comfortable work, 16 MB was more than a good choice. It was a hundred, two hundred or three hundred dollars more than the cheapest possible offer (yes, those were the prices then), but that was it. RAM was the only thing you had to care about to have NT running.
The point is, at that time there were enough people who didn't want to use Windows NT at all, clinging to 3.x and then 95, and they are the ones who promoted the horror stories about OS problems. But it was just their ignorance. 95 was also reasonably stable, unless you used, like many, some "utility" programs that were more malware than of real use (the "cleaning", "protection" or even "RAM expander" snake oil was used by some even then --- not to mention that a lot of people believed they had to try any program that happened to come their way).
The good development tools were good and stable, especially the command-line ones (among the GUI tools there was some snake oil too). But Word did crash even under NT, and even during the first half of the 2000s, and that's a completely different story --- at that time it was intentional for those products.
Yep, you really succeeded at empathy, there. /s
> reasonable price range
The word “reasonable” is doing a lot of work, here.
Most Pentium II systems were not running Windows NT. They were running Windows 95 or 98, which had arbitrarily severe limitations and lacked memory protections.
So, while it was technically possible to run lots of applications simultaneously on 256 MB of RAM, for most people it was a fun adventure in whether some buggy program had destabilized the system into needing a reboot to run properly again. Or whether it was still usable with degraded functionality. In my case, that's without using the cleaning, protection, or RAM expander programs.
And even on professional operating systems, web browsers crashed a lot, as did any other program that had to deal with untrusted input (basically anything that can open files or connect to the network), and they have gradually bloated as they learned security or added features.
Once again: only somebody using a computer not selected for serious development used Windows 95 and 98. No developer who knew what he was doing used Windows 95 or 98 as his primary development machine. So if you complain about that, you used the wrong tool for your work. Like I've said, it was easy to install Windows NT, and I don't know of any computer that wasn't able to run it, given reasonably enough RAM.
> on 256 MB of RAM
To illustrate "reasonable" once again, and how it changed over time: I remember buying an AMD-based notebook in 2002 with 256 MB and running Windows 2000 on it absolutely without problems for a few years, before upgrading to 512 MB, which was the maximum for that notebook. And that was the era of the Pentium III and IV, not the Pentium II. And like I've said, I've run Windows NT on 8 MB computers, with compilers, resource editors, debuggers and even an IDE. And even before that, I ran Windows 3.11 on a 2 MB computer and used it for development too (the development tools being in text mode, of course).
> some buggy program has destabilized the system into needing to reboot to run properly again
Only on non-NT systems, and surely not because of developer tools. I used Windows 3.x and Windows 9x, and never had to reboot due to developer tools "making the system unstable." Not even on a 4 MB or a 16 MB machine.
> web browsers crashed a lot
I've used both Mosaic and Netscape, and before 2000 my main problem was surely not them crashing. Surfing mostly worked (only the pages loaded slowly; there were no CDNs then). Again, on an NT system.
Maybe it’s simultaneously true, that you could run many developer tools at the same time on Windows NT with hundreds of dollars of RAM, and attempting to run a bunch of consumer network programs at the same time (especially on consumer Windows) was asking for trouble.
I remember one of the attractions of IE 5 back in the day was how each newly launched window was its own process (not windows opened by the open link in new window menu option), so unlike Mosaic and Netscape, a crash in one copy of IE did not necessarily bring down all the other windows. Multiple windows being useful because surfing with a modem was slow regardless of CDN. Remember when Yahoo was scandalous, because banner ads took so much bandwidth?
> and now you’re talking about developer tools on a Pentium 4.
It's to illustrate that the arguments are wrong: it's Decade who uses "256 MB" as an argument, which is not a "small memory" configuration for a Pentium II, and I'm illustrating that 256 MB only became common in 2002 for notebooks, a time when the Pentium IV was already common in developer machines.
> The ggp post was about doing all sorts of Internet programs at the same time on Pentium II era computers
Let me check again:
"For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II."
OK. That is also obviously a bit off. 256 MB with a Pentium II is quite a lot; as I showed, 256 MB was normal for notebooks only by 2002, when the Pentium III was already common in notebooks and the IV in desktops. Working with email: at that time e-mail clients, if they used HTML at all, were limited to the HTML formats of that time, so "using email" completely worked, with no system crashes on NT (Outlook did have a limit of a single PST having to be less than N GB, I remember that). IM also just worked, again without crashes on NT.
That leaves "video/audio calls". Video calls were surely not common at that time, and I personally also haven't used audio calls.
But the "stability" problems you claim to have been common definitely didn't exist the way you claimed, as soon as one used NT, that is, since around 1994, or later on Windows 2000 or even later on XP or Server 2003, all NT-based. And as I've said, it was not that "too much" RAM was needed, as I've run NT on 8 MB with no problem.
So I still don't understand why you continue to stick to a narrative that was simply not true. No, it was not as bad as you claim. Computers were quite stable even then for those who knew what they were doing. On NT, almost nothing crashed the system, except for failed hardware. Like I've said, some apps were indeed less stable, like Word crashing or saving an invalid DOC file. But Excel, for example, while being part of the same "suite", I don't remember ever crashing. I also don't remember browsers actually crashing, just pages downloading very, very slowly.
But clearly you want to have the last word, so I guess I should let you have it.
In a similar vein, Industrial Light and Magic used to have a few highly talented people crafting incredibly intelligent solutions to make their movies possible: https://youtu.be/AtPA6nIBs5g
By now, most of those effects would instead be done using CGI and outsourced to Asia.
According to http://www.vogons.org/viewtopic.php?p=423016#p423016 a 350MHz PII would've been enough for DVD, and that's 720x480@30fps; videoconferencing would more commonly use 320x240 or 352x288, which have roughly a quarter of the pixels, and H.261 or H.263 as the codec instead.
> Being able to Zoom call and see up to 16 live video feeds simultaneously is an amazing advance in technology.
I'm not familiar with Zoom as I don't use it, but it's very likely you're not actually receiving and decoding 16 separate video streams; instead an MCU or "mux box" is used to combine the streams from all the other participants into one stream, and they do it with dedicated hardware.
That said, video is one of the cases where increased computing power has actually yielded proportional returns.
Modern videoconferencing solutions (WebRTC) usually use 1280x720 and either H264 or VP8. Some apparently use HEVC. Also most modern processors and SoCs come with hardware-accelerated codecs built in, so most of the work related to compression isn't even done by the CPU itself.
> I'm not familiar with Zoom as I don't use it, but it's very likely you're not actually receiving and decoding 16 separate video streams
Yes you are receiving separate streams.
While on the topic, let's remember the speech recognition software available for Windows (and some for Android 2.x) that was completely offline and could be voice activated with, gasp, any command!
Google with its massive data centers can only do "OK/Hey Google". Riiight. I can't believe there are actually apologists for this bs.
Anyway, old speech recognition software was quite horrible. Most did not even work without prior training. And Google now has offline speech recognition too. But true, the ability to trigger with any desired phrase is something still missing.
The inability to change it from "Hey Google" is done for marketing / usability reasons.
Voice recognition used by things like Google Assistant, Siri, Cortana, and Alexa usually relies on a "wake word", where it's always listening to you, but only starts processing when it is confident you're talking to it.
Older speech recognition systems were either always listening and processing speech, or only started listening after you pressed a button.
The obvious downside of the older systems is that you can't have them switched on all the time.
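A rough, self-contained sketch of that gating pattern (the "audio" is just lines read from stdin, and both functions are made-up stand-ins, not any vendor's actual API):

    #include <iostream>
    #include <string>

    // Tiny, cheap "always-on" check: stands in for an on-device keyword model.
    bool detectWakeWord(const std::string& frame) {
        return frame == "hey device";   // hypothetical wake phrase
    }

    // Heavyweight recognizer (on real assistants this may go to the cloud).
    void fullRecognizer(const std::string& utterance) {
        std::cout << "processing command: " << utterance << "\n";
    }

    int main() {
        std::string frame;
        bool awake = false;
        while (std::getline(std::cin, frame)) {   // "always listening"
            if (!awake) {
                awake = detectWakeWord(frame);    // only the cheap check runs here
            } else {
                fullRecognizer(frame);            // expensive path, gated by the wake word
                awake = false;                    // back to passive listening
            }
        }
    }

The older always-listening systems were essentially this loop with the cheap check removed, so the expensive path ran on everything; the push-to-talk ones replaced the wake word with a button press.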
It's so embarrassing saying Hey Google all the time, and for me, it just feels like I'm a corporate bitch, tbh. It's true, which just makes me feel worse :D
With Moore's law being dead, efficiency is going to get a lot more popular than it has been historically. I think we're going to start seeing an uptick in the popularity of more efficient GUI programs like the ones you describe.
We see new languages like Nim and Crystal with their only value proposition over Python being that they're more efficient.
Similarly, I predict we will see an uptick in popularity of actually native frameworks such as Qt over Electron for the same reason. We may even start seeing wrapper libraries that make these excellent but complicated frameworks more palatable to the Electron crowd, similar to how compiled languages that look like Python or Ruby are getting bigger.
Very few companies are rewriting their Electron apps in Win32 (although they should be). Instead it continues moving in that direction, or worse. Crashplan rewrote their Java GUI a while back in Electron. Java UIs are mostly garbage, but compared with the Electron UI it was lightweight and functional. The Electron UI (besides shipping busted libraries) has literally stripped everything out, and uses a completely nonsensical paradigm/icon set for the tree expand/file selection. Things like Slack are a huge joke, as they struggle to keep it from dying under a load my 486 running mIRC could handle. So blame it on the graphics and animated GIFs people are posting in the chat windows, but the results speak for themselves.
Though the only measurement I think people would actually care about is battery impact, and even that is pretty much hidden away on phones except to the few people who actually look.
But the other problem is: who cares if Discord or a browser's HN tab aren't optimally efficient? You're just going to suck it up and use it. With this in mind, a lot of the native app discussion is technical superiority circlejerk.
I'd say it's more of a "without a way for end-users to compare" --- the average user has no idea how much computing resources are necessary, so if they see their email client taking 15 seconds to load an email and using several GB of RAM, they won't know any better; unless they have also used a different client that would do it instantly and use only a few MB of RAM.
Users complain all the time when apps are slow, and I think that's the best point of comparison.
edit: it was 2.0, I misremembered.
If I decide that I don't want to use Slack because it drains my battery, then I can't take part in Slack conversations.
Because Slack is the go-to chat application for so many teams, excluding myself from those conversations is not feasible.
End result: I carry on using Slack.
This weekend I noticed that Amazon Fresh now delivers the same day --- for the past few months they had no slots. I switched from Instacart to Amazon at once. The Amazon website lacks some bells and whistles compared to Instacart but it is completely speedy. If the Instacart website were satisfactory I would never have switched.
Slow, bloated websites can absolutely cost companies money.
Windows 95 never had a proper, operating-system supported package manager, and I think that's a big part of why web applications took off in the late 90s/early 2000s. There simply wasn't any guarantee that once you installed a native app, you could ever fully remove it. Not to mention all the baggage with DLL hell, and the propensity of software to write random junk all over the filesystem.
Mobile has forced a big reset of this, largely driven by the need to run on a battery. You can't get away with as much inefficiency when the device isn't plugged into the wall.
Of course apt-get is very convenient but I can't see a Microsoft version of it letting companies deliver multiple daily updates.
Based on my experience of the time, the reasons were, in random order:
- HTML GUIs were less functional but easier to code and good enough for most problems
- we could deploy many times per day for all our customers
- we could use Java on the backend and people didn't have to install the JVM on their PCs
- it worked on Windows and Macs, palmtops (does anybody remember them?) and anything else
- it was very easy to make it access our internal database
- a single component inside the firewall generates the GUI and accesses the db, instead of a separate frontend and backend, which by the way is the modern approach (but it costs more, and we didn't have the extra functionality back then --- JS was little more than cosmetic)
Bloated, inefficient software is certainly present on the native side too, but it's also possible to write single-binary "portable" ones that don't require any installation --- just download and run.
(Hi, I'm a windows dev)
I would predict that as well, if only Qt didn't cost a mind-boggling price for non-GPL apps. They should really switch to pay-as-you-earn, e.g. like the Unreal Engine, so people would only have to start paying once they start earning serious money selling the actual app. If they don't, Qt's popularity is hardly going to grow.
I wonder how much memory management affects this. My journey has been a bit different: traditional engineering degree, lots of large Ruby/JS/Python web applications, then a large C# WPF app, until finally at my last job, I bit the bullet and started doing C++14 (robotics).
Coming from more "designed" languages like C#, my experience of C++ was that it felt like an insane, emergent hodgepodge, but what impressed me was how far the language has come since the 90s. No more passing raw pointers around and forgetting to deallocate them, you can get surprisingly far these days with std::unique_ptr and std::shared_ptr, and they're finally even making their way into a lot of libraries.
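For anyone who hasn't touched C++ since the raw-pointer era, a minimal sketch of what that looks like now (standard library only; the Texture/Mesh types are just made up for illustration):

    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    struct Texture {
        std::string name;
        explicit Texture(std::string n) : name(std::move(n)) {}
        ~Texture() { std::cout << "freed " << name << "\n"; }
    };

    struct Mesh {
        // Shared ownership: several meshes may reference the same texture,
        // which is freed automatically when the last owner goes away.
        std::shared_ptr<Texture> texture;
    };

    int main() {
        // Unique ownership: exactly one owner, freed when it leaves scope.
        auto mesh = std::make_unique<Mesh>();
        mesh->texture = std::make_shared<Texture>("bricks.png");

        std::vector<Mesh> copies;
        copies.push_back(*mesh);   // the texture now has two owners

        mesh.reset();              // the Mesh is gone; the texture is still alive
        std::cout << "mesh released\n";
    }   // copies destroyed here -> last owner gone -> "freed bricks.png"

No explicit delete anywhere, and the deallocation points are still deterministic, which is exactly the latency argument below.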
I sense there's a bit of a movement away from JVM/CLR-style stop-the-world, mark-and-sweep generational GC, toward more sophisticated compile-time techniques like Rust's borrow checker, Swift's reference counting, or C++ smart pointers.
I mention memory management in particular both because it seems to be perceived as one of the major reasons why languages like C/C++ are "hard" in a way that C#/Java/JS aren't, and I also think it has a big effect on performance, or at least, latency. I completely agree we've backslid, and far, but the reality is, today, it's expensive and complicated to develop high-performance software in a lower-level, higher-performance language (as is common with native), so we're stuck with the Electron / web shitshow, in large part because it's just faster, and easier for non-specialists to develop. It's all driven by economic factors.
Go has had sub-millisecond GC pauses with multi-GB heaps since 2018. See https://blog.golang.org/ismmkeynote
Java is also making good progress on low latency GC.
Reference counting can be slower than GC if you are using thread safe refcounts which have to be updated atomically.
I don't want to have to think about breaking cycles in my data structures (required when using ref counting) any more than I want to think about allocating registers.
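To make the cycle problem concrete, a small sketch in standard C++ (the Node type is just a toy example): two reference-counted objects pointing at each other never reach a count of zero, and the usual fix is making one direction a non-owning weak reference.

    #include <iostream>
    #include <memory>

    struct Node {
        std::shared_ptr<Node> next;   // owning link
        std::weak_ptr<Node>   prev;   // non-owning back link (breaks the cycle)
        ~Node() { std::cout << "node freed\n"; }
    };

    int main() {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;   // a owns b
        b->prev = a;   // b only observes a; no ownership cycle

        // If prev were a shared_ptr, a and b would keep each other alive
        // after the locals go out of scope --- a silent leak that a tracing
        // GC would have collected. With weak_ptr, both print "node freed"
        // at the end of main.
    }

A tracing GC collects that cycle for free; with refcounting you have to notice it and reach for weak_ptr yourself (and, as noted above, every copy of a thread-safe refcounted pointer is an atomic increment on top of that).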
I get the feeling that the industry is finally starting to realize that GC has been a massive mistake.
Memory management is a very important part of an application; if you outsource it to a GC, you stop thinking about it.
And if you don't think about memory management you are guaranteed to end up with a slow and bloated app. And that is even before considering the performance impact of the GC!
The big hindrance has been that ditching the GC often meant you had to use an old and unsafe language.
Now we have Rust, which is great! But we need more.
The new JVM GCs (ZGC and Shenandoah) are more sensibly designed. They sacrifice a bit of throughput, but not much, and you get pauseless GC. It still makes sense to select a throughput oriented collector if your job is a batch job as it'll go faster but something like ZGC isn't a bad default.
GC is sufficiently powerful these days that it doesn't make sense to force developers to think about memory management for the vast bulk of apps. And definitely not Rust! That's one reason web apps beat desktop apps to begin with - web apps were from the start mostly written in [pseudo] GCd languages like Perl, Python, Java, etc.
There is no free lunch no matter what one picks.
I worked for a robotics company for a bit, writing C++14. I don't remember ever having to use raw pointers. That combined with the functionality in Eigen made doing work very easy --- until you hit a template error. In that case, you got 8 screens full of garbage.
Work expands so as to fill the time available for its completion.
Corollary: software expands to fill the available resources.
from A Plea for Lean Software (1995) https://cr.yp.to/bib/1995/wirth.pdf
A few years ago I read about a developer who worked on a piece-o-shit 11-year-old laptop and made his software run fast there. By doing that, his software was screaming fast on modern hardware.
It's our responsibility to minimize our carbon footprint.
> It's our responsibility to minimize our carbon footprint.
This, a hundred times.
As a long-time Linux user, that's what I say as well.
And as a privacy activist, that's what I routinely use.
If we save developer cycles, it's not wasted, just saved somewhere else. In the first place we should not go by the numbers, because there will always be someone who can complain and ask for a faster solution.
> For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II.
Yes and no. The level of ability and comfort at that time was significantly lower. Sure, the base functionality was the same, but the experience was quite different. Today there are a gazillion more little details that make life more comfortable, which you just don't realize are there. Some of them work in the background, some are so natural that you can't imagine them not having been there since the beginning of everything.
In other words, pass the buck to the user (the noble word is "externality").
https://tsone.kapsi.fi/em-fceux/ - This is an NES emulator. The Memory tab in Developer Tools says it takes up 2.8MB. Runs at 60fps on my modern laptop.
It seems possible to build really efficient applications in JS/WebASM.
Except for the 25 years of support, you could get the same features if a shared Electron runtime were introduced and you avoided using too many libraries from npm. In most Electron apps, most of the bloat comes from the bundled runtime rather than the app itself. See my breakdown from a year ago of an Electron-based color picker: https://news.ycombinator.com/item?id=19652749
The win32 code doesn't run anywhere, except on Windows, but most of the compute devices are mobile (non-laptop) systems and those don't come with Windows.
Running your native apps now takes both less work and more work: you can write (somewhat) universal code, but the frameworks and layers required to get it to build and run on Windows, macOS, Linux, iOS, Android, and any other system your target market relies on now come in as dependencies.
It used to be that the context you worked in was all you needed to know, and delivery and access was highly top-down oriented meaning you'd have to get the system (OS, hardware) to run the product (desktop app). That is no longer the case as people already have a system and will select the product (app) based on availability. If you're not there, that market segment will simply ignore you.
That is not to say that desktop apps have no place, or that CEF is the solution to all the cross-platform native woes (it's not; it's the reason things have gotten worse), but the very optimised and optimistic way of writing software from the '90s is not really broadly applicable anymore.
This is just the nature of “induced demand”. We might expand the power of our computers by several orders of magnitude, but our imaginations don’t keep up, so we find other ways of using all that capacity.
You might have used these words as a way to say "way faster", but factually you are incorrect. Several orders of magnitude = thousands of times faster. No way.
A few KB for the binary + 20-40 GB for the OS with 25 years of backwards compatibility.
Unless your domain requires high performance (and with WASM and WebGL even that gap is shrinking) or something niche a browser cannot currently provide, it no longer makes sense to develop desktop applications. A native application is too much hassle and security risk for the end user compared to a browser app, and the trade-off in performance is worth it for the vast majority of use cases.
While browser security sandboxes have their issues, I don't want to go back to the days of native applications constantly screwing up my registry, launching background processes, and adding unrelated malware and a billion toolbars to your browser (Java installers, anyone?).
Until the late 2000s, every few months I would expect to reinstall the entire OS (especially Windows, and occasionally OS X) because of the kind of shareware/malware nonsense native apps used to pull. While tech-savvy users avoid most of these pitfalls, maintaining the extended family's systems was a constant pain. Today, setting up a Chromebook or Surface (with S mode enabled by default) and installing an ad blocker is all I need to do, and those systems stay clean for years.
I do not think giving an application effectively root access and hoping it will not abuse it is a better model than a browser app. It is not just small players who pull this kind of abuse either; the Adobe CC suite runs something like 5 startup processes and messes up the registry even today. The browser performance hit is more than worth not having to deal with that.
Also, on performance from a different point of view: desktop apps made my actual system slower. You would notice this on a fresh install of the OS --- your system would be super fast, then over a few weeks it would slow down. From antivirus to every application you added, they all hogged more of my system resources than browser apps do today.
Arguably many users don't know how to use a Windows desktop. But that's not a failure of the desktop; that's a failure of Windows. They could have provided an easy way to install applications into a sandbox. On Android you can install from APK files and they are installed into a sandbox. If Windows had such a feature easily available, I think most genuine desktop app makers would have migrated to it. This would have the advantages of the browser with no battery drain, no fan noise, and no sluggishness.
It is not that OS developers are not improving things --- for example, S mode on the Surface is a good feature --- however, as long as the Adobes of the world still abuse my system, the problem is not solved.
It is not just older-gen software either: the Slack desktop app definitely takes more resources than the web version while delivering broadly the same features.
Sure, it is the Electron abstraction; however, if a multi-billion-dollar company with VC funding cannot see the value in investing in 3 different stacks for Windows, macOS and Linux, how can most other developers?
The reason that people don't write them is because users aren't on "the desktop". "The desktop" is split between OS X and Windows, and your Windows-app-compiled-for-Mac is going to annoy Mac users and your Mac-app-compiled-for-Windows is going to annoy Windows users. Then you realize that most users of computing devices actually just use their phone for everything, and your desktop app can't run on those. Then you realize that phones are split between Android and iOS, and there is the same problem there -- Android users won't like your iOS UI, and iOS users won't like your Android UI. Then there are tablets.
Meanwhile, your web app may not be as good as native apps, but at least you don't have to write it 6 times.
I must be living in a parallel world because I use a ton of desktop apps that aren't "written 6 times" - and I write a few, including a sequencer for music & other things (https://ossia.io).
Just amongst the ones running on my desktop right now, Strawberry (Qt), Firefox (their own toolkit), QtCreator (Qt), Telegram Desktop (Qt), Bitwig Studio (Java), Kate (Qt), Ripcord (Qt), all work on all desktop platforms with a single codebase. I also often use Zim (GTK), which is also available on all platforms, occasionally Krita (Qt) and GIMP (GTK), and somewhat rarely Blender. Not an HTML DOM in sight (except FF :-)).
Fast-forward 5 years and I've been doing JS in VSCode for a while. My current company offered to pay for Webstorm so I gave it a try. Lo and behold it was still sludgy, but now unbearable to me because I've gotten used to VSCode.
The one other major Java app I've used is DBeaver, which has the same problem to an even greater degree. Luckily I don't have to use it super often.
I wonder what an API with only C (or some other low-level) bindings, designed to be easy to use externally, might look like.
You can get wildcard ssl certificates for your domain and all subdomains from Letsencrypt.
(Likewise, applications using the JVM may also look very convincingly like native ones, but you will feel it as soon as you start interacting with them.)
You're fundamentally mistaken about where Qt sits in the stack - it effectively sits in the same place as USER32/WinForms in Windows or NS/Cocoa GUI widgets of OSX. It is reasonable to think of it as an alternative native GUI library in that sense. If it is slower, it's because an implementation of something is slower, not because of where it lives or an abstraction cost.
Qt pretty much draws using low-level drawing APIs on the respective platform. And although Qt itself is not written in the most performance-sensitive C++, it is still orders of magnitude faster than most (and it's not like Chrome doesn't pay overhead) - people rag on vtable dispatch speed but jeez, it's still orders of magnitude faster than something like ObjC, which served Apple quite well for years.
The performance of a Qt app is more likely a function of the app itself and how the app developers wrote it.
But no, you're not noticing any microsecond differences in C++ overhead for Qt over "native native" - and you're basically comparing the GUI code of the platform, since Qt does its own rendering. Win32 is mostly pretty good, NS is a mixed bag, and Gtk+ is basically a slug. In all cases there is some kind of dynamic dispatch going on, because that is a fundamental pattern of most GUI libraries. But dynamic dispatch is almost never a factor in GUI render performance. Things like recalculating sizes for 1 million items in a table on every repaint are the things that get people into trouble, and that is regardless of GUI library.
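To make that last point concrete, a minimal Qt Widgets sketch (assuming Qt 5 or 6; nothing beyond the stock QListView/QStringListModel API): a million-row list stays responsive as long as you don't make the view measure every single item.

    #include <QApplication>
    #include <QListView>
    #include <QStringListModel>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        // A million rows in a plain list model.
        QStringList rows;
        rows.reserve(1000000);
        for (int i = 0; i < 1000000; ++i)
            rows << QStringLiteral("Item %1").arg(i);
        QStringListModel model(rows);

        QListView view;
        view.setModel(&model);
        // Every row has the same height, so the view lays itself out without
        // asking for the size of each of the million items on every repaint.
        view.setUniformItemSizes(true);
        view.show();

        return app.exec();
    }

Drop that hint (or do heavy per-item work in the model) and you get exactly the sluggish scrolling that then gets blamed on the toolkit.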
I still use Macvim or even Sublime Text a lot for speed reasons, especially on large files.
VSCode very quickly freezes because it cannot handle a file that size. Sublime not only opens it but syntax highlights immediately.
I mean, is anyone clamouring for VS Code, for example, to be rewritten in native toolkits?
Curious, what desktop do you run your browser under?
I would give you an example of a simple video split application. A web platform requires uploading, downloading and slow processing. A local app would be hours quicker as the data is local.
- Qt is actually the native toolkit of multiple operating systems (Jolla for instance and KDE Plasma) - you just need to have a Linux kernel running and it handles the rest.
It also does the effort of going to look for the user theme for widgets to mix in with the rest of the platform, while web apps completely disregard that.
- Windows has at least 4 different UI toolkits now which all render kinda differently (Win32, WinForms, WPF, the upcoming WinUI, whatever Office uses) - only Win32 is the native one in the original sense of the term (that is, rendering of some stuff was originally done in-kernel for more performance). So it does not really matter on that platform, I believe. Mac sure is more consistent, but even then ... most of the apps I use on a Mac aren't Cocoa apps.
- The useful distinction for me (more than native and non-native) is: if you handle a mouse event, how many layers of deciphering and translation does it have to go through, and are those layers native code (e.g. compiled to asm)? As that reliably means that user interaction will have much less latency than if it has to go through interpreted code, GC, ...
Of course you can make Qt look deliberately non-native if you want, but by default it tries its best - see https://code.woboq.org/qt5/qtbase/src/plugins/platforms/coco... and code such as https://code.woboq.org/qt5/qtbase/src/plugins/platforms/wind...
I've taken C++ Qt desktop apps that never had any intention of running on a phone, built them, ran them, and everything "just worked". I was impressed.
Also worth noting that many creation-centric applications for the desktop (graphics, audio, video etc. etc.) don't look "native" even when they actually are. In one case (Logic Pro, from Apple), the "platform leading app from the platform creator" doesn't even look native!
Qt also supports rendering directly on the GPU (or with software rendering on the framebuffer) without any windowing system such as X11 or Wayland - that's likely how it is most commonly used in the wild, as that's one of the main way to use it on embedded devices.
You're seriously suggesting that the common use of Qt on Linux systems is direct rendering without the windowing system?
Arguably its use in embedded contexts is much larger than desktop. It's quite popular for in-car computers, defense systems, etc.
For desktop linux, yes, it uses the windowing system.
I know how Qt works at that level. I did a bunch of work in the mid-naughts on the equivalent stuff in GTK.
That's true of QtWidgets, but not QML / Qt Quick (the newer tool), correct? (I found this hard to determine online).
- QML can be used with other graphics stacks - for instance Qt Widgets with https://www.kdab.com/declarative-widgets/ or alternatives to the official Qt Quick Controls with https://github.com/uwerat/qskinny or with completely custom rendering like here with NanoVG: https://github.com/QUItCoding/qnanopainter
- Qt Quick Controls can be made to use the same style hints as the desktop with https://github.com/KDE/qqc2-desktop-style
Put a <button/> in an HTML page and you get a platform-provided UI widget.
I did a small bit of research into this, and found plenty of "anecdotal" evidence, but nothing confirming it for sure. I'm looking at and interacting with the controls and they seem pretty native - if they're a recreation then that's pretty impressive :)
Implementing the look is simple, adding the behaviour is quite a bit harder, and actually utilizing the platform's services is the endgame. A web UI usually does none of those, or only some parts; it all depends on the combination. But usually there is an obvious difference at some point where you realize whether something is native or just an attempt at it.
Imagine if, when trying to run a Qt app on Windows, a dialog box could pop up saying: "Program X is missing the Y, install from the Windows Store (for free): Yes / No"
In particular C++ code works on every machine that can drive a screen without too much trouble - if the app is built in C++ you can at least make the code run on the device... just have to make something pretty out of it afterwards.
Off-topic, but regarding Bitwig: of course it makes perfect sense to use it on smaller devices. Not phones, but tablets. It's even officially supported with a specific display profile in your user interface settings (obvious target amongst others: windows surface). This is particularly useful for musicians on stage.
You write an app for the Mac... how do you ship on Windows as well?
Linux of course doesn't have any standard toolkit, just two dominant ones. There's no real expectation of "looking native" here, either.
Which leaves macOS. And even there, the amount of users really caring about native UIs are a (loud and very present online) minority.
So really, on the Desktop, the only ones holding up true cross-platform UIs are a subset of Mac users.
And now that I think about it, would this make it easier, even by an infinitesimal amount, for malware to fool users, as small deviations in UI would fail to stand out?
I don't think that's how fraud works in actuality; malicious actors will pay more attention to UI consistency than non-malicious actors (who are just trying to write a useful program and not trying to sucker anyone), inverting that signal.
Also I think you can deploy to all those things with Qt.
On iOS and Android the situation might be a bit more complicated, but this discussion seems to say that dynamically linking would also work there.
It's a major reason UI coding sucks. There is no incentive for anyone to make it not suck, and the work required to build a modern UI library and tooling is far beyond what hobbyist or spare time coders could ever attempt.
Anything that you need to run on a desktop can't be used effectively on a touch screen anyway, so phones and tablets don't really count for serious software. (Writing this comment is stretching the bounds of what I can reasonably do on an iPhone.)
In many ways, the consumer and non specialty business are post desktop. Turns out documents, email, and other communication apps cover 90% of use cases. Anything that requires major performance gets rendered in a cloud and delivered by these other apps.
Note that this data is missing phones.
2010: 157MM desktops, 201MM laptops sold
2019 forecast: 88.4MM desktops, 166MM laptops
Or, perhaps better, in terms of internet use:
Mobile went from 16.2% to 53% of traffic since 2013, more than a threefold increase. Which means that over the same period, non-mobile usage went from roughly 84% to 47% --- a drop of nearly half.
But there are definitely examples among accountants, animators, and musicians where phones, tablets, and Chromebooks (not specialty desktop apps) have taken over the essential day-to-day work.
For animators, the iPad vs. Surface face-off is a great example -- also where they offload concepts to "the cloud" to render instead of a Mac Pro.
I'm finally upgrading from an Intel Sandy Bridge processor after nine years, and I still don't need to - it's cranking along pretty well as a dev and gaming machine still.
You can do it today with C# and https://www.mvvmcross.com/
And they'll let you know it, too. Unfortunately this has been an issue since the first Macs left the assembly line in 1984. If you point out that based on their share of the software market they're lucky they get anything at all, the conversation usually goes south from there.