Hacker News
Ask HN: Is there still a place for native desktop apps?
524 points by Jaruzel on May 17, 2020 | 752 comments
Modern browsers these days are powerful things - almost an operating system in their own right. So I'm asking the community, should everything now be developed as 'web first', or is there still a place for native desktop applications?

As a long-time Win32 developer, my only answer to that question is "of course there is!"

The efficiency difference between native and "modern" web stuff is easily several orders of magnitude; you can write very useful applications that are only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions.

Yes, computers have gotten faster, and memory and disks much larger. That doesn't mean we should waste those gains delivering the same or even less functionality than we had with the machines of 10 or 20 years ago.

For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II. Even the web stuff at the time was nowhere near as disgusting as it is today --- AJAX was around, websites did use JS, but simple things like webchats still didn't require as much bloat. I lived through that era, so I know it was possible, but the younger generation hasn't, and perhaps that skews their idea of efficiency.

In terms of improvement, some things are understandable and rational, such as newer video codecs requiring more processing power because they are intrinsically more complex, and that complexity is essential to their increase in quality. But other things, like sending a text message or an email, most certainly are not. In many ways, software has regressed significantly.

I recently had to upgrade my RAM because I have Spotify and Slack open all the time. RAM is cheap today, but it's crazy that those programs take up so many resources.

Another program I use a lot is Blender (3D software). Compared to Spotify and Slack it is a crazy complicated program with loads of complex functionality. But it starts in a blink and only uses resources when it needs them (for calculations and your 3D model).

So I absolutely agree with you.

I also think it has to do with the fact that older programmers know more about the cost of resources than younger programmers do. We used computers without hard disks and with only KBs of RAM. I always keep this in mind while programming.

The younger programmers may be right that resources don't matter much because they are cheap and plentiful. But now I've had to upgrade my RAM.

It really surprised me when I downloaded Godot only to get a 32MB binary. Snappy as hell.

Web apps masquerading as desktop apps are terribly slow, and it's surprising how used to it we've become. My Slack client takes a few seconds to launch, then shows a loading screen, and quite often it refreshes itself (blanking the screen and doing a full-blown re-render) without warning. That's before it starts spinning the fans when I try to scroll up a thread, plus all the other awkward and inconsistent pieces of UI thrown in there. Never mind having to download a 500MB package just to use a glorified IRC client.

I'm really enjoying writing code outside of the browser realm where I can care a lot more about resource usage, using languages and tools that help achieve that.

It's interesting to compare Ripcord[0] to Slack. Ripcord is a third-party desktop client for Slack and Discord. It has something like 80% of the features of the official Slack client and a simpler UI (arguably better, more information-dense), but it's also a good two orders of magnitude lighter and snappier. And it handles Discord at the same time.


[0] - https://cancel.fm/ripcord/

I wish so much that 3rd party clients weren't directly against the TOS of Discord. I sorta miss the old days when it seemed like anyone could hook up to MSN/Yahoo/AIM.

I wish that too. More than that, I keep wondering whether there could be a way to force companies to interop, because right now you generally can't, without getting into some sort of business relationship with these companies. That's the problem with services - they take control of interop, and the extent to which interop is allowed is controlled by contracts between service providers.

Where in the terms of service does it say that third party clients are disallowed?

(It doesn’t.)

While it does not explicitly state that, it does say:

"(ii) copy, adapt, modify, prepare derivative works based upon, distribute, license, sell, transfer, publicly display, publicly perform, transmit, stream, broadcast, attempt to discover any source code, reverse engineer, decompile, disassemble, or otherwise exploit the Service or any portion of the Service, except as expressly permitted in these Terms;" [1]

Given that the API is not public if you are not using a bot key, I would think that using it with a third party client would take some form of reverse engineering.

The devs have also stated that other client modifications like BetterDiscord are against the TOS.

[1] https://discord.com/terms (Under Right To Use The Service)

Ripcord isn't a modification of their software. It's an original implementation. I didn't look at any of their code.

and is written by a single person, in Qt

Ripcord is amazing. I bought it right away because, even in its current form, it's already worth the money.

LMMS (https://lmms.io/) is a full blown DAW that's only 33MB as well

Just remember that it's entirely possible to do awkward and inconsistent UI in native apps, and there's a very long tradition of it.

But at least it's generally faster when you do it!

> I also think it has to do with the fact that older programmers know more about the cost of resources than younger programmers do.

I'm not convinced it's the programmers driving these decisions. Assuming it takes less developer effort - even just a little - to build an inefficient desktop application, it comes down to a business decision (assuming these are programs created by businesses, which Spotify and Slack are). The decision hinges on whether the extra cost results in extra income or reduced cost elsewhere. In practice, people still use these programs, so the lost income appears minimal. What's more, the "extra cost" of a more efficient program is not just extra expense on developers - it's hard to hire developers, so you probably couldn't just hire an extra developer or two and get the same feature set with greater efficiency. Instead, that "extra cost" is an opportunity cost: a reduced rate of shipping functionality.

In other words, so long as consumers prioritise functionality over the efficiency of the program, it makes good business sense for you to prioritise that too. I'm not saying that I agree with it, but it's how the market works.

> In other words, so long as consumers prioritise functionality over the efficiency of the program, it makes good business sense for you to prioritise that too.

And the kicker is, consumers don't have a say in this process anyway. I don't know of anyone who chose Slack. It's universally handed down onto you from somewhere above you in the corporate hierarchy, and you're forced to use it. Sure, a factor in this is that it works on multiple platforms (including mobile) and you don't have to worry about setting it up for yourself, but that has nothing to do with the in-app features and overall UX. Or Spotify, whose biggest value is that it's a cheap and legal alternative to pirating music. And that value has, again, nothing to do with software, and everything to do with the deals they've managed to secure with artists and labels.

I exercise my preferences wrt. Slack by using Ripcord instead of the official client. Most people I know exercise their preferences wrt. Spotify by using YouTube instead (which is arguably lighter resource-wise). And speaking of alternative clients, maybe that could be the way to go - focus on monetizing the service, but embrace different ways of accessing it. Alas, time and again, companies show they prefer total control over the ecosystem surrounding their service.

> And the kicker is, consumers don't have a say in this process anyway. I don't know of anyone who chose Slack. It's universally handed down onto you from somewhere above you in the corporate, and you're forced to use it.

The consumer here is the business itself, not their employees.

Technically yes (well, the customers, not consumers), but that's the problem itself: the feedback pipeline between end-users and producers is broken because the end-users aren't the customers.

Maybe we need an energy-efficiency rating for software, like the ones on dishwashers. (No joke)

As a younger developer I'd say I agree. But it's not just developers being used to resources being plentiful.

I do webdev mostly and there it's also a matter of management. I want to optimize applications to be less hungry, those are interesting challenges to me. But I've been told by management to just upgrade the server. Either I'd spend a day optimizing, and maybe fixing the issue. Or we just spend 50 euros a month more on a server.

Sometimes the optimization is not worth the effort. For applications like Blender? Optimization means a lot.

> Either I'd spend a day optimizing, and maybe fixing the issue. Or we just spend 50 euros a month more on a server.

So, discounting additional effects like more satisfied userbase, the optimization would pay for itself in a year. And optimizations stack.
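The payback arithmetic can be sketched in a few lines. The figures are assumptions for illustration only: a loaded cost of roughly 500 EUR for one developer-day, against the 50 EUR/month server upgrade mentioned above.

```python
# Back-of-the-envelope payback for the "optimize vs. bigger server" choice.
# Both figures are assumptions: one developer-day at ~500 EUR (loaded cost)
# versus an extra 50 EUR/month on the server, paid indefinitely.

def payback_months(one_off_cost: float, monthly_saving: float) -> float:
    """Months until a one-off optimization beats a recurring server cost."""
    return one_off_cost / monthly_saving

months = payback_months(one_off_cost=500.0, monthly_saving=50.0)
print(f"Optimization pays for itself after {months:.0f} months")  # 10 months
```

And since the server cost recurs forever while the developer-day is paid once, everything after the break-even point is pure saving, before counting the stacking effect of multiple optimizations.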

Yes, that was my thought process as well. But management didn't agree. To them, the short term cost of me optimizing the problem was higher than the long term costs would be.

Something I've noticed over my career: programmers tend to get super beefy machines. My machine has 64GB of memory and 12 cores. But the typical users of our software don't have anywhere near those specs, yet programmers often just say "it worked on my machine" without a thought for the specs.

Same problem exists with designers and monitors.

I like to imagine 20 years in the future we’ll see articles posted on HN, or whatever the cool kids are reading by then ;)

... articles with titles like:

“Slack in one Ruby statement” a la https://news.ycombinator.com/item?id=23208431

More seriously though, Spotify and Slack are optimised to intentionally be huge time wasters, so it makes sense the organisations that produce them don’t care about performance / efficiency.

Most Spotify user-hours are probably office workers or students pumping music into headphones while working. If anything it's a productivity application because it trades flagrantly unnecessary resource usage (streaming the same songs over and over) for users' time (no more dicking around crafting the perfect iPod).

On the topic of flagrantly unnecessary resource usage...

My first child was born six months ago. Newborns (we discovered) sleep better with white noise. So of course we found a three hour white noise track on Spotify and played it from our phones for every nap, never bothering to download it.

I find it hard to believe that at least some of that data wasn't cached on your device. Setting a track to be downloaded just means the cached data is evicted differently. If you run their desktop client with logging enabled you'll see this happening, and I'd say it's likely the same across platforms. That is, of course, the actual reason they have a non-native app: to reuse the codebase and save money.

> But now I had to upgrade my RAM.

But I can't. My RAM is soldered on. How many tons of carbon dioxide should I emit so that you can use React? There are ways to do declarative UI/state management without the DOM...

If carbon footprint is that important to you, maybe you should find ways to encourage companies not to solder on RAM instead.

Why not both?

My computer still computes with 2 GB of RAM. It's just that developers are gluing more and more stuff together to do things we used to do on Pentium processors with 64 MB of RAM.

No. Just because soldered-on RAM is bad doesn't mean that bad code is OK.

I guess the question becomes: what is the native ecosystem missing that means devs are choosing to deliver memory/CPU hungry apps, rather than small efficient ones?

(Easy) Cross platform publishing.

HTML, CSS and Javascript. Most of these electron apps are basically wrappers around actual websites to give a place in the dock and show notifications and access the filesystem.

But that isn't what's missing. It's a restatement of the problem. DOM based apps are much more resource intensive than native. What is missing from native that makes business choose DOM?

If there were some modern tool like wxWidgets that supported modern targets like the DOM, Android, and UWP, would we see more use of native? Electron would then become pointless.

That is what's missing, albeit tersely stated.

The hypothetical business has two choices. Choose Electron, or choose some other toolkit that has native, cross-platform support (like Qt). It's far easier for the business, and the developers there, to take their existing website HTML, CSS, and Javascript; and simply wrap it in Electron (which costs $0), and call it a day. Every other choice is (perceived as being) more expensive.

Qt is a modern toolkit with native-cross platform support, but costs money for commercial use, and businesses and software developers don't want to spend the money on it.

Qt Quick takes plenty of ideas from the web playbook.


As someone who has done both desktop apps and Electron apps: it is much faster to write some HTML/CSS and wrap it in Electron than to do the same in Qt/GTK/etc.

Not to mention, the HTML/CSS combo is possibly the best we've come up with for designing user interfaces.

If you don't mind me asking, how much RAM did you have before, and what did you upgrade to?

I recently got a new PC myself and decided to go for 16GB, my previous one (about a decade old) had 8GB and I didn't feel I really hit the limit, but wanted to be future proof. Because as you said, a lot of 'modern' applications are taking up a lot of memory.

I also went from 8GB to 16GB recently (virtual machines are hungry); but I had gotten rid of Slack even before that. I mean, yes, it has round edges and goes ping and has all those cutesy animations - but 2GB of RAM for a glorified IRC client, excuse me, what exactly is it doing with billions of bytes worth of memory? ("Don't know, don't care" seems to be its developers' mantra)

The answer is JIT'ing JS.

For each of your Electron apps there is a little compiler chugging away.

Back in the day, we didn't have 2GB of RAM total, much less just for a compiler!

I upgraded from 8 to 16GB. But I'm in the process of ordering a new desktop that will have 32GB.

Spotify and Slack are not problematic as individual programs but since I have a lot of other programs open they are the ones that take up more memory than they should. I mean: Spotify is just a music player. Why does it need 250MB RAM?

Because it is not just a music player. It plays music, aside from giving you an entire experience that consists of connecting with your friends, managing your music library, consuming ads, having a constant connection with the server, and ...

This was meant to be sarcastic, but I'm not even sure how to continue. Maybe someone else can bulk up that list to get to something that requires 250MB. :)

64GB RAM is borderline reasonable today. Why not jump to that?

It probably uses it for buffering.
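As a rough sanity check on the buffering theory (the bitrate and buffer length below are assumptions, not Spotify's documented behavior):

```python
# How much RAM would audio buffering plausibly need? Assumed figures:
# a high-quality stream at ~320 kbit/s, with a generous few minutes
# of look-ahead buffered in memory.

BITRATE_KBPS = 320    # assumed stream bitrate, kilobits per second
BUFFER_MINUTES = 5    # assumed look-ahead buffer

buffer_mb = BITRATE_KBPS * 1000 / 8 * BUFFER_MINUTES * 60 / 1e6
print(f"~{buffer_mb:.0f} MB")  # ~12 MB -- nowhere near 250 MB
```

So even very generous buffering only accounts for a small fraction of the footprint; the rest presumably goes to the embedded browser runtime and UI.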

I work on a desktop CAD / CAM application, and I need every one of the 12 cores and 32 GB RAM on my windows workstation. I know this because I also have a mac workstation with lower specs (16 GB RAM, don't know offhand how many cores) and developing on it is intolerable (let's play "wait an hour to see if clang will compile my changes" - I know, I know, I should read the C++ standard more carefully so I'm not disappointed to discover that MSVC was overly permissive).

Parenthetically, we do use Slack and I am double-dipping on a lot of heavy functionality by having both Spacemacs (which I use for code editing and navigating / search within files) and Visual Studio (which I use for building, debugging, and jump-to-definition) open at the same time.

Pair that with some Discord, VS Code, and Chrome, and all of a sudden my 16GB is getting maxed out semi-regularly.

Just had to upgrade to 32 myself


You are looking back at the past with rosy goggles.

What I remember from the time was how you couldn’t run that many things simultaneously. Back when the Pentium II was first released, I even had to close applications, not because the computer ran out of RAM, but because the TCP/IP stack that came with Windows 95 didn’t allow very many simultaneous connections. My web browser and my chat were causing each other to error out.

AJAX was not around until late in the Pentium II lifecycle. Web pages were slow, with their need for full refreshes every time (fast static pages an anomaly then as now), and browsers’ network interaction was annoyingly limited. Google Maps was the application that showed us what AJAX really could do, years after the Pentium II was discontinued.

Also, video really sucked back in the day. A Pentium II could barely process DVD-resolution MPEG-2 in realtime. Internet connections generally didn't have the several Mbit/s necessary for DVD quality with an MPEG-2 codec. Increasing resolution increases the required processing power quadratically. Being able to Zoom call and see up to 16 live video feeds simultaneously is an amazing advance in technology.
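To put rough numbers on that (the frame sizes and rates below are illustrative assumptions, not measured Zoom figures):

```python
# Pixel throughput: one DVD-resolution MPEG-2 stream vs. a 16-feed video call.
# Resolutions and frame rates are illustrative assumptions.

dvd = 720 * 480 * 30              # one DVD stream: pixels decoded per second
zoom = 16 * 640 * 360 * 30        # 16 thumbnail-sized feeds at 30 fps

print(f"DVD:  {dvd / 1e6:.1f} Mpixel/s")   # 10.4 Mpixel/s
print(f"Call: {zoom / 1e6:.1f} Mpixel/s")  # 110.6 Mpixel/s, ~11x the work
```

Roughly an order of magnitude more decode work than the single stream that already taxed a Pentium II, before counting the encode side and the far more complex modern codecs.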

I am also annoyed at the resource consumption, but not surprised. Even something "native" like Qt doesn't seem to use the actual OS-provided widgets, only imitate them. I figure it's just the burden we have to pay for other conveniences. Like how efficient supply lines mean consumer toilet paper shortages while suppliers of office toilet paper sit on unsold inventory.

FWIW, I do not remember having issues like that; I had mIRC practically always open, plus a web browser, an email application, etc., and I do not remember ever having networking issues.

The internet was slow, but that was largely because for most of the 90s I was stuck with a very slow 2400 baud modem - I came to appreciate the option browsers had to not download images by default :-P.

But in general I do not remember being unable to run multiple programs at the same time, even when I was using Windows 3.1 (though with Win3.1 things were a bit more unstable, mainly due to the cooperative multitasking).

Me neither. I'm not going to lie and say that I had 40 applications open, but I DID have 5-10 apps using the web with zero issues (a browser + IRC app + email client + mICQ + MSN Messenger + Kazaa/Napster + Winamp in stream mode).

Very very few of the web and desktop applications of today are as snappy and user-friendly as classic Winamp.

It really whipped the llama's ass.

It still does! https://www.winamp.com/

I still use Winamp for offline music. Nothing else is faster.

Try AIMP 3. It's amazing and is an improvement over Winamp.

I did use foobar2000 for a while and it was quite snappy.

And yet, people stopped using it.

I used to use it all the time. Now I use Spotify instead. I'm not sure I want to go back to curating my own collection of mp3's again.

Sure, but if you could have Spotify as-is, or a lightweight player like Winamp, both with equal access to the Spotify service, which would you pick?

People aren't using Spotify because the player is fantastic; they use it because Spotify has a huge library, is reasonably priced, and the player is sort of okay.

Totally agree. But the DRM monster rears its head. Everyone is afraid you'll steal their choons if you're allowed to play them on whatever player you like sigh

Still, all iTunes content can be de-DRM-ed in 500 lines of C code, so it's not like "the industry" actually requires it to be secure.

Like everything these days, it's barely good enough. And why bother implementing your DRM as a 1KB C++ library when you can use a 5MB Objective C framework instead?

Spotify is a case in point: it used to have a fantastic, small and fast native desktop app. It replaced it with the bloated web-based one we see today.

That's an artifact of IP laws. The reason you don't have to curate your own mp3s again is because some service managed to find a way to give you a searchable, streamable collection of music that's also legal. But that in no way implies Spotify needs to be so bloated.

In a better world, you really wouldn't need to. Winamp was great - the weak point was always the playlist editor, but Winamp's interface for simply playing music and seeing what was up next was wonderful. Spotify could provide a playlist plugin that simply gave a list of URLs, or let you download one that lasted X hours.

Mobile phones happened. Tiny memories while always being connected.

Same here. About the only time my browser + mIRC + Winamp + IM + Visual C++ 6.0 combo slowed down was when VC++ was compiling the game I was working on. I would then close the IM, because doing so would speed up compile times by 1.5x.

> Very very few of the web and desktop applications of today are as snappy and user-friendly as classic Winamp.


IRC, SMTP, IMAP are protocols from back when desktop operating systems didn’t even come with TCP/IP. They would use a single connection for unlimited messages. I was using a “modern” chat program, AOL Instant Messenger.

Alright, the missing part of my story was that I was also using a proxy program to share a single connection with my brothers. NAT wasn’t widely available yet, and Winmodems were much easier to find than hardware modems. (And I hadn’t discovered Linux and the free Unixes yet.)

So, every TCP connection that AIM made was 2 connections in the proxy program. We quickly discovered that AIM on more than one computer at a time made the entire Internet unusable.

Every generation of developers decries the next generation for bloat, but Windows 95 had preemptive multitasking that made the computer so much snappier (plus other features), at the cost of several times more RAM than Windows 3.1 needed. (16 MB was the unofficial minimum, and often painfully small; Microsoft's official minimum was impractical back then.) Windows XP brought protected memory to the consumer line, which made it more feasible to run multiple applications because they were much less likely to crash each other (plus other features, including a TCP/IP stack featuring NAT and a useful connection limit), at the cost of another several multiples of RAM.

There have always been tradeoffs. Back in the day, programs were small and developers focused more on making sure they did not crash, because they didn’t have lots of RAM and crashing would often require the computer to reboot. That developer focus meant less focus on delivering features to users. (Also, security has often meant bloat.) Now, you barely need to know anything about computer science, and you can deliver applications to users, at the cost of ginormous runtime requirements.

It may be true that people are partially looking back in rose-tinted glasses, but there's more than an inkling of truth to their side. Casey Muratori (a developer on The Witness) has a really good rant [1] about bloat in Visual Studio specifically, where he demonstrates load times and the debugger UI updating today vs. on a Pentium 4 running XP. Whether or not you attribute the performance difference to new features in Win10/VS, it's worth considering that workflows are still being impacted so significantly on modern hardware. We were able to extract 100s of times more out of hardware and gave it up for ???

[1] https://www.youtube.com/watch?v=GC-0tCy4P1U

The Visual Studio 6 on Pentium 4 demonstration starts around 36th minute.

I used Visual Studio 6 for years, and yes, I can confirm, it was really that fast.

It's also not true that there were problems with more applications running etc., as "Decade" claims. Or to be more precise, there were no problems if one used Windows NT, and I've used NT 3.51, NT 4 and 2000 for development, starting with Windows development even before they were available. And before that, Windows 3.x was indeed less stable, but that was the time before 1995. Note that the first useful web browser was made in 1993; the internet as we know it today practically didn't exist. There were networks, but not the web.

Maybe it’s possible for opposite things to be true if they happen to different people. I wasn’t a developer back then.

Windows NT required several times more RAM to run than the consumer versions of Windows (oh no, bloat!), and was much more picky about what hardware it ran on. Starting with XP, the professional and consumer lines of Windows merged. We are so lucky.

> I wasn’t a developer back then.

That explains your inaccurate perspective.

> Windows NT required several times more RAM to run than the consumer versions of Windows (oh no, bloat!), and was much more picky about what hardware it ran on.

Allow me to claim that this is also not true, in the form you state it. Again, I lived through all this, and I can tell you what that was about. The "pickiness" of NT, even at the time, was not about motherboards and chipsets. It was about consumer hardware devices. Many things that probably don't even exist as products today, like a black-and-white hand scanner that scanned as you moved your hand over the paper and came with only Windows 3.x drivers on a floppy. There was never a problem having a developer machine running NT in any reasonable price range, with a reasonable graphics card, monitor, keyboard and mouse. And, at the start, a phone-line modem transmitting some kilobytes per second!

The RAM needs did exist, but again not such as later distortions would have you believe. If I remember correctly (it changed relatively fast), at the time NT was published, Microsoft had to deliver it claiming it would run in 4 MB; the OS, the programs, and the graphics all had to fit. Let me repeat: 4 MB. It ran, but not comfortably for bigger programs. But the point is, as soon as you had 8 MB at that time, you didn't have a problem. A little later, 16 MB was more than a good choice for comfortable work. That cost a hundred to three hundred dollars more than the cheapest possible configuration (yes, those were the prices then), but that was it. RAM was the only thing you had to care about to have NT running.

The point is, at that time there were plenty of people who didn't want to use Windows NT at all, clinging to 3.x and then 95, and they are the ones who spread the horror stories about OS problems. But that was just their ignorance. 95 was also reasonably stable, unless you used, like many, some "utility" programs that were more malware than of real use (the "cleaning", "protection" or even "RAM expander" snake oil was used by some even then -- not to mention that a lot of people believed they had to try every program that crossed their path).

The good development tools were good and stable, especially the command-line ones (among the GUI ones, there was some snake oil too). But Word did crash even under NT, and even during the first half of the 2000s, and that's a completely different story; that was intentional for those products at the time.

> That explains your inaccurate perspective.

Yep, you really succeeded at empathy, there. /s

> reasonable price range

The word “reasonable” is doing a lot of work, here.

Most Pentium II systems were not running Windows NT. They were running Windows 95 or 98, which had arbitrarily severe limitations and lacked memory protections.

So, while it was technically possible to run lots of applications simultaneously on 256 MB of RAM, for most people it was a fun adventure in whether some buggy program has destabilized the system into needing to reboot to run properly again. Or whether it’s still usable with degraded functionality. In my case, that’s without using the cleaning, protection, RAM expander programs.

And even on professional operating systems, web browsers crashed a lot, and any other program that had to deal with untrusted input, which is basically anything that can open files or connect to the network, has gradually bloated as they learn security or add features.

> Most Pentium II systems were not running Windows NT. They were running Windows 95 or 98

Once again: only somebody using a computer not selected for serious development used Windows 95 or 98. No developer who knew what he was doing used Windows 95 or 98 as his primary development machine. So if you complain about that, you used the wrong tool for your work. Like I said, it was easy to install Windows NT, and I don't know of any computer which wasn't able to run it, given reasonably enough RAM.

> on 256 MB of RAM

To illustrate "reasonably" once again: that changed around those times. I remember buying an AMD-based notebook in 2002 with 256 MB and running Windows 2000 on it absolutely without problems for a few years, before upgrading to 512 MB, the maximum for that notebook. And that was the time of the Pentium III and IV, not the Pentium II. And like I said, I ran Windows NT on 8 MB computers, with compilers, resource editors, debuggers and even an IDE. Even before that, I ran Windows 3.11 on a 2 MB computer and used it for development too (the development tools being text-mode, of course).

> some buggy program has destabilized the system into needing to reboot to run properly again

Only on non-NT systems, and surely not because of developer tools. I used Windows 3.x and Windows 9x, and never had to reboot due to developer tools "making the system unstable." Not even on a 4 MB or a 16 MB machine.

> web browsers crashed a lot

I've used both Mosaic and Netscape, and before 2000 my main problem was surely not them crashing. Surfing mostly worked (the pages just loaded slowly; there were no CDNs then). Again, on an NT system.

I think we’re losing the plot. The ggp post was about doing all sorts of Internet programs at the same time on Pentium II era computers, and now you’re talking about developer tools on a Pentium 4.

Maybe it’s simultaneously true, that you could run many developer tools at the same time on Windows NT with hundreds of dollars of RAM, and attempting to run a bunch of consumer network programs at the same time (especially on consumer Windows) was asking for trouble.

I remember one of the attractions of IE 5 back in the day was how each newly launched window was its own process (not windows opened via the open-link-in-new-window menu option), so unlike Mosaic and Netscape, a crash in one copy of IE did not necessarily bring down all the other windows. Multiple windows were useful because surfing over a modem was slow regardless of CDNs. Remember when Yahoo was scandalous because banner ads took so much bandwidth?

> on Pentium II

> and now you’re talking about developer tools on a Pentium 4.

It's to illustrate that the arguments are wrong: it's Decade who uses "256 MB" as an argument, though that was not "small memory" for a Pentium II; I illustrate that it was common even in 2002 notebooks, by which time the Pentium IV was common in developer machines.

> The ggp post was about doing all sorts of Internet programs at the same time on Pentium II era computers

Let me check again:

"For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II."

OK. That is also obviously a bit off. 256 MB with a Pentium II is quite a lot; as I showed, 256 MB was normal even in 2002 for notebooks, when the Pentium III was already common in notebooks and the IV in desktops. Working with email: the e-mail clients of that time, if they used HTML at all, were limited to the HTML formats of the day, so "using email" completely worked, with no system crashes on NT (Outlook did have a limit on a single PST having to be less than N GB, I remember that). IM also just worked, likewise without crashes on NT.

That leaves "video/audio calls". Video calls were surely not common at that time, and I personally also haven't used audio calls.

But the "stability" problems you claim were common definitely didn't exist the way you claimed, as soon as one used NT, that is, since around 1994, or later Windows 2000, or even later XP or Server 2003, all NT-based. And as I've said, it was not that "too much" RAM was needed, as I've run NT on 8 MB with no problem.

So I still don't understand why you continue to stick to a narrative that was simply not true. No, it was not as bad as you claim. Computers were quite stable even then for those who knew what they were doing. On NT, almost nothing crashed the system except failed hardware. Like I've said, some apps were indeed less stable, like Word crashing or saving an invalid DOC file. But Excel, for example, while part of the same "suite", I don't remember ever crashing. I also don't remember browsers actually crashing, just pages downloading very, very slowly.

The 256 MB number came from the ggp post. At the beginning of the Pentium II era, that was very expensive, but it was not the only issue with running multiple programs at the same time.

But clearly you want to have the last word, so I guess I should let you have it.

We gave it up for slightly higher profit margins enabled by hiring slightly less qualified programmers at a slightly lower rate.

In a similar vein, Industrial Light and Magic used to have a few highly talented people crafting incredibly intelligent solutions to make their movies possible: https://youtu.be/AtPA6nIBs5g

By now, most of those effects would instead be done using CGI and outsourced to Asia.

There's probably a long rant waiting to be written on this topic. Myself, I've observed how over the last four decades, CGI effects went from worthless, through novelty, through increasingly awesome, all the way to "cheapest garbage that can be made that looks convincing enough when the camera is moving very fast".

A Pentium II could barely process DVD-resolution MPEG-2 in realtime.

According to http://www.vogons.org/viewtopic.php?p=423016#p423016 a 350MHz PII would've been enough for DVD, and that's 720x480@30fps; videoconferencing would more commonly use 320x240 or 352x288 which has 1/4 the pixels, and H261 or H263 instead as the codec.

Being able to Zoom call and see up to 16 live video feeds simultaneously is an amazing advance in technology.

I'm not familiar with Zoom as I don't use it, but it's very likely you're not actually receiving and decoding 16 separate video streams; instead an MCU or "mux box" is used to combine the streams from all the other participants into one stream, and they do it with dedicated hardware.

That said, video is one of the cases where increased computing power has actually yielded proportional returns.

> videoconferencing would more commonly use 320x240 or 352x288 which has 1/4 the pixels, and H261 or H263 instead as the codec.

Modern videoconferencing solutions (WebRTC) usually use 1280x720 and either H264 or VP8. Some apparently use HEVC. Also most modern processors and SoCs come with hardware-accelerated codecs built in, so most of the work related to compression isn't even done by the CPU itself.

> I'm not familiar with Zoom as I don't use it, but it's very likely you're not actually receiving and decoding 16 separate video streams

Yes you are receiving separate streams.

Don’t think it can be an MCU box. You can select an individual stream from the grid to make it larger almost instantly. The individual feeds can display both as grid and a horizontal row. I’m assuming they send individual feeds and the client can ask for feeds at different predefined resolutions.

Without having used Zoom much I can't definitively say how it works, but I've used BlueJeans quite a bit and noticed compression artifacts in various parts of the UI (e.g. text underneath each video source). That means BlueJeans is definitely muxing video sources and it really does not have a noticeable delay when changing the view. Since each video is already so compressed I think they can get away with sending you really low bitrate streams during the transition and you'll barely notice.

Mixed plus whichever feed you request to enlarge sounds more reasonable.

With Skype you're definitely able to receive separate streams from each participant, as I can access them individually via NDI and pipe them into OBS to do live multi-party interviews. You can see the resolution of individual feeds change when their bandwidth drops, and you can choose high/low bandwidth and latency modes for each feed. I would guess Zoom does the same but doesn't provide an NDI feed (yet).

I have an iMac G4 from 2003 (the sunflower things) on which I installed Debian PPC, and it is able to stream 720p content from my local network and play it back smoothly in VLC.

I could see Street View-like vistas on a Pentium 3/AMD Athlon. I did the same you can do today, but with an Athlon XP and Kopete. On video, since BeOS and MPlayer I could multitask XviD movies perfectly, good enough for their era.

To be fair, 10-20 years ago was the age of Windows XP and Windows 7, not Windows 95. There was barely anything good about Windows 95, and there are likely not many people missing it, but it was also a completely different era from the later "modern" desktops, hardware- as well as software-wise. If anything I would call that era the alpha version, problems included.

Most of those have nothing to do with OP's point, which is that some software uses way too much processing power than it should.

While on the topic, let's remember the speech recognition software available for Windows (and some for Android 2.x) that was completely offline and could be voice activated with, gasp, any command!

Google with its massive data centers can only do "OK/Hey Google". Riiight. I can't believe there are actually apologists for this bs.

Do you mean Dragon Speech, or whatever it was called?

Anyway, old speech recognition software was quite horrible. Most did not even work without prior training. And Google does have offline speech recognition now too. But true, the ability to trigger on any desired phrase is something still missing.

The ability to trigger with any desired phrase is easy, but not done for privacy reasons, to reduce the chance of it accidentally listening to irrelevant conversations.

The inability to change it from Hey google is done for marketing / usability reasons.

Nuance Dragon NaturallySpeaking

What was the software name, if I may ask? I remember speech recognition pre-CNN to be quite terrible.

Microsoft has had speech recognition since WinXP. And there was also Dragon NaturallySpeaking. Both needed a couple of hours of training, but worked really well, completely offline; it was amazing to me at the time. It did have very high processor usage, but that was on a freaking single-core Athlon or Pentium. I'm not even a native English speaker, though dare I say my English is on par with any American's.

You're talking about different concepts.

Voice recognition used by things like Google Assistant, Siri, Cortana, and Alexa usually relies on a "wake word", where it's always listening to you, but only starts processing when it is confident you're talking to it.

Older speech recognition systems were either always listening and processing speech, or only started listening after you pressed a button.

The obvious downside of the older systems is that you can't have them switched on all the time.

I think it would be really easy to create an app that would also listen to a very specific phrase (like "Hey Merlin", simple pattern match, with a few minutes of training for your own voice) and then start Google Assistant.

It's so embarrassing saying Hey Google all the time, and for me, it just feels like I'm a corporate bitch, tbh. It's true, which just makes me feel worse :D

There were always idiots writing buggy code. The issues you mention are about “old software” on “old hardware”. GP is only talking about “old style of software development”. Granted Qt, X, Win API is unnecessarily complicated.

> Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.

With Moore's law being dead, efficiency is going to get a lot more popular than it has been historically. I think we're going to start seeing an uptick in the popularity of more efficient GUI programs like the ones you describe.

We see new languages like Nim and Crystal with their only value proposition over Python being that they're more efficient.

Similarly, I predict we will see an uptick in popularity of actually native frameworks such as Qt over Electron for the same reason. We may even start seeing wrapper libraries that make these excellent but complicated frameworks more palatable to the Electron crowd, similar to how compiled languages that look like Python or Ruby are getting bigger.

I said that 20 years ago, but so far I've been proven completely wrong. Skype etc. just keep getting bigger and slower despite, from what I can tell, adding absolutely no additional functionality. In fact, if you consider that it can't seem to do peer-to-peer anymore, it has lost features.

Very few companies are rewriting their Electron apps in Win32 (although they should be). Instead everything continues moving in that direction, or worse. Crashplan rewrote their Java GUI a while back in Electron. Java UIs are mostly garbage, but compared with the Electron UI it was lightweight and functional. The Electron UI (besides shipping busted libraries) has literally stripped everything out, and uses a completely nonsensical paradigm/icon set for the tree expand/file selection. Things like Slack are a huge joke, as they struggle to keep it from dying under a load my 486 running mIRC could handle. So blame it on the graphics and animated GIFs people are posting in the chat windows, but the results speak for themselves.

Without a way for end-users to actually make judgements about application efficiency, there will never be any real pressure to make efficient, native apps.

Though the only measurement I think people would actually care about is battery impact, and even that is pretty much hidden away on phones except to the few people who actually look.

But the other problem is: who cares if Discord or a browser's HN tab aren't optimally efficient? You're just going to suck it up and use it. With this in mind, a lot of the native app discussion is technical superiority circlejerk.

Without a way for end-users to actually make judgements about application efficiency, there will never be any real pressure to make efficient, native apps.

I'd say it's more of a "without a way for end-users to compare" --- the average user has no idea how much computing resources are necessary, so if they see their email client taking 15 seconds to load an email and using several GB of RAM, they won't know any better; unless they have also used a different client that would do it instantly and use only a few MB of RAM.

Users complain all the time when apps are slow, and I think that's the best point of comparison.

There is an economic theory that's escaping me right now, but the gist is that with certain goods, the market will hover at the very edge of efficiency; they have to become just scarce enough to break a certain threshold, then the market will realize that they are in fact a scarce resource, then correct to achieve a high efficiency equilibrium.

Even further: without a way for end users to take action based on that comparison.

If I decide that I don't want to use Slack because it drains my battery, then I can't take part in Slack conversations.

Because Slack is the go-to chat application for so many teams, excluding myself from those conversations is not feasible.

End result: I carry on using Slack.

THIS. I do remember outright revelations in user experience, as I showed people how much better Firefox 2.0 was compared to IE6 (and looking back, version 2.0 wasn't all that wonderful from the present point of view; that tells you more about IE than about FF).

edit: it was 2.0, I misremembered.

The Instacart website has dreadfully slow search. The instant-search results take forever to update with each character, and the whole site is so slow it makes Safari on my Mac complain that the page uses significant resources.

This weekend I noticed that Amazon Fresh now delivers the same day; for the past few months they had no slots. I switched from Instacart to Amazon at once. The Amazon website lacks some bells and whistles compared to Instacart, but it is completely speedy. If the Instacart website were satisfactory I would never have switched.

Slow, bloated websites can absolutely cost companies money.

I think the other major, major thing people discount is the emergence of viable sandboxed installs/uninstalls, and the accompanying software distribution via app stores.

Windows 95 never had a proper, operating-system-supported package manager, and I think that's a big part of why web applications took off in the late 90s/early 2000s. There simply wasn't any guarantee that once you installed a native app, you could ever fully remove it. Not to mention all the baggage with DLL hell, and the propensity of software to write random junk all over the filesystem.

Mobile has forced a big reset of this, largely driven by the need to run on a battery. You can't get away with as much inefficiency when the device isn't plugged into the wall.

> [the absence of a package manager was] a big part of why web applications took off in the late 90s/early 2000s.

Of course apt-get is very convenient but I can't see a Microsoft version of it letting companies deliver multiple daily updates.

Based on my experience of the time the reasons were, in random order

- HTML GUIs were less functional but easier to code and good enough for most problems

- we could deploy many times per day for all our customers

- we could use Java on the backend and people didn't have to install the JVM on their PCs

- it worked on Windows and Macs, palmtops (does anybody remember them?) and anything else

- it was very easy to make it access our internal database

- a single component inside the firewall generated the GUI and accessed the db, instead of a frontend plus a backend; the latter is by the way the modern approach (but it costs more, and we didn't have the extra functionality back then; js was little more than cosmetic)

There simply wasn't any guarantee that once you installed a native app, you could ever fully remove it. Not to mention all the baggage with DLL hell, and the propensity of software to write random junk all over the filesystem.

Bloated, inefficient software is certainly present on the native side too, but it's also possible to write single-binary "portable" ones that don't require any installation --- just download and run.

OS API sets have evolved toward more sandboxing. Things are more abstract. Fewer files on disk, more blob-store-like things. Fewer INI files in C:\Windows, more preference stores. No registry keys strewn about. .NET strong naming rather than shoving random DLLs into memory via LoadLibraryA()

(Hi, I'm a windows dev)

IMHO web applications took off because developers learned pretty fast how useful "I can update any time without user consent" is, especially when your software is a buggy mess (or a "MVP" if you like buzzwords) and you need to update every five minutes.

> Similarly, I predict we will see an uptick in popularity of actually native frameworks such as Qt over Electron for the same reason.

I would predict that too, if only Qt didn't cost a mind-boggling price for non-GPL apps. They should really switch to pay-as-you-earn pricing, e.g. like the Unreal Engine, so people would only have to start paying once they start earning serious money selling the actual app. If they don't, Qt's popularity is hardly going to grow.

Qt, through the LGPL license, is free for non-GPL apps. Tesla is using it under the LGPL in their cars without paying a dime to the Qt Company (which is, imho, super shitty given the amount of money they make).

Agree 100%.

I wonder how much memory management affects this. My journey has been a bit different: traditional engineering degree, lots of large Ruby/JS/Python web applications, then a large C# WPF app, until finally at my last job, I bit the bullet and started doing C++14 (robotics).

Coming from more "designed" languages like C#, my experience of C++ was that it felt like an insane, emergent hodgepodge, but what impressed me was how far the language has come since the 90s. No more passing raw pointers around and forgetting to deallocate them, you can get surprisingly far these days with std::unique_ptr and std::shared_ptr, and they're finally even making their way into a lot of libraries.

I sense there's a bit of a movement away from JVM/CLR-style stop-the-world, mark-and-sweep generational GC, toward more sophisticated compile-time techniques like Rust's borrow checker, Swift's reference counting, or C++ smart pointers.

I mention memory management in particular both because it seems to be perceived as one of the major reasons why languages like C/C++ are "hard" in a way that C#/Java/JS aren't, and I also think it has a big effect on performance, or at least, latency. I completely agree we've backslid, and far, but the reality is, today, it's expensive and complicated to develop high-performance software in a lower-level, higher-performance language (as is common with native), so we're stuck with the Electron / web shitshow, in large part because it's just faster, and easier for non-specialists to develop. It's all driven by economic factors.

There is movement away from stop-the-world GC, but not to reference counting. The movement is towards better GC.

The language Go has sub millisecond GC with multi-GB heaps since 2018. See https://blog.golang.org/ismmkeynote

Java is also making good progress on low latency GC.

Reference counting can be slower than GC if you are using thread safe refcounts which have to be updated atomically.

I don't want to have to think about breaking cycles in my data structures (required when using ref counting) any more than I want to think about allocating registers.

Yet we still read articles and threads about how bad the Go GC is and the tradeoffs that it forces upon you.

I get the feeling that the industry is finally starting to realize that GC has been a massive mistake.

Memory management is a very important part of an application; if you outsource it to a GC, you stop thinking about it.

And if you don't think about memory management you are guaranteed to end up with a slow and bloated app. And that is even before considering the performance impact of the GC!

The big hindrance has been that ditching the GC often meant using an old and unsafe language.

Now we have rust, which is great! But we need more.

The Go GC isn't that great, it's true. It sacrifices huge amounts of throughput to get low latency: basically a marketing optimised collector.

The new JVM GCs (ZGC and Shenandoah) are more sensibly designed. They sacrifice a bit of throughput, but not much, and you get pauseless GC. It still makes sense to select a throughput oriented collector if your job is a batch job as it'll go faster but something like ZGC isn't a bad default.

GC is sufficiently powerful these days that it doesn't make sense to force developers to think about memory management for the vast bulk of apps. And definitely not Rust! That's one reason web apps beat desktop apps to begin with - web apps were from the start mostly written in [pseudo] GCd languages like Perl, Python, Java, etc.

I don’t think it’s fair to call garbage collection a mistake. Sure, it has properties that make it ill-suited for certain applications, but it is convenient and well suited for many others.

Go achieves those low pause times by allocating 2x memory to the heap than it's actually using. There's no free lunch with GC.

The same applies with manual memory management: you get slower allocators unless you replace the standard library with something else, plus the joy of tracking down double frees and memory leaks.

I'm using Rust, so no double frees and no accidental forgetting to call free(). Of course you can still have memory leaks, but that's true in GC languages too.

That is not manual memory management though, and it also comes with its own set of issues, as everyone who has tried to write GUIs or games in Rust is painfully aware.

There is no free lunch no matter what one picks.

That's true. The comment by mlwiese up-thread, that I responded to, praised Go's low GC latency without mentioning the heavy memory and throughput overheads that come with it. I felt it worth pointing out the lack of a free lunch there; I think a lot of casual Go observers and users aren't aware of it.

Agreed, although if Go had proper support for explicit value types (instead of relying on escape analysis) and generics, like e.g. D or Nim, that could be improved.

I don't think that's as hard as you make it out to be. Notably, Zig does not have a default allocator and its standard library is written accordingly, making it trivial to ensure the use of the appropriate allocation strategy for any given task, including using a debug allocator that tracks double-free and memory leaks.

Has Zig already sorted out the use-after-free story?

No, and as far as I am aware it makes no attempt to do so other than some allocators overwriting freed memory with a known signature in debug modes so the problem is more obvious.

> Coming from more "designed" languages like C#, my experience of C++ was that it felt like an insane, emergent hodgepodge, but what impressed me was how far the language has come since the 90s. No more passing raw pointers around and forgetting to deallocate them, you can get surprisingly far these days with std::unique_ptr and std::shared_ptr, and they're finally even making their way into a lot of libraries.

I worked for a robotics company for a bit, writing C++14. I don't remember ever having to use raw pointers. That combined with the functionality in Eigen made doing work very easy --- until you hit a template error. In that case, you got 8 screens full of garbage.

Yeah, software seems to frequently follow Parkinson's Law:

Work expands so as to fill the time available for its completion.[1]

Corollary: software expands to fill the available resources.

1. https://en.wikipedia.org/wiki/Parkinson%27s_law

See also: Wirth's Law https://en.wikipedia.org/wiki/Wirth%27s_law

from A Plea for Lean Software (1995) https://cr.yp.to/bib/1995/wirth.pdf

Neat. I wasn't aware of that one.

This sentiment is why I moved to writing Elixir code professionally three years ago, and why I write Nim for all my personal projects now. I want to minimize bloat and squeeze out performance from these amazing machines we are spoiled with these days.

A few years ago I read about a developer who worked on a piece-of-shit 11-year-old laptop and made his software run fast there. By doing that, his software was screaming fast on modern hardware.

It's our responsibility to minimize our carbon footprint.

Some of the blame is to be put on modern development environments that pretty much require the latest and best hardware to run smoothly.

> It's our responsibility to minimize our carbon footprint.

This, a hundred times.

My normal work computer is a Sandy Bridge Celeron laptop. I might need to upgrade it soon, but I'd still prefer something underpowered for exactly the same reason; perhaps I'll purchase an Athlon 3000 desktop.

> As a long-time Win32 developer, my only answer to that question is "of course there is!"

As a long-time Linux user, that's what I say as well.

And as a privacy activist, that's what I routinely use.

I don't know ...

https://tsone.kapsi.fi/em-fceux/ - This is an NES emulator. The Memory tab in Developer Tools says it takes up 2.8 MB, and it runs at 60fps on my modern laptop.

It seems possible to build really efficient applications in JS/WebASM.

Multiple layers of Javascript frameworks is the cause of the bloat, and is the real problem I think.

> Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.

If we save developer-cycles, it's not wasted, just saved somewhere else. We shouldn't go by raw numbers in the first place, because there will always be someone demanding a faster solution.

> For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II.

Yes and no. The level of ability and comfort at that time was significantly lower. Sure, the base functionality was the same, but the experience was quite different. Today there are a gazillion more little details that make life more comfortable, which you just don't realize are there. Some of them work in the background; some are so natural that you can't imagine them not having been there since the beginning of everything.

> If we save developer-cycles, it's not wasted, just saved somewhere else.

In other words, pass the buck to the user (the noble word is "externality").

No, an externality is when a cost is passed to a external party (not involved in the transaction), like air pollution or antibiotic resistance. Passing a cost to the user is just a regular business transaction, like IKEA sending you a manual so you can build the furniture yourself.

> The efficiency difference between native and "modern" web stuff is easily several orders of magnitude; you can write very useful applications that are only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions.

Except for the 25 years of support, you could get much the same footprint if a shared Electron runtime were introduced and apps avoided pulling in too many libraries from npm. In most Electron apps, most of the bloat is the bundled runtime rather than the app itself. See my breakdown from a year ago of an Electron-based color picker: https://news.ycombinator.com/item?id=19652749

While true, it also had plenty of limitations. You have to keep carrying around a huge legacy; you're locked in to the APIs, SDKs, and operating systems of a single vendor, often themselves locked to a single type of hardware.

Win32 code doesn't run anywhere except on Windows, but most compute devices are mobile (non-laptop) systems, and those don't come with Windows.

Running your native apps now takes both less and more work: you can write (somewhat) universal code, but the frameworks and layers required to get it to build and run on Windows, macOS, Linux, iOS, Android, and any other system in the market you target now come in as dependencies.

It used to be that the context you worked in was all you needed to know, and delivery and access were highly top-down: you'd have to get the system (OS, hardware) to the user to run the product (desktop app). That is no longer the case; people already have a system and will select the product (app) based on availability. If you're not there, that market segment will simply ignore you.

That is not to say that desktop apps have no place, or that CEF is the solution to all the cross-platform native woes (it's not; it's the reason things have gotten worse), but the very optimised and optimistic way of writing software from the '90s is not really broadly applicable anymore.

Is it practical to target Wine as an application platform? That would require building without VS, or building on Windows and testing with Wine. What are the APIs one would need to avoid in order to ensure Wine compatibility?

What are some solid resources for learning more about optimization? I graduated from a bootcamp, and at both jobs I have had I ask my leads about optimization and making it run even faster and am often told that we don't need to worry about it because of how fast computers are now. But I am sitting there thinking about how I want my stuff to run like lightning for every system.

256MB RAM? How extravagant! My first computer had 3kB.

This is just the nature of “induced demand”. We might expand the power of our computers by several orders of magnitude, but our imaginations don’t keep up, so we find other ways of using all that capacity.

> is easily several orders of magnitude

You might have used these words as a way to say "way faster", but factually you are incorrect. several orders of magnitude = thousands of times faster. No way.

> only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions

A few KB for the binary + 20-40 GB for the OS with 25 years of backwards compatibility

The actual part of the OS providing that is a small fraction of the number you quoted.

And if it weren’t for all that rest of the OS, the small fraction wouldn’t get the funding to survive to today.

If the browser is a computationally expensive abstraction, so are the various .NET SDKs, the OS, custom compilers, and the higher-level language of your choice. Yes, there were days when a game like Prince of Persia could fit into the memory of an Apple IIe, and all of it, including the sound, graphics, mechanics, and assets, was less than 1.1 MB! However, the effort required to write such efficient code and hand-optimise compiler output is considerable, not to mention that very few developers would be able to do it.

Unless your domain requires high performance (and with WASM and WebGL even that need is shrinking) or something niche a browser cannot currently provide, it no longer makes sense to develop desktop applications. The native application is too much hassle and security risk for the end user compared to a browser app, and the performance trade-off is worth it for the vast majority of use cases.

While browser security sandboxes have their issues, I don't want to go back to the days of native applications constantly screwing with my registry, launching processes, and adding unrelated malware and a billion toolbars to your browser (Java installers, anyone?).

Until the late 2000s, every few months I would expect to reinstall the entire OS (especially Windows, and occasionally OS X) because of the kind of shareware/malware nonsense native apps used to pull. While tech-savvy users avoid most of these pitfalls, maintaining the extended family's systems was a constant pain. Today, setting up a Chromebook or a Surface (with default S mode enabled) and installing an ad blocker is all I need to do; those systems stay clean for years.

I do not think giving an application effectively root access and hoping it will not abuse it is a better model than a browser app. It is not just small players who pull this kind of abuse, either; the Adobe CC suite runs something like 5 launch processes and messes up the registry even today. The browser's performance hit is more than worth not having to deal with that.

Also, on performance from a different point of view: desktop apps made my actual system slower. You would notice this on a fresh install of the OS; your system would be super fast, then over a few weeks it would slow down. From the antivirus to every application you added, they all hogged more of my system resources than browser apps do today.

I use Windows (although not heavily; I mainly use Linux these days), and the only third-party apps I have installed are lightweight open source ones and some "official" versions of software. You don't need an antivirus apart from the built-in Windows Defender. And I don't notice any slowdown. I have a non-admin account which I regularly use; the admin account is separate.

Arguably many users don't know how to use a Windows desktop. But that's not a failure of the desktop; that's a failure of Windows. It could have provided an easy way to install applications into a sandbox. On Android you can install from APK files, and they are installed into a sandbox. If Windows had such a feature easily available, I think most genuine desktop app makers would have migrated to it. That would give you the advantages of the browser with no battery drain, no fan noise, and no sluggishness.

You already can use UWP, which has a sandbox, and Win32 apps can be converted to it. So it seems no one cares about more security: most vendors are stuck on "just works" Win32.

Being able to convert does not mean that's the only way to install. As long as you offer an insecure option, your security is still weak.

It is not that OS developers are not improving; S mode on the Surface, for example, is a good feature. But as long as the Adobes of the world can still abuse my system, the problem is not solved.

It is not just older-generation software, either. The Slack desktop app definitely takes more resources than the web version while delivering broadly the same features. Sure, that is the Electron abstraction, but if a multi-billion-dollar, VC-funded company cannot see the value in investing in three different stacks for Windows, macOS, and Linux, how can most other developers?

Is it possible to manually run a process (and its child processes) in a sandbox, with only a given set of permissions?

Native desktop apps are great.

The reason that people don't write them is because users aren't on "the desktop". "The desktop" is split between OS X and Windows, and your Windows-app-compiled-for-Mac is going to annoy Mac users and your Mac-app-compiled-for-Windows is going to annoy Windows users. Then you realize that most users of computing devices actually just use their phone for everything, and your desktop app can't run on those. Then you realize that phones are split between Android and iOS, and there is the same problem there -- Android users won't like your iOS UI, and iOS users won't like your Android UI. Then there are tablets.

Meanwhile, your web app may not be as good as native apps, but at least you don't have to write it 6 times.

> Meanwhile, your web app may not be as good as native apps, but at least you don't have to write it 6 times.

I must be living in a parallel world because I use a ton of desktop apps that aren't "written 6 times" - and write a few, including a music & other things sequencer (https://ossia.io).

Just amongst the ones running on my desktop right now, Strawberry (Qt), Firefox (their own toolkit), QtCreator (Qt), Telegram Desktop (Qt), Bitwig Studio (Java), Kate (Qt), Ripcord (Qt), all work on all desktop platforms with a single codebase. I also often use Zim (GTK), which is also available on all platforms, occasionally Krita (Qt) and GIMP (GTK), and somewhat rarely Blender. Not an HTML DOM in sight (except FF :-)).

In my experience Java GUIs are consistently even more laggy and unresponsive than Electron apps. They may be lighter in terms of memory, but they never feel lighter. Even IntelliJ and family - supposedly the state of the art in Java apps - feel like mud on a brand-new 16" Macbook Pro.

Lighter in terms of memory? No way. IntelliJ is always at a few GB per instance. They are indeed laggy as hell. With the latest macOS, IntelliJ products specifically bring down the entire OS for ten to twenty minutes at a time, requiring a hard reboot, without which the cycle starts again. Except it's not Java or IntelliJ, it's the OS. I only wish they were Electron apps; that way I wouldn't have to return a $4400 brand-new 16" MBP because of its constant crashing due to horrible native apps. All apps can be shitty. At least Electron ones are cross-platform, work, and generally do not bring the whole system to a standstill followed by a hard crash, while using about the same resources.

Interestingly they seem to run exactly the same on horribly low spec machines. I blame the jvm's love for boxing and unboxing everything in IL land. Of course by now I'd hope it's less wasteful - last I spent serious time in Java was 2015.

I've definitely noticed the same on IntelliJ but weirdly enough Eclipse feels just fine. IIRC both are written in Java, so maybe it comes down to the design of IntelliJ moreso than the limitations of the JVM?

I used Eclipse for a while before switching to IntelliJ around ~2015 and it actually seemed like a vast improvement, not just in terms of features but in terms of performance. It still wasn't "snappy", but I figured I was doing heavy work so that was just how it was.

Fast-forward 5 years and I've been doing JS in VSCode for a while. My current company offered to pay for Webstorm so I gave it a try. Lo and behold it was still sludgy, but now unbearable to me because I've gotten used to VSCode.

The one other major Java app I've used is DBeaver, which has the same problem to an even greater degree. Luckily I don't have to use it super often.

Eclipse, interestingly, uses native controls which it hooks up to Java (via SWT), while IntelliJ essentially draws everything itself.

I work daily in a codebase with 20M lines and RubyMine can still search near-instantly compared to say VS Code. One thing that's still true is that there are sometimes long pauses, presumably garbage collection, or I suspect more likely bugs as changing window/input focus can sometimes snap out of it.

If that's the case with IntelliJ, then you probably haven't changed the JVM heap size, which IntelliJ defaults to something fairly small (maybe 2 GB).
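For reference, that cap lives in a `.vmoptions` file (Help > Edit Custom VM Options in recent JetBrains IDEs, which creates a user-level copy that overrides the bundled one). A sketch of raising it; exact defaults vary by IDE version, and the numbers below are just illustrative:

```
# idea.vmoptions (user copy; overrides the bundled defaults)
-Xms512m
-Xmx4096m
-XX:ReservedCodeCacheSize=512m
```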

Qt is excellent, but C++ is quite a tough pill to swallow for many, especially as Qt layers a macro system on top. I predict that native desktop apps will make a comeback when there's a Qt-quality cross-platform framework in a more approachable language (Rust, Nim, or similar).

Why not use Qt bindings for $YOUR_LANGUAGE_OF_CHOICE? https://wiki.qt.io/Language_Bindings

It's rather clunky and often means writing C++-style code in whatever language of choice you're using: the worst of both worlds.

I wonder what an API with both only C (or some other low level) bindings and designed to be easy to use externally might look like.

Flutter for desktop solves this.

Your subdomain http://forum.ossia.io/ lacks an ssl certificate for the login mask.

You can get wildcard ssl certificates for your domain and all subdomains from Letsencrypt.

Thanks for the info, gonna check. We already have a cert; no clue what exactly is missing in the config. How can I try?

Configure your HTTP server to serve that subdomain from HTTPS. Here's an example using nginx:
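(A generic sketch, not the commenter's actual setup: the domain, certificate paths, and upstream port below are placeholders to adjust.)

```nginx
server {
    listen 443 ssl;
    server_name forum.example.io;

    # Paths as typically issued by Let's Encrypt / certbot; adjust to your setup.
    ssl_certificate     /etc/letsencrypt/live/example.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.io/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;  # the forum application
    }
}

server {
    # Redirect plain HTTP to HTTPS so the login form is never served insecurely.
    listen 80;
    server_name forum.example.io;
    return 301 https://$host$request_uri;
}
```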


...or just use CloudFlare, which will automatically take care of it.

Do you count Qt apps as native, but not count web apps as native? Why?

Qt may not 'look' native, but it has native performance, whereas Electron really doesn't.

The difference between "Qt native" and "native native" (e.g. Win32 or Cocoa) is still noticeable if you pay attention, although it's not quite as obvious as between Electron and the former.

(Likewise, applications using the JVM may also look very convincingly like native ones, but you will feel it as soon as you start interacting with them.)

Is it really even worth highlighting though? I use Telegram Desktop (Qt) daily and it is always, 100% of the time completely responsive. It launches basically instantly the second I click the icon and the UI never hangs or lags behind input to a noticeable degree. If we transitioned to a world where everyone was writing Qt instead of Electron apps we would already have a huge win.

If you're using KDE, Qt is "native native".

You're fundamentally mistaken about where Qt sits in the stack - it effectively sits in the same place as USER32/WinForms in Windows or NS/Cocoa GUI widgets of OSX. It is reasonable to think of it as an alternative native GUI library in that sense. If it is slower, it's because an implementation of something is slower, not because of where it lives or an abstraction cost.

Qt pretty much draws using the low-level drawing APIs on the respective platform. And although Qt itself is not written in the most performance-sensitive C++, it is still orders of magnitude faster than most (and it's not like Chrome doesn't pay overhead). People rag on vtable dispatch speed, but jeez, it's still orders of magnitude faster than something like ObjC, which served Apple quite well for years.

The performance of a Qt app is more likely a function of the app itself and how the app developers wrote it.

But no, you're not noticing any microsecond differences in C++ overhead for Qt over "native native", and you're basically comparing the GUI code of the platforms, since Qt does its own rendering. Win32 is mostly pretty good, NS is a mixed bag, and GTK+ is basically a slug. In all cases there is some kind of dynamic dispatch going on, because that is a fundamental pattern of most GUI libraries. But dynamic dispatch is almost never a factor in GUI render performance. Things like recalculating sizes for 1 million items in a table on every repaint are what get people into trouble, and that is regardless of GUI library.

VSCode is indistinguishable from native, so I'm not sure it's Electron that's at fault here.

This gets said a lot, and granted VSCode is certainly one of the best performing Electron apps, but it definitely is not indistinguishable from native apps. Sublime, Notepad++, or TextAdept all fly compared to VSCode in terms of performance and RAM efficiency.

On Mac, VSCode does a better job than many apps at emulating the Cocoa text input systems but, like every electron app, it misses some of the obscure corners of cocoa text input system that I use frequently.

If we’re going to use JavaScript to write native apps, I’d really like to see things like React Native take off: with a good set of components implemented, it would be a first class environment.

No. I like VS Code but it's a hog.

I still use Macvim or even Sublime Text a lot for speed reasons, especially on large files.

If your native apps are indistinguishable from VSCode, they're doing something wrong.

Start Notepad++ or https://github.com/rxi/lite and then compare the startup speed with VSCode.

I use VS Code daily (because it seems to be the only full-featured editor that Just Works(TM) with WSL), but it can get pretty sluggish, especially with the Vim plugin.

Try to use AppleScript or Accessibility. It's like VS Code doesn't even exist.

If I recall correctly, Microsoft forked their own version of electron to make vs code feel more snappy. Because normal electron runs like slack.

I don't think so; Microsoft wanted to fork Electron in the past to replace Chromium with EdgeHTML, but it didn't happen. VSCode is powered by the Monaco editor (github.com/microsoft/monaco-editor), and VSCode feels snappier than, say, Atom, probably because of TypeScript.

Try opening a moderately large (even 2MB) .json file in VSCode, and then do the same in sublime.

VSCode very quickly freezes because it cannot handle a file that size. Sublime not only opens it but syntax highlights immediately.

This is something with your configuration. Out of the box, VSCode will immediately show you the file but disable tokenization and certain other features. I regularly open JSON files up to 10 MB in size without any problem. You probably have plugins which impede this process.

Isn’t that more of an Electron issue?

I mean, is anyone clamouring for VS Code, for example, to be rewritten in native toolkits?

I would argue that the web platform is one of the most optimised and performant platforms for apps.

When you say "web platform", do you mean a browser? Using a browser is more optimised and performant than installing an application on your desktop?

Curious what desktop do you run your browser under?

Consider the example of a simple video-splitting application. A web platform requires uploading, downloading, and slow remote processing; a local app would be hours quicker, as the data is already local.

No reason a video splitting app couldn't be written with client-side JS.

That sounds like it’d probably be slow.

So please do.

A few reasons:

- Qt is actually the native toolkit of multiple operating systems (Jolla, for instance, and KDE Plasma) - you just need a Linux kernel running and it handles the rest. It also makes the effort of looking up the user's widget theme to blend in with the rest of the platform, while web apps completely disregard that.

- Windows has at least 4 different UI toolkits now which all render somewhat differently (Win32, WinForms, WPF, the upcoming WinUI, and whatever Office is using) - only Win32 is the native one in the original sense of the term (that is, rendering of some things was originally done in-kernel for more performance). So it does not really matter on that platform, I believe. The Mac is certainly more consistent, but even then... most of the apps I use on a Mac aren't Cocoa apps.

- The useful distinction for me (more than native vs. non-native) is: if you handle a mouse event, how many layers of deciphering and translation does it have to go through, and are those layers native code (i.e. compiled to asm)? That reliably means user interaction will have much less latency than if it has to go through interpreted code, GC, ...

Of course you can make Qt look deliberately non-native if you want, but by default it tries its best - see https://code.woboq.org/qt5/qtbase/src/plugins/platforms/coco... and code such as https://code.woboq.org/qt5/qtbase/src/plugins/platforms/wind...

Knowing what I know about Qt and what I've done with it in my day job, it's basically the best kept secret on HN. What they're doing with Qt 6 licensing... I'm not sure how I feel, but as a pure multi-platform framework it really is the bees' knees.

I've taken C++ Qt desktop apps that never had any intention of running on a phone, built them, and ran them, and everything "just worked". I was impressed.

I just wish it weren't stuck, anisotropically, ~10 years in the past. Maybe Qt6 will be better, but more likely it will be more and more QML.

Since QML uses Javascript it may be their best bet to attract new developers.

Yes, well, QML also uses JavaScript.

This is not really accurate. Qt relies on a lower level windowing system (X Window, Wayland, Cocoa, win32 etc. etc.).

Also worth noting that many creation-centric applications for the desktop (graphics, audio, video etc. etc.) don't look "native" even when they actually are. In one case (Logic Pro, from Apple), the "platform leading app from the platform creator" doesn't even look native!

> This is not really accurate. Qt relies on a lower level windowing system (X Window, Wayland, Cocoa, win32 etc. etc.).

Qt also supports rendering directly on the GPU (or with software rendering on the framebuffer) without any windowing system such as X11 or Wayland - that's likely how it is most commonly used in the wild, as that's one of the main ways to use it on embedded devices.

I'd like to see it do that on macOS ...

You're seriously suggesting that the common use of Qt on Linux systems is direct rendering without the windowing system?

Not parent, but yes, sort of.

Arguably its use in embedded contexts is much larger than desktop. It's quite popular for in-car computers, defense systems, etc.

For desktop linux, yes, it uses the windowing system.

Well, yes. I can't say too much because of NDAs, but if you go buy a recent car, there is a good chance that all the screens are rendered with Qt on Linux or an RTOS. There are likely more of those than desktop Linux users, as much as that saddens me.

On macOS Qt doesn't really use Cocoa; it uses Quartz/CoreGraphics (the drawing layer rather than the application layer). Note that Apple's pro apps are native controls with a UI theme: they usually behave like their unthemed counterparts.

I had meant to write Quartz, not Cocoa.

I know how Qt works at that level. I did a bunch of work in the mid-naughts on the equivalent stuff in GTK.

> Qt relies on a lower level windowing system

That's true of QtWidgets, but not QML / Qt Quick (the newer tool), correct? (I found this hard to determine online).

Kinda. QML is a programming language; Qt Quick is a UI scene graph (with the main way to use it being through QML) which also "renders everything" and, by default, makes less effort than Widgets to look like the OS.

But :

- QML can be used with other graphics stacks: for instance Qt Widgets with https://www.kdab.com/declarative-widgets/ or alternatives to the official Qt Quick controls with https://github.com/uwerat/qskinny or with completely custom rendering like here with NanoVG: https://github.com/QUItCoding/qnanopainter

- QtQuick Controls can be made to use the same style hints than the desktop with https://github.com/KDE/qqc2-desktop-style

What does “native” even mean?

Put a <button/> in an HTML page and you get a platform-provided UI widget.

It's not platform-provided in my experience, but browser provided. The result of <button/> when viewed in a browser on macOS has no relation to the Cocoa API in any meaningful sense.

I'm pretty sure that when you render just a <button> in at least Safari, the browser will render a native Cocoa button control. If you set a CSS property like background colour or change the border, then it will "fall back" to a custom-rendered control that isn't from the OS UI.
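Easy to try yourself: leave one button unstyled and give another any appearance-affecting CSS, then compare them side by side (the fallback behaviour described above is the commenter's claim, not something I can confirm for every browser version):

```html
<!-- Unstyled: reportedly drawn with the platform's native appearance in Safari. -->
<button>Plain button</button>

<!-- Any appearance-affecting style, e.g. a background, reportedly triggers the
     browser's own custom-drawn rendering instead. -->
<button style="background: teal; color: white;">Styled button</button>
```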

I did a small bit of research into this and found plenty of "anecdotal" evidence, but nothing confirming it for sure. Looking at and interacting with the controls, they seem pretty native; if they're a recreation then that's pretty impressive :)

The drawing is native and the interaction is handled by the browser.

A GUI is a collection of elements with specific look and behaviour. A Desktop Environment is a collection of GUI(s), tools and services. Native means you have something which follows this look and behaviour 100% and can utilize all the tools and services.

Implementing the look is simple, adding the behaviour is quite a bit harder, and utilizing the services is the endgame. A web UI usually does none of this, or only some parts; it all depends on the constellation. But usually there is an obvious point where you realize whether something is native or just an attempt at it.

I'd also love for Mac and Windows to make it really easy to get a vendor-blessed version of Qt installed.

Imagine if, when trying to run a Qt app on Windows, a dialog box could pop up saying: "Program X is missing Y. Install it from the Windows Store (for free)? Yes / No"

Ossia looks pretty sweet! I'll be checking that out for sure.

thanks ! it's still kinda in alpha but making progress :)

Bitwig has been ported to Android ? Or IOS ?

I don't think it makes sense to use it on even small laptop screens, to be honest, so I don't really see the point. You'd have to redo the UI and the whole paradigm entirely anyway for it to be meaningful on small devices. But there is certainly no obstacle to porting: from my own experience with ossia & Qt, it is fairly easy to make iOS and Android builds; the difficulty is in finding a proper iOS and Android UX.

In particular C++ code works on every machine that can drive a screen without too much trouble - if the app is built in C++ you can at least make the code run on the device... just have to make something pretty out of it afterwards.

The point is that the parent poster mentioned tablets and phones, which you don't address in your comment. Of course your examples aren't written 6 times, but they support fewer platforms too (desktop only).

Off-topic, but regarding Bitwig: of course it makes perfect sense to use it on smaller devices. Not phones, but tablets. It's even officially supported with a specific display profile in your user interface settings (obvious target amongst others: windows surface). This is particularly useful for musicians on stage.

I still can't believe bitwig is java. I'm a bitwig user. It even runs on linux.

Only a small part; the core is C++ and assembly.


Wasn’t that Java’s original selling point, it runs anywhere (there’s a JVM)?

Yes, today you can use JavaFX to build cross-platform desktop apps.

Sounds like that’s the Java Swing replacement for the same thing you could do over a decade ago?

Only part of it will be Java, probably the UI logic layer

I think he did not mean "written 6 times", but more like compiled 6 times, with 6 different sets of parameters, and having to be tested on 6 different devices.

Because you NEVER have to do that with browsers, right?

You don't have to sign and deploy your web app six times.

You have to do it anyway if you're using Electron

That's not a web app then.

Isn't it? The UI is rendered using web technologies inside a specialized browser, and it's written in a web-specific language. I might consider an Electron app a hybrid app (one that leans heavily towards the web side), but certainly not a native app.

CI/CD + uh... doing your job? I build one app (same codebase) on 4 different platforms often, it isn't terribly hard.

I don’t know, I think they did mean that.

You write an app for the Mac... how do you ship on Windows as well?

Concerning the desktop, I honestly don't see Windows users caring much about non-native UIs. Windows apps to this day are a hodgepodge of custom UIs. From driver utilities to everyday programs, there's little an average Windows user would identify as a "Windows UI". And even if, deviations are commonplace and accepted.

Linux of course doesn't have any standard toolkit, just two dominant ones. There's no real expectation of "looking native" here, either.

Which leaves macOS. And even there, the amount of users really caring about native UIs are a (loud and very present online) minority.

So really, on the Desktop, the only ones holding up true cross-platform UIs are a subset of Mac users.

During my days of Windows-exclusive computing, I wondered what people meant by native UIs, and why do they care about them. My wondering stopped when I discovered Mac OS and, to a lesser extent, Ubuntu (especially in the Unity days). Windows, with its lack of visual consistency, looked like a hot mess compared to the aforementioned platforms.

And now that I think about it, wouldn't this make it easier, even by an infinitesimal amount, for malware to fool users, since small deviations in the UI would fail to stand out?

I don't know exactly what time period you're referring to, but back when Java was attempting to take over the desktop UI world with Swing, it was painfully obvious when an app wasn't native on Windows. Eclipse was the first Java app I used that actually felt native, thanks to its use of native widgets (through a library called SWT) instead of Swing.

As far as I know, you can even write your own applications based on SWT which would make jvm apps pretty consistent and performant across platforms, but not many people seem to have chosen that route for some reason.

> And now that I think about it, would this made it easier, even by an infinitesimal amount, for malware to fool users, as small deviations in UI would fail to stand out?

I don't think that's how fraud works in actuality; malicious actors will pay more attention to UI consistency than non-malicious actors (who are just trying to write a useful program and not trying to sucker anyone), inverting that signal.

I don't know; I've read that spammers, for example, don't focus on grammatical accuracy because they want to exclude anyone who pays attention to details. Also, most fake Windows UIs from malicious websites I used to see weren't exact matches of the native UI.

I think this has changed. People used to be very particular about how their apps looked on different native platforms, like you say. But I don't think it's like that anymore. People are more agnostic now when it comes to how user interfaces look, because they've seen it all. Especially on the web, where there are really no rules, and where each new site and web app looks different. I believe this also carries over to native apps, and I think there's much more leeway now for a user interface to look different from the native style, as long as it adheres to the general, well-established principles for how user interface elements ought to behave.

Speaking for myself only, I haven’t changed my preference, I’ve just given up hoping that any company gives a shit about my preference.

The other thing is that I trust the web browser sandbox. If I have to install something I’m a lot more paranoid about who wrote and whether it’s riddled with viruses.

And therefore, beware any OS attempts to break cross platform browser compatibility.

Also I think you can deploy to all those things with Qt.

And pay Qt something like $5,000 a year to keep it closed source? No thank you. I would rather write it 6 times. Or just use Electron.

Qt is LGPL licensed, is it not? The LGPL means you can distribute your app closed source, so long as the user can swap out the Qt implementation. This usually just means dynamically linking against Qt so the user can swap the DLL. The rest of your app can be kept closed source.

On iOS and Android the situation might be a bit more complicated, but this discussion[0] seems to say that dynamically linking would also work there.

[0]: https://wiki.qt.io/Licensing-talk-about-mobile-platforms
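For what it's worth, dynamic linking is also the default with Qt's official CMake integration; a minimal sketch (project and target names are illustrative, and this assumes a Qt 6 installation discoverable by `find_package`):

```cmake
cmake_minimum_required(VERSION 3.16)
project(myapp LANGUAGES CXX)

find_package(Qt6 REQUIRED COMPONENTS Widgets)

add_executable(myapp main.cpp)
# Qt's imported targets link against the shared Qt libraries by default,
# which is what lets end users swap out the Qt .so/.dll as the LGPL requires.
target_link_libraries(myapp PRIVATE Qt6::Widgets)
```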

Qt doesn't require that, but even if it did, writing it 6 times is vastly more expensive. People would rather spend $500k on writing it 6 times than $5k on a license because they are somehow offended at the notion of paying for dev software or tooling.

It's a major reason UI coding sucks. There is no incentive for anyone to make it not suck, and the work required to build a modern UI library and tooling is far beyond what hobbyist or spare time coders could ever attempt.

Qt is mostly LGPL. It's really not that hard to comply with that on desktop, and doesn't require you opening your source code.

It's hard to satisfy the LGPL's requirements in various app stores. They also sneak modules with more restrictive licenses into the dependency graph.

AFAIK you only have to pay if you modify Qt itself and don't want to release those changes.

Targeting Windows alone gets you 90% of the desktop market. 95% if you make it run reasonably in Wine. This argument is often used, but it's an excuse.

Anything that you need to run on a desktop can't be used effectively on a touch screen anyway, so phones and tablets don't really count for serious software. (Writing this comment is stretching the bounds of what I can reasonably do on an iPhone.)

95% of a market that has shrunk nearly 50% over the last decade.

In many ways, consumers and non-specialty businesses are post-desktop. It turns out documents, email, and other communication apps cover 90% of use cases. Anything that requires major performance gets rendered in the cloud and delivered through those same apps.

You live in a bubble. Windows still dominates the desktop (obviously: most people can't afford a Mac, and the year of Linux on the desktop has not come yet).


They're not refuting that. They agreed that it's "95% of the market." Their point is that the overall desktop has shrunk, regardless of Windows's share of that.

Note: People questioning the stats


Note that this data is missing phones.

2010: 157M desktops and 201M laptops sold. 2019 forecast: 88.4M and 166M.

Or, perhaps better, in terms of internet use: https://www.broadbandsearch.net/blog/mobile-desktop-internet...

Mobile went from 16.2% to 53% of traffic since 2013, nearly fourfold. Which means that over the same period, non-mobile usage went from 84% to 47%, a drop of nearly 50%.

Shipments of desktops/laptops don't tell the whole story. I'm still using a 2009 desktop (with some upgraded components) that wouldn't show up in any of those stats. Similar story for a lot of my friends: they still use desktops/laptops daily, but they don't replace them as often as in the 2000s.

What do you consider as a specialty business? There are hundreds of millions of professionals - scientists, engineers, accountants, animators, content creators, visual artists, chip design folks, folks writing drivers for equipment, photographers, musicians, manufacturing folks, etc who simply cannot earn a living without native apps. Sure, maybe when those people go home, they don't always need native apps, but IMHO its a mistake to only think about them in such a narrow scope.

You name several that are speciality businesses and are part of that 10%.

But there are definitely examples among accountants, animators, and musicians where phones, tablets, and Chromebooks (not specialty desktop apps) have taken over the essential day-to-day work.

For animators, the iPad vs. Surface face-off is a great example, as is offloading concepts to "the cloud" to render instead of a Mac Pro.

Well, I am not talking about examples, I'm talking about entire industries. For example, there is absolutely no way for my industry (vaccine R&D) to do any work without native apps. Even for animators: no native apps, no Pixar. Maybe you were thinking of some other kinds of animation. I don't disagree that you can find small examples here and there of people not needing native apps in any industry.

really shrunk?

Lots of people use an android or an iPhone as their main computer nowadays. If you're targeting keyboard/mouse style input, then Windows is probably close to as popular as ever. But if you're targeting people using some kind of device to access your service in exchange for money, Windows is wasting your time.

Sales shrinking doesn't mean usage shrunk. The PC replacement cycle should be getting longer, since performance is now good enough.

Any PC from the past decade is still mostly serviceable.

I'm finally upgrading from an Intel Sandy Bridge processor after nine years, and I still don't need to - it's cranking along pretty well as a dev and gaming machine still.

I'm surprised nobody mentioned Emscripten. Unfortunately I have no experience with it, but I gather that you could write a native app and also get it to work in the browser with it. I also gather there could be a performance penalty, but hey... there's also a native app! It feels like we could reverse steam and get first-class native apps again.

Many desktop apps these days seem to be built on Electron, a JS framework for building desktop-class apps.


And many of those apps end up with terrible performance. I'm sure it's possible to write a performant electron app, but I don't see it happen often and it's disappointing.

What is a desktop-class app?

One that can read and write files and directories among other things. (Not an electron fan, but web pages are still pages, not real apps.)

There's a web API for that! https://web.dev/native-file-system/

Does anything but Chrome support this?

Or you could use languages that allow you to share code, so you have 6 thin native UI layers on top of a shared cross-platform core with all the business logic and most interactions with the external world.

You can do it today with C# and https://www.mvvmcross.com/

You forgot Linux. :(

I started using Linux in the late 90s and have since lost all expectation of someone writing an app for it.

Actually, I'd say app support is better than ever (with all the caveats that go along with being a 1% OS, of course).

If wine and steam count as app support, then you're not wrong. It's pretty amazing what can run on linux nowadays compared to yesteryear.

Downvoters, I'm curious to hear your counterexamples!

> Windows-app-compiled-for-Mac is going to annoy Mac users

And they'll let you know it, too. Unfortunately this has been an issue since the first Macs left the assembly line in 1984. If you point out that based on their share of the software market they're lucky they get anything at all, the conversation usually goes south from there.

Every single app that I use, I try to make sure is native. I shun Electron apps at all costs. In my anecdotal experience, people who put in the effort to use the native APIs put more effort into the app in general. A native app is also more performant and smaller in size, things that I cherish. It also pays homage to limits, and to striving for new ways of overcoming them, as hackers had to in the past. I don't think that not worrying about memory, CPU, etc. is healthy in the long run. The Slack desktop app is almost a gig in size. That is crazy to me, no matter the "memory is cheap" mantra.

Agreed. If you have a CPU from 2012 onwards, 16GB of RAM and an SSD, that’s a respectable hardware setup. It might not be the fastest piece of kit on the planet, but I don’t see any reason why it couldn’t last another 3 to 5 years without feeling slow.

Electron apps invariably make such kit feel slower than it actually is. You can get good performance out of even older hardware if you treat it well and load it with good software that respects the hardware.

I type this from a 2014 MacBook Air with 8GB of ram, still going strong, no upgrades except a battery replacement. Everything is still as snappy as the first day I got it.

No electron apps!

Yep, the new Steam library being a bloated web app is what forced me to install more RAM. I normally don't close out of applications when I'm not using them, so I was always hovering around ~6-7 of my 8 gigs of RAM in use. Then they update the library to the new bloated version, and my computer starts freezing because the library memory footprint is so much larger that my computer was having to use over a gig of swap space.

I have a rig powerful enough to run a lot of last-gen games at 4K with high quality settings, and a lot of current-gen ones at mid to high settings and 1080p. The fucking redesigned Steam library lags, none of the animations (why does it need animations?!) are even close to smooth, and there's massive delay on every input. I've never once encountered a move toward webtech, or toward heavier and "appier" JS use in something that's already webtech, and gone "oh good, this works much better now, I'm so glad they did that."

And somehow it's still dramatically better than the Epic Games launcher.

I don't know what these companies are doing. Clearly they're not paying attention though. This is not merely low-hanging fruit that's going ignored, it's watermelons.

>I normally don't close out of applications when I'm not using them, so I was always hovering around ~6-7 of my 8 gigs of RAM in use.

That sounds like your problem. And I refuse to believe that the new steam UI uses 2gb of memory.

The "<resource> is cheap" mantra also really only makes sense if you are writing server code, where you yourself pay for all the memory your code will ever use. If you deploy code to a large number of users it makes little sense. If a million users start your app daily, and your app has a 5 second load time and uses 300mb of ram, you are wasting over 50 days of user time, and hogging close to 300 terabytes of ram.
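The aggregate figures above are easy to sanity-check with a back-of-the-envelope calculation (a sketch using the comment's numbers, not measurements):

```python
# Back-of-the-envelope cost of app bloat across a user base.
# Figures taken from the comment above, not from measurements.
users_per_day = 1_000_000
load_time_s = 5          # seconds spent waiting per launch
ram_per_user_mb = 300    # resident memory per running instance

wasted_user_days = users_per_day * load_time_s / 86_400
total_ram_tb = users_per_day * ram_per_user_mb / 1_000_000

print(f"{wasted_user_days:.0f} user-days of waiting per day")  # ~58
print(f"{total_ram_tb:.0f} TB of RAM if all instances run at once")  # 300
```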

So you are telling me that I can exchange development time, which I would have to pay for, for end-user resources, which I would not have to pay for? Sounds like a great deal.

In a fair society it should be taxed as an externality. There is a real environmental cost associated with software bloat.

Then I hope we would also tax the bad UX of the competing 20-year old Frankenstein applications, which lead to slower business processes (= more resources used as well).

People pay with electricity, uncomfortable temperature and fan noise, wasting time working on slow apps.

Apparently the trade-off is still worth it, as many Electron apps are popular despite these issues. Many times it's an Electron app or nothing.

Your last sentence hits the nail on the head - many users don't have a choice in selecting the application, and due to industry fads can't expect to have a better option.

I can make a great chat system that uses fast native client, but it won't change the fact that Corporation A paid for a slack license and won't switch to mine.

In a fair society closed software would be illegal and people wouldn’t tolerate this kind of stupidity.

No need to hack it with taxes.

> In a fair society closed software would be illegal


People also take it a bit too far. Sure, RAM is cheap enough, but if your application requires 64GB of memory you may start having other issues.

We have customers who require servers with 64GB+ of memory for single applications. This is running on VMs in VMware. If an ESXi host crashes, you'd want VMware to migrate your VM to another ESXi host, but that becomes somewhat tricky if you need to locate one with 64GB of available memory. Unless of course you're way over-provisioned, which is actually pretty expensive. More realistically VMware will start moving a ton of VMs around to put all those with little memory usage on other hosts, in an attempt to find 64GB for your VM. This takes time.

It can be difficult to explain to people that they really should look at their memory consumption, if nothing else to plan for fail-over.

Waste the company does not have to pay for is not waste as far as they are concerned. Customers rarely notice that kind of waste either, or at least not enough to do anything about it.

But I am trading my time for my users' CPU and RAM, and evidence suggests many users are willing to pay the CPU and RAM to get more apps with more features.

It is true that there is a correlation between lower-level programming and better programming in general. You probably won't see someone writing asm but creating crazy O(n²) algorithms that run on every frame, with memory allocations in the inner loop.
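The kind of accidental quadratic work described here is easy to illustrate; a minimal sketch in Python, where building a string by repeated concatenation may copy everything already accumulated on every iteration, while `join` makes a single pass:

```python
def build_quadratic(parts):
    # Each += may copy the whole accumulated string: O(n^2) total work
    # in the worst case (CPython sometimes optimizes this, but the
    # pattern is the classic inner-loop allocation trap).
    out = ""
    for p in parts:
        out += p
    return out

def build_linear(parts):
    # One pass over the data and one allocation: O(n).
    return "".join(parts)

parts = [str(i) for i in range(1000)]
assert build_quadratic(parts) == build_linear(parts)
```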

At the same time a native win32 program can pack significant functionality into a 20KB exe. Put these together and you have a program where everything is instant, all the time on any computer. The original uTorrent was just over 100KB and installed then ran in an instant.

These two refinements together are such a massive difference from any electron program that it melts my brain when people say that it isn't a problem to have a chat program feel like using windows 95 on a 386.

People talk about needing cross-platform programs, but something like FLTK has everything most people will need and also runs instantly, while adding only a few hundred kilobytes to an executable.

> You probably won't see someone writing asm but creating crazy O(n²) algorithms

I watched a lecture by Bjarne Stroustrup that he gave to undergraduate CS majors at Texas A&M where he coded a solution to a problem using linear scans and then a "better" solution using better algorithms with better big O performance.

Then he did something interesting. He did a test on a tiny data set to demonstrate that the solution with linear scans was faster, and he asked the audience to guess at what data size the more efficient algorithms would start to beat the linear scan. After the audience members threw out a wide range of guesses he confessed that he didn't know. He had tried to test it that afternoon, but the linear scans outperformed the "better" algorithms on any data set that he could allocate memory for on his laptop.

IIRC he finished by telling them that professionals often do performance optimization the opposite of how the books present it. Using an algorithm with optimal big-O scaling isn't the optimized solution. It's the safe answer that you start with if you aren't bothering to optimize. When you need better performance, you evaluate your algorithms using real data and real machines and qualify your evaluations based on the characteristics (size, etc.) of the data.
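That experiment is easy to rerun yourself; a rough sketch in Python (results will vary wildly with machine, runtime, and data size, which is exactly the point): compare a plain linear scan against binary search and adjust the size until one overtakes the other.

```python
import bisect
import timeit

def linear_find(data, target):
    # O(n) scan, but branch-predictable and cache-friendly.
    for i, x in enumerate(data):
        if x >= target:
            return i
    return len(data)

def binary_find(data, target):
    # O(log n), but with more per-step overhead.
    return bisect.bisect_left(data, target)

data = list(range(64))   # try 64, 1024, 1_000_000 and watch the crossover
target = 50
assert linear_find(data, target) == binary_find(data, target)

t_lin = timeit.timeit(lambda: linear_find(data, target), number=10_000)
t_bin = timeit.timeit(lambda: binary_find(data, target), number=10_000)
print(f"linear: {t_lin:.4f}s  binary: {t_bin:.4f}s")
```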

You are focusing on an example and conflating it with the actual point that I'm making, which is that Electron is not only slow, but compounded by slow programming on top.

That being said... I know exactly what you are talking about, and it was always strange to me, because it was actually iterating through the data every time to find a value first, so the traversal of the linked list would always kill the performance. Even so, basic linked lists are practically obsolete. This is not a good example of algorithmic complexity, because the complexities were actually the same.

I strongly disagree with the idea that lower level is better.

Yes, lower-level languages allow for programs with good performance, small executables, and so forth. There are many domains where they are clearly the way to go.

But higher-level languages allow for better safety, tremendous productivity, portability, exploration, and flexibility.

If you keep your data in an SQL database, you can easily query and update it in any number of ways that you didn't initially realize you wanted. If you instead keep it in hand-crafted C structs, you can probably provide awesome performance for whatever you originally thought you needed. Once your needs go outside of that box, you'll have to spend significant development effort.
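The flexibility argument is easy to see with Python's built-in sqlite3 module (a toy schema, purely illustrative): a question nobody designed the storage for up front is one SELECT away, where a hand-rolled struct layout would need new code.

```python
import sqlite3

# Toy inventory schema, purely illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (name TEXT, category TEXT, qty INTEGER)")
db.executemany("INSERT INTO items VALUES (?, ?, ?)", [
    ("bolt", "hardware", 500),
    ("nut", "hardware", 450),
    ("manual", "docs", 20),
])

# An ad-hoc question nobody planned for when the schema was written:
rows = db.execute(
    "SELECT category, SUM(qty) FROM items GROUP BY category ORDER BY category"
).fetchall()
print(rows)  # [('docs', 20), ('hardware', 950)]
```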

The correct choice depends almost entirely on the domain.

You are arguing against a point I didn't make. I'm not trying to rehash nonsense language arguments. I'm saying that many times the easy electron route is also correlated with programming that gives poor performance even outside of electron functions.

How about "being aware of what your abstraction layers cost, and being palpably aware of every needless contortion you create"

Know what everything does. Call it from up on high? Fine, but only if you can literally trace that high-level call down to the machine code it emits :D C compiler suites can do that no problem: `gcc -S mycode.c`

For an appropriate dose of humility, so that you know that I'm not elevating myself here, but pointing out reality, check out GCC or LLVM source code.

Something like Lua or Berkeley DB can be defined inside your program in a matter of a few hundred lines of included library code, but what does it DO?

Bringing SQL and a database on board is rather odd for a desktop app, wouldn't you say? Configuration should be flat files, ideally, or managed via the app's GUI, in which case an embedded database like Berkeley DB is usually more relevant. Your mention of SQL smacks of "all things are nails, always use hammers", to me at least.

Have you worked with the actual computer itself in any capacity? I mean ASM, C, C++, etc., but essentially being aware of what an ABI is, what types actually are (memory shape patterns, so we can define physical memory in terms of our data structures). JavaScript is not computer programming, but rather programming the browser, or its disembodied, transplanted JavaScript engine. That animal is completely different from physical memory and actual instructions.

Computers essentially manipulate memory structures. The further away from this you get, the more likely that your abstractions will be leaky, not fit what computers are actually DOING with your data, and this results in beautiful script driving janky machine code.

Seriously, while we all like to pretend that everyone is equally special, let's recall that someone is a VBScript-for-Word expert, and that this is basically a virtual machine that itself is just defined inside someone else's program. Technological stacks are defined in terms of semi-arbitrary made-up things other people made up and that you just need to know how to use.

SQLite is insanely well tested, incredibly lightweight, and will be more reliable than the vast, vast majority of flat-file configuration systems.

Ummm, context here is "desktop software"; configuration management will be key-value, and you needn't bring SQL in for that purpose.

Let's not bring "in-house web app" into the picture just yet.

The "all things are nails, always use hammers" mentality is almost explicitly what I am arguing against.

My mention of SQL was particularly deliberate. It's an especially successful high level declarative language with clear semantics. Implementations provide sophisticated execution engines for optimizing and efficiently running queries. It is quite a lovely separation of concerns that gives you great flexibility and good performance.

Obviously SQL would be a disastrous choice for, say, storing the pixel data in your video codec. Meanwhile, hand-coded C data structures and algorithms would be a disastrous choice for an inventory management system. Tradeoffs everywhere.


A well DESIGNED technological stack WILL allow for high-level control of low-level structures.

Electron and Browser-based apps make a deliberate tradeoff that may be suitable for some kinds of apps (Balena Etcher, as I mentioned. You click a button and some process starts and alerts you when it's done.)

I would simply say that the OP should reverse the question: "in which cases can an electron app suffice for a desktop application" and not presume the death of desktop apps.

Not sure that I entirely agree there: I could easily imagine someone writing too close to the metal sticking to list iteration where a hash would be better, and so on, unless it's prohibitively slower, whereas someone slightly higher up would freely choose whatever they feel appropriate for the situation. They might often guess wrong and take a structure that is overkill for the typical dataset, but the penalty for that will be negligible compared to the one paid for using a badly scaling structure on way too much data.

I agree, in part because of performance, but also in part because I value being able to bootstrap from source as much as possible. There are only a few projects seriously working on this, one being GNU Guix: https://guix.gnu.org/

The trouble with Electron apps though (and most Node apps) is the sheer number of dependencies. It's just infeasible to package them for a distro if you care about their dependencies being packaged as well - at least, not without the entire process being automated.

Slack desktop app is almost 1 gig in size.

On a whim I just checked how big the copy of Adium still lingering in my Mac is: 60 megs.

And Ripcord (a native discord/slack client) is a mere 40.

And Winamp is 4MB (after removing the stock plugins that I don’t need, like modern skins and the long-broken media library internet lookup stuff).

How big is Spotify now?

Spotify is 273MB (on Linux at least), 70MB for the binary and 137MB for Chrome, plus some other bits and bobs.

  $ pacman -Qi spotify
  Name            : spotify
  Version         : 1:
  Description     : A proprietary music streaming service
  Architecture    : x86_64
  Installed Size  : 272.79 MiB
So at least it's smaller than Slack.

I have a noob question, but how do proprietary apps (e.g. Slack) make it onto the AUR without having an official binary?

There’s nothing to really block much on the aur. There are some guidelines in naming and no duplicates, no maliciousness etc that are enforced, but that’s it. But anyone can upload a build script (PKGBUILD) for anything.

If you want to see how it’s done search the package name and AUR and you can see the build script right on the website.

TIL! Thank you :)

Both Slack and Spotify do provide Linux versions, even if not/less supported officially.

Thanks! I hadn't tried Arch yet, but I just saw the PKGBUILD script on the site (didn't know it was there) and it makes sense now :)

There are many PKGBUILDs on the AUR that actually download rpm/deb packages and unpack the binaries and deps from them.

iTunes, in all its bloated, much-maligned media-managing glory, is a mere 188 megs on my Mac.

ooh I'll have to check out ripcord!

I've found it's great for Discord, and pretty good for Slack (although I've been using Slack in my browser now that it's become critical during WFH).

Unfortunately it's proprietary, but there's another native client called gtkcord3 [0] that seems to be progressing well.

0: https://github.com/diamondburned/gtkcord3

I use it daily. It's OK. It's better than having a full electron app/browser running, but it's fairly incomplete. (For example, in Slack I can't find a way to set my status.) I also have to reauthenticate every Monday morning which means launching Slack in a web browser to grab my credentials. It's a pain.

I admire your words here.

I am on the other side, building Electron apps; I appreciate the flexibility and ease because I would rather iterate on ideas than learn three different sets of OS hooks.

I do agree that memory usage is too high on these types of apps and we as developers can be lax about performance.

Then why not just use GTK or Qt and call it a day? Electron is unnecessary.

Mainly due to them being both more tightly integrated to C++/Python than JS/TS for building desktop apps.

Yeah, but, honest question, why would you want to use JS if you didn't have to? It's like, the worst language possible.

If you use TypeScript, the tooling is great, and the parent also mentions it, so I assume they are using it.

Code reusability between browser/desktop/mobile is one. Easier to find developers, faster development speed due to ecosystem, previous experience/familiarity etc. I guess.

Maybe it is just your subjective opinion? I am pretty sure that there are people who enjoy writing JS/TS over C++/Python.

I would pick writing JS over C++ any day. Of course, that's just my opinion. I am sure yours differs.

JavaScript is actually not too bad. It lacks static types, but Typescript gives you one of the better type systems. But I'd take JavaScript over PHP or Bash any day of the week and twice on Fridays.

If I had no experience with GUI development, I'd prefer to learn web tech rather than desktop tech like Qt, because it looks like a more popular and versatile skill.

When all you have is a hammer...

So quantity > quality?

I agree too, and your average user is not going to care about the fact you've used Electron, or even know. It's a big win for development.

I disagree. I mean they may literally not know that your app uses Electron, but they'll certainly have the feeling of, "Oh, that janky app that doesn't quite work correctly and makes my system slow."

I think that might be true on Windows or Linux, but not on the Mac. It seems like Mac developers care more about making their application feel good to use.

Your average user has just come to expect technology to suck nowadays. Go look around and many people are jaded by how poorly things actually function.

Bonus points if it includes offline documentation. As nice as it is to have "updated" online docs, the reality to me seems to be defined more by broken links than by actual up-to-date documentation.

1 gig for Slack? I count < 300 MB on my Windows machine under C:\Users\[MY USERNAME]\AppData\Local\slack, counting the app-4.5.0 and packages folders

And currently taking < 300 MB RAM

"storage is cheap"

"memory is cheap"

"cpus are cheap"

Say the same people who spend a million on AWS every year

185MB on mac.

I'm going to be slammed for using these two words, but for any real work you need to have as few layers of indirection between the user and the machine as possible, and this includes the UX, in the sense that it is tailored to the fastest and most comfortable data entry and process monitoring.

I don't see any `web first` or Electron solution replacing Reaper or Blender in the foreseeable future. One exception I'm intrigued by is VS Code, which seems to be widely popular. Maybe I need to try it to form my own opinion.

As an Electron hater, I’m constantly surprised at just how much VS Code doesn’t suck.

My personal evolution has gone from Sublime Text 3 to Atom to VS Code to Sublime Text 3. I've never been a heavy plugin user, mainly sticking to code highlighting. The thing I really like is speed. Sublime Text rarely chokes on me. I love being able to type `cat some_one_gigabyte_file | subl` and getting it to open up with little difficulty. VS Code chokes on files of non-trivial size, and that was the thing I liked about it the least.

For anyone wondering why I'd open up a 1 GB file in a text editor, I guess the answer is largely because it's convenient. Big log file? No problem. Huge CSV? No problem. Complete list of AWS pricing for every product in every region stored as JSON? No problem.

>> VS Code chokes on files of non-trivial size

VS Code isn't really designed as a general purpose text editor. It's meant as a development environment.

If MS chooses to optimise the experience of 99% of the use cases (i.e. editing source code, which should never even approach 1GB), then that's the correct call IMO.

>> For anyone wondering why I'd open up a 1 GB file in a text editor, I guess the answer is largely because it's convenient.

I can completely appreciate the use of a text editor to open a massive log file, etc, I just don't think that's something VS Code is designed for. You can always use Sublime or Atom to open those files; while getting the nicer (IMO) dev experience with VS Code.

This. VSCode isn't Vim and vice versa.

Over time I've tried quite a few popular text editors, Notepad, Emacs, Vim, UltraEdit, Sublime Text, and of course VSCode.

VSCode is surprisingly good for a Microsoft product, and they had to do some crazy smart engineering work to make it not suck while being built on top of Electron.

That said, it's still quite slow and memory hungry; I went back to Sublime Text 3 a few weeks ago and I am not coming back to VSCode.

I'm an atom user, and it chokes for the very same reason.

However I also use vim for tweaking server side stuff, and use less by default whenever I want to read something (logs is an obvious one)...

This is both for speed and for UX; I believe vim-style navigation (which less basically gives you) is great for reading and searching. What I cannot stand, though, is doing more than small edits in vim. For development (and I mean the code-is-flying-around-like-crazy stage of development, not read-for-an-hour-and-make-tweaks), I am fastest with the kind of flexibility Atom provides.

I know it can be tempting to have one tool for everything when it seems like the tools are supposed to be doing the same thing, but in my mind lean text editors tend not to compete with the big fat slow electron style editors - so just use them both, for their respective strengths.

I hear you on the value of a text editor on the server side. I have picked up the basics of vi for this use case, but I ain't fast with it :)

Vim handles large CSV files pretty well, too.

Not saying you should use it, just something that I found out years ago.

I've tried pure text editors, but they haven't really grabbed me, and my fingers are quite clumsy.

> Complete list of AWS pricing for every product in every region stored as JSON

Is this a hypothetical file that you mention or something you actually have? Asking since I have a use-case for this data and am interested in knowing how to get it. I have read AWS has APIs for pricing info - is that where you got the data from?

i just use less for big files, convenience is a matter of taste

I don’t open big files at all. There’s no point. Files exist as data made to be transformed from one form to another. It is only worth looking at a file in its final form unless you are making some kind of edit.

And even then, I make edits on large files through a series of commands, never opening the file.

By thinking of files in this way, it becomes easy to create programmable tool chains for manipulation.
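That stream-oriented style of tool chain is easy to sketch; a minimal grep-like filter in Python (the sample input is hypothetical) that processes input line by line, so memory use stays flat no matter how large the file is:

```python
def grep_stream(lines, needle):
    # Stream line by line; memory use is independent of input size.
    # `lines` can be an open file handle, sys.stdin, or any iterable.
    for lineno, line in enumerate(lines, 1):
        if needle in line:
            yield f"{lineno}:{line.rstrip()}"

# Works the same on a 1GB log opened with open(path) as on a list:
sample = ["boot ok\n", "ERROR: disk full\n", "shutdown\n"]
print(list(grep_stream(sample, "ERROR")))  # ['2:ERROR: disk full']
```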

What's your typical strategy for parsing something like a large text file for some relevant data?

Suppose you could cat and grep, but what if you don't know what you're looking for?

not everybody uses a computer for the same exact thing you use a computer for

Of course, if they did there would be no point in making my comment, it would be redundant.

I think the issue with your original comment is with "there's no point," which is in effect invalidating the opinion of others.

The reason for this, I think, is developers of VS Code really understand the problem field in this case, as they are making the tool for themselves.

Also, there's probably a ton of resources involved.

Interesting past thread on Atom's performance struggles and how they got outplayed by the VSCode team: https://news.ycombinator.com/item?id=14140421

It doesn't suck, but try Jetbrains IDEs. C++ and other "not JS" language support is vastly superior including refactoring that actually works.

Jetbrains IDEs (and others for that matter) provide so much more than the glorified text-editors, including extensive debugging support (both in my own code, library code, platform code), code-autocompletion, code navigation, code-formatting, refactoring, linting, static-analysis (works well for Python as well), great syntax highlighting, spell-check, a good plugin ecosystem. I'd never go back to editing code without that support.

The Jetbrains ecosystem is real cheap too, I pay US$160/year for the personal-license all-product suite I can install and use anywhere ... and I use PyCharm (Python), IntelliJ IDEA (Java et al), and Datagrip (DB) extensively, dipping into CLion (C++/Rust) as well ... but they have IDEs for many other languages and ecosystems as well. It's definitely a good deal.

Jetbrains suite, along with Docker, Atlassian SourceTree, and Homebrew (and connection to AWS/Kubernetes) are my main tools these days.

I keep periodically re-trying VSCode, but holy cow. It's a massive step down from a Jetbrains IDE, in every single language I've dev'd in.

Jetbrains stuff works, VSCode mostly handles the basics if it's possible to configure it correctly. Which is quite the achievement, and it's a very reasonable option and far better than much that came before it. But it's not where I want to spend my time if I can avoid it.

You're working with statically typed compiled languages tho. Once you try using a dynamic language you realize another editor is enough IMO. I use emacs for anything dynamically typed (including compiled languages like elixir) and intellij for scala/java.

Webstorm or PyCharm isn't as good as IntelliJ or ReSharper, but holy hell is it better than just a text editor, even emacs.

I really don't understand why so many programmers proudly proclaim that they do things the hard way and wear that as a badge of honor.

Phpstorm compared to vim is like an oceanside resort compared to a walk in the desert without a water bottle. If there's not thousands of bugs I have prevented or discovered thanks to phpstorm, I am not surprised.

Jetbrains products are absolutely critical if you're using a dynamic, uncompiled language and to turn down the offer is professional misconduct even if it requires buying a new computer with more ram. I don't know who thought it's a good idea to pretend that a typo in one use of a variable is a legitimate expression of developer intent, but Jetbrains saves your users from that hell.

That's an old, old trope. "Real men" do it the way that takes 3X as long and yields code with more bugs.

I feel like "tool use" follows roughly the same curve as a Gartner Hype Cycle, but without an upper bound on the right (as it implies a below-peak asymptote).

In the very beginning, less is often better, since the tool is prompting you with too many things you don't understand. Then you get past that point and you're massively more productive, since it's catching all your simple mistakes. Then you become disillusioned since it doesn't catch all mistakes, and you start learning in detail how it has failed you, and you just (╯°□°)╯︵ ┻━┻ the whole thing (the "real man" trough). And in the end you go back to sophisticated tools, as you realize a 70% solution can still give you magnitudes more productivity.

But the tooling on emacs or whatever is up to par. That was my original point, not that a plain text editor is a real man's way, just that it's good enough. PyCharm or whatnot isn't necessarily much better than what you can do in emacs, but it isn't portable to other languages or as flexible. A text editor like vi or emacs isn't much different than an IDE's functionality when looking at a dynamic language. It's a WHOLE DIFFERENT WORLD with scala though (IMO - that's arguable but refactorings etc aren't on parity with intellij), and I don't foresee myself ditching intellij for scala dev any time soon.

I remember showing a scala developer who was using sublime the "extract method" feature and some refactorings in intellij and he was like "HOW DOES IT KNOW ABOUT THE CODE THOUGH???" - the IDEs have great features, but they're less differentiating for dynamic languages as a lot of the OSS tools are just as good. Eg VS Code MS Python extension for example. It's just great.

It's miles better on Python and most javascript that I've touched (VSCode's ecosystem does tend to have more breadth, and if you're working on something that VSCode has plugins for but Intellij does not, yea - VSCode can be noticeably better for most purposes). Most commonly around stuff that requires better understanding of the structure of the language / project, like refactoring and accurately finding usages.

But yes, for many dynamic languages a fat IDE is less beneficial, especially for small-ish projects (anything where you can really "know" the whole system).

VSCode is more editor than IDE.

The editor that is closest to an IDE.

jetbrains stuff is written in java btw, for all languages. (when we talk about “native”/“not native”)

I don't care what it's written in.

Native UIs seem to matter more for small to medium size apps. Huge all-encompassing things like Blender, IDEs, etc. seem to benefit from a unified, attractive, easy-to-use UI, but it doesn't seem to matter quite as much that it's native. These things are intrinsically huge too, so bloat matters less.

I don't think I agree with you; but I will accept that a cross-platform toolkit is necessary, because you cannot write three tools of the quality of one JetBrains. But I think you could do the same in Qt with ~manual memory management and it would be significantly more efficient. Qt, I think, is less like Java and rather more akin to an alternative native toolkit (in much the same way that on X you can choose "oh, I will use GTK; I will use Motif; I will use Qt", you can say the same on Windows "oh, I will use winapi; I will use WPF; I will use UWP; I will use Qt" or macOS "oh, I will use Cocoa; I will use Qt"). It just happens to be provided by a third party, so there's no OS lock-in.

I have no problem getting more RAM for a JetBrains product, because they are cheap even at the cost of a new laptop. But it would be nice if a 16 GiB laptop could cope with my codebase, my web browser and my VM.

Check out clangd or ccls for VSCode. They have refactoring that actually works (probably still rather basic when compared to Jetbrains IDEs).

VSCode is Electron? I had no idea. That’s the only electron app I use willingly then! Good job MS!

The first clue is a text editor that uses 180mb of ram :)

For comparison, I just opened up an org-mode file in Aquamacs (Emacs with macOS native GUI) and it weighs in at 105MB, which is actually lower than I would have guessed.

I am running Emacs on macOS, and it only takes 43 MB barebones, and my regular setup takes about 81 MB. Both opening the same org-mode file. Point being that it really depends on what you choose to run on Emacs, and it does not have to be about 100 MB.

Just to add to that, I don't think anyone should be concerned about their text editor taking 200 MB anymore. I doubt it is worth worrying about.

If the electron apps that I ran were the main programs I'm using and weigh in at 200 MiB, I wouldn't worry. But they're mostly background apps - chat, music - that should be using limited resources since I want them on all the time without inhibiting me. Genuine background tasks. And they're using a lot more than 200 MiB.

(I use PhpStorm and Visual Studio Pro and Rider as my main foreground apps. If Jetbrains products used a mere 200 MiB, I would be worried that something had broken and reset them or reinstall them. But they're not text editors.)

It can do A LOT more than editing text.

I’m under the impression it’s striving to be an IDE.

I love vs code, and its plugin system, but if I’m on my laptop without a charger, I use something else. When I’m running vs code, my battery life is cut nearly in half.

What do you use instead? In my experience, normal IDEs (Android Studio, Xcode, Visual Studio) all perform worse than VS Code in terms of memory use and battery. :/

Qt Creator or if my battery is real low, a text editor like Gedit (yes, I’m probably saving more there just by not having features like code completion and syntax checking).

Not old versions of Visual Studio.

As a Visual Studio user, I can't get into VS Code. For one, the interface moves constantly; things resize at all times. It feels sluggish. However, Visual Studio is also getting worse, so in the end VS Code ends up feeling quicker... VS Code still feels _very far_ from Sublime Text usability to me.

Same here. I'm an all-native guy, including back-end C++ servers, but VS Code is very decent. But then again, I think the level of the developers who did the main work is way above average. And MS itself wrote that they worked super hard to optimize it for memory and performance, something the average developer usually ignores or doesn't know how to do due to a lack of understanding of how lower-level tech works.

Anything is capable of not sucking if you go out of your way to spend several man-decades optimizing it.

Both VS Code and Atom use significant amounts of WebAssembly and low level libraries to achieve that performance. In addition to that they've written their own view layer for an IDE in modern JS which makes it more performant and stable.

I despise electron and html-wrapper apps. But I gotta give credit where it's due. VS Code is pretty good.

With the advent of the new WinUI, React Native on Windows, and Blazor, I'm betting the future of Windows is more web-based technologies mingled with low-level native libraries.

> Both VS Code and Atom use significant amounts of WebAssembly and low level libraries to achieve that performance.

Atom has some internal data structures written in C++. VSCode uses a native executable (ripgrep) to do the file search, but no further low-level magic is used to make it go fast.

I don't think any of them are using WebAssembly yet.

I guess this is a semantics thing, but WinUI/React Native is not “web based”. It is JavaScript building “native UIs”.

Figma is pretty much replacing all web design applications precisely because it’s leveraging web tech for collaboration on a single document at the same time.

I think Figma’s success is less about being web first and more to do with filling in gaps in what Sketch offered, especially in collaboration. Today you need to buy at least 2 apps, Sketch and Abstract, to match the feature set of Figma.

Design is one of the areas where one could arguably create a native app, largely because the user base is much more homogenous in OS than most other user bases.

I think we can safely exclude 'collaborative web design' applications from the set of hardcore tools not gaining much from being implemented as web apps for understandable reasons.

Huh? Why? Designers had previously been using native apps for ages — Photoshop, Illustrator, Sketch... Figma has been successful not just because it's collaborative but also because it's performant, powerful, and reliable. Not sure why you think that can't be achieved with other kinds of software

And yet it is a hardcore tool, and does gain a lot from being implemented as a web app. Another example might serve your point better.

Ok, I see my point was not so understandable after all. I didn't mean Figma isn't a hardcore tool; I meant that we exclude it because it specifically concerns web technology (being a tool for web design) and leverages the web to implement collaborative usage. So it's probably logical for it to be a web app.

You're still way off. It's not a "web" design tool; it's a visual design and prototyping tool. Folks are designing a lot more than websites and web apps with Figma.

> So it's probably logical for it to be a web app.

Ah, gotcha. Although I'd have assumed that Figma is used more to design native apps than web apps, this helps me understand where you're coming from.

Figma is a great tool and all but it will still take it years, if not decades, to replace immersive content creation technologies like Ventuz, one of the best in the game! https://www.youtube.com/watch?v=nu2FnEVk9_U

Is this an ad or are you serious?

No, it’s not an ad...I posted in defense of the power that native desktop apps bring since the use case of Figma was being stretched into territories other than web/mobile UI design...immersive content/interaction creation is that category!

VSCode, because of electron, doesn't allow you to have multiple windows that share state while working on a project. This makes it terrible with multiple screens.

It's not because of Electron. They could have multiple windows, but it would be a massive overhaul of the architecture. So they say to just use another instance.

I would argue that it is, because Electron doesn't allow you to share a js context across windows. So while it is not impossible, it is much more involved than it would be in most other frameworks. In fact, this is my only gripe with Electron where I think the normal HN objections about performance, bloat and lack of native UI elements are overstated and not something that bothers me.

That's basically the whole reason I'm sticking with PyCharm these days. Apart from that VSCode seems to tick all my boxes, but it's a deal-breaker for me. There's some kludgey workaround possible involving workspaces but it's rubbish.

VS Code is slow at basic things like having characters show up on screen after hitting the key. It's good at everything else though so that lag doesn't matter as much.

It probably depends on your hardware. I have an "older" (several years at least) Windows work laptop with iGPU that is quite sluggish with VS Code when hooked up to an external 4K display. However, it's snappy compared to Microsoft Teams in the same situation.

Meanwhile, my similar era MacBook with dGPU hooked up to the same screen is very snappy and I honestly would probably not be able to tell the difference in a blind test between VS Code and typing in a native text box (like the one here in Safari).

I'd consider myself pretty anal about latency -- I was never able to deal with Atom's, for example (disclaimer: I haven't tried it in years). I even dumped Wayland for X11 when I had a Linux desktop because of latency (triple buffering or something?) I couldn't get rid of.

But VS Code is not bad.

I'm using a 5K iMac with Radeon Pro 580 8 GB. I'm making the comparison vs Sublime Text which has no lag and is my standard.

On a real system, or just in some synthetic benchmark? Because for me it looks quite fast at barfing out characters.

At least in base mode. It can become slower when the IDE features kick in and autocomplete needs some time to meditate on the state of its world. But this also scales with the size of your active source file, the codebase, and the language in use.

Also, this is a problem with all IDEs, not something exclusive to VS Code.

I'm using a 5K iMac with Radeon Pro 580 8 GB. I'm making the comparison vs Sublime Text which has no lag and is my standard. This is based on my experience, not benchmarking, just try it side by side and you'll see.

My Sublime Text is decked out in plugins and so on too, there are instances where it slows down but only when it's obviously doing some processing. The basic rendering is fast, totally unlike vscode. But like I said vscode is still "fine" just uncomfortable.

That lag matters to me, but in my experience it's no worse than, say, VIM with the number of plugins that I normally run. Fully bare-bones I imagine vscode is performant as well.

Usually professionals using text editors for their work are not concerned with the absolute keystroke-to-screen latency. It’s totally fine if it’s fast enough, and it is.

Disagree on both points

OnShape (https://www.onshape.com/) offers professional level CAD which runs in a browser. Works great.

“Professional level” covers a lot of different uses... Juicero and Airbus both use CAD, I seriously doubt the latter are going to replace CATIA with OnShape

I'm in agreement with @namdnay...I don't think it will replace enterprise packages like Siemens NX or CATIA anytime soon. I do all of my CAD design in Solidworks and I do like the out-of-the-box thinking/features of OnShape...but...their pricing model ends up being more expensive in the long run ($2100/year for the Pro version). I paid $4k for Solidworks in 2016 and it's paid for itself more than 10 times over...all without a forced upgrade! When a newer version substantiates its value for my workflow, I will upgrade. Not to mention, most of my work can easily be done in Solidworks 2008-2010, because that is the innate nature of CAD packages: regardless of the version, they will get the job done.

So is Autodesk Fusion. But you won't see Autodesk stop selling their desktop software. People don't buy extreme rigs to use for production and then trade even 5% of the performance for convenience.

What's your use-case?

I work for an indirect competitor backed by the same commercial geometry kernel (Parasolid) and it did not do well with our models (which, granted, are pretty different from typical mechanical CAD models).

This writeup from a few years back caught my attention and gave me hope of near natively fast, cross platform electron apps https://keminglabs.com/blog/building-a-fast-electron-app-wit...

It's quite simple to use the web view process for nothing but the actual UI, and to move any intensive logic to a separate process (or even native code). It's also very possible to make that UI code quite performant (this takes more work, but VSCode has shown that it's possible).
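
A minimal sketch of that split in Python (the helper and the JSON hand-off are illustrative assumptions, not taken from any particular app): the UI process hands heavy work to a separate worker process and reads the result back over a pipe.

```python
import json
import subprocess
import sys

def run_in_worker(expression):
    """Illustrative helper: evaluate a CPU-heavy Python expression in a
    separate worker process so the UI process never blocks on it.
    The result comes back over stdout as JSON."""
    worker_code = f"import json; print(json.dumps({expression}))"
    result = subprocess.run(
        [sys.executable, "-c", worker_code],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

# The UI process stays responsive; the worker burns the CPU.
total = run_in_worker("sum(i * i for i in range(100_000))")
```

A real app would keep a long-lived worker and an RPC channel rather than spawning a process per call, but the shape is the same.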

If you don't see a web app replacing Blender, give OnShape a try. I was so surprised by it. It is slower than comparable desktop app, but it is usable for real world projects.

> It is slower than comparable desktop app, but it is usable for real world projects

Which means that under a high enough load it will be unusable, while Blender will deal with it just fine.

Blender in the browser. Ugh. <shudders>

I don't find it that crazy, if properly compiled with WebAssembly. The thing is that Blender's UI is all synchronous Python, so, yeah, that and the addon system would need to be rewritten. Python in the browser is a no-go performance-wise, of course.

> Python in the browser is a no-go performance-wise, of course.

"Running the Python interpreter inside a JavaScript virtual machine adds a performance penalty, but that penalty turns out to be surprisingly small — in our benchmarks, around 1x-12x slower than native on Firefox and 1x-16x slower on Chrome. Experience shows that this is very usable for interactive exploration."[1][2]

[1] https://hacks.mozilla.org/2019/04/pyodide-bringing-the-scien... [2] https://alpha.iodide.io/notebooks/300/

So, an already slow language made slower?

No thanks.

The main point, though, is that running Python in the browser is an unnecessary abstraction, because you get a crappier version of something that runs pretty well natively. If you're starting from scratch, I think the browser might be close to native performance in some tasks. Porting existing applications is a pain once you start looking into the details.

The problem is not so much the run-time performance of the code, it's the overhead of loading the Python run-time environment over the network the first time you open the page.

VSCode uses a C++ backend to get that performance.

Yes, Chromium.

No, there's actually a C++ backend under there that implements the editor, if I recall correctly.

A C++ backend, aka Electron.

I think you're confusing Atom and VS Code.

VSCode uses the same c++ based regex parser for syntax highlighting as TextMate, Sublime, Atom, etc

For anyone interested, this is the Oniguruma [0] regex library written by K. Kosako. It's also used in Ruby and PHP.

[0] https://github.com/kkos/oniguruma
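
As a toy illustration of how TextMate-style grammars drive highlighting: a grammar is an ordered list of (scope, regex) rules run over each line. Python's stdlib `re` stands in for Oniguruma here, and the grammar itself is made up; only the scope naming follows TextMate conventions.

```python
import re

# Each rule is (TextMate-style scope name, compiled pattern).
GRAMMAR = [
    ("comment.line", re.compile(r"//.*")),
    ("string.quoted", re.compile(r'"[^"]*"')),
    ("constant.numeric", re.compile(r"\b\d+\b")),
    ("keyword.control", re.compile(r"\b(if|else|return)\b")),
]

def tokenize_line(line):
    """Return (scope, matched_text, start) triples for every rule match,
    ordered by position in the line."""
    tokens = []
    for scope, pattern in GRAMMAR:
        for m in pattern.finditer(line):
            tokens.append((scope, m.group(0), m.start()))
    return sorted(tokens, key=lambda t: t[2])
```

Real editors do considerably more (begin/end rules, scope stacks, caching per line), which is where the engine's regex performance starts to matter.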

I will come at this from a different, philosophical perspective:

Web apps come from a tradition of engaging the user. This means (first order) to keep people using the app, often with user-hostile strategies: distraction, introducing friction, etc.

Native desktop apps come from a tradition of empowering the user. This means enabling the user to accomplish something faster, or with much higher quality. If your app distracts you or slows you down, it sucks. "Bicycle for the mind:" the bicycle is a pure tool of the rider.

The big idea of desktop apps - heck, of user operating systems at all - is that users can bring their knowledge from one app to another. But web apps don't participate in this ecosystem: they erode it. I try a basic task (say, Undo), and it doesn't work, because web apps are bad at Undo, and so I am less likely to try Undo again in any app.

A missing piece is a force establishing and evolving UI conventions. It is absurd that my desktop feels mostly like it did in 1984. Apple is trying new stuff, but focusing on iPad (e.g. cursors); we'll have to see if they're right about it.

What a perfect HN reply. Webapp bad. Native good. No justification. Just a bunch of generalizations.

Gmail empowers me. Wikipedia empowers me. Github empowers me.

Of course native application are important. You don't need to rely on those moralistic justifications.

You may not be aware of this, but the person you replied to has worked for years on a native UI toolkit. And they provided justification, too: skills don't transfer between websites as readily as they do between apps. And while I wouldn't say web applications are somehow morally inferior, the fact is that many of today's issues with increasing friction to drive engagement originated on the web and are easy to perpetuate there.

Worst thing that ever happened to HN was self-awareness, now your comment is worse than if it had just focused on what he was saying rather than where he said it and my comment is also worse because I included this paragraph. We should probably ban mentioning that this is HN on HN.

He said something like "OS apps have the quality of being windows, you can open different windows into the same data" (okay he literally said "bringing knowledge from one app to another" but it can be re-framed as bringing different apps to the same knowledge - which I argue is more characteristic of the "local experience").

Your "refutation" was to list a few web apps that are considered useful.

A more sane way to refute his argument is to talk about open APIs and how you can bring your data into different contexts using these tools as well as GUI web tools like these things for converting file types, making small adjustments to PDFs, GSuite or other tools.

However that refutation falls on its face when you want the window quality; i.e. looking at the same data with different perspectives. The reason is that the computers running these web systems are foreign and disjoint so you are dealing with a distributed system, sometimes you are lucky enough that it was designed to function how you are using it (google suite is this to some extent), however most of the time you have to bring your data to them to use these utilities and then things float out of sync as you move between tools and your Downloads folder fills up with intermediate artefacts.

We are moving back to the local system, and Electron (and those browser APIs for local storage and persistence) are steps in the conversion process. Eventually we will abandon browsers (read: Chrome) altogether in favor of "package management"; something like nix-shell (except secure) has a much more user-friendly social contract while being pretty much the same UI as a browser (but still much much much worse UX). That's where we will end up (some evidence: NLNet is funding the nix-packaging of all the projects they support).

> What a perfect HN reply. ... No justification.

I disagree, therefore the stuff you wrote didn't exist. I mean, don't you think it's a little rude? Think first, engage keyboard later.

hahahaha. It's nice to see there's still reasonable people here. Not all of us have the mental fortitude to "empower" our lives.

I prefer well-designed desktop applications to web applications for most things that don't naturally involve the web:

* Email clients (I use Thunderbird)

* Office suites

* Music and media players

* Maps

* Information managers (e.g., password managers)

* Development tools

* Personal productivity tools (e.g., to-do lists)

* Games

As Windows starts on-boarding their unified Electron model (I can't recall what they have named this), I suspect we'll see more lightweight Electron desktop apps. But for the record, I like purpose built, old-fashioned desktop applications. I prefer traditional desktop applications because:

* Traditional applications economize on display real-estate in ways that modern web apps rarely do. The traditional desktop application uses compact controls, very modest spacing, and high information density. While I have multiple monitors, I don't like the idea of wasting an entire monitor for one application at a time.

* Standard user interface elements. Although sadly falling out of favor, many desktop applications retain traditional proven high-productivity user interface elements such as drop-down menus, context menus, hotkeys, and other shortcuts.

* Depth of configuration. Traditional desktop apps tended to avoid the whittling of functionality and customization found in mobile and web apps. Many can be customized extensively to adapt to the tastes and needs of the user.

Bottom-line: Yes, for some users and use-cases, it still makes sense to make desktop apps. It may be a "long-tail" target at this point, but there's still a market.

This is a big part of why I still use MacOS. The mail, notes and reminder apps are simple, easy, fast and can be used with third party providers like Fastmail. The Windows apps are fairly sluggish by comparison. I prefer most native MacOS apps in general, Finder/Explorer is a big exception though.

What is your maps desktop application?

I just use Maps that ships with Windows.

> Thunderbird

The search is not very good on Thunderbird...

it's still better than Outlook!

Thunderbird isn't a native app, for the record. It's a web application similar to an Electron application, but with extra steps.

Eh. XUL isn't really a web technology the same way as Electron.

It's not XUL anymore, actually, as far as I know. That's why it ripped out support for XUL addons.

Firefox has been ripping out xul, but Thunderbird still appears to support it.

Only if you're using a ridiculously outdated copy:


Add-on support: Add-ons are only supported if add-on authors have adapted them


Dictionary support: Only WebExtension dictionaries are supported now. Both addons.mozilla.org and addons.thunderbird.net now provide WebExtension dictionaries.


Theme support: Only WebExtension themes are supported now. Both addons.mozilla.org and addons.thunderbird.net now provide WebExtension themes.


Literally here's a doc explaining how XUL has changed as of Thunderbird 68, the most recent version, released about a month and a half ago. Yes, some elements have been removed, but others have been modified and still exist.


And that's in the add-on documentation, not even just internal development docs.

Also, describing information that changed in the most recent stable release, a month and a half ago, hardly qualifies anything older as "ridiculously outdated".

Last time I looked, Thunderbird was about:

    - 1/3 C/C++
    - 1/3 Javascript
    - 1/3 everything else (XML, CSS, etc) known to humanity

I'll grant you that an Electron app is generally 90% C++ (ships a web browser), but I'm not sure if that makes Thunderbird (ships a web browser) any better.

Email doesn't naturally involve the web? What?

Ray Tomlinson's design for email came in the 70s. RFC 788 (SMTP) was published in 1981.

Email predates the Web, and, imo, has been made much worse by all the Web-adjacent features shoved into it.

I believe they’re referring to the web as port 80/443 http(s) traffic. It’s the old World Wide Web vs internet distinction, if you will.

Email really is just a protocol for message sending, and it lives on its own port with its own server. If you have an email client and access to an email server (POP/SMTP/whatever), you can use email over the internet but without the "web".

Basically, the web email client ought not be the only email client.

It was the ambiguity of the word 'web' that tripped me up. You still need a network of computers for email to be useful.

`Web`[0] is shorthand for `World Wide Web` which is specifically about HTTP/HTTPS and/or the applications built on that protocol. It is an entirely unambiguous word in this context.

`Internet`[1] is distinct, and that's the general purpose network of networks that you refer to which the Web is built on top of.

[0] https://en.wikipedia.org/wiki/World_Wide_Web

[1] https://en.wikipedia.org/wiki/Internet

Totally fair! Frankly, I only know the distinction from a high school computers teacher who was adamant about the distinction.

I guess the easiest way to get the name is to see the “Web” as a “web” of hyper text documents, where hyperlinks act as the strands in the web (graph edges, if you will).

Honestly, like you say, it’s all built on top of a computer network (yet another web/graph). As a consequence, the distinction never really made a ton of sense to me, either.

Alas, this is the common parlance, so it is what it is.

I don’t think you’ll find many people here who agree that “web” is ambiguous.

Nope, different protocols. You don't need web browsers for email, and the email clients that run in web browsers are using mail servers to send and receive.

If the web didn't exist, which it didn't prior to 1991, email would still work fine. There just wouldn't be any web-based email clients.

Email over ARPANET and the internet predates the web by a couple of decades.

The way we use email hasn't really changed that much since the 70s.

I make a living developing software only available on Windows and macOS. That said, if I didn't need to interact so much with the operating system, I'd be making a web app. It all depends on what you want to make though. Video editing software? Native app. CRUD app? Web app.

You may also want to consider pricing implications of both. Desktop software can usually be sold for a higher up front cost, but it's tough sell to make it subscription based. SaaS would make your life a lot easier if you have a webapp. People are starting to get used to paying monthly for a service anyway.

Pro tip: If you decide to make a native app, don't use Electron. Instead, use the built-in WebBrowser/WKWebView components included in .NET and macOS. Create the UI once using whatever web framework you want, and code the main app logic in C#/Swift. Although the WebBrowser control kind of sucks right now, Microsoft is planning on releasing WebView2, which will use Blink. I think they might also share those libraries between all apps using it, to further reduce bloat. The old WebBrowser component can be configured to use the latest Edge rendering by adding some registry keys or adding this meta tag:

<meta http-equiv="x-ua-compatible" content="ie=edge">
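
For reference, the registry route mentioned above is the well-known FEATURE_BROWSER_EMULATION key; a DWORD of 11001 (0x2af9) forces IE11 edge mode for the named executable. `MyApp.exe` below is a placeholder:

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION]
"MyApp.exe"=dword:00002af9
```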

We've sold our B2B desktop software as a subscription since 2012 (previously it was freeware).

The key is to make it only available as a subscription (no permanent licenses) and it does not have any cloud component.

We managed not to fall into the trap of making two types of licenses (subscription or permanent) to maximize early revenue (we could afford to wait).

We seek a marriage relationship with our users not a hook up.

This increases the value our users extract, knowing it will not go away in the long term, and the lifetime value of customers is a lot higher.

We know that some customers will not accept a subscription-only desktop application but in B2B world they are fewer than it might seem.

> Pro tip: If you decide to make a native app, don't use Electron. Instead, use the built-in WebBrowser/WKWebView components included in .NET and macOS.

Great to know, thank you.

I understand the concept of making a native app to include using the native UI platforms. What you described is hardly more native than electron, which is basically a web app at heart.

Or maybe there needs to be a consensus on terms. Do people consider electron apps to be native? I would put them in some weird middle ground, but definitely closer to web technologies than native development.

The main complaint about Electron apps is their bundling of a complete web browser runtime, which is over a hundred MB, not to mention the big memory requirements. By using the platform's built-in webview component, your app size will be hardly any bigger than the total size of the zipped HTML+JS+CSS of your app. If you're going to use the web stack to develop your desktop app anyway, you might as well try the native webview first if you don't need any Electron-specific features.

With Electron, it feels more like starting from web and making it behave like a native app, from the standpoint of operating system app handling (task bar, notification). It can also fill a gap left by web for exposing a filesystem and other things that are native to some abstract computer.

Using WKWebView for UI is different in that you are starting from a native app and using web technologies to leverage code sharing and programming model of the user interface (js, css, html).

I think for this view to make sense you have to see web apps and native apps as fundamentally different things, which I believe they are.

From a capabilities point of view, they're native. You can access the OS api just like any other native app.

From a developer side, it looks like developing a webapp without the usual limitations of API access, albeit at an extra cost of marshaling or build-complexity.

There really is no reason to think of HTML/CSS/JS as Web only though.

Great to read about someone in a similar situation to me. I work as the developer and maintainer of a niche-market financial / real-estate application. This application has been developed and supported since the late 80s, first being done in Turbo Pascal, then Delphi, and then under my stewardship we moved to C#. I refactored the calculation and report production code into a library, and since that time we've built a Mac version and Web version, all utilising the same 'core' library. This means that for critical calculations and data output we - my business partner, who is the 'domain brains', and I - can do all the hard work on the Windows version (with which we are most familiar and comfortable, and IMO VS on Windows is still miles ahead of VS on Mac), and then 'just' do the GUI work for the other versions.

We did look at doing exactly as you said, i.e. using a web view within Windows and Mac; however, I couldn't really get things working well enough at the time (as, TBH, I am a bit of a noob WRT web development, and just pick things up as necessary as we go along).

For our market, there is strong demand for the desktop versions, and this is even with a subscription model; people get access to the most recent major and minor versions of the software as well as phone and email support while under subscription. When their sub runs out they are entitled to minor version updates, but nothing else. My biz partner is very good with people and very knowledgeable in the domain we operate, so this kind of arrangement suits everybody. Oh, and I get to work remote, and have done with him for ~15 years. The current situation really makes one appreciate fortunate arrangements such as this.

For a personal project I am currently using this approach and can confirm it works great.

I wrote just enough PyObjC to get myself a trayicon in the Mac menu bar, which shows a popover containing a wkwebview to localhost. Then I have all the app logic in Python, exposed to the webview through a bottle server, and Svelte for the UI. Highly recommended.
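
That pattern (a native shell around a localhost server) can be sketched with the Python stdlib alone. The parent uses bottle; `http.server` below shows the same shape, and the `/api/state` route and its payload are made up for illustration:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def app_logic():
    """All the real work lives in ordinary Python, not in the webview."""
    return {"todos": ["write UI in Svelte", "ship it"]}

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/state":
            body = json.dumps(app_logic()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the console quiet
        pass

def serve():
    """Bind to an ephemeral localhost port; the webview is pointed here."""
    server = HTTPServer(("127.0.0.1", 0), ApiHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The native layer then only has to create a webview whose URL is `http://127.0.0.1:<port>/`, which is a few dozen lines of PyObjC.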

Do you distribute Python with your application too then? How well does that work?

It sounds like a macOS app, and python is distributed with the OS already. Just watch out for Apple trying to take away scripting language support in the future. There's also an upcoming Python version change in macOS 10.16 to look out for.

I actually use py2app to bundle the whole virtualenv into an .app file. Worked pretty much out of the box after fiddling a bit with pyenv (you have to build your python version with framework support).
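
A minimal py2app `setup.py` for that kind of bundle looks roughly like this (the entry-point filename and bundled packages are placeholders; treat it as a config sketch):

```python
# setup.py -- build the .app with: python setup.py py2app
from setuptools import setup

setup(
    app=["main.py"],                # entry point (placeholder name)
    options={
        "py2app": {
            "argv_emulation": False,
            "packages": ["bottle"],  # bundle third-party deps explicitly
        }
    },
    setup_requires=["py2app"],
)
```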

I mean, we built the Windows Terminal as a native application because we didn't want users to be saddled with 40MB of a webview/Electron just to boot up a terminal. It might take us longer to make the terminal as feature-rich as it would have been with JS, but no amount of engineering resources could have optimized away that web footprint. When we think about what the terminal looks like in 5 years, that's what we were thinking about.

Thank you. As you say, in the long haul, Windows Terminal is considerably better than it would have been thanks to that decision. It feels responsive and lightweight; unlike any Electron app, and that is greatly appreciated by many users, myself included. I look forward to each new version.

Didn't they eventually do some crazy optimizations to keep the terminal in VS Code performant?

They still do; for years now I read the patch notes and every time there's a segment dedicated to the terminal with things like font support or sane selection, things that IMO are both basic features and which should be in native terminal emulators already.

Also, I've never used the VS Code terminal; iTerm works better for me, partly because it's its own dedicated app, so I can actually alt-tab to it instead of having to learn whatever shortcut it would be in the editor.

I don't use Windows, but thank you for having gone for it.

I am sorry but Windows Terminal is the suckiest Windows app ever and gives me a headache on a daily basis. Try opening multiple instances of this app in overlapping windows, like most developers do. Then fill them with text. Now see if you can tell where one window ends and the next window starts. Because of nearly invisible borders all the windows sort of blend into each other. Worst usability issue ever. This terminal appears to be designed for people who open only one command prompt at a time. Please bring back the old terminal.

I don't like the new Windows Terminal and I still use conhost.exe, since it actually doesn't crash, but I don't think that's a fair criticism. The borderless design is part of Windows' theme. The terminal merely renders into the box it's given. I think Microsoft (and Gnome) have made a mistake in this borderless fetish, but you can't blame the app for the designers' fault.

And the fact that it crashes each time I resume my system seems like a much bigger issue than a person who refuses to keep a few pixels separation manually, or to color the windows differently.

I specialize in data recovery / digital forensics tools, which require very low-level disk access to be able to read physical media at the block level. I doubt there will ever be an HTML5 standard for low-level disk access.

But aside from my particular specialty, I also prefer any other software I use to be fully native. I'm surprised that's such a controversial thing to ask for these days. All I ask is so precious little:

* I want the software I use to be designed to run on the CPU that I own!

* I want software that doesn't require me to upgrade my laptop every two years because of how inefficient it gets with every iteration.

* I want software that isn't laughably bloated. I think we have a real problem when we don't bat an eye upon seeing the most rudimentary apps requiring 200+ MB of space.

I remember hanging out in /r/Unity3d and some newbie posted a download for their completed game. It was super basic - just a game where you move a cube around a grid, but the size of the game was insane, like half a gig.

The dev who posted it seemed perplexed when people told him the game was 100x bigger than it should be.

I hope that you did the kind thing and showed them how to adjust what gets compiled into their package, depending on whether they are targeting a development or production build.

Nothing worse than people who mock beginners showing their first work. We were all there, once.

He was probably expecting useful feedback or even praise.

> I doubt there will ever be an HTML5 standard for low-level disk access.

It's clear you've never worked with Electron; nothing about its system-level access has anything to do with HTML5 or related standards.

All of that lives in NodeJS, which offers reasonably low-level APIs for accessing system resources. For cases where that's not enough Node can easily call out to logic written in other languages, either directly through FFI (foreign function interfaces) or by spinning up an independent binary via the shell.

This is the problem with this discourse: the vast majority of the Electron haters are people who have no idea what they're talking about when it comes to the actual thing they're criticizing. It's particularly hypocritical when they go so far as to frame "JavaScript hipsters" as some combination of ignorant, inexperienced, and/or lazy.

I don't mean to hate on any particular groups of people, I'm simply a proponent of using the right tools for the job.

I can only speak from my own experience, but I have never dealt with an Electron app that wasn't noticeably slower than other apps, to the point of standing out like a sore thumb.

Or, put another way, when I notice an app performing especially slowly, I think to myself, "that's probably an Electron app", and I'm usually right.

Have you looked into WebAssembly and how browsers give access to filesystems[1]? There could be hope for a future of high-performance filesystem access.

[1] https://developer.mozilla.org/en-US/docs/Web/API/FileSystem

From the linked article:

This interface will not grant you access to the users filesystem. Instead you will have a "virtual drive" within the browser sandbox. If you want to gain access to the users filesystem you need to invoke the user by eg. installing a Chrome extension. The relevant Chrome API can be found here.

that's not low level disk access

Can you explain how it isn’t?

If you are accessing the file system, you are accessing an abstraction, not the actual disk data (which is another abstraction on top of the actual hardware).

Does it give you access to /dev/disk0 (or /dev/nvme0, or /dev/sda0, or /dev/rdisk0 etc)? If not, no.

There are still a lot of fields where performance matters. This is especially true with apps that need low latency, like most games. Something like Stadia may be fine for a casual gamer but it still feels laggy to many, especially those used to gaming at 144Hz+ with almost zero input lag and gsync.

VR is another area where native desktop is still superior.

Then there is anything that is dealing with a lot of local data and device drivers. Video editing for example.

Development tools that work in a browser are getting better but native (or even just Java-based like IntelliJ stuff) still seems superior for now.

Stuff that doesn't use TCP, like network analysis tools, either needs to be done as a desktop app or needs to run a local server that the controlling web app talks to.

I guess what I'm getting at is that if you need low-level access to the local device or if you care a lot about things like rendering performance then native is still the way to go.

Stadia's games do not run in a browser.

The client you use to interact with the stream is a web page, yes, but the actual game running in their servers is a native Linux application.

Yes, but I think that's the point, isn't it? Of course the game runs on a Linux server, because it couldn't possibly run in a web page for performance reasons. Hence the complaint about lag.

You either get lag from the network roundtrip or lag from the crap performance in a browser (or you choose to degrade the experience, e.g., by reducing graphics fidelity), but one way or another you're experiencing lag (or a compromised experience).

Ergo the same game implemented as a native app running locally is going to be better[1].

[1] At least technically. It could of course still be a rubbish game.

Also from a security standpoint, it seems to me that having all apps work in the browser is creating single points of failure.

Can you elaborate on the security considerations of this? It seems like you could just as easily say it's more secure because it reduces the attack surface to a well-tested program with mature sandboxing.

It goes both ways, the sandboxing means that the app has less access to the OS, but the flip side is that the open web has more access to the app (CSRF, XSS, evil extensions, etc.)

Coincidentally, I just published a blog post that touches on this from both sides: https://paulbutler.org/2020/the-webassembly-app-gap/

You mean like the OS is supposed to be?

Exactly. The browser becomes (basically) an OS running within the host OS, so s/he's invented a kind of half-cocked virtualisation. There's a joke in there somewhere about turtles all the way down.

IMO desktop apps aren’t quite equivalent to native apps. Native apps look and behave in a consistent way. They have

• Familiar UI primitives:
  - Controls and chrome are in the same place
  - Font sizes are the same across apps
  - Consistent icons and button shapes

• Support standard keyboard shortcuts (including obscure ones that developers re-implementing these UIs might not know about):
  - All the Emacs-style keybindings that work in native macOS text fields but are hit-or-miss in custom web text fields
  - Full keyboard access (letting me tab and use the space bar and arrow keys to interact with controls)

• And consistent, predictable responsiveness cadences
  - Somewhat contrived example: In Slack (browser or Electron app), switching between channels/DMs (via ⌘K) has a lag of about 0.5–1 second. If I start typing my message for that recipient during this lag, it actually gets saved as a draft in the channel that I just left and my content in the new channel gets truncated. I don’t think that kind of behavior would happen in a native macOS app, which renders UIs completely synchronously by default/in-order (so it might block the UI, but at least interactions will be in a consistent state)

I don't agree with the first two points. Native applications aren't consistent in this way. There are dozens of cross-platform GUI kits and they all behave slightly differently, just like Electron apps. If you want consistency, you need to build multiple apps, one for each OS with their respective toolkits. Ain't nobody got time for that when you can easily build on Electron and target browsers, macOS, Windows, and Linux in one single app. No wonder Electron is winning the battle so far, regardless of your last point.

Native implies that you are building for each OS and their native toolkit. On macOS, you write Cocoa. On Linux you write GNOME or KDE or CDE. On Windows you write...I dunno. Win32 probably.

Current tech is C++/WinRT this year, but last year it was WPF, five years ago it was XAML, before that MFC/ATL, and original Win32 somewhere back in the old days.

And Linux isn't better. I think OSX is the only desktop OS that has an idea of what an app should look like.

MacOS used to have a choice between Carbon and Cocoa, didn't it? Maybe still does.

C++/WinRT uses XAML. But XAML isn't a control library/toolkit. You can tell because it isn't called the Extensible Application Control Library. I'd say it shouldn't be included in your list, but MFC/ATL is just a way of accessing Win32 via C++ -- it makes the same function calls -- so it's not clear that your purpose was to make a fair statement about native development, rather than to complain about Windows' turnover of APIs.

On Linux, it's quite simple to live entirely in Gtk-compatible land. I think Firefox and JetBrains are the only foreigners I use on my box, but I'd be using them on any operating system so it's not exactly a fair cop.

Let's go with a file picker dialog, a simple OS-provided component. Windows provides three versions of this dialog (the "app" view, the tree-view, and the explorer-in-your-app view) depending on which API you use. You see this pattern repeated. It being the same calls "underneath" is true, but it's also irrelevant, as the user experience noticeably changes depending on which API is invoked.

> MacOS used to have a choice between Carbon and Cocoa, didn't it? Maybe still does.

It did. Carbon was the pre-NeXT widget set and compatibility to run pre-OS X apps. It is very dead. There is only Cocoa.

> MFC/ATL is just a way of accessing Win32 via C++

Okay, so we can still regard MFC/Win32 as the standard API?

Consistency is maybe overrated?

I purposefully make my FF unlike the other apps on my system. I use a couple of workarounds to prevent OS-level keybinds from working in some apps. Sometimes a completely purpose-made UI is better.

In general, consistency [in desktop UI] is good, but there are good reasons to break it.

The difference is you breaking consistency for your use cases, vs some product designer somewhere breaking consistency for all of their users for whatever reason without the end user having any say in it.

Pithily, the cost of a product designer’s novel portfolio piece is externalized onto their captive users.

Depends heavily on the user community, use case, time-in-environment (short- and long-term), inter-user communication about application, and rate of change. There's almost certainly a set of interrelationships between these.

For a highly technical userbase, a highly bespoke UI may be defensible, even preferable, especially when:

- User community is expert, highly skilled, and highly computer-literate.

- If daily-weekly time-in-environment is high. User "lives in" application.

- If lifetime time-in-environment is high. Users base career in application.

- If there's relatively little communication between users about application state, interactions, or activities. Put another way: users interact with the app, but not about the app with others (users, clients, management, techs).

- The UI avoids drastic change over time.

By contrast, reverse virtually any of these conditions and you'll want a UI that conforms closely to current standards:

- Users are inexpert, poorly computer-literate (the vast majority; see the OECD computer skills study), or simply nontechnical with the app.

- Time-in-environment is low. At the extreme, all users are one-time novices.

- If users must communicate with others regarding app state or tasks -- close management, team interaction, client / management interactions. All parties need a ready and clear mental model of the UI and state.

- If the UI changes drastically over time, it should do so consistently with all other major elements. (Both should change as little as possible.)

Somewhat more concretely, the absolute worst feature a UI can have is change. Users get confused, lose trust, and are burdened by obsolete knowledge. This afflicts both expert and nonexpert users, profoundly, though in somewhat different ways.

For very technical tools used principally creatively --- virtually all editors, development environments, and many reader/browser/search/analysis tools fit this description --- a highly distinctive (and customisable) interface may be appropriate for advanced users. This is a small, but generatively critical, user community. There's a requisite complexity to such tools; a simplified UI comes at the cost of vastly less efficient performance.

Virtually all "weird" tools in heavy use today evolved from what were, at the time of their creation, common motifs, at least within the environment of origin. Think of Unix, vi, emacs, Photoshop, Excel, and Eclipse, say.

For standard workflow, control, and transactional tools, highly standard UIs are preferred. Here, users are interacting closely with others regarding state or interactions with the interface, and clarity, consistency, and common knowledge of the UI and state changes matter. Point-of-sale systems, equipment controls, general monitors, enterprise applications, end-user/customer support tools, and the like.

Occasional-use, public use, and similar tools must be generally usable without training. Adherence to standard motifs is critically important. UIs and capabilities are generally simple.

The trade-offs:

- More expert users and more heavily-used tools can support more novel UIs.

- Less literate users, more unfamiliar tools, and greater communications about UI state and interactions, demand more standard UIs.

- Change is generally bad, but evolutionary change (within the tool) or conformant change (with the overall environment) are generally less disruptive than either sudden drastic or idiosyncratic changes.

If you are going to expect me to run your software in an always-on manner, I would greatly appreciate a native application.

I frequently do light computing on a Surface Go. It's a delightful little device and I love it, but it is not powerful enough that I can leave gmail, Slack, and Discord open all the time.

I don't have enough RAM to run another web application but I could very easily afford a native app or two.

I have an old ThinkPad X200s that I turn on from time to time. I keep Fedora installed and updated just in case, and since I'm there, I also sync my Nextcloud stuff.

E-mails using claws-mail are not a problem. It generally runs fast enough until I open firefox and web stuff in general.

Claws Mail on an old Atom is a lot faster than webmail on pretty much anything else. This probably illustrates best the difference between the two approaches.

I remember times when webmail basically meant SquirrelMail. It was fast over a 56k modem, but desktop was always better.


Indeed, I asked on another HN thread, why do you developers need so much RAM, and I got lots of good answers. But it occurs to me that a few lightweight, quick-starting apps will help me stave off the day when I have to get a new PC because my old one ran out of juice.

I'm not sure that's enough of a reason to develop a commercial app, because tightwads like me also like our stuff to be free.

There's a pidgin plugin for discord. While pidgin is dated and has issues, the existence of it shows that even modern chat systems could be made fast.


Yes, I hope so. I have just released a new data transformation/ETL tool for the desktop (Qt/C++ app for Mac and Windows). The advantages of desktop are:

- performance/low latency
- richer UI (e.g. shortcuts)
- privacy

But there are trade-offs:

- the user has to install/upgrade it
- less insight into what the user is doing
- harder to sell as a subscription

I wrote this in 2013 and I think it is still mostly true: 'Is desktop software dead?' https://successfulsoftware.net/2013/10/28/is-desktop-softwar...

Cool article!

Lots of software would be better (from the user's POV) as a desktop app. However as a developer and as a software business owner/investor, it's (much) better to write web apps. So it depends on what you're asking here. Should you invest in desktop dev skills to further your career? No. Should you write your software idea as a desktop app if you want to make a business of it? Not if you can avoid it. If you're asking something else, well I guess the answer is 'maybe'.

OTOH, creating desktop apps is a skill in itself that one might want to master. At least in the Mac ecosystem this is a well-regarded skill, and among the users there is a willingness to pay for apps. SaaS might still be more lucrative though.

> However as a developer and as a software business owner/investor, it's (much) better to write web apps.

Why do you say that?

Because of the main advantages of SaaS: recurring payments and data lock-in.

Adobe achieves that with desktop apps, if you can call it an achievement.

Achievement is a big word here. They had to shove it down people's throats with a bloated, buggy, and expensive app suite.

Yes although people still put their hand in their pocket every month so they must be getting some value from it.

Sadly I am not Adobe

Scaling out cross-platform, lower cost of investment to support multiple OS/Devices.

User does not need to worry about dependencies and libraries - they just need to make sure they're running a somewhat up-to-date version of a modern browser.

Users are not tied to a single device either.

As a business, it's not to your advantage to have to ask permission from app store owners. The web does not require permission from a third-party that may decide to compete with you.

But with the app stores you get more users

I basically had no users for one of my desktop apps for seven years, till I ported it to mobile and put it in the normal app store

Here's PG's response from 2001: http://www.paulgraham.com/road.html

I think he is saying that because with webapps you have

- control over your software and updates

- no one can pirate your software/service

How do you have more "control over your software" with a web app? With a native app, I can do virtually anything. With a web app, I can really only do what (Google ∩ Apple ∩ Microsoft)'s web browser teams decided to prioritize, and allow, and optimize.

As for 'pirating', is that a serious concern these days? I've only ever heard about it being an issue at big companies selling software to other big companies, where it's solved with "license servers".

I think they mean rapid, continuous deployment. If it’s a web app you can push out changes every day to near-universal adoption. With a native app you have to cajole users to update.

There are auto-updaters for desktop software, too.

Also on mobile you are constrained by the app store requirements as well.

This is exactly what I meant.

>> How do you have more "control over your software" with a web app?

It means they can break a program I'm using at any moment they wish, even after I paid for it.

I think pirating is a serious issue in other countries (i.e., mine). I practically never saw anyone actually pay for any office suite, Adobe software, or some offline games. All of them get pirated, and very quickly too.

> no one can pirate your software/service

Huh? Given how most web applications these days are using client-side rendering there's nothing stopping someone from just downloading all of the frontend assets. You can also connect to a server from your desktop application so I don't understand how desktop makes it easier to pirate anything.

Web applications have lots of essential parts on the server-side even if they do client-side rendering. And each paying user is logged in when using it, so the company can accept/deny requests depending on the user.

Proprietary desktop applications are usually downloaded after a payment, and then the full software is available locally. By hacking the security parts of it one can then have a totally free version and distribute it. That's why it is easier to pirate.

How does this address my point about being able to connect to a server from your desktop application? Just because historically companies have not deployed their product in such a fashion doesn't mean it isn't technically possible. How do you think social mobile applications work? Have you ever worked on an application that wasn't running in a browser?

> That's why it is easier to pirate.

There are no technical reasons why a desktop application is inherently easier to pirate. Only implementation details.

Not anymore!

Games and even productivity apps are known to be moving small pieces to their servers so that you cannot run the entire application on your own...

If you develop an application that runs in the web browser, I won't use it. That's not some dogmatic principle of mine, it's just an empirical fact.

I use only one browser-based application: Gmail.

I've never used another browser-based application and I can't imagine that I ever will unless there's truly no alternative and it's forced on me by an employer.

I've happily paid for dozens of desktop applications, and I'm even semi-happily paying for around ten of them that have switched to a subscription model, but I never have and likely never will use browser-based applications even if they're free.

I don't get the downvotes: the original question asked about opinions on web-based vs. native apps, and this guy is giving exactly that. And what else could you do? You can either cite some usage statistics or give personal assessments.

What, and now even my own comment gets downvoted?! If there was at least a reason given... coward.

Does that mean that you don't browse the web? Except for Gmail?

I use Gmail but I get it through Thunderbird...

I'd be surprised if that is the case—or perhaps we have different definitions of what constitutes a "browser-based application." For example, I do all my online banking in my browser; it's the most fully-featured way to do so. It might not obviously be a PWA or an SPA, but I certainly think it deserves to be called a "browser-based application." What else could it be? It's certainly not a "web page." In fact I'd argue that any site that has content that's scoped to a user and can be manipulated by the user is a "web application."

In some (most) cases, desktop apps could be better - performance, latency, off-grid capabilities, and even privacy. In most cases, I prefer offline desktop apps over their online counterparts.

One area which is really tough to nail is cross-platform support though. Getting a good app on one system is hard enough, getting it in all three - rarely done. This is one of the things where web shines.

From a business standpoint, I think web-first with an eye on native works for the majority of cases. That is, as long as the majority of users don’t care about the above. In some future, if we start valuing efficiency and especially privacy more, this could turn around. But it feels like, even then, the web will probably find a way to make more sense for most people.

But building a good app on all three major operating systems is not solved by an abstraction layer, neither by a cross-platform GUI toolkit nor by a web layer. Different operating systems have different conventions, metaphors, and standards. A portable application will usually feel foreign on all but the developer's primary platform, unless the developers invest in adapting to the differences of the platforms.

Yes, there's definitely still a place for them.

I'd say that that when you're writing an application which is fundamentally just a pretty wrapper (e.g. it exists to take user input and pipe it over HTTP to some web service or use it to generate a command for some other binary) and your users don't care about performance, resource usage or reliability, it makes sense to use a browser. Your application is very UI-focused and if you're already familiar with HTML, CSS and JS, use what you know.

However if you're working on an application that has strict resource usage, reliability and/or performance requirements like say a control system for industrial equipment, a 3D game, a video encoder, photo editing software, or software that's going to be run on an embedded system, you're going to find it difficult to do what needs to be done with a browser/wrapper. It can be done for sure but it'll be something you work around rather than with.

100% this, it depends on the context, and as one other person mentioned, it depends on your goal.

I like to take my laptop out to a park and work with all the radios off to get the best use out of my battery. I also like to do complicated things with a lot of files that need to be organized in a real filesystem, the directory structure of a graphic novel can easily match the complexity of a program’s source tree.

Your web app, which requires several extra levels of indirection between the code and the bare metal, an online connection, quite possibly is built on a framework that tends to suck down a significant percentage of my CPU even when it’s idle in the background with all windows closed, and its own weird file management that’s probably a giant hassle when I need to get my work into another program, has no place in my world.

We're building POS applications for major retailers, and for this kind of software, native is king and will stay for the foreseeable future (with a few exceptions confirming the rule, of course). These applications need tight integration with exotic hardware, must satisfy weird fiscal requirements often written with native applications in mind, must run rock-solid 24/7 with daily usage and predictable response times for weeks without a restart, must be able to provide core functionality without interruption in offline situations while syncing up transparently with everyone else when back online and usually run in an appliance-like way on dedicated hardware (start with the system, always in foreground, only major application on the system, user should not be able to close or restart it, updates and restarts triggered by remote support/management).

All of this is much easier to do with native applications, running them in a browser just adds the need for crazy kludges and workarounds to simulate or enforce something you get for free if running native. Also you end up with managing hardware boxes with periphery attached and software running on them anyway, so whether managing a native application that is a browser which then runs the POS application or whether directly managing the POS application does not save you any work; if anything it even gives you an additional thing to manage, which increases maintenance effort and potential for failure (which quickly is catastrophic in this business, POS down today effectively means the store can close its doors).

Back-office applications in the same space are actually pretty well-suited for a web application, and frequently implemented as such today.

A lot of ATM machines and POS systems are glorified web apps. Not sure why a web app can’t be rock solid. Certainly easier to go native since you only have one platform, but I don’t see it being required.

Notice that I didn't say anything about that being impossible. I just said it's harder to meet most of these requirements with a web app, because you have to solve problems that you simply wouldn't have otherwise while gaining practically no advantage whatsoever, which is why most people decide to keep building native applications. Some people happily shooting themselves in the foot does not invalidate this assertion.

Also especially ATMs are notorious for hanging inexplicably for seconds, not reacting swiftly to user input, and generally providing a rather poor UI experience. The performance and quality standards for POS systems, at least our standards, are quite a bit higher.

A lot of ATMs are using Windows XP embedded.

Just being curious: what is the stack, and does the server work with XML/JSON APIs? Any other setup info about the system would be appreciated.

Yeah, but as a Point of Sale application for retailers, you have the luxury of running on a consistent hardware platform.

I'm all for web apps, unless you need to do things they don't do well. If you are doing, say, video editing -- yeah I want a native desktop app for that. At least currently.

But those things are getting fewer and fewer. And it annoys me to no end that I can't, say, run my favorite screencast/video editor (screenflow) on my Windows or Chromebook machine, since it seems pretty deeply tied to the OS. I don't want to have to learn another one, and I don't want to replace my Mac which is on borrowed time.

That said, I use a lot of apps like Gimp and Inkscape on my Mac, and while they may be technically native, they can be really awful about "feeling native." I don't mind inconsistent user interfaces so much, as long as it is mostly cosmetic. But I've spent SO much time in both of those searching for lost windows, etc. (OMG Inkscape devs, has anyone even tried it on multiple monitors???) Things you never run into with "true" native apps (those two use the GTK toolkit).

So, I certainly recommend web apps if your app can run sufficiently fast or otherwise can get away with being a web app.

Take any app that uses all cores nearly 100%, maybe maxes out the GPU, eats 3-5 GB of RAM, and is a 2-100 GB install.

Those will always be native.

These are your CAD programs, your video editors, your AAA games.

You can make a CAD program in a browser too. But you trade a chunk of perf for convenience, and that's only rarely acceptable.

Anything that could ever be done in a mobile app (chat, media consumption, ..) those might be possible to do in a browser. But you didn’t even really need a computer for them to begin with.

Tell that to Autodesk. Fusion is like 70% web, slow, and bloated. They don’t even have a good excuse since the whole UI is just a shell around a canvas and could easily be made native, it’s not as if they’re actually benefiting from the DOM or CSS.

That’s the one I was referring to. That is, Fusion is a tradeoff that isn’t always acceptable (and still it’s not completely web based).

Well, the browser is still not good at things that, say, Blender or Pro Tools can do. Media pros still need desktop software; for example, audio latency in Chrome is far too high to use it for any serious pro audio applications.

It's true that many apps could be replaced today with a website, especially those that are basically capturing and showing data.

But there are still many areas where native is king.

- Games

- Audio / Video work

- Adobe type of work (photo editing, motion graphics, vectors, etc)

- 3d

- Office. For me Google docs is enough, but not for heavy users.

- Desktop utilities (eg: Alfred, iStatMenus, etc). You could certainly use a web UI for those, and it would probably be fine, but you'd still need some native integration.

The web versions of Office software still feel way less polished than the desktop versions. It just doesn't feel quite right.

It really depends on the app.

A 3d modeling package? Although Clara.io exists most of the time I'm dealing with 100s of megs of data so native wins. Creating a game? Mostly same though I can imagine some limited online game creation tool even the small sample 3D platformer for unity is 3gigs of assets so a game editor native seems to win. Photo editing, when I get home from vacation there's 100gig of photos so native wins for me. Video editing same thing.

On the other hand there are apps I have zero interest in being native. WhatsApp, Line, Slack, Facebook, Office, Email, Discord, etc. I'm 100% happy with them in a browser tab. Native apps can spy on way more than browser apps (maybe less on Mac). They can install keyloggers, root kits, scan my network, read all or most of my files, use my camera, mic, etc.

I also use 7 machines regularly. Being able to go to any one and access my stuff is way more convenient than whatever minor features a native app would provide.

Installable web applications are an incredible concept that have saved my customers and myself countless hours (and money).

The experiences are fantastic. The applications look native to the platform, with coloured title bars and OS specific window decorations.

The performance is not noticeably different than the equivalent native experience. I am taking advantage of multi-threading through web workers, web push notifications (sorry iOS), and the (single) code base is maintainable and easy to work with.

I don't see how a GUI framework like Qt, or several separate native applications, would make a more effective alternative either aesthetically or financially.

I'd consider it uncontested once installable web applications have deeper system access (filesystem, etc).

The addition of WebAssembly bindings for direct DOM manipulation, and directly importing wasm binaries via a script tag, would complete the browser as the most sensible customer-facing front-end environment.

RE performance, look at Figma.

Browsers have one huge problem, and that is they reserve important hotkey space (like Cmd+W, or Cmd+F for searching, etc). So native apps will always provide a better experience.

Google Docs actually hijacks CMD+F.
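Roughly, the override looks like this - a sketch only, where `openInAppSearch` is a hypothetical stand-in for whatever in-app search UI the site provides. Note the asymmetry: pages are allowed to cancel the find shortcut, but Cmd+W/Ctrl+W is handled by the browser before the page ever sees the event.

```javascript
// Sketch of how a web app takes over Ctrl/Cmd+F (handler names are
// hypothetical). The predicate is kept separate so the shortcut logic
// is visible (and testable) on its own.
function isFindShortcut(key, ctrlKey, metaKey) {
  // true for Ctrl+F (Windows/Linux) or Cmd+F (macOS)
  return (ctrlKey || metaKey) && key.toLowerCase() === 'f';
}

// Wiring (browser only): suppress the native find bar, open our own.
if (typeof document !== 'undefined') {
  document.addEventListener('keydown', (e) => {
    if (isFindShortcut(e.key, e.ctrlKey, e.metaKey)) {
      e.preventDefault();   // cancels the browser's built-in find bar
      openInAppSearch();    // hypothetical in-app search UI
    }
  });
}
```

No amount of `preventDefault` rescues Cmd+W, though, which is the parent comment's point.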

And nowadays even websites that don't hijack the shortcut itself are non-"find in page"-able, due to the trick of disposing of unused/out-of-view components (which I suspect is actually what Docs is doing too, but they made their own find so people wouldn't get stuck).

Yup you're right, at least for Google Spreadsheets. GS is a canvas app, so what you see is exactly - and only - what is rendered.

Though, I think Google Docs is not a canvas app.

No internet? No web-app.

I suppose I am increasingly frustrated with the inability to use my computer if it has no internet....

>No internet? No web-app.

I've seen lots of tools that spin up a local webserver and then use that to serve the webapp even offline. But then the question becomes is this really a webapp if I have to install a native server?

Truly, that old quote about "computer engineering concerning itself with solving problems that didn't exist before computers" gets less funny every year.

This needn’t be true. You can use a service worker to make the code load when offline, and IndexedDB to store data.

As a trivial example, https://jakearchibald.github.io/svgomg/ (which has no data storage requirements) works just fine offline.

Offline support is a banner feature of the PWA (progressive web apps) movement.
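The core mechanic is simple enough to sketch. Assuming a hypothetical cache name and app-shell list, an offline-capable service worker boils down to a cache-first lookup like this (the cache interface is reduced to match/put so the strategy itself is visible):

```javascript
// Sketch of the cache-first strategy behind offline PWAs. The cache name
// and asset list are hypothetical placeholders, not from any real app.
const PRECACHE = ['/', '/app.js', '/app.css']; // hypothetical app shell

// Cache-first lookup: serve from the cache if possible, otherwise hit
// the network and remember the response for next time (i.e. offline use).
async function cacheFirst(cache, request, fetchFn) {
  const cached = await cache.match(request);
  if (cached) return cached;
  const response = await fetchFn(request);
  await cache.put(request, response);
  return response;
}

// Wiring (runs only in a service worker context): install pre-caches the
// shell; fetch answers from the cache, falling back to the network.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (e) =>
    e.waitUntil(caches.open('v1').then((c) => c.addAll(PRECACHE))));
  self.addEventListener('fetch', (e) =>
    e.respondWith(caches.open('v1').then((c) => cacheFirst(c, e.request, fetch))));
}
```

Which also explains the failure mode described downthread: if the service worker never registered, none of this runs, and you get the normal "You Are Not Connected to the Internet" page.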

I was excited to see how this worked, so I visited the page, clicked around to see what it did, disconnected from my network, and tried opening the page again in a new window. I got the usual browser error page: "You Are Not Connected to the Internet". So I connected to the network, loaded the page, then disconnected from my network, clicked the Demo button, and got: "Couldn't fetch demo SVG".

So by trial and error, I found that I need to load the page before disconnecting, and only use the 1st of the 5 buttons on the side (even "About" requires network connectivity). Then it works offline. Which is pretty cool!

But every native application I have works perfectly offline, and I don't need to do anything special ahead of time, or worry about which parts might not be available. There's a big difference between "some parts may work offline sometimes" and "entire app will definitely work offline always".

From decades of experience, when a "trivial" app struggles with demoing a feature, there's little chance it'll be widely supported among real apps. PWAs have been "any day now!" for 10 years now.

Sounds like the service worker didn’t register for you.

The logic that decides whether the browser accepts service workers is a little bit iffy. My vague recollection is that browsers default to something like “when you access the site for the second time, keep the service worker”. (Don’t quote me on that, and if anyone knows better, please correct me.) “Add this to your home screen” functionality will definitely make the service worker go. (That’s mostly for mobile browsers only at this time, though desktop browsers will eventually get it consistently available, as we’ve been promised for… hmm, about 8±2 years, I think.)

Privacy and blocking extensions may block service workers from being registered, too.

If the service worker is loaded, then it loads just fine with no internet connection, and the demo works too. The contribute and about links, being to GitHub, still don’t work.

I first actually discovered this completely by accident: I was offline, and thought “I need to shrink this SVG file, so I’ll open a tab to the SVGOMG URL and that error page will remind me to do it when I reconnect; huh, it loaded, must have a service worker. This is really great.” On later reflection, I realised that it being by Jake Archibald (a big mover in the service worker space) pretty much guaranteed it was going to work offline.

Service workers are an antipattern. I don't want to visit a website and have it install something that runs in the background.

If you, as a website owner, want me to run an application then you should ask for permissions. No ifs, no buts, no ors.

What a crazy mixed-up world: on desktop we ask if there's a future for native apps, while on mobile we ask if there's a future for web apps.

Controversial opinion: Yes, native desktop apps are the future.

Msft is pushing react-native-windows and macOS has Project Catalyst. React Native is making cross-platform native apps viable.

Apps are way more immersive than the browser, and they allow the developer to give a gamification experience.

What do you mean by “gamification experience?”

I had Revolut or Wealthsimple in mind, which both make trading/exchanging money more like a game and a wholesome experience.

I think because the app "forces" the user to concentrate and use the whole screen (especially on the phone), it enables a more immersive experience. Most browser apps feel like a transactional thing by comparison.

I've always been on the fence about this. I can see both sides, and don't have a strong opinion either way. But answering "is there still a place for native?" I think yes, for sure! I guess it comes down to whether it is really important to your philosophy as a developer, or whether your type of app could really benefit from native capabilities.

A good example of a recent app that chose to leverage native platforms is TablePlus. They are developing native apps on Mac, Windows, and Linux. I respect the effort/skill and dedication required to pull this off! https://tableplus.com/

Not yet for music production. DAWs, software synthesisers, drum machines - latency matters here.

I imagine DAWs, like many other apps, could be a very small native audio processing binary with a GUI on top that can be whatever it wants. Some are already Java UIs over a C++ backend (e.g. Bitwig). It should be possible to do an Electron or web frontend over the C++ backend too. Replacing the processing bits will be hard though. Driver interfaces aren't exactly the strong point of web dev.

Funny that the other day I discovered bandlab. It's not professional but for my hobby usage it's the best I've found

Same here - I was a Cakewalk user back in the 90s, and decided on a whim to find out if it was still around.

Native apps can take advantage of OS APIs that are much richer than those via the browser (or Electron) pass-through APIs.

For example, I've made a Mac app that lets you customize your Spaces (virtual desktops), assign names to them, jump to specific ones, track your time spent across them, and trigger custom events when you move to specific ones. None of this would be possible via a web app or electron app. Project homepage https://www.currentkey.com, app link: https://apps.apple.com/app/apple-store/id1456226992?pt=11998...

And as long as there is differentiated hardware between platforms, there will be opportunities for innovative native apps. For example, though I personally don't love the Touch Bar, there are interesting native app projects around that like Pock: https://github.com/pigigaldi/Pock

Checking your app, sounds very useful :) Thanks for building it

Enjoy! And thanks for the kind words

I would simply say that the OP should reverse the question: "in which cases can an electron app suffice for a desktop application" and not presume the death of desktop apps.

It's a very web-dev-centric view to imagine that this model is right for everything and will eat all software. There are clear performance and efficiency tradeoffs.

If you KNOW the constituent bits of the software stack of an Electron-style app, you will be horrified at what you are doing in the name of being 'normal, cool and popular'.

It would be considered extremely bad engineering if you whiteboarded the actual layers and proposed it, sans the singular justification of "there are lots of JS devs and this is considered easier than Qt, which requires a bunch of picky, expensive C++ devs".

More's the pity that the eventual use case is not the first priority in such a decision-making process.

"Background communication channel that should be kept open while the user's main productivity software is afforded the computer's resources for actual work" - that would suggest that Slack would be better written as a native or near-native app. Hello, McFly...

A native desktop app is the deluxe option. It'll always be more efficient than even the best web-technology-based app, because it can skip tens to hundreds of layers of abstraction (e.g. JavaScript, HTML, CSS, DOM) if done right.

So the questions are:

- do your customers care about performance? (gaming, 3D animation, music editing)

- are resource limits relevant? (embedded systems, mixing desks, broadcast, dvr)

- are people concerned about battery life? (pagers, medical equipment)

If none of these reasons for native apply, you can probably make your users suffer through a web app, which will be much cheaper for you to produce and maintain.

That said, people definitely notice the sluggishness that all web apps have. I mean those 100ms from click to screen update. So your customers will most likely be able to intuitively feel whether your app is native or not, with native feeling better.

For some groups of customers, this premium feel might be a decision factor. For example, Apple TV (super responsive) versus Netflix (objectively slow website).

IMHO native applications represent a valuable class of niche software tools that deliver very highly specialised functionality in concert with desktop software. Add-ons for MS Project and Excel abound, and there really isn't an equivalent for online tools, or indeed a viable or stable market for one.

The amount of ignorance in here masquerading as experience and knowledge is staggering.

My father was right about so many things. The one I have in mind right now is when he said "age is another form of strength against the nitwit, because those with experience see straight through those with misplaced confidence."

He was right. I'm not calling anyone here a nitwit or anything, please be clear on that.

It's just amazing how wrong some of you are, while sounding so absolutely sure of yourselves. A few don't even get easily researched technical details correct, while trying to sound authoritative.

My point is that maybe this is related to why software development is in the sorry state it is in today: the ignorant are confident they know it all, and the knowledgeable are confident that they know very little.

Whenever this sort of question comes up, I always look over at my macOS dock and see what's running or permanently docked there:

- Marked (Markdown previewer/converter): native.

- Transmit (file transfer): native.

- PDF Expert: native.

- ImageOptim (image optimizer): native.

- Fantastical: native.

- MailMate: native.

- Terminal: native.

- iA Writer: native.

- Safari: native.

- CodeKit: native.

- Dash: native.

- GitUp (GUI Git client): native.

- 1Password: native.

- Telegram: Electron.

And a few that aren't running now but I run very often:

- Slack: Electron.

- Visual Studio Code: Electron.

- BBEdit: native.

- Nova[1]: native.

- MacVim: reply hazy, ask again later[2].

So, I mean, I can't speak for everyone, but it doesn't seem to me like native apps are going away in the near future, at least.

[1]: Nova is a still-in-beta code editor I'm trying out as a possible replacement for VS Code. Code still "wins" on features for me, but Nova is pretty cool, and still in beta, so...?

[2] I mean, MacVim is a native "GUI," but it's, you know, Vim.

Telegram is Qt

Oops. Thanks!

...although I realized just now that I'm actually using the native client, so it should actually have been another "native" in my case. D'oh.


I think there are some applications or problems that are likely to be favourable to native desktop apps for a long time. For inspiration, simply look at what hasn't already become web based. Some things I thought of:

1. Heavy lifting - As others have mentioned, running some code in a browser is quite a few times slower than running it locally. As Moore's Law comes to a screaming halt, we're going to need to get better at creating efficient software rather than relying on the underlying hardware getting faster.

2. Capability - Some things are inherently difficult to do in the browser, such as custom networking, calling kernel functions, accessing various hardware, etc, etc. You can always have your native app launch a web-based front-end, but going back the other way is not possible by design.

3. Hardware Access - Sometimes you need to access a camera, USB device, GPIO, I2C, SPI, run architecture specific instructions on the CPU, access the GPU, etc, etc. Again, the browser typically won't let you access these by default.

4. Security - This comes in a few parts: (a) You're able to bypass "most" security and do what you want within reason. As long as the user ran your application you usually have the same privileges. (b) Now that you're dug in, you can enforce a level of security that may not easily be available otherwise. (c) Features such as app signing mean that the user can more easily guarantee the app came from you, rather than relying on their ability to read the exact URL in some email at 2am. If I run `apt-get install <X>` or the equivalent in other OSes, there is a chain of accountability.

5. Memory - Put simply, the browser adds massive overhead to any application and typically has inefficient data structures. Compare something like Atom [1] to any equivalent native editor, for example. (There are some existing efforts at comparing editors [2].)

[1] https://en.wikipedia.org/wiki/Atom_Editor

[2] https://github.com/jhallen/joes-sandbox/tree/master/editor-p...

Finda’s architecture[0] is great for this discussion.

On the one hand, you can say “look, an Electron app that’s actually fast!”

On the other hand, you can say “wow web apps are slow; it takes ~50x longer to render a basic list than to regex search across tens of thousands of strings”.

From a performance perspective, the JS part of the stack certainly isn’t helping.

0: https://keminglabs.com/blog/building-a-fast-electron-app-wit...

I have a slightly different take - I wrote my own CMS based on Elixir. It's a static site solution, which means it generates static HTML files that are then uploaded to a CDN (e.g. Netlify). My UI is done in VueJS and my database is actually inside of my application. I wrote a simple Electron wrapper combined with Docker in the background to deliver my CMS solution to my clients, and it has worked really well for me. The reason being, I don't need to collect my clients' data and store it on a central server; at the same time, my clients don't need to bother finding a hosting provider to maintain the site. They can just run this thing off their desktop, publish, and be done with it. What's nice is, if they need updates and new features, they pay, which supports me and my work as well.

In fact, the whole project started out as writing a replacement for Wordpress from scratch. At least 6 of my clients' websites got hacked and one of them had a million visits a month. Simply because of stale plugins (it's easier to accumulate them than you think). So, long story short, I absolutely believe there is a place for desktop apps even in 2020.

BTW, I plan to open source my CMS soon :)

(and I do write about my journey here - https://medium.com/build-ideas)

Oh yes there is. For one, thinking "desktop" is very very different than thinking "web".

Dealing with "state" is much better/easier/clear in desktop than in web.

An app on desktop, if well made, will be insanely more responsive than a web app. That's one thing - the other thing is there are cases where speed/resources will dictate that the app should be desktop. A simple example is a video editor (such as the one I'm developing, but that's beside the point). Sure, you can have a video editor as a JS app, but it will be incredibly trivial compared to a desktop app.

I'm not saying that you can't match any desktop feature on to web. I'm saying that some will take 10x+ time and resources (and thus, an insanely higher complexity) than desktop. And some features, they are simply not feasible to do on the web. Let me give you an example: time-remapping for a video editor (one thing that I'm gonna implement soon). This is such a complex issue, requiring advanced caching + lots of RAM + fast rendering, that implementing it in a browser is simply unfeasible TODAY.

As things become feasible on the web, lots of them begin by being 10x+ more complex than desktop (this gets lower in complexity in time), for one thing. And for another thing, that basically means more things that were unfeasible for desktop will now become feasible there (but still not feasible on web). And this cycle continues.

In conclusion - there will always be a place for native desktop apps IMHO.

It does feel better to use a native app. Marketing as a "premium" product could work.

In the short term, 3D MMOs are desktop only. In the long term everything goes back to being desktop, because the abstractions waste energy. Everything beyond vanilla HTML + CSS + JS for GUI is going away!

I'm also going to burn some karma and re-iterate that there are only two languages worth using: Java SE on the server (build everything yourself) and C(++) on the client. We need this to be understood so that fragmentation can be reduced!

IMO: To some extent, yes.

Here are some desktop applications I enjoy and have gladly paid for:

Acorn (image editor) / 1Password / Evernote+Skitch / Pingplotter / djay

I share the frustration with everything moving to slow and bloaty electron apps.

But wrt Electron apps and using web technology instead of native frameworks, I think it also depends a lot on how well the web code is designed. I've been prototyping a Matrix chat app [1] in a very minimalist way:

- no framework or library used for UI, "state", ... to have complete control over how and when things are updated and rendered.

- use IndexedDB optimally to keep as few things in memory as possible.

My 2 main conclusions from this are:

- Web technology can perform very well. The chat client uses only 3.5 MB of JavaScript VM memory on a 200-room account. As an installed PWA it visually outperforms some native apps on a low-end Android device. I attribute this to the fact that web browsers are very optimized.

- It takes more time to engineer an app properly like this, even in a language like javascript. I can imagine it's hard to justify the expense, when most people don't know who to blame when their computer is slow.

1: https://github.com/bwindels/brawl-chat

There are definitely niches for native apps, but they're just niches. It's like the massive C++-vs-everything-else debates in the 90s. Eventually, people only used C++ where it suited the niche, instead of using it for most things like in the 90s.

Economically, it's basically 'web first', or at least web by default. Not only is it cheaper, but it's also faster to iterate, since the logic of the major clients can be unified.

Development wise, most non-desktop developers hate native desktop development. There are too many quirks in different platforms.

PS: There is so much Electron hate in this thread. Sure, there are a lot of crappy Electron apps, but they were more likely assembled in like one afternoon, either because they're a personal hobby project or because the company wanted only working software and demanded its developers release it within one week. Given the same amount of investment, I believe Electron apps wouldn't fall too far short of native apps.

VS Code, like everybody mentioned, is pretty slick. And personally I feel Discord's Electron app and React Native app are pretty nice as well.

Just to add a counter-point to the readily expressed opinion here on HN that the web is terrible and we should all go back to programming Fortran: the only place for desktop apps is testing new APIs for new technologies, which the Web can adopt once they've stabilized. So if your phone gets some new breathalyzer sensor or something, and you can't interact with it via WebUSB or WebBT, then yeah, you're gonna have to drop down to the OS runtime to play with that until the standards bodies finish arguing about what a WebBreathalyzer API should look like. Or more likely, that never happens, because it turns out that few people need a phone with a breathalyzer. The web is the common denominator, the platform that runs on the most devices, supporting their most common features. That makes it clearly superior for applications which need to reach the broadest audience. That only grows as more and different platforms emerge. But there's always going to be something new to play with, and that's fine too.

it's crazy that we've reached the time where this is a question. it just feels like web browsers are just worse operating systems but that's somehow where everyone thinks things should be.

i generally hate using web-apps. the usability and performance just isn't there, and the web was not built for this use case. even now, it's a terrible experience.

for industrial engineering, scientific, creative jobs and more, almost everything you use aside from confluence is a desktop application. visual studio, visual studio code (line is blurred, but it's still a desktop app), solidworks, opera, other modeling software, xilinx vivado, matlab, visio, simulink, houdini, touchdesigner, unreal engine, logic, pro tools, studio one, VSTs, office suite, control applications, perforce visual tools, git gui clients, custom internal tools, etc.

all the real work gets done in desktop apps and yet people keep saying desktop apps are a thing of the past.

i truly don't understand what people's end game with web browsers and applications is.

I'm going to take the contrarian position here and say No, there is no place for native desktop apps. If a user has enough resources to run a graphical operating system, then they have enough resources for an extra copy of Chromium.

The exception to this is something either so simple that it only has one button (e.g. some file format converter) or so large (e.g. Photoshop, Bloomberg, AutoCAD, Mathematica, Visual Studio) that it surpasses the capabilities of the web platform.

Most things like chat, music, or word processing absolutely can and should be done with Electron or (imo) a WebView.

The reason Slack etc. is slow is not Electron, but that the JavaScript is probably very poor, bloated, and not optimized. I used to hate Electron for being slow but after using it I have changed my mind. The bottleneck is never on the Electron side; it's slow because of your code. VS Code is an Electron app and is snappier than many "native" editors.

Maybe your native editors just suck.

There are many people who have older hardware. For example, I write this on a 2009 Macbook. My parents use older hardware still. And we're relatively well-off in a rich western country. These machines are perfectly fine for almost all tasks, but they're not powerful enough to run ten copies of Chrome. We might easily be able to afford that newer 500€ laptop, but for many people that's multiple months of income. Not to mention the unnecessary waste the churn produces.

And sometimes there is absolutely no need to buy new hardware! I find it appalling that we need to upgrade our machines just because software gets more and more resource hungry. My parents likewise use old hardware to browse, edit documents and write emails. Back home I still run a 10-year-old PC with Linux and it just keeps running very well... No need to upgrade! I just think about the golden age of videogames, the software tricks developers came up with to make them run, and the hardware they used to run on. Today you basically need to constantly upgrade your GPU if you want to play the latest cool game. Ridiculous.

What about WebView then? Surely you can run 10 desktop apps that use WebView because it's just like having 10 Chrome tabs open.

Native apps are far superior to Web/Electron apps. Got 60 upvotes on HN before it was flagged so sharing it as a comment.


Of course!

I would rather reverse the question: In which situations is it acceptable to use Electron, for example? Something like Balena Etcher makes sense, but Logic Audio not so much.

Don't let hype and popularity on HN (not to mention the market share of JS, given the amount of web front-end work vs. desktop work) serve as a surrogate for measuring actual performance for a specific application.

Wrap-a-browser approaches and bloaty non-native frameworks are good for configuration ware, and things that don't need to engage in high amounts of real time processing saturating the CPU and RAM.

Many applications continue to squeeze required performance out of each platform. Cross platform approaches can serve just fine if there are zero-cost abstractions.

Audio/Video/3D/Image production software, for example. CAD, not to mention developers tools and compilers, just for the tip of the iceberg.

See, everyone likes to complain about Electron, but when I did a market study asking "would you pay for a full-featured native Slack app that was lightweight and designed with functionality in mind?" the answer I got was mainly "yeah, but no." As best as I can figure, it was mainly a feeling that they were entitled to it for free, as it's what Slack should have developed; they were not about to pay to correct a mistake they're "not responsible for", never mind that they're the ones paying the price for it.

(I wrote a fully native iMessage client for Windows 10 [0] and enjoyed it enough to consider building a product around the code, minus the iMessage component.)

[0]: https://neosmart.net/blog/2018/imessage-for-windows/

I paid for Ripcord.


I am constantly annoyed by the web apps, not only because they consume so much resources, but because of the noticeable UI lag that drives me crazy. For example, I have been entertaining the idea recently of building a native Todo app for macOS because of how slow Todoist has become in the past few years.

I'd check https://www.2doapp.com

It's super fast and featureful

Thanks for the suggestion. Looks very interesting, but a bit expensive. Looking to see if there is a trial somewhere...

Native applications will always have an edge, because they can do several things that are difficult or impossible in a browser:

1) Rich key combo support. When running your app inside a browser, many key combos are reserved by the OS (and they differ by OS), and many are reserved by the Browser (and they differ by browser). As a result, your app has to avoid a huge number of key combos, because some OS or browser uses those.

2) Latency. It's not impossible to make a fast web app, but you're already at a disadvantage, due to the inherent overhead of the browser and JS runtime. Put it this way: making a user experience that feels slow and sluggish in a native app requires a lot more mistakes than doing the same thing in a webapp.

3) Filesystem support. It's just better with native. Especially on Windows where you can fully customize the file-open dialog box with app-specific business rules and warnings.

4) Hardware. You'll always be at the mercy of the underlying browser's support for hardware. Need to allow the user to switch between sound devices? This is easy with a native app, but it may require going to the browser control panel if you're a webapp.

5) Leaky abstractions. As a user, I want to open an app and do everything inside of that app. When using a webapp, I may have to fiddle with browser settings, key combos may break me out of the immersion as I accidentally open some browser toolbar or help feature, and the browser toolbars and window is always there to distract me.

6) Updates. With a desktop app, it can show me an alert when it's time to update, and I can choose to update now, or do so later. With a webapp, the updates are normally locked to a browser refresh (I need to refresh the page to get the update, and the update will happen whether I want it or not once I reload). Sometimes, the app decides it's time to update and just force-reloads itself (in the case of an app window I've left open for too long - days or so, while working on something important).

It depends on what you’re developing, who the target users are and how much you need to charge to sustain yourself. It also depends on your skills and how much time you’re willing to invest in creating a desktop application (doing one that’s cross platform, that performs well and works like a native app on each platform would take a significant amount of effort).

Native desktop apps targeted at the average user are better done on macOS, since that platform has a higher percentage of users who will pay (compared to the percentage of users who’d just pirate it).

Applications targeted at professional users, corporate users and developers can get an audience that’s willing to pay on any platform.

If your application is better done as a service, and you’d like better control on managing versions, a web based SaaS might make sense.

>Native desktop apps targeted at the average user are better done on macOS, since that platform has a higher percentage of users who will pay (compared to the percentage of users who’d just pirate it).

That was always the standard wisdom. However I have released the same software on Windows and Mac and seen significantly worse piracy for the Mac version.

I always go for a native app over a web app/Electron app. Native will most likely have the familiar idioms of the OS, lower overhead, and take advantage of platform-specific features. I happily pay money for native apps. I generally don't pay for blobs of JS in app form.

O God, please no! I can't see anything requiring true low-latency real time performance (like audio production DSP) ever being fast enough on the web compared to native. Also, when everything went 64 bit in recent years, developers got really lazy about memory management.