Ask HN: Is there still a place for native desktop apps?
524 points by Jaruzel 15 days ago | 752 comments
Modern browsers these days are powerful things - almost an operating system in their own right. So I'm asking the community, should everything now be developed as 'web first', or is there still a place for native desktop applications?



As a long-time Win32 developer, my only answer to that question is "of course there is!"

The efficiency difference between native and "modern" web stuff is easily several orders of magnitude; you can write very useful applications that are only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions.
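To make that concrete, here is a minimal sketch (illustrative only; the class and window names are made up) of the kind of tiny Win32 program I mean. It uses only APIs that have existed since Windows 95 and compiles to a single small .exe with no extra runtime beyond user32/kernel32:

    // Minimal sketch of a tiny native Win32 app ("TinyApp" is a made-up name).
    #include <windows.h>

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
        if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nShow) {
        WNDCLASS wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
        wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
        wc.lpszClassName = TEXT("TinyApp");
        RegisterClass(&wc);

        HWND hwnd = CreateWindow(TEXT("TinyApp"), TEXT("Tiny native app"),
                                 WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                                 640, 480, NULL, NULL, hInst, NULL);
        ShowWindow(hwnd, nShow);

        MSG msg;                                   // standard message loop
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return 0;
    }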

Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should waste it delivering the same or even less functionality than we had on the machines of 10 or 20 years ago.

For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II. Even the web stuff at the time was nowhere near as disgusting as it is today --- AJAX was around, websites did use JS, but simple things like webchats still didn't require as much bloat. I lived through that era, so I knew it was possible, but the younger generation hasn't, so perhaps it skews their idea of efficiency.

In terms of improvement, some things are understandable and rational, such as newer video codecs requiring more processing power because they are intrinsically more complex and that complexity is essential to their increase in quality. But other things, like sending a text message or an email, most certainly do not require more. In many ways, software has regressed significantly.


I recently had to upgrade my RAM because I have Spotify and Slack open all the time. RAM is cheap today, but it is crazy that those programs take up so many resources.

Another program I use a lot is Blender (3D software). Compared to Spotify and Slack it is a crazily complicated program with loads of complex functionality. But it starts in a blink and only uses resources when it needs them (for calculations and your 3D model).

So I absolutely agree with you.

I also think it has to do with the fact that older programmers know more about the cost of resources than younger programmers do. We used computers without hard disks and with only KBs of RAM. I always keep this in mind while programming.

The younger programmers may be right that resources don't matter much because they are cheap and available. But now I had to upgrade my RAM.


It really surprised me when I downloaded Godot only to get a 32MB binary. Snappy as hell.

Web apps masquerading as desktop apps are terribly slow and it's a surprise we've got so used to it. My slack client takes a few seconds to launch, then it has a loading screen, and quite often it will refresh itself (blanking out the screen and doing a full-blown re-render) without warning. This is before it starts spinning the fans when trying to scroll up a thread, and all of the other awkward and inconsistent pieces of UI thrown in there. Never mind having to download a 500MB package just to use a glorified IRC client.

I'm really enjoying writing code outside of the browser realm where I can care a lot more about resource usage, using languages and tools that help achieve that.


It's interesting to compare Ripcord[0] to Slack. Ripcord is a third-party desktop client for Slack and Discord. It has something like 80% of features of the official Slack client and a simpler UI (arguably better, more information-dense), but it's also a good two orders of magnitude lighter and snappier. And it also handles Discord at the same time.

--

[0] - https://cancel.fm/ripcord/


I wish so much that 3rd party clients weren't directly against the TOS of Discord. I sorta miss the old days where it seemed like anyone could hook up to MSN/Yahoo/AIM.


I wish that too. More than that, I keep wondering whether there could be a way to force companies to interop, because right now you generally can't, without getting into some sort of business relationship with these companies. That's the problem with services - they take control of interop, and the extent to which interop is allowed is controlled by contracts between service providers.

Where in the terms of service does it say that third party clients are disallowed?

(It doesn’t.)


While it does not explicitly state that, it does say:

"(ii) copy, adapt, modify, prepare derivative works based upon, distribute, license, sell, transfer, publicly display, publicly perform, transmit, stream, broadcast, attempt to discover any source code, reverse engineer, decompile, disassemble, or otherwise exploit the Service or any portion of the Service, except as expressly permitted in these Terms;" [1]

Given that the API is not public if you are not using a bot key, I would think that using it with a third party client would take some form of reverse engineering.

The devs also stated that other client modifications like betterDiscord are against the TOS.

[1] https://discord.com/terms (under Right To Use The Service)


Ripcord isn't a modification of their software. It's an original implementation. I didn't look at any of their code.

and is written by a single person, in Qt


ripcord is amazing. I bought it right away because in its current form, it's already worth the money

LMMS (https://lmms.io/) is a full blown DAW that's only 33MB as well


Just remember that it's entirely possible to do awkward and inconsistent UI in native apps, and there's a very long tradition of it.

But at least it's generally faster when you do it!


> I also think it has to do with the fact that older programmers know more about the cost of resources than younger programmers do.

I'm not convinced it's the programmers driving these decisions. Assuming that it takes less developer effort - even just a little - to implement an inefficient desktop application, it comes down to a business decision (assuming these are programs created by businesses, which Spotify and Slack are). The decision hinges on whether the extra cost results in extra income or reduced cost elsewhere. In practice people still use these programs, so it seems the reduced income is minimal. What's more, the "extra cost" of a more efficient program is not just extra expense spent on developers - it's hard to hire developers, so you probably wouldn't just hire an extra developer or two and get the same feature set with greater efficiency. Instead, that "extra cost" is an opportunity cost: a reduced rate of implementing functionality.

In other words, so long as consumers prioritise functionality over the efficiency of the program, it makes good business sense for you to prioritise that too. I'm not saying that I agree with it, but it's how the market works.


> In other words, so long as consumers prioritise functionality over the efficiency of the program, it makes good business sense for you to prioritise that too.

And the kicker is, consumers don't have a say in this process anyway. I don't know of anyone who chose Slack. It's universally handed down onto you from somewhere above you in the corporate hierarchy, and you're forced to use it. Sure, a factor in this is that it works on multiple platforms (including mobile) and you don't have to worry about setting it up for yourself, but that has nothing to do with the in-app features and overall UX. Or Spotify, whose biggest value is that it's a cheap and legal alternative to pirating music. And that value has, again, nothing to do with software, and everything to do with the deals they've managed to secure with artists and labels.

I exercise my preferences wrt. Slack by using Ripcord instead of the official client. Most people I know exercise their preferences wrt. Spotify by using YouTube instead (which is arguably lighter resource-wise). And speaking of alternative clients, maybe that could be the way to go - focus on monetizing the service, but embrace different ways of accessing it. Alas, time and again, companies show they prefer total control over the ecosystem surrounding their service.


> And the kicker is, consumers don't have a say in this process anyway. I don't know of anyone who chose Slack. It's universally handed down onto you from somewhere above you in the corporate hierarchy, and you're forced to use it.

The consumer here is the business itself, not their employees.


Technically yes (well, the customers, not consumers), but that's the problem itself: the feedback pipeline between end-users and producers is broken because the end-users aren't the customers.


Maybe we need power-consumption labels for software, like we have for dishwashers. (No joke)


As a younger developer I'd say I agree. But it's not just developers being used to resources being plentiful.

I do webdev mostly, and there it's also a matter of management. I want to optimize applications to be less hungry; those are interesting challenges to me. But I've been told by management to just upgrade the server. Either I'd spend a day optimizing, and maybe fixing the issue. Or we just spend 50 euros a month more on a server.

Sometimes the optimization is not worth the effort. For applications like Blender? Optimization means a lot.


> Either I'd spend a day optimizing, and maybe fixing the issue. Or we just spend 50 euros a month more on a server.

So, discounting additional effects like a more satisfied user base, the optimization would pay for itself in a year. And optimizations stack.
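Rough numbers, just to spell it out (my assumptions, not the parent's): 50 euros/month × 12 = 600 euros/year of extra server cost, so a single day of optimization pays for itself within the first year whenever a developer-day costs less than roughly 600 euros, and it keeps paying every year after that.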


Yes, that was my thought process as well. But management didn't agree. To them, the short-term cost of me optimizing the problem was higher than the long-term costs would be.

Something I've noticed over my career: programmers tend to get super beefy machines. My machine has 64GB of memory and 12 cores. But the typical users of our software don't have anywhere near those specs --- yet programmers often just say "it worked on my machine" without a thought about the specs.


Same problem exists with designers and monitors.


I like to imagine 20 years in the future we’ll see articles posted on HN, or whatever the cool kids are reading by then ;)

... articles with titles like:

“Slack in one Ruby statement” a la https://news.ycombinator.com/item?id=23208431

More seriously though, Spotify and Slack are optimised to intentionally be huge time wasters, so it makes sense the organisations that produce them don’t care about performance / efficiency.


Most Spotify user-hours are probably office workers or students pumping music into headphones while working. If anything it's a productivity application because it trades flagrantly unnecessary resource usage (streaming the same songs over and over) for users' time (no more dicking around crafting the perfect iPod).

On the topic of flagrantly unnecessary resource usage...

My first child was born six months ago. Newborns (we discovered) sleep better with white noise. So of course we found a three hour white noise track on Spotify and played it from our phones for every nap, never bothering to download it.


I find it hard to believe that at least some of that data wasn't cached on your device. Setting a track to be downloaded just means the cached data is evicted differently. If you run their desktop client with logging enabled you'll see this happening, and I'd say it's likely to be the same across platforms. That is of course the actual reason they have a non-native app - to reuse the codebase and save money.


> But now I had to upgrade my RAM.

But I can't. My RAM is soldered on. How many tons of carbon dioxide should I emit so that you can use React? There are ways to do declarative UI/state management without the DOM...


If carbon footprint is that important to you, maybe you should find ways to encourage companies not to solder on RAM instead.


Why not both?

My computer still computes with 2 GB of RAM. It's just that developers are gluing more and more stuff together to do things we did on Pentium processors with 64 MB of RAM.


No. Just because soldered-on RAM is bad doesn't mean that bad code is OK.


I guess the question becomes: what is the native ecosystem missing that means devs are choosing to deliver memory/CPU hungry apps, rather than small efficient ones?


(Easy) Cross platform publishing.


HTML, CSS and JavaScript. Most of these Electron apps are basically wrappers around actual websites, to give them a place in the dock, show notifications and access the filesystem.


But that isn't what's missing. It's a restatement of the problem. DOM-based apps are much more resource intensive than native. What is missing from native that makes businesses choose the DOM?

If there were some modern tool like wxWidgets that supported modern APIs like the DOM, Android and UWP, would we see more use of native? Electron would then become pointless.


That is what's missing, albeit tersely stated.

The hypothetical business has two choices: choose Electron, or choose some other toolkit that has native, cross-platform support (like Qt). It's far easier for the business, and the developers there, to take their existing website HTML, CSS, and JavaScript, wrap it in Electron (which costs $0), and call it a day. Every other choice is (perceived as being) more expensive.

Qt is a modern toolkit with native cross-platform support, but it costs money for commercial use, and businesses and software developers don't want to spend the money on it.
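For what it's worth, the Qt side isn't much code either. A minimal Qt Widgets sketch (illustrative, not any particular product) looks like this, and the same source builds natively on Windows, macOS and Linux:

    // Minimal Qt Widgets sketch; build against Qt Widgets with CMake or qmake.
    #include <QApplication>
    #include <QLabel>

    int main(int argc, char *argv[]) {
        QApplication app(argc, argv);          // one event loop per process
        QLabel label("Hello from a native cross-platform toolkit");
        label.setWindowTitle("Qt sketch");
        label.resize(400, 120);
        label.show();                          // native window on each platform
        return app.exec();                     // runs until the window closes
    }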


Qt Quick takes plenty of ideas from the web playbook.

https://en.wikipedia.org/wiki/QML


As someone who has done both desktop apps and Electron apps: it is much faster to write some HTML/CSS and wrap it in Electron than to do the same in Qt/GTK/etc.

Not to mention, the HTML/CSS combo is possibly the best we've come up with for designing user interfaces.


If you don't mind me asking, how much RAM did you have before, and what did you upgrade to?

I recently got a new PC myself and decided to go for 16GB, my previous one (about a decade old) had 8GB and I didn't feel I really hit the limit, but wanted to be future proof. Because as you said, a lot of 'modern' applications are taking up a lot of memory.


I also went from 8GB to 16GB recently (virtual machines are hungry); but I had gotten rid of Slack even before that. I mean, yes, it has round edges and goes ping and has all those cutesy animations - but 2GB of RAM for a glorified IRC client, excuse me, what exactly is it doing with billions of bytes worth of memory? ("Don't know, don't care" seems to be its developers' mantra)


The answer is JIT'ing JS.

For each of your Electron apps there is a little compiler chugging away.


Back in the day, we didn't have 2GB of RAM total, much less just for a compiler!


I upgraded from 8 to 16GB. But I'm in the process of ordering a new desktop that will have 32GB.

Spotify and Slack are not problematic as individual programs but since I have a lot of other programs open they are the ones that take up more memory than they should. I mean: Spotify is just a music player. Why does it need 250MB RAM?


Because it is not just a music player. It plays music, aside from giving you an entire experience that consists of connecting with your friends, managing your music library, consuming ads, having a constant connection with the server, and ...

This was meant to be sarcastic, but I'm not even sure how to continue. Maybe someone else can bulk up that list to get to something that requires 250MB. :)


64GB RAM is borderline reasonable today. Why not jump to that?


It probably uses it for buffering.


I work on a desktop CAD / CAM application, and I need every one of the 12 cores and 32 GB RAM on my windows workstation. I know this because I also have a mac workstation with lower specs (16 GB RAM, don't know offhand how many cores) and developing on it is intolerable (let's play "wait an hour to see if clang will compile my changes" - I know, I know, I should read the C++ standard more carefully so I'm not disappointed to discover that MSVC was overly permissive).

Parenthetically, we do use Slack and I am double-dipping on a lot of heavy functionality by having both Spacemacs (which I use for code editing and navigating / search within files) and Visual Studio (which I use for building, debugging, and jump-to-definition) open at the same time.


Pair that with some Discord, VS Code and Chrome, and all of a sudden my 16GB is getting maxed semi-regularly.

Just had to upgrade to 32 myself


m.spotify.com..?

You are looking back at the past with rosy goggles.

What I remember from the time was how you couldn’t run that many things simultaneously. Back when the Pentium II was first released, I even had to close applications, not because the computer ran out of RAM, but because the TCP/IP stack that came with Windows 95 didn’t allow very many simultaneous connections. My web browser and my chat were causing each other to error out.

AJAX was not around until late in the Pentium II's lifecycle. Web pages were slow, with their need for full refreshes every time (fast static pages were an anomaly then as now), and browsers' network interaction was annoyingly limited. Google Maps was the application that showed us what AJAX really could do, years after the Pentium II was discontinued.

Also, video really sucked back in the day. A Pentium II could barely process DVD-resolution MPEG-2 in realtime. Internet connections generally were not the several Mbit/s necessary to get DVD quality with an MPEG-2 codec. Increasing the resolution increases the required processing power geometrically. Being able to Zoom call and see up to 16 live video feeds simultaneously is an amazing advance in technology.

I am also annoyed at the resource consumption, but not surprised. Even something "native" like Qt doesn't seem to use the actual OS-provided widgets, only imitate them. I figure it's just the burden we have to pay for other conveniences. Like how efficient supply lines mean consumer toilet paper shortages while the suppliers of office toilet paper sit on unsold inventory.


FWIW i do not remember having issues like that, i had mIRC practically always open, a web browser, email application, etc and i do not remember ever having networking issues.

Internet was slow but that was largely because for the most part of the 90s i was stuck with a very slow 2400 baud modem - i got to appreciate the option that browsers had to not download images by default :-P.

But in general i do not remember being unable to run multiple programs at the same time, even when i was using Windows 3.1 (though with Win3.1 things were a bit more unstable mainly due to the cooperative multitasking).


Me neither. I'm not going to lie and say that I had 40 applications open, but I DID have 5-10 apps using the web with zero issues (a browser + IRC app + email client + mICQ + MSN Messenger + Kazaa/Napster + Winamp in stream mode).

Very very few of the web and desktop applications of today are as snappy and user-friendly as classic Winamp.


It really whipped the llama's ass.


It still does! https://www.winamp.com/


I still use Winamp for offline music. Nothing else is faster.


I did use foobar2000 for a while and it was quite snappy.

Try AIMP 3. It's amazing and is an improvement over Winamp.


And yet, people stopped using it.

I used to use it all the time. Now I use Spotify instead. I'm not sure I want to go back to curating my own collection of mp3's again.


Sure, but if you could have Spotify as is, or a lightweight player like Winamp, both with equal access to the Spotify service, which would you pick?

People aren't using Spotify because the player is fantastic; they use it because Spotify has a huge library, is reasonably priced, and the player is sort of okay.


Totally agree. But the DRM monster rears its head. Everyone is afraid you'll steal their choons if you're allowed to play them on whatever player you like sigh


Still, all iTunes content can be de-DRM-ed in 500 lines of C code, so it's not like "the industry" actually requires it to be secure.

Like everything these days, it's barely good enough. And why bother implementing your DRM as a 1KB C++ library when you can use a 5MB Objective C framework instead?


That's an artifact of IP laws. The reason you don't have to curate your own mp3s again is because some service managed to find a way to give you a searchable, streamable collection of music that's also legal. But that in no way implies Spotify needs to be so bloated.


Spotify is a case in point: it used to have a fantastic, small and fast native desktop app. It replaced it with the bloated web-based one we see today.


In a better world, you really wouldn't need to. Winamp was great - the weak point was always the playlist editor, but Winamp's interface for simply playing music and seeing what was up next was wonderful. Spotify could provide a playlist plugin that simply gave a list of URLs, or let you download one that lasted X hours.


Mobile phones happened. Tiny memories while always being connected.


Same here. About the only time my browser + mIRC + Winamp + IM + Visual C++ 6.0 combo slowed down was when VC++ was compiling the game I was working on. I would then close the IM, because doing so would speed up the compile times by 1.5x.


> Very very few of the web and desktop applications of today are as snappy and user-friendly as classic Winamp.

foobar2000?


IRC, SMTP, IMAP are protocols from back when desktop operating systems didn’t even come with TCP/IP. They would use a single connection for unlimited messages. I was using a “modern” chat program, AOL Instant Messenger.

Alright, the missing part of my story was that I was also using a proxy program to share a single connection with my brothers. NAT wasn’t widely available yet, and Winmodems were much easier to find than hardware modems. (And I hadn’t discovered Linux and the free Unixes yet.)

So, every TCP connection that AIM made was 2 connections in the proxy program. We quickly discovered that AIM on more than one computer at a time made the entire Internet unusable.

Every generation of developers decries the next generation for bloat, but Windows 95 had preemptive multitasking that made the computer so much snappier (plus other features), at the cost of multiple times more RAM needed than Windows 3.1. (16 MB was the unofficial minimum, and often painfully small. Microsoft’s official minimum was impractical back then.) Windows XP had protected memory that made it more feasible to run multiple applications, because they were much less likely to crash each other (plus other features, including a TCP/IP stack featuring NAT and a useful connection limit), at another several multiples more RAM needed.

There have always been tradeoffs. Back in the day, programs were small and developers focused more on making sure they did not crash, because they didn’t have lots of RAM and crashing would often require the computer to reboot. That developer focus meant less focus on delivering features to users. (Also, security has often meant bloat.) Now, you barely need to know anything about computer science, and you can deliver applications to users, at the cost of ginormous runtime requirements.


It may be true that people are partially looking back through rose-tinted glasses, but there's more than just an inkling of truth to their side. Casey Muratori (a game developer who worked on The Witness) has a really good rant [1] about bloat in Visual Studio specifically, where he demonstrates load times and the debugger UI updating today vs. on a Pentium 4 running XP. Whether or not you attribute the performance difference to new features in Win10/VS, it's worth considering that workflows are still being impacted this significantly on modern hardware. We were able to extract hundreds of times more out of the hardware, and we gave it up for ???

[1] https://www.youtube.com/watch?v=GC-0tCy4P1U


The Visual Studio 6 on Pentium 4 demonstration starts around the 36th minute.

I used Visual Studio 6 for years, and yes, I can confirm, it was really that fast.

It's also not true that there were problems with more applications running, etc., as "Decade" claims. Or to be more precise, there were no problems if one used Windows NT, and I used NT 3.51, NT 4 and 2000 for development, having started Windows development even before they were available. Before that, Windows 3.x was indeed less stable, but that is the time before 1995. Note that the first useful web browser was made in 1993; the internet as we know it today practically didn't exist. There were networks, but not the web.


Maybe it’s possible for opposite things to be true if they happen to different people. I wasn’t a developer back then.

Windows NT required several times more RAM to run than the consumer versions of Windows (oh no, bloat!), and was much more picky about what hardware it ran on. Starting with XP, the professional and consumer versions of Windows merged. We are so lucky.


> I wasn’t a developer back then.

That explains your inaccurate perspective.

> Windows NT required several times more RAM to run than the consumer versions of Windows (oh no, bloat!), and was much more picky about what hardware it ran on.

Allow me to claim that that is also not true, in the form you state it. Again, I lived through all this, and I can tell you what it was about. The "pickiness" of NT, even at the time, was not about motherboards and chipsets. It was about consumer hardware devices -- things that probably don't even exist as products today, like a black-and-white hand scanner that scanned as you moved your hand over the paper and shipped with only Windows 3.x drivers on a floppy. There was never a problem getting a developer machine running NT in any reasonable price range, with a reasonable graphics card, monitor, keyboard and mouse. And, at the start, a phone-line modem transmitting a few kilobytes per second!

The RAM requirements did exist, but again not as large as later distortions would have you believe. If I remember correctly (it changed relatively fast), when NT shipped, Microsoft had to claim it would run in 4 MB -- the OS, the programs and the graphics all had to fit. Let me repeat: 4 MB. It ran, but not comfortably for bigger programs. But the point is, as soon as you had 8 MB at that time, you had no problem. A little later, 16 MB was more than enough for comfortable work. It cost a hundred, two or three hundred dollars more than the cheapest possible configuration (yes, those were the prices then), but that was it. RAM was the only thing you had to care about to have NT running.

The point is, at that time there were plenty of people who didn't want to use Windows NT at all, clinging to 3.x and then 95, and they are the ones who spread the horror stories about OS problems. But it was just their ignorance. 95 was also reasonably stable, unless you used, like many did, "utility" programs that were more malware than of real use (the "cleaning", "protection" or even "RAM expander" snake oil was used by some even then -- not to mention that a lot of people believed they had to try any program that happened to come their way).

The good development tools were good and stable, especially the command-line ones (in the GUI space there was some snake oil too). But Word did crash even under NT, and even during the first half of the 2000s -- that's a completely different story, and that was intentional at that time for those products.


> That explains your inaccurate perspective.

Yep, you really succeeded at empathy, there. /s

> reasonable price range

The word “reasonable” is doing a lot of work, here.

Most Pentium II systems were not running Windows NT. They were running Windows 95 or 98, which had arbitrarily severe limitations and lacked memory protections.

So, while it was technically possible to run lots of applications simultaneously on 256 MB of RAM, for most people it was a fun adventure in whether some buggy program had destabilized the system into needing a reboot to run properly again, or whether it was still usable with degraded functionality. In my case, that was without using the cleaning, protection, or RAM expander programs.

And even on professional operating systems, web browsers crashed a lot, and any other program that had to deal with untrusted input - which is basically anything that can open files or connect to the network - has gradually bloated as it learned security or added features.


> Most Pentium II systems were not running Windows NT. They were running Windows 95 or 98

Once again: only somebody using a computer not selected for serious development used Windows 95 and 98. No developer who knew what he was doing was using Windows 95 or 98 as his primary development machine. So if you complain about that, you used the wrong tool for your work. Like I've said, it was easy to install Windows NT, and I don't know of any computer that wasn't able to run it, provided it had reasonably enough RAM.

> on 256 MB of RAM

To illustrate "reasonably" once again, that changed around those times: I remember buying an AMD-based notebook in 2002 with 256 MB and running Windows 2000 on it absolutely without problems for a few years, before upgrading to 512 MB, which was the maximum for that notebook. And that was the time of the Pentium III and 4, not the Pentium II, and like I've said, I ran Windows NT on 8 MB computers, all with compilers, resource editors, debuggers and even an IDE. And even before that, I ran Windows 3.11 on a 2 MB computer and used that for development too (the development tools being in text mode, of course).

> some buggy program has destabilized the system into needing to reboot to run properly again

Only on non-NT systems, and surely not because of developer tools. I used Windows 3.x and Windows 9x, and never had to reboot due to the developer tools "making the system unstable." Not even on a 4 MB or a 16 MB machine.

> web browsers crashed a lot

I used both Mosaic and Netscape, and before 2000 my main problem was surely not them crashing. Surfing mostly worked (only the pages loaded slowly; there were no CDNs then). Again, on an NT system.


I think we’re losing the plot. The ggp post was about doing all sorts of Internet programs at the same time on Pentium II era computers, and now you’re talking about developer tools on a Pentium 4.

Maybe it’s simultaneously true, that you could run many developer tools at the same time on Windows NT with hundreds of dollars of RAM, and attempting to run a bunch of consumer network programs at the same time (especially on consumer Windows) was asking for trouble.

I remember one of the attractions of IE 5 back in the day was how each newly launched window was its own process (not windows opened by the open link in new window menu option), so unlike Mosaic and Netscape, a crash in one copy of IE did not necessarily bring down all the other windows. Multiple windows being useful because surfing with a modem was slow regardless of CDN. Remember when Yahoo was scandalous, because banner ads took so much bandwidth?


> on Pentium II

> and now you’re talking about developer tools on a Pentium 4.

It's to illustrate that the arguments are wrong: it's Decade who uses "256 MB" as an argument, which is not "small memory" for a Pentium II, and I illustrated that it was common for notebooks in 2002, when the Pentium 4 was already common in developer machines.

> The ggp post was about doing all sorts of Internet programs at the same time on Pentium II era computers

Let me check again:

"For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II."

OK. That is also obviously a bit off. 256 MB with a Pentium II is quite a lot; as I showed, 256 MB was normal for notebooks even in 2002, when the Pentium III was already common in notebooks and the Pentium 4 in desktops. Working with email: at that time email clients, if they used HTML at all, were limited to the HTML formats of that era, so "using email" completely worked, with no system crashes on NT (Outlook did have a limit of a single PST having to be less than N GB, I remember that). IM also just worked, likewise without crashes on NT.

That leaves "video/audio calls". Video calls were certainly not common at that time, and I personally didn't use audio calls either.

But the "stability" problems you claim were common definitely didn't exist the way you describe, as long as one used NT -- that is, from around 1994 -- or later Windows 2000, or later still XP or Server 2003, all NT-based. And as I've said, it was not that "too much" RAM was needed, as I ran NT on 8 MB with no problem.

So I still don't understand why you continue to stick to a narrative that is simply not true. No, it was not as bad as you claim. Computers were quite stable even then for those who knew what they were doing. On NT, almost nothing crashed the system except failed hardware. Like I've said, some apps were indeed less stable, like Word crashing or saving an invalid DOC file. But Excel, for example, despite being in the same "suite", I don't remember ever crashing. I also don't remember browsers actually crashing, just pages downloading very, very slowly.


The 256 MB number came from the ggp post. At the beginning of the Pentium II era, that was very expensive, but it was not the only issue with running multiple programs at the same time.

But clearly you want to have the last word, so I guess I should let you have it.


We gave it up for slightly higher profit margins enabled by hiring slightly less qualified programmers at a slightly lower rate.

In a similar vein, Industrial Light and Magic used to have a few highly talented people crafting incredibly intelligent solutions to make their movies possible: https://youtu.be/AtPA6nIBs5g

By now, most of those effects would instead be done using CGI and outsourced to Asia.


There's probably a long rant waiting to be written on this topic. Myself, I've observed how over the last four decades, CGI effects went from worthless, through novelty, through increasingly awesome, all the way to "cheapest garbage that can be made that looks convincing enough when the camera is moving very fast".


A Pentium II could barely process DVD-resolution MPEG-2 in realtime.

According to http://www.vogons.org/viewtopic.php?p=423016#p423016 a 350MHz PII would've been enough for DVD, and that's 720x480@30fps; videoconferencing would more commonly use 320x240 or 352x288, which have roughly a quarter of the pixels, and H.261 or H.263 as the codec instead.

Being able to Zoom call and see up to 16 live video feeds simultaneously is an amazing advance in technology.

I'm not familiar with Zoom as I don't use it, but it's very likely you're not actually receiving and decoding 16 separate video streams; instead an MCU or "mux box" is used to combine the streams from all the other participants into one stream, and they do it with dedicated hardware.

That said, video is one of the cases where increased computing power has actually yielded proportional returns.


> videoconferencing would more commonly use 320x240 or 352x288 which has 1/4 the pixels, and H261 or H263 instead as the codec.

Modern videoconferencing solutions (WebRTC) usually use 1280x720 and either H264 or VP8. Some apparently use HEVC. Also most modern processors and SoCs come with hardware-accelerated codecs built in, so most of the work related to compression isn't even done by the CPU itself.

> I'm not familiar with Zoom as I don't use it, but it's very likely you're not actually receiving and decoding 16 separate video streams

Yes you are receiving separate streams.


Don’t think it can be an MCU box. You can select an individual stream from the grid to make it larger almost instantly. The individual feeds can display both as grid and a horizontal row. I’m assuming they send individual feeds and the client can ask for feeds at different predefined resolutions.


Without having used Zoom much I can't definitively say how it works, but I've used BlueJeans quite a bit and noticed compression artifacts in various parts of the UI (e.g. text underneath each video source). That means BlueJeans is definitely muxing video sources and it really does not have a noticeable delay when changing the view. Since each video is already so compressed I think they can get away with sending you really low bitrate streams during the transition and you'll barely notice.


Mixed plus whichever feed you request to enlarge sounds more reasonable.


With Skype you're definitely able to receive separate streams from each participant, as I can access them individually via NDI and pipe them into OBS to do live multi-party interviews. You can see the resolution of individual feeds change when their bandwidth drops, and you can choose high/low bandwidth and latency modes for each feed. I would guess Zoom does the same but doesn't provide an NDI feed (yet).

I have an iMac G4 from 2003 (the sunflower things) on which I installed Debian PPC and it is able to stream 720p content from my local network and play it back smoothly on VLC


I could see Street View-like vistas on a Pentium 3/AMD Athlon. On power: I did the same things you can do today, but with an Athlon XP and Kopete. On video: from BeOS and MPlayer onward I could multitask just fine while playing XviD movies that were good enough for the era.


To be fair, 10-20 years ago was the age of Windows XP and Windows 7, not Windows 95. There was barely anything good about Windows 95, and there are likely not many people missing it, but it was also a completely different era from the later "modern" desktops, hardware- as well as software-wise. If anything I would call that era the alpha version, problems included.


Most of those have nothing to do with OP's point, which is that some software uses way more processing power than it should.

While on the topic, let's remember the speech recognition software available for Windows (and some for Android 2.x) that was completely offline and could be voice activated with, gasp, any command!

Google with its massive data centers can only do "OK/Hey Google". Riiight. I can't believe there are actually apologists for this bs.


Do you mean Dragon Speech, or whatever it was called?

Anyway, old speech recognition software was quite horrible. Most of it did not even work without prior training. And Google does have offline speech recognition now too. But true, the ability to trigger with any desired phrase is still missing.


The ability to trigger with any desired phrase is easy, but not done for privacy reasons, to reduce the chance of it accidentally listening to irrelevant conversations.

The inability to change it from Hey google is done for marketing / usability reasons.


Nuance Dragon NaturallySpeaking


What was the software name, if I may ask? I remember speech recognition pre-CNN to be quite terrible.


Microsoft has had speech recognition since WinXP. And there was also Dragon NaturallySpeaking. Both needed a couple of hours of training, but worked really well, completely offline; it was amazing to me at the time. It did have very high processor usage, but that was on a freaking single-core Athlon or Pentium. I'm not even a native English speaker, though dare I say my English is on par with any American's.


You're talking about different concepts.

Voice recognition used by things like Google Assistant, Siri, Cortana, and Alexa usually relies on a "wake word", where it's always listening to you, but only starts processing when it is confident you're talking to it.

Older speech recognition systems were either always listening and processing speech, or only started listening after you pressed a button.

The obvious downside of the older systems is that you can't have them switched on all the time.


I think it would be really easy to create an app that would also listen to a very specific phrase (like "Hey Merlin", simple pattern match, with a few minutes of training for your own voice) and then start Google Assistant.

It's so embarrassing saying Hey Google all the time, and for me, it just feels like I'm a corporate bitch, tbh. It's true, which just makes me feel worse :D


There were always idiots writing buggy code. The issues you mention are about "old software" on "old hardware". GP is only talking about the "old style of software development". Granted, Qt, X and the Win API are unnecessarily complicated.


> Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.

With Moore's law being dead, efficiency is going to get a lot more popular than it has been historically. I think we're going to start seeing an uptick in the popularity of more efficient GUI programs like the ones you describe.

We see new languages like Nim and Crystal whose only value proposition over Python is that they're more efficient.

Similarly, I predict we will see an uptick in popularity of actually native frameworks such as Qt over Electron for the same reason. We may even start seeing wrapper libraries that make these excellent but complicated frameworks more palatable to the Electron crowd, similar to how compiled languages that look like Python or Ruby are getting bigger.


I said that 20 years ago, but so far I've been proven completely wrong. Skype etc. just keep getting bigger and slower despite, from what I can tell, adding absolutely no additional functionality. In fact, if you consider that it can't seem to do peer-to-peer anymore, it has lost features.

Very few companies are rewriting their Electron apps in Win32 (although they should be). Instead it continues moving in that direction, or worse. CrashPlan rewrote their Java GUI in Electron a while back. Java UIs are mostly garbage, but compared with the Electron UI it was lightweight and functional. The Electron UI (besides shipping busted libraries) has literally stripped everything out, and uses a completely nonsensical paradigm/icon set for tree expansion/file selection. Things like Slack are a huge joke, as they struggle to keep it from dying under a load my 486 running mIRC could handle. Blame it on the graphics and animated GIFs people post in the chat windows, but the results speak for themselves.


Without a way for end-users to actually make judgements about application efficiency, there will never be any real pressure to make efficient, native apps.

Though the only measurement I think people would actually care about is battery impact, and even that is pretty much hidden away on phones except to the few people who actually look.

But the other problem is: who cares if Discord or a browser's HN tab aren't optimally efficient? You're just going to suck it up and use it. With this in mind, a lot of the native app discussion is technical superiority circlejerk.


Without a way for end-users to actually make judgements about application efficiency, there will never be any real pressure to make efficient, native apps.

I'd say it's more of a "without a way for end-users to compare" --- the average user has no idea how much computing power is actually necessary, so if they see their email client taking 15 seconds to load an email and using several GB of RAM, they won't know any better; unless they have also used a different client that does it instantly and uses only a few MB of RAM.

Users complain all the time when apps are slow, and I think that's the best point of comparison.


There is an economic theory that's escaping me right now, but the gist is that with certain goods, the market will hover at the very edge of efficiency; they have to become just scarce enough to break a certain threshold, then the market will realize that they are in fact a scarce resource, then correct to achieve a high efficiency equilibrium.


THIS. I remember outright revelations in user experience as I showed people how much better Firefox 2.0 was compared to IE6 (and looking back, 2.0 wasn't all that wonderful from today's point of view - which tells you more about IE than about FF).

edit: it was 2.0, I misremembered.


Even further: without a way for end users to take action based on that comparison.

If I decide that I don't want to use Slack because it drains my battery, then I can't take part in Slack conversations.

Because Slack is the go-to chat application for so many teams, excluding myself from those conversations is not feasible.

End result: I carry on using Slack.


The Instacart website has dreadfully slow search. The instant-search results seem to take forever to update with each character. The whole site is so slow. It makes Safari on my Mac complain that the page uses significant resources.

This weekend I noticed that Amazon Fresh now delivers the same day -- for the past few months they had no slots. I switched from Instacart to Amazon at once. The Amazon website lacks some bells and whistles compared to Instacart, but it is completely speedy. If the Instacart website were satisfactory I would never have switched.

Slow, bloated websites can absolutely cost companies money.


I think the other major, major thing people discount is the emergence of viable sandboxed installs/uninstalls, and the accompanying software distribution via app stores.

Windows 95 never had a proper, operating-system supported package manager, and I think that's a big part of why web applications took off in the late 90s/early 2000s. There simply wasn't any guarantee that once you installed a native app, you could ever fully remove it. Not to mention all the baggage with DLL hell, and the propensity of software to write random junk all over the filesytem.

Mobile has forced a big reset of this, largely driven by the need to run on a battery. You can't get away with as much inefficiency when the device isn't plugged into the wall.


> [the absence of a package manager was] a big part of why web applications took off in the late 90s/early 2000s.

Of course apt-get is very convenient but I can't see a Microsoft version of it letting companies deliver multiple daily updates.

Based on my experience of the time, the reasons were, in random order:

- HTML GUIs were less functional but easier to code and good enough for most problems

- we could deploy many times per day for all our customers

- we could use Java on the backend and people didn't have to install the JVM on their PCs

- it worked on Windows and Macs, palmtops (does anybody remember them?) and anything else

- it was very easy to make it access our internal database

- a single component inside the firewall generated the GUI and accessed the db, instead of a separate frontend and backend, which by the way is the modern approach (but it costs more, and we didn't get the extra functionality back then; JS was little more than cosmetic)


There simply wasn't any guarantee that once you installed a native app, you could ever fully remove it. Not to mention all the baggage with DLL hell, and the propensity of software to write random junk all over the filesytem.

Bloated, inefficient software is certainly present on the native side too, but it's also possible to write single-binary "portable" ones that don't require any installation --- just download and run.


OS API sets have evolved toward more sandboxing. Things are more abstract. Fewer files on disk, more blob-store-like things. Fewer INI files in C:\Windows, more preference stores. No registry keys strewn about. .NET strong naming rather than shoving random DLLs into memory via LoadLibraryA()

(Hi, I'm a windows dev)
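For anyone who hasn't done classic Win32 work, the "shoving random DLLs into memory" pattern looked roughly like this (a sketch; MessageBoxA from user32.dll is just a convenient real function to demonstrate with):

    // Old-school dynamic loading: pull a function out of a DLL by name at runtime.
    #include <windows.h>

    int main() {
        HMODULE lib = LoadLibraryA("user32.dll");          // load (or find) the DLL
        if (!lib) return 1;

        // Look the symbol up by string and cast to its real signature
        typedef int (WINAPI *MessageBoxFn)(HWND, LPCSTR, LPCSTR, UINT);
        MessageBoxFn msgBox = (MessageBoxFn)GetProcAddress(lib, "MessageBoxA");
        if (msgBox)
            msgBox(NULL, "Loaded at runtime", "LoadLibraryA demo", MB_OK);

        FreeLibrary(lib);                                  // drop our reference
        return 0;
    }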


IMHO web applications took off because developers learned pretty fast how useful "I can update any time without user consent" is, especially when your software is a buggy mess (or a "MVP" if you like buzzwords) and you need to update every five minutes.


> Similarly, I predict we will see an uptick in popularity of actually native frameworks such as Qt over Electron for the same reason.

I would predict that, if only Qt didn't cost a mind-boggling price for non-GPL apps. They should really switch to pay-as-you-earn, e.g. like the Unreal Engine, so people would only have to start paying once they start earning serious money selling the actual app. If they don't, Qt's popularity is hardly going to grow.


Qt under the LGPL license is free for non-GPL apps. Tesla is using it under the LGPL in their cars without paying a dime to the Qt Company (which is, imho, super shitty given the amount of money they make).


Agree 100%.

I wonder how much memory management affects this. My journey has been a bit different: traditional engineering degree, lots of large Ruby/JS/Python web applications, then a large C# WPF app, until finally at my last job, I bit the bullet and started doing C++14 (robotics).

Coming from more "designed" languages like C#, my experience of C++ was that it felt like an insane, emergent hodgepodge, but what impressed me was how far the language has come since the 90s. No more passing raw pointers around and forgetting to deallocate them, you can get surprisingly far these days with std::unique_ptr and std::shared_ptr, and they're finally even making their way into a lot of libraries.
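A minimal sketch of what that looks like in practice (the Robot/Sensor names are made up for illustration):

    // Ownership expressed with smart pointers instead of raw new/delete (C++14).
    #include <memory>
    #include <vector>

    struct Sensor { double read() const { return 42.0; } };

    struct Robot {
        // sole ownership: freed automatically when the Robot goes away
        std::unique_ptr<Sensor> lidar = std::make_unique<Sensor>();
        // shared ownership: freed when the last holder lets go
        std::vector<std::shared_ptr<Sensor>> cameras;
    };

    int main() {
        Robot r;
        r.cameras.push_back(std::make_shared<Sensor>());
        double d = r.lidar->read();   // no manual delete anywhere
        (void)d;
    }                                  // everything released here, deterministically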

I sense there's a bit of a movement away from JVM/CLR-style stop-the-world, mark-and-sweep generational GC, toward more sophisticated compile-time techniques like Rust's borrow checker, Swift's reference counting, or C++ smart pointers.

I mention memory management in particular both because it seems to be perceived as one of the major reasons why languages like C/C++ are "hard" in a way that C#/Java/JS aren't, and I also think it has a big effect on performance, or at least, latency. I completely agree we've backslid, and far, but the reality is, today, it's expensive and complicated to develop high-performance software in a lower-level, higher-performance language (as is common with native), so we're stuck with the Electron / web shitshow, in large part because it's just faster, and easier for non-specialists to develop. It's all driven by economic factors.


There is movement away from stop-the-world GC, but not to reference counting. The movement is towards better GC.

Go has had sub-millisecond GC pauses with multi-GB heaps since 2018. See https://blog.golang.org/ismmkeynote

Java is also making good progress on low latency GC.

Reference counting can be slower than GC if you are using thread safe refcounts which have to be updated atomically.

I don't want to have to think about breaking cycles in my data structures (required when using ref counting) any more than I want to think about allocating registers.
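For anyone who hasn't hit it, the cycle problem looks roughly like this in C++ (a minimal sketch, names made up), and the conventional fix is a non-owning weak_ptr back edge:

    #include <memory>

    struct Node {
        std::shared_ptr<Node> next;   // owning edge
        std::weak_ptr<Node>   prev;   // non-owning back edge breaks the cycle
    };

    int main() {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;                  // a keeps b alive
        b->prev = a;                  // back-reference does NOT keep a alive
        // If prev were a shared_ptr, a and b would keep each other alive forever
        // (a leak); with weak_ptr both are destroyed when they go out of scope.
    }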


Yet we still read articles and threads about how bad the Go GC is and the tradeoffs that it forces upon you.

I get the feeling that the industry is finally starting to realize that GC has been a massive mistake.

Memory management is a very important part of an application; if you outsource it to a GC, you stop thinking about it.

And if you don't think about memory management you are guaranteed to end up with a slow and bloated app. And that is even before considering the performance impact of the GC!

The big hindrance has been that ditching the GC often meant using an old and unsafe language.

Now we have Rust, which is great! But we need more.


The Go GC isn't that great, it's true. It sacrifices huge amounts of throughput to get low latency: basically a marketing optimised collector.

The new JVM GCs (ZGC and Shenandoah) are more sensibly designed. They sacrifice a bit of throughput, but not much, and you get pauseless GC. It still makes sense to select a throughput oriented collector if your job is a batch job as it'll go faster but something like ZGC isn't a bad default.

GC is sufficiently powerful these days that it doesn't make sense to force developers to think about memory management for the vast bulk of apps. And definitely not Rust! That's one reason web apps beat desktop apps to begin with - web apps were from the start mostly written in [pseudo] GCd languages like Perl, Python, Java, etc.


I don’t think it’s fair to call garbage collection a mistake. Sure, it has properties that make it ill-suited for certain applications, but it is convenient and well suited for many others.


Go achieves those low pause times by allocating 2x memory to the heap than it's actually using. There's no free lunch with GC.


The same applies to manual memory management: you instead get slower allocators unless you replace the standard library with something else, plus the joy of tracking down double frees and memory leaks.


I'm using Rust, so no double frees and no accidental forgetting to call free(). Of course you can still have memory leaks, but that's true in GC languages too.


That is not manual memory management though, and it also comes with its own set of issues, as everyone who has tried to write GUIs or games in Rust is painfully aware.

There is no free lunch no matter what one picks.


That's true. The comment by mlwiese up-thread, that I responded to, praised Go's low GC latency without mentioning the heavy memory and throughput overheads that come with it. I felt it worth pointing out the lack of a free lunch there; I think a lot of casual Go observers and users aren't aware of it.


Agreed, although if Go had proper support for explicit value types (instead of relying on escape analysis) and generics, like e.g. D or Nim, that could be improved.


I don't think that's as hard as you make it out to be. Notably, Zig does not have a default allocator and its standard library is written accordingly, making it trivial to ensure the use of the appropriate allocation strategy for any given task, including using a debug allocator that tracks double-free and memory leaks.


Has Zig already sorted out the use-after-free story?

No, and as far as I am aware it makes no attempt to do so other than some allocators overwriting freed memory with a known signature in debug modes so the problem is more obvious.

> Coming from more "designed" languages like C#, my experience of C++ was that it felt like an insane, emergent hodgepodge, but what impressed me was how far the language has come since the 90s. No more passing raw pointers around and forgetting to deallocate them, you can get surprisingly far these days with std::unique_ptr and std::shared_ptr, and they're finally even making their way into a lot of libraries.

I worked for a robotics company for a bit, writing C++14. I don't remember ever having to use raw pointers. That combined with the functionality in Eigen made doing work very easy --- until you hit a template error. In that case, you got 8 screens full of garbage.


Yeah, software seems to frequently follow Parkinson's Law:

Work expands so as to fill the time available for its completion.[1]

Corollary: software expands to fill the available resources.

1. https://en.wikipedia.org/wiki/Parkinson%27s_law


See also: Wirth's Law https://en.wikipedia.org/wiki/Wirth%27s_law

from A Plea for Lean Software (1995) https://cr.yp.to/bib/1995/wirth.pdf


Neat. I wasn't aware of that one.


This sentiment is why I moved to writing Elixir code professionally three years ago, and why I write Nim for all my personal projects now. I want to minimize bloat and squeeze performance out of these amazing machines we are spoiled with these days.

A few years ago I read about a developer who worked on a piece-of-shit 11-year-old laptop and made his software run fast there. By doing that, his software was screaming fast on modern hardware.

It's our responsibility to minimize our carbon footprint.


Some of the blame is to be put on modern development environments that pretty much require the latest and best hardware to run smoothly.

> It's our responsibility to minimize our carbon footprint.

This, a hundred times.


My normal work computer is a Sandy Bridge Celeron laptop. I might need to upgrade it soon, but I'd still prefer something underpowered for exactly the same reason; perhaps I'll purchase an Athlon 3000 desktop.


> As a long-time Win32 developer, my only answer to that question is "of course there is!"

As a long-time Linux user, that's what I say as well.

And as a privacy activist, that's what I routinely use.


> Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.

If we save developer cycles, it's not wasted, just spent somewhere else. And we shouldn't go by raw numbers in the first place, because there will always be someone who can demand an even faster solution.

> For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II.

Yes and no. The level of capability and comfort at that time was significantly lower. Sure, the base functionality was the same, but the experience was quite different. Today there are a gazillion more little details which make life more comfortable, details you don't even realize are there. Some of them work in the background, some feel so natural that you can't imagine them not having been there from the beginning.


> If we save developer-cycles, it's not wasted, just saved somewhere else.

In other words, pass the buck to the user (the noble word is "externality").


No, an externality is when a cost is passed to a external party (not involved in the transaction), like air pollution or antibiotic resistance. Passing a cost to the user is just a regular business transaction, like IKEA sending you a manual so you can build the furniture yourself.


I don't know ...

https://tsone.kapsi.fi/em-fceux/ - This is an NES emulator. The Memory tab in Developer Tools says it takes up 2.8 MB. It runs at 60 fps on my modern laptop.

It seems possible to build really efficient applications in JS/WebASM.

Multiple layers of JavaScript frameworks are the cause of the bloat, and that is the real problem, I think.


> The efficiency difference between native and "modern" web stuff is easily several orders of magnitude; you can write very useful applications that are only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions.

Except for the 25 years of support, you could get the same features if a shared Electron runtime were introduced and you avoided pulling in too many libraries from npm. In most Electron apps, most of the bloat is caused by the bundled runtime rather than the app itself. See my breakdown from a year ago of an Electron-based color picker: https://news.ycombinator.com/item?id=19652749


While true, it also has plenty of limitations. You have to keep carrying around a huge legacy, and you're locked in to the APIs, SDKs and operating systems of a single vendor, which are often themselves locked to a single type of hardware.

The Win32 code doesn't run anywhere except on Windows, yet most computing devices today are mobile (non-laptop) systems, and those don't come with Windows.

Running your native apps now takes both less work and more work: you can write (somewhat) universal code, but the frameworks and layers required to get it to build and run on Windows, macOS, Linux, iOS, Android, and any other system your target market relies on now come in as dependencies.

It used to be that the context you worked in was all you needed to know, and delivery and access were highly top-down: you'd have to get the system (OS, hardware) in order to run the product (desktop app). That is no longer the case; people already have a system and will select the product (app) based on availability. If you're not there, that market segment will simply ignore you.

That is not to say that desktop apps have no place, or that CEF is the solution to all the cross-platform native woes (it's not; it's the reason things have gotten worse), but the very optimised and optimistic way of writing software from the '90s is not really broadly applicable anymore.


What are some solid resources for learning more about optimization? I graduated from a bootcamp, and at both jobs I have had, I've asked my leads about optimization and making things run even faster, and I am often told that we don't need to worry about it because of how fast computers are now. But I am sitting there thinking about how I want my stuff to run like lightning on every system.

Is it practical to target Wine as an application platform? That would require building without Visual Studio, or building on Windows and testing with Wine. What are the APIs one would need to avoid in order to ensure Wine compatibility?


256MB RAM? How extravagant! My first computer had 3kB.

This is just the nature of “induced demand”. We might expand the power of our computers by several orders of magnitude, but our imaginations don’t keep up, so we find other ways of using all that capacity.


> is easily several orders of magnitude

You might have used these words as a way to say "way faster", but factually you are incorrect. Several orders of magnitude = thousands of times faster. No way.


> only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions

A few KB for the binary + 20-40 GB for the OS with 25 years of backwards compatibility


The actual part of the OS providing that is a small fraction of the number you quoted.


And if it weren’t for all that rest of the OS, the small fraction wouldn’t get the funding to survive to today.


If the browser is a computationally expensive abstraction, so are the various .NET SDKs, the OS, custom compilers and the higher-level language of your choice. Yes, there were days when a game like Prince of Persia could fit into the memory of an Apple IIe, and all of it, including the sound, graphics, mechanics and assets, was less than 1.1 MB! However, the effort required to write such efficient code and hand-optimise compiler output is considerable, not to mention that very few developers are able to do it.

Unless your domain requires high performance (and with WASM and WebGL even that gap is shrinking) or something niche a browser cannot currently provide, it no longer makes sense to develop desktop applications. A native application is too much hassle and security risk for the end user compared to a browser app, and the performance trade-off is worth it for the vast majority of use cases.

While browser security sandboxes have their issues, I don't want to go back to the days of native applications constantly screwing up my registry, launching background processes, and adding unrelated malware and a billion toolbars to your browser (Java installers, anyone?).

Until the late 2000s, every few months I would expect to reinstall the entire OS (especially Windows, and occasionally OS X) because of the kind of shareware/malware nonsense native apps used to pull. While tech-savvy users avoid most of these pitfalls, maintaining the extended family's systems was a constant pain. Today, setting up a Chromebook or Surface (with S mode enabled by default) and installing an ad blocker is all I need to do, and those systems stay clean for years.

I do not think giving an application effectively root access and hoping it will not abuse it is a better model than a browser app. It is not just small players who pull this kind of abuse either; the Adobe CC suite runs something like 5 startup processes and messes up the registry even today. The browser performance hit is more than worth not having to deal with that.

Also, just on performance from a different point of view: desktop apps actually made my system slower. You would notice this on a fresh install of the OS: the system would be super fast, then over a few weeks it would slow down. From the antivirus to every application you added, they were all hogging more of my system resources than browser apps do today.


I use Windows (although I'm not a heavy user; I mainly use Linux these days), and the only third-party apps I have installed are lightweight open source ones and some "official" versions of software. You don't need an antivirus apart from the built-in Windows Defender. And I don't notice any slowdown. I have a non-admin account which I regularly use, and the admin account is separate.

Arguably many users don't know how to use a Windows desktop. But that's not a failure of the desktop; that's a failure of Windows. They could have provided an easy way to install applications into a sandbox. On Android you can install from APK files and they are installed into a sandbox. If Windows had such a feature easily available, I think most genuine desktop app makers would have migrated to it. This would have the advantages of the browser with no battery drain, no fan noise and no sluggishness.


You can already use UWP, which has a sandbox. Win32 apps can be converted to it. So it seems no one cares about more security; most vendors are stuck on "just works" Win32.


"Can be converted" does not mean that's the only way to install; as long as you offer an insecure option, your security is still weak.

It is not that OS developers are not improving; for example, S mode on the Surface is a good feature. However, as long as the Adobes of the world still abuse my system, the problem is not solved.

It is not just older-gen software either: the Slack desktop app definitely takes more resources than the web version while delivering broadly the same features. Sure, that is the Electron abstraction, but if a multi-billion-dollar company with VC funding cannot see value in investing in three different stacks for Windows, macOS and Linux, how can most other developers?


Is it possible to manually run a process (and its child processes) in a sandbox with only the permissions you grant it?


Native desktop apps are great.

The reason that people don't write them is because users aren't on "the desktop". "The desktop" is split between OS X and Windows, and your Windows-app-compiled-for-Mac is going to annoy Mac users and your Mac-app-compiled-for-Windows is going to annoy Windows users. Then you realize that most users of computing devices actually just use their phone for everything, and your desktop app can't run on those. Then you realize that phones are split between Android and iOS, and there is the same problem there -- Android users won't like your iOS UI, and iOS users won't like your Android UI. Then there are tablets.

Meanwhile, your web app may not be as good as native apps, but at least you don't have to write it 6 times.


> Meanwhile, your web app may not be as good as native apps, but at least you don't have to write it 6 times.

I must be living in a parallel world because I use a ton of desktop apps that aren't "written 6 times" - and write a few, including a music & other things sequencer (https://ossia.io).

Just amongst the ones running on my desktop right now, Strawberry (Qt), Firefox (their own toolkit), QtCreator (Qt), Telegram Desktop (Qt), Bitwig Studio (Java), Kate (Qt), Ripcord (Qt), all work on all desktop platforms with a single codebase. I also often use Zim (GTK), which is also available on all platforms, occasionally Krita (Qt) and GIMP (GTK), and somewhat rarely Blender. Not an HTML DOM in sight (except FF :-)).
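For what it's worth, the single-codebase part isn't exotic; assuming a standard Qt development setup, a complete (if useless) cross-platform Qt Widgets program is roughly this, with no per-platform code:

    // main.cpp - the same file builds on Windows, macOS and Linux
    #include <QApplication>
    #include <QPushButton>

    int main(int argc, char *argv[]) {
        QApplication app(argc, argv);
        QPushButton button("Hello from one codebase");
        button.resize(240, 60);
        button.show();        // Qt picks the platform's windowing backend at runtime
        return app.exec();
    }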


In my experience Java GUIs are consistently even more laggy and unresponsive than Electron apps. They may be lighter in terms of memory, but they never feel lighter. Even IntelliJ and family - supposedly the state of the art in Java apps - feel like mud on a brand-new 16" Macbook Pro.


Lighter in terms of memory? No way. IntelliJ is always at a few GB per instance. They are indeed laggy as hell. With the latest macOS, IntelliJ products specifically bring down the entire OS for ten to twenty minutes at a time, requiring a hard reboot, without which the cycle starts again. Except it's not Java or IntelliJ, it's the OS. I only wish they were Electron apps; that way I wouldn't have to return a $4400 brand-new 16" MacBook Pro because of its constant crashing due to horrible native apps. All apps can be shitty. At least Electron ones are cross-platform, work, and generally do not bring the whole system to a standstill followed by a hard crash, while these native apps use about the same resources as Electron apps anyway.


Interestingly they seem to run exactly the same on horribly low-spec machines. I blame the JVM's love for boxing and unboxing everything in IL land. Of course, by now I'd hope it's less wasteful - the last time I spent serious time in Java was 2015.


I've definitely noticed the same with IntelliJ, but weirdly enough Eclipse feels just fine. IIRC both are written in Java, so maybe it comes down to the design of IntelliJ more so than the limitations of the JVM?


I used Eclipse for a while before switching to IntelliJ around ~2015 and it actually seemed like a vast improvement, not just in terms of features but in terms of performance. It still wasn't "snappy", but I figured I was doing heavy work so that was just how it was.

Fast-forward 5 years and I've been doing JS in VSCode for a while. My current company offered to pay for Webstorm so I gave it a try. Lo and behold it was still sludgy, but now unbearable to me because I've gotten used to VSCode.

The one other major Java app I've used is DBeaver, which has the same problem to an even greater degree. Luckily I don't have to use it super often.


Eclipse, interestingly, uses native controls which it hooks up to Java, while IntelliJ essentially draws everything itself.


I work daily in a codebase with 20M lines and RubyMine can still search near-instantly compared to, say, VS Code. One thing that's still true is that there are sometimes long pauses, presumably garbage collection, or, I suspect, more likely bugs, as changing window/input focus can sometimes snap it out of it.


If that's the case regarding IntelliJ, then you probably haven't changed the JVM heap size, which IntelliJ defaults to something very small (2 GB maybe).


Qt is excellent, but C++ is quite a tough pill to swallow for many, especially as Qt layers a macro system on top. I predict that native desktop apps will make a comeback when there's a Qt-quality cross-platform framework in a more approachable language (Rust, Nim, or similar).



Why not use Qt bindings for $YOUR_LANGUAGE_OF_CHOICE? https://wiki.qt.io/Language_Bindings


It's rather clunky and often requires writing C++-style code in whatever language of choice you're using - the worst of both worlds.

I wonder what an API that has only C (or some other low-level) bindings but is designed to be easy to use externally would look like.


Flutter for desktop solves this.


Your subdomain http://forum.ossia.io/ lacks an SSL certificate for the login form.

You can get wildcard SSL certificates for your domain and all subdomains from Let's Encrypt.


Thanks for the info, gonna check.. we already have a cert, no clue what's missing precisely in the config, how can I try ?


Configure your HTTP server to serve that subdomain from HTTPS. Here's an example using nginx:

https://linuxize.com/post/redirect-http-to-https-in-nginx/


...or just use CloudFlare, which will automatically take care of it.


Do you count Qt apps as native, but not count web apps as native? Why?


Qt may not 'look' native, but it has native performance, whereas Electron really doesn't.


The difference between "Qt native" and "native native" (e.g. Win32 or Cocoa) is still noticeable if you pay attention, although it's not quite as obvious as between Electron and the former.

(Likewise, applications using the JVM may also look very convincingly like native ones, but you will feel it as soon as you start interacting with them.)


Is it really even worth highlighting though? I use Telegram Desktop (Qt) daily and it is always, 100% of the time completely responsive. It launches basically instantly the second I click the icon and the UI never hangs or lags behind input to a noticeable degree. If we transitioned to a world where everyone was writing Qt instead of Electron apps we would already have a huge win.


If you're using KDE, Qt is "native native".

You're fundamentally mistaken about where Qt sits in the stack - it effectively sits in the same place as USER32/WinForms in Windows or NS/Cocoa GUI widgets of OSX. It is reasonable to think of it as an alternative native GUI library in that sense. If it is slower, it's because an implementation of something is slower, not because of where it lives or an abstraction cost.

Qt pretty much draws using low-level drawing APIs on the respective platform. And although Qt itself is not written in the most performance-sensitive C++, it is still orders of magnitude faster than most (and it's not like Chrome doesn't pay overhead) - people rag on vtable dispatch speed, but jeez, it's still orders of magnitude faster than something like ObjC, which served Apple quite well for years.

The performance of a Qt app is more likely a function of the app itself and how the app developers wrote it.

But no, you're not noticing any microsecond differences in C++ overhead for Qt over "native native" - you're basically comparing the GUI code of the platform, since Qt does its own rendering. Win32 is mostly pretty good, NS is a mixed bag, and Gtk+ is basically a slug. In all cases there is some kind of dynamic dispatch going on, because that is a fundamental pattern of most GUI libraries. But dynamic dispatch is almost never a factor in GUI render performance. Things like recalculating sizes for 1 million items in a table on every repaint are what get people into trouble, and that is regardless of GUI library.


VSCode is indistinguishable from native, so I'm not sure it's Electron that's at fault here.


This gets said a lot, and granted VSCode is certainly one of the best performing Electron apps, but it definitely is not indistinguishable from native apps. Sublime, Notepad++, or TextAdept all fly compared to VSCode in terms of performance and RAM efficiency.


On Mac, VSCode does a better job than many apps at emulating the Cocoa text input system, but, like every Electron app, it misses some of the obscure corners of that system that I use frequently.

If we’re going to use JavaScript to write native apps, I’d really like to see things like React Native take off: with a good set of components implemented, it would be a first class environment.


No. I like VS Code but it's a hog.

I still use Macvim or even Sublime Text a lot for speed reasons, especially on large files.


If your native apps are indistinguishable from VSCode, they're doing something wrong.


Start Notepad++ or https://github.com/rxi/lite and then compare the startup speed with VSCode.


I use VS Code daily (because it seems to be the only full-featured editor that Just Works(TM) with WSL), but it can get pretty sluggish, especially with the Vim plugin.


Try opening a moderately large (even 2MB) .json file in VSCode, and then do the same in sublime.

VSCode very quickly freezes because it cannot handle a file that size. Sublime not only opens it but syntax highlights immediately.


This is something with your configuration. Out of the box, VSCode will immediately show you the file but disable tokenization and certain other features. I regularly open JSON files up to 10 MB in size without any problem. You probably have plugins which impede this process.


Try to use AppleScript or Accessibility. It's like VS Code doesn't even exist.


If I recall correctly, Microsoft forked their own version of Electron to make VS Code feel snappier, because normal Electron runs like Slack.


I don't think so. Microsoft wanted to fork Electron in the past to replace Chromium with EdgeHTML, but it didn't happen. VSCode is powered by the Monaco editor (github.com/microsoft/monaco-editor), and VSCode feels snappier than, say, Atom, probably because of TypeScript.

Isn’t that more of an Electron issue?

I mean, is anyone clamouring for VS Code, for example, to be rewritten in native toolkits?


I would argue that the web platform is one of the most optimised and performant platforms for apps.


When you say web platform, do you mean a browser? Using a browser is more optimised and performant than installing an application on your desktop?

Curious what desktop do you run your browser under?

I would give you an example of a simple video split application. A web platform requires uploading, downloading and slow processing. A local app would be hours quicker as the data is local.


No reason a video splitting app couldn't be written with client-side JS.


That sounds like it’d probably be slow.


So please do.


a few reasons :

- Qt is actually the native toolkit of multiple operating systems (Jolla for instance, and KDE Plasma) - you just need to have a Linux kernel running and it handles the rest. It also makes the effort of looking up the user's widget theme so it blends in with the rest of the platform, while web apps completely disregard that.

- Windows has at least 4 different UI toolkits now which all render somewhat differently (Win32, WinForms, WPF, the upcoming WinUI, whatever Office is using) - only Win32 is the native one in the original sense of the term (that is, rendering of some stuff was originally done in-kernel for more performance). So it does not really matter on that platform, I believe. The Mac sure is more consistent, but even then... most of the apps I use on a Mac aren't Cocoa apps.

- The useful distinction for me (more than native vs. non-native) is, when you handle a mouse event, how many layers of deciphering and translation it has to go through, and whether those layers are in native code (e.g. compiled to asm), as that reliably means user interaction will have much less latency than if it has to go through interpreted code, GC, ...

Of course you can make Qt look deliberately non-native if you want, but by default it tries its best - see https://code.woboq.org/qt5/qtbase/src/plugins/platforms/coco... and code such as https://code.woboq.org/qt5/qtbase/src/plugins/platforms/wind...


Knowing what I know about Qt and what I've done with it in my day job, it's basically the best-kept secret on HN. What they're doing with Qt 6+ licensing... I'm not sure how I feel, but as a pure multi-platform framework it really is the bee's knees.

I've taken C++ Qt desktop apps that never had any intention of running on a phone, built them, ran them, and everything "just worked". I was impressed.


I just wish it weren't stuck, anisotropically, ~10 years in the past. Maybe Qt6 will be better, but more likely it will be more and more QML.


Since QML uses Javascript it may be their best bet to attract new developers.

Yes, well, QML also uses JavaScript.

This is not really accurate. Qt relies on a lower level windowing system (X Window, Wayland, Cocoa, win32 etc. etc.).

Also worth noting that many creation-centric applications for the desktop (graphics, audio, video etc. etc.) don't look "native" even when they actually are. In one case (Logic Pro, from Apple), the "platform leading app from the platform creator" doesn't even look native!


> This is not really accurate. Qt relies on a lower level windowing system (X Window, Wayland, Cocoa, win32 etc. etc.).

Qt also supports rendering directly on the GPU (or with software rendering on the framebuffer) without any windowing system such as X11 or Wayland - that's likely how it is most commonly used in the wild, as that's one of the main ways to use it on embedded devices.


I'd like to see it do that on macOS ...

You're seriously suggesting that the common use of Qt on Linux systems is direct rendering without the windowing system?


Not parent, but yes, sort of.

Arguably its use in embedded contexts is much larger than on the desktop. It's quite popular for in-car computers, defense systems, etc.

For desktop linux, yes, it uses the windowing system.


Well, yes. I can't say too much because of NDAs, but if you buy a recent car there is a good chance that all the screens are rendered with Qt on Linux or an RTOS - there are likely more of those than desktop Linux users, as much as that saddens me.


On macOS Qt doesn't really use Cocoa, it uses Quartz/CoreGraphics (the drawing rather than the application layer). Note that Apple's pro apps are native controls with a UI theme: they usually behave like their unthemed counterparts.


I had meant to write Quartz, not Cocoa.

I know how Qt works at that level. I did a bunch of work in the mid-naughts on the equivalent stuff in GTK.


> Qt relies on a lower level windowing system

That's true of QtWidgets, but not QML / Qt Quick (the newer tool), correct? (I found this hard to determine online).


Kinda - QML is a programming language; Qt Quick is a UI scene graph (with the main way to use it being through QML) which also "renders everything" and, by default, makes less effort than Widgets to look like the OS.

But :

- QML can be used with other graphics stack - for instance Qt Widgets like with https://www.kdab.com/declarative-widgets/ or alternatives to the official Qt Quick controls with https://github.com/uwerat/qskinny or with completely custom rendering like here with NanoVG : https://github.com/QUItCoding/qnanopainter

- QtQuick Controls can be made to use the same style hints than the desktop with https://github.com/KDE/qqc2-desktop-style


What does “native” even mean?

Put a <button/> in an HTML page and you get a platform-provided UI widget.


It's not platform-provided in my experience, but browser provided. The result of <button/> when viewed in a browser on macOS has no relation to the Cocoa API in any meaningful sense.


I'm pretty sure that when you render just a <button> in at least Safari, the browser will render a native Cocoa button control. If you set a CSS property like background colour or change the border, then it will "fall back" to a custom rendered control that isn't from the OS UI.

I did a small bit of research into this, and found plenty of "anecdotal" evidence, but nothing confirming it for sure. Looking at and interacting with the controls, they seem pretty native - if they're a recreation, then that's pretty impressive :)


The drawing is native and the interaction is handled by the browser.


A GUI is a collection of elements with a specific look and behaviour. A desktop environment is a collection of GUIs, tools and services. Native means you have something which follows this look and behaviour 100% and can utilize all the tools and services.

Implementing the look is simple, adding the behaviour is quite a bit harder, and utilizing the services is the endgame. A web UI usually does none of this, or only some parts; it all depends on the constellation. But there is usually an obvious point at which you realize whether something is native or merely an attempt at it.


I'd also love for Mac and Windows to make it really easy to get a vendor-blessed version of Qt installed.

Imagine if, when trying to run a Qt app on Windows, a dialog box could pop up saying: "Program X is missing Y. Install it from the Windows Store (for free)? Yes / No"


Ossia looks pretty sweet! I'll be checking that out for sure.


thanks ! it's still kinda in alpha but making progress :)


Bitwig has been ported to Android? Or iOS?


I don't think it even makes sense to use it on small laptop screens, to be honest, so I don't really see the point. You'd have to redo the UI and the whole paradigm entirely anyway for it to be meaningful on small devices. But there is certainly no obstacle to porting - from my own experience with ossia & Qt, it is fairly easy to make iOS and Android builds; the difficulty is in finding a proper iOS and Android UX.

In particular C++ code works on every machine that can drive a screen without too much trouble - if the app is built in C++ you can at least make the code run on the device... just have to make something pretty out of it afterwards.


The point is that the parent poster mentioned tablets and phones which you don't address in your point. Of course your examples aren't written 6 times, but they support fewer platforms too (only desktop).

Off-topic, but regarding Bitwig: of course it makes perfect sense to use it on smaller devices. Not phones, but tablets. It's even officially supported with a specific display profile in your user interface settings (obvious target amongst others: windows surface). This is particularly useful for musicians on stage.


I still can't believe Bitwig is Java. I'm a Bitwig user. It even runs on Linux.


Only a small part is; the core is C++ and assembly.

https://www.reddit.com/r/Bitwig/comments/4c2zoh/what_program...


Wasn't that Java's original selling point, that it runs anywhere (there's a JVM)?


Yes, and today you can use JavaFX to build cross-platform desktop apps.


Sounds like that’s the Java Swing replacement for the same thing you could do over a decade ago?

Only part of it will be Java, probably the UI logic layer


I think he did not mean "written 6 times", but more like compiled 6 times, with 6 different sets of parameters, and having to be tested on 6 different devices.


Because you NEVER have to do that with browsers, right?


You don't have to sign and deploy your web app six times.


You have to do it anyway if you're using Electron


That's not a web app then.

Isn't it? The UI is rendered using web technologies inside a specialized browser and it's written in a web-specific language. I might consider an Electron app a hybrid app (one that leans heavily towards the web side), but certainly not a native app.

CI/CD + uh... doing your job? I build one app (same codebase) on 4 different platforms often, it isn't terribly hard.


I don’t know, I think they did mean that.

You write an app for the Mac... how do you ship on Windows as well?


Concerning the desktop, I honestly don't see Windows users caring much about non-native UIs. Windows apps to this day are a hodgepodge of custom UIs. From driver utilities to everyday programs, there's little an average Windows user would identify as a "Windows UI". And even if they did, deviations are commonplace and accepted.

Linux of course doesn't have any standard toolkit, just two dominant ones. There's no real expectation of "looking native" here, either.

Which leaves macOS. And even there, the users who really care about native UIs are a (loud and very present online) minority.

So really, on the Desktop, the only ones holding up true cross-platform UIs are a subset of Mac users.


During my days of Windows-exclusive computing, I wondered what people meant by native UIs, and why they cared about them. My wondering stopped when I discovered Mac OS and, to a lesser extent, Ubuntu (especially in the Unity days). Windows, with its lack of visual consistency, looked like a hot mess compared to the aforementioned platforms.

And now that I think about it, would this have made it easier, even by an infinitesimal amount, for malware to fool users, as small deviations in UI would fail to stand out?


I don't know exactly what time period you're referring to, but back when Java was attempting to take over the desktop UI world with Swing, it was painfully obvious when an app wasn't native on Windows. Eclipse was the first Java app I used that actually felt native, thanks to its use of native widgets (through a library called SWT) instead of Swing.


As far as I know, you can even write your own applications based on SWT, which would make JVM apps pretty consistent and performant across platforms, but not many people seem to have chosen that route, for some reason.


> And now that I think about it, would this made it easier, even by an infinitesimal amount, for malware to fool users, as small deviations in UI would fail to stand out?

I don't think that's how fraud works in actuality; malicious actors will pay more attention to UI consistency than non-malicious actors (who are just trying to write a useful program and not trying to sucker anyone), inverting that signal.


I don't know; I've read that, e.g., spammers deliberately don't focus on grammatical accuracy because they want to exclude anyone who pays attention to details. Also, most fake Windows UIs from malicious websites I used to see weren't exact matches of the native UI.


I think this has changed. People used to be very particular about how their apps looked on different native platforms, like you say. But I don't think it's like that anymore. People are more agnostic now when it comes to how user interfaces look, because they've seen it all - especially on the web, where there are really no rules and each new site and web app looks different. I believe this also carries over to native apps, and I think there's much more leeway now for a user interface to look different from the native style, as long as it adheres to the general, well-established abstract principles for how user interface elements ought to behave.


Speaking for myself only, I haven’t changed my preference, I’ve just given up hoping that any company gives a shit about my preference.


The other thing is that I trust the web browser sandbox. If I have to install something, I'm a lot more paranoid about who wrote it and whether it's riddled with viruses.


And therefore, beware any OS attempts to break cross platform browser compatibility.

Also I think you can deploy to all those things with Qt.


And pay Qt like $5,000 a year to keep it closed source. No thank you. I would rather write it 6 times, or just use Electron.


Qt is LGPL-licensed, is it not? The LGPL means you can distribute your app closed source, so long as the user can swap out the Qt implementation. This usually just means dynamically linking against Qt so the user can swap the DLL; the rest of your app can be kept closed source.

On iOS and Android the situation might be a bit more complicated, but this discussion[0] seems to say that dynamically linking would also work there.

[0]: https://wiki.qt.io/Licensing-talk-about-mobile-platforms


Qt doesn't require that, but even if it did, writing it 6 times is vastly more expensive. People would rather spend $500k on writing it 6 times than $5k on a license because they are somehow offended by the notion of paying for dev software or tooling.

It's a major reason UI coding sucks. There is no incentive for anyone to make it not suck, and the work required to build a modern UI library and tooling is far beyond what hobbyist or spare-time coders could ever attempt.


Qt is mostly LGPL. It's really not that hard to comply with that on desktop, and it doesn't require you to open your source code.


It's hard to satisfy those requirements in the various app stores. They also sneak more restrictive licenses into their dependency graphs.


AFAIK you only have to pay if you modify Qt itself and don't want to release those changes.


Targeting Windows alone gets you 90% of the desktop market. 95% if you make it run reasonably in Wine. This argument is often used, but it's an excuse.

Anything that you need to run on a desktop can't be used effectively on a touch screen anyway, so phones and tablets don't really count for serious software. (Writing this comment is stretching the bounds of what I can reasonably do on an iPhone.)


95% of a market that has shrunk nearly 50% over the last decade.

In many ways, consumers and non-specialty businesses are post-desktop. It turns out documents, email, and other communication apps cover 90% of use cases. Anything that requires major performance gets rendered in a cloud and delivered through these other apps.


You live in a bubble; Windows still dominates the desktop (obviously, most people can't afford a Mac, and the year of Linux on the desktop has not come yet).

https://gs.statcounter.com/os-market-share/desktop/worldwide


They're not refuting that. They agreed that it's "95% of the market." Their point is that the overall desktop has shrunk, regardless of Windows's share of that.


Note: People questioning the stats

https://www.statista.com/statistics/272595/global-shipments-...

Note that this data is missing phones.

2010: 157M desktops and 201M laptops sold. 2019 forecast: 88.4M desktops and 166M laptops.

Or, perhaps better, in terms of internet use: https://www.broadbandsearch.net/blog/mobile-desktop-internet...

Mobile went from 16.2% to 53% of traffic since 2013, nearly a four-fold increase. Which means that since 2013, non-mobile usage went from 84% to 47%, a drop of nearly 50%.


Shipments of desktops/laptops don't tell the whole story. I'm still using a 2009 desktop (with some upgraded components) and wouldn't show up in any of those stats. Similar story for a lot of my friends. They still use desktops/laptops daily, but they don't replace them as often as in the 2000s.

What do you consider a specialty business? There are hundreds of millions of professionals - scientists, engineers, accountants, animators, content creators, visual artists, chip design folks, folks writing drivers for equipment, photographers, musicians, manufacturing folks, etc. - who simply cannot earn a living without native apps. Sure, maybe when those people go home they don't always need native apps, but IMHO it's a mistake to only think about them in such a narrow scope.


You name several that are speciality businesses and are part of that 10%.

But there are definitely examples within accountants, animators, and musicians where phones, tablets, and Chromebooks (not specialty desktop apps) have taken over the essential day-to-day work.

For animators, the iPad vs. Surface face-off is a great example -- as is offloading concepts to "the cloud" to render instead of to a Mac Pro.


Well, I am not talking about examples, I'm talking about entire industries. For example, there is absolutely no way for my industry (vaccine R&D) to do any work without native apps. Even for animators, no native apps = no Pixar. Maybe you were thinking of some other kinds of animation. I don't disagree that you can find small examples here and there, in any industry, of people not needing native apps.

really shrunk?


Lots of people use an Android phone or an iPhone as their main computer nowadays. If you're targeting keyboard/mouse-style input, then Windows is probably close to as popular as ever. But if you're targeting people using some kind of device to access your service in exchange for money, Windows is wasting your time.



Shrinking sales don't mean shrinking usage. The PC replacement cycle should be getting longer because performance is now good enough.


Any PC from the past decade is still mostly serviceable.

I'm finally upgrading from an Intel Sandy Bridge processor after nine years, and I still don't need to - it's cranking along pretty well as a dev and gaming machine still.


I'm surprised nobody mentioned Emscripten. Unfortunately I have no experience with it, but I understand that you could write a native app and also get it to work in the browser with it. There could be a performance penalty, but hey... there's also a native app! It feels like we could reverse steam and get first-class native apps again.


Many desktop apps these days seem to be built on Electron, a JS framework for building desktop-class apps.

https://www.electronjs.org/


And many of those apps end up with terrible performance. I'm sure it's possible to write a performant electron app, but I don't see it happen often and it's disappointing.


What is a desktop-class app?


One that can read and write files and directories among other things. (Not an electron fan, but web pages are still pages, not real apps.)


There's a web API for that! https://web.dev/native-file-system/


Does anything but Chrome support this?


Or you could use languages that allow you to share code, so you have 6 thin native UI layers on top of a shared cross-platform core with all the business logic and most interactions with the external world.

You can do it today with C# and https://www.mvvmcross.com/
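A rough sketch of that split - written in C++ purely for illustration, with hypothetical names (MvvmCross does the C# equivalent):

    // core/message_store.h - hypothetical shared "business logic" core,
    // compiled as-is for every target platform.
    #pragma once
    #include <string>
    #include <utility>
    #include <vector>

    class MessageStore {
    public:
        void add(std::string text) { messages_.push_back(std::move(text)); }
        const std::vector<std::string>& all() const { return messages_; }
    private:
        std::vector<std::string> messages_;
    };

    // Each platform then contributes only a thin UI layer that calls into this:
    // Win32/WinUI on Windows, Cocoa on macOS, Qt/GTK on Linux, UIKit on iOS,
    // Android views via JNI, and so on.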


u forgot linux. :(


I started using Linux in the late 90s and have since lost all expectation of someone writing an app for it.


Actually, I'd say app support is better than ever (of course, all the caveats that go along with being a 1% OS apply...).


If Wine and Steam count as app support, then you're not wrong. It's pretty amazing what can run on Linux nowadays compared to yesteryear.


Downvoters, I'm curious to hear your counterexamples!


> Windows-app-compiled-for-Mac is going to annoy Mac users

And they'll let you know it, too. Unfortunately this has been an issue since the first Macs left the assembly line in 1984. If you point out that based on their share of the software market they're lucky they get anything at all, the conversation usually goes south from there.

