
Here's the thing. You know what the alternative to all of these Electron apps coming out is? If your answer is "A native Cocoa/WPF app", you are on another planet, the answer is, "It wouldn't exist at all".

Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along: there's basically zero money in it, and it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost) and in finding specialist developers who know these dated technologies. And as for Qt: Qt has existed for over two decades - if its massive "Beatles walking off the plane" moment hasn't happened by now, sorry, it's not gonna.

But now? People are making all kinds of great new apps, and more often than not, they come out on all three platforms. People are excited about the Desktop again - Electron is so good it's single-handedly revitalizing a platform that two of the largest tech companies in the world are behind, yet couldn't revitalize themselves.

That is a Big Deal.




That's nonsense. Desktop apps have been frequently released since forever.

The underlying issue here is that Electron reduces the barrier to entry for cross-platform development. That is, it's cheaper to build a single cross-platform application in Electron than it is to build two or three native applications, and you can re-use your existing web experience. I can completely understand why companies might choose this approach.

The trade-off — and there is a trade-off — is that Electron applications are shite in comparison with proper native applications. They fail to integrate with the host platform, they are slow, they hog memory and drink power. It's fine to make those trade-offs – in some ways, it's better that you can get an application at all than the alternative of 'no support for your platform'. But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support, and I find it difficult to believe that the costs would be prohibitive.


What you call "trade-offs" appears to me to be developers externalizing their costs onto users. There are costs to developing desktop applications and developers don't want to pay them, so they make users pay for them in wasted hardware dollars, bandwidth, RAM, battery life, and poor integration.


Well, to the extent that these costs degrade the user experience, I'm not sure they really are externalizing them. Presumably they pay this cost in the form of decreased user satisfaction. It's just that this is a cost they are willing to pay.

There may be an interesting economic lesson here: it really is not that easy to externalize costs. It surely can be done (air pollution), but it requires some special circumstances for those costs not to be internalized in a different form. (These special circumstances might include information asymmetries, harm to a public good, enjoyed by people other than a firm's own customers, etc.--themselves classic risk factors for market failure.)

By the same token, there probably are some truly externalized costs in this example, but I would expect them to be very minor and indirect. For example, most people probably do not pay the 'true' cost of their electricity. So to the extent electron wastes power, some of the cost will be internalized in the form of user dissatisfaction. But some will also be externalized either because the user doesn't know about the extra power consumption, or because the user herself doesn't fully internalize the costs of her power consumption and therefore doesn't care as much as she might if all costs were properly internalized.
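A rough back-of-envelope sketch of the scale involved. Every number here is an assumption chosen for illustration (extra draw, usage hours, electricity price, install base), not a measurement:

```python
# Back-of-envelope: what does an inefficient desktop app's extra power
# draw cost, per user and in aggregate? All inputs are assumptions.

extra_watts = 5            # assumed extra draw of a heavy app vs a lean one
hours_per_day = 8          # assumed daily usage
days_per_year = 365
price_per_kwh = 0.12       # assumed retail electricity price, USD

extra_kwh_per_year = extra_watts * hours_per_day * days_per_year / 1000
cost_per_user = extra_kwh_per_year * price_per_kwh

users = 10_000_000         # hypothetical install base
aggregate_gwh = extra_kwh_per_year * users / 1_000_000

print(f"{extra_kwh_per_year:.1f} kWh/user/year, ${cost_per_user:.2f}/user/year")
print(f"~{aggregate_gwh:.0f} GWh/year across {users:,} users")
```

Under these assumptions the per-user cost is a dollar or two a year, which is easy to shrug off individually, while the aggregate lands in the gigawatt-hour range. That asymmetry is arguably why the cost is so easy to leave uninternalized.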


> Well, to the extent that these costs degrade the user experience, I'm not sure they really are externalizing them. Presumably they pay this cost in the form of decreased user satisfaction. It's just that this is a cost they are willing to pay.

Precisely. I don't use Slack, in part because its desktop application is irrationally bloated for its use case. Though I have other reasons as well, they are enduring a cost--a small one to be sure, but presumably non-trivial in aggregate--in people refusing to use their service partly because their desktop application is poor.

But I do agree with the general sentiment underlying the frustration over "externalities" here. As someone who advocates for high-performance, efficient web applications, I have toyed with the question of whether developers should confront the morality of wasting energy by selecting poor/low-performance platforms. Put in a somewhat comedic tone: low-performance software contributes to climate change.


By walking just one day a year rather than taking my car, I can offset the electricity consumed by my laptop.

You do have a point though. You make a difference where you can.
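A rough sanity check on that offset claim. Every input is an assumption (laptop draw, commute length, fuel economy), and comparing fuel energy to wall electricity ignores generation losses, so treat this as order-of-magnitude only:

```python
# Sanity check: does skipping a day or two of driving offset a laptop's
# yearly electricity use? All inputs are rough assumptions.

laptop_watts = 15                 # assumed average laptop draw
laptop_kwh_year = laptop_watts * 8 * 365 / 1000   # 8 h/day of use

commute_km = 30                   # assumed daily round trip
litres_per_100km = 8              # assumed fuel economy
kwh_per_litre = 9.7               # approximate energy content of petrol
car_kwh_day = commute_km * litres_per_100km / 100 * kwh_per_litre

days_to_offset = laptop_kwh_year / car_kwh_day
print(f"laptop: {laptop_kwh_year:.1f} kWh/year; one car day: {car_kwh_day:.1f} kWh")
print(f"walking ~{days_to_offset:.1f} days covers a year of laptop use")
```

Under these assumptions it comes out to roughly two walking days per year of laptop use, so the claim is in the right ballpark.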


The problem is that despite how good it feels, individuals simply optimizing their individual activity does not scale to the societal scope. Post-agrarian humans literally behave and think via shared infrastructure. The only way individuals can make lasting improvements to society is to improve the infrastructure.


Individuals changing their behaviour can lead to societal change though.


Yes, but you are talking about contingencies. Optimism over dice rolls cannot excuse wasteful systems.


I'm talking about how nearly all progressive progress progresses. It swells up from individuals.


Does it? Or maybe those individuals pop up and do their progressive thing only when the existing infrastructure and technological landscape let them?


Why they haven't made Slack a web-based chat app like Discord or Wire or even Skype is beyond me. I don't even use their native apps; I just open a tab in my browser and run the corresponding web app.


They already do, just not on mobile.


I don't get it. If slack.com is a web-based chat, why doesn't the OP open Slack on his existing instance of Chrome?


"Along with the features available in your browser, the app loads your conversations more quickly, gives you more control over notifications, and has better support for multiple teams."[1]

Also, the app gets its own entry in the task switcher.

[1] https://get.slack.help/hc/en-us/articles/207677868-Slack-for...


> Put in a somewhat comedic tone: low-performance software contributes to climate change.

Number of developers who would donate the increased efficiency to "idle": $denominator.

Number of developers who would fill the increased efficiency with more triangles: $numerator.

I'd argue $numerator is sufficiently large that the premise of your joke doesn't hold.


Couldn't they just pay to offset the performance cost of their application? I doubt the cost would be much compared to the development difference.


The real trick is whether or not users care about:

A) This works on all of the platforms that we use

vs

B) Performance could be better on my computer

It's been a LOOOOOONG time since I worked on a laptop where I experienced noticeable performance problems... which is almost entirely because SSDs make dipping into swap so much less noticeable unless you're really working out your machine.

For most users, just knowing it will work on their machine is a bigger influence in using the product...and therefore a greater influence on business...than the performance of that system. It's especially true with a chat system where the most important feature is that everybody on the team can get access.


> It's been a LOOOOOONG time since I worked on a laptop where I experienced noticeable performance problems...

Really???

Because personally, I keep having performance problems on every laptop I own. Don't even try running in battery-saving mode, seriously.

I've stopped counting the 5+ year old laptops that have had to be upgraded because they can't play a full-HD YouTube video smoothly.

----

Fun anecdote: I had to trial an enterprisey SaaS solution not long ago. A coworker gave me the name and I opened the site on my laptop (on the move, outside of work, just taking a quick look).

Their site froze my firefox for 30 seconds because these idiots put a high quality full screen video of a surfer in the background of the main page. Looks cool, doesn't it? https://www.wavefront.com/

Couldn't see the site. I had to be at work, on my top-end workstation, to read it; the video played smoothly there. Needless to say, I didn't take the product seriously.


> I've stopped counting the 5+ year old laptops that have had to be upgraded,

5+ years ago is that LOOOOOONG time the OP was talking about. It's also unfair to compare the technical capabilities of old hardware, for many reasons. I think the point was that new hardware, _while it's new_, is becoming more and more capable. Any new laptop today, even a budget one, can handle YouTube videos in HD. The problem is that HD today won't be the same HD that exists in 5 years (i.e., 4K), and it's to be expected that a budget laptop today will struggle with the 8K technology that comes out 5 years from now. This is an old problem (pun intended) and should not be surprising.

> Because personally, I keep having performance problems on all laptops I have.

Selection bias. Programmers who compile code, run VMs or containers, and process tons of data are not the average consumer laptop use case and have much stricter requirements. Many people sit in Facebook, YouTube, Gmail, or Google Docs for most of their day -- and likely inside Chrome.

Where are the "Chrome is Flash for the desktop" posts?

The idea that Electron is a meaningfully different user experience for the vast majority of users seems, to me, skewed by developer usage.


I don't know, 5+ years isn't that old anymore for a computer. Like, 5 years ago I was running... a Core i7 with 4GB of RAM. And now I'm running... a Core i7 with 16GB of RAM. The only things in computers that have really gotten significantly faster are SSDs and GPUs.


I meant 1080p when I said full HD.

It takes a surprising amount of power to decode. The cheap CPUs in netbooks have been struggling with it for a decade, especially in battery-saving mode.

Lately they've been getting hardware acceleration just for that: special CPU instructions and drivers just to decode it decently.


For YouTube in particular, they're sending VP9+Opus where the browser supports it, without considering hardware acceleration. The rather anaemic Atom chips might have H.264 decoding on-chip, but only Kaby Lake has VP9.


'Where are the "Chrome is Flash for the desktop" posts?'

Ehhh, you're in one, I think?


Different people have vastly different use cases. I haven't experienced a performance issue with desktop Slack or any other (non-game) application for many years. Sure, if I look at how much RAM Slack is using I'd probably conclude that it's more than it needs to be, but I haven't had an issue with RAM usage for, gee, probably ten years.

As for battery, again, my laptop battery has been little more than a UPS for at least five years.


It's mostly CPU performance problems I experience. Rendering any video, GIF, or website takes sizeable CPU power (for a cheap CPU in power-saving mode).

Never had issues with RAM. I can remember a few friends who bought netbooks with 2GB of memory some years ago; they quickly realized that they simply could not run their development environment in that. (I'm talking swap death, where a click that should take 1 second to act takes an entire minute.)


Right now I'm using a backup laptop with an Intel Celeron. It runs most apps fine. Web sites like Hacker News come up easily and snappily. The CPU fan kicks in when I watch a high-quality movie. It also kicks in on small web apps. Kicks in big time on YouTube and Facebook. Everything on those is sluggish compared to my native apps. Even if I were non-technical, I'd know stuff was about to slow down when I went into my browser.

Just bad tools leading to waste that users notice. Best to avoid them if possible. Not always possible...


> I've stopped counting the 5+ year old laptops that have had to be upgraded because they can't play a full-HD YouTube video smoothly.

Are they all netbooks? Because those were crap the day they came out and even more crap 5 years later. You can find terrible discount desktop machines that can't handle anything just as easily as you can find terrible laptops.

My laptop is 5 years old and the only time I feel a lack of performance is when the Swift compiler fires up. Your example link came up right away (OK it burns away 30% of a core, but I got 4 of those).


They are not crap by any standard, they are simply low spec.

A Core i3/i5 starts at $150; most devices don't have these expensive CPUs.

Even a $1000 MacBook from 5 years ago would have one of the first i3/i5 chips. It would struggle to play 1080p video unless plugged into the wall with the fan spinning hard.


Just an anecdote, but my 2011 Lenovo X220 with an i5, running macOS Sierra, plays this site in Chrome with around 30% CPU use; everything is still snappy.


> It's been a LOOOOOONG time since I worked on a laptop where I experienced noticeable performance problems.

This annoys me a lot, for one of two reasons. Either:

1. Some developers have no idea what performance means -- it's crazy that I have time to notice a spinner when a glorified IRC client starts up or switches tabs, let alone have time to watch it for tens of seconds.

or:

2. If these developers are claiming they have no performance problems, clearly the laptops that can handle modern applications are being hoarded, and I have no way to obtain one of these magical machines. All I can get are recent i7 processors.


3. Battery life is excluded from "performance".


On point 1: Also there's still the effect where your app does the same thing a native app would do in an equally reasonable time but uses twice as much power to do it.


I agree with both of these.

With the power that even average machines have now, it beggars belief that we ever see a spinner at all for normal desktop-related stuff.


>It's especially true with a chat system where the most important feature is that everybody on the team can get access.

That's a whole other story... All that big open source movement for decades, and in 2017 we are still depending on proprietary systems to provide chat(!) for a company.


Yep, and when the government doesn't properly tax those harm-to-public-good things like air pollution to balance the incentives and make it cheaper to do "the right thing" and pollute less, companies (which are made of real people and are not just unthinking machines) will do whatever they can to maximize profits. This is why it's so important to, e.g., increase the price of water in the Central Valley of CA to reflect its true cost, instead of letting tons of farmers grow water-intensive crops in the open air, wasting tons of water just because it's cheap (then they complain, completely un-ironically, "is GROWING FOOD a waste of water?!?"). Many businesses subsidize their operations using government funding, yet ironically most of the rich white men running them are staunch opponents of public assistance programs, despite owing most of their wealth to similar mechanisms, typically in conjunction with massive tax avoidance.


Pretty funny that the only reply here is 'found the Marxist' (maybe it was supposed to be a joke?) when, in reality, all you're suggesting is that the government set the conditions for the proper functioning of the market, which is an idea straight out of Hayek.


I don't understand. The cost is mostly externalized into, quite literally, massively distributed increased electricity consumption. That an individual's power bill may be subsidized only emphasizes that problem.


It sounds like we agree. To the extent we're just talking about electricity costs, then it's true that easily written, but inefficient code may indeed externalize these costs to the extent that users are unaware of the added power consumption, or that the user herself does not pay the full cost of that electricity.

But these electricity costs struck me as only a small part of the broader point I was responding to, which is why I framed the point the way I did.


I contend that squandering the work and infrastructure involved in converting natural resources into electricity is the main problem with modern consumer software development. Any user's (lack of) perception of waste is irrelevant. The scope of the problem is not any individual literally paying a couple dollars more on their electric bill, but rather it's that gigawatt hours are literally being burned away because software developers who know their code will be running on machines the world over can't be assed to spend a little more time learning to practice their craft correctly.

There is no broader point than converting natural resources into societal infrastructure. Be a responsible member of society. Don't obstinately ship wasteful code. I understand if you legitimately don't know any better, but if you're part of the community that's constantly writing blog posts about computers being so fast that it's OK to burn the end users' CPU and storage just so you don't have to spend a couple more minutes thinking about what you're doing, you're adding harm to the world.


Again. I think we agree. You're just making a different point than the one I was making. I was making a technical but hopefully still constructive point about what is an externality and what is not.

You seem more focused on making a case against shipping inefficient code. And your point is a good one. It's just a little difficult to suss that out, since you're framing it as standing in opposition to my related, but very different, observation about externalities.


The vast majority of users (like >99%) don't have a clue why their computer isn't performing up to par. They literally have no idea how any of it works. They might just assume that it's slow because they haven't deleted spam emails in a while (even while they use Gmail) or didn't pay extra to get that 3.2GHz machine instead of the 3GHz one. They certainly don't have any idea how to open up top and check the stats on what resources a particular app is using, so until that happens and users punish developers for this kind of stuff, expect it to continue.
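For what it's worth, what "checking what resources an app is using" looks like under the hood isn't magic. Here's a minimal, Linux-only sketch that reads `/proc` directly (the same data `top` shows) with no third-party tools; field names like `VmRSS` come from the kernel's `/proc/[pid]/status` format:

```python
# Minimal sketch (Linux-only): list the processes using the most resident
# memory by reading /proc directly, like a tiny one-shot `top`.
import os

def top_memory_processes(n=5):
    procs = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            name, rss_kb = "?", 0
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("Name:"):
                        name = line.split()[1]
                    elif line.startswith("VmRSS:"):
                        rss_kb = int(line.split()[1])  # value is in kB
            procs.append((rss_kb, name, pid))
        except OSError:
            continue  # process exited while we were reading it
    return sorted(procs, reverse=True)[:n]

for rss_kb, name, pid in top_memory_processes():
    print(f"{name:20s} pid={pid:>7s} rss={rss_kb / 1024:.0f} MiB")
```

Run this with an Electron app open and its renderer/GPU processes tend to sit near the top of the list, which is exactly the kind of feedback most users never see.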


> The vast majority of users (like >99%) don't have a clue why their computer isn't performing up to par. They literally have no idea how any of it works.

Same thing for Web developers.


Yes, you'll often hear things like "I think I have a virus" or "I think I need a new computer". Their computer is fine; they're just running slow apps.


They need a new computer because they don't know how to make their old one work properly.

Getting a new computer will fix the problem though. :-)


I don't disagree, but I'm hopeful that Electron will get better. When you choose an open source platform with a diverse group of major companies using it, part of what you are betting on is that the platform will a) survive for a reasonable amount of time and b) gradually improve.

GitHub, Slack, Spotify, Microsoft, etc., are all using Electron. As their developers gain experience with the platform and as they experience problems with resource usage, I would expect to see the platform improve. Maybe I'm wrong.


On top of that, webassembly could see some improvements across the board.


I agree. I think it also has the potential to help motivate developers to write native UI rather than use electron.

Currently, if your primary target platform is the web but other platforms are still important to your market, going Electron might make sense, because you really don't want to rewrite your business logic natively for each platform.

WebAssembly allows us to use the same native libraries for all our business logic and data models, which, besides being more performant, means we only need to write the UI natively on top of those libraries. We've actually taken (very) old desktop C++ code, compiled it to asm.js, run it in our web app, and then rendered its outputs with WebGL onto a canvas, with surprisingly successful results. This makes the prospect of WASM becoming standard across the board very exciting.

Now let's hope we don't decide to replace our native desktop app, which uses this old C++ library, with an Electron app of our web app running the asm.js-compiled library :).


The future, where we write in native languages, that are compiled to Javascript, run in a webbrowser VM, and power desktop applications. Jesus wept.



It's not that they are native languages. They are languages with little runtime overhead; GC is no longer necessary, for example.


I've switched from Sublime Text 3 (native) to VSCode (Electron) and I couldn't be happier. If I didn't know it's electron based I wouldn't have been able to tell as a user. YMMV so don't generalise.


It halves my battery life compared to Emacs or BBEdit.


If your battery is more important, just stick to Emacs.

What machine are you using? I have a 2 year old ThinkPad; it still gets 10h+ at full brightness with VSCode, compiling, etc. And it's the screen brightness that consumes the most battery.

It used to do 20h, but the main battery is external and can be replaced, so that's a good thing. I would really recommend ThinkPads to everyone instead of going for MacBooks, for example.


VSCode doesn't compete with emacs and vim. VSCode competes with Sublime Text and Atom. Maybe even with WebStorm on the more full-featured end and Notepad++ on the leaner end.

I'm not even sure whether you're trolling. BBEdit provides a tiny subset of the features VSCode offers. VSCode is not a text editor, it's halfway between a text editor and a traditional IDE.


When did you switch? Atom was damned near unusable around 1.0, and VSCode was only a little better.

The difference to now is night and day.


Atom and VSCode are in different leagues with respect to performance, and the difference is growing with time. If you think the reason that Atom is slow is because of Electron, VSCode is the counterexample.


If VSCode isn't slow because of electron, what's the reason it's slow?


VSCode isn't slow. Atom is slow.

Since both are written with electron the difference must obviously be the actual implementation, not the platform itself.

Keep in mind VSCode was written by Microsoft who have decades of experience writing IDEs and text editors, whereas Atom was written by GitHub who are mostly working on the GitHub product.


> VSCode isn't slow. Atom is slow.

Could have fooled me.


VSCode runs like a dream on my hexacore/8GB/SSD desktop, but brings my Atom/2GB/32GB eMMC-based Asus EeeBook X205TA to its knees. Native programs for low-power systems seem sound practice to me.


I first tried switching to Atom around 1.0 and it was a horrible mess.

I just switched to VSCode last month and it's been as nice to use as ST3 (actually even nicer because ST3 didn't have any code intelligence).

I've also used IntelliJ (or rather WebStorm) several times over the years and it always felt too sluggish and obnoxious.

For the record: I'm on Linux and have 32 gigs of RAM. So maybe I have lower standards for memory use and performance.


Everything in the world of web-stack development has turned externalizing costs onto users into a core first principle.


How about PWAs, Service Workers, WebAssembly? Thousands of developers are working hard on improving the UX and reducing battery usage, wasted CPU cycles, network traffic, and latency.


All of this because they're externalizing costs onto users through SPAs and other go-to practices that force all the work onto users' devices.


it's been that way since forever. if you're not employing expensive top-ranked brilliant engineers writing aggressively optimized, completely bespoke c applications tuned to every cpu model's feature set and performance characteristics... you just might be externalizing.

insofar as it benefits the budget, you should probably externalize as much as possible. consumers will provide the fitness function through deciding what products they prefer.


We're not even discussing aggressively optimized C vs web stack. We're talking about a massive segment of today's software developers not even bothering to learn competent C (or Obj-C, Swift, Java, whatever) to build a native application. Hell, it's about the very practice of calling Electron apps native to begin with. The same thing continues to hold across web stack everywhere—reach only for those things that don't require you to learn a new language/paradigm, letting you keep churning out the same thing you build in a browser and call it native for platform X, and let it be on the user to pony up the resources to use it.


consumers will provide the fitness function through deciding what products they prefer.

What about when your users don't have a choice?


hm... performance metrics, possibly? it may be hard to measure.


Well users/clients/bosses want everything, now, and for free so...


Free things give continuous hassle. People in business don't want hassle. They pass on free unless they're very inexperienced.


Yes. You just described modern computing.


Computing forever. We use C to avoid writing assembly. We use Java to avoid writing C. We use Python to avoid writing Java.

The world is asking for more and more software, and the resources to provide it are not keeping up, so people are taking shortcuts.

Want something better? Someone has to pay for it.


C is generally faster than hand-written assembly, so it doesn't make sense to write software in assembler.

For macOS, desktop applications are written in Objective-C, which is C with fast message passing and doesn't give up much speed. Swift is the modern alternative, and it doesn't trade away speed either.

For Linux, applications are traditionally written in C with Gtk or in C++ with Qt. Both options are very performant.

For Windows, the main language for a long time was C++, and it remains supported. There's a movement to .NET, so Windows is an outlier here, but .NET is generally very performant; it makes some tradeoffs for safety, but it has enough features to stay fast, and its implementation is specifically tuned for desktop applications.

The only terrible platform with a slow language is Android, and it's well known for its lag.

There's very little desktop software written in Java or Python, and usually those are specialized applications where users don't really care about the experience, but rather about the functionality.

So the JavaScript invasion here is unexpected and unwanted. I won't use any JavaScript desktop application if I have a choice. I don't like this technology. JavaScript and V8 were made for the browser, with advanced sandboxing capabilities. That's fine. But on the desktop that's just not needed, and there are no other advantages. The UI is terrible and does not conform to any standards. Performance is not good. Memory consumption is abysmal. Energy consumption is abysmal as well.

If I'm about to buy an application for macOS, I always carefully inspect its bundle and try to determine which technologies were used. Unless it's pure Objective-C/Swift, I usually won't buy it. I hope more users will do the same.


>C is generally faster, than hand-written assembly, so it doesn't make sense to write software in assembler.

That's true now. It wasn't true then. The point still stands.


And it's not even true now for certain workloads; each tool for its job.


Yeah, but Python still doesn't seem to have a great cross-platform desktop GUI, as far as I can find. I'd prefer to code entirely in Python if I could, but I haven't liked a single GUI library I've tried for Python.

Although, to be fair, I still haven't really tried PyQt, but I don't like the idea of having to buy a commercial license for it.


PyQT is alright. Kivy as well.

Or anything in any other language.

But none of them come close to the benefit of being able to carry the ton of experience from Web UI over to the desktop.

Eventually every GUI toolkit ends up with a custom MVC framework, a client/server architecture, some kind of db for persistence, its own implementation of asynchronous events and communication models, and a declarative layer to create the UI without code. For the most advanced, this layer separates structure and layout.

Well, guess what: this is what the Web has natively been doing forever.

Since the web is now the most popular platform, with millions of libs and tutorials on it, people just reused that. It just makes sense.

The problem is not the concept. The problem is that we should have driven this effort with a standard to sanely close the gap between the desktop and the web, so that you don't have to spawn a freaking browser-engine OS for every one of your apps.

But no, the web is the only platform with a standard. And it flourished while all the big players created closed gardens with shitty proprietary APIs. And this is the result.

Have you not seen Jurassic Park, for god's sake? Life finds a way.


> But none of them come close to the benefit of being able to carry the ton of experience from Web UI over to the desktop.

Web UI is a gazillion shades of shit, please don't dump that rubbish on the desktop. Thank you.

Seriously, how can one take Web UIs, the most cumbersome, unreliable, inconsistent, unreactive UIs, as examples to be followed? That's beyond me.


Haha, "a gazillion shades of shit". So true.


I do strongly disagree with the web having the only strong standard, especially since it switched to a living standard.

I admit that for someone coming from the web, Electron is a godsend, but to be honest, the responsiveness of the applications leaves something to be desired.


It's completely false. The web at the beginning was a huge mess of static pages, hacked CGI scripts and no interactivity at all with the need to submit a form to trigger a full reload of the page. Everything was absolutely synchronous and a lot of times the database was accessible directly from the public interface and some pages did actually have the connection string directly in the HTML for everyone to see. I'm really not sure in which alternate reality you have seen web apps doing all that forever.


Yep, completely agree with everything you said. I'd love for there to be a standard on the desktop side, but until then, I'll keep using a hodge-podge of technologies depending on the project, I guess.


They should have adopted Qt; it makes it easy to write native C++ code that compiles on different platforms, so you get the performance and RAM usage of C++ but still most of the benefits that web coding offers, without so many drawbacks. (It's also quite nice to work with IMO.)

It's too bad it wasn't more universally adopted, by any of the 3 major platforms (including Linux, where the all-C Gtk+ has become the standard for the most part). Instead, it seems to have found its greatest success in, ironically, small embedded devices. Devices like this simply cannot take the performance hit of something like Electron.


> Python still doesn't seem to have a great cross-platform desktop GUI, as far as I can find.

PyQt is literally the best cross-platform desktop GUI going, in any language.


The last time I tried Qt it was a huge pain compared to WPF or other solutions, admittedly something like 5-6 years ago. And it was proprietary and needed a licence. With all my good will, I find it quite difficult to believe that PyQt is now the silver bullet for writing all UIs. It's certainly not for me, given that I find Python a pretty average language with the huge handicap of duck typing (and before someone starts: yes, I'm aware of type annotations).


> The last time I tried Qt it was a huge pain compared to WPF or other solutions, admittedly something like 5-6 years ago.

Shrug, I found it much nicer than anything else I'd used, but I've never used WPF (which is single-platform in any case).

> And it was proprietary and it needed a licence.

Neither Qt nor PyQt is proprietary in the usual sense of the word (nor were they 5-6 years ago). If you're using a non-standard definition it would probably be more productive to use a different word.

> For sure it's not for me given that I find python a pretty average language with the huge handicap of duck typing (and before someone starts, yes, I'm aware of the 'type annotations')

I'm a huge fan of type systems. I wish I could find a UI framework that's anywhere near as nice as PyQt for an ML-family language.


If you really want to use Qt from Python you may try PySide, which is LGPL-licensed. I'm not sure whether it restricts commercial use.


Take a look at https://github.com/dddomodossola/remi, which transpiles Python into HTML (kind of)


I can't help but think there's a different flavor to this than with higher-level languages. This is adding a platform on top of another platform. The same objections existed years ago with Java when Swing was released. Java is a cross-platform high-level language but Swing is basically an entirely new platform.

It's this platform on top of a platform that is objectionable from a performance, memory, storage, and integration perspective.


We see the same thing with containers. Docker, flatpak, snap... It's just the right time for it.

Languages have evolved to change the way we handle constraints like memory, speed, readability, expressivity, etc.

We are arriving at the peak of what languages can bring to the table. Sure, we can improve things here and there, but the huge challenges now are integration, packaging, distribution, updates, communications, multi-tier architectures and all that.

So we now tweak platforms to help us with that.

But because we didn't see that coming, it's not done in any structured way. It's done exactly the way we did everything since the beginning of computing, by stitching together stuff then hitting hard on it with a hammer until the job is done.

This is not new. IT is a joke of an engineering field. We hack everything, don't think about the future, and then end up living with the status quo. It has always been like that.


I agree. Containers should be unnecessary -- all that they could provide could be done at the process level with an operating system designed to isolate computing resources appropriately. But operating systems were not historically designed for that so another (somewhat ridiculous) layer is added on top.


Actually, all containers do is utilise facilities already designed into the OS, like LXC in Linux, to form containers. Containers are not a platform on top of Linux; they are a wrapper around different isolation tools built into the kernel.

https://en.wikipedia.org/wiki/LXC
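One concrete way to see that, sketched in Python (Linux-specific; the helper name here is made up for illustration, and it simply returns an empty dict on other systems): every process already sits inside a set of kernel namespaces, and a container is just a process placed into fresh ones.

```python
# Peek at the kernel namespaces the current process belongs to (Linux only).
# A "container" is just a process whose entries here differ from the host's.
import os

def current_namespaces(proc_ns_dir="/proc/self/ns"):
    # Each entry is a symlink like 'pid:[4026531836]'; the bracketed number
    # identifies the namespace. If this isn't Linux, report nothing.
    if not os.path.isdir(proc_ns_dir):
        return {}
    return {name: os.readlink(os.path.join(proc_ns_dir, name))
            for name in sorted(os.listdir(proc_ns_dir))}

print(current_namespaces())
```

Two processes in the same container print matching namespace IDs; a containerized process prints different ones from the host shell.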


That's all containers are?

They are an abstraction over OS level isolation.


IT is a joke of engineering for non-engineers. If a pull request doesn't follow principles agreed a priori, it doesn't get merged. If some people like to play "IT cowboy", hacking stuff together without any process or unit tests certifying the behaviour of what they have written and jeopardising the entire team's efforts, that is not a failure of software engineering; it's a failure of that specific team. And please bear in mind that I have worked in such toxic environments, but I never thought for a moment that software engineering and software architecture are jokes. The joke was the team/organisation I was in at the time.


Alan Kay might have something to say about all this.


>Computing for ever. We use C to avoid writing assembly. Use Java to avoid writing C. Use Python to avoid writing Java.

Yeah, but until Electron and the like, we seldom shipped desktop apps in anything other than C, C++, Delphi, etc., even after all those decades. Which are all as close to the metal as can be. And in fact C/C++ can be as fast as, or even faster than, hand-rolled assembly most of the time (with few exceptions), so the whole premise is moot.

The few Java desktop apps that were around, people used to hate as memory hogs.


Actually, I've been shipping desktop apps in Objective-C for roughly a quarter of a century.

Gets me the high-level, Smalltalk-ish productivity when I want it (most of the time) and the low-level C efficiency when I need it (sometimes).

Having my cake and eating it, that's me :-)


But can you make a website with the same knowledge? Can you make it portable to other OSes? Can you reuse 20 years of knowledge, resources and libs? Can you hire 10 experts tomorrow to help you with it?

Quality of the tech is NOT the drive for success here. You are missing the point.


>Quality of the tech is NOT the drive for success here. You are missing the point.

Well, maybe it's better to miss the point than to succeed by selling crap to people who deserve better?

When are techies gonna stand up for quality of tech?


Oh, I do agree; it's just that the debate is all about criticizing Electron instead of providing a viable alternative.


> When are techies gonna stand up for quality of tech?

When the user notices a quality difference?


People used crashy, buggy, slow software for years. Photoshop and Office lost your data on a regular basis in the 2000s. A Windows BSOD was a common occurrence then. We didn't see a massive exodus to Mac products because of that. The only reason people started to go crazy for Apple was after the iPod came out. And even then, it was still a small part of the market.

You can see every day that people favor cheapness, easiness and convenience over quality. You would not have so much junk food otherwise.


>You can see every day that people favor cheapness, easiness and convenience over quality.

What I'm saying is "it shouldn't matter" what people favor.

Professionals should still favor quality, even if their customers would just as well have crap (or are OK with crap when it's all they can find).


> But can you make a website with the same knowledge?

One of my first commercial projects was a web-content management system written in Objective-C. Customers included Siemens and the German Bundestag.

Another couple of projects were written in WebObjects. If I wanted to, I could use Cappuccino, but I am not a big fan of web/client apps, so I don't.

> Can you make it portable to other OSes?

This product ran on: Solaris, AIX, NeXTStep, Linux, OS X. I think we also had a Windows port.

> Can you reuse 20 years of knowledge, resources and libs?

In the sense you meant it: yes. Except it's more like 30 years. However, programming skills are (or should be) transferable. With virtually no previous experience, I became lead/architect on a Java project, which succeeded beyond anyone's imagination.

> Can you hire 10 experts tomorrow to help you with it?

Is this a serious question?


This is a bad faith answer.

>One of my first commercial projects was a web-content management system written in Objective-C

You certainly didn't use any of your Cocoa widgets for the UI there. It was HTML + CSS.

> This product ran on: Solaris, AIX, NeXTStep, Linux, OS X. I think we also had a Windows port.

Yeah, GNUstep for the GUI on Windows... This is what you think could be an argument for Electron users?

> In the sense you meant it: yes. Except it's more like 30 years.

Again, bad faith. The world has way, way more code, snippets, tutorials and docs about HTML + CSS + JS than about any tech based on Objective-C.

Programming knowledge is transferable, but knowledge of the ecosystem is not, and acquiring it is always the most time-consuming part.

> Is this a serious question?

Oh yes, it is. Because, you see, we are living in an era where it's hard to find any good programmer at all for anything. They are all taken, and they are very expensive.

So basically, with a tech limited to one ecosystem, finding them will be even harder and even more expensive.

The simple fact that you are pretending it's no big deal (while any company will tell you otherwise, so much so that the GAFAs are spending millions just on their recruitment processes) illustrates how much of a troll you are.


> This is a bad faith answer.

It most certainly is not. You just don't know what you're talking about and keep making up new stuff when confronted with actual facts that contradict your fervently held beliefs.


I'm curious how you get Smalltalk-like productivity in Objective-C. I thought Smalltalk's productivity comes from its live programming environment?


No, it also comes from its expressiveness and flexibility (at runtime).

It's not like Smalltalk is a bad language that just happened to have a productive live programming environment.

It's one of the best languages out there, and conceptually stands alongside Lisp et al.


You bet? I have 3 programs open right now that use Python for their GUI: my RSS reader, my torrent downloader and Dropbox. And I have many more installed on my machine.

But the thing is, even when I write something for myself, I first write a command-line app, then a web service. Never a GUI, because it's such a pain.


To be fair, your "Python GUI apps" are a very thin wrapper around C/C++ libraries.


Of course. Electron is just a layer around the compiled Chrome engine as well. If you want anything to display a matrix of rapidly changing pixels, you need low-level performance eventually.


Of course, but in your case, the C/C++ libraries you're calling into aren't doing any heavy lifting, they're just making calls to an X server (or equivalent) or perhaps to a graphics card. There's no reason GTK (or the GUI portion of Qt) couldn't be implemented in Python, it's just a huge undertaking and it was started in C (or C++ as the case may be).


>And in fact C/C++ can be as fast as, or even faster than, hand-rolled assembly most of the time (with few exceptions)

This is generally true, but to be fair, the reason is that we design CPUs differently these days. Modern CPUs use instruction sets that are specifically designed to work well with compilers, and aren't meant to be programmed in hand-coded assembly except for a few critical bits deep within OS code. Older CPUs weren't like this.

It still might be possible to write hand-rolled assembly that beats modern compilers, but you probably need to have seriously super-human mental abilities to do it.


> the reason is because we design CPUs differently these days. Modern CPUs use instruction sets that are specifically designed to work well with compilers

You got the causality wrong. Assembly-programmer-friendly CPUs died because CPUs which weren't as friendly were faster and cheaper; those same CPUs were instead more amenable as compiler targets.


There are probably, line for line, more VB6 desktop apps in existence than in any other language. And that's not just because of how verbose VB is...


Hey, I used to use it! :P I came from a BASIC background. VB6 was supposedly good for rapid prototyping of GUI apps (esp. the CRUD variety). I found it would boot up in 1 second, deploy a new project in 1 second, and load a new app in 1 second. It was also safer, so no constant blue screens over common functionality. It could also wrap foreign code in less safe languages, which I could still write in an industrial BASIC. One could also export the GUI to code in a different language.

Became one of my favorite toys. I'd still use it for GUI prototyping if it was FOSS and kept getting extended. I found even lay people could learn it well enough to get stuff done. Long after, I learned what horrible things lay people did with it. Yet, they got work done and got paid without the IT budget and staff they would've preferred. (shrugs)


C# has been the default way to write Windows apps since the early 2000s. It is no more "closer to the metal" than JavaScript.


>C# has been the default way to write Windows apps since the early 2000s.

No, it really hasn't. It was just the way Microsoft proposed that businesses write bloated internal enterprise apps, what they used VB for before.

Those are not the same as desktop apps -- and no, or very very few, desktop apps ever turned to C#. Not even MS's own apps, like Office, and surely nothing like Adobe's or countless others.

>It is no more "closer to the metal" than JavaScript.

Actually it very much is: it is statically typed, it has scalar types and contiguous memory allocation that allow for much better speed (hence the effort to add some of those things to JavaScript via asm.js and the like), and it even has AOT compilation.

Besides, it's not JS itself that's the problem (though it took millions and top-notch teams to make it fast): it's the web stack on top of it. C# just runs on a thin CLR VM layer -- and the graphics are native.


> Those are not the same as desktop apps

I mean, if you're going to say Windows Forms and WPF apps are not "desktop apps" then you're going to have to do a lot more than just declare that they aren't.

> Actually it very much is: it is statically typed, it has scalar types and contiguous memory allocation that allow for much better speed (hence the effort to add some of those things to JavaScript via asm.js and the like), and it even has AOT compilation.

You're just listing ways that they are different. They both run in a virtual machine that abstracts away the actual machine. You know, the metal in the phrase "close to the metal."


>I mean, if you're going to say Windows Forms and WPF apps are not "desktop apps" then you're going to have to do a lot more than just declare that they aren't.

Windows Forms is a wrapper on top of the MS Win32 API. And WPF is also based on wrapped native widgets (with some extended with managed code).

In any case, C# apps are not much represented among the majority of Windows desktop apps, most of which are written in C++ or similar, and surely all the successful ones. Can you name your successful C# desktop apps? (Not in-house enterprise apps and no developer tools, please; there, where the users have no choice, even Java does well.) I'll name the successful C++/Delphi/native/etc. ones and we can compare our lists.

>You're just listing ways that they are different. They both run in a virtual machine that abstracts away the actual machine. You know, the metal in the phrase "close to the metal."

A call to a native drawing lib that doesn't pass through 10 layers of abstractions and bizarro architectures is as good as a direct native call. Especially from something like C# that runs circles around JS performance.

But even so, few consider JS to be what makes e.g. Electron slow.


As far as I know there are plenty of XNA games running on both PC and Xbox in C#. And that is pretty much the worst application you can use C# for because, you know, latency. I don't see any real blocker to complex C# apps on the desktop, apart maybe from the quite shitty ClickOnce and the continuous need for an upgraded .NET Framework to use new features. But now for UWP apps the default is C#, they can be installed directly from the store, and with Roslyn you basically only need to target .NET 4.5 to have all the features of the latest version of the language. And this is a huge win that admittedly JavaScript already had because of transpiling. If I had to write a commercially distributed desktop application nowadays I would for sure use C# or F#, not JavaScript.


The comment I was responding to was:

> Yeah, but until Electron and the like, we seldom shipped desktop apps in anything other than C, C++, Delphi, etc., even after all those decades.

So things aren't any different than before. We've just replaced non C/C++ abstractions that were written by the platform-owner company to non C/C++ abstractions that are written by open source projects.

This seems pretty much in line with the general industry trend towards the adoption of open-source software.


>So things aren't any different than before. We've just replaced non C/C++ abstractions that were written by the platform-owner company to non C/C++ abstractions that are written by open source projects.

This statement doesn't even parse.


A large portion of Visual Studio is written in C#.


By "large", you mean some of the UI using WPF, the plugin system supporting C#, and some of NuGet? Not that those are small projects, but considering what is inside Visual Studio, they are hardly "large portions" of it.


We use Java because Python proved to be too slow.


And because the IDE support (refactoring, etc.), compile-time error checking and ease of use more than compensate for Java being a little bit verbose.


Yeah, and they used C because the compiler could optimize stuff the JVM couldn't, but now it can. And now Python gets type hints, so you can have the IDE tools you have with Java, like with PyCharm. It's the circle of life.


Not quite correct about type hints. They are only in Python 3.

Everyone who adopted Python 2 for a sizeable codebase is likely stuck there forever, with zero annotations and none of the new tools available, and they'll never be backported.


You can use type hints with Python 2 in the form of special comments (http://mypy.readthedocs.io/en/latest/python2.html#type-check...). Guido has been working on the type-hint project at Dropbox specifically to help them migrate 2.7 code to 3.

But let's be fair: type-related tooling in Python is not close to what you have in Java yet. It's just that eventually everything comes around. Java got faster. C++ easier. Python... toolier? Etc.
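For readers unfamiliar with the comment form, here is a minimal sketch (the function itself is made up for illustration). The annotations live in comments, so the file stays valid Python 2, while a checker such as `mypy --py2` can still verify it:

```python
# PEP 484 "type comments": annotations that work under Python 2 because
# they live in comments. mypy reads them; the interpreter ignores them.

def scale(values, factor):
    # type: (list, float) -> list
    """Multiply every element by factor."""
    return [v * factor for v in values]

print(scale([1, 2, 3], 2.0))  # [2.0, 4.0, 6.0]
```

Passing, say, a string as `factor` would run fine at runtime but be flagged by the checker, which is the whole point of gradual typing.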


Java got faster and got some functional features.

Python broke all backward compatibility and put all existing sizeable software in a miserable deprecated state with the break of Python 3.

I don't recall C++ getting easier. The few tools and IDEs still fail at decent refactoring and code completion. The C++11 movement added a few more-or-less-useful things, piling on top of the vast amount of already existing complexity.


C++11 makes a LOT of things much, much easier. Yes, it does pile on top of existing complexity, because they're loath to eliminate any backwards compatibility, but the nice thing is that you don't have to use the older features or ways of doing things. In fact, if you look at multiple serious C++ codebases, it'll almost look like they're different languages, as every project basically chooses a subset of C++ it accepts. Realtime embedded code doesn't look anything like desktop application code, for instance, but they're both technically C++.


And as far as I remember, even Google only supports Python 2. Their Python-to-Go transpiler doesn't support Python 3, for example (unless something changed in the last months and I missed it).


Most of the C++ annoyances have already been fixed by Dlang, which interfaces very nicely with C/C++ libraries and code.


And because packaging everything in a jar is easier than pulling 1e3 dependencies for every deployment. Not to mention dependencies that also require a C/C++ compiler, Boost, or other native libraries.


Yeah, and some use C because Java is too slow. The point is, there is nothing new here; the history of computing is repeating itself. It's just that now we have better toys, a bigger market, and the stakes are higher.


Mmmmm, steak.


I'm French so my English is sometimes funny. I didn't mean a piece of meat from a weed-smoking cow, obviously.


Your English was flawless there; he was just making a joke because "stake" sounds like "steak", and he likes steak.


I wrote "steak" in the first version. I edited it back to "stake" after his/her comment.


FWIW, it is a mistake a native speaker could have made.


Mmmm... misteak


You were completely correct and idiomatic, goatlover just wanted to throw in the pun.


And this is the reason that I've become such a huge fan of Rust... back to the metal, high level interface like Java.


Rust doesn't bring much in that respect. You could use C++ for performant high-level abstractions for many years before Rust. Rust brings memory safety, and that's huge. But it's nothing like Java; it's much harder.


Actually, I think it's very similar to Java in terms of what is being offered and at what layer of thought.

It does have a steep learning curve, but it's worth it. The number of concurrency bugs I could have avoided if I had been able to use Rust years ago is sad to think about. Java has great concurrency tools, but doesn't do anything to make sure you're not shooting yourself in the foot.


Java also has JavaFX, which comes with an embedded WebKit browser. I can create my UI with React, or any other HTML/CSS/JS library, and make it interact with code written in Java, Scala, Groovy, Clojure, Kotlin, Ceylon, Frege, etc. very easily. I think this provides all the benefits of Electron, but is even more flexible and powerful.


Do you need to run N programs on N copies of Chrome with JavaFX, or a single VM like all JVM apps do? Because that's the problem with Electron mentioned in the article, and that's exactly one place where JVM languages are better.


Add Nashorn so you have a spare JavaScript VM, just in case...


Yet, this is not as popular as electron. Ask yourself why.


This is a terrible argument in and of itself; you would do better to flesh out why yourself instead of expecting everyone to conclude your self-evident correctness.


Too many JS/web programmers, that's why. Plus JS/web programming is way easier and more popular than most of the JVM languages.


I'd guess that there are a variety of things to consider. Do you actually have anything to say?


> code written in Java, Scala, Groovy, Clojure, Kotlin, Ceylon, Frege, etc

Your list of 7 JVM languages (both here and in your earlier comment on this submission) seems to be from most widely used to least. Yet in your HN comment from 2 days ago at https://news.ycombinator.com/item?id=14068664 you ordered that list differently, i.e. "Java, Scala, Clojure, Groovy, Kotlin, Ceylon, Frege, etc". Have you changed your mind about the relative adoption of Clojure and Apache Groovy in the last two days?


Not really. Actually, there is a recent survey showing that Groovy is the second most popular language on the JVM, behind Java. Myself, I use Scala, and I would like to learn Frege.


Groovy is a quite lovely dynamic language and it's actually the best solution for BDD, using Spock. Sadly, I still haven't found anything comparable for BDD in any of the languages that I use.


> Groovy is a quite lovely dynamic language and it's actually the best solution for BDD using Spock

There's something wrong when a testing framework hacks into a language parser to make statement labels carry special meanings the way function names do, and overloads the `|` operator in expressions so that data appears as a table in the source code. "Lovely" isn't the word for that sort of thing.


It's a DSL. And it works very well, partly because of these operator overloads.
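Spock itself is Groovy, but the flavor of the trick carries over to other languages. A toy Python sketch (the `Row` class is invented here for illustration, not part of Spock) shows how overloading `|` lets test data read like a table:

```python
# Toy illustration (not Spock) of a table-style test DSL via `|` overloading.
class Row:
    def __init__(self, first):
        self.cells = [first]

    def __or__(self, other):
        # `row(1) | 2` appends another cell, so a line reads like a table row.
        self.cells.append(other)
        return self

def row(value):
    return Row(value)

# Reads roughly like Spock's "where:" data table: input | expected
cases = [
    row(1) | 2,
    row(2) | 4,
    row(3) | 6,
]

for case in cases:
    n, expected = case.cells
    assert n * 2 == expected
print("all cases pass")
```

Whether hijacking an operator this way is "lovely" or "something wrong" is exactly the disagreement above; the mechanism itself is ordinary operator overloading.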


We use some hipster language and act like we're doing it just to be ironic when actually it's a super cool language.


hehe, you remind me that Crystal, Nim and Rust were invented to avoid writing C

¯\_(ツ)_/¯


>There are costs to developing desktop applications and developers don't want to pay them, so they make users pay for them

That is definitely not the whole story. Costs are shared between developers and users. If it's more expensive to develop an app, you can bet it's going to cost users more too.


Well, why would any developer or organization doing development "want" to pay those costs, as you suggest they should?


Because those costs grow by orders of magnitude as they're externalized. $1000 spent during development can offset $1M in energy wasted by a million users. So they should want to pay those costs as responsible members of a civilized society.

And if they don't want to, they should be forced to by said society.
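The arithmetic behind that claim is worth spelling out; all figures below are illustrative, matching the $1000-vs-$1M example above:

```python
# Back-of-envelope: a tiny per-user externalized cost, multiplied across a
# large install base, dwarfs a one-time development cost. Figures illustrative.
users = 1_000_000
wasted_per_user = 1.00     # dollars of energy wasted per user (assumed)
dev_cost = 1_000           # one-time optimization spend (assumed)

externalized = users * wasted_per_user
print(externalized)              # 1000000.0
print(externalized / dev_cost)   # 1000.0 -> each dev dollar offsets $1000 downstream
```

The leverage ratio is what makes the externality argument bite: even if the real per-user waste is a few cents, the multiplier across millions of users stays enormous.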


To make a quality product and offer a better user experience?

Users may not notice the slowness right away, but if they run a bunch of applications at once they probably will, and they're more likely to notice it while running on battery.


THIS ^


> ... Electron applications are shite in comparison with proper native applications.

Is that true from a user's perspective? The average user I know would not have the idea that Slack is somehow inherently worse than, say, MS Word or any other truly native app they use every day. What would Slack gain by integrating better with my Mac?

Slack starts up too fast for me to read the little quotation (penultimate Macbook Pro). I actually wish it was slightly slower because I like those cheesy quotes when I can catch them. Everything feels almost instant. Not bash instant, but as fast as any mainstream messaging app.

I leave it running all the time and get what feels like normal hours-upon-hours battery life. There are apps or browser tabs that I've noticed destroying battery life, but neither Slack nor Atom nor VS Code is on that list for me.

The anti-Electron complaints all kind of feel like the same argument that comes up as each new layer of abstraction gains acceptance. I'm sure there's a lot that Electron can and should improve. Running multiple copies of Chrome does sound awful, but until I read that I didn't know the atrocities it was committing.

Just doesn't feel productive to try to make Electron go away vs. working to improve it or create a better cross-platform abstraction.


> Is that true from a user's perspective?

It is for this user.

> What would Slack gain by integrating better with my Mac?

Far better resource management, one would hope, for starters. As the article pointed out, when you're measuring your IRC client's memory use in hundreds of megs, there's a problem.

It isn't uncommon for me to need to choose which "essential" apps to turn off to get something done on a maxed-out MBP. So Slack gets turned off, and maybe someone gets annoyed that I don't respond to something, while I run a few VMs in Slack's former memory space.

As for other things, well, using a platform's capabilities as appropriate is generally considered good form - HCI concerns and, generally, people just like things to be consistent. I don't know why we're supposed to forget this now.

I personally don't care if Electron goes away or what, but to the extent that I'm forced to run Slack by my cow-orkers' choices, it sure would be nice if it were less of a bucket of ass.


Recently I was doing some MEAN-stack development; RAM was maxing out at 8 GB and the system was struggling to run smoothly. Mongo alone was hogging around 1.5 GB, webpack was hogging 1 GB, and three separate Express.js servers were hogging 500 MB each.

And then there was Slack.


Finally an argument that is concrete and understandable! Seriously, this is the first comment I've read that is tangible as opposed to theoretical. All the other complaints are on the basis of being morally offended by the ratio of resource usage vs. application functionality.

The nice thing about Slack is that it will run just about anywhere. The app on my phone magically knows if the app is not running on my laptop, and then and only then sends me notifications. But I'm not suggesting this as a solution for you -- the best productivity killer I know is having to pay attention to yet another device while trying to get work done.

Depends on your environment, but I have one very effective and successful coworker who just flat out refuses to use Slack. The number of companies where that is acceptable is probably limited. Unfortunately.


I know I am really late to this, but one of the things I have been meaning to try is running the web apps behind these Electron apps with [Fluid](http://fluidapp.com) instead. I think this may give me the perks without some of the drawbacks. I am not really sure, but it is an experiment I would like to try out.


I've never heard of Fluid before, but it seems to be a glorified shortcut with minor features such as notification badges. I really can't tell how this can integrate with Electron at all.

Electron would have you write a web app and package it as an app installed to your Applications folder just like any native app. You'll write this code in JS, HTML, CSS and use Electron APIs.

Fluid also would have you write this in JS, HTML, CSS but use Fluid APIs instead. The two compete for the same role in that sense.


With Fluid, you don't do anything from the web developer's perspective. The Fluid app just packages a website as a separate application. So, what this means is that any user can take a website and make it a separate app if they want.


> I personally don't care if Electron goes away or what, but to the extent that I'm forced to run Slack by my cow-orkers' choices, it sure would be nice if it were less of a bucket of ass.

For what it's worth, since you seem to like IRC clients so much better, Slack has a built-in IRC bridge: https://get.slack.help/hc/en-us/articles/201727913-Connect-t.... It works just fine, in my experience.


From experience, using the bridge means pinging the admin on Slack about it, and convincing them that it's even worth doing, and then being stuck in "The IRC Room" instead of the other rooms in which people are actually communicating.


Allow me to humbly suggest:

https://github.com/wee-slack/wee-slack

I use it extensively for everyday Slack, where I don't care much about inline media and inline formatting. And for a text client, weechat + this plugin handle it pretty gracefully.


> when you're measuring your IRC client's memory use in hundreds of megs, there's a problem.

Not really, unless you're using most of your RAM or you treat this as a fundamental principle. I've never checked my Slack RAM usage, and I'm sure it's higher than if they built a great native app, but I've also never had a performance problem with Slack or any other application (okay, except for Eclipse, but I stopped using that when Android Studio came out).

> It isn't uncommon for me to need to choose which "essential" apps to turn off to get something done on a maxed-out MBP

And there's the key. You have a very specific use case that is probably extremely rare. I'm a full-time developer and as far as I know I've never been low on RAM on my first generation Retina MBP.


People are less likely to be spinning up VMs, but they may have far less RAM to start with, and the more apps they use that are wasteful of power and RAM, the more likely this becomes an issue for them.

It is truly strange to hear being resource-constrained described as a special case in computing.


Last time I taught undergrads (last semester) their computers were far from anything as good as we'd have. If I asked them to run anything remotely demanding, it really taxed their laptops.

We really do need to be more mindful of resource usage. I'd rather write my own lightweight clone of something I want than use an Electron-based hog. There are menubar-only apps that come with an entire Electron dependency.


Why is that strange for personal computer usage? How common do you think it is for users to experience a performance problem caused by low memory? I'm genuinely surprised that we could apparently have such opposing intuitions. My intuition is that it is exceedingly rare.


Your intuition is badly formed because it ignores how crappy real people's computers are. The world is full of really bad computers that people don't upgrade, because a computer is a low-priority item for them and they have lots of more important things to put their money towards, like food, shelter and medicine.

Further, their machines are full of software that runs all the time for no reason, including but not limited to multiple redundant antivirus products trying to scan everything in real time in a vain attempt to prevent the next malware infection from taking hold.

Various services like file system indexing and virus scans run at inconvenient times and render things slower than before.

Laptops are super prevalent because their portability is more important than power. It's not at all unusual to keep using the same machine for 5-8 years as long as it still works.

Real people have bad computers with bad specs, and in a massive number of cases the browser is already using a significant portion of the entire computer's resources.


The electron apps I've seen don't really have much UI. As soon as you build an app with some complexity it will start being a problem.

As a native Mac developer I can drop in things like a predicate editor for defining filters or queries. These come with Cocoa. I can put in sophisticated table views and tree views. I can connect all of this easily to Core Data, which loads objects from a database on demand without my having to write any code.

You see these sorts of shortcuts in the Electron apps I've seen. They don't have a proper GUI for preference configuration like any Mac app, as they can't get that for free. Atom doesn't have a regular file-open dialog. I use that a lot, dropping in a file to jump to a particular location I have open in Finder (the file manager).

My preferred editor, TextMate, has very few development resources behind it compared to Atom, yet it has a far richer GUI. You get a GUI for creating and editing plugins, not just editing config files. You get rich native UIs for selecting fonts and colors. You don't have to write font names and color names in some config file.

There is a certain irony in the claim that stuff like Electron saves you cross-platform code, when what is usually not cross-platform is the GUI, and the Electron apps I've seen have very little of it. Try making something with the complexity of Keynote, Pages or Numbers in Electron and I think the lack of a comprehensive set of prefabricated GUI components will start to make itself felt.

Not to mention the numerous native APIs which exist which you have to duplicate, e.g. for vector graphics, animations, 3D graphics, audio, video, font handling, OpenCL.

How about people with disabilities, or internationalization? You get great native tools for that; how do you accomplish it in Electron without reinventing the wheel?


> The electron apps I've seen don't really have much UI. As soon as you build an app with some complexity it will start being a problem.

Well, VSCode certainly seems to contradict this.


The GUI in VS Code is basically: a file tree view, an editor area, the command palette overlay, a status bar, and the debugger panel/area.


I use a Mac mini for my dev work. Almost every time it freezes out of the blue, the culprit is some HTML-based "app", frequently with an nwjs process using 100% of the CPU and huge amounts of RAM.

This includes Chrome and Electron-based stuff.


I actively don't use and discourage people from using Slack where I can because of how poorly it runs.


What about Slack webpage?


Same issues. The bloat is atrocious.


And because it's a closed-source, expensive implementation of IRC.


That's not great, but I can live with that.


With no fuss history


Honestly, though, the in-client search sucks. I'd much rather search archived text IRC logs through some web interface or locally than try and find something from 6 months ago within Slack.


I totally agree with you. I only use Slack from the web app, but I'm a heavy user of VSCode and Spotify. I believe in standardized Web APIs rather than a clusterfuck of native APIs.

The problem here is the base platform: Electron and the underlying Chromium/Node.js.

But as consumers we feel it when our browser lags, so browser vendors optimize speed over memory and CPU, caching the shit out of everything. The Web is a dangerous place, so they further isolate every tab as a process, sandbox them, and keep lots of copies of the same thing in RAM, because a security flaw is a lot more shameful than a memory flaw.

You are right, Electron is a hog. But last I checked, Electron, Chromium and Node.js were all open source. We can actually make a difference.

Making noise also makes a difference. When someone complained about VSCode's CPU usage from the idle blinking cursor and it blew up on HN, the next month's release had a fix (for all platforms).

The truth is you can't move fast doing native development with different libraries. Electron keeps your dev costs down and allows you to move very fast.


> You are right, Electron is a hog. But last I checked, Electron, Chromium and Node.js were all open source. We can actually make a difference.

Won't that encourage more companies to cut corners? How can producers of quality products compete when other companies can release prototypes that their paying customers finish for them?


There's the argument "Why should I do it? If I'm paying for it then they should do it"

And there's, "This really sucks, it's an open source project, I think I can make a difference that would affect millions in a small way but would still make a big impact overall."


So use a cross-platform toolkit then like Qt, WxWidgets, Tk, etc. You don't need the bloat of a browser engine.


seriously. and skype is a grand example of how shitty a native client can be.

tired of this argument. if you put in the proper effort, your users won't care or notice.


Is Skype native?

Except for the very old 4.x Linux version - which is native Qt - and comparably old Windows ones, I thought all of the recent versions were web-browser wrappers (of one sort or another) with some occasional native widgets around them.


(Can't edit the post above.) But, of course, Skype's issues aren't because of its UI's nativeness. Its UI layer is... well, tolerable. Heck, I'd be happy if a crappy UI were all I had to complain about in Skype.

But I don't think the UI implementation has anything to do with why we "love" it so much. I mean: missing calls and messages every other week, random desync between clients, and all the usual rituals that every other group call starts with (you can hear them accompanied by chants of "can everyone hear me?", "$name, are you here?", "tell me if you see my screen" and "let me drop the call and restart").


i think i was still overreacting to prove a point. everything you're saying is right.

my point is, even if i concede native is "better" or whatever, the difference should be negligible for most apps for all users (let alone "most" users).


You are of course correct. It's possible to make awful native applications, and good non-native applications.

The sliders mean that you will trade performance off against ease of development. It will be easier to ship a basic Electron/web-stack app than a native one for multiple platforms. However, it will be harder to do this in a high-performance and well-integrated way.

I went through all this with Cordova back in the day when trying to build cross-platform mobile apps. I was able to ship apps quickly, but at the expense of quality – it's okay to do this so long as you are aware of the trade-off.


yep definitely. and there is definitely a cost and effort required to make Electron feel as good as native. depending on your feature requirements, this can get really hairy.

i'm just exhausted by the "web apps feel so much worse than native!1" argument, which is an oversimplification and not a rule.


Skype for Linux is a web app. It is a lot worse than the old Qt client.

Are the other platforms native? I thought they switched to WebRTC in 2015 to be able to just use a simple web-window wrapper for all platforms, not just Linux.


The newer Skype versions on Windows are web-based.


Slack behaves perfectly fine for me and I don't have incredible hardware. It's non-Electron apps that are typically mentioned by macOS' energy management system as "consuming significant energy", in my experience.

Edit: Mass downvotes; lol ... Just stating my personal experience!


> Electron applications are shite in comparison with proper native applications. They fail to integrate with the host platform, they are slow, they hog memory and drink power.

Are they though? The two applications that use the most energy on my Mac - by far - are Steam and Skype. Steam still has trouble with HiDPI and freezes when performing various UI interactions. The number of problems with Skype are uncountable.

I'm currently booted into Windows for work, looking at my current process stats, the top memory consumers are:

* Visual Studio (hodge podge of all kinds of things, 800MB)

* Chrome (215MB)

* Microsoft Intune (presumably native, 114MB)

* GitHub (.NET WPF application, 108MB)

* Explorer (native, 103MB)

* Search Indexer (native, 107MB)

* Lync (who knows, 98MB)

Meanwhile, the supposedly terrible Electron apps:

* Spotify - 58MB

* Slack - 93MB

* VS Code - 60MB

As far as interfaces go, Spotify, Slack and VS Code easily outclass GitHub, Visual Studio, Explorer and Lync in usability.


Each Chrome (and Electron) app instance is a group of processes. You are probably just looking at the main process, while the bulk of RAM/CPU use comes from the renderer processes.

Here are stats on my (Linux) box:

* atom - ~500MiB (one window)

* slack - ~816MiB

* chrome - ~935MiB (two tabs + hangouts)

* google music electron app - ~500MiB


Nope, those figures were after adding up all the processes. What I have noticed in switching between platforms is that applications tend to report a far lower memory usage in Windows than on Linux or OS X.

Might just be an accounting difference. Forked-process applications in particular are very difficult to account for, because even their private RSS may be copy-on-write (COW) shared with another process.


My figures exclude shared memory and are calculated as resident minus shared from /proc/<pid>/statm.
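
For reference, this is roughly that calculation, sketched here against a sample statm line rather than a live /proc read (field layout per man 5 proc; the 4096-byte page size is an assumption — real code should query the system's page size):

```python
# Sketch: per-process private memory, estimated as (resident - shared) pages
# from /proc/<pid>/statm. Field layout (man 5 proc):
#   size resident shared text lib data dt   (all counts in pages)

PAGE_SIZE = 4096  # assumed; use os.sysconf("SC_PAGE_SIZE") on a real system

def private_bytes(statm_line: str, page_size: int = PAGE_SIZE) -> int:
    fields = statm_line.split()
    resident, shared = int(fields[1]), int(fields[2])
    return (resident - shared) * page_size

# Example: 50000 pages mapped, 30000 resident, 10000 of those shared
print(private_bytes("50000 30000 10000 500 0 20000 0"))  # (30000-10000)*4096 = 81920000
```

Summing this over all of an app's processes gives the figure described above, though it still undercounts memory that is shared only among the app's own processes.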

If you are using Windows 10, your missing Atom processes will be under Background Processes in Task Manager. For the sake of the argument, I just did a fresh install of Atom and this is what I see on the first run: https://i.imgur.com/0ZRSumF.png. ~220MiB (no files open, zero extensions).


That's really interesting! I have an up-to-date Atom install that I've been adding plugins to for about a year now (up to like 50 or so), which has been running for at least 12 hours (Windows 10) and currently has seven ~500-line files open.

Adding up the private memory of all the processes (7 of them) gets me 194MB.


I installed the 64-bit version, you might be using 32-bit.


No it looks like i'm running 1.15.0 x64.

It might just be a difference of platforms.


At least if one crashes it won't bring down the Chrome stack.


VS Code has way fewer features than Visual Studio, especially for C#/.NET development. VS Code is a minimalistic IDE: very useful, especially for JavaScript development, where tooling is quite minimal anyway. But let's not think that VS Code outclasses Visual Studio.


VS Code is not an IDE in the classic sense of the term - an integrated development environment. It's a text editor with some debugging extensions. I don't think it is even comparable to Visual Studio at all.


That line is getting really blurry with both Atom and VSCode. I haven't used VSCode, but the only feature I recall missing in Atom right now is refactoring; and to be fair, the last time I used a real IDE (Xcode), I couldn't refactor there either.


For me, refactoring is something that is nice to have, certainly, but not what defines an IDE. The debugging and development facilities are what matter most.


A lot of that is available in "text editors" like Atom (and I assume VSCode). Of course, it depends on if someone wrote a plugin for your language of choice.


IMO it qualifies as an IDE for Typescript, and borderline with Javascript — slightly better than WebStorm in the former case and slightly worse in the latter — but not so much other languages.


I agree with you, but there doesn't really seem to be any native application that occupies the same space as VS Code or Atom. VSCode is not just a text editor like vim or Notepad++, as some people here imply. As you say, it's a miniature IDE. And it's far more comprehensive than Sublime Text.

For .NET Core + TypeScript, VS Code is almost at feature parity with full-blown VS, while being an awful lot faster. The only thing I find particularly lacking is debugging, but even that is coming along well.


I agree. I use VS Code because it's the best choice I have for TypeScript development on the Mac. However, I still miss Visual Studio while using it. The debugger especially makes me want to cry, though that might have more to do with Chrome than with VS Code's front end for it.


> As far as interfaces go

You missed this qualifier in the parent comment.


You need to compare apples to apples:

The right benchmark for VSCode is not Visual Studio, but Notepad++ (5.9MB on my system right now).


The right benchmark for notepad++ is not visual studio. It's notepad2.

Notepad2 = 1792 kB.

P.S. Pay attention, this is kilobytes, not bytes :D


But it's impossible to do a real apples-to-apples comparison, because I use Atom more like an IDE than I ever did Notepad++, and Atom has orders of magnitude more features for me than Notepad++ ever did and most likely ever will.

I don't care about a 175MB difference in RAM between the two, as one of them (Atom) is infinitely more useful to me than the other (Notepad++).


> * Spotify - 58MB

Maybe for one of the processes, but on Windows Spotify usually needs at least three processes to run (five if you count the Web Helper and Crash Service, which are probably native code). On my machine the three main Spotify.exe processes take up at least 170MB of RAM, often more. Although I wasn't aware they were using Electron, as their app has a standard, native Windows menu bar.


Another one for your list: Nylas Mail.

We straight up would not have shipped it without Electron, and the CPU it uses to sync is on par with apps like Apple Mail & Thunderbird.


Nylas Mail

> Nylas Mail - The best free email app | Nylas - The best free email app

What exactly makes it "best"? It looks to offer nothing more than other "best" mail apps.


Hard to answer your question without hijacking the thread, but here goes:

• It's got pretty much all the power features out there like Snoozing, Open Tracking, Send Later, Reminders, Enriched Contacts (i.e. Rapportive), Unified Inbox, Swipe Actions, Templates, etc.

• It's open source and super easy to extend with JavaScript plugins. Developers have made dozens of themes and some cool plugins, including PGP, Unsubscribe, Translation, Todoist, Trello, Markdown, etc.

• It's cross-platform for Mac, Windows & Linux with custom UI styles for each.

• It works with all mail providers including Gmail, Yahoo, iCloud, Outlook and even vanilla IMAP and on-prem Exchange servers.

• It syncs your data directly (not via a cloud service) for speed and security.

• It works offline, so you can use it on a plane or when you don't have WiFi.

• It's open source GPL available on GitHub with >20k stars: https://github.com/nylas/nylas-mail

• It's free.

It's also still improving and has over 800 GitHub issues and we would love help from anyone who wants to make email better! :)


There are other comments in here comparing react-native to Electron. Do you know if you could build Nylas Mail at the same pace with react-native? Will the binary size/RAM usage drop significantly on react-native?


I haven't yet seen a substantial desktop app built with React Native and afaik neither FB nor GitHub is investing in React Native for desktop so generally this is hard to say. React Native is more of a framework whereas Electron is a runtime-- much different goals though both are super cool and I'm enthusiastic about the future of both!


I'm holding off my downvote to see if i can get a straight answer out of you. Nylas Mail bills itself as the best email app. Them's strong words, but maybe the claim holds up? Let's see!

I see from screenshots that Nylas has folders and labels. Can i use either of these in the following fashion?

- i can have a tree structure of them

- an email can be in two separate folders/labels at the same time

- folders/labels can be configured to learn which emails to automatically sort into themselves, based on the email contents, by dragging and dropping the email into or out of them

Ball's in your court.

E: Bonus round! In this screenshot there's only 6 emails in the list: https://www.nylas.com/static/img/nylas-mail/hero_graphic_mac... Is there a way to get a list of emails where each line is actually only a line of text tall?


Some quick answers

• If by "tree structure" you mean a folder hierarchy, yep that's supported. I think we have a current bug with dragging nested subfolders but we're working on a fix. (Surprisingly >99% of users have a flat hierarchy.)

• A thread can certainly be in two separate folders (e.g. Inbox and Sent) but an individual message can't be in two folders at once. In that situation there are two copies on the actual mail server. For Gmail/Gsuite this is possible via labels where any thread can have an arbitrary number of labels. We support both systems.

• "labels can be configured to learn which emails to automatically sort into themselves, based on the email contents" -- this is a really cool idea and something we've talked about internally. AFAIK there is no cross-platform mail client that does this today beyond things like manual Gmail filters. It could also be an interesting plugin that anyone could build on NM. We have a Slack chat room where folks discuss stuff like this if you're interested: http://slack-invite.nylas.com/

• And for your bonus round (haha) yes there are 2 different ways to configure the UI. One of them is 3-pane with a reading mode like Outlook, and the other is 2-pane that navigates like Gmail. http://i.imgur.com/Lt0x7O4.png

Also in 3-pane if you make the message list wide enough it will switch into the compact version: http://i.imgur.com/SaGp9eV.png

(Obviously it will show your real mail data. We have a "screenshot mode" for sharing stuff like this without revealing sensitive information.)


That's worth upvotes for the effort alone, thanks. :)

> Surprisingly >99% of users have a flat hierarchy

You tend to end up with it only after really long-term usage. All the folders i have with sub-folders only got them after they grew too big to be just one, e.g. "Perl coding stuff" has several subs, as do "Shopping", "Clients", "Computer Game Emails", etc. Some clients have additional subs. All started out as single folders though.

> threads, not singular mails

Ok, fair enough.

> labels auto-learning by drag&drop ... AFAIK there is no cross-platform mail client that does this today beyond things like manual Gmail filters

Opera M2 has done it extremely well since ~2000. Google Inbox does it... eh. On mobile and PC, none, right. The filtering is honestly super easy to implement: it's a Bayesian filter. In older email clients those were used to filter out spam. Opera M2 simply gives each folder one (user-configurable) and runs all the filters on each mail that comes in.
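
The core of such a per-folder filter really is tiny. A hypothetical sketch — a multinomial naive Bayes classifier with Laplace smoothing, trained on the mails the user drags into each folder (folder names and sample mails are invented for illustration):

```python
# Hypothetical sketch of per-folder Bayesian routing: train on mails the
# user files into each folder, then score new mail against every folder.
import math
from collections import Counter, defaultdict

class FolderFilter:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # folder -> word frequencies
        self.mail_counts = Counter()             # folder -> number of mails

    def train(self, folder, text):
        # Called when the user drags a mail into a folder.
        self.word_counts[folder].update(text.lower().split())
        self.mail_counts[folder] += 1

    def classify(self, text):
        vocab = {w for c in self.word_counts.values() for w in c}
        total = sum(self.mail_counts.values())
        scores = {}
        for folder, counts in self.word_counts.items():
            # log prior + sum of Laplace-smoothed log likelihoods
            score = math.log(self.mail_counts[folder] / total)
            denom = sum(counts.values()) + len(vocab)
            for w in text.lower().split():
                score += math.log((counts[w] + 1) / denom)
            scores[folder] = score
        return max(scores, key=scores.get)

f = FolderFilter()
f.train("billing", "invoice for your payment due friday")
f.train("hiking", "trail map for the weekend hike")
print(f.classify("second invoice reminder payment overdue"))  # billing
```

A real client would train incrementally on drag-in and drag-out events and only auto-file above a confidence threshold, but the scoring is essentially this.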

And to be fully honest here, i still use Opera 12 as my main browser, along with its mail client, and don't see myself jumping ship... anytime, really, since for me the combination of mail client and browser is key. However, to respect an email client i expect it to at least match Opera M2 feature-for-feature.

Not interested in Slack. If you had an IRC channel though i wouldn't need to sacrifice a chicken and a CPU core. :)

> UI

Ok, that looks fine. I personally prefer to have the email below the mail list, but that's not a huge thing. Maybe an option to consider. Screenshot mode is cute. :)


You can actually join the chat via IRC/XMPP. No chicken sacrifice required! :P https://get.slack.help/hc/en-us/articles/201727913-Connect-t...

I haven't tried Opera M2-- I'll check it out. Might be a fun hackathon project to train a Bayesian filter on every folder and auto-suggest routing at least.

There was a big IBM Research study a few years ago that showed it's dramatically more efficient to search email than to categorize messages into folders. Here's a link to the full paper: http://people.ucsc.edu/~swhittak/papers/chi2011_refinding_em...


The Mac is a third-class port for this software. It never got optimized like on Windows. Not a fair comparison.

With Electron, every OS is a third-class port.


http://imgur.com/a/Ofyei

Chrome kills me. :(


Stop using it. Both Firefox and Edge have come a long way. I recently dropped Chrome and have not looked back.


For me (I open a lot of tabs), Chrome is unusable without 'The Great Suspender' extension


Most of the Electron apps use multiple processes so you'll want to add up all their instances.


And simply adding up all the processes will count all the shared memory multiple times (which greatly inflates the "final number" well beyond actual usage).
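
On Linux, one way around the double counting is to sum PSS (proportional set size) instead, which divides each shared page evenly among the processes mapping it; recent kernels expose it in /proc/<pid>/smaps_rollup. A rough sketch, parsing a sample string rather than a live file:

```python
# PSS splits shared pages among all processes mapping them, so summing PSS
# across an app's processes avoids counting shared memory multiple times.
# Parsed here from sample /proc/<pid>/smaps_rollup-style content.

def pss_kb(smaps_rollup_text: str) -> int:
    for line in smaps_rollup_text.splitlines():
        if line.startswith("Pss:"):
            return int(line.split()[1])  # value is reported in kB
    raise ValueError("no Pss line found")

sample = "Rss:  204800 kB\nPss:   81920 kB\nShared_Clean: 122880 kB\n"
print(pss_kb(sample))  # 81920
```

Summing pss_kb over every PID belonging to the app gives a fairer per-app total than summing RSS.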


Steam always used an HTML renderer, even in its first incarnation. Nowadays it uses the same CEF (Chromium library) as Electron does.

So Steam was one of the first "Electron" apps. An even earlier one was Windows Explorer, as of the shell update that came with Internet Explorer for Windows 95 (included by default in Win98). All the sidebars of Explorer were HTML-based.


No, VGUI is not HTML.

Here's a fun one. Start Steam with `-dev` and hit F7. Widget factory VGUI edition!

Oh also, https://developer.valvesoftware.com/wiki/VGUI_Documentation


Have I talked about VGUI? No.

Valve initially used a very obscure/niche HTML rendering engine for Steam (2006). The company/website behind it isn't online anymore. An older version of the wiki had some brief info, but all of that info has vanished.


Stop spewing bullshit.

Here's an old revision from 2005 by a Valve employee confirming Steam used VGUI back then.

https://developer.valvesoftware.com/w/index.php?title=VGUI_D...


It talks about the Steam overlay. Parts of the Steam application were always HTML: first the little-known HTML renderer from a defunct company, then Trident, and later CEF.

search for HTML: http://www.plastic-warfare.com/SteamUIGray.zip

Funny how the old things stay online. Notice also the cyber cafés menu entry. http://www.steampowered.com/status/game_stats.html


>A very first one was Windows Explorer as of Shell update that came with Internet Explorer for Windows 95 (included by default in Win98). All the sidebars of Explorer were HTML based.

That's a stretch; X/SG/HTML user interface APIs are not the same as a whole browser with a JavaScript VM, a full networking and security stack, a full-featured/standards-compliant (X)HTML/CSS rendering engine plus support for legacy features, UI assets, multimedia support, sandboxing, resource caching/persistence, and so on.


Win95 with the shell update, up through Windows Me and 2000, had the full Trident engine (same as IE 3-5.5) in the shell (Active Desktop, Explorer bars, etc.). Windows Me/2000 could play audio and video previews in the sidebar (all HTML-based).

Also WinXP used a forked Trident engine with some removed features for "Software" dialog and various other features (Windows Help, etc).


If you basically agree with the parent post, why do you say it's nonsense?

> The underlying issue here is that Electron reduces the barrier to entry for cross-platform development.

Precisely.

> The trade-off — and there is a trade-off — is that Electron applications are shite in comparison with proper native applications.

But native applications are shite in terms of portability.

> But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support

See the part where the original post said "it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies". The costs may not be "prohibitive", but they certainly would multiply effort and resources, and divide profit.


The "nonsense" part was that "nobody" was writing desktop apps for the past 5-10 years.


Well, they were exaggerating, but I think they had a point: until recently, on my desktop machines I only cared about having a web browser and a terminal (+ Steam at home).

Lately there are more UI-ish apps I value having everywhere, including the desktop (Spotify, HipChat, WhatsApp, VSCode). I'd also love a decent cross-platform podcast app.

I think it's clear there's now more demand for certain types of consumer-ish desktop apps (chat apps & music apps especially) than there was a few years ago.


> They fail to integrate with the host platform

Just as a counter-point, because native app fans often make this point as though it is universally recognized to be a good thing.

I don't want apps to integrate with the host platform. The host platform is not the thing I care about. I use several host platforms in different contexts (I have work and home computers and a smartphone, they all run different OSes) and I would prefer that Slack look like Slack and not have buttons in different places with different UI interactions just because that's the way Reminders.app works.

For me, the web is the host platform I care about. It's the one that I can use anywhere and only have to remember the URL.


I doubt you don't care. These are examples of "integration with the host platform":

* Text selection

* Caret behaviour (e.g. Option-arrows on macOS)

* Clipboard

* Spell check

* Open/save dialogs

* File system access

* Drag/drop

* Window management

* Accessibility (screen reader support etc.)

* Standard right click menus

* Indexing (e.g. Spotlight on macOS)

* etc.

You may be thinking of native UI idioms, which even Apple threw out the window several years ago.

Electron apps are mostly very good at the things in the above list, because the Chromium web renderer has spent years abstracting the mechanisms needed to feel native where it matters.

Non-native toolkits such as Swing and Qt also spent years trying to achieve native look/feel, mostly through emulation and host OS detection, and they still feel pretty crappy compared to Electron apps.

Slack, Spotify and friends do a good job of inventing their own "web but native-feeling" UI. An example of the exact opposite is Google Docs, which still, for all its technological impressiveness, feels like a crummy Swing app trapped in a web page. For example, Google Docs renders its own right-click context menus, which look and feel nothing like native context menus. Google Docs' mini-apps also have a menu bar and a toolbar, but it's part of the host window, so you get two levels of menu bars and tool bars.


Qt done well comes a lot closer to native look and feel than any Electron app I've ever used. The problem is that a minimum-effort Qt app falls into the uncanny valley. Slack is clearly foreign, but it's polished.


Google Docs is a lost fight. Word processors and spreadsheet editors need to be proper desktop applications.

A web site inside a browser cannot match that, and all the hacks that try don't come close.


While true, Google Docs' collaboration capabilities are fantastic enough that it's worth using anyway. We use it all the time for work, on anything from tiny scratchpads to big documents.

To do the same in Microsoft Office, you need to dick around with OneDrive and/or SharePoint. The last decade or so, I've only touched Office when someone sends me some .xls or .ppt file and I'm being lazy and just want to view it.


You can use an old-school network drive to share documents in an enterprise, regardless of their nature. Easy and simple.


I am referring to Google's realtime collaboration features. Given a URL, you can enter the document and see it being modified in real time, as well as edit, annotate and comment in real time.

It's 2017, this is how we work now. My colleagues (literally) across the world are not going to connect to some shared NFS drive or whatever via VPN to store documents.


What if the shared drive is automatically mounted on your computer when you log on?

One drive for your personal documents only for you. One drive for your team only visible and editable by people in your team. One drive company wide with common stuff.

You can send a link to your colleagues and it just works! It doesn't support simultaneous editing, though. That's how things were done historically.

Google Docs is good for sending a document to a bunch of email addresses and having them view/edit it. It's terrible for writing longer documents with advanced formatting, pictures and diagrams.


I often get frustrated when discussions about user interfaces in the context of native/non-native don't distinguish between 'skin' and 'ui'.

For the most part I think what people care about is that things work as they expect, which is primarily 1) placement of UI elements, and 2) interaction with/between these elements. If that's done right, nobody cares if the UI is flat, dark, light, or has a leopard print background.

Now I do understand that there's some overlap in ui/skin concerns, but the distinction still seems crucial to me.

For example, the web is clearly not consistent on the 'skin' of things. But I often know where to find things based on their location (header nav menus, footer contact details, etc.), or general look (a loupe for search, a wide rectangle for inputs, some kind of wide rectangle with a doodad on the right-hand side for a drop-down). Or a combination of placement and look (a search field is an input field in the top right of a typical page).

Even lots of computer-challenged people I know seem to do pretty well in this regard.

But as you say, when it comes to interacting with elements, as long as the developer doesn't override 'native' behavior, a web solution can feel very native.

On the other hand, the vast majority of cross-platform native apps I use often look close to native, but their 'core' (inputs, selects, text fields, and so on) often feels off.

Honestly, I much prefer a non-native looking app that uses native UI elements over an app that has an 'uncanny valley' native look that is slightly off and UI widgets that don't behave natively.


I'm a little confused by your list... In what way are text selection, caret behavior, clipboard, spell check, open/save dialogs, drag/drop, accessibility, and right click menus not available in the browser?


They are.


Of your examples, I know Slack fully supports all of them except indexing and accessibility. For those two, I don't know because neither are of concern to me.


No, I don't care about most of those things. I don't know why you didn't believe me when I wrote it the first time :). "Caret behaviour"? I use a Mac 8 hours a day and I have no idea what Option-arrow does.

That clipboard shortcuts work the same is the only one that I'm used to enough to be annoyed if it were done differently.

There's something about Mac fans: they're very preoccupied with all of the details of how Macs work. I'm not criticizing that; you like what you like. But you shouldn't be surprised that I don't care about Spotlight indexing.


You don't select or navigate in text?

Shift+arrows — select characters

Option+arrows — jump between words or paragraphs

Cmd+arrows — jump to beginning/end of line or text

Shift+Option+arrows — select words or paragraphs

Option+Backspace — delete one word back

Cmd+A — select all

I actually used all of these except the last one while writing and formatting this comment! Plus clipboard shortcuts.

If you use vim in a terminal 100% of the time, none of those will matter to you because vim invents its own keyboard universe. But if you don't, I don't understand how you can have this opinion.

I get super annoyed with anything that somehow overrides these standard keyboard shortcuts, which happens surprisingly often. Non-native UIs typically have to reimplement them, because modern OSes have made the curious choice of not abstracting them.

I don't think it is a "Mac fans" thing. The exact same principles apply to Windows. Even to Linux, although the keyboard standardization there is next to non-existent. (I don't use Linux desktops often, but when I do, I get really frustrated that the terminals use Ctrl as a meta key instead of Command. So "copy" isn't Cmd+C, it's something like Ctrl+Alt+C.)


I use Cmd+A, that's it. It has a close analogue on every OS.

> If you use vim in a terminal 100% of the time, none of those will matter to you because vim invents its own keyboard universe.

I do, and this is one of the reasons I've never bothered with all of the details and shortcuts that you like.

Vim attempts to make the best possible text editor. It doesn't let "OS conventions" dictate what makes a good text editing experience. What you get from apps sticking to strict OS guidelines is a bunch of average -- not terrible but not inspiring -- applications.


Sorry if this sounds negative, but I'm perplexed why you would even involve yourself in the discussion when you don't have an opinion. It's like asking a bicyclist about how highways should work. They might go "I don't want any highways at all", but that's hardly useful to highway users (environmental concerns notwithstanding).


I accept that not everybody will feel the same way – I do care about my local platform. I want to hit 'space' when I have an item selected and see a quick preview. I want my documents indexed in Spotlight, I want to drag-and-drop files between applications, and I want all the rest of the UI niceties I am used to.


And on Windows, I want pressing Alt to highlight the first menu, and the arrow keys to move between and around menus.


1) Why use several host platforms if the platform makes no difference to you?

2) If you know a platform then you should have no problems knowing how to use it.

It would be ridiculous to have an app from Windows behave exactly the same on Mac OS just because you don't want to remember the difference. You don't want minimize and maximize buttons put on the opposite side from all other Mac apps because that is how it is on Windows. You don't want copy/paste in Slack to use Ctrl rather than the Command key because that is what you do on Windows.

3) Whatever time you save from doing everything the same across platforms would be wasted, for anybody not working cross platform who suddenly have to deal with an app with completely non-standard alien behavior. I want my standard mac hot keys to work in a mac app. I want preferences to be in the standard location. I want my color and font selectors to work the way they work all other places. I want drag and drop to work like in all other Mac apps.

We Mac users have seen this again and again. When companies don't give a shit about our platform, it is usually just a question of time before a competitor arrives which does, and knocks the other guy out. You don't survive that long ignoring the platform unless you've got some lock-in advantage.

Why else do you think people make a big point of an app being native Cocoa? It is because they know it sells better, because they know customers want the native well integrated experience.


> 1) Why use several host platforms if the platform makes no difference to you?

I don't; I use the Web for 90% of all the things I do on a computer. For precisely this reason, a Chromebook is one of the computers I use the most when not working.

> Why else do you think people make a big point of an app being native Cocoa? It is because they know it sells better, because they know customers want the native well integrated experience.

I think you're mistaken; the fact that so many companies are switching to Electron is evidence that it doesn't sell better.


> I think you're mistaken; the fact that so many companies are switching to Electron is evidence that it doesn't sell better.

Hold on a bit with that assertion.

First: which apps built on Electron are being sold, period? All the ones I'm aware of are open source, like Atom, or front ends to services, like Slack.

Second: which companies are switching to Electron for development? Again, all the Electron apps I'm familiar with are ones that started out that way. While I'm sure there's probably an app or two out there that began as a native client and then went to "let's just be a web wrapper," I don't know of any big ones offhand. (I've come across companies that have shifted their strategy to using true native applications, however. Facebook famously shifted their mobile strategy from HTML5/JS to native apps some years back, and I know of several iOS apps that were using "write everything in JS, it'll be great!" toolkits that switched to actual native AppKit.)

Third, and admittedly anecdotally: in both my experience and what I've consistently heard and read from people who've had the opportunity to study the UX of both native and "wrapped web" apps, users may not use the language of developers, but that doesn't mean they don't notice when apps are slow, resource-hungry, and behave kind of weirdly compared to other apps. I run a Slack for a writing group that's mostly populated by non-technical people, and it is not uncommon for users to complain about Slack "slowing down their machine." Just because people don't know the term "native app" doesn't mean they aren't going to be able to tell "this app over here is nicer to use than that app over there," and that might be because "that app over there" doesn't minimize properly, or has weird menus that put common things in uncommon places, or doesn't do what they expect when they right-click on selected objects.


Well, Slack is an example of an Electron app that is sold.


The app is free, and particular tiers of service are sold. Those are not the same thing.


> The underlying issue here is that Electron reduces the barrier to entry for cross-platform development.

Actually, the incentive is even stronger than this. The ability to visit a website and start using an identical version of the app immediately is just as important - I doubt that Slack and Discord would have had half the success they did if users had to download the application before using it, regardless of them being available on all platforms.

Discord has had the ability to give links to others to join a chat server since its inception, resulting in a two-step process to use it: click the link, type a name. This is miles less of a barrier to entry than: click link, download app, find downloads, install app, run app, create account, join server (rough process for most text/voice apps up to this point).


It's a difficult trade-off, even for me as a user. The fact that I can load Slack on all my devices and in the browser definitely played a role in my adoption of it. The fact that Slack is dog-slow and a memory hog on my MacBook Air, on the other hand, has kept me from using it as a central part of my own work. Similarly, I am actively pushing everyone I communicate with to switch from WhatsApp to Telegram, because the latter has a performant native app.

If I were to build an app where cross-platform support was crucial, I'd probably start with Electron as well because of time and budget concerns, and switch to native if the app 'takes off'. But on the other hand, I can imagine there being serious risks to building out an entire platform that way and having to rebuild it from scratch later on. Maybe there's never budget/time for it, leaving me locked in?

I suppose React Native could help in that regard.


Most people I know just use the app, though. Heck, I actually only discovered the Discord web client yesterday.


This issue is so furiously divisive, and I feel like it brings out the worst in HN. I'm from a web dev background. Some of the comments here make me feel like I should be ashamed of that fact.

I can see both sides of this argument (every time it comes up)

As a consumer of apps, I want the leanest, most minimalist, fastest thing going. I want native apps on my devices (If you think slack is a hog on the desktop you should try it on Windows Phone).

But as a developer, I know that Electron is a shortcut that means my app will take less time to build. I can take my existing skills, take work I've already done on a WinJS app, and publish it on Mac OS, Linux and Win 7. I can spend more time with my family, instead of spending all my evenings learning PyQt, or Xamarin, or React Native, or whatever the new-fangled thing is. And I know people will use it.

Hell, I've even got better odds of pushing an Electron app than a native one, as I can submit a pull request and maybe have it appear on https://electron.atom.io/

Obviously, seeing the Electron hate always gives me pause for thought, but at the end of the day it feels like the hatred is from fellow coders (if a dribbling front-end-JS-writing low-life such as myself can call you writing-assembly-on-a-napkin-while-you-quote-Stallman types fellow coders), and my app's user-base is overwhelmingly non-technical.


Honestly, it seems quite sad that developers today know only one (quite awful) language and have no interest whatsoever in learning new things, because they can just write an application (probably with a quality not much better than the aforementioned language) that can be used everywhere while taking a huge amount of unneeded resources. The thing that I like the most in software development is the ability to continuously learn new techniques and new, better ways of solving problems. I'm really afraid to think what would have happened if VBScript had won the race as the de facto standard for the web... but actually, thinking about it, it probably would not have been that different from today. We would have had VisualElectron instead of Electron, with all the same problems, in a slightly worse or slightly better language depending on your point of view.


> This issue is so furiously divisive, and I feel like it brings out the worst in HN. I'm from a web dev background. Some of the comments here make me feel like I should be ashamed of that fact.

I'm an embedded devices programmer and I'm proud of it. Proud to know a little bit about my hardware, and proud to get the most out of it. And I'm ashamed when I find a more efficient way to do something: it's not an optimisation but a bug fix.


Your non-technical users are capable of noticing that a certain application makes their computer slow or drains their battery or doesn't support a certain feature of their platform. It may take them longer to notice, and they won't know why, but that won't stop them from switching if a better alternative comes along.

Delivering high-quality web apps generally requires knowing JavaScript and CSS. Delivering high-quality desktop apps generally requires knowing the tools of that ecosystem. Sometimes we compromise to get things out the door, but we should be honest with ourselves when we do.


I'm honest with myself. I'm not disputing that Electron is less than optimal for creating a desktop app; I'm saying that it makes desktop development (particularly cross-platform development) faster and easier. I know I could make a more efficient app by going native. And hopefully, in time, I will. But for the sake of getting it out the door I'll most likely go the Electron route.


I meant that as a general statement, not a challenge to you specifically. I wish I'd phrased it more kindly, because dismissing criticisms as "hatred" and implying that non-technical users don't care about battery life are the sort of thing I'm advocating against.


> That's nonsense.

Proper this and proper that, and don't get me wrong, I agree, but the parent is correct: these apps wouldn't even exist if it weren't for Electron. It just wouldn't have been a consideration for them to be desktop apps. Instead they would have just been thrown up on the web. For example, VSCode would have just been some type of online IDE like Cloud9 or something similar.

If you think this is nonsense, you are out of touch, and I don't mean this as an insult, although I realize there isn't a good way to say that.


I think people are talking past each other.

Yes. Electron enables some new stuff. New stuff that wouldn't exist otherwise.

Yes. Electron is a bucket of bloat that saddles what should be small, simple apps with enormous amounts of crap that has nothing to do with the app's functionality.

Both of these can be (and are) true.


I think that's the real take-away here. There are upsides and downsides; I am pleased that it is easier to release desktop applications, but sad that their performance is so much worse.

I wonder if one of the issues is that so many developers have now worked almost exclusively in the 'web' sphere, and aren't aware that native development maybe isn't as difficult as they think.


Perhaps someone with both native and web development experience can set me straight on this; I have very basic knowledge of the former.

I've done the latter for a long time now, and only now that I'm trying to teach some people do I properly realize how much knowledge is needed to do it right. It's not just arcane knowledge of the quirks of CSS/HTML/JS, but also tooling, build steps, knowledge of 'expected' web behavior, frameworks, libraries, etc. Much of this complexity is still there, and often worse, if you go for vanilla JS and static HTML/CSS (in part because expectations of a web app are higher these days).

I started learning native iOS development and expected things to be much easier and more sensible, but instead I get the impression that it's not that different.

Xcode is apparently a piece of shit, and everyone tells me to avoid Xcode's interface builder. There's tons to learn about how a project is set up, as well as stuff that just isn't a concern for the web, like packaging it up and submitting it to the App Store (hours just figuring out how to correctly supply icons and get a certificate). More than once, as I'm following an online course, the lecturer will say something like 'this might seem like a logical approach, but DON'T DO THIS and do <unintuitive thing x> instead.'

Again, please correct me if I'm wrong. Perhaps when you put it all together it is significantly easier than doing the equivalent on a web platform. I'm just saying that I expected my initial foray into native, in particular Apple's 'walled garden', to be at least a little more like an actual garden rather than the chaotic, exceptions-for-every-rule (but kinda fun!) scrap-heap of the web I am familiar with.


On the plus side, this is an ideal situation to take advantage of - by creating native and fast applications, it becomes easier (compared to before when everyone was doing it "the hard way") to distinguish your product in a bloated market.


Not sure if that was ironic; it seems a sure bet most users won't notice and will gladly prefer the "bloated" product.

Yet to hear a single non-dev coworker complain about Spotify being "bloated".


Sorry, I should have been a bit more clear. I meant demanding applications.


In discussions like this it's important to try to distinguish between a priori reasons and post hoc justifications.

The real reason that parties like Slack and Spotify choose Electron is because it's easy for the devs they have, who only really know JS/HTML.

It's not that they couldn't hire more/different devs that could do it in a saner way; it's not that it's too expensive or that the ROI is not good. When we are talking about companies worth hundreds of millions to billions of dollars focusing on a core market, that is just completely laughable post hoc bullshit.

It's that they don't give a shit, either about the user experience, or about improving their toolset. They are happy where they are, and see no reason to change.


OneNote is a great example. It has fairly well-regarded native apps for each platform it supports, _and_ a decent web (HTML5) interface.

I think the issue is time and resources. Small teams, like Slack, would like to create a slick experience but they don't have the time.

Companies like Facebook went the other route -- HTML5 on mobile, got their fingers burnt, and went all-in on native. StackOverflow's iOS app appears to have improved a lot too, in v1 it was a thin shell around the web interface.

I hope React Native catches on. I'm not a huge fan of how 'heavy' Electron is myself.


Facebook's HTML5 app was 5 years ago; this example comes up every time, but it's outdated. Technologies like Electron or React Native didn't even exist at the time.

5 years is huge when it comes to web technologies.


Electron is like almost any other cross-platform VM, including Java (which is living under most of the "native" apps on the world's most popular platform), .NET, and so forth. It doesn't share resources between instances but then you also don't get dependency hell (wrong version of Java for application X, etc.)

Subjectively, I'd say that Electron's performance overheads are not bad compared to, say, Smalltalk in the 90s, where one Smalltalk application could bring a fully loaded state of the art workstation to its knees.

And don't get me started about Flash. I had a whole project cancelled after an engineer brought up the CPU meter during a fairly simple animation.


There are always bad engineers. I can write a shitty function in any language.


I've never seen anything performant built on top of Smalltalk, including the Xerox Star's built in apps which were developed by universally acknowledged geniuses. Flash's performance issues were legion.

Chrome is widely considered the best current desktop browser in a very competitive space. If it has problems it's definitely in idle power consumption (which indicates wasted idle CPU) but it is used by a hell of a lot of people who have free alternatives.


Those who forget desktop Java will be doomed to repeat it.

For those of you who aren't old enough to have been around, Sun initially pushed Java as a "write once, run anywhere" GUI language. It quickly became clear that Java applications were ugly and terrible everywhere, even in the primitive days of X11R5, when programs used a mixture of Xt, Motif, Qt, GTK, and raw X11 protocol (xv was awesome). Having a Java program for some task was worse than having no program at all, since it would discourage someone from writing a decent native solution.

Fortunately Sun found ways to make money using Java server-side, and Apple helped kill it client-side by not providing it by default.


I would like to offer http://openstreetmap.org as a counter-example, where the best editing app is made in Java and the most used one is JavaScript. There have been Qt4 editors, but they have never gained traction; I never found performance to be the problem between native and Java.

I think Java's problem was its Unix engineering roots, with too little focus on UI, and perhaps a little too much of the "we don't care about performance" mindset. The latter is the only problem I see with some Electron apps.


The cost argument is indeed a moot point - look at all of the major software companies, and you'll see that actual development is nowhere near their primary expense. They can afford to implement their own programming languages and OSes (and some do), but apparently it's too expensive or too difficult to make native apps for the respective platforms?

This goes for mobile too.


It's moot if you're an established company with deep pockets moving slowly. If you're starting from scratch with limited funds, it is definitely not a moot point. Would Slack have been as wildly successful if they had to implement native apps on all 3 platforms + iOS + Android + etc? I'd have a hard time believing so.


You don't have to rebuild your app for all platforms. Desktop widget toolkits (Qt, GTK+, wxWidgets, ...) have bindings out to most languages. If you can run Python, you can run a desktop app on all platforms from a single codebase. Also, all major platforms have C and C++ runtimes; if you write portable C or C++, your codebase should run on any of the big three.
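The "portable C" claim is easy to illustrate. The sketch below is mine, not the commenter's: a single source file that compiles unchanged with gcc, clang, and MSVC, where the only platform-specific part is one small `#ifdef` block using the compilers' predefined macros.

```c
/* One portable function: the same source builds on Windows, macOS, and
 * Linux. Platform differences are confined to the #ifdef block, which
 * keys off macros every major compiler predefines. */
const char *platform_name(void) {
#if defined(_WIN32)
    return "Windows";
#elif defined(__APPLE__)
    return "macOS";
#else
    return "Linux/other";
#endif
}
```

An application structured this way keeps one codebase and leans on `_WIN32` / `__APPLE__` rather than maintaining parallel implementations.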

It's ridiculous to pretend that you have to write 3 distinct codebases to get a multiplatform application. Are there 3 Firefoxes? 3 Chromiums? 3 VLCs? Back in the day, applications like Banshee, which was written in C#, were all the rage and were distributed as core parts of GNOME.

Why is there this belief that we have to use JavaScript for multiplatform?


This is a tangent, but CMake is incredible for building cross-platform C++. I had a project a year or two ago that was developed as a .so for Linux, using CMake simply because I like it.

When I went to build a Windows .dll, I rolled up my sleeves and expected to have a bad day. `cmake -G "Visual Studio 14 2015 Win64"` just went ahead and made a Visual Studio project from my source tree, and that project built and worked first try. I was using C++14 stuff like std::lock, std::thread, etc., and there wasn't a single #ifdef required in the entire project. Amazing!
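For reference, the CMake description for a project like that can be very small. This is a sketch with hypothetical target and file names, not the commenter's actual project:

```cmake
cmake_minimum_required(VERSION 3.10)
project(mylib CXX)

# The same description generates Makefiles on Linux and a Visual Studio
# solution on Windows (cmake -G "Visual Studio 14 2015 Win64").
add_library(mylib SHARED src/mylib.cpp)            # .so on Linux, .dll on Windows
target_compile_features(mylib PRIVATE cxx_std_14)  # C++14: std::thread, std::lock, etc.
```

The platform-specific output naming and compiler flags come from the generator, not from the project file, which is why no `#ifdef`s or per-platform build scripts are needed.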


If you target the native OS widget set, you must have a separate code base for each platform's native widgets. Things like Swing and Qt don't cut it.

Macintosh users in particular are sensitive to things like the placement of and spacing between UI elements; if anything is "off" from the gold standard set forth by Apple, they will scream and bitch at you because perfect UI consistency is paramount with this crowd.

Web-based apps get something of a pass because they look and feel like Web-based apps (though not always; witness the grousing in this thread about the new Mac Spotify client). But things like Qt and the XUL-based Firefox, which try to look native but miss subtle details, fall into a sort of UI uncanny valley and are roundly rejected by the Mac community.


Overall, that sounds like an argument to exclude Mac support, if anything. But I would suggest that if it's so difficult to get the widgets exact, it's better to use a toolkit that can provide widget portability as well as precise implementations that pass through to native OS windowing.

>If you target the native OS widget set, you must have a separate code base for each platform's native widgets.

I want to clarify the subtle distinction here. If your code's concerns are separated, having to directly provide native widgets on some platforms means a different "codebase" for windowing and widgets only, not for everything. You'd still compile your normal code, and use an #ifdef or equivalent to include the appropriate windowing/widget library.
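A hedged sketch of that separation (all file and function names below are hypothetical, invented for illustration): the core logic is one portable codebase, and only the thin widget layer is selected per platform at compile time.

```c
/* Portable core: identical source on every platform. */
int unread_count(int total_messages, int read_messages) {
    return total_messages - read_messages;
}

/* Thin per-platform UI layer chosen with #ifdef; the headers are
 * hypothetical placeholders for each platform's widget bindings. */
#if defined(_WIN32)
/* #include "ui_win32.h"   -- native Win32 widgets */
#elif defined(__APPLE__)
/* #include "ui_cocoa.h"   -- native Cocoa widgets */
#else
/* #include "ui_gtk.h"     -- GTK widgets */
#endif
```

Only the windowing/widget includes differ; everything above them compiles once for all three platforms.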


No, and that's fine. I don't think anyone is going to begrudge a small team the power of developing a 'desktop' application using cross-platform tools like that. It's great that the barrier to developing an application in that way is lowered.

But that doesn't excuse the abuse of that platform. Spotify used to have a pretty convincing native Mac app, which was spiked in favour of their current abomination, and I've watched performance plummet.

Using Electron as a cost-cutting measure is fine, but it's not good for user experience and it's OK to be honest about that.


Electron is great for user experience. Compare to something like JetBrains' IDEs which are written in Java and are much less native feeling than Atom (and just as resource-hungry).


But JetBrains' IDEs are a fair bit more powerful than Atom. The resource-hungriness is taken up by functionality, not UI.


To be fair though, Java and Swing which JetBrains IDEs use are also not exactly native.


Comparing Atom and IntelliJ is like comparing a BB gun to a 155mm howitzer.


> The underlying issue here is that Electron reduces the barrier to entry for cross-platform development.

It's very far-fetched to call a lower barrier to entry an issue. The easier we can make it for people to get started, the better, isn't it?


Point taken – 'issue' is not the correct term here. Perhaps 'cause' is better.

Lowering the barrier to entry is great, I agree. It's awesome that Electron and web technologies can be used to quickly launch proof-of-concept desktop apps. I find it significantly less awesome that companies with hundreds or thousands of engineers continue to use it after the concept has been proven, however!


But these apps aren't "proof-of-concept" apps - they are fully functioning, working cross-platform applications.

So are you saying that these companies which have implemented these apps should instead say "You know what, we have this app developed, and working on the major platforms - but let's instead devote three new teams, one each for Windows, Mac, and Linux - to re-implement this app natively! I'm sure upper management will agree!"

That's not going to work. That's going to be shot down and laughed at. No company is going to re-implement a working cross-platform application over to three separate native contexts, and maintain all three. That just won't happen. It works already. The users probably like it just fine. What benefit to the company will this get them? Nothing - just more costs for maintenance across three platforms.

If native is wanted by the users, likely what would happen - if it happens at all - will be the company says "Ok - we'll make it native for Windows, maybe even Mac - maybe. Linux? Forget it!"

It's not the developers. Developers would love to make native apps, for all the platforms. But developers are limited by the companies they work for, and by the economic realities that all the platforms can't be supported natively; at most, only one or two can - because at that point, with the number of users on those platforms, the additional costs of maintenance and support are pretty much saturated. Adding additional native platforms doesn't just add onto those costs, it actually (in theory) multiplies them - because a single person might use the application across multiple platforms. So if they have problems on two or three different native platforms, now there are three different support issues (needing more people to support) - instead of a single complaint for a single platform.

I get it, though - it would be great if these apps were native, and worked on every platform, from now and into the future (even on platforms that don't exist yet!). That's just not going to happen; if native is wanted, then only the most widely used platforms will be supported, and even then, one of those will likely be dropped, and it won't be the one from Redmond.

So - what can be done? I dunno. The concept the author brings up ("use React Native") might be the solution. Or some other interface that abstracts a platform's OS and other native functionality out to an API that is the same for all platforms. But now you have issues with security and other access - which you still have with Electron, but it is more contained and constrained, since it has to go through the Chrome engine and the various rules/settings browsers have for sandboxing bad actors. Or - you leave it to the user and their operating system (and slim it all down - maybe that's what React Native does; I don't know, I've never used it).

Or - you (that is, the company) just says "Sure - we'll do a native only implementation - for Windows only." - because that's how it usually goes.


> It's not the developers. Developers would love to make native apps, for all the platforms.

I don't think that's true in the case of Slack. They only have JS "hacker" webdevs, and those are notoriously resistant to any change to their comfort zone. C++? WPF? Swift? Cocoa? Scary stuff.


"The easier we can make it for people to get started, the better, isn't it?"

Would you like to use an application that would not have been written without a low barrier to entry? Powerful and easy tools are not necessarily the same thing.


"the easier we can make it for electricians to wire up my house, the better, isn't it?"

Unfortunately, no. Just as a trained electrician will wire up your house so as to not set it on fire as soon as you turn on a lamp, a trained developer will make apps that use a minimal amount of resources.

It's especially hilarious if someone writing Electron apps calls him/herself an engineer.


I write Electron apps and call myself an engineer, because I am an engineer. Could you please explain why I am being hilarious?


Some engineers take exception to software developers using the term, given that the disciplines are somewhat different.


As a computer programmer in the web field, I find it awfully pretentious when web developers or any other programmers call themselves engineers when they are not legally licensed engineers.


It says "Software Engineering" on my graduation certificate. I wouldn't be able to get licensed even if I wanted to.


Have you been trained as an engineer, in a country where "engineer" is a very specific thing and is held to higher expectations, including the possibility of having your title taken away by your peers if you do not respect your duty to, first and foremost, your users and/or society, like every country that takes the title of engineer seriously?

If yes, what on earth makes you think writing Electron apps is a good idea? If no, back to my point.

To quickly answer JetJaguar below you, yes, I am an utter cunt, but being called a typical hackernews one hurts, considering how I can't stand most HNers.


> Have you been trained as an engineer, in a country where "engineer" is a very specific thing and is held to higher expectations, including the possibility of having your title taken away by your peers if you do not respect your duty to, first and foremost, your users and/or society, like every country that takes the title of engineer seriously?

I have been trained as a Software Engineer in Istanbul. I am very sorry that my country doesn't fulfill your expectations.

> If yes, what in earth makes you think writing Electron apps is a good idea?

Engineering is about making trade-offs. There's enough discussion here about what those are for writing Electron apps.

> If no, back to my point.

What, "no"? What was your point? ...that I'm not an engineer? As I said, I am one.

> yes, I am an utter cunt, but being called a typical hackernews one hurts, considering how I can't stand most HNers

I don't think you are a cunt. I think you are uninformed and have concrete opinions based on limited or self-fulfilling-prophecy-boosting experience.


> Have you been trained as an engineer, in a country where "engineer" is a very specific thing and is held to higher expectations, including the possibility of having your title taken away by your peers if you do not respect your duty to, first and foremost, your users and/or society, like every country that takes the title of engineer seriously?

No country trains software engineers in the way you describe.


I think Germany was the country described in the post. Whether it's true to the fullest extent is hard to say, but having worked and socialized with them for years it seems mostly true. Engineering is the country's actual official religion. And I mean that as the highest compliment.

(No offense to any other country's engineers -- the engineer mindset is the same everywhere, but the mindset of the non-engineer in other countries is the distinction. That is, Germany grants engineers a degree of respect, almost reverence, that I've never seen in the US.)


Our company recently purchased some hardware I/O devices that came with software (both made in Germany). Let's just say, I'd much rather buy their cars than their software.


So they should not call themselves engineers - any more than they call themselves doctors or lawyers. They are programmers or software developers. Or potentially software engineers - but certainly not engineers.


All of Europe, also parts of Canada.

No place in the US does, indeed.


Are there standardized licensing bodies for software developers in the EU and Canada? I'd be really interested in learning more about them if so.

I sympathize with your argument, and I think the field is doing a great job right now of demonstrating some of the upsides of a licensing authority, but actually getting one would be bad IMO.

Most software is not life and death, and licensing authorities, like unions, quickly become gatekeepers that work to prevent competition whilst simultaneously enriching themselves through extortionary means (today, these are mostly indirect because everyone is on the look out for them, but they are nevertheless still there). There are good arguments that the AMA and ABA have both seriously contributed to the astronomical expense of their respective services.


>Are there standardized licensing bodies for software developers in the EU and Canada?

For engineers, absolutely. For software engineers, absolutely, because they're still engineers. Software development? Go wild, anyone can do it.

>licensing authorities, like unions, quickly become gatekeepers that work to prevent competition whilst simultaneously enriching themselves through extortionary means

That seems like a terribly US-centric view that I keep seeing online. Unions in France, as it's the one example I can be certain of, are in no way gatekeepers, and we are a country where they've been immensely powerful when it comes to influencing the state (whether talking about workers' unions or CEO unions). But you can get any job without being in a union; all they're doing is bringing everyone onto an equal footing when it comes to negotiations.

Licensing authorities are purely a societal choice. Either you have a numerus clausus, because the end goal is for everyone who passed the selection to have a guaranteed job with good living conditions, or anyone can pass, and good luck everyone. It works in some cases, doesn't in others.


I've never heard of some governing body in France or England taking away the title of "software engineer" for someone who wrote faulty code. What are these governing bodies?


Maybe I have the wrong impression, but I feel like the author of the original post never wrote anything cross-platform that's sufficiently complex (i.e., more than a "let's fill out a form and press OK" Qt app).

> The trade-off — and there is a trade-off — is that Electron applications are shite in comparison with proper native applications.

There's nothing inherently Electron-specific that makes an app "shite" any more than writing one in Qt would. You can write shit in any language, framework or platform. The day someone invents a system that protects us from our own stupidity will be the day humans become obsolete.

> They fail to integrate with the host platform,

Untrue. You can—if you need to—integrate with the host platform by writing a native node module, though the app becomes less cross-platform at that point.
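To sketch the idea: an Electron app can branch on `process.platform` and call platform-specific integration points (or a native addon loaded the same way any module is). The function name below is illustrative, not a real Slack/Electron API; `app.dock.setBadge` and `BrowserWindow.setOverlayIcon` are the real Electron calls the comments refer to.

```javascript
// Hedged sketch: pick a platform-appropriate "unread count" strategy.
// A real native addon (built with node-gyp / N-API) would be loaded
// with require() and dispatched to in exactly the same way.
function platformBadgeStrategy(platform) {
  switch (platform) {
    case 'darwin':
      return 'dock-badge';     // macOS: app.dock.setBadge('3')
    case 'win32':
      return 'overlay-icon';   // Windows: win.setOverlayIcon(icon, desc)
    default:
      return 'unity-launcher'; // Linux: desktop-environment specific
  }
}

// In an Electron main process you would call:
//   const strategy = platformBadgeStrategy(process.platform);
```

The trade-off the parent mentions is visible even in this toy: every platform you integrate with adds a branch (and possibly a native build step) you now have to maintain.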

> they are slow, they hog memory and drink power.

Yes, web apps use a bit more memory—sometimes considerably more, depending on what you do—and a bit more CPU (and hence battery), but they are not perceivably slower, unless you're doing something stupid (in which case the equivalent Qt app would probably suffer in the same way).

> But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support, and I find it difficult to believe that the costs would be prohibitive.

Maybe, maybe not. I think using a cross-platform solution wouldn't be ideal for them (they'd need to either rely on something like Qt, write their own rendering engine, or use something like SFML; all of these are overkill compared to Electron), and the alternative is writing it in different languages for each platform, which would probably inevitably cause the projects to get out of sync in one way or the other over time.


In the article, the author says he implemented a user-space USB driver for Chrome on the Mac (and guess what: Chrome is cross-platform).


Implementing a feature for macOS in Chromium is considered a cross-platform project these days?


Isn't Qt cross-platform?!

In my opinion this is a hiring problem: they assume that if you know JavaScript you can be deployed on more things, not only on the desktop writing an application.


Can't say I agree. Native apps are often garbage. Discord is another Electron app that I think is actually quite a bit better in most dimensions (attractiveness, functionality, speed) than the native apps it replaces (Mumble, Teamspeak, Skype, etc.).


But is that due to technology choices, or due to more attention being paid to UX? (And I do believe web apps give more flexibility in that regard.)

I get the feeling that UX and UI designers for web- and mobile apps are just a different breed than their native desktop equivalents. It might just be my Windows bias though, iirc Windows didn't have a strong / great UX guideline until their current one came around - and I haven't used many modern windows apps yet, the ones I do use are stuck in the 90's with their button bars and such.


That is due to the technologies available on the web being more suitable for GUI development than older native tools or desktop frameworks. We're in a similar boat: we made WPF apps. Now we're bringing them to the web and Electron. The things you can do now in React—heck, I wouldn't even know where to begin in WPF or Qt. If you look at modern Electron apps like Atom, VSCode, Discord and so on, the things they do and how flexible they are, this would mean so much hardship and pain in older solutions, to the point where it's probably not viable any longer.


Other than declarative vs imperative, please substantiate your post with examples of what isn't possible in WPF or Cocoa.

I have a feeling it's more of an experience / competency issue rather than lacking APIs.


It's the ease of declaring components and structuring complex applications with them. There is nothing in WPF that is comparable to it, unfortunately—except Redux.NET maybe, but XAML is a lost cause against JSX. You'd have to see it, or work with it. The bottom line is: drastically less code, clean structures, things become simple to do.

Debugging is also something different: https://camo.githubusercontent.com/a0d66cf145fe35cbe5fb34149...

Like hot module reload, where you write your app live. You edit one component, everything else stays in place and maintains state. Or time travel with Redux, where each piece of state is inspectable. You roll back or slide through the app's history and see it literally build and deconstruct itself. That's possible because UI is just a function of state. Same state, same UI.
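The mechanism behind that time travel is small enough to sketch. This is not Redux's actual source, just the core idea under its contract: a pure reducer plus a log of actions means any past state can be rebuilt by replaying a prefix of the log.

```javascript
// Pure reducer: (state, action) -> new state, no side effects.
function reducer(state, action) {
  switch (action.type) {
    case 'ADD_TODO':
      return { todos: state.todos.concat(action.text) };
    case 'CLEAR':
      return { todos: [] };
    default:
      return state;
  }
}

const initial = { todos: [] };

// The action log is the app's entire history.
const log = [
  { type: 'ADD_TODO', text: 'write app' },
  { type: 'ADD_TODO', text: 'ship app' },
  { type: 'CLEAR' },
];

// "Time travel" to step n: replay the first n actions from scratch.
const stateAt = (n) => log.slice(0, n).reduce(reducer, initial);
```

Because the reducer is pure, `stateAt(2)` always yields `['write app', 'ship app']` and `stateAt(3)` an empty list—same actions, same state, hence the reproducible, scrubbable history the parent describes.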


It's never going to be possible to fully separate the factors out. With infinite time and effort by the exact same set of devs, I'm willing to grant that electron apps will be worse than native apps. But that hypothetical circumstance (same set of devs, infinite time and effort) does not exist in reality.


Facebook still has not made any desktop apps available, leading me to rely on a hodgepodge of 3rd party, electron-esque shells to provide a desktop experience.

This is despite their mobile apps being the most-used apps, by minutes, of any company. Even then, they tried to go with an Electron-esque approach and only backed off when the performance and UX trade-offs became unacceptable.

If one of the most profitable companies in the world can't see the business case for supporting a first party desktop ecosystem, it's very hard to believe many other companies have the justification.

This isn't an engineering problem, it's a business problem.


They don't. Java apps have been cross-platform since their inception, .NET is extremely close to being just as versatile, X11 is available on Mac and PC, and Python will run on literally anything. There's a pile of cross-platform options out there for you. What Electron does is reduce the barrier to designing UIs, since you can do it all in HTML and JavaScript, and it's just as easy as hiring the guys who did your website to design it. Because the one thing all those platforms have in common? Aside from .NET (which isn't mature), they all have absolutely hideous windowing systems. Just ugly.


The main challenge, I think, is that all of those apps are cross-platform but also want to keep the same look and feel on all platforms—they're using their own UX patterns inside the container, which generally don't match up with their host OSes.

There are some applications that do try—1Password and GitHub for Mac/Windows, to name a few that come to mind—but they seem to get less love than the cross-platform web apps. It feels like they get a certain amount of dev time before going into minimal-maintenance mode.


This reminds me of what's happened with Unity (the game engine). There's a completely false perception that Unity is eating other engines' lunch. It isn't. All that's happening is that game development has been democratized on an absolutely massive scale. So more games are being made. Most of them are bad. Most of the best games are made with either Unreal or, much more commonly, proprietary C++/asm engines.


Not just cross-platform, I think it significantly reduces the barrier to entry (and total cost of your app) even for single platform.


Clarified my statement above re "nobody"


Bingo--half these diatribes against Electron just seem to be making the case for premature optimization.


Nonsense. The optimization they are referring to is not premature in any sense of the word. If anything, it's belated and overdue.

In any case, the originator of the phrase, Knuth, specifically said that it related to "small efficiencies".

Nobody would call the inefficiencies that Electron needs to address "small".

There's a good discussion here: http://wiki.c2.com/?PrematureOptimization


Spotify, Slack and Atom are no longer hip startups though - they're established applications, and at least two of them have a very lucrative monthly subscription model. They can afford to optimize.


But the user experience is already great, so what's the gain? Electron and the web clearly make a first class app platform on desktop.

EDIT: thanks for the downvotes. I'd love to hear your thoughts on how Electron is keeping Slack and Spotify from building a massive business, and how their desktop users find the experience so bad that they don't use the tools obsessively. Clearly there are things to improve with Electron (energy usage), but "terrible experience" is not how I'd describe Spotify and Slack on desktop, and their businesses clearly reflect that.

Moving to a native stack has major tradeoffs, would it 10x their business at least? I strongly believe that answer is "no."


It's a terrible experience to discover my battery life is cut in half because of a cursor animation in VS Code, or to have my laptop heat up and the fans engage because of an animated GIF in Slack.

But it's worse than that: most users don't even know why their battery life is awful. So they blame Apple, or Microsoft, or Dell, or whoever. And they just don't use their computers as much, because it's all a bit gross and slow. And that's bad news for our entire industry.


Hasn't it already been pointed out a thousand times now that this was just a bug in Chrome that was fixed right away?


Sorry, I'm confused. Which part of this bad performance was a bug in Chrome? The CPU peaking at 100% usage for hours at a time? The shockingly bad performance rendering GIFs? The super-high idle CPU usage? The large download size and RAM usage?

I used the desktop Slack client all of last year, and these kinds of problems were present the whole time. This isn't some "oh, yeah, there was some particularly egregious bug we shipped accidentally in October" thing. Whatever is making the Slack client a bloated ball of crap is much worse than a simple, quickly fixed Chrome bug. It's endemic.


A bug is a lot easier to discover and fix when it affects so many people. How many "native" apps have similar battery-hogging bugs that are never fixed because nobody finds out about them?


As a paying customer, both Spotify and Slack are relatively shitty experiences.

I pay for both despite their poor quality software and bad UI/UX, not because of it.


Anecdotally, I had to cancel Spotify due to the shitty experience. One of their app updates on my phone left it completely draining my battery in a few hours even when not using the app, with no way to keep it from running in the background other than uninstalling it (OK, or possibly rooting my phone to install a more recent Android with better background-process control).


I pay for Spotify, but I don't see its poor UI/UX problems. The sections make sense, the color contrast is good, browsing through an artist's albums and songs makes sense, and the playback functions bind perfectly to the media buttons on my physical keyboard. What else could I want?


The user experience of Slack as a desktop app is not great. The UI is consistently one of the highest energy users on my machine, despite being one of the worst performing (have you ever tried scrolling up into a conversation's history?)


Scrolling up involves network calls, so clearly it's going to be slower. Energy usage is definitely an area of improvement.


I can easily scroll through 10k lines in mIRC, no problem.


What advantages does the user receive for running the Slack Electron app, as opposed to having the site open in a browser tab?


None that I can think of, really. This is one of the cases where a desktop version of a web app exists and I'm not aware of why.

Spotify makes sense as it can reliably use your file system for storage and thus download songs so you're making fewer network requests. Perhaps Slack could keep a short log to prevent "scrolling up"-related network requests?
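The "short log" idea the parent floats is easy to sketch. Nothing below is Slack's actual code or API—`MessageCache` is a hypothetical name—it just shows a bounded local buffer so that scrolling back within the window needs no network round trip, and only a cache miss falls through to a fetch.

```javascript
// Hedged sketch: bounded local log of recent messages.
class MessageCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.messages = []; // oldest first
  }

  // Append a new message, evicting the oldest past capacity.
  push(msg) {
    this.messages.push(msg);
    if (this.messages.length > this.capacity) this.messages.shift();
  }

  // Return the last `count` messages if fully cached; otherwise null,
  // signalling the caller to fall back to a network request.
  recent(count) {
    return count <= this.messages.length
      ? this.messages.slice(-count)
      : null;
  }
}
```

The design question is just how big `capacity` should be: large enough to cover typical scroll-back, small enough not to recreate the memory-hogging complaint from upthread.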


Choosing a platform isn't really premature optimization -- if you base your project on electron, and electron is too slow, your alternative is to... rewrite your project?

