Electron is flash for the desktop (2016) (josephg.com)
1519 points by charlieirish 45 days ago | 996 comments



Here's the thing. You know what the alternative to all of these Electron apps coming out is? If your answer is "a native Cocoa/WPF app", you are on another planet. The real answer is "it wouldn't exist at all".

Nobody in the last 5-10 years cared about writing desktop apps before Electron came along: there's basically zero money in it, and it's massively expensive, both in actual dev time per feature (easily 10x the cost) and in finding specialist developers who know these dated technologies. And as for Qt: Qt has existed for over two decades. If its massive "Beatles walking off the plane" moment hasn't happened by now, sorry, it's not gonna.

But now? People are making all kinds of great new apps, and more often than not, they come out on all three platforms. People are excited about the desktop again. Electron is so good it's single-handedly revitalizing a platform that two of the largest tech companies in the world are behind, yet couldn't revitalize themselves.

That is a Big Deal.


That's nonsense. Desktop apps have been frequently released since forever.

The underlying issue here is that Electron reduces the barrier to entry for cross-platform development. That is, it's cheaper to build a single cross-platform application in Electron than it is to build two or three native applications, and you can re-use your existing web experience. I can completely understand why companies might choose this approach.

The trade-off — and there is a trade-off — is that Electron applications are shite in comparison with proper native applications. They fail to integrate with the host platform, they are slow, they hog memory and drink power. It's fine to make those trade-offs – in some ways, it's better that you can get an application at all than the alternative of 'no support for your platform'. But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support, and I find it difficult to believe that the costs would be prohibitive.


What you call "trade-offs" appears to me to be developers externalizing their costs onto users. There are costs to developing desktop applications and developers don't want to pay them, so they make users pay for them in wasted hardware dollars, bandwidth, RAM, battery life, and poor integration.


Well, to the extent that these costs degrade the user experience, I'm not sure they really are externalizing them. Presumably they pay this cost in the form of decreased user satisfaction. It's just that this is a cost they are willing to pay.

There may be an interesting economic lesson here: it really is not that easy to externalize costs. It surely can be done (air pollution), but it requires special circumstances for those costs not to be internalized in a different form. (These special circumstances might include information asymmetries, harm to a public good enjoyed by people other than a firm's own customers, etc.--themselves classic risk factors for market failure.)

By the same token, there probably are some truly externalized costs in this example, but I would expect them to be very minor and indirect. For example, most people probably do not pay the 'true' cost of their electricity. So to the extent electron wastes power, some of the cost will be internalized in the form of user dissatisfaction. But some will also be externalized either because the user doesn't know about the extra power consumption, or because the user herself doesn't fully internalize the costs of her power consumption and therefore doesn't care as much as she might if all costs were properly internalized.
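To put a rough number on the externalized electricity cost (every figure below is an illustrative assumption, not a measurement): the per-user cost is small, which is exactly why it is so easy not to notice, while the aggregate is not.

```python
# Back-of-envelope: yearly cost of an app that wastes extra power.
# All inputs are illustrative assumptions, not measurements.
extra_watts = 10          # assumed extra draw of a bloated app
hours_per_day = 8
days_per_year = 365
price_per_kwh = 0.13      # assumed retail electricity price, $/kWh

kwh_per_year = extra_watts * hours_per_day * days_per_year / 1000
cost_per_user = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.1f} kWh/yr, ${cost_per_user:.2f}/yr per user")

# Spread across a large user base, the waste stops being negligible:
users = 10_000_000
print(f"fleet-wide: {kwh_per_year * users / 1e6:.0f} GWh/yr")
```

A few dollars a year per user, but hundreds of gigawatt-hours across a Slack-sized install base.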


> Well, to the extent that these costs degrade the user experience, I'm not sure they really are externalizing them. Presumably they pay this cost in the form of decreased user satisfaction. It's just that this is a cost they are willing to pay.

Precisely. I don't use Slack in part because its desktop application is irrationally bloated for its use-case. Although I have other reasons as well, they are enduring a cost--a small one to be sure, but presumably non-trivial in aggregate--in people refusing to use their service in part because their desktop application is poor.

But I do agree with the general sentiment underlying the frustration about "externalities" here. As someone who advocates for high-performance and efficient web applications, I have toyed with the question of whether developers should confront the morality of wasting energy by having selected poor/low-performance platforms. Put in a somewhat comedic tone: low-performance software contributes to climate change.


By walking just one day a year rather than taking my car, I can offset the electricity consumed by my laptop.

You do have a point though. You make a difference where you can.
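A quick sanity check of the walking-vs-driving claim, with every input an assumed round number: under these assumptions, one skipped driving day covers a large fraction of a year of laptop electricity, so the claim is at least the right order of magnitude.

```python
# Sanity check; every number here is an assumption, not a measurement.
laptop_watts = 25                            # assumed average draw
laptop_kwh = laptop_watts * 8 * 250 / 1000   # 8 h/day, 250 workdays

commute_km = 40                              # assumed round trip
litres = commute_km * 8 / 100                # assumed 8 L/100 km
car_kwh = litres * 9.5                       # ~9.5 kWh per litre of petrol

print(f"laptop: {laptop_kwh:.0f} kWh/yr, one driving day: {car_kwh:.0f} kWh")
```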


The problem is that despite how good it feels, individuals simply optimizing their individual activity does not scale to the societal scope. Post-agrarian humans literally behave and think via shared infrastructure. The only way individuals can make lasting improvements to society is to improve the infrastructure.


Individuals changing their behaviour can lead to societal change though.


Yes, but you are talking about contingencies. Optimism over dice rolls cannot excuse wasteful systems.


I'm talking about how nearly all progressive progress progresses. It swells up from individuals.


Does it? Or maybe those individual pop up and do their progressive thing only when existing infrastructure and technological landscape lets them do that?


Why they haven't made Slack a web-based chat app like Discord or Wire or even Skype is beyond me. I don't even use their native apps; I just open a tab in my browser and run the corresponding web app.


They already do, just not on mobile.


I don't get it. If slack.com is a web-based chat, why doesn't the OP open Slack on his existing instance of Chrome?


"Along with the features available in your browser, the app loads your conversations more quickly, gives you more control over notifications, and has better support for multiple teams."[1]

Also, the app gets its own entry in the task switcher.

[1] https://get.slack.help/hc/en-us/articles/207677868-Slack-for...


> Put in a somewhat comedic tone: low-performance software contributes to climate change.

Number of developers who would donate the increased efficiency to "idle": $denominator.

Number of developers who would fill the increased efficiency with more triangles: $numerator.

I'd argue $numerator is sufficiently large that the premise of your joke doesn't hold.


Couldn't they just pay to offset the performance cost of their application? I doubt the cost would be much compared to the development difference.


The real trick is whether or not users care about:

A) This works on all of the platforms that we use

vs

B) Performance could be better on my computer

It's been a LOOOOOONG time since I worked on a laptop where I experienced noticeable performance problems...which is almost entirely because SSDs make dipping into swap so much less noticeable unless you're really working out your machine.

For most users, just knowing it will work on their machine is a bigger influence in using the product...and therefore a greater influence on business...than the performance of that system. It's especially true with a chat system where the most important feature is that everybody on the team can get access.


> It's been a LOOOOOONG time since I worked on a laptop where I experienced noticeable performance problems...

Really???

Because personally, I keep having performance problems on every laptop I own. Don't even try running in battery-saving mode, seriously.

I've stopped counting the 5+ year old laptops that have to be upgraded because they can't play a YouTube full-HD video in good conditions.

----

Fun anecdote: I had to trial an enterprisey SaaS solution not long ago. A coworker gave me the name and I opened the site on my laptop (on the move outside of work, just taking a quick look).

Their site froze my firefox for 30 seconds because these idiots put a high quality full screen video of a surfer in the background of the main page. Looks cool, doesn't it? https://www.wavefront.com/

Couldn't see the site. Had to be at work to read it, on my top end workstation, the video played smoothly there. Needless to say, didn't take the product seriously.


> I've stopped counting the 5+ year old laptops that have to be upgraded,

5+ years ago is that LOOOOOONG time that OP was talking about. It's also unfair to compare the technical capabilities of old hardware, for many reasons. I think the point was that new hardware, _while it's new_, is becoming more and more capable. Any new laptop today, even a budget one, can handle YouTube videos in HD. The problem is that HD today won't be the same HD that exists in 5 years (i.e., 4K), and it's to be expected that a budget laptop today will struggle with the 8K technology that comes out 5 years from now. This is an old problem (pun intended) and should not be surprising.
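To put rough numbers on those resolution jumps (30 fps assumed, codec efficiency ignored): each step roughly quadruples the pixel rate a decoder has to sustain, which is why "HD-capable" keeps being a moving target.

```python
# Pixel throughput a decoder must sustain at 30 fps, per resolution.
resolutions = {"720p": (1280, 720), "1080p": (1920, 1080),
               "4K": (3840, 2160), "8K": (7680, 4320)}
fps = 30
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h * fps / 1e6:.0f} Mpx/s")
```

4K is exactly four times the pixel count of 1080p, and 8K is four times 4K again; without dedicated decode hardware, that work lands on the CPU.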

> Because personally, I keep having performance problems on all laptops I have.

Selection bias. Programmers who compile code, run VMs or containers, and process tons of data, are not the average consumer laptop use case and have much stricter requirements. Many people are sitting in Facebook, YouTube, Gmail, or Google Docs for most of their day-- and likely inside of Chrome.

Where are the "Chrome is Flash for the desktop" posts?

The idea that Electron is any different of a user experience for the vast majority of users seems skewed to developer usage, to me.


I don't know, 5+ years isn't that old anymore for a computer. Like, 5 years ago I was running... a core i7 with 4GB of ram. And now I'm running... a core i7 with 16GB of ram. The only things in computers that have really gotten significantly faster are SSDs and GPUs


I meant 1080p when I said full HD.

It takes a surprising amount of power to decode. The cheap CPUs in netbooks have been struggling with it for a decade, especially in battery-saving mode.

Lately they've been getting hardware acceleration just for that: special CPU instructions and drivers just to decode it decently.


For Youtube in particular, they're sending VP9+opus where the browser supports it, without considering hardware acceleration. The rather anaemic Atom chips might have H264 decoding on-chip, but only Kabylake has VP9.


'Where are the "Chrome is Flash for the desktop" posts?'

Ehhh, you're in one, I think?


Different people have vastly different use cases. I haven't experienced a performance issue with desktop Slack or any other (non-game) application for many years. Sure, if I look at how much RAM Slack is using I'd probably conclude that it's more than it needs to be, but I haven't had an issue with RAM usage for, gee, probably ten years.

As for battery, again, my laptop battery has been little more than a UPS for at least five years.


It's mostly CPU performance issues I experience. Rendering any video, gif, or website takes sizeable CPU power (for a cheap CPU in power-saving mode).

Never had issues with RAM. I can remember a few friends who bought netbooks with 2GB of memory some years ago; they quickly realized that they simply couldn't run their development environment in that. (I'm talking swap death, where a click that should take 1 second takes a whole minute to register.)


Right now I'm using a backup laptop with Intel Celeron. It runs most apps fine. Web sites like Hacker News come up easily and snappily. The CPU fan kicks in when I watch a high-quality movie. It also kicks in on small, web apps. Kicks in big time on Youtube and Facebook. Everything on those is sluggish compared to my native apps. Even if non-technical, I'd know stuff was about to slow down when I go into my browser.

Just bad tools leading to waste that users notice. Best to avoid them if possible. Not always possible...


> I've stopped counting the 5+ year old laptops that have to be upgraded because they can't play a YouTube full-HD video in good conditions.

Are they all netbooks? Because those were crap the day they came out and even more crap 5 years later. You can find terrible discount desktop machines that can't handle anything just as easily as you can find terrible laptops.

My laptop is 5 years old and the only time I feel a lack of performance is when the Swift compiler fires up. Your example link came up right away (OK, it burns 30% of a core, but I've got 4 of those).


They are not crap by any standard, they are simply low spec.

A core i3/i5 starts at $150; most devices don't have these expensive CPUs.

Even a $1,000 MacBook from 5 years ago would have one of the first i3/i5 chips. It would struggle to play 1080p video unless plugged into the wall with the fan spinning seriously.


Just an anecdote, but my 2011 Lenovo x220 with i5 running macOS Sierra plays this site in Chrome with around 30% CPU use, everything is snappy still


> It's been a LOOOOOONG time since I worked on a laptop where I experienced noticeable performance problems.

This annoys me a lot, for one of two reasons. Either:

1. Some developers have no idea what performance means -- it's crazy that I have time to notice a spinner when a glorified IRC client starts up or switches tabs, let alone have time to watch it for tens of seconds.

or:

2. If these developers are claiming they have no performance problems, clearly the laptops that can handle modern applications are being hoarded, and I have no way to obtain one of these magical machines. All I can get are recent i7 processors.


On point 1: There's also the effect where your app does the same thing a native app would do in an equally reasonable time, but uses twice as much power to do it.


3. Battery life is excluded from "performance".


I agree with both of these.

With the power that even average machines have now, it beggars belief that we ever see a spinner at all for normal desktop-related stuff.


>It's especially true with a chat system where the most important feature is that everybody on the team can get access.

That's a whole other story... All that big open source movement for decades, and in 2017 we are still depending on proprietary systems to provide a chat(!) for a company.


Yep. And when the government doesn't properly tax those harm-to-public-good things like air pollution to balance the incentives and make it cheaper to do "the right thing" and pollute less, companies (which are made of real people and are not just unthinking machines) will do whatever they can to maximize profits. This is why it's so important to e.g. increase the price of water in the Central Valley of CA to reflect its true cost, instead of letting tons of farmers grow water-intensive crops in the open air, wasting tons of water just because it's cheap (then they complain "is GROWING FOOD a waste of water?!?", completely un-ironically). Many businesses subsidize their operations using government funding, yet most of the rich white men running them are staunch opponents of public assistance programs, despite owing most of their wealth to similar mechanisms, typically in conjunction with massive tax avoidance.


Pretty funny that the only reply here is 'found the Marxist' (maybe it was supposed to be a joke?) when, in reality, all you're suggesting is that the government set the conditions for the proper functioning of the market, which is an idea straight out of Hayek.


I don't understand. The cost is mostly externalized into, quite literally, massively distributed increased electricity consumption. That an individual's power bill may be subsidized only emphasizes that problem.


It sounds like we agree. To the extent we're just talking about electricity costs, it's true that easily written but inefficient code may indeed externalize those costs, to the extent that users are unaware of the added power consumption, or that the user herself does not pay the full cost of that electricity.

But these electricity costs struck me as only a small part of the broader point I was responding to, which is why I framed the point the way I did.


I contend that squandering the work and infrastructure involved in converting natural resources into electricity is the main problem with modern consumer software development. Any user's (lack of) perception of waste is irrelevant. The scope of the problem is not any individual literally paying a couple dollars more on their electric bill, but rather it's that gigawatt hours are literally being burned away because software developers who know their code will be running on machines the world over can't be assed to spend a little more time learning to practice their craft correctly.

There is no broader point than converting natural resources into societal infrastructure. Be a responsible member of society. Don't obstinately ship wasteful code. I understand if you legitimately don't know any better, but if you're part of the community that's constantly writing blog posts about computers being so fast that it's OK to burn the end users' CPU and storage just so you don't have to spend a couple more minutes thinking about what you're doing, you're adding harm to the world.


Again. I think we agree. You're just making a different point than the one I was making. I was making a technical but hopefully still constructive point about what is an externality and what is not.

You seem more focused on making a case against shipping inefficient code. And your point is a good one. It's just a little difficult to suss that out, since you're framing it as standing in opposition to my related, but very different, observation about externalities.


The vast majority of users (like >99%) don't have a clue why their computer isn't performing up to par. They literally have no idea how any of it works. They might just assume it's slow because they haven't deleted spam emails in a while (even while they use Gmail), or because they didn't pay extra to get that 3.2GHz machine instead of the 3GHz one. They certainly don't have any idea how to open up top or Task Manager and check what resources a particular app is using, so until that happens and users punish developers for this kind of stuff, expect it to continue.
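For what it's worth, the inspection itself isn't hard, it's just unknown to most users. A rough sketch of what "check what resources an app is using" means, reading straight from /proc (Linux only; `top_memory_hogs` is a hypothetical helper name, not a real API):

```python
import os

def top_memory_hogs(n=5):
    """Return the n largest resident-memory processes, read from /proc (Linux only)."""
    procs = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            name, rss_kb = None, 0
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("Name:"):
                        name = line.split(maxsplit=1)[1].strip()
                    elif line.startswith("VmRSS:"):
                        rss_kb = int(line.split()[1])   # value is in kB
            if name is not None:
                procs.append((name, rss_kb // 1024))    # convert to MiB
        except (OSError, IndexError, ValueError):
            continue  # process exited mid-read or fields were unreadable
    return sorted(procs, key=lambda p: p[1], reverse=True)[:n]

for name, mib in top_memory_hogs():
    print(f"{name}: {mib} MiB")
```

This is exactly what top, Activity Monitor, and Task Manager present with a nicer UI; the information was always there.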


> The vast majority of users (like >99%) don't have a clue why their computer isn't performing up to par. They literally have no idea how any of it works.

Same thing for Web developers.


Yes, you'll often hear things like "I think I have a virus" or "I think I need a new computer". Their computer is fine; they're just running slow apps.


They need a new computer because they don't know how to make their old one work properly.

Getting a new computer will fix the problem though. :-)


I don't disagree, but I'm hopeful that Electron will get better. When you choose an open source platform with a diverse group of major companies using it, part of what you are betting on is that the platform will a) survive for a reasonable amount of time and b) gradually improve.

GitHub, Slack, Spotify, Microsoft, etc., are all using Electron. As their developers gain experience with the platform and as they experience problems with resource usage, I would expect to see the platform improve. Maybe I'm wrong.


On top of that, webassembly could see some improvements across the board.


I agree. I think it also has the potential to help motivate developers to write native UI rather than use electron.

Currently, if your primary target platform is the web but other platforms are still important to your market, going Electron might make sense, because you really don't want to rewrite your business logic natively for each platform.

Web assembly allows us to use the same native libraries for all our business logic and data models, which, besides being more performant, means we only need to write the UI natively on top of those libraries. We've actually taken (very) old desktop C++ code, compiled it to asm.js, run it in our web app, and rendered its outputs with WebGL onto a canvas, with surprisingly successful results. This makes the prospect of WASM becoming standard across the board very exciting.

Now let's hope we don't decide to replace our native desktop app, which uses this old C++ library, with an Electron wrapper of our web app running the asm.js-compiled library :).


The future, where we write in native languages that are compiled to JavaScript, run in a web browser VM, and power desktop applications. Jesus wept.



It's not that they are native languages. They are languages without that overhead: GC, for example, is no longer necessary.


I've switched from Sublime Text 3 (native) to VSCode (Electron) and I couldn't be happier. If I didn't know it's electron based I wouldn't have been able to tell as a user. YMMV so don't generalise.


It halves my battery life compared to Emacs or BBEdit.


If your battery is more important, just stick to Emacs.

What machine are you using? I have a 2-year-old ThinkPad; it still does 10h+ at full brightness with VSCode, compiling, etc. And it's the screen brightness that consumes the most battery.

It used to do 20h, but the main battery is external and can be replaced, so that's fine. I would really recommend ThinkPads to everyone instead of going for MacBooks, for example.


VSCode doesn't compete with emacs and vim. VSCode competes with Sublime Text and Atom. Maybe even with WebStorm on the more full-featured end and Notepad++ on the leaner end.

I'm not even sure whether you're trolling. BBEdit provides a tiny subset of the features VSCode offers. VSCode is not a text editor, it's halfway between a text editor and a traditional IDE.


When did you switch? Atom was damned near unusable around 1.0, and VSCode was only a little better.

The difference to now is night and day.


Atom and VSCode are in different leagues with respect to performance, and the difference is growing with time. If you think the reason that Atom is slow is because of Electron, VSCode is the counterexample.


If VSCode isn't slow because of electron, what's the reason it's slow?


VSCode isn't slow. Atom is slow.

Since both are written with electron the difference must obviously be the actual implementation, not the platform itself.

Keep in mind VSCode was written by Microsoft who have decades of experience writing IDEs and text editors, whereas Atom was written by GitHub who are mostly working on the GitHub product.


> VSCode isn't slow. Atom is slow.

Could have fooled me.


VSCode runs like a dream on my hexacore/8GB/SSD desktop, but brings my Atom/2GB/32GB-eMMC-based Asus EeeBook X205TA to its knees. Native programs for low-power systems seem like sound practice to me.


I first tried switching to Atom around 1.0 and it was a horrible mess.

I just switched to VSCode last month and it's been as nice to use as ST3 (actually even nicer because ST3 didn't have any code intelligence).

I've also used IntelliJ (or rather WebStorm) several times over the years and it always felt too sluggish and obnoxious.

For the record: I'm on Linux and have 32 gigs of RAM. So maybe I have lower standards for memory use and performance.


Everything in the world of web-stack development has turned externalizing costs onto users as a core first principle.


How about PWAs, Service Workers, WebAssembly? Thousands of developers are working hard on improving the UX and reducing battery usage, wasted CPU cycles, network traffic, and latency.


All of this because they're externalizing costs onto users through SPAs and other go-to practices that force all the work onto users' devices.


it's been that way since forever. if you're not employing expensive top-ranked brilliant engineers writing aggressively optimized, completely bespoke c applications tuned to every cpu model's feature set and performance characteristics... you just might be externalizing.

insofar as it benefits the budget, you should probably externalize as much as possible. consumers will provide the fitness function through deciding what products they prefer.


We're not even discussing aggressively optimized C vs web stack. We're talking about a massive segment of today's software developers not even bothering to learn competent C (or Obj-C, Swift, Java, whatever) to build a native application. Hell, it's about the very practice of calling Electron apps native to begin with. The same thing continues to hold across web stack everywhere—reach only for those things that don't require you to learn a new language/paradigm, letting you keep churning out the same thing you build in a browser and call it native for platform X, and let it be on the user to pony up the resources to use it.


consumers will provide the fitness function through deciding what products they prefer.

What about when your users don't have a choice?


hm... performance metrics, possibly? it may be hard to measure.


Well users/clients/bosses want everything, now, and for free so...


Free things give continuous hassle. People in business don't want hassle. They pass on free unless they're very inexperienced.


Yes. You just described modern computing.


Computing forever. We use C to avoid writing assembly. Use Java to avoid writing C. Use Python to avoid writing Java.

The world is asking for more and more software, and the resources to provide it aren't keeping pace, so people are taking shortcuts.

Want something better? Someone has to pay for it.


C is generally faster than hand-written assembly, so it doesn't make sense to write software in assembler.

For macOS, desktop applications are written in Objective-C, which is C with fast message passing and doesn't trade much for speed. Swift is the modern alternative, and it doesn't trade anything for speed either.

For Linux, applications are traditionally written in C with Gtk or in C++ with Qt. Both options are very performant.

For Windows, the main language for a long time was C++, and it remains supported. There's a movement to .NET, so Windows is an outlier here. But .NET is generally a very performant platform: it makes some tradeoffs for safety, but it has enough features to stay fast, and its implementation is specifically tuned for desktop applications.

The only terrible platform with a slow language is Android, and it's well known for its lag.

There's very little desktop software written in Java or Python, and usually those are specialized applications where users don't really care about the experience so much as the functionality.

So the JavaScript invasion here is unexpected and unwanted. I won't use any JavaScript desktop application if I have a choice. I don't like the technology. JavaScript and V8 were made for the browser, with advanced sandboxing capabilities. That's fine. But on the desktop that's just not needed, and there are no other advantages. The UI is terrible and doesn't conform to any standards. Performance is not good. Memory consumption is abysmal. Energy consumption is abysmal as well.

If I'm about to buy an application for macOS, I always carefully inspect its bundle and try to determine which technologies were used. Unless it's pure Objective-C/Swift, I usually won't buy it. I hope more users will do the same.


>C is generally faster than hand-written assembly, so it doesn't make sense to write software in assembler.

That's true now. It wasn't true then. The point still stands.


And it's not even true now for certain workloads; each tool for its job.


Yeah, but Python still doesn't seem to have a great cross-platform desktop GUI, as far as I can find. I'd prefer to code entirely in Python if I could, but I haven't liked a single GUI library I've tried for it.

Although to be fair, I haven't really tried PyQt yet, but I don't like the idea of having to buy a commercial license for it.


PyQt is alright. Kivy as well.

Or anything in any other language.

But none of them come close to the benefit of being able to bring the ton of Web UI experience to the desktop.

Eventually every GUI toolkit ends up with a custom MVC framework, a client/server architecture, some kind of db for persistence, its own implementation of asynchronous event and communication models, and a declarative layer to create the UI without code. The most advanced ones make this layer separate structure from layout.

Well guess what, this is what the Web has natively been doing forever.
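The "every toolkit reinvents MVC plus an event layer" claim can be illustrated with a minimal observer-pattern sketch (generic toy code, not any particular toolkit's API):

```python
class Model:
    """Holds state and notifies observers on change (the 'M' in MVC)."""
    def __init__(self):
        self._value, self._observers = None, []

    def subscribe(self, callback):
        self._observers.append(callback)

    def set(self, value):
        self._value = value
        for cb in self._observers:   # the event/notification layer
            cb(value)

class View:
    """Renders state declaratively; here 'rendering' is just a string."""
    def __init__(self):
        self.rendered = ""
    def render(self, value):
        self.rendered = f"<label>{value}</label>"

# Controller wiring: roughly what HTML/DOM/JS gives you out of the box.
model, view = Model(), View()
model.subscribe(view.render)
model.set("hello")
print(view.rendered)   # <label>hello</label>
```

Every desktop toolkit grows some version of this machinery; the browser ships it as the platform.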

Since the web is now the most popular platform, with millions of libs and tutorials, people just reused it. It just makes sense.

The problem is not the concept. The problem is that we should have driven this effort with a standard that sanely closes the gap between the desktop and the web, so that you don't have to spawn a freaking browser-engine-OS for every one of your apps.

But no, the web is the only platform with a standard. And it flourished while all the big players created closed gardens with shitty proprietary APIs. And this is the result.

Have you not seen Jurassic Park, for god's sake? Life finds a way.


> But none of them come close to the benefit of being able to bring the ton of Web UI experience to the desktop.

Web UI is a gazillion shades of shit, please don't dump that rubbish on the desktop. Thank you.

Seriously, how can one take Web UIs, the most cumbersome, unreliable, inconsistent, unreactive UIs, as examples to be followed? That's beyond me.


Haha, "a gazillion shades of shit". So true.


They should have adopted Qt; it makes it easy to write native C++ code that compiles on different platforms, so you get the performance and RAM usage of C++ but still most of the benefits that web coding offers, without so many drawbacks. (It's also quite nice to work with IMO.)

It's too bad it wasn't more universally adopted, by any of the 3 major platforms (including Linux, where the all-C Gtk+ has become the standard for the most part). Instead, it seems to have found its greatest success in, ironically, small embedded devices. Devices like this simply cannot take the performance hit of something like Electron.


I strongly disagree with the web having the only strong standard, especially since it switched to a living standard.

I admit that for someone coming from the web, Electron is a godsend, but to be honest, the responsiveness of the applications leaves something to be desired.


It's completely false. The web at the beginning was a huge mess of static pages, hacked CGI scripts and no interactivity at all with the need to submit a form to trigger a full reload of the page. Everything was absolutely synchronous and a lot of times the database was accessible directly from the public interface and some pages did actually have the connection string directly in the HTML for everyone to see. I'm really not sure in which alternate reality you have seen web apps doing all that forever.


Yep, completely agree with everything you said. I'd love for there to be a standard on the desktop side, but until then, I'll keep using a hodge-podge of technologies depending on the project, I guess.


> Python still doesn't seem to have a great cross-platform desktop GUI, as far as I can find.

PyQt is literally the best cross-platform desktop GUI going, in any language.


The last time I tried Qt, it was a huge pain compared to WPF or other solutions; admittedly that was something like 5-6 years ago. And it was proprietary and needed a licence. With all my good will, I find it quite difficult to believe that PyQt is now the silver bullet for writing all UIs. It's certainly not for me, given that I find Python a pretty average language with the huge handicap of duck typing (and before someone starts: yes, I'm aware of type annotations).


> The last time I tried Qt, it was a huge pain compared to WPF or other solutions; admittedly that was something like 5-6 years ago.

Shrug, I found it much nicer than anything else I'd used, but I've never used WPF (which is single-platform in any case).

> And it was proprietary and it needed a licence.

Neither Qt nor PyQt is proprietary in the usual sense of the word (nor were they 5-6 years ago). If you're using a non-standard definition, it would probably be more productive to use a different word.

> For sure it's not for me given that I find python a pretty average language with the huge handicap of duck typing (and before someone starts, yes, I'm aware of the 'type annotations')

I'm a huge fan of type systems. I wish I could find a UI framework that's anywhere near as nice as PyQt for an ML-family language.


If you really want to use PyQt, you might try PySide instead, which is LGPL-licensed. The LGPL doesn't forbid commercial use, though it comes with its own conditions.


Take a look at https://github.com/dddomodossola/remi, which transpiles Python into HTML (kind of).


I can't help but think there's a different flavor to this than with higher-level languages. This is adding a platform on top of another platform. The same objections existed years ago with Java when Swing was released. Java is a cross-platform high-level language but Swing is basically an entirely new platform.

It's this platform on top of a platform that is objectionable from a performance, memory, storage, and integration perspective.


We see the same thing with containers. Docker, flatpak, snap... It's just the right time for it.

Languages have evolved to change the way we handle constraints like memory, speed, readability, expressivity, etc.

We are arriving at the peak of what languages can bring to the table. Sure, we can improve things here and there, but the huge challenges now are integration, packaging, distribution, updates, communication, multi-tier architectures and all that.

So we now tweak platforms to help us with that.

But because we didn't see that coming, it's not done in any structured way. It's done exactly the way we did everything since the beginning of computing, by stitching together stuff then hitting hard on it with a hammer until the job is done.

This is not new. IT is a joke of an engineering field. We hack everything, don't think about the future, and then end up settling for the status quo. It's always been like that.


I agree. Containers should be unnecessary -- all that they could provide could be done at the process level with an operating system designed to isolate computing resources appropriately. But operating systems were not historically designed for that so another (somewhat ridiculous) layer is added on top.


Actually, all containers do is use facilities already designed into the OS, via tools like LXC on Linux. Containers are not a platform on top of Linux; they are a wrapper around different isolation mechanisms built into the kernel.

https://en.wikipedia.org/wiki/LXC
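To make that concrete, here's a small sketch (Linux-only; it just reads the standard procfs layout) showing that every process, containerized or not, simply belongs to a set of kernel namespaces — there is no extra platform layer:

```python
import os

# Every Linux process has a set of namespace links under /proc/<pid>/ns.
# A "container" is just a process whose links point at different namespaces
# than the host's processes do.
ns_dir = "/proc/self/ns"
for name in sorted(os.listdir(ns_dir)):
    # Each entry is a symlink like "pid:[4026531836]" identifying a namespace.
    print(name, "->", os.readlink(os.path.join(ns_dir, name)))
```

Run this inside a container and on the host, and the numbers in the symlink targets differ for the unshared namespaces; that difference is the whole trick.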


That's all containers are?

They are an abstraction over OS level isolation.


IT is a joke of engineering only for non-engineers. If a pull request doesn't follow principles agreed a priori, it doesn't get merged. If some people like to play "IT cowboy", hacking stuff together without any process or unit tests certifying the behaviour of what they have written, jeopardising the entire team's efforts, that is not a failure of software engineering; it's a failure of that specific team. And please bear in mind that I have worked in such toxic environments, but I never thought for a moment that software engineering and software architecture are jokes. The joke was the team/organisation I was in at the time.


Alan Kay might have something to say about all this.


>Computing forever. We use C to avoid writing assembly. Use Java to avoid writing C. Use Python to avoid writing Java.

Yeah, but until Electron and the like, we seldom shipped desktop apps in anything other than C, C++, Delphi, etc., even after all those decades. Which are all as close to the metal as can be. And in fact C/C++ can be as fast as, or even faster than, hand-rolled assembly most of the time (with a few exceptions), so the whole premise is moot.

The few Java desktop apps that were around, people used to hate as memory hogs.


You bet? I have 3 programs open right now that use Python for their GUI: my RSS reader, my torrent downloader and Dropbox. And I've got many more installed on my machine.

But the thing is, even when I write something for myself, I first write a command-line app, then a web service. Never a GUI, because it's such a pain.


To be fair, your "Python GUI apps" are a very thin wrapper around C/C++ libraries.


Of course. Electron is just a layer around the compiled Chrome engine as well. If you want anything to display a matrix of rapidly changing pixels, you need low-level performance eventually.


Of course, but in your case, the C/C++ libraries you're calling into aren't doing any heavy lifting, they're just making calls to an X server (or equivalent) or perhaps to a graphics card. There's no reason GTK (or the GUI portion of Qt) couldn't be implemented in Python, it's just a huge undertaking and it was started in C (or C++ as the case may be).


Actually, I've been shipping desktop apps in Objective-C for roughly a quarter of a century.

Gets me the high-level, Smalltalk-ish productivity when I want it (most of the time) and the low-level C efficiency when I need it (sometimes).

Having my cake and eating it, that's me :-)


But can you make a website with the same knowledge? Can you make it portable to other OSes? Can you reuse 20 years of knowledge, resources and libs? Can you hire 10 experts tomorrow to help you with it?

Quality of the tech is NOT the drive for success here. You are missing the point.


>Quality of the tech is NOT the drive for success here. You are missing the point.

Well, maybe it's better to miss the point, than to succeed by selling crap to people who deserve better?

When are techies gonna stand up for quality of tech?


Oh, I do agree; it's just that the debate is all about criticizing Electron instead of providing a viable alternative.


> When are techies gonna stand up for quality of tech?

When the user notices a quality difference?


People used crashy, buggy, slow software for years. Photoshop and Office lost your data on a regular basis in the 2000s. The Windows BSOD was a common occurrence then. We didn't see a massive exodus to Mac products because of that. The only reason people started to go crazy for Apple was after the iPod came out. And even then, it was still a small part of the market.

You can see everyday that people favor cheapness, easiness and convenience over quality. You would not have so much junk food otherwise.


>You can see everyday that people favor cheapness, easiness and convenience over quality.

What I'm saying is "it shouldn't matter" what people favor.

Professionals should still favor quality, even if their customers would just as well have crap (or are ok with crap when its all they can find).


> But can you make a website with the same knowledge?

One of my first commercial projects was a web-content management system written in Objective-C. Customers included Siemens and the German Bundestag.

Another couple of projects were written in WebObjects. If I wanted to, I could use Cappuccino, but I am not a big fan of web/client apps, so I don't.

> Can you make it portable to other OSes?

This product ran on: Solaris, AIX, NeXTStep, Linux, OS X. I think we also had a Windows port.

> Can you reuse 20 years of knowledge, resources and libs?

In the sense you meant it: yes. Except it's more like 30 years. However, programming skills are (or should be) transportable. With virtually no previous experience, I became lead/architect on a Java project, which succeeded beyond anyone's imagination.

> Can you hire 10 experts tomorrow to help you with it?

Is this a serious question?


This is a bad faith answer.

>One of my first commercial projects was a web-content management system written in Objective-C

You certainly didn't use any of your Cocoa widgets for the UI there. It was HTML + CSS.

> This product ran on: Solaris, AIX, NeXTStep, Linux, OS X. I think we also had a Windows port.

Yeah, GNUstep for the GUI on Windows... this is what you think could be an argument for Electron users?

> In the sense you meant it: yes. Except it's more like 30 years.

Again, bad faith. The world has way, way more code, snippets, tutorials and docs for HTML + CSS + JS than for any tech based on Objective-C.

Programming knowledge is transferable, but knowledge of the ecosystem is not, and it is always the most time-consuming part.

> Is this a serious question?

Oh yes, it is. Because, you see, we are living in an era where it's hard to find any good programmers at all, for anything. They are all taken, and they are very expensive.

So basically, with a tech limited to one ecosystem, finding them will be even harder, and even more expensive.

The simple fact that you are pretending it's no big deal (while any company will tell you otherwise, so much so that the GAFAs spend millions just on their recruitment processes) illustrates how much of a troll you are.


> This is a bad faith answer.

It most certainly is not. You just don't know what you're talking about and keep making up new stuff when confronted with actual facts that contradict your fervently held beliefs.


I'm curious how you get Smalltalk-like productivity in Objective-C. I thought the productivity of Smalltalk comes from its live programming environment?


No, it also comes from its expressiveness and flexibility (at runtime).

It's not like Smalltalk is a bad language that just happened to have a productive live programming environment.

It's one of the best languages out there, and conceptually stands alongside Lisp et al.


There's probably, line for line, more VB6 desktop apps in existence than any other language. And that's not just because of how verbose VB is...


Hey, I used to use it! :P I came from BASIC background. VB6 was supposedly good for rapid prototyping of GUI apps (esp CRUD variety). I found it would boot up in 1 second, deploy new project in 1 second, and load new app in 1 second. It was also safer so no constant blue screens over common functionality. It also could wrap foreign code in less safe languages which I could still write in an industrial BASIC. One could also export the GUI to code in different language.

Became one of my favorite toys. I'd still use it for GUI prototyping if it was FOSS and kept getting extended. I found even lay people could learn it well enough to get stuff done. Long after, I learned what horrible things lay people did with it. Yet, they got work done and got paid without the IT budget and staff they would've preferred. (shrugs)


>And in fact C/C++ can be as fast as, or even faster than, hand-rolled assembly most of the time (with a few exceptions)

This is generally true, but to be fair the reason is because we design CPUs differently these days. Modern CPUs use instruction sets that are specifically designed to work well with compilers, and aren't meant to be programmed in hand-coded assembly except for a few critical bits deep within OS code. Older CPUs weren't like this.

It still might be possible to write hand-rolled assembly that beats modern compilers, but you probably need to have seriously super-human mental abilities to do it.


> the reason is because we design CPUs differently these days. Modern CPUs use instruction sets that are specifically designed to work well with compilers

You got the causality wrong. Assembly-programmer-friendly CPUs died because CPUs which weren't as friendly were faster and cheaper; those same CPUs were instead more amenable as compiler targets.


C# has been the default way to write Windows apps since the early 2000s. It is no more "closer to the metal" than JavaScript.


>C# has been the default way to write Windows apps since the early 2000s.

No, it really hasn't. It was just what Microsoft proposed businesses use for writing the bloated internal enterprise apps they used to write in VB.

Those are not the same as desktop apps, and no desktop apps, or very very few, ever turned to C#. Not even MS's own apps, like Office, and surely nothing like Adobe's or countless others.

>It is no more "closer to the metal" than JavaScript.

Actually it very much is: it is statically typed, it has scalar types and contiguous memory allocation that allow for much better speeds (hence the effort to bring some of those things to JavaScript with asm.js and the like), and it even has AOT compilation.

Besides, it's not JS itself that's the problem (though it took millions and top notch teams to make it fast): it's the web stack on top of it. C# just runs on a thin CLR VM layer -- and the graphics are native.


> Those are not the same as desktop apps

I mean, if you're going to say Windows Forms and WPF apps are not "desktop apps" then you're going to have to do a lot more than just declare that they aren't.

> Actually it very much is: it is statically typed, it has scalar types and contiguous memory allocation that allow for much better speeds (hence the effort to bring some of those things to JavaScript with asm.js and the like), and it even has AOT compilation.

You're just listing ways that they are different. They both run in a virtual machine that abstracts away the actual machine. You know, the metal in the phrase "close to the metal."


>I mean, if you're going to say Windows Forms and WPF apps are not "desktop apps" then you're going to have to do a lot more than just declare that they aren't.

Windows Forms is a wrapper on top of the MS Win32 API. And WPF is also based on wrapped native widgets (with some extended with managed code).

In any case, C# apps are not well represented among the majority of Windows desktop apps, most of which are written in C++ or similar, and surely all the successful ones. Can you name successful C# desktop apps? (Not in-house enterprise apps, and no developer tools, please. There, where the users have no choice, even Java does well.) I'll name the successful C++/Delphi/native/etc. ones and we can compare our lists.

>You're just listing ways that they are different. They both run in a virtual machine that abstracts away the actual machine. You know, the metal in the phrase "close to the metal."

A call to a native drawing lib that doesn't pass through 10 layers of abstractions and bizarro architectures is as good as a direct native call. Especially from something like C# that runs circles around JS performance.

But even so, few consider JS to be what makes e.g. Electron slow.


As far as I know there are plenty of XNA games running on both PC and Xbox in C#. And games are pretty much the worst application you can use C# for because, you know, latency. I don't see any real blocker to having complex C# apps on the desktop, apart maybe from the quite shitty ClickOnce and the continuous need for an upgraded .NET Framework to use the new features. But now for UWP apps the default is C#, they can be installed directly from the store, and with Roslyn you basically only need to target .NET 4.5 to have all the features of the latest version of the language. And this is a huge win that admittedly JavaScript already had because of transpiling. If I had to write a commercially distributed desktop application nowadays, I would for sure use C# or F#, not JavaScript.


The comment I was responding to was:

> Yeah, but until Electron and the like, we seldom shipped desktop apps in anything other than C, C++, Delphi, etc., even after all those decades.

So things aren't any different than before. We've just replaced non C/C++ abstractions that were written by the platform-owner company to non C/C++ abstractions that are written by open source projects.

This seems pretty much in line with the general industry trend towards the adoption of open-source software.


>So things aren't any different than before. We've just replaced non C/C++ abstractions that were written by the platform-owner company to non C/C++ abstractions that are written by open source projects.

This statement doesn't even parse.


A large portion of Visual Studio is written in C#.


By "large", you mean some of the UI using WPF, the plugin system supporting C# and some of NuGet?. Not that those are small projects, but considering what is inside Visual Studio, they are hardly "large portions" of Visual Studio.


We use Java because Python proved to be too slow.


And because the IDE support (refactoring etc etc), compile time error checking and ease of use more than compensates for Java being a little bit verbose.


Yeah, and they used C because the compiler could optimize stuff the JVM couldn't, but now it can. And now Python gets type hints, so you can have the IDE tooling you have with Java, e.g. with PyCharm. It's the circle of life.


Not quite correct about type hints: they are only in Python 3.

Everyone who adopted Python 2 for a sizeable codebase is likely stuck there forever, with zero annotations and none of the new tools available, and those codebases will never be ported.


You can use type hints with Python 2 in the form of special comments (http://mypy.readthedocs.io/en/latest/python2.html#type-check...). Guido has been working on the type hint project for Dropbox specifically to help them migrate 2.7 code to 3.

But let's be fair, type-related tooling in Python is not close to what you have in Java yet. It's just that eventually, everything comes around. Java got faster. C++ easier. Python... toolier? Etc.
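For illustration, a minimal sketch of what those Python 2-style type comments look like (the function is made up). A checker like mypy reads the `# type:` comment, while the interpreter treats it as an ordinary comment, so the same file runs under both Python 2 and 3:

```python
def scale(values, factor):
    # type: (list, float) -> list
    # The comment above is the Python 2-compatible form of a type hint;
    # in Python 3 you could write "def scale(values: list, factor: float) -> list".
    return [v * factor for v in values]

print(scale([1, 2], 2.0))  # [2.0, 4.0]
```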


Java got faster and got some functional features.

Python broke all backward compatibility and put every sizeable piece of existing software in a miserable deprecated state with the Python 3 break.

I don't recall C++ getting easier. The few tools and IDEs still fail at decent refactoring and code completion. The C++11 movement adds a few more-or-less-useful features, piling on top of the vast amount of already existing complexity.


C++11 makes a LOT of things much, much easier. Yes, it does pile on top of existing complexity, because they're loath to break any backwards compatibility, but the nice thing is that you don't have to use the older features or ways of doing things. In fact, if you look at multiple serious C++ codebases, it'll almost look like they're different languages, as every project basically chooses a subset of C++ it accepts. Realtime embedded code doesn't look anything like desktop application code, for instance, but they're both technically C++.


And as far as I remember, even Google only supports Python 2. Their Python-to-Go transpiler doesn't support Python 3, for example (unless something changed in the last few months and I missed it).


Most of the C++ annoyances have already been fixed by Dlang, which interfaces very nicely with C/C++ libraries and code.


And because packaging everything in a JAR is easier than pulling a thousand dependencies for every deployment. Not to mention dependencies that also require a C/C++ compiler, Boost, or other native libraries.


Yeah and some use C because Java is too slow. The point is, there is nothing new here, the history of computing is repeating itself. It's just now we have better toys, a bigger market and the stakes are higher.


Mmmmm, steak.


I'm French, so my English is sometimes funny. I obviously didn't mean a piece of meat from a weed-smoking cow.


Your English was flawless there, he was just making a joke because stake sounds like steak, and he likes steak


I wrote steak in the first version. I edited back to stake after his/her comment.


FWIW, it is a mistake a native speaker could have made.


Mmmm... misteak


You were completely correct and idiomatic, goatlover just wanted to throw in the pun.


And this is the reason that I've become such a huge fan of Rust... back to the metal, high level interface like Java.


Rust doesn't bring much in that aspect. You could use C++ for performant high level abstraction for many years before Rust. Rust brings memory safety and that's huge. But it's not anything like Java, it's much harder.


Actually, I think it's very similar to Java in terms of what is being offered and at what layer of thought.

It does have a steep learning curve, but it's worth it. The number of concurrency bugs alone that I could have avoided if I had been able to use Rust years ago is sad to think about. Java has great concurrency tools, but doesn't do anything to make sure you're not shooting yourself in the foot.


Java also has JavaFX, which comes with an embedded Webkit browser. I can create my UI with React, or any other HTML/CSS/JS library, and make it interact with code written in Java, Scala, Groovy, Clojure, Kotlin, Ceylon, Frege, etc. very easily. I think this provides all the benefits of Electron, but is even more flexible and powerful.


Do you need to run N programs on N copies of Chrome with JavaFX, or does it use a single VM like all JVM apps do? Because that's the problem with Electron mentioned in the article, and that's exactly one place where JVM languages are better.


Add Nashorn so you have a spare JavaScript VM, just in case...


Yet, this is not as popular as electron. Ask yourself why.


This is basically a terrible argument in and of itself. You would do better to flesh out the why yourself instead of expecting everyone to conclude your self-evident correctness.


Too many JS/web programmers that's why. Plus JS/web programming is way easier and more popular than most of the JVM languages.


I'd guess that there are a variety of things to consider. Do you actually have anything to say?


> code written in Java, Scala, Groovy, Clojure, Kotlin, Ceylon, Frege, etc

Your list of 7 JVM languages (both here and in your earlier comment on this submission) seems to be from most widely used to least. Yet in your HN comment from 2 days ago at https://news.ycombinator.com/item?id=14068664 you ordered that list differently, i.e. "Java, Scala, Clojure, Groovy, Kotlin, Ceylon, Frege, etc". Have you changed your mind about the relative adoption of Clojure and Apache Groovy in the last two days?


Not really. Actually, however, there is a recent survey that shows that Groovy is the second most popular language on the JVM, behind Java. Myself, I use Scala, and would like to learn Frege.


Groovy is a quite lovely dynamic language and it's actually the best solution for BDD using Spock. Sadly, I still haven't found anything comparable for BDD in the other languages I use.


> Groovy is a quite lovely dynamic language and it's actually the best solution for BDD using Spock

There's something wrong when a testing framework hacks into a language parser to make the language's statement labels carry special meanings the way function names do, and overloads the `|` operator in expressions so data will appear as a table in the source code. "Lovely" isn't the word for that sort of thing.


It's a DSL. And it works very well also because of these operator overloads.


We use some hipster language and act like we're doing it just to be ironic when actually it's a super cool language.


Heh, you remind me that Crystal, Nim and Rust were invented to avoid writing C.

¯\_(ツ)_/¯


>There are costs to developing desktop applications and developers don't want to pay them, so they make users pay for them

That is definitely not the whole story. Costs are shared between developers and users. If it's more expensive to develop an app, you can bet it's going to cost users more too.


Well, why would any developer or organization doing development "want" to pay those costs, as you suggest they should?


To make a quality product and offer a better user experience?

Users may not notice the slowness right away, but if they run a bunch of applications at once they probably will, and they're more likely to notice it while running on battery.


Because those costs grow orders of magnitude as they're externalized. $1000 spent during development can offset $1M in energy wasted by a million users. So they should want to pay those costs as responsible members of civilized society.

And if they don't want to, they should be forced to by said society.
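A back-of-envelope sketch of that claim (every number here is an assumption, chosen purely to show the orders of magnitude involved, not taken from any measurement):

```python
# Hypothetical figures: none of these come from the thread.
users = 1_000_000        # assumed installed base
extra_watts = 5          # assumed extra power draw of an inefficient app
hours_per_year = 2_000   # assumed yearly usage per user
usd_per_kwh = 0.15       # assumed electricity price

# watts -> kilowatts, times hours of use, times number of users
wasted_kwh = (extra_watts / 1000) * hours_per_year * users
yearly_cost = wasted_kwh * usd_per_kwh
print(f"${yearly_cost:,.0f} per year")  # $1,500,000 per year
```

Under these made-up numbers, a one-time optimization effort costing far less than that would pay for itself across the user base many times over.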


THIS ^


> ... Electron applications are shite in comparison with proper native applications.

Is that true from a user's perspective? The average user I know would not have the idea that Slack is somehow inherently worse than say MS Word or any other truly native app they use everyday. What would Slack gain by integrating better with my Mac?

Slack starts up too fast for me to read the little quotation (on a previous-generation MacBook Pro). I actually wish it were slightly slower, because I like those cheesy quotes when I can catch them. Everything feels almost instant. Not bash-instant, but as fast as any mainstream messaging app.

I leave it running all the time and get what feels like normal hours upon hours of battery life. There are apps and browser tabs that I've noticed destroying battery life, but neither Slack nor Atom nor VS Code is on that list for me.

The anti-Electron complaints all kind of feel like the same argument that comes up as each new layer of abstraction gains acceptance. I'm sure there's a lot that Electron can and should improve. Running multiple copies of Chrome does sound awful, but until I read that I didn't know the atrocities it was committing.

Just doesn't feel productive to try to make Electron go away vs. working to improve it or create a better cross-platform abstraction.


> Is that true from a user's perspective?

It is for this user.

> What would Slack gain by integrating better with my Mac?

Far better resource management, one would hope, for starters. As the article pointed out, when you're measuring your IRC client's memory use in hundreds of megs, there's a problem.

It isn't uncommon for me to need to choose which "essential" apps to turn off to get something done on a maxed-out MBP. So Slack gets turned off, and maybe someone gets annoyed that I don't respond to something, while I run a few VMs in Slack's former memory space.

As far as other things, well, using a platform's capabilities as appropriate is generally considered good form - HCI concerns, and, generally, just people like things to be consistent. I don't know why we're supposed to forget this now.

I personally don't care if Electron goes away or what, but to the extent that I'm forced to run Slack by my cow-orkers' choices, it sure would be nice if it were less of a bucket of ass.


Recently I was doing some MEAN-stack development. RAM was maxing out 8 GB and the system was struggling to run smoothly. Mongo alone was hogging around 1.5 GB, webpack was hogging 1 GB, and there were three separate Express.js servers hogging 500 MB each.

And then there was slack


Finally an argument that is concrete and understandable! Seriously, this is the first comment I've read that is tangible as opposed to theoretical. All the other complaints are on the basis of being morally offended by the ratio of resource usage vs. application functionality.

The nice thing about slack is that it will run about anywhere. The app on my phone magically knows if the app is not running on my laptop and will then and only then send me notifications. But, I'm not suggesting this as a solution for you -- best productivity killer I know is having to pay attention to yet another device while trying to get work done.

Depends on your environment, but I have one very effective and successful coworker who just flat-out refuses to use Slack. The number of companies where that is acceptable is probably limited. Unfortunately.


I know I am really late to this, but one of the things I have been meaning to try is taking the web apps behind Electron apps and running them with [Fluid](http://fluidapp.com) instead. I think this may give me the perks without some of the drawbacks. I am not really sure, but it is an experiment I would like to try out.


I've never heard of Fluid before, but it seems to be a glorified shortcut with minor features such as notification badges. I really can't tell how this can integrate with Electron at all.

Electron would have you write a web app and package it as an app installed to your Applications folder just like any native app. You'll write this code in JS, HTML, CSS and use Electron APIs.

Fluid also would have you write this in JS, HTML, CSS but use Fluid APIs instead. The two compete for the same role in that sense.


With Fluid, you don't do anything from the web developer's perspective. The Fluid app just packages a website as a separate application. So what this means is that any user can take a website and make it a separate app if they want.


> I personally don't care if Electron goes away or what, but to the extent that I'm forced to run Slack by my cow-orkers' choices, it sure would be nice if it were less of a bucket of ass.

For what it's worth, since you seem to like IRC clients so much better, Slack has a built-in IRC bridge: https://get.slack.help/hc/en-us/articles/201727913-Connect-t.... It works just fine, in my experience.


From experience, using the bridge means pinging the admin on Slack about it and convincing them that it's even worth doing, and then being stuck in "The IRC Room" instead of the other rooms in which people are actually communicating.


Allow me to humbly suggest:

https://github.com/wee-slack/wee-slack

I use it extensively for everyday Slack, where I don't care much about inline media and inline formatting. And for a text client, weechat + this plugin handle it pretty gracefully.


> when you're measuring your IRC client's memory use in hundreds of megs, there's a problem.

Not really, unless you're using most of your RAM, or you treat this as a fundamental principle. I've never checked Slack's RAM usage, and I'm sure it's higher than if they'd built a great native app, but I've also never had a performance problem with Slack or any other application (okay, except for Eclipse, but I stopped using that when Android Studio came out).

> It isn't uncommon for me to need to choose which "essential" apps to turn off to get something done on a maxed-out MBP

And there's the key. You have a very specific use case that is probably extremely rare. I'm a full-time developer and as far as I know I've never been low on RAM on my first generation Retina MBP.


People are less likely to be spinning up VMs, but they may have far less RAM to start with, and the more apps they use that are wasteful of power and RAM, the more likely this becomes an issue for them.

It is truly strange to hear being resource-constrained described as a special case in computing.


The last time I taught undergrads (last semester), their computers were far from anything as good as what we'd have. If I asked them to run anything remotely demanding, it really taxed their laptops.

We really do need to be more mindful of resource usage. I'd rather write my own lightweight clone of something I want than use an Electron-based hog. There are menubar-only apps that come with an entire Electron dependency.


Why is that strange for personal computer usage? How common do you think it is for users to experience a performance problem caused by low memory? I'm genuinely surprised that we could apparently have such opposing intuitions. My intuition is that it is exceedingly rare.


Your intuition is badly formed because it ignores how crappy real people's computers are. The world is full of really bad computers that people don't upgrade, because a computer is a low-priority item for them and they have lots of more important things to put their money towards, like food, shelter and medicine.

Further, their machines are full of software that runs all the time for no reason, including but not limited to multiple redundant antivirus products trying to scan everything in real time in a vain attempt to prevent the next malware infection from taking hold.

Various services like file system indexing and virus scans run at inconvenient times and render things slower than before.

Laptops are super prevalent because their portability is more important than power. It's not at all unusual to keep using the same machine for 5-8 years as long as it still works.

Real people have bad computers with bad specs, and in a massive number of cases the browser is already using a significant portion of the entire computer's resources.


The electron apps I've seen don't really have much UI. As soon as you build an app with some complexity it will start being a problem.

As a native Mac developer I can drop in things like a predicate editor for defining filters or queries. They come with Cocoa. I can put in sophisticated table views and tree views. I can connect all of this easily to Core Data, which loads objects from a database on demand without my having to write any code.

You see these sorts of shortcuts in the Electron apps I've seen. They don't have a proper GUI for preference configuration like any Mac app, because they can't get that for free. Atom doesn't have a regular file-open dialog. I use that a lot, dropping a file into it to jump to a particular location I have open in Finder (the file manager).

My preferred editor, TextMate, has very little development effort behind it compared to Atom, yet it has a far richer GUI. You get a GUI for creating and editing plugins, not just for editing config files. You get rich native UIs for selecting fonts and colors; you don't have to write font names and color names in some config file.

There is a certain irony in the claim that stuff like Electron saves you cross-platform code: what is usually not cross-platform is the GUI, and the Electron apps I've seen have very little of it. Try making something with the complexity of Keynote, Pages or Numbers in Electron and I think the lack of a comprehensive set of prefabricated GUI components will start making its mark.

Not to mention the numerous native APIs which exist which you have to duplicate, e.g. for vector graphics, animations, 3D graphics, audio, video, font handling, OpenCL.

How about people with disabilities, or internationalization? You get great native tools for handling those; how do you accomplish that in Electron without re-inventing the wheel?


> The electron apps I've seen don't really have much UI. As soon as you build an app with some complexity it will start being a problem.

Well, VSCode certainly seems to contradict this.


The GUI in VS Code is basically: a file tree view, an editor area, the command palette overlay, a status bar, and the debugger panel/area.


I use a Mac mini for my dev work. Almost every time it freezes out of the blue, the culprit is some HTML-based "app", frequently with an nwjs process using 100% of CPU and huge amounts of RAM.

This includes Chrome and Electron-based stuff.


I actively don't use and discourage people from using Slack where I can because of how poorly it runs.


What about Slack webpage?


Same issues. The bloat is atrocious.


And because it's a closed-source, expensive implementation of IRC.


That's not great, but I can live with that.


With no fuss history


Honestly, though, the in-client search sucks. I'd much rather search archived text IRC logs through some web interface or locally than try and find something from 6 months ago within Slack.


I totally agree with you. I only use Slack from the web app, but I'm a heavy user of VS Code and Spotify. I believe in standardized Web APIs rather than a clusterfuck of native APIs.

The problem here is the base platform: Electron and the underlying Chromium/Node.js.

But as consumers we feel it when our browser lags, so browser vendors optimize speed over memory and CPU, caching the shit out of everything. The Web is a dangerous place, so they further isolate every tab as a process, sandbox them, and keep lots of copies of the same thing in RAM, because a security flaw is a lot more shameful than a memory flaw.

You are right, Electron is a hog. But last I checked, Electron, Chromium and Node.js were all open source. We can actually make a difference.

Making noise also makes a difference. When someone complained about VS Code's CPU usage from the idle blinking cursor and it blew up on HN, the next month's release had a fix (for all platforms).

The truth is you can't move fast doing native development with different libraries. Electron keeps your dev costs down and allows you to move very fast.


> You are right, electron is a hog. Last I checked all, electron, chromium and Nodejs were open source. We can actually make a difference.

Won't that encourage more companies to cut corners? How can producers of quality products compete when other companies can release prototypes that their paying customers finish for them?


There's the argument "Why should I do it? If I'm paying for it then they should do it"

And there's, "This really sucks, but it's an open source project. I think I can make a difference that would affect millions in a small way but would still make a big impact overall."


So use a cross-platform toolkit then like Qt, WxWidgets, Tk, etc. You don't need the bloat of a browser engine.


seriously. and skype is a grand example of how shitty a native client can be.

tired of this argument. if you put in the proper effort, your users won't care or notice.


Is Skype native?

Except for the very old 4.x Linux version - which is native Qt - and comparably old Windows ones, I thought all of the recent versions are web-browser wrappers (of one sort or another) with some occasional native widgets around that.


(Can't edit the post above) But, of course, Skype issues aren't because of their UI nativeness. Their UI layer is... well, tolerable. Heck, I'd be happy if all I'd have to complain about Skype would be a crappy UI.

But I don't think the UI implementation has anything to do with why we "love" it so much. I mean: missing calls and messages every other week, random desync between clients, and all the usual rituals that every other group call starts with (you can hear them accompanied by chants of "can everyone hear me?", "$name are you here?", "tell me if you see my screen" and "let me drop the call and restart").


i think i was still overreacting to prove a point. everything you're saying is right.

my point is, even if i concede native is "better" or whatever, the difference should be negligible for most apps for all users (let alone "most" users).


You are of course correct. It's possible to make awful native applications, and good non-native applications.

The sliders mean that you will trade performance off against ease-of-development. It will be easier to ship a basic Electron/web-stack app than a native one for multiple platforms. However, it will be harder to do this is in a high-performance and well-integrated way.

I went through all this with Cordova back in the day when trying to build cross-platform mobile apps. I was able to ship apps quickly, but at the expense of quality – it's okay to do this so long as you are aware of the trade-off.


yep definitely. and there is definitely a cost and effort required to make Electron feel as good as native. depending on your feature requirements, this can get really hairy.

i'm just exhausted by the "web apps feel so much worse than native!1" argument, which is an oversimplification and not a rule.


Skype for Linux is a web app. It is a lot worse than the old Qt client.

Are the other platforms native? I thought they switched to webrtc in 2015 to be able to just use a simple web window wrapper for all platforms, just not Linux.


The newer skype versions on Windows are web-based.


Slack behaves perfectly fine for me and I don't have incredible hardware. It's non-Electron apps that are typically mentioned by macOS' energy management system as "consuming significant energy", in my experience.

Edit: Mass downvotes; lol ... Just stating my personal experience!


> Electron applications are shite in comparison with proper native applications. They fail to integrate with the host platform, they are slow, they hog memory and drink power.

Are they though? The two applications that use the most energy on my Mac - by far - are Steam and Skype. Steam still has trouble with HiDPI and freezes when performing various UI interactions. The number of problems with Skype are uncountable.

I'm currently booted into Windows for work, looking at my current process stats, the top memory consumers are:

* Visual Studio (hodge podge of all kinds of things, 800MB)

* Chrome (215MB)

* Microsoft Intune (presumably native, 114MB)

* GitHub (.NET WPF application, 108MB)

* Explorer (native, 103MB)

* Search Indexer (native, 107MB)

* Lync (who knows, 98MB)

Meanwhile, the supposedly terrible Electron apps:

* Spotify - 58MB

* Slack - 93MB

* VS Code - 60MB

As far as interfaces go, Spotify, Slack and VS Code easily outclass GitHub, Visual Studio, Explorer and Lync in usability.


Each Chrome (and Electron) app instance is a group of processes. You are probably just looking at the main process, while the bulk of RAM/CPU use comes from the renderer processes.

Here are stats on my (Linux) box:

* atom - ~500MiB (one window)

* slack - ~816MiB

* chrome - ~935MiB (two tabs + hangouts)

* google music electron app - ~500MiB


Nope, those figures were after adding up all the processes. What I have noticed in switching between platforms is that applications tend to report a far lower memory usage in Windows than on Linux or OS X.

It might just be an accounting difference. Forked-process applications in particular are very difficult to account for, because even their private/RSS pages may be copy-on-write from another process.


My figures exclude shared memory and are calculated as VmRss - Shared from /proc/<pid>/statm.
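For reference, that calculation is a few lines of Python (Linux-only; `/proc/<pid>/statm` reports page counts, and the function names here are my own):

```python
import os

def private_rss_bytes(pid):
    """Private RSS estimate: (resident - shared) pages from /proc/<pid>/statm.
    Field order is: size resident shared text lib data dt (all in pages)."""
    page_size = os.sysconf("SC_PAGE_SIZE")
    with open("/proc/%d/statm" % pid) as f:
        _size, resident, shared = (int(x) for x in f.read().split()[:3])
    return (resident - shared) * page_size

def total_private_bytes(name):
    """Sum private RSS over every process whose comm name matches exactly."""
    total = 0
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/comm" % pid) as f:
                if f.read().strip() != name:
                    continue
            total += private_rss_bytes(int(pid))
        except OSError:
            pass  # process exited while we were iterating
    return total
```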

If you are using Windows 10, your missing Atom processes will be under Background Processes in Task Manager. For the sake of the argument, I just did a fresh install of Atom and this is what I see on the first run: https://i.imgur.com/0ZRSumF.png. ~220MiB (no files open, zero extensions).


That's really interesting! I have an up-to-date atom install that i've been adding plugins to for about a year now (up to like 50 or so), and has been running for at least 12 hours (windows 10), and currently has 7 ~500 line files open.

Adding up all the processes' (7 of them) private memory gets me 194mb.


I installed the 64-bit version, you might be using 32-bit.


No it looks like i'm running 1.15.0 x64.

It might just be a difference of platforms.


At least if one crashes it won't bring down the whole Chrome stack.


VS Code has way fewer features than Visual Studio, especially for C#/.NET development. VS Code is a minimalistic IDE, very useful, especially for javascript development where tooling is quite minimal anyways. But let's not think that VS Code outclasses visual studio.


VS Code is not an IDE in the classic sense of the term (integrated development environment). It's a text editor with some debugging extensions. I don't think it is even comparable to Visual Studio at all.


That line is getting really blurry with both Atom and VSCode. Haven't used VSCode, but the only feature I recall missing in Atom right now is refactoring; and to be fair, last time I used a real IDE (XCode), I couldn't refactor there either.


For me, refactoring is something that is nice to have, certainly, but not what defines an IDE. The debugging and development facilities are what matter most.


A lot of that is available in "text editors" like Atom (and I assume VSCode). Of course, it depends on if someone wrote a plugin for your language of choice.


IMO it qualifies as an IDE for Typescript, and borderline with Javascript — slightly better than WebStorm in the former case and slightly worse in the latter — but not so much other languages.


I agree with you, but there doesn't really seem to be any native application that occupies the same space as VS Code or Atom. VSCode is not just a text editor like vim or Notepad++, as some people here imply. As you say, it's a miniature IDE. And it's far more comprehensive than Sublime Text.

For .NET Core + TypeScript, VS Code is almost at feature parity with full-blown VS, while being an awful lot faster. The only thing I find particularly lacking is debugging, but even that is coming along well.


I agree. I use VS Code because it's the best choice I have for TypeScript development on the Mac. However, I still miss Visual Studio while using it. The debugger especially makes me want to cry, though this might have more to do with Chrome than VS Code's front end for it.


> As far as interfaces go

You missed this qualifier in the parent comment.


You need to compare apples to apples:

The right benchmark for VSCode is not Visual Studio, but Notepad++ (5.9MB on my system right now).


The right benchmark for notepad++ is not visual studio. It's notepad2.

Notepad2 = 1792 kB.

P.S. Pay attention, this is kilobytes, not bytes :D


But it's impossible to do a real apples-to-apples comparison, because I use Atom more like an IDE than I ever did with notepad++, and Atom has magnitudes more features for me than notepad++ ever did and most likely ever will.

I don't care about the difference of 175mb of ram between the 2 as one of them (atom) is infinitely more useful for me than the other (notepad++)


Steam has always used an HTML renderer, even in its first incarnation. Nowadays it uses the same CEF (Chromium library) as Electron does.

So Steam was one of the first "Electron" apps. An even earlier one was Windows Explorer, as of the shell update that came with Internet Explorer for Windows 95 (included by default in Win98). All the sidebars of Explorer were HTML-based.


No, VGUI is not HTML.

Here's a fun one. Start Steam with `-dev` and hit F7. Widget factory VGUI edition!

Oh also, https://developer.valvesoftware.com/wiki/VGUI_Documentation


Have I talked about VGUI? No.

Valve initially used a very obscure/niche HTML render engine for Steam (2006). The company/website behind it isn't online anymore. An older version of the wiki had some brief info, but all of that info has vanished.


Stop spewing bullshit.

Here's an old revision from 2005 by a Valve employee confirming Steam used VGUI back then.

https://developer.valvesoftware.com/w/index.php?title=VGUI_D...


It talks about the Steam overlay. Parts of the Steam application were always HTML: first the little-known HTML renderer from a defunct company, then Trident, and later CEF.

search for HTML: http://www.plastic-warfare.com/SteamUIGray.zip

Funny how the old things stay online. Notice also the cyber cafés menu entry. http://www.steampowered.com/status/game_stats.html


>A very first one was Windows Explorer as of Shell update that came with Internet Explorer for Windows 95 (included by default in Win98). All the sidebars of Explorer were HTML based.

That's a stretch; X/SG/HTML user interface APIs are not the same as a whole browser with Javascript VM, full networking and security stack, full-featured/standards-compliant (X)HTML/CSS rendering engine plus support for legacy features, UI assets, multimedia support, sandboxing, resource caching/persistence, and so on.


Win95 with the shell update, up through Windows Me and 2000, had the full Trident engine (same as IE 3-5.5) in the shell (Active Desktop, Explorer bars, etc.). Windows Me/2000 could play audio and video previews in the sidebar (all HTML-based).

Also, WinXP used a forked Trident engine with some removed features for the "Software" dialog and various other features (Windows Help, etc.).


> * Spotify - 58MB

Maybe for one of the processes, but on Windows Spotify usually needs at least three processes to run (five if you count the Web Helper and Crash Service, which are probably native code). On my machine the three main Spotify.exe processes take up at least 170MB of RAM, often more. Although I wasn't aware they were using Electron, as their app has a standard, native Windows menu bar.


Another one for your list: Nylas Mail.

We straight up would have not shipped it without Electron and the CPU it uses to sync is on-par with apps like Apple Mail & Thunderbird.


Nylas Mail

> Nylas Mail - The best free email app | Nylas - The best free email app

What exactly makes it "best"? It looks to offer nothing more than other "best" mail apps.


Hard to answer your question without hijacking the thread, but here goes:

• It's got pretty much all the power features out there like Snoozing, Open Tracking, Send Later, Reminders, Enriched Contacts (i.e. Rapportive), Unified Inbox, Swipe Actions, Templates, etc.

• It's open source and super easy to extend with JavaScript plugins. Developer have made dozens of themes and some cool plugins including PGP, Unsubscribe, Translation, Todoist, Trello, Markdown, etc.

• It's cross-platform for Mac, Windows & Linux with custom UI styles for each.

• It works with all mail providers including Gmail, Yahoo, iCloud, Outlook and even vanilla IMAP and on-prem Exchange servers.

• It syncs your data directly (not via a cloud service) for speed and security.

• It works offline, so you can use it on a plane or when you don't have WiFi.

• It's open source GPL available on GitHub with >20k stars: https://github.com/nylas/nylas-mail

• It's free.

It's also still improving and has over 800 GitHub issues and we would love help from anyone who wants to make email better! :)


There are other comments in here comparing react-native to Electron. Do you know if you could build Nylas Mail at the same pace with react-native? Will the binary size/RAM usage drop significantly on react-native?


I haven't yet seen a substantial desktop app built with React Native and afaik neither FB nor GitHub is investing in React Native for desktop so generally this is hard to say. React Native is more of a framework whereas Electron is a runtime-- much different goals though both are super cool and I'm enthusiastic about the future of both!


I'm holding off my downvote to see if i can get a straight answer out of you. Nylas Mail bills itself as the best email app. Them's strong words, but maybe you're worth the claim? Let's see!

I see from screenshots that Nylas has folders and labels. Can i use either of these in the following fashion?

- i can have a tree structure of them

- an email can be in two separate folders/labels at the same time

- folders/labels can be configured to learn which emails to automatically sort into themselves, based on the email contents, by dragging and dropping the email into or out of them

Ball's in your court.

E: Bonus round! In this screenshot there's only 6 emails in the list: https://www.nylas.com/static/img/nylas-mail/hero_graphic_mac... Is there a way to get a list of emails where each line is actually only a line of text tall?


Some quick answers

• If by "tree structure" you mean a folder hierarchy, yep that's supported. I think we have a current bug with dragging nested subfolders but we're working on a fix. (Surprisingly >99% of users have a flat hierarchy.)

• A thread can certainly be in two separate folders (e.g. Inbox and Sent) but an individual message can't be in two folders at once. In that situation there are two copies on the actual mail server. For Gmail/Gsuite this is possible via labels where any thread can have an arbitrary number of labels. We support both systems.

• "labels can be configured to learn which emails to automatically sort into themselves, based on the email contents" -- this is a really cool idea and something we've talked about internally. AFAIK there is no cross-platform mail client that does this today beyond things like manual Gmail filters. It could also be an interesting plugin that anyone could build on NM. We have a Slack chat room where folks discuss stuff like this if you're interested: http://slack-invite.nylas.com/

• And for your bonus round (haha) yes there are 2 different ways to configure the UI. One of them is 3-pane with a reading mode like Outlook, and the other is 2-pane that navigates like Gmail. http://i.imgur.com/Lt0x7O4.png

Also in 3-pane if you make the message list wide enough it will switch into the compact version: http://i.imgur.com/SaGp9eV.png

(Obviously it will show your real mail data. We have a "screenshot mode" for sharing stuff like this without revealing sensitive information.)


That's worth upvotes for the effort alone, thanks. :)

> Surprisingly >99% of users have a flat hierarchy

You tend to end up with it only after really long-term usage. All the folders i have with sub-folders got them after they grew too big to be just one, e.g. "Perl coding stuff" has several subs, as do "Shopping", "Clients", "Computer Game Emails", etc. Some clients have additional subs. All started out as singular folders though.

> threads, not singular mails

Ok, fair enough.

> labels auto-learning by drag&drop ... AFAIK there is no cross-platform mail client that does this today beyond things like manual Gmail filters

Opera M2 has done it extremely well since ~2000. Google Inbox does it ... eh. On mobile and PC, none, right. The filtering is honestly super easy to implement: it's a Bayesian filter. In older email clients these were used to filter out spam. Opera M2 simply gives each folder one (user-configurable) and runs all the filters on each mail that comes in.
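For the curious, the core of such a filter really is tiny. A toy sketch (my own code, not Opera M2's actual implementation): train one multinomial naive Bayes model per folder whenever the user drags a message in, and score incoming mail against every folder.

```python
import math
from collections import Counter, defaultdict

class FolderFilter:
    def __init__(self):
        self.words = defaultdict(Counter)  # folder -> word frequencies
        self.msgs = Counter()              # folder -> message count

    def train(self, folder, text):
        # Called when the user drags a message into a folder.
        self.words[folder].update(text.lower().split())
        self.msgs[folder] += 1

    def suggest(self, text):
        # Score each folder with log P(folder) + sum of log P(word|folder),
        # with Laplace smoothing so unseen words don't zero out a folder.
        tokens = text.lower().split()
        total_msgs = sum(self.msgs.values())
        best_folder, best_score = None, -math.inf
        for folder, counts in self.words.items():
            n = sum(counts.values())
            vocab = len(counts) + 1
            score = math.log(self.msgs[folder] / total_msgs)
            for w in tokens:
                score += math.log((counts[w] + 1) / (n + vocab))
            if score > best_score:
                best_folder, best_score = folder, score
        return best_folder
```

Dragging a message *out* of a folder would be the negative-feedback case, which this sketch omits.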

And to be fully honest here, i still use Opera 12 as my main browser, along with its mail client and don't see myself jumping ship ... anytime really since for me the combination of mail client and browser is key. However to respect an email client i expect it to be a feature match to Opera M2 at least.

Not interested in Slack. If you had an IRC channel though, i wouldn't need to sacrifice a chicken and a CPU core. :)

> UI

Ok, that looks fine. I personally prefer to have the email below the mail list, but that's not a huge thing. Maybe an option to consider. Screenshot mode is cute. :)


You can actually join the chat via IRC/XMPP. No chicken sacrifice required! :P https://get.slack.help/hc/en-us/articles/201727913-Connect-t...

I haven't tried Opera M2-- I'll check it out. Might be a fun hackathon project to train a Bayesian filter on every folder and auto-suggest routing at least.

There was a big IBM Research study a few years ago that showed it's dramatically more efficient to search email versus categorizing messages into folders. Here's a link to the full paper: http://people.ucsc.edu/~swhittak/papers/chi2011_refinding_em...


Mac is a third-class port for this software. It never got optimized like on Windows. Not a fair comparison.

With Electron, every OS is a third-class port.


http://imgur.com/a/Ofyei

Chrome kills me. :(


Stop using it. Both Firefox and Edge have come a long way. I recently dropped Chrome and have not looked back.


For me (I open a lot of tabs), Chrome is unusable without 'The Great Suspender' extension


Most of the Electron apps use multiple processes so you'll want to add up all their instances.


And simply adding up all the processes will count the shared memory multiple times, which greatly inflates the "final number" beyond what is really used.
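On Linux there's a middle ground: PSS (proportional set size) charges each shared page 1/n to each of the n processes mapping it, so PSS values can be summed across a process group without double counting. A rough sketch, assuming a kernel that exposes `Pss:` lines in `/proc/<pid>/smaps`:

```python
import os

def pss_kib(pid):
    """Proportional set size in KiB, summed over all mappings of a process."""
    total = 0
    with open("/proc/%d/smaps" % pid) as f:
        for line in f:
            if line.startswith("Pss:"):
                total += int(line.split()[1])  # value is reported in kB
    return total
```

Summing `pss_kib` over all of an Electron app's processes gives a fairer total than summing RSS.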


If you basically agree with the parent post, why do you say it's nonsense?

> The underlying issue here is that Electron reduces the barrier to entry for cross-platform development.

Precisely.

> The trade-off — and there is a trade-off — is that Electron applications are shite in comparison with proper native applications.

But native applications are shite in terms of portability.

> But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support

See the part where the original post said "it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies". The costs may not be "prohibitive", but they certainly would multiply effort and resources, and divide profit.


The "nonsense" part was that "nobody" was writing desktop apps for the past 5-10 years.


Well, they were exaggerating, but I think they had a point: until recently, on my desktop machines I only cared about having a web browser and a terminal (+ Steam at home).

Lately there are more UI-ish apps I value having everywhere, including the desktop (Spotify, HipChat, WhatsApp, VS Code). I'd also love a decent cross-platform podcast app.

I think it's clear there's now more demand for certain types of consumer-ish desktop apps (chat apps & music apps especially) than there was a few years ago.


> They fail to integrate with the host platform

Just as a counter-point, because native app fans often make this point as though it is universally recognized to be a good thing.

I don't want apps to integrate with the host platform. The host platform is not the thing I care about. I use several host platforms in different contexts (I have work and home computers and a smartphone, they all run different OSes) and I would prefer that Slack look like Slack and not have buttons in different places with different UI interactions just because that's the way Reminders.app works.

For me, the web is the host platform I care about. It's the one that I can use anywhere and only have to remember the URL.


I doubt you don't care. These are examples of "integration with the host platform":

* Text selection

* Caret behaviour (e.g. Option-arrows on macOS)

* Clipboard

* Spell check

* Open/save dialogs

* File system access

* Drag/drop

* Window management

* Accessibility (screen reader support etc.)

* Standard right click menus

* Indexing (e.g. Spotlight on macOS)

* etc.

You may be thinking of native UI idioms, which even Apple threw out the window several years ago.

Electron apps are mostly very good at the things in the above list, because the Chromium web renderer has spent years abstracting the mechanisms needed to feel native where it matters.

Non-native toolkits such as Swing and Qt also spent years trying to achieve native look/feel, mostly through emulation and host OS detection, and they still feel pretty crappy compared to Electron apps.

Slack, Spotify and friends do a good job of inventing their own "web but native-feeling" UI. An example of the exact opposite is Google Docs, which still, for all its technological impressiveness, feels like a crummy Swing app trapped in a web page. For example, Google Docs renders its own right-click context menus, which look and feel nothing like native context menus. Google Docs' mini-apps also have their own menu bar and toolbar rendered inside the page, in addition to the host window's, so you get two levels of menu bars and toolbars.


Qt done well comes a lot closer to native look and feel than any Electron app I've ever used. The problem is that a minimum-effort Qt app falls into the uncanny valley. Slack is clearly foreign, but it's polished.


I often get frustrated when discussions about user interfaces in the context of native/non-native don't distinguish between 'skin' and 'ui'.

For the most part I think what people care about is that things work as they expect, which is primarily 1) placement of UI elements, and 2) interaction with/between these elements. If that's done right, nobody cares if the UI is flat, dark, light, or has a leopard print background.

Now I do understand that there's some overlap in ui/skin concerns, but the distinction still seems crucial to me.

For example, the web is clearly not consistent on the 'skin' of things. But I often know where to find things based on their location (header nav menus, footer contact details, etc.), or their general look (a loupe for search, a wide rectangle for inputs, a wide rectangle with a doodad on the right-hand side for a drop-down). Or a combination of placement and look (a search input field is an input field in the top right of a typical page).

Even lots of computer-challenged people I know seem to do pretty well in this regard.

But as you say, when it comes to interacting with elements, as long as the developer doesn't override 'native' behavior, a web-solution can be very native.

On the other hand, the vast majority of cross-platform native apps I use often look close to native, but their 'core' (inputs, selects, text fields, and so on) often feels off.

Honestly, I much prefer a non-native looking app that uses native UI elements over an app that has an 'uncanny valley' native look that is slightly off and UI widgets that don't behave natively.


I'm a little confused by your list... In what way are text selection, caret behavior, clipboard, spell check, open/save dialogs, drag/drop, accessibility, and right click menus not available in the browser?


They are.


Google Docs is a lost fight. Word processors and spreadsheet editors need to be proper desktop applications.

A web site inside a browser cannot match that, and all the hacks that try don't get very close.


While true, Google Docs' collaboration capabilities are fantastic enough that it's worth using, though. We use it all the time to work, anything from tiny scratchpads to big documents.

To do the same in Microsoft Office, you need to dick around with OneDrive and/or SharePoint. The last decade or so, I've only touched Office when someone sends me some .xls or .ppt file and I'm being lazy and just want to view it.


You can use an old-school network drive to share documents in an enterprise, regardless of their nature. Easy and simple.


I am referring to Google's realtime collaboration features. Given a URL, you can enter the document and see it being modified in real time, as well as edit, annotate and comment in real time.

It's 2017, this is how we work now. My colleagues (literally) across the world are not going to connect to some shared NFS drive or whatever via VPN to store documents.


What if the shared drive is automatically mounted on your computer when you log on?

One drive for your personal documents only for you. One drive for your team only visible and editable by people in your team. One drive company wide with common stuff.

You can send a link to your colleagues and it just works! That doesn't support multiple editing though. That's how things were done historically.

Google Docs is good for sending a document to a bunch of email addresses and letting people see/edit it. It's terrible for writing longer documents with advanced formatting, pictures and diagrams.


Of your examples, I know Slack fully supports all of them except indexing and accessibility. For those two, I don't know because neither are of concern to me.


No, I don't care about most of those things. I don't know why you didn't believe me when I wrote it the first time :). "Caret behavior", I use a Mac 8 hours a day and I have no idea what option-arrow does.

That clipboard shortcuts work the same is the only one that I'm used to enough to be annoyed if it were done differently.

There's something about Mac fans that are very preoccupied with all of the details of how Macs work. I'm not criticizing that, you like what you like, but you shouldn't be surprised that I don't care about Spotlight indexing.


You don't select or navigate in text?

Shift+arrows — select characters

Option+arrows — jump between words or paragraphs

Cmd+arrows — jump to beginning/end of line or text

Shift+Option+arrows — select words or paragraphs

Option+Backspace — delete one word back

Cmd+A — select all

I actually used all of these except the last one while writing and formatting this comment! Plus clipboard shortcuts.

If you use vim in a terminal 100% of the time, none of those will matter to you because vim invents its own keyboard universe. But if you don't, I don't understand how you can have this opinion.

I get super annoyed with anything that somehow overrides these standard keyboard shortcuts, which happens surprisingly often. Non-native UIs typically have to reimplement them, because modern OSes have made the curious choice of not abstracting them.

I don't think it is a "Mac fans" thing. The exact same principles apply to Windows. Even to Linux, although the keyboard standardization there is next to non-existent. (I don't use Linux desktops often, but when I do, I get really frustrated that the terminals use Ctrl as a meta key instead of Command. So "copy" isn't Cmd+C, it's something like Ctrl+Alt+C.)


I use Cmd+A, that's it. It has a close equivalent on every OS.

> If you use vim in a terminal 100% of the time, none of those will matter to you because vim invents its own keyboard universe.

I do, and this is one of the reasons I've never bothered with all of the details and shortcuts that you like.

Vim attempts to make the best possible text editor. It doesn't let "OS conventions" dictate what makes a good text-editing experience. What you get from apps sticking to strict OS guidelines is a bunch of average -- not terrible but not inspiring -- applications.


Sorry if this sounds negative, but I'm perplexed why you would even involve yourself in the discussion when you don't have an opinion. It's like asking a bicyclist about how highways should work. They might go "I don't want any highways at all", but that's hardly useful to highway users (environmental concerns notwithstanding).


I accept that not everybody will feel the same way – I do care about my local platform. I want to hit 'space' when I have an item selected and see quick preview. I want my documents indexed in Spotlight, I want to drag-and-drop files between applications, and I want all the rest of the UI niceties I am used to.


And on Windows, I want pressing Alt to highlight the first menu, and the arrow keys to move between and around menus.


1) Why use several host platforms if the platform makes no difference to you?

2) If you know a platform then you should have no problems knowing how to use it.

It would be ridiculous to have an app from Windows behave exactly the same on macOS just because you don't want to remember the difference. You don't want minimize and maximize buttons put on the opposite side from all other Mac apps because that is how it is on Windows. You don't want copy/paste in Slack to use Ctrl rather than the Command key because that is what you do on Windows.

3) Whatever time you save from doing everything the same across platforms would be wasted by anybody not working cross-platform who suddenly has to deal with an app with completely non-standard, alien behavior. I want my standard Mac hotkeys to work in a Mac app. I want preferences to be in the standard location. I want my color and font selectors to work the way they work everywhere else. I want drag and drop to work like in all other Mac apps.

We Mac users have seen this again and again. When companies don't give a shit about our platform, it is usually just a question of time before a competitor arrives which does, and knocks the other guy out. You don't survive that long ignoring the platform unless you've got some lock-in advantage.

Why else do you think people make a big point of an app being native Cocoa? It is because they know it sells better, because they know customers want the native well integrated experience.


> 1) Why use several host platforms if the platform makes no difference to you?

I don't; I use The Web for 90% of all things I do on a computer. For precisely this reason, a Chromebook is one of the computers I use the most when not working.

> Why else do you think people make a big point of an app being native Cocoa? It is because they know it sells better, because they know customers want the native well integrated experience.

I think you're mistaken; the fact that so many companies are switching to Electron is evidence that it doesn't sell better.


> I think you're mistaken; the fact that so many companies are switching to Electron is evidence that it doesn't sell better.

Hold on a bit with that assertion.

First: which apps built on Electron are being sold, period? All the ones I'm aware of are open source, like Atom, or front ends to services, like Slack.

Second: which companies are switching to Electron for development? Again, all the Electron apps I'm familiar with are ones that started out that way. While I'm sure there's probably an app or two out there that began as a native client and then went to "let's just be a web wrapper," I don't know of any big ones offhand. (I've come across companies that have shifted their strategy to using true native applications, however. Facebook famously shifted their mobile strategy from HTML5/JS to native apps some years back, and I know of several iOS apps that were using "write everything in JS, it'll be great!" toolkits that switched to actual native AppKit.)

Third, and admittedly anecdotally, in both my experience and what I've consistently heard and read from people who've had the opportunity to study the UX of both native and "wrapped web" apps, just because users don't use the language of developers doesn't mean they don't notice when apps are slow, resource-hoggy, and behave kinda weirdly compared to other apps. I run a Slack for a writing group that's mostly populated by non-technical people and it is not uncommon for users to complain about Slack "slowing down their machine." Just because people don't know the term "native app" doesn't mean they aren't going to be able to tell "this app over here is nicer to use than that app over there," and that might be because "that app over there" doesn't minimize properly, or has weird menus that put common things in uncommon places, or doesn't do what they expect when they right-click on selected objects.


Well, Slack is an example of an Electron app that is sold.


The app is free, and particular tiers of service are sold. Those are not the same thing.


> The underlying issue here is that Electron reduces the barrier to entry for cross-platform development.

Actually, the incentive is even stronger than this. The ability to visit a website and start using an identical version of the app immediately is just as important - I doubt that Slack and Discord would have had half the success they did if users had to download the application before using it, regardless of them being available on all platforms.

Discord has had the ability to give links to others to join a chat server since its inception, resulting in a two-step process to use it: click the link, type a name. This is miles less of a barrier to entry than: click link, download app, find downloads, install app, run app, create account, join server (rough process for most text/voice apps up to this point).


It's a difficult trade-off, even for me as a user. The fact that I can load Slack on all my devices and in the browser definitely played a role in my adoption of it. The fact that Slack is dog-slow and a memory-hog on my MacBook Air, on the other hand, has kept me from using it as a central part of my own work. Similarly, I am actively pushing everyone I communicate with to switch from WhatsApp to Telegram because the latter has a performant native app.

If I were to build an app where cross-platform support was crucial, I'd probably start with Electron as well because of time and budget concerns, and switch to native if the app 'takes off'. But on the other hand, I can imagine there being serious risks to building out an entire platform that way and having to rebuild it from scratch later on. Maybe there's never budget/time for it, leaving me locked in?

I suppose React Native could help in that regard.


Most people I know just use the app, though. Heck, I actually only discovered the Discord web client yesterday.


This issue is so furiously divisive, and I feel like it brings out the worst in HN. I'm from a web dev background. Some of the comments here make me feel like I should be ashamed of that fact.

I can see both sides of this argument (every time it comes up)

As a consumer of apps, I want the leanest, most minimalist, fastest thing going. I want native apps on my devices (If you think slack is a hog on the desktop you should try it on Windows Phone).

But as a developer, I know that electron is a shortcut that means my app will take less time to build. I can take my existing skills, take work I've already done on a WinJS app and publish it on Mac OS, Linux and Win 7. I can spend more time with my family, instead of spending all my evenings learning py+qt, or xamarin, or react native, or whatever the new fangled thing is. And I know people will use it.

Hell, I've even got better odds of pushing an electron app than a native one, as I can submit a pull request and maybe have it appear on https://electron.atom.io/

Obviously, seeing the Electron hate always gives me pause for thought, but at the end of the day it feels like the hatred is from fellow coders (if a dribbling front-end-js writing low-life such as myself can call you writing-assembly-on-a-napkin-while-you-quote-stallman-types fellow coders) and my apps user-base is overwhelmingly non-technical.


> This issue is so furiously divisive, and I feel like it brings out the worst in HN. I'm from a web dev background. Some of the comments here make me feel like I should be ashamed of that fact.

I'm an embedded devices programmer and I'm proud of it. Proud to know a little bit about my hardware, and proud to get the most out of it. And I'm ashamed when I find a more efficient way to do something I've already shipped: it's not an optimisation but a bug fix.


Honestly, it seems quite sad that developers today know only one (quite awful) language and have no interest whatsoever in learning new things, because they can just write an application (probably of a quality not much better than the aforementioned language) that can be used everywhere while taking a huge amount of unneeded resources. The thing I like the most in software development is the ability to continuously learn new techniques and new, better ways of solving problems.

I'm really afraid to think what would have happened if VBScript had won the race as the de facto standard for the web... but actually, thinking about it, it probably would not have been that different from today. We would have had VisualElectron instead of Electron, with all the same problems, in a slightly worse or slightly better language depending on your point of view.


Your non-technical users are capable of noticing that a certain application makes their computer slow or drains their battery or doesn't support a certain feature of their platform. It may take them longer to notice, and they won't know why, but that won't stop them from switching if a better alternative comes along.

Delivering high-quality web apps generally requires knowing JavaScript and CSS. Delivering high-quality desktop apps generally requires knowing the tools of that ecosystem. Sometimes we compromise to get things out the door, but we should be honest with ourselves when we do.


I'm honest with myself. I'm not disputing that Electron is not the most optimal way to create a desktop app, I'm saying that it makes desktop development (particularly cross-platform development) faster and easier. I know I could make a more efficient app by going native. And hopefully in time, I will. But for the sake of getting it out the door I'll most likely go the Electron route.


I meant that as a general statement, not a challenge to you specifically. I wish I'd phrased it more kindly, because dismissing criticisms as "hatred" and implying that non-technical users don't care about battery life are the sort of thing I'm advocating against.


> That's nonsense.

Proper this and proper that, and don't get me wrong, I agree, but the parent is correct: these apps wouldn't even exist if it weren't for Electron. It just wouldn't have been a consideration for them to be desktop apps. Instead they would have just been thrown up on the web. For example, VSCode would have just been some type of online IDE clone like Cloud9 or something similar.

If you think this is nonsense, you are out of touch, and I don't mean this as an insult, although I realize there isn't a good way to say that.


I think people are talking past each other.

Yes. Electron enables some new stuff. New stuff that wouldn't exist otherwise.

Yes. Electron is a bucket of bloat that saddles what should be small, simple apps with enormous amounts of crap that has nothing to do with the app's functionality.

Both of these can be (and are) true.


I think that's the real take-away here. There are upsides and downsides; I am pleased that it is easier to release desktop applications, but sad that the performance of them is so much worse.

I wonder if one of the issues is that so many developers have now worked almost exclusively in the 'web' sphere, and aren't aware that native development maybe isn't as difficult as they think.


Perhaps someone with both native and web development experience can set me straight on this; I have very basic knowledge of the former.

I've done the latter for a long time now, and only now that I'm trying to teach some people do I properly realize how much knowledge is needed to do it right. It's not just arcane knowledge of the quirks of CSS/HTML/JS, but also tooling, build steps, knowledge of 'expected' web behavior, frameworks, libraries, etc. Much of this complexity is still there, and often worse, if you go for vanilla JS and static HTML/CSS (in part because expectations of a web app are higher these days).

I started learning native iOS development and expected things to be much easier and more sensible, but instead I get the impression that it's not that different.

Xcode is apparently a piece of shit, and everyone tells me to avoid Xcode's interface builder. There's tons to learn about how a project is set up, as well as stuff that just isn't a concern for the web, like packaging it up and submitting to the App Store (hours just figuring out how to correctly supply icons and get a certificate). More than once, as I'm following an online course, the lecturer will say something like 'this might seem like a logical approach, but DON'T DO THIS and do <unintuitive thing x> instead.'

Again, please correct me if I'm wrong. Perhaps when you put it all together it is significantly easier than doing the equivalent on a web platform. I'm just saying that I expected my initial foray into native, in particular Apple's 'walled garden', to be at least a little more like an actual garden rather than the chaotic, exceptions-for-every-rule (but kinda fun!) scrap-heap of the web I am familiar with.


On the plus side, this is an ideal situation to take advantage of - by creating native and fast applications, it becomes easier (compared to before when everyone was doing it "the hard way") to distinguish your product in a bloated market.


Not sure if that's ironic; it seems a sure bet most users won't notice and will gladly prefer the "bloated" product.

Yet to hear a single non-dev coworker complain about Spotify being "bloated".


Sorry, I should have been a bit clearer. I meant demanding applications.


In discussions like this it's important to try to distinguish between a priori reasons and post hoc justifications.

The real reason that parties like Slack and Spotify choose Electron is that it's easy for the devs they have, who only really know JS/HTML.

It's not that they couldn't hire more/different devs who could do it in a saner way; it's not that it's too expensive or that the ROI is not good. When we are talking about companies worth hundreds of millions to billions of dollars focusing on a core market, that is just completely laughable post hoc bullshit.

It's that they don't give a shit, either about the user experience or about improving their toolset. They are happy where they are, and see no reason to change.


OneNote is a great example. It has fairly well-regarded native apps for each platform it supports, _and_ a decent web (HTML5) interface.

I think the issue is time and resources. Small teams, like Slack, would like to create a slick experience but they don't have the time.

Companies like Facebook went the other route -- HTML5 on mobile, got their fingers burnt, and went all-in on native. StackOverflow's iOS app appears to have improved a lot too, in v1 it was a thin shell around the web interface.

I hope React Native catches on. I'm not a huge fan of how 'heavy' Electron is myself.


Facebook's HTML5 app was 5 years ago; this example comes up every time, but it's outdated. Technologies like Electron and React Native didn't even exist at the time.

5 years is huge when it comes to web technologies.


The cost argument is indeed a moot point - look at all of the major software companies, and you'll see that actual development is nowhere near their primary expense. They can afford to implement their own programming languages and OSes (and some do), but apparently it's too expensive or too difficult to make native apps for the respective platforms?

This goes for mobile too.


It's moot if you're an established company with deep pockets moving slowly. If you're starting from scratch with limited funds, it is definitely not a moot point. Would Slack have been as wildly successful if they had to implement native apps on all 3 platforms + iOS + Android + etc? I'd have a hard time believing so.


You don't have to rebuild your app for all platforms. Desktop widget toolkits (Qt, GTK+, wxWidgets, ...) have bindings for most languages. If you can run Python, you can run a desktop app on all platforms from a single codebase. Also, all major platforms have C and C++ runtimes; if you write portable C or C++, your codebase should run on any of the big three.

It's ridiculous to pretend that you have to write 3 distinct codebases to get a multiplatform application. Are there 3 Firefoxes? 3 Chromiums? 3 VLCs? Back in the day, applications like Banshee, which was written in C#, were all the rage and were distributed as core parts of GNOME.

Why is there this belief that we have to use JavaScript for multiplatform?
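As a minimal illustration of the single-codebase point (window contents and function names made up; tkinter ships with CPython and wraps Tk widgets on all three platforms):

```python
import sys
import tkinter as tk

def platform_name() -> str:
    """Map sys.platform onto a human-readable OS name."""
    return {"darwin": "macOS", "win32": "Windows"}.get(sys.platform, "Linux")

def build_app() -> tk.Tk:
    """Build the same window from one codebase, whatever the host OS."""
    root = tk.Tk()
    root.title(f"Hello from {platform_name()}")
    tk.Label(root, text="One codebase, three platforms").pack(padx=20, pady=20)
    return root

# Launch with: build_app().mainloop()
```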


This is a tangent, but CMake is incredible for building cross-platform C++. I had a project a year or two ago that was developed as a .so for Linux, using CMake simply because I like it.

When I went to go build a Windows .dll, I rolled up my sleeves and expected to have a bad day. `cmake -G "Visual Studio 14 2015 Win64"` just went ahead and made a Visual Studio project from my source tree, and that project built and worked first try. I was using all C++14 stuff like std::lock, std::thread, etc, and there wasn't a single #ifdef required in the entire project. Amazing!
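For reference, the setup described above needs very little configuration; a minimal CMakeLists.txt for such a shared library might look roughly like this (project and file names are made up):

```cmake
cmake_minimum_required(VERSION 3.1)
project(mylib CXX)

# C++14, as used in the project described above.
set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# One target definition; CMake emits a .so, .dylib, or .dll
# depending on the platform/generator.
add_library(mylib SHARED src/mylib.cpp)
```

The same tree then builds with `cmake .` on Linux or `cmake -G "Visual Studio 14 2015 Win64" .` on Windows, as in the comment above.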


If you target the native OS widget set, you must have a separate code base for each platform's native widgets. Things like Swing and Qt don't cut it.

Macintosh users in particular are sensitive to things like the placement of and spacing between UI elements; if anything is "off" from the gold standard set forth by Apple, they will scream and bitch at you because perfect UI consistency is paramount with this crowd.

Web-based apps get something of a pass because they look and feel like Web-based apps (though not always; witness the grousing in this thread about the new Mac Spotify client). But things like Qt and the XUL-based Firefox, which try to look native but miss subtle details, fall into a sort of UI uncanny valley and are roundly rejected by the Mac community.


Overall, that sounds like an argument to exclude Mac support, if anything. But I would suggest that if it's so difficult to get the widgets exact, it's better to use a toolkit that can provide widget portability as well as precise implementations that pass through to native OS windowing.

> If you target the native OS widget set, you must have a separate code base for each platform's native widgets.

I want to clarify the subtle distinction here. If your code's concerns are separated, having to directly provide native widgets on some platforms means a different "codebase" for windowing and widgets only, not for everything. You'd still compile your normal code, and use an #ifdef or equivalent to include the appropriate windowing/widget library.
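In a language without a preprocessor, the same separation can be sketched at runtime; this loose Python analogue (all class and function names hypothetical) shows the shared logic staying in one codebase while only the widget layer is swapped per platform:

```python
import sys

# Shared, platform-independent application logic: identical everywhere.
def word_count(text: str) -> int:
    return len(text.split())

# Stubs standing in for real native widget bindings (hypothetical names).
class CocoaWindow:
    backend = "cocoa"

class Win32Window:
    backend = "win32"

class GtkWindow:
    backend = "gtk"

def select_window_class(platform: str = sys.platform):
    """Runtime analogue of the #ifdef: pick the native widget layer."""
    if platform == "darwin":
        return CocoaWindow
    if platform.startswith("win"):
        return Win32Window
    return GtkWindow
```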


No, and that's fine. I don't think anyone is going to begrudge a small team the power of developing a 'desktop' application using cross-platform tools like that. It's great that the barrier to developing an application in that way is lowered.

But that doesn't excuse the abuse of that platform. Spotify used to have a pretty convincing native Mac app, which was spiked in favour of their current abomination, and I've watched performance plummet.

Using Electron as a cost-cutting measure is fine, but it's not good for user experience and it's OK to be honest about that.


Electron is great for user experience. Compare it to something like JetBrains' IDEs, which are written in Java and feel much less native than Atom (and are just as resource-hungry).


But JetBrains' IDEs are a fair bit more powerful than Atom. The resource-hungriness is taken up by functionality, not UI.


To be fair though, Java and Swing which JetBrains IDEs use are also not exactly native.


Comparing Atom and IntelliJ is like comparing a BB gun to a 155mm howitzer.


> The underlying issue here is that Electron reduces the barrier to entry for cross-platform development.

It's very far-fetched to call a lower barrier to entry an issue. The easier we can make it for people to get started, the better, isn't it?


Point taken – 'issue' is not the correct term here. Perhaps 'cause' is better.

Lowering the barrier to entry is great, I agree. It's awesome that Electron and web technologies can be used to quickly launch proof-of-concept desktop apps. I find it significantly less awesome that companies with hundreds or thousands of engineers continue to use it after the concept has been proven, however!


But these apps aren't "proof-of-concept" apps - they are fully functioning, working cross-platform applications.

So are you saying that these companies which have implemented these apps should instead say "You know what, we have this app developed, and working on the major platforms - but let's instead devote three new teams, one each for Windows, Mac, and Linux - to re-implement this app natively! I'm sure upper management will agree!"

That's not going to work. That's going to be shot down and laughed at. No company is going to re-implement a working cross-platform application over to three separate native contexts, and maintain all three. That just won't happen. It works already. The users probably like it just fine. What benefit to the company will this get them? Nothing - just more costs for maintenance across three platforms.

If native is wanted by the users, likely what would happen - if it happens at all - will be the company says "Ok - we'll make it native for Windows, maybe even Mac - maybe. Linux? Forget it!"

It's not the developers. Developers would love to make native apps, for all the platforms. But developers are limited by the companies they work for, and by the economic realities that all the platforms can't be supported natively; at most, only one or two can - because at that point, with the number of users on those platforms, the additional costs of maintenance and support are pretty much saturated. Adding additional native platforms doesn't just add onto those costs, it actually (in theory) multiplies them - because a single person might use the application across multiple platforms. So if they have problems on two or three different native platforms, now there are three different support issues (needing more people to support) - instead of a single complaint for a single platform.

I get it, though - it would be great if these apps were native, and worked on every platform, from now and into the future (even on platforms that don't exist yet!). That's just not going to happen; if native is wanted, then only the most widely used platforms will be supported, and even then, one of those will likely be dropped, and it won't be the one from Redmond.

So - what can be done? I dunno. The concept the author brings up ("use React Native") might be the solution. Or some other interface that abstracts a platform's OS and other native functionality out to an API that is the same for all platforms. But now you have issues with security and other access - which you still have with Electron, but it is more contained and constrained, since it has to go through the Chrome engine and the various rules/settings browsers have for sandboxing bad actors. Or - you leave it to the user and their operating system (and slim it all down - maybe that's what React Native does; I don't know, I've never used it).

Or - you (that is, the company) just says "Sure - we'll do a native only implementation - for Windows only." - because that's how it usually goes.


> It's not the developers. Developers would love to make native apps, for all the platforms.

I don't think that's true in the case of Slack. They only have JS "hacker" webdevs, and those are notoriously resistant to any change to their comfort zone. C++? WPF? Swift? Cocoa? Scary stuff.


"the easier we can make it for electricians to wire up my house, the better, isn't it?"

Unfortunately, no. Just as a trained electrician will wire up your house so as to not set it on fire as soon as you turn on a lamp, a trained developer will make apps that use a minimal amount of resources.

It's especially hilarious when someone writing Electron apps calls themselves an engineer.


I write Electron apps and call myself an engineer, because I am an engineer. Could you please explain why I am being hilarious?


Some engineers take exception to software developers using the term, given that the disciplines are somewhat different.


As a computer programmer in the web field, I find it awfully pretentious when web developers or any other programmers call themselves engineers when they are not legally licensed engineers.


It says "Software Engineering" on my graduation certificate. I wouldn't be able to get licensed even if I wanted to.


Have you been trained as an engineer, in a country where "engineer" is a very specific thing and is held to higher expectations, including the possibility of having your title taken away by your peers if you do not respect your duty to, first and foremost, your users and/or society, like every country that takes the title of engineer seriously?

If yes, what on earth makes you think writing Electron apps is a good idea? If no, back to my point.

To quickly answer JetJaguar below you, yes, I am an utter cunt, but being called a typical hackernews one hurts, considering how I can't stand most HNers.


> Have you been trained as an engineer, in a country where "engineer" is a very specific thing and is held to higher expectations, including the possibility of having your title taken away by your peers if you do not respect your duty to, first and foremost, your users and/or society, like every country that takes the title of engineer seriously?

I have been trained as a Software Engineer in Istanbul. I am very sorry that my country doesn't fulfill your expectations.

> If yes, what in earth makes you think writing Electron apps is a good idea?

Engineering is about making trade-offs. There's enough discussion here about what those are for writing Electron apps.

> If no, back to my point.

What, "no"? What was your point? ...that I'm not an engineer? As I said, I am one.

> yes, I am an utter cunt, but being called a typical hackernews one hurts, considering how I can't stand most HNers

I don't think you are a cunt. I think you are uninformed and have concrete opinions based on limited or self-fulfilling-prophecy-boosting experience.


> Have you been trained as an engineer, in a country where "engineer" is a very specific thing and is held to higher expectations, including the possibility of having your title taken away by your peers if you do not respect your duty to, first and foremost, your users and/or society, like every country that takes the title of engineer seriously?

No country trains software engineers in the way you describe.


I think Germany was the country described in the post. Whether it's true to the fullest extent is hard to say, but having worked and socialized with them for years it seems mostly true. Engineering is the country's actual official religion. And I mean that as the highest compliment.

(No offense to any other country's engineers -- the engineer mindset is the same everywhere, but the mindset of the non-engineer in other countries is the distinction. That is, Germany grants engineers a degree of respect, almost reverence, that I've never seen in the US.)


Our company recently purchased some hardware I/O devices that came with software (both made in Germany). Let's just say I'd much rather buy their cars than their software.


So they should not call themselves engineers - any more than they call themselves doctors or lawyers. They are programmers or software developers. Or potentially software engineers - but certainly not engineers.


All of Europe, also parts of Canada.

No place in the US does, indeed.


Are there standardized licensing bodies for software developers in the EU and Canada? I'd be really interested in learning more about them if so.

I sympathize with your argument, and I think the field is doing a great job right now of demonstrating some of the upsides of a licensing authority, but actually getting one would be bad IMO.

Most software is not life and death, and licensing authorities, like unions, quickly become gatekeepers that work to prevent competition whilst simultaneously enriching themselves through extortionary means (today, these are mostly indirect because everyone is on the lookout for them, but they are nevertheless still there). There are good arguments that the AMA and ABA have both seriously contributed to the astronomical expense of their respective services.


> Are there standardized licensing bodies for software developers in the EU and Canada?

For engineers, absolutely. For software engineers, absolutely, because they're still engineers. Software development? Go wild, anyone can do it.

> licensing authorities, like unions, quickly become gatekeepers that work to prevent competition whilst simultaneously enriching themselves through extortionary means

That seems like a terribly US-centric view that I keep seeing online. Unions in France, as that's the one example I can be certain of, are in no way gatekeepers, and we are a country where they've been immensely powerful when it comes to influencing the state (whether talking about workers' unions or CEO unions). But you can get any job without being in a union; all they're doing is bringing everyone to an equal footing when it comes to negotiations.

Licensing authorities are purely a societal thing. Either you have a numerus clausus, because the end goal is for everyone who passed the selection to have a guaranteed job with good living conditions, or anyone can pass, and good luck everyone. It works in some cases, doesn't in others.


I've never heard of some governing body in France or England taking away the title of "software engineer" for someone who wrote faulty code. What are these governing bodies?


"The easier we can make it for people to get started, the better, isn't it?"

Would you like to use an application that would not have been written without a low barrier to entry? Powerful and easy tools are not necessarily the same thing.


Those who forget desktop Java will be doomed to repeat it.

For those of you who aren't old enough to have been around, Sun initially pushed Java as a "write once, run anywhere" GUI language. It quickly became clear that Java applications were ugly and terrible everywhere, even in the primitive days of X11R5, when programs used a mixture of Xt, Motif, Qt, GTK, and raw X11 protocol (xv was awesome). Having a Java program for some task was worse than having no program at all, since it would discourage someone from writing a decent native solution.

Fortunately Sun found ways to make money using Java server-side, and Apple helped kill it client-side by not providing it by default.


I would like to offer http://openstreetmap.org as a counterexample, where the best editing app is made in Java and the most used one is in JavaScript. There have been Qt4 editors, but they have never gained traction; I never found performance to be the problem between native and Java.

I think Java's problem was its Unix engineering roots, with too little focus on UI, and perhaps a little too much of the "we don't care about performance" mindset. The latter is the only problem I see with some Electron apps.


Electron is like almost any other cross-platform VM, including Java (which is living under most of the "native" apps on the world's most popular platform), .NET, and so forth. It doesn't share resources between instances but then you also don't get dependency hell (wrong version of Java for application X, etc.)

Subjectively, I'd say that Electron's performance overheads are not bad compared to, say, Smalltalk in the 90s, where one Smalltalk application could bring a fully loaded state of the art workstation to its knees.

And don't get me started about Flash. I had a whole project cancelled after an engineer brought up the CPU meter during a fairly simple animation.


There are always bad engineers. I can write a shitty function in any language.


I've never seen anything performant built on top of Smalltalk, including the Xerox Star's built in apps which were developed by universally acknowledged geniuses. Flash's performance issues were legion.

Chrome is widely considered the best current desktop browser in a very competitive space. If it has problems it's definitely in idle power consumption (which indicates wasted idle CPU) but it is used by a hell of a lot of people who have free alternatives.


Maybe I have the wrong impression, but I feel like the author of the original post has never written anything cross-platform that's sufficiently complex (i.e., more than a "let's fill out a form and press OK" Qt app).

> The trade-off — and there is a trade-off — is that Electron applications are shite in comparison with proper native applications.

There's nothing inherently electron-specific that makes an app "shite" any more than writing one in Qt would. You can write shit in any language, framework or platform. The day someone invents a system that protects us from our own stupidity will be the day humans become obsolete.

> They fail to integrate with the host platform,

Untrue. You can—if you need to—integrate with the host platform by writing a native node module, however it becomes less cross-platform at this point.
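To illustrate the pattern: host integration usually goes through a thin JS shim that dispatches per platform to a compiled addon. This is a hypothetical sketch; the badge-setting bodies are stubs standing in for calls into a real native module (which would be built with node-gyp/N-API), not Electron's actual API.

```javascript
// Sketch of per-platform dispatch for host integration.
// The badge-setting bodies are placeholders; a real app would
// require() a compiled native addon (node-gyp / N-API) instead.
function loadHostIntegration(platform = process.platform) {
  switch (platform) {
    case "darwin": // macOS: would call into an Objective-C addon
      return { setBadge: (n) => `dock badge: ${n}` };
    case "win32": // Windows: would call into a Win32 addon
      return { setBadge: (n) => `taskbar overlay: ${n}` };
    default: // Linux and others: graceful no-op fallback
      return { setBadge: () => null };
  }
}
```

The point of the shim is exactly the trade-off the comment describes: the app stays cross-platform everywhere except inside these few leaf functions.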

> they are slow, they hog memory and drink power.

Yes, web apps use a bit—sometimes considerably, depending on what you do—more memory, and a bit more CPU (and hence battery), but they are not perceivably slower, unless you're doing something stupid (in which case the equivalent Qt app would probably suffer in the same way).

> But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support, and I find it difficult to believe that the costs would be prohibitive.

Maybe, maybe not. I think using a cross-platform solution wouldn't be ideal for them (they'd need to either rely on something like Qt, write their own rendering engine, or use something like SFML; all of these are overkill compared to Electron), and the alternative is writing it in different languages for each platform, which would probably inevitably cause the projects to get out of sync in one way or the other over time.


In the article the author says he implemented a user-space USB driver for Chrome on Mac (guess what: Chrome is cross-platform).


Implementing a feature for macOS in Chromium is considered a cross platform project these days?


isn't QT cross platform?!

In my opinion this is a hiring problem: they assume that if you know JavaScript you can be deployed on more things than just writing a desktop application.


Can't say I agree. Native apps are often garbage. Discord is another Electron app that I think is actually quite a bit better in most dimensions (attractiveness, functionality, speed) than the native apps it replaces (Mumble, Teamspeak, Skype, etc.).


But is that due to technology choices, or due to more attention being paid to UX? (And I do believe webapps give more flexibility in that regard.)

I get the feeling that UX and UI designers for web and mobile apps are just a different breed than their native desktop equivalents. It might just be my Windows bias though; IIRC Windows didn't have a strong/great UX guideline until their current one came around - and I haven't used many modern Windows apps yet; the ones I do use are stuck in the 90's with their button bars and such.


That is due to the technologies available on the web being easily more suitable for GUI development than older native tools or desktop frameworks. We're in a similar boat: we made WPF apps. Now we're bringing it to the web and Electron. The things you can do now in React, heck, I wouldn't even know where to begin in WPF or Qt. If you look at modern Electron apps like Atom, VSCode, Discord and so on, the things they do and how flexible they are, this would mean so much hardship and pain in older solutions, to the point where it's probably not viable any longer.


Other than declarative vs imperative, please substantiate your post with examples of what isn't possible in WPF or Cocoa.

I have a feeling it's more of an experience / competency issue rather than lacking APIs.


It's the ease of declaring components and structuring complex applications with it. There is nothing in WPF that is comparable to it unfortunately, except Redux.NET maybe, but XAML is a lost cause against JSX. You'd have to see it, or work with it. The bottom line is: drastically less code, clean structures, things become simple to do.

Debugging is also something different: https://camo.githubusercontent.com/a0d66cf145fe35cbe5fb34149...

Like hot module reload where you write your app live. You edit one component, everything else stays in place and maintains state. Or time travel with Redux, where each piece of state is inspectable. You roll back or slide through the apps history and see it literally build and deconstruct itself. That's possible because UI is just a function of state. Same state, same UI.
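The "UI is just a function of state" idea can be sketched without any framework at all. This is a minimal, hedged illustration (no actual Redux, no React): a pure reducer, an action log, and time travel implemented as replaying the first n actions.

```javascript
// Minimal sketch of "UI is a function of state": a pure reducer,
// an action log, and replay-based time travel (no Redux dependency).
function reducer(state = { count: 0 }, action) {
  switch (action.type) {
    case "INCREMENT": return { count: state.count + 1 };
    case "DECREMENT": return { count: state.count - 1 };
    default: return state;
  }
}

// Rendering is a pure function: same state in, same UI out.
function render(state) {
  return `<button>count: ${state.count}</button>`;
}

// "Time travel": reconstruct the state after the first n actions
// by replaying them from the initial state.
function stateAt(actions, n) {
  const initial = reducer(undefined, { type: "@@INIT" });
  return actions.slice(0, n).reduce(reducer, initial);
}

const log = [{ type: "INCREMENT" }, { type: "INCREMENT" }, { type: "DECREMENT" }];
```

Sliding `n` from 0 to `log.length` is exactly the "watch the app build and deconstruct itself" effect described above: because rendering is pure, every intermediate state yields a reproducible UI.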


It's never going to be possible to fully separate the factors out. With infinite time and effort by the exact same set of devs, I'm willing to grant that electron apps will be worse than native apps. But that hypothetical circumstance (same set of devs, infinite time and effort) does not exist in reality.


Facebook still has not made any desktop apps available, leading me to rely on a hodgepodge of 3rd party, electron-esque shells to provide a desktop experience.

This is despite their mobile apps being the most minutes-used apps of any company. Even then, they tried an electron-esque approach and only backed off when the performance and UX tradeoffs became unacceptable.

If one of the most profitable companies in the world can't see the business case for supporting a first party desktop ecosystem, it's very hard to believe many other companies have the justification.

This isn't an engineering problem, it's a business problem.


They don't. Java apps have been cross-platform since their inception, .NET is extremely close to being just as versatile, X11 is available on Mac and PC, and Python will run on literally anything. There's a pile of cross-platform options out there for you. What Electron does is reduce the barrier to designing UIs, since you can do it all in HTML and JavaScript and it's just as easy as hiring the guys who did your website. Because the one thing all those platforms have in common? Aside from .NET (which isn't mature), they all have absolutely hideous windowing systems. Just ugly.


The main challenge, I think is that all of those apps are cross-platform, but also want to keep the same look and feel on all platforms - they're using their own UX patterns inside the container which generally don't match up with their host OSes.

There are some applications that do try - 1Password and GitHub for Mac/Windows, to name a few that come to mind - but they seem to get less love than the cross-platform webapps. It feels like they get a certain amount of dev time before going into minimal-maintenance mode.


This reminds me of what's happened with Unity (the game engine). There's a completely false perception that Unity is eating other engines' lunch. It isn't. All that's happening is that game development has been democratized on an absolutely massive scale. So more games are being made. Most of them are bad. Most of the best games are made with either Unreal or, much more commonly, proprietary C++/asm engines.


Not just cross-platform, I think it significantly reduces the barrier to entry (and total cost of your app) even for single platform.


Clarified my statement above re "nobody"


Bingo--half these diatribes against Electron just seem to be making the case for premature optimization.


Nonsense. The optimization they are referring to is not premature in any sense of the word. If anything, it's belated and overdue.

In any case, the originator of the phrase, Knuth, specifically said that it related to "small efficiencies".

Nobody would call the inefficiencies that need to be addressed in Electron "small".

There's a good discussion here: http://wiki.c2.com/?PrematureOptimization


Spotify, Slack and Atom are no longer hip startups though - they're established applications, and at least two of them have a very lucrative monthly subscription model. They can afford to optimize.


But the user experience is already great, so what's the gain? Electron and the web clearly make a first class app platform on desktop.

EDIT: thanks for the downvotes. I'd love to hear your thoughts on how electron is keeping Slack and Spotify from building a massive business and how their desktop users find the experience so bad they don't use the tools obsessively. Clearly there are things to improve w/ Electron (energy usage), but "terrible experience" is not how I'd describe Spotify and Slack on desktop, and their businesses clearly reflect that.

Moving to a native stack has major tradeoffs, would it 10x their business at least? I strongly believe that answer is "no."


It's a terrible experience to discover my battery life is cut in half because of a cursor animation in VS Code, or to have my laptop heat up and the fans engage because of an animated GIF in Slack.

But it's worse than that - most users don't even know why their battery life is awful. So they blame Apple, or Microsoft, or Dell, or whoever. And they just don't use their computers as much, because it's all a bit gross and slow. And that's bad news for our entire industry.


Hasn't it already been pointed out a thousand times now that this was just a bug in Chrome that has been fixed right away?


Sorry, I'm confused. Which part of this bad performance was a bug in chrome? The peaking up to 100% usage for a few hours? The shockingly bad performance rendering GIFs? The super high idle CPU usage? The high download size and ram usage?

I used the desktop Slack client all of last year, and these kinds of problems were present the whole time. This isn't some "oh, yeah, there was some particularly egregious bug we shipped accidentally in October" thing. Whatever is making the Slack client a bloated ball of crap is much worse than a simple, quickly fixed Chrome bug. It's endemic.


A bug is a lot easier to discover and fix when it affects so many people. How many "native" apps have similar battery-hogging bugs that never get fixed because nobody finds out about them?


As a paying customer, both Spotify and Slack are relatively shitty experiences.

I pay for both despite their poor quality software and bad UI/UX, not because of it.


Anecdotally, I had to cancel spotify due to shitty experience. One of their app updates on my phone left it completely draining my battery in a few hours even when not using the app, with no way to keep it from running in the background other than to uninstall. (ok or possibly rooting my phone to install more recent android with better background process control)


I pay for Spotify but I don't see the poor UI/UX problems. The sections make sense, the color contrast is good, browsing through an artist's albums and songs makes sense, and the playback functions bind perfectly to the media buttons on my physical keyboard. What else could I want?


The user experience of Slack as a desktop app is not great. The UI is consistently one of the highest energy users on my machine, despite being one of the worst performing (have you ever tried scrolling up into a conversation's history?)


Scrolling up involves network calls, so clearly it's going to be slower. Energy usage is definitely an area of improvement.


I can easily scroll 10k lines in mIRC, no problem.


What advantages does the user receive for running the Slack Electron app, as opposed to having the site open in a browser tab?


None that I can think of, really. This is one of the cases where a desktop version of a web app exists and I'm not aware of why.

Spotify makes sense as it can reliably use your file system for storage and thus download songs so you're making fewer network requests. Perhaps Slack could keep a short log to prevent "scrolling up"-related network requests?
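The "short log" idea could be as simple as a bounded client-side cache that the scroll handler consults before going to the network. A hypothetical sketch (names and structure are made up for illustration, not Slack's actual code):

```javascript
// Hypothetical sketch (not Slack's actual code): a bounded local
// message log the scroll handler can consult before hitting the network.
class MessageCache {
  constructor(capacity = 1000) {
    this.capacity = capacity;
    this.messages = []; // oldest first
  }
  push(msg) {
    this.messages.push(msg);
    // Evict the oldest message once over capacity.
    if (this.messages.length > this.capacity) this.messages.shift();
  }
  // Return up to `count` messages ending at index `before`, or null
  // on a cache miss (the caller then falls back to a network request).
  scrollback(before, count) {
    if (before > this.messages.length) return null;
    return this.messages.slice(Math.max(0, before - count), before);
  }
}
```

With something like this, "scrolling up" within the recent window is a local array slice instead of a round trip; only scrolling past the cached window would need the network.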


Choosing a platform isn't really premature optimization -- if you base your project on electron, and electron is too slow, your alternative is to... rewrite your project?


> You know what the alternative to all of these Electron apps coming out is?

A single page web app?

That's all most of these are. Particularly the Slack/Hipchat/Discords of the world.

> People are excited about the Desktop again

If by desktop, you mean "need to be connected to mains power to run for more than 10 minutes", yeah, people are excited for desktops again.

Electron only really makes Javascript developers happy; they don't have to learn another language or framework to make "desktop" apps now. As a consumer, I fail to see the value in having to download large single page webapps that are equivalent (or worse) than their online counterparts.

Cross-platform applications like Chrome and Firefox and Steam and Doom and Factorio and... these have existed for years, and so have cross-platform programming languages and frameworks. Java, Tcl, Qt, Flash... Electron is simply the latest in a long line of cross-platform frameworks, which is currently in vogue with developers solely because it uses HTML and JavaScript.


>> As a consumer, I fail to see the value in having to download large single page webapps that are equivalent (or worse) than their online counterparts.

I use the Slack app for Windows, and the value for me is that it gets its own easily identifiable presence in my Windows task bar.

I have a (self-inflicted) problem with tab proliferation, and because I rely heavily on Slack, it's just way more convenient to use the self-contained version than to have it running in a tab buried in one of my browser windows.

It doesn't bother me at all that the Slack app is an over glorified web browser running the Slack web client.


IE10/11 let you pin any website to the taskbar with its own icon, complete with notification overlays: https://msdn.microsoft.com/en-us/library/gg491738(v=vs.85).a...


Same for Chrome: http://www.geekpen.com/wp-content/uploads/2016/12/Chrome-Mor...

Probably Firefox also has this.


I can confirm for Firefox. Right click the tab -> Pin Tab. It shows up as an icon-only (the title is hidden) leftmost tab and never goes away.

Here's more info, with images of how it looks: https://support.mozilla.org/t5/Learn-the-Basics-get-started/...


> It doesn't bother me at all that the Slack app is an over glorified web browser running the Slack web client.

It kinda should, though, because if the browser component isn't up to date on the regular then it's possible you're looking at (what should be) a simple chat app with a potential RCE.


Discord isn't an Electron app, it's React Native. And as a frequent user on my MacBook and my Android, it isn't resource-intensive at all.


Discord desktop is Electron. Only our iOS app is React Native. We spend quite a lot of time and effort optimizing performance, however, something that's required given the number of large servers with many thousands of concurrent users.


What planet are you on? I've been using great native apps since forever. I don't know what it is but there is a certain segment of the computer user population who don't seem to notice that awesome native apps exists for all sorts of stuff.

"actual dev time per feature (easily 10x the cost)" Yeah seriously what PLANET ARE YOU ON!!! I can make native app functionality in a fraction of the time it takes to do it in a horrible mess of a language that is JavaScript. Poor type system, shitty REPL, shitty module support, shitty debugger. Shitty GUI design tools. Shitty performance measurement tools.

"And as for Qt, Qt has existed for over two decades" It is C++, and we got great alternatives on Windows and OS X. still if you have to stick to C++, then Qt is probably the most widely used GUI toolkit.

"People are excited about the Desktop again" I don't know what bubble you live in. This is just a bunch of hipsters who are excited because they can suddenly use their only programming skills to hack on desktop apps.

Let's get real, this is all about the numbers and not the technology. Due to the need for lots of custom-made web apps, there is a huge population of JavaScript developers. These can now be utilized for desktop app development. It isn't like it wasn't possible before; it is just that these guys could not be bothered to learn anything else.

And it isn't all that important. Hardly any of the applications I use daily are Electron. I occasionally use Atom. Other than that great apps like OmniGraffle, Pages, Pixen, TextMate, Ulysses, Marked 2, Charles, Dash, Kaleidoscope, Tower, Keynote, GitUp, 1Password, Magnet, iBooks, ScreenFlow, Terminal etc none of them are electron based.


>I can make native app functionality in a fraction of the time it takes to do it in a horrible mess of a language that is JavaScript.

Why is Electron and similar so popular then, if it's easier to build native? Please don't say "because shitty hipsters".


He's right, and you're right. Web technologies have a low barrier to entry, so it's easy to go from inexperienced to productive.

Advanced native tools with a powerful language in the hands of a master programmer will be much more productive for the reasons he cites. The type system in particular, with a well-worn toolbox of primitives can make you extremely effective.

So, web technologies are more effective in aggregate across the entire industry, but native technologies can be more effective in the hands of an experienced single individual.


> Nobody cared about writing Desktop apps before Electron came along, there's basically zero money in it

Wut? So the whole Mac App Store, the MS Office suite, Adobe's apps, and lots of other stuff were just a dream? Until Electron, desktop was dead? You're bending facts here, and I hope that's because you don't know much about the stuff you're talking about.

Desktop is big, just that other things have grown bigger. That doesn't mean desktop is/was ever obsolete or losing its utility.


By "nobody", he means JavaScript "hackers" (his term) and webdevs. How joyful that those types of "devs" are now excited and "care" about desktop and mobile development.


Limiting oneself to JavaScript is quite limiting. Just like a polyglot person has access to way more information, cultures and fun than a monolingual one, a polyglot programmer has access to more tools, paradigms and fun. Also, hammering everything as if all the stuff was nails is not hacking at all.


I guess he meant Indie developers. I am one. I didn't care, but do now.


Eh? Indie developers thrived on Mac ecosystem for years.


And also on Windows, and even on DOS, before Windows.

Googling terms like "shareware" and its history (not the only model, there were, of course, outright paid products too), will show some stories, maybe not a lot, since some of this was before the web, so not archived. But will show enough to get an idea. Jim Button was a classic example, but there were many other indie devs (many of them one-man shows), who may not have made it so big, but made good money from desktop apps.

Google term: shareware jim button

Edited to add:

http://asp-software.org/www/history/the-origin-of-shareware/


Ironically the Mac App Store itself is basically just a WebView in a window, and you're essentially browsing a website (DOM and JS and everything) with extra privileges with the host OS.


I don't see the irony there. And even if there's one, that's irrelevant anyways, the important thing is what's sold thru the app store.


Bullshit. Spotify, for example, existed before Electron, and it was very fast. Text editors existed before Electron, and they didn't require 300 MB minimum to run. Same with collaborative chat programs.

Electron is just a way to reuse web developers for desktop development - I.e. a way companies now have to cut development costs.


I'll give it a go to call you on "bullshit": Spotify is not built on Electron. (Surprise, the author is wrong).

It is and was, however, always built on Chromium Embedded Framework (the UI), even when you say it "was" fast.


Chrome was released as a beta on 2 September 2008. The first version of Spotify was released on 7 October 2008, and the download size was around 5 MB. I know because I've been using Spotify since it was invite-only, back in those days.

So according to you Spotify was able to build a full featured client using a beta technology in a month. Impressive.

Anyway, here's the version I'm speaking about, built by the people that developed uTorrent IIRC, by the way: http://static.filehorse.com/screenshots/mp3-and-audio/spotif...

Also even if the current version is not using Electron, it's still using Chrome, so the same argument applies.


Yep! And IFRAMES. Oh god, the horror.


Vim and Emacs are probably much slimmer, but a comparable GUI editor, Sublime Text, is using 311 megabytes for just one window of a folder that has 200 files in it.


> much slimmer

Yeah... I opened a 500KB log file in vim and Notepad++ and they are using 5 MB and 7 MB of RAM, respectively. They both also manage to use no measurable amount of CPU (even to blink the cursor!) unless you interact with the window.


> no measurable amount of CPU (even to blink the cursor!)

Is it really possible? What kind of alien technology is it?


I should say "measurable in task manager" - which reports in increments of 0.1%.

Realistically I would imagine that a native Win32 app (Notepad++) that's totally idle except for message loop and cursor blink requires less than a microsecond of CPU time per wall clock second.


It gets better: the caret in "real" Win32 is not rendered to the framebuffer or any memory-backed surfaces, but is "inlined" in the video output the same way the (hardware) cursor is - it's effectively an old-school console sprite.

But it does mean you're limited in the things you can do with it - when Office 2013 gained that fancy animated caret they had to do it themselves; similarly, the carets in Atom and VSCode are both software-based.
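For contrast, a software caret is typically just a timer flipping a visibility flag and forcing a repaint on every flip, which is why it can never be free the way the hardware-sprite version is. A rough pattern sketch (not the actual Atom/VSCode implementation):

```javascript
// Rough sketch of a software caret: each timer tick flips visibility,
// and every flip forces the UI layer to repaint something (work the
// hardware-sprite caret avoids entirely).
function makeCaret(onPaint) {
  let visible = true;
  return {
    tick() {            // driven by setInterval(tick, 500) in a real app
      visible = !visible;
      onPaint(visible); // triggers a (partial) redraw in a real editor
    },
    isVisible: () => visible,
  };
}

let paints = 0;
const caret = makeCaret(() => { paints += 1; });
caret.tick(); // caret hidden, first repaint
caret.tick(); // caret visible again, second repaint
```

Two repaints per second forever, even when the app is otherwise idle - multiply that by the cost of a browser-engine repaint and you get the VS Code cursor story above.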


But - can I get a copy of Notepad++ on my Linux box? No?

Now - I know there are a ton of other options for Linux, but the thing is, I can't jump from system to system and have the same app with the same experience - even if the app was developed as a native app for all of the platforms, because each has a slightly different native GUI implementation and usage which doesn't translate fully across platforms.

So now I have to learn and use potentially three or more different programs/apps/whatever to do the same task. Or, I have to remember the quirks for each native implementation.

...and let's be honest: Not many companies out there are going to develop a native version of the same app for all the platforms, because most platforms have a lower number of users than others (in many cases, much lower - depending on the genre of the app in question - like games).

It's an economic tradeoff: We either get a balkanized system where for certain kinds or types of apps we need a particular machine for the native implementation, or we have the case of these larger cross-platform apps that anyone can use on any system, in the same manner everywhere.

Here's another thing - most of these complaints seem to have to do with laptop users. I don't really worry about these issues on a desktop, because there I can have a ton of memory and way more CPU than what I can get in most laptops, and I don't have to worry about battery power.

But for those who are stuck with laptops - maybe they need to bother manufacturers to increase the amount of RAM and CPU available, to handle these larger apps.

It's also funny that I hear people complain how these apps are too big and use too many resources for editing text or whatnot; you make the case that vim and Notepad++ use only a few megs of memory and no CPU.

I tend to wonder how well they'd fare on my old TRS-80 at home - you know, I had a full-screen text editor on it that didn't use much CPU (sub-1 MHz) nor memory (less than 64K) - so why can't we return to that?

Honestly - I don't want to; but we can take this argument down the rabbit hole, because the argument that today's stuff is bloated compared to another case can easily be made about today's stuff vs older stuff. Most of the bloat of your "smaller" example comes from abstraction, the same as the "new bloat" - not many years ago a program taking up 10 MB of RAM would have been insane. Today, it's normal and expected.

I daresay that in the very near future, programs taking up several hundred meg to a gig or so will also seem normal, because by then we'll have even better CPUs (with maybe hundreds or thousands of cores) and way more RAM (terabytes).

Some might argue that this is the case today, in the form of cloud computing and SaaS - browser-base stuff, in other words.


> But - can I get a copy of Notepad++ on my Linux box?

Yes.


Without wine, you can't.


Good thing wine is widely and easily available.


touché


clearly you aren't talking about emacs :)


I remember a long time ago people discussing that Emacs would never be popular on PCs because nobody had 10MB sitting around just to use a text editor...


Emacs is not just a text editor. It's an operating system.


All it lacks is a good editor :P


Not since Evil, now it has the world's best editor (vim).


aren't all tools a way to cut development costs?


>Here's the thing. You know what the alternative to all of these Electron apps coming out is? If your answer is "A native Cocoa/WPF app", you are on another planet, the answer is, "It wouldn't exist at all".

A higher barrier to entry has that effect. Low quality crap doesn't get in as easily.

>Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along, there's basically zero money in it, and it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies.

I find this argument absurd. Desktop (and native mobile) apps are a business worth tens of billions. Ask Microsoft, Apple, Adobe, and thousands upon thousands of smaller businesses (down to SMEs like Panic and one-man shops like Sublime Text).

If anything it's those web-based unicorns that are either merely burning VC money or selling the user to advertisers -- in both cases, there is not much money in selling to users directly.

And there's nothing about web development that couldn't be achieved just as easily if, instead of all that money spent on browser engines and teams to create things like Dart, there had been some effort towards a nice, cross-platform, mobile and desktop UI library.

It could even have JS bindings for all web devs to use -- just without the web stack crap. React Native is something akin to that, but imagine if it had been going for years, and had more industry support, instead of the nth attempt to put lipstick on the web stack.


Making desktop apps might be a nice lifestyle business for niche segments, but there aren't many (any?) big businesses built on desktop apps any more. Microsoft and Adobe are doing well on the desktop, churning out new versions of apps they've been building for nearly 30 years, but I can't think of any company built in the last 10 years (at least) that came up that way.

In the meantime, here's one of those "web-based unicorns" you so despise: https://www.google.com/finance?q=amazon&ei=viTtWMGfNMWQ2Aa42...


If you can't think of any companies that have built significant desktop apps recently, you're just not looking. I don't even get how this is an argument. "Well I haven't personally installed a native app recently, so obviously they're unimportant."

You know what's never going to go out of fashion? Performance. Especially since CPU speeds have stalled in the last decade. You will never get good performance if your design is: "embed an entire browser, and then use a small piece of it".


> Nobody cared about writing Desktop apps before Electron came along, there's basically zero money in it

I would acutally pay Spotify extra money if they brought back the old, snappy, pre-Electron version of their Windows desktop app.

<edit>They replaced it with something that is simply too heavy to manage larger playlists (except maybe on the max-spec MBPs it's being coded on) just because they could iterate more quickly on some non-essential features which they might have packed inside a WebView, leaving essential features intact and responsive.</edit>


Seconded for the Mac app. The current experience is shameful.


I have no opinion about their app on those platforms, but as an Ubuntu user, I very much appreciate their electron app.


Yes, the big companies really don't give a fuck about Linux because its user share is tiny on the desktop.

The fact that a Microsoft editor works on Linux is amazing, largely thanks to Electron.


The Spotify app is not built on Electron - it's built on CEF and always has been, author was wrong.


Spotify was not always based on Chrome -- the first version came out only a month after Chrome itself was first released.


What is there to be excited about? Having your battery drain every time your browser runs some JS app? Wasting CPU and memory that I paid for? Lost productivity waiting hundreds of milliseconds (sometimes seconds) for any UI response? I dread using any technology coming out of a SV startup, and it's rather sad. I want to be excited, but can't get myself to be. As an aside, with all due respect to these startups, calling them successful is premature. They only appear successful because they're burning through someone else's money.

These technologies you diss have been reliable and stable for over a decade. As an embedded developer, my code has to be running 24/7 for months without rebooting, without running out of memory, without crashing. But forget embedded, and imagine a web service. Can you truly honestly say that an app written in electron will give me that reliability? Or should we just accept that if you want to write apps using "modern" tech, you'll just have to deal with it? I personally can't imagine any web framework ever maturing and being stable enough to where you can invest money in it knowing that it will be around 10 years from now. Having your underlying technology platform in a constant flux makes your entire product stack brittle. That is a Big Deal.


Wow, that's bullsh*t. Sorry but what you said was factually incorrect.

What Electron did is it enabled all those web devs (even front end guys) to write "desktop" apps. And trade offs are certainly visible.

I am boycotting Electron. I have zero Electron apps installed, and I strongly recommend against it. I hate that web-dev mindset that has been pushed into desktop userland. It just doesn't work that way. Write a native desktop app yourself and you will see how wrong it is.


Can you expand on your last sentence? How is React + browser rendering any different from QT rendering? How is HTML/CSS any different from QT layout xml? How is QT sockets/threading any different from NodeJS sockets/threading?

They are very, very close to each other. Obviously they are not exactly the same, and I used QT in this example. But they are not wrong, or completely different.


Not OP but:

> How is React + browser rendering any different from QT rendering?

Qt has far less overhead. A small Qt program can be in the single digit megabytes in both disk and RAM consumption, while using near-zero CPU for the UI.

> How is HTML/CSS any different from QT layout xml?

My understanding is that Qt's layouts are translated to native code at compile time, resulting in native performance and overhead on desktop. Browsers' dynamic rendering is relatively intensive and expensive.

> How is QT sockets/threading any different from NodeJS sockets/threading?

When it comes to I/O I don't think there's a significant cost to using Node/V8. I'm totally fine with CLI applications written in Node but when it comes to UI, a browser is just too heavy.


Haven't written a lot of QT, but Cocoa/Delphi felt totally different from writing an Electron app GUI.

Writing an Electron app felt like sketching, at least to me. You place one line of code with tags and boom, there's a button. QT sits somewhere in between, and it still provides a nice native multiplatform environment that could be appealing to web developers.


> Writing an Electron app felt like sketching, at least to me. You place one line of code with tags and boom, there's a button.

Runtime issues aside (bloat, cpu, etc), why is this a bad thing?

I like that I can write a single line of code and "bam!" get a button (or any number of things) to appear. Why would or should I want things to be more difficult for me to develop a piece of software?

Sure, I could do things in some other language - I mean, I know a ton of others. But implementing the same functionality can be a much larger pain in those other languages (and honestly, for app gui development, I haven't found anything that beats the drag-n-drop editor of VB3/4/5/6 - there was something close to it in Visual Studio for C++, but it still required some manual "hook up" with the code for callbacks and event handling and such).

I mean - if I really wanted to do things "right" - why don't I just whip out my text editor and write assembly for whatever CPU I'm targeting? I'd have full control over everything, then! Best performance! Those guys and their compilers, I tell ya, they don't know what they're missing!

(heh - sometimes they don't - there's a whole generation or more out there who've never coded for a CPU by looking at a datasheet and finding the byte values needed to represent op-codes to hand-assemble a piece of code - sometimes I do miss hacking on the Apple IIe and monitor - CALL -151 ftw)

Anyhow - as someone who's been coding for longer than I really care to say (of course, I kinda gave my age away above!) - I don't want to return to those days; I kinda like living in the future of computing I could only dream about as a kid.


And you really envision the future as webtech? That's sad commentary in and of itself, isn't it? You lament old ways of work in one sentence and glorify the most terrible ecosystem and methodology in the next.

Simplifying in itself is not bad. Improving API with newer and better is not bad. Adding language features to advance productivity is not bad (thinking of C# iterations vs Java). Basing the future on a terrible language, ecosystem, practices and "developer" mindset is very bad.


So do you think that the future of computing is a blinking cursor in a code text editor consuming 13% of CPU? Or maybe a small gif using 30%?

You presented it like the problem was in the writing of the code; it's not - that's Electron's biggest strength. The problem is the bloat that comes with it in order to provide you with that experience. Like everything, it's a matter of trade-offs. For me, it's not worth it, nor do I like writing HTML and CSS.


The problems you get from Electron apps isn't V8, it's Chromium. React-native apps will behave and consume resources much more like a native app than an electron app, hence the name.


I'm not even boycotting it, I just don't use it (or any apps built on it afaik).

Unless they significantly rethink their approach, it's deeply flawed without much room for improvement. Plenty of room for something better to come along. Even if it saw mass adoption, the demand for a better base would invite an alternative to gain traction against a sluggish goliath, a la Firefox-vs-IE (or Chrome vs Firefox).

Why stick with a doomed formula?


> If your answer is "A native Cocoa/WPF app", you are on another planet

If developers weren't so scared of Swift and C#, this wouldn't be a problem.

> (writing Desktop apps) is massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies.

I find the opposite to be almost universally true.

Writing a lightweight native desktop app is almost always cheaper than trying to build a JS-heavy app that has to run well in Mobile, and Tablets, and in a standard web browser, and in Electron fake-native-desktop web browser. Yes, you have separate projects with separate codebases. But two or three small lightweight projects is almost always cheaper than one big codebase with lots of targets, in terms of total cost of ownership.


I think only a special kind of "developer" is afraid of learning languages. A language is, after all, a means to use the frameworks. If you are an experienced developer, moving from language to language is a matter of hours to days, depending on paradigm changes. Learning frameworks takes much longer, obviously, but with the wealth of information available out there these days (Stack Overflow, message boards, blogs, etc.), "hacking" on new frameworks is also quite easy.

I think all these "massive cost" comments come from sheer ignorance. Those "devs" are frightened at the need to learn a new language and frameworks, overestimate the time required to learn them, look around and see only likewise clueless "devs" equally frightened of change, and extrapolate some comically high overestimation of cost and time - when in reality, properly written software is much more accessible to join and support than a web "app" with the contemporary "sexy" observer-pattern nonsense splattered all over, coupled with a horrible, horrible dependency-management system and a language/framework combo that requires multiple dependencies to perform the most trivial of array loops.


Spot on. I seriously don't understand people who call themselves Java developers/front-end developers/technology-X devs when they're not just talking about what they're currently working on, but instead treat that as their profession.

Anyone would think it to be ridiculous if a carpenter told you he only works with saws because he is a saw carpenter and that for other kinds of work you should see the plane or sander carpenter. To me saying you're an [insert tech here] developer sounds the same...


> Anyone would think it to be ridiculous if a carpenter told you he only works with saws because he is a saw carpenter and that for other kinds of work you should see the plane or sander carpenter.

...and yet there are specialties within like "framer" and "finisher", among others. Then you have interior workers like specialized custom cabinet makers, flooring specialist, drywallers, painters, etc.

What I'm trying to say here is that even in "carpentry" for putting up a house, there are numerous specialties.


Specializing in one field is not bad, and to me, actually preferable. (I don't like jack of all trades types of developers.)

When the mere notion of a language change or API/paradigm difference frightens you, you are a bad developer.


> I think all these "massive cost" comments come from sheer ignorance.

I can't speak for others, but for me, the "massive cost" isn't about having to learn another language or framework. It's instead the massive cost to my employer. It might even be a massive cost to me as a single developer.

By developing a cross-platform app using a single set of easy-to-use tools, a large audience of users can be gained, that would otherwise be prohibitively expensive to support if native-only was the mantra. Instead, that application would have to be developed for only one, maybe two of the "major" platforms (and guess which platform it wouldn't be developed for - that would be the platform that I like most).

Supporting and maintaining a single codebase for one platform is a monumental task for any company, let alone a single indie developer. Supporting and maintaining multiple codebases for multiple platforms can be debilitating for a company, let alone a single developer.

I lived and played in those times; back in the "second gen" if you will of the microcomputer days - you had games and apps by different companies, and developers. In most cases, a game or app was only developed for one of those machines (usually the Apple IIe or the C=64, sometimes both - maybe an Atari too), but the other systems were all considered "second tier" by most developers. You might get a port of a game or app - but most times, you had to settle for something else, or buy a second system (ha! only if you had real money! I look back on the costs of those systems back then, and wonder how my parents ever managed it).

There's a reason you see a lot less of that going on today; it isn't because devs are frightened of learning a new language or framework.


The real irony is that OP's bio says

> JavaScript Hacker at Slack

meaning he works for a company that heavily leveraged an existing protocol, IRC, for which there are already a metric fuckton of native clients and libraries for every platform under the sun.

Implementing their shitty web client was undoubtedly far more work than supporting a handful of native clients.


With my company, our users have always begged for a Mac OS X and iOS app. We never provided it, because we have zero internal expertise with any of the technologies involved. We could contract, or hire for that specific purpose, but the moment that team member was gone the project would be dead and out of sync with the rest of the codebase.

The codebase isn't lightweight to begin with, and duplicating it for a native app that maybe only one person could maintain was a non-starter.

What'd be a good example of the kind of lightweight project that can reasonably be duplicated for Native Mobile, Native Tablet, Native Desktop, & Web Browsers?

Atom?

VSCode?

Slack?


> We never provided it, because we have zero internal expertise with any of the technologies involved.

Couldn't your team learn the technologies? These days there is an abundance of resources available (online tutorials, books, bootcamps, etc), especially for ecosystems as popular as the Appleverse.

I can't pretend to know your situation, but as a reference point we had an iOS project come up at work a couple years ago and I was able to pick up Objective-C and the Apple libs in a few weeks while still being productive on other projects. I followed Apple's official tutorials[0] and built some toy apps, then learned the rest as I went on the real project. This is after having never owned an iDevice and doing mostly web and devops work in scripting languages for many years prior. A few peers of various experience levels were able to ramp up in about the same amount of time, so I'm not special.

[0]: https://developer.apple.com/library/content/referencelibrary...


> With my company, our users have always begged for a Mac OS X and iOS app. We never provided it, because we have zero internal expertise with any of the technologies involved. We could contract, or hire for that specific purpose, but the moment that team member was gone the project would be dead and out of sync with the rest of the codebase.

I don't see how that is any different. If your Windows team members all left, that project would be dead or out-of-sync too. Wouldn't you replace valued team members who leave, in both cases? Or is this a concern that you won't be able to find developers willing to work with OSX and/or iOS tech?

> With my company, our users have always begged for a Mac OS X and iOS app.

Repeating this again because this should be telling. If you're getting feedback begging for an OS X and iOS app, there's probably a number of really good reasons for that.

> What'd be a good example of the kind of lightweight project that can reasonably be duplicated for Native Mobile, Native Tablet, Native Desktop, & Web Browsers?

Spotify, Slack, Twitter, Facebook, any streaming video service (Hulu, Netflix, Amazon Prime Video, HBO GO), etc.

Note that I'm not using "lightweight" to mean "small weekend project", but to mean "less complex than the codebase needed to reproduce these features in a browser or Electron browser".

--

If you are truly a small company / startup, and you truly have to support all platforms with a small team, then sure Electron makes sense despite all the drawbacks. I totally understand that.

But I usually hear this excuse from big companies, that still want to perceive themselves as small, but aren't. Slack is a billion dollar company, they are not a small business / small startup. If a company is large enough to have more than two people working in HR full time, then they are probably big enough to do this stuff right. "We're a small team" simply isn't true for them.


No one is scared of C# and Swift, XAML and so on, but these technologies aren't even close to what you have available on the web. React and Redux make apps possible that you wouldn't get with older technologies, not even with 10 times the effort and code. The animation possibilities, transitions, the flexibility overall, the ecosystem, debugging capabilities, hot module reload - if you have worked with it, you will not want to go back. Case in point: modern apps like VSCode, Atom, Hyperterm, and Discord do things that would be very hard for a native app to mimic, and they do that with absolute ease.

I agree though that Electron is a problem. React Native will probably be the best way forward. Microsoft has recently taken over RN-Windows and ReactXP, an RN-web clone. RN runs natively, doesn't need a browser, while being able to tap into the JS ecosystem.


Ah, the wonders of discovering the observer pattern. It's really a life "changer". How did the computer world survive for so many decades without it.


Explain? Redux isn't even close to observables in other systems. It's the immutable pattern that's a life changer if anything.


Redux and co are still the same pattern fundamentally, it's just that the implementing code is spread throughout your stack, instead of in an "Observable" implementation. That combined with an optimisation allowing identity and state equality to be conflated.
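For reference, the classic observer pattern being alluded to is tiny. Here is a minimal sketch (a hypothetical `Observable` class, not any particular library): subscribers are pushed a notification on every change, with no diffing of state.

```javascript
// Minimal observer ("observable value") sketch. Hypothetical class for
// illustration only, not any real library's API.
class Observable {
  constructor(value) {
    this.value = value;
    this.listeners = [];
  }
  subscribe(fn) {
    this.listeners.push(fn);
  }
  set(value) {
    this.value = value;
    // Push the new value to every subscriber immediately.
    this.listeners.forEach(fn => fn(value));
  }
}

const name = new Observable('a');
const seen = [];
name.subscribe(v => seen.push(v));
name.set('b');
name.set('c');
console.log(seen); // ['b', 'c']
```

The "spread throughout your stack" point above is about where this notification logic lives, not whether it exists.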


I don't know what you're saying, this is the complete opposite. Redux was made to centralize, because observables are literally spread throughout the stack. The entire logic is combined in composeable containers, later wrapped in a single store. Flux pattern is also completely different in how it works, there's almost no similarity whatsoever.

http://iweave.com/assets/blog/mvc_v_flux.png


Redux makes every single component check nextProps.foo !== this.props.foo, instead of just having them notified when it changes.


That is the whole point of immutable state, and it is very effective because parent props are also notified through shallow checks, while observables have to bubble up and keep track of their relations.

You also seem to agree now that both are completely different.
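To make the contrast concrete, here is a rough sketch in plain JS (not actual Redux code) of why immutable updates turn "did anything change?" into a cheap reference comparison:

```javascript
// Sketch of the immutable-update idea (plain JS, not the Redux library).
// A reducer returns a NEW object only when something actually changed,
// so change detection is a reference (identity) comparison.
function reducer(state, action) {
  switch (action.type) {
    case 'rename':
      return { ...state, name: action.name }; // fresh top-level object
    default:
      return state; // same reference: provably unchanged
  }
}

const s0 = { name: 'a', items: [1, 2, 3] };
const s1 = reducer(s0, { type: 'noop' });
const s2 = reducer(s1, { type: 'rename', name: 'b' });

console.log(s1 === s0);             // true  -> nothing changed, skip work
console.log(s2 === s1);             // false -> something changed
console.log(s2.items === s1.items); // true  -> this subtree untouched
```

The trade-off both sides are circling: observers pay for bookkeeping on every subscription, while this style pays with shallow `!==` checks on each update.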


That's not MVC. That's the MVC Facebook attempts to show in a ridiculous presentation that finally convinced me how much they had to misinform to get these silly concepts any legitimacy.


That is exactly how a .NET WPF application worked, and that's also how MVC communicates in general. The flux pattern is very different from it.


> these technologies aren't even close to what you have available on the web

Yeah, they're that much better, that we grumpy old non-JS programmers keep wondering what the big deal is when somebody comes along and rewrites something that existed 20 years ago.


That isn't the point. JS does something the others can't. It has flexibility and a community that's larger than anything you know, plain and simple. Btw, I am a grumpy non-JS developer, or used to be. Started with assembler, C/C++, then C#. That sums up most of my professional career. I was wrong to judge JS too fast, that's all I'm saying. Many of my colleagues are still grumpy about it; they'll get over it or stay behind. If they do move on they will see what they're missing soon enough.


What is VSCode doing that's impossible to do in a native application?

I'm curious, since I use PyCharm and I wonder what I could be missing out on.


You get to write plugins for it in Javascript. That seems to be the big thing about it. I've tried it, the hype is just fluff.


See the wider picture, how flexible they are and how fast they grow. Their freedom in making things possible that would be very hard to realize.

The shell that I'm using, for instance, Hyperterm, does things no other shell can do, and the others have had more than a decade to evolve. JS doesn't have a problem displaying JSON as JSON, displaying webpages on link clicks, moving up git logs with my trackpad, adding tabs with a plugin that contains a few lines of code and a little CSS, ... it just comes easy to JS.

The same flexibility you see in Atom, VSC and the others.


> JS doesn't have a problem displaying json as json...

You know what else doesn't have a problem doing this? Literally any other programming language or tool that I've ever used to look at/edit Json.

> Hyperterm, it does things no other shell can do...display webpages on link clicks moving up git logs with my trackpad, adding tabs with a plugin that contains a few lines of code and a little css, ... it just comes easy to JS....

As can pretty much any other terminal built with flexibility and extensibility in mind (even the base terminal in Linux can handle links, lol - that's definitely not exclusive to JS). Zsh springs to mind, with the benefit of being written in C, so that, you know, it's actually fast...

> The same flexibility you see in Atom, VSC and the others.

Ah yes, including the freedom to not open any binary file, or any file >2mb in size!


> React Native will probably be the best way forward. ... RN runs natively, doesn't need a browser, while being able to tap into the JS ecosystem.

Similar arguments were made when Java first came on the scene. "Write once, run anywhere." [1] Years later, here we are again with "native" Javascript libraries. I'm sure in another 20 years or so we'll be rehashing this again, just with some other language.

[1] https://en.wikipedia.org/wiki/Write_once,_run_anywhere


Java Swing wasn't native, though. SWT is more comparable.

React Native isn't "native" (in quotes), it's native... it uses the actual widgets provided by the host OS. You use similar techniques to create your UI for each platform, but you do generally need to create separate UI for each platform with React Native.


There have been so many desktop apps released since 2007-2012 that I really don't know where to start:

- Everything GNOME does

- Everything MS does

- Everything KDE does

- All the apps in Mac OS

- Every browser

- Emulators

- Office suites

- Messaging apps

- Adobe's Creative Suite

- Blender

- All the content creation apps for video games

- Video games

- Editors

- IDEs

- Audio production suites

Web developers are excited about the desktop again, because to them the desktop is some kind of new frontier that Electron has opened up to them. I get that, but please be aware that just because something is new to you doesn't mean it's actually new.


>But now? People are making all kinds of great new apps, and more often than not, they come out on all three platforms. People are excited about the Desktop again - Electron is so good it's single-handedly revitalizing the platform that two of the largest tech companies in the world are behind, yet couldn't do.

>xpaulbettsx: JavaScript Hacker at Slack

Ah, yes, we are all grateful for Slack's 850MB of memory used when idle. I can live without it, personally. Actually, if it could crawl back into whatever hole it came from, I'd be glad.

Electron isn't good. Electron is absolutely awful. Web applications aren't good. Javascript isn't good. At what point of dementia do you say "hey, web APIs are easier to use, let's bring the entire damn browser along with it!"? The normal reaction should be fixing current APIs, making good wrappers around them.


One of these days, you'll look at 850 MB of RAM like I used to see 4K of RAM: Amazed at how small it was, and how much we have today.

My first computer saved data to a cassette tape; today, terabyte hard drives are nearly give-away prizes in cereal boxes (yeah, I know they don't do that anymore, either). We have machines in our pockets which are arguably (in some ways) more powerful than the machines only government-funded agencies and such could afford when I was a kid (and they took up entire rooms).

I hope I live long enough to think of 850 MB as a small amount of memory (and 8-cores at 4 GHz each a slow machine; actually, there are some multi-cpu server mobos I'd love to have as a desktop, but I can't afford it - yet).

Honestly, I'm amazed that we do have enough memory to support these huge apps. Sure they're bloated; I won't argue that - but at the same time it's amazing that we can run them at all - a decade ago it would have seemed ludicrous!

Could it be better? Sure - but in my mind, all software is bloated - because I can't run even the simplest piece of compiled code on my old TRS-80 from when I was a kid (heck, even an Arduino has better specs!).


And yet, we live in the present. The rate of software bloat is outpacing improvements in commodity hardware.


> Nobody cared about writing Desktop apps before Electron came along, there's basically zero money in it, and it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies.

Clearly Java never existed. I've been experimenting with Swing lately because Java has a library I'm interested in using. It's a breath of fresh air compared to HTML/CSS/JS. It may not be proper "native" development, but I don't have to deal with <div> hell. It has proper layout management instead of, what, 4 quirky CSS layout styles: float, table-, flex-, and now grid-. Sure, if you want to hack something together, Electron may be quicker short term. But I question whether it will actually be cheaper in the long run for non-trivial applications.


> It may not be proper "native" development, but I don't have to deal with <div> hell. It has proper layout management

If you want something that's much closer to proper native, have a look at SWT. It's faster than Swing and actually looks/feels native since it's actually using native widgets. It's really a shame that the poster child application, Eclipse, is so bloated and slow, because people attribute that slowness to SWT when it's really primarily the plugin architecture that's responsible. And I actually find the more spartan developer interface to be more pleasant than Swing, so to me it's win-win. Swing will always have an uncanny valley, SWT has never had that issue, yet still allows you to write for the desktop in only Java.


I agree. I've been developing Swing and SWT applications for 15 years. SWT has a simple, lightweight API that just works. Swing is so bloated and complicated that you will be tearing your hair out to perform simple tasks.

I feel that Sun's choice of Swing was the main reason Java failed on the desktop.


Thanks for the tip. I'll have to look into SWT. I only chose Swing because it was included by default, but I would prefer to have the actual native look on each OS.


Layout management: I do wonder sometimes how much awareness the HTML/CSS/JS community has about how layout management is handled in things like Java's Swing or Apple's Auto Layout - or do people think that "desktop style" UI layout management is the "old way of doing things", and obviously HTML/CSS/JS is much more modern and therefore must be better?

I don't think it is better, to be honest, and so I also wonder why people aren't crying out for decent layout management for the web - is it just lack of awareness of how good things could be, or...?


> I also wonder why people aren't crying out for decent layout management for the web

They have been, and that's why today there are things like flexbox[0] and grid layout[1]. It just took a long time for the browser vendors/standards bodies to be convinced and then spec these things out to work within the confines of the existing layout models.

[0]: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Flexibl...

[1]: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_La...


Can't help thinking that Visual Basic had a better layout manager than CSS. I hate fixing CSS issues with a passion. I watch a designer whip up something in Illustrator and can't help thinking that CSS & HTML are a primitive way to do layouts.

Smalltalk had a GUI-based approach to design in the 70s on a 2 MB disk, and most web layout is generating HTML & CSS, then tweaking that stuff. It doesn't feel like web layout is a progression.


Interestingly, Visual Basic did it all with absolute (pixel-by-pixel) positioning. Some (not all, but some) of the issue is that we have moved forward in what we expect out of UIs. (The rest, of course, is that the Web front-end is a shitshow of epic proportions)


The largest barrier to good layout on the web is that you can't measure text. All you can do is measure a node that has a specific text, which takes a long time and includes a reflow, which means either a) abysmal performance, or b) an entirely async layout algorithm, which would be much more complicated.


Java isn't new and shiny anymore. Having been around for the great Java hype of the 90s, it's crazy that Javascript has now replaced it. Nobody would have predicted that back then.


Javascript, Java, it's all the same after all *ducks*


I really hate Netscape & Sun for causing that confusion, and making Javascript look like Java on the surface, particularly as it obscured the prototypal model of JS and made anonymous functions clunky until the arrow functions of ES6. But it was good marketing.


If, instead of Swing, you use JavaFX (which comes included with Java) your UI will look nicer. It even has an embedded WebKit browser that can easily interact with JVM code. This means you can program your UI in any HTML/CSS/JS framework you want, and any other logic in any JVM language (Java, Scala, Groovy, Clojure, Kotlin, Ceylon, Frege, etc.)


I can't speak for large companies trying to make the call on Electron versus multiple OS-specific apps, but for indie devs this is absolutely critical.

Imagine being a small shop of 1 person (or even 5) and having to learn desktop programming languages, conventions, and native APIs for the web, Mac, Windows, and Linux. Then on top of that to develop and maintain a product that moves at the pace of customer feedback.

That's why Electron is powerful. I could not write and maintain the Hemingway App for Mac and Windows without Electron. My tens of thousands of users would not get to use that software _at all_.


Off topic but I just checked out your app. Very clever! I'd love to see a world where it is integrated tightly with iOS. Writing concisely is especially useful on mobile. Also I dig the demo. Great example of why developing electron apps can be really beneficial.


Thanks for your insight. The part that confuses me as a non-native developer is the folks in this thread who seem to be implying that there are only marginal amounts of additional time needed to release cross platform, fully native apps.


> as a non-native developer

I think the implication is that developers who label themselves, as you have, as "non-native" or "JS" or "web" developers don't have the perspective to make an informed decision on the matter.

Many of the rest of us know how to build React SPAs (and have been doing it long enough to have used Angular 1, Backbone, Sproutcore, ...) but "JS + Framework of the Week" is just one of the many tools we could turn to when building a UI. Many of us have also used cross-platform native libraries like QT or GTK, or platform-specific toolkits like Cocoa or .NET.

Basically, if the only tool you have is a hammer... it might be time to learn to use some new tools.


Are you saying that building a web version and two (or five) native versions of an application is equivalent to building a single JS codebase and having it just run? It certainly sounds wonderful to learn new tools, but how does that stack up from the engineering (cost-benefit) side?


> Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along, there's basically zero money in it, and it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies.

Come on man, it's a little too easy, but NOBODY? People write desktop apps all the time. What you mean is "nobody I know", which probably says more about your social circle.


By "nobody", he means himself and his JS "hacker" "dev" peers (his words from his profile).


One more "Big Deal" is that both Chrome and Electron are not accessible by default like native apps and take performance hits when VoiceOver is enabled.

I'm not "excited" that macOS is flooded with apps that have poor accessibility experience while Apple itself is fanatical about delivering first-class accessibility experience - like Safari having accessibility always enabled.


If more developers had a second look at FreePascal/Lazarus [1], they'd realize they could do pretty much the same thing, but with native compiled speed, minimal memory usage (no bundled JREs) on all the desktop platforms.

Its graphical GUI builder is out of this world, and it works flawlessly on all three desktop platforms. A utility tool I wrote for some researchers in my organization [2] even compiled without a single code change on Windows after I finished coding it in Ubuntu.

And about the language ... I finished my utility tool project in 80 hours, from initial concept to happy customer, while learning the language in the process.

It is really such a shame that people have such strong aversions to Pascal, for no good reason other than subjective feelings.

[1] http://www.lazarus-ide.org/

[2] https://github.com/nbisweden/mdc-file-export


The other thing I find ionic about this whole article is that it's perfectly easy to only use Slack within Chrome as a normal web app. And the UI is almost exactly the same because it's nearly all the same code.

So if the Electron bloat is such an issue for you, just hit Slack on the web directly.


>So if the Electron bloat is such an issue for you, just hit Slack on the web directly.

That's even more horrible; I've tried it. Chrome uses its own notifications on macOS instead of integrating with the system's notification manager, and whenever one of these ugly, poorly animated guys pops up, I get to search through my 100 open browser tabs to find the damn chat. Millions of flamewars were fought about what's the best window manager, and we are ending up with a single window with a tab bar? Are you kids serious?


So just run it in a dedicated chrome window so you can alt+tab to it directly?

This was the main reason I used the electron version of slack and tbh it makes little difference if I run a dedicated Firefox window or the app since my laptop is less than a year old and could run a few hundred instances of either without sweating..


I have Canary installed for this very reason. My mail app, Slack, Jira and any 'productivity' sites live on that, whilst everything else stays on good old Chrome. Switching from one context to another then becomes very straightforward.


Use Safari then.


Safari just doesn't work for me. It's my favorite browser UI-wise; every time I get a new machine I stick with Safari at first, but after a while under heavy use I notice all the problems with Safari getting slow, doing weird things, etc., and I find myself on the Chrome download page again :(


While I was on Mac I got an extra few hours of battery life from using Safari.

I'm on Firefox these days; with noscript (once you go through the initial few days of pain whitelisting things) it works brilliantly; I would suggest you give it another chance..

Unless you try to book flights from britishairways though.


s/ionic/ironic

But I find it to be a humorous typo given the topic of this discussion as ionic is a browser-based cross-platform mobile framework.


Indeed, we should start calling electron-less applications ionic.


I do this. Go to Slack and click 'add to desktop' and Open as Window and you're good to go. Same for Whatsapp Web etc...


People are making all kinds of great new apps, and more often than not, they come out on all three platforms. People are excited about the Desktop again...

You could have expressed pretty much the same sentiment about Java ~20 years ago. The trade-offs (high resource consumption, additional complexity, but less demanding of developer time) and even the fundamentals of the approach (an additional abstraction layer that works as a VM, APIs oriented around all the latest thinking) were also very similar.

Java is used for a lot of things, but mostly not as the basis of mass-market cross-platform applications the way it was originally intended.


> Here's the thing. You know what the alternative to all of these Electron apps coming out is? If your answer is "A native Cocoa/WPF app", you are on another planet, the answer is, "It wouldn't exist at all".

This comment is completely wrong. You know what Slack was before it was an Electron app? It was a Cocoa app. Sure, it wrapped a WebView, but given Slack's nature that's not a surprise. Regardless, the Cocoa Slack app, while not great, was still much better than the Electron crap we have now.

I don't know if you're simply ignorant of the fact that the app this article is focused on actually was a native desktop app before it switched to Electron, or if you're just ignoring that pretty important information in order to try and make your point, regardless of whether or not it's true.


I don't think OP is ignorant of that fact - because they work at Slack.


Oh interesting. So why'd they even try to make the argument at all? Are they a new hire and simply aren't aware that Slack had a native app until recently?


> finding specialist developers who know these dated technologies

WPF is newer than Javascript and the DOM. Using native desktop GUI frameworks isn't some sort of ancient dark art. It's easier than web development!

>if its massive "Beatles walking off the plane" moment hasn't happened by then, sorry, it's not gonna.

The same thing applies to the quality and performance of javascript apps. Decent performance and feature parity is always just about to happen when the next iteration of the standards is implemented, but it never arrives.


You couldn't be less coherent about this.

Desktop development has stagnated because of the proliferation of web technologies, not because desktop frameworks aren't very good. Qt is great, gtk is great, hell, with the right configuration, working with WinForms can be pleasant.

Electron packs a browser with each app, and that's unacceptable. Adobe used to have something called AIR, years before electron, but idk if they were just packing up a browser or not. I used to use the desktop version of tweetdeck, which used AIR, and it was pretty damn fast.

Anyways, you clearly don't know anything about desktop development, so rather than make claims that are wrong, just stick to what you DO know.


Wasn't this basically true of Java in 1995? Everyone was excited about finally being able to easily write cross-platform apps. And then there was Flash, with which anyone willing to pay for the designer tooling could build apps that ran on any platform. So now we're on to the third thing.


> You know what the alternative to all of these Electron apps coming out is? If your answer is "A native Cocoa/WPF app", you are on another planet, the answer is, "It wouldn't exist at all".

Agree fully with this. I find the complaints about how appalling it is that apps use a few hundred extra MB of RAM and disk space really tiresome and impractical when laptops and desktops typically have lots of RAM and disk space now. Many of these cross-platform apps either wouldn't exist or would exist on only one platform if everyone insisted on native apps, especially apps made by small teams or individuals.

I'm not saying Electron is perfect but easy cross platform support is a huge benefit and one of many. You also have the huge JavaScript community, pool of developers and available libraries to draw from as well as being able to share code between your desktop app and web app. Tossing that all away to save resources that most users aren't even going to notice isn't a good business decision in my opinion.


"Everyone has heaps of memory, so it doesn't matter if we chew through a bunch of it because we were too lazy to use the right tool for the job."

Just because it's there doesn't mean you need to use it all, that's like spending your whole paycheck at once because "you'll get another one in 2 weeks!". Any application that does the same job with less resources and the same or greater performance is unequivocally, objectively better.

Also, claimed benefits like easier and faster updates are rendered pretty moot when developers instead spend that time changing things for the sake of it: the Spotify app feels like it gets some pointless, unrequested UI change every month. And VSCode can't even update itself, despite that being one of the lauded benefits...


> finding specialist developers who know these dated technologies

It's crazy to think that developing something native on Desktop using the default stack of the operating system means using "dated technologies".


What? Qt is very popular and I use several Qt apps. Same for GTK. And there is MUCH more desktop software out there than functional web apps. I don't know what kind of Windows 10 fantasy land you live in but there has long been an answer to desktop apps.

What Electron has done is allow people with a particular skillset to apply it to a domain where it is both unnecessary and wasteful. These "developers" would be better off generalizing their understanding of software and UI design to the point where the tools used are irrelevant


The weird thing is I'm almost certain you can build the exact same thing with e.g. python+Qt, do literally all the drawing using python code in custom widgets, and the result would still be nicer to use than the status quo. I'm not advocating switching to the PyQt stack, I just want the electron stack to be saner, in particular

1) Electron is obviously a system library now. Treat it as such. Install ONE copy of electron per version, not ten private copies of electron in npm. It should be simple to integrate with at least Windows and OSX install systems - instead of downloading the full electron module, download an installer that installs the required version system-wide. This can be done as a step during installation like desktop programs have been installing required dependencies since the dawn of time.

2) Find some way to make it slimmer. Maybe not every app needs the entirety of the browser loaded. Maybe we can do lazy loading of certain components? Maybe optimise the most used critical paths, and provide tools to developers (if those don't already exist) to show exactly what is causing all those wakeups?

Moving off Electron is not an option any more, it's just such an easy and familiar platform to develop for, it's not going anywhere. Maybe we should accept it as the victor of the cross-platform desktop application toolkit and work to make it better. It's shitty, but it's the right kind of easy, like PHP.


You should throw a disclaimer in here that you're an Electron maintainer/Slack developer.


If you ship an electron app, I don't bother downloading it. I usually just move on. If I don't have a good choice, I suck it up and use the web app.

Electron isn't a better experience, and if it's in a browser, at least it can share some resources with other web pages.


What about NW.js, It's the real hero here. They spent 5 years building up the platform, being used by thousands of companies and apps, basically inventing this technology; just to have GitHub buy up their intern and pay him to make a shitty knockoff called "atom-shell". You're excited because someone tossed a marketing department at a shitty knockoff?

NW.js uses less resources (less ram, smaller distribution sizes).

It is updated more often (within 24 hours of every Chromium and Node.js release, ensuring access to latest technology).

It supports about twice as many OS's (XP+, OSX 10.6+, Debian 8, Ubuntu 10+). Electron doesn't even care enough about Linux to merge in simple bug fixes.

It's much easier to get started with and takes a no-nonsense approach to everything. (thejaredwilcurt.github.io/website/quickstart.html)

Allows for HTML or JS entry for apps.

It offers actual source code protection, and even recently updated this so that there is no longer a performance hit when using it. Which is a pretty serious technical achievement.

The only thing wrong with it is that it's got a shitty name/logo, and has a smaller ecosystem. If you can get past that you will have a much nicer experience.


I agree and the author fails to mention the tradeoff between ram/diskspace and development time.

>>all you web devs: Go learn C or Rust or something. Your program runs on a computer.

I don't think that is a realistic request.

When the cost of Slack's RAM/disk-space usage starts affecting their profit, they can/will take steps to rewrite the application in 'native' code. Until then it's premature optimization?


C is not that hard! If you're a programmer, and you don't know C, you're not good at your craft. I realize that sounds incredibly harsh but it's true: your job is to make a computer do a task efficiently. If you can't even read the language that makes that possible, you don't know what you're doing.


I could rewrite your argument by replacing "C" with "x86 assembler" - but I won't. The argument could still stand, though I'd hesitate to call it wholly valid.

My own prejudice aside - I honestly do think that if you don't have an understanding of how a CPU works at this very low level - maybe even lower - you don't know what you're doing; that, and I have done hand-assembled x86, 6502, and 6809 code in the past...

It's a different time; knowledge of lower level languages isn't needed today to be a successful software developer who knows their craft. And I recognize my prejudices as such, thus while I tell them, I understand that they really can't apply. Would I like it if these new guys all knew this kind of stuff? Well - yeah; but the same could be said of me by guys long gone and dead as to why I don't know how to wire up an analogue computer to solve a calculus problem, or why I can't wire up a plugboard to compute something on a IBM 401 or such. That doesn't make them a better developer, nor me or anyone else a worse one. We're just using different technologies.

That said, I do think developers should branch out, and at least have an understanding of other languages; maybe C/C++ - but even Java, Python, GoLang, Rust, etc - all could be just as useful to know.

Today - career-wise - I am a javascript/node.js developer - before that, I mainly used PHP. At home, I play with Perl, C/C++, and Python (these latter two more recently being involved in machine learning, TensorFlow, and Keras). Occasionally, I'll get a bit nostalgic and break out BASIC (my favorite today is QB64). Can't say I've done any recent assembler coding - but it doesn't scare me, either. Just another way of communicating with the machine.


I get that everyone has their specialty -- I'm not trying to imply that someone who writes JavaScript for a living is a lesser coder or that everything should be native. My point is that there are fundamentals to this craft, and all these web developers acting like C is exotic knowledge baffles me. With regard to your first point, though, I actually do think knowing some form of assembly is also super useful. I just tend to focus on C since, as is well known, it's basically portable assembly.


>If you're a programmer, and you don't know C, you're not good at your craft.

Even as someone myself who knows (more or less) C, this is quite an elitist statement, and doesn't do justice to say, Lisp and people who use functional languages. The goal of programming isn't necessarily efficiency.


How is expecting competency elitist? There's plenty of free resources to learn C. I learned C when I was 14.

I'm not trying to claim low-level knowledge of the machine is inherently better than high-level knowledge. What I am claiming is you need both, and knowing how this stuff works isn't a charming novelty reserved for the curious.


Premature optimization used to mean optimizing something that doesn't cause any noticeable performance degradation.

I think it's a mistake to redefine it as optimization that doesn't affect profits. Companies often get away with painfully broken software for a long time because they benefit from some other moat (e.g banks). But they're setting themselves up for disruption by doing that.

Also, profits are not the only purpose in life if you value your craft. (I'm not denying that you have to be able to afford high standards and economics does play a legitimate role)


> I don't think that is a realistic request.

Hm? How come? Is there a mental barrier here that I'm not understanding? (C and Rust are "hard" languages?) I taught myself to code on my parents' hand-me-down Pentium 100Mhz in the early 2000s, and I started with the K&R C book. Python and C were my swiss army knives at the time. I really can't see this stuff being super difficult, I mean I was a teenager and I had no understanding of algorithms.


Why would native apps be so expensive? Are they 10x slower to write? Isn't the tooling etc. for making a WPF app at least as good as it is for Electron? Obviously if you have a web app or Node backend already you might have code you can reuse, but that's not the case generally. Just being able to not use JS should be a pretty big win (although TypeScript is also available for Electron, I suppose). Not to mention that libraries etc. are specifically created for desktop and not web (e.g. the XAML layout model vs. the DOM). Or are they expensive because traditional desktop devs are hard to find and/or cost more?

I think it's sad that both Apple and Microsoft dropped the ball on desktop because they are only focusing on funneling users to their stores. If the Microsoft of 2017 (with open source .NET and Linux as a focus) had been around a few years ago when WPF was invented, they could have made a decent cross platform toolkit now that didn't come from html/CSS/js


> it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies

What are you basing this on? There are plenty of iOS developers, and developing for Mac vs iOS is more similar than pretty much any two other dev platforms you can name.


Developing for Mac may ≈ developing for iOS.

Developing for Mac + Windows + Linux + Web > developing for Electron.


Who gives a shit. The point of writing native apps isn't just to check it off your bucket list - native apps can have advantages over web apps if done right. Those advantages include better performance, integration with native UI components, integration with native APIs, and integration with the file system. The only one a "native" Electron app can accomplish that a web app can't is the last one. If your app doesn't require integration with the file system beyond a file selector, then an Electron app offers no benefit AFAIK. Companies do it purely so they can tell everyone they have a native app without having to invest the work required to build a real native app. My suggestion for the author is to find a browser they like, and then run all Electron apps as web apps within the browser whenever possible.


> Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along

What?... Are you saying what I think you're saying. I must be reading this wrong because surely you don't wish to imply that people didn't write desktop software before Electron came along?


> Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along, there's basically zero money in it

What a load of tripe.

Try stuffing all the functionality of a large enterprise desktop app like SAP or Siebel into Electron and let me know how well that goes.


Bologna: nothing truly performant uses Javascript because Javascript can't get at hardware accelerators in their full glory. Apple's macOS and iOS are themselves desktop apps, in the latter case for a pocketable desktop, and they resolutely don't rely on the very constrained subset of 50 years of engineering called Javascript.


Have you seen how many messages you can pump through and transform on a Node process? Yes, there is an abstraction and that's a cost, but V8 performance is "good enough" for almost everything.

Maybe not a coin miner or a graphics engine; but cpu is never really the constraint for applications such as those inside electron containers. Where you see them chewing 100% of a core, those are bugs.

Also, while the language has flaws, modern JS VMs are pretty bleeding edge...
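For a rough feel of the throughput claim above, here's a tiny, unscientific micro-benchmark: parse, transform, and re-serialize small JSON messages on a single Node process. The message shape and iteration count are made up; absolute numbers depend entirely on your machine.

```javascript
// Decode -> transform -> re-encode a small JSON message N times,
// using only Node's standard library, and report elapsed time.
const N = 100000;
const msg = JSON.stringify({ user: 'alice', text: 'hello world', ts: 0 });

const start = process.hrtime.bigint();
let out;
for (let i = 0; i < N; i++) {
  const m = JSON.parse(msg);   // decode
  m.ts = i;                    // transform
  out = JSON.stringify(m);     // re-encode
}
const ms = Number(process.hrtime.bigint() - start) / 1e6;
console.log(`${N} messages in ${ms.toFixed(1)} ms`);
```

On typical hardware this finishes in well under a second, which is the sense in which V8 is "good enough" for chat-sized payloads — though it says nothing about Electron's rendering or memory overhead.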


>"Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along, there's basically zero money in it,"

Have you ever heard of League of Legends? It's a desktop application made in that time frame. It does 10 figures a year in revenue.


You should throw a disclaimer in here that you're an Electron Maintainer/Slack developer.


How many electron apps can run on the same machine? For many of them it can be beneficial to just be a website.

Wouldn't it make sense to run a caching proxy on the desktop and use a browser that is already there? That way you could still use functionality not available to plain websites, like access to the computer, but use the system browser for rendering. Your proxy would also cache your website so it would run faster than a normal website.

I understand that this way you have to support more browsers, but you probably already have a web version of your app.

The problem could be users seeing a localhost address. How to overcome this issue? A virtual interface on which a DNS server could run?

Maybe running from a file:// could be a solution for some apps?


Nobody cared about writing Desktop apps before Electron came along, there's basically zero money in it

You could open the Mac App Store, pick the 'top grossing' category and count the number of Electron apps.


well... actually "flash for desktop" best known as Adobe AIR is a pretty good alternative to Electron.


And is Electron more secure? I guess it doesn't have hackers all around the world trying to find vulnerabilities.

Is that good or bad?


that does not apply to desktop apps


Why not? Is it not using a VM for desktop apps?


It is not the same security context between web/browser apps and desktop apps, whether there is a VM involved or not.

A browser can load a remote payload (not under user control) that can then exploit browser flaws, Flash AVM2 flaws, etc.

On the desktop the user runs a local exe/app which has de facto "full system access".

A bit surprised you don't know that, it's kind of basic ...


Nice rant, can you name at least one of those mind-blowing apps that happens to be sold for real money?

Everyone in the last 40 years built and sold a multitude of desktop apps and they never stopped. Stop drinking the Kool-Aid, and even if some bozo upvotes you on Hacker News it does not mean that you are right (actually nowadays the opposite is more probable).


>Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along

Maybe nobody in the Web bubble.


It is good that there is a nice cross platform 'thing' you can use to make desktop apps. But it would be better if the best thing that existed for this wasn't so slow.

Perhaps someone can clone the outward API of Electron with only a ~10% performance penalty instead of two orders of magnitude.


Completely agree. I simply have no interest in learning any native desktop programming and if I did it would be for my platform only and I wouldn't waste my time writing it for linux and definitely not for windows. I'd release what I had on github and move on. Now I can throw together a small (small by custom code sizes, I know electron is heavy) amount of code and get a cross platform app that doesn't look terrible in relatively short time. I think it's a great first step for a PoC. If it gains enough interest AND (<- big and) you NEED to go native for some reason you can go native later. There are a lot of things you can say bad about electron, yes it's heavy and "bloated" for most uses of it, but if people think empowering others to build things they couldn't or wouldn't before is bad because of how they built it then they are already a lost cause and probably think we should all be writing assembly still.


   > I simply have no interest in learning any native
   > […] programming
And it is amazing to see how many bad reinventions of the wheel happen because of this. People claiming that writing an Electron app is 10x faster than native have absolutely no idea what they are talking about. The only thing web-based tech will give you is the ability to push out a raw cross-platform prototype. Now try to polish that to acceptable quality across the platforms and the many wins start to turn into losses.


I am a web developer, I know that might be a dirty title in some circle but I really don't care. There are countless amazing things built on technology that people hate for whatever reason. PHP, JavaScript, and even Flash comes to mind. Guess what wins in the end? The thing that gets the job done quickest and easiest. It's one thing I really dislike about HN, the need to let everyone else know how terrible their decisions are because they didn't go as low level as the person posting the comment. "Oh you used PHP? Real programmers use C++" "Oh you used C++? Real programmers use C" "Oh you used C? Real programmers use assembly" "Oh you used assembly on a CPU? Real programmers design and build their own FPGA's for their code". Nothing is forcing you to use things built in electron so if it's so terrible then don't use it and move on with your life.


> Real programmers design and build their own FPGA's for their code...

https://xkcd.com/378/

/I don't disagree with you one bit...


I LOVE that!

I used to be one of the real programmers, now I program in Flash, strange...


A better alternative is actually making a web app, and ship the server together with the client, for example a Node.JS script that does the stuff not supported by the browser, and runs the default browser for the graphical user interface.


> Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along

What? Have you heard of Adobe AIR, AppJS, Node-Webkit (now NW.js) and Mozilla Prism/WebRunner?


Thanks, you summarized my thoughts.


Developer comfort over user needs in a nutshell for me.

Lost count of the number of times in my day to day work I see decisions made that make devs lives easier at the expense of performance or features.

Extremely pampered.


> You know what the alternative to all of these Electron apps coming out is?

Do you think people 'knew' what was going to kill Flash before it died?


Sure, but that's not what the argument is. The argument is that electron apps are way more resource hungry than native ones.


So what? We have those resources.

Does the weight of these just irritate you because of how wasteful it is (it irritates me for this reason alone) or does it actually have an impact on you?

People said the same things about GTK or Qt when we had Motif, right? Apps are going to keep getting bigger; our machines get bigger too. I'm not sure what the problem is.

Edit: fat fingers


I think I'd like to do more with my machine as it's getting bigger and faster, not the same things from 10 years ago that now hog all of my excess. I have twice the cores, four times the RAM and an order of magnitude more disk space to play with than 2007, I don't think it's acceptable to just have applications take up an extra thread, twice the RAM and more GBs just because it's free; that hardware certainly wasn't free when I bought it.


or, you know, a chrome app which works exactly like an electron app except they don't all require their own separate chrome instance. I mean an electron app is literally just a chrome app packaged with its own version of chrome.


Agreed, I'm back to thinking about desktop apps, thanks to Electron. I did a project with Qt last December. It's usable but very old-style and the license is not very permissive. A small Electron app is not as heavy as I first thought.


There was also node-webkit. Never forget.


Here's the thing. You know what the alternative to all of the JS that's been written? If your answer is "Keep writing more JS," then you are on another planet. The answer is, "We have fucked ourselves as an ecosystem by permitting a monoculture to flourish, especially one which prides itself on worship of bad languages, bloated runtimes, convoluted abstractions, and general fuckery."

Nobody in the last ~20 years has cared about writing Web apps before JS came along. There's basically zero money in it, and it's massively expensive, both in terms of actual development time per feature (easily 10x the cost of working in sane languages with sane runtimes), and also in finding trendy hipsters who don't know algorithms or data structures but took four weeks of coding academy classes a couple years ago. And as for Electron, Electron has existed for over three decades (Project Athena launched in 1983) -- if its massive "Nixon throwing the peace signs before getting on the plane" moment hasn't happened by then, sorry, but it's not going to happen.

But now? People are making all kinds of great new languages, and more often than not, they don't repeat JS's mistakes. People are excited about programming again -- JS is so bad that it's single-handedly revitalizing interest in languages which two of the largest tech companies in the world are behind, yet couldn't make popular.

This is a Big Deal.

(You are being parodied mostly for being a Slack developer, not disclosing it up front, and then trying to convince folks that Electron is good, which makes you sound a lot like a pig farmer trying to sell pigs' feet.)


> Nobody in the last ~20 years has cared about writing Web apps before JS came along. There's basically zero money in it

Zero money in web apps? You mean like Google, Facebook, Amazon, Twitch, Netflix, and YouTube?

There's way more money in web apps than desktop apps, hands down. I'm not talking about mobile because this is a convo about Electron.


Desktop apps like Word, Excel, Powerpoint, Photoshop? I too can pick a few ridiculous outliers. Honestly I'm not sure if you're right or wrong but either way that argument isn't good.


> I too can pick a few ridiculous outliers

Those examples aren't even good, though. They all seem to be becoming less popular these days.


Let me know what web apps are better than the desktop apps that were mentioned. They are just as popular as they have ever been.


> They are just as popular as they have ever been

This can't be true seeing as Google Docs is used by many companies and students.

And "better" may not be the goal of many web apps. They may be trying to measure success by "convenience" rather than being objectively "better".


You have it exactly backwards. JS succeeded because it's not a monoculture. Within JavaScript there are multiple frameworks, even multiple languages (CoffeeScript, TypeScript, ES6, etc).

Win32, Java, MacOS, et al lost because there were such strong standards for doing things the One True Way that competing standards couldn't flourish and the APIs stagnated.

In JavaScript, a new framework comes out every year and goes straight at the throat of the old one. The reason there is churn is because these new frameworks are actually solving problems better than the old ones, and because of the LACK of a monoculture developers will actually switch to them. Which in turn is why framework designers are drawn to it, which in turn leads to more new frameworks.

Is iOS solving problems better today than they were last year? Or in 2015? Because JavaScript is solving problems way better this year than last.


JavaScript frameworks are ultimately solving problems that the Web/DOM/JavaScript infrastructure created in the first place. The amount of technology required to create a single-page application is staggering, yet on the desktop that's just "an application". The technology has been around for 20 years and we're just getting to the point in JavaScript where we were 20 years ago.


Yes, of course JavaScript has made tradeoffs to get where it is. Those seem like problems in isolation, but look what we got in exchange:

- apps from untrusted developers can be run safely

- app installs are measured in milliseconds and don't require switching windows

- app updates are invisible to the user

- beginners can modify apps without leaving the app itself (MySpace profiles, etc)

- references deep inside one app can be embedded in another

- apps run on nearly every device from one codebase

Native app programming can do many things well, but failing on these counts is a deal breaker for many purposes. You act like JavaScript's weaknesses are somehow due to ignorance on our part, but they are tradeoffs made deliberately.

And for the most part, the things JavaScript is bad at (high performance graphics, professional ergonomics, etc) are things that are improving. I don't see native apps getting millisecond installs any time soon. It's a classic disruptive technology.


> like a pig farmer trying to sell pigs' feet

Actually, pigs' feet are a pretty good eatin' portion of the pig. If you've ever had them, you might agree.

