Here's the thing. You know what the alternative to all of these Electron apps coming out is? If your answer is "A native Cocoa/WPF app", you are on another planet. The answer is: "It wouldn't exist at all".
Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along: there's basically zero money in it, and it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost) and in finding specialist developers who know these dated technologies. And as for Qt: it has existed for over two decades - if its massive "Beatles walking off the plane" moment hasn't happened by now, sorry, it's not gonna.
But now? People are making all kinds of great new apps, and more often than not, they come out on all three platforms. People are excited about the Desktop again - Electron is so good it's single-handedly revitalizing a platform that two of the largest tech companies in the world are behind yet couldn't revive themselves.
That's nonsense. Desktop apps have been frequently released since forever.
The underlying issue here is that Electron reduces the barrier to entry for cross-platform development. That is, it's cheaper to build a single cross-platform application in Electron than it is to build two or three native applications, and you can re-use your existing web experience. I can completely understand why companies might choose this approach.
The trade-off — and there is a trade-off — is that Electron applications are shite in comparison with proper native applications. They fail to integrate with the host platform, they are slow, they hog memory and drink power. It's fine to make those trade-offs – in some ways, it's better that you can get an application at all than the alternative of 'no support for your platform'. But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support, and I find it difficult to believe that the costs would be prohibitive.
What you call "trade-offs" appears to me to be developers externalizing their costs onto users. There are costs to developing desktop applications and developers don't want to pay them, so they make users pay for them in wasted hardware dollars, bandwidth, RAM, battery life, and poor integration.
Well, to the extent that these costs degrade the user experience, I'm not sure they really are externalizing them. Presumably they pay this cost in the form of decreased user satisfaction. It's just that this is a cost they are willing to pay.
There may be an interesting economic lesson here: it really is not that easy to externalize costs. It surely can be done (air pollution), but it requires some special circumstances for those costs not to be internalized in a different form. (These special circumstances might include information asymmetries, harm to a public good enjoyed by people other than a firm's own customers, etc.--themselves classic risk factors for market failure.)
By the same token, there probably are some truly externalized costs in this example, but I would expect them to be very minor and indirect. For example, most people probably do not pay the 'true' cost of their electricity. So to the extent electron wastes power, some of the cost will be internalized in the form of user dissatisfaction. But some will also be externalized either because the user doesn't know about the extra power consumption, or because the user herself doesn't fully internalize the costs of her power consumption and therefore doesn't care as much as she might if all costs were properly internalized.
> Well, to the extent that these costs degrade the user experience, I'm not sure they really are externalizing them. Presumably they pay this cost in the form of decreased user satisfaction. It's just that this is a cost they are willing to pay.
Precisely. I don't use Slack in part because its desktop application is irrationally bloated for its use-case. Although I have other reasons as well, they are enduring a cost--a small one to be sure, but presumably non-trivial in aggregate--in people refusing to use their service in part because their desktop application is poor.
But I do agree with the general emotion underlying the frustration of "externalities" here. As someone who advocates for high-performance and efficient web applications, I have toyed with the question of whether developers should confront the morality of wasting energy by having selected poor/low-performance platforms. Put in a somewhat comedic tone: low-performance software contributes to climate change.
The problem is that despite how good it feels, individuals simply optimizing their individual activity does not scale to the societal scope. Post-agrarian humans literally behave and think via shared infrastructure. The only way individuals can make lasting improvements to society is to improve the infrastructure.
Does it? Or maybe those individuals pop up and do their progressive thing only when the existing infrastructure and technological landscape lets them?
Why they haven't made Slack a web based chat app like Discord or Wire or even Skype is beyond me. I don't even use their native apps, just open a tab in my browser and run the corresponding web app.
"Along with the features available in your browser, the app loads your conversations more quickly, gives you more control over notifications, and has better support for multiple teams."[1]
Also, the app gets its own entry in the task switcher.
The real trick is whether or not users care about:
A) This works on all of the platforms that we use
vs
B) Performance could be better on my computer
It's been a LOOOOOONG time since I worked on a laptop where I experienced noticeable performance problems...which is almost entirely because SSDs make dipping into swap so much less noticeable unless you're really working out your machine.
For most users, just knowing it will work on their machine is a bigger influence in using the product...and therefore a greater influence on business...than the performance of that system. It's especially true with a chat system where the most important feature is that everybody on the team can get access.
> It's been a LOOOOOONG time since I worked on a laptop where I experienced noticeable performance problems...
Really???
Because personally, I keep having performance problems on all laptops I have. Don't even try running in battery-saving mode, seriously.
I've lost count of the 5+ year old laptops that have had to be upgraded because they can't play a full-HD YouTube video smoothly.
----
Fun anecdote: I had to trial an enterprisey SaaS solution not long ago. A coworker gave me the name and I opened the site on my laptop (on the move outside of work, just taking a quick look).
Their site froze my firefox for 30 seconds because these idiots put a high quality full screen video of a surfer in the background of the main page. Looks cool, doesn't it? https://www.wavefront.com/
Couldn't see the site. Had to be at work to read it, on my top-end workstation, where the video played smoothly. Needless to say, I didn't take the product seriously.
> I've lost count of the 5+ year old laptops that have had to be upgraded,
5+ years ago is that LOOOOOONG time that OP was talking about. It's also unfair to compare the technical capabilities of old hardware, for many reasons. I think the point was that new hardware, _while it's new_, is becoming more and more capable. Any new laptop today, even a budget one, can handle YouTube videos in HD. The problem is that HD today won't be the same HD that exists in 5 years (i.e., 4k), and it's to be expected that a budget laptop today will struggle with the 8k technology that comes out 5 years from now. This is an old problem (pun intended) and should not be surprising.
> Because personally, I keep having performance problems on all laptops I have.
Selection bias. Programmers who compile code, run VMs or containers, and process tons of data, are not the average consumer laptop use case and have much stricter requirements. Many people are sitting in Facebook, YouTube, Gmail, or Google Docs for most of their day-- and likely inside of Chrome.
Where are the "Chrome is Flash for the desktop" posts?
The idea that Electron is any different a user experience for the vast majority of users seems, to me, skewed toward developer usage.
I don't know, 5+ years isn't that old anymore for a computer. Like, 5 years ago I was running... a core i7 with 4GB of ram. And now I'm running... a core i7 with 16GB of ram. The only things in computers that have really gotten significantly faster are SSDs and GPUs
It takes a surprising amount of power to decode video. The cheap CPUs in netbooks have been struggling for a decade, especially in battery-saving mode.
Lately they've been getting hardware acceleration just for that: special CPU instructions and drivers just to decode video decently.
For YouTube in particular, they're sending VP9+Opus where the browser supports it, without considering hardware acceleration. The rather anaemic Atom chips might have H.264 decoding on-chip, but only Kaby Lake has VP9.
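For what it's worth, here is a rough sketch (mine, not YouTube's actual logic) of how a page can ask the browser whether it can decode VP9 at all. Note that isTypeSupported only reports decode support; it says nothing about whether decoding is hardware-accelerated, which is exactly the gap being complained about. The codec strings are just illustrative examples.

    // Prefer VP9+Opus when the browser claims support, otherwise fall back to H.264+AAC.
    const vp9  = 'video/webm; codecs="vp9, opus"';
    const h264 = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"';

    const preferred = ('MediaSource' in window) && MediaSource.isTypeSupported(vp9)
      ? vp9
      : h264;

    console.log('Requesting stream as', preferred);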
Different people have vastly different use cases. I haven't experienced a performance issue with desktop Slack or any other (non-game) application for many years. Sure, if I look at how much RAM Slack is using I'd probably conclude that it's more than it needs to be, but I haven't had an issue with RAM usage for, gee, probably ten years.
As for battery, again, my laptop battery has been little more than a UPS for at least five years.
It's mostly CPU performance issues I experience. Rendering any video, GIF, or website takes sizeable CPU power (for a cheap CPU in power-saving mode).
Never had issues with RAM. I can remember a few friends who bought netbooks with 2GB of memory some years ago; they quickly realized that they simply couldn't run their development environment in that. (I'm talking swap death, where a click that should take 1 second to act takes an entire minute.)
Right now I'm using a backup laptop with an Intel Celeron. It runs most apps fine. Web sites like Hacker News come up easily and snappily. The CPU fan kicks in when I watch a high-quality movie. It also kicks in on small web apps. Kicks in big time on YouTube and Facebook. Everything on those is sluggish compared to my native apps. Even if I were non-technical, I'd know stuff was about to slow down when I go into my browser.
Just bad tools leading to waste that users notice. Best to avoid them if possible. Not always possible...
> I've lost count of the 5+ year old laptops that have had to be upgraded because they can't play a full-HD YouTube video smoothly.
Are they all netbooks? Because those were crap the day they came out and even more crap 5 years later. You can find terrible discount desktop machines that can't handle anything just as easily as you can find terrible laptops.
My laptop is 5 years old and the only time I feel a lack of performance is when the Swift compiler fires up. Your example link came up right away (OK it burns away 30% of a core, but I got 4 of those).
They are not crap by any standard, they are simply low spec.
A Core i3/i5 starts at $150; most devices don't have these expensive CPUs.
Even a $1000 MacBook from 5 years ago would have one of the first i3/i5 chips. It would struggle to play 1080p video unless plugged into the wall with the fan spinning seriously.
> It's been a LOOOOOONG time since I worked on a laptop where I experienced noticeable performance problems.
This annoys me a lot, for one of two reasons. Either:
1. Some developers have no idea what performance means -- it's crazy that I have time to notice a spinner when a glorified IRC client starts up or switches tabs, let alone have time to watch it for tens of seconds.
or:
2. If these developers are claiming they have no performance problems, clearly the laptops that can handle modern applications are being hoarded, and I have no way to obtain one of these magical machines. All I can get are recent i7 processors.
On point 1: Also there's still the effect where your app does the same thing a native app would do in an equally reasonable time but uses twice as much power to do it.
>It's especially true with a chat system where the most important feature is that everybody on the team can get access.
That's a whole other story... All that big open source movement for decades, and in 2017 we are still depending on proprietary systems to provide a chat(!) for a company.
Yep, and when the government doesn't properly tax those harm-to-public-good things like air pollution to balance the incentives and make it cheaper to do "the right thing" and pollute less, companies (which are made of real people and are not just unthinking machines) will do whatever they can to maximize profits. This is why it's so important to e.g. increase the price of water to reflect its true cost in the Central Valley of CA instead of letting tons of farmers grow water-intensive crops in the open air, wasting tons of water just because it's cheap (then they complain "is GROWING FOOD a waste of water?!?", completely un-ironically). Many businesses subsidize their operations using government funding, yet, again ironically, most of the rich white men running them are staunch opponents of public assistance programs despite owing most of their wealth to similar mechanisms, typically in conjunction with massive tax avoidance.
Pretty funny that the only reply here is 'found the Marxist' (maybe it was supposed to be a joke?) when, in reality, all you're suggesting is that the government set the conditions for the proper functioning of the market, which is an idea straight out of Hayek.
I don't understand. The cost is mostly externalized into, quite literally, massively distributed increased electricity consumption. That an individual's power bill may be subsidized only emphasizes that problem.
It sounds like we agree. To the extent we're just talking about electricity costs, then it's true that easily written but inefficient code may indeed externalize those costs, to the extent that users are unaware of the added power consumption or that the user herself does not pay the full cost of that electricity.
But these electricity costs struck me as only a small part of the broader point I was responding to, which is why I framed the point the way I did.
I contend that squandering the work and infrastructure involved in converting natural resources into electricity is the main problem with modern consumer software development. Any user's (lack of) perception of waste is irrelevant. The scope of the problem is not any individual literally paying a couple dollars more on their electric bill, but rather it's that gigawatt hours are literally being burned away because software developers who know their code will be running on machines the world over can't be assed to spend a little more time learning to practice their craft correctly.
There is no broader point than converting natural resources into societal infrastructure. Be a responsible member of society. Don't obstinately ship wasteful code. I understand if you legitimately don't know any better, but if you're part of the community that's constantly writing blog posts about computers being so fast that it's OK to burn the end users' CPU and storage just so you don't have to spend a couple more minutes thinking about what you're doing, you're adding harm to the world.
Again. I think we agree. You're just making a different point than the one I was making. I was making a technical but hopefully still constructive point about what is an externality and what is not.
You seem more focused on making a case against shipping inefficient code. And your point is a good one. It's just a little difficult to suss that out since you're framing it as standing in opposition to my related, but very different, observation about externalities.
The vast majority of users (like >99%) don't have a clue why their computer isn't performing up to par. They literally have no idea how any of it works. They might just assume that it's slow because they haven't deleted spam emails in a while (even while they use gmail) or didn't pay extra to get that 3.2Ghz machine instead of the 3Ghz. They certainly don't have any idea how to open up top stats and check what resources a particular app is using, so until that happens and users punish developers for this kind of stuff, expect it to continue.
> The vast majority of users (like >99%) don't have a clue why their computer isn't performing up to par. They literally have no idea how any of it works.
Yes, you'll often hear things like "I think I have a virus" or "I think I need a new computer". Their computer is fine; they're just running slow apps.
I don't disagree, but I'm hopeful that Electron will get better. When you choose an open source platform with a diverse group of major companies using it, part of what you are betting on is that the platform will a) survive for a reasonable amount of time and b) gradually improve.
GitHub, Slack, Spotify, Microsoft, etc., are all using Electron. As their developers gain experience with the platform and as they experience problems with resource usage, I would expect to see the platform improve. Maybe I'm wrong.
I agree. I think it also has the potential to help motivate developers to write native UI rather than use electron.
Currently, if your primary target platform is the web but other platforms are still important for your market, going Electron might make sense because you really don't want to rewrite your business logic natively for each platform.
WebAssembly allows us to use the same native libraries for all our business logic and data models, which, besides being more performant, means we only need to be willing to write the UI natively on top of those libraries. We've actually taken (very) old desktop C++ code, compiled it to asm.js, run it in our web app, and then rendered its outputs with WebGL onto a canvas, with surprisingly successful results. This makes the prospect of WASM becoming standard across the board very exciting.
Now let's hope we don't decide to replace our native desktop app, that uses this old C++ library, with an electron app of our web app, running the asm.js compiled library :).
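To make the workflow above concrete, here is a minimal sketch of what calling such a library from the browser can look like once it's compiled to WebAssembly rather than asm.js. The module name and the exported `compute` function are made-up placeholders, and a real Emscripten build would normally ship its own JS glue with a much larger import object.

    // Load a (hypothetical) wasm build of the old C++ library and call one export.
    async function runLegacyCode(a, b) {
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch('legacy.wasm'),   // placeholder module name
        { env: {} }             // real builds need whatever imports the toolchain emits
      );
      // `compute` stands in for whatever the library actually exports.
      return instance.exports.compute(a, b);
    }

    runLegacyCode(2, 3).then(result => console.log(result));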
I've switched from Sublime Text 3 (native) to VSCode (Electron) and I couldn't be happier. If I didn't know it's electron based I wouldn't have been able to tell as a user. YMMV so don't generalise.
If your battery is more important, just stick to Emacs.
What machine are you using? I have a 2-year-old ThinkPad, and it still gets 10h+ at full brightness with VSCode, compiling, etc. And it's the screen brightness that consumes the most battery.
It used to do 20h, but the main battery is external and can be replaced, so that's a good thing. I would really recommend ThinkPads to everyone instead of going for MacBooks, for example.
VSCode doesn't compete with emacs and vim. VSCode competes with Sublime Text and Atom. Maybe even with WebStorm on the more full-featured end and Notepad++ on the leaner end.
I'm not even sure whether you're trolling. BBEdit provides a tiny subset of the features VSCode offers. VSCode is not a text editor, it's halfway between a text editor and a traditional IDE.
Atom and VSCode are in different leagues with respect to performance, and the difference is growing with time. If you think the reason that Atom is slow is because of Electron, VSCode is the counterexample.
Since both are written with Electron, the difference must obviously be in the actual implementation, not the platform itself.
Keep in mind VSCode was written by Microsoft who have decades of experience writing IDEs and text editors, whereas Atom was written by GitHub who are mostly working on the GitHub product.
VSCode runs like a dream on my hexacore/8GB/SSD desktop, but brings my Atom/2GB/32GB eMMC based Asus EeeBook X205TA to its knees. Native programs for low-power systems seem like sound practice to me.
How about PWAs, Service Workers, WebAssembly? Thousands of developers are working hard on improving the UX and reducing battery usage, wasted CPU cycles, network traffic, and latency.
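To make the Service Worker part concrete: a cache-first fetch handler is roughly all it takes for repeat loads to skip the network entirely, which is where most of the traffic and latency savings come from. This is a minimal sketch; the file names are placeholders.

    // sw.js -- pre-cache the app shell and answer repeat requests from the cache.
    self.addEventListener('install', event => {
      event.waitUntil(
        caches.open('app-v1').then(cache => cache.addAll(['/', '/app.js', '/app.css']))
      );
    });

    self.addEventListener('fetch', event => {
      event.respondWith(
        caches.match(event.request).then(cached => cached || fetch(event.request))
      );
    });

    // In the page: register the worker once, if the browser supports it.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js');
    }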
it's been that way since forever. if you're not employing expensive top-ranked brilliant engineers writing aggressively optimized, completely bespoke c applications tuned to every cpu model's feature set and performance characteristics... you just might be externalizing.
insofar as it benefits the budget, you should probably externalize as much as possible. consumers will provide the fitness function through deciding what products they prefer.
We're not even discussing aggressively optimized C vs web stack. We're talking about a massive segment of today's software developers not even bothering to learn competent C (or Obj-C, Swift, Java, whatever) to build a native application. Hell, it's about the very practice of calling Electron apps native to begin with. The same thing continues to hold across web stack everywhere—reach only for those things that don't require you to learn a new language/paradigm, letting you keep churning out the same thing you build in a browser and call it native for platform X, and let it be on the user to pony up the resources to use it.
C is generally faster than hand-written assembly, so it doesn't make sense to write software in assembler.
For macOS, desktop applications are written in Objective-C, which is C with fast message passing and doesn't trade away much speed. Swift is the modern alternative, and it doesn't trade anything for speed either.
For Linux, applications are traditionally written in C with Gtk or in C++ with Qt. Both options are very performant.
For Windows, the main language for a long time was C++, and it remains a supported language. There's a movement to .NET, so Windows is an outlier here. But .NET is generally a very performant language; it makes some tradeoffs for safety, but it has enough features to stay fast, and its implementation is specifically tuned for desktop applications.
The only terrible platform with a slow language is Android, and it's well known for its lag.
There's very little desktop software written in Java or Python, and usually those are specialized applications where users don't really care about the experience, but rather about the functionality.
So the JavaScript invasion here is unexpected and unwanted. I won't use any JavaScript desktop application if I have a choice. I don't like this technology. JavaScript and V8 were made for the browser, with advanced sandbox capabilities. That's fine. But for the desktop it's just not needed, and there are no other advantages. The UI is terrible and does not conform to any standards. Performance is not good. Memory consumption is abysmal. Energy consumption is abysmal as well.
If I'm about to buy an application for macOS, I always carefully inspect its bundle and try to determine which technologies were used. Unless it's pure Objective-C/Swift, I usually won't buy it. I hope more users would do the same.
Yeah, but Python still doesn't seem to have a great cross-platform desktop GUI, as far as I can find. I'd prefer to code entirely in Python if I could, but I haven't liked a single GUI library I've tried for Python.
To be fair, I haven't really tried PyQt yet, but I don't like the idea of having to buy a commercial license for it.
But none of them come close to the benefit of being able to carry over a ton of Web UI experience to the desktop.
Eventually every GUI app toolkit ends up with a custom MVC framework, a client/server architecture, some kind of DB for persistence, its own implementation of asynchronous event and communication models, and a declarative layer to create the UI without code. For the most advanced, this layer separates structure and layout.
Well, guess what: this is what the Web has natively been doing forever.
Since the web is now the most popular platform, with millions of libs and tutorials on it, people just reused that. It just makes sense.
The problem is not the concept. The problem is we should have driven this effort with a standard to sanely close the gap between the desktop and the web, so that you don't have to spawn a freaking browser-engine OS for every one of your apps.
But no, the web is the only platform with a standard. And it flourished while all the big players created closed gardens with proprietary, shitty APIs. And this is the result.
Have you not seen Jurassic Park, for god's sake? Life finds a way.
It's completely false.
The web at the beginning was a huge mess of static pages, hacked CGI scripts and no interactivity at all with the need to submit a form to trigger a full reload of the page.
Everything was absolutely synchronous and a lot of times the database was accessible directly from the public interface and some pages did actually have the connection string directly in the HTML for everyone to see.
I'm really not sure in which alternate reality you have seen web apps doing all that forever.
Yep, completely agree with everything you said. I'd love for there to be a standard on the desktop side, but until then, I'll keep using a hodge-podge of technologies depending on the project, I guess.
They should have adopted Qt; it makes it easy to write native C++ code that compiles on different platforms, so you get the performance and RAM usage of C++ but still most of the benefits that web coding offers, without so many drawbacks. (It's also quite nice to work with IMO.)
It's too bad it wasn't more universally adopted, by any of the 3 major platforms (including Linux, where the all-C Gtk+ has become the standard for the most part). Instead, it seems to have found its greatest success in, ironically, small embedded devices. Devices like this simply cannot take the performance hit of something like Electron.
The last time I tried Qt it was a huge pain compared to WPF or other solutions, admittedly something like 5-6 years ago.
And it was proprietary and it needed a licence.
With all my good will, I find it quite difficult to believe that PyQt is now the silver bullet for writing all UIs.
For sure it's not for me given that I find python a pretty average language with the huge handicap of duck typing (and before someone starts, yes, I'm aware of the 'type annotations')
> The last time I tried Qt it was a huge pain compared to WPF or other solutions, admittedly something like 5-6 years ago.
Shrug, I found it much nicer than anything else I'd used, but I've never used WPF (which is single-platform in any case).
> And it was proprietary and it needed a licence.
Neither Qt nor PyQt is proprietary in the usual sense of the word (nor were they 5-6 years ago). If you're using a non-standard definition it would probably be more productive to use a different word.
> For sure it's not for me given that I find python a pretty average language with the huge handicap of duck typing (and before someone starts, yes, I'm aware of the 'type annotations')
I'm a huge fan of type systems. I wish I could find a UI framework that's anywhere near as nice as PyQt for an ML-family language.
I can't help but think there's a different flavor to this than with higher-level languages. This is adding a platform on top of another platform. The same objections existed years ago with Java when Swing was released. Java is a cross-platform high-level language but Swing is basically an entirely new platform.
It's this platform on top of a platform that is objectionable from a performance, memory, storage, and integration perspective.
We see the same thing with containers. Docker, flatpak, snap... It's just the right time for it.
Languages have evolved to change the way we handle constraints like memory, speed, readability, expressivity, etc.
We are arriving at the peak of what languages can bring to the table. Sure, we can improve things here and there, but the huge challenges now are integration, packaging, distribution, updates, communications, multi-tier architectures and all that.
So we now tweak platforms to help us with that.
But because we didn't see that coming, it's not done in any structured way. It's done exactly the way we've done everything since the beginning of computing: by stitching stuff together then hitting it hard with a hammer until the job is done.
This is not new. IT is a joke of an engineering field. We hack everything, don't think about the future, and then end up living with the status quo. It's always been like that.
I agree. Containers should be unnecessary -- all that they could provide could be done at the process level with an operating system designed to isolate computing resources appropriately. But operating systems were not historically designed for that so another (somewhat ridiculous) layer is added on top.
Actually, all containers do is utilise what's already designed into the OS, like LXC in Linux, to form containers. Containers are not a platform on top of Linux; they are a wrapper around different isolation tools built into the kernel.
IT is a joke of an engineering field only for non-engineers.
If a pull request doesn't follow some principles agreed a priori it doesn't get merged.
If there are people who like to play "IT cowboys", just hacking stuff together without any process whatsoever or unit tests that certify the behaviour of what they have written, jeopardising the entire team's efforts, that is not a failure of software engineering; it's a failure of that specific team.
And please bear in mind that I worked in such toxic environments, but I never thought for a moment that software engineering and software architecture are jokes.
The joke was the team/organisation on which I was at the time.
>Computing for ever. We use C to avoid writing assembly. Use Java to avoid writing C. Use Python to avoid writing Java.
Yeah, but until Electron and the like, we seldom shipped desktop apps in anything other than C, C++, Delphi, etc., even after all those decades. Which are all as close to the metal as can be. And in fact C/C++ can be as fast as, or even faster than, hand-rolled assembly most of the time (with few exceptions), so the whole premise is moot.
The few Java desktop apps that were around, people used to hate as memory hogs.
But can you make a website with the same knowledge ? Can you make it portable to other OS ? Can you reuse 20 years of knowledge, resources and libs ? Can you hire tomorrow 10 experts to help you on it ?
Quality of the tech is NOT the drive for success here. You are missing the point.
People used crashy, buggy, slow software for years. Photoshop and Office lost your data on a regular basis in the 2000s. Windows BSODs were a common occurrence then. We didn't see a massive exodus to Mac products because of that. The only reason people started to go crazy for Apple was after the iPod came out. And even then, it was still a small part of the market.
You can see everyday that people favor cheapness, easiness and convenience over quality. You would not have so much junk food otherwise.
> But can you make a website with the same knowledge ?
One of my first commercial projects was a web-content management system written in Objective-C. Customers included Siemens and the German Bundestag.
Another couple of projects were written in WebObjects. If I wanted to, I could use Cappuccino, but I am not a big fan of web/client apps, so I don't.
> Can you make it portable to other OS ?
This product ran on: Solaris, AIX, NeXTStep, Linux, OS X. I think we also had a Windows port.
> Can you reuse 20 years of knowledge, resources and libs ?
In the sense you meant it: yes. Except it's more like 30 years. However, programming skills are (or should be) transportable. With virtually no previous experience, I became lead/architect on a Java project, which succeeded beyond anyone's imagination.
> Can you hire tomorrow 10 experts to help you on it ?
>One of my first commercial projects was a web-content management system written in Objective-C
You certainly didn't use any of your Cocoa widgets for the UI there. It was HTML + CSS.
> This product ran on: Solaris, AIX, NeXTStep, Linux, OS X. I think we also had a Windows port.
Yeah, GNUstep for the GUI on Windows... is this what you think could be an argument for Electron users?
> In the sense you meant it: yes. Except it's more like 30 years.
Again, bad faith. The world has way, way more code, snippets, tutorials and docs about anything HTML + CSS + JS than about any tech based on Objective-C.
Programming knowledge is transferable, but knowledge of the ecosystem is not, and it is always the most time-consuming part.
> Is this a serious question?
Oh yes, it is. Because, you see, we are living in an era where it's hard to find any good programmer at all, for anything. They are all taken, and they are very expensive.
So basically, with a tech limited to one ecosystem, finding them will be even harder, and even more expensive.
The simple fact that you are pretending it's no big deal (while any company will tell you otherwise, so much so that the GAFAs are spending millions just on their recruitment process) illustrates how much of a troll you are.
It most certainly is not. You just don't know what you're talking about and keep making up new stuff when confronted with actual facts that contradict your fervently held beliefs.
I'm curious how you get Smalltalk-like productivity in Objective-C? I thought the productivity of Smalltalk comes from its live programming environment?
You bet? I have 3 pieces of software open right now that use Python for their GUI: my RSS reader, my torrent downloader, and Dropbox. And I've got many more installed on my machine.
But the thing is, even when I write something for myself, I first write a command-line app, then a web service. Never a GUI, because it's such a pain.
Of course. Electron is just a layer around the compiled Chrome engine as well. If you want anything to display a matrix of pixels changing rapidly, you need low-level performance eventually.
Of course, but in your case, the C/C++ libraries you're calling into aren't doing any heavy lifting, they're just making calls to an X server (or equivalent) or perhaps to a graphics card. There's no reason GTK (or the GUI portion of Qt) couldn't be implemented in Python, it's just a huge undertaking and it was started in C (or C++ as the case may be).
>And in fact C/C++ can be as fast as, or even faster than, hand-rolled assembly most of the time (with few exceptions)
This is generally true, but to be fair the reason is because we design CPUs differently these days. Modern CPUs use instruction sets that are specifically designed to work well with compilers, and aren't meant to be programmed in hand-coded assembly except for a few critical bits deep within OS code. Older CPUs weren't like this.
It still might be possible to write hand-rolled assembly that beats modern compilers, but you probably need to have seriously super-human mental abilities to do it.
> the reason is because we design CPUs differently these days. Modern CPUs use instruction sets that are specifically designed to work well with compilers
You got the causality wrong. Assembly programmer-friendly CPUs died because CPUs which weren't as friendly were faster and cheaper; those same CPUs were instead more amenable as compiler targets.
Hey, I used to use it! :P I came from a BASIC background. VB6 was supposedly good for rapid prototyping of GUI apps (esp. the CRUD variety). I found it would boot up in 1 second, deploy a new project in 1 second, and load a new app in 1 second. It was also safer, so no constant blue screens over common functionality. It also could wrap foreign code in less safe languages, which I could still write in an industrial BASIC. One could also export the GUI to code in a different language.
Became one of my favorite toys. I'd still use it for GUI prototyping if it was FOSS and kept getting extended. I found even lay people could learn it well enough to get stuff done. Long after, I learned what horrible things lay people did with it. Yet, they got work done and got paid without the IT budget and staff they would've preferred. (shrugs)
>C# has been the default way to write Windows apps since the early 2000s.
No, it really hasn't. It was just the way Microsoft proposed businesses write bloated internal enterprise apps, the kind they used to use VB for.
Those are not the same as desktop apps -- and no, or very very few, desktop apps ever turned to C#. Not even MS's own apps, like Office, and surely nothing like Adobe's or countless others.
>It is no more "closer to the metal" than JavaScript.
Actually it very much is: it is statically typed, it has scalar types and contiguous memory allocation that allow for much better speeds (hence the effort to add some of those things asm.js and the like for Javascript), and it even has AOT compilation.
Besides, it's not JS itself that's the problem (though it took millions and top notch teams to make it fast): it's the web stack on top of it. C# just runs on a thin CLR VM layer -- and the graphics are native.
I mean, if you're going to say Windows Forms and WPF apps are not "desktop apps" then you're going to have to do a lot more than just declare that they aren't.
> Actually it very much is: it is statically typed, it has scalar types and contiguous memory allocation that allow for much better speeds (hence the effort to add some of those things asm.js and the like for Javascript), and it even has AOT compilation.
You're just listing ways that they are different. They both run in a virtual machine that abstracts away the actual machine. You know, the metal in the phrase "close to the metal."
>I mean, if you're going to say Windows Forms and WPF apps are not "desktop apps" then you're going to have to do a lot more than just declare that they aren't.
Windows Forms is a wrapper on top of the MS Win32 API. And WPF is also based on native widgets wrapped (with some extended with managed code).
In any case, C# apps are not much represented among the majority of Windows desktop apps, most of which are written in C++ or similar, and surely all the successful ones. Can you name successful C# desktop apps? (Not in-house enterprise apps and no developer tools, please. There, where the users have no choice, even Java does well.) I'll name the successful C++/Delphi/native/etc. ones and we can compare our lists.
>You're just listing ways that they are different. They both run in a virtual machine that abstracts away the actual machine. You know, the metal in the phrase "close to the metal."
A call to a native drawing lib that doesn't pass through 10 layers of abstractions and bizarro architectures is as good as a direct native call. Especially from something like C# that runs circles around JS performance.
But even so, few consider JS to be what makes e.g. Electron slow.
As far as I know there are plenty of XNA games running on both PC and Xbox in C#.
And those are pretty much the worst applications you could use C# for because, you know, latency.
I don't see any real blocker to having complex C# apps on the desktop, apart maybe from the quite shitty ClickOnce and the continuous need to have an upgraded .NET Framework to use the new features.
But now for UWP apps the default is C#, they can be installed directly from the store, and with Roslyn you basically only need to target .NET 4.5 to have all the features of the last version of the language.
And this is a huge win that admittedly JavaScript already had because of transpiling.
If I had to write a commercially distributed desktop application nowadays I would for sure use C# or F#, not JavaScript.
> Yeah, but until Electron and they like, we seldom shipped desktop apps in anything than C, C++, Delphi etc even after all those decades.
So things aren't any different than before. We've just replaced non-C/C++ abstractions that were written by the platform-owner company with non-C/C++ abstractions that are written by open source projects.
This seems pretty much in line with the general industry trend towards the adoption of open-source software.
>So things aren't any different than before. We've just replaced non-C/C++ abstractions that were written by the platform-owner company with non-C/C++ abstractions that are written by open source projects.
By "large", you mean some of the UI using WPF, the plugin system supporting C# and some of NuGet?. Not that those are small projects, but considering what is inside Visual Studio, they are hardly "large portions" of Visual Studio.
And because the IDE support (refactoring etc.), compile-time error checking, and ease of use more than compensate for Java being a little bit verbose.
Yeah, and they used C because the compiler could optimize stuff the JVM couldn't, but now it can. And now Python gets type hints, so you can have the IDE tools you have with Java, like with PyCharm. It's the circle of life.
Not quite correct about type hints. They are only in Python 3.
Everyone who adopted Python 2 on a sizeable codebase is likely stuck there forever, with zero annotations and none of the new tools available, and those codebases will never be ported.
But let's be fair, type-related tooling in Python is not close to what you have in Java yet. It's just that eventually, everything comes around. Java got faster. C++ easier. Python ...toolier? Etc.
Python broke all backward compatibility and put all existing sizeable software in a miserable, deprecated state with the Python 3 break.
I don't recall C++ getting easier. The few tools and IDEs still fail at decent refactoring and code completion. The C++11 movement is adding a few features, more or less useful, piling on top of the vast amount of already existing complexity.
C++11 makes a LOT of things much, much easier. Yes, it does pile on top of existing complexity because they're loath to break any backwards compatibility, but the nice thing is that you don't have to use the older features or ways of doing things. In fact, if you look at multiple serious C++ codebases, it'll almost look like they're different languages, as every project basically chooses a subset of C++ it accepts. Realtime embedded code doesn't look anything like desktop application code, for instance, but they're both technically C++.
And as far as I remember, even Google is only supporting Python 2.
Their Python-to-Go transpiler doesn't support Python 3, for example (unless something changed in the last few months and I missed it).
And because packaging everything in a jar is easier than pulling 1e3 dependencies for every deployment. Not to mention dependencies that also require a C/C++ compiler, Boost, or other native libraries.
Yeah and some use C because Java is too slow. The point is, there is nothing new here, the history of computing is repeating itself. It's just now we have better toys, a bigger market and the stakes are higher.
Rust doesn't bring much in that aspect. You could use C++ for performant high level abstraction for many years before Rust. Rust brings memory safety and that's huge. But it's not anything like Java, it's much harder.
Actually, I think it's very similar to Java in terms of what is being offered and at what layer of thought.
It does have a steep learning curve, but it's worth it. The number of concurrency bugs alone that I could have avoided if I had been able to use Rust years ago is sad to think about. Java has great concurrency tools, but doesn't do anything to make sure that you're not shooting yourself in the foot.
Java also has JavaFX, which comes with an embedded Webkit browser. I can create my UI with React, or any other HTML/CSS/JS library, and make it interact with code written in Java, Scala, Groovy, Clojure, Kotlin, Ceylon, Frege, etc. very easily. I think this provides all the benefits of Electron, but is even more flexible and powerful.
Do you need to run N programs on N copies of Chrome with JavaFX or use a single VM like all JVM apps do? Because that's the problem with Electron mentioned in the article and that's exactly one place where JVM languages are better.
This is basically a terrible argument in and of itself; you would do better to flesh out why yourself instead of expecting everyone to conclude your self-evident correctness.
> code written in Java, Scala, Groovy, Clojure, Kotlin, Ceylon, Frege, etc
Your list of 7 JVM languages (both here and in your earlier comment on this submission) seems to be from most widely used to least. Yet in your HN comment from 2 days ago at https://news.ycombinator.com/item?id=14068664 you ordered that list differently, i.e. "Java, Scala, Clojure, Groovy, Kotlin, Ceylon, Frege, etc". Have you changed your mind about the relative adoption of Clojure and Apache Groovy in the last two days?
Not really. Actually, however, there is a recent survey that shows that Groovy is the second most popular language on the JVM, behind Java. Myself, I use Scala, and would like to learn Frege.
Groovy is a quite lovely dynamic language and it's actually the best solution for BDD using Spock.
Sadly, I still haven't found anything comparable for BDD in the other languages that I use.
> Groovy is a quite lovely dynamic language and it's actually the best solution for BDD using Spock
There's something wrong when a testing framework hacks into a language parser to make the language statement labels have special meanings like function names do, and overload the `|` operator in expressions so data will appear as a table in the source code. "Lovely" isn't the word for that sort of thing.
>There are costs to developing desktop applications and developers don't want to pay them, so they make users pay for them
That is definitely not the whole story. Costs are shared between developers and users. If it's more expensive to develop an app, you can bet it's going to cost users more too.
Because those costs grow orders of magnitude as they're externalized. $1000 spent during development can offset $1M in energy wasted by a million users. So they should want to pay those costs as responsible members of civilized society.
And if they don't want to, they should be forced to by said society.
To make a quality product and offer a better user experience?
Users may not notice the slowness right away, but if they run a bunch of applications at once they probably will, and they're more likely to notice it while running on battery.
> ... Electron applications are shite in comparison with proper native applications.
Is that true from a user's perspective? The average user I know would not have the idea that Slack is somehow inherently worse than say MS Word or any other truly native app they use everyday. What would Slack gain by integrating better with my Mac?
Slack starts up too fast for me to read the little quotation (penultimate MacBook Pro). I actually wish it was slightly slower because I like those cheesy quotes when I can catch them. Everything feels almost instant. Not bash instant, but as fast as any mainstream messaging app.
I leave it running all the time and get what feels like normal full hours upon hours of battery life. There are apps and browser tabs that I've noticed destroying battery life, but neither Slack nor Atom nor MS Visual Studio Code is on that list for me.
The anti-Electron complaints all kind of feel like the same argument that comes up as each new layer of abstraction gains acceptance. I'm sure there's a lot that Electron can and should improve. Running multiple copies of Chrome does sound awful, but until I read that I didn't know the atrocities it was committing.
Just doesn't feel productive to try to make Electron go away vs. working to improve it or create a better cross-platform abstraction.
> What would Slack gain by integrating better with my Mac?
Far better resource management, one would hope, for starters. As the article pointed out, when you're measuring your IRC client's memory use in hundreds of megs, there's a problem.
It isn't uncommon for me to need to choose which "essential" apps to turn off to get something done on a maxed-out MBP. So Slack gets turned off, and maybe someone gets annoyed that I don't respond to something, while I run a few VMs in Slack's former memory space.
As far as other things, well, using a platform's capabilities as appropriate is generally considered good form - HCI concerns, and, generally, just people like things to be consistent. I don't know why we're supposed to forget this now.
I personally don't care if Electron goes away or what, but to the extent that I'm forced to run Slack by my cow-orkers' choices, it sure would be nice if it were less of a bucket of ass.
Recently I was doing some MEAN stack development; RAM was maxing out at 8 GB and the system was struggling to run smoothly.
Mongo alone was hogging around 1.5 GB and webpack was hogging 1 GB, and there were three separate Express.js servers hogging 500 MB each.
Finally an argument that is concrete and understandable! Seriously, this is the first comment I've read that is tangible as opposed to theoretical. All the other complaints are on the basis of being morally offended by the ratio of resource usage vs. application functionality.
The nice thing about slack is that it will run about anywhere. The app on my phone magically knows if the app is not running on my laptop and will then and only then send me notifications. But, I'm not suggesting this as a solution for you -- best productivity killer I know is having to pay attention to yet another device while trying to get work done.
Depends on your environment, but I have one very effective and successful coworker who just flat-out refuses to use Slack. The number of companies where that is acceptable is probably limited. Unfortunately.
> I personally don't care if Electron goes away or what, but to the extent that I'm forced to run Slack by my cow-orkers' choices, it sure would be nice if it were less of a bucket of ass.
From experience, using the bridge means pinging the admin on Slack about it, convincing them that it's even worth doing, and then being stuck in "The IRC Room" instead of the other rooms in which people are actually communicating.
I use it extensively for everyday Slack, where I don't care much about inline media and inline formatting. And for a text client, weechat + this plugin handle it pretty gracefully.
I know I am really late to this, but one of the things I have been meaning to try is running Electron apps that have Web Apps with [Fluid](http://fluidapp.com). I think this may give me the perks without some of the drawbacks. I am not really sure, but it is an experiment I would like to try out.
I've never heard of Fluid before, but it seems to be a glorified shortcut with minor features such as notification badges. I really can't tell how this can integrate with Electron at all.
Electron would have you write a web app and package it as an app installed to your Applications folder just like any native app. You'll write this code in JS, HTML, CSS and use Electron APIs.
Fluid also would have you write this in JS, HTML, CSS but use Fluid APIs instead. The two compete for the same role in that sense.
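For anyone who hasn't looked at how thin the Electron shell actually is, the entire "native" side of such an app is roughly the sketch below (the window size and URL are placeholders); everything else is ordinary HTML/CSS/JS. Fluid gives you a similar window around an existing site without your writing even this much.

    // main.js -- open one window and point it at the web app.
    const { app, BrowserWindow } = require('electron');

    app.whenReady().then(() => {
      const win = new BrowserWindow({ width: 1000, height: 700 });
      win.loadURL('https://example.com/');   // or win.loadFile('index.html') for bundled pages
    });

    app.on('window-all-closed', () => app.quit());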
With Fluid, you don't do anything from the web developer's perspective. The Fluid app just packages a Web site as a separate application. So what this means is that any user can take a Web site and make it a separate app if they want.
> when you're measuring your IRC client's memory use in hundreds of megs, there's a problem.
Not really, unless you're using most of your RAM or you treat this as a fundamental principle. I've never checked my Slack RAM usage, and I'm sure it's higher than if they built a great native app, but I've also never had any performance problem with Slack or any other application (okay, except for Eclipse, but I stopped using that when Android Studio came out).
> It isn't uncommon for me to need to choose which "essential" apps to turn off to get something done on a maxed-out MBP
And there's the key. You have a very specific use case that is probably extremely rare. I'm a full-time developer and as far as I know I've never been low on RAM on my first generation Retina MBP.
People are less likely to be spinning up VMs, but they may have far less RAM to start with, and the more apps they use that are wasteful of power and RAM, the more likely this becomes an issue for them.
It is truly strange to hear being resource constrained being described as a special case in computing.
Last time I taught undergrads (last semester) their computers were far from anything as good as we'd have. If I asked them to run anything remotely demanding, it really taxed their laptops.
We really do need to be more mindful of resource usage. I'd rather write my own lightweight clone of something I want than use an Electron-based hog. There are menubar-only apps that come with an entire Electron dependency.
Why is that strange for personal computer usage? How common do you think it is for users to experience a performance problem caused by low memory? I'm genuinely surprised that we could apparently have such opposing intuitions. My intuition is that it is exceedingly rare.
Your intuition is badly formed because it ignores how crappy real people's computers are. The world is full of really bad computers that people don't upgrade, because their computer is a low-priority item for them and they have lots of other more important things to put their money towards, like food, shelter, and medicine.
Further, their machines are full of software that runs all the time for no reason, including but not limited to multiple redundant antivirus programs trying to scan everything in real time in a vain attempt to prevent the next malware infection from taking hold.
Various services like file system indexing and virus scans run at inconvenient times and render things slower than before.
Laptops are super prevalent because their portability is more important than power. It's not at all unusual to keep using the same machine for 5-8 years as long as it still works.
Real people have bad computers with bad specs, and in a massive number of cases the browser is already using a significant portion of the entire computer's resources.
The electron apps I've seen don't really have much UI. As soon as you build an app with some complexity it will start being a problem.
As a native Mac developer I can drop in things like a predicate editor for defining filters or queries. They come with Cocoa. I can put in sophisticated table views and tree views. I can connect all of this easily to Core Data, which loads objects from a database on demand without my having to write any code.
You see these sorts of shortcuts in the Electron apps I've seen. They don't have a proper GUI for preference configuration like any Mac app, as they can't get that for free. Atom doesn't have a regular file-open dialog. I use that a lot, dropping a file into it to jump to a particular location I've got open in Finder (the file manager).
My preferred editor, TextMate, has very little development resources behind it compared to Atom, yet it has a far richer GUI. You get a GUI for creating and editing plugins, not just editing config files. You get rich native UIs for selecting fonts and colors. You don't have to write font names and color names in some config file.
There is a certain irony in the claim about how stuff like Electron saves you cross-platform code, when what is usually not cross-platform is the GUI, and the Electron apps I've seen have very little of it. Try making something with the complexity of Keynote, Pages or Numbers in Electron and I think the lack of a comprehensive set of prefabricated GUI components will start making its mark.
Not to mention the numerous native APIs which exist which you have to duplicate, e.g. for vector graphics, animations, 3D graphics, audio, video, font handling, OpenCL.
How about people with disabilities, or internationalization? You've got great native tools for doing that; how do you accomplish it in Electron without re-inventing the wheel?
I use a Mac mini for my dev work. Almost every time it freezes out of the blue, the culprit is some HTML-based "app", frequently with an nwjs process using 100% of the CPU and huge amounts of RAM.
Honestly, though, the in-client search sucks. I'd much rather search archived text IRC logs through some web interface or locally than try and find something from 6 months ago within Slack.
I totally agree with you. I only use Slack from the web app, but I'm a heavy user of VSCode and Spotify. I believe in standardized Web APIs rather than a clusterfuck of native APIs.
The problem here is the base platform: Electron and the underlying Chrome/Node.js.
But as consumers we feel it when our browser lags, so browser vendors optimize speed over memory and CPU, caching the shit out of everything. The Web is a dangerous place, so they further isolate every tab as a process, sandbox them, and keep lots of copies of the same thing in RAM, because a security flaw is a lot more shameful than a memory flaw.
You are right, Electron is a hog. Last I checked, Electron, Chromium and Node.js were all open source. We can actually make a difference.
Making noise also makes a difference. When someone complained about VSCode CPU usage from the idle blinking cursor and it blew up on HN, the next month's release had a fix (for all platforms).
The truth is you can't move fast doing native development with different libraries. Electron keeps your dev costs down and allows you to move very fast.
> You are right, Electron is a hog. Last I checked, Electron, Chromium and Node.js were all open source. We can actually make a difference.
Won't that encourage more companies to cut corners? How can producers of quality products compete when other companies can release prototypes that their paying customers finish for them?
There's the argument "Why should I do it? If I'm paying for it then they should do it"
And there's, "This really sucks, it's an open source project, and I think I can make a difference that would affect millions in a small way but would still make a big impact overall."
Except for the very old 4.x Linux version - which is native Qt - and comparably old Windows ones, all of the recent versions are, I thought, web browser wrappers (of one sort or another) with some occasional native widgets around them.
(Can't edit the post above) But, of course, Skype issues aren't because of their UI nativeness. Their UI layer is... well, tolerable. Heck, I'd be happy if all I'd have to complain about Skype would be a crappy UI.
But I don't think the UI implementation has anything to do with why we love it so much. I mean, missing calls and messages every other week, random desync between clients, and all the usual rituals that every other group call starts with (you can hear them accompanied by chants of "can everyone hear me?", "$name, are you here?", "tell me if you see my screen" and "let me drop the call and restart").
I think I was still overreacting to prove a point. Everything you're saying is right.
My point is, even if I concede native is "better" or whatever, the difference should be negligible for most apps for all users (let alone "most" users).
You are of course correct. It's possible to make awful native applications, and good non-native applications.
The sliders mean that you will trade performance off against ease of development. It will be easier to ship a basic Electron/web-stack app than a native one for multiple platforms. However, it will be harder to do this in a high-performance and well-integrated way.
I went through all this with Cordova back in the day when trying to build cross-platform mobile apps. I was able to ship apps quickly, but at the expense of quality – it's okay to do this so long as you are aware of the trade-off.
Yep, definitely. And there is definitely a cost and effort required to make Electron feel as good as native. Depending on your feature requirements, this can get really hairy.
I'm just exhausted by the "web apps feel so much worse than native!1" argument, which is an oversimplification and not a rule.
Skype for Linux is a web app. It is a lot worse than the old Qt client.
Are the other platforms native? I thought they switched to WebRTC in 2015 to be able to just use a simple web window wrapper for all platforms, not just Linux.
Slack behaves perfectly fine for me and I don't have incredible hardware. It's non-Electron apps that are typically mentioned by macOS' energy management system as "consuming significant energy", in my experience.
Edit: Mass downvotes; lol ... Just stating my personal experience!
> Electron applications are shite in comparison with proper native applications. They fail to integrate with the host platform, they are slow, they hog memory and drink power.
Are they though? The two applications that use the most energy on my Mac - by far - are Steam and Skype. Steam still has trouble with HiDPI and freezes when performing various UI interactions. The number of problems with Skype are uncountable.
I'm currently booted into Windows for work, looking at my current process stats, the top memory consumers are:
* Visual Studio (hodge podge of all kinds of things, 800MB)
* Chrome (215MB)
* Microsoft Intune (presumably native, 114MB)
* GitHub (.NET WPF application, 108MB)
* Explorer (native, 103MB)
* Search Indexer (native, 107MB)
* Lync (who knows, 98MB)
Meanwhile, the supposedly terrible Electron apps:
* Spotify - 58MB
* Slack - 93MB
* VS Code - 60MB
As far as interfaces go, Spotify, Slack and VS Code easily outclass GitHub, Visual Studio, Explorer and Lync in usability.
Each Chrome (and Electron) app instance is a group of processes. You are probably just looking at the main process, while the bulk of RAM/CPU use comes from the renderer processes.
Nope, those figures were after adding up all the processes. What I have noticed in switching between platforms is that applications tend to report a far lower memory usage in Windows than on Linux or OS X.
Might just be an accounting difference. Forked-process applications in particular are very difficult to account for, because even their private/RSS pages may be COW (copy-on-write) from another process.
My figures exclude shared memory and are calculated as VmRss - Shared from /proc/<pid>/statm.
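If you want to reproduce that kind of accounting yourself, here is a minimal sketch (mine, not the commenter's actual script), assuming Linux's /proc layout and Node/ts-node; "atom" is just an example process name. The statm fields are in pages, in the order: size, resident, shared, text, lib, data, dt.

    // Minimal sketch: private RSS ≈ (resident - shared) pages, summed over an app's processes.
    import { readFileSync } from "fs";
    import { execSync } from "child_process";

    function privateRssKiB(pid: number): number {
      const fields = readFileSync(`/proc/${pid}/statm`, "utf8").trim().split(/\s+/);
      const residentPages = parseInt(fields[1], 10); // RSS, in pages
      const sharedPages = parseInt(fields[2], 10);   // file-backed (shared) pages
      const pageKiB = parseInt(execSync("getconf PAGESIZE").toString(), 10) / 1024;
      return (residentPages - sharedPages) * pageKiB;
    }

    // pgrep exits non-zero (and execSync throws) if nothing matches -- acceptable for a sketch.
    const pids = execSync("pgrep -f atom").toString().trim().split("\n").map(Number);
    const totalKiB = pids.reduce((sum, pid) => sum + privateRssKiB(pid), 0);
    console.log(`~${Math.round(totalKiB / 1024)} MiB private RSS across ${pids.length} processes`);

This is still only approximate; as the sibling comments note, COW pages shared between forked processes are hard to attribute cleanly to any single process.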
If you are using Windows 10, your missing Atom processes will be under Background Processes in Task Manager. For the sake of the argument, I just did a fresh install of Atom and this is what I see on the first run: https://i.imgur.com/0ZRSumF.png. ~220MiB (no files open, zero extensions).
That's really interesting! I have an up-to-date Atom install that I've been adding plugins to for about a year now (up to 50 or so), that has been running for at least 12 hours (Windows 10), and currently has 7 ~500-line files open.
Adding up all the processes' (7 of them) private memory gets me 194 MB.
VS Code has way fewer features than Visual Studio, especially for C#/.NET development. VS Code is a minimalistic IDE, very useful, especially for JavaScript development, where tooling is quite minimal anyway. But let's not claim that VS Code outclasses Visual Studio.
VS Code is not an IDE in the classic sense of the term (integrated development environment). It's a text editor with some debugging extensions. I don't think it is even comparable to Visual Studio at all.
That line is getting really blurry with both Atom and VSCode. I haven't used VSCode, but the only feature I recall missing in Atom right now is refactoring; and to be fair, the last time I used a real IDE (Xcode), I couldn't refactor there either.
For me, refactoring is something that is nice to have, certainly, but not what defines an IDE. The debugging and development facilities are what matter most.
A lot of that is available in "text editors" like Atom (and I assume VSCode). Of course, it depends on if someone wrote a plugin for your language of choice.
IMO it qualifies as an IDE for Typescript, and borderline with Javascript — slightly better than WebStorm in the former case and slightly worse in the latter — but not so much other languages.
I agree with you, but there doesn't really seem to be any native application that occupies the same space as VS Code or Atom. VSCode is not just a text editor like vim or Notepad++, as some people here imply. As you say, it's a miniature IDE. And it's far more comprehensive than Sublime Text.
For .NET Core + TypeScript, VS Code is almost at feature parity with full-blown VS, while being an awful lot faster. The only thing I find particularly lacking is debugging, but even that is coming along well.
I agree. I use VS Code because it's the best choice I have for TypeScript development on the Mac. However, I still miss Visual Studio while using it. The debugger especially makes me want to cry, though this might have more to do with Chrome than with VS Code's front end for it.
But it's impossible to do a real apples-to-apples comparison, because I use Atom more like an IDE than I ever did Notepad++, and Atom has orders of magnitude more features for me than Notepad++ ever did and most likely ever will.
I don't care about the difference of 175 MB of RAM between the two, as one of them (Atom) is infinitely more useful to me than the other (Notepad++).
Maybe for one of the processes, but on Windows Spotify usually needs at least three processes to run (five if you count the Web Helper and Crash Service, which are probably native code). On my machine the three main Spotify.exe processes take up at least 170MB of RAM, often more. Although I wasn't aware they were using Electron, as their app has a standard, native Windows menu bar.
Hard to answer your question without hijacking the thread, but here goes:
• It's got pretty much all the power features out there like Snoozing, Open Tracking, Send Later, Reminders, Enriched Contacts (i.e. Rapportive), Unified Inbox, Swipe Actions, Templates, etc.
• It's open source and super easy to extend with JavaScript plugins. Developers have made dozens of themes and some cool plugins, including PGP, Unsubscribe, Translation, Todoist, Trello, Markdown, etc.
• It's cross-platform for Mac, Windows & Linux with custom UI styles for each.
• It works with all mail providers including Gmail, Yahoo, iCloud, Outlook and even vanilla IMAP and on-prem Exchange servers.
• It syncs your data directly (not via a cloud service) for speed and security.
• It works offline, so you can use it on a plane or when you don't have WiFi.
There are other comments in here comparing react-native to Electron. Do you know if you could build Nylas Mail at the same pace with react-native? Will the binary size/RAM usage drop significantly on react-native?
I haven't yet seen a substantial desktop app built with React Native and afaik neither FB nor GitHub is investing in React Native for desktop so generally this is hard to say. React Native is more of a framework whereas Electron is a runtime-- much different goals though both are super cool and I'm enthusiastic about the future of both!
I'm holding off my downvote to see if I can get a straight answer out of you. Nylas Mail bills itself as the best email app. Them's strong words, but maybe you're worth the claim? Let's see!
I see from screenshots that Nylas has folders and labels. Can I use either of these in the following fashion?
- I can have a tree structure of them
- an email can be in two separate folders/labels at the same time
- folders/labels can be configured to learn which emails to automatically sort into themselves, based on the email contents, by dragging and dropping the email into or out of them
• If by "tree structure" you mean a folder hierarchy, yep that's supported. I think we have a current bug with dragging nested subfolders but we're working on a fix. (Surprisingly >99% of users have a flat hierarchy.)
• A thread can certainly be in two separate folders (e.g. Inbox and Sent) but an individual message can't be in two folders at once. In that situation there are two copies on the actual mail server. For Gmail/Gsuite this is possible via labels where any thread can have an arbitrary number of labels. We support both systems.
• "labels can be configured to learn which emails to automatically sort into themselves, based on the email contents" -- this is a really cool idea and something we've talked about internally. AFAIK there is no cross-platform mail client that does this today beyond things like manual Gmail filters. It could also be an interesting plugin that anyone could build on NM. We have a Slack chat room where folks discuss stuff like this if you're interested: http://slack-invite.nylas.com/
• And for your bonus round (haha) yes there are 2 different ways to configure the UI. One of them is 3-pane with a reading mode like Outlook, and the other is 2-pane that navigates like Gmail. http://i.imgur.com/Lt0x7O4.png
Also in 3-pane if you make the message list wide enough it will switch into the compact version: http://i.imgur.com/SaGp9eV.png
(Obviously it will show your real mail data. We have a "screenshot mode" for sharing stuff like this without revealing sensitive information.)
That's worth upvotes for the effort alone, thanks. :)
> Surprisingly >99% of users have a flat hierarchy
You tend to end up with it only after really long-term usage. All the folders with sub-folders I have, I only got after they grew too big to be just one: e.g. "Perl coding stuff" has several subs, as do "Shopping", "Clients", "Computer Game Emails", etc. Some have additional subs of their own. All started out as a single folder though.
> threads, not singular mails
Ok, fair enough.
> labels auto-learning by drag&drop ... AFAIK there is no cross-platform mail client that does this today beyond things like manual Gmail filters
Opera M2 has done it extremely well since ~2000. Google Inbox does it ... eh. On mobile and PC, none, right. The filtering is honestly super easy to implement: it's a Bayesian filter. In older email clients those were used to filter out spam. Opera M2 simply gives each folder one (user-configurable) and runs all the filters on each mail that comes in.
And to be fully honest here, I still use Opera 12 as my main browser, along with its mail client, and don't see myself jumping ship ... anytime, really, since for me the combination of mail client and browser is key. However, for me to respect an email client I expect it to at least be a feature match for Opera M2.
Not interested in Slack. If you had an IRC channel though, I wouldn't need to sacrifice a chicken and a CPU core. :)
> UI
Ok, that looks fine. I personally prefer to have the email below the mail list, but that's not a huge thing. Maybe an option to consider. Screenshot mode is cute. :)
I haven't tried Opera M2-- I'll check it out. Might be a fun hackathon project to train a Bayesian filter on every folder and auto-suggest routing at least.
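For anyone curious what that hackathon project might look like, here is a minimal sketch of the per-folder Bayes idea; the names are made up and none of this is Nylas Mail code. Each folder keeps its own word counts, dragging a mail into a folder trains it, and new mail is routed to the folder with the highest log-posterior (with Laplace smoothing).

    // Hypothetical per-folder naive Bayes sketch -- not from any real mail client.
    type FolderStats = { counts: Map<string, number>; totalWords: number; docs: number };

    class FolderClassifier {
      private folders = new Map<string, FolderStats>();
      private totalDocs = 0;
      private vocab = new Set<string>();

      private tokenize(text: string): string[] {
        return text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean);
      }

      // Called when the user drags a mail into a folder.
      train(folder: string, mailText: string): void {
        const f = this.folders.get(folder) ?? { counts: new Map(), totalWords: 0, docs: 0 };
        for (const w of this.tokenize(mailText)) {
          f.counts.set(w, (f.counts.get(w) ?? 0) + 1);
          f.totalWords++;
          this.vocab.add(w);
        }
        f.docs++;
        this.totalDocs++;
        this.folders.set(folder, f);
      }

      // Suggest a folder for an incoming mail: highest log prior + sum of log likelihoods.
      classify(mailText: string): string | null {
        let best: { folder: string; score: number } | null = null;
        for (const [folder, f] of this.folders) {
          let score = Math.log(f.docs / this.totalDocs);
          for (const w of this.tokenize(mailText)) {
            score += Math.log(((f.counts.get(w) ?? 0) + 1) / (f.totalWords + this.vocab.size));
          }
          if (!best || score > best.score) best = { folder, score };
        }
        return best?.folder ?? null;
      }
    }

    const clf = new FolderClassifier();
    clf.train("Clients", "invoice for the March project attached");
    clf.train("Shopping", "your order has shipped, track your package");
    console.log(clf.classify("please find the invoice for April attached")); // likely "Clients"

Opera M2 presumably does something more refined, but the core of the idea really is this small.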
And simply adding up all the processes will count all the shared memory multiple times (which will greatly inflate the "final number" well beyond what is really being used).
Steam has always used an HTML renderer, even in its first 2006 incarnation. Nowadays it uses the same CEF (Chrome library) that Electron uses.
So Steam was one of the first "Electron" apps. One of the very first was Windows Explorer, as of the shell update that came with Internet Explorer for Windows 95 (included by default in Win98). All the sidebars of Explorer were HTML-based.
Valve initially used a very obscure/niche HTML rendering engine for Steam (2006). The company/website behind it isn't online anymore. An older version of the wiki had some brief info, but all of that has vanished.
It talks about the Steam overlay. Parts of the Steam application were always HTML: first the little-known HTML renderer from a defunct company, then Trident, and later CEF.
>One of the very first was Windows Explorer, as of the shell update that came with Internet Explorer for Windows 95 (included by default in Win98). All the sidebars of Explorer were HTML-based.
That's a stretch; X/SG/HTML user interface APIs are not the same as a whole browser with Javascript VM, full networking and security stack, full-featured/standards-compliant (X)HTML/CSS rendering engine plus support for legacy features, UI assets, multimedia support, sandboxing, resource caching/persistence, and so on.
Win95 with the shell update, and everything up to Windows Me and 2000, had the full Trident engine (same as IE 3-5.5) in the shell (Active Desktop, Explorer bars, etc.). Windows ME/2000 could play audio and video previews in the sidebar (all HTML-based).
WinXP also used a forked Trident engine, with some features removed, for the "Software" dialog and various other things (Windows Help, etc.).
If you basically agree with the parent post, why do you say it's nonsense?
> The underlying issue here is that Electron reduces the barrier to entry for cross-platform development.
Precisely.
> The trade-off — and there is a trade-off — is that Electron applications are shite in comparison with proper native applications.
But native applications are shite in terms of portability.
> But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support
See the part where the original post said "it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies". The costs may not be "prohibitive", but they certainly would multiply effort and resources, and divide profit.
Well, they were exaggerating, but I think they had a point: until recently, on my desktop machines I only cared about having a web browser and a terminal (+ Steam at home).
Lately there are more UI-ish apps I value having everywhere, including the desktop (Spotify, Hipchat, Whatsapp, VSCode). I'd also love a decent cross-platform podcast app.
I think it's clear there's now more demand for certain types of consumer-ish desktop apps (chat apps & music apps especially) than there was a few years ago.
Just as a counter-point, because native app fans often make this point as though it is universally recognized to be a good thing.
I don't want apps to integrate with the host platform. The host platform is not the thing I care about. I use several host platforms in different contexts (I have work and home computers and a smartphone, they all run different OSes) and I would prefer that Slack look like Slack and not have buttons in different places with different UI interactions just because that's the way Reminders.app works.
For me, the web is the host platform I care about. It's the one that I can use anywhere and only have to remember the URL.
I doubt you don't care. These are examples of "integration with the host platform":
* Text selection
* Caret behaviour (e.g. Option-arrows on macOS)
* Clipboard
* Spell check
* Open/save dialogs
* File system access
* Drag/drop
* Window management
* Accessibility (screen reader support etc.)
* Standard right click menus
* Indexing (e.g. Spotlight on macOS)
* etc.
You may be thinking of native UI idioms, which even Apple threw out the window several years ago.
Electron apps are mostly very good at the things in the above list, because the Chromium web renderer has spent years abstracting the mechanisms needed to feel native where it matters.
Non-native toolkits such as Swing and Qt also spent years trying to achieve native look/feel, mostly through emulation and host OS detection, and they still feel pretty crappy compared to Electron apps.
Slack, Spotify and friends do a good job of inventing their own "web but native-feeling" UI. An example of the exact opposite is Google Docs, which still, for all its technological impressiveness, feels like a crummy Swing app trapped in a web page. For example, Google Docs renders its own right-click context menus, which look and feel nothing like native context menus. Google Docs' mini-apps also have a menu bar and a toolbar, but it's part of the host window, so you get two levels of menu bars and toolbars.
Qt done well comes a lot closer to native look and feel than any Electron app I've ever used. The problem is that a minimum-effort Qt app falls into the uncanny valley. Slack is clearly foreign, but it's polished.
While true, Google Docs' collaboration capabilities are fantastic enough that it's worth using, though. We use it all the time to work, anything from tiny scratchpads to big documents.
To do the same in Microsoft Office, you need to dick around with OneDrive and/or SharePoint. The last decade or so, I've only touched Office when someone sends me some .xls or .ppt file and I'm being lazy and just want to view it.
I am referring to Google's realtime collaboration features. Given a URL, you can enter the document and see it being modified in real time, as well as edit, annotate and comment in real time.
It's 2017, this is how we work now. My colleagues (literally) across the world are not going to connect to some shared NFS drive or whatever via VPN to store documents.
What if the shared drive is automatically mounted on your computer when you log on?
One drive for your personal documents only for you. One drive for your team only visible and editable by people in your team. One drive company wide with common stuff.
You can send a link to your colleagues and it just works! That doesn't support simultaneous editing though. That's how things were done historically.
Google Docs is good for sending a document to a bunch of email addresses and letting everyone see/edit it. It's terrible for writing longer documents with advanced formatting, pictures and diagrams.
I often get frustrated when discussions about user interfaces in the context of native/non-native don't distinguish between 'skin' and 'ui'.
For the most part I think what people care about is that things work as they expect, which is primarily 1) placement of UI elements, and 2) interaction with/between these elements. If that's done right, nobody cares if the UI is flat, dark, light, or has a leopard print background.
Now I do understand that there's some overlap in ui/skin concerns, but the distinction still seems crucial to me.
For example, the web is clearly not consistent on the 'skin' of things. But I often know where to find things based on their location (header nav menus, footer contact details, etc.), or their general look (a loupe icon for search, a wide rectangle for inputs, some kind of wide rectangle with a doodad on the right-hand side for a drop-down). Or a combination of placement and look (a search field is an input field in the top right of a typical page).
Even lots of computer-challenged people I know seem to do pretty well in this regard.
But as you say, when it comes to interacting with elements, as long as the developer doesn't override 'native' behavior, a web-solution can be very native.
On the other hand, the vast majority of cross-platform native apps I use often look close to native, but their 'core' (inputs, selects, text fields, and so on) often feels off.
Honestly, I much prefer a non-native looking app that uses native UI elements over an app that has an 'uncanny valley' native look that is slightly off and UI widgets that don't behave natively.
I'm a little confused by your list... In what way are text selection, caret behavior, clipboard, spell check, open/save dialogs, drag/drop, accessibility, and right click menus not available in the browser?
Of your examples, I know Slack fully supports all of them except indexing and accessibility. For those two, I don't know because neither are of concern to me.
No, I don't care about most of those things. I don't know why you didn't believe me when I wrote it the first time :). "Caret behavior"? I use a Mac 8 hours a day and I have no idea what Option-arrow does.
That clipboard shortcuts work the same is the only one that I'm used to enough to be annoyed if it were done differently.
There's something about Mac fans: they are very preoccupied with all of the details of how Macs work. I'm not criticizing that; you like what you like. But you shouldn't be surprised that I don't care about Spotlight indexing.
Cmd+arrows — jump to beginning/end of line or text
Shift+Option+arrows — select words or paragraphs
Option+Backspace — delete one word back
Cmd+A — select all
I actually used all of these except the last one while writing and formatting this comment! Plus clipboard shortcuts.
If you use vim in a terminal 100% of the time, none of those will matter to you because vim invents its own keyboard universe. But if you don't, I don't understand how you can have this opinion.
I get super annoyed by anything that somehow overrides these standard keyboard shortcuts, which happens surprisingly often. Non-native UIs typically have to reimplement them, because modern OSes have made the curious choice of not abstracting them.
I don't think it is a "Mac fans" thing. The exact same principles apply to Windows. Even to Linux, although the keyboard standardization there is next to non-existent. (I don't use Linux desktops often, but when I do, I get really frustrated that the terminals use Ctrl as a meta key instead of Command. So "copy" isn't Cmd+C, it's something like Ctrl+Alt+C.)
I use Cmd+A, that's it. It has a close corollary on every OS.
> If you use vim in a terminal 100% of the time, none of those will matter to you because vim invents its own keyboard universe.
I do, and this is one of the reasons I've never bothered with all of the details and shortcuts that you like.
Vim attempts to be the best possible text editor. It doesn't let "OS conventions" dictate what makes a good text editing experience. What you get from apps sticking to strict OS guidelines is a bunch of average -- not terrible but not inspiring -- applications.
Sorry if this sounds negative, but I'm perplexed why you would even involve yourself in the discussion when you don't have an opinion. It's like asking a bicyclist about how highways should work. They might go "I don't want any highways at all", but that's hardly useful to highway users (environmental concerns notwithstanding).
I accept that not everybody will feel the same way – I do care about my local platform. I want to hit 'space' when I have an item selected and see quick preview. I want my documents indexed in Spotlight, I want to drag-and-drop files between applications, and I want all the rest of the UI niceties I am used to.
1) Why use several host platforms if the platform makes no difference to you?
2) If you know a platform then you should have no problems knowing how to use it.
It would be ridiculous to have an app from Windows behave exactly the same on macOS just because you don't want to remember the difference. You don't want the minimize and maximize buttons on the opposite side from all other Mac apps just because that is how it is on Windows. You don't want copy/paste in Slack to use Ctrl rather than the Command key just because that is what you do on Windows.
3) Whatever time you save from doing everything the same across platforms would be wasted, for anybody not working cross platform who suddenly have to deal with an app with completely non-standard alien behavior. I want my standard mac hot keys to work in a mac app. I want preferences to be in the standard location. I want my color and font selectors to work the way they work all other places. I want drag and drop to work like in all other Mac apps.
We Mac users have seen this again and again. When companies don't give a shit about our platform, it is usually just a question of time before a competitor arrives which does, and knocks the other guy out. You don't survive that long ignoring the platform unless you got some lock-in advantage.
Why else do you think people make a big point of an app being native Cocoa? It is because they know it sells better, because they know customers want the native well integrated experience.
> 1) Why use several host platforms if the platform makes no difference to you?
I don't, I use The Web for 90% of all things I use on a computer. A Chromebook is one of the computers I use the most when not working for precisely this reason.
> Why else do you think people make a big point of an app being native Cocoa? It is because they know it sells better, because they know customers want the native well integrated experience.
I think you're mistaken; the fact that so many companies are switching to Electron is evidence that it doesn't sell better.
> I think you're mistaken; the fact that so many companies are switching to Electron is evidence that it doesn't sell better.
Hold on a bit with that assertion.
First: which apps built on Electron are being sold, period? All the ones I'm aware of are open source, like Atom, or front ends to services, like Slack.
Second: which companies are switching to Electron for development? Again, all the Electron apps I'm familiar with are ones that started out that way. While I'm sure there's probably an app or two out there that began as a native client and then went to "let's just be a web wrapper," I don't know of any big ones offhand. (I've come across companies that have shifted their strategy to using true native applications, however. Facebook famously shifted their mobile strategy from HTML5/JS to native apps some years back, and I know of several iOS apps that were using "write everything in JS, it'll be great!" toolkits that switched to actual native AppKit.)
Third, and admittedly anecdotally, in both my experience and what I've consistently heard and read from people who've had the opportunity to study the UX of both native and "wrapped web" apps, just because users don't use the language of developers doesn't mean they don't notice when apps are slow, resource-hoggy, and behave kinda weirdly compared to other apps. I run a Slack for a writing group that's mostly populated by non-technical people and it is not uncommon for users to complain about Slack "slowing down their machine." Just because people don't know the term "native app" doesn't mean they aren't going to be able to tell "this app over here is nicer to use than that app over there," and that might be because "that app over there" doesn't minimize properly, or has weird menus that put common things in uncommon places, or doesn't do what they expect when they right-click on selected objects.
> The underlying issue here is that Electron reduces the barrier to entry for cross-platform development.
Actually, the incentive is even stronger than this. The ability to visit a website and start using an identical version of the app immediately is just as important - I doubt that Slack and Discord would have had half the success they did if users had to download the application before using it, regardless of them being available on all platforms.
Discord has had the ability to give links to others to join a chat server since its inception, resulting in a two-step process to use it: click the link, type a name. This is miles less of a barrier to entry than: click link, download app, find downloads, install app, run app, create account, join server (rough process for most text/voice apps up to this point).
It's a difficult trade-off, even for me as a user. The fact that I can load Slack on all my devices and in the browser definitely played a role in my adoption of it. The fact that Slack is dog-slow and a memory hog on my MacBook Air, on the other hand, has kept me from using it as a central part of my own work. Similarly, I am actively pushing everyone I communicate with to switch from WhatsApp to Telegram, because the latter has a performant native app.
If I were to build an app where cross-platform support was crucial, I'd probably start with Electron as well because of time and budget concerns, and switch to native if the app 'takes off'. But on the other hand, I can imagine there being serious risks to building out an entire platform that way and having to rebuild it from scratch later on. Maybe there's never budget/time for it, leaving me locked in?
This issue is so furiously divisive, and I feel like it brings out the worst in HN.
I'm from a web dev background. Some of the comments here make me feel like I should be ashamed of that fact.
I can see both sides of this argument (every time it comes up)
As a consumer of apps, I want the leanest, most minimalist, fastest thing going. I want native apps on my devices (If you think slack is a hog on the desktop you should try it on Windows Phone).
But as a developer, I know that electron is a shortcut that means my app will take less time to build. I can take my existing skills, take work I've already done on a WinJS app and publish it on Mac OS, Linux and Win 7. I can spend more time with my family, instead of spending all my evenings learning py+qt, or xamarin, or react native, or whatever the new fangled thing is. And I know people will use it.
Hell, I've even got better odds of pushing an electron app than a native one, as I can submit a pull request and maybe have it appear on https://electron.atom.io/
Obviously, seeing the Electron hate always gives me pause for thought, but at the end of the day it feels like the hatred is from fellow coders (if a dribbling front-end-js writing low-life such as myself can call you writing-assembly-on-a-napkin-while-you-quote-stallman-types fellow coders) and my apps user-base is overwhelmingly non-technical.
Honestly, it seems quite sad that developers today know only one (quite awful) language and have no interest whatsoever in learning new things, because they can just write an application (probably of a quality not much better than the aforementioned language) that can be used everywhere while consuming a huge amount of unneeded resources.
The thing that I like the most in software development is the ability to continuously learn new techniques and new, better, ways of solving problems.
I'm really afraid to think what would have happened if VBScript had won the race as the de facto standard for the web...
...but actually, thinking about it, it probably would not have been that different from today.
We would have had VisualElectron instead of Electron, but with all the same problems, in a slightly worse or slightly better language depending on your point of view.
> This issue is so furiously divisive, and I feel like it brings out the worst in HN. I'm from a web dev background. Some of the comments here make me feel like I should be ashamed of that fact.
I'm an embedded devices programmer and I'm proud of it. Proud to know a little bit about my hardware, and proud to get the most out of it. And I'm ashamed when I find a more efficient way to do something: it's not an optimisation but a bug fix.
Your non-technical users are capable of noticing that a certain application makes their computer slow or drains their battery or doesn't support a certain feature of their platform. It may take them longer to notice, and they won't know why, but that won't stop them from switching if a better alternative comes along.
Delivering high-quality web apps generally requires knowing JavaScript and CSS. Delivering high-quality desktop apps generally requires knowing the tools of that ecosystem. Sometimes we compromise to get things out the door, but we should be honest with ourselves when we do.
I'm honest with myself. I'm not disputing that Electron is not the most optimal way to create a desktop app, I'm saying that it makes desktop development (particularly cross-platform development) faster and easier. I know I could make a more efficient app by going native. And hopefully in time, I will. But for the sake of getting it out the door I'll most likely go the Electron route.
I meant that as a general statement, not a challenge to you specifically. I wish I'd phrased it more kindly, because dismissing criticisms as "hatred" and implying that non-technical users don't care about battery life are the sort of thing I'm advocating against.
Proper this and proper that, and don't get me wrong, I agree, but the parent is correct: these apps wouldn't even exist if it weren't for Electron. Making them desktop apps just wouldn't have been a consideration. Instead they would have just been thrown up on the web; VSCode, for example, would have been some kind of online IDE like Cloud9 or something similar.
If you think this is nonsense, you are out of touch, and I don't mean this as an insult, although I realize there isn't a good way to say that.
Yes. Electron enables some new stuff. New stuff that wouldn't exist otherwise.
Yes. Electron is a bucket of bloat that saddles what should be small, simple apps with enormous amounts of crap that has nothing to do with the app's functionality.
I think that's the real take-away here. There are upsides and downsides; I am pleased that it is easier to release desktop applications, but sad that the performance of them is so much worse.
I wonder if one of the issues is that so many developers have now worked almost exclusively in the 'web' sphere, and aren't aware that native development maybe isn't as difficult as they think.
Perhaps someone with both native and web development can set me straight on this; I have very basic knowledge of the former.
I've done the latter for a long time now, and only now that I'm trying to teach some people do I properly realize how much knowledge is needed to do it right. It's not just arcane knowledge of the quirks of CSS/HTML/JS, but also tooling, build steps, knowledge of 'expected' web behavior, frameworks, libraries, etc. Much of this complexity is still there, and often worse, if you go for vanilla JS and static HTML/CSS (in part because expectations of a web app are higher these days).
I started learning native iOS development and expected things to be much easier and more sensible, but instead I get the impression that it's not that different.
Xcode is apparently a piece of shit, and everyone tells me to avoid Xcode's interface builder. There's tons to learn about how a project is set up, as well as stuff that just isn't a concern on the web, like packaging the app up and submitting it to the App Store (hours just figuring out how to correctly supply icons and get a certificate). More than once, as I'm following an online course, the lecturer will say something like 'this might seem like a logical approach, but DON'T DO THIS and do <unintuitive thing x> instead'.
Again, please correct me if I'm wrong. Perhaps when you put it all together it is significantly easier than doing the equivalent on a web platform. I'm just saying that I expected my initial foray into native, in particular Apple's 'walled garden', to be at least a little more like an actual garden rather than the chaotic, exceptions-for-every-rule (but kinda fun!) scrap-heap of the web I am familiar with.
On the plus side, this is an ideal situation to take advantage of - by creating native and fast applications, it becomes easier (compared to before when everyone was doing it "the hard way") to distinguish your product in a bloated market.
In discussions like this, it's important to try to distinguish between a priori reasons and post hoc justifications.
The real reason that parties like Slack and Spotify choose Electron is that it's easy for the devs they have, who only really know JS/HTML.
It's not that they couldn't hire more or different devs who could do it in a saner way; it's not that it's too expensive or that the ROI isn't good. When we are talking about companies worth hundreds of millions to billions of dollars focusing on a core market, that is just completely laughable post hoc bullshit.
It's that they don't give a shit, either about the user experience or about improving their toolset. They are happy where they are, and see no reason to change.
OneNote is a great example. It has fairly well-regarded native apps for each platform it supports, _and_ a decent web (HTML5) interface.
I think the issue is time and resources. Small teams, like Slack, would like to create a slick experience but they don't have the time.
Companies like Facebook went the other route -- HTML5 on mobile, got their fingers burnt, and went all-in on native. StackOverflow's iOS app appears to have improved a lot too, in v1 it was a thin shell around the web interface.
I hope React Native catches on. I'm not a huge fan of how 'heavy' Electron is myself.
Facebook's HTML5 app was 5 years ago; this example comes up every time, but it's outdated. Technologies like Electron and React Native didn't even exist at the time.
5 years is huge when it comes to web technologies.
Electron is like almost any other cross-platform VM, including Java (which is living under most of the "native" apps on the world's most popular platform), .NET, and so forth. It doesn't share resources between instances but then you also don't get dependency hell (wrong version of Java for application X, etc.)
Subjectively, I'd say that Electron's performance overheads are not bad compared to, say, Smalltalk in the 90s, where one Smalltalk application could bring a fully loaded state of the art workstation to its knees.
And don't get me started about Flash. I had a whole project cancelled after an engineer brought up the CPU meter during a fairly simple animation.
I've never seen anything performant built on top of Smalltalk, including the Xerox Star's built in apps which were developed by universally acknowledged geniuses. Flash's performance issues were legion.
Chrome is widely considered the best current desktop browser in a very competitive space. If it has problems it's definitely in idle power consumption (which indicates wasted idle CPU) but it is used by a hell of a lot of people who have free alternatives.
Those who forget desktop Java will be doomed to repeat it.
For those of you who aren't old enough to have been around, Sun initially pushed Java as a "write once, run everywhere" GUI language. It quickly became clear that Java applications were ugly and terrible everywhere, even in the primitive days of X11R5, when programs used a mixture of Xt, Motif, Qt, GTK, and raw X11 protocol (xv was awesome). Having a Java program for some task was worse than having no program at all, since it would discourage someone from writing a decent native solution.
Fortunately Sun found ways to make money using Java server-side, and Apple helped kill it client-side by not providing it by default.
I would like to offer http://openstreetmap.org as a counter-example: the best editing app is made in Java, and the most used one is in JavaScript. There have been Qt4 editors, but they have never gained traction; I never found performance to be the problem between native and Java.
I think Java's problem was its Unix engineering roots, with too little focus on UI, and perhaps a little too much of the "we don't care about performance" mindset. The latter is the only problem I see with some Electron apps.
The cost argument is indeed a moot point - look at all of the major software companies, and you'll see that actual development is nowhere near their primary expense. They can afford to implement their own programming languages and OSes (and some do), but apparently it's too expensive or too difficult to make native apps for the respective platforms?
It's moot if you're an established company with deep pockets moving slowly. If you're starting from scratch with limited funds, it is definitely not a moot point. Would Slack have been as wildly successful if they had to implement native apps on all 3 platforms + iOS + Android + etc? I'd have a hard time believing so.
You don't have to rebuild your app for all platforms. Desktop widget toolkits (Qt, GTK+, wxWidgets, ...) have bindings out to most languages. If you can run Python, you can run a desktop app on all platforms from a single codebase. Also, all major platforms have C and C++ runtimes; if you write portable C or C++, your codebase should run on any of the big three.
It's ridiculous to pretend that you have to write 3 distinct codebases to get a multiplatform application. Are there 3 Firefoxes? 3 Chromiums? 3 VLCs? Back in the day, applications like Banshee, which was written in C#, were all the rage and were distributed as core parts of GNOME.
Why is there this belief that we have to use JavaScript for multiplatform?
This is a tangent, but CMake is incredible for building cross-platform C++. I had a project a year or two ago that was developed as a .so for Linux, using CMake simply because I like it.
When I went to go build a Windows .dll, I rolled up my sleeves and expected to have a bad day. `cmake -G "Visual Studio 14 2015 Win64"` just went ahead and made a Visual Studio project from my source tree, and that project built and worked first try. I was using all C++14 stuff like std::lock, std::thread, etc, and there wasn't a single #ifdef required in the entire project. Amazing!
If you target the native OS widget set, you must have a separate code base for each platform's native widgets. Things like Swing and Qt don't cut it.
Macintosh users in particular are sensitive to things like the placement of and spacing between UI elements; if anything is "off" from the gold standard set forth by Apple, they will scream and bitch at you because perfect UI consistency is paramount with this crowd.
Web-based apps get something of a pass because they look and feel like Web-based apps (though not always; witness the grousing in this thread about the new Mac Spotify client). But things like Qt and the XUL-based Firefox, which try to look native but miss subtle details, fall into a sort of UI uncanny valley and are roundly rejected by the Mac community.
Overall, that sounds like an argument to exclude Mac support, if anything. But I would suggest that if it's so difficult to get the widgets exact, it's better to use a toolkit that can provide widget portability as well as precise implementations that pass through to native OS windowing.
>If you target the native OS widget set, you must have a separate code base for each platform's native widgets.
I want to clarify the subtle distinction here. If your code's concerns are separated, having to directly provide native widgets on some platforms means a different "codebase" for windowing and widgets only, not for everything. You'd still compile your normal code, and use an #ifdef or equivalent to include the appropriate windowing/widget library.
No, and that's fine. I don't think anyone is going to begrudge a small team the power of developing a 'desktop' application using cross-platform tools like that. It's great that the barrier to developing an application in that way is lowered.
But that doesn't excuse the abuse of that platform. Spotify used to have a pretty convincing native Mac app, which was spiked in favour of their current abomination, and I've watched performance plummet.
Using Electron as a cost-cutting measure is fine, but it's not good for user experience and it's OK to be honest about that.
Electron is great for user experience. Compare it to something like JetBrains' IDEs, which are written in Java and feel much less native than Atom (and are just as resource-hungry).
Point taken – 'issue' is not the correct term here. Perhaps 'cause' is better.
Lowering the barrier to entry is great, I agree. It's awesome that Electron and web technologies can be used to quickly launch proof-of-concept desktop apps. I find it significantly less awesome that companies with hundreds or thousands of engineers continue to use it after the concept has been proven, however!
But these apps aren't "proof-of-concept" apps - they are fully functioning, working cross-platform applications.
So are you saying that these companies which have implemented these apps should instead say "You know what, we have this app developed, and working on the major platforms - but let's instead devote three new teams, one each for Windows, Mac, and Linux - to re-implement this app natively! I'm sure upper management will agree!"
That's not going to work. That's going to be shot down and laughed at. No company is going to re-implement a working cross-platform application over to three separate native contexts, and maintain all three. That just won't happen. It works already. The users probably like it just fine. What benefit to the company will this get them? Nothing - just more costs for maintenance across three platforms.
If native is wanted by the users, likely what would happen - if it happens at all - will be the company says "Ok - we'll make it native for Windows, maybe even Mac - maybe. Linux? Forget it!"
It's not the developers. Developers would love to make native apps, for all the platforms. But developers are limited by the companies they work for, and by the economic reality that all the platforms can't be supported natively; at most, only one or two can - because at that point, with the number of users on those platforms, the additional costs of maintenance and support are pretty much saturated. Adding additional native platforms doesn't just add to those costs, it actually (in theory) multiplies them - because a single person might use the application across multiple platforms. So if they have problems on two or three different native platforms, now there are three different support issues (needing more people to support) - instead of a single complaint for a single platform.
I get it, though - it would be great if these apps were native, and worked on every platform, from now and into the future (even on platforms that don't exist yet!). That's just not going to happen; if native is wanted, then only the most widely used platforms will be supported, and even then, one of those will likely be dropped, and it won't be the one from Redmond.
So - what can be done? I dunno. The concept the author brings up ("use React Native") might be the solution. Or some other interface that abstracts a platform's OS and other native functionality out to an API that is the same for all platforms. But now you have issues with security and other access - which you still have with Electron, but it is more contained and constrained, since it has to go thru the Chrome engine and the various rules/settings browsers have for sandboxing bad actors. Or - you leave it to the user and their operating system (and slim it all down - maybe that's what React Native does; I don't know, I've never used it).
Or - you (that is, the company) just says "Sure - we'll do a native only implementation - for Windows only." - because that's how it usually goes.
> It's not the developers. Developers would love to make native apps, for all the platforms.
I don't think that's true in the case of Slack. They only have JS "hacker" webdevs, and those are notoriously resistant to any change to their comfort zone. C++? WPF? Swift? Cocoa? Scary stuff.
"The easier we can make it for people to get started, the better, isn't it?"
Would you like to use an application that would not have been written without a low barrier to entry? Powerful and easy tools are not necessarily the same thing.
"the easier we can make it for electricians to wire up my house, the better, isn't it?"
Unfortunately, no. Just as a trained electrician will wire up your house so as to not set it on fire as soon as you turn on a lamp, a trained developer will make apps that use a minimal amount of resources.
It's even more hilarious when someone writing Electron apps calls him/herself an engineer.
As a computer programmer in the web field, I find it awfully pretentious when web developers or any other programmers call themselves engineers when they are not legally licensed engineers.
Have you been trained as an engineer, in a country where "engineer" is a very specific thing and is held to higher expectations, including the possibility of having your title taken away by your peers if you do not respect your duty to, first and foremost, your users and/or society, like every country that takes the title of engineer seriously?
If yes, what on earth makes you think writing Electron apps is a good idea?
If no, back to my point.
To quickly answer JetJaguar below you, yes, I am an utter cunt, but being called a typical hackernews one hurts, considering how I can't stand most HNers.
> Have you been trained as an engineer, in a country where "engineer" is a very specific thing and is held to higher expectations, including the possibility of having your title taken away by your peers if you do not respect your duty to, first and foremost, your users and/or society, like every country that takes the title of engineer seriously?
I have been trained as a Software Engineer in Istanbul. I am very sorry that my country doesn't fulfill your expectations.
> If yes, what on earth makes you think writing Electron apps is a good idea?
Engineering is about making trade-offs. There's enough discussion here about what those are for writing Electron apps.
> If no, back to my point.
What, "no"? What was your point? ...that I'm not an engineer? As I said, I am one.
> yes, I am an utter cunt, but being called a typical hackernews one hurts, considering how I can't stand most HNers
I don't think you are a cunt. I think you are uninformed and have concrete opinions based on limited or self-fulfilling-prophecy-boosting experience.
> Have you been trained as an engineer, in a country where "engineer" is a very specific thing and is held to higher expectations, including the possibility of having your title taken away by your peers if you do not respect your duty to, first and foremost, your users and/or society, like every country that takes the title of engineer seriously?
No country trains software engineers in the way you describe.
I think Germany was the country described in the post. Whether it's true to the fullest extent is hard to say, but having worked and socialized with them for years it seems mostly true. Engineering is the country's actual official religion. And I mean that as the highest compliment.
(No offense to any other country's engineers -- the engineer mindset is the same everywhere, but the mindset of the non-engineer in other countries is the distinction. That is, Germany grants engineers a degree of respect, almost reverence, that I've never seen in the US.)
Our company recently purchased some hardware I/O devices that came with software (both made in Germany). Let's just say, I'd much rather buy their cars than their software.
So they should not call themselves engineers - any more than they call themselves doctors or lawyers. They are programmers or software developers. Or potentially software engineers - but certainly not engineers.
Are there standardized licensing bodies for software developers in the EU and Canada? I'd be really interested in learning more about them if so.
I sympathize with your argument, and I think the field is doing a great job right now of demonstrating some of the upsides of a licensing authority, but actually getting one would be bad IMO.
Most software is not life and death, and licensing authorities, like unions, quickly become gatekeepers that work to prevent competition whilst simultaneously enriching themselves through extortionary means (today, these are mostly indirect because everyone is on the look out for them, but they are nevertheless still there). There are good arguments that the AMA and ABA have both seriously contributed to the astronomical expense of their respective services.
>Are there standardized licensing bodies for software developers in the EU and Canada?
For engineers, absolutely.
For software engineers, absolutely, because they're still engineers.
Software development? Go wild, anyone can do it.
>licensing authorities, like unions, quickly become gatekeepers that work to prevent competition whilst simultaneously enriching themselves through extortionary means
That seems like a terribly US-centric view that I keep seeing online. Unions in France, as that's the one example I can be certain of, are in no way gatekeepers, and we are a country where they've been immensely powerful when it comes to influencing the state (whether talking about workers' unions or CEO unions). But you can get any job without being in a union; all they're doing is putting everyone on an equal footing when it comes to negotiations.
Licensing authorities are purely a societal thing. Either you have a numerus clausus, because the end goal is for everyone who passed the selection to have a guaranteed job with good living conditions, or anyone can pass, and good luck everyone. It works in some cases, and doesn't in others.
I've never heard of some governing body in France or England taking away the title of "software engineer" for someone who wrote faulty code. What are these governing bodies?
Maybe I have the wrong impression, but I feel like the author of the original post never wrote anything cross-platform that's sufficiently complex (i.e., more than a "let's fill out a form and press OK" Qt app).
> The trade-off — and there is a trade-off — is that Electron applications are shite in comparison with proper native applications.
There's nothing inherently electron-specific that makes an app "shite" any more than writing one in Qt would. You can write shit in any language, framework or platform. The day someone invents a system that protects us from our own stupidity will be the day humans become obsolete.
> They fail to integrate with the host platform,
Untrue. You can, if you need to, integrate with the host platform by writing a native Node module (a rough sketch follows at the end of this comment); however, it becomes less cross-platform at that point.
> they are slow, they hog memory and drink power.
Yes, web apps use a bit more memory - sometimes considerably more, depending on what you do - and a bit more CPU (and hence battery),
but they are not perceptibly slower, unless you're doing something stupid (in which case the equivalent Qt app would probably suffer in the same way).
> But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support, and I find it difficult to believe that the costs would be prohibitive.
Maybe, maybe not. I think using a cross-platform solution wouldn't be ideal for them (they'd need to either rely on something like Qt, write their own rendering engine, or use something like SFML; all of these are overkill compared to Electron), and the alternative is writing it in different languages for each platform, which would probably inevitably cause the projects to get out of sync in one way or the other over time.
In my opinion this is a hiring problem: they assume that if you know JavaScript you can be deployed on more things, not only on the desktop writing an application.
Can't say I agree. Native apps are often garbage. Discord is another Electron app that I think is actually quite a bit better in most dimensions (attractiveness, functionality, speed) than the native apps it replaces (Mumble, Teamspeak, Skype, etc.).
But is that due to technology choices, or due to more attention being paid to UX? (and I do believe webapps give more flexibility in that regard)
I get the feeling that UX and UI designers for web- and mobile apps are just a different breed than their native desktop equivalents. It might just be my Windows bias though, iirc Windows didn't have a strong / great UX guideline until their current one came around - and I haven't used many modern windows apps yet, the ones I do use are stuck in the 90's with their button bars and such.
That is due to the technologies available on the web being easily more suitable for GUI development than older native tools or desktop frameworks. We're in a similar boat, we made WPF apps. Now we're bringing it to the web and Electron. The things you can do now in React, heck I wouldn't even know where to begin in WPF or Qt. If you look at modern Electron apps like Atom, VSCode, Discord and so on, the things they do and how flexible they are, this would mean so much hardship and pain in older solutions to the point where it's probably not viable any longer.
It's the ease of declaring components and structuring complex applications with it. There is nothing in WPF that is comparable to it unfortunately, except Redux.NET maybe, but XAML is a lost cause against JSX. You'd have to see it, or work with it. The bottom line is, drastically less code, clean structures, things become simple to do.
Like hot module reload where you write your app live. You edit one component, everything else stays in place and maintains state. Or time travel with Redux, where each piece of state is inspectable. You roll back or slide through the apps history and see it literally build and deconstruct itself. That's possible because UI is just a function of state. Same state, same UI.
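A tiny sketch of what "UI is just a function of state" means in practice - the reducer and view here are illustrative, not anyone's real code:

```ts
// State changes only happen through the reducer; the view is a pure mapping
// from state to markup. Same state in, same UI out.
type State = { count: number };
type Action = { type: 'increment' } | { type: 'decrement' };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case 'increment': return { count: state.count + 1 };
    case 'decrement': return { count: state.count - 1 };
  }
}

const view = (state: State) => `<button>Clicked ${state.count} times</button>`;

// "Time travel" is just replaying a recorded list of actions from the
// initial state and re-rendering the view at each step.
const history: Action[] = [{ type: 'increment' }, { type: 'increment' }];
const replayed = history.reduce(reducer, { count: 0 });
console.log(view(replayed)); // <button>Clicked 2 times</button>
```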
It's never going to be possible to fully separate the factors out. With infinite time and effort by the exact same set of devs, I'm willing to grant that electron apps will be worse than native apps. But that hypothetical circumstance (same set of devs, infinite time and effort) does not exist in reality.
Facebook still has not made any desktop apps available, leading me to rely on a hodgepodge of 3rd party, electron-esque shells to provide a desktop experience.
This is despite their mobile apps being the most minutes-used apps of any company. Even then, they tried to go with an Electron-esque approach and only backed off when the performance and UX tradeoffs became unacceptable.
If one of the most profitable companies in the world can't see the business case for supporting a first party desktop ecosystem, it's very hard to believe many other companies have the justification.
This isn't an engineering problem, it's a business problem.
They don't. Java apps have been cross-platform since their inception, .NET is extremely close to being just as versatile, X11 is available on Mac and PC, and Python will run on literally anything. There's a pile of cross-platform options out there for you. What Electron does is reduce the barrier to designing UIs, since you can do it all in HTML and JavaScript, and it's just as easy as hiring the guys who did your website to design it. Because the one thing all those platforms have in common? Aside from .NET (which isn't mature), they all have absolutely hideous windowing systems. Just ugly.
The main challenge, I think is that all of those apps are cross-platform, but also want to keep the same look and feel on all platforms - they're using their own UX patterns inside the container which generally don't match up with their host OSes.
There are some applications that do try this - 1Password, GitHub for Mac/Windows to name a few that come to mind - but they seem to get less love than the cross-platform webapps. It feels like they get a certain amount of dev time before going to minimal maintenance mode.
This reminds me of what's happened with Unity (the game engine). There's a completely false perception that Unity is eating other engines' lunch. It isn't. All that's happening is that game development has been democratized on an absolutely massive scale. So more games are being made. Most of them are bad. Most of the best games are made with either Unreal or, much more commonly, proprietary C++/asm engines.
Spotify, Slack and Atom are no longer hip startups though - they're established applications, and at least two of them have a very lucrative monthly subscription model. They can afford to optimize.
But the user experience is already great, so what's the gain? Electron and the web clearly make a first class app platform on desktop.
EDIT: thanks for the downvotes. I'd love to hear your thoughts on how electron is keeping Slack and Spotify from building a massive business and how their desktop users find the experience so bad they don't use the tools obsessively. Clearly there are things to improve w/ Electron (energy usage), but "terrible experience" is not how I'd describe Spotify and Slack on desktop, and their businesses clearly reflect that.
Moving to a native stack has major tradeoffs, would it 10x their business at least? I strongly believe that answer is "no."
It's a terrible experience to discover my battery life is cut in half because of a cursor animation in VS Code, or to have my laptop heat up and the fans engage because of an animated GIF in Slack.
But it's worse than that - most users don't even know why their battery life is awful. So they blame Apple, or Microsoft, or Dell or whoever. And they just don't use their computers as much, because it's all a bit gross and slow. And that's bad news for our entire industry.
Sorry, I'm confused. Which part of this bad performance was a bug in chrome? The peaking up to 100% usage for a few hours? The shockingly bad performance rendering GIFs? The super high idle CPU usage? The high download size and ram usage?
I used the desktop Slack client all of last year, and these kinds of problems were present the whole time. This isn't some "oh, yeah there was some particularly egregious bug we shipped accidentally in October" thing. Whatever is making the Slack client a bloated ball of crap is much worse than a simple, quickly fixed Chrome bug. It's endemic.
A bug like that is a lot easier to discover and fix when it affects so many people. How many "native" apps have similar battery-hogging bugs that never get fixed because nobody finds out about them?
Anecdotally, I had to cancel spotify due to shitty experience. One of their app updates on my phone left it completely draining my battery in a few hours even when not using the app, with no way to keep it from running in the background other than to uninstall. (ok or possibly rooting my phone to install more recent android with better background process control)
I pay for Spotify but I don't see its poor UI/UX problems. The sections make sense, the color contrast is good, browsing through an artist's albums and songs makes sense, the playback functions bind perfectly to the media buttons on my physical keyboard. What else could I want?
The user experience of Slack as a desktop app is not great. The UI is consistently one of the highest energy users on my machine, despite being one of the worst performing (have you ever tried scrolling up into a conversation's history?)
None that I can think of, really. This is one of the cases where a desktop version of a web app exists and I'm not aware of why.
Spotify makes sense as it can reliably use your file system for storage and thus download songs so you're making fewer network requests. Perhaps Slack could keep a short log to prevent "scrolling up"-related network requests?
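Something along these lines, as a rough sketch - the message shape and the /api/history endpoint are made up:

```ts
// Keep the last N messages per channel locally so a short scroll-back can be
// served without a network round trip.
type Message = { id: string; text: string; ts: number };

const cache = new Map<string, Message[]>();
const MAX_CACHED = 200;

function remember(channel: string, incoming: Message[]): void {
  const log = [...(cache.get(channel) ?? []), ...incoming].slice(-MAX_CACHED);
  cache.set(channel, log);
}

async function history(channel: string, before: number): Promise<Message[]> {
  const local = (cache.get(channel) ?? []).filter(m => m.ts < before);
  if (local.length > 0) return local;                 // served from the short log
  const res = await fetch(`/api/history?channel=${channel}&before=${before}`);
  return res.json();                                  // hypothetical endpoint
}
```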
Choosing a platform isn't really premature optimization -- if you base your project on electron, and electron is too slow, your alternative is to... rewrite your project?
> You know what the alternative to all of these Electron apps coming out is?
A single page web app?
That's all most of these are. Particularly the Slack/Hipchat/Discords of the world.
> People are excited about the Desktop again
If by desktop, you mean "need to be connected to mains power to run for more than 10 minutes", yeah, people are excited for desktops again.
Electron only really makes Javascript developers happy; they don't have to learn another language or framework to make "desktop" apps now. As a consumer, I fail to see the value in having to download large single-page webapps that are equivalent to (or worse than) their online counterparts.
Cross-platform applications like Chrome and Firefox and Steam and Doom and Factorio and... these have existed for years, and so have cross-platform programming languages and frameworks. Java, Tcl, Qt, Flash... Electron is simply the latest in a long line of cross-platform frameworks, which is currently in vogue with developers solely because it uses HTML and JavaScript.
>> As a consumer, I fail to see the value in having to download large single-page webapps that are equivalent to (or worse than) their online counterparts.
I use the Slack app for Windows, and the value for me is that it gets its own easily identifiable presence in my Windows task bar.
I have a (self-inflicted) problem with tab proliferation, and because I rely heavily on Slack, it's just way more convenient to use the self-contained version than to have it running in a tab buried in one of my browser windows.
It doesn't bother me at all that the Slack app is an over glorified web browser running the Slack web client.
> It doesn't bother me at all that the Slack app is an over glorified web browser running the Slack web client.
It kinda should, though, because if the browser component isn't up to date on the regular then it's possible you're looking at (what should be) a simple chat app with a potential RCE.
Discord desktop is Electron. Only our iOS app is React Native. We spend quite a lot of time and effort optimizing performance however, something that's required given the number of large servers with many thousands of concurrent users.
What planet are you on? I've been using great native apps since forever. I don't know what it is but there is a certain segment of the computer user population who don't seem to notice that awesome native apps exists for all sorts of stuff.
"actual dev time per feature (easily 10x the cost)"
Yeah seriously what PLANET ARE YOU ON!!! I can make native app functionality in a fraction of the time it takes to do it in a horrible mess of a language that is JavaScript. Poor type system, shitty REPL, shitty module support, shitty debugger. Shitty GUI design tools. Shitty performance measurement tools.
"And as for Qt, Qt has existed for over two decades"
It is C++, and we got great alternatives on Windows and OS X. Still, if you have to stick to C++, then Qt is probably the most widely used GUI toolkit.
"People are excited about the Desktop again"
I don't know what bubble you live in. This is just a bunch of hipsters who are excited because they can suddenly use their only programming skills to hack on desktop apps.
Let's get real, this is all about the numbers and not the technology. Due to the need for lots of custom-made web apps, there is a huge population of JavaScript developers. These can now be utilized for desktop app development. It isn't like it wasn't possible before, it is just that these guys could not be bothered to learn anything else.
And it isn't all that important. Hardly any of the applications I use daily are Electron. I occasionally use Atom. Other than that, the great apps I use - OmniGraffle, Pages, Pixen, TextMate, Ulysses, Marked 2, Charles, Dash, Kaleidoscope, Tower, Keynote, GitUp, 1Password, Magnet, iBooks, ScreenFlow, Terminal, etc. - none of them are Electron-based.
He's right, and you're right. Web technologies have a low barrier to entry, so it's easy to go from inexperienced to productive.
Advanced native tools with a powerful language in the hands of a master programmer will be much more productive for the reasons he cites. The type system in particular, with a well-worn toolbox of primitives can make you extremely effective.
So, web technologies are more effective in aggregate across the entire industry, but native technologies can be more effective in the hands of an experienced single individual.
>Here's the thing. You know what the alternative to all of these Electron apps coming out is? If your answer is "A native Cocoa/WPF app", you are on another planet, the answer is, "It wouldn't exist at all".
A higher barrier to entry has that effect. Low quality crap doesn't get in as easily.
>Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along, there's basically zero money in it, and it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies.
I find this argument absurd. Desktop (and native mobile) apps are a business worth tens of billions. Ask Microsoft, Apple, Adobe, and thousands upon thousands of smaller businesses (down to SMEs like Panic and one-man shops like Sublime Text).
If anything it's those web-based unicorns that are either merely burning VC money, or selling the user to advertisers -- in both cases, there isn't much money in selling them directly.
And there's nothing about web development that couldn't be achieved just as easily if, instead of all that money spent on browser engines and teams to create things like Dart, there had been some effort towards a nice, cross-platform, mobile and desktop UI library.
It could even have JS bindings for all web devs to use -- just without the web stack crap. React Native is something akin to that, but imagine if it had been going for years, and had more industry support, instead of the nth attempt to put lipstick on the web stack.
Making desktop apps might be a nice lifestyle business for niche segments, but there aren't many (any?) big businesses built on desktop apps any more. Microsoft and Adobe are doing well on the desktop, churning out new versions of apps they've been building for nearly 30 years, but I can't think of any company built in the last 10 years (at least) that came up that way.
If you can't think of any companies that have built significant desktop apps recently, you're just not looking. I don't even get how this is an argument. "Well I haven't personally installed a native app recently, so obviously they're unimportant."
You know what's never going to go out of fashion? Performance. Especially since CPU speeds have stalled in the last decade. You will never get good performance if your design is: "embed an entire browser, and then use a small piece of it".
> Nobody cared about writing Desktop apps before Electron came along, there's basically zero money in it
Wut? So the whole Mac App Store, the MS Office suite, Adobe's apps, and much other stuff were just a dream? Until Electron, desktop was dead? You're bending facts here, and I hope that's because you don't know much about the stuff you're talking about.
Desktop is big; it's just that other things have grown bigger. That doesn't mean desktop is/was ever obsolete or losing its utility.
By "nobody", he means JavaScript "hackers" (his term) and webdevs. How joyful that those types of "devs" are now excited and "care" about desktop and mobile development.
Limiting oneself to JavaScript is quite limiting. Just like a polyglot person has access to way more information, cultures and fun than a monolingual one, a polyglot programmer too has access to more tools, paradigms and fun. Also, hammering everything as if all the stuff was nails is not hacking at all.
And also on Windows, and even on DOS, before Windows.
Googling terms like "shareware" and its history (not the only model - there were, of course, outright paid products too) will show some stories - maybe not a lot, since some of this was before the web and so not archived - but enough to get an idea. Jim Button was a classic example, but there were many other indie devs (many of them one-man shows), who may not have made it so big, but made good money from desktop apps.
Ironically the Mac App Store itself is basically just a WebView in a window, and you're essentially browsing a website (DOM and JS and everything) with extra privileges with the host OS.
Bullshit. Spotify, for example, existed before Electron, and it was very fast. Text editors existed before Electron, and they didn't require 300 MB minimum to run. Same with collaborative chat programs.
Electron is just a way to reuse web developers for desktop development - I.e. a way companies now have to cut development costs.
Chrome was released as a beta on 2 September 2008. The first version of Spotify was released on 7 October 2008, and the download size was around 5 MB. I know because I've been using Spotify since it was invite-only, back in those days.
So according to you Spotify was able to build a full featured client using a beta technology in a month. Impressive.
Vim and Emacs are probably much slimmer, but a comparable GUI editor - Sublime Text - is using 311 megabytes for just one window of a folder that has 200 files in it.
Yeah... I opened a 500KB log file in vim and Notepad++ and they are using 5 MB and 7 MB of RAM, respectively. They both also manage to use no measurable amount of CPU (even to blink the cursor!) unless you interact with the window.
I should say "measurable in task manager" - which reports in increments of 0.1%.
Realistically I would imagine that a native Win32 app (Notepad++) that's totally idle except for message loop and cursor blink requires less than a microsecond of CPU time per wall clock second.
It gets better: the caret in "real" Win32 is not rendered to the framebuffer or any memory-backed surfaces, but is "inlined" in the video output the same way the (hardware) cursor is - it's effectively an old-school console sprite.
But it does mean you're limited in the things you can do with it - when Office 2013 gained that fancy animated caret they had to do it themselves, similarly the caret in Atom and VSCode are both software-based.
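For comparison, a software caret on the web can be done fairly cheaply if you avoid a continuous CSS animation. Purely an illustration, not how Atom or VSCode actually implement theirs:

```ts
// Toggle visibility on a coarse timer: the window wakes up twice a second
// instead of repainting every frame the way a naive CSS animation can.
const caret = document.createElement('span');
caret.textContent = '\u258F'; // thin block character standing in for the caret
document.body.appendChild(caret);

const blink = setInterval(() => {
  caret.style.visibility =
    caret.style.visibility === 'hidden' ? 'visible' : 'hidden';
}, 530);

// Stop blinking when the editor loses focus so an idle window stays idle.
window.addEventListener('blur', () => clearInterval(blink));
```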
But - can I get a copy of Notepad++ on my Linux box? No?
Now - I know there are a ton of other options for Linux, but the thing is, I can't jump from system to system and have the same app with the same experience - even if the app was developed as a native app for all of the platforms, because each has a slightly different native GUI implementation and usage which doesn't translate fully between each.
So now I have to learn and use potentially three or more different programs/apps/whatever to do the same task. Or, I have to remember the quirks for each native implementation.
...and let's be honest: Not many companies out there are going to develop a native version of the same app for all the platforms, because most platforms have a lower number of users than others (in many cases, much lower - depending on the genre of the app in question - like games).
It's an economic tradeoff: We either get a balkanized system where for certain kinds or types of apps we need a particular machine for the native implementation, or we have the case of these larger cross-platform apps that anyone can use on any system, in the same manner everywhere.
Here's another thing - most of these complaints seem to have to do with laptop users. I don't really worry about these issues on a desktop, because there I can have a ton of memory and way more CPU than what I can get in most laptops, and I don't have to worry about battery power.
But for those who are stuck with laptops - maybe they need to bother manufacturers to increase the amount of RAM and CPU available, to handle these larger apps.
It's also funny that I hear people complain how these apps are too big, and use too many resources for editing text or whatnot; you make the case that vim and Notepad++ use only a few megs of memory, and no CPU.
I tend to wonder how well they'd fare on my old TRS-80 at home - you know, I had a full-screen text editor on it that didn't use much CPU (sub-1 MHz) nor memory (less than 64K) - so why can't we return to that?
Honestly - I don't want to; but we can take this argument down the rabbit hole, because the argument that today's stuff is bloated compared to another case can easily be made about today's stuff vs older stuff. Most of the bloat of your "smaller" example comes from abstraction, the same as the "new bloat" - not many years ago a program taking up 10 MB of RAM would have been insane. Today, it's normal and expected.
I daresay that in the very near future, programs taking up several hundred meg to a gig or so will also seem normal, because by then we'll have even better CPUs (with maybe hundreds or thousands of cores) and way more RAM (terabytes).
Some might argue that this is the case today, in the form of cloud computing and SaaS - browser-base stuff, in other words.
I remember a long time ago people discussing that Emacs would never be popular on PCs because nobody had 10MB sitting around just to use a text editor...
Wow, that's bullsh*t. Sorry but what you said was factually incorrect.
What Electron did is it enabled all those web devs (even front end guys) to write "desktop" apps. And trade offs are certainly visible.
I am boycotting Electron. I have zero Electron apps installed, and recommend strongly against it. I hate that web dev mindset that has been pushed into desktop userland. It just doesn't work that way. Write native desktop app yourself and you will see how wrong it is.
Can you expand on your last sentence? How is React + browser rendering any different than Qt rendering? How is HTML/CSS any different from Qt layout XML? How is Qt sockets/threading any different from NodeJS sockets/threading?
They are very, very close to each other. Obviously they are not exactly the same, and I used Qt in this example. But they are not wrong, or completely different.
> How is React + browser rendering any different than Qt rendering?
Qt has far less overhead. A small Qt program can be in the single digit megabytes in both disk and RAM consumption, while using near-zero CPU for the UI.
> How is HTML/CSS any different from Qt layout XML?
My understanding is that Qt's UI descriptions are compiled to C++ ahead of time and the layout is computed in native code, resulting in native performance and overhead on desktop. Browsers' dynamic rendering is relatively intensive and expensive.
> How is Qt sockets/threading any different from NodeJS sockets/threading?
When it comes to I/O I don't think there's a significant cost to using Node/V8. I'm totally fine with CLI applications written in Node but when it comes to UI, a browser is just too heavy.
Haven't written a lot of Qt, but Cocoa/Delphi felt totally different from writing an Electron app GUI.
Writing an Electron app felt like sketching, at least to me. You place one line of code with tags and boom, there's a button. Qt sits somewhere in between, and it still provides a nice native multiplatform environment that could be appealing to web developers.
> Writing an Electron app felt like sketching, at least to me. You place one line of code with tags and boom, there's a button.
Runtime issues aside (bloat, cpu, etc), why is this a bad thing?
I like that I can write a single line of code and "bam!" get a button (or any number of things) to appear. Why would or should I want things to be more difficult for me to develop a piece of software?
Sure, I could do things in some other language - I mean, I know a ton of others. But implementing the same functionality can be a much larger pain in those other languages (and honestly, for app gui development, I haven't found anything that beats the drag-n-drop editor of VB3/4/5/6 - there was something close to it in Visual Studio for C++, but it still required some manual "hook up" with the code for callbacks and event handling and such).
I mean - if I really wanted to do things "right" - why don't I just whip out my text editor and write assembly for whatever CPU I'm targeting? I get full control over everything, then! Best performance! Those guys and their compilers, I tell ya, they don't know what they're missing!
(heh - sometimes they don't - there's a whole generation or more out there who've never coded for a CPU by looking at a datasheet and finding the byte values needed to represent op-codes to hand-assemble a piece of code - sometimes I do miss hacking on the Apple IIe and monitor - CALL -151 ftw)
Anyhow - as someone who's been coding for longer than I really care to say (of course, I kinda gave my age away above!) - I don't want to return to those days; I kinda like living in the future of computing I could only dream about as a kid.
And you really envision the future as webtech? That's sad commentary in and of itself, isn't it? You lament old ways of work in one sentence and glorify the most terrible ecosystem and methodology in the next.
Simplifying in itself is not bad. Improving API with newer and better is not bad. Adding language features to advance productivity is not bad (thinking of C# iterations vs Java). Basing the future on a terrible language, ecosystem, practices and "developer" mindset is very bad.
So do you think that the future of computing is blinking cursor in code text editor consuming 13% of CPU? Or maybe small gif using 30%?
You presented it like the problem was in the writing of the code; it's not - that's Electron's biggest strength. The problem is the bloat it comes with in order to provide you with that experience. Like everything, it's a matter of tradeoffs. For me, it's not worth it, nor do I like writing HTML and CSS.
The problems you get from Electron apps isn't V8, it's Chromium. React-native apps will behave and consume resources much more like a native app than an electron app, hence the name.
I'm not even boycotting it, I just don't use it (or any apps built on it afaik).
Unless they significantly rethink their approach, it's flawed without much room for improvement. Plenty of room for something better to come along. Even if it saw mass adoption, the demand for a better base would invite an alternative to gain traction against a sluggish goliath a la Firefox-vs-IE (or Chrome vs Firefox).
> Nobody cared about writing Desktop apps before Electron came along, there's basically zero money in it
I would actually pay Spotify extra money if they brought back the old, snappy, pre-Electron version of their Windows desktop app.
<edit>They replaced it with something that is simply too heavy to manage larger playlists (except maybe on the max-spec MBPs it's being coded on) just because they could iterate more quickly on some non-essential features which they might have packed inside a WebView, leaving essential features intact and responsive.</edit>
What is there to be excited about? Having your battery drain every time your browser runs some JS app? Wasting CPU and memory that I paid for? Lost productivity waiting hundreds of milliseconds (sometimes seconds) for any UI response? I dread using any technology coming out of an SV startup, and it's rather sad. I want to be excited, but can't get myself to be. As an aside, with all due respect to these startups, calling them successful is premature. They only appear successful because they're burning through someone else's money.
These technologies you diss have been reliable and stable for over a decade. As an embedded developer, my code has to be running 24/7 for months without rebooting, without running out of memory, without crashing. But forget embedded, and imagine a web service. Can you truly honestly say that an app written in electron will give me that reliability? Or should we just accept that if you want to write apps using "modern" tech, you'll just have to deal with it? I personally can't imagine any web framework ever maturing and being stable enough to where you can invest money in it knowing that it will be around 10 years from now. Having your underlying technology platform in a constant flux makes your entire product stack brittle. That is a Big Deal.
> If your answer is "A native Cocoa/WPF app", you are on another planet
If developers weren't so scared of Swift and C#, this wouldn't be a problem.
> (writing Desktop apps) is massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies.
I find the opposite to be almost universally true.
Writing a lightweight native desktop app is almost always cheaper than trying to build a JS-heavy app that has to run well in Mobile, and Tablets, and in a standard web browser, and in Electron fake-native-desktop web browser. Yes, you have separate projects with separate codebases. But two or three small lightweight projects is almost always cheaper than one big codebase with lots of targets, in terms of total cost of ownership.
I think only a special kind of "developer" is afraid of learning languages. A language is, after all, a means to use the frameworks. If you are an experienced developer, moving from one language to another is a matter of hours to days, depending on paradigm changes. Learning frameworks takes much longer, obviously, but with the wealth of information available out there these days (Stack Overflow, message boards, blogs, etc.), "hacking" on new frameworks is also quite easy.
I think all these "massive cost" comments come from sheer ignorance. Those "devs" are frightened at the need to learn a new language and frameworks, overestimate the time required to learn them, look around and only see likewise clueless "devs", frightened of change, and extrapolate some comically high overestimation of cost and time - when in reality, properly written software is much more accessible to join and support than a web "app" with the contemporary "sexy" observer-pattern nonsense splattered all over, coupled with a horrible, horrible dependency management system and a language/framework combo that requires multiple dependencies to perform the most trivial of array loops.
Spot on. I seriously don't understand people who call themselves Java developers/front-end developers/technology-X devs when that isn't just a description of what they're currently working on, but is instead treated as their profession.
Anyone would think it to be ridiculous if a carpenter told you he only works with saws because he is a saw carpenter and that for other kinds of work you should see the plane or sander carpenter. To me saying you're an [insert tech here] developer sounds the same...
> Anyone would think it to be ridiculous if a carpenter told you he only works with saws because he is a saw carpenter and that for other kinds of work you should see the plane or sander carpenter.
...and yet there are specialties within like "framer" and "finisher", among others. Then you have interior workers like specialized custom cabinet makers, flooring specialist, drywallers, painters, etc.
What I'm trying to say here is that even in "carpentry" for putting up a house, there are numerous specialties.
> I think all these "massive cost" comments come from sheer ignorance.
I can't speak for others, but for me, the "massive costs" isn't about having to learn another language or framework. It's instead the massive costs to my employer. It might even be a massive cost to me as a single developer.
By developing a cross-platform app using a single set of easy-to-use tools, a large audience of users can be gained, that would otherwise be prohibitively expensive to support if native-only was the mantra. Instead, that application would have to be developed for only one, maybe two of the "major" platforms (and guess which platform it wouldn't be developed for - that would be the platform that I like most).
Supporting and maintaining a single codebase for one platform is a monumental task for any company, let alone a single indie developer. Supporting and maintaining multiple codebases for multiple platforms can be debilitating for a company, let alone a single developer.
I lived and played in those times; back in the "second gen" if you will of the microcomputer days - you had games and apps by different companies, and developers. In most cases, a game or app was only developed for one of those machines (usually the Apple IIe or the C=64, sometimes both - maybe an Atari too), but the other systems were all considered "second tier" by most developers. You might get a port of a game or app - but most times, you had to settle for something else, or buy a second system (ha! only if you had real money! I look back on the costs of those systems back then, and wonder how my parents ever managed it).
There's a reason you see a lot less of that going on today; it isn't because devs are frightened of learning a new language or framework.
meaning he works for a company that heavily leveraged an existing protocol, IRC, for which there are already a metric fuckton of native clients and libraries for every platform under the sun.
Implementing their shitty web client was undoubtedly far more work than supporting a handful of native clients.
With my company, our users have always begged for a Mac OS X and iOS app. We never provided it, because we have zero internal expertise with any of the technologies involved. We could contract, or hire for that specific purpose, but the moment that team member was gone the project would be dead and out of sync with the rest of the codebase.
The codebase isn't lightweight to begin with, and duplicating it for a native app that maybe only one person could maintain was a non-starter.
What'd be a good example of the kind of lightweight project that can reasonably be duplicated for Native Mobile, Native Tablet, Native Desktop, & Web Browsers?
> We never provided it, because we have zero-internal expertise with any of the technologies involved.
Couldn't your team learn the technologies? These days there is an abundance of resources available (online tutorials, books, bootcamps, etc), especially for ecosystems as popular as the Appleverse.
I can't pretend to know your situation, but as a reference point we had an iOS project come up at work a couple years ago and I was able to pick up Objective-C and the Apple libs in a few weeks while still being productive on other projects. I followed Apple's official tutorials[0] and built some toy apps, then learned the rest as I went on the real project. This is after having never owned an iDevice and doing mostly web and devops work in scripting languages for many years prior. A few peers of various experience levels were able to ramp up in about the same amount of time, so I'm not special.
> With my company, our users have always begged for a Mac OS X and iOS app. We never provided it, because we have zero internal expertise with any of the technologies involved. We could contract, or hire for that specific purpose, but the moment that team member was gone the project would be dead and out of sync with the rest of the codebase.
I don't see how that is any different. If your Windows team members all left, that project would be dead or out-of-sync too. Wouldn't you replace valued team members who leave, in both cases? Or is this a concern that you won't be able to find developers willing to work with OSX and/or iOS tech?
> With my company, our users have always begged for a Mac OS X and iOS app.
Repeating this again because this should be telling. If you're getting feedback begging for an OS X and iOS app, there are probably a number of really good reasons for that.
> What'd be a good example of the kind of lightweight project that can reasonably be duplicated for Native Mobile, Native Tablet, Native Desktop, & Web Browsers?
Spotify, Slack, Twitter, Facebook, any streaming video service (Hulu, Netflix, Amazon Prime Video, HBO GO), etc.
Note that I'm not using "lightweight" to mean "small weekend project", but to mean "less complex than the codebase needed to reproduce these features in a browser or Electron browser".
--
If you are truly a small company / startup, and you truly have to support all platforms with a small team, then sure Electron makes sense despite all the drawbacks. I totally understand that.
But I usually hear this excuse from big companies, that still want to perceive themselves as small, but aren't. Slack is a billion dollar company, they are not a small business / small startup. If a company is large enough to have more than two people working in HR full time, then they are probably big enough to do this stuff right. "We're a small team" simply isn't true for them.
No one is scared of C# and Swift, XAML and so on, but these technologies aren't even close to what you have available on the web. React and Redux make apps possible that you wouldn't get with older technologies, not even with 10 times the effort and code. The animation possibilities, transitions, the flexibility overall, the ecosystem, debugging capabilities, hot module reload - if you have worked with it, you will not want to go back. Case in point, modern apps like VSCode, Atom, Hyperterm, Discord do things that would be very hard for a native app to mimic, and they do that with absolute ease.
I agree though that Electron is a problem. React Native will probably be the best way forward. Microsoft has recently taken over RN-Windows and ReactXP, an RN-web clone. RN runs natively, doesn't need a browser, while being able to tap into the JS ecosystem.
Redux and co are still the same pattern fundamentally, it's just that the implementing code is spread throughout your stack, instead of in an "Observable" implementation. That combined with an optimisation allowing identity and state equality to be conflated.
I don't know what you're saying, this is the complete opposite. Redux was made to centralize, because observables are literally spread throughout the stack. The entire logic is combined in composeable containers, later wrapped in a single store. Flux pattern is also completely different in how it works, there's almost no similarity whatsoever.
That is the whole point of immutable state, and it is very effective because parent props are also notified through shallow checks, while observables have to bubble up and keep track of their relations.
You also seem to agree now that both are completely different.
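To illustrate the shallow-check point with a sketch (the types are made up): a reducer returns new objects only for the branches that changed, so a reference comparison is enough to decide what needs to re-render.

```ts
type Todo = { id: number; done: boolean };
type State = { todos: Todo[]; filter: string };

// Immutable update: untouched branches keep their object identity.
function toggle(state: State, id: number): State {
  return {
    ...state,
    todos: state.todos.map(t => (t.id === id ? { ...t, done: !t.done } : t)),
  };
}

const prev: State = { todos: [{ id: 1, done: false }], filter: 'all' };
const next = toggle(prev, 1);

console.log(next.filter === prev.filter); // true  -> filter UI can skip re-rendering
console.log(next.todos === prev.todos);   // false -> only the todo list re-renders
```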
That's not MVC. That's the MVC Facebook attempts to show in a ridiculous presentation that finally convinced me how much they had to misinform to get these silly concepts any legitimacy.
See the wider picture, how flexible they are and how fast they grow. Their freedom in making things possible that would be very hard to realize.
The shell that I'm using, for instance, Hyperterm: it does things no other shell can do, and they've had more than a decade to evolve. JS doesn't have a problem displaying json as json, displaying webpages on link clicks, moving up git logs with my trackpad, adding tabs with a plugin that contains a few lines of code and a little css, ... it just comes easy to JS.
The same flexibility you see in Atom, VSC and the others.
> JS doesn't have a problem displaying json as json...
You know what else doesn't have a problem doing this? Literally any other programming language or tool that I've ever used to look at/edit Json.
> Hyperterm, it does things no other shell can do...display webpages on link clicks moving up git logs with my trackpad, adding tabs with a plugin that contains a few lines of code and a little css, ... it just comes easy to JS....
And pretty much any other terminal built with flexibility and extensibility in mind (even the base terminal in Linux can handle links lol, that's definitely not exclusive to JS). ZSH springs to mind, with the benefit of being written in C, so that, you know, it's actually fast...
> The same flexibility you see in Atom, VSC and the others.
Ah yes, including the freedom to not open any binary file, or any file >2mb in size!
> React native will probably be the best way forward. ... RN runs natively, doesn't need a browser, while being able to tap into the JS eco system.
Similar arguments were made when Java first came on the scene. "Write once, run anywhere." [1] Years later, here we are again with "native" Javascript libraries. I'm sure in another 20 years or so we'll be rehashing this again, just with some other language.
Java Swing wasn't native, though. SWT is more comparable.
React Native isn't "native" (in quotes), it's native... it uses the actual widgets provided by the host OS. You use similar techniques to create your UI for each platform, but you do generally need to create separate UI for each platform with React Native.
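A minimal sketch of what that looks like (standard react-native imports; the component itself is illustrative):

```tsx
import React from 'react';
import { Text, View } from 'react-native';

// On iOS this renders the platform's own view/text widgets, on Android the
// Android ones - only the component description is shared, not a browser DOM.
export function Greeting({ name }: { name: string }) {
  return (
    <View style={{ padding: 16 }}>
      <Text>Hello, {name}</Text>
    </View>
  );
}
```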
> these technologies aren't even close to what you have available on the web
Yeah, they're that much better, that we grumpy old non-JS programmers keep wondering what the big deal is when somebody comes along and rewrites something that existed 20 years ago.
That isn't the point. JS does something the others can't. It has flexibility and a community that's larger than anything you know, plain and simple. Btw, I am a grumpy non-JS developer, or used to be. Started with assembler, C/C++, then C#. That sums up most of my professional career. I was wrong to judge JS too fast, that's all I'm saying. Many of my colleagues are still grumpy about it; they'll get over it or stay behind. If they do move on they will see what they're missing soon enough.
There have been so many desktop apps released since 2007-2012 that I really don't know where to start:
- Everything GNOME does
- Everything MS does
- Everything KDE does
- All the apps in Mac OS
- Every browser
- Emulators
- Office suites
- Messaging apps
- Adobe's Creative Suite
- Blender
- All the content creation apps for video games
- Video games
- Editors
- IDEs
- Audio production suites
Web developers are excited about the desktop again, because to them the desktop is some kind of new frontier that Electron has opened up to them. I get that, but please be aware that just because something is new to you doesn't mean it's actually new.
> Nobody cared about writing Desktop apps before Electron came along, there's basically zero money in it, and it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies.
Clearly Java never existed. I've been experimenting with Swing lately because Java has a library I'm interested in using. It's a breath of fresh air compared to HTML/CSS/JS. It may not be proper "native" development, but I don't have to deal with <div> hell. It has proper layout management instead of, what, 4 quirky CSS layout styles: float, table-, flex-, and now grid-. Sure, if you want to hack something together, Electron may be quicker short term. But I question whether it will actually be cheaper in the long run for non-trivial applications.
> It may not be proper "native" development, but I don't have to deal with <div> hell. It has proper layout management
If you want something that's much closer to proper native, have a look at SWT. It's faster than Swing and actually looks/feels native since it's actually using native widgets. It's really a shame that the poster child application, Eclipse, is so bloated and slow, because people attribute that slowness to SWT when it's really primarily the plugin architecture that's responsible. And I actually find the more spartan developer interface to be more pleasant than Swing, so to me it's win-win. Swing will always have an uncanny valley, SWT has never had that issue, yet still allows you to write for the desktop in only Java.
I agree. I've been developing Swing and SWT applications for 15 years. SWT has a simple, lightweight API that just works. Swing is so bloated and complicated that you will be tearing your hair out to perform simple tasks.
I feel that Sun's choice of Swing was the main reason Java failed on the desktop.
Thanks for the tip. I'll have to look into SWT. I only chose Swing because it was included by default, but I would prefer to have the actual native look on each OS.
Layout management: I do wonder sometimes how much awareness the HTML/CSS/JS community has about how layout management is handled in things like Java's Swing or Apple's Auto Layout - or do people think that "desktop style" UI layout management is the "old way of doing things", and obviously HTML/CSS/JS is much more modern and therefore must be better?
I don't think it is better, to be honest, and so I also wonder why people aren't crying out for decent layout management for the web - is it just lack of awareness of how good things could be, or...?
> I also wonder why people aren't crying out for decent layout management for the web
They have been, and that's why today there are things like flexbox[0] and grid layout[1]. It just took a long time for the browser vendors/standards bodies to be convinced and then spec these things out to work within the confines of the existing layout models.
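For example, a multi-column app layout that used to take float hacks can now be declared directly with grid; a quick sketch (the element id and column sizes are made up):

```ts
// Sidebar / content / detail panel, declared as a grid on an assumed #app element.
const app = document.getElementById('app')!;
app.style.display = 'grid';
app.style.gridTemplateColumns = '200px 1fr 240px';
app.style.gap = '8px';
```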
Can't help thinking that Visual Basic had a better layout manager than CSS. I hate fixing CSS issues with a passion. I watch a designer whip up something in Illustrator and can't help thinking that CSS & HTML are a primitive way to do layouts.
Smalltalk had a GUI based approach to design in the 70s on a 2 mb disk, and most web layout is generating html & css, then tweaking that stuff. It doesn't feel like web layout is a progression.
Interestingly, Visual Basic did it all with absolute (pixel-by-pixel) positioning. Some (not all, but some) of the issue is that we have moved forward in what we expect out of UIs.
(The rest, of course, is that the Web front-end is a shitshow of epic proportions)
The largest barrier to good layout on the web is that you can't measure text. All you can do is measure a node that has a specific text, which takes a long time and includes a reflow, which means either a) abysmal performance, or b) an entirely async layout algorithm, which would be much more complicated.
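The usual workaround is to measure text off-DOM with a canvas 2D context, something like this sketch (the font string is illustrative):

```ts
// measureText() doesn't touch the document, so there's no layout or reflow -
// but it only gives you advance widths, not full line-breaking metrics.
const ctx = document.createElement('canvas').getContext('2d')!;

function measureWidth(text: string, font = '14px Menlo'): number {
  ctx.font = font;
  return ctx.measureText(text).width;
}

console.log(measureWidth('hello world'));
```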
Java isn't new and shiny anymore. Having been around for the great Java hype of the 90s, it's crazy that Javascript has now replaced it. Nobody would have predicted that back then.
I really hate Netscape & Sun for causing that confusion, and making Javascript look like Java on the surface, particularly as it obscured the prototypal model of JS and made anonymous functions clunky until the arrow functions of ES6. But it was good marketing.
If, instead of Swing, you use JavaFX (which comes included with Java) your UI will look nicer. It even has an embedded WebKit browser that can easily interact with JVM code. This means you can program your UI in any HTML/CSS/JS framework you want, and any other logic in any JVM language (Java, Scala, Groovy, Clojure, Kotlin, Ceylon, Frege, etc.)
I can't speak for large companies trying to make the call on Electron versus multiple OS-specific apps, but for indie devs this is absolutely critical.
Imagine being a small shop of 1 person (or even 5) and having to learn desktop programming languages, conventions, and native APIs for the web, Mac, Windows, and Linux. Then on top of that to develop and maintain a product that moves at the pace of customer feedback.
That's why Electron is powerful. I could not write and maintain the Hemingway App for Mac and Windows without Electron. My tens of thousands of users would not get to use that software _at all_.
Off topic but I just checked out your app. Very clever! I'd love to see a world where it is integrated tightly with iOS. Writing concisely is especially useful on mobile. Also I dig the demo. Great example of why developing electron apps can be really beneficial.
Thanks for your insight. The part that confuses me as a non-native developer is the folks in this thread who seem to be implying that there are only marginal amounts of additional time needed to release cross platform, fully native apps.
I think the implication is as much that developers who label themselves like you have as "non-native" or "JS" or "web" developers don't have the perspective to make an informed decision on the matter.
Many of the rest of us know how to build React SPAs (and have been doing it long enough to have used Angular 1, Backbone, Sproutcore, ...) but "JS + Framework of the Week" is just one of the many tools we could turn to when building a UI. Many of us have also used cross-platform native libraries like QT or GTK, or platform-specific toolkits like Cocoa or .NET.
Basically, if the only tool you have is a hammer... it might be time to learn to use some new tools.
Are you saying that building a web version and two (or five) native versions of an application is equivalent to building a single JS codebase and having it just run? It certainly sounds wonderful to learn new tools, but how does that stack up from the engineering (cost-benefit) side?
> Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along, there's basically zero money in it, and it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies.
Come on man, it's a little too easy, but NOBODY? People write desktop apps all the time. What you mean is "nobody I know", which probably says more about your social circle.
One more "Big Deal" is that both Chrome and Electron are not accessible by default like native apps and take performance hits when VoiceOver is enabled.
I'm not "excited" that macOS is flooded with apps that have poor accessibility experience while Apple itself is fanatical about delivering first-class accessibility experience - like Safari having accessibility always enabled.
The other thing I find ironic about this whole article is that it's perfectly easy to only use Slack within Chrome as a normal web app. And the UI is almost exactly the same because it's nearly all the same code.
So if the Electron bloat is such an issue for you, just hit Slack on the web directly.
>So if the Electron bloat is such an issue for you, just hit Slack on the web directly.
That's even more horrible, I've tried it. Chrome uses its own notifications on macOS instead of integrating with the system's notification manager, and whenever one of these ugly, poorly animated things pops up, I can search through my 100 open browser tabs to find the damn chat. Millions of flamewars were fought about what's the best window manager and we're ending up with a single window with a tab bar? Are you kids serious?
So just run it in a dedicated chrome window so you can alt+tab to it directly?
This was the main reason I used the electron version of slack and tbh it makes little difference if I run a dedicated Firefox window or the app since my laptop is less than a year old and could run a few hundred instances of either without sweating..
I have Canary installed for this very reason. My mail app, Slack, Jira and any 'productivity' sites live on that, whilst everything else stays on good old Chrome. Switching from one context to another then becomes very straightforward.
Safari just doesn't work for me.
It's my favorite browser UI-wise; every time I get a new machine I first stick with Safari, but after a while under heavy use I notice all the problems with Safari getting slow, doing weird things, etc., and I find myself on the Chrome download page again :(
While I was on a Mac I got an extra few hours of battery life from using Safari.
I'm on Firefox these days; with noscript (once you go through the initial few days of pain whitelisting things) it works brilliantly; I would suggest you give it another chance..
Unless you try to book flights from britishairways though.
People are making all kinds of great new apps, and more often than not, they come out on all three platforms. People are excited about the Desktop again...
You could have expressed pretty much the same sentiment about Java ~20 years ago. The trade-offs (high resource consumption, additional complexity, but less demanding of developer time) and even the fundamentals of the approach (an additional abstraction layer that works as a VM, APIs oriented around all the latest thinking) were also very similar.
Java is used for a lot of things, but mostly not as the basis of mass-market cross-platform applications the way it was originally intended.
> Here's the thing. You know what the alternative to all of these Electron apps coming out is? If your answer is "A native Cocoa/WPF app", you are on another planet, the answer is, "It wouldn't exist at all".
This comment is completely wrong. You know what Slack was before it was an Electron app? It was a Cocoa app. Sure, it wrapped a WebView, but given Slack's nature that's not a surprise. Regardless, the Cocoa Slack app, while not great, was still much better than the Electron crap we have now.
I don't know if you're simply ignorant of the fact that the app this article is focused on actually was a native desktop app before it switched to Electron, or if you're just ignoring that pretty important information in order to try and make your point, regardless of whether or not it's true.
Oh interesting. So why'd they even try to make the argument at all? Are they a new hire and simply aren't aware that Slack had a native app until recently?
What? Qt is very popular and I use several Qt apps. Same for GTK. And there is MUCH more desktop software out there than functional web apps. I don't know what kind of Windows 10 fantasy land you live in but there has long been an answer to desktop apps.
What Electron has done is allow people with a particular skillset to apply it to a domain where it is both unnecessary and wasteful. These "developers" would be better off generalizing their understanding of software and UI design to the point where the tools used are irrelevant.
> finding specialist developers who know these dated technologies
WPF is newer than Javascript and the DOM. Using native desktop GUI frameworks isn't some sort of ancient dark art. It's easier than web development!
>if its massive "Beatles walking off the plane" moment hasn't happened by then, sorry, it's not gonna.
The same thing applies to the quality and performance of javascript apps. Decent performance and feature parity is always just about to happen when the next iteration of the standards is implemented, but it never arrives.
Wasn't this basically true of Java in 1995? Everyone was excited about finally being able to easily write cross-platform apps. And then there was Flash, with which anyone willing to pay for the designer could build apps that could run on any platform. So now we're on to the third thing.
> You know what the alternative to all of these Electron apps coming out is? If your answer is "A native Cocoa/WPF app", you are on another planet, the answer is, "It wouldn't exist at all".
Agree fully with this. I find the complaints about how appalling it is that apps use a few extra hundred MB of RAM and disk space really tiresome and impractical when laptops and desktops typically have lots of RAM and disk space now. Many of these cross-platform apps either wouldn't exist or would exist on only one platform if everyone insisted on native apps, especially apps made by small teams or individuals.
I'm not saying Electron is perfect but easy cross platform support is a huge benefit and one of many. You also have the huge JavaScript community, pool of developers and available libraries to draw from as well as being able to share code between your desktop app and web app. Tossing that all away to save resources that most users aren't even going to notice isn't a good business decision in my opinion.
"Everyone has Jesus of memory, or doesn't matter if we chew through a bunch of it because we were to lazy to use the right tool for the job".
Just because it's there doesn't mean you need to use it all, that's like spending your whole paycheck at once because "you'll get another one in 2 weeks!".
Any application that does the same job with less resources and the same or greater performance is unequivocally, objectively better.
Also, the benefits of claims like easier and faster updates are rendered pretty moot when developers then spend that time changing things for the sake of it: the Spotify app feels like it has some pointless, un-requested UI change every month. And VSCode can't even update cleanly, despite that being one of the lauded benefits...
> it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies
What are you basing this on? There are plenty of iOS developers, and developing for Mac vs iOS is more similar than pretty much any two other dev platforms you can name.
The weird thing is I'm almost certain you can build the exact same thing with e.g. python+Qt, do literally all the drawing using python code in custom widgets, and the result would still be nicer to use than the status quo. I'm not advocating switching to the PyQt stack, I just want the electron stack to be saner, in particular
1) Electron is obviously a system library now. Treat it as such. Install ONE copy of electron per version, not ten private copies of electron in npm. It should be simple to integrate with at least Windows and OSX install systems - instead of downloading the full electron module, download an installer that installs the required version system-wide. This can be done as a step during installation like desktop programs have been installing required dependencies since the dawn of time.
2) Find some way to make it slimmer. Maybe not every app needs the entirety of the browser loaded. Maybe we can do lazy loading of certain components? Maybe optimise the most used critical paths, and provide tools to developers (if those don't already exist) to show exactly what is causing all those wakeups?
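For what it's worth, some of that introspection seems to exist already: I believe Electron's main process can call app.getAppMetrics() and get per-process CPU figures, including idle wakeups. A rough sketch (field names from memory, so treat the exact shape as an assumption that may vary by version):

    // main process; logs what each Chromium process is doing every 5s
    const { app } = require('electron');

    app.on('ready', () => {
      setInterval(() => {
        for (const proc of app.getAppMetrics()) {
          // one entry per process: Browser, GPU, Tab/renderer, ...
          console.log(
            proc.type + ' (pid ' + proc.pid + '): ' +
            proc.cpu.percentCPUUsage.toFixed(1) + '% CPU, ' +
            proc.cpu.idleWakeupsPerSecond + ' idle wakeups/s'
          );
        }
      }, 5000);
    });

What's missing is exactly what point 2 asks for: tooling that surfaces this by default and points at the offending timer or animation, rather than leaving it to each app author to wire up.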
Moving off Electron is not an option any more, it's just such an easy and familiar platform to develop for, it's not going anywhere. Maybe we should accept it as the victor of the cross-platform desktop application toolkit and work to make it better. It's shitty, but it's the right kind of easy, like PHP.
I agree, and the author fails to mention the tradeoff between RAM/disk space and development time.
>>all you web devs: Go learn C or Rust or something. Your program runs on a computer.
I don't think that is a realistic request.
When the cost of Slack's RAM/disk space usage starts affecting their profit, then they can/will take steps to re-write the application in 'native' code. Until then, it's premature optimization?
C is not that hard! If you're a programmer, and you don't know C, you're not good at your craft. I realize that sounds incredibly harsh but it's true: your job is to make a computer do a task efficiently. If you can't even read the language that makes that possible, you don't know what you're doing.
I could re-write your argument by replacing "C" with "x86 assembler" - but I won't. However, the argument could still stand, but I'd hesitate at saying it's a wholly valid argument.
My own prejudice aside - I honestly do think that if you don't have an understanding of how a CPU works at this very low level - maybe even lower - you don't know what you're doing; that, and I have done hand-assembled x86, 6502, and 6809 code in the past...
It's a different time; knowledge of lower level languages isn't needed today to be a successful software developer who knows their craft. And I recognize my prejudices as such, so while I hold them, I understand that they really don't apply. Would I like it if these new guys all knew this kind of stuff? Well - yeah; but the same could be said of me by guys long gone and dead as to why I don't know how to wire up an analogue computer to solve a calculus problem, or why I can't wire up a plugboard to compute something on an IBM 401 or such. That doesn't make them a better developer, nor me or anyone else a worse one. We're just using different technologies.
That said, I do think developers should branch out, and at least have an understanding of other languages; maybe C/C++ - but even Java, Python, GoLang, Rust, etc - all could be just as useful to know.
Today - career-wise - I am a javascript/node.js developer - before that, I mainly used PHP. At home, I play with Perl, C/C++, and Python (these latter two more recently being involved in machine learning, TensorFlow, and Keras). Occasionally, I'll get a bit nostalgic and break out BASIC (my favorite today is QB64). Can't say I've done any recent assembler coding - but it doesn't scare me, either. Just another way of communicating with the machine.
I get that everyone has their specialty -- I'm not trying to imply that someone who writes javascript for a living is a lesser coder, or that everything should be native. My point is that there are fundamentals to this craft, and all these web developers acting like C is exotic knowledge baffles me. With regard to your first point though, I actually do think knowing some form of assembly is also super useful. I just tend to focus on C since, as is well known, it's basically portable assembly.
>If you're a programmer, and you don't know C, you're not good at your craft.
Even as someone who myself knows (more or less) C, this is quite an elitist statement, and it doesn't do justice to, say, Lisp and people who use functional languages. The goal of programming isn't necessarily efficiency.
How is expecting competency elitist? There's plenty of free resources to learn C. I learned C when I was 14.
I'm not trying to claim low level knowledge of the machine is inherently better than high level knowledge. What I am claiming is you need both, and knowing how this stuff works isn't a charming novelty reserved for the curious.
Premature optimization used to mean optimizing something that doesn't cause any noticeable performance degradation.
I think it's a mistake to redefine it as optimization that doesn't affect profits. Companies often get away with painfully broken software for a long time because they benefit from some other moat (e.g banks). But they're setting themselves up for disruption by doing that.
Also, profits are not the only purpose in life if you value your craft. (I'm not denying that you have to be able to afford high standards and economics does play a legitimate role)
Hm? How come? Is there a mental barrier here that I'm not understanding? (Are C and Rust "hard" languages?) I taught myself to code on my parents' hand-me-down 100 MHz Pentium in the early 2000s, and I started with the K&R C book. Python and C were my swiss army knives at the time. I really can't see this stuff being super difficult; I mean, I was a teenager and I had no understanding of algorithms.
Why would native apps be so expensive? Are they 10x slower to write? Isn't the tooling etc for making a WPF app at least as good as it is for electron? Obviously if you have a web app or node backend already you might have code you can re use but that's not the case generally.
Just being able to not use JS should be a pretty big win (although .ts is also available for electron, I suppose). Not to mention that libraries etc. are specifically created for desktop and not web (e.g. the XAML layout model vs the DOM). Or are they expensive because traditional desktop devs are hard to find and/or cost more?
I think it's sad that both Apple and Microsoft dropped the ball on desktop because they are only focusing on funneling users to their stores. If the Microsoft of 2017 (with open source .NET and Linux as a focus) had been around a few years ago when WPF was invented, they could have made a decent cross platform toolkit now that didn't come from html/CSS/js
If more developers had a second look at FreePascal/Lazarus [1], they'd realize they could do pretty much the same thing, but with native compiled speed, minimal memory usage (no bundled JREs) on all the desktop platforms.
Its graphical GUI builder is out of this world, and it works flawlessly on all three desktop platforms. A utility tool I wrote for some researchers in my organization [2] even compiled on Windows without a single code change after I finished coding it on Ubuntu.
And about the language ... I finished my utility tool project in 80 hours, from initial concept to happy customer, while learning the language in the process.
It is really such a shame that people have such strong aversions to Pascal, for no good reason other than subjective feelings.
Bologna: nothing truly performant uses Javascript because Javascript can't get at hardware accelerators in their full glory. Apple's macOS and iOS are themselves desktop apps, in the latter case for a pocketable desktop, and they resolutely don't rely on the very constrained subset of 50 years of engineering called Javascript.
Have you seen how many messages you can pump through and transform on a node process? Yes, there is an abstraction and that's a cost, but V8 performance is "good enough" for almost everything.
Maybe not for a coin miner or a graphics engine; but CPU is never really the constraint for applications such as those inside electron containers. Where you see them chewing 100% of a core, those are bugs.
Also, while the language has flaws, modern JS VMs are pretty bleeding edge...
Who gives a shit. The point of writing native apps isn't just to check it off your bucket list - native apps can have advantages over web apps if done right. Those advantages include better performance, integration with native UI components, integration with native APIs, and integration with the file system. The only one a "native" electron app can accomplish that a web app can't is the last one. If your app doesn't require integration with the file system beyond a file selector, then an electron app offers no benefit AFAIK. Companies do it purely so they can tell everyone they have a native app without having to invest the work required to get a real native app. My suggestion for the author is to find a browser they like, and then run all electron apps as web apps within the browser whenever possible.
> Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along
What?... Are you saying what I think you're saying. I must be reading this wrong because surely you don't wish to imply that people didn't write desktop software before Electron came along?
What about NW.js? It's the real hero here. They spent 5 years building up the platform, being used by thousands of companies and apps, basically inventing this technology; just to have GitHub buy up their intern and pay him to make a shitty knockoff called "atom-shell". You're excited because someone tossed a marketing department at a shitty knockoff?
NW.js uses less resources (less ram, smaller distribution sizes).
It is updated more often (within 24 hours of every Chromium and Node.js release, ensuring access to latest technology).
It supports about twice as many OS's (XP+, OSX 10.6+, Debian 8, Ubuntu 10+). Electron doesn't even care enough about Linux to merge in simple bug fixes.
It's much easier to get started with and takes a no-nonsense approach to everything. (thejaredwilcurt.github.io/website/quickstart.html)
Allows for HTML or JS entry for apps.
It offers actual source code protection, and even recently updated this so that there is no longer a performance hit when using it. Which is a pretty serious technical achievement.
The only thing wrong with it is that it's got a shitty name/logo, and has a smaller ecosystem. If you can get past that you will have a much nicer experience.
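To make the "HTML or JS entry" point concrete: if I remember the manifest format right, an NW.js app can be little more than a package.json that points straight at an HTML file (names here are made up):

    {
      "name": "hello-nw",
      "main": "index.html",
      "window": { "width": 800, "height": 600 }
    }

Electron, by contrast, wants "main" to be a JavaScript file in which you create your BrowserWindow by hand - more flexible, but more ceremony for simple apps.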
How many electron apps can run on the same machine? For many of them it can be beneficial to just be a website.
Wouldn't it make sense to run a caching proxy on the desktop and use a browser that is already there? That way you could still use some functionality not available to plain websites, like access to the computer, but use the system browser for rendering. Your proxy would also cache your website so it would run faster than a normal website.
I understand that this way you have to support more browsers, but you probably already have a web version of your app.
The problem could be with users seeing a localhost address. How would you overcome this issue? A virtual interface on which a DNS server could run?
Maybe running from a file:// could be a solution for some apps?
Nice rant, can you name at least one of those mind-blowing apps that happens to be sold for real money?
Everyone in the last 40 years built and sold a multitude of desktop apps and they never stopped. Stop drinking the Kool-Aid; even if some bozo upvotes you on Hacker News it does not mean that you are right (actually, nowadays the opposite is more probable).
Completely agree. I simply have no interest in learning any native desktop programming and if I did it would be for my platform only and I wouldn't waste my time writing it for linux and definitely not for windows. I'd release what I had on github and move on. Now I can throw together a small (small by custom code sizes, I know electron is heavy) amount of code and get a cross platform app that doesn't look terrible in relatively short time. I think it's a great first step for a PoC. If it gains enough interest AND (<- big and) you NEED to go native for some reason you can go native later. There are a lot of things you can say bad about electron, yes it's heavy and "bloated" for most uses of it, but if people think empowering others to build things they couldn't or wouldn't before is bad because of how they built it then they are already a lost cause and probably think we should all be writing assembly still.
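To give a sense of how small: the entire "desktop" side of a throwaway Electron app can be roughly the snippet below (a minimal sketch using the 2017-era quick-start API; index.html is whatever web UI you already have):

    // main.js - the file package.json lists as "main"
    const { app, BrowserWindow } = require('electron');

    let win; // keep a reference so the window isn't garbage-collected

    app.on('ready', () => {
      win = new BrowserWindow({ width: 1000, height: 700 });
      win.loadURL('file://' + __dirname + '/index.html');
      win.on('closed', () => { win = null; });
    });

    // quit when all windows are closed (macOS apps conventionally stay open)
    app.on('window-all-closed', () => {
      if (process.platform !== 'darwin') app.quit();
    });

Everything else is ordinary web code, which is exactly why the barrier to entry is so low - and also why every one of these apps ships its own browser.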
> I simply have no interest in learning any native
> […] programming
And it is amazing to see how many bad reinventions of the wheel happen because of this.
People claiming that writing an electron app is 10x faster than native have absolutely no idea what they are talking about. The only thing web-based tech will give you is the ability to push out a raw cross-platform prototype. Now try to polish that to acceptable quality across the platforms, and the many wins start to turn into losses.
I am a web developer. I know that might be a dirty title in some circles, but I really don't care. There are countless amazing things built on technology that people hate for whatever reason. PHP, JavaScript, and even Flash come to mind. Guess what wins in the end? The thing that gets the job done quickest and easiest. It's one thing I really dislike about HN, the need to let everyone else know how terrible their decisions are because they didn't go as low level as the person posting the comment. "Oh you used PHP? Real programmers use C++" "Oh you used C++? Real programmers use C" "Oh you used C? Real programmers use assembly" "Oh you used assembly on a CPU? Real programmers design and build their own FPGAs for their code". Nothing is forcing you to use things built in electron, so if it's so terrible then don't use it and move on with your life.
A better alternative is actually making a web app and shipping the server together with the client, for example a Node.js script that does the stuff not supported by the browser and launches the default browser for the graphical user interface.
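A minimal sketch of that idea (port, paths and the example endpoint are all made up): a small Node.js script serves your existing web UI on localhost, exposes the one or two native-ish things a plain page can't do, and then opens the system's default browser.

    // serve.js - run with `node serve.js`
    const http = require('http');
    const fs = require('fs');
    const os = require('os');
    const path = require('path');
    const { exec } = require('child_process');

    const PORT = 8731;

    const server = http.createServer((req, res) => {
      if (req.url === '/api/home-dir') {
        // the kind of thing a normal web page can't do: touch the local machine
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ home: os.homedir() }));
      } else {
        // serve the regular web app from ./public (no path sanitizing - just a sketch)
        const file = req.url === '/' ? 'index.html' : req.url.slice(1);
        fs.readFile(path.join(__dirname, 'public', file), (err, data) => {
          res.writeHead(err ? 404 : 200);
          res.end(err ? 'Not found' : data);
        });
      }
    });

    server.listen(PORT, '127.0.0.1', () => {
      const url = 'http://127.0.0.1:' + PORT + '/';
      // open the user's default browser; the command differs per platform
      const opener = process.platform === 'darwin' ? 'open'
                   : process.platform === 'win32' ? 'start ""'
                   : 'xdg-open';
      exec(opener + ' ' + url);
    });

The obvious cost is that you now have to render in whatever browser the user already has, but that's also the point: you reuse an engine that's already installed and already running instead of bundling another one.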
Does the weight of these just irritate you because of how wasteful it is (it irritates me for this reason alone) or does it actually have an impact on you?
People said the same things about GTK or Qt when we had Motif, right? Apps are going to keep getting bigger, our machines get bigger too. I'm not sure what the problem is.
I think I'd like to do more with my machine as it's getting bigger and faster, not the same things from 10 years ago that now hog all of my excess. I have twice the cores, four times the RAM and an order of magnitude more disk space to play with than 2007, I don't think it's acceptable to just have applications take up an extra thread, twice the RAM and more GBs just because it's free; that hardware certainly wasn't free when I bought it.
Or, you know, a Chrome app, which works exactly like an electron app except they don't all require their own separate Chrome instance. I mean, an electron app is literally just a Chrome app packaged with its own version of Chrome.
Agreed, I'm back to thinking about desktop apps, thanks to electron. I did a project with Qt last December. It's usable but very old-style, and the license is not very permissive. A small electron app is not as heavy as I thought at first.
Here's the thing. You know what the alternative to all of the JS that's been written? If your answer is "Keep writing more JS," then you are on another planet. The answer is, "We have fucked ourselves as an ecosystem by permitting a monoculture to flourish, especially one which prides itself on worship of bad languages, bloated runtimes, convoluted abstractions, and general fuckery."
Nobody in the last ~20 years has cared about writing Web apps before JS came along. There's basically zero money in it, and it's massively expensive, both in terms of actual development time per feature (easily 10x the cost of working in sane languages with sane runtimes), and also in finding trendy hipsters who don't know algorithms or data structures but took four weeks of coding academy classes a couple years ago. And as for Electron, Electron has existed for over three decades (Project Athena launched in 1983) -- if its massive "Nixon throwing the peace signs before getting on the plane" moment hasn't happened by then, sorry, but it's not going to happen.
But now? People are making all kinds of great new languages, and more often than not, they don't repeat JS's mistakes. People are excited about programming again -- JS is so bad that it's single-handedly revitalizing interest in languages which two of the largest tech companies in the world are behind, yet couldn't make popular.
This is a Big Deal.
(You are being parodied mostly for being a Slack developer, not disclosing it up front, and then trying to convince folks that Electron is good, which makes you sound a lot like a pig farmer trying to sell pigs' feet.)
Desktop apps like Word, Excel, Powerpoint, Photoshop? I too can pick a few ridiculous outliers. Honestly I'm not sure if you're right or wrong but either way that argument isn't good.
You have it exactly backwards. JS succeeded because it's not a monoculture. Within JavaScript there are multiple frameworks, even multiple languages (CoffeeScript, TypeScript, ES6, etc).
Win32, Java, MacOS, et al lost because there were such strong standards for doing things the One True Way that competing standards couldn't flourish and the APIs stagnated.
In JavaScript, a new framework comes out every year and goes straight at the throat of the old one. The reason there is churn is because these new frameworks are actually solving problems better than the old ones, and because of the LACK of a monoculture developers will actually switch to them. Which in turn is why framework designers are drawn to it, which in turn leads to more new frameworks.
Is iOS solving problems better today than they were last year? Or in 2015? Because JavaScript is solving problems way better this year than last.
JavaScript frameworks are ultimately solving problems that the Web/DOM/JavaScript infrastructure created in the first place. The amount of technology required to create a single page application is staggering, yet on the desktop that's just "an application". The technology has been around for 20 years and we're only now getting to the point in JavaScript where we were 20 years ago.
Yes, of course JavaScript has made tradeoffs to get where it is. Those seem like problems in isolation, but look what we got in exchange:
- apps from untrusted developers can be run safely
- app installs are measured in milliseconds and don't require switching windows
- app updates are invisible to the user
- beginners can modify apps without leaving the app itself (MySpace profiles, etc)
- references deep inside one app can be embedded in another
- apps run on nearly every device from one codebase
Native app programming can do many things well, but failing on these counts is a deal breaker for many purposes. You act like JavaScript's weaknesses are somehow due to ignorance on our part, but they are tradeoffs made deliberately.
And for the most part, the things JavaScript is bad at (high performance graphics, professional ergonomics, etc) are things that are improving. I don't see native apps getting millisecond installs any time soon. It's a classic disruptive technology.
>But now? People are making all kinds of great new apps, and more often than not, they come out on all three platforms. People are excited about the Desktop again - Electron is so good it's single-handedly revitalizing the platform that two of the largest tech companies in the world are behind, yet couldn't do.
>xpaulbettsx: JavaScript Hacker at Slack
Ah, yes, we are all grateful for Slack's 850MB of memory used when idle.
I can live without it, personally. Actually, if it could crawl back into whatever hole it came from, I'd be glad.
Electron isn't good. Electron is absolutely awful. Web applications aren't good. Javascript isn't good.
At what point of dementia do you say "hey, web APIs are easier to use, let's bring the entire damn browser along with it!"? The normal reaction should be fixing current APIs, making good wrappers around them.
One of these days, you'll look at 850 MB of RAM like I used to see 4K of RAM: Amazed at how small it was, and how much we have today.
My first computer saved data to a cassette tape; today, terabyte hard drives are nearly give-away prizes in cereal boxes (yeah, I know they don't do that anymore, either). We have machines in our pockets which are arguably (in some ways) more powerful than the machines only government-funded agencies and such could afford when I was a kid (and they took up entire rooms).
I hope I live long enough to think of 850 MB as a small amount of memory (and 8-cores at 4 GHz each a slow machine; actually, there are some multi-cpu server mobos I'd love to have as a desktop, but I can't afford it - yet).
Honestly, I'm amazed that we do have enough memory to support these huge apps. Sure they're bloated; I won't argue that - but at the same time its amazing that we can run them at all - a decade ago it would have seemed ludicrous!
Could it be better? Sure - but in my mind, all software is bloated - because even the simplest piece of compiled code won't run on my old TRS-80 from when I was a kid (heck, even an Arduino has better specs!).
Desktop development has stagnated because of the proliferation of web technologies, not because desktop frameworks aren't very good. Qt is great, gtk is great, hell, with the right configuration, working with WinForms can be pleasant.
Electron packs a browser with each app, and that's unacceptable. Adobe used to have something called AIR, years before electron, but idk if they were just packing up a browser or not. I used to use the desktop version of tweetdeck, which used AIR, and it was pretty damn fast.
Anyways, you clearly don't know anything about desktop development, so rather than make claims that are wrong, just stick to what you DO know.
It is good that there is a nice cross platform 'thing' you can use to make desktop apps. But it would be better if the best thing that existed for this wasn't so slow.
Perhaps someone can clone the outward API of electron with only a ~10% performance penalty instead of two orders of magnitude.
People complaining about electron are the same type as the ones that used to say:
- why are people writing this stuff in C. It's so slow and you don't have as much control on memory. Write it in Assembly.
- why are people writing this stuff in Java. It's so slow and you don't have as much control on memory. Write it in C.
- why are people writing this stuff in Python. It's so slow and you don't have as much control on memory. Write it in Java.
Eventually people want to solve a problem with the best cost/quality ratio for them, not "doing it the proper way".
So unless you provide a better solution for them to do that, or pay big money to create such a solution, they will find the ugly-hackish-half-baked-working-for-them solution that lets them do this.
We had PHP. Now we have JS.
I hate it. You hate it. We all hate it.
But I created portable GUI apps with an installer before in my favorite language and others and it was a pain. All of it. The UI, the event model, the API, the portability edge cases, the new stuff to learn, the packaging, the dependencies...
It sucked.
So get over it or provide a solution, but stop complaining.
And before you start providing an EXISTING solution, remember people tried it and they didn't like it as much as electron. And since electron is so bad, that should tell you something.
So the market is saying you are wrong. You can ignore it from your better-tech tower... Try to boycott electron apps all you want. But we know how that played out for Betamax, Lisp and the Ogg format.
P.S: oh, and if electron is Flash, remember that Flash won the web for 15 years while it sucked. And you know why? Because it allowed people to do stuff they wanted easily, like videos and animations. And it's not because we couldn't do it any other way, we could. Yet we had to wait almost 2 decades to see it die, at the price of battery, stability, security and everything else. Features trump everything. People. Don't. Care. We only managed to kill it because we finally replaced it with systems that could compete on easiness and features. So you know what to do.
Software dev is about tradeoffs, and the argument against Electron is that the tradeoff in resource usage is bananas for nebulous claims about "it looks better" and "I wrote my app faster". Every time this comes up on HN, someone says, "wow, Electron is Dracula to batteries, and all I'm doing is sending text to people/playing an MP3/blinking a cursor".
Managing tradeoffs is a fundamental part of development strategy. You can ramp up quickly in a productive language like Python and then stabilize on a more efficient language like Java. New platforms like Go/Swift/Kotlin are trying to get in this sweet spot (and mostly succeeding), and existing platforms are trying to inch closer towards it (Python/JS with performance and Java/C#/C++ with ease-of-use features) to avoid the "do over"
> And before you start providing an EXISTING solution, remember people tried it and they didn't like it as much as electron. And since electron is so bad, that should tell you something.
I doubt everyone who's built a desktop app in Electron tested Qt first. Definitely not in C++, but probably not even in QML or Go.
> People. Don't. Care.
People absolutely care, they're just not software engineers so they can't articulate precisely what they care about. It's like you never heard someone complain about shit battery life, unresponsive web pages and apps, or slowness in general.
Most of these Electron apps work in network effects. People have Spotify because their friends have Spotify; people have Slack because their work uses Slack; essentially people are generally saddled with your app for non-technical reasons. But that should be an argument for giving them good tech, not an excuse for giving them bad tech. "Well, you'd have to use this anyway so I can get away with burning through your battery life and you'd still count towards my usage numbers". Ick.
I guess I don't know what your point is then. Isn't your argument: "progress means taking advantage of hardware advances to use more productive tools?"
I disagree with your assertion, because I don't think the only measure of progress is how productive programmers are (time-to-market, whatever). There are other measures:
- user experience
- program efficiency (battery life, fossil fuels burned)
- program security
- ease of contribution, maintenance, and improvement
I mean, correct me if I'm wrong. But I feel like I've directly addressed your points. I simply don't think that you can say, "well it's X% faster to build an app on Electron, therefore Electron is the best... even though it runs Y% slower and burns Z% more battery". That's why I talked a lot about tradeoffs, because there's more than one issue.
For what it's worth, I also dispute the productivity argument. It's hilariously easy to build an app in Qt Creator. Maybe web devs can build Electron apps faster, but that's because they're web devs. I'd bet if a web dev took a month to get to know Qt they'd be as productive. The difference is that the resulting app would be a lot more efficient.
No, my point is people will use anything that allows them to do what they want the easiest way possible, no matter the cost and cleanliness.
The point is the web is the most active platform.
The point is doing things right is not winning.
The reason you are missing the point is that you think it's only a technical problem.
It is not. It's technical + cultural + societal + historical + economical.
> It's hilariously easy to build an app in Qt Creator.
You fail to put yourself in a web dev's shoes:
- they are the majority on the most popular platform
- they don't have the time to learn a whole new tech or API. You don't realize the ton of stuff you need to learn to do half of what you can do with HTML + CSS + JS with Qt. It's huge.
- then you have to learn all the edge cases for packaging, distributing, updating and maintaining this software. On multiple OS.
- then if you need a tutorial or a doc to help you, how does it compare to the web stack?
- then if you need to have new people added to the team?
- then if you need a custom widget? Or to reuse something that has been done?
- then the licence? The version conflicts?
And it's not as if Qt were much easier than the web stack. As a Python dev, I've done PyQt dev. And web dev. And the power/easiness/flexibility ratio is not on Qt's side at all; it's a mess of a badly documented powerhouse with abstractions everywhere. You want to get the value of a cell in a table as a date? Unwrap 3 layers of composed classes, each of them with their own API.
Then of course you will end up creating a client/server architecture (with threads, multiprocessing or else) for your app eventually to deal with background tasks, an MVC layer on top of Qt's, and then some DB for persistence. And use their markup to generate the UI in a declarative way.
So basically all what constitute a web app.
Basically, you have to put all that in the balance vs. speed, memory and battery life. What do you think won?
If we want native clean apps back in the game, we need to make it easy and convenient for the new generation of devs to provide the features the new generation of users want.
Otherwise, we have lost and we'll be dinosaurs, while people will just buy more powerful computers because "32gb of ram is not enough to display all those albums, you need to upgrade".
Thanks for clarifying :) You're right, I missed the point.
> people will use anything that allows them to do what they want the easiest way possible, no matter the cost and cleanliness.
I disagree, because people are implementing things in Rust right now even though it's harder than doing it in JS. Generally, they do that because they care about things like performance, safety, and maintainability. For them, "harder" is a broader term than just, "I wrote this quickly", because they factor in things like bugs and performance.
> The reason you are missing the point is that you think it's only a technical problem.
> It is not. It's technical + cultural + societal + historical + economical.
I do not believe these problems are only technical, which is actually why I feel so strongly about this topic in particular. There's a certain level of "I know this is bad, but [my boss told me to do it]/[I have a deadline]/[competitor X is right on our heels]" in tech right now. No other serious profession acts this way. Doctors don't say, "I know this surgery is wildly unsafe, but my chief told me to do it, so if people die oh well, it's not my fault". They have professional organizations and laws to back them up, and maybe more than that they have professional ethics and pride too. I'm not trying to say all doctors/lawyers/etc. are perfect, nor am I saying devs are bereft of ethics, but software engineers need to realize that our work is shaping society in a fundamental way. We need to start sticking to our ethics, our judgement, and our values when faced with economic or cultural pressure, because there are more important things than shipping and making money.
In particular, we need to start taking security seriously. Starting an app in C/C++ should mean 100% test coverage, thorough fuzzing, and pervasive use of tooling to avoid security issues, and honestly it's better to use a memory-safe language like Go/Swift/Rust/Java/C#/Kotlin whenever possible.
We need to take performance seriously. Our code takes electricity, and we mostly still get electricity from fossil fuels. I'm not trying to dip into moral hyperbole by arguing that the cursor in Atom is causing future famines, but we need to realize that performance isn't some nebulous concept. Slow code costs money and burns things in real life.
We need to take user experience seriously. PGP is the poster child for this, but this concept exists elsewhere too: the best security software is the security software you use (the best exercise is the exercise you do, etc.). It doesn't matter if you built the perfect programming language if it's too opaque, it doesn't matter if you built a decentralized Facebook if users don't use it.
We need to consider that not everyone has fast computers, fast phones, and fast, low latency, ubiquitous Internet access.
We need to say no to addictive features and dark patterns. A lot of the reason the web is a cesspool is that we've allowed companies to build products that are addictive, or that are very difficult to get out of. Good luck building an open Facebook; they have huge teams of people working on ways to keep people on their site forever.
We need to push back on closed-source. Tech has boomed in the last 10-20 years because of FOSS. Linux is the leading server and mobile platform. Security research is where it is because of FOSS projects like OpenBSD. Practically all of our development tools are free software. Our web servers, our browsers, our JS engines, our UI toolkits, that's right, it's all FOSS. If you want an example of what happens without FOSS, look at the state of messaging.
> You don't realize the ton of stuff you need to learn to do half of what you can do with HTML + CSS + JS with Qt. It's huge.
Having done both, I do. I prefer Qt because I can use pretty much any language I want, Qt Creator is great, and I have better tooling in general. Web dev requires JavaScript and a bananas toolchain, neither of which I'm a fan of.
You act as though web dev is the easiest thing in the world, but just look around for JavaScript fatigue posts. All the toolchain stuff you have to set up, all the library/framework decisions you have to make, all the knowledge about the DOM, Babel, ES2017, CSS quirks, blah blah blah. It's "easy" (kind of) for web devs because they've been doing it for a while, but it's definitely not prima facie easy.
> they don't have the time to learn a whole new tech or API.
They absolutely do, and if their jobs don't give them time for professional development they should find ones that do. Lots of professions get this: nurses, teachers, attorneys, managers. This is in line with my "professional standards" rant above.
> - then you have to learn all the edge cases for packaging, distributing, updating and maintaining this software. On multiple OS.
> - then if you need a tutorial or a doc to help you, how does it compare to the web stack?
> - then if you need to have new people added to the team?
> - then if you need a custom widget? Or to reuse something that has been done?
> - then the licence? The version conflicts?
I guess just look at Qt projects like Telegram Desktop. Hey look they managed to distribute their code. Hey look custom widgets. Hey look, a lot of people worked on it. Or even stuff like GitHub Desktop written using WPF. Or major desktop apps like Photoshop, Nuendo, etc.
The Qt license comes up all the time, and it's seriously just don't link your app statically. Or advocate for your product to be open source. Or just Google LGPL.
> If we want native clean apps back in the game, we need to make it easy and convenient for the new generation of devs to provide the features the new generation of users want.
I view this from the "professional standards" perspective as well. I don't need to change my values and prioritize developer productivity above everything else; in fact I think that's disastrous. Instead, I think we need to advocate our values to "the new generation of devs": be willing to learn new things, take responsibility for the major role your work plays in society, realize none of this would be possible without FOSS, and consider that "better" doesn't just mean "I shipped faster".
> Otherwise, we have lost and we'll be dinosaurs, while people will just buy more powerful computers because "32gb of ram is not enough to display all those albums, you need to upgrade".
Re-reading your posts, I kind of get the feeling we're on the same side. While I think it's important to create a better culture in software dev, I also think it's probably just as important to make it easy to write a program "the right way". Right now, it's just way, way too hard.
I kind of blame Electron for making it easy. It's sort of a trick right, "hey, I don't know what those C++ people are doing, just use JS, it's super easy". I don't necessarily think projects should post a table of pros and cons of their software -- there's definitely a certain level of "do your homework" involved -- but I do think some level of self-awareness is warranted. Think of SQLite's "Appropriate Uses For SQLite" page.
> If we want native clean apps back in the game, we need to make it easy and convenient for the new generation of devs to provide the features the new generation of users want.
Yeah, thinking about it a little more, I super agree. I don't think we should compromise all our values, but we definitely need to spend some time making software dev a lot easier.
I don't think there are many people in the industry who are in denial (though I am known to over-estimate the average Homo Sapiens' abilities).
The reality is already well-acknowledged. What would you do next? Surely, trying to rub people's faces in this reality via your 10+ comments in this thread isn't a next step; it's more like infinitely repeating the acknowledgment part.
I'd definitely volunteer to make Qt more dev-friendly. As it is though, I am too late because my personal life and health are in a dire need of attention and money. So I'll sadly leave it to others. But if these others only repeat the reality, then what hope do we have, I wonder.
Business interests and the resulting pressure have been a fact ever since first written history. Again, the reality is well-acknowledged for a LONG time now.
I work on Wunderlist. One of our key differentiators, the thing customers love, love, love, is native apps on the major platforms (iOS, macOS, Android, Windows), in addition to an outstanding Web app.
We quickly got several million customers, were acquired by Microsoft for "an undisclosed amount".
Slack has a native app on every platform, too. Including Linux.
Users don't care what the app framework is. They just want something that isn't in a browser and does stuff like notifications. That's what native means to them.
They don't care what the framework is. They do care, strongly, about how native something feels. Slack doesn't in lots of naggy and laggy little ways. Look up the "uncanny valley" of native apps.
It's ridiculous to think that the choice of GUI framework would have a measurable impact at a market level. There's no strong correlation between using a fad technology and market success, so it's incorrect to use one as a proxy for the other.
> So get over it or provide a solution, but stop complaining.
Complaining is a necessary first step in finding a better solution. Before being able to solve a problem, you need to first articulate what the problem is.
> P.S: oh, and if electron is flash, remember that flash won the web for 15 years while it sucked. And you know why ? Because it allowed people to do stuff they wanted easily, like videos and animations. And it's not because we couldn't do it any other way, we could. Yet we had to wait almost 2 decades to see it dies, at the price of battery, stability, security and everything else. Feature trumps everything. People. Don't. Care. We only manage to kill it because we finally replaced it with systems that could compete on easiness and features. So you know what to do.
I really miss the simplicity of a self contained swf file that could be embedded in an html page, but was usually also self contained and could be downloaded and run on its own. And of course they are tiny and fast. And I could run them in Linux, or even on my Palm Pilot. Except for battery consumption, were they really that bad?
They were crashing your browser all the time, randomly hogging the CPU, and were the worst vector of malware infection ever. They destroyed select, right click, history, scrolling and copy/paste.
But I get your point, it was really handy. Only this year are HTML5 videos starting to be on par with Flash videos. Those were so light to download.
Most of those problems have been replicated with HTML5. Yes, HTML5 can do some cool things. But I also find myself missing flash sometimes. HTML5 was not a replacement.
And, based on the direction things are going, I think it's only a matter of time until you can't even select text properly or scroll in a predictable way.
Few people rewrite the scrollbar with HTML5-based tech. I have yet to see a virus propagate because of a W3C web API. For the memory/battery though, you are more and more right every day.
That's always been the trade-off: high-level development environments with a great developer experience and a pleasure to deploy to users... at the expense of premature pessimization. In all my years managing applications built for mobile (whether phones, tablets, or laptops) any time someone says, "let's use webviews," you know there are going to be performance issues down the road. You'll get to market quickly but you're going to spend that development effort down the road closing the gap and trying to tame a vastly complex VM to make your users happy again.
I'm hoping react-native like dev experiences will make us re-think how we can have our cake and eat it too.
WordPerfect is famous for failing because they stuck to assembly while the competition moved to C.
> this one is odd to me considering Python came first...
It is indeed. But date of creation has no importance in this. Java became popular first, and is faster and consumes less memory than Python. I'm a Python dev now, and when I picked up the language more than a decade ago, people were looking at me like I was crazy.
That was a common refrain through the 90s and early 2000s – compilers, and especially open-source ones, have improved significantly since then but at the time it was relatively common for someone to replace a hotspot with inline assembly for a significant boost.
The flip side perspective of this is that end users are forever plagued with poorly-performing/behaving software, because developers favor their convenience over user experience by continually moving on to new technologies before they're mature enough to be satisfactory replacements of the old stuff.
But the majority of users/clients/bosses won't pay for better software. They want it now, for free.
The market is asking for more and more software, with more and more complex features. And it must be usable on many platforms, by users incapable of writing their name correctly, let alone comprehending a computer.
In the meantime you have a few more trained programmers, but not that many more. And most of them are not remotely good enough to provide fast, reliable and usable software. The ones that can are expensive.
In this situation, any shortcut you can take, you take.
It's like the quality of food or kitchenware. You want 1000s of fruits available all year long? OK, but the quality will suck. You want everybody to be able to afford 10 machines to do every single thing for you? Sure, but they'll break in 2 years.
My mother kept her machines for 20 years, and I can seldom keep mine for 5. I ate tasty tomatoes 25 years ago. Today I have to look for them with expert knowledge and a bag of money or they'll taste like plastic.
As long as everybody wants a piece of the cake but nobody wants to pay for quality or wait for it, well, you'll end up in those situations.
For software, this is only going to get worse. Every year, I get paid more and more, I accumulate more knowledge that the newcomers are struggling with. I refuse work. And I don't get penalized for any shortcut I take.
But the author says right there at the bottom, the problem isn't JS or web APIs, just that we don't need to be running a chrome underneath everything. They suggest React Native, I don't really know enough about it to judge, I've simply never been a fan of the resource toll electron apps have on my computers.
There are situations where one has a choice between putting in the required work and becoming competent at a task, or waiting and hoping that the task will be simplified so that they can approach it with their current level of understanding. Unsurprisingly, the latter is not the path to personal growth.
A good example is the tech competence of the general population. Computer savvy or proudly dismissive? Now we have seen the technological improvements that allow people without tech skills to perform all sorts of tasks that they couldn't before.
The drawback is that the general population is at the mercy of the providers of these magic tools. If the provider decides to increase prices or move to a subscription model, they have to accept it. If the provider adds tracking & analytics and other privacy-invasive functionality they will be outraged, and then accept it. If the provider updates their app and puts features behind a paywall or completely redesigns the UI because their designers were getting bored, those people will have to grin and bear it.
e.g: Microsoft has decided to transform Windows into spyware: they can run code remotely, log keystrokes and upload them to their cloud, etc.
Most people either don't have a clue or can't do anything about it, because they outsourced their software skills to Microsoft, which worked well enough until Microsoft decided that their interests are not aligned with those of their customers.
JS developers are another example of this - they are prisoners of JavaScript, although I wager most of them would say that they're happy to live in their golden cage rather than in the technological jungle that they're unfamiliar with.
Condemned to go along with any decisions made by Google, Github, Apple or other companies that they have no influence on, because by not challenging themselves to learn, they gave up their freedom.
You mentioned that (some? most?) web developers don't even know what linking is and you support going on the same false path, dumbing down things until they are understandable to those that didn't take the time to educate themselves. This will only result in a generation of helpless individuals cobbling components together in a decent enough way to call it software.
A new kind of browser engine, not based on any existing engine, allowing developers to use a subset of the web (WebGL, 2D, layout) without being forced to use a bloated ecosystem (and which also works on mobile and low-end devices).
Both your webpage and your docs are riddled with typos and the like. While it isn't indicative of code or project quality, it sure makes taking the project seriously a lot harder.
Hey, I think you're doing the right thing, in 1) asking for help from the community in an area where you feel lacking, and 2) making it available to the best of your ability. Many projects fail at one of those two, which dooms them to never have a community. Keep up the good work.