Nobody in the last 5-10 years cared about writing desktop apps before Electron came along: there's basically zero money in it, and it's massively expensive, both in actual dev time per feature (easily 10x the cost) and in finding specialist developers who know these dated technologies. And as for Qt, Qt has existed for over two decades - if its massive "Beatles walking off the plane" moment hasn't happened by now, sorry, it's not gonna.
But now? People are making all kinds of great new apps, and more often than not, they come out on all three platforms. People are excited about the desktop again - Electron is so good it's single-handedly revitalizing a platform that two of the largest tech companies in the world are behind, yet couldn't revitalize themselves.
That is a Big Deal.
The underlying issue here is that Electron reduces the barrier to entry for cross-platform development. That is, it's cheaper to build a single cross-platform application in Electron than it is to build two or three native applications, and you can re-use your existing web experience. I can completely understand why companies might choose this approach.
The trade-off — and there is a trade-off — is that Electron applications are shite in comparison with proper native applications. They fail to integrate with the host platform, they are slow, they hog memory and drink power. It's fine to make those trade-offs – in some ways, it's better that you can get an application at all than the alternative of 'no support for your platform'. But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support, and I find it difficult to believe that the costs would be prohibitive.
There may be an interesting economic lesson here: it really is not that easy to externalize costs. It surely can be done (air pollution, for instance), but it requires some special circumstances for those costs not to be internalized in a different form. (These special circumstances might include information asymmetries, harm to a public good enjoyed by people other than a firm's own customers, etc. -- themselves classic risk factors for market failure.)
By the same token, there probably are some truly externalized costs in this example, but I would expect them to be very minor and indirect. For example, most people probably do not pay the 'true' cost of their electricity. So to the extent Electron wastes power, some of the cost will be internalized in the form of user dissatisfaction. But some will also be externalized, either because the user doesn't know about the extra power consumption, or because the user herself doesn't fully internalize the costs of her power consumption and therefore doesn't care as much as she might if all costs were properly internalized.
Precisely. I don't use Slack in part because its desktop application is irrationally bloated for its use-case. I have other reasons as well, but they are enduring a cost--a small one to be sure, yet presumably non-trivial in aggregate--in people refusing to use their service partly because their desktop application is poor.
But I do agree with the general sentiment underlying the frustration about "externalities" here. As someone who advocates for high-performance, efficient web applications, I have toyed with the question of whether developers should confront the morality of wasting energy by choosing poorly performing platforms. Put somewhat comedically: low-performance software contributes to climate change.
You do have a point though. You make a difference where you can.
Also, the app gets its own entry in the task switcher.
Number of developers who would donate the increased efficiency to "idle": $denominator.
Number of developers who would fill the increased efficiency with more triangles: $numerator.
I'd argue $numerator is sufficiently large that the premise of your joke doesn't hold.
A) This works on all of the platforms that we use
B) Performance could be better on my computer
It's been a LOOOOOONG time since I worked on a laptop where I experienced noticeable performance problems... which is almost entirely because SSDs make dipping into swap so much less noticeable, unless you're really working your machine hard.
For most users, just knowing it will work on their machine is a bigger influence in using the product...and therefore a greater influence on business...than the performance of that system. It's especially true with a chat system where the most important feature is that everybody on the team can get access.
Because personally, I keep having performance problems on every laptop I have. Don't even try running in battery-saving mode, seriously.
I've stopped counting the 5+ year old laptops that have to be upgraded because they can't play a full-HD YouTube video smoothly.
Fun anecdote: I had to trial an enterprisey SaaS solution not long ago. A coworker gave me the name and I opened the site on my laptop (on the move, outside of work, just taking a quick look).
Their site froze my Firefox for 30 seconds because these idiots put a high-quality full-screen video of a surfer in the background of the main page. Looks cool, doesn't it? https://www.wavefront.com/
Couldn't see the site. Had to be at work to read it, on my top-end workstation, where the video played smoothly. Needless to say, I didn't take the product seriously.
5+ years ago is that LOOOOOONG time that OP was talking about. It's also unfair to compare the technical capabilities of old hardware, for many reasons. I think the point was that new hardware, _while it's new_, is becoming more and more capable. Any new laptop today, even a budget one, can handle YouTube videos in HD. The problem is that HD today won't be the same HD that exists in 5 years (i.e., 4k), and it's to be expected that a budget laptop today will struggle with the 8k technology that comes out 5 years from now. This is an old problem (pun intended) and should not be surprising.
> Because personally, I keep having performance problems on all laptops I have.
Selection bias. Programmers who compile code, run VMs or containers, and process tons of data, are not the average consumer laptop use case and have much stricter requirements. Many people are sitting in Facebook, YouTube, Gmail, or Google Docs for most of their day-- and likely inside of Chrome.
Where are the "Chrome is Flash for the desktop" posts?
The idea that Electron delivers a noticeably different user experience for the vast majority of users seems, to me, skewed by developer usage.
It takes a surprising amount of power to decode. The cheap CPUs in netbooks have been struggling for a decade, especially in battery-saving mode.
Lately, they've been getting hardware acceleration just for that: special CPU instructions and drivers just to achieve it decently.
Ehhh, you're in one, I think?
As for battery, again, my laptop battery has been little more than a UPS for at least five years.
Never had issues with RAM. I can remember a few friends who bought netbooks with 2GB of memory some years ago; they quickly realized they simply couldn't run their development environment in that. (I'm talking swap death, where a click that should take 1 second takes a full minute.)
Just bad tools leading to waste that users notice. Best to avoid them if possible. Not always possible...
Are they all netbooks? Because those were crap the day they came out and even more crap 5 years later. You can find terrible discount desktop machines that can't handle anything just as easily as you can find terrible laptops.
My laptop is 5 years old and the only time I feel a lack of performance is when the Swift compiler fires up. Your example link came up right away (OK, it burns 30% of a core, but I've got 4 of those).
A Core i3/i5 starts at $150; most devices don't have these expensive CPUs.
Even a $1000 MacBook from 5 years ago would have one of the first i3/i5 chips. It would struggle to play 1080p video unless plugged into the wall with the fan spinning hard.
This annoys me a lot, for one of two reasons. Either:
1. Some developers have no idea what performance means -- it's crazy that I have time to notice a spinner when a glorified IRC client starts up or switches tabs, let alone have time to watch it for tens of seconds.
2. If these developers are claiming they have no performance problems, clearly the laptops that can handle modern applications are being hoarded, and I have no way to obtain one of these magical machines. All I can get are recent i7 processors.
With the power that even average machines have now, it beggars belief that we ever see a spinner at all for normal desktop-related stuff.
That's a whole other story... Decades of the big open-source movement, and in 2017 we are still depending on proprietary systems to provide chat(!) for a company.
But these electricity costs struck me as only a small part of the broader point I was responding to, which is why I framed the point the way I did.
There is no broader point than converting natural resources into societal infrastructure. Be a responsible member of society. Don't obstinately ship wasteful code. I understand if you legitimately don't know any better, but if you're part of the community that's constantly writing blog posts about computers being so fast that it's OK to burn the end users' CPU and storage just so you don't have to spend a couple more minutes thinking about what you're doing, you're adding harm to the world.
You seem more focused on making a case against shipping inefficient code. And your point is a good one. It's just a little difficult to suss that out, since you're framing it as standing in opposition to my related, but very different, observation about externalities.
Same thing for Web developers.
Getting a new computer will fix the problem though. :-)
GitHub, Slack, Spotify, Microsoft, etc., are all using Electron. As their developers gain experience with the platform and as they experience problems with resource usage, I would expect to see the platform improve. Maybe I'm wrong.
Currently, if your primary target platform is the web but other platforms still matter for your market, going Electron might make sense, because you really don't want to rewrite your business logic natively for each platform.
WebAssembly allows us to use the same native libraries for all our business logic and data models, which, besides being more performant, means we only need to write the UI natively on top of those libraries. We've actually taken (very) old desktop C++ code, compiled it to asm.js, run it in our web app, and rendered its output onto a canvas with WebGL, with surprisingly successful results. This makes the prospect of WASM becoming standard across the board very exciting.
Now let's hope we don't decide to replace our native desktop app, which uses this old C++ library, with an Electron version of our web app running the asm.js-compiled library :).
No longer laughing.
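For readers curious what that asm.js bridging looks like in practice, here's a minimal sketch. The function name `_compute_frame` and its signature are made up for illustration; a real Emscripten build generates a `Module` object wrapping the exported C++ entry points.

```javascript
// Illustrative sketch of calling asm.js/WASM-compiled C++ from a web app.
// A real build would come from Emscripten, roughly:
//   emcc legacy.cpp -O2 -s EXPORTED_FUNCTIONS='["_compute_frame"]' -o legacy.js
// Here `Module` is a plain JS stand-in so the marshalling pattern is visible.
const Module = {
  // Stand-in for the Emscripten-generated wrapper around a C++ function
  // that fills a buffer of pixel intensities (hypothetical name).
  _compute_frame: (width, height) => new Float32Array(width * height).fill(0.5),
};

function renderFrame(width, height) {
  // Get the computed buffer from the compiled library...
  const pixels = Module._compute_frame(width, height);
  // ...then upload it as a texture and draw it with WebGL
  // (gl.texImage2D + a fullscreen quad; boilerplate omitted).
  return pixels.length; // number of samples handed to the renderer
}
```

The appeal of the approach is that the heavy lifting stays in the battle-tested C++, while only the thin rendering/UI layer is web-specific.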
What machine are you using? I have a 2-year-old ThinkPad, and it still does 10h+ at full brightness with VSCode, compiling, etc. And it's the screen brightness that consumes the most battery.
It used to do 20h, but the main battery is external and can be replaced, so that's a good thing. I would really recommend ThinkPads to everyone instead of going for MacBooks, for example.
I'm not even sure whether you're trolling. BBEdit provides a tiny subset of the features VSCode offers. VSCode is not a text editor, it's halfway between a text editor and a traditional IDE.
The difference to now is night and day.
Since both are written with electron the difference must obviously be the actual implementation, not the platform itself.
Keep in mind VSCode was written by Microsoft who have decades of experience writing IDEs and text editors, whereas Atom was written by GitHub who are mostly working on the GitHub product.
Could have fooled me.
I just switched to VSCode last month and it's been as nice to use as ST3 (actually even nicer because ST3 didn't have any code intelligence).
I've also used IntelliJ (or rather WebStorm) several times over the years and it always felt too sluggish and obnoxious.
For the record: I'm on Linux and have 32 gigs of RAM. So maybe I have lower standards for memory use and performance.
Insofar as it benefits the budget, you should probably externalize as much as possible. Consumers will provide the fitness function by deciding which products they prefer.
What about when your users don't have a choice?
The world is asking for more and more software, and the resources to provide it are not keeping up, so people are taking shortcuts.
Want something better? Someone has to pay for it.
For macOS, desktop applications are written in Objective-C, which is C with fast message passing and doesn't trade away much speed. Swift is the modern alternative, and it doesn't trade anything for speed either.
For Linux, applications are traditionally written in C with Gtk or in C++ with Qt. Both options are very performant.
For Windows, the main language for a long time was C++, and it remains a supported language. There's a movement to .NET, so Windows is an outlier here. But .NET is generally very performant; it makes some tradeoffs for safety, but it has enough features to stay fast, and its implementation is specifically tuned for desktop applications.
The only terrible platform with a slow language is Android, and it's well known for its lag.
There's very little desktop software written in Java or Python, and usually those are specialized applications whose users don't really care about the experience, only the functionality.
If I'm about to buy an application for macOS, I always carefully inspect its bundle and try to determine which technologies were used. Unless it's pure Objective-C/Swift, I usually won't buy it. I hope more users would do the same.
That's true now. It wasn't true then. The point still stands.
Although, to be fair, I still haven't really tried PyQt out, and I don't like the idea of having to buy a commercial license for it.
Or anything in any other language.
But none of them come close to the benefit of being able to carry a ton of web-UI experience over to the desktop.
Eventually every GUI toolkit ends up with a custom MVC framework, a client/server architecture, some kind of DB for persistence, its own implementation of asynchronous events and communication models, and a declarative layer to create the UI without code. For the most advanced, this layer separates structure from layout.
Well, guess what: this is what the web has natively been doing forever.
Since the web is now the most popular platform, with millions of libs and tutorials for it, people just reused that. It just makes sense.
The problem is not the concept. The problem is that we should have driven this effort with a standard to sanely close the gap between the desktop and the web, so that you don't have to spawn a freaking browser-engine-OS for every one of your apps.
But no, the web is the only platform with a standard. And it flourished while all the big players created closed gardens with proprietary, shitty APIs. And this is the result.
Have you not seen Jurassic Park, for god's sake? Life finds a way.
Web UI is a gazillion shades of shit, please don't dump that rubbish on the desktop. Thank you.
Seriously, how can one take web UIs -- the most cumbersome, unreliable, inconsistent, unresponsive UIs -- as examples to be followed? That's beyond me.
It's too bad it wasn't more universally adopted, by any of the 3 major platforms (including Linux, where the all-C Gtk+ has become the standard for the most part). Instead, it seems to have found its greatest success in, ironically, small embedded devices. Devices like this simply cannot take the performance hit of something like Electron.
I admit that for someone coming from the web, Electron is a godsend, but to be honest, the responsiveness of the applications leaves something to be desired.
PyQt is literally the best cross-platform desktop GUI going, in any language.
Shrug, I found it much nicer than anything else I'd used, but I've never used WPF (which is single-platform in any case).
> And it was proprietary and it needed a licence.
Neither Qt nor PyQt is proprietary in the usual sense of the word (nor were they 5-6 years ago). If you're using a non-standard definition, it would probably be more productive to use a different word.
> For sure it's not for me given that I find python a pretty average language with the huge handicap of duck typing (and before someone starts, yes, I'm aware of the 'type annotations')
I'm a huge fan of type systems. I wish I could find a UI framework that's anywhere near as nice as PyQt for an ML-family language.
It's this platform on top of a platform that is objectionable from a performance, memory, storage, and integration perspective.
Languages have evolved to change the way we handle constraints like memory, speed, readability, expressivity, etc.
We are arriving at the peak of what languages can bring to the table. Sure, we can improve things here and there, but the huge challenges now are integration, packaging, distribution, updates, communications, multi-tier architectures and all that.
So we now tweak platforms to help us with that.
But because we didn't see it coming, it's not done in any structured way. It's done exactly the way we've done everything since the beginning of computing: by stitching stuff together, then hitting it hard with a hammer until the job is done.
This is not new. IT is a joke of an engineering field. We hack everything, don't think about the future, and then end up stuck with the status quo. It's always been like that.
They are an abstraction over OS level isolation.
Yeah, but until Electron and its like, we seldom shipped desktop apps in anything other than C, C++, Delphi, etc., even after all those decades. Those are all as close to the metal as can be. And in fact C/C++ can be as fast as, or even faster than, hand-rolled assembly most of the time (with few exceptions), so the whole premise is moot.
The few Java desktop apps that were around, people used to hate as memory hogs.
But the thing is, even when I write something for myself, I first write a command-line app, then a web service. Never a GUI, because it's such a pain.
Gets me the high-level, Smalltalk-ish productivity when I want it (most of the time) and the low-level C efficiency when I need it (sometimes).
Having my cake and eating it, that's me :-)
Quality of the tech is NOT the drive for success here. You are missing the point.
Well, maybe it's better to miss the point, than to succeed by selling crap to people who deserve better?
When are techies gonna stand up for quality of tech?
When the user notices a quality difference?
You can see everyday that people favor cheapness, easiness and convenience over quality. You would not have so much junk food otherwise.
What I'm saying is "it shouldn't matter" what people favor.
Professionals should still favor quality, even if their customers would just as well have crap (or are OK with crap when it's all they can find).
One of my first commercial projects was a web-content management system written in Objective-C. Customers included Siemens and the German Bundestag.
Another couple of projects were written in WebObjects. If I wanted to, I could use Cappuccino, but I am not a big fan of web/client apps, so I don't.
> Can you make it portable to other OS ?
This product ran on: Solaris, AIX, NeXTStep, Linux, OS X. I think we also had a Windows port.
> Can you reuse 20 years of knowledge, resources and libs ?
In the sense you meant it: yes. Except it's more like 30 years. However, programming skills are (or should be) transportable. With virtually no previous experience, I became lead/architect on a Java project, which succeeded beyond anyone's imagination.
> Can you hire tomorrow 10 experts to help you on it ?
Is this a serious question?
>One of my first commercial projects was a web-content management system written in Objective-C
You certainly didn't use any of your Cocoa widgets for the UI there. It was HTML + CSS.
> This product ran on: Solaris, AIX, NeXTStep, Linux, OS X. I think we also had a Windows port.
Yeah, GNUstep for the GUI on Windows... Is this what you think could be an argument for Electron users?
> In the sense you meant it: yes. Except it's more like 30 years.
Again, bad faith. The world has way, way more code, snippets, tutorials and docs about HTML + CSS + JS than about any tech based on Objective-C.
Programming knowledge is transferable, but knowledge of the ecosystem is not, and it is always the most time-consuming part.
> Is this a serious question?
Oh yes, it is. Because, you see, we are living in an era where it's hard to find any good programmer at all, for anything. They are all taken, and they are very expensive.
So basically, with a tech limited to one ecosystem, finding them will be even harder and even more expensive.
The simple fact that you pretend it's no big deal (while any company will tell you otherwise -- so much so that the GAFAs spend millions just on their recruitment process) illustrates how much of a troll you are.
It most certainly is not. You just don't know what you're talking about and keep making up new stuff when confronted with actual facts that contradict your fervently held beliefs.
It's not like Smalltalk is a bad language that just happened to have a productive live programming environment.
It's one of the best languages out there, and conceptually stands alongside Lisp et al.
Became one of my favorite toys. I'd still use it for GUI prototyping if it was FOSS and kept getting extended. I found even lay people could learn it well enough to get stuff done. Long after, I learned what horrible things lay people did with it. Yet, they got work done and got paid without the IT budget and staff they would've preferred. (shrugs)
This is generally true, but to be fair the reason is because we design CPUs differently these days. Modern CPUs use instruction sets that are specifically designed to work well with compilers, and aren't meant to be programmed in hand-coded assembly except for a few critical bits deep within OS code. Older CPUs weren't like this.
It still might be possible to write hand-rolled assembly that beats modern compilers, but you probably need to have seriously super-human mental abilities to do it.
You got the causality wrong. Assembly-programmer-friendly CPUs died because CPUs that weren't as friendly were faster and cheaper; those same CPUs happened to be more amenable as compiler targets.
No, it really hasn't. It was just the way Microsoft proposed that businesses write bloated internal enterprise apps -- what they used to use VB for.
Those are not the same as desktop apps -- and no, or very very few, desktop apps ever turned to C#. Not even MS's own apps, like Office, and certainly nothing like Adobe's or countless others.
Besides, it's not JS itself that's the problem (though it took millions and top notch teams to make it fast): it's the web stack on top of it. C# just runs on a thin CLR VM layer -- and the graphics are native.
I mean, if you're going to say Windows Forms and WPF apps are not "desktop apps" then you're going to have to do a lot more than just declare that they aren't.
You're just listing ways that they are different. They both run in a virtual machine that abstracts away the actual machine. You know, the metal in the phrase "close to the metal."
Windows Forms is a wrapper on top of MS Win32 API. And WPF is also based on native widgets wrapped (with some extended with managed code).
In any case, C# apps are not well represented among Windows desktop apps, most of which are written in C++ or similar -- certainly all the successful ones. Can you name successful C# desktop apps? (Not in-house enterprise apps, and no developer tools please. There, where the users have no choice, even Java does well.) I'll name the successful C++/Delphi/native/etc. ones and we can compare our lists.
>You're just listing ways that they are different. They both run in a virtual machine that abstracts away the actual machine. You know, the metal in the phrase "close to the metal."
A call to a native drawing lib that doesn't pass through 10 layers of abstractions and bizarro architectures is as good as a direct native call. Especially from something like C# that runs circles around JS performance.
But even so, few consider JS to be what makes e.g. Electron slow.
> Yeah, but until Electron and they like, we seldom shipped desktop apps in anything than C, C++, Delphi etc even after all those decades.
So things aren't any different than before. We've just replaced non-C/C++ abstractions that were written by the platform-owner company with non-C/C++ abstractions written by open-source projects.
This seems pretty much in line with the general industry trend towards the adoption of open-source software.
This statement doesn't even parse.
Everyone who adopted Python 2 on a sizeable codebase is likely stuck there forever, with zero annotations and none of the new tools available -- they'll never be backported.
But let's be fair, type-related tooling in Python isn't close to what you have in Java yet. It's just that eventually everything comes around. Java got faster. C++ easier. Python... toolier? Etc.
Python broke all backward compatibility and put all existing sizeable software in a miserable, deprecated state with the break to Python 3.
I don't recall C++ getting easier. The few tools and IDEs still fail at decent refactoring and code completion. The C++11 movement adds a few features, more or less useful, piling on top of the vast amount of already existing complexity.
It does have a steep learning curve, but it's worth it. The number of concurrency bugs alone that I could have avoided if I had been able to use Rust years ago is sad to think about. Java has great concurrency tools, but doesn't do anything to make sure that you're not shooting yourself in the foot.
Your list of 7 JVM languages (both here and in your earlier comment on this submission) seems to be from most widely used to least. Yet in your HN comment from 2 days ago at https://news.ycombinator.com/item?id=14068664 you ordered that list differently, i.e. "Java, Scala, Clojure, Groovy, Kotlin, Ceylon, Frege, etc". Have you changed your mind about the relative adoption of Clojure and Apache Groovy in the last two days?
There's something wrong when a testing framework hacks into a language parser to make the language statement labels have special meanings like function names do, and overload the `|` operator in expressions so data will appear as a table in the source code. "Lovely" isn't the word for that sort of thing.
That is definitely not the whole story. Costs are shared between developers and users. If it's more expensive to develop an app, you can bet it's going to cost users more too.
Users may not notice the slowness right away, but if they run a bunch of applications at once they probably will, and they're more likely to notice it while running on battery.
And if they don't want to, they should be forced to by said society.
Is that true from a user's perspective? The average user I know would not have the idea that Slack is somehow inherently worse than say MS Word or any other truly native app they use everyday. What would Slack gain by integrating better with my Mac?
Slack starts up too fast for me to read the little quotation (penultimate Macbook Pro). I actually wish it was slightly slower because I like those cheesy quotes when I can catch them. Everything feels almost instant. Not bash instant, but as fast as any mainstream messaging app.
I leave it running all the time and get what feels like normal hours upon hours of battery life. There are apps and browser tabs that I've noticed destroying battery life, but neither Slack nor Atom nor Microsoft's VS Code is on that list for me.
The anti-Electron complaints all kind of feel like the same argument that comes up as each new layer of abstraction gains acceptance. I'm sure there's a lot that Electron can and should improve. Running multiple copies of Chrome does sound awful, but until I read that I didn't know the atrocities it was committing.
Just doesn't feel productive to try to make Electron go away vs. working to improve it or create a better cross-platform abstraction.
It is for this user.
> What would Slack gain by integrating better with my Mac?
Far better resource management, one would hope, for starters. As the article pointed out, when you're measuring your IRC client's memory use in hundreds of megs, there's a problem.
It isn't uncommon for me to need to choose which "essential" apps to turn off to get something done on a maxed-out MBP. So Slack gets turned off, and maybe someone gets annoyed that I don't respond to something, while I run a few VMs in Slack's former memory space.
As far as other things go, well, using a platform's capabilities as appropriate is generally considered good form -- HCI concerns and, generally, people just like things to be consistent. I don't know why we're supposed to forget this now.
I personally don't care if Electron goes away or what, but to the extent that I'm forced to run Slack by my cow-orkers' choices, it sure would be nice if it were less of a bucket of ass.
And then there was Slack
The nice thing about slack is that it will run about anywhere. The app on my phone magically knows if the app is not running on my laptop and will then and only then send me notifications. But, I'm not suggesting this as a solution for you -- best productivity killer I know is having to pay attention to yet another device while trying to get work done.
Depends on your environment, but I have one very effective and successful coworker who just flat out refuses to use Slack. The number of companies where that is acceptable is probably limited. Unfortunately.
Electron would have you write a web app and package it as an app installed to your Applications folder just like any native app. You'll write this code in JS, HTML, CSS and use Electron APIs.
Fluid also would have you write this in JS, HTML, CSS but use Fluid APIs instead. The two compete for the same role in that sense.
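To make the comparison concrete, the Electron side of that role is roughly the following (a minimal main-process sketch; the file name and window options are illustrative, not taken from either project):

```javascript
// Minimal Electron main process (illustrative). The `electron` require is
// guarded so the options helper stays runnable in plain Node as well.
function windowOptions() {
  return {
    width: 1024,
    height: 768,
    webPreferences: { contextIsolation: true }, // standard hardening default
  };
}

let electron = null;
try {
  electron = require('electron'); // only resolves inside an Electron app
} catch (_) {
  // Running outside Electron (e.g. plain Node); skip the window setup.
}

if (electron && electron.app) {
  const { app, BrowserWindow } = electron;
  app.whenReady().then(() => {
    const win = new BrowserWindow(windowOptions());
    win.loadFile('index.html'); // the web app's entry point, bundled with the app
  });
}
```

The key difference in practice: a Fluid-style wrapper points at a URL and reuses the system's browser engine, while Electron bundles the pages plus a full Chromium with every app, which is exactly the size/memory trade-off discussed elsewhere in this thread.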
For what it's worth, since you seem to like IRC clients so much better, Slack has a built-in IRC bridge: https://get.slack.help/hc/en-us/articles/201727913-Connect-t.... It works just fine, in my experience.
I use it extensively for everyday Slack, where I don't care much about inline media and inline formatting. And for a text client, weechat + this plugin handle it pretty gracefully.
Not really, unless you're using most of your RAM or you treat this as a fundamental principle. I've never checked Slack's RAM usage, and I'm sure it's higher than if they'd built a great native app, but I've also never had a performance problem with Slack or any other application (okay, except for Eclipse, but I stopped using that when Android Studio came out).
> It isn't uncommon for me to need to choose which "essential" apps to turn off to get something done on a maxed-out MBP
And there's the key. You have a very specific use case that is probably extremely rare. I'm a full-time developer and as far as I know I've never been low on RAM on my first generation Retina MBP.
It is truly strange to hear being resource-constrained described as a special case in computing.
We really do need to be more mindful of resource usage. I'd rather write my own lightweight clone of something I want than use an Electron-based hog. There are menubar-only apps that come with an entire Electron dependency.
Further, their machines are full of software that runs all the time for no reason, including but not limited to multiple redundant antivirus products trying to scan everything in real time in a vain attempt to prevent the next malware infection from taking hold.
Various services like file system indexing and virus scans run at inconvenient times and render things slower than before.
Laptops are super prevalent because their portability is more important than power. It's not at all unusual to keep using the same machine for 5-8 years as long as it still works.
Real people have bad computers with bad specs, and in a massive number of cases the browser is already using a significant portion of the entire computer's resources.
As a native Mac developer I can drop in things like a predicate editor for defining filters or queries. They come with Cocoa. I can put in sophisticated table views and tree views. I can connect all of this easily to Core Data, which loads objects from a database on demand without my having to write any code.
You see these sorts of shortcuts in the Electron apps I've seen. They don't have a proper GUI for preference configuration like any Mac app, because they can't get that for free. Atom doesn't have a regular file open dialog. I use that a lot, dropping a file into it to jump to a particular location I have open in Finder (the file manager).
My preferred editor, TextMate, has very little in the way of development resources behind it compared to Atom, yet it has a far richer GUI. You get a GUI for creating and editing plugins, not just editing config files. You get native rich UIs for selecting fonts and colors. You don't have to write font names and color names in some config file.
There is a certain irony in the claim that stuff like Electron saves you cross-platform code, when what is usually not cross-platform is the GUI, and the Electron apps I've seen have very little of it. Try making something with the complexity of Keynote, Pages or Numbers in Electron, and I think the lack of a comprehensive set of prefabricated GUI components will start making its mark.
Not to mention the numerous native APIs which exist which you have to duplicate, e.g. for vector graphics, animations, 3D graphics, audio, video, font handling, OpenCL.
How about people with disabilities, or internationalization? You get great native tools for doing that; how do you accomplish it in Electron without re-inventing the wheel?
Well, VSCode certainly seems to contradict this.
This includes Chrome and Electron-based stuff.
The problem here is the base platform: Electron and the underlying Chromium/Node.js.
But as consumers we feel it when our browser lags, so browser vendors optimize speed over memory and CPU, caching the hell out of everything. The Web is a dangerous place, so they further isolate every tab as a process, sandbox them, and keep lots of copies of the same thing in RAM, because a security flaw is a lot more shameful than a memory flaw.
You are right, Electron is a hog. Last I checked, Electron, Chromium and Node.js were all open source. We can actually make a difference.
Making noise also makes a difference. When someone complained about VSCode CPU usage from the idle blinking cursor and it blew up on HN, the next month's release had a fix (for all platforms).
The truth is you can't move fast doing native development with different libraries. Electron keeps your dev costs down and allows you to move very fast.
Won't that encourage more companies to cut corners? How can producers of quality products compete when other companies can release prototypes that their paying customers finish for them?
And there's, "This really sucks; it's an open source project; I think I can make a difference that would affect millions in a small way but would still make a big impact overall."
Tired of this argument. If you put in the proper effort, your users won't care or notice.
Except for the very old 4.x Linux version – which is native Qt – and comparably old Windows ones, I thought all of the recent versions were web-browser wrappers (of one sort or another) with some occasional native widgets around them.
But I don't think the UI implementation has anything to do with why we love it so much. I mean, missing calls and messages every other week, random desync between clients, and all the usual rituals that every other group call starts with (you can hear them accompanied by chants of "can everyone hear me?", "$name, are you here?", "tell me if you see my screen" and "let me drop the call and restart").
My point is, even if I concede native is "better" or whatever, the difference should be negligible for most apps for all users (let alone "most" users).
The sliders mean that you will trade performance off against ease of development. It will be easier to ship a basic Electron/web-stack app than a native one for multiple platforms. However, it will be harder to do this in a high-performance and well-integrated way.
I went through all this with Cordova back in the day when trying to build cross-platform mobile apps. I was able to ship apps quickly, but at the expense of quality – it's okay to do this so long as you are aware of the trade-off.
I'm just exhausted by the "web apps feel so much worse than native!1" argument, which is an oversimplification and not a rule.
Are the other platforms native? I thought they switched to webrtc in 2015 to be able to just use a simple web window wrapper for all platforms, just not Linux.
Edit: Mass downvotes; lol ... Just stating my personal experience!
Are they though? The two applications that use the most energy on my Mac - by far - are Steam and Skype. Steam still has trouble with HiDPI and freezes when performing various UI interactions. The number of problems with Skype are uncountable.
I'm currently booted into Windows for work, looking at my current process stats, the top memory consumers are:
* Visual Studio (hodge podge of all kinds of things, 800MB)
* Chrome (215MB)
* Microsoft Intune (presumably native, 114MB)
* GitHub (.NET WPF application, 108MB)
* Explorer (native, 103MB)
* Search Indexer (native, 107MB)
* Lync (who knows, 98MB)
Meanwhile, the supposedly terrible Electron apps:
* Spotify - 58MB
* Slack - 93MB
* VS Code - 60MB
As far as interfaces go, Spotify, Slack and VS Code easily outclass GitHub, Visual Studio, Explorer and Lync in usability.
Here are stats on my (Linux) box:
* atom - ~500MiB (one window)
* slack - ~816MiB
* chrome - ~935MiB (two tabs + hangouts)
* google music electron app - ~500MiB
Might just be an accounting difference. Forked-process applications in particular are very difficult to account for, because even their private/RSS memory may be copy-on-write shared with another process.
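To make the accounting problem concrete, here's a minimal sketch (function name is my own invention) of the naive approach behind the per-app numbers quoted in this thread: sum the RSS column of `ps -eo comm=,rss=` per command name. As noted above, this double-counts copy-on-write and shared pages across a multi-process app like Chrome or Electron, so it's an upper bound, not a true footprint.

```python
from collections import defaultdict

def rss_by_command(ps_output):
    """Sum RSS (KiB) per command name, given `ps -eo comm=,rss=` output.

    Caveat: summing RSS across the processes of one app double-counts
    pages shared via fork/COW, so the result overstates real usage.
    """
    totals = defaultdict(int)
    for line in ps_output.strip().splitlines():
        parts = line.rsplit(None, 1)  # RSS is the last column
        if len(parts) != 2 or not parts[1].isdigit():
            continue  # skip malformed lines
        command, rss_kib = parts[0], int(parts[1])
        totals[command] += rss_kib
    return dict(totals)

# Example with made-up numbers: three "slack" helper processes
sample = "slack 102400\nslack 204800\nchrome 512000\n"
print(rss_by_command(sample))  # {'slack': 307200, 'chrome': 512000}
```

This is also why two people's numbers for the "same" app rarely match: it depends entirely on which helper processes you decide belong to the app, and on whether the tool reports private memory, RSS, or something else.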
If you are using Windows 10, your missing Atom processes will be under Background Processes in Task Manager. For the sake of the argument, I just did a fresh install of Atom and this is what I see on the first run: https://i.imgur.com/0ZRSumF.png. ~220MiB (no files open, zero extensions).
Adding up all the processes' (7 of them) private memory gets me 194MB.
It might just be a difference of platforms.
For .NET Core + TypeScript, VS Code is almost at feature parity with full-blown VS, while being an awful lot faster. The only thing I find particularly lacking is debugging, but even that is coming along well.
You missed this qualifier in the parent comment.
The right benchmark for VSCode is not Visual Studio, but Notepad++ (5.9MB on my system right now).
Notepad2 = 1792 kB.
P.S. Pay attention, this is kilobytes, not bytes :D
I don't care about the difference of 175MB of RAM between the two, as one of them (Atom) is infinitely more useful to me than the other (Notepad++).
So Steam was one of the first "Electron" apps. An even earlier one was Windows Explorer, as of the shell update that came with Internet Explorer for Windows 95 (included by default in Win98). All the sidebars of Explorer were HTML-based.
Here's a fun one. Start Steam with `-dev` and hit F7. Widget factory VGUI edition!
Oh also, https://developer.valvesoftware.com/wiki/VGUI_Documentation
Valve initially used a very obscure/niche HTML rendering engine for Steam (2006). The company/website behind it isn't online anymore. An older version of the Wiki had some brief info, but all of that info has vanished.
Here's an old revision from 2005 by a Valve employee confirming Steam used VGUI back then.
search for HTML: http://www.plastic-warfare.com/SteamUIGray.zip
Funny how the old things stay online. Notice also the cyber cafés menu entry. http://www.steampowered.com/status/game_stats.html
Also, WinXP used a forked Trident engine with some features removed for the "Software" dialog and various other features (Windows Help, etc).
Maybe for one of the processes, but on Windows Spotify usually needs at least three processes to run (five if you count the Web Helper and Crash Service, which are probably native code). On my machine the three main Spotify.exe processes take up at least 170MB of RAM, often more. Although I wasn't aware they were using Electron, as their app has a standard, native Windows menu bar.
We straight up would have not shipped it without Electron and the CPU it uses to sync is on-par with apps like Apple Mail & Thunderbird.
> Nylas Mail - The best free email app | Nylas - The best free email app
What exactly makes it "best"? It looks to offer nothing more than other "best" mail apps.
• It's got pretty much all the power features out there like Snoozing, Open Tracking, Send Later, Reminders, Enriched Contacts (i.e. Rapportive), Unified Inbox, Swipe Actions, Templates, etc.
• It's cross-platform for Mac, Windows & Linux with custom UI styles for each.
• It works with all mail providers including Gmail, Yahoo, iCloud, Outlook and even vanilla IMAP and on-prem Exchange servers.
• It syncs your data directly (not via a cloud service) for speed and security.
• It works offline, so you can use it on a plane or when you don't have WiFi.
• It's open source GPL available on GitHub with >20k stars: https://github.com/nylas/nylas-mail
• It's free.
It's also still improving – there are over 800 open GitHub issues – and we would love help from anyone who wants to make email better! :)
I see from the screenshots that Nylas has folders and labels. Can I use either of these in the following fashion?
- i can have a tree structure of them
- an email can be in two separate folders/labels at the same time
- folders/labels can be configured to learn which emails to automatically sort into themselves, based on the email contents, by dragging and dropping the email into or out of them
Ball's in your court.
E: Bonus round! In this screenshot there are only 6 emails in the list: https://www.nylas.com/static/img/nylas-mail/hero_graphic_mac... Is there a way to get a list of emails where each line is actually only one line of text tall?
• If by "tree structure" you mean a folder hierarchy, yep that's supported. I think we have a current bug with dragging nested subfolders but we're working on a fix. (Surprisingly >99% of users have a flat hierarchy.)
• A thread can certainly be in two separate folders (e.g. Inbox and Sent) but an individual message can't be in two folders at once. In that situation there are two copies on the actual mail server. For Gmail/Gsuite this is possible via labels where any thread can have an arbitrary number of labels. We support both systems.
• "labels can be configured to learn which emails to automatically sort into themselves, based on the email contents" -- this is a really cool idea and something we've talked about internally. AFAIK there is no cross-platform mail client that does this today beyond things like manual Gmail filters. It could also be an interesting plugin that anyone could build on NM. We have a Slack chat room where folks discuss stuff like this if you're interested: http://slack-invite.nylas.com/
• And for your bonus round (haha) yes there are 2 different ways to configure the UI. One of them is 3-pane with a reading mode like Outlook, and the other is 2-pane that navigates like Gmail. http://i.imgur.com/Lt0x7O4.png
Also in 3-pane if you make the message list wide enough it will switch into the compact version: http://i.imgur.com/SaGp9eV.png
(Obviously it will show your real mail data. We have a "screenshot mode" for sharing stuff like this without revealing sensitive information.)
> Surprisingly >99% of users have a flat hierarchy
You tend to end up with it only after really long-term usage. All the folders I have with sub-folders got them after they grew too big to be just one, e.g. "Perl coding stuff" has several subs, as do "Shopping", "Clients", "Computer Game Emails", etc. Some of those have additional subs. All started out as singular folders, though.
> threads, not singular mails
Ok, fair enough.
> labels auto-learning by drag&drop ... AFAIK there is no cross-platform mail client that does this today beyond things like manual Gmail filters
Opera M2 has done it extremely well since ~2000. Google Inbox does it ... eh. On both mobile and PC, none, right. The filtering is honestly super easy to implement: it's a Bayesian filter. In older email clients these were used to filter out spam. Opera M2 simply gives each folder one (user-configurable) and runs all the filters on each mail that comes in.
And to be fully honest here, I still use Opera 12 as my main browser, along with its mail client, and don't see myself jumping ship ... anytime, really, since for me the combination of mail client and browser is key. However, for me to respect an email client, I expect it to be at least a feature match for Opera M2.
Not interested in Slack. If you had an IRC channel, though, I wouldn't need to sacrifice a chicken and a CPU core. :)
Ok, that looks fine. I personally prefer to have the email below the mail list, but that's not a huge thing. Maybe an option to consider. Screenshot mode is cute. :)
I haven't tried Opera M2-- I'll check it out. Might be a fun hackathon project to train a Bayesian filter on every folder and auto-suggest routing at least.
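The per-folder Bayesian routing described above is indeed compact to sketch. Here's a hedged, minimal version (class and method names are hypothetical, and a real client would need persistence and better tokenization): each folder accumulates word counts as the user drags mail into it, and an incoming message is suggested for the folder with the highest naive-Bayes log score.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    """Crude tokenizer; a real client would handle headers, HTML, etc."""
    return re.findall(r"[a-z']+", text.lower())

class FolderRouter:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # folder -> word frequencies
        self.mail_counts = Counter()             # folder -> number of mails

    def train(self, folder, text):
        """Called when the user drags a mail into (or out of) a folder."""
        self.word_counts[folder].update(tokenize(text))
        self.mail_counts[folder] += 1

    def suggest(self, text):
        """Return the folder whose naive-Bayes log score is highest."""
        words = tokenize(text)
        total_mails = sum(self.mail_counts.values())
        best, best_score = None, float("-inf")
        for folder, counts in self.word_counts.items():
            total_words = sum(counts.values())
            # log prior + log likelihood with add-one smoothing
            score = math.log(self.mail_counts[folder] / total_mails)
            for w in words:
                score += math.log((counts[w] + 1) / (total_words + len(words)))
            if score > best_score:
                best, best_score = folder, score
        return best
```

Training on drag-and-drop (and un-training on drag-out) is exactly the old spam-filter trick applied per folder, which is presumably why M2 could ship it two decades ago.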
There was a big IBM Research study a few years ago that showed it's dramatically more efficient to search email than to categorize messages into folders. Here's a link to the full paper: http://people.ucsc.edu/~swhittak/papers/chi2011_refinding_em...
With electron, every OS is a third class port.
Chrome kills me. :(
> The underlying issue here is that Electron reduces the barrier to entry for cross-platform development.
> The trade-off — and there is a trade-off — is that Electron applications are shite in comparison with proper native applications.
But native applications are shite in terms of portability.
> But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support
See the part where the original post said "it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies". The costs may not be "prohibitive", but they certainly would multiply effort and resources, and divide profit.
Lately there are more UI-ish apps I value having everywhere including desktop (Spotify, Hipchat, Whatsapp, VSCode). I'd also love a decent cross-plat podcasts app.
I think it's clear there's now more demand for certain types of consumer-ish desktop apps (chat apps & music apps especially) than there was a few years ago.
Just as a counter-point, because native app fans often make this point as though it is universally recognized to be a good thing.
I don't want apps to integrate with the host platform. The host platform is not the thing I care about. I use several host platforms in different contexts (I have work and home computers and a smartphone, they all run different OSes) and I would prefer that Slack look like Slack and not have buttons in different places with different UI interactions just because that's the way Reminders.app works.
For me, the web is the host platform I care about. It's the one that I can use anywhere and only have to remember the URL.
* Text selection
* Caret behaviour (e.g. Option-arrows on macOS)
* Spell check
* Open/save dialogs
* File system access
* Window management
* Accessibility (screen reader support etc.)
* Standard right click menus
* Indexing (e.g. Spotlight on macOS)
You may be thinking of native UI idioms, which even Apple threw out the window several years ago.
Electron apps are mostly very good at the things in the above list, because the Chromium web renderer has spent years abstracting the mechanisms needed to feel native where it matters.
Non-native toolkits such as Swing and Qt also spent years trying to achieve a native look and feel, mostly through emulation and host-OS detection, and they still feel pretty crappy compared to Electron apps.
Slack, Spotify and friends do a good job of inventing their own "web but native-feeling" UI. An example of the exact opposite is Google Docs, which still, for all its technological impressiveness, feels like a crummy Swing app trapped in a web page. For example, Google Docs renders its own right-click context menus, which look and feel nothing like native context menus. Google Docs' mini-apps also have a menu bar and a toolbar, but it's part of the host window, so you get two levels of menu bars and tool bars.
For the most part I think what people care about is that things work as they expect, which is primarily 1) placement of UI elements, and 2) interaction with/between these elements. If that's done right, nobody cares if the UI is flat, dark, light, or has a leopard print background.
Now I do understand that there's some overlap in ui/skin concerns, but the distinction still seems crucial to me.
For example, the web is clearly not consistent on the 'skin' of things. But I often know where to find things based on their location (header nav menus, footer contact details, etc.), or their general look (a loupe for search, a wide rectangle for inputs, some kind of wide rectangle with a doodad on the right-hand side for a drop-down). Or a combination of placement and look (a search field is an input field in the top right of a typical page).
Even lots of computer-challenged people I know seem to do pretty well in this regard.
But as you say, when it comes to interacting with elements, as long as the developer doesn't override 'native' behavior, a web-solution can be very native.
On the other hand, the vast majority of cross-platform native apps I use often look close to native, but their 'core' (inputs, selects, text fields, and so on) often feels off.
Honestly, I much prefer a non-native looking app that uses native UI elements over an app that has an 'uncanny valley' native look that is slightly off and UI widgets that don't behave natively.
A web site inside a browser can't replicate that, and all the hacks that try don't get very close.
To do the same in Microsoft Office, you need to dick around with OneDrive and/or SharePoint. The last decade or so, I've only touched Office when someone sends me some .xls or .ppt file and I'm being lazy and just want to view it.
It's 2017, this is how we work now. My colleagues (literally) across the world are not going to connect to some shared NFS drive or whatever via VPN to store documents.
One drive for your personal documents only for you. One drive for your team only visible and editable by people in your team. One drive company wide with common stuff.
You can send a link to your colleagues and it just works! That doesn't support multiple editing though. That's how things were done historically.
Google Docs is good for sending a document to a bunch of emails and letting people see/edit it. It's terrible for writing longer documents with advanced formatting, pictures and diagrams.
Clipboard shortcuts working the same is the only one of these I'm used to enough to be annoyed if it were done differently.
There's something about Mac fans that makes them very preoccupied with all of the details of how Macs work. I'm not criticizing that (you like what you like), but you shouldn't be surprised that I don't care about Spotlight indexing.
Shift+arrows — select characters
Option+arrows — jump between words or paragraphs
Cmd+arrows — jump to beginning/end of line or text
Shift+Option+arrows — select words or paragraphs
Option+Backspace — delete one word back
Cmd+A — select all
I actually used all of these except the last one while writing and formatting this comment! Plus clipboard shortcuts.
If you use vim in a terminal 100% of the time, none of those will matter to you because vim invents its own keyboard universe. But if you don't, I don't understand how you can have this opinion.
I get super annoyed with anything that somehow overrides these standard keyboard shortcuts, which happens surprisingly often. Non-native UIs typically have to reimplement them, because modern OSes have made the curious choice of not abstracting them.
I don't think it is a "Mac fans" thing. The exact same principles apply to Windows. Even to Linux, although the keyboard standardization there is next to non-existent. (I don't use Linux desktops often, but when I do, I get really frustrated that the terminals use Ctrl as a meta key instead of Command. So "copy" isn't Cmd+C, it's something like Ctrl+Alt+C.)
> If you use vim in a terminal 100% of the time, none of those will matter to you because vim invents its own keyboard universe.
I do, and this is one of the reasons I've never bothered with all of the details and shortcuts that you like.
Vim attempts to be the best possible text editor. It doesn't let "OS conventions" dictate what makes a good text-editing experience. What you get from apps sticking to strict OS guidelines is a bunch of average – not terrible but not inspiring – applications.
2) If you know a platform then you should have no problems knowing how to use it.
It would be ridiculous to have an app from Windows behave exactly the same on macOS just because you don't want to remember the difference. You don't want minimize and maximize buttons put on the opposite side from all other Mac apps because that is how it is on Windows. You don't want copy/paste in Slack to use Ctrl rather than the Command key because that is what you do on Windows.
3) Whatever time you save by doing everything the same across platforms would be wasted by anybody not working cross-platform who suddenly has to deal with an app with completely non-standard, alien behavior. I want my standard Mac hotkeys to work in a Mac app. I want preferences to be in the standard location. I want my color and font selectors to work the way they work everywhere else. I want drag and drop to work like in all other Mac apps.
We Mac users have seen this again and again. When companies don't give a shit about our platform, it is usually just a question of time before a competitor arrives which does, and knocks the other guy out. You don't survive long ignoring the platform unless you've got some lock-in advantage.
Why else do you think people make a big point of an app being native Cocoa? It is because they know it sells better, because they know customers want the native well integrated experience.
I don't, I use The Web for 90% of all things I use on a computer. A Chromebook is one of the computers I use the most when not working for precisely this reason.
> Why else do you think people make a big point of an app being native Cocoa? It is because they know it sells better, because they know customers want the native well integrated experience.
I think you're mistaken; the fact that so many companies are switching to Electron is evidence that it doesn't sell better.
Hold on a bit with that assertion.
First: which apps built on Electron are being sold, period? All the ones I'm aware of are open source, like Atom, or front ends to services, like Slack.
Second: which companies are switching to Electron for development? Again, all the Electron apps I'm familiar with are ones that started out that way. While I'm sure there's probably an app or two out there that began as a native client and then went to "let's just be a web wrapper," I don't know of any big ones offhand. (I've come across companies that have shifted their strategy to using true native applications, however. Facebook famously shifted their mobile strategy from HTML5/JS to native apps some years back, and I know of several iOS apps that were using "write everything in JS, it'll be great!" toolkits that switched to actual native AppKit.)
Third, and admittedly anecdotally, in both my experience and what I've consistently heard and read from people who've had the opportunity to study the UX of both native and "wrapped web" apps, just because users don't use the language of developers doesn't mean they don't notice when apps are slow, resource-hoggy, and behave kinda weirdly compared to other apps. I run a Slack for a writing group that's mostly populated by non-technical people and it is not uncommon for users to complain about Slack "slowing down their machine." Just because people don't know the term "native app" doesn't mean they aren't going to be able to tell "this app over here is nicer to use than that app over there," and that might be because "that app over there" doesn't minimize properly, or has weird menus that put common things in uncommon places, or doesn't do what they expect when they right-click on selected objects.
Actually, the incentive is even stronger than this. The ability to visit a website and start using an identical version of the app immediately is just as important - I doubt that Slack and Discord would have had half the success they did if users had to download the application before using it, regardless of them being available on all platforms.
Discord has had the ability to give links to others to join a chat server since its inception, resulting in a two-step process to use it: click the link, type a name. This is miles less of a barrier to entry than: click link, download app, find downloads, install app, run app, create account, join server (rough process for most text/voice apps up to this point).
If I were to build an app where cross-platform support was crucial, I'd probably start with Electron as well because of time and budget concerns, and switch to native if the app 'takes off'. But on the other hand, I can imagine there being serious risks to building out an entire platform that way and having to rebuild it from scratch later on. Maybe there's never budget/time for it, leaving me locked in?
I suppose React Native could help in that regard.
I can see both sides of this argument (every time it comes up)
As a consumer of apps, I want the leanest, most minimalist, fastest thing going. I want native apps on my devices (If you think slack is a hog on the desktop you should try it on Windows Phone).
But as a developer, I know that electron is a shortcut that means my app will take less time to build. I can take my existing skills, take work I've already done on a WinJS app and publish it on Mac OS, Linux and Win 7. I can spend more time with my family, instead of spending all my evenings learning py+qt, or xamarin, or react native, or whatever the new fangled thing is. And I know people will use it.
Hell, I've even got better odds of pushing an electron app than a native one, as I can submit a pull request and maybe have it appear on https://electron.atom.io/
Obviously, seeing the Electron hate always gives me pause for thought, but at the end of the day it feels like the hatred is from fellow coders (if a dribbling front-end-js writing low-life such as myself can call you writing-assembly-on-a-napkin-while-you-quote-stallman-types fellow coders) and my apps user-base is overwhelmingly non-technical.
I'm an embedded devices programmer and I'm proud of it. Proud to know a little bit about my hardware, and proud to get the most out of it. And I'm ashamed when I find a more efficient way to do something: it's not an optimisation but a bug fix.
Proper this and proper that, and don't get me wrong, I agree, but the parent is correct – these apps wouldn't even exist if it weren't for Electron. Making them desktop apps just wouldn't have been a consideration. Instead they would have just been thrown up on the web. For example, VSCode would have just been some kind of online IDE clone like Cloud9 or something similar.
If you think this is nonsense, you are out of touch, and I don't mean this as an insult, although I realize there isn't a good way to say that.
Yes. Electron enables some new stuff. New stuff that wouldn't exist otherwise.
Yes. Electron is a bucket of bloat that saddles what should be small, simple apps with enormous amounts of crap that has nothing to do with the app's functionality.
Both of these can be (and are) true.
I wonder if one of the issues is that so many developers have now worked almost exclusively in the 'web' sphere, and aren't aware that native development maybe isn't as difficult as they think.
I've done the latter for a long time now, and only now that I'm trying to teach some people do I properly realize how much knowledge is needed to do it right. It's not just arcane knowledge of the quirks of CSS/HTML/JS, but also tooling, build steps, knowledge of 'expected' web behavior, frameworks, libraries, etc. Much of this complexity is still there, and often worse, if you go for vanilla JS and static HTML/CSS (in part because expectations of a web app are higher these days).
I started learning native iOS development and expected things to be much easier and more sensible, but instead I get the impression that it's not that different.
Xcode is apparently a piece of shit, and everyone tells me to avoid Xcode's interface builder. There's tons to learn about how a project is set up, as well as stuff that just isn't a concern on the web, like packaging the app up and submitting it to the App Store (hours just figuring out how to correctly supply icons and get a certificate). More than once, as I'm following an online course, the lecturer will say something like 'this might seem like a logical approach, but DON'T DO THIS and do <unintuitive thing x> instead.'
Again, please correct me if I'm wrong. Perhaps when you put it all together it is significantly easier than doing the equivalent on a web platform. I'm just saying that I expected my initial foray into native, in particular Apple's 'walled garden', to be at least a little more like an actual garden rather than the chaotic, exceptions-for-every-rule (but kinda fun!) scrap-heap of the web I am familiar with.
Yet to hear a single non-dev coworker complain about Spotify being "bloated".
The real reason that parties like Slack and Spotify choose Electron is that it's easy for the devs they have, who only really know JS/HTML.
It's not that they couldn't hire more or different devs who could do it in a saner way; it's not that it's too expensive or that the ROI is not good. When we are talking about companies worth hundreds of millions to billions of dollars focusing on a core market, that is just completely laughable post-hoc bullshit.
It's that they don't give a shit, either about the user experience or about improving their toolset. They are happy where they are, and see no reason to change.
I think the issue is time and resources. Small teams, like Slack, would like to create a slick experience but they don't have the time.
Companies like Facebook went the other route -- HTML5 on mobile, got their fingers burnt, and went all-in on native. StackOverflow's iOS app appears to have improved a lot too, in v1 it was a thin shell around the web interface.
I hope React Native catches on. I'm not a huge fan of how 'heavy' Electron is myself.
5 years is huge when it comes to web technologies.
This goes for mobile too.
It's ridiculous to pretend that you have to write 3 distinct codebases to get a multiplatform application. Are there 3 Firefoxes? 3 Chromiums? 3 VLCs? Back in the day, applications like Banshee, which was written in C#, were all the rage and were distributed as core parts of GNOME.
When I went to build a Windows .dll, I rolled up my sleeves and expected to have a bad day. `cmake -G "Visual Studio 14 2015 Win64"` just went ahead and made a Visual Studio project from my source tree, and that project built and worked on the first try. I was using all the C++14 stuff like std::lock, std::thread, etc, and there wasn't a single #ifdef required in the entire project. Amazing!
Macintosh users in particular are sensitive to things like the placement of and spacing between UI elements; if anything is "off" from the gold standard set forth by Apple, they will scream and bitch at you because perfect UI consistency is paramount with this crowd.
Web-based apps get something of a pass because they look and feel like Web-based apps (though not always; witness the grousing in this thread about the new Mac Spotify client). But things like Qt and the XUL-based Firefox, which try to look native but miss subtle details, fall into a sort of UI uncanny valley and are roundly rejected by the Mac community.
>If you target the native OS widget set, you must have a separate code base for each platform's native widgets.
I want to clarify the subtle distinction here. If your code's concerns are separated, having to directly provide native widgets on some platforms means a different "codebase" for windowing and widgets only, not for everything. You'd still compile your normal code, and use an #ifdef or equivalent to include the appropriate windowing/widget library.
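A minimal sketch of that separation, in Node terms since this thread is about Electron (in C or C++ the selection would happen at compile time with `#ifdef`; here it happens once at startup). The per-platform widget layers below are hypothetical stand-ins, not real modules:

```javascript
// Hypothetical per-platform widget layers, each exposing the same interface.
// In a real project these would be separate modules, one per platform.
const winWidgets = { createButton: (label) => `[Win32 button: ${label}]` };
const macWidgets = { createButton: (label) => `[Cocoa button: ${label}]` };
const gtkWidgets = { createButton: (label) => `[GTK button: ${label}]` };

// The runtime equivalent of an #ifdef: pick the widget layer once, at startup.
function widgetLayerFor(platform) {
  switch (platform) {
    case 'win32':  return winWidgets;
    case 'darwin': return macWidgets;
    default:       return gtkWidgets;
  }
}

// All shared application code is written against the common interface only.
const widgets = widgetLayerFor(process.platform);
const button = widgets.createButton('Save');
```

The point of the comment above is that only this thin layer differs per platform; real code would `require()` a platform-specific native module at that switch, while everything else stays one codebase.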
But that doesn't excuse the abuse of that platform. Spotify used to have a pretty convincing native Mac app, which was spiked in favour of their current abomination, and I've watched performance plummet.
Using Electron as a cost-cutting measure is fine, but it's not good for user experience and it's OK to be honest about that.
It's very far-fetched to call a lower barrier to entry an issue. The easier we can make it for people to get started, the better, isn't it?
Lowering the barrier to entry is great, I agree. It's awesome that Electron and web technologies can be used to quickly launch proof-of-concept desktop apps. I find it significantly less awesome that companies with hundreds or thousands of engineers continue to use it after the concept has been proven, however!
So are you saying that these companies which have implemented these apps should instead say "You know what, we have this app developed, and working on the major platforms - but let's instead devote three new teams, one each for Windows, Mac, and Linux - to re-implement this app natively! I'm sure upper management will agree!"
That's not going to work. That's going to be shot down and laughed at. No company is going to re-implement a working cross-platform application over to three separate native contexts, and maintain all three. That just won't happen. It works already. The users probably like it just fine. What benefit to the company will this get them? Nothing - just more costs for maintenance across three platforms.
If native is wanted by the users, likely what would happen - if it happens at all - will be the company says "Ok - we'll make it native for Windows, maybe even Mac - maybe. Linux? Forget it!"
It's not the developers. Developers would love to make native apps, for all the platforms. But developers are limited by the companies they work for, and by the economic reality that not all the platforms can be supported natively; at most, only one or two can, because at that point, with the number of users on those platforms, the costs of maintenance and support are pretty much saturated. Adding additional native platforms doesn't just add to those costs, it actually (in theory) multiplies them, because a single person might use the application across multiple platforms. So if they have problems on two or three different native platforms, now there are three different support issues (needing more people to support) instead of a single complaint for a single platform.
I get it, though - it would be great if these apps were native, and worked on every platform, from now and into the future (even on platforms that don't exist yet!). That's just not going to happen; if native is wanted, then only the most widely used platforms will be supported, and even then, one of those will likely be dropped, and it won't be the one from Redmond.
So - what can be done? I dunno. The concept the author brings up ("use React Native") might be the solution. Or some other interface that abstracts a platform's OS and other native functionality out to an API that is the same for all platforms. But now you have issues with security and other access - which you still have with Electron, but there it is more contained and constrained, since everything has to go through the Chrome engine and the various rules/settings browsers have for sandboxing bad actors. Or - you leave it to the user and their operating system (and slim it all down - maybe that's what React Native does; I don't know, I've never used it).
Or - you (that is, the company) just says "Sure - we'll do a native only implementation - for Windows only." - because that's how it usually goes.
I don't think that's true in the case of Slack. They only have JS "hacker" webdevs, and those are notoriously resistant to any change to their comfort zone. C++? WPF? Swift? Cocoa? Scary stuff.
Unfortunately, no. Just as a trained electrician will wire up your house so as to not set it on fire as soon as you turn on a lamp, a trained developer will make apps that use a minimal amount of resources.
It's even more hilarious if someone writing Electron apps calls him/herself an engineer.
If yes, what on earth makes you think writing Electron apps is a good idea?
If no, back to my point.
To quickly answer JetJaguar below you, yes, I am an utter cunt, but being called a typical hackernews one hurts, considering how I can't stand most HNers.
I have been trained as a Software Engineer in Istanbul. I am very sorry that my country doesn't fulfill your expectations.
> If yes, what on earth makes you think writing Electron apps is a good idea?
Engineering is about making trade-offs. There's enough discussion here about what those are for writing Electron apps.
> If no, back to my point.
What, "no"? What was your point? ...that I'm not an engineer? As I said, I am one.
> yes, I am an utter cunt, but being called a typical hackernews one hurts, considering how I can't stand most HNers
I don't think you are a cunt. I think you are uninformed and have concrete opinions based on limited or self-fulfilling-prophecy-boosting experience.
No country trains software engineers in the way you describe.
(No offense to any other country's engineers -- the engineer mindset is the same everywhere, but the mindset of the non-engineer in other countries is the distinction. That is, Germany grants engineers a degree of respect, almost reverence, that I've never seen in the US.)
No place in the US does, indeed.
I sympathize with your argument, and I think the field is doing a great job right now of demonstrating some of the upsides of a licensing authority, but actually getting one would be bad IMO.
Most software is not life and death, and licensing authorities, like unions, quickly become gatekeepers that work to prevent competition whilst simultaneously enriching themselves through extortionary means (today, these are mostly indirect because everyone is on the lookout for them, but they are nevertheless still there). There are good arguments that the AMA and ABA have both seriously contributed to the astronomical expense of their respective services.
For engineers, absolutely.
For software engineers, absolutely, because they're still engineers.
Software development? Go wild, anyone can do it.
>licensing authorities, like unions, quickly become gatekeepers that work to prevent competition whilst simultaneously enriching themselves through extortionary means
That seems like a terribly US-centric view that I keep seeing online. Unions in France, as it's the one example I can be certain of, are in no way gatekeepers, and we are a country where they've been immensely powerful when it comes to influencing the state (whether talking about workers' unions or CEO unions). But you can get any job without being in a union; all they're doing is bringing everyone onto an equal footing when it comes to negotiations.
Licensing authorities are purely a societal choice. Either you have a numerus clausus, because the end goal is for everyone who passed the selection to have a guaranteed job with good living conditions, or anyone can pass, and good luck everyone. It works in some cases, doesn't in others.
Would you like to use an application that would not have been written without a low barrier to entry? Powerful and easy tools are not necessarily the same thing.
For those of you who aren't old enough to have been around, Sun initially pushed Java as a "write once, run everywhere" GUI language. It quickly became clear that Java applications were ugly and terrible everywhere, even in the primitive days of X11R5, when programs used a mixture of Xt, Motif, Qt, GTK, and raw X11 protocol (xv was awesome). Having a Java program for some task was worse than having no program at all, since it would discourage someone from writing a decent native solution.
Fortunately Sun found ways to make money using Java server-side, and Apple helped kill it client-side by not providing it by default.
I think Java's problem was its Unix engineering roots, with too little focus on UI, and perhaps a little too much of the "we don't care about performance" mindset. The latter is the only problem I see with some Electron apps.
Subjectively, I'd say that Electron's performance overheads are not bad compared to, say, Smalltalk in the 90s, where one Smalltalk application could bring a fully loaded state of the art workstation to its knees.
And don't get me started about Flash. I had a whole project cancelled after an engineer brought up the CPU meter during a fairly simple animation.
Chrome is widely considered the best current desktop browser in a very competitive space. If it has problems, they're definitely in idle power consumption (which indicates wasted idle CPU), but it is used by a hell of a lot of people who have free alternatives.
There's nothing inherently electron-specific that makes an app "shite" any more than writing one in Qt would. You can write shit in any language, framework or platform. The day someone invents a system that protects us from our own stupidity will be the day humans become obsolete.
> They fail to integrate with the host platform,
Untrue. You can—if you need to—integrate with the host platform by writing a native Node module, though it becomes less cross-platform at that point.
> they are slow, they hog memory and drink power.
Yes, web apps use a bit more memory—sometimes considerably more, depending on what you do—and a bit more CPU (and hence battery), but they are not perceivably slower, unless you're doing something stupid (in which case the equivalent Qt app would probably suffer in the same way).
> But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support, and I find it difficult to believe that the costs would be prohibitive.
Maybe, maybe not. I think using a cross-platform solution wouldn't be ideal for them (they'd need to either rely on something like Qt, write their own rendering engine, or use something like SFML; all of these are overkill compared to Electron), and the alternative is writing it in different languages for each platform, which would probably inevitably cause the projects to get out of sync in one way or the other over time.
I get the feeling that UX and UI designers for web- and mobile apps are just a different breed than their native desktop equivalents. It might just be my Windows bias though, iirc Windows didn't have a strong / great UX guideline until their current one came around - and I haven't used many modern windows apps yet, the ones I do use are stuck in the 90's with their button bars and such.
I have a feeling it's more of an experience / competency issue rather than lacking APIs.
Debugging is also something different: https://camo.githubusercontent.com/a0d66cf145fe35cbe5fb34149...
Like hot module reload where you write your app live. You edit one component, everything else stays in place and maintains state. Or time travel with Redux, where each piece of state is inspectable. You roll back or slide through the apps history and see it literally build and deconstruct itself. That's possible because UI is just a function of state. Same state, same UI.
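As a toy illustration of "UI is just a function of state" and time travel, here is a reducer in the style of Redux (not using the actual library; the action names and markup below are made up for the example):

```javascript
// Reducer: a pure function (state, action) -> new state.
function reducer(state = { messages: [] }, action) {
  switch (action.type) {
    case 'ADD_MESSAGE':
      return { messages: [...state.messages, action.text] };
    default:
      return state;
  }
}

// "UI": a pure function of state. Same state in, same markup out.
function render(state) {
  return state.messages.map((m) => `<li>${m}</li>`).join('');
}

// Time travel: keep the action log, and rebuild any past state by
// replaying a prefix of the log through the reducer.
const actions = [
  { type: 'ADD_MESSAGE', text: 'hello' },
  { type: 'ADD_MESSAGE', text: 'world' },
];
const stateAt = (n) => actions.slice(0, n).reduce(reducer, undefined);

const finalUi = render(stateAt(2));      // UI after both actions
const rolledBackUi = render(stateAt(1)); // UI "rolled back" one step
```

Rolling back is just rendering `stateAt(n)` for a smaller `n`; replaying forward deterministically rebuilds the same states, which is what makes the Redux-style time-travel debugger possible.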
This is despite their mobile apps being the most-used apps, by minutes, of any company. Even then, they tried to go with an Electron-esque approach and only backed off when the performance and UX tradeoffs became unacceptable.
If one of the most profitable companies in the world can't see the business case for supporting a first party desktop ecosystem, it's very hard to believe many other companies have the justification.
This isn't an engineering problem, it's a business problem.
There are some applications that do try - 1Password and GitHub for Mac/Windows, to name a few that come to mind - but they seem to get less love than the cross-platform webapps. It feels like they get a certain amount of dev time before going into minimal maintenance mode.
In any case, the originator of the phrase, Knuth, specifically said that it related to "small efficiencies".
Nobody would call the inefficiencies that Electron needs to address "small".
There's a good discussion here: http://wiki.c2.com/?PrematureOptimization
EDIT: thanks for the downvotes. I'd love to hear your thoughts on how electron is keeping Slack and Spotify from building a massive business and how their desktop users find the experience so bad they don't use the tools obsessively. Clearly there are things to improve w/ Electron (energy usage), but "terrible experience" is not how I'd describe Spotify and Slack on desktop, and their businesses clearly reflect that.
Moving to a native stack has major tradeoffs, would it 10x their business at least? I strongly believe that answer is "no."
But it's worse than that - most users don't even know why their battery life is awful. So they blame Apple, or Microsoft, or Dell, or whoever. And they just don't use their computers as much, because it's all a bit gross and slow. And that's bad news for our entire industry.
I used the desktop Slack client all of last year, and these kinds of problems were present the whole time. This isn't some "oh, yeah, there was some particularly egregious bug we shipped accidentally in October" thing. Whatever is making the Slack client a bloated ball of crap is much worse than a simple, quickly fixed Chrome bug. It's endemic.
I pay for both despite their poor quality software and bad UI/UX, not because of it.
Spotify makes sense as it can reliably use your file system for storage and thus download songs so you're making fewer network requests. Perhaps Slack could keep a short log to prevent "scrolling up"-related network requests?
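A sketch of what that "short log" might look like: a bounded in-memory cache of recent messages, so scrolling within the cached window needs no network request. This is purely hypothetical, not a description of how Slack's client actually works:

```javascript
// Hypothetical bounded message cache: keeps only the newest `capacity`
// messages, evicting the oldest, ring-buffer style.
class MessageLog {
  constructor(capacity = 200) {
    this.capacity = capacity;
    this.messages = []; // oldest first
  }

  add(message) {
    this.messages.push(message);
    // Drop the oldest entry once we exceed the cap.
    if (this.messages.length > this.capacity) {
      this.messages.shift();
    }
  }

  // Return the last `n` messages if cached; otherwise signal the caller
  // to fall back to a network fetch.
  recent(n) {
    if (n > this.messages.length) return null;
    return this.messages.slice(-n);
  }
}

const log = new MessageLog(3);
['a', 'b', 'c', 'd'].forEach((m) => log.add(m));
// With capacity 3, 'a' has been evicted; only 'b', 'c', 'd' remain cached.
```

The tradeoff is the usual cache one: short scroll-ups are free, and only a deep scroll past the cached window hits the network.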
A single page web app?
That's all most of these are. Particularly the Slack/Hipchat/Discords of the world.
> People are excited about the Desktop again
If by desktop, you mean "need to be connected to mains power to run for more than 10 minutes", yeah, people are excited for desktops again.
I use the Slack app for Windows, and the value for me is that it gets its own easily identifiable presence in my Windows task bar.
I have a (self-inflicted) problem with tab proliferation, and because I rely heavily on Slack, it's just way more convenient to use the self-contained version than to have it running in a tab buried in one of my browser windows.
It doesn't bother me at all that the Slack app is an over glorified web browser running the Slack web client.
Probably Firefox also has this.
Here's more info, with images of what it looks like:
It kinda should, though, because if the browser component isn't up to date on the regular then it's possible you're looking at (what should be) a simple chat app with a potential RCE.
"actual dev time per feature (easily 10x the cost)"
"And as for Qt, Qt has existed for over two decades"
It is C++, and we have great alternatives on Windows and OS X. Still, if you have to stick to C++, then Qt is probably the most widely used GUI toolkit.
"People are excited about the Desktop again"
I don't know what bubble you live in. This is just a bunch of hipsters who are excited because they can suddenly use their only programming skills to hack on desktop apps.
And it isn't all that important. Hardly any of the applications I use daily are Electron. I occasionally use Atom. Other than that great apps like OmniGraffle, Pages, Pixen, TextMate, Ulysses, Marked 2, Charles, Dash, Kaleidoscope, Tower, Keynote, GitUp, 1Password, Magnet, iBooks, ScreenFlow, Terminal etc none of them are electron based.
Why is Electron and similar so popular then, if it's easier to build native? Please don't say "because shitty hipsters".
Advanced native tools with a powerful language in the hands of a master programmer will be much more productive for the reasons he cites. The type system in particular, with a well-worn toolbox of primitives can make you extremely effective.
So, web technologies are more effective in aggregate across the entire industry, but native technologies can be more effective in the hands of an experienced single individual.
Wut? So the whole Mac App Store, the MS Office suite, Adobe's apps, and plenty of other stuff were just a dream? Until Electron, desktop was dead? You're bending facts here, and I hope that's because you don't know much about the stuff you're talking about.
Desktop is big, just that other things have grown bigger. That doesn't mean desktop is/was ever obsolete or losing its utility.
Googling terms like "shareware" and its history (not the only model; there were, of course, outright paid products too) will show some stories. Maybe not a lot, since some of this was before the web and so not archived, but enough to get an idea. Jim Button was a classic example, but there were many other indie devs (many of them one-man shows) who may not have made it so big, but made good money from desktop apps.
Google term: shareware jim button
Edited to add:
Electron is just a way to reuse web developers for desktop development - I.e. a way companies now have to cut development costs.
It is and was, however, always built on Chromium Embedded Framework (the UI), even when you say it "was" fast.
So according to you Spotify was able to build a full featured client using a beta technology in a month. Impressive.
Anyway, here's the version I'm speaking about, built by the people that developed uTorrent IIRC, by the way: http://static.filehorse.com/screenshots/mp3-and-audio/spotif...
Also even if the current version is not using Electron, it's still using Chrome, so the same argument applies.
Yeah... I opened a 500KB log file in vim and Notepad++ and they are using 5 MB and 7 MB of RAM, respectively. They both also manage to use no measurable amount of CPU (even to blink the cursor!) unless you interact with the window.
Is it really possible? What kind of alien technology is it?
Realistically I would imagine that a native Win32 app (Notepad++) that's totally idle except for message loop and cursor blink requires less than a microsecond of CPU time per wall clock second.
But it does mean you're limited in the things you can do with it - when Office 2013 gained that fancy animated caret they had to do it themselves, similarly the caret in Atom and VSCode are both software-based.
Now - I know there are a ton of other options for Linux, but the thing is, I can't jump from system to system and have the same app with the same experience - even if the app was developed as a native app for all of the platforms, because each has a slightly different native GUI implementation and usage which doesn't translate fully between each.
So now I have to learn and use potentially three or more different programs/apps/whatever to do the same task. Or, I have to remember the quirks for each native implementation.
...and let's be honest: Not many companies out there are going to develop a native version of the same app for all the platforms, because most platforms have a lower number of users than others (in many cases, much lower - depending on the genre of the app in question - like games).
It's an economic tradeoff: We either get a balkanized system where for certain kinds or types of apps we need a particular machine for the native implementation, or we have the case of these larger cross-platform apps that anyone can use on any system, in the same manner everywhere.
Here's another thing - most of these complaints seem to have to do with laptop users. I don't really worry about these issues on a desktop, because there I can have a ton of memory and way more CPU than what I can get in most laptops, and I don't have to worry about battery power.
But for those who are stuck with laptops - maybe they need to bother manufacturers to increase the amount of RAM and CPU available, to handle these larger apps.
It's also funny that I hear people complain how these apps are too big, and use too many resources for editing text or whatnot; you make the case that vim and Notepad++ use only a few megs of memory, and no CPU.
I tend to wonder how well they'd fare on my old TRS-80 at home - you know, I had a full-screen text editor on it that didn't use much CPU (sub-1 MHz) nor memory (less than 64K) - so why can't we return to that?
Honestly - I don't want to; but we can take this argument down the rabbit hole, because the argument that today's stuff is bloated compared to another case can easily be made about today's stuff vs older stuff. Most of the bloat in your "smaller" example comes from abstraction, the same as the "new bloat"; not many years ago a program taking up 10 MB of RAM would have been insane. Today, it's normal and expected.
I daresay that in the very near future, programs taking up several hundred meg to a gig or so will also seem normal, because by then we'll have even better CPUs (with maybe hundreds or thousands of cores) and way more RAM (terabytes).
Some might argue that this is the case today, in the form of cloud computing and SaaS - browser-base stuff, in other words.
A higher barrier to entry has that effect. Low quality crap doesn't get in as easily.
>Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along, there's basically zero money in it, and it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies.
I find this argument absurd. Desktop (and mobile native) apps are multi-tens of billion business. Ask Microsoft, Apple, Adobe, and thousands upon thousands of smaller businesses (down to SMEs like Panic and one-man-shops like Sublime Text).
If anything, it's those web-based unicorns that are either merely burning VC money or selling the user to advertisers -- in both cases, there's not much money in selling to users directly.
And there's nothing about web development that couldn't be achieved just as easily if, instead of all that money spent on browser engines and teams to create things like Dart, there had been some effort towards a nice, cross-platform, mobile and desktop UI library.
It could even have JS bindings for all web devs to use -- just without the web stack crap. React Native is something akin to that, but imagine if it had been going for years, and had more industry support, instead of the nth attempt to put lipstick on the web stack.
In the meantime, here's one of those "web-based unicorns" you so despise: https://www.google.com/finance?q=amazon&ei=viTtWMGfNMWQ2Aa42...
You know what's never going to go out of fashion? Performance. Especially since CPU speeds have stalled in the last decade. You will never get good performance if your design is: "embed an entire browser, and then use a small piece of it".
I would actually pay Spotify extra money if they brought back the old, snappy, pre-Electron version of their Windows desktop app.
<edit>They replaced it with something that is simply too heavy to manage larger playlists (except maybe on the max-spec MBPs it's being coded on) just because they could iterate more quickly on some non-essential features which they might have packed inside a WebView, leaving essential features intact and responsive.</edit>
The fact that a Microsoft editor works on Linux is amazing, largely thanks to Electron.
These technologies you diss have been reliable and stable for over a decade. As an embedded developer, my code has to be running 24/7 for months without rebooting, without running out of memory, without crashing. But forget embedded, and imagine a web service. Can you truly honestly say that an app written in electron will give me that reliability? Or should we just accept that if you want to write apps using "modern" tech, you'll just have to deal with it? I personally can't imagine any web framework ever maturing and being stable enough to where you can invest money in it knowing that it will be around 10 years from now. Having your underlying technology platform in a constant flux makes your entire product stack brittle. That is a Big Deal.
What Electron did is it enabled all those web devs (even front end guys) to write "desktop" apps. And trade offs are certainly visible.
I am boycotting Electron. I have zero Electron apps installed, and recommend strongly against it. I hate that web dev mindset that has been pushed into desktop userland. It just doesn't work that way. Write native desktop app yourself and you will see how wrong it is.
They are very, very close to each other. Obviously they are not exactly the same, and I used Qt in this example. But they are not wrong, or completely different.
> How is React + browser rendering any different than Qt rendering?
Qt has far less overhead. A small Qt program can be in the single digit megabytes in both disk and RAM consumption, while using near-zero CPU for the UI.
> How is HTML/CSS any different from Qt layout XML?
My understanding is that Qt's layouts are translated to native code at compile time, resulting in native performance and overhead on desktop. Browsers' dynamic rendering is relatively intensive and expensive.
> How is Qt sockets/threading any different from NodeJS sockets/threading?
When it comes to I/O I don't think there's a significant cost to using Node/V8. I'm totally fine with CLI applications written in Node but when it comes to UI, a browser is just too heavy.
Writing an Electron app felt like sketching, at least to me. You place one line of code with tags and boom, there's a button. Qt sits somewhere in between, and it still provides a nice native multiplatform environment that could be appealing to web developers.
Runtime issues aside (bloat, cpu, etc), why is this a bad thing?
I like that I can write a single line of code and "bam!" get a button (or any number of things) to appear. Why would or should I want things to be more difficult for me to develop a piece of software?
Sure, I could do things in some other language - I mean, I know a ton of others. But implementing the same functionality can be a much larger pain in those other languages (and honestly, for app gui development, I haven't found anything that beats the drag-n-drop editor of VB3/4/5/6 - there was something close to it in Visual Studio for C++, but it still required some manual "hook up" with the code for callbacks and event handling and such).
I mean - if I really wanted to do things "right" - why don't I just whip out my text editor and write assembly for whatever CPU I'm targeting? I get full control over everything, then! Best performance! Those guys and their compilers, I tell ya, they don't know what they're missing!
(heh - sometimes they don't - there's a whole generation or more out there who've never coded for a CPU by looking at a datasheet and finding the byte values needed to represent op-codes to hand-assemble a piece of code - sometimes I do miss hacking on the Apple IIe and monitor - CALL -151 ftw)
Anyhow - as someone who's been coding for longer than I really care to say (of course, I kinda gave my age away above!) - I don't want to return to those days; I kinda like living in the future of computing I could only dream about as a kid.
Simplifying in itself is not bad. Improving API with newer and better is not bad. Adding language features to advance productivity is not bad (thinking of C# iterations vs Java). Basing the future on a terrible language, ecosystem, practices and "developer" mindset is very bad.
You presented it like the problem was in the writing of the code; it's not - that's Electron's biggest strength. The problem is the bloat that it comes with in order to provide you with that experience. Like everything, it's a matter of tradeoff. For me, it's not worth it, nor do I like writing HTML and CSS.
Unless they significantly rethink their approach it's significantly flawed without much room for improvement. Plenty of room for something better to come along. Even if it saw mass adoption, the demand for a better base would invite an alternative to gain traction against a sluggish goliath a la Firefox-vs-IE (or Chrome vs Firefox).
Why stick with a doomed formula?
If developers weren't so scared of Swift and C#, this wouldn't be a problem.
> (writing Desktop apps) is massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies.
I find the opposite to be almost universally true.
Writing a lightweight native desktop app is almost always cheaper than trying to build a JS-heavy app that has to run well in Mobile, and Tablets, and in a standard web browser, and in Electron fake-native-desktop web browser. Yes, you have separate projects with separate codebases. But two or three small lightweight projects is almost always cheaper than one big codebase with lots of targets, in terms of total cost of ownership.
I think all these "massive cost" comments come from sheer ignorance. Those "devs" are frightened at the need to learn a new language and frameworks, overestimate the time required to learn them, look around and only see likewise clueless "devs" equally frightened of change, and extrapolate some comically high overestimation of cost and time. In reality, properly written software is much more accessible to join and support than a web "app" with the contemporary "sexy" observer-pattern nonsense splattered all over, coupled with a horrible, horrible dependency management system and a language/framework combo that requires multiple dependencies to perform the most trivial of array loops.
Anyone would think it to be ridiculous if a carpenter told you he only works with saws because he is a saw carpenter and that for other kinds of work you should see the plane or sander carpenter. To me saying you're an [insert tech here] developer sounds the same...
...and yet there are specialties within like "framer" and "finisher", among others. Then you have interior workers like specialized custom cabinet makers, flooring specialist, drywallers, painters, etc.
What I'm trying to say here is that even in "carpentry" for putting up a house, there are numerous specialties.
When the mere notion of a language change or API/paradigm difference frightens you, you are a bad developer.
I can't speak for others, but for me, the "massive costs" aren't about having to learn another language or framework. They're instead the massive costs to my employer. They might even be massive costs to me as a single developer.
By developing a cross-platform app using a single set of easy-to-use tools, a large audience of users can be gained, that would otherwise be prohibitively expensive to support if native-only was the mantra. Instead, that application would have to be developed for only one, maybe two of the "major" platforms (and guess which platform it wouldn't be developed for - that would be the platform that I like most).
Supporting and maintaining a single codebase for one platform is a monumental task for any company, let alone a single indie developer. Supporting and maintaining multiple codebases for multiple platforms can be debilitating for a company, let alone a single developer.
I lived and played in those times; back in the "second gen" if you will of the microcomputer days - you had games and apps by different companies, and developers. In most cases, a game or app was only developed for one of those machines (usually the Apple IIe or the C=64, sometimes both - maybe an Atari too), but the other systems were all considered "second tier" by most developers. You might get a port of a game or app - but most times, you had to settle for something else, or buy a second system (ha! only if you had real money! I look back on the costs of those systems back then, and wonder how my parents ever managed it).
There's a reason you see a lot less of that going on today; it isn't because devs are frightened of learning a new language or framework.
meaning he works for a company that heavily leveraged an existing protocol, IRC, for which there are already a metric fuckton of native clients and libraries for every platform under the sun.
Implementing their shitty web client was undoubtedly far more work than supporting a handful of native clients.
The codebase isn't lightweight to begin with, and duplicating it for a native app that maybe only one person could maintain was a non-starter.
What'd be a good example of the kind of lightweight project that can reasonably be duplicated for Native Mobile, Native Tablet, Native Desktop, & Web Browsers?
Couldn't your team learn the technologies? These days there is an abundance of resources available (online tutorials, books, bootcamps, etc), especially for ecosystems as popular as the Appleverse.
I can't pretend to know your situation, but as a reference point we had an iOS project come up at work a couple years ago and I was able to pick up Objective-C and the Apple libs in a few weeks while still being productive on other projects. I followed Apple's official tutorials and built some toy apps, then learned the rest as I went on the real project. This is after having never owned an iDevice and doing mostly web and devops work in scripting languages for many years prior. A few peers of various experience levels were able to ramp up in about the same amount of time, so I'm not special.
I don't see how that is any different. If your Windows team members all left, that project would be dead or out-of-sync too. Wouldn't you replace valued team members who leave, in both cases? Or is this a concern that you won't be able to find developers willing to work with OSX and/or iOS tech?
> With my company, our users have always begged for an Mac OSX and iOS app.
Repeating this again because this should be telling. If you're getting feedback begging for an OSX and iOS app, there are probably a number of really good reasons for that.
> What'd be a good example of the kind of lightweight project that can reasonably be duplicated for Native Mobile, Native Tablet, Native Desktop, & Web Browsers?
Spotify, Slack, Twitter, Facebook, any streaming video service (Hulu, Netflix, Amazon Prime Video, HBO GO), etc.
Note that I'm not using "lightweight" to mean "small weekend project", but to mean "less complex than the codebase needed to reproduce these features in a browser or Electron browser".
If you are truly a small company / startup, and you truly have to support all platforms with a small team, then sure Electron makes sense despite all the drawbacks. I totally understand that.
But I usually hear this excuse from big companies, that still want to perceive themselves as small, but aren't. Slack is a billion dollar company, they are not a small business / small startup. If a company is large enough to have more than two people working in HR full time, then they are probably big enough to do this stuff right. "We're a small team" simply isn't true for them.
I agree though that Electron is a problem. React Native will probably be the best way forward. Microsoft has recently taken over RN-Windows and ReactXP, an RN-web clone. RN runs natively, doesn't need a browser, while still being able to tap into the JS ecosystem.
You also seem to agree now that both are completely different.
Yeah, they're that much better, that we grumpy old non-JS programmers keep wondering what the big deal is when somebody comes along and rewrites something that existed 20 years ago.
I'm curious, since I use PyCharm and I wonder what I could be missing out on.
The shell I'm using, for instance, Hyperterm, does things no other shell can do, and the others have had more than a decade to evolve. JS doesn't have a problem displaying JSON as JSON, displaying webpages on link clicks, scrolling up git logs with my trackpad, adding tabs with a plugin that contains a few lines of code and a little CSS... it just comes easy to JS.
The same flexibility you see in Atom, VSC and the others.
You know what else doesn't have a problem doing this? Literally any other programming language or tool that I've ever used to look at/edit JSON.
> Hyperterm, it does things no other shell can do...display webpages on link clicks moving up git logs with my trackpad, adding tabs with a plugin that contains a few lines of code and a little css, ... it just comes easy to JS....
And pretty much any other terminal built with flexibility and extensibility in mind (even the base terminal in Linux can handle links lol, that's definitely not exclusive to JS). ZSH springs to mind, with the benefit of being written in C, so that, you know, it's actually fast...
> The same flexibility you see in Atom, VSC and the others.
Ah yes, including the freedom to not open any binary file, or any file >2mb in size!
React Native isn't "native" (in quotes), it's native... it uses the actual widgets provided by the host OS. You use similar techniques to create your UI for each platform, but you do generally need to create separate UI for each platform with React Native.
- Everything GNOME does
- Everything MS does
- Everything KDE does
- All the apps in Mac OS
- Every browser
- Office suites
- Messaging apps
- Adobe's Creative Suite
- All the content creation apps for video games
- Video games
- Audio production suites
Web developers are excited about the desktop again, because to them the desktop is some kind of new frontier that Electron has opened up to them. I get that, but please be aware that just because something is new to you doesn't mean it's actually new.
Ah, yes, we are all grateful for Slack's 850MB of memory used when idle.
I can live without it, personally. Actually, if it could crawl back into whatever hole it came from, I'd be glad.
At what point of dementia do you say "hey, web APIs are easier to use, let's bring the entire damn browser along with it!"? The normal reaction should be fixing current APIs, making good wrappers around them.
My first computer saved data to a cassette tape; today, terabyte hard drives are nearly give-away prizes in cereal boxes (yeah, I know they don't do that anymore, either). We have machines in our pockets which are arguably (in some ways) more powerful than the machines only government-funded agencies and such could afford when I was a kid (and they took up entire rooms).
I hope I live long enough to think of 850 MB as a small amount of memory (and 8-cores at 4 GHz each a slow machine; actually, there are some multi-cpu server mobos I'd love to have as a desktop, but I can't afford it - yet).
Honestly, I'm amazed that we do have enough memory to support these huge apps. Sure they're bloated; I won't argue that - but at the same time it's amazing that we can run them at all - a decade ago it would have seemed ludicrous!
Could it be better? Sure - but in my mind, all software is bloated - because I can't run even the simplest piece of compiled code on my old TRS-80 from when I was a kid (heck, even an Arduino has better specs!).
Clearly Java never existed. I've been experimenting with Swing lately because Java has a library I'm interested in using. It's a breath of fresh air compared to HTML/CSS/JS. It may not be proper "native" development, but I don't have to deal with <div> hell. It has proper layout management instead of, what, four quirky CSS layout styles: float, table-, flex-, and now grid-. Sure, if you want to hack something together, Electron may be quicker short term. But I question whether it will actually be cheaper in the long run for non-trivial applications.
If you want something that's much closer to proper native, have a look at SWT. It's faster than Swing and actually looks/feels native since it's actually using native widgets. It's really a shame that the poster child application, Eclipse, is so bloated and slow, because people attribute that slowness to SWT when it's really primarily the plugin architecture that's responsible. And I actually find the more spartan developer interface to be more pleasant than Swing, so to me it's win-win. Swing will always have an uncanny valley, SWT has never had that issue, yet still allows you to write for the desktop in only Java.
I feel that Sun's choice of Swing was the main reason Java failed on the desktop.
I don't think it is better, to be honest, and so I also wonder why people aren't crying out for decent layout management for the web - is it just lack of awareness of how good things could be, or...?
They have been, and that's why today there are things like flexbox and grid layout. It just took a long time for the browser vendors/standards bodies to be convinced and then spec these things out to work within the confines of the existing layout models.
Smalltalk had a GUI-based approach to design in the 70s on a 2 MB disk, and most web layout is generating HTML & CSS, then tweaking that stuff. It doesn't feel like web layout is a progression.
Imagine being a small shop of 1 person (or even 5) and having to learn desktop programming languages, conventions, and native APIs for the web, Mac, Windows, and Linux. Then on top of that to develop and maintain a product that moves at the pace of customer feedback.
That's why Electron is powerful. I could not write and maintain the Hemingway App for Mac and Windows without Electron. My tens of thousands of users would not get to use that software _at all_.
I think the implication is as much that developers who label themselves like you have as "non-native" or "JS" or "web" developers don't have the perspective to make an informed decision on the matter.
Many of the rest of us know how to build React SPAs (and have been doing it long enough to have used Angular 1, Backbone, Sproutcore, ...) but "JS + Framework of the Week" is just one of the many tools we could turn to when building a UI. Many of us have also used cross-platform native libraries like QT or GTK, or platform-specific toolkits like Cocoa or .NET.
Basically, if the only tool you have is a hammer... it might be time to learn to use some new tools.
Come on man, it's a little too easy, but NOBODY? People write desktop apps all the time. What you mean is "nobody I know", which probably says more about your social circle.
I'm not "excited" that macOS is flooded with apps that have a poor accessibility experience while Apple itself is fanatical about delivering a first-class accessibility experience - like Safari having accessibility always enabled.
Its graphical GUI builder is out of this world, and it works flawlessly on all three desktop platforms. A utility tool I wrote for some researchers in my organization even compiled on Windows without a single code change after I finished coding it on Ubuntu.
And about the language ... I finished my utility tool project in 80 hours, from initial concept to happy customer, while learning the language in the process.
It is really such a shame that people have such strong aversions to Pascal, for no good reason other than subjective feelings.
So if the Electron bloat is such an issue for you, just hit Slack on the web directly.
That's even more horrible; I've tried it. Chrome uses its own notifications on macOS instead of integrating with the system's notification manager, and every time one of these ugly, poorly animated guys pops up, I have to search through my 100 open browser tabs to find the damn chat. Millions of flamewars were fought about what's the best window manager, and we are ending up with a single window with a tab bar? Are you kids serious?
This was the main reason I used the electron version of slack and tbh it makes little difference if I run a dedicated Firefox window or the app since my laptop is less than a year old and could run a few hundred instances of either without sweating..
I'm on Firefox these days; with noscript (once you go through the initial few days of pain whitelisting things) it works brilliantly; I would suggest you give it another chance..
Unless you try to book flights from britishairways though.
But I find it to be a humorous typo given the topic of this discussion as ionic is a browser-based cross-platform mobile framework.
You could have expressed pretty much the same sentiment about Java ~20 years ago. The trade-offs (high resource consumption, additional complexity, but less demanding of developer time) and even the fundamentals of the approach (an additional abstraction layer that works as a VM, APIs oriented around all the latest thinking) were also very similar.
Java is used for a lot of things, but mostly not as the basis of mass-market cross-platform applications the way it was originally intended.
This comment is completely wrong. You know what Slack was before it was an Electron app? It was a Cocoa app. Sure, it wrapped a WebView, but given Slack's nature that's not a surprise. Regardless, the Cocoa Slack app, while not great, was still much better than the Electron crap we have now.
I don't know if you're simply ignorant of the fact that the app this article is focused on actually was a native desktop app before it switched to Electron, or if you're just ignoring that pretty important information in order to try and make your point, regardless of whether or not it's true.
>if its massive "Beatles walking off the plane" moment hasn't happened by then, sorry, it's not gonna.
Desktop development has stagnated because of the proliferation of web technologies, not because desktop frameworks aren't very good. Qt is great, gtk is great, hell, with the right configuration, working with WinForms can be pleasant.
Electron packs a browser with each app, and that's unacceptable. Adobe used to have something called AIR, years before electron, but idk if they were just packing up a browser or not. I used to use the desktop version of tweetdeck, which used AIR, and it was pretty damn fast.
Anyways, you clearly don't know anything about desktop development, so rather than make claims that are wrong, just stick to what you DO know.
Agree fully with this. I find the complaints about how appalling it is that apps use a few extra hundred MB of RAM and disk space really tiresome and impractical when laptops and desktops typically have lots of RAM and disk space now. Many of these cross-platform apps either wouldn't exist or would exist on only one platform if everyone insisted on native apps, especially apps made by small teams or individuals.
Just because it's there doesn't mean you need to use it all, that's like spending your whole paycheck at once because "you'll get another one in 2 weeks!".
Any application that does the same job with less resources and the same or greater performance is unequivocally, objectively better.
Also, the benefits of claims like easier and faster updates are rendered pretty moot when developers then spend that time changing things for the sake of it: the Spotify app feels like it has some pointless, un-requested UI change every month. And VSCode can't even update while idle, despite that being one of the lauded benefits...
It's crazy to think that developing something native on Desktop using the default stack of the operating system means using "dated technologies".
What Electron has done is allow people with a particular skillset to apply it to a domain where it is both unnecessary and wasteful. These "developers" would be better off generalizing their understanding of software and UI design to the point where the tools used are irrelevant.
1) Electron is obviously a system library now. Treat it as such. Install ONE copy of electron per version, not ten private copies of electron in npm. It should be simple to integrate with at least Windows and OSX install systems - instead of downloading the full electron module, download an installer that installs the required version system-wide. This can be done as a step during installation like desktop programs have been installing required dependencies since the dawn of time.
2) Find some way to make it slimmer. Maybe not every app needs the entirety of the browser loaded. Maybe we can do lazy loading of certain components? Maybe optimise the most used critical paths, and provide tools to developers (if those don't already exist) to show exactly what is causing all those wakeups?
Moving off Electron is not an option any more, it's just such an easy and familiar platform to develop for, it's not going anywhere. Maybe we should accept it as the victor of the cross-platform desktop application toolkit and work to make it better. It's shitty, but it's the right kind of easy, like PHP.
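Point 1 above could be sketched as a version-resolution step in a hypothetical launcher: given the Electron version an app declares and the versions installed system-wide, pick a compatible shared copy. To be clear, the directory layout this implies and the "^"-style matching rule are both assumptions for illustration, not anything Electron actually ships:

```python
def parse_version(s):
    """Turn a version string like '28.3.2' into a comparable tuple."""
    return tuple(int(part) for part in s.split("."))

def resolve_runtime(required, installed):
    """Pick the newest system-wide runtime compatible with `required`.

    Uses a crude semver-caret rule: same major version, and at least as
    new as the requirement. Returns None when no shared copy fits, i.e.
    the app would fall back to bundling its own private copy (the
    status quo today).
    """
    req = parse_version(required)
    candidates = [v for v in map(parse_version, installed)
                  if v[0] == req[0] and v >= req]
    if not candidates:
        return None
    return ".".join(str(n) for n in max(candidates))
```

For example, an app requiring "28.1.0" against installed runtimes ["27.0.0", "28.3.2", "29.0.1"] would resolve to the shared "28.3.2" copy; an app requiring a major version nothing provides would resolve to None and bundle its own, so the scheme degrades gracefully.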
Electron isn't a better experience, and if it's in a browser, at least it can share some resources with other web pages.
NW.js uses less resources (less ram, smaller distribution sizes).
It is updated more often (within 24 hours of every Chromium and Node.js release, ensuring access to latest technology).
It supports about twice as many OSes (XP+, OSX 10.6+, Debian 8, Ubuntu 10+). Electron doesn't even care enough about Linux to merge in simple bug fixes.
It's much easier to get started with and takes a no-nonsense approach to everything. (thejaredwilcurt.github.io/website/quickstart.html)
It allows for either an HTML or a JS entry point for apps.
It offers actual source code protection, and even recently updated this so that there is no longer a performance hit when using it. Which is a pretty serious technical achievement.
The only thing wrong with it is that it's got a shitty name/logo, and has a smaller ecosystem. If you can get past that you will have a much nicer experience.
>>all you web devs: Go learn C or Rust or something. Your program runs on a computer.
I don't think that is a realistic request.
When the cost of Slack's RAM/disk-space usage starts affecting their profit, then they can/will take steps to re-write the application in 'native' code. Until then it's premature optimization?
My own prejudice aside - I honestly do think that if you don't have an understanding of how a CPU works at this very low level - maybe even lower - you don't know what you're doing; that, and I have done hand-assembled x86, 6502, and 6809 code in the past...
It's a different time; knowledge of lower level languages isn't needed today to be a successful software developer who knows their craft. And I recognize my prejudices as such, thus while I tell them, I understand that they really can't apply. Would I like it if these new guys all knew this kind of stuff? Well - yeah; but the same could be said of me by guys long gone and dead as to why I don't know how to wire up an analogue computer to solve a calculus problem, or why I can't wire up a plugboard to compute something on a IBM 401 or such. That doesn't make them a better developer, nor me or anyone else a worse one. We're just using different technologies.
That said, I do think developers should branch out, and at least have an understanding of other languages; maybe C/C++ - but even Java, Python, GoLang, Rust, etc - all could be just as useful to know.
Even as someone myself who knows (more or less) C, this is quite an elitist statement, and doesn't do justice to say, Lisp and people who use functional languages. The goal of programming isn't necessarily efficiency.
I'm not trying to claim low-level knowledge of the machine is inherently better than high-level knowledge. What I am claiming is that you need both, and knowing how this stuff works isn't a charming novelty reserved for the curious.
I think it's a mistake to redefine it as optimization that doesn't affect profits. Companies often get away with painfully broken software for a long time because they benefit from some other moat (e.g banks). But they're setting themselves up for disruption by doing that.
Also, profits are not the only purpose in life if you value your craft. (I'm not denying that you have to be able to afford high standards and economics does play a legitimate role)
Hm? How come? Is there a mental barrier here that I'm not understanding? (C and Rust are "hard" languages?) I taught myself to code on my parents' hand-me-down Pentium 100Mhz in the early 2000s, and I started with the K&R C book. Python and C were my swiss army knives at the time. I really can't see this stuff being super difficult, I mean I was a teenager and I had no understanding of algorithms.
I think it's sad that both Apple and Microsoft dropped the ball on desktop because they are only focusing on funneling users to their stores. If the Microsoft of 2017 (with open source .NET and Linux as a focus) had been around a few years ago when WPF was invented, they could have made a decent cross platform toolkit now that didn't come from html/CSS/js
What are you basing this on? There are plenty of iOS developers, and developing for Mac vs iOS is more similar than pretty much any two other dev platforms you can name.
Developing for Mac + Windows + Linux + Web > Developing for Electron.
What?... Are you saying what I think you're saying. I must be reading this wrong because surely you don't wish to imply that people didn't write desktop software before Electron came along?
What a load of tripe.
Try stuffing all the functionality of a large enterprise desktop app like SAP or Siebel into Electron and let me know how well that goes.
Maybe not a coin miner or a graphics engine; but cpu is never really the constraint for applications such as those inside electron containers. Where you see them chewing 100% of a core, those are bugs.
Also, while the language has flaws, modern JS VMs are pretty bleeding edge...
Have you ever heard of League of Legends? It's a desktop application made in that time frame. It does 10 figures a year in revenue.
Wouldn't it make sense to run a caching proxy on the desktop and use a browser that is already there? That way you could still use some functionality not available to plain websites, like access to the computer, but use the system browser for rendering. Your proxy would also cache your website, so it would run faster than a normal website.
I understand, that this way you have to support more browsers, but you probably already have a web version of your app.
The problem could be with users seeing a localhost address. How would you overcome this issue? A virtual interface on which a DNS server could run?
Maybe running from a file:// could be a solution for some apps?
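For what it's worth, the "local server + system browser" idea is easy to prototype. Here's a minimal sketch in Python; everything is illustrative — a real version would proxy and cache the remote web app and expose local capabilities (filesystem access, etc.) through extra endpoints, all of which is elided here:

```python
import http.server
import threading
import webbrowser

# Placeholder app shell; a real implementation would serve the (cached)
# web app itself rather than this static page.
APP_HTML = b"<html><body><h1>Local app shell</h1></body></html>"

class AppHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(APP_HTML)

    def log_message(self, *args):
        pass  # suppress per-request console noise

def serve():
    """Start the local server on an ephemeral 127.0.0.1 port."""
    server = http.server.HTTPServer(("127.0.0.1", 0), AppHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

def launch():
    """Serve the app and point the user's default browser at it."""
    server, port = serve()
    webbrowser.open(f"http://127.0.0.1:{port}/")
    return server
```

The cosmetic localhost-address problem raised above would remain: short of registering a custom protocol handler or local DNS tricks, the user will see 127.0.0.1 in the URL bar.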
You could open the Mac App Store, pick the 'top grossing' category and count the number of Electron apps.
Is that good or bad?
A browser can load a remote payload (not under user control) that can then exploit a browser flaw, a Flash AVM2 flaw, etc.
On the desktop, the user runs a local exe/app which has de facto "full system access".
A bit surprised you don't know that, it's kind of basic ...
Everyone in the last 40 years built and sold a multitude of desktop apps, and they never stopped. Stop drinking the Kool-Aid; even if some bozo upvotes you on Hacker News it does not mean that you are right (actually nowadays the opposite is more probable).
Maybe nobody in the Web bubble.
Perhaps someone can clone the outward API of Electron with only a ~10% performance penalty instead of two orders of magnitude.
> I simply have no interest in learning any native
> […] programming
I don't disagree with you one bit...
I used to be one of the real programmers, now I program in Flash, strange...
What? Have you heard of Adobe AIR, AppJS, Node-Webkit (now NW.js) and Mozilla Prism/WebRunner?
Lost count of the number of times in my day to day work I see decisions made that make devs lives easier at the expense of performance or features.
Do you think people 'knew' what was going to kill Flash before it died?
Does the weight of these just irritate you because of how wasteful it is (it irritates me for this reason alone) or does it actually have an impact on you?
People said the same things about GTK or Qt when we had Motif, right? Apps are going to keep getting bigger; our machines get bigger too. I'm not sure what the problem is.
Edit: fat fingers
Nobody in the last ~20 years has cared about writing Web apps before JS came along. There's basically zero money in it, and it's massively expensive, both in terms of actual development time per feature (easily 10x the cost of working in sane languages with sane runtimes), and also in finding trendy hipsters who don't know algorithms or data structures but took four weeks of coding academy classes a couple years ago. And as for Electron, Electron has existed for over three decades (Project Athena launched in 1983) -- if its massive "Nixon throwing the peace signs before getting on the plane" moment hasn't happened by then, sorry, but it's not going to happen.
But now? People are making all kinds of great new languages, and more often than not, they don't repeat JS's mistakes. People are excited about programming again -- JS is so bad that it's single-handedly revitalizing interest in languages which two of the largest tech companies in the world are behind, yet couldn't make popular.
This is a Big Deal.
(You are being parodied mostly for being a Slack developer, not disclosing it up front, and then trying to convince folks that Electron is good, which makes you sound a lot like a pig farmer trying to sell pigs' feet.)
Zero money in web apps? You mean like Google, Facebook, Amazon, Twitch, Netflix, and YouTube?
There's way more money is web apps than desktop apps. hands down. I'm not talking about mobile because this is a convo about electron.
Those examples aren't even good, though. They all seem to be becoming less popular these days.
This can't be true seeing as Google Docs is used by many companies and students.
And "better" may not be the goal of many web apps. They may be trying to measure success by "convenience" rather than being objectively "better".
Win32, Java, MacOS, et al lost because there were such strong standards for doing things the One True Way that competing standards couldn't flourish and the APIs stagnated.
- apps from untrusted developers can be run safely
- app installs are measured in milliseconds and don't require switching windows
- app updates are invisible to the user
- beginners can modify apps without leaving the app itself (MySpace profiles, etc)
- references deep inside one app can be embedded in another
- apps run on nearly every device from one codebase
Pigs' feet are actually a pretty good eatin' portion of the pig. If you've ever had them, you might agree.
- why are people writing this stuff in C. It's so slow and you don't have as much control on memory. Write it in Assembly.
- why are people writing this stuff in Java. It's so slow and you don't have as much control on memory. Write it in C.
- why are people writing this stuff in Python. It's so slow and you don't have as much control on memory. Write it in Java.
Eventually people want to solve a problem with the best ratio cost/quality for them, not "doing it the proper way".
So unless you provide a better solution for them to do that, or pay big money to create such solution, they will find the ugly-hackish-half-baked-working-for-them solution that let them do this.
We had PHP. Now we have JS.
I hate it. You hate it. We all hate it.
But I created portable GUI apps with an installer before in my favorite language and others and it was a pain. All of it. The UI, the event model, the API, the portability edge cases, the new stuff to learn, the packaging, the dependencies...
So get over it or provide a solution, but stop complaining.
And before you start providing an EXISTING solution, remember people tried it and they didn't like it as much as electron. And since electron is so bad, that should tell you something.
So the market is saying you are wrong. You can ignore it from your better-tech tower... Try to boycott Electron apps all you want. But we know how that played out for Betamax, Lisp and the Ogg format.
P.S.: oh, and if Electron is Flash, remember that Flash won the web for 15 years while it sucked. And you know why? Because it allowed people to do stuff they wanted easily, like videos and animations. And it's not because we couldn't do it any other way; we could. Yet we had to wait almost two decades to see it die, at the price of battery, stability, security and everything else. Features trump everything. People. Don't. Care. We only managed to kill it because we finally replaced it with systems that could compete on easiness and features. So you know what to do.
Managing tradeoffs is a fundamental part of development strategy. You can ramp up quickly in a productive language like Python and then stabilize on a more efficient language like Java. New platforms like Go/Swift/Kotlin are trying to get in this sweet spot (and mostly succeeding), and existing platforms are trying to inch closer towards it (Python/JS with performance and Java/C#/C++ with ease-of-use features) to avoid the "do over"
> And before you start providing an EXISTING solution, remember people tried it and they didn't like it as much as electron. And since electron is so bad, that should tell you something.
I doubt everyone who's built a desktop app in Electron tested Qt first. Definitely not in C++, but probably not even in QML or Go.
> People. Don't. Care.
People absolutely care, they're just not software engineers so they can't articulate precisely what they care about. It's like you never heard someone complain about shit battery life, unresponsive web pages and apps, or slowness in general.
Most of these Electron apps work in network effects. People have Spotify because their friends have Spotify; people have Slack because their work uses Slack; essentially people are generally saddled with your app for non-technical reasons. But that should be an argument for giving them good tech, not an excuse for giving them bad tech. "Well, you'd have to use this anyway so I can get away with burning through your battery life and you'd still count towards my usage numbers". Ick.
You just ignore what the market is saying because "you are right".
You are completely missing the point.
I disagree with your assertion, because I don't think the only measure of progress is how productive programmers are (time-to-market, whatever). There are other measures:
- user experience
- program efficiency (battery life, fossil fuels burned)
- program security
- ease of contribution, maintenance, and improvement
I mean, correct me if I'm wrong. But I feel like I've directly addressed your points. I simply don't think that you can say, "well it's X% faster to build an app on Electron, therefore Electron is the best... even though it runs Y% slower and burns Z% more battery". That's why I talked a lot about tradeoffs, because there's more than one issue.
For what it's worth, I also dispute the productivity argument. It's hilariously easy to build an app in Qt Creator. Maybe web devs can build Electron apps faster, but that's because they're web devs. I'd bet if a web dev took a month to get to know Qt they'd be as productive. The difference is that the resulting app would be a lot more efficient.
The point is the web is the most active platform.
The point is doing things right is not winning.
The reason you are missing the point is that you think it's only a technical problem.
It is not. It's technical + cultural + societal + historical + economical.
> It's hilariously easy to build an app in Qt Creator.
You fail to put yourself in a web dev's shoes:
- they are the majority on the most popular platform
- they don't have the time to learn a whole new tech or API. You don't realize the ton of stuff you need to learn to do half of what you can do with HTML + CSS + JS with Qt. It's huge.
- then you have to learn all the edge cases for packaging, distributing, updating and maintaining this software. On multiple OSes.
- then, if you need a tutorial or a doc to help you, how does it compare to the web stack?
- then, if you need to add new people to the team?
- then, if you need a custom widget? Or to reuse something that has already been done?
- then the licence? The version conflicts?
And if only Qt were much easier than the web stack. But as a Python dev, I've done PyQt dev. And web dev. And the power/ease/flexibility ratio is not on Qt's side at all; it's a mess of a badly documented powerhouse with abstractions everywhere. You want to get the value of a cell in a table as a date? Unwrap 3 layers of composed classes, each with their own API.
Then, of course, you will eventually end up creating a client/server architecture (with threads, multiprocessing or something else) for your app to deal with background tasks, an MVC layer on top of Qt's, and then some DB for persistence. And use their markup to generate the UI in a declarative way.
So, basically, everything that constitutes a web app.
You have to weigh all of that against speed, memory and battery life. Which do you think won?
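The background-task part of that architecture can be sketched in a few stdlib lines (names like `background_task` are mine, not from any framework): a worker thread does the slow work and posts its result to a queue that the UI thread drains, so the UI stays responsive.

```python
import queue
import threading

# Results cross the thread boundary through a queue; a real UI loop
# would poll it periodically instead of blocking.
results = queue.Queue()

def background_task(n):
    total = sum(range(n))          # stand-in for slow I/O or computation
    results.put(("done", total))

worker = threading.Thread(target=background_task, args=(1000,))
worker.start()
worker.join()                      # a real UI would poll, not join

event, value = results.get_nowait()
print(event, value)                # -> done 499500
```

This is the same producer/consumer shape you end up with whether the toolkit is Qt, Electron, or a browser with a web worker.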
If we want native clean apps back in the game, we need to make it easy and convenient for the new generation of devs to provide the features the new generation of users want.
Otherwise, we have lost and we'll be dinosaurs, while people will just buy more powerful computers because "32gb of ram is not enough to display all those albums, you need to upgrade".
> people will use anything that allows them to do what they want the easiest way possible, no matter the cost and cleanness.
I disagree, because people are implementing things in Rust right now even though it's harder than doing it in JS. Generally, they do that because they care about things like performance, safety, and maintainability. For them, "harder" is a broader term than just, "I wrote this quickly", because they factor in things like bugs and performance.
> The reason you are missing the point is that you think it's only a technical problem.
> It is not. It's technical + cultural + societal + historical + economical.
I do not believe these problems are only technical, which is actually why I feel so strongly about this topic in particular. There's a certain level of "I know this is bad, but [my boss told me to do it]/[I have a deadline]/[competitor X is right on our heels]" in tech right now. No other serious profession acts this way. Doctors don't say, "I know this surgery is wildly unsafe, but my chief told me to do it, so if people die oh well, it's not my fault". They have professional organizations and laws to back them up, and maybe more than that they have professional ethics and pride too. I'm not trying to say all doctors/lawyers/etc. are perfect, nor am I saying devs are bereft of ethics, but software engineers need to realize that our work is shaping society in a fundamental way. We need to start sticking to our ethics, our judgement, and our values when faced with economic or cultural pressure, because there are more important things than shipping and making money.
In particular, we need to start taking security seriously. Starting an app in C/C++ should mean 100% test coverage, thorough fuzzing, and pervasive use of tooling to avoid security issues, and honestly it's better to use a memory-safe language like Go/Swift/Rust/Java/C#/Kotlin whenever possible.
We need to take performance seriously. Our code takes electricity, and we mostly still get electricity from fossil fuels. I'm not trying to dip into moral hyperbole by arguing that the cursor in Atom is causing future famines, but we need to realize that performance isn't some nebulous concept. Slow code costs money and burns things in real life.
We need to take user experience seriously. PGP is the poster child for this, but this concept exists elsewhere too: the best security software is the security software you use (the best exercise is the exercise you do, etc.). It doesn't matter if you built the perfect programming language if it's too opaque, it doesn't matter if you built a decentralized Facebook if users don't use it.
We need to consider that not everyone has fast computers, fast phones, and fast, low latency, ubiquitous Internet access.
We need to say no to addictive features and dark patterns. A lot of the reason the web is a cesspool is that we've allowed companies to build products that are addictive, or that are very difficult to get out of. Good luck building an open Facebook; they have huge teams of people working on ways to keep people on their site forever.
We need to push back on closed-source. Tech has boomed in the last 10-20 years because of FOSS. Linux is the leading server and mobile platform. Security research is where it is because of FOSS projects like OpenBSD. Practically all of our development tools are free software. Our web servers, our browsers, our JS engines, our UI toolkits, that's right, it's all FOSS. If you want an example of what happens without FOSS, look at the state of messaging.
> You don't realize the ton of stuff you need to learn to do half of what you can do with HTML + CSS + JS with Qt. It's huge.
> they don't have the time to learn a whole new tech or API.
They absolutely do, and if their jobs don't give them time for professional development, they should find ones that do. Lots of professions get this: nurses, teachers, attorneys, managers. This is in line with my "professional standards" rant above.
> - then you have to learn all the edge cases for packaging, distributing, updating and maintaining this software. On multiple OSes.
> - then, if you need a tutorial or a doc to help you, how does it compare to the web stack?
> - then, if you need to add new people to the team?
> - then, if you need a custom widget? Or to reuse something that has already been done?
> - then the licence? The version conflicts?
I guess just look at Qt projects like Telegram Desktop. Hey look, they managed to distribute their code. Hey look, custom widgets. Hey look, a lot of people worked on it. Or even stuff like GitHub Desktop, written using WPF. Or major desktop apps like Photoshop, Nuendo, etc.
The Qt license comes up all the time, and the answer is seriously just: don't link your app statically. Or advocate for your product to be open source. Or just Google LGPL.
> If we want native clean apps back in the game, we need to make it easy and convenient for the new generation of devs to provide the features the new generation of users want.
I view this from the "professional standards" perspective as well. I don't need to change my values and prioritize developer productivity above everything else; in fact, I think that's disastrous. Instead, I think we need to advocate our values to "the new generation of devs": be willing to learn new things, take responsibility for the major role your work plays in society, realize none of this would be possible without FOSS, and consider that "better" doesn't just mean "I shipped faster".
> Otherwise, we have lost and we'll be dinosaurs, while people will just buy more powerful computers because "32gb of ram is not enough to display all those albums, you need to upgrade".
Hey, I like dinosaurs :)
But sentences like this one are a good example of why Electron succeeded:
> The Qt license comes up all the time, and it's seriously just don't link your app statically.
The last 3 people I met coding Electron have absolutely no idea what linking is.
You are, quite literally, overqualified to understand Electron's success.
Re-reading your posts, I kind of get the feeling we're on the same side. While I think it's important to create a better culture in software dev, I also think it's probably just as important to make it easy to write a program "the right way". Right now, it's just way, way too hard.
I kind of blame Electron for making it easy. It's sort of a trick, right: "hey, I don't know what those C++ people are doing, just use JS, it's super easy". I don't necessarily think projects should post a table of pros and cons of their software -- there's definitely a certain level of "do your homework" involved -- but I do think some level of self-awareness is warranted. Think of SQLite's "Appropriate Uses For SQLite" page.
Yeah, thinking about it a little more, I super agree. I don't think we should compromise all our values, but we definitely need to spend some time making software dev a lot easier.
Nobody ever learns from history, and it is full of examples of what happens in an industry where speed is valued more than quality.
The debate here is about understanding why people are using it right now. Without acknowledging the reality, you can't change it.
The reality is already well-acknowledged. What would you do next? Surely, trying to beat people over the head with this reality via your 10+ comments in this thread isn't a next step; it's more like infinitely repeating the acknowledging part.
I'd definitely volunteer to make Qt more dev-friendly. As it is, though, I am too late, because my personal life and health are in dire need of attention and money. So I'll sadly leave it to others. But if these others only repeat the reality, then what hope do we have, I wonder.
Business interests and the resulting pressure have been a fact ever since first written history. Again, the reality is well-acknowledged for a LONG time now.
I work on Wunderlist. One of our key differentiators, the thing customers love, love, love, is native apps on the major platforms (iOS, macOS, Android, Windows), in addition to an outstanding Web app.
We quickly got several million customers, were acquired by Microsoft for "an undisclosed amount".
Users don't care what the app framework is. They just want something that isn't in a browser and does stuff like notifications. That's what native means to them.
They don't care what the framework is. They do care, strongly, about how native something feels. Slack doesn't in lots of naggy and laggy little ways. Look up the "uncanny valley" of native apps.
Complaining is a necessary first step in finding a better solution. Before being able to solve a problem, you need to first articulate what the problem is.
I really miss the simplicity of a self-contained SWF file that could be embedded in an HTML page, but was usually also self-contained and could be downloaded and run on its own. And of course they were tiny and fast. And I could run them on Linux, or even on my Palm Pilot. Except for battery consumption, were they really that bad?
But I get your point, it was really handy. Only this year are HTML5 videos starting to be on par with Flash videos. Those were so light to download.
And, based on the direction things are going, I think it's only a matter of time until you can't even select text properly or scroll in a predictable way.
I'm hoping react-native like dev experiences will make us re-think how we can have our cake and eat it too.
did anyone ever actually say this?
> why are people writing this stuff in Python. It's so slow and you don't have as much control on memory. Write it in Java.
this one is odd to me considering Python came first...
WordPerfect is famous for failing because they stuck to assembly while the competition moved to C.
> this one is odd to me considering Python came first...
It is indeed. But date of creation has no importance in this. Java became popular first, and is faster and consumes less memory than Python. I'm a Python dev now, and when I picked up the language more than a decade ago, people were looking at me like I was crazy.
>did anyone ever actually say this?
Yes, in the 8bit era.
The market is asking for more and more software, with more and more complex features. And it must be usable on many platforms, by users incapable of writing their name correctly, let alone comprehending a computer.
In the meantime you have a few more trained programmers, but not that many more. And most of them are not remotely good enough to provide fast, reliable and usable software. The ones that can are expensive.
In this situation, any shortcut you can take, you take.
It's like the quality of food or kitchenware. You want 1000 kinds of fruit available all year long? OK, but the quality will suck. You want everybody to be able to afford 10 machines to do every single thing for them? Sure, but they'll break in 2 years.
My mother kept her machines for 20 years, and I can seldom keep mine for 5. I ate tasty tomatoes 25 years ago. Today I have to look for them with expert knowledge and a bag of money, or they'll taste like plastic.
As long as everybody wants a piece of the cake but nobody wants to pay for quality or to wait, well, you'll end up in those situations.
For software, this is only going to get worse. Every year, I get paid more and more, I accumulate more knowledge that the newcomers are struggling with. I refuse work. And I don't get penalized for any shortcut I take.
This situation is not going to end well.
But 160MB for a chat app? That's insane.
(The answer to both is no.)
When did we move on from this stage? Desktop Java is a failure for good reason. Electron is an order of magnitude slower again.
Most widely used desktop apps are still written in C/C++.
A good example is the tech competence of the general population. Computer savvy or proudly dismissive? Now we have seen the technological improvements that allow people without tech skills to perform all sorts of tasks that they couldn't before.
The drawback is that the general population is at the mercy of the providers of these magic tools. If the provider decides to increase prices or move to a subscription model, they have to accept it. If the provider adds tracking & analytics and other privacy-invasive functionality they will be outraged, and then accept it. If the provider updates their app and puts features behind a paywall or completely redesigns the UI because their designers were getting bored, those people will have to grin and bear it.
e.g: Microsoft has decided to transform Windows into spyware: they can run code remotely, log keystrokes and upload them to their cloud, etc.
Most people either don't have a clue or can't do anything about it, because they outsourced their software skills to Microsoft, which worked well enough until Microsoft decided that their interests are not aligned with those of their customers.
Condemned to go along with any decisions made by Google, GitHub, Apple or other companies that they have no influence over, because by not challenging themselves to learn, they gave up their freedom.
You mentioned that (some? most?) web developers don't even know what linking is, and you support going down the same false path, dumbing things down until they are understandable to those who didn't take the time to educate themselves. This will only result in a generation of helpless individuals cobbling components together in a way just decent enough to call it software.
A new kind of browser engine, not based on any existing engine, allowing developers to use a subset of the web (WebGL, 2D, layout) without being forced to use a bloated ecosystem (and which also works on mobile and low-end devices).
Previous HN discussion : https://news.ycombinator.com/item?id=6314961
We're French, and it's indeed hard for us to get this perfectly right while focusing on the project itself at the same time.
We're planning to move this to GitHub in order to let the community help us with it.
English is the most widely spoken second language.
- "They're" vs "their" and similar mistakes normally denote an uneducated native English speaker. The code might still be okay, but these typos passing into production make me check code quality, in case similar QA mistakes have passed into the code.
- Grammar mistakes normally reveal a foreigner for whom English is not the main language. A small example just in the first line: "to create apps, games that run" => "to create apps and games that run"; this is normally due to different language constructions. Similar to "graphical softwares" => "graphical software".
So, kudos to the OP for writing the documentation in a non-native language! (Upon light investigation, it seems the main writer is French and his coworkers fix typos from time to time.)
In Germany, only grammar Nazis would actually hate such things. They are too common to say that only uneducated people make them.
People who write code and distribute it to the whole world are not writers; there is no need for "perfect" grammar and zero typos. I mean, in my first application I wrote "quanity" instead of "quantity" (inside the code); somehow it slipped into a lot of places, but I think many, many people who read the code would still understand it.
As long as the user interface is checked, I think it's okay if the readme/documentation has typos or grammar mistakes (on open source projects).
The most important thing is that people understand each other and understand what it's all about. Which is just fine for nidium.
- The writer didn't do proofreading (or did it but is not aware of the mistake).
- It's made by a single person with no one else to check on the documentation.
Now, while they are not technically a display of low code quality, they are normally indicative of low attention to detail and lonely programming, which is not the optimal situation for high code quality. For people using English as a second language I'm a lot more forgiving, since they and their coworkers might not even know how to properly express some things due to translation issues (and not due to lack of proofreading/lonely programming).
In my experience when I'm doing proofreading of my own libraries I also catch ambiguous sentences, lacking documentation or examples and many other things that could be improved upon.
Is this abnormally large for a project like this?
- Mozilla SpiderMonkey
- Google Skia
And various other stuff.
But I was hoping for something like Electron/Node.js with a different/smaller engine like Duktape. There seems to be a Chakra version of Node.js, so this should be possible in the future.
The only heavyweight in this constellation remains Chromium...
Also, nidium focuses on mobile devices.
Are the needs of specific applications (or games!) influencing the feature set of Nidium?
Given the discussion of the recent ReactXP thread, I'm guessing most people don't realize you can do this today. React Native supports macOS and UWP.
Edit: There's even some early work on Ubuntu support.
I think the choice is obvious at this point, if the Windows version is complete and stable, I will start using it instead of Electron.
I would love to use React, but if this is not addressed, there's no way I'm ignoring half of the users in the Windows ecosystem.
Is that the case or am I wrong? Can you run these React Native forks on Windows 7?
EDIT: Win 7 usage might be higher than is widely reported
Do you know if they have anything for linux yet?
You can also currently use React to share code across iOS, Android, UWP, macOS, web, VR and a bunch of other stuff that is there mostly for fun (like React Hardware and React Sketch.app).
There's also experimental work on defining a set of primitives that works across all of these platforms.
To be honest, we would have never built this app without Electron. It allowed us to have a small team (2 devs) ship the first version of the app without learning an entire new dev toolchain on multiple platforms. There certainly can be performance issues with building on Chromium/Node but those can be pretty easily addressed.
We even had an early engineer with tons of Cocoa/ObjC programming experience and they still preferred shipping with Electron.
The HN crowd complaining about Electron kind of feels like the early criticism of Dropbox or the iPod. "This is technologically inferior and has fewer features so it is clearly worthless."
For some reason there's a deep-rooted philosophical opposition to Electron-like technologies from lots of hackers. But I predict it will be like Java-- everyone "hates" the JVM for years and calls it slow and then one day it's running nearly every mobile device and being used across Google.
It's not crazy to think Electron will evolve like this-- it certainly has the momentum. And the ideas of a web-based desktop app platform have been pushed by Google, Apple, and Microsoft in the past. Seems like we're finally getting there...
(For the curious: https://github.com/nylas/nylas-mail )
Many people use laptops, and their battery life matters a lot, to name one "minor" factor. Not to mention that, in a technical sense, your app really shouldn't take any CPU while idling. A blocking socket wait is something even a Raspberry Pi Zero handles without going above 1% CPU.
At any rate, other people in this thread explained it much better than I could: https://news.ycombinator.com/item?id=14091782
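The claim that a blocking socket wait costs essentially no CPU is easy to check with the standard library: compare CPU time against wall-clock time around a `select` that times out (a sketch; the 0.5 s timeout is an arbitrary choice of mine).

```python
import select
import socket
import time

# A connected socket pair; nothing is ever sent, so select() just blocks
# in the kernel until the timeout expires.
a, b = socket.socketpair()

wall_start = time.monotonic()
cpu_start = time.process_time()
ready, _, _ = select.select([a], [], [], 0.5)   # blocks ~0.5 s
cpu_used = time.process_time() - cpu_start
wall_used = time.monotonic() - wall_start

a.close()
b.close()
# Roughly half a second of wall time passes, but almost none is CPU time.
print(f"wall={wall_used:.2f}s cpu={cpu_used:.4f}s ready={ready}")
```

The process spends the wait asleep in the kernel, which is why an idle chat client has no excuse for a double-digit CPU reading.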
I understand that you're frustrated, but this kind of language does not foster productive discourse.
As I understand it, Nylas (grinich's OSS Email product/platform) wouldn't have been possible to build with a two-person dev team had they chosen to implement it in something other than Electron.
Are you suggesting we're better off without competitors to the existing email desktop-clients?
This is a false dichotomy and you are aware of that fact. This polarization shouldn't exist in the first place. And it can be eliminated with a small time investment.
As developers of important apps, your contribution to the world influences culture. Others follow in your wake. Is that really the influence you want to have on other programmers and business people? Time to market is not the only priority.
That you treat non-flattering language as me being frustrated tells me you will not understand that side of the argument, however. So let's agree to disagree and move on.
You're right; I was being presumptuous about your emotional state. I apologize. -_-
Let me try to clarify: Companies like Slack and Nylas and Spotify understand the performance implications of using Electron and it _is_ a real problem, but the _benefits_ of using something like Electron, at least currently, outweigh the performance troubles.
There's simply a _much_ lower cost to iterate quickly on a web-driven application.
That said, we're all frustrated about the performance implications (users and companies alike), and this is reflected in the direction the industry is taking.
I predict Electron _will_ become less popular in the next year or two as React Native desktop support becomes proven by one or more major players:
- Linux: https://github.com/CanonicalLtd/react-native
- OS X: https://github.com/ptmt/react-native-macos
- Windows: https://github.com/Microsoft/react-native-windows
Upvoted for you being nice and constructive. =) I could've been less harsh myself. Sorry!
> ...the _benefits_ of using something like Electron, at least currently, outweigh the performance troubles.
I get it, man. I've been there in the past. I understand your position.
> There's simply a _much_ lower cost to iterate quickly on a web-driven application...That said, we're all frustrated about the performance implications ...
Not arguing that, that's an undeniable fact.
> ...as React Native desktop support becomes proven by one or more major players:...
Yes. Nothing would make me happier. I started with Windows' crappy MFC, ATL, etc. horrifying libraries, and I've been immensely let down by the promise of the Java Swing community, which never managed to get its act together well enough to give you a quick and easy way to create cross-platform UIs. Desktop UI is like a hobo at a celebrity wedding: everybody knows it's there, everybody feels awkward about it, and nobody wants to improve its situation. And that's been going on for like 20 years now.
We, the slightly older generation of devs, are guilty of this situation as well. We too had to go about our day jobs and find a quick way around.
I guess I was a bit harsh because I expected the next generation to pick up the ball and dribble it better than us... >_< Which is a very wrong and egotistical expectation, I'll be the first to admit.
Just don't give up! Make sure that you educate management periodically. Talk to them, convince them, give them analogical examples from the physical world so they get the problem on their level. Never give up the good fight; educate, iterate, improve.
I loved your app, by the way. I only uninstalled it because I am moving away from Gmail.
After you do that a number of times though, promise me something: please reflect on my words again in the future. Devs at important app shops are influencers for much more than technology. You should try and be a positive influencer of culture. Sometimes it comes with a financial cost and I've been guilty of going the way of the money in the past myself.
There are a number of email apps around; why would I want to use one from developers who put their convenience above mine (and my battery life)?
> But I predict it will be like Java-- everyone "hates" the JVM for years and calls it slow and then one day it's running nearly every mobile device and being used across Google.
Notice how the JVM is dead on the desktop? Notice how Android apps are incredibly slow compared to native ones? Also, Android has never run on the JVM; they created their own VM because the JVM was too slow.
I say this as someone currently writing a native desktop app and someone who doesn't like Android (having shipped substantial Android and iOS apps):
Android apps are not incredibly slow compared to native apps, and the ones that are aren't due to Java. Android has always _used_ the JVM, just not run binaries _on_ the VM – they created their own (Dalvik) because they couldn't ship a Java virtual machine, so they shipped a virtual machine and played bytecode games so the thing that ran wasn't Java bytecode, and it wasn't run on a Java-compatible VM.
Also, Android has and will be running on the JVM, because they're transitioning to OpenJDK.
I still have to wait seconds for simple apps to start up. Apps of a complexity that would have opened instantly on a desktop 20 years ago. Part of that is the speed of Java and part is the extra memory it uses.
>Also, Android has and will be running on the JVM, because they're transitioning to OpenJDK.
Android is switching to the OpenJDK class libraries, not the VM; they've already built their own VM twice to avoid it. I think you're confusing the VM and the libraries everywhere else too.
* Don't use PHP, it's a fractal of bad design - the most popular server-side language before
* Don't use Flash, it's bad for security and accessibility - the most popular authoring tool for animated web apps
* Don't use WordPress, it's a spaghetti mess - the most popular CMS
* Don't use VB6! - the most popular rapid application development tool
The most popular techs are the most practical and easy-to-use ones. I'm not dismissing the arguments; the "don't use" arguments are valid. HOWEVER, you can't just tell people to "do the world a favor and don't use this tech because". That doesn't work. Unless you can release a competing tech that does better, with less work.
Funny how a high priesthood of people who get paid a lot of money would pour scorn on anything that lets the "mundanes" solve problems with computers.
Just did a quick check on my quad-core MBP: with Slack fullscreen and out of view, with nothing animated visible, its CPU usage is 0.1% with spikes up to 2.5%. Switch to a conversation with a single animated party parrot emoji, go to another fullscreen app once again, and CPU usage never drops below a whopping 22%. For an animated parrot cartoon.
I'm not sure if this is a bug in Slack or in Electron. Doing the same in Chrome does not cause any issues, as GIFs seem to be suspended when the window is not in view. CPU usage goes down to almost 0 when you switch away from your GIF-filled Chrome window. For Slack, however, it barely changes. A big shame.
(On that note, I wonder how much power is wasted by party parrots via CPU usage. Where I work at least one is almost permanently visible in any conversation as a reaction emoji, so it's probably not insignificant.)
Holy fuck, can you imagine the supercomputers that were required to display Geocities websites in the 1990s?
It's also worth noting that blog post is from a few months ago. It's possible the Slack team has fixed the issue by now. I wouldn't know, because I deleted the Slack desktop app and haven't looked back.
I have Atom and Mattermost (Slack clone) running right now with very little avg. "energy impact". MM is at about a half gig in memory though.
Personally, I think every software engineer should have a CPU meter of some form running on their machine while developing. It's an essential element of seeing what you're doing. How can you write decent software without even that much visibility into what your computer is up to while your code runs?
These huge CPU sinks making it through to release means nobody even glanced at a CPU graph.
A better solution would be to have CI track CPU usage, so that increases/decreases could be reported over time/per commit.
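A minimal version of such a CI check only needs `resource` from the standard library: record the CPU seconds a benchmark burns and fail the build when it exceeds a stored budget (the `BASELINE` number here is made up for illustration).

```python
import resource

def cpu_seconds():
    # User + system CPU time consumed by this process so far.
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_utime + ru.ru_stime

BASELINE = 2.0                      # hypothetical budget from a past commit

before = cpu_seconds()
sum(i * i for i in range(200_000))  # stand-in for the real benchmark
used = cpu_seconds() - before

regressed = used > BASELINE         # CI would fail the build on True
print(f"cpu={used:.3f}s regressed={regressed}")
```

Storing `used` per commit gives exactly the over-time trend the comment asks for, with no extra tooling beyond the CI script itself.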
No, you should force every developer to have a shitty Core 2 Duo CPU from 10 years ago. A lot of devs are working on new shiny i7s and that will hide a lot of performance problems because it's a top end CPU. Do your testing on a cheap box and if it is still smooth then you can ship your product.
I find them both extremely useful in finding when I'm running wasteful software (either written by myself or someone else).
Every bug looks egregious in hindsight; just because an app has one is no reason to assume the engineers who made it don't bother to test anything.
Also: that particular issue only hurt idle CPU usage, and the usage was something like 13% IIRC. It wasn't exactly the kind of thing that sets off klaxons.
Most bugs are of the form "do X then Y then it doesn't work". These idle CPU bugs are reproduced simply by opening the application.
I accept that 13% idle usage is invisible for most developers - but that's a bit disappointing. I notice stuff like this just by idly glancing at my CPU meter from time to time. An app that's sitting on high idle usage sets off klaxons for me.
I noticed a few moments after startup that the CPU usage jumped to the 5-7% range for a couple seconds, then it notified me that an update was ready (I assume it was downloading/checking/processing that update). I don't know if this is Electron, or just a general trend in desktop apps, but it seems to be getting a lot easier to update them. So when those pesky CPU/memory hog bugs are found, they can be quickly patched. As for the general trade off that comes with running an extra Chromium process, I suppose it's up to users/developers as to whether it is worth it in each case.
(I continue to be amused that everyone records & shares GIFs for "compatibility" and "simplicity", but many of the places they're shared re-convert them into videos.)
You're paying for the codecs, might as well use them.
Similarly, it is way harder to write a desktop app than a browser app. DOM/CSS manipulation, however bad it is, is years ahead of what you can easily do with WPF and friends.
And what about Python for data analysis? A language known to be 10x slower than C has become the de facto standard for a field where code performance supposedly matters most. Again: ease of programming.
The success of Electron only shows that there is a market for a UI library that lets you write native desktop apps just like you do in the browser... Please, someone, write that!
Almost everything done in Python for data analysis is just a thin wrapper on top of a fast C/C++ library: NumPy, scikit-learn, pandas, TensorFlow, etc.
Pretty sure you could do that as easily in PHP, Ruby, Python, Perl before Node.
> And what about Python for data analysis? The one language that is known to be 10x slower than C becomes the de-facto standard for a field where what matters most is code performance. Again: ease of programming.
When code performance matters most, you're probably still using Hadoop, C/C++, Fortran. Python is mostly in competition with R, Matlab, SPSS and Excel. Julia is the new kid on the block, and it was designed to be more performant than Python and R.
PHP is awful but still the best option for getting something simple up very fast, as it is supported by virtually every kind of provider, even free ones.
And still, when we want actual speed, we port it to C++ or at least use Cython.
This doesn't make any sense. It's much easier to create UI with XAML (WPF) because it's designed for that purpose alone. HTML / CSS really isn't.
That is likely not the only reason. A lot of the libraries like NumPy, SciPy, etc. - one of the reasons (but not the only one) why Python is used so much for STEM / data analysis / data science - are written in highly optimized C, C++, or Fortran, with maybe thin Python wrappers over them. So ease of programming alone is not the reason. If it were easy but too slow to run (remember, many such apps process huge amounts of data), then people would not use it.
Edited to change "C" to "C or C++".
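To make the "thin wrapper" point concrete, here's a minimal sketch (not from the thread, and assuming NumPy is installed): the Python-level code is just orchestration, and in the NumPy version the per-element loop runs inside NumPy's compiled C core rather than in the interpreter.

```python
import numpy as np

def dot_pure_python(xs, ys):
    """Interpreted loop: every multiply/add goes through Python bytecode."""
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

def dot_numpy(xs, ys):
    """A single call into NumPy's compiled core; no Python-level loop at all."""
    return float(np.dot(np.asarray(xs), np.asarray(ys)))

# Same result either way; the difference is where the inner loop executes.
xs = [0.5] * 1000
ys = [2.0] * 1000
assert dot_pure_python(xs, ys) == dot_numpy(xs, ys) == 1000.0
```

On inputs large enough to matter, the NumPy version is typically one to two orders of magnitude faster, which is the whole reason the "slow" language gets away with being the front end for a fast compiled back end.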
Having learned Ruby on Rails before Node, I view Node as "screwing around for hours before being able to respond to a simple http request."
Or, people who like closures. Or other aspects of JS. Or of the Node platform (like event-loop programming).
I mean, programming languages and developer tools could be both easy AND high-performance.
We actually have written something similar for our product Elevate Web Builder (commercial, closed-source, written in Delphi), and have considered moving it to Free Pascal in order to allow developers to use it in portable, native desktop applications. The IDE for Elevate Web Builder has to provide a browser-like design-time environment for the WYSIWYG designers, so we created one. It effectively uses a DOM element tree structure like the browser, but doesn't use HTML or CSS. Instead, it surfaces all declarative styling as element properties (with real types, not everything as a string), provides its own layout management that is more application-oriented instead of document-oriented, and has a great batch update functionality that allows you to suspend updates/repaints until you're done making all necessary changes to the element properties of one or more elements.
The reason why it's such a compelling idea for us is because Free Pascal is a blank slate with all of the support code for multi-platform development, and we already have all of the code written for Windows. The reason why we're hesitant is because it's all in Object Pascal, so I'm not really sure if it's a compelling draw for JS developers that want to create desktop applications.
If we ever do something with it, though, we will certainly post it here as a "Show HN".
Developers are often willing and able to produce high-performance code when the barrier to doing so is low enough.
Go to Electron's website and download the demo (called API Demos). It's 3 processes and consumes 0.0 CPU on idle. Is it super efficient space-wise for what it does? Certainly not. And RAM usage is a valid concern. But churning CPU cycles on idle is not Electron's / Chrome's fault. I also suspect that if an app is behaving grossly on Electron, it's probably a bad actor within a generic browser environment, too.
My recommendation would be to not ship simple web apps wrapped in this huge machine without a good reason. But there are good reasons to use Electron or NW. It's just a poor choice when all you need is a thin wrapper around your website.
But I'd also argue that Flash could always be written to be efficient too. The problem with Flash was never the little games people made. The problem was that one tab with a stupid punch-the-monkey banner ad playing somewhere that would sit on 20% CPU even when the tab was in the background. I think Electron is similar, except in the desktop space. Well-written apps seem to be able to keep CPU load down (although they still have a huge download and a huge RAM footprint). But there's been a running trend of noticing some dumb software running somewhere on my machine eating all my CPU. And in the last year or two it's been an Electron app, every time. Just like Flash before it.
Gotta agree with the other response here. If you have shown that what would seem to be much larger and more CPU intensive applications have been able to reliably perform well, why suggest moving away from electron instead of suggesting that users learn how to use it correctly, or perhaps suggesting that electron better educate users on how to use it correctly?
It seems deceptive to conceal that information when your call to action is to ditch Electron for the sake of performance.
Can you please share how you came to the conclusion that Spotify is made with Electron? (hint: it's not)
The author is claiming it's built on Electron, which is false.
1. Electron makes it trivial for developers who don't regularly reason about native performance to release desktop apps. The result: a flood of apps that "don't do much" but use more resources than you'd expect.
2. At the other end, Electron makes complex apps easier to build. The Slack app might be low-to-medium complexity, but the Spotify app has a ton of functionality, and I bet dev effort was orders of magnitude lower than for a native app like iTunes. The result: in this day and age, complex cross-platform apps with higher resource requirements are more likely to be built with Electron.
In short, there's some selection bias going on in both cases, and it isn't directly related to Electron. Electron is just the enabler.
I agree with the author on point 1: performance matters, memory usage matters, and we as web developers should broaden our horizons and learn lower-level languages. I feel that way regardless of Electron though.
On point 2: I spent the last year building a cross-platform continuous-sync app (basically, a Dropbox v1) in Electron. We were a team of 2. I also write C and C++ for a living, and I simply can't imagine building the same thing in a lower-level language in the same timeframe. It's not just a UI concern either: writing a sync engine in Node (the backend half of any Electron app) worked out amazingly well. Node is a great I/O orchestrator. The app runs at < 1% CPU when not actively syncing, with a 40MB backend process and a 70MB frontend process. Reasonable for what it does. Dropbox uses more than double that, although, granted, it's also not native code.
Hell, if you're not afraid of bloating in the codebase, you could use a large number of libraries to put together a basic, working version inside of a day or two. Stretch that to a couple of weeks (at most) and you have a prototype you can release.
The proof is in Atlassian and Microsoft chasing after Slack on this. Very few people came forward to wave the "Let's just go back to IRC" flag.
It's shiny, new, you can drag and drop gifs and files into it, and it runs pretty swiftly (ignoring all of the obvious flaws). Most users, especially less technical ones, would find it a charm to work with. In my experience, they do -- above all of the competitors.
The rest of your points are rather valid. But maybe it's not so much obliviousness as eagerness and impatience, and the reality of the market being as timely and pressing as it currently is. It's a race -- just like the telephone and the radio (for my point, I'm ignoring the scale of impact here).
That's the problem right there. People bringing a DOM to the desktop because the developers were fluent in JS.
> time to market
If there were no drawbacks in terms of complexity/performance/size etc., I'd agree, but when you do end up with a perf issue that is too large, it might be a huge problem to fix. You also only get one chance at a first impression - and poor performance is a huge turnoff. Looking blingy and having all the features doesn't save an app with a 200ms lockup or 10% CPU at idle.
I saw this happen at work. My team were all experts on native mobile development. We could crank out stuff fast, because we knew our tools well. But then some management dude got the idea that we would do it 2-3x faster if we went with Java