Nobody in the last 5-10 years cared about writing desktop apps before Electron came along: there's basically zero money in it, and it's massively expensive, both in actual dev time per feature (easily 10x the cost) and in finding specialist developers who know these dated technologies. And as for Qt: Qt has existed for over two decades. If its massive "Beatles walking off the plane" moment hasn't happened by now, sorry, it's not gonna.
But now? People are making all kinds of great new apps, and more often than not, they come out on all three platforms. People are excited about the desktop again. Electron is so good it's single-handedly revitalizing a platform that two of the largest tech companies in the world are behind, yet couldn't revitalize themselves.
That is a Big Deal.
The underlying issue here is that Electron reduces the barrier to entry for cross-platform development. That is, it's cheaper to build a single cross-platform application in Electron than it is to build two or three native applications, and you can re-use your existing web experience. I can completely understand why companies might choose this approach.
The trade-off — and there is a trade-off — is that Electron applications are shite in comparison with proper native applications. They fail to integrate with the host platform, they are slow, they hog memory and drink power. It's fine to make those trade-offs – in some ways, it's better that you can get an application at all than the alternative of 'no support for your platform'. But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support, and I find it difficult to believe that the costs would be prohibitive.
There may be an interesting economic lesson here: it really is not that easy to externalize costs. It surely can be done (air pollution), but it requires special circumstances for those costs not to be internalized in a different form. (These special circumstances might include information asymmetries, harm to a public good enjoyed by people other than a firm's own customers, etc. -- themselves classic risk factors for market failure.)
By the same token, there probably are some truly externalized costs in this example, but I would expect them to be very minor and indirect. For example, most people probably do not pay the 'true' cost of their electricity. So to the extent Electron wastes power, some of the cost will be internalized in the form of user dissatisfaction. But some will also be externalized, either because the user doesn't know about the extra power consumption, or because the user herself doesn't fully internalize the costs of her power consumption and therefore doesn't care as much as she might if all costs were properly internalized.
Precisely. I don't use Slack in part because its desktop application is irrationally bloated for its use-case. Although I have other reasons as well, they are enduring a cost--a small one to be sure, but presumably non-trivial in aggregate--in people refusing to use their service in part because their desktop application is poor.
But I do agree with the general sentiment underlying the frustration about "externalities" here. As someone who advocates for high-performance, efficient web applications, I have toyed with the question of whether developers should confront the morality of wasting energy by choosing poor, low-performance platforms. Put in a somewhat comedic tone: low-performance software contributes to climate change.
You do have a point though. You make a difference where you can.
Also, the app gets its own entry in the task switcher.
Number of developers who would donate the increased efficiency to "idle": $denominator.
Number of developers who would fill the increased efficiency with more triangles: $numerator.
I'd argue $numerator is sufficiently large that the premise of your joke doesn't hold.
A) This works on all of the platforms that we use
B) Performance could be better on my computer
It's been a LOOOOOONG time since I worked on a laptop where I experienced noticeable performance problems... which is almost entirely because SSDs make it so much less noticeable when you start dipping into swap, unless you're really working out your machine.
For most users, just knowing it will work on their machine is a bigger influence on using the product... and therefore a greater influence on the business... than the performance of that system. That's especially true with a chat system, where the most important feature is that everybody on the team can get access.
Because personally, I keep having performance problems on every laptop I own. Don't even try running in battery-saving mode, seriously.
I've stopped counting the 5+ year old laptops that have to be upgraded because they can't play a full-HD YouTube video smoothly.
Fun anecdote: I had to trial an enterprisey SaaS solution not long ago. A coworker gave me the name and I opened the site on my laptop (on the move outside of work, just taking a quick look).
Their site froze my Firefox for 30 seconds because these idiots put a high-quality full-screen video of a surfer in the background of the main page. Looks cool, doesn't it? https://www.wavefront.com/
Couldn't see the site. I had to be at work, on my top-end workstation, to read it; the video played smoothly there. Needless to say, I didn't take the product seriously.
5+ years ago is that LOOOOOONG time the OP was talking about. It's also unfair to compare the technical capabilities of old hardware, for many reasons. I think the point was that new hardware, _while it's new_, is becoming more and more capable. Any new laptop today, even a budget one, can handle YouTube videos in HD. The problem is that HD today won't be the same HD that exists in 5 years (i.e., 4K), and it's predictable that a budget laptop today will struggle with the 8K content that comes out 5 years from now. This is an old problem (pun intended) and should not be surprising.
> Because personally, I keep having performance problems on all laptops I have.
Selection bias. Programmers who compile code, run VMs or containers, and process tons of data, are not the average consumer laptop use case and have much stricter requirements. Many people are sitting in Facebook, YouTube, Gmail, or Google Docs for most of their day-- and likely inside of Chrome.
Where are the "Chrome is Flash for the desktop" posts?
The idea that Electron gives the vast majority of users a noticeably different experience seems, to me, skewed toward developer usage.
Video takes a surprising amount of power to decode. The cheap CPUs in netbooks have been struggling with it for a decade, especially in battery-saving mode.
Lately they've been getting hardware acceleration just for that: special CPU instructions and drivers just to decode video decently.
Ehhh, you're in one, I think?
As for battery, again, my laptop battery has been little more than a UPS for at least five years.
Never had issues with RAM. I can remember a few friends who bought netbooks with 2GB of memory some years ago; they quickly realized that they simply couldn't run their development environment in that. (I'm talking swap death, where a click that should take 1 second takes an entire minute.)
Just bad tools leading to waste that users notice. Best to avoid them if possible. Not always possible...
Are they all netbooks? Because those were crap the day they came out and even more crap 5 years later. You can find terrible discount desktop machines that can't handle anything just as easily as you can find terrible laptops.
My laptop is 5 years old and the only time I feel a lack of performance is when the Swift compiler fires up. Your example link came up right away (OK, it burns 30% of a core, but I've got 4 of those).
A Core i3/i5 starts at $150; most devices don't have these expensive CPUs.
Even a $1000 MacBook from 5 years ago would have one of the first i3/i5 chips. It would struggle to play 1080p video unless plugged into the wall with the fan spinning hard.
This annoys me a lot, for one of two reasons. Either:
1. Some developers have no idea what performance means -- it's crazy that I have time to notice a spinner when a glorified IRC client starts up or switches tabs, let alone have time to watch it for tens of seconds.
2. If these developers are claiming they have no performance problems, clearly the laptops that can handle modern applications are being hoarded, and I have no way to obtain one of these magical machines. All I can get are recent i7 processors.
With the power that even average machines have now, it beggars belief that we ever see a spinner at all for normal desktop-related stuff.
That's a whole other story... All that big open-source movement for decades, and in 2017 we still depend on proprietary systems to provide chat(!) for a company.
But these electricity costs struck me as only a small part of the broader point I was responding to, which is why I framed the point the way I did.
There is no broader point than converting natural resources into societal infrastructure. Be a responsible member of society. Don't obstinately ship wasteful code. I understand if you legitimately don't know any better, but if you're part of the community that's constantly writing blog posts about computers being so fast that it's OK to burn the end users' CPU and storage just so you don't have to spend a couple more minutes thinking about what you're doing, you're adding harm to the world.
You seem more focused on making a case against shipping inefficient code. And your point is a good one. It's just a little difficult to suss that out, since you're framing it as standing in opposition to my related, but very different, observation about externalities.
Same thing for Web developers.
Getting a new computer will fix the problem though. :-)
GitHub, Slack, Spotify, Microsoft, etc., are all using Electron. As their developers gain experience with the platform and as they experience problems with resource usage, I would expect to see the platform improve. Maybe I'm wrong.
Currently, if your primary target platform is the web but other platforms still matter for your market, going Electron might make sense, because you really don't want to rewrite your business logic natively for each platform.
WebAssembly allows us to use the same native libraries for all our business logic and data models, which, besides being more performant, means we only need to write the UI natively on top of those libraries. We've actually taken (very) old desktop C++ code, compiled it to asm.js, run it in our web app, and rendered its outputs with WebGL onto a canvas, with surprisingly successful results. This makes the prospect of WASM becoming standard across the board very exciting.
Now let's hope we don't decide to replace our native desktop app, that uses this old C++ library, with an electron app of our web app, running the asm.js compiled library :).
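For anyone curious what that pipeline looks like in practice, here's a rough sketch. All the specifics here (the `render_frame` export, the emcc flags, the module name) are illustrative assumptions, not the poster's actual build:

    // Build step (illustrative): compile the legacy C++ into a loadable module.
    //   emcc legacy.cpp -O2 -s MODULARIZE=1 -s EXPORT_NAME=createLegacy \
    //     -s EXPORTED_FUNCTIONS='["_render_frame"]' \
    //     -s EXPORTED_RUNTIME_METHODS='["cwrap"]' -o legacy.js
    declare function createLegacy(): Promise<any>; // provided by the emcc output

    async function drawWithLegacyCode(canvas: HTMLCanvasElement) {
      const mod = await createLegacy();

      // cwrap wraps the exported C function; render_frame(w, h) is assumed
      // to return a pointer into the module heap holding RGBA pixel data.
      const renderFrame = mod.cwrap("render_frame", "number", ["number", "number"]);
      const ptr = renderFrame(canvas.width, canvas.height);
      const pixels = new Uint8Array(mod.HEAPU8.buffer, ptr,
                                    canvas.width * canvas.height * 4);

      // Upload the pixels as a WebGL texture (shader/quad setup elided).
      const gl = canvas.getContext("webgl")!;
      gl.bindTexture(gl.TEXTURE_2D, gl.createTexture());
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, canvas.width, canvas.height,
                    0, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
    }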
No longer laughing.
What machine are you using? I have a 2-year-old ThinkPad; it still gets 10h+ at full brightness with VSCode, compiling, etc. And it's the screen brightness that consumes the most battery.
It used to do 20h, but the main battery is external and can be replaced, which is a good thing. I would really recommend ThinkPads to everyone instead of going for MacBooks, for example.
I'm not even sure whether you're trolling. BBEdit provides a tiny subset of the features VSCode offers. VSCode is not a text editor, it's halfway between a text editor and a traditional IDE.
The difference to now is night and day.
Since both are written with Electron, the difference must obviously be the actual implementation, not the platform itself.
Keep in mind VSCode was written by Microsoft who have decades of experience writing IDEs and text editors, whereas Atom was written by GitHub who are mostly working on the GitHub product.
Could have fooled me.
I just switched to VSCode last month and it's been as nice to use as ST3 (actually even nicer because ST3 didn't have any code intelligence).
I've also used IntelliJ (or rather WebStorm) several times over the years and it always felt too sluggish and obnoxious.
For the record: I'm on Linux and have 32 gigs of RAM. So maybe I have lower standards for memory use and performance.
Insofar as it benefits the budget, you should probably externalize as much as possible. Consumers will provide the fitness function by deciding which products they prefer.
What about when your users don't have a choice?
The world is asking for more and more software, and the resources to provide it aren't keeping up, so people are taking shortcuts.
Want something better? Someone has to pay for it.
For macOS, desktop applications are written in Objective-C, which is C with fast message-passing, and doesn't trade away much speed. Swift is the modern alternative, but it doesn't trade anything for speed either.
For Linux, applications are traditionally written in C with Gtk or in C++ with Qt. Both options are very performant.
For Windows, the main language for a long time was C++, and it remains a supported language. There's movement toward .NET, so Windows is an outlier here. But .NET is generally very performant; it makes some tradeoffs for safety, but it has enough features to stay fast, and its implementation is specifically tuned for desktop applications.
The only major platform with a slow language is Android, and it's well known for its lag.
There's very little desktop software written in Java or Python, and usually those are specialized applications whose users don't really care about the experience, but rather about the functionality.
If I'm about to buy an application for macOS, I always carefully inspect its bundle and try to determine which technologies were used. Unless it's pure Objective-C/Swift, I usually won't buy it. I hope more users would do the same.
That's true now. It wasn't true then. The point still stands.
Although to be fair, I still haven't really tried PyQt out, and I don't like the idea of having to buy a commercial license for it.
Or anything in any other language.
But none of them come close to the benefit of being able to carry a ton of experience from Web UI over to the desktop.
Eventually, every GUI toolkit ends up with a custom MVC framework, a client/server architecture, some kind of DB for persistence, its own implementation of asynchronous event and communication models, and a declarative layer to create the UI without code. In the most advanced ones, this layer separates structure and layout.
Well, guess what: this is what the Web has natively been doing forever.
Since the web is now the most popular platform, with millions of libs and tutorials for it, people just reused that. It just makes sense.
The problem is not the concept. The problem is that we should have driven this effort with a standard to sanely close the gap between the desktop and the web, so that you don't have to spawn a freaking browser-engine OS for every one of your apps.
But no, the web is the only platform with a standard. And it flourished while all the big players created closed gardens with shitty proprietary APIs. And this is the result.
Have you not seen Jurassic Park, for god's sake? Life finds a way.
Web UI is a gazillion shades of shit, please don't dump that rubbish on the desktop. Thank you.
Seriously, how can one take Web UIs, the most cumbersome, unreliable, inconsistent, unreactive UIs, as examples to be followed? That's beyond me.
It's too bad it wasn't more universally adopted, by any of the 3 major platforms (including Linux, where the all-C Gtk+ has become the standard for the most part). Instead, it seems to have found its greatest success in, ironically, small embedded devices. Devices like this simply cannot take the performance hit of something like Electron.
I admit that for someone coming from the web, Electron is a godsend, but to be honest, the responsiveness of the applications leaves something to be desired.
PyQt is literally the best cross-platform desktop GUI going, in any language.
Shrug, I found it much nicer than anything else I'd used, but I've never used WPF (which is single-platform in any case).
> And it was proprietary and it needed a licence.
Neither Qt nor PyQt is proprietary in the usual sense of the word (nor were they 5-6 years ago). If you're using a non-standard definition, it would probably be more productive to use a different word.
> For sure it's not for me given that I find python a pretty average language with the huge handicap of duck typing (and before someone starts, yes, I'm aware of the 'type annotations')
I'm a huge fan of type systems. I wish I could find a UI framework that's anywhere near as nice as PyQt for an ML-family language.
It's this platform on top of a platform that is objectionable from a performance, memory, storage, and integration perspective.
Languages have evolved to change the way we handle constraints like memory, speed, readability, expressivity, etc.
We are arriving at the peak of what languages can bring to the table. Sure, we can improve things here and there, but the huge challenges now are integration, packaging, distribution, updates, communications, multi-tier architectures, and all that.
So we now tweak platforms to help us with that.
But because we didn't see it coming, it's not done in any structured way. It's done exactly the way we've done everything since the beginning of computing: by stitching stuff together, then hitting it hard with a hammer until the job is done.
This is not new. IT is a joke of an engineering field. We hack everything, don't think about the future, and then end up living with the status quo. It has always been like that.
They are an abstraction over OS-level isolation.
Yeah, but until Electron and the like, we seldom shipped desktop apps in anything other than C, C++, Delphi, etc., even after all those decades. Those are all as close to the metal as can be. And in fact C/C++ can be as fast as, or even faster than, hand-rolled assembly most of the time (with few exceptions), so the whole premise is moot.
The few Java desktop apps that were around, people used to hate as memory hogs.
But the thing is, even when I write something for myself, I first write a command line app, then a web service. Never a GUI, because it's such pain.
Gets me the high-level, Smalltalk-ish productivity when I want it (most of the time) and the low-level C efficiency when I need it (sometimes).
Having my cake and eating it, that's me :-)
Quality of the tech is NOT the drive for success here. You are missing the point.
Well, maybe it's better to miss the point, than to succeed by selling crap to people who deserve better?
When are techies gonna stand up for the quality of tech?
When the user notices a quality difference?
You can see everyday that people favor cheapness, easiness and convenience over quality. You would not have so much junk food otherwise.
What I'm saying is "it shouldn't matter" what people favor.
Professionals should still favor quality, even if their customers would just as well have crap (or are OK with crap when it's all they can find).
One of my first commercial projects was a web-content management system written in Objective-C. Customers included Siemens and the German Bundestag.
Another couple of projects were written in WebObjects. If I wanted to, I could use Cappuccino, but I am not a big fan of web/client apps, so I don't.
> Can you make it portable to other OS ?
This product ran on: Solaris, AIX, NeXTStep, Linux, OS X. I think we also had a Windows port.
> Can you reuse 20 years of knowledge, resources and libs ?
In the sense you meant it: yes. Except it's more like 30 years. However, programming skills are (or should be) transportable. With virtually no previous experience, I became lead/architect on a Java project, which succeeded beyond anyone's imagination.
> Can you hire tomorrow 10 experts to help you on it ?
Is this a serious question?
>One of my first commercial projects was a web-content management system written in Objective-C
You certainly didn't use any of your Cocoa widgets for the UI there. It was HTML + CSS.
> This product ran on: Solaris, AIX, NeXTStep, Linux, OS X. I think we also had a Windows port.
Yeah, GNUstep for the GUI on Windows... Is this what you think could be an argument for Electron users?
> In the sense you meant it: yes. Except it's more like 30 years.
Again, bad faith. The world has way, way more code, snippets, tutorials, and docs about HTML + CSS + JS than about any tech based on Objective-C.
Programming knowledge is transferrable, but knowledge of the ecosystem is not, and that is always the most time-consuming part.
> Is this a serious question?
Oh yes, it is. Because, you see, we are living in an era where it's hard to find any good programmer at all, for anything. They are all taken, and they are very expensive.
So on a tech limited to one ecosystem, finding them will be even harder, and even more expensive.
The simple fact that you are pretending it's no big deal (while any company will tell you otherwise, so much so that the GAFAs spend millions just on their recruitment processes) illustrates how much of a troll you are.
It most certainly is not. You just don't know what you're talking about and keep making up new stuff when confronted with actual facts that contradict your fervently held beliefs.
It's not like Smalltalk is a bad language that just happened to have a productive live programming environment.
It's one of the best languages out there, and conceptually stands alongside Lisp et al.
This is generally true, but to be fair the reason is because we design CPUs differently these days. Modern CPUs use instruction sets that are specifically designed to work well with compilers, and aren't meant to be programmed in hand-coded assembly except for a few critical bits deep within OS code. Older CPUs weren't like this.
It still might be possible to write hand-rolled assembly that beats modern compilers, but you probably need to have seriously super-human mental abilities to do it.
You got the causality wrong. Assembly programmer-friendly CPUs died because CPUs which weren't as friendly were faster and cheaper; those same CPUs were instead more amenable as compiler targets.
Became one of my favorite toys. I'd still use it for GUI prototyping if it was FOSS and kept getting extended. I found even lay people could learn it well enough to get stuff done. Long after, I learned what horrible things lay people did with it. Yet, they got work done and got paid without the IT budget and staff they would've preferred. (shrugs)
No, it really hasn't. It was just the way Microsoft proposed businesses write bloated internal enterprise apps, the kind they used to use VB for.
Those are not the same as desktop apps. And no (or very, very few) desktop apps ever turned to C#. Not even MS's own apps, like Office, and surely nothing like Adobe's or countless others.
Besides, it's not JS itself that's the problem (though it took millions and top notch teams to make it fast): it's the web stack on top of it. C# just runs on a thin CLR VM layer -- and the graphics are native.
I mean, if you're going to say Windows Forms and WPF apps are not "desktop apps" then you're going to have to do a lot more than just declare that they aren't.
You're just listing ways that they are different. They both run in a virtual machine that abstracts away the actual machine. You know, the metal in the phrase "close to the metal."
Windows Forms is a wrapper on top of MS Win32 API. And WPF is also based on native widgets wrapped (with some extended with managed code).
In any case, C# apps are not much represented among Windows desktop apps, most of which are written in C++ or similar, and surely all the successful ones are. Can you name successful C# desktop apps? (Not in-house enterprise apps, and no developer tools, please. There, where the users have no choice, even Java does well.) I'll name the successful C++/Delphi/native/etc. ones and we can compare our lists.
>You're just listing ways that they are different. They both run in a virtual machine that abstracts away the actual machine. You know, the metal in the phrase "close to the metal."
A call to a native drawing lib that doesn't pass through 10 layers of abstractions and bizarro architectures is as good as a direct native call. Especially from something like C# that runs circles around JS performance.
But even so, few consider JS to be what makes e.g. Electron slow.
> Yeah, but until Electron and they like, we seldom shipped desktop apps in anything than C, C++, Delphi etc even after all those decades.
So things aren't any different than before. We've just replaced non-C/C++ abstractions written by the platform-owner company with non-C/C++ abstractions written by open-source projects.
This seems pretty much in line with the general industry trend towards the adoption of open-source software.
This statement doesn't even parse.
Everyone who adopted Python 2 on a sizeable codebase is likely stuck there forever, with zero type annotations and none of the new tools available; those codebases will never be ported.
But let's be fair: type-related tooling in Python isn't close to what you have in Java yet. It's just that eventually everything comes around. Java got faster. C++ easier. Python... toolier? Etc.
Python broke all backward compatibility with Python 3 and put all existing sizeable software into a miserable, deprecated state.
I don't recall C++ getting easier. The few tools and IDEs still fail at decent refactoring and code completion. The C++11 movement is adding a few more-or-less useful things, piling on top of the vast amount of already existing complexity.
It does have a steep learning curve, but it's worth it. The number of concurrency bugs alone that I could have avoided if I had been able to use Rust years ago is sad to think about. Java has great concurrency tools, but it doesn't do anything to make sure you're not shooting yourself in the foot.
Your list of 7 JVM languages (both here and in your earlier comment on this submission) seems to be from most widely used to least. Yet in your HN comment from 2 days ago at https://news.ycombinator.com/item?id=14068664 you ordered that list differently, i.e. "Java, Scala, Clojure, Groovy, Kotlin, Ceylon, Frege, etc". Have you changed your mind about the relative adoption of Clojure and Apache Groovy in the last two days?
There's something wrong when a testing framework hacks into a language parser to make the language statement labels have special meanings like function names do, and overload the `|` operator in expressions so data will appear as a table in the source code. "Lovely" isn't the word for that sort of thing.
That is definitely not the whole story. Costs are shared between developers and users. If it's more expensive to develop an app, you can bet it's going to cost users more too.
And if they don't want to, they should be forced to by said society.
Users may not notice the slowness right away, but if they run a bunch of applications at once they probably will, and they're more likely to notice it while running on battery.
Is that true from a user's perspective? The average user I know would not have the idea that Slack is somehow inherently worse than say MS Word or any other truly native app they use everyday. What would Slack gain by integrating better with my Mac?
Slack starts up too fast for me to read the little quotation (penultimate Macbook Pro). I actually wish it was slightly slower because I like those cheesy quotes when I can catch them. Everything feels almost instant. Not bash instant, but as fast as any mainstream messaging app.
I leave it running all the time and get what feels like normal hours-upon-hours of battery life. There are apps and browser tabs I've noticed destroying battery life, but neither Slack nor Atom nor VS Code is on that list for me.
The anti-Electron complaints all kind of feel like the same argument that comes up as each new layer of abstraction gains acceptance. I'm sure there's a lot that Electron can and should improve. Running multiple copies of Chrome does sound awful, but until I read that I didn't know the atrocities it was committing.
Just doesn't feel productive to try to make Electron go away vs. working to improve it or create a better cross-platform abstraction.
It is for this user.
> What would Slack gain by integrating better with my Mac?
Far better resource management, one would hope, for starters. As the article pointed out, when you're measuring your IRC client's memory use in hundreds of megs, there's a problem.
It isn't uncommon for me to need to choose which "essential" apps to turn off to get something done on a maxed-out MBP. So Slack gets turned off, and maybe someone gets annoyed that I don't respond to something, while I run a few VMs in Slack's former memory space.
As far as other things, well, using a platform's capabilities as appropriate is generally considered good form - HCI concerns, and, generally, just people like things to be consistent. I don't know why we're supposed to forget this now.
I personally don't care if Electron goes away or what, but to the extent that I'm forced to run Slack by my cow-orkers' choices, it sure would be nice if it were less of a bucket of ass.
And then there was Slack.
The nice thing about slack is that it will run about anywhere. The app on my phone magically knows if the app is not running on my laptop and will then and only then send me notifications. But, I'm not suggesting this as a solution for you -- best productivity killer I know is having to pay attention to yet another device while trying to get work done.
Depends on your environment, but I know one very effective and successful coworker who just flat-out refuses to use Slack. The number of companies where that's acceptable is probably limited. Unfortunately.
Electron would have you write a web app and package it as an app installed to your Applications folder just like any native app. You'll write this code in JS, HTML, CSS and use Electron APIs.
Fluid also would have you write this in JS, HTML, CSS but use Fluid APIs instead. The two compete for the same role in that sense.
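To make that comparison concrete: the Electron side really is tiny. A minimal, period-appropriate main process is just a Chromium window pointed at your existing web code; a sketch, assuming your app's entry page is `index.html`:

    // main.ts - minimal Electron entry point (sketch).
    import { app, BrowserWindow } from "electron";

    let win: BrowserWindow | null = null; // keep a reference so the window stays alive

    app.on("ready", () => {
      win = new BrowserWindow({ width: 1024, height: 768 });
      // The "app" is just your existing JS/HTML/CSS, loaded into Chromium.
      win.loadURL(`file://${__dirname}/index.html`);
    });

    // Quit when all windows are closed (macOS conventions differ; elided here).
    app.on("window-all-closed", () => app.quit());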
For what it's worth, since you seem to like IRC clients so much better, Slack has a built-in IRC bridge: https://get.slack.help/hc/en-us/articles/201727913-Connect-t.... It works just fine, in my experience.
I use it extensively for everyday Slack, where I don't care much about inline media and inline formatting. And for a text client, weechat + this plugin handle it pretty gracefully.
Not really, unless you're using most of your RAM, or you treat this as a fundamental principle. I've never checked my Slack RAM usage, and I'm sure it's higher than if they'd built a great native app, but I've also never had any performance problem with Slack or any other application (okay, except for Eclipse, but I stopped using that when Android Studio came out).
> It isn't uncommon for me to need to choose which "essential" apps to turn off to get something done on a maxed-out MBP
And there's the key. You have a very specific use case that is probably extremely rare. I'm a full-time developer and as far as I know I've never been low on RAM on my first generation Retina MBP.
It is truly strange to hear being resource constrained being described as a special case in computing.
We really do need to be more mindful of resource usage. I'd rather write my own lightweight clone of something I want than use an Electron-based hog. There are menubar-only apps that come with an entire Electron dependency.
Further, their machines are full of software that runs all the time for no reason, including but not limited to multiple redundant antivirus products trying to scan everything in real time in a vain attempt to prevent the next malware infection from taking hold.
Various services like file-system indexing and virus scans run at inconvenient times and render things slower than before.
Laptops are super prevalent because their portability is more important than power. It's not at all unusual to keep using the same machine for 5-8 years, as long as it still works.
Real people have bad computers with bad specs, and in a massive number of cases the browser is already using a significant portion of the entire computer's resources.
As a native Mac developer, I can drop in things like a predicate editor for defining filters or queries. They come with Cocoa. I can put in sophisticated table views and tree views. I can connect all of this easily to Core Data, which loads objects from a database on demand without my having to write any code.
You see these sorts of shortcuts in the Electron apps I've seen. They don't have a proper GUI for preference configuration like any Mac app, because they can't get that for free. Atom doesn't have a regular file-open dialog. I use that a lot, dropping a file into it to jump to a particular location I have open in Finder (the file manager).
My preferred editor, TextMate, has very few development resources behind it compared to Atom, yet it has a far richer GUI. You get a GUI for creating and editing plugins, not just for editing config files. You get native, rich UIs for selecting fonts and colors. You don't have to write font names and color names in some config file.
There is a certain irony in the claim that stuff like Electron saves you cross-platform code, when what is usually not cross-platform is the GUI, and the Electron apps I've seen have very little of it. Try making something with the complexity of Keynote, Pages, or Numbers in Electron, and I think the lack of a comprehensive set of prefabricated GUI components will start making its mark.
Not to mention the numerous native APIs which exist which you have to duplicate, e.g. for vector graphics, animations, 3D graphics, audio, video, font handling, OpenCL.
How about people with disabilities, or internationalization? You get great native tools for doing that; how do you accomplish it in Electron without reinventing the wheel?
Well, VSCode certainly seems to contradict this.
This includes Chrome and Electron-based stuff.
The problem here is the base platform: Electron and the underlying Chrome/Node.js.
But as consumers we feel it when our browser lags, so browser vendors optimize for speed over memory and CPU, caching the shit out of everything. The Web is a dangerous place, so they further isolate every tab as a process, sandbox them, and keep lots of copies of the same thing in RAM, because a security flaw is a lot more shameful than a memory flaw.
You are right, Electron is a hog. But last I checked, Electron, Chromium, and Node.js were all open source. We can actually make a difference.
Making noise also makes a difference. When someone complained about VSCode's CPU usage from the idle blinking cursor and it blew up on HN, the next month's release had a fix (for all platforms).
The truth is you can't move fast doing native development with different libraries. Electron keeps your dev costs down and allows you to move very fast.
Won't that encourage more companies to cut corners? How can producers of quality products compete when other companies can release prototypes that their paying customers finish for them?
And there's, "This really sucks, its an open source project, I think I can make a difference that would affect millions in a small way but would still make a big impact overall"
Tired of this argument. If you put in the proper effort, your users won't care or notice.
Except for the very old 4.x Linux version (which is native Qt) and comparably old Windows ones, I thought all of the recent versions were web-browser wrappers (of one sort or another) with occasional native widgets around them.
But I don't think the UI implementation has anything to do with why we love it so much. I mean: missing calls and messages every other week, random desync between clients, and all the usual rituals that every group call starts with (you can hear them, accompanied by chants of "can everyone hear me?", "$name, are you here?", "tell me if you see my screen", and "let me drop the call and restart").
My point is, even if I concede native is "better" or whatever, the difference should be negligible for most apps, for all users (let alone "most" users).
The sliders mean that you will trade performance off against ease-of-development. It will be easier to ship a basic Electron/web-stack app than a native one for multiple platforms. However, it will be harder to do this is in a high-performance and well-integrated way.
I went through all this with Cordova back in the day when trying to build cross-platform mobile apps. I was able to ship apps quickly, but at the expense of quality – it's okay to do this so long as you are aware of the trade-off.
I'm just exhausted by the "web apps feel so much worse than native!1" argument, which is an oversimplification and not a rule.
Are the other platforms native? I thought they switched to WebRTC in 2015 so they could just use a simple web-window wrapper for all platforms, just not Linux.
Edit: Mass downvotes; lol ... Just stating my personal experience!
Are they though? The two applications that use the most energy on my Mac - by far - are Steam and Skype. Steam still has trouble with HiDPI and freezes when performing various UI interactions. The number of problems with Skype are uncountable.
I'm currently booted into Windows for work, looking at my current process stats, the top memory consumers are:
* Visual Studio (hodge podge of all kinds of things, 800MB)
* Chrome (215MB)
* Microsoft Intune (presumably native, 114MB)
* GitHub (.NET WPF application, 108MB)
* Explorer (native, 103MB)
* Search Indexer (native, 107MB)
* Lync (who knows, 98MB)
Meanwhile, the supposedly terrible Electron apps:
* Spotify - 58MB
* Slack - 93MB
* VS Code - 60MB
As far as interfaces go, Spotify, Slack and VS Code easily outclass GitHub, Visual Studio, Explorer and Lync in usability.
Here are stats on my (Linux) box:
* atom - ~500MiB (one window)
* slack - ~816MiB
* chrome - ~935MiB (two tabs + hangouts)
* google music electron app - ~500MiB
Might just be an accounting difference. Forked-process applications in particular are very difficult to account for, because even their private/RSS memory may be COW-shared from another process.
If you are using Windows 10, your missing Atom processes will be under Background Processes in Task Manager. For the sake of the argument, I just did a fresh install of Atom and this is what I see on the first run: https://i.imgur.com/0ZRSumF.png. ~220MiB (no files open, zero extensions).
Adding up the private memory of all the processes (7 of them) gets me 194 MB.
It might just be a difference of platforms.
For .NET Core + TypeScript, VS Code is almost at feature parity with full-blown VS, while being an awful lot faster. The only thing I find particularly lacking is debugging, but even that is coming along well.
You missed this qualifier in the parent comment.
The right benchmark for VSCode is not Visual Studio, but Notepad++ (5.9MB on my system right now).
Notepad2 = 1792 kB.
P.S. Pay attention, this is kilobytes, not bytes :D
I don't care about the difference of 175 MB of RAM between the two, as one of them (Atom) is infinitely more useful to me than the other (Notepad++).
Maybe for one of the processes, but on Windows Spotify usually needs at least three processes to run (five if you count the Web Helper and Crash Service, which are probably native code). On my machine the three main Spotify.exe processes take up at least 170MB of RAM, often more. Although I wasn't aware they were using Electron, as their app has a standard, native Windows menu bar.
We straight-up would not have shipped it without Electron, and the CPU it uses to sync is on par with apps like Apple Mail & Thunderbird.
> Nylas Mail - The best free email app | Nylas - The best free email app
What exactly makes it "best"? It looks to offer nothing more than other "best" mail apps.
• It's got pretty much all the power features out there like Snoozing, Open Tracking, Send Later, Reminders, Enriched Contacts (i.e. Rapportive), Unified Inbox, Swipe Actions, Templates, etc.
• It's cross-platform for Mac, Windows & Linux with custom UI styles for each.
• It works with all mail providers including Gmail, Yahoo, iCloud, Outlook and even vanilla IMAP and on-prem Exchange servers.
• It syncs your data directly (not via a cloud service) for speed and security.
• It works offline, so you can use it on a plane or when you don't have WiFi.
• It's open source GPL available on GitHub with >20k stars: https://github.com/nylas/nylas-mail
• It's free.
It's also still improving and has over 800 GitHub issues and we would love help from anyone who wants to make email better! :)
I see from the screenshots that Nylas has folders and labels. Can I use either of these in the following fashion?
- I can have a tree structure of them
- an email can be in two separate folders/labels at the same time
- folders/labels can be configured to learn which emails to automatically sort into themselves, based on the email contents, by dragging and dropping emails into or out of them
Ball's in your court.
E: Bonus round! In this screenshot there's only 6 emails in the list: https://www.nylas.com/static/img/nylas-mail/hero_graphic_mac... Is there a way to get a list of emails where each line is actually only a line of text tall?
• If by "tree structure" you mean a folder hierarchy, yep that's supported. I think we have a current bug with dragging nested subfolders but we're working on a fix. (Surprisingly >99% of users have a flat hierarchy.)
• A thread can certainly be in two separate folders (e.g. Inbox and Sent) but an individual message can't be in two folders at once. In that situation there are two copies on the actual mail server. For Gmail/Gsuite this is possible via labels where any thread can have an arbitrary number of labels. We support both systems.
• "labels can be configured to learn which emails to automatically sort into themselves, based on the email contents" -- this is a really cool idea and something we've talked about internally. AFAIK there is no cross-platform mail client that does this today beyond things like manual Gmail filters. It could also be an interesting plugin that anyone could build on NM. We have a Slack chat room where folks discuss stuff like this if you're interested: http://slack-invite.nylas.com/
• And for your bonus round (haha) yes there are 2 different ways to configure the UI. One of them is 3-pane with a reading mode like Outlook, and the other is 2-pane that navigates like Gmail. http://i.imgur.com/Lt0x7O4.png
Also in 3-pane if you make the message list wide enough it will switch into the compact version: http://i.imgur.com/SaGp9eV.png
(Obviously it will show your real mail data. We have a "screenshot mode" for sharing stuff like this without revealing sensitive information.)
> Surprisingly >99% of users have a flat hierarchy
You tend to end up with one only after really long-term usage. All the folders with sub-folders I have, I got after they grew too big to be just one; e.g. "Perl coding stuff" has several subs, as do "Shopping", "Clients", "Computer Game Emails", etc. Some of those have additional subs. All started out as a single folder, though.
> threads, not singular mails
Ok, fair enough.
> labels auto-learning by drag&drop ... AFAIK there is no cross-platform mail client that does this today beyond things like manual Gmail filters
Opera M2 has done it extremely well since ~2000. Google Inbox does it... eh. Mobile and PC, none, right. The filtering is honestly super easy to implement: it's a Bayesian filter. In older email clients those were used to filter out spam. Opera M2 simply gives each folder one (user-configurable) and runs all the filters on each mail that comes in.
And to be fully honest here, I still use Opera 12 as my main browser, along with its mail client, and I don't see myself jumping ship... anytime, really, since for me the combination of mail client and browser is key. But for me to respect an email client, I expect it to be at least a feature match for Opera M2.
Not interested in Slack. If you had an IRC channel though, I wouldn't need to sacrifice a chicken and a CPU core. :)
Ok, that looks fine. I personally prefer to have the email below the mail list, but that's not a huge thing. Maybe an option to consider. Screenshot mode is cute. :)
I haven't tried Opera M2 -- I'll check it out. Might be a fun hackathon project to train a Bayesian filter on every folder and auto-suggest routing, at least.
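For what it's worth, the core of that hackathon project would be tiny. A hedged sketch (toy tokenizer, add-one smoothing, all names hypothetical):

    // Toy per-folder naive Bayes router, in the spirit of Opera M2's
    // per-folder filters: train on drag-and-drop, score mail on arrival.
    const tokenize = (text: string) =>
      text.toLowerCase().split(/[^a-z0-9]+/).filter(w => w.length > 2);

    class FolderFilter {
      private words = new Map<string, number>();
      private total = 0;

      train(mail: string) {
        for (const w of tokenize(mail)) {
          this.words.set(w, (this.words.get(w) ?? 0) + 1);
          this.total++;
        }
      }

      // Log-likelihood of the mail under this folder's word distribution,
      // with add-one smoothing so unseen words don't zero everything out.
      score(mail: string, vocabSize: number): number {
        let logp = 0;
        for (const w of tokenize(mail)) {
          logp += Math.log(((this.words.get(w) ?? 0) + 1) / (this.total + vocabSize));
        }
        return logp;
      }
    }

    // Suggest the best-scoring folder for an incoming mail.
    function route(mail: string, folders: Map<string, FolderFilter>, vocabSize: number) {
      let best = "", bestScore = -Infinity;
      for (const [name, filter] of folders) {
        const s = filter.score(mail, vocabSize);
        if (s > bestScore) { bestScore = s; best = name; }
      }
      return best;
    }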
There was a big IBM Research study a few years ago that showed it's dramatically more efficient to search email versus categorizing messages into folders. Here's a link to the full paper: http://people.ucsc.edu/~swhittak/papers/chi2011_refinding_em...
With electron, every OS is a third class port.
Chrome kills me. :(
So Steam was one of the first "Electron" apps. An even earlier one was Windows Explorer, as of the Shell Update that came with Internet Explorer for Windows 95 (included by default in Win98). All the sidebars of Explorer were HTML-based.
Here's a fun one. Start Steam with `-dev` and hit F7. Widget factory VGUI edition!
Oh also, https://developer.valvesoftware.com/wiki/VGUI_Documentation
Valve initially used a very obscure/niche HTML rendering engine for Steam (2006). The company/website behind it isn't online anymore. An older version of the wiki had some brief info, but all of that info has vanished.
Here's an old revision from 2005 by a Valve employee confirming Steam used VGUI back then.
search for HTML: http://www.plastic-warfare.com/SteamUIGray.zip
Funny how the old things stay online. Notice also the cyber cafés menu entry. http://www.steampowered.com/status/game_stats.html
Also, WinXP used a forked Trident engine (with some features removed) for the "Software" dialog and various other features (Windows Help, etc.).
> The underlying issue here is that Electron reduces the barrier to entry for cross-platform development.
> The trade-off — and there is a trade-off — is that Electron applications are shite in comparison with proper native applications.
But native applications are shite in terms of portability.
> But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support
See the part where the original post said "it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies". The costs may not be "prohibitive", but they certainly would multiply effort and resources, and divide profit.
Lately there are more UI-ish apps I value having everywhere including desktop (Spotify, Hipchat, Whatsapp, VSCode). I'd also love a decent cross-plat podcasts app.
I think it's clear there's now more demand for certain types of consumer-ish desktop apps (chat apps & music apps especially) than there was a few years ago.
Just as a counter-point, because native app fans often make this point as though it is universally recognized to be a good thing.
I don't want apps to integrate with the host platform. The host platform is not the thing I care about. I use several host platforms in different contexts (I have work and home computers and a smartphone, they all run different OSes) and I would prefer that Slack look like Slack and not have buttons in different places with different UI interactions just because that's the way Reminders.app works.
For me, the web is the host platform I care about. It's the one that I can use anywhere and only have to remember the URL.
* Text selection
* Caret behaviour (e.g. Option-arrows on macOS)
* Spell check
* Open/save dialogs
* File system access
* Window management
* Accessibility (screen reader support etc.)
* Standard right click menus
* Indexing (e.g. Spotlight on macOS)
You may be thinking of native UI idioms, which even Apple threw out the window several years ago.
Electron apps are mostly very good at the things in the above list, because the Chromium web renderer has spent years abstracting the mechanisms needed to feel native where it matters.
Non-native toolkits such as Swing and Qt also spent years trying to achieve native look/feel, mostly through emulation and host OS detection, and they still feel pretty crappy compared to Electron apps.
Slack, Spotify and friends do a good job of inventing their own "web but native-feeling" UI. An example of the exact opposite is Google Docs, which still, for all its technological impressiveness, feels like a crummy Swing app trapped in a web page. For example, Google Docs renders its own right-click context menus, which look and feel nothing like native context menus. Google Docs' mini-apps also have a menu bar and a toolbar, but it's part of the host window, so you get two levels of menu bars and tool bars.
A web site inside a browser can't resemble that, and all the hacks that try don't get very close.
To do the same in Microsoft Office, you need to dick around with OneDrive and/or SharePoint. The last decade or so, I've only touched Office when someone sends me some .xls or .ppt file and I'm being lazy and just want to view it.
It's 2017, this is how we work now. My colleagues (literally) across the world are not going to connect to some shared NFS drive or whatever via VPN to store documents.
One drive for your personal documents, only for you. One drive for your team, visible and editable only by people on your team. One drive company-wide with common stuff.
You can send a link to your colleagues and it just works! That doesn't support multi-user editing, though. That's how things were done historically.
Google Docs is good for sending a document to a bunch of email addresses and letting people see/edit it. It's terrible for writing longer documents with advanced formatting, pictures, and diagrams.
For the most part I think what people care about is that things work as they expect, which is primarily 1) placement of UI elements, and 2) interaction with/between these elements. If that's done right, nobody cares if the UI is flat, dark, light, or has a leopard print background.
Now I do understand that there's some overlap in ui/skin concerns, but the distinction still seems crucial to me.
For example, the web is clearly not consistent on the 'skin' of things. But I often know where to find things based on their location (header nav menus, footer contact details, etc.), or their general look (loupe for search, wide rectangle for inputs, some kind of wide rectangle with a doodad on the right-hand side for a drop-down). Or a combination of placement and look (a search input field is an input field in the top right of a typical page).
Even lots of computer-challenged people I know seem to do pretty well in this regard.
But as you say, when it comes to interacting with elements, as long as the developer doesn't override 'native' behavior, a web-solution can be very native.
On the other hand, the vast majority of cross-platform native apps I use often look close to native, but their core inputs, selects, text fields, and so on often feel off.
Honestly, I much prefer a non-native looking app that uses native UI elements over an app that has an 'uncanny valley' native look that is slightly off and UI widgets that don't behave natively.
Clipboard shortcuts working the same is the only one of those that I'm used to enough to be annoyed if it were done differently.
There's something about Mac fans: they are very preoccupied with all of the details of how Macs work. I'm not criticizing that, you like what you like, but you shouldn't be surprised that I don't care about Spotlight indexing.
Shift+arrows — select characters
Option+arrows — jump between words or paragraphs
Cmd+arrows — jump to beginning/end of line or text
Shift+Option+arrows — select words or paragraphs
Option+Backspace — delete one word back
Cmd+A — select all
I actually used all of these except the last one while writing and formatting this comment! Plus clipboard shortcuts.
If you use vim in a terminal 100% of the time, none of those will matter to you because vim invents its own keyboard universe. But if you don't, I don't understand how you can have this opinion.
I get super annoyed with anything that somehow overrides these standard keyboard shortcuts, which happens surprisingly often. Non-native UIs typically have to reimplement them, because modern OSes have made the curious choice of not abstracting them.
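To illustrate what that reimplementation looks like, here's a sketch of the kind of keydown handling a non-native editor ends up writing just for the macOS bindings above (the `editor` object and its methods are hypothetical stand-ins for a real text model):

    // Sketch: re-implementing macOS text-navigation shortcuts by hand.
    interface EditorModel {               // hypothetical text model
      moveWord(dir: -1 | 1, select: boolean): void;
      moveLineEdge(dir: -1 | 1, select: boolean): void;
      deleteWordBack(): void;
      selectAll(): void;
    }

    function onKeyDown(e: KeyboardEvent, editor: EditorModel) {
      const dir = e.key === "ArrowLeft" ? -1 : e.key === "ArrowRight" ? 1 : 0;

      if (dir !== 0 && e.altKey) {
        editor.moveWord(dir, e.shiftKey);        // Option(+Shift)+arrows
        e.preventDefault();
      } else if (dir !== 0 && e.metaKey) {
        editor.moveLineEdge(dir, e.shiftKey);    // Cmd+arrows: line start/end
        e.preventDefault();
      } else if (e.altKey && e.key === "Backspace") {
        editor.deleteWordBack();                 // Option+Backspace
        e.preventDefault();
      } else if (e.metaKey && e.key.toLowerCase() === "a") {
        editor.selectAll();                      // Cmd+A
        e.preventDefault();
      }
      // ...and this is only macOS; Windows/Linux use Ctrl, Home/End, etc.
    }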
I don't think it is a "Mac fans" thing. The exact same principles apply to Windows. Even to Linux, although the keyboard standardization there is next to non-existent. (I don't use Linux desktops often, but when I do, I get really frustrated that the terminals use Ctrl as a meta key instead of Command. So "copy" isn't Cmd+C, it's something like Ctrl+Alt+C.)
> If you use vim in a terminal 100% of the time, none of those will matter to you because vim invents its own keyboard universe.
I do, and this is one of the reasons I've never bothered with all of the details and shortcuts that you like.
Vim attempts to make the best possible text editor. It doesn't let "OS conventions" dictate what makes good text editing experience. What you get from apps staying to strict OS guidelines is a bunch of average -- not terrible but not inspiring -- applications.
2) If you know a platform then you should have no problems knowing how to use it.
It would be ridiculous to have an app from Windows behave exactly the same in Mac OS just because you don't want to remember the difference. You don't want minimize and maximize buttons put on the opposite side of all other mac apps because that is how it is on windows. You don't want copy paste in Slack to use Ctrl rather than command key because that is what you do on Windows.
3) Whatever time you save from doing everything the same across platforms would be wasted, for anybody not working cross platform who suddenly have to deal with an app with completely non-standard alien behavior. I want my standard mac hot keys to work in a mac app. I want preferences to be in the standard location. I want my color and font selectors to work the way they work all other places. I want drag and drop to work like in all other Mac apps.
We Mac users have seen this again and again. When companies don't give a shit about our platform, it is usually just a question of time before a competitor arrives which does, and knocks the other guy out. You don't survive that long ignoring the platform unless you got some lock-in advantage.
Why else do you think people make a big point of an app being native Cocoa? It is because they know it sells better, because they know customers want the native well integrated experience.
I don't; I use the Web for 90% of everything I do on a computer. A Chromebook is one of the computers I use most when not working, for precisely this reason.
> Why else do you think people make a big point of an app being native Cocoa? It is because they know it sells better, because they know customers want the native well integrated experience.
I think you're mistaken; the fact that so many companies are switching to Electron is evidence that it doesn't sell better.
Hold on a bit with that assertion.
First: which apps built on Electron are being sold, period? All the ones I'm aware of are open source, like Atom, or front ends to services, like Slack.
Second: which companies are switching to Electron for development? Again, all the Electron apps I'm familiar with are ones that started out that way. While I'm sure there's probably an app or two out there that began as a native client and then went to "let's just be a web wrapper," I don't know of any big ones offhand. (I've come across companies that have shifted their strategy to true native applications, however. Facebook famously shifted their mobile strategy from HTML5/JS to native apps some years back, and I know of several iOS apps that were using "write everything in JS, it'll be great!" toolkits and switched to actual native UIKit.)
Third, and admittedly anecdotally, in both my experience and what I've consistently heard and read from people who've had the opportunity to study the UX of both native and "wrapped web" apps, just because users don't use the language of developers doesn't mean they don't notice when apps are slow, resource-hoggy, and behave kinda weirdly compared to other apps. I run a Slack for a writing group that's mostly populated by non-technical people and it is not uncommon for users to complain about Slack "slowing down their machine." Just because people don't know the term "native app" doesn't mean they aren't going to be able to tell "this app over here is nicer to use than that app over there," and that might be because "that app over there" doesn't minimize properly, or has weird menus that put common things in uncommon places, or doesn't do what they expect when they right-click on selected objects.
Actually, the incentive is even stronger than this. The ability to visit a website and start using an identical version of the app immediately is just as important - I doubt that Slack and Discord would have had half the success they did if users had to download the application before using it, regardless of them being available on all platforms.
Discord has had the ability to give links to others to join a chat server since its inception, resulting in a two-step process to use it: click the link, type a name. This is miles less of a barrier to entry than: click link, download app, find downloads, install app, run app, create account, join server (rough process for most text/voice apps up to this point).
If I were to build an app where cross-platform support was crucial, I'd probably start with Electron as well because of time and budget concerns, and switch to native if the app 'takes off'. But on the other hand, I can imagine there being serious risks to building out an entire platform that way and having to rebuild it from scratch later on. Maybe there's never budget/time for it, leaving me locked in?
I suppose React Native could help in that regard.
I can see both sides of this argument (every time it comes up)
As a consumer of apps, I want the leanest, most minimalist, fastest thing going. I want native apps on my devices (If you think slack is a hog on the desktop you should try it on Windows Phone).
But as a developer, I know that electron is a shortcut that means my app will take less time to build. I can take my existing skills, take work I've already done on a WinJS app and publish it on Mac OS, Linux and Win 7. I can spend more time with my family, instead of spending all my evenings learning py+qt, or xamarin, or react native, or whatever the new fangled thing is. And I know people will use it.
Hell, I've even got better odds of pushing an electron app than a native one, as I can submit a pull request and maybe have it appear on https://electron.atom.io/
Obviously, seeing the Electron hate always gives me pause for thought, but at the end of the day it feels like the hatred is from fellow coders (if a dribbling front-end-js writing low-life such as myself can call you writing-assembly-on-a-napkin-while-you-quote-stallman-types fellow coders) and my apps user-base is overwhelmingly non-technical.
I'm an embedded devices programmer and I'm proud of it. Proud to know a little bit about my hardware, and proud to get the most out of it. And when I find a more efficient way to do something, I'm ashamed I didn't do it that way from the start: it's not an optimisation but a bug fix.
Proper this and proper that, and don't get me wrong, I agree, but parent is correct -- these apps wouldn't even exist if it weren't for Electron. Being a desktop app just wouldn't have been a consideration; instead, they would have just been thrown up on the web. VSCode, for example, would have been some kind of online IDE like Cloud9 or something similar.
If you think this is nonsense, you are out of touch, and I don't mean this as an insult, although I realize there isn't a good way to say that.
Yes. Electron enables some new stuff. New stuff that wouldn't exist otherwise.
Yes. Electron is a bucket of bloat that saddles what should be small, simple apps with enormous amounts of crap that has nothing to do with the app's functionality.
Both of these can be (and are) true.
I wonder if one of the issues is that so many developers have now worked almost exclusively in the 'web' sphere, and aren't aware that native development maybe isn't as difficult as they think.
I've done the latter for a long time now, and only now that I'm trying to teach some people do I properly realize how much knowledge is needed to do it right. It's not just arcane knowledge of the quirks of CSS/HTML/JS, but also tooling, build steps, knowledge of 'expected' web behavior, frameworks, libraries, etc. Much of this complexity is still there, and often worse, if you go for vanilla JS and static HTML/CSS (in part because expectations of a web app are higher these days).
I started learning native iOS development and expected things to be much easier and more sensible, but instead I get the impression that it's not that different.
Xcode is apparently a piece of shit, and everyone tells me to avoid Xcode's interface builder. There's tons to learn about how a project is set up, as well as stuff that just isn't a concern on the web, like packaging it up and submitting it to the App Store (hours spent just figuring out how to correctly supply icons and get a certificate). More than once, as I'm following an online course, the lecturer will say something like 'this might seem like a logical approach, but DON'T DO THIS and do <unintuitive thing x> instead.'
Again, please correct me if I'm wrong. Perhaps when you put it all together it is significantly easier than doing the equivalent on a web platform. I'm just saying that I expected my initial foray into native, in particular Apple's 'walled garden', to be at least a little more like an actual garden rather than the chaotic, exceptions-for-every-rule (but kinda fun!) scrap-heap of the web I am familiar with.
Yet to hear a single non-dev coworker complain about Spotify being "bloated".
The real reason that parties like Slack and Spotify choose Electron is that it's easy for the devs they have, who only really know JS/HTML.
It's not that they couldn't hire more/different devs who could do it in a saner way, and it's not that it's too expensive or that the ROI is not good; when we are talking about companies worth hundreds of millions to billions of dollars focusing on a core market, that is just completely laughable post hoc bullshit.
It's that they don't give a shit, either about the user experience or about improving their toolset. They are happy where they are, and see no reason to change.
I think the issue is time and resources. Small teams, like Slack, would like to create a slick experience but they don't have the time.
Companies like Facebook went the other route -- HTML5 on mobile, got their fingers burnt, and went all-in on native. StackOverflow's iOS app appears to have improved a lot too, in v1 it was a thin shell around the web interface.
I hope React Native catches on. I'm not a huge fan of how 'heavy' Electron is myself.
5 years is huge when it comes to web technologies.
For those of you who aren't old enough to have been around, Sun initially pushed Java as a "write once, run everywhere" GUI language. It quickly became clear that Java applications were ugly and terrible everywhere, even in the primitive days of X11R5, when programs used a mixture of Xt, Motif, Qt, GTK, and raw X11 protocol (xv was awesome). Having a Java program for some task was worse than having no program at all, since it would discourage someone from writing a decent native solution.
Fortunately Sun found ways to make money using Java server-side, and Apple helped kill it client-side by not providing it by default.
I think Java's problem was its Unix engineering roots, with too little focus on UI, and perhaps a little too much of the "we don't care about performance" mindset. The latter is the only problem I see with some Electron apps.
This goes for mobile too.
It's ridiculous to pretend that you have to write 3 distinct codebases to get a multiplatform application. Are there 3 Firefoxes? 3 Chromiums? 3 VLCs? Back in the day, applications like Banshee, which was written in C#, were all the rage and were distributed as core parts of GNOME.
When I went to go build a Windows .dll, I rolled up my sleeves and expected to have a bad day. `cmake -G "Visual Studio 14 2015 Win64"` just went ahead and made a Visual Studio project from my source tree, and that project built and worked first try. I was using all C++14 stuff like std::lock, std::thread, etc, and there wasn't a single #ifdef required in the entire project. Amazing!
Macintosh users in particular are sensitive to things like the placement of and spacing between UI elements; if anything is "off" from the gold standard set forth by Apple, they will scream and bitch at you because perfect UI consistency is paramount with this crowd.
Web-based apps get something of a pass because they look and feel like Web-based apps (though not always; witness the grousing in this thread about the new Mac Spotify client). But things like Qt and the XUL-based Firefox, which try to look native but miss subtle details, fall into a sort of UI uncanny valley and are roundly rejected by the Mac community.
>If you target the native OS widget set, you must have a separate code base for each platform's native widgets.
I want to clarify the subtle distinction here. If your code's concerns are separated, having to directly provide native widgets on some platforms means a different "codebase" for windowing and widgets only, not for everything. You'd still compile your normal code, and use an #ifdef or equivalent to include the appropriate windowing/widget library.
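To make that separation concrete, here's a minimal sketch in TypeScript terms (the point above is about #ifdef in compiled code, but the shape is the same; all names here are made up for illustration): only the windowing/widget layer is per-platform, and everything else sees a single interface.

```typescript
// Hypothetical illustration: only the windowing/widget layer is per-platform.
interface PlatformUI {
  createWindow(title: string): void;
  showNotification(message: string): void;
}

class MacUI implements PlatformUI {
  createWindow(title: string) { /* Cocoa windowing would live here */ }
  showNotification(message: string) { /* Notification Center integration */ }
}

class WindowsUI implements PlatformUI {
  createWindow(title: string) { /* Win32/WinUI windowing would live here */ }
  showNotification(message: string) { /* toast notifications */ }
}

// Everything else is one shared codebase that only sees the interface;
// the per-platform "codebase" is just the two classes above.
function startApp(ui: PlatformUI) {
  ui.createWindow('My App');
  ui.showNotification('Ready.');
}

startApp(process.platform === 'darwin' ? new MacUI() : new WindowsUI());
```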
But that doesn't excuse the abuse of that platform. Spotify used to have a pretty convincing native Mac app, which was spiked in favour of their current abomination, and I've watched performance plummet.
Using Electron as a cost-cutting measure is fine, but it's not good for user experience and it's OK to be honest about that.
It's very far-fetched to call a lower barrier to entry an issue. The easier we can make it for people to get started, the better, isn't it?
Lowering the barrier to entry is great, I agree. It's awesome that Electron and web technologies can be used to quickly launch proof-of-concept desktop apps. I find it significantly less awesome that companies with hundreds or thousands of engineers continue to use it after the concept has been proven, however!
So are you saying that these companies which have implemented these apps should instead say "You know what, we have this app developed, and working on the major platforms - but let's instead devote three new teams, one each for Windows, Mac, and Linux - to re-implement this app natively! I'm sure upper management will agree!"
That's not going to work. That's going to be shot down and laughed at. No company is going to re-implement a working cross-platform application over to three separate native contexts, and maintain all three. That just won't happen. It works already. The users probably like it just fine. What benefit to the company will this get them? Nothing - just more costs for maintenance across three platforms.
If native is wanted by the users, likely what would happen - if it happens at all - will be the company says "Ok - we'll make it native for Windows, maybe even Mac - maybe. Linux? Forget it!"
It's not the developers. Developers would love to make native apps, for all the platforms. But developers are limited by the companies they work for, and by the economic reality that not all platforms can be supported natively; at most, only one or two can - because at that point, with the number of users on those platforms, the costs of maintenance and support are pretty much saturated. Adding additional native platforms doesn't just add to those costs, it actually (in theory) multiplies them - because a single person might use the application across multiple platforms. So if they have problems on two or three different native platforms, there are now three different support issues (needing more people to support) - instead of a single complaint for a single platform.
I get it, though - it would be great if these apps were native, and worked on every platform, from now and into the future (even on platforms that don't exist yet!). That's just not going to happen; if native is wanted, then only the most widely used platforms will be supported, and even then, one of those will likely be dropped, and it won't be the one from Redmond.
So - what can be done? I dunno. The concept the author brings up ("use React Native") might be the solution. Or some other interface that abstracts a platform's OS and other native functionality out to an API that is the same for all platforms. But then you have issues with security and other access - which you still have with Electron, but there it is more contained and constrained, since everything has to go through the Chrome engine and the various rules/settings browsers have for sandboxing bad actors. Or - you leave it to the user and their operating system (and slim it all down - maybe that's what React Native does; I don't know, I've never used it).
Or - you (that is, the company) just says "Sure - we'll do a native only implementation - for Windows only." - because that's how it usually goes.
I don't think that's true in the case of Slack. They only have JS "hacker" webdevs, and those are notoriously resistant to any change to their comfort zone. C++? WPF? Swift? Cocoa? Scary stuff.
Would you like to use an application that would not have been written without a low barrier to entry? Powerful and easy tools are not necessarily the same thing.
Unfortunately, no. Just as a trained electrician will wire up your house so as to not set it on fire as soon as you turn on a lamp, a trained developer will make apps that use a minimal amount of resources.
It's even more hilarious when someone writing Electron apps calls him/herself an engineer.
If yes, what on earth makes you think writing Electron apps is a good idea?
If no, back to my point.
To quickly answer JetJaguar below you, yes, I am an utter cunt, but being called a typical hackernews one hurts, considering how I can't stand most HNers.
I have been trained as a Software Engineer in Istanbul. I am very sorry that my country doesn't fulfill your expectations.
> If yes, what on earth makes you think writing Electron apps is a good idea?
Engineering is about making trade-offs. There's enough discussion here about what those are for writing Electron apps.
> If no, back to my point.
What, "no"? What was your point? ...that I'm not an engineer? As I said, I am one.
> yes, I am an utter cunt, but being called a typical hackernews one hurts, considering how I can't stand most HNers
I don't think you are a cunt. I think you are uninformed and have concrete opinions based on limited or self-fulfilling-prophecy-boosting experience.
No country trains software engineers in the way you describe.
(No offense to any other country's engineers -- the engineer mindset is the same everywhere, but the mindset of the non-engineer in other countries is the distinction. That is, Germany grants engineers a degree of respect, almost reverence, that I've never seen in the US.)
No place in the US does, indeed.
I sympathize with your argument, and I think the field is doing a great job right now of demonstrating some of the upsides of a licensing authority, but actually getting one would be bad IMO.
Most software is not life and death, and licensing authorities, like unions, quickly become gatekeepers that work to prevent competition whilst simultaneously enriching themselves through extortionary means (today, these are mostly indirect because everyone is on the lookout for them, but they are nevertheless still there). There are good arguments that the AMA and ABA have both seriously contributed to the astronomical expense of their respective services.
For engineers, absolutely.
For software engineers, absolutely, because they're still engineers.
Software development? Go wild, anyone can do it.
>licensing authorities, like unions, quickly become gatekeepers that work to prevent competition whilst simultaneously enriching themselves through extortionary means
That seems like a terribly US-centric view that I keep seeing online. Unions in France, as it's the one example I can be certain of, are in no way gatekeepers, and we are a country where they've been immensely powerful when it comes to influencing the state (whether we're talking about workers' unions or CEO unions). But you can get any job without being in a union; all they're doing is bringing everyone onto an equal footing when it comes to negotiations.
Licensing authorities are purely a societal choice. Either you have a numerus clausus, because the end goal is for everyone who passed the selection to have a guaranteed job with good living conditions, or anyone can pass, and good luck to everyone. It works in some cases, doesn't in others.
Subjectively, I'd say that Electron's performance overheads are not bad compared to, say, Smalltalk in the 90s, where one Smalltalk application could bring a fully loaded, state-of-the-art workstation to its knees.
And don't get me started about Flash. I had a whole project cancelled after an engineer brought up the CPU meter during a fairly simple animation.
Chrome is widely considered the best current desktop browser in a very competitive space. If it has problems it's definitely in idle power consumption (which indicates wasted idle CPU) but it is used by a hell of a lot of people who have free alternatives.
There's nothing inherently electron-specific that makes an app "shite" any more than writing one in Qt would. You can write shit in any language, framework or platform. The day someone invents a system that protects us from our own stupidity will be the day humans become obsolete.
> They fail to integrate with the host platform,
Untrue. You can—if you need to—integrate with the host platform by writing a native node module, however it becomes less cross-platform at this point.
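For what it's worth, a fair amount of host integration is already built into Electron's main-process APIs, and the native-module escape hatch looks roughly like this. A hedged sketch: Notification and app.dock are real Electron APIs, but the addon path and its function are hypothetical.

```typescript
// Electron main process. Notification and app.dock are real Electron APIs;
// the addon below is hypothetical.
import { app, Notification } from 'electron';

app.whenReady().then(() => {
  // Native notification through the OS notification center.
  new Notification({ title: 'Ping', body: 'Hello from the host platform' }).show();

  if (process.platform === 'darwin') {
    app.dock?.setBadge('3'); // macOS Dock badge

    // For anything Electron doesn't expose, require a compiled native addon.
    // It would be written in C/C++/Objective-C and built per platform,
    // which is exactly where the cross-platform story ends.
    const native = require('./build/Release/mac_extras.node'); // hypothetical
    native.doPlatformSpecificThing(); // hypothetical function
  }
});
```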
> they are slow, they hog memory and drink power.
Yes, web apps use a bit more memory (sometimes considerably more, depending on what you do) and a bit more CPU (and hence battery),
but they are not perceivably slower, unless you're doing something stupid (in which case the equivalent Qt app would probably suffer in the same way).
> But let's be honest here – there is nothing preventing e.g. Spotify or Slack from building native clients for each platform they support, and I find it difficult to believe that the costs would be prohibitive.
Maybe, maybe not. I think using a cross-platform solution wouldn't be ideal for them (they'd need to either rely on something like Qt, write their own rendering engine, or use something like SFML; all of these are overkill compared to Electron), and the alternative is writing it in different languages for each platform, which would probably inevitably cause the projects to get out of sync in one way or the other over time.
I get the feeling that UX and UI designers for web and mobile apps are just a different breed than their native desktop equivalents. It might just be my Windows bias, though; IIRC Windows didn't have a strong, coherent UX guideline until the current one came around - and I haven't used many modern Windows apps yet; the ones I do use are stuck in the '90s with their button bars and such.
I have a feeling it's more of an experience / competency issue rather than lacking APIs.
Debugging is also something different: https://camo.githubusercontent.com/a0d66cf145fe35cbe5fb34149...
Like hot module reload, where you write your app live. You edit one component, everything else stays in place and maintains state. Or time travel with Redux, where each piece of state is inspectable. You roll back or slide through the app's history and see it literally build and deconstruct itself. That's possible because UI is just a function of state. Same state, same UI.
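In miniature, the "UI is a function of state" idea looks like this. A hedged sketch in plain TypeScript (no actual Redux dependency, just the same shape): because each state is an immutable snapshot, time travel is nothing more than re-rendering from an older entry in a history array.

```typescript
// Minimal Redux-style store with time travel. State is never mutated,
// so 'history' is just an array of snapshots.
type State = { count: number };
type Action = { type: 'increment' } | { type: 'decrement' };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case 'increment': return { count: state.count + 1 };
    case 'decrement': return { count: state.count - 1 };
  }
}

const history: State[] = [{ count: 0 }];

function render(state: State) {
  console.log(`UI for count = ${state.count}`); // stand-in for a real view
}

function dispatch(action: Action) {
  history.push(reducer(history[history.length - 1], action));
  render(history[history.length - 1]);
}

// Time travel: same state in, same UI out, so re-rendering any snapshot
// reproduces the app exactly as it was at that point.
function timeTravel(index: number) {
  render(history[index]);
}

dispatch({ type: 'increment' });
dispatch({ type: 'increment' });
timeTravel(0); // back to the beginning, nothing else to restore
```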
This is despite their mobile apps being the most-used apps, by minutes, of any company's. Even then, they tried to go with an Electron-esque approach and only backed off when the performance and UX trade-offs became unacceptable.
If one of the most profitable companies in the world can't see the business case for supporting a first party desktop ecosystem, it's very hard to believe many other companies have the justification.
This isn't an engineering problem, it's a business problem.
There are some applications that do try - 1Password and GitHub for Mac/Windows, to name a couple that come to mind - but they seem to get less love than the cross-platform web apps. It feels like they get a certain amount of dev time before going into minimal maintenance mode.
In any case, the originator of the phrase, Knuth, specifically said that it related to "small efficiencies".
Nobody would call the inefficiencies that Electron needs to address "small".
There's a good discussion here: http://wiki.c2.com/?PrematureOptimization
EDIT: thanks for the downvotes. I'd love to hear your thoughts on how electron is keeping Slack and Spotify from building a massive business and how their desktop users find the experience so bad they don't use the tools obsessively. Clearly there are things to improve w/ Electron (energy usage), but "terrible experience" is not how I'd describe Spotify and Slack on desktop, and their businesses clearly reflect that.
Moving to a native stack has major tradeoffs, would it 10x their business at least? I strongly believe that answer is "no."
But it's worse than that - most users don't even know why their battery life is awful. So they blame Apple, or Microsoft, or Dell, or whoever. And they just don't use their computers as much, because it's all a bit gross and slow. And that's bad news for our entire industry.
I used the desktop Slack client all of last year, and these kinds of problems were present the whole time. This isn't some "oh, yeah, there was some particularly egregious bug we shipped accidentally in October" thing. Whatever is making the Slack client a bloated ball of crap is much worse than a simple, quickly fixed Chrome bug. It's endemic.
I pay for both despite their poor quality software and bad UI/UX, not because of it.
Spotify makes sense as it can reliably use your file system for storage and thus download songs so you're making fewer network requests. Perhaps Slack could keep a short log to prevent "scrolling up"-related network requests?
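Something like that is cheap to sketch. To be clear, this is not Slack's actual architecture, just the idea: persist the last N messages locally so scrollback doesn't need a network round trip (the file name and types here are made up).

```typescript
import * as fs from 'fs';

// Hypothetical local cache of the most recent messages.
const CACHE_FILE = 'recent-messages.json';
const MAX_MESSAGES = 500;

type Message = { channel: string; user: string; text: string; ts: number };

// On startup, scrollback can be filled from disk instead of the network.
function loadCache(): Message[] {
  try {
    return JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'));
  } catch {
    return []; // first run or corrupt cache: fall back to the network
  }
}

// Keep only the last N messages so the cache stays small.
function appendToCache(messages: Message[], incoming: Message): Message[] {
  const updated = [...messages, incoming].slice(-MAX_MESSAGES);
  fs.writeFileSync(CACHE_FILE, JSON.stringify(updated));
  return updated;
}
```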
A single page web app?
That's all most of these are. Particularly the Slack/Hipchat/Discords of the world.
> People are excited about the Desktop again
If by desktop, you mean "need to be connected to mains power to run for more than 10 minutes", yeah, people are excited for desktops again.
I use the Slack app for Windows, and the value for me is that it gets its own easily identifiable presence in my Windows task bar.
I have a (self-inflicted) problem with tab proliferation, and because I rely heavily on Slack, it's just way more convenient to use the self-contained version than to have it running in a tab buried in one of my browser windows.
It doesn't bother me at all that the Slack app is an over-glorified web browser running the Slack web client.
Probably Firefox also has this.
Here's more info, with images of how it looks:
It kinda should, though, because if the browser component isn't up to date on the regular then it's possible you're looking at (what should be) a simple chat app with a potential RCE.
"actual dev time per feature (easily 10x the cost)"
"And as for Qt, Qt has existed for over two decades"
It is C++, and we've got great alternatives on Windows and OS X. Still, if you have to stick to C++, then Qt is probably the most widely used GUI toolkit.
"People are excited about the Desktop again"
I don't know what bubble you live in. This is just a bunch of hipsters who are excited because they can suddenly use their only programming skills to hack on desktop apps.
And it isn't all that important. Hardly any of the applications I use daily are Electron. I occasionally use Atom. Other than that, great apps like OmniGraffle, Pages, Pixen, TextMate, Ulysses, Marked 2, Charles, Dash, Kaleidoscope, Tower, Keynote, GitUp, 1Password, Magnet, iBooks, ScreenFlow, Terminal, etc. - none of them are Electron-based.
Why is Electron and similar so popular then, if it's easier to build native? Please don't say "because shitty hipsters".
Advanced native tools with a powerful language in the hands of a master programmer will be much more productive for the reasons he cites. The type system in particular, with a well-worn toolbox of primitives can make you extremely effective.
So, web technologies are more effective in aggregate across the entire industry, but native technologies can be more effective in the hands of an experienced single individual.
A higher barrier to entry has that effect. Low quality crap doesn't get in as easily.
>Nobody in the last 5-10 years cared about writing Desktop apps before Electron came along, there's basically zero money in it, and it's massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies.
I find this argument absurd. Desktop (and native mobile) apps are a business worth many tens of billions. Ask Microsoft, Apple, Adobe, and thousands upon thousands of smaller businesses (down to SMEs like Panic and one-man shops like Sublime Text).
If anything, it's those web-based unicorns that are either merely burning VC money or selling the user to advertisers -- in both cases, there is not much money in selling them directly.
And there's nothing about web development that couldn't be achieved just as easily if, instead of all that money going into browser engines and teams to create things like Dart, there had been some effort towards a nice, cross-platform, mobile and desktop UI library.
It could even have JS bindings for all web devs to use -- just without the web stack crap. React Native is something akin to that, but imagine if it had been going for years, and had more industry support, instead of the nth attempt to put lipstick on the web stack.
In the meantime, here's one of those "web-based unicorns" you so despise: https://www.google.com/finance?q=amazon&ei=viTtWMGfNMWQ2Aa42...
You know what's never going to go out of fashion? Performance. Especially since CPU speeds have stalled in the last decade. You will never get good performance if your design is: "embed an entire browser, and then use a small piece of it."
Wut? So the whole Mac App Store, the MS Office suite, Adobe's apps, and much other stuff were just a dream? Until Electron, the desktop was dead? You're bending facts here, and I hope that's because you don't know much about the stuff you're talking about.
Desktop is big; it's just that other things have grown bigger. That doesn't mean the desktop is, or ever was, obsolete or losing its utility.
Googling terms like "shareware" and its history (not the only model - there were, of course, outright paid products too) will turn up some stories. Maybe not a lot, since some of this was before the web and so isn't archived, but enough to get an idea. Jim Button was a classic example, but there were many other indie devs (many of them one-man shows) who may not have made it so big, but made good money from desktop apps.
Google term: shareware jim button
Edited to add:
Electron is just a way to reuse web developers for desktop development - i.e., a way for companies to cut development costs.
It is and was, however, always built on Chromium Embedded Framework (the UI), even when you say it "was" fast.
So according to you Spotify was able to build a full featured client using a beta technology in a month. Impressive.
Anyway, here's the version I'm speaking about, built by the people that developed uTorrent IIRC, by the way: http://static.filehorse.com/screenshots/mp3-and-audio/spotif...
Also even if the current version is not using Electron, it's still using Chrome, so the same argument applies.
Yeah... I opened a 500KB log file in vim and Notepad++ and they are using 5 MB and 7 MB of RAM, respectively. They both also manage to use no measurable amount of CPU (even to blink the cursor!) unless you interact with the window.
Is it really possible? What kind of alien technology is it?
Realistically I would imagine that a native Win32 app (Notepad++) that's totally idle except for message loop and cursor blink requires less than a microsecond of CPU time per wall clock second.
But it does mean you're limited in the things you can do with it - when Office 2013 gained that fancy animated caret, they had to do it themselves; similarly, the carets in Atom and VSCode are both software-based.
Now - I know there are a ton of other options for Linux, but the thing is, I can't jump from system to system and have the same app with the same experience - even if the app was developed as a native app for all of the platforms, because each has a slightly different native GUI implementation and usage which doesn't translate fully between each.
So now I have to learn and use potentially three or more different programs/apps/whatever to do the same task. Or, I have to remember the quirks for each native implementation.
...and let's be honest: Not many companies out there are going to develop a native version of the same app for all the platforms, because most platforms have a lower number of users than others (in many cases, much lower - depending on the genre of the app in question - like games).
It's an economic tradeoff: We either get a balkanized system where for certain kinds or types of apps we need a particular machine for the native implementation, or we have the case of these larger cross-platform apps that anyone can use on any system, in the same manner everywhere.
Here's another thing - most of these complaints seem to have to do with laptop users. I don't really worry about these issues on a desktop, because there I can have a ton of memory and way more CPU than what I can get in most laptops, and I don't have to worry about battery power.
But for those who are stuck with laptops - maybe they need to bother manufacturers to increase the amount of RAM and CPU available, to handle these larger apps.
It's also funny that I hear people complain how these apps are too big and use too many resources for editing text or whatnot; you make the case that vim and Notepad++ use only a few megs of memory, and no CPU.
I tend to wonder how well they'd fare on my old TRS-80 at home - you know, I had a full-screen text editor on it that didn't use much CPU (sub-1 MHz) nor memory (less than 64K) - so why can't we return to that?
Honestly - I don't want to; but we can take this argument down the rabbit hole, because the argument that today's stuff is bloated compared to another case can just as easily be made about today's stuff vs. older stuff. Most of the bloat in your "smaller" example comes from abstraction, the same as the "new bloat" - not many years ago, a program taking up 10 MB of RAM would have been insane. Today, it's normal and expected.
I daresay that in the very near future, programs taking up several hundred meg to a gig or so will also seem normal, because by then we'll have even better CPUs (with maybe hundreds or thousands of cores) and way more RAM (terabytes).
Some might argue that this is the case today, in the form of cloud computing and SaaS - browser-base stuff, in other words.
I would actually pay Spotify extra money if they brought back the old, snappy, pre-Electron version of their Windows desktop app.
Edit: They replaced it with something that is simply too heavy to manage larger playlists (except maybe on the max-spec MBPs it's being coded on), just because they could iterate more quickly on some non-essential features, which they might instead have packed inside a WebView, leaving the essential features intact and responsive.
The fact that a Microsoft editor works on Linux is amazing. Thanks, ultimately, to Electron.
What Electron did is enable all those web devs (even the front-end guys) to write "desktop" apps. And the trade-offs are certainly visible.
I am boycotting Electron. I have zero Electron apps installed, and recommend strongly against it. I hate that web-dev mindset that has been pushed into desktop userland. It just doesn't work that way. Write a native desktop app yourself and you will see how wrong it is.
They are very, very close to each other. Obviously they are not exactly the same, and I used Qt in this example. But they are not wrong, or completely different.
> How is React + browser rendering any different than Qt rendering?
Qt has far less overhead. A small Qt program can be in the single digit megabytes in both disk and RAM consumption, while using near-zero CPU for the UI.
> How is HTML/CSS any different from Qt layout XML?
My understanding is that Qt's layouts are translated to native code at compile time, resulting in native performance and overhead on desktop. Browsers' dynamic rendering is relatively intensive and expensive.
> How is Qt sockets/threading any different from NodeJS sockets/threading?
When it comes to I/O I don't think there's a significant cost to using Node/V8. I'm totally fine with CLI applications written in Node but when it comes to UI, a browser is just too heavy.
Writing an Electron app felt like sketching, at least to me. You place one line of code with tags and boom, there's a button. Qt sits somewhere in between, and it still provides a nice native multiplatform environment that could be appealing to web developers.
Runtime issues aside (bloat, cpu, etc), why is this a bad thing?
I like that I can write a single line of code and "bam!" get a button (or any number of things) to appear. Why would or should I want things to be more difficult for me to develop a piece of software?
Sure, I could do things in some other language - I mean, I know a ton of others. But implementing the same functionality can be a much larger pain in those other languages (and honestly, for app GUI development, I haven't found anything that beats the drag-n-drop editor of VB3/4/5/6 - there was something close to it in Visual Studio for C++, but it still required some manual "hook up" with the code for callbacks and event handling and such).
I mean - if I really wanted to do things "right" - why don't I just whip out my text editor and write assembly for whatever CPU I'm targeting? I'd have full control over everything, then! Best performance! Those guys and their compilers, I tell ya, they don't know what they're missing!
(heh - sometimes they don't - there's a whole generation or more out there who've never coded for a CPU by looking at a datasheet and finding the byte values needed to represent op-codes to hand-assemble a piece of code - sometimes I do miss hacking on the Apple IIe and monitor - CALL -151 ftw)
Anyhow - as someone who's been coding for longer than I really care to say (of course, I kinda gave my age away above!) - I don't want to return to those days; I kinda like living in the future of computing I could only dream about as a kid.
Simplifying in itself is not bad. Improving API with newer and better is not bad. Adding language features to advance productivity is not bad (thinking of C# iterations vs Java). Basing the future on a terrible language, ecosystem, practices and "developer" mindset is very bad.
You presented it like the problem was in the writing of the code; it's not - that's Electron's biggest strength. The problem is the bloat that comes with it in order to provide you with that experience. Like everything, it's a matter of trade-offs. For me, it's not worth it, nor do I like writing HTML and CSS.
Unless they significantly rethink their approach, it's significantly flawed without much room for improvement. There's plenty of room for something better to come along. Even if it saw mass adoption, the demand for a better base would invite an alternative to gain traction against a sluggish goliath, a la Firefox-vs-IE (or Chrome-vs-Firefox).
Why stick with a doomed formula?
These technologies you diss have been reliable and stable for over a decade. As an embedded developer, my code has to be running 24/7 for months without rebooting, without running out of memory, without crashing. But forget embedded, and imagine a web service. Can you truly honestly say that an app written in electron will give me that reliability? Or should we just accept that if you want to write apps using "modern" tech, you'll just have to deal with it? I personally can't imagine any web framework ever maturing and being stable enough to where you can invest money in it knowing that it will be around 10 years from now. Having your underlying technology platform in a constant flux makes your entire product stack brittle. That is a Big Deal.
If developers weren't so scared of Swift and C#, this wouldn't be a problem.
> (writing Desktop apps) is massively expensive, both in terms of actual dev time per feature (easily 10x the cost), and also in finding specialist developers who know these dated technologies.
I find the opposite to be almost universally true.
Writing a lightweight native desktop app is almost always cheaper than trying to build a JS-heavy app that has to run well in Mobile, and Tablets, and in a standard web browser, and in Electron fake-native-desktop web browser. Yes, you have separate projects with separate codebases. But two or three small lightweight projects is almost always cheaper than one big codebase with lots of targets, in terms of total cost of ownership.
I think all these "massive cost" comments come from sheer ignorance. Those "devs" are frightened at the need to learn a new language and frameworks, overestimate the time required to learn them, look around and only see likewise clueless "devs" frightened of change, and extrapolate some comically high overestimation of cost and time. In reality, properly written software is much more accessible to join and support than a web "app" with the contemporary "sexy" observer-pattern nonsense splattered all over, coupled with a horrible, horrible dependency-management system and a language/framework combo that requires multiple dependencies to perform the most trivial of array loops.
Anyone would think it ridiculous if a carpenter told you he only works with saws because he is a "saw carpenter" and that for other kinds of work you should see the "plane" or "sander" carpenter. To me, saying you're an [insert tech here] developer sounds the same...
...and yet there are specialties within like "framer" and "finisher", among others. Then you have interior workers like specialized custom cabinet makers, flooring specialist, drywallers, painters, etc.
What I'm trying to say here is that even in "carpentry" for putting up a house, there are numerous specialties.
When the mere notion of a language change or API/paradigm difference frightens you, you are a bad developer.
I can't speak for others, but for me, the "massive costs" isn't about having to learn another language or framework. It's instead the massive costs to my employer. It might even be a massive cost to me as a single developer.
By developing a cross-platform app using a single set of easy-to-use tools, a large audience of users can be gained, that would otherwise be prohibitively expensive to support if native-only was the mantra. Instead, that application would have to be developed for only one, maybe two of the "major" platforms (and guess which platform it wouldn't be developed for - that would be the platform that I like most).
Supporting and maintaining a single codebase for one platform is a monumental task for any company, let alone a single indie developer. Supporting and maintaining multiple codebases for multiple platforms can be debilitating for a company, let alone a single developer.
I lived and played in those times; back in the "second gen" if you will of the microcomputer days - you had games and apps by different companies, and developers. In most cases, a game or app was only developed for one of those machines (usually the Apple IIe or the C=64, sometimes both - maybe an Atari too), but the other systems were all considered "second tier" by most developers. You might get a port of a game or app - but most times, you had to settle for something else, or buy a second system (ha! only if you had real money! I look back on the costs of those systems back then, and wonder how my parents ever managed it).
There's a reason you see a lot less of that going on today; it isn't because devs are frightened of learning a new language or framework.
meaning he works for a company that heavily leveraged an existing protocol, IRC, for which there are already a metric fuckton of native clients and libraries for every platform under the sun.
Implementing their shitty web client was undoubtedly far more work than supporting a handful of native clients.
The codebase isn't lightweight to begin with, and duplicating it for a native app that maybe only one person could maintain was a non-starter.
What would be a good example of the kind of lightweight project that can reasonably be duplicated for native mobile, native tablet, native desktop, and web browsers?
Couldn't your team learn the technologies? These days there is an abundance of resources available (online tutorials, books, bootcamps, etc), especially for ecosystems as popular as the Appleverse.
I can't pretend to know your situation, but as a reference point we had an iOS project come up at work a couple years ago and I was able to pick up Objective-C and the Apple libs in a few weeks while still being productive on other projects. I followed Apple's official tutorials and built some toy apps, then learned the rest as I went on the real project. This is after having never owned an iDevice and doing mostly web and devops work in scripting languages for many years prior. A few peers of various experience levels were able to ramp up in about the same amount of time, so I'm not special.
I don't see how that is any different. If your Windows team members all left, that project would be dead or out-of-sync too. Wouldn't you replace valued team members who leave, in both cases? Or is this a concern that you won't be able to find developers willing to work with OSX and/or iOS tech?
> With my company, our users have always begged for a Mac OSX and iOS app.
Repeating this again because it should be telling. If you're getting feedback begging for an OSX and iOS app, there are probably a number of really good reasons for that.
> What would be a good example of the kind of lightweight project that can reasonably be duplicated for native mobile, native tablet, native desktop, and web browsers?
Spotify, Slack, Twitter, Facebook, any streaming video service (Hulu, Netflix, Amazon Prime Video, HBO GO), etc.
Note that I'm not using "lightweight" to mean "small weekend project", but to mean "less complex than the codebase needed to reproduce these features in a browser or Electron browser".
If you are truly a small company / startup, and you truly have to support all platforms with a small team, then sure Electron makes sense despite all the drawbacks. I totally understand that.
But I usually hear this excuse from big companies, that still want to perceive themselves as small, but aren't. Slack is a billion dollar company, they are not a small business / small startup. If a company is large enough to have more than two people working in HR full time, then they are probably big enough to do this stuff right. "We're a small team" simply isn't true for them.
I agree, though, that Electron is a problem. React Native will probably be the best way forward. Microsoft has recently taken over RN-Windows and ReactXP, an RN-for-web clone. RN runs natively and doesn't need a browser, while still being able to tap into the JS ecosystem.
You also seem to agree now that both are completely different.
I'm curious, since I use PyCharm and I wonder what I could be missing out on.
The terminal I'm using, for instance, Hyperterm, does things no other terminal can do, and the others have had more than a decade to evolve. JS doesn't have a problem displaying JSON as JSON, displaying web pages on link clicks, moving up git logs with my trackpad, or adding tabs with a plugin that contains a few lines of code and a little CSS... it just comes easy to JS.
The same flexibility you see in Atom, VSC, and the others.
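For what it's worth, the "few lines of code and a little CSS" claim is plausible: Hyper's documented plugin API lets a complete plugin be a single exported function. A hedged sketch (decorateConfig is part of Hyper's plugin API; the CSS selector and styling here are made up):

```typescript
// A complete Hyper plugin: inject a little CSS into the terminal's UI.
// Hyper calls decorateConfig with the user's config and uses what you return.
exports.decorateConfig = (config: { css?: string }) => ({
  ...config,
  css: `
    ${config.css || ''}
    /* made-up styling: highlight the active tab */
    .tab_active { border-bottom: 2px solid #f81ce5; }
  `,
});
```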
You know what else doesn't have a problem doing this? Literally any other programming language or tool that I've ever used to look at or edit JSON.
> Hyperterm, it does things no other terminal can do... displaying web pages on link clicks, moving up git logs with my trackpad, adding tabs with a plugin that contains a few lines of code and a little CSS... it just comes easy to JS...
Pretty much any other terminal built with flexibility and extensibility in mind can do this (even the base terminal in Linux can handle links, lol - that's definitely not exclusive to JS). ZSH springs to mind, with the benefit of being written in C, so that, you know, it's actually fast...
> The same flexibility you see in Atom, VSC and the others.
Ah yes, including the freedom to not open any binary file, or any file >2mb in size!
React Native isn't "native" (in quotes), it's native... it uses the actual widgets provided by the host OS. You use similar techniques to create your UI for each platform, but you do generally need to create separate UI for each platform with React Native.
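A hedged illustration of the distinction (standard React Native usage; the component itself is trivial): the same JSX shape web developers know, except View and Text map to host-OS widgets rather than DOM nodes in an embedded browser.

```tsx
// React Native: this renders real platform widgets at runtime
// (UIView/UILabel on iOS, android.view.View/TextView on Android),
// not DOM nodes inside an embedded browser.
import React from 'react';
import { Text, View } from 'react-native';

export default function Hello() {
  return (
    <View style={{ padding: 16 }}>
      <Text>Rendered by the host OS, not by Chromium.</Text>
    </View>
  );
}
```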
Yeah, they're that much better, that we grumpy old non-JS programmers keep wondering what the big deal is when somebody comes along and rewrites something that existed 20 years ago.
- Everything GNOME does
- Everything MS does
- Everything KDE does
- All the apps in Mac OS
- Every browser
- Office suites
- Messaging apps
- Adobe's Creative Suite
- All the content creation apps for video games
- Video games
- Audio production suites
Web developers are excited about the desktop again, because to them the desktop is some kind of new frontier that Electron has opened up to them. I get that, but please be aware that just because something is new to you doesn't mean it's actually new.
Clearly Java never existed. I've been experimenting with Swing lately because Java has a library I'm interested in using. It's a breath of fresh air compared to HTML/CSS/JS. It may not be proper "native" development, but I don't have to deal with <div> hell. It has proper layout management instead of, what, 4 quirky CSS layout styles: float, table-, flex-, and now grid-. Sure, if you want to hack something together, Electron may be quicker short-term. But I question whether it will actually be cheaper in the long run for non-trivial applications.
If you want something that's much closer to proper native, have a look at SWT. It's faster than Swing and actually looks/feels native since it's actually using native widgets. It's really a shame that the poster child application, Eclipse, is so bloated and slow, because people attribute that slowness to SWT when it's really primarily the plugin architecture that's responsible. And I actually find the more spartan developer interface to be more pleasant than Swing, so to me it's win-win. Swing will always have an uncanny valley, SWT has never had that issue, yet still allows you to write for the desktop in only Java.
I feel that Sun's choice of Swing was the main reason Java failed on the desktop.
I don't think it is better, to be honest, and so I also wonder why people aren't crying out for decent layout management for the web - is it just lack of awareness of how good things could be, or...?
They have been, and that's why today there are things like flexbox and grid layout. It just took a long time for the browser vendors/standards bodies to be convinced and then spec these things out to work within the confines of the existing layout models.
Smalltalk had a GUI-based approach to design in the 70s on a 2 MB disk, and most web layout is generating HTML & CSS, then tweaking that stuff. It doesn't feel like web layout is a progression.
Imagine being a small shop of 1 person (or even 5) and having to learn desktop programming languages, conventions, and native APIs for the web, Mac, Windows, and Linux. Then on top of that to develop and maintain a product that moves at the pace of customer feedback.
That's why Electron is powerful. I could not write and maintain the Hemingway App for Mac and Windows without Electron. My tens of thousands of users would not get to use that software _at all_.
I think the implication is that developers who label themselves, as you have, as "non-native" or "JS" or "web" developers don't have the perspective to make an informed decision on the matter.
Many of the rest of us know how to build React SPAs (and have been doing it long enough to have used Angular 1, Backbone, Sproutcore, ...), but "JS + framework of the week" is just one of the many tools we could turn to when building a UI. Many of us have also used cross-platform native libraries like Qt or GTK, or platform-specific toolkits like Cocoa or .NET.
Basically, if the only tool you have is a hammer... it might be time to learn to use some new tools.
Come on man, it's a little too easy, but NOBODY? People write desktop apps all the time. What you mean is "nobody I know", which probably says more about your social circle.
I'm not "excited" that macOS is flooded with apps that have poor accessibility experience while Apple itself is fanatical about delivering first-class accessibility experience - like Safari having accessibility always enabled.
So if the Electron bloat is such an issue for you, just hit Slack on the web directly.
That's even more horrible; I've tried it. Chrome uses its own notifications on macOS instead of integrating with the system's notification manager, and whenever one of these ugly, poorly animated guys pops up, I get to search through my 100 open browser tabs to find the damn chat. Millions of flamewars were fought about what's the best window manager, and we're ending up with a single window with a tab bar? Are you kids serious?
This was the main reason I used the Electron version of Slack, and tbh it makes little difference whether I run a dedicated Firefox window or the app, since my laptop is less than a year old and could run a few hundred instances of either without sweating.
I'm on Firefox these days; with NoScript (once you go through the initial few days of pain whitelisting things) it works brilliantly. I would suggest you give it another chance.
Unless you try to book flights from britishairways though.
But I find it to be a humorous typo given the topic of this discussion, as Ionic is a browser-based cross-platform mobile framework.
You could have expressed pretty much the same sentiment about Java ~20 years ago. The trade-offs (high resource consumption, additional complexity, but less demanding of developer time) and even the fundamentals of the approach (an additional abstraction layer that works as a VM, APIs oriented around all the latest thinking) were also very similar.
Java is used for a lot of things, but mostly not as the basis of mass-market cross-platform applications the way it was originally intended.
This comment is completely wrong. You know what Slack was before it was an Electron app? It was a Cocoa app. Sure, it wrapped a WebView, but given Slack's nature that's not a surprise. Regardless, the Cocoa Slack app, while not great, was still much better than the Electron crap we have now.
I don't know if you're simply ignorant of the fact that the app this article is focused on actually was a native desktop app before it switched to Electron, or if you're just ignoring that pretty important information in order to try and make your point, regardless of whether or not it's true.
>if its massive "Beatles walking off the plane" moment hasn't happened by then, sorry, it's not gonna.
It's crazy to think that developing something native on Desktop using the default stack of the operating system means using "dated technologies".
Agree fully with this. I find the complaints about how appalling it is that apps use a few extra hundred MB of RAM and disk space really tiresome and impractical, when laptops and desktops typically have lots of RAM and disk space now. Many of these cross-platform apps either wouldn't exist or would exist on only one platform if everyone insisted on native apps, especially apps made by small teams or individuals.
Just because it's there doesn't mean you need to use it all, that's like spending your whole paycheck at once because "you'll get another one in 2 weeks!".
Any application that does the same job with less resources and the same or greater performance is unequivocally, objectively better.
Also, the claimed benefits, like easier and faster updates, are rendered pretty moot when developers spend that time changing things for the sake of it: the Spotify app seems to get some pointless, un-requested UI change every month. And VSCode can't even update itself while idle, despite that being one of the lauded benefits...
What Electron has done is allow people with a particular skillset to apply it to a domain where it is both unnecessary and wasteful. These "developers" would be better off generalizing their understanding of software and UI design to the point where the tools used are irrelevant.
1) Electron is obviously a system library now. Treat it as such. Install ONE copy of electron per version, not ten private copies of electron in npm. It should be simple to integrate with at least Windows and OSX install systems - instead of downloading the full electron module, download an installer that installs the required version system-wide. This can be done as a step during installation like desktop programs have been installing required dependencies since the dawn of time.
2) Find some way to make it slimmer. Maybe not every app needs the entirety of the browser loaded. Maybe we can do lazy loading of certain components? Maybe optimise the most used critical paths, and provide tools to developers (if those don't already exist) to show exactly what is causing all those wakeups?
Moving off Electron is not an option any more, it's just such an easy and familiar platform to develop for, it's not going anywhere. Maybe we should accept it as the victor of the cross-platform desktop application toolkit and work to make it better. It's shitty, but it's the right kind of easy, like PHP.
Electron isn't a better experience, and if it's in a browser, at least it can share some resources with other web pages.
>>all you web devs: Go learn C or Rust or something. Your program runs on a computer.
I don't think that is a realistic request.
When the cost of Slack's RAM/disk-space usage starts affecting their profit, then they can and will take steps to rewrite the application in 'native' code. Until then, it's premature optimization?
My own prejudice aside - I honestly do think that if you don't have an understanding of how a CPU works at this very low level - maybe even lower - you don't know what you're doing; that, and I have done hand-assembled x86, 6502, and 6809 code in the past...
It's a different time; knowledge of lower level languages isn't needed today to be a successful software developer who knows their craft. And I recognize my prejudices as such, thus while I tell them, I understand that they really can't apply. Would I like it if these new guys all knew this kind of stuff? Well - yeah; but the same could be said of me by guys long gone and dead as to why I don't know how to wire up an analogue computer to solve a calculus problem, or why I can't wire up a plugboard to compute something on a IBM 401 or such. That doesn't make them a better developer, nor me or anyone else a worse one. We're just using different technologies.
That said, I do think developers should branch out, and at least have an understanding of other languages; maybe C/C++ - but even Java, Python, GoLang, Rust, etc - all could be just as useful to know.
Even as someone myself who knows (more or less) C, this is quite an elitist statement, and doesn't do justice to say, Lisp and people who use functional languages. The goal of programming isn't necessarily efficiency.
I'm not trying to claim low-level knowledge of the machine is inherently better than high-level knowledge. What I am claiming is that you need both, and knowing how this stuff works isn't a charming novelty reserved for the curious.
I think it's a mistake to redefine it as optimization that doesn't affect profits. Companies often get away with painfully broken software for a long time because they benefit from some other moat (e.g. banks). But they're setting themselves up for disruption by doing that.
Also, profits are not the only purpose in life if you value your craft. (I'm not denying that you have to be able to afford high standards and economics does play a legitimate role)
Hm? How come? Is there a mental barrier here that I'm not understanding? (C and Rust are "hard" languages?) I taught myself to code on my parents' hand-me-down Pentium 100Mhz in the early 2000s, and I started with the K&R C book. Python and C were my swiss army knives at the time. I really can't see this stuff being super difficult, I mean I was a teenager and I had no understanding of algorithms.
It's graphical GUI builder is out of this world, and it works flawlessly on all three desktop platforms. It did even compile without a single code change on windows, after I finished coding in ubuntu, a utility tool for some researchers in my organization .
And as for the language ... I finished my utility-tool project in 80 hours, from initial concept to happy customer, while learning the language in the process.
It's a real shame that people have such strong aversions to Pascal, for no good reason other than subjective feelings.
I think it's sad that both Apple and Microsoft dropped the ball on the desktop because they're focused only on funneling users to their stores. If the Microsoft of 2017 (with open-source .NET and a focus on Linux) had been around years ago when WPF was invented, they could have built a decent cross-platform toolkit by now that didn't come from HTML/CSS/JS.
What are you basing this on? There are plenty of iOS developers, and developing for Mac vs iOS is more similar than pretty much any two other dev platforms you can name.
Developing for Mac + Windows + Linux + Web > Developing for Electron.
What?... Are you saying what I think you're saying? I must be reading this wrong, because surely you don't mean to imply that people didn't write desktop software before Electron came along?
- NW.js uses fewer resources (less RAM, smaller distribution sizes).
- It is updated more often (within 24 hours of every Chromium and Node.js release, ensuring access to the latest technology).
- It supports about twice as many OSes (XP+, OS X 10.6+, Debian 8, Ubuntu 10+). Electron doesn't even care enough about Linux to merge in simple bug fixes.
- It's much easier to get started with and takes a no-nonsense approach to everything. (thejaredwilcurt.github.io/website/quickstart.html)
- It allows for HTML or JS entry for apps (see the manifest sketch after this list).
- It offers actual source-code protection, and even recently updated this so that there is no longer a performance hit when using it - a pretty serious technical achievement.
The only things wrong with it are a shitty name/logo and a smaller ecosystem. If you can get past that, you will have a much nicer experience.
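On the "HTML or JS entry" point, for anyone who hasn't seen it: NW.js reads the app's package.json as its manifest, and `main` can point at a page. A minimal manifest (names are placeholders) looks roughly like:

    {
      "name": "hello-nw",
      "version": "1.0.0",
      "main": "index.html"
    }

Point `main` at a .js file instead and you start in a Node context and open windows yourself; with Electron, the entry is always a script that then creates its own BrowserWindow.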
Maybe not for a coin miner or a graphics engine, but CPU is never really the constraint for the kinds of applications that live inside Electron containers. Where you see them chewing 100% of a core, those are bugs.
Also, while the language has flaws, modern JS VMs are pretty bleeding edge...
Have you ever heard of League of Legends? It's a desktop application made in that time frame. It does 10 figures a year in revenue.
Wouldn't it make sense to run a caching proxy on the desktop and use the browser that's already there? That way you could still offer functionality not available to plain websites, like access to the local machine, but use the system browser for rendering. Your proxy would also cache your web app, so it would run faster than a normal website. (A rough sketch follows below.)
I understand that this way you have to support more browsers, but you probably already have a web version of your app.
The problem could be users seeing a localhost address. How do you overcome that? A virtual interface with a DNS server running on it?
Maybe running from a file:// URL could be a solution for some apps?
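For what it's worth, a toy version of that proxy is small in Node. The port, cache directory, and the "native" endpoint below are all made up; a real one would need cache invalidation, path sanitization, and some auth so other local processes can't poke it:

    // proxy.js - serve a cached copy of the web app to the system browser,
    // plus one endpoint a plain website couldn't offer.
    const http = require('http');
    const fs = require('fs');
    const os = require('os');
    const path = require('path');

    const CACHE_DIR = path.join(__dirname, 'cache'); // pre-fetched app assets

    http.createServer((req, res) => {
      if (req.url === '/api/home-dir') {
        // "Access to the computer": something a normal web page can't do.
        res.end(os.homedir());
        return;
      }
      // Serve app assets from the local cache instead of the network.
      const file = path.join(CACHE_DIR, req.url === '/' ? 'index.html' : req.url);
      fs.readFile(file, (err, data) => {
        if (err) { res.statusCode = 404; res.end('not cached'); return; }
        res.end(data); // served locally, so no network round-trip
      });
    }).listen(8123, '127.0.0.1', () => {
      // This localhost URL is the cosmetic problem mentioned above.
      console.log('open http://localhost:8123 in your system browser');
    });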
You could open the Mac App Store, pick the 'top grossing' category and count the number of Electron apps.
Is that good or bad?
A browser can load a remote payload (not under the user's control) that can then exploit a browser flaw, a Flash AVM2 flaw, etc.
On the desktop, the user runs a local exe/app which has de facto "full system access".
A bit surprised you don't know that, it's kind of basic ...
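That split is exactly what makes Electron's case interesting: it runs web content in a process where Node's "full system access" is potentially reachable. Electron does expose knobs to keep loaded pages at arm's length - a minimal sketch (the option names are real webPreferences settings, though their defaults have changed across versions):

    // Create a window that treats its web content as untrusted.
    const { app, BrowserWindow } = require('electron');

    app.on('ready', () => {
      const win = new BrowserWindow({
        webPreferences: {
          nodeIntegration: false, // page JS gets no require() / Node APIs
          contextIsolation: true, // preload code lives in a separate JS context
          sandbox: true           // renderer runs inside the Chromium sandbox
        }
      });
      win.loadURL('https://example.com/app'); // the "remote payload" case above
    });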
Lost count of the number of times in my day-to-day work I see decisions made that make devs' lives easier at the expense of performance or features.
Everyone in the last 40 years built and sold a multitude of desktop apps, and they never stopped. Stop drinking the Kool-Aid; even if some bozo upvotes you on Hacker News, it does not mean that you are right (actually, nowadays the opposite is more probable).
Maybe nobody in the Web bubble.
> I simply have no interest in learning any native
> […] programming
I don't disagree with you one bit...
I used to be one of the real programmers, now I program in Flash, strange...
What? Have you heard of Adobe AIR, AppJS, Node-Webkit (now NW.js) and Mozilla Prism/WebRunner?
Do you think people 'knew' what was going to kill Flash before it died?
Does the weight of these just irritate you because of how wasteful it is (it irritates me for this reason alone) or does it actually have an impact on you?
People said the same things about GTK or Qt when we had Motif, right? Apps are going to keep getting bigger; our machines get bigger too. I'm not sure what the problem is.
Edit: fat fingers
Nobody in the last ~20 years has cared about writing Web apps before JS came along. There's basically zero money in it, and it's massively expensive, both in terms of actual development time per feature (easily 10x the cost of working in sane languages with sane runtimes), and also in finding trendy hipsters who don't know algorithms or data structures but took four weeks of coding academy classes a couple years ago. And as for Electron, Electron has existed for over three decades (Project Athena launched in 1983) -- if its massive "Nixon throwing the peace signs before getting on the plane" moment hasn't happened by then, sorry, but it's not going to happen.
But now? People are making all kinds of great new languages, and more often than not, they don't repeat JS's mistakes. People are excited about programming again -- JS is so bad that it's single-handedly revitalizing interest in languages which two of the largest tech companies in the world are behind, yet couldn't make popular.
This is a Big Deal.
(You are being parodied mostly for being a Slack developer, not disclosing it up front, and then trying to convince folks that Electron is good, which makes you sound a lot like a pig farmer trying to sell pigs' feet.)
Zero money in web apps? You mean like Google, Facebook, Amazon, Twitch, Netflix, and YouTube?
There's way more money in web apps than desktop apps, hands down. I'm not talking about mobile, because this is a convo about Electron.
Those examples aren't even good, though. They all seem to be becoming less popular these days.
This can't be true seeing as Google Docs is used by many companies and students.
And "better" may not be the goal of many web apps. They may be trying to measure success by "convenience" rather than being objectively "better".
Win32, Java, MacOS, et al. lost because there were such strong standards for doing things the One True Way that competing standards couldn't flourish and the APIs stagnated. The web won because:
- apps from untrusted developers can be run safely
- app installs are measured in milliseconds and don't require switching windows
- app updates are invisible to the user
- beginners can modify apps without leaving the app itself (MySpace profiles, etc)
- references deep inside one app can be embedded in another
- apps run on nearly every device from one codebase
Actually, pigs' feet are a pretty good eatin' part of the pig. If you've ever had them, you might agree.
Ah, yes, we are all grateful for Slack's 850 MB of memory used when idle.
I can live without it, personally. Actually, if it could crawl back into whatever hole it came from, I'd be glad.
At what point of dementia do you say "hey, web APIs are easier to use, let's bring the entire damn browser along with them!"? The normal reaction should be fixing the current APIs and making good wrappers around them.
My first computer saved data to a cassette tape; today, terabyte hard drives are nearly give-away prizes in cereal boxes (yeah, I know they don't do that anymore, either). We have machines in our pockets which are arguably (in some ways) more powerful than the machines only government-funded agencies and such could afford when I was a kid (and they took up entire rooms).
I hope I live long enough to think of 850 MB as a small amount of memory (and 8-cores at 4 GHz each a slow machine; actually, there are some multi-cpu server mobos I'd love to have as a desktop, but I can't afford it - yet).
Honestly, I'm amazed that we have enough memory to support these huge apps. Sure, they're bloated; I won't argue that - but at the same time it's amazing that we can run them at all - a decade ago it would have seemed ludicrous!
Could it be better? Sure - but in my mind, all software is bloated, because I can't run even the simplest piece of compiled code on my old TRS-80 from when I was a kid (heck, even an Arduino has better specs!).
Desktop development has stagnated because of the proliferation of web technologies, not because desktop frameworks aren't any good. Qt is great, GTK is great - hell, with the right configuration, working with WinForms can be pleasant.
Electron packs a browser with each app, and that's unacceptable. Adobe used to have something called AIR, years before Electron, though I don't know whether it packed in a whole browser or not. I used to use the desktop version of TweetDeck, which used AIR, and it was pretty damn fast.
Anyways, you clearly don't know anything about desktop development, so rather than make claims that are wrong, just stick to what you DO know.
What a load of tripe.
Try stuffing all the functionality of a large enterprise desktop app like SAP or Siebel into Electron and let me know how well that goes.
Perhaps someone can clone the outward API of Electron with only a ~10% performance penalty instead of two orders of magnitude.