> The biggest gripe is that everything feels sluggish. Alt-tabbing between Firefox and a terminal takes one second, as does switching between Firefox tabs. As an extreme example switching between channels in Slack takes five to ten seconds. It is unbearably slow.
This is a hidden downside of writing code that's only just fast enough to work. It may feel fine for you, but anyone building a new computer will have to match your computer's performance, or everything will feel sluggish. We're raising the bar for new hardware way higher than it needs to be.
I don’t particularly agree with Facebook’s ethical ideologies, but one great thing they do (or used to do?) is 2G Tuesdays, where the devs could only test the app on a simulated 2G connection with throttled bandwidth and simulated delays/packet loss.
I’m pretty sure smartphone apps in general are tested against a decent variety of performance targets; perhaps it should happen more for desktop software.
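For web-delivered apps there's a fairly low-effort way to approximate this on a developer machine. A minimal sketch, assuming Node with Puppeteer installed; the latency/throughput numbers are rough 2G-ish guesses, not Facebook's actual profile:

```js
// Sketch: drive Chromium with a throttled network profile via the
// DevTools Protocol. The numbers below are rough 2G-ish guesses.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  const cdp = await page.target().createCDPSession();
  await cdp.send('Network.enable');
  await cdp.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 800,                         // added round-trip latency, ms
    downloadThroughput: (250 * 1024) / 8, // ~250 kbit/s, in bytes/sec
    uploadThroughput: (50 * 1024) / 8,    // ~50 kbit/s, in bytes/sec
  });

  const start = Date.now();
  await page.goto('https://example.com', { waitUntil: 'load' });
  console.log(`load took ${Date.now() - start} ms on simulated 2G`);

  await browser.close();
})();
```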
Now that I think of it, the push for more and more Electron apps may be because all devs are living comfortably on 16GB and 32GB devices, where the voracious RAM appetite of Electron does not matter.
> Now that I think of it, the push for more and more Electron apps may be because all devs are living comfortably on 16GB and 32GB devices, where the voracious RAM appetite of Electron does not matter.
IMO it's generally the JS hype producing these monstrosities. Qt and Java have had fast and powerful cross-platform UI for ages.
I'm trying to push a new paradigm at work where we write most of the logic in Rust. The UI view layer would be JavaFX for desktop and React Native for mobile. Only 2 UIs to target for all platforms, reuse of core code, and good performance all around.
> IMO it's generally the JS hype producing these monstrosities.
If you use "the JS hype" as a synonym for what are considered best practices by the folks in the NodeJS/NPM ecosystem, then you're right. (I.e., the fault lies in the "hype" half, not in the "JS" half.) 10–15 years ago, one of the most prevalent criticisms you could routinely hear in the programming world was about the bloat of Java.
I think that, where JS is concerned, for some reason we're seeing a regression where it's becoming "conventional wisdom" that JS itself is slow, against the evidence to the contrary. I've seen straightfaced comments here on HN in the last few months, for example, that complain about the slowness of JS as a general rule. But the reality is that the JS runtimes have billions of dollars of engineering from top-tier teams invested in them, and today's JS engines are by-and-large pretty fast. V8, in fact, shares parts of its implementation with Java's HotSpot—specifically the parts that were developed by the folks who made the StrongTalk VM and who were acquired by Sun, thus leading to that work being incorporated into HotSpot in the first place.
So what is that reason?
There has been a noticeable shift in performance degradation that corresponds to the rise of, for lack of a better term, "the NPM way of programming". As with the case of Enterprise™ Java®, the problem lies in the way people in those circles are writing their programs and what passes for an "idiomatic" coding style. The NPM style used heavily in many Electron apps is relatively recent, with respect to JS's lifetime. Even before JS engines were JITted, Firefox itself had hundreds of thousands (millions?) of lines of JS doing a lot of the work, both in what you see on the screen when you're poking at your browser and behind the scenes. Notably, the JS in those cases is not of the NPM style. There's nothing in principle that means the "Emacs-like" application architecture (compiled core, dynamic shell) needs to be slow, particularly on today's hardware.
As I've mentioned before[1], in the early days of Firefox, I used to use 1.0 and 1.5 on an 800 MHz PIII with 128 MB of RAM. (For folks looking to leap in here with what they'd like to consider a well-timed "well, actually…": yes, I'm acutely aware that even that number is on the order of 100× or more beyond what is necessary to get real work done with a computer—but the point is that it's nothing compared to, say, stock 2015-era Chromebooks with 8GB of RAM, or a comparable quantity in today's phones, for that matter.) Browser extensions are written in JS, and my laptop now is several times over better than the laptop I used 10 years ago—and yet... if I install any arbitrary extension today, there's a good chance that I will encounter perceptible bloat there, too. A recent example (within the last year) that I know of is the WorldBrain Memex add-on, which upon immediate use has the telltale mark of influence from the world of modern "frontend" webdev, and the performance to match. This wasn't the case when add-ons were authored in the sui generis style of yesteryear, before the NPM practices leaked over and began influencing everything related to JS—and tainting people's perceptions.
So I find the attempt to draw a contrast between JS and Java a little misplaced. Even ignoring the common history (in both culture and provenance), there's the fact that Java IDEs themselves have always been the poster children of bloat—second only to Visual Studio proper, or at least somewhere in the running. I know people like to point to VSCode as an example of a "snappy" Electron app, which invites the inevitable retort about just how lean it really is. (On my system, I don't think it's possible to run VSCode without making sure that there's at least 350 MB of main memory to spare before launching it. Compare to the old joke about Emacs's "bloat": that it was supposed to stand for "Eight Megabytes And Constantly Swapping".) On the other hand, I have to recognize that the folks calling VSCode snappy really are on to something. The previous statements notwithstanding, the fact is that VSCode is still snappier than anything I've ever experienced using one of the mainstream Java IDEs. If I were a naive person, I could point to that and conclude that the problem lies with Java-the-language. And yet every day I use my phone, large parts of which are written in Java—which, to be fair, does impart some impression of bloat and sluggishness, so it's prudent for me to keep in mind the earlier versions of Android that did have a fairly snappy feel on older, more limited hardware.
And those observations lead us back to the root problem, which is that, judging only by much of the code being written today, programmers seem to have forgotten (or simply never learned?) how not to write code that's bloated and slow.
> I think that, where JS is concerned, for some reason we're seeing a regression where it's becoming "conventional wisdom" that JS itself is slow, against the evidence to the contrary.
It's because people have plenty of experience with actual real-world JS code being very slow, which has nothing to do with the runtime, or with the ability of someone purposely setting out to write optimized JS to optimize it well.
The fact that you can write fast JS doesn't mean that the language itself or popular frameworks encourage you to do so, whereas that is the case for a lot of other languages with a reputation for being faster.
I don't think you are actually disagreeing with the author's point: you could easily replace "JS" in your statement with Java, C#, Haskell, or a number of other "fast" languages and it would still be an accurate statement.
I, at least, have had the displeasure of using slow gui and other programs written in those languages.
This attitude fails to recognize the point I was making in my comment, and so it ends up fundamentally getting something wrong. It happens here in this sentence:
> The fact that you can write fast JS doesn't mean that the language itself or popular frameworks encourage you to do so
One half of this sentence is bang-on, and the other half is not. That is, it conflates what is encouraged by the language with what is encouraged by the modern JS crowd, and suggests something in that vein—as if they're one tightly interconnected bundle—but in fact, they are two distinct things, and I made several remarks in my original comment alluding to this.
To successfully argue the point you're making now, you have to argue that "the NPM way" that is now prevalent is the inevitable result of merely setting out to write a program in JS. But, in fact, it's not. As I mentioned, the JS that made up a huge proportion of Firefox's codebase was written in a style that doesn't resemble the style now popular with NodeJS and NPM, but there was no big, conscious effort to do that by, say, avoiding pitfalls of the language and the things that might make it slow—it was purely a result of the lack of opportunity for being tainted by bad examples from the NodeJS/NPM world, since that world didn't exist then. The main influence on programmers writing JS for Firefox was the influence of Objective-C, C++, and Java.
You certainly can "optimize" your JS to make it faster, just like you can with any program, but that's not to say you have to—you can leave the optimization to the engine itself most of the time. All that's really needed, on average, to make sure that JS is fast is to avoid tainting yourself with the mindworms that have proliferated in the NodeJS+NPM ecosystem; that is: just don't do the things that you would never have thought of doing were it not for having seen others doing them somewhere else on GitHub or in the packages hosted on npmjs.com.
In other words, if you want to write JS that avoids being slow, then you don't have to take any special effort. Generally, you can start by opting for writing the dumbest, most boring code possible. (Indeed, I routinely come across "clever" code by self-styled NodeJS aficionados exhibiting elaborate contortions to fight against the language, when it would be far better to just do the simplest possible thing and then move on. Refactoring to eliminate these contortions can even make the code more performant and more concise.)
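To make that concrete with a contrived sketch (my example, not code from any particular package): the "clever" version below fights the language with a reduce/spread contortion that copies the accumulator on every iteration, while the boring loop just does the thing.

```js
// The "clever" style: allocates a brand-new object per element, so the
// whole thing quietly does O(n^2) work.
const indexByIdClever = (items) =>
  items.reduce((acc, item) => ({ ...acc, [item.id]: item }), {});

// The dumbest, most boring version: a loop and a Map. O(n), and easier
// to read.
function indexById(items) {
  const byId = new Map();
  for (const item of items) {
    byId.set(item.id, item);
  }
  return byId;
}
```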
For example, let's say you have something written in Java, or something with an analogue in C++ or Go, and your team has some motivation to either recreate it in or migrate it to JS. That's perfect—before you ever think about handing the task over to a professional NodeJS programmer, you really should give some heavy consideration to doing your best to copy the architecture from the existing, non-JS implementation, down to class names and code organization, and doing a straightforward, procedure-for-procedure port to JS. (Although, in the case of migrating from Java, maybe also consider eliminating any unnecessary abstractions along the way—or don't.) There's a good chance that this will have satisfactory results that challenge your assumptions. Even if you're creating something from scratch rather than porting an existing solution, once again, all you have to do is to not worry about being fashionable and trendy, and just do the most straightforward thing possible.
There's a recent comment here on HN that's extremely relevant and really hints at what's going on with all these slow, bloated, and messy projects from the world of modern webdev:
The bigger problem is the use of dynamic HTML for UI everywhere. It'd be slow even if you used C to drive the dynamism, although having JS there doesn't really help.
As far as VSCode being a "snappy" Electron app: I still vividly remember that bug where users were seeing it hog an entire CPU core for itself while not being interacted with, and it turned out to be caused by the blinking cursor in the editor. You know, the kind of problem that was already solved by the time Windows 1.0 came out? And sure, they fixed it... but every desktop Electron app is a potential minefield filled with stuff like that. Some of it is just not obvious until you run it on slower hardware. Or over remote desktop, which also flushes out a lot of "GPUs are fast enough, who cares" problems.
JS is faster than most scripting languages, but slow for a VM-based language. And the NPM way is to pull in thousands of dependencies when the running code only uses a small fraction of them.
I disagree that the code written today is slower than in the past. There's simply much more of it. People build on top of existing stuff N layers deep. Just look at how far removed Electron is from the OS. It's crazy that it even works.
JS also has some unfixable limitations. It needs to be parsed every time, whereas Java comes packed in a very efficient bytecode format that's both several times smaller than minified JS and far faster to parse.
JS's lack of typing also means it uses way more RAM than Java in practice. Java has a lot of Object overhead, but nothing anywhere close to JS.
For UIs, JS's lack of threading is terrible. In every language that supports threading, the main way to design a UI is to have a "UI thread" that you never assign slow work to. In JS it's extremely easy to accidentally block during UI render. I'm assuming a majority of Slack's glitches and freezes come from the single-threaded model.
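A quick sketch of that failure mode and the standard escape hatch, in browser JS; button, bigString, parseHugeLog, and render are made-up names for illustration:

```js
// The classic accident: synchronous work on the only thread there is.
// While parseHugeLog() runs, clicks, scrolling, and painting all freeze.
button.addEventListener('click', () => {
  const result = parseHugeLog(bigString); // hypothetical CPU-heavy function
  render(result);
});

// The escape hatch: ship the CPU-heavy work to a Web Worker and keep the
// main thread free to paint. (worker.js would run parseHugeLog and
// postMessage the result back.)
const worker = new Worker('worker.js');
button.addEventListener('click', () => {
  worker.postMessage(bigString);
});
worker.onmessage = (event) => {
  render(event.data);
};
```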
Modern Java apps feel native (on desktop). You probably use some that you don't realize aren't native.
JS is several times slower and uses several times as much RAM. And the VM warmup is slower than Java's because the code is distributed in a very inefficient format. It's not a good language for UIs, but it's becoming the only option for mobile unless you want to write multiple implementations.
> People build on top of existing stuff N layers deep.
Okay. You're not actually disagreeing here. This is the antipattern popular with NPM folks (and extremely reminiscent of Enterprise Java) that I identified as the problem.
> I disagree that the code written is slower than in the past. There's simply much more of it.
Even ignoring that "there is more of it" is part of the problem, you can say "I disagree", but all that gives us is a stalemate unless you're going to introduce data into the conversation.
> And the VM warmup is slower than Java because the code is distributed in a very inefficient format.
Not even the Java folks who work on Java agree with that. Warmup is one of Java's weak spots. The GraalVM team can and will tell you this. It's one of the things they bring up when trying to get people to temper their expectations.
I recently ported a command-line utility more or less line-by-line from Java to JS, in an extremely naive way—the only concern was to make it work. When I finished, on a lark I checked how it compared against the Java version. Even with the JDK's wealth of specialized collections (compared to, say, the JS version, where every place that expects a map got a general-purpose ES6 Map), the JS version running on NodeJS would beat the Java version every time. In this case it doesn't actually matter, because it wasn't performance-sensitive code and both processes would terminate in <1 second, but the fact remains that NodeJS was able to parse the JS program, compile it, execute it, and terminate faster than the java process could launch, read the bytecode, verify it, and perform the same job.
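To show what I mean by a naive port (an illustrative fragment in the same spirit, not the actual utility): where the Java version might reach for a specialized collection, the JS port just uses the general-purpose ES6 Map and lets the engine worry about the rest.

```js
// Illustrative only. A Java-style frequency counter, translated
// procedure-for-procedure: no tricks, just the general-purpose Map.
function countOccurrences(tokens) {
  const counts = new Map();
  for (const token of tokens) {
    counts.set(token, (counts.get(token) ?? 0) + 1);
  }
  return counts;
}

console.log(countOccurrences(['a', 'b', 'a'])); // Map(2) { 'a' => 2, 'b' => 1 }
```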
There's an interview from 2009 with Lars Bak on MSDN that you might look into, with the MS folks on the static-languages side and Lars on the other, explaining why in practice V8's performance can be comparable to, if not better than, "managed", bytecode-based, static languages like Java and C#.
FWIW, I'm not even a dynamic languages fanatic. Another of my big complaints about the NPM crowd is their lack of regard for making sure the "shape" of their inputs is easily decipherable. I've made money as a result of dynamic language folks thinking that using a dynamic language means you don't have to worry about types, and that attitude leading to CVE-level security problems. There are a couple Smalltalk papers I've enjoyed reading[1][2], both somewhat critical/skeptical of the promises of dynamic languages. In general, I advocate for writing code as if there is a static type system in place, even if you're in a dynamic language that doesn't require it.
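In JS terms, writing "as if there is a static type system" can be as lightweight as JSDoc annotations that a checker (e.g., TypeScript with checkJs enabled) can enforce without changing the runtime code at all. A minimal sketch; the function is hypothetical:

```js
// Plain JS, but with the "shape" of the inputs spelled out. A checker
// can verify callers against these annotations before runtime.

/**
 * @param {string} name
 * @param {number} retries
 * @returns {{ name: string, retries: number }}
 */
function makeJobSpec(name, retries) {
  if (!Number.isInteger(retries) || retries < 0) {
    throw new RangeError('retries must be a non-negative integer');
  }
  return { name, retries };
}

makeJobSpec('reindex', 3);      // fine
// makeJobSpec('reindex', '3'); // flagged by the checker, not at runtime
```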
Keep in mind, though, that this is all completely beside the point, because my original message was only that JS is now starting to be re-perceived as slow as a general rule because of NPM's hype-driven development, where programs are authored according to whichever patterns are trending at the time, or otherwise by trying to imitate the styles of NPM tastemakers and "thoughtleaders", which leads to people creating huge messes. The entire Java versus JS issue was an aside.
1. "How (and Why) Developers Use the Dynamic Features of
Programming Languages: The Case of Smalltalk"
2. "An Overview of Modular Smalltalk" by Allen Wirfs-Brock and Brian Wilkerson. (awb was the editor of ECMA-262 version 6, FWIW.)
And on that note, here are two more of my favorite programming essays of all time:
What is this user doing lol? I've run Linux on an absolutely ancient ThinkPad because we had some stuff in the field that needed a serial/parallel port for comms. It ran fine as long as you didn't have 10 tabs open.
And Slack has become a bloated piece of crap. It lags like hell on everything I own. It takes a few seconds to switch between channels and workspaces on my overclocked 3700X with 32GB of RAM on fiber.
Modern software is soooo terrible. It's not just bloated; the animation of every action, the "smooth" opening of a menu or a window, is so slow. Everything is so slow these days. When you're just in an office clicking around a bit, it's all nice, fine, and shiny. But as soon as you're in a hurry, these animations feel like hours... you click 20 times on the wrong place because the animations are so slow... one day I will bend my pretty fast work ZBook over my knee because I'm f*ing angry.
Oh God I hate the animations! Sometimes I wonder if "UI designers" are a net negative on many projects. Most software is functional machinery, not art.
It's important for sales pages and ads to be pretty. Web apps just need to work well, keep the UI guys away from me :)
"Ancient" doesn't really work as a measure of performance anymore. Sandy Bridge is almost ten years old but it's still about half as fast as the fastest x64 processors with the same number of cores and quite a bit faster than a lot of "modern" low-end ARM processors.
Is the performance of the app the same as the website? I’ve only ever run Slack in a browser tab, and haven't had much in the way of performance problems.
>It should be much faster than a Raspberry pi 3 that do not have all those problems.
I agree. User doesn't know what he's doing yet.
One tip I would offer him is to avoid Firefox. Firefox is built with Rust[1], which doesn't support ARM as a Tier 1 platform. He would be MUCH better off using Chromium, which supports numerous ARM Chromebooks running this exact CPU just fine.
It's funny you should mention that; first of all, Chromebooks also run Rust code at a really low layer: the hypervisor stuff is written in it. (Also, it appears the Chrome team is experimenting with putting Rust in Chrome too; no clue how serious or how likely, though.) Secondly, ARM is on its way to Tier 1, supported by the company itself.
Rust benefits from the optimizations in LLVM, the same compiler backend Chrome uses. You are right that they might be missing some hand-optimized ASM for some routines, but I don't think that's enough that performance would be a lot worse.
Their point is that software is written so that it's only barely fast enough on machines much more powerful than a PineBook, which makes it more difficult than necessary to use perfectly functional machines that happen to be slower than average.
The problem is that fast software is harder to write than slow software (there are trivial transforms from fast to slow, but not vice versa). Thus, each generation back you expect your software to run smoothly on, represents more effort (or at least, more care) on the part of the author. We should expect software to be "just fast enough for the average user" essentially by natural law.
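A toy illustration of that asymmetry (mine, not the commenter's): the transform from fast to slow is a one-line accident that stays correct, while going the other way requires actually spotting the problem.

```js
// Fast: Set membership is O(1), so the whole filter is O(n).
function keepAllowedFast(items, allowedIds) {
  const allowed = new Set(allowedIds);
  return items.filter((item) => allowed.has(item.id));
}

// The trivial fast-to-slow transform: use the array you already had.
// Still correct, now O(n * m), and nobody notices until inputs grow.
function keepAllowedSlow(items, allowedIds) {
  return items.filter((item) => allowedIds.includes(item.id));
}
```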
The same reason you use accessibility in general: there might be 1 in 1,000 people who permanently need it, but considerably more who want it at certain times (running a compile job in the background, on a business trip with only low internet speeds, underclocking to save battery, …), and that can be the difference between "I want to use your software because it always works for me" and "I tried it and it works 90% of the time, so it's not reliable enough for me to pick". Plus there's the usual feel-good "I'm helping more people use my software, even the (often) less-privileged" rather than "I shouldn't care about these people".
Those people don't pay you $$$, so in exchange for cheaper / faster eng costs, you pay via lower quality software.
In businesses that would actually pay Slack money, they will probably pay $800 every 4 years for a laptop decent enough to run Slack, and employees bring their own smartphones now.
Capitalists also have financial incentives to deprioritize environmental concerns, and yet criticizing a business's practices for neglecting environmental impact isn't unreasonable out of hand just because addressing it would mean higher costs for the business. It's a reasonable criticism that reasonable people can have opposing perspectives on and find worthwhile to talk about, rather than something to be immediately dismissed.
(I also don't see anything in this comment that actually addresses the accessibility metaphor raised by the person you're responding to.)
It's written in pure C by one developer, and it's super fast. Operations are on par with other software, because fx libraries are usually already written in natively compiled, optimized languages, but this one loads its interface in a lot less than one second, which becomes something like 100 ms or less when cached. Wouldn't it be wonderful if all software at least loaded its interface comparably fast? Why do I have to waste (tens? hundreds of?) megabytes just to show a GUI that does absolutely nothing else than linking events to graphics elements, when someone can write full-fledged software with a quite complex interface whose entire executable is less than one megabyte?
We develop websites on $3000 MacBook Pros. The shitty $300 Windows laptop I see in most households is struggling with a lot of websites. As are the $200 Android phones I see many people use on a daily basis. I rarely see any developer testing performance on a crap device or a bad connection.
The Pinebook might be an outlier device by itself. But ARM Windows laptops are pretty common.