I work with a Qt-like C++ GUI framework (JUCE), but we've seen increasing portions of our front-end work replaced with Electron/React Native. It saddens me to see binary sizes and memory consumption balloon, because native development encourages you to care about these things. But even I cannot deny the order-of-magnitude improvements in productivity from hot-reloading, a deep (if byzantine) system for styling and layout (including text), and reuse of modular components across all platforms.
Because those are companies dominated by webdevs that can't be bothered to learn anything else.
Even VSCode is probably a side effect of Microsoft trying to be cool and catering to the new generations that don't know anything else.
For me, it just made me buy a Sublime Text license instead.
You could argue that it's a similar story for things like Slack - for business reasons, they more-or-less needed to have a web-based UI. If you're already committed to that, then it makes sense to use Electron in order to maximize your code reuse.
I can't speak to Discord or Atom. Those ones make less sense to me, but there might be a reason there, too.
Speaking completely personally, Qt has never felt like a viable option for a commercial project to me. It does have some bindings for other languages, but, at least last I checked, they're all community supported. When you're talking about language binding libraries, it's reasonable to think of "community" as a polite way of saying "poorly". That can quite justifiably influence a person's willingness to rely on them for non-hobby projects.
Electron exists because of Atom (yes, in that order). Electron was previously called Atom Shell because it was created specifically to house Atom, although they mention it wasn’t created just for that. If there’s one app that has a reason to be built on Electron, it’s Atom, because it was an experiment to see how viable that solution would be.
But even they understand that solution wasn’t the best because it’s slow, so they’re experimenting with a model where the application logic is written in Rust and only the UI is built with web technologies.
Electron is good for an MVP if you have or need a web app. Companies keep using it afterward because it commodifies developers and externalizes costs.
I doubt Microsoft (VSCode) is in that camp, or that that was the main concern of the others. And I think that's a facile explanation anyway: the same kinds of companies flocked to native mobile apps in the past -- but nobody much cared for Qt on desktop still.
Otherwise I'm not sure why most people care about the internals of their editor.
They had a native app. They decided to throw it all away.
Tell me about it!
working on Max/MSP by any chance ?
> Those who might suggest that Qt covers that role have to explain why, even though it has been capable of fast, attractive apps for a long time, nobody built Slack, Discord, Atom or VSCode in it.
But someone built Telegram with it, which is of fairly equivalent complexity (https://aozoeky4dglp5sh0-zippykid.netdna-ssl.com/wp-content/...).
Besides, I'd say that there are enough quality Qt apps in the KDE ecosystem to not need $ELECTRON app :-) just yesterday Krita was making the highlights for instance. Likewise, I doubt people being used to the power and speed of KDevelop or QtCreator would feel the need to switch to VSCode.
Even going outside of the "libre" ecosystem, there are many many many nice-looking recent apps written with it... take a look at substance painter (https://www.allegorithmic.com/sites/default/files/DML_1.jpg), the Blizzard launcher (https://i.kinja-img.com/gawker-media/image/upload/s--zbfvud8...), Native Access (https://support.native-instruments.com/hc/article_attachment...)...
I did eventually download the Qt version. It doesn't support emacs keybindings. I can't use ctrl+b ctrl+f to move my cursor in text boxes, like I can in every macOS native app and Electron app.
I think this is a pretty strong argument for Electron: It actually behaves like a native app.
Substance Designer is a giant graph editor that should be perfect for multitouch trackpad pan/zoom gestures, but it doesn't support them. To zoom in and out you scroll up and down as if you're using a mouse wheel, but moving your fingers about 1mm will zip you instantly past where you were trying to get. And IIRC it expects a middle mouse button that you can drag to pan; on a laptop you get to two-finger-drag while holding down Option.
Electron, being based on a web browser, is probably easier to set up for various input types beyond keyboard and mouse.
EDIT: Here's an old video I recorded of the trackpad zoom. If you look very very very closely you might see my fingers move.
TLDR cross platform UIs still take work
From the thread's master:
> The fact is that a viable, efficient, native and cross-platform GUI solution has failed to emerge. Note the word 'viable'. Those who might suggest that Qt covers that role have to explain why, even though it has been capable of fast, attractive apps for a long time, nobody built Slack, Discord, Atom or VSCode in it.
Most industry-standard VFX applications utilize Qt for the UI and other application parts. It helps that there was a standards body (admittedly very Linux-biased, but starting to open up) set up between vendors and studios for choosing major frameworks/libraries (the VFX Reference Platform):
* Autodesk Maya
* Autodesk 3DS Max
* SideFX Houdini
* Foundry Nuke
* Foundry Mari
* Foundry Katana
* Foundry Modo
* Allegorithmic Substance Painter
* Allegorithmic Substance Designer
With the exception of macOS in very recent years, these applications work pretty seamlessly and efficiently. While Qt is not as easy to get up and running with as web-based frameworks like Electron, it does provide a pretty 'viable' framework for building cross-platform applications. Getting 'native' in appearance will probably never happen 100%, but Qt does a pretty bang-up job. Regarding the apps listed above, they all use custom styling so you get the same interface across all platforms, so native doesn't even matter anymore.
The bigger bummer is that “cross platform” for this sort of thing is only talking about Windows, Linux, and maybe Mac. Substance Designer and Painter would both be outstanding to use on an iPad Pro as long as you had someone on desktop platform feeding you the 3D models.
Designer has had proper Hi-DPI/scaling support since early/mid-2017. Painter has the support, too, but not to the same degree. Apparently it rounds between 100% and 200%, but nothing in between. Although a user said 150% works.
> The bigger bummer is that “cross platform” for this sort of thing is only talking about Windows, Linux, and maybe Mac.
That, I think, is between Qt and Apple. Mari 4 came out at the end of last year, and the macOS version has been in a perpetual beta since about 6 months after GA, but should arrive with the 4.5 release currently in beta (fingers crossed for those users). As reported by the dev team, it comes down to some serious stability and usability issues with Qt on macOS (the way they've implemented it) that just don't work well across various macOS versions. For some users the beta works fine, for others not at all. And this happens to users on the same OS version and hardware.
On the platform front, my bigger issue is the complete lack of iOS support for these apps, more than the iffy Mac support. It would've been really good.
Heck, even the Surface Pro would’ve been great if they’d properly supported gestures in the graph view. They didn’t though, and I don’t have a Surface Pro anymore.
And most of their UIs look like alien, unusable crap that only a professional forced to use them would tolerate...
It always takes a small adjustment period of getting used to the application's action paradigm, and then you get over it. The UIs don't really get in the way, and you kind of stop caring about visual appearances after the first few weeks. At the end of the day, it's all panels and buttons, as long as it's internally cohesive and consistent.
Even so, 3D apps have it way worse than pro apps like Photoshop, Cubase, etc with equally "wide range of functionality".
The ribbons thing admittedly is more discoverable, so occasionally I’ll look for something there instead of going straight to google, but once I know what it’s called I’m right back to the command line.
Meanwhile, operating system vendors are doubling down on proprietary APIs. These days you practically need to use the right programming language for your OS, too.
A whole lot of groups seem to have completely missed the web's lesson. Instead of "standardization is great", they heard "HTML/CSS/JS is the ideal substrate for writing applications".
FreePascal has one. Tcl did, before.
The bigger problem, I think, is the language tie-in. To get the amazing Delphi/FreePascal UI you need to work with Pascal.
To get the Tcl UI, you need to work with Tcl. This means anybody who doesn't want to code in that language won't even consider the option.
And how do you integrate your language of choice? You have to go through C. C sucks. SUCKS SUCKS SUCKS. It's so bad, and yet it's the glue for native development.
On the other hand, Electron/React Native is tied to JS. If C sucks, JS sucks even more, BUT you DO NOT HAVE A CHOICE. It's JS or the highway, by proxy of the browser. So even if you code in another language, sooner or later you MUST (not everyone, but enough of us) accept this fate.
And interfacing with JS is not as bad as interfacing with C.
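The "pass by C" point is concrete: nearly every language's foreign function interface speaks the C ABI. A minimal Python sketch using ctypes to call libc's real strlen, assuming a standard C library is findable on the system:

```python
import ctypes
import ctypes.util

# Most languages' FFI bottoms out in the C ABI; Python's ctypes is one
# example. Here we call the C standard library's strlen directly.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"native glue")  # counts bytes up to the NUL terminator
```

The same dance (declare the symbol, describe its argument and return types, call it) is what Pascal, Tcl, or any other "tied-in" toolkit forces on outside languages.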
The more of my project I can write in C, the happier I am.
Same issue with WxWidgets, which is otherwise great - AIUI, it was started as a C++98-ish library with support for lots of pre-standard compilers, and a lot of its deep weirdness is a direct consequence of that. OTOH, at least it has good bindings for other languages like wxPython.
On the other hand, I wish C were allowed to improve much more, and to do a Python 2/Python 3-style split (with better handling of the situation, of course): remove lingering issues, clean up the syntax/API, and just add algebraic data types, match and Option<T>, and a good UTF-8 string (the only additions that I think are worthwhile without making C too different).
I envision a C-good that transpiles to C-classic and slowly takes over.
I experience bliss while writing stuff in Go. It is not as fast as C, but it is getting closer with every release. Garbage collection in Go is so fast now that there may as well not be any pause at all.
It's worth a look, in my opinion.
I was recommending Go to someone that wanted C plus a few things. For me, that's Go.
Why does the lack of generics come up all the time?
It didn't get very popular though.
If you learn to make great-looking and functional websites, then you can turn that into an Electron app. Add to that the fact that many services want to have both a desktop and a web application, and it just makes a lot more sense to build the core in JS in a browser environment.
As for the obligatory "Qt can do that": Qt uses Chromium to do browser-related tasks. The alternative is webviews, which are unreliable in any full-sized application.
These days Qt is fairly close to this ideal, even on Mac: apps like calibre look a bit weird, but their menus are in the right place and their coloring roughly matches my OS settings. Electron apps are all over the place: vscode, slack, etc. all pick their own design styles and ruin the visual consistency of my desktop.
To some degree? It is incredibly anti-user, especially if the user happens to have a disability. I know I've said this before but it is absolutely ridiculous that having a "dark mode" is considered some kind of notable feature in 2018, when in 1995 you could just set all your colors and fonts however you preferred them.
But that's the world we live in now. Personal computing is dead, you just rent your computing from some corporation and they'll tell you when to update and how everything should look and what you can and cannot install (for your own good comrade!).
As far as personal computing is concerned we're living in the darkest timeline. Makes me ashamed to have anything to do with the tech industry.
For an apples-to-apples comparison, I’d rather use Slack through something like Adium that puts me in control of the ui. And VSCode would be nicer if it looked more like textmate.
I think Flutter means native in the sense that it runs at native speed, perhaps even faster in some carefully selected apps.
As for the look-and-feel thing, Flutter widgets feel alright to me.
I have found the perf to be just fantastic on Android, especially compared to React Native, which is a native offering.
What's interesting is that you can literally run the same app on all platforms, because Flutter draws the whole UI in Dart code, using the canvas. And well, Dart can run basically anywhere now...
Bottom line: it's very safe to say that Dart is a much better language than JS.
The language of choice has never ever, really mattered in software history.
And don't forget that the big G is behind this thing, which is well equipped to push it forward without the need of it being as popular as js.
Right now Flutter/Dart looks like an ongoing political war between ChromeOS, Android and Fuchsia.
By the way, Fuchsia is getting Android support and a new language-agnostic UI composition engine, Scenic, so let's see how long Flutter stays the official Fuchsia UI toolkit.
But isn't the point of a truly cross-platform framework just that? So what if Fuchsia has a different native toolkit; maybe they are trying to launch a platform-specific API that's more bare-metal.
I mean, isn't the whole point of Flutter that you can take the same app and run it anywhere?
Maintaining parity between two applications running on different GUI platforms is very expensive - much more than twice the effort of maintaining a single codebase.
Otherwise we would all be programming assembly, or heck, just write machine code directly and skip the time-consuming assembler.
Speed is important, but so is good debugger support, productive UI tools, developer productivity, training requirements, available libraries, target market, etc. At the end of the day, most people make pragmatic decisions, taking lots of factors into account.
A good historical example of this is Word vs. WordPerfect. There are many reasons why Word won, but one of them is that WordPerfect was written in assembly. This was helpful at the beginning as it was good for performance, but their developer productivity suffered as the codebase increased. This resulted in fewer features, which is also something that customers care about.
Few customers care how quickly you can do something they don't want to do.
Also, he's conflating toy problems, where getting the big-O right is the only important thing, with real-world applications where the constants are also important.
A cross-platform UI framework could easily fulfill all your requirements, yet doesn't exist.
Several cross-platform UI frameworks do exist, all with different pros and cons. I don't think it would be possible for a single UI framework to override all other considerations when choosing a tech stack. It would have to be pretty magical.
There is something super nice about editing JS and being able to see the change immediately, but I’d still trade that off for some type safety in a “compile to JS” language that wastes a second of my time per compile.
The other point about compile speed is that it doesn't affect the end user (if we are talking about source-code compilation, not JIT).
Really, I don't know about JS hell, but C++ compilers aren't slow because nobody has tried to speed them up; rather, the nature of C++ is that it is slow to compile. Every so often somebody rage-quits C++ and starts a language like Go because of this.
The real data point is that Clojurescript is only slightly slower to execute than real Clojure, but takes two orders of magnitude more time to compile.
He's right, though, that people have to move on past Electron. Lately I have been building applications in Python/tkinter and I still can't get over how fast it is on both my Mac and PC, both to start up and when running. There is none of the "spend 30 seconds watching a splash screen" that we're accustomed to with advanced applications in Java and C++ (e.g. PyCharm and Photoshop).
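For what it's worth, "fast" here really does mean instant: a complete tkinter app is a handful of lines with no build step. A minimal sketch (the app itself is hypothetical):

```python
import tkinter as tk

def build_app():
    # A trivial, hypothetical app: one label, one button, no build step,
    # no splash screen -- the window appears essentially instantly.
    root = tk.Tk()
    root.title("Hello")
    tk.Label(root, text="Hello, world").pack(padx=20, pady=20)
    tk.Button(root, text="Quit", command=root.destroy).pack(pady=(0, 20))
    return root

# To run it: build_app().mainloop()
```

Tk won't win any beauty contests, but the startup cost is essentially the Python interpreter itself.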
Another thing that impresses me is the size and speed of small GUI applications written in C for Windows such as this little utility:
I think macOS is also part of the problem. Recently I added a new keyboard to my Mac and thought I was having Bluetooth problems, but the main reason it was unresponsive was that the spellchecking baked into the OS would periodically cause keyboard input to lock up for a few seconds.
Later on I noticed that web browsers on Mac OS (Safari/Firefox/Chrome) seem to "go to lunch" and not respond to mouse clicks for 5 seconds every so often. I imagine this impacts Electron apps.
I think people have been "frog boiled" into expecting computers to be slow.
Are your applications of similar size and complexity as these Java and C++ applications? Photoshop is a huge beast.
I ask because I’ve written C++ GUI applications that start up instantly (sub second). I’ve also written Python GUI applications that did not. It depends on what and how much you’re doing on startup. Most GUI applications aren’t as large as PyCharm or Photoshop and start up significantly faster, regardless of implementation language.
> Later on I noticed that web browsers on Mac OS (Safari/Firefox/Chrome) seem to "go to lunch" and not respond to mouse clicks for 5 seconds every so often.
I’ve not had this problem personally, but I have had Chrome lock up my entire system on Ubuntu (input including mouse clicks simply not doing anything, but if I happen to have another window open I can alt tab to that and then everything is good again, or at least I can kill Chrome).
The other part where Qt can really slow down is when you have to instantiate a 3D driver module. That can block absolutely everything.
Once it has warmed up, it's close to C however.
Which can be further reduced by making use of an AOT compiler, or SubstrateVM.
Additionally, Java always enjoyed the ability to AOT compile to native code, even though it was mostly available via third party commercial JDKs, or gcj for the adventurous ones.
That's the problem: a lot of workflows, even in really complex applications like Photoshop, consist of firing up a project for a couple of minutes to get some output or correct something. Enough time is wasted warming up that people who can afford it just leave the applications running. I just keep whatever VS project I'm working on open, for example, because the startup times are painful and I have RAM to spare.
This was at a time when I had already wondered aloud, on several occasions, how long it would take for browser vendors to allow downloading precompiled JIT blobs for speeding up the most expensive parts of whatever they wanted their visitors to run. This never happened. The world hopped over the fledgling JS-AOT craze and came up with WebAssembly instead.
(Footnote: the construction also introduced Ruby as a build requirement, because of course the macro-assembler generation was done with Ruby.)
It's not really the application size, though. It's whether you load/initialise things eagerly at startup, or as needed/in the background.
If you see a splash screen, you know it's the first option. The splash screen is visible and active, after all - the app itself is loaded.
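The eager-vs-lazy distinction fits in a few lines of Python; the spell-checker here is just a hypothetical stand-in for any expensive resource:

```python
import functools
import time

class App:
    def __init__(self):
        # Eager work at startup is kept to nearly nothing, so the
        # window could (hypothetically) be on screen immediately.
        self.started = time.monotonic()

    @functools.cached_property
    def spell_checker(self):
        # Expensive resource, built lazily on first use instead of at
        # launch; the sleep stands in for loading dictionaries from disk.
        time.sleep(0.2)
        return {"teh": "the"}

app = App()
# Launch was instant; the 0.2 s cost is paid here, on first access,
# and cached_property ensures it is paid only once.
suggestion = app.spell_checker["teh"]
```

A splash screen is what you get when all of that work is moved into `__init__` instead.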
Worst thing is the Finder. Moving files between folders used to be instant. Now it sometimes takes a few seconds for files to appear in a folder after you drop them.
I have no clue why.
I tell myself that the problem is that the old, experienced programmers are leaving, and inexperienced young programmers are too busy adding new features (it works! good enough!) to make sure anything actually works well. And of course that never would have happened if Steve Jobs was still alive. (Just kidding. Steve Jobs loved fancy graphics that made everything slow)
You joke, but this is a real issue. It's hard to find any other explanation for why the quality of commonly-used software has taken such a massive nosedive in the latest 15 years, outside the FLOSS world (CADT syndrome notwithstanding!) even as software was supposed to be "eating the world" by now! Windows Vista, anyone? (Or, as some folks in Microsoft marketing used to call it rather presciently, Windows "Mojave"?) I guess it just goes to show that, yes, systems programming still matters, as do projects that attempt to open it up to a far broader audience while keeping and even strengthening its focus on rigor, performance and correctness.
Now, the average user has to totally relearn every few years how to do less of the things they could do before. There’s an obsession with reinventing the wheel that takes precedence to stability.
Commercial software is generally much better than it was 15 years ago.
Yes, Vista was bad but Win7 was good. You should have seen Win3 or 95 though. You’d expect to have the computer crash multiple times a day.
(Even Microsoft had to give up not only on Vista but on Windows 7 itself for "low-end platforms" - they ultimately promoted a "Starter" version of the latter that was specifically built to never even try to launch more than three apps at the same time! I won't bother to attempt even the most cursory comparison with what a proper, FLOSS alternative can still achieve on the same hardware, even today...that would be tedious and ultimately beside the point.)
Software Date Driven Development is a scourge, but shareholders (or potential purchasers) scream for their instant ROI, and this is what we get.
Has the GUI of tkinter apps improved over the past couple of years? I have yet to see an interface built with tk that doesn’t look like it was meant for Win95.
Reasonable or not, I have a hard time accepting workflows worse than what Microsoft Access or VB 6.0 had.
So much this!
And that's when you can reason about the access patterns. When you're running things out of a browser, you are even further removed from the metal and likely using a grab bag of web frameworks and libraries that you don't control (or fully understand). And it's easier to keep using them rather than think about what it would take to make things run well and invest in that future. (You may not even have the ability to make that decision).
Luckily, it does seem that people are becoming aware of the issue, at least in some circles. I've now seen a number of articles with titles like "Why I have turned my back on the church of Object Oriented Programming," and with people like Mike Acton speaking at CppCon about data structuring, hopefully the next generations of application developers will be equipped to make things fast again.
Then VSCode came along and showed that it wasn't all Electron's fault, there's a right way to do it. With faster platforms what you get is the room to do things wrong (inefficiently) with hardly anyone noticing.
I use sublime as a notepad.
Tixati is my favourite torrent client.
Don't think I run any other Electron apps. Used to run Slack and Spotify but now just pin browser tabs. Some devs run Gitkraken which baffles me.
Admittedly I was running 10ish not-so-microservices on a dev laptop so couldn't really spare the cycles or RAM. Now on a desktop machine with 32GB and more cores I probably wouldn't notice but just don't like the idea of it from my previous experience.
> So what do these numbers tell us? Basically, to process exactly the same code, you can either spend ~8 seconds, 24 seconds or 78 seconds. Your choice.
My choice is I don't care in the least about these differences, and would gladly take the 78 seconds if that slow language had slightly better syntax or was slightly more expressive.
If I were working on a google scale code base and the difference was an hour vs 2 days, I might care then. But for the vast majority of programmers and applications, that's not relevant.
But end users do care about programs not crashing and losing whatever they were working on, they care about features, and they care about how long it takes for them to get the features they want, all of which are impacted in some way by the language and tools used by the programmer.
My real point is that I care about everything more than speed until I have a specific reason to care about speed. Then I'll care about speed.
In contrast, many programmers fetishize speed and efficiency as a quasi-religious value, even when they have no practical consequences.
78 seconds was the compile time. The runtimes were almost identical: 0.45 seconds to run the entire test suite in the fastest case vs 1.3 in the slowest.
I'm not going to care about the difference just because "omg one number is 3x another number!?!"
Again, if you have a reason the speed matters, then it matters. The specific numbers here are not relevant to my argument, which boils down to: being "slow", in and of itself, is meaningless.
All up, the article starts with a reasonable discussion of how different solutions can differ by more than six orders of magnitude in time, then segues irrelevantly into a complaint about Electron.
This is of course a bit of an overstatement, but I generally agree that (depending on the language you're using, and the problem you're solving) there are quick solutions and slow ones. However, based on the videos he posted, the author seems to have stopped just before the Advent of Code problem most likely to make his CPU cry! Day 14 didn't have an obvious optimal solution, so there were quite a lot of correct solutions that took seconds, not milliseconds. (If you're curious, check out this awesome post from /u/askalski on getting it to run truly fast.)
Humans seem to be naturally inclined to pick the "best" option, and naturally frustrated by the fact that best changes so often depending upon the situation. In almost any situation as a programmer, you're deciding between several competing priorities. There's your speed as a programmer, your code's speed for the target hardware, your ability to get people to help you with the code, either via hires or online documentation, etc.
With regards to Advent of Code (which I also completed this year, and really enjoyed!), the "right" solution in Python could often be hundreds of times slower than the right solution in C or Rust -- just check out the benchmarks for similar toy problems. That doesn't mean the solution is wrong; it just means that the author has decided that a couple seconds of the CPU's time are worth the benefit of using a higher-level language, which is often a reasonable choice.
But there's no point in turning something subjective into a one-bit moral requirement. Programs need to be fast enough to meet requirements and people have different opinions about what they require.
There is even an argument that slower is sometimes better. The large tech companies put enormous effort into reducing latency, but I'm thinking of writing a browser extension to add latency to some websites to make them less addictive. Where you stand on this is going to depend on what you're trying to do.
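The latency idea could be prototyped without a browser extension at all. A hypothetical sketch of the core of it as WSGI middleware (all names invented for illustration):

```python
import time

def add_latency(app, delay_seconds=2.0):
    """Wrap a WSGI app so every response is delayed on purpose --
    the opposite of what the industry usually optimizes for."""
    def middleware(environ, start_response):
        time.sleep(delay_seconds)
        return app(environ, start_response)
    return middleware

# Hypothetical demo app standing in for the "addictive" site.
def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

slow_app = add_latency(demo_app, delay_seconds=0.1)
```

Run behind a local proxy, the same trick would slow down any site you point it at; a real extension would do the equivalent in the browser.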
Woah, that might be really effective. I love the idea of finding ways to undo the powerful conversion and retention techniques that our industry has developed. Adding latency is likely not frustrating enough that I'd just bypass it (like I do with Screen Time), but maybe it'd work for my brain. So cool. I wonder what other opportunities exist in the "making this less sticky" space.
A native app removes any chance to run an adblocker (I guess it might be possible at the network level) and probably means that you won't be able to add keyboard navigation as you'd get from something like Vimium.
I always find it very jarring to be using something like iTunes and to realise that I can't Ctrl click something into a background tab.
This is completely true.
HTML is also more than a way to put pixels on a screen.
I'm utterly sympathetic to complaints about Electron apps (which I generally avoid because they feel wrong to me too), and to the idea that the web could/should perform better, and even that it is wrong for many applications.
But if one primarily conceives of HTML as a pixel-placement scheme, one is going to make some poor judgments about what it's right or wrong for. This arguably works in tandem with the piece at some points -- "the web" in general (and some specific stacks or architecture choices) is/are probably used in situations it is wrong for because the related tech is what people know, rather than because all the engineering decisions involve informed matches between characteristics and requirements.
And yet it reveals some limits of any judgment that the web in general is "wrong," and perhaps indicates the author should have demonstrated more of the wise principle of thinking about what one is getting. If there is one place where "it is fast or it is wrong" is often incorrect, it's in thinking.
(See also: Knuth's rules about optimization.)
The example discussed throughout the article is Clojure vs. ClojureScript.
The author takes issue with the difference in compile time (6.5 seconds vs. 78 seconds, respectively).
But this really is an apples/oranges comparison. If the user expects to run the program in their browser, then any solution that doesn't deliver that experience is wrong. End of story.
If the user can't be bothered to figure out how to install your native or byte code binary, then your solution is wrong. Find a way to give the user what they expect or stop trying to write software for a living.
Here's a common programmer dilemma:
You are familiar with development system X. However, your users demand that your software run on platform Y. Platform Y doesn't support X natively.
What do you do?
One option is to refuse to learn the development tools for platform Y, bringing shims, transpilers and a bunch of other dreck needed to get X running on platform Y.
But the user will notice the difference and not like it.
You're better off learning the native development tools for platform Y, and delivering a native experience.
> However, your users demand that your software run on platform Y. Platform Y doesn't support X natively.
What do you do?
Use Electron apparently…
> One option is to refuse to learn the development tools for platform Y, bringing shims, transpilers and a bunch of other dreck needed to get X running on platform Y.
> But the user will notice the difference and not like it.
Exactly. I do not like it at all.
On the other side of the same coin: Drop the web stack and write the application in an appropriate native development environment if your deployment target is desktop or mobile.
Some of the truisms in the article rubbed me the wrong way though:
> It’s not that they couldn’t build faster. They just choose not to.
I’m not sure I agree and I don’t see what it’s adding to the argument.
> in the end what matters is if you have a working program on your hand and if it can produce the answer in a reasonable time. It doesn’t really matter how the developer arrived there.
Is this sarcasm? From my opening comments you’ll see I disagree with this. But, I originally thought the author did too.
Then they go on to construct an analogy of a shitty train and an awesome plane that cost the same. Where did they get this part from? I thought the reason people chose e.g. Electron vs. native is that there is a different cost involved for the developers?
> It’s easy to find excuses why things are the way they are. They are all probably valid, but they are excuses.
I used to assert the opposite when I was young, to my parents: “It’s not an excuse, it’s a reason.” It’s easy to dismiss others’ arguments out of hand, but the fact is there is a reason for every decision made, even if you disagree with the reasoning. There is always room for more education. I just don’t see how we can make any headway if we ignore each other like this. This reads to me more like “I don’t understand why C++ or Rust compile slower.”
> We know way faster programs are possible, and that makes everything else just plain wrong.
It’s easy to fall into the right vs. wrong argument when talking about programming: binary logic, tests either pass or fail. But we’re in the human realm here, where I’m of the opinion that there is no right or wrong, only consensus. In the case of products, like Slack, consensus manifests as what the market will bear. I might not agree, but for the time being, the industry has deemed it “right.”
After researching how to create a GUI in Java, Qt, GTK+, etc., I found for myself that platform-independent GUI design can be a difficult topic, especially when great usability is the goal.
Especially since this attitude only shifts costs from the manufacturer to the customer, e.g. with respect to time to market. While the manufacturer saves time and money, the users and customers pay for it. And this is the real argument for me - manufacturers want to save costs. Costs for:
- qualified staff
- portable applications
- time to market
In my opinion the only way to prevent more and more web-based GUIs is to do it better and be successful with it - not an easy task.
So thank you for this polarizing but interesting article.
> “Wow, wow, slow down! This is even more unfair! Now it’s not just apples and oranges, now it’s toothbrushes and spaceships. You completely ignore what each language brings to the table. There’s a reason they spend so much time compiling, you know?”
So what's the reason, if the resulting performance doesn't differ much? C/C++ build times seem insane to me. Every time I try to install an AUR package and see it start building C/C++, I cancel and give up the idea: I don't want to wait a day for my CPU load to drop below 100%, and my SSD usually doesn't have enough free space anyway.
Transpilation, however, is by definition not optional. It is frustrating to hear a complaint from someone who chose to write in Clojure, itself a layer on the JVM, and then chose to support a transpilation target, ClojureScript. If compile times bother you, then pick another stack. However, if you want to write code in a functional way and target both the JVM and JS, then this is the price you pay.
It is not an either/or proposition - we can have all the things, or at least something significantly better than the current state of affairs.
VSCode has no problem doing this on my Mac. I was scrolling through a 4,000,000-line file the other day like butter.
Win32, however, can’t resize the most basic window without flickering - which is why every decent Windows app uses its own non-native GUI layer. There’s nothing wrong with non-native solutions - they’re often better. We can have the best of both worlds. Native on some platforms just isn’t that great.
This is so extremely wrong it's not even funny any more. WS_CLIPCHILDREN and WS_CLIPSIBLINGS are the key here. Of course, many people don't understand them (even though it's not hard, and Petzold's book came out, what, 25 years ago?), but win32 is very well understood by now, and it's relatively easy to make extremely fast (by today's standards) UIs with it.
Then I have to repeat that same years-long process for Android, iOS, OSX and Linux.
Or I can just make a web application that's slightly slower. The decision was super simple for me.
F* native development of UIs. It's stuck, and everyone in it thinks it's fine - it probably is if it's all you do - but what matters a lot of the time is the sum of development cost across all supported platforms. *shudder* That makes it especially horrible if it's just you developing something: it's so unlikely that a single developer could support all of the most-used platforms (Win, Linux, OSX, Android, iOS), given the time required to learn even one of them.
That's not true
When making decisions about software in the world, performance is just one of many considerations. If performance was the only dimension, we wouldn’t have had reason to move past Assembly.
I'll take Ruby vs. C as an example. If I were to test out a weekend project, I'd rather choose Ruby, even if it's going to be slow, as I can focus on finishing everything in the few hours that I have.
[For non-developers, I’d guess 95%]
The crux of the issue is that the argument really relies on the notion of comparing two things that supposedly do "the same thing", but you really have to ignore almost everything that could ever factor into the decision between them in order to consider them the same, and what the thing is shifts from moment to moment.
The Advent of Code example is fine. Here, we have two different algorithms that perform the same computational task, and there's a performance gap because they're not the same algorithm and have radically different time complexities.
In the Clojure vs. Clojurescript example, the author has picked a library that can run tests on the JVM or as transpiled JS code in a browser. The end result code is doing the same thing in a different sense because it implements pretty much the same algorithm (modulo technical differences in the underlying intermediate platforms). But the first performance comparison is between the Clojure and Clojurescript compile times. Trust me, there's almost no sense in which compiling the Clojure language to the bytecode it was designed to be compiled to and translating it into a different high-level language in a performance-sensitive way can be considered the same task. So it's unclear why that's supposed to be meaningful. And the test-timing comparison is also a little funny, since the JS still has to be JIT compiled to machine code, which is much more complex than the equivalent bytecode compilation - there's a sense in which it's more similar to the Clojure compilation step (which takes 6.5 seconds) and it happens in less than a second.
But anyway, one can argue that all-in-all it's the same code and it's doing the same thing in the end and the one that does it in the browser is slower. But that's only the case because of that specific choice to compare library tests that can target either the JVM or the browser. That's not a choice that makes sense in general.
You don't pick between those two compilers in a vacuum - you're not going to use Clojurescript unless your code needs to run in a browser. And you _can't_ use vanilla Clojure in that case. If running in a browser is a requirement, then the compile speed and runtime performance of vanilla Clojure is meaningless because it doesn't run on the platform your program is being delivered on. The author is comparing a library that can be compiled either way and run on either platform - but the actual real world code that library is used in wouldn't have that property, and the performance comparison is pretty meaningless.
I'm going to skip the next bit because it goes further off the rails in ways that are mostly unimportant.
I think there are some less charitable ways to read the conclusion, but let's go with this: don't build applications (especially things like text editors) on the web platform, because it's super inefficient. You can make faster applications, therefore, you should.
Here's the thing: you won't, and if you do, they won't succeed. We wish our text editors weren't so resource intensive, but the responsiveness of basic text editing operations isn't what most of us look for in an editor. There are slimmer native text editors out there; their development, and that of the surrounding ecosystem, moves incredibly slowly because they weren't built on a pre-existing extensible platform that thousands of developers are familiar with, and so very few people want to work on them. And that makes them cumbersome for a lot of tasks, which means a lot of people won't use them.
The platform that our bloated text editors are built on is part of what they do. So a faster editor that isn't built on a familiar, extensible platform doesn't do the same thing faster - it does a subset of the things I care about, at the expense of the reasons that cause me to put up with my current editor's bloat.
You can make a faster X for any X that runs on the web platform, but to do that you're going to have to build it on less general-purpose, extensible technologies, and in many (but not all!) cases, that alone is going to doom the project.
Sublime Text became popular because it was a cross-platform graphical editor that used a popular language for extensions. Atom became popular because it was all that and free. VS Code became popular because it was like Atom but faster.
My point was just that the new editors were successful in the ways that they were (e.g. for web developers working with a quickly evolving language) precisely because of the fact that they were built on an extensible platform, and in spite of the performance issues.
Atom used TextMate language definitions until recently. VS Code uses an RPC protocol designed to let any editor support any language. Emacs just isn't very popular anymore.
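The protocol in question is the Language Server Protocol (LSP): JSON-RPC messages exchanged over stdio or a socket, so any editor can talk to any language's server. A request looks roughly like this (a sketch; the file URI and position are made up):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "textDocument/hover",
  "params": {
    "textDocument": { "uri": "file:///project/src/main.clj" },
    "position": { "line": 12, "character": 4 }
  }
}
```

The server answers with the hover text for that position; neither side needs to know anything else about the other, which is why one language server can serve Emacs, Vim, and VS Code alike.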
What "basic UI features" are easier to build in a browser?
Oddly enough, I'm actually pretty acutely aware of how much performance matters in this space - at my last job, I spent a year developing an IDE for a custom in-house language/environment with some unusual required features. I looked into extending existing editors/IDEs. The web-based editors were too slow for my application (which required a very large number of page updates from remote sources), but extending the native editors was an unfathomable pain. When I looked into things that didn't use a browser back-end, I was budgeting a month just to learn enough about their plugin architectures to determine if what we needed was possible without rewriting the UI code from scratch, which I really wanted to avoid. From experience, working with raw Java/C++ UI toolkit code is a nightmare compared to html/js and requires orders of magnitude more LOC just to put anything on the screen that can be interacted with.
In the end, I wrote my own web-based IDE from scratch, using a lot of application-specific performance optimizations that would be really difficult for a more generic, modular system (lots more care about what parts of the DOM to touch and when went directly into the parser than would have been possible with Atom/VSCode/CodeMirror/whatever at the time). I spent 6 months optimizing it and it was only barely fast enough that you could type at close to 60fps in big documents, but I don't think I could have made a native version that worked at all in that time.
So I get it: performance is important, and there are cases where web-based things either don't cut it or are barely acceptable.
My point wasn't that performance isn't important, it's that it's not the only concern and pointing to the relative popularity of editors over time still illustrates that. Notepad++ was released 12 years before VSCode (which has only been around for ~3 years) and has had better performance this whole time, but in the 2018 Stack Overflow survey, VSCode beats it in popularity. So performance can't be the overriding factor in its relative popularity - it became popular in spite of being slower than things that already existed.
The author of the original piece argues that the whole web-stack is _wrong_ and should be thrown out and replaced with something else, because it's slow and people don't care about how easy it is to develop something if it's slow. I don't think that's true, and the easiest way to see that is to note that we already had faster things that were harder to develop and extend and they're increasingly being rejected in favor of the opposite.
More often than not, somebody with far more domain specific insight than yourself thought about the same thing, and rejected it, constrained by some other more important issue you can't yet see.
That's not to say that no improvement is possible, just that you are in a global competition with millions of other smart people and moving things forward is hard and takes dedication, not just hand waving.
To be fair, preventing a small group of people from dumping disproportionately large externalities on everyone else, to extract a little extra profit for themselves, isn't a working business model. If it were, we would have solved global warming and ocean pollution by now.
Slow software in general, and Electron in particular, is a collective action problem. Solid, fast software gets outcompeted by half-assed bad engineering, because the latter can get to the market first.
(Also known as the curse of "worse is better".)