I guess, but compatibility issues on the web, while they exist, are pretty minor these days. Browser monoculture is a much worse problem, both practically and from a business perspective, in my opinion.
As someone who uses Safari on Mac, I can say that compatibility issues are a big deal. Granted many of the issues are simply from sites checking for Chrome and telling everything else to f--- off. I've even seen a site fail to run on (chromium) Edge because it really wanted Chrome.
However, real compatibility issues are a thing as well.
That said, I hate Electron. I hate that I have to run 4-5 instances of Chrome on my machine all day long for various different apps instead of the developer checking that their stuff works across the 3 main rendering engines.
You are correct that Safari has major compatibility issues, but incorrect that the problem is simply that developers target Chrome (it happens sometimes, sure). Most things that work in Chrome also work in Firefox, but WebKit is ~10 years behind both Firefox and Chrome because Apple hasn't invested in keeping up to date with web APIs where their competitors have. Hell, even Edge (before it adopted Chromium) was more compatible than Safari.
Personally, I view Safari in 2022 the same way I viewed IE in 2012 - as a backwards browser holding back web development because the company developing it just doesn't care about the web. The compatibility issues aren't as bad, but I have to care about Safari because iOS still won't let any browser use anything other than WebKit, whereas there was a point when saying "Anyone choosing to use IE11 deserves a bad experience" made a certain amount of sense. No one gets a choice on their iPhone, so Apple makes things harder for web developers, probably because they want them all to be iOS app developers instead.
Personally, I think the last few versions of Safari have caught up a lot.
Safari development does seem to go through spurts, depending on who is working at Apple. Let's not forget that in the past, it's been a major driver of web innovation, particularly in the late 2000s to early 2010s.
To be fair, I hate Safari with a passion. It took until iOS 15 to get support for WebAssembly.instantiateStreaming, and there’s still no support for WebM or Opus on iPhones. Curiously, on platforms where Apple doesn’t mandate a browser engine, Safari is magically able to support codecs that aren’t patent-encumbered…
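(For context, the usual workaround on engines that lack it looks roughly like the sketch below - feature-detect instantiateStreaming and fall back to the older ArrayBuffer path. The module URL and imports object are placeholders.)

```js
// Minimal sketch: prefer WebAssembly.instantiateStreaming where available,
// fall back to the ArrayBuffer path on engines that don't support it.
// "module.wasm" and `imports` are placeholders for illustration.
async function loadWasm(url, imports = {}) {
  if (typeof WebAssembly.instantiateStreaming === "function") {
    // Streaming compilation: starts compiling while bytes are still downloading.
    return WebAssembly.instantiateStreaming(fetch(url), imports);
  }
  // Fallback: download everything first, then compile and instantiate.
  const bytes = await (await fetch(url)).arrayBuffer();
  return WebAssembly.instantiate(bytes, imports);
}

loadWasm("module.wasm").then(({ instance }) => {
  console.log("wasm exports:", Object.keys(instance.exports));
});
```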
Just to clarify - WebKit does support Opus, but using their CAF container [1], which is a pain to deal with, as it requires double the space to store essentially the same encoded audio twice.
Nobody tests on a Mac. If Apple wanted the platform to be supported they could make it much cheaper to test on, but instead they've made developer hostile moves for the last ten years.
I don't know about that, one data point: an ES2018 feature, regex lookarounds, is still not implemented in Safari. And the JS engine is the thing that's the most compatible across browsers, nowhere near the level of incompatibility of the rendering engine for example.
Ignore stuff that was added to HTML, CSS, and JS in the last 4-5 years. You'll still have a pretty solid GUI platform, likely more capable and accessible than Qt or GTK or AWT.
With the usual compiler / transpiler stack, you'll have a nice, fast-to-market, non-esoteric development environment. All without the need to ship 100MB binaries.
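(As a rough illustration of what "the usual compiler / transpiler stack" means in practice, here's a minimal babel.config.js sketch targeting browsers from a few years back. The browserslist query and options are illustrative, not a recommendation.)

```js
// babel.config.js — a minimal sketch of "target browsers a few years back".
// The browserslist query below is illustrative; adjust it to your support matrix.
module.exports = {
  presets: [
    [
      "@babel/preset-env",
      {
        // Compile syntax down to what browsers released since ~2018 understand.
        targets: "since 2018, not dead",
        // Only pull in the core-js polyfills the targeted browsers actually need
        // (requires core-js as a dependency).
        useBuiltIns: "usage",
        corejs: 3,
      },
    ],
  ],
};
```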
"requiring web devs to ignore the last 4~5 years of progress is just unacceptable."
But this is not for "web devs", it's for general UI development on desktops. Given that some conventions there are decades old, 4-5 years does not sound very ancient in comparison. And for desktop you probably want to trade bleeding-edge hotness for tried and tested methods anyway. 4-5 years on desktop is a very brief span of time.
Then do as I do: develop in Firefox, and if it works there (and isn't a PWA, where you might get in trouble with Safari), then it works everywhere.
Less testing, fewer bugs. What's not to like?
Contrast that with Chrome-first developers, who often get caught by cross-browser incompatibilities just like they did back in the day when they were IE-first developers : )
This is absolutely not true. I wrote a simple flexbox layout in Firefox last year and it was majorly broken when I tried to test it on Chrome, since Firefox and Chrome disagreed on how to compute "min-width" for inline images and other replaced content. In this case, my layout worked in Firefox, but Chrome was the browser that was following the CSS standard correctly, and my code was broken.
There’s plenty of weird cruft in Firefox’s ~30 year old codebase that causes bugs and unspecified behavior. For example, Firefox’s ContentEditable code deletes things “backwards” compared to all other browsers and operating systems (bugzilla: https://bugzilla.mozilla.org/show_bug.cgi?id=1735608) - this behavior isn’t standardized in any spec, and Firefox is different from other browsers.
There are plenty of gotchas in layout/rendering as well, where either the standard is underspecified, or Firefox has some small bugs. Maybe Chrome has many Chrome-only APIs, but the developer will always need to test in Chrome and iOS Safari 13 (or whatever your oldest supported iOS version is).
Chrome and iOS are where the users are, and a good website or app should be usable and beautiful for everyone.
Developing UI components based on ContentEditable is trial and error, with a lot of code held together by cross-browser integration tests where you always have to pray that a browser upgrade doesn't break something new in a subtle and surprising way...
1/5, would not recommend. (But sometimes there's no way around it.)
> There’s plenty of weird cruft in Firefox’s ~30 year old codebase that causes bugs and unspecified behavior.
Just wondering where you're getting "30 years" from?
That asked, a Chrome-shaped monoculture doesn't help anybody. We need more competing implementations, not less. Anyone feel like collaborating on such?
Netscape was founded in 1994, and Firefox is derived from the Gecko (1997) layout code open-sourced by Netscape in 1998. See https://en.m.wikipedia.org/wiki/Gecko_(software) for some background. I was thinking 1994 when I wrote the comment, but it’s closer to ~25 years if you reckon from Gecko’s birth year of 1997.
> Contrast to Chrome first developers who often get caught by cross browsers incompatibilities just like they did back in the days when they were IE first developers : )
I'm not entirely sure how I feel about this statement (and comparison to IE). If anything, Safari is "the new IE" rather than Chrome.
MOST (hopefully the nitpickers pick up the caps lock) stuff in Chrome is based on drafts or standards.
Sure, Google pioneered/championed some of them, but that's kinda irrelevant if developers voted for those features. There's very little Chrome-specific stuff.
Also, other browsers have vendor specific stuff in them too, yet people rarely fling shit in their direction?
FWIW I also mostly use Firefox for dev because I prefer how some devtool features are designed/implemented.
Most of my cross-browser issues in FF were "brief" in the sense that they were bugs that got fixed eventually.
> I'm not entirely sure how I feel about this statement (and comparison to IE). If anything, Safari is "the new IE" rather than Chrome.
Some people who either don't know history or willfully ignore it keep claiming that Safari is the new IE; at one point someone even made a website out of it.
Don't fall for it.
Chrome is the new IE:
- technologically advanced? Check!
- implement a number of things without asking or waiting for consensus? Check!
- will be abandoned as soon as they have crushed every competitor? Well, it is produced by the world's most famous company when it comes to killing its own software.
I can't believe you seriously just suggested Google is going to abandon Chrome.
Next up, Microsoft is going to abandon Word. Also, Facebook is going to abandon Facebook.
This is why no one takes these conversations seriously. All vitriol, no substance.
But Chromium is open source. Even if Google stops investing in Chrome, we can develop it. In fact, I think Mozilla should switch to Chromium too, like Microsoft did, while being careful to keep the licence open.
Unless I'm missing something, that doesn't fix the issue; the problem is symmetrical: browser A is different from browser B, so browser B is different from browser A, and testing in either of them doesn't guarantee a correct output in both of them.
Chrome - like IE before it - has a number of "features" that only work/ed in Chrome/IE.
Writing for a standards-compliant browser like Firefox makes your code better defined today, just as it did back then, because you won't get away with the same sloppiness. (This also makes you catch and fix problems in early iterations instead of after QA calls to complain, so it saves you time and context switching too, and if you are good you might look like a cross-browser superhero almost for free ;-)
Earlier on not every basic thing was supported everywhere, many people here will remember the ACID tests. Younger devs won't remember them as we stopped talking about them after every browser became compliant. Today every mainstream browser has comprehensive test suites to cover everything we need from CSS I think.
Testing on ~~Chrome~~ Firefox would save you from accidentally using Chrome-only features, but that's only part of the problem, caniuse.com kinda works better for that as you get data about other browsers too.
>>Testing on Chrome would save you from accidentally using Chrome-only features, but that's only part of the problem,
>How?
Sorry I meant "Firefox" and "Chrome-only" there.
In general though it's not like Firefox is spec compliant and others aren't; pretty much every browser works a little differently in some areas. Just for the sake of saying something verifiable: no browser is spec compliant, because the spec mandates a precise maximum length for strings and each main browser has a different, arbitrary (= I have the RAM, they just won't let me use it), lower limit on that.
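(If you want to see that for yourself, the probe below is a rough sketch - the ceilings are engine-specific and the sizes here are illustrative only; the spec's theoretical maximum is 2^53 - 1 elements.)

```js
// Rough probe of an engine's string length ceiling. Limits differ per engine
// and sit far below the spec's theoretical 2^53 - 1; the sizes are illustrative.
function tolerates(length) {
  try {
    "x".repeat(length); // allocates a real string, so this costs memory
    return true;
  } catch (e) {
    return false; // typically "RangeError: Invalid string length"
  }
}
console.log(tolerates(2 ** 27)); // usually fine in current engines
console.log(tolerates(2 ** 30)); // already rejected by some engines
console.log(tolerates(2 ** 32)); // rejected by every mainstream engine today
```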
> Sorry but requiring web devs to ignore the last 4~5 years of progress is just unacceptable.
No it's not. "Web dev" is one of the things in my toolbox and I still clicked on this knowing full well it was likely not truly cross-platform or keeping up with bleeding-edge features.
Truth is, all development is about tradeoffs, and Electron is one heck of a blob to ship to users... in a lot of applications a lighter-weight artifact may be desirable, and trading away the last 4-5 years of browser advancements may be perfectly OK.
Is it unacceptable? Sure - if your application needs features out of the last 4-5 years of browser advancements... but that's not most applications. If you need a bleeding-edge solution that's truly cross-platform then Electron clearly still is your choice as you're just shipping around a fancied up Chromium.
By this logic requiring web devs to write ES3 code should be acceptable too.
The platform changed significantly, improved significantly, in the past few years; some of these major advancements can't be ignored just because one browser doesn't implement them.
Nope, you can compile code back to whatever ECMAScript version you like with tools like Babel or TypeScript. So the devs don’t even notice that they’re compiling to 4-5 year old ES.
That's actually not true in general. Tell me how to compile Proxy and regex lookarounds to ES5 or whatever version doesn't support these features. In fact, tell me how to polyfill these features in any way at all.
There's a proxy polyfill. And if you really wanted there's at least one pcre2 wasm build you could wrap to make a polyfill, lol. Barely anyone uses lookahead/lookbehind even in pcre, though.
Proxy polyfill: assuming you are referring to this [0], since I haven't seen anything else like this, then I'll paste here what the readme says:
> The polyfill supports just a limited number of proxy 'traps'. It also works by calling seal on the object passed to Proxy. This means that the properties you want to proxy must be known at creation time.
i.e. that's not a polyfill for Proxy. It's a polyfill for a subset of the thing, maybe that's useful for somebody, but it's useless for the use cases I had for Proxy so far.
Shipping an entire regex engine with your app: right, that's the only way to do something like that. Not that that's actually the same thing though, I can't just load this and use lookarounds as normal, i.e. it's not a polyfill.
For all practical purposes these features are not polyfillable. If your idea of a polyfill includes not actually polyfilling the entire thing or shipping an entire engine with your app then sure, anything is polyfillable, you could even run Java in the browser.
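(For anyone wondering what makes these two cases special, here's a minimal illustration - not an exhaustive proof, and the patterns are just examples.)

```js
// Two illustrative examples of why these features resist transpilation.

// 1. Regex lookbehind is new *syntax*: an engine without it rejects the pattern
//    at parse time, and there is no older construct Babel can rewrite it into.
const amount = /(?<=\$)\d+/;                // SyntaxError on engines without lookbehind
console.log("price: $42".match(amount)[0]); // "42" where supported

// 2. Proxy can intercept property keys that are unknown at creation time,
//    which is exactly the part a seal-based ES5 polyfill cannot reproduce.
const lenient = new Proxy({}, {
  get(target, prop) {
    return prop in target ? target[prop] : `no such key: ${String(prop)}`;
  },
});
console.log(lenient.anything);              // "no such key: anything"
```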
Electron does not weigh 300+ MB for starters, and one means basically statically linking your app so that it ships with its own dependencies and works more consistently, while the other means shipping a language because the one you have to use is not implemented properly by the platform - pretty different things when you look at them. I'll give you though that in both cases you are shipping a big engine with your app, so it kinda sounds like the same thing.
For language issues I assume you'd use Babel, which in my opinion is not a big deal if you’re already making an app.
Render-wise, browsers are pretty uniform these days. I experience very few problems in this regard, and my app Pony runs out of the same web codebase on all platforms (iOS, Android, web). The worst offender is Safari, but it’s not that bad. The potential gains from something like Tauri (and I plan to try Tauri for Pony desktop) far exceed the compatibility concerns for me (which I’ve already had to address due to web).
That's a big "just", but yes in theory that's doable, however:
- The performance you are going to get will be terrible compared to the native implementation.
- Oniguruma, which I think is Ruby's engine, weighs half a megabyte on its own; that's comparable to the core bundle of the app I'm working on. That's a lot in my eyes.
- If you need to use this engine in dependencies that just assume that lookarounds are available then you are going to need to fork every single dependency just to wire it with the regex engine you compiled, super messy.
And now we have display: flow-root instead of the clearfix hack.
However the only reason I ever used the clearfix hack was when using float to do layout. I haven’t done that for ages (i.e. since CSS grew to include flexbox and grid). And I only ever use float to—well—float images inside text where the inside display is always inline. So I haven’t ever seen a need for display flow-root in the wild... Although I’m sure it exists.
If I'm building an electron app, it's because I need native code running in nodejs/V8 as native addons, often in the render process, or I'm using bleeding edge APIs not available on Safari.
I really don’t care if my hello world UI is 60MB to download, I care that it consumes 1 GB of my precious ram to run.
How is running js with a rust backend any better than running js with a C++ backend?
I guess your “backend” is rust here, which is nice (because I <3 rust), but tell me this won’t sit there guzzling all the memory it can get its hands on for the UI?
I’d encourage you to rethink whether the “all” you describe is desirable. I couldn’t care less whether an app is consistent across platforms - in fact I consider that a strict negative since apps should be consistent with the platform on which they are running not with themselves on other platforms.
I don't think that "consistent" always necessarily refers to pixel-perfect equality. When using this term in web dev, most would refer to browser standards as in spacing, layout, JS apis, availability of native elements etc. In this context, consistency is rather the opposite of negative. I don't think anyone really cares about how date picker looks on Firefox vs. on Chrome.
Maybe you are referring to Gtk/Qt vs. win32 etc. with their integrations in the operating system. I agree that they look and feel the best for their respective OS, but they eventually require you to maintain multiple unrelated codebases. It is understandable and necessary for this approach to die. A middle ground would be best, say, Qt support on all existing platforms. Or more likely, prettier browser defaults for standard elements like lists and tables, with browsers themselves respecting the underlying OS theme. Probably never gonna happen...
I'm not that interested in using lower common denominator software on any platform - for my purposes I am only interested (and willing to pay for) software written to integrate with macOS in a first party manner. That means I want all of the accessibility controls to work, all of the system integration to work and so forth. I'd expect anyone serious about software to want the same on whatever platform they use. Lowest common denominator crap is the thing that needs to die.
I can remember using 56k dialup and downloading ISOs around 650MB.
A 60MB download would have been fine.
Indeed, with the annoying habit of proprietary software to install "Download managers" that download the actual software, I would be happy for just a 60MB runtime for the actual program itself.
I'm not sure you really remember. With 56k dialup, downloading 650MB could easily take days. Downloading 1MB could have taken 3 to 4 minutes.
60MB, even today, is really big when you have to download it while boarding a train or with poor connectivity (even in rich countries, just being in a metallic building is enough for 60MB to be painful to download).
I remember leaving the computer on to download 650MB overnight, which used to be an ISO for an operating system. It was usually finished sometime the next day.
And 60MB would have been a couple of hours, not bad to wait for some software.
It wasn't prohibitive to do this.
Indeed, I seem to be back at square one, as downloading a AAA game for me once again takes about a day.
Add a little line noise for the average 30-70y/o house wiring and you get 43 hours for 33.6 or 50 hours for 28.8. It took running a brand new line in order for me to get 56k. So, the parent comment is accurate.
It was absolutely prohibitive for the average person with a shared line, and the direct comparison to shared line today is a metered connection. Even in the US it can be as much as $30 per gigabyte which would make that 60MB download cost $1.80.
Arguably both Slack and Spotify (and Visual Studio Code?) have reached significant scale with Electron apps - so at the end of the day it seems to be less of an issue for end consumers.
> I really don’t care if my hello world UI is 60MB to download,
If you're writing Hello World for fun, then sure. But _I_ won't be using any of your software if it's that bloated. I'm not going to even complain to you about my internet connection or hard drive space or personal preferences. If you are not going to respect my resources as a dev, I will not use your software. Just like I won't ride with a cabbie who curses and cuts off other drivers.
Games with hundreds of GB are using high poly models and high res textures. The actual code segment of the game is much much less. This isn’t really the same.
Games at that size are not clocking in because of their models and textures. It's from use of lossless, uncompressed audio files, and shipping multiple language files.
The issue is most common on games that release on consoles, but also for ones that want to support older hardware: the audio formats they use are designed to minimize the amount of decompression necessary for play, and so reduce resource requirements in exchange for storage space.
I’ve not yet seen a sound sample above a few hundred MB uncompressed. However, it’s quite common for a source model, or even a cooked-down version, to be above 1GB. Sound, except soundtracks (mostly), is also reused, whereas models usually are not. Couple that with one of your reasons, supporting all possible resolutions (PC games are the big offender here), and I don’t see sound being a huge issue. You can also compress sound while in transit; textures will only compress so much. WAVs/PCM compress quite well.
I don't play video games, so I have no opinion of them. I actually do play Kerbal Space Program, though I think that it is a few hundred megabytes, certainly not even a single gigabyte.
One can hope that the code of the engine will be shared across apps in memory, thanks to shared libraries, and that the engine itself will use shared things too, by virtue of being distributed with the OS.
... if you happen to run other browsers and apps using this WebKit / libwebkitgtk / webview.
and that embedded apps won't be obese frontend code with a bazillion npm dependencies and a heavy, all too widespread framework, of course.
Hopefully, developers turning to Tauri will have some sensibility to lightness.
I’ve always felt that engine inconsistencies in electron-alikes are a bit overblown, particularly now with the old IE-based webview out of the picture (modern Windows uses a Blink-based webview). They’re all capable of the basics, and the greatest inconsistencies will be found in newer features. Stick to a slightly older well supported set of features and you’ll be fine.
There are some significant differences between engines when it comes to things like local file access, but that’s what the native component of electron/tauri/etc is there for.
I'm not sure why downvoting is needed. What's said is true: on the web we already have to deal with different browsers and versions. It should be a fact of life for web developers. And things today are still infinitely better than back in the IE 6 days: many modern features work on all major targets. Why do people insist so much on engine consistency for "desktop web apps"?
Yeah, and a large fraction of the dev appeal of Electron is that a concern that has haunted web development for a quarter of a century is completely out of the picture when you do Electron. You're not even concerned with possible differences between past, present and future versions of the engine, because your code runs with exactly the version you bundled.
Sure, if your app is just a faux-native variant of something that is developed for the browser anyways nothing is lost, but if it's an offline first application that just happens to use web tech for UI the difference is huge.
Most cross-compatibility issues have been resolved in browsers. Firefox and Chrome are pretty much at parity, Edge is Chrome, and Safari is the exception.
Web views are less standardized though and require more finesse.
In my experience it’s Chrome that does weird things. I write for Safari and it works great in FF. Chrome and Edge will require some tweaking of CSS to make it look correct.
Chrome is the new IE6, and guarantees cross-platform compatibility everywhere it uses its own renderer though. (Basically !iOS.) It's a strong reason for choosing Electron for your "native" app.
Nice IE reference. Helping Google spread Chrome is exactly one reason not to package Electron everywhere, besides the bloat of having multiple copies.
Everyone is basically using Chrome or Safari nowadays - so WebKit. Very little incompatibilities to consider compared to writing something that works on different WebView implementations
Luckily, that is not the case. In Germany, FF still has 20% on Desktop and 10% total (and I wonder if those numbers are maybe too low, as FF blocks those trackers by default).
I don't understand why this relative metric matters. Past the size of a small city it just shouldn't matter how many more users the competition have. There are more firefox users today (~200M) than there were total internet users in 1998 (~150M), and surely you would agree that it does not make sense to discard that, any more than it makes sense to discard, say, UK or France from diplomatic relationships because they have ~70M inhabitants while India and China account for almost 3 billion.
Just because only 17M people live in the Netherlands does not mean it's OK to block them. Just because people in wheelchairs don't walk does not mean we don't need to build streets such that they can get around too. Just because some people use the Greek script does not mean ASCII is enough.
It's called accessibility, and it's a very good thing.
WebKit and Blink have many significant differences, especially when it comes to supporting newer features. In terms of incompatibilities they're about as different as Gecko is from Blink, if not more so.
As someone who has been playing with some web dev in early 2000s, that sounds funny. Gecko, WebKit and Blink seem very consistent to me. I remember dealing incompatibilities between IE6 and "standards-compliant" browsers (mainly Firefox/Mozilla Suite back then). And don't get me started on dealing with "classic" Netscape!
Not sure how familiar you are with the history of browser engines, but Blink and WebKit are definitely more similar than Gecko and Blink.
Gecko comes from the heritage of Netscape. Blink comes from the chain of KHTML > WebKit WebCore. The two lines don't share a common ancestor, while Blink probably wouldn't have existed without WebKit.
But yes, today there are differences between WebKit and Blink.
"In terms of incompatibilities they're about as different as Gecko is from Blink, if not more so."
> Not sure how familiar you are with the history of browser engines, but Blink and WebKit are definitely more similar than Gecko and Blink.
I see we interpreted the parent poster differently: he wasn't talking about the history of rendering engines, but rather compatibility with the various web specs. Since Chrome forked WebKit quite a long time ago now and Blink has been reworked quite a bit, they have deviated from each other's implementations. Since everyone is trying to follow an open spec, it might be that in certain areas FF and Chrome implement more of the same APIs than Safari, which doesn't have anything to do with codebase history.
You might and probably will feel more comfortable in the WebKit codebase if you're a Blink developer, or vice versa, in comparison to Gecko - but that applies if you're writing the engine itself, not pressing the pedals (targeting it).
WebKit and WebKitGTK are WebKit-based currently; WebView2 was always Chromium-based afaict, I think the old Trident-based Edge had a separate control.
Arguably that's like observing that the same feature works differently in 2+ browsers and blaming web devs for it. It's the job of the browser people to make correct, or at least consistent, platforms.
While part of being a web dev unfortunately is also dealing with this you can't necessarily blame web devs when Safari breaks IndexedDB for the nth time, or when after using ES2018 features like regex lookarounds you discover that Safari still hasn't implemented them (which year are we in now, 2022?).
You may also discover that mobile and desktop browsers work differently in some aspects. Not everybody even has the resources to test every single thing in 3+ desktop browsers (which may require at least 1 VM already if you are not developing on macOS, which I think might even be illegal to run on non-Mac hardware, bizarrely) and 3+ mobile browsers (which may require at least one physical device or a macOS VM for the iPhone, and another physical device or another VM for Android, and all these VMs aren't exactly lightweight).
Besides an Electron app doesn't necessarily have to run anywhere else, there's no point in checking compatibility with Gecko when your app never has to run there.
Arguably that's like observing that the same feature works differently in 2+ POSIX OSes and blaming devs for it. It's the job of the OS people to make correct, or at least consistent, platforms.
Does the Chrome engine, embedded within Electron, contain any Chrome features which one could legitimately call into question (such as sync)?
If not, what's wrong with it "spread[ing]" everywhere?
It's hardly lazy dev work. It's a pragmatic approach to developing cross platform apps quickly. Apps that, it might be added, don't have a horrendous UI, which is common with other frameworks.
> If not, what's wrong with it "spread[ing]" everywhere?
Because then it's a monopoly. I'm taking shortcuts but having a single browser engine controlled by a single company means that you rely on that company to define what is tomorrow's web like.
Electron feels to me a lot like stand-alone flash apps used to feel.
I get that same feeling when I open a program using it; kind of an 'oh...' slight disappointment. I get that it makes sense sometimes for a developer to sacrifice performance and size for ease of development... but as a user, it feels like a loss, a sign the developer will be taking too many shortcuts, or that I just won't like their general design philosophy.
At this point why not focus on some actual GUI toolkit, like writing a Qt clone in Rust? For real-world apps we don't need all the CSS and HTML crap; you need simple layout, GUI components and an optional WebView you can embed in the app if needed. Probably there is no commercial interest to pay professional developers with real experience to implement this.
HTML and CSS, while not perfect, are still the best tools I've found to create a pleasant UI. Other libraries like Qt do work, but it's more difficult to get things looking exactly how you want them to look.
It is definitely slightly subjective, but come on. God damn WinForms was more productive than the HTML+CSS hoops you have to jump through. In sane frameworks you have proper layouting (not some third-party CSS library with a ton of boilerplate to do something like vbox/hbox) and easy customizability through inheritance (e.g. try to create a datepicker in HTML).
Hell even Swing had good layouts decades before CSS caught up with things.
WPF has amazing layouts.
CSS is irritating as hell. Even modern CSS, justify-content, justify-items, seriously?
Flex has so many weird edge cases it is overwhelming. I have used flex-type systems in other frameworks that worked 10x better; while Flexbox is way better than what existed before on the web, it is still an endless source of frustration.
And yes, WinForms is 100x more productive than HTML. A while back I designed a fully functioning UI in WinForms to get an idea what I wanted my website to look like. I hadn't used WinForms in years. Took me less than a day to get a UI up and running with data binding to my backend and all the features implemented.
TWO MONTHS of web dev later I had the same thing working in a browser.
Now the browser was styled, and responsive, sure. But 2 months vs 6 hours. The loss of productivity there is insane.
CSS has too much old crap. How many ways can you center a thing? How many ways can you horizontally align some stuff? There should be only one way, and that way should not involve hacks like negative margins.
What Qt or other similar frameworks give you is consistency: all components in all apps will look and work the same (with the exception of customized ones). This means you can focus on UX and not on bad design. My experience is that some bad designer will force his opinion on the users, will force his preferred fonts, font sizes and font colors on the users (making text hard to read), will disable text selection for some weird reason, will fuck with the scrollbars because the native ones are ugly, etc.
For a working app that is not a music player you don't need the "power" of CSS. Say you've seen video games with a small launcher/config window that has buttons, drop-downs, checkboxes - I noticed those use Qt or Windows Forms. I am also noticing that modding tools and open source tools (that are not GNOME) also focus on functionality, and you will not see buttons with round corners, fancy fonts and animated borders.
IMO Electron's advantage is not the "powerful" CSS and HTML; it is that you can reuse the existing Node ecosystem and existing web developers, and the alternatives are also lacking for higher-level languages (GTK is shit IMO).
>There is: flexbox and grid. All kinds of alignment can be done with basically a one-liner in both layout systems, without any hacks.
You missed the part with "there should be ONLY 1 way". I know about flex and grid; these are new, and thank the gods we finally have something decent (not good).
Flexbox is great; I wish I could magically remove or magically fix all the code that does not use it and instead uses "float" or other shit.
You would say "don't use the other old shit", but my point was that we need a GUI framework that does not have 1 million lines of code for supporting this old stuff. If we really want to use HTML, a language for documents, to write GUIs, then we should make a new version that is the modern subset of HTML and CSS, where you ONLY have one way to do a thing (remove or limit the use of float, don't support all box models, simplify the layout rules so I don't have to google and find that to make something work I have to set min-width: 0 so the CSS engine follows a different path and does the correct thing).
I'm curious if you've found any GUI frameworks for any OS, in any language, that have ripped out all their legacy code and/or the design tradeoffs and hacks their legacy code required.
>I'm curious if you've found any GUI frameworks for any OS, in any language, that have ripped out all their legacy code and/or the design tradeoffs and hacks their legacy code required.
Yes: Qt 4 is not compatible with Qt 3, and Adobe Flex 4 was not compatible with Flex 3. At the time I only used the new versions and did not have to work with or learn the old stuff. Old projects continued to use the old stuff and continued to work.
This is obviously a different camps kind of thing, but I don't think you're supposed to make everything look exactly pixel perfect to how you want it, you're supposed to follow the look and feel of the OS with native controls and such.
I understand both, I don't agree with either more than the other.
HTML and CSS are probably the best tools for styling, layout and accessibility. Compared to other UI platforms though, their built-in controls are missing, limited, and inconsistent between browsers. See https://open-ui.org/ for an example of what controls could be improved and added.
I don't think "basically a Qt clone" is a fair framing, in both directions. (sixtyfps aims to cover things Qt doesn't, Qt does things sixtyfps doesn't and won't for a long time if at all)
I have written a full OS like GUI in JS proving that it can be tiny and memory efficient. The GUI part of the application is about 2k loc across two JS files plus CSS and includes full file system display and navigation. It’s all vanilla JS and static DOM methods so it’s as fast and memory efficient as the browser allows.
It’s amazing how fast the browser can be for this (and tiny) when you aren’t using querySelectors, vDOM, or event listeners.
Content is dynamically generated in response to user interactions. For example, when a user clicks a button to navigate to a parent directory, the new directory contents are fetched and populated into DOM artifacts using a function which recognizes the returned object as a microservice instance.
Instead of querySelectors I use things like getElementById, getElementsByClassName, getElementsByTagName, and some custom DOM navigation utilities I wrote for this application like getAncestor, getModalsByModalType, getNodesByNodeType, and so forth.
You don't need any kind of framework or fancy wiring to work with events. You always know what you are working with from within the event handler, because event handlers receive an implicit event object as their first argument. From that there is event.target, which returns the element the user interacted with that fired the event, and event.currentTarget, which returns the element the handler is assigned to (after bubbling).
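(A minimal sketch of that style - not the poster's actual code - with a static DOM lookup, a directly assigned handler, and the target/currentTarget distinction under bubbling. The "fileList" id and data-path attribute are assumed markup.)

```js
// Assumes <ul id="fileList"> with <li data-path="..."> children (hypothetical markup).
const fileList = document.getElementById("fileList");

fileList.onclick = function (event) {
  // event.currentTarget: the element this handler is assigned to (the list itself).
  // event.target: the element the user actually clicked, e.g. an <li> inside it.
  const item = event.target.closest("li");
  if (item === null) return; // click landed outside any list item
  console.log("open directory:", item.dataset.path, "via", event.currentTarget.id);
};
```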
Tauri allows you to build apps that are pure Rust - the JS is optional, and it has multiple configurable integration models called 'patterns' that allow defining and constraining the engines included, the interfaces between them, and the tradeoff between binary size and distribution simplicity/robustness.
I used this a bit, it was really great. Writing a Rust backend & exposing it to TypeScript was really slick!
There was one issue I ran into that made me think about jumping to Electron mid project, but I can't remember what it was now, but I think it was something like making my app bleed the entire MacOS window while still being moveable.
The other downside is you're going to be tempted to go down the rabbit hole and do everything in Rust. [1]
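(For anyone curious what "exposing Rust to TypeScript" looks like from the frontend side, here's a minimal sketch using Tauri v1's invoke. The `greet` command is hypothetical: it would be defined in Rust with #[tauri::command] and registered in the invoke handler.)

```js
import { invoke } from "@tauri-apps/api/tauri";

async function greetUser(name) {
  // Arguments are passed as a plain object and serialized across the IPC boundary.
  const reply = await invoke("greet", { name });
  console.log(reply); // whatever the hypothetical Rust `greet` command returns
}

greetUser("world");
```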
Thanks, I came to this thread to find personal anecdotes about using it but of course it's just hacker news hackernewsing about electron and what have you. I think I'm going to start my project with Tauri+Svelte and see how that feels.
At https://www.waiterio.com we use plain webviews for Android, iOS and macOS without any framework and Electron for Windows and Linux.
The problem with frameworks is that once a year Apple makes a change to their signing process and it can break the framework for several weeks/months before a fix/workaround is found.
By using native webviews you can quickly implement the change needed and get back online in days.
Projects like Tauri use libraries like Wry[1], though. Getting Wry fixed to use the new API shouldn't be any more difficult than the work you need to do, right? And that work is then shared by everyone using the project.
Vanilla webviews is definitely possible on Windows with Webview2 https://docs.microsoft.com/en-us/microsoft-edge/webview2/
The problem though was compatibility since that webview might be based on old versions of Edge which used to be a much more outdated browser than Safari/Chrome.
So far in 5 years of using Electron Windows never broke Electron except on a minor instance.
Our Windows Electron app doesn't work on Windows 10 S. The problem might have become fixable by now, but Windows 10 S had so little market share that we never found time to fix that issue.
The App Store reviewers will allow webview-only apps only if you are offering some feature that would not be possible to implement in the browser alone.
How are you measuring this? Slack on my Mac is 325 MiB on disk (probably because it is a universal binary). A freshly-started Slack uses 461 MiB RAM for all its processes (usually getting worse when it has been running for a while).
I'm on Windows, this is what Task Manager is showing. IIRC that means this is the physical memory being used.
Some time later, some of these numbers have changed a bunch. I'm on a desktop so it's always on. Element has climbed up to 800 MB of usage. Signal's dropped to 44 MB. I closed Slack after posting that comment. Discord's about the same. I now have VS Code open; it's taking up 626 MB.
I haven't, I've actually been curious about this app though. I wasn't sure how easy it would be to use it while retaining all ownership of my data. I'm not interested in third parties hanging onto that, nor in software subscriptions (their "Sync" offering). I'd be happy to pay for the Catalyst tier if it's something I end up using, I'll check it out and see how much effort it is to integrate with Nextcloud. I use Joplin now, it's OK, hard to complain for the price but not my favorite.
You retain total ownership of your data with Obsidian. It's basically a tool to view a directory of markdown files and their connections between one another. I just track mine via git and push it to my remote whenever I want to back things up.
Obsidian is pretty sluggish on my computer, especially when scrolling even short documents. But that's probably possible to improve in the future and no fault of Electron. As a comparison, VS Code is super snappy even on very large documents.
Huh, that's super interesting. Well, VSCode, for me, at least, uses way more memory than Obsidian. I believe you that it's faster, I'm just very confused (and curious) as to why...
I chose Electron for the app I was building because it would make sure the app I create (1 code base with Angular) would perform exactly the same way on Mac, Windows, and Linux. It required virtually no learning of anything new (just look up some Electron API like "minimize window") and I was done.
As a single developer, I was able to get an app out in a few months and have been improving it for 4 years now. I love it (enough to create a Renamer app too: https://yboris.dev/renamer/ ).
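(To illustrate how thin that Electron layer can be, here's a minimal main-process sketch of the kind of API lookup described. BrowserWindow.minimize() is the documented call, and "index.html" stands in for whatever your Angular build outputs.)

```js
const { app, BrowserWindow } = require("electron");

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 1024, height: 768 });
  win.loadFile("index.html"); // placeholder for the Angular build output

  // e.g. the "minimize window" API the comment mentions:
  win.once("ready-to-show", () => win.minimize());
});
```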
> I have a hard time understanding why platforms like electron are so popular.
For example: I'm writing a FOSS app mainly for myself, but I am publishing it for everyone, of course. I want to support browsers, but also "Apps" in OSs. I'm on Linux, macOS, Windows and iOS every day.
I do not, by a large margin, have the time to write my application in 3 or 4 different native UI toolkits. Furthermore, my application has a lot of text editing and rendering functionality, which I'd have to then reinvent in various UI toolkits, unless I used something that crossed all the OSs above perfectly. Finally, my app has WASM plugins, similar to Obsidian.md, to allow the user to easily extend the application.
All together I will not, by a large margin, use anything native. I barely have enough time _(don't, honestly lol)_ to write the app once - let alone supporting all the above platforms.
I'm targeting the web.
As an aside, I'm writing this in 100% Rust lol. No JS, because I prefer Rust.
Aside from the improvements to project timelines, Electron and similar products are popular because they improve the developer experience, and in 2022 the developer experience is what matters the most to many software companies (yes, in many cases ahead of the customer experience).
DX directly drives engagement and retention, and good developers are hard to find (and keep!).
And like it or not, there are more and more developers entering the industry with web-only training, so we have products that reflect the stack and tooling those developers are most proficient with.
It's not just the developer experience, in fact that is probably lower on the priority than you would think. The entire business IME struggles to reason about a product, in the details, that is logically 1 product but physically N products by virtue of all the different target platforms and codebases required.
JS DX always seemed really quite bad whenever I looked. A constantly shifting landscape of frameworks and packages where nothing will stand for long before being eroded away.
> Let's just imagine for a while what UI would look like if we only had Swing (Java) or QT (C++)
I can easily imagine that, because that's more or less what we had before Electron. And it was much better, because apps (mostly) looked consistent across the OS, rather than each and every one coming up with its own custom UI theme.
At least for Qt, the answer's quite simple: you'd have KDE and the associated apps.
As for the stability of the front-end ecosystem, it doesn't exist. The many articles that were posted on HN over the years complaining about the endless quagmire of front-end frameworks, libraries and technologies explain this better than I could.
If you like the pricing and licensing, functionally Qt may be for you. The web frameworks mentioned, on the other hand, are free to use for everybody and anything.
Qt is LGPL. You know what else requires you to respect the LGPL? Electron, because it is built mainly on LGPL components (Blink and FFmpeg, to name the biggest ones). If you are OK with the license of Electron, you are OK with the license of Qt.
The MIT license only covers what Electron adds to Blink (the rendering engine, which is under LGPL). It does not replace all the licenses of the various components used as part of Electron for which you still have to comply.
That is the exact same license as Qt; you do not need to expose your IP when using it (Tesla used LGPL Qt for their car dashboards, and you don't see that IP floating around on the internet).
> Electron and similar products are popular because they improve the developer experience
> more and more developers entering the industry with web-only training
Aren't these very different things? Is Electron popular because web skills are so common, or because the dev experience is better? I'm sceptical of the latter, as I find JS very dependent on frameworks/ecosystems for compatibility.
Sorry if I implied those were tightly coupled. I do think they are related but more loosely than perhaps I meant, and not exclusively.
Electron improves DX for all the usually stated reasons (build once for many platforms, etc) but I do think there is a connection to the fact that a lot of developers out there are learning web tooling, and if a company wants to put out a desktop app in 2022, it's an easier (and therefore better for DX) path to use something like Electron or Tauri - where devs can use the skills they already have - than try to either upskill or hire a team that can build native apps on all your desired platforms.
We built our product for the web. People wanted our product as a desktop app. So we wrapped it in Electron and now we have 3 desktop apps. People wanted our product as a mobile app. So we wrapped it in Capacitor and now we have two mobile apps.
> So efficient for you, but 1000x more resources everywhere it runs.
And yet, to the people who do use their app on the desktop, this is obviously a preferable situation to not using the app – which would probably be the case had the developers decided against Electron.
And yet, we have single indie developers writing multiplatform Qt [1] and macOS native [2] Slack apps. Last time I checked, Ripcord used 30-40MiB RAM, a small fraction of the official Slack client.
Writing native apps (or a Qt app) would be well within reach for Slack. In fact, they already have native apps. E.g. Slack for iOS/iPadOS is native [3]; just allowing M1 Mac users to use the iPadOS version would most likely be a large net improvement in resource use for those who'd choose to install the iPadOS version. Unfortunately, they disallow iPadOS installs on macOS to force people to use the terrible Electron version.
Exactly this is what I don’t get. Huge corporations push so hard for cross-platform development, while Facebook, Google, Microsoft et alia could just spawn as many developers for a new platform as they want. I know that management is trying to cut back on every single penny, but is financing React Native really that much cheaper than just writing that tiny amount of code for each platform that will call into that single native library/web API used everywhere? Especially when you have to have a native dev either way, because bugs will happen that need platform-specific knowledge as well.
Because I can honestly understand going cross-platform by a small startup that can’t finance n*numOfPlatforms devs, but for Slack and the like it makes zero sense to me.
It's amazing how bad the options for native UI frameworks are, even without considering cross-platform, more so once you do. Android is actually decent, with the caveat that it only works on Android... Windows had a decent WPF, but then they came up with 5 other frameworks that got rebranded, and now nobody knows what to use anymore. Is WinUI the latest and greatest there? Wouldn't be cross-platform anyway.
Perhaps Unity? But that's more tailored for 3d scenes, rather than UI widgets.
Genuinely looking for suggestions on what is the best choice to pick there.
Given that there's no other language that works cross-platform with as little jank as JS, I don't think that distaste is adequate reason to avoid using it
While I agree that maintaining one codebase is preferable, the 'teams' part is a bit off.
No one wants 'three teams building one product', which is why it almost always is far better to have one team with Android, iOS, web and desktop mixed. Obviously, if their mixed expertise is around one codebase, that is even better.
> It makes perfect business sense to use electron.
In many cases, it should also make perfect business sense to use PWAs. I've heard Adobe has brought a significant part of the Photoshop and Illustrator functionality into their web apps.
> I've heard Adobe has brought a significant part of the Photoshop and Illustrator functionality into their web apps.
This is true. We managed to wrap a very large portion of the desktop code base into a “portable” library (with some customization at the point the library hits the OS, e.g., file IO.) This library is compiled specifically for the OS it’s going to run on (iOS, Web) to give us the best performance we can muster.
The UI for each implementation is bespoke. This lets us build native interactions on the platforms we ship to, giving the end user the best look-and-feel for the platform they’re on. The flip side to this is development cost and time. In the long run, we believe it is worth it.
Rewriting Photoshop in Electron would have been infeasible and the performance hit a non-starter. The path we’ve taken has some trade offs to it, but is the right one given the legacy tech we have and what we want to do with it.
There's a terrible performance regression in Illustrator 26 when working with files with many objects. Works fine in 25, UI freezes for several seconds in 26.
Windows 10, pulling in part of an engineering drawing from a PDF (so many thousands of objects).
I get that it is basically an impossible problem to triage incoming issues on such large and widely used software, but my impression of the popular "community" approach is that it is useless and designed to provide an outlet for complaints rather than to identify issues.
On the Photoshop team we have dedicated staff to monitor, triage, and respond to these issues. You’re right in that we cannot address _everything_ that comes in from these boards- the sheer volume is too much. Nevertheless we do use it to identify top customer issues and our dev schedule prioritizations are influenced by them.
To wit: I’m known on the team for sussing out information from corrupted PSDs, and get called on about once a quarter to look into a bad file that’s come in. That wouldn’t happen as frequently without the community site.
(1) Branding. Businesses want their app to be thoroughly branded, so they'd rather have a canvas where they can invent their own buttons than use something cross-platform native like Qt.
(2) Hiring. Existing developer base trained on webtech. Qt is a C++ thing - there's a far greater hiring pool for webdevs than C++ devs.
(3) Ignorance. Some developers don't really know that Qt exists, or don't want to go through the trouble of learning it.
None of these reasons benefit the user - but Electron isn't chosen by those who want to benefit the user in the first place.
Web development tools have become fantastic UI debugging tools. You can inspect live running UI and tweak it in real time without a rebuild.
CSS is very powerful, and it's relatively easy to build complex layouts with animations. People joke how it's impossible to center things, but CSS has matured beyond that (IE is dead).
I must admit, I haven't been following Windows toolkits since MFC. I've only heard about WPF in the context of dead Longhorn features. Isn't it deprecated in favor of WinUI or whatever replaced Metro?
Officially, it isn't (e.g. it got ported to .NET Core). De facto, it is, but it still works just fine, as do WinForms, and all the tooling support is still there.
That said, I've mentioned WPF mostly because that's what I'm personally familiar with. The same inspector-type tooling is available for newer XAML-based frameworks, as well:
It is not hard for me to understand that even though I am not a client-side engineer.
Web tech (HTML, JS & CSS) is widely understood, with millions of tutorials for reference. On top of that, you get a cross-OS build that looks the same everywhere.
If all you are paying is some extra CPU and RAM, then that is a great tradeoff.
"Probably"? On the contrary, the incredible complexity of the JS ecosystem makes it likely easier to write in Qt.
I've had direct experience with this myself - with no prior Qt or webdev experience (although knowledge of how JS the language works), it took me only around an hour to figure out how to write a Qt application - but after 5 hours (and counting) of struggling with Angular, I wasn't able to figure out how to use it.
I wasn't even talking about the penalty of writing a Qt application in C++.
That's a different penalty. Qt doesn't really compare with what can be done with a good UX/UI dev on the team, and in much less time. And there are far more front-end devs than Qt experts.
> Are you telling me that an Electron developer will be able to implement a system significantly faster than an equally-experienced Qt developer?
Yes. Hands down.
I'm actually saying two things: there are far fewer Qt developers, and the learning curve is much steeper. This has huge impact on maintaining and improving the code. We started with Qt and abandoned it because it is way easier to bring someone new in and get them started than having someone climb the curve to learn Qt and C++. Plus spinning up new features in Qt is laughably slow compared to how quickly a frontend dev can do the same in Electron: the former takes days, the latter is practically interactive.
It was such a clear choice to abandon Qt.
I was primarily worried Electron wouldn't last long, but we wrote our first app with it 6 years ago and it has remained completely stable. The biggest dev hits have been in Node peripheral support as they get better, like BLE and serial port interfaces.
> I have a hard time understanding why platforms like electron are so popular.
To what extent is Electron's popularity due to getting cross-platform availability without writing your code 3x?
The advantages of a native app have to be worth the costs... and I'm seeing plenty of Electron apps so there's a lot of people making that cost-benefit tradeoff.
I share the dislike for bloated apps... but every time my stuff takes 10 seconds to start up I think more about keeping it running the whole day than submitting a PR to remove cruft.
Probably never. The web is becoming, if it hasn't already become, the universal platform for application development and distribution, and javascript the One True programming language. Every holdout will eventually and inevitably be assimilated, either transpiled into javascript or compiled into WebAssembly.
And it's partly the fault of the native programming community. It should be as easy to write a native, cross-platform application as it is to write html, css and javascript. Native developers should recognize what works about the web paradigm and adapt to it. There should be forks of these technologies specifically designed for native application development rather than documents. But that never happened, GUI development is still basically programming and it still sucks and now the train has left the station. The only relevant innovation likely to happen now will be iterating on the web-app model.
I mean, I'm looking at the layout tutorial for Flutter now[0]. It's nesting function calls and you have to update a yaml file to include an image, whereas with HTML it's a simple table or maybe grid and the img tag. This example for Rust[1] is ridiculously verbose and noisy compared to the web stack. All GUI programming is. Meanwhile I can write a fully functioning website with nothing but a text editor. No need to install a language runtime, package manager or IDE, no need to learn a company specific workflow or follow a style guide. No need to memorize a new set of quirky verbs for a CLI.
I don't even like the web-app paradigm, but I can totally understand why it won.
With all due respect, I think the web’s layouting (or lack thereof) is the most complicated out of any framework I have ever used (including Qt, Swing, JavaFX, WinForms, WPF). I think the great majority of web devs would have trouble recreating something like Bulma (regarding layouting) even without responsiveness — I definitely would have quite a bit of trouble doing that.
So if anything, we are just accustomed to the web’s way and other (saner) approaches look stranger for some reason (even though with some insane complexity js frameworks come back and mimic the two decade old frameworks here and there)
I agree with that. Doing layouts with HTML and CSS is too open and flexible, which means you have to do everything yourself from scratch every single time.
Another thing you have to do from scratch is the eye-candy aspect of every webapp. I guess the web is more flexible in the sense that you can do anything you want UI-wise, but honestly, how much time, work, and money is spent on making pretty webapps? How much more productive would our society be if the useful tools were just tools and not masterpieces of design?
And my biggest gripe is that the product you would get out of the aforementioned frameworks without any design would look... OK, usable. But even if you just want a website that is not just a bunch of text (though even that requires some CSS to make it readable, smh), you have to get quite invested in CSS, or at least bring in a CSS library that provides some basic design. Otherwise it will look like the remains of some website from the ARPANET era.
I've done pure markup + classless CSS a few times. It's nice, readable, and really the way I think the web should look. But then you realize you don't get paid for doing good things on the web. I couldn't handle this feeling and just quit front-end work entirely, except for pages I own.
Can you explain more about the part where native apps are hard to make? I mean, I can open Xcode, start a new project, hit run, and there's an app ready to go, with a simulator and everything. It has APIs that allow safe access to OS-level resources. It has powerful multi-threading, built-in views that I can reuse (like collections), stack-based navigation is native, etc. Also, the client is not broken by default (i.e. client apps in the browser are, at a technical level, spyware by default).
All the weird tooling that exists around actually getting HTML + JS + CSS to work like an app is an indication to the contrary. A constantly evolving ecosystem is an indication of still reaching for stability, not of having achieved it.
I will say though that targeting different platforms is an issue, which is not the same thing as your claim. Yes, targeting all platforms is hard, but I don't think any one native app ecosystem is really any harder than web.
I do think there's a lot of survivor bias from people whose job it is to build apps with web technologies.
And to your point that "it won": I don't think anyone, even lay-people, likes Electron-based apps. It's just what they have to use because there is an infinite supply of web developers. However, I feel it's a bit premature to call something ubiquitous that everyone hates "winning".
Also, the most-used software, measured by time on task, would probably be Excel, email clients, and web browsers. All of these are built natively. So, with the exception of Slack, and taking a broad view of the road ahead, most of the stuff built on Electron is sort of vapor in the grand scheme of things.
I think we can actually expect to see more and more native apps, due to power consumption requirements. I stopped using Chrome years ago due to its aggressive power/memory consumption and never actually really looked back.
Anyway, sorry for the unnecessarily long response, but seriously, what you're saying is very hyperbolic. We are at the very beginning of the beginning in terms of what technology is going to look like. Things will be very different in 5 years and super different in 10-20 years.
With Electron I can actually just build the app for the other platforms and then make platform-specific changes (if required) for Mac. For native apps, I literally cannot even begin to code anything unless I have access to the hardware.
This is not true; most languages are multiplatform, and there are dozens of multiplatform native GUI frameworks out there. Flutter is one of them, as are the others I cited.
From my experience, the struggle with making native apps is in manually implementing all of the services that the browser environment gives you out of the box:
image caching, websocket handling, etc.
Doing it yourself enables more fine-tuning of the end product, whereas in HTML you can put in an img tag and that little thing will fetch the image for you, store it somewhere, and render it according to some layout structure.
In mobile development you need to do more low-level work, so to speak, compared with the browser.
Well they're definitely very different technologies too, and the things that you mention, in that specific context, are what the web was actually designed to do well. Basically, the web is purpose built for digital publishing. This is really good for content, as it allows the publishers to update the content and leave the client relatively the same.
Your example of an <img> is a perfect one. Yes, if all the images are going to be loaded remotely, I would say this is a client/server messaging paradigm, and one that excels in digital publishing.
However, if you're going to build an app (which is what we're talking about), then you are probably going to want to store the app's images in one giant executable... which means all the caching and everything you want is going to be extremely fast, probably much faster than what the browser will do for the same task.
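To make that concrete, here is a minimal Rust sketch of the images-in-the-executable idea using std's include_bytes! (the asset path is a placeholder and has to exist at compile time):

```rust
// Bundle an image into the binary at compile time; at runtime "loading" it is
// just reading a static byte slice, with no fetch, no disk I/O, no cache layer.
// NOTE: "assets/logo.png" is a placeholder path relative to this source file.
static LOGO_PNG: &[u8] = include_bytes!("assets/logo.png");

fn main() {
    println!("logo: {} bytes, already resident in the binary image", LOGO_PNG.len());
}
```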
If you really want to understand what I mean, just ask yourself "what is the life-cycle of a web app", e.g. what is the "main" method? Are there clearly defined transitions between views? What is the execution/memory model?
When you answer those questions you realize that the browser was built to display pages of text and images, loaded by a server. From there it should be clear why people don't think there's any comparison when it comes to building apps, because one was designed for it and the other inherited that responsibility.
I'm not saying web development as a paradigm isn't popular, but I'm arguing against it being easier than native because it's not really. However, if you wanted to add like a digital publishing component to an app, I'd just embed it into a web view, which is the best technical choice.
Has the web won on mobile? No. In fact there was a time in which Cordova-like apps were getting very popular, but react-native and then Flutter, Kotlin and Swift have ended that. Or at least that's my impression. react-native is almost like the web still though, but the fact that people are doing react-native and not Cordova must mean something.
Meanwhile, 10 years ago the only ways to do desktop native apps were the old -- very good, but old, with that old feel -- technologies like GTK and so on. Now there are these other things I cited above, and more! It's a renaissance.
But isn't this the danger of evaluating something via simple example?
You can make fine websites with just a text editor (although I'd question how easy CSS is to use), but as soon as it needs to be an app you need JS code, and suddenly the difference isn't so great. In fact, I'd probably find a traditional programming language easier to use in that case.
I think the real advantage of JS is the robustness of its sandbox & permission system, that's what really needs reproducing.
For DomTerm (https://bothner.org) I've used both webview/webview and wry (on Linux only), and both are usable. The main problem with webview is that it is semi-abandoned (the last check-in was in March) and the feature set is pretty minimal. Wry is actively developed and has more features. However, the wry-based executable is quite a bit bigger than the webview-based one. And of course learning Rust is an issue.
Oops: For the record, DomTerm is https://domterm.org, while bothner.org is (I believe) some unrelated German family. (My personal/family website is bothner.com.)
This seems to be a good fit. I guess when you can reduce the api calls to bindings (accessing filesystem, network, etc.) cross platform compatibility should not be a problem.
One very important thing I'd like to highlight: using one shared browser instance rather than N is not going to make your apps that consume 1 GB+ of memory suddenly consume much less than that. The problem for those apps is the code they run: it's not the language, it's not the platform, it's the badly written code, and Tauri doesn't change that.
This is a great point. Last I measured, a basic Electron browser/webview was about 20-36 MB, which is not far from an about:blank Chrome tab. The 300 MB+ memory usage you see is mostly the JavaScript from the apps themselves.
Either way, I'm glad more people are in the space!
Back in the day people coded lightweight because it wouldn't run otherwise, and speed mattered in single core sub-GHz CPUs. We don't have that same constraint today, how do we get people to write more efficient code?
Operating systems should be pointing fingers at egregiously heavy apps.
It’s not perfect but the battery menu on macOS pointing out apps consuming a lot of energy has inspired a good amount of efficiency work for macOS ports of things because users see it and gripe at developers about it.
I would like to see that taken a step further: something like the system showing a notification banner to the effect of, “BadApp is consuming excessive amounts of energy. Quitting it will increase your battery life by approximately 3 hours and 15 minutes.” I believe quantifying the loss the user suffers as a result of the developer’s laziness would go a long way toward inspiring displeasure in users, who would then apply pressure on developers to fix it, and it opens up space for competitors who sell themselves on better efficiency.
Power optimization doesn’t improve memory footprint. In fact, it can do the opposite and increase memory allocation. Think tradeoffs between memory and processing, e.g. caching of intermediate results.
Sure, but that’s just a single facet of optimization, and I think it could be argued that just requiring developers to take a closer look at their usage could in many cases free up enough memory to cancel out intentional increases in memory usage, meaning memory usage is often nearly unchanged while other facets of performance are improved. It’s still an overall win.
I think this is a great point. The delta in performance must be visible for people to even care, and that's not obvious even for technical people (if something takes 15 ms or 0.15 ms to render, the difference is massive, but you probably can't even perceive it with your eyes). Energy usage is a proxy that anybody can understand, and care about on a portable machine.
You can't. Humans are lazy and working within constraints requires discipline and effort. Waste expands to fill the available space. The only thing that made old software efficient is that it wouldn't run if it wasn't efficient enough. And BTW there was crappy old software that made your computer chug back then too, it just looks better now because computers have gotten so much faster that apps can be a lot sloppier before users notice.
People still write efficient code where it counts. There is a trade off between development velocity vs efficiency & reliability. In most consumer software the advantages of building and shipping more frequently largely outweigh saving on compute, memory, or disk, which is historically cheap.
Who says using a ton of RAM isn't efficient? Unless your system OOMs what's the downside of having a bunch of allocated memory? Especially if it isn't even paged in.
In general, you're correct - RAM does literally nothing if it's not being used.
Several reasons why this principle doesn't apply in this specific (Electron) situation:
(1) Every single Electron/webtech application I've used hasn't just consumed tons of RAM, but also had a noticeable CPU (-> battery & performance) impact.
(2) Most webtech apps I've seen have had memory consumption in the 200-400 MB range - which isn't a problem on my 16 GB desktop, but is a problem on my 4 GB RAM laptop. People have less RAM than you think, and want to run more applications than just yours. Which is better: to be able to run Spotify, Discord, Slack, Matrix, Obsidian, your web browser and a video game all at once, or to have to manually open and close applications when you OOM?
That is - wasting 200 MB of RAM isn't bad if your available RAM is far in excess of 200 MB. For most people, it isn't. If Electron apps each used only 5 MB more than necessary, you would see virtually no complaints at all.
(3) Inefficiency is making bad use of available resources. Not only are chat applications like Slack and Discord not intrinsically difficult problems, but the very existence of third-party clients like Ripcord[1] shows that these applications are making extremely poor use of the resources given.
I do, because your program isn't the only thing running on my system. Low free RAM means paging, and generalized slowdowns when something else RAM-hungry, like a game or a web browser, is invoked.
Memory is a limited resource to be used judiciously, not an all-you-can-eat buffet.
What's the point of having a bunch of RAM sitting around doing nothing? I would rather have a system that had zero free RAM but managed its address space well, so that changing RAM usage was painless. Why pay good money to have hardware sitting idle?
The same reason I refill my car's gas tank long before it hits zero; low/no resource problems range from irritating to catastrophic. Unused RAM isn't wasted, it's headroom.
But your OS doesn't have to find a petrol station it may not even be able to reach; it can just swap to SSD or, best of all, swap to RAM itself by compressing pages (zram on Linux does it, and macOS as well). Empty RAM is seriously wasted unless it's some embedded system where you are managing memory yourself and have strict latency requirements.
As described below, that's what's supposed to happen, but the default kernel configuration in mainstream distros will happily hold onto caches while swapping so hard as to be unusable, or worse, loosing the OOM killer to wreak havoc on my workspace. This is unacceptable, and this is why I want headroom.
At the end of the day, if my system wildly misbehaves under high memory pressure, and forcing the pressure down resolves the misbehavior (or keeping a certain amount of headroom prevents it from happening outright), "linux ate my ram" is an accurate description of what happened and no amount of tut-tutting telling me that it doesn't work the way I just got done seeing it work changes that.
I'll give zram a try, but the problem here is poor usage of memory (both in priority and badly-behaved bloatware), not quantity of memory available. I'm not a kernel developer, I shouldn't have to dork around with these kinds of knobs to get sane behavior.
Unless you're seeing other processes crash because of a lack of RAM, this isn't an issue; your OS will page things in and out, including in Chrome, based on memory pressure.
Yes it is an issue. Under high memory pressure, we start digging into swap, and at that point, the UI is starting to significantly chug.
Worse, this often happens when there's plenty of cache to evict. I can and have restored a nigh-unusable desktop to normal operation many times with a painfully entered `echo 3 > /proc/sys/vm/drop_caches` from a new TTY, instantly resolving the pressure and giving me time to find and terminate the presumptuous program that thinks it's entitled to 3/4 of system memory (usually some flavor of web browser or Electron bloatware).
Why's the kernel so jealously guarding its cache allocation and making the UX suck harder? Not a clue. Whatever performance penalty I take from nuking caches is far, far less than from allowing free memory to fill up and dealing with the pathological behavior surrounding that.
Just to again note that just because a program says it's using N MB of RAM doesn't mean that all of that RAM is actually paged in. Every thread you execute has an 8+MB stack but most of it won't get allocated for the majority of programs.
> we start digging into swap, and at that point, the UI is starting to significantly chug.
Only if you're constantly swapping in and out. Just putting something into swap and never retrieving it won't cause issues.
I'd generally recommend disabling swap altogether though and just letting OOM take out misbehaving processes.
This isn't all to say that using less memory is 'bad', but when people say 'oh that program is such a memory hog' I wonder if they might be measuring incorrectly, or not realizing what it's doing with that memory.
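For anyone who wants to see the reserved-vs-resident distinction in action, here is a rough, Linux-only Rust sketch (the /proc parsing is an assumption about the platform, not portable code): it reserves 16 stacks of 8 MB each, yet VmRSS barely moves because untouched pages are never faulted in.

```rust
use std::fs;
use std::thread;

// Read this process's resident set size (VmRSS) in kB from /proc (Linux only).
fn resident_kb() -> Option<u64> {
    let status = fs::read_to_string("/proc/self/status").ok()?;
    status
        .lines()
        .find(|l| l.starts_with("VmRSS:"))?
        .split_whitespace()
        .nth(1)?
        .parse()
        .ok()
}

fn main() {
    println!("resident before: {:?} kB", resident_kb());
    let handles: Vec<_> = (0..16)
        .map(|_| {
            thread::Builder::new()
                .stack_size(8 * 1024 * 1024) // reserve 8 MB of stack per thread
                .spawn(|| thread::sleep(std::time::Duration::from_secs(1)))
                .expect("spawn failed")
        })
        .collect();
    // 16 threads * 8 MB = 128 MB of reserved stack, yet resident memory grows
    // far less, because pages that are never touched are never faulted in.
    println!("resident after spawn: {:?} kB", resident_kb());
    for h in handles {
        h.join().unwrap();
    }
}
```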
>Only if you're constantly swapping in and out of swap.
Which, in the experience I just gave, is what's happening. System memory at some high 90s percent utilization, swap usage creeping up, kswapd with a ton of CPU usage, and worst of all, UI chugging. If it wasn't 'actual' memory usage, why does dropping caches, instantly freeing up some amount of memory, restore responsiveness?
I've tried operating swapless before, but that just means OOM killer kicks in even when there's cache to evict. That seems like a priority inversion to me - of anything paged in, shouldn't cache have the absolute lowest priority, and be the first thing to go when memory's needed for other things?
The things you pointed out are the problem. On RAM-heavy machines you're right, it's not noticeable and can even be a performance boost. The problem is that same app is run on machines with all kinds of capacities and system loads.
Phones have gigabytes of RAM. It's a relatively niche situation where someone is running Chrome on < 1GB of RAM, which is less than what modern OS's require to begin with.
Besides, memory usage is hard to measure. Lots of memory may never get paged in.
Environment should be the new constraint. Inefficient apps collectively consume energy that otherwise would have been saved and therefore contribute to global warming.
Will you also wipe out every old-generation CPU still in use? Because a CPU using 4x the energy of a modern mobile one will be much more detrimental to the environment than an application running for 2 hours using 10% more energy.
People around here seem to really struggle with the miracle that is getting into software development from nothing and then actually being employable, in the capitalist sense, after 6 or so months.
Of course it's a trade off! We enabled this miracle by training people that have a very focused, narrow understanding of not even a field but a particular tech. Electron is basically the perfect fit for this type of education. It enables someone to build something where previously they could build nothing. It makes getting from 0 to 1 that much easier.
From a business standpoint, it's simply smart to be wasteful with resources that are abundant (average computing power on personal devices), when it helps you save on resources that are not (dev time). But of course, there is much not to like about the side effects.
>"It enables someone to build something where previously they could build nothing. It makes getting from 0 to 1 that much easier."
Making GUI apps using Electron tech for the front end is no less time-consuming than doing the GUI in Lazarus, for example. But the end result is way more frugal in the latter case.
The original statement assumes someone who knows nothing else and would otherwise have to start from scratch.
In any case, I am all for the type of developer who knows how to screw a size-8 bolt into a size-8 nut and the rest be damned. It keeps a healthy niche, and the remuneration, for the slightly more versatile types.
And since you have to know the web stack to exist at all on browsers, that box is checked for basically everyone. Break the dominance of the DOM and JS in browsers and you open the door to better cross-platform toolkits.
I don't think that JS devs are entirely to blame for this - Chrome literally eats memory for websites even with little to no JS. That's not to say that some sites aren't guilty of this - Reddit is a travesty, for example.
The problem is that Chrome does things that make sense for a browser - like hanging on to a lot of cached stuff in the page history, or starting a separate renderer process for an iframe, but are horrible ideas when you are running a desktop app.
Additionally, I'm pretty sure stuff like React is also horrible for memory usage - if you create a reference to an HTML element from JS, all the native resources that element uses become subject to GC lifetime, and React, with its virtual DOM, does exactly that.
I've forked Chrome to create something like this (meant for native apps in a multi-process environment) and can confirm.
The multi-process architecture and the Chrome platform are already overkill just for a browser (in my opinion); it's much worse if for each application you have a browser process + GPU process + several renderers.
If you have only one main process, plus one GPU process and one process per running application, you solve that problem - even more so if those processes are not running JavaScript, as in my case.
I feel you've answered your own question. If you want to make a single app that can be web or offline, and cross-platform, Electron and friends allow you to do that. It's a trade-off of dev effort for user experience, and as much as many devs hate that philosophy it allows many things to exist that otherwise would not be worth the effort.
Because some of us have to build things which, for biz/customer reasons, have to be desktop apps, but we, as devs, are more familiar with web technologies/there are more web libs out there.
There is a moderate demand for local desktop applications, but there is a very large supply of Web developers. Solution: by bundling an entire web browser with your application you can have web developers making 'desktop' interfaces.
> Wails automatically makes your Go methods available to Javascript, so you can call them by name from your frontend! It even generates Typescript versions of the structs used by your Go methods, so you can pass the same data structures between Go and Javascript.
That does sound really nice - would love that with Tauri.
I've never used TypeScript, so at the moment I just have the pain of untyped structures in JS-land (and making sure I get them right when passing back into Rust-land) - but, as far as I know, if I did use TS I'd just suffer from manually duplicating the data structures instead.
(I suppose ultimately I intend to do that, it's just so imperfect anyway that I haven't bothered yet.)
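For anyone unfamiliar with what that duplication looks like in practice, here is a minimal sketch of a Tauri v1 command returning a serde-serializable struct (the struct, its fields, and the command name are made up for illustration; a standard Tauri project scaffold with serde's derive feature is assumed):

```rust
// Hypothetical example: a Tauri command exposing a struct to the frontend.
#[derive(serde::Serialize)]
struct AppConfig {
    name: String,
    retries: u32,
}

#[tauri::command]
fn load_config() -> AppConfig {
    AppConfig { name: "demo".into(), retries: 3 }
}

fn main() {
    // Register the command so the frontend can invoke it by name.
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![load_config])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```

On the JS side you call invoke('load_config') from @tauri-apps/api and hand-write a matching interface (name: string, retries: number), keeping it in sync yourself - which is exactly the part Wails generates for you.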
Lot of talk on how it's a bad idea to use web tech for desktop apps because bloat.
Web tech is ultimately an API. APIs don't create bloat, developers do.
What is missing (or unknown) is tooling to avoid bloat.
One could have something like a compiler that analyses the HTML+CSS+JS of an app to be packaged and generates just enough code to implement what's actually in use. It only needs to implement the semantics (for example, no need for a CSS parser in the app if the CSS is not generated on the fly in response to user interaction; it can compile straight to layout code).
Even packaging an existing browser engine, it should be possible with a bit of care (no aggressive use of runtime-generated code and the like) to build an advanced dead-code remover that recompiles the embedded browser without all the stuff the packaged app does not use (e.g. no video => no video decoding code, no use of CSS feature XYZ => not compiled in). It could even be brute-forced given an exhaustive test suite: for each function in the embedded browser, replace the body with an exception, run the test suite, remove the function if the suite passes, repeat.
This also applies to GTK or Qt, which suffer the same problem as Electron (they must be co-packaged with the distro, or compiled for it): with efficient dead-code removal you could make AppImages (or similar all-in-one formats) that are not zillions of megabytes each.
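To make the brute-force idea above concrete, here is a rough sketch of the probe loop; the symbol list, the stub/restore helpers, and the test-suite script are all stand-ins for tooling that would have to exist, not anything real:

```rust
// Hypothetical brute-force "is this function needed?" probe. Nothing here is a
// real tool: SYMBOLS, stub_out, restore, and run-tests.sh are assumptions
// standing in for build machinery that can patch one function at a time.
use std::process::Command;

const SYMBOLS: &[&str] = &["DecodeVideoFrame", "ParseCssAtRuntime"]; // assumed list

fn tests_pass() -> bool {
    Command::new("./run-tests.sh") // assumed exhaustive test-suite entry point
        .status()
        .map(|s| s.success())
        .unwrap_or(false)
}

fn stub_out(_sym: &str) { /* assumed: replace the function body with an abort */ }
fn restore(_sym: &str) { /* assumed: put the original body back */ }

fn main() {
    for &sym in SYMBOLS {
        stub_out(sym);
        if tests_pass() {
            println!("{sym} looks unused by this app; leave it stubbed out");
        } else {
            restore(sym);
        }
    }
}
```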
Is there any company/project that made and published their 2021 roadmap at the beginning of the year and successfully completed all the tasks, verbatim, by the end of the year? My guess is that the number of companies/projects that completed everything is close to 0.
They had a feature-freeze for a security audit, I'd guess that's a big contributor, and also that possibly there's a bunch of unmerged progress as a result.
Last time I checked, it's using libwebkit to render HTML/JS/CSS, which means that, on top of my typical daily Chrome browser, Tauri brings another browser (memory and CPU) into the system, nearly doubling the 'browser resources' consumed on my computer. Not good.
Tauri uses Wry[1], so it should actually be using the webview that comes with your OS. FWIW, every Electron app includes its own copy of Chromium, so that's even worse from this perspective.
I just ran vscode on ubuntu and checked its memory usage, you're correct that vscode brings its own chromium instead of sharing any libraries with my running chrome.
Wry says it uses the 'default web engine', which under GNOME is libwebkitgtk (i.e. the WebKit engine); that is different from the default browser, Firefox, which uses Gecko as its engine.
So both Electron and Tauri bring their own heavy CPU/memory demands from a built-in web engine that shares nothing with the heavy browser you already have running. Neither is a lightweight solution.
Neither is as lightweight as native, but if you've got a collection of apps running against OS webviews vs a collection of apps running against their own Chromiums, it seems like the webview ones would share at least some memory. (It could just be that all you gain is disk space and smaller downloads, which I agree isn't that much of a win)
When I was playing with Tauri I had to install libwebkitgtk etc. for it, since neither Firefox nor Chrome brings in that dependency. Unless you're using some GTK browser, that disk space was not saved by Tauri at all.
Anyway, other than Tauri being Rust-flavored, on the resource-usage side it shouldn't be much different from using Electron, which uses Node.js.
Is there still an easy way to use node.js with Tauri? I'd appreciate an easy to use cross-platform tool on the backend. Or is it better to build the backend features needed (e.g. file access, config management, network etc.) fully in rust?
I really miss the days where many things were made from scratch like operating systems and compilers. Nowadays it's something built on Linux, Android or Chrome.
Why not just run a local server (localhost should be secure)? I mean, you're creating a web app and some kind of backend/logic anyway.
You could embed all assets and the server into one executable file and distribute it. The user then runs this executable and uses their default system browser.
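As a proof of concept, here is a tiny std-only Rust sketch of that idea; the page is embedded in the executable and served on a hardcoded localhost port (the port, content, and single-threaded loop are simplifications):

```rust
use std::io::{Read, Write};
use std::net::TcpListener;

// The page is compiled into the executable; a real app would embed a whole
// asset tree (e.g. with include_bytes!/include_str!).
static INDEX_HTML: &str = "<html><body><h1>Hello from a local app</h1></body></html>";

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    println!("Open http://127.0.0.1:8080 in your default browser");
    for stream in listener.incoming() {
        let mut stream = stream?;
        let mut buf = [0u8; 1024];
        let _ = stream.read(&mut buf)?; // ignore the request details in this sketch
        let response = format!(
            "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-Length: {}\r\n\r\n{}",
            INDEX_HTML.len(),
            INDEX_HTML
        );
        stream.write_all(response.as_bytes())?;
    }
    Ok(())
}
```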
Not the OP, but given that this relies on whatever is on the host OS, the DLLs on the host OS will evolve, changing their interfaces or even no longer shipping at all, whereas with applications that rely on the bare minimum from the OS, like Electron, this is not an issue.
I wonder if the processing / size overhead of Chromium / Node could be reduced by making both more modular? I.e. compiling only the features you need. But I guess Chromium isn't built for that.
Let's put this meme to bed. Tauri has been developed for two years now. People don't just write Rust because it's trendy. If you want to throw stones at Rust, there are many more substantial criticisms to make.
Our compilation times are long.
Those sweet, ergonomic macro interfaces are only possible because of proc-macro dark magic that tends to be very verbose and special case-y.
Tauri's pitch is something like, "Electron, but lighter weight and more secure," not "written in Rust." If you're skeptical of the value in that, hey, so am I.
I'll cite my own experience:
I learned Rust because I needed a modern systems language in my toolbox. I specifically wanted something that could be used to write firmware. So I chose Rust over alternatives like Go or D, because I couldn't tolerate a garbage collector with those requirements. Trendiness had nothing to do with it; in fact, I was hesitant to pick it up because I was worried it wouldn't have staying power. (Having taken the plunge, I have no regrets.)
Six years in a row is more than just a fleeting thing; obviously we have no way to know for sure what will happen in the next few years, but it seems likely that Rust isn't going anywhere anytime soon.
I don't do anything because it's trendy and I don't use Stack Overflow, but I have used Rust in a project. Its popularity with me is 100% merit-based. In fact, the project failed only because of management not understanding its value and me being unable to communicate to them why Rust is superior.
The language ultimately chosen for the project ended up being 100% a political contest, and that language was not Rust.
Or maybe people just like working with it? Trend and hype bring people to it, sure, but that doesn’t take away from the fact that it’s an incredibly economic language that’s also fast.
I don't think the criticism is of Tauri. As far as I can tell from their website, THEY aren't promoting THEMSELVES on the basis of their implementation language being the #1 feature.
I think the criticism/meme is directed more at overly enthusiastic fans of Rust. Even if a project doesn't promote itself as "Written in Rust!", that's how it gets promoted online by the fanboys. It IS a substantial criticism, as generally they are not doing the credibility of these projects any favors.
I understand it wasn't about Tauri specifically, but I still think it's instructive. The implication is that no one does anything serious in Rust; they only post half-baked projects on HN for karma. Which falls pretty flat in a thread about a project that's been actively worked on for two years, wouldn't you say?
People are actually building stuff in Rust because they have, for whatever reason, decided they care about that language and ecosystem. There's no requirement that _you_ care about it. But the reasonable response to seeing something you aren't interested in is to do something else, and to leave the discussion to those who are interested. Entering the conversation just to let everyone know you don't care is tiresome and gratuitous.
It's pretty common for people to tag a tool with the language it's written in. It's interesting information. For instance, I don't work with JavaScript, so I'm not particularly interested in what's happening in that ecosystem. Putting JavaScript in the title lets me know I'm probably not the audience. People don't seem to complain about posts like, "Foo: Bar for JavaScript".
The "Rust is for clout" meme exists in the minds of it's detractors, not in the minds of the people who are actually writing in it. So no, it isn't substantive. It's more a form of straw man. "I don't care about this thing, and no one else should either, because it isn't a real language. It's just for karma."
Here's a thought experiment. Let's say that writing something in Rust really is the recipe to hit the front page of HN. Why does that work? Can it be for any reason other than that there are a lot of people who are interested and want to learn more about it?
And if you're not one of those people - doesn't it make the most sense to just click a different article?
> "Entering the conversation just to let everyone know you don't care is tiresome and gratuitous."
No one broadly commenting on the "... written in Rust!" meme is saying that they "don't care". They are saying that they DO care. About the clout-chasing aspect, going unchallenged, clogging up a generally high-quality forum with what amounts to spam or clickbait.
If you get a lot of spam in your email inbox, you would take a dim view of the pompous admonition, "If you are not interested in homeopathic erectile dysfunction pills, then you are free to scroll onward and leave it to those who are." In the case of a message you DO happen to be interested in, you would still be right to discourage the use of clickbait tactics or dark patterns with that message.
BOTTOM LINE: Either the topic of this post is "Tauri", or the topic of this post is "Rust".
* If the topic is Tauri, then the clickbait meme detracts from the topic (I note with interest that the Tauri project itself does NOT promote itself on the basis of its implementation language).
* If the topic of this post is Rust, then well... I honestly think you'd be better served promoting a single "Look at all of these 'serious' projects written in Rust!" post, rather than sidejacking a myriad of topics individually in a clickbaity manner.
Either way, if you don't care about the spam/clickbait criticisms that always arise in these threads, then I invite you to leave that discussion to those who are interested.
"This topic is overvalued in the marketplace of ideas" is equivalent to "I don't care about this idea."
The problem with spam is that it is involuntary and fraudulent. The thing about articles on Rust is that participation is voluntary, and the claims they are only for clout are untrue, as I've argued.
I'd be curious what your answers to my thought experiments are.
ETA: I apologize for making you feel admonished, that wasn't my intention. I'm really not interested in criticizing you or anyone as people, but the behavior and the sentiment behind it. My intention wasn't to throw mud, but to hold up a mirror and show how this message is received; I did my honest best to compose a convincing counter argument, and the reaction I was going for was, "Gee, I hadn't thought of it that way", and not, "This jerk is calling me out."
Maybe I’m reading between words in this thread too much but sounds like there’s a bit of hating. I welcome our Rust (hybrid) apps, sounds like a net positive for software.
They promote "written in Rust" more than the product.
That is the only thing I remember of the countless projects advertised on Hacker News: "written in Rust". They still haven't got it; the language is the least important piece of a product.
The fact that it uses Apple's Internet Explorer makes it dead on arrival.
One does not need more than three brain cells to understand that the true Electron successor will simply have an API that lets the app declare a range of allowed Chromium versions installed natively. If none is found, it will transparently auto-install the desired version, at the same time making that version available to other apps.
It doesn't bundle it though, it uses the one that comes by default on the OS. Which is already a huge improvement, because it should, in theory, be able to share resources across different apps (e.g. the DLL should only be loaded once in physical memory, etc...).
These things are not a big deal! Bugs happen. I would be more than happy to contribute and try to help fix these things. However, when this was mentioned (politely!) to the devs in the discord by a user one was actively hostile in reply: https://imgur.com/a/sHzSaae
This does not seem like a team that cares about its users one bit. I'll take the perf hit (is there one?) and stick to electron.
EDIT: After posting this knee jerk comment spawned from a night of frustration using Tauri a couple weeks ago, I think I would like to retract calling it a “horrible project.” It’s technically interesting, and even good (when it works, I hear), but this was easily the worst experience I’ve ever had in FOSS.
“Sorry if I was rude but it was intentional… if you fcking people would read…”
I don’t know if this is a reason to embargo a project in its entirety (is this person in charge? What do the other devs think of this behavior?), but jeez, that is unpleasant.
For starters, I'd never introduce a project with such people representing it to the community at my company. We always get paid support when available, but humans sometimes don't read and they do ask stupid questions, and answers like that can get the people responsible for the choice in big trouble, or at the very least force us to drop it for an alternative.
I'd expect them to have some sort of code of conduct and remain compliant to it.
I agree the one interaction is not enough to embargo the whole project. If it works for you, great! I even find the work they’re doing with rust technically interesting. However, the damn thing just not working AND that interaction (not me in screenshot) really put me over the edge after a night of trying and failing to get my app to build + run.
This is a reason why discord is not good as a "project discussion hub." Make most of your knowledge sharing ephemeral and it means you have to repeat yourself over and over.
IRC has loads less overhead with equivalent ephemerality! :)
I'll not bemoan folks choice of comms channels, just would be nice to have a consistent place to do it. Between Gitter, slack, IRC, Discord, and others, fragmentation of chat clients is annoying. Pidgin solved this all a long time ago but then XMPP got dumped.
XMPP is still alive (there is a community maintaining the protocol, clients, and servers). I have been using XMPP exclusively for chat for about a year, pulling in most of my friends. It seems that if users do not demand the use of Internet Standards like XMPP, we will be stuck with corporate walled gardens forever. See also: https://blog.samwhited.com/2019/02/whats-wrong-with-xmpp/
But the contents aren't very problematic. This is one person tired of repeating himself in a chat.
The issues up there have a much more reasonable discussion.
I wouldn't embargo the project over it, but it's worth checking whether they have a better channel than the Discord one. Anyway, the Mac isn't a good environment for webviews (notice that there are other projects with this same known bug).
Hi folks, I'm the user from the screenshot above (thanks for sharing this btw).
I have to say, this was a quite unpleasant and unwelcoming interaction as my question was genuine and I indeed did do some extensive research before asking this question in Discord.
As a devtools founder myself (co-founded www.prisma.io) I highly value welcoming and helpful communities and offered my help to the people behind the Tauri project to turn it into a more welcoming community. I hope they are open to it since I actually really enjoy using Tauri as a project.
Denjell here from the founding team of Tauri. Thanks for reaching out directly, and I just wanted to state for the HN record, that we are going to be having a board discussion surrounding the unpleasant event raised here, and will make a public statement very soon.
That said, on a personal note, I am quite sorry this happened. We do support and uphold our Code of Conduct, and this is not an appropriate response, nor the community tone we want to nurture.
I'm excited to hear you're looking into this and taking it seriously. I think this is a great opportunity for the Tauri community to become a more welcoming and inclusive place. Happy to help!
> This is a really horrible project and should be avoided IMO.
I'm glad to see that you've retracted this, but note per HN Commenting Guidelines: "Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
The project is horrible because one of the devs said something that wasn’t nice? Are you saying the project is horrible on its technical merits, or you just don’t like the developer’s response to someone?
> [leverages] WebKit on macOS, WebView2 on Windows and WebKitGTK on Linux.
So cross-platform compatibility isn't guaranteed, unlike Electron.
https://github.com/tauri-apps/tauri