> Inconsistencies both small and large had crept into our apps over time. From small things like password strength being different between platforms to larger things like differences in search results and entire missing features.
The solution that has worked well for me in the past has been:
1. Keep all of your business logic and as much of the rest of your code as possible in platform-neutral C or C++: something you can call into from all native platforms.
2. Write a very small layer of code using the native language/frameworks to do the UI and interact with platform-specific APIs.
This has a few advantages: You get a single code base, for the most part, to maintain. You have the opportunity to implement as native a look-and-feel as you want on each platform. Most of your application is C or C++, so you don't need a team with deep expertise in multiple languages. Most of your development should be on the business logic, right? This architecture also lets you easily add a command-line version of your product, for example to help with automated testing of your business logic. And finally, you don't have to ship a huge, memory-devouring browser and JavaScript stack with your application--your users will thank you.
It's not rocket science or anything to be terrified of! I think people too quickly dismiss cross-platform native because they imagine re-writing their application in 3-4 different languages, but it doesn't have to be that way.
> Most of your development should be on the business logic, right?
In "creative" applications, the term "business logic" is sometimes hard to use, but accepting it as "stuff unrelated to the actual user-interaction" ... I can say that in Ardour (a cross-platform DAW), the UI is ALWAYS by far the hardest component of everything we do, and takes far more lines of code than anything in the backend (roughly speaking, where the "business logic" lives).
IME for a lot of apps, most of the development is the UI. Lots of apps have most of their business logic on a server these days anyway. The problem is typically not how to share business logic; it's how to avoid writing 5 different UIs that do the same thing.
Isn't this exactly what a system like react-native provides? Just off the shelf for common patterns, and backed by a Turing-complete language. If you have to build out the UI toolkit yourself then you haven't really saved much.
Is a JavaScript VM really that different from interpreted JSON files? With react-native, the UI controls are using the platform-native toolkits; there's just an abstraction layer on top.
Definitely, because first, JSON was just one possible format, and second, whatever format is used, it is a serialisation format for a native language, using the native APIs of the platform.
There is no interpretation going on.
The abstraction layer for react-native has a JavaScript engine in the middle, plus a marshaling layer to go through OS APIs, and on Android it is doubly slow, because the NDK only has game-related APIs; anything else requires a second marshaling call to go through JNI.
Would it be possible to use Go for this? It compiles to native code, and although it includes a much more significant runtime than C, it's safer than C or C++, and it's famously easy for developers to pick up as a second language.
Probably not very well, as these solutions typically involve FFI, and integrating the runtimes would be a big pain. On the other hand, Rust is ideally suited for this use case. Personally I'd love to see a Rust + react-native combo.
Multiple event loop integration is something projects get to very late in their lifecycle, if they ever get to it at all. Tokio is no exception: https://github.com/tokio-rs/tokio/issues/2443
When I used to write more networking C code, I used to be careful to write my libraries to be event loop agnostic. And if my library had its own event loop, I made sure that it could be stepped independently so it could sit atop another application event loop. This is a lot easier on modern Unix systems because kqueue and epoll can be stacked--you can poll a kqueue/epoll descriptor from another kqueue/epoll descriptor, so sockets, timers, etc registered with the first will bubble up to the parent and your library-specific event loop can just export a single file descriptor to be polled by another event loop.[1] Windows doesn't have such a capability so there's no simple way to bubble events.
Of course, you can run your different event loops in different threads, but then you have to deal with the nightmare of shared-data multithreading. Rust makes multi-threading "fearless" by heavily restricting multiple threads from holding references to the same object, but that doesn't help in this regard, especially considering you're integrating with non-Rust code and, worst of all, GUI APIs, which tend to be some of the most inscrutable and hostile code to integrate with.
[1] This is effectively structured concurrency as a first-class kernel interface, an elegant characteristic of kqueue and epoll that is sorely underappreciated and rarely used.
The thing with C and C++ is that they are part of the SDK; with anything else, it is on you to improve the development experience, integrate libraries, or debug FFI issues.
Not easy because they must also be willing to endure the 2nd class tooling that Google creates for C and C++ on the platform, on top of the usual development issues.
But still, it is much more enjoyable than using any other framework that adds even more layers to the puzzle.
> Keep all of your business logic and as much of the rest of your code as possible in platform-neutral …Something you can call into from all native platforms.
This to me is the big win of Webassembly: you can compile with gcc/Clang to target it as well. It makes the browser another “native platform”
I actually agree and chose C++ as well, but I elided that part of your quote because it could be a more general choice than just those two.
That's a great article. I think I understand better the anger that certain people (often mac users) express when a product switches from a native app to a cross-platform app. I always thought it was about cost, and that the solution was to pay more. But it seems to be about the business thinking they know better than the user what a good user experience is. If you're picking a platform for the consistency of user experience, I understand how frustrating (and patronizing) that can feel.
I want to apologize to the few people that I talked to about this here on HN. I didn't understand your side.
It's hard to justify my desire for awesome Mac-only apps without considering the spectre of future Apple lock in.
Electron has made it possible for a lot of apps that otherwise wouldn't have been ported to Linux to work with no hassle, and for that I'm grateful. I have only recently been able to comfortably run Linux full time. My dev experience on Linux is effectively identical to what I do on macOS: Docker, VSCode, Alacritty, and Discord (to talk to my team). Of course Discord and VSCode are both Electron apps, and Docker just lives on the command line. Alacritty, on the other hand, is an extremely interesting example to me, as it is a true native program that uses cross-platform GPU acceleration through OpenGL. I feel that route is overlooked as a path towards compatibility for a lot of programs outside of games.
I prefer apps built using APIs like Vulkan or OpenGL to apps built using electron because they do not feel as sluggish. I think the Blender UI also uses OpenGL to render the interface. One criticism of this approach that has come up in many threads on the same topic however was the lack of accessibility. This is a big pro for electron (and also native development).
It's not even a drawing API. It's a 3D triangle-rendering API. Things like lines, bezier curves, and gradients are all do-it-yourself. None of it is super hard, I've done them all, but before you write your UI you have to write all your drawing routines, which probably wasn't what you wanted to be doing. And you get all the fun texture-management issues for any gradients and images you want to display. And you probably won't have optimized it like the platform rendering will have.
And then there's text... You've got to load a texture for every glyph of every point size used in the app for every font used in the app. Mind you, the OS already has a cache for this, but you don't have access to it, so that's more memory wasted. Which would be sizeable if every app on your system was using Japanese, or worse, Chinese, and making a copy of all this stuff.
it isn't the main point of your comment... but why is Chinese worse than Japanese? is it about the font itself being different or about Chinese text containing on average more unique characters?
More characters. For Japanese, I believe there is a mandated set of about 2000 characters that are learned in school and used by newspapers. For Chinese, I believe 3000 characters only gets you to 97% coverage, and 5000 is still only 99.44% [0]
This is a very good point that I feel gets overlooked way too often. When people talk about the UI feeling sluggish most of the time they are talking about responsiveness.
Like, who cares if your app is rendering at 120 fps if it takes me 3 seconds to focus on a text field? I feel like that's where most cross-platform tooling has its work cut out for it.
Hi Paul, not sure if this is an appropriate place to mention this, but I hope you'll please consider at some point using OpenGL or Vulkan to at least render the track view in Ardour. I have a large project with around 50 tracks that is extremely slow and laggy to render when zoomed all the way out; it is making the program unusable for me. I could provide a profile to put in a bug report, but I'm not sure of the best way to go about that. Thanks.
We use Cairo for all rendering. Cairo has various backend implementations, and when it gets a Vulkan backend and you're using Ardour on a system with Vulkan available, it will be used.
We're not going to write renderers at a lower level than Cairo at any time in the foreseeable future.
Also, most of the issues with slow rendering are caused by your video interface driver and/or bugs in Cairo that we provide a few workarounds for already. Edit > Preferences > Appearance > Graphics Acceleration has two options ... you need to try all 4 possible combinations to exhaust the possibility that we've already solved the issue you are facing :)
Thanks for the tips, I will try that and try to get some perf, I am currently compiling Ardour 6.9 :)
On a side note, I don't mean to be the bearer of bad news, but I think you need a better plan than this: Cairo has been in various states of inactivity and deep maintenance mode over the last several years, and if anything more backends will be removed, not added. Also, Cairo is an immediate-mode-style renderer following the PDF imaging model made for print--it's fundamentally the wrong approach for high-performance rendering. You may not see much improvement even if it did have a Vulkan backend.
I really feel as though Vulkan is a brilliant leveller, with the chance to replace DirectX with a cross-platform, open-source standard. Funded by Valve and Sony? Sign me up.
In my experience it's the coordination and mental overhead between the platform teams that kills productivity.
I suspect that hiring and training cross platform developers, then having them own a feature across all platforms will significantly increase speed of delivery of that feature. These are typically called feature teams and they are most effective when there are backend developers on the team as well.
The trade-off is that now the new feature teams are not communicating with each other and coordinating on the common code between them, which starts affecting productivity.
In the end, regardless of how the new VP of engineering likes to split the teams and get praised for solving the current problem by introducing another one down the road, the trade off for increased size is loss of productivity due to synchronization overhead between the people involved.
I've been the cross-platform dev implementing every feature across 3 platforms, and it's a miserable job. Besides feeling like you're living through Groundhog Day, the mental burden of constant context switches between languages and APIs wears you down over time, and that's before you look at the effort required to stay current on all your supported platforms.
I would never organize a business like that again.
> “What is it about enterprise companies that make so many of them abandon native apps, when they could surely afford to develop one app for each platform?”
If this is the case though, why, when I follow links to strong web apps like Twitter and Reddit and Facebook, do I get plagued with "try the app! It's better!" prompts?
I develop/maintain 2 apps that both have native variants, so 4 apps total. And I often bemoan the duplication and wonder if we shouldn't unify them. But then I see that the web guys are still pushing me to use native apps. I find it very circular/confusing.
I think that's pure marketing. If you download a "native" app, you're more committed to their product. And they can show an icon on your home screen. It is probably more comparable to a bookmark.
> “What is it about enterprise companies that make so many of them abandon native apps, when they could surely afford to develop one app for each platform?”
Another way to think about the article's thesis is that organizations never deal directly in simple "costs", like "Can I afford to spend X on Y?" They always evaluate in terms of opportunity cost, "Would I rather spend X on Y or Z?"
Once you frame it like that, you realize that the choice between native versus cross-platform is, as the author states, often a choice between fidelity and velocity. And in today's software environment where hardware and user needs change very quickly, velocity often wins.
Evergreen reminder that with just a little bit of work to avoid platform-specific native-compiled dependencies and Node-Gyp, you can vendor your dependencies for non-library Node projects, and sometimes you should.
Certainly, Electron apps themselves should not be pulling dependencies at install time, those should be bundled into the app, that's the whole point of Electron. But they also don't need to pull their dependencies from the official NPM repos at build time either. You don't technically even need NPM installed on anything other than your dev machines, Node's dependency resolution during builds and at runtime doesn't rely on NPM at all.
I think some languages (Go springs to mind) are starting to slightly popularize dependency vendoring more, and at one point this was the official recommendation for Node apps/websites in the official docs. It fell out of fashion for some reason, but I think a lot of orgs would benefit from picking up the practice again.
It's also a good way to be a lot more deliberate about what you install, and a good way to make sure that new dependencies don't get snuck in without anyone noticing. It's very hard to hide a giant commit that introduces a thousand new files, and that can sometimes be a good opportunity to take a step back and ask yourself why you're adding a dependency that introduces thousands of files.
Check out Cosmopolitan, which vendors all its dependencies. https://github.com/jart/cosmopolitan You can't use Node with it yet, but you can use QuickJS if you `make o//third_party/quickjs/qjs.com` and `o/third_party/quickjs/qjs.com -m third_party/quickjs/repl.js`. Cosmopolitan goes so far with vendoring dependencies that it actually has its own C library implementation that prevents functions like socket(), connect(), and getaddrinfo() from talking to the Internet if the software is running under GNU Make, since we've sometimes found, when porting software, that it'll have unit tests that sneak leaks to public servers. It goes without saying that, in order to make it even possible to vendor all dependencies down to the binary level, there's a very long list of topologically ordered things that need to happen before you can even talk to the Internet. https://github.com/jart/cosmopolitan/blob/50937be7524197f23a... It's all cross-platform and native at the same time for CLI/TUI. It'd surely make a more trustworthy tooling stack for a password manager than something like Electron, since the temptation to incorporate things like FullStory into such GUIs is too great.
That’s not how Electron works - it bundles dependencies, like any other app bundle (exe, msi, dmg). Same goes for non-Electron websites (/apps).
I guess there’s a way to also auto-update those, but that defeats the goal of using Electron - reproducible, predictable app builds to ship to consumers.
This hits the nail on the head. Cross-platform as a cheaper option only comes true if the multi-platform feature delivers real value to the users--which is obviously the wager that AgileBits is making. Otherwise someone else is bound to deliver a similar feature set either at a lower price point or with better execution using native frameworks.
Counterpoint: multiplatform frameworks are web-based and thus are easier to learn and have more developer mindshare than native frameworks. But native frameworks are not that much more complicated these days and can use proprietary services: would you roll your own sync or use CloudKit?
CloudKit is definitely satisfactory, but it's limited to Apple devices.
I don't know much about the space, so I may misunderstand the following technologies, but I wish more apps used P2P protocols like IPFS, Hypercore, etc. to keep my devices in sync, with an optional pick-your-own-cloud backup as well, so I could just use Dropbox, Google Drive, OneDrive, etc. The latter approach seemed pretty popular among developers for syncing (i.e., pick your own cloud backup service) not too long ago, and while there are still apps that do this, like Notability, I don't know why so many developers moved to rolling their own sync.
> CloudKit is definitely satisfactory, but it's limited to Apple devices.
That was my point, exactly. If you target only one specific platform's users, easier development (i.e., using Electron and web technologies) does not necessarily win over native frameworks, because the native frameworks now come decorated with powerful vendor services.
Obviously, if your target is multiple platforms, you are now in Electron land. That's not a bad thing per se, though.
> It's not there, but it's probably too big of a project for one person.
I mean, just take a single subset of functionality like handling multi-touch input. It's probably an hour or two to read through the Pointer Events[1] spec until you get a better idea of why that API was designed the way it was. Now imagine coming up with such a design after at least dozens of hours studying the limitations of prior APIs (but probably more than that). And now, imagine either a) conceiving of and implementing that design (at least dozens of hours, probably way more), or b) cloning that design (still dozens of hours to implement).
Now compare to the minutes it takes to read a tutorial and start using the API in a browser that probably already runs on probably any modern device that has a screen.
That's just one cross-platform API, and probably not in the top ten to implement first.
There have been many attempts to build such things, going back decades. None have really succeeded.
I am no expert, but my understanding is that it's easy to get to 75%, but extremely hard to get to 100%, or even 95%.
Not to mention that there are some differences between layout and navigation idioms on different platforms. Making an app truly native is not just a matter of using native widgets.
I do wonder if there is mileage in building an abstraction layer at a much higher level than a widget toolkit. Imagine being able to write a very abstract description of your app in some DSL, saying what screens it has, how they are linked, what information is on each one, and then having tools to compile that down to appropriate native code.
I don't think that "success" here is particularly well-defined. There are lots of Qt-based applications running on multiple platforms with a single codebase. That could easily meet a fairly strong definition of "success".
There are a million reasons. The main one being that you can't put the same UI on different platforms and have it fit in, no matter how well you use the native components.
The components are just one part of what makes up a UI. Different platforms have different conventions of how they are used on a higher level, and how everything fits into the rest of the platform UI.
Often the choice is between a cross-platform app and a Windows-only native application. That is because Mac users are of course important, but maybe not important enough to pay for a separate native Mac app.
Some companies use Java as cross-platform, STMicro comes to mind. Their stuff is wonky, and that's possibly because they don't have committed UI/UX designers, but it is consistent cross-platform.
The web is a pretty damn good distribution method for software. The user simply clicks a link and is provided with valuable features. No install process. No IT approval process. No thought at all, really, about how that code runs on their machine and does things that are valuable to them.
Frankly, we're still inching towards the browser being the OS. I don't want a clunky app that needs to be installed on all my devices and has a limited feature set compared to the web: I just want the website.
And for the vast majority of "productivity" tools - networking and sharing aren't backseat things, they're literally the most important part of the product (see Figma, for example)
---
Long story short: you're saying this is a "problem to correct". I disagree. I think the problem these tools are correcting is that native distribution methods (up to and including the current mobile walled-garden app-store fuckery) simply suck. They suck SO much compared to simply opening a URL in a browser.
The web is a good distribution method. However, HTML/CSS was intended to display the equivalent of MS Word documents, and is completely unsuited for general-purpose UI. In fact, we know what general-purpose UI should look like, because all the non-browser UI toolkits look pretty similar. And it's nothing like HTML/CSS.
> The web is a pretty damn good distribution method for software.
... that can and does run inside a browser. There's a wide-world of software that doesn't run inside a browser right now, and some serious arguments about why it probably never should.
That's where the problem lies, not with yet another app/web-page that provides you with a view of some database columns and rows.
> The web is more fickle and opaque than the App Store.
You have some examples of what you mean by fickle? I'm not really sure I understand.
In basically every situation I can think of, the browser is less fickle, more consistent, and more backwards compatible than an app store.
The only benefit to using an app over a website is offline access--which is moot if you don't have the app installed already, since the app download will certainly be bigger than the page load--and that's also the problem Electron nicely solves for those cases.
> In basically every situation I can think of, the browser is less fickle, more consistent, and more backwards compatible than an app store.
There is no guarantee of anything from any web app. Your data can be used for anything the provider likes. There are literally no guidelines.
> The only benefit to using an app over a website is offline access - which is moot if you don't have the app installed already
In almost every single use case you will have the app installed already.
Electron solves literally no problems with web apps. Electron apps are not web apps and do not run in the browser. Web apps do not use electron. The fact that some code may be portable between the two is irrelevant to the user.
You really think the browser sandbox limits what the website can do with the data on their servers?
> This makes me think you don't actually know what electron is.
Do you think electron is a web browser?
I think it’s a toolkit that lets you use JavaScript, HTML, and CSS to build a standalone app.
Web apps aren’t defined by the programming language they use. They are defined by the fact that they run on the web, i.e., in a browser talking to web servers.
The whole process model for Electron is almost identical to the process model for WebExtensions (a single main process manages renderer processes, which are instances of browser window objects).
I don't recommend this, but I've done it and it works - your whole electron app can be a main process that literally caches the resources from your site, and then displays them in a renderer view.
> You really think the browser sandbox limits what the website can do with the data on their servers?
And that is different than the mobile app store, how, exactly?
Data you provide to a company is theirs to do with as they please--whether it was ingested through an iPhone app or a website makes absolutely zero difference.
The difference between App Store apps and web apps is that many App Store apps work on local files or files stored in iCloud that are never ingested by the company that provided the app.
I'm not saying the web is wrong, I just miss there being a diversity of methods for fun and experimentation.
A huge issue with free software under capitalism is that it's really hard to sustain more than one way of doing things. And so we got monster, metastasized, too-big-to-fail Linux, Chromium, etc.
I believe the web is the future, and ChromeOS is a great example. Native apps are dying, and with good cause - it’s not worth having hundreds of developers split across multiple platforms.
The goal of any company is to provide a useful service/product, not necessarily a “native” app to satisfy HNers.
The confusion factor here is that 'cross-platform' has become fairly synonymous with 'Electron' (if desktop is one of your targets). It is possible to build cross-platform stuff that is performant. In the past I used wxWidgets for the classic Enterprise app that was cross-platform between Windows and Linux. The UI wasn't the prettiest thing in the world (you regularly end up with dated looking widgets) but it was snappy, and the UX was good.
It's continuously astonishing to me that there isn't another workable UI platform that doesn't require bringing a whole browser. Or is the HTML/CSS/JS UI the killer feature because you can share things with the browser implementation (or share skills with a browser frontend team)?
As long as there are platforms with enough market cap, there will be a need for cross-platform. And the easiest path wins. ElectronJS is the easiest path. My company relies on it.
First, I'm not even sure I understand what the author means by "cross-platform". At some point, he mentions Figma and Slack. What platforms are we talking about? Does cross-platform mean "web-based"? Lots of people use Figma's web incarnation ... is that cross-platform? Or is it cross-platform in its "native" Android/iOS apps? More people use Slack in its app-based incarnation (probably), but those are not implemented with the same tools as the desktop version(s).
I suspect the article suffers from the myopia often on display here at HN, in which essentially all software development involves some datastore, a means of entering data (often by one set of users) and a means of displaying the data in various ways (often by a different set of users). This used to be called "application development" in the 1980s, and these days there's a lot of writing about s/w development (and a lot of comments about that writing) which seems to be based on the idea that this is the ONLY kind of s/w development there is.
This vision excludes most "creative" software, all gaming software, most development tools, a great deal of technical/scientific computation, a large amount of automation software, and all actual computing platforms (kernels & user-space environments).
Those of us who have been doing native desktop application development for decades have an entirely different take on this stuff from people for whom "cross-platform" means "web, android and/or iOS". We're not "more right", but we know about toolkits that work on macOS, windows and linux (gasp!). We've been compiling our software on multiple platforms for a long time, not relying on interpreters and VMs. We've had to grapple with packaging and runtime library questions for longer than web browsers have existed.
Electron has got a group of people excited because it appears to move web-based development approaches onto the desktop, which is a totally different model than the one taken when using a cross-desktop-platform toolkit. It doesn't add to the list of such toolkits, it creates a totally different approach to tackling the problem, with the promise that the result could also be used for a web-based version, somehow.
In my case, having been developing a "creative" application for 20+ years that runs on windows, macOS and linux via a cross-platform GUI toolkit (and C++), switching to a model that included web front ends (e.g. Electron) let alone mobile platforms, would be even more disruptive than just switching to another of the existing desktop cross-platform GUI toolkits, and in that sense, it's essentially a new development process entirely.
Video games often have not much consistency with each other, aside from a few basic keys. Thus, they haven't really solved the tradeoff. Part of the complaint against cross-platform apps is that you have to adapt yourself to the workflow of the app, instead of the app adapting itself to the workflow of the platform.
I switch between computers often, and I am happy that my spotify looks the same here and there. I really don't want my spotify app to feel like gimp on Linux and like Excel on Windows...
That's true. I'm personally happy enough that I get to have almost all of the apps I need on Linux with this cross-platform trend. I also use Linux at home and Windows at work, so a cross-platform experience fits me. But I can see how it's frustrating to be left behind when you were used to your platform and people adapting their software to it.
For videogames, I think it makes sense that many have different controls. Trying to force the same controls would make some very hard to play. But I can see people arguing that most apps aren't as specialized as video games, and thus shouldn't need this level of personalization. There is the same argument to be made with special fonts for your apps, vs trying as much as you can to use the system ones.
To give a counterexample, Microsoft Teams notifications on MacOS can't be dragged offscreen; I have to wait for them to vanish. They're not using the native notifications tech (who knows why), and so they're jarring compared to all the nice MacOS applications.
Video games are immersive experiences: you're interacting with only the video game when you play it (or at least, that's the design), and you don't experience the jarring from task switching because you don't switch.
For desktop applications — especially a password manager whose main use involves switching to it quickly to find something to enter into a different application — there is a lot of that resistance when you go between apps that feel totally different.
(I currently work on an Electron app — but that's because it was already one when I joined the team, and it's in a space where time to market for multiple platforms is definitely more important than a good polished experience, at least at the stage we're at. Sad as that is.)
And with that you will lose all sorts of quality-of-life features: tab order, accessibility, hotkeys, etc. All of that has to be recreated for a replacement.