Cross-platform GUI development in general is a dumb idea. The user chose a specific platform and expects applications to follow that platform's UX guidelines. Building a cross-platform app is basically saying you don't give a fuck about your customers. It's an insult to your users.
That's easy to say for a file picker on desktop OSs, but what happens when you need to lay out a page on all 3 desktop OSs plus Android and iOS?
Do you need to create 5 designs that include 5 different buttons, 5 ways of navigating through the app, 5 different sets of icons, 5 different styles of animation, etc.?
Hell, even with a file picker, what happens when you use it on a platform that doesn't really have the concept of "files", like iOS or, until recently, Android (at least in what is normally exposed to the user)?
Roll in various versions of each OS and you are up to needing 10+ ways of doing most things. How are you supposed to create anything even remotely complicated?
I agree; if you build a system that completely covers specifying layouts for all widget needs and a DSL that handles wiring views into models, for all operating systems and device form factors, you will have invented HTML, CSS, and JavaScript, with the DOM API and a browser as the default interface.
Not sure I agree here. There are a great many browser features (e.g. the Web Speech API for recognition and TTS, WebRTC, etc.) that are just not lowest common denominator. Even if we stick with UI only, features like cross-platform vector graphics are not trivial. Of course some libs like Qt abstract it, but you're back to your same complaint, except it's QML, CSS, and JS (or often times worse, C++ if using widgets).
"Lowest common denominator" is kind of a feature when it comes to something used on a ton of different platforms. The rest can be built up out of those. I'm not saying that HTML et al. are amazing, just that I am doubtful something could be built that spans such a large vector space, so to speak, so expressively, without being subject to the same complaints.
Desktop and laptop systems tend to be in landscape orientation; phones and tablets tend to be in portrait. So you already have to either make two layout styles or accept a mediocre general solution.
Cursors and styluses are accurate, but fingers are not. So, if you make buttons sized for fingers, they feel awkwardly large (takes large hand movements to change between UI components, and the spacing means more scrolling or pages of UI), but if you size them for cursors/styluses, they are frustratingly small targets for touchscreens.
You can probably assume that most devices in portrait orientation are that way so they can be held in one hand leaving the other free to work a touchscreen, so you only need to handle three of the four combinations for a comfortable user experience in those regards.
However, there's more to it than that. Desktop users often run apps windowed, while phones tend to omit that option entirely. Mac puts the menu bar at the screen edge regardless of the window size/position, but other desktop/laptop OSs usually put the menu bar within the window border. You can remove the menu bar entirely and draw a hamburger button into your UI, but desktop users have the expectation of a menu bar and the change in convention will make the app feel like an outsider.
A desktop/laptop user will never try a long press and doesn't have a menu button, both of which have been used to hide extra functionality on phones, but they often do have a keyboard, and there is a massive list of keyboard shortcuts that they will automatically use. Desktop UIs have a logical tab order, and that selection can be moved with the arrow keys. Cut, copy, and paste function as expected. On Windows, F2 almost always means "rename selected item", F1 for help, alt-enter to toggle full screen, and so on. What about the multitude of multi-finger phone/tablet gestures? You have two very different modes of input, and users' muscle memory expects your app to fully support whichever one they are using. As an anecdote, I tried writing something in an Electron app's text box, then went for ctrl-left to jump back a word, but instead it caused a page navigation and lost the entire message I was typing. That was a frustrating "this app is an outsider on desktop" experience for me.
Your decision is effectively mediocre everywhere or customized to fit the conventions for each platform.
I'd love to be shown an example, because I honestly don't believe it is doable, let alone doable easier than just making several different applications.
Non-UI code lives in a static/shared lib that gets called from UI code. The non-UI code shouldn't be coupled to a UI at all, instead reporting progress/completion via callbacks or other notification mechanisms.
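A minimal C++ sketch of that separation, with hypothetical names (FileImporter, on_progress are illustrative, not from any real library): the core library has zero UI includes and only reports progress through a caller-supplied callback, which each platform's UI layer wires to its own native progress widget.

```cpp
#include <functional>
#include <string>
#include <vector>

// Lives in the shared/static core library -- no UI headers anywhere.
class FileImporter {
public:
    // The only contact point with the outside: a progress callback.
    using ProgressFn = std::function<void(int percent, const std::string& file)>;

    explicit FileImporter(ProgressFn on_progress)
        : on_progress_(std::move(on_progress)) {}

    // Pure logic; each UI (Win32, Cocoa, GTK, ...) supplies its own
    // callback that updates a native progress bar.
    int import(const std::vector<std::string>& files) {
        int done = 0;
        for (const auto& f : files) {
            ++done;
            on_progress_(done * 100 / static_cast<int>(files.size()), f);
        }
        return done;
    }

private:
    ProgressFn on_progress_;
};
```

The UI-specific projects then link this library and never the other way around, so porting means rewriting only the thin callback-wiring layer.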
If you must run on every platform for whatever reason, there is Rust, C, C# (Xamarin), etc.
That's a valid architecture, but it's not a cross platform UI library that uses fully native looking UI on every platform which is what I'm skeptical of working.
I don't think that fully native-looking UI is a very important thing. Not all of the software has a fully native UI on my computer, and it doesn't matter at all.
I completely agree, but the conversation above was speaking to native UIs being a benefit, and there being alternatives that allow you to "write once run anywhere" with native UIs (which I disagree that this even exists in any usable form)
I'm late, but I think wxWidgets tried this. It was OK, I wrote some stuff with it, but mainly targeted Linux. There were definitely some differences in widgets between platforms, but it looked pretty good for not much effort.
Yes, it takes some initial effort, but it only needs to be done once.
It needs to be redone for each platform every few years as apis and visual styles change. Cross platform libraries are notoriously hard and usually end up in the uncanny valley (qt, swing etc).
Platform vendors of course want devs to buy into just one platform; they want and need the lock-in. Using web UI is a way to sidestep that.
Web browsers like the one used by Electron do this as well. The styles of buttons, inputs, etc. vary from platform to platform. They can be overridden, of course, but that’s true of any native GUI framework as well.
See the Office related talks at CppCon 2015 about how to write portable code using what Microsoft calls the hamburger model, as part of their migration to a common codebase.
Basically: a shim layer for low-level OS APIs, a middle layer with the majority of the code, and another shim layer for UI APIs.
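That three-layer "hamburger" can be sketched as a pair of abstract interfaces with the shared code in between (these interface names are made up for illustration, not Microsoft's actual ones):

```cpp
#include <string>

// Bottom bun: per-platform OS shim (file system, settings, threads, ...).
struct OsShim {
    virtual ~OsShim() = default;
    virtual std::string read_setting(const std::string& key) = 0;
};

// Top bun: per-platform UI shim.
struct UiShim {
    virtual ~UiShim() = default;
    virtual void show_message(const std::string& text) = 0;
};

// The meat: the bulk of the application, which only ever talks to
// the shims and therefore compiles unchanged on every platform.
class AppCore {
public:
    AppCore(OsShim& os, UiShim& ui) : os_(os), ui_(ui) {}
    void greet() { ui_.show_message("Hello, " + os_.read_setting("user")); }

private:
    OsShim& os_;
    UiShim& ui_;
};
```

Each port then implements only the two thin shims; the middle layer stays common.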
Which is exactly my point, the application architecture has to be done in proper modular layers to separate UI specific code that makes use of the UI elements that provide a good UX, from the common code.
Microsoft calls it hamburger design, and has held three talks at CppCon 2015 presenting how they unified their codebase across all platforms. They are available on YouTube.
You misunderstand. Even forgetting the UI, the Office programs are different between platforms. We know this because functionality differs between platforms.
As a user of those apps, that’s a clear indication they are probably shit.
If you didn't even take the effort to get your interface with your user (read: customer) right, how lazy did you get with the parts of the app I can't see?
The UI of your app is the one highly visible point where 99.999% of the interaction with your customers takes place. If you can't be bothered to have a good UI, how shit is the stuff I can't see as a user?
A bad UI is a huge smell. In my experience there is a 1:1 relationship between the quality of the UI and the quality of the software as a whole.