SwiftUI is a big opportunity that Apple's not investing enough in, IMHO. It's good tech, and the reactive approach is excellent for many typical view-based needs, but at the same time the docs are terribly lacking when it comes to handling any kind of edge case.
Success to me looks like steering clear of SwiftUI for now, and advocating for Apple to hire documentation editors/leaders who can 1) create SwiftUI documentation with code examples showing how to escape out of SwiftUI to handle more complex cases, and 2) accelerate Xcode support for SwiftUI testing-and-instrumentation techniques such as view testing (e.g. ViewInspector) and integration testing vs. unit testing (e.g. reactive vs. dependency injection vs. extension tests vs. protocol tests, etc.)
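For context, view testing with ViewInspector looks roughly like this today (a minimal sketch: `ContentView` and the `MyApp` module are hypothetical, and the exact ViewInspector API has shifted between versions; older releases also required declaring an `Inspectable` conformance on each tested view):

```swift
import XCTest
import SwiftUI
import ViewInspector
@testable import MyApp  // hypothetical app module

// Assume the app defines something like:
// struct ContentView: View { var body: some View { Text("Hello") } }

final class ContentViewTests: XCTestCase {
    func testShowsGreeting() throws {
        let view = ContentView()
        // Traverse the view hierarchy and assert on its contents.
        let text = try view.inspect().find(text: "Hello")
        XCTAssertEqual(try text.string(), "Hello")
    }
}
```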
It's a great idea but not fully baked yet. My former colleague is using it for internal apps and it seems to work fairly well (the UI is not fancy), but not for anything going into the high-volume apps, which have much more complex behavior (designers are asking for stupidly complex UI). It's much better in the upcoming OS since they added better navigation, but that requires the latest OS, which is a non-starter for many companies that try to support at least one OS version back.
Will it ever be mainstream for iOS/macOS? Maybe, if Apple were to use it way more than they have so far, and improve it by dogfooding. It is nice to build apps this way, but there are still way too many bizarre gotchas for most people, unless you can live with more flexible designs.
At my company we've been slowly rewriting our UIKit app in SwiftUI. We were due for a rewrite when SwiftUI dropped. I'd say we're about 75% done.
For those "stupidly complex UIs", SwiftUI just doesn't have enough hooks for customization. And the improvements are too slow. Using SwiftUI has been bittersweet. It feels like you just can't use it to craft a high quality app that doesn't look like a generic iOS app.
> It feels like you just can't use it to craft a high quality app that doesn't look like a generic iOS app.
To be honest, as a user, I'd really rather see more "generic iOS apps". I think too many design teams are self-centered and do not consider that users need the apps not to admire their wonderful creations, but to get certain things done. In other words, your app is the center of your world, but it is not the center of mine. It's something I want to use to get stuff done quickly, with minimum effort, and with minimum cognitive load. Generic is good.
I agree with you. I’m currently in the process of developing an app aimed partially at an educator/student crowd, and partially at a general audience of people who are obsessed with words & dictionaries. My app is almost the poster child of generic iOS interfaces.
It’s not released yet, but my beta testers seem to love it. Most of them are non-technical. I haven’t written the onboarding stuff yet, but the generic UI/UX means they get up to speed quickly even without a tutorial. I’ve received considerable praise for how intuitive it is.
It’s developed in SwiftUI. My first time working on iOS. My experience is similar to others. 75% of the time, it’s a joy to use. 25% is spent pulling my hair out over internal bugs and bad documentation.
I think both Instagram and Discord are quite bad on the UX front on iOS. Strangely enough, Microsoft’s Outlook is quite decent while still looking a bit distinct. The same isn’t generally true of Google’s iOS apps, which I assume started using Flutter; you can feel it from time to time (mostly in not-quite-smooth scrolling).
I think Instagram is actually quite fabulous considering how navigating stories has become muscle memory for hundreds of millions of people. And even after they have been around for some time it is still satisfying to tap that profile pic with the ring around it.
We are also in the middle of a rewrite to SwiftUI. Did you include the multiplatform option in Xcode 14, or is it automatic? We are only at about 5% progress.
SwiftUI is an interesting toolkit. It is really good at making "generic iOS UI". But it's also very good at making something completely custom. I actually think this might be a good thing, because the UIs that people hate are the ones that pretend to be standard but are drawn by hand in ways that are not standard at all.
I think a mobile app should have a stupidly simple UI instead. Designers traditionally use a graphics program to design pixels. SwiftUI lets you design layouts and components. Maybe designers should change how they design and start using SwiftUI as a design tool.
Your comment reminds me how asinine some design orgs have become at companies. Basically chasing fashion statements and demanding engineers “just do it” with little consideration for the development, maintenance, and testing costs.
Modern development, at some companies, can truly feel regressive at times.
Obviously, this should be something Apple invests into itself. But they're not going to do it because their culture does not value it; actually, they're often culturally blind to the fact that it might even be useful to provide this. With SwiftUI this kind of nearsightedness really hurts the platform.
i think they have very strong momentum pushing os releases since... forever...
decoupling is hard to do and i was shocked (and gratified) they backported async/await... but swiftui is ui-level, im guessing there must be some real pushback against backporting it...
as a customer/developer, the whole os release every year is getting kind of old tbh...
We've been hearing this for years now, though. And we've seen SwiftUI takedowns on here before, a year or more ago, and even then SwiftUI wasn't exactly new.
Meanwhile, my team has built an entire cross-platform desktop app in Qt with QML, which has been around for years and works refreshingly well. You're telling us that Apple, starting fresh and with several years under its belt now, still hasn't gotten its shit together with SwiftUI?
And the word is that nobody at Apple even understands how Xcode works at this point. It's a patched-together shitshow that desperately needs a ground-up rewrite... but we all know how those go. Remember the "ground-up rewrite" of Finder we were promised several major OS releases ago? Still waiting.
Also a quick search found this guy referring to it in 2009:
"I have always wished that I could change both the font and spacing between both sections and items. I was really hoping this would be possible with the rewrite of finder in SL 10.6."
This already happened. Finder was ported from Carbon to Cocoa many years ago. Carbon is now gone from macOS. Carbon died in the 32-bit to 64-bit transition.
I legit didn’t want to believe my eyes that it takes half a second after each code change before the IDE realizes whether a given change was correct or not. Also, you need clean builds quite often, and I even got a “type inference timed out” error message once, which I haven’t seen in any other language, even though I’ve dabbled with quite a few static languages with even wider-reaching type inference.
Well, this isn’t an interpreted language, it’s a compiled language; they’ve actually done some great work behind the scenes to enable live view debugging (Previews).
> but at the same time the docs are terribly lacking when it comes to handling any kind of edge case.
I agree that Apple’s documentation needs help (this year’s improvements are a nice step) but can we really expect them to provide extensive documentation for handling edge cases (you’re telling me UIKit docs offer that)?
My feeling is that Apple hates the developers that develop the software that makes them rich.
Forcing myself to be rational "hate" is probably too strong a word. If you are down the Apple cul-de-sac there is really no turning back. So investing in things like code examples for all of their API (there are a few, a very few) is a waste for Apple. It does not attract new developers and the old ones are stuck in the cul-de-sac.
I'm building an IDE in (mostly) SwiftUI, and have been using it since release, so I feel like I've worked with it more than most people.
A couple observations:
- SwiftUI is really complex
It's going to take you at least a year to get used to the declarative way it works, and be able to make UIs without struggling to figure out how to shuffle data around. If you look at SwiftUI examples/code, most of the complexity is hidden, but it's there, and you're going to have to interact with it to do anything non-trivial.
- Is SwiftUI the way?
SwiftUI has enabled me to build out huge amounts of beautiful UI quickly – faster than anything I've ever used. But you will run into things you can't do, forcing you to fall back to AppKit/UIKit. And the area where AppKit/SwiftUI meet will disappoint and frustrate you. I just got done refactoring a huge amount of my app to use the "AppKit App Lifecycle" instead of the SwiftUI App lifecycle since I need some specific windowing behavior. Now I can't use .toolbar, .menu, and other stuff, which makes things much more complicated for me and forces me to think about both SwiftUI and AppKit. Switching mental models like that, and trying to remember all the intricacies of both SwiftUI and AppKit, is rough.
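For readers who haven't done this dance: dropping down to the AppKit lifecycle roughly means owning the NSWindow yourself and embedding the SwiftUI hierarchy through NSHostingView. A minimal sketch (EditorView and all the window configuration here are illustrative, not the parent poster's code):

```swift
// main.swift: AppKit lifecycle, so no @main SwiftUI App struct.
import AppKit
import SwiftUI

struct EditorView: View {  // placeholder root view
    var body: some View { Text("Hello from SwiftUI") }
}

final class AppDelegate: NSObject, NSApplicationDelegate {
    var window: NSWindow!

    func applicationDidFinishLaunching(_ notification: Notification) {
        // AppKit owns window creation, so custom windowing behavior
        // is possible, but scene modifiers like .toolbar on a
        // WindowGroup no longer apply.
        window = NSWindow(
            contentRect: NSRect(x: 0, y: 0, width: 800, height: 600),
            styleMask: [.titled, .closable, .miniaturizable, .resizable],
            backing: .buffered,
            defer: false
        )
        window.contentView = NSHostingView(rootView: EditorView())
        window.makeKeyAndOrderFront(nil)
    }
}

let app = NSApplication.shared
let delegate = AppDelegate()
app.delegate = delegate
app.run()
```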
--------------
Regarding the performance issues of this specific post, my guess is they put everything into an ObservableObject, and used @Published properties to get bindings for their UI controls. This means any view with a reference to the ObservableObject gets re-rendered when @Published values change. Instead you need to silo your changes, so your entire App isn't redrawn every frame. This goes back to the complexity – it's not straightforward at all.
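A sketch of what that guess looks like in code (illustrative names, not the post's actual model): the first shape invalidates every observer on any change; the second isolates the high-frequency state:

```swift
import SwiftUI

// Anti-pattern: one app-wide object. Any @Published change (say, a
// slider updating many times per second while dragging) invalidates
// *every* view holding a reference to this object.
final class AppModel: ObservableObject {
    @Published var documentText = ""
    @Published var sliderValue = 0.0
}

// Siloed alternative: high-frequency state lives in its own object,
// so only the views that actually read it are re-evaluated.
final class InspectorModel: ObservableObject {
    @Published var sliderValue = 0.0
}

struct InspectorSlider: View {
    @ObservedObject var model: InspectorModel
    var body: some View {
        Slider(value: $model.sliderValue, in: 0...1)
    }
}
```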
Yeah, you end up designing models not to be logically separated but to isolate updates to views instead. In theory those should align, but they don't always. And the method of finding objects through the environment makes it all too easy to have big models that everything is listening to, which of course really hurts perf. It's so easy to hold SwiftUI wrong.
How do you silo the changes? The issue was that changing a property of an object caused the inspector to be redrawn. With old-fashioned UI libraries you would only change the value of one text field in the inspector. Can this be achieved with SwiftUI? To be honest, I'm surprised that it's that slow even if the entire inspector is redrawn. Surely on a modern computer you can redraw a couple of text fields at 30fps?
> Surely on a modern computer you can redraw a couple of text fields at 30fps?
No, this is actually the thing you should not be doing in SwiftUI. Ideally your text fields are not redrawing at 30 fps; actually, ideally nothing in your interface is doing this except when an animation is running, and even then only the components that are animating. SwiftUI makes it really easy to accidentally start redrawing things that don't really need to be redrawn far more often than you want, and that's how you get performance problems.
> Surely on a modern computer you can redraw a couple of text fields at 30fps?
Yeah this is the tricky thing, with how (I'm guessing) the code looks, you would think it's only updating the text fields. But in reality it's probably re-rendering the entire hierarchy using the @EnvironmentObject. So the SceneKit view gets setup and rendered every update of the Slider.
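A sketch of the failure mode being described (names made up): because the parent view reads the @EnvironmentObject, its entire body, SceneKit view included, is re-evaluated on every slider tick:

```swift
import SwiftUI
import SceneKit

final class DocumentModel: ObservableObject {
    @Published var lightIntensity = 1.0
}

struct EditorView: View {
    // The parent observes the object, so *this whole body* is
    // re-evaluated whenever lightIntensity changes while dragging.
    @EnvironmentObject var model: DocumentModel
    let scene: SCNScene

    var body: some View {
        HSplitView {
            SceneView(scene: scene)  // rebuilt on every slider tick
            Slider(value: $model.lightIntensity, in: 0...2)
        }
    }
}

// Injected somewhere up the hierarchy, e.g.:
// EditorView(scene: scene).environmentObject(DocumentModel())
```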
------------------
I have real-time color pickers for changing all windows in my app. Same with font sizes. It's possible to make it work, you just have to modify your design (which is unfortunate).
Ah I see, so each of the values that are shown in the inspector should become its own observable object? That sounds reasonable.
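Roughly, yes. An illustrative sketch (not the actual app's code): each field observes only its own object, and the parent holds plain references, so editing one value invalidates one row rather than the whole inspector:

```swift
import SwiftUI

final class FieldModel: ObservableObject {
    @Published var value: Double
    init(_ value: Double) { self.value = value }
}

struct FieldRow: View {
    // Only this row observes the field, so only it re-renders.
    @ObservedObject var field: FieldModel
    let label: String
    var body: some View {
        HStack {
            Text(label)
            Slider(value: $field.value, in: 0...1)
        }
    }
}

struct InspectorView: View {
    // Plain stored references: the parent body is not re-evaluated
    // when a field changes. The models are created and owned higher
    // up (e.g. with @StateObject) and passed in.
    let opacity: FieldModel
    let scale: FieldModel
    var body: some View {
        VStack {
            FieldRow(field: opacity, label: "Opacity")
            FieldRow(field: scale, label: "Scale")
        }
    }
}
```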
> But in reality it's probably re-rendering the entire hierarchy using the @EnvironmentObject. So the SceneKit view gets setup and rendered every update of the Slider.
I get that this is slower than only updating the single value, but what I don't understand is why even this is so slow? Browsers can render the right hand side of this inspector UI (https://chsxf.dev/assets/posts/5/map-editor-with-inspector.p...) at reasonable speed even if you rebuild the entire DOM for the fields in the inspector. I mean, it's one checkbox, one slider, and 7 text boxes and a bunch of labels. What is taking so much time here?
This isn’t quite the case: all changes are aggregated before the run-loop iteration ends (with the latest values winning, i.e. @State, etc.), and the view is only invalidated once commits are handed off to the render server.
What you’d need to be careful of is ensuring your @State or @Published changes don’t take so long that frames get skipped.
Out of curiosity, what framework/library/solution are you using for your text editor? I've been researching this space for a while and syntax highlighting text editors are rare and sparse.
> Out of curiosity, what framework/library/solution are you using for your text editor?
I'm still at the beginning of that part, but I'm doing it myself with TextKit 2 as well.
> I've been researching this space for a while and syntax highlighting text editors are rare and sparse.
I had the same experience as you when looking for information on how to go about it. I've come to the conclusion that, no matter what, an editor/IDE is going to be an ugly beast. This gave me some comfort in just forging ahead and trying to make good decisions as I go. And since I'm doing this all as a native app, I think I have a lot of wiggle room.
As far as syntax highlighting, I just recently started calling into Zig (the language I'm building this for), and using the tokenizer to do basic syntax highlighting. I'll need to go a bit deeper into the compiler to do more advanced highlighting, but it's cool to have it working.
> How did you do about providing autocomplete, etc?
I'm not there yet, but these are the things I'm looking forward to working on. I'm currently working on sort of "core" things and trying to design them well. Things like the settings, window management, a command palette, etc. Getting them close to right early on seems important.
I don't really do very technical posts, but I'm trying to blog a little about it: https://austinrude.com/tags/zig-ide/. My biggest issue is second-guessing using Swift/SwiftUI and not trying to do it all myself with Zig, leading me to procrastinate. But I'm pretty far along, and think I'll probably release something eventually.
This discussion feels incomplete to me without a mention of developments in the opposing camp: Google's Compose UI framework.
On Android, at least, Compose feels fully baked to us. Our shop is fully committed to new development on Compose UI; the bridging tools for legacy components work great, and they work great for hosting new Compose views in our legacy framework as well. The Compose team is engaged with the community, and is generally ahead of the curve on problems real engineers are having.
In addition to all that, the structure of Compose has led to its new tooling being used in a variety of other domains. Engineers outside of Google are wiring Compose up to TUI frameworks, to platform-independent view APIs, and even running Compose without any UI at all as an asynchronous programming framework.
I don't have any experience with SwiftUI to compare it to. As an outsider, I know that the naysayers are sometimes louder than happy consumers of a new technology. But I will say this: there's a whole narrative in the Android world pointing out that Compose and Compose UI are two different things. Compose is available to use and work with even if you never touch the concrete UI framework Google has built.
Does any equivalent narrative exist in the iOS world? Or is SwiftUI mostly just a UI framework?
The main issue with SwiftUI is sheer lack of maturity.
People don't think about it, but UIKit now has ~15 years of effort put into it, and got a serious boost from its shared roots with AppKit (even though big chunks of UIKit were freshly written, its structuring and API design were largely informed by AppKit), which has history tracing all the way back to the 1980s. Of course something as fresh out of the oven as SwiftUI isn't going to be able to compare in terms of polish or capability.
So for now I'm sticking to UIKit in the apps I'm responsible for. I think SwiftUI's day in the sun is coming, but it's not here quite yet.
That doesn't mean that there aren't issues with how it's being engineered and documented, though. It has serious shortcomings that need to be shored up, and hopefully that happens with its maturation.
I can’t speak for AppKit, but UIKit/Cocoa Touch was pretty rough early on. Major chunks of “standard” functionality were missing and standard widgets were somewhat inflexible. It was common to pull in third party libraries and write custom widget code for relatively basic things.
Based on my experience, the UIKit dev experience didn’t start to reach the level of completeness it’s known for now until some time between iOS 7 and 9… at least that’s when I remember the number of “required” third party libraries start to steeply decline.
> I doubt AppKit, UIKit or any of the predecessors were in a bad state for three whole years after their launch
I was there for Cocoa development in Mac OS X 10.3. AppKit was not mature. I wasn't there, but I have heard it was even less mature in '91, three years into NeXTSTEP.
AppKit was so not mature that Apple shipped its whole legacy UI framework Carbon for a decade to come, with even the Finder itself taking until 2009 to be rewritten in AppKit.
That was more a side effect of the likes of Adobe and Microsoft not wanting to rewrite their Mac OS flagship products in Cocoa than anything to do with maturity.
The declarative, "reactive" approach is good, and worthy of rolling out a new UI API even given that Apple has an excellent MVC-style API...
But it looks like the task was somewhat beyond the people who had the responsibility for rolling out SwiftUI at Apple.
We're several years in to SwiftUI.
Apple's in a tough spot now. I'm really glad I don't have anything too high-stakes dependent on it. (I only have a project with ~200 hours invested -- not zero! -- it's heavily UI, but it's tiny... maybe ~30-40 hours to port to UIKit, if need be.)
People nowadays praise Apple for their hardware, mainly their silicon - not their software. Maybe it's time for Apple to shake things up and promote someone else to VP of Software engineering?
I view it a different way: macOS and many of Apple's built-in apps/utilities are, at the very least, a less bad option compared to many others.
OS utilities like Image Capture, Preview, Apple's screenshot utilities, Spotlight, Quick Look, and quite a few others have been basically "killer apps" that make me want to use macOS over Windows for decades now.
Even basic areas like system settings have been a point of frustration for Windows for an extremely long time now, with Microsoft taking many years to truly migrate off of the Control Panel and deliver something half-decent in the Settings app. The Windows 11 iteration finally starts to feel like it's a little bit cohesive (but I still hate the Devices settings panel). The whole situation has really been a mess going back to Windows 8.
Another example: anytime someone says that printers universally suck I know immediately that they're Windows users because of just how much better Macs interact with printers and scanners and how much better and more reliable their configuration interface is. Apple single-handedly saved the entire printer industry from the depths of hell with AirPrint, before that Bonjour, and before that shipping OS X with preinstalled printer drivers.
Windows 10 couldn't decide which screenshot app I was supposed to use; thank goodness it's been consolidated after Windows 10 shipped two separate apps, both worse than Apple's tooling.
Plus, Apple has to get credit for the sheer number of decently high quality non-enterprise apps that Apple just gives you for free.
Microsoft doesn't make anything that approaches the quality you get from Apple's free iMovie, Photos, Podcasts, Books, iTunes/Music, Contacts, Mail, Pages/Keynote/Numbers (without paying).
I’m with you; however, macOS remains the best desktop OS and iOS remains the best mobile one, even after all those years. So they must be doing something right.
iOS lacks many features that have been standard on Android for years, and the only reason it feels so smooth is because the UI thread has pretty much the highest QoS that anything can ever have. iOS would rather drop your network call than drop a single frame.
Android variants all have things that iOS can only dream of having in five years (notifications was a fun one), just spread very unevenly throughout manufacturers. Samsung currently has a very good Android build.
When I switched from Android to iOS in 2016, I was shocked at how little was different, and I can only assume the gulf has narrowed since then. A lot of features Android users just assume iOS users don't have are there: vendor-agnostic password manager integration, Safari browser extensions, the Safari content (ad) blocker API (Chrome is restricting ad-blockers soon!), more complete home screen customization and widgets, lock screen customization and widgets (iOS 16, upcoming), custom third-party keyboards, a grouped/customizable notification system, and native wireless controller support for all three consoles.
Features like granular privacy controls and screen recording (with the exception of some obscure Android OEM builds) debuted on iOS first.
I'd be skeptical with anyone declaring either platform "best," I think they're both roughly equivalent.
However, I do think that anyone who uses macOS is insane to go with Android. There are simply too many useful integrations between the platforms to ignore, combined with the fact that, even if the iPhone isn't the best phone, it's usually in the top handful of choices in terms of the overall package.
Sometimes it's the little things that matter. The single most used app on all my Android phones I've ever had is Kindle. And one feature that I absolutely demand from any phone is that I can flip pages with volume buttons - when reading for long periods of time, it is much more convenient than swiping with your thumb. On Android, pretty much all the reader apps can do it. But, so far as I know, this is outright impossible to implement in iOS.
Personally, I don't think the OS should ever allow an app to hijack the physical volume buttons or other hardware buttons on a phone. That seems like an avenue for abuse.
(Tip: you can tap the screen on the right side to go to the next page.)
Personally, I think that my phone should be convenient for me to use.
I'm well aware of different ways to swipe pages. The reason why the hardware volume buttons are so convenient for reading is because you just put your thumb on the volume/page down button, and you no longer have to move it at all - only press down slightly every now and then. It's much more ergonomic for long-term reading than having to raise the thumb every time, even to tap.
That seems like a really small benefit at a potentially high cost, at least to me.
You can imagine that someone might create malware or otherwise hostile app that plays a loud/embarrassing sound and hijacks your volume buttons.
Having to lift a finger to turn a page seems like a really small problem in comparison to that one.
Smartphones are general purpose devices and have to make tradeoffs like this all the time. They can't just greenlight every useful function that every type of app might want. IMO if you want an e-reader, get an e-reader.
For example: Let's say I'm a private detective. It might be nice for there to be an app that records audio and video at all times without any visual indication of my phone doing so. However, having that kind of OS level permission available to apps on an app store is probably a bad idea. I'll need to go out and buy a dedicated recording device.
Sure, we can argue about where the line gets drawn. If you like Android for allowing apps to modify hardware buttons, fine. But, I would prefer a device where physical buttons perform consistent functions. A middle ground might be some kind of buttons dedicated toward custom or app functions – but, to me, why bother when the entire screen is a customizable button?
All I can say is that I've been using Android for well over a decade now, and not once have I seen malware that hijacked volume buttons. It might actually be a permission the app has to request - I don't remember.
Either way, this was meant as an illustration of how small factors can affect decisions. I have had an iPhone as my primary phone for a few months, and this one thing was the single biggest issue I had with it at the end of the day - not that there weren't others, and some were actually more annoying when they happen, but this one is at the top because it's something that's constantly in my face. Thus, I'm not going to buy another iPhone for this reason alone; my response to "you're holding it wrong" is "you're making them wrong".
iOS’s security was praised by GrapheneOS’s creator many times; I believe, according to him, the best choices for a secure mobile phone at the time of writing were GrapheneOS on the latest Pixel or an iPhone.
So you want icons to be able to overlap each other, or be different distances from each other, or what? I'm trying to figure out what exactly you're trying to accomplish. What would be the benefit of totally arbitrary, down-to-the-pixel placement ability?
Just talking about being able to grab an icon and position it anywhere I want on the screen (possibly snapped to a grid).
Right now, icons have to be aligned in rows anchored in the top left corner, so that whenever you insert a new one, all the ones after that get pushed right or down, messing up your entire layout.
We've had this ability on desktops since Windows 3 thirty years ago, and Android has had it since day one. As does macOS. Why not iOS?
Why do your icons move around so much? I've added or removed applications from my iPhone and the only limitation I see is that they have to be in a grid with no empty spaces between them. Otherwise the same application icons are in the same place, every day, year after year.
Do you have a screen shot that could illustrate what it is you want to be able to do?
To answer your question: No, I do not, ever. I don't want shit all over my desktop. I have them auto-arranged in a grid. On Windows they start on the left, on Mac they start on the right. So when I (for example) take a screen shot and it gets deposited on the desktop, I know where it's going to be.
But now that you finally gave a concrete example of what you want to do, I can at least picture it.
Yes, this is a very real downside, but Apple has put effort into making the home screen more customizable than it was. It has an app drawer, the ability to add/remove home screens without filling one up first, and widgets: the major omissions relative to Android, which has had these for many years.
And, yes, it would be really nice if Apple had a system to install third-party launchers, there's no denying that.
I know this is really silly, but you can definitely work around the issue and effectively have the same end result as Android if you really want it:
Overall, every individual omission from one platform to another is going to be a question of what is important to the buyer.
For some, it might seem ridiculous that, in fifteen years, Apple still doesn't offer a truly customizable home screen. That's a fair criticism. At the same time, it's a fair criticism that it took until 2021 for Google to release a phone that will get 5 years of security and feature updates (Apple never guaranteed this directly, but the iPhone 6S delivered that level of support from its 2015 launch until the release of iOS 16 this fall, which drops iPhone 6S support: 7 years of full feature and security updates).
Meanwhile, if you bought a Pixel 3 in 2018, you got your final update this year. I personally think that's downright unacceptable considering the maturity of smartphones when it was released.
The problem with me making comparisons like this is that they'll always seem like a cheap whataboutism, but that's generally how these feature-to-feature comparisons work. For me, in 2016, I saw a situation where Google was not delivering everything I needed in a smartphone, but I also recognize that other people have other needs.
It's not just for cosmetic reasons, there are practical, actual benefits from a user experience standpoint.
For example, I put the apps that I use most often on the right-hand side of the screen, where my thumb can reach them faster. I can't do that on iOS (well, I can, but then any new app I put on the home screen will mess up the entire arrangement and I will spend another ten minutes calculating what to insert where to get all these icons back on the right side of the screen).
I just don't understand how Apple, a company that prides itself on great user experience, is still putting their users through this UX hell fifteen years after the release of the iPhone, and they don't seem to think it's an important feature to add.
> iOS lacks many features that have been standard on Android for years and the only reason it feels so smooth is because the UI thread has pretty much the highest QoS that anything can ever have. iOS would rather drop your network call than drop a single frame.
This is a bit of a simplification but in broad strokes it is kind of correct. And it turns out this actually makes good UI!
You must be kidding. There's no excuse for Android's incompetent architecture, which orphans millions of devices with every release because it apparently lacks a competent hardware-abstraction layer and driver model.
You can install creaky old Windows on millions of devices with disparate hardware configurations on the day of its release, but Android users must wait weeks, months, or forever for their telcos to dribble out hacked, proprietary versions of Android for every model of device ONE AT A TIME. Seriously, WTF? And this is from the great "open-source" OS that was supposed to free us all from vendor and telco tyranny.
Welcome to the wonderful world of ARM and driver blobs. Half of the waiting time is waiting for Qualcomm to release a blob (and that's if they even want to), the other half is including it in your build, and eventually making your fork's changes if you want to do so. For every single SoC in your lineup. Then testing if they can support the new requirements of Android.
Apple would have the same problem if iOS was open source and distributable by anyone, their chips being able to be made by anyone. Once again, Apple has picked the easiest (and least open) path, so, yeah, they can upgrade easily.
Thanks for the info. Why can't the driver be abstracted enough to allow a new OS install as long as the ABI of the driver doesn't change? Major Windows versions installed over the same drivers for years. I've never written a hardware driver, so I'm left to guess or speculate.
No, because generally very few/no people want to install a totally untested OS onto their phone. But the work needed by phone makers to update to a new Android is drastically reduced, so they can reduce the latency by a lot.
Yeah, Android is light-years ahead in a few things, but hardware- and privacy-wise iPhones are simply that much better. I try to reevaluate the platform every now and then, but the closest I got to switching was a Pixel 6 with GrapheneOS, which I decided against due to frequent hardware bugs.
The need for low latency audio on your phone is... debatable.
Power efficiency, absolutely, and a lot of this is due to them not really giving a damn for a while. Which was great! You truly could do anything with your phone, run things in the background forever, etc. Nowadays, Doze, Android Resource Economy and the need to go through Foreground Services/WorkManager makes it quite a bit harder to do.
> The need for low latency audio on your phone is... debatable.
No, it's just not important to you.
Garageband seems to be pretty popular on all the iOS form factors. iOS has for most of its life had a strong offering of music composition and production apps. iOS has had a high quality media framework for years, Android has not.
And if the need is so questionable, why did Android finally get around to addressing it?
And this seems like an odd argumentative path for an Android fan to go down - downplaying the importance of more niche needs outside social media and cow clicking as not important. iOS does just as well or better for messaging, Instagram and TikTok. If you only care about the smart phone basics for the masses iOS ecosystem is pretty hard to beat with better battery life and support.
I agree. In my experience, macOS is one of the most reliable operating systems I've used in terms of lack of crashes or spurious bugs; the most common bug I see is my mouse pointer disappearing, which is fixed by opening the tasks view.
If only they could do something about the truly terrible window switching, where you have to hit the same icon in your Dock multiple times and, with a bit of luck, eventually get to the window you wanted. Why can't we have window previews like Windows has had since Windows 7, or just about every DE on Linux?
Right or control click the dock icon and choose the one you want. You can also press control-down arrow key to show current app windows or up for all windows/desktops.
It stagnated enough for Android to come really close, while the other issues didn't improve much (sideloading will arrive over Apple's dead body, default-app settings are still too few), and there is no end in sight to the constant nagging for more services revenue.
Nowadays both OSes are mostly on par; for instance, Android 13's per-app language setting is one of those QoL things that used to be iOS's turf but are now more prominent on Android.
Even though I really dislike how Google screws Java developers with their Android Java approach, I really appreciate their "my way or the highway" regarding the use of managed languages on the platform.
So I wouldn't consider iOS the best mobile OS in that regard, as probably no one enjoys being suddenly dumped back to the home screen due to pointer errors in Objective-C.
Not really. Remember that their "competition" at the time the iPhone came out was utter shit, and Android (although a far cry from those days) is still pretty much shit.
I remember its existence but never used it. It was also not popular in the USA. Motorola dominated, and if you looked at their offerings and their syncing software it was incredible how utterly incompetent it was. For example, their contacts didn't have addresses... I mean, WTF? And the syncing software (to Outlook, for example) just straight-up didn't work. I had a Motorola phone with 32MB of memory, and it couldn't hold the data my 8MB Handspring Visor could.
When the iPhone came out, that was the dominant landscape: PDAs and horribly incompetent phones. Internet browsing meant WAP (a clumsy attempt to dumb Web pages down enough to show on tiny, all-text phone screens) or a Blackberry. Blackberry rested on its laurels and a shitty, shitty browser until dead.
There are plenty of things that Apple remains bafflingly ignorant of, or just petulantly refuses to fix on its mobile devices. Great example: The iPhone, after 15 years, STILL doesn't notify you of missed calls. That's right: You don't even have the OPTION to get audible alerts of missed calls, but meanwhile you can get up to 10 alerts of missed TEXTS. That is simply stupid.
Then came the Apple Pencil... and no support for it in iOS. Every app developer had to implement handwriting recognition independently. WTF? Why didn't Apple simply allocate a square area in the on-screen iPad keyboard to accept written characters, which Palm OS nailed in the '90s?
Annnyway... I looked up Symbian because I work in Qt a lot now, and it was originally from Nokia. I knew it had been developed for phones, but could not imagine why. That's because I didn't ever interact with Symbian or see it in the wild and know it was from Nokia. So thanks for the note. I think Qt is pretty cool, and my team just made a good desktop app with it. I'm curious why the C++ Symbian experience wasn't good.
Symbian C++ had a couple of issues, namely several restrictions from accommodating C++ to a microkernel before the days of C++98, and then growing organically from there.
So there were several idioms and restrictions on how the language could be used, and the toolchain went through several iterations, MS-DOS batch files, Perl scripts, Metrowerks based compiler, eventually replaced by an Eclipse based IDE (after a false start), Carbide.
Qt came later into the picture as Symbian was being modernized, and after a POSIX compatibility layer (PIPS) was added into the platform.
You had the security rules that iOS "invented", C++, Java and Python toolchains, the ability to run an HTTP server on the phone, and 3D support for C++ and Java applications.
But Symbian C++ was a bit of a pain (see link below), it was being modernized via PIPS and Qt (as mentioned), but then Elop came to Nokia, the famous burning platforms memo came out, and everyone went away.
Nokia's development culture was pretty much anti-MS, so most went elsewhere instead of bothering with Windows Phone 7.
I agree, SwiftUI's progress is just very slow. Especially their multiplatform support in Xcode 14. UI will look good in iOS but on macOS it's just very bad.
Any UI code running on a modern CPU should take close to 0ms to update, no matter how many buttons, toggles, sliders and shadows.
For the rendering part, it depends; I'd say it should take between 0.5 and 2ms with many layers, a lot of transparency, and not much care for optimization.
In my experience working with IMGUI-style libraries, it's definitely feasible to perform full layout and rendering for complex UI in 1-2ms at most. For simpler applications it should be basically free. It's depressing that people are willing to accept complex, slow layout APIs at this point considering it's been possible for stuff to be fast for a long time.
Perhaps surprisingly, the most expensive part is usually text layout and rendering - text shaping is just really expensive and in the industry standard libraries it can take a long time to lay out a string, so you have to aggressively cache and do fancy things to get good performance. ASCII text is fast, though.
Whenever people come up with low numbers like that their mental model is really what they see as sticking some ASCII on the screen with a couple of colors blitted in. Doing text rendering that is actually good, transparency, shadows, the things that make a UI worth using makes things somewhat more expensive. It doesn't have to be overbearingly expensive–modern computers are fast, after all–but it's definitely going to be more than what you think an imgui thing is going to take. And this is pretty important when you're making an OS, otherwise you're really making something that a lot of people will not want to or be able to use.
I've been doing complex UI with a few different IMGUI libraries for years now and they can all lay out and rasterize complex scenes in under 2ms, even when written in C# instead of performance-tuned C++. Modern hardware is just plain fast, in particular drop shadows and transparency are effectively free if you're drawing them using the GPU. (I say 'effectively' because if you opt out of transparency and antialiasing, you can do front-to-back rendering to skip drawing stuff entirely... but I don't know of many cases where people do that since it's not that much faster.)
Well, text layout is not trivial. But computers are incredibly fast when dealing with small amount of data.
I don't see why all of text layout calculations could not be done at maximum speed with everything in L1. Probably with quite a few branch miss-prediction, but still.
The amount of work you can achieve in 1ms on a modern CPU is astonishing.
My intuition is that there are probably a bunch of O(N^3)-complexity algorithms in typesetting for things like ligatures, and that's where the time goes... but I haven't written a shaping engine.
SwiftUI can be ridiculously fast – closer to a 3D framework than a typical 2D windowing framework – because it can reduce to a set of uninterrupted draw calls.
Rendering is not why SwiftUI can be slow.
If you don't know what you're doing, it's possible for SwiftUI to think your new view is unrelated to the old view and everything needs deallocation and reallocation.
The cost here is not from rendering speed — as you point out, UI rendering is so fast in most modern environments, even on a bad day, that drawing the UI itself is pretty much never the actual load that slows anything to a crawl.
Based on the author's description, it sounds like the real cost here was from way over-diffing a set of models that weren't very efficiently bound to the UI. This is a very common source of bugs in all declarative frameworks.
In SwiftUI, the consequences can be particularly bad, because there are a couple somewhat innocent-sounding ways of injecting dependencies into your UI that actually invalidate the whole thing on any value's change and cause major updates (including, if a custom "Representable" that uses its own GPU tools is not implemented carefully, potentially reallocating all kinds of buffers and drawing tools).
This is all stuff that you learn how to deal with as you get used to the framework and learn its more advanced tools, but adapting to this part is not really something there's a lot of documentation for.
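A minimal sketch of the over-invalidation pattern described above; all type and property names here are illustrative, not from the post:

```swift
import SwiftUI

// Hypothetical model: any @Published change invalidates every view
// observing the object, not just views reading the changed property.
final class AppModel: ObservableObject {
    @Published var items: [String] = []
    @Published var scrollOffset: CGFloat = 0   // changes constantly
}

// Over-invalidating: this list's body re-evaluates on every
// scrollOffset tick, because it observes the entire object.
struct ItemListCoupled: View {
    @ObservedObject var model: AppModel
    var body: some View {
        List(model.items, id: \.self) { Text($0) }
    }
}

// Narrower: the parent passes only the value this view needs,
// so SwiftUI can skip it when unrelated properties change.
struct ItemList: View {
    let items: [String]
    var body: some View {
        List(items, id: \.self) { Text($0) }
    }
}
```

The same trap applies to `@EnvironmentObject`: inject it as low in the tree as possible, or hand children plain values instead.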
The author talks about SwiftUI on macOS, which I also find to be much much more buggy than what's on iOS.
Very unexpected things happen, like items in a specific region of the app's UI no longer responding to clicks after you interact with them.
However, I find it fine on iOS, and the issues are usually around Apple changing something in the behaviour of the UI or API and breaking it, so you need to fix it for specific iOS versions. That's what you get when a framework is not mature yet, I guess.
That said, I don't think there's a turning back. It might be buggy but it definitely feels like the right way of building UI.
Oh, about the performance: you can build very sluggish or very responsive UI with SwiftUI, because there's a night-and-day performance cost difference between re-calculating and re-drawing the UI versus adjusting a property of it. Always try to structure your components so that changes happen by changing passed values instead of inserting or rebuilding views.
SwiftUI works quite a bit like React, many performance lessons from React will translate into SwiftUI.
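A minimal illustration of that advice, keeping a view's identity stable and varying its properties rather than swapping in structurally different views (the names are illustrative):

```swift
import SwiftUI

struct Badge: View {
    let highlighted: Bool

    var body: some View {
        // Cheap path: one view whose identity never changes; only its
        // inputs vary, so SwiftUI diffs attributes (and can animate
        // them) instead of destroying and recreating the view.
        Label("New", systemImage: highlighted ? "star.fill" : "star")
            .opacity(highlighted ? 1.0 : 0.5)

        // By contrast, an `if highlighted { ... } else { ... }` with two
        // structurally different branches gives SwiftUI two distinct
        // views, torn down and rebuilt on every toggle.
    }
}
```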
Tying the SwiftUI version to the iOS version is seriously asinine. In the Android world Compose is a regular library so devs can pick the version. That's partly why there has been much more adoption of Compose even though it was released years later than SwiftUI.
I wonder what is the technical justification for it too, if any. I mean, sure - you can save space by making it a dynamic library that is shipped with the OS but considering how immature it is, shipping a specific version of it with the app should be an option IMHO.
It's probably the same as the technical justification for why Safari updates are bundled with iOS and not separate, why Xcode only runs on macOS, why no other browser engines are allowed on iOS, why macOS can't have separate scroll directions between mouse and touchpad, and many many others. It's Apple's ecosystem, nobody is asking you, you're holding it wrong.
I've worked on mice before and frankly I have no clue what sort of technical requirement would lead to an upside down charging requirement and disabling the functionality in the meantime. Can you explain what that reason might be?
Also if most people can't just pick up the mouse and use it, it's a broken design. In my particular case, it's so much smaller than my palm that I'm sure any grip would result in RSI.
Simple: the front is not thick enough to accommodate a common port like USB or Lightning. If you put it on the sides, it defeats the purpose.
Changing the design to accommodate a port in the front doesn't make sense, because the primary function of the device is to act as a human-computer interface, and optimising the design for that purpose is paramount. Charging is not a primary function but something we have to do with all electronics, so a long battery life and a short charging process is a good enough solution (the battery lasts about a month, charging takes about two hours, and even a few minutes of charging gives hours of use). Why would you compromise a design for something that needs to be done occasionally and can be done out of service (i.e. at night, when not using it)?
So put the port on the side where you aren't supposed to grip it. If we assume that being totally nonfunctional for a couple hours a month is acceptable, merely being a bit inconvenient for the same period is surely an improvement.
Also, we don't agree that the Magic Mouse is a good human-computer interface, because I'm starting from the position that causing physical pain automatically precludes it from that category. Another example of an interface in the same position is laser keyboards. They're also very cool designs, but the fact that tapping on solid surfaces starts to hurt after a while precludes them from being 'good' HCI.
You can't put it on the side: the cable would interfere with the keyboard or a laptop to the left, and it wouldn't work at all for left-handed people.
I mean, if you really think that charging a few minutes a day, or leaving it charging overnight once a month, is a deal breaker, simply don't buy it; but this is not a bad design. It is very unrealistic to expect that the mouse will be used 24/7 every day forever. If you have this use case of using a mouse 24/7 every day, this must be some kind of industrial operation and obviously this mouse is not for you, but for everyone else it's a non-issue.
Won't work at all for left-handed people, as opposed to now where it works for no one during the same period? Seems like an improvement, despite my doubts that it truly wouldn't work for left-handers.
Anyway, hopefully we can avoid wild mischaracterizations here. My expectation obviously isn't that things work 24/7 indefinitely. Think about how the typical user is going to discover that the battery is low. Either they'll get the notification just before it dies and have to charge it "soon" [1] or the mouse will simply die and force the matter. In some cases (think multi-user computer labs), the person who receives the notification and the person who has to charge the mouse might be two different people. I think it's reasonable to find waiting around on a mouse rather than whatever they were planning on doing irritating.
A few observations here. The first: this product is an optional thing. There are many mice manufactured, they all work with Macs. I, personally, have a Kensington. The last Apple mouse I had possessed a tail, and was only okay, not, for instance, good.
The next one: the charge lasts a month to six weeks with normal use.
Next one after that: it's your unlucky day and you didn't idly think "oh, hey, I'll plug this in over lunch" on week three like a normal person. What you do is plug it in, and take a ten minute break. Everyone has ten minutes worth of things they can do. This gets you to a real break, and now you're good for another six months. Yes, I said months, just plug the damn thing in sometimes. Set yourself a reminder. Don't be a vegetable.
The amount of time you've dedicated to being irritated online about a product you don't use is longer than the irritation a typical user of that product will experience the entire time they own it.
This would be a very different forum if the only things we discussed were the bare necessities of life.
Anyways I'm not irritated. This conversation was started because I asked what the good reasons behind obvious design issues are. So far the conversation has been about how you can work around them. That's great, but not the point.
Sure, better notifications can help. Maybe once idle, it can send a notification to the phone to make the user put it on charge overnight.
I had an occasion where the battery died by surprise. That's why I keep repeating that the device works for hours after a few minutes of charging; you don't have to fully charge it before use.
>Simple, the front is not thick enough to accommodate a common port like USB or Lightning
Well, that's the broken design part.
>Changing the design to accommodate a port in the front doesn't make sense because the primary function of the device is to act as a human-computer interface and optimising the design for that purpose is paramount
There's nothing about HCI that prevents a different design.
In fact, the Magic Mouse is one of the least ergonomically optimized mice out there...
It doesn't matter how large the battery is - at some point, someone forgets to charge. With any other mouse, you plug it into the charger and continue to use it while it's charging.
It certainly can have separate scroll directions between (in my case) trackball and touchpad. You have to download a free menubar widget to get it, but so what.
This is in contrast to focus follows mouse, which you really can't do, and I wish it weren't so. In general though? The OS provides a lot of affordances for automation and customization, it exposes some of that through preferences and leaves the rest for developers.
A good example is what you can do to the keyboard from System Preferences (which is quite a bit) vs. what you can do using Karabiner.
>I wonder what is the technical justification for it too, if any.
Well, apps using it get a uniform look, that uniformly changes when the OS look is updated so that it matches it, and more importantly, they and also get to have the same widgets and widget functionality [1].
Unlike, say, Windows where you have 20 generations of MS GUI lib versions, with different looks and behavior, running at the same time, even from MS apps themselves...
[1] Yes, Apple has its share of inconsistencies (e.g. Swift UI vs regular Cocoa UI). But nothing like what would have resulted if they allowed apps with different historical macOS UI libs/versions to be installed at the same time independently from the OS UI libs.
Windows also ships many of those generations of GUI libs with the OS. Your Win11 install will have user32.dll, VB6, WinForms, WPF, and WinRT (might also have MFC, not sure about that one). This is completely orthogonal to look-and-feel.
Not completely orthogonal, as several different generations of MS GUI APIs implement different look and feels.
And bundling it with the OS means devs aren't forced to update their apps to the new APIs. It also makes it impossible for MS to automatically switch the look and feel (they could try for trivial things, but they wouldn't be able to properly accommodate old UIs based on pixel positions, nor would they be able to offer many of the new features given older widget designs).
Style changes can quickly become usability and accessibility issues. I'd rather my app continue to work as well as I tested it than have to retest everything on every OS patch.
In what world would security-critical parts affect the UI? What does my UI care if the system libcurl changes? SwiftUI is a UI framework; keep that stuff separate.
My guess would be a monetary justification. After all you can't just have developers working on year old laptops and devices when you can make them buy new ones by making this one simple change.
These kinds of "conspiracy theories" about what amounts to negligible money always make me cringe. Not that companies don't do shitty things for money. But they don't do things like that for negligible money compared to their other operations, especially when there are far more important reasons to do them...
The reason is platform coherence and control, and lock-step updates. To avoid cases like this:
...and not the insignificant money that could be made from devs updating a laptop a little earlier (especially since you can trivially use a still supported 5-6 year old laptop anyway with the newest Apple OS, and you wouldn't see any major performance issue, except in cases like the Intel to ARM switching. So what would they gain? Developers forced not to use an 8 or 10 year old laptop? As if they would?).
Google was forced to do it because, contrary to iOS, Android updates only take place when one buys a new phone, for all practical purposes, even if Google pretends we have lots of nice OEMs doing updates.
They were not forced to do it because of their OS woes. They just did the correct thing.
Apple's not doing the correct thing. SwiftUI just created a barrier where people pre-iOS 13 lose access to a lot of apps, and those apps will be raising their minimum version a lot more often than before. So if anything, you could consider this a ploy by Apple to get people to upgrade their phones.
Nope, if Android updates were a reality, there was no need to ship a system library with the application.
That is the whole point of Jetpack libraries, to ship newer versions of Android with the application, with polyfills, as the OS updates will never happen.
> Very unexpected things happen, like items in a specific region in the App UI stop responding to clicks next time once you interact with them.
I've been able to work around this by sticking a .id() on the view in question; I've seen similar issues with views falling out of the key view loop when you have a bunch of dynamic forms.
My SwiftUI codebase for https://equater.app definitely has some "// TODO: when apple fixes X" comments, but there's no denying the MASSIVE developer velocity I gained from using the framework.
If the bugs are bad enough I can always do something custom in UIKit via UIViewRepresentable.
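A minimal sketch of that escape hatch, wrapping a UIKit control behind UIViewRepresentable; the choice of UITextView and all names here are illustrative, not from the app:

```swift
import SwiftUI
import UIKit

// Wrap a UIKit view for the cases where SwiftUI misbehaves or lacks
// a hook. A UITextView stands in for whatever control needs
// UIKit-level behavior.
struct LegacyTextView: UIViewRepresentable {
    @Binding var text: String

    func makeUIView(context: Context) -> UITextView {
        let view = UITextView()
        view.delegate = context.coordinator
        return view
    }

    func updateUIView(_ uiView: UITextView, context: Context) {
        // Guard against redundant writes: updateUIView runs on every
        // SwiftUI update, not only when `text` actually changed.
        if uiView.text != text { uiView.text = text }
    }

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    // The coordinator funnels UIKit delegate callbacks back into
    // SwiftUI state.
    final class Coordinator: NSObject, UITextViewDelegate {
        var parent: LegacyTextView
        init(_ parent: LegacyTextView) { self.parent = parent }
        func textViewDidChange(_ textView: UITextView) {
            parent.text = textView.text
        }
    }
}
```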
I like what SwiftUI will become, but I'm not there yet.
The current project that I'm working on is pretty big. It peaked at 40 screens, but I'm trying to get it down to half that.
I've been working on it for two years. I also wrote one of the backends (and most of another).
There was no way that I was going to start on a project of that complexity on SwiftUI, which, at the time I started, had only been demonstrated in a few small projects. I already knew that UIKit could do it.
But I think that SwiftUI will become the standard, sooner or later. I look forward to being able to write more reactive applications in it. In my experience, if I use UIKit, it's a fairly bad idea to stray from the old MVC model (This, I have learned. <Pulls up shirt> See this scar?).
People burn an astounding amount of CPU and brain cycles pretending that UIs are something they are not. (i.e. "pure" functions)
I hope Apple never goes the way Microsoft did with their fad UI toolkits that utterly destroyed developer trust in native Windows development (MFC, WinForms, WPF, UWP, WinUI, .Net MAUI). They are pretty wise for keeping SwiftUI be the "for kids" vanity UI toolkit to lure in React webdevs, while keeping UIKit and AppKit for serious stuff.
> They are pretty wise for keeping SwiftUI be the "for kids" vanity UI toolkit to lure in React webdevs, while keeping UIKit and AppKit for serious stuff.
At WWDC 2022 they specifically said that Swift+SwiftUI is the best way to build apps, and the future of their platforms. I thought AppKit/UIKit would stick around, but I guess not.
---------------------------
Edit: Here's the direct quote from the "Platforms State of the Union":
> We're continuing to expand our adoption of SwiftUI across our apps and system interfaces. For example, iOS's new Lock Screen widgets were designed from the ground up using SwiftUI. The new Font Book app was completely rewritten with it. And the modern, forward-looking design of the new macOS System Settings app was built using it. Swift and SwiftUI were designed from the start to provide a single, native language and API for all Apple platforms. You can learn them once and apply them everywhere. Whether your vision is to provide quick access to information at a glance on Apple Watch, productivity tools on MacBook Pro and iPad, new experiences on iPhone, or a new way to relax with Apple TV, Swift, SwiftUI, and Xcode provide a next-generation integrated development platform to help you build apps for all of our products. Now, if you have an existing app, it's easy to adopt these new technologies incrementally. And if you're new to our platforms or if you're starting a brand-new app, the best way to build an app is with Swift and SwiftUI.
The slide had a Swift, SwiftUI, and Xcode logo with nothing else.
I'll start getting concerned when they rewrite any of their serious apps in Swift(UI). (i.e. Logic, FCP, Xcode, Finder, Instruments, iMessage, Mail, Calendar, etc.)
> The Objective-C language, AppKit & UIKit frameworks, and Interface Builder have empowered generations of developers. These technologies were built for each other, and will continue to serve us well for a long time to come, but over time new abstractions become necessary. For a while now, you've seen us hard at work defining the next generation of integrated language, frameworks, and tools: Swift, SwiftUI, and Xcode Previews.
>
> Tight integration in a development platform like this requires that all three pieces be designed and evolved together, both driving and driven by one another. Swift result builders were inspired by SwiftUI's compositional structure. SwiftUI's declarative views were enabled by Swift value types. And Xcode Previews was specifically designed for, and enabled by, both. Now, the result is the best development platform that we have ever built. And this year, Swift, SwiftUI, and Xcode all have fantastic updates that take this vision further, and make it even easier for you to build great apps for all of our platforms. And it all starts with Swift. Now Ben from the Swift team is gonna tell you all about what's next.
The ironic thing is that functional reactive idiom is not about pure functions. It's about dealing with the impurity only once, at state management.
But if you don't keep track of the impurity dependencies on your language, you will need to rediscover those dependencies during compilation, and besides surprising the developer all the time, that's a really nasty problem to solve. That's why it's always solved badly.
MFC? As somebody who has worked with MFC in some way or another for the past 25 years, there are worse examples out there. The beauty of MFC was that it was a pretty simple layer on top of Win32, and the source code was provided. You can continue to use it with some of their newest stuff using XAML Islands.
Ironically, it is still the best way to do C++ GUI development on the Microsoft stack, after they killed C++/CX and replaced it with the pre-historic C++/WinRT tooling (a return to the ATL days, but without the VS tooling support).
However for a real nice C++ GUI development experience on Windows, going with C++ Builder or Qt would probably be a saner option, as the WinUI team doesn't seem to get it, and MFC only gets minor updates nowadays.
I just don't get what purpose it really solves over UIKit. Everything seems far more complicated in SwiftUI, and the data dependence problems it solves did not ever really seem to be a huge issue in UIKit anyway.
Apple's sell for SwiftUI was "it's like going to a chef who knows how to cook for you" instead of "cooking yourself". But is that really a good thing? Do we really want to be taking less control of our applications when we write them?
The worst part is all these tutorials that have to combine SwiftUI and UIKit because the former is very buggy in certain cases, like navigation trees on iOS have really janky animations using SwiftUI. So you have to coordinate all your Views using UIKit. It just seems barely easier to use SwiftUI most of the time.
EDIT: Basically, in summary, it seems like a very complicated (look at the crazy language features they had to add for it) plus very opaque ("it magically does it for you just how you wanted") way to write the same apps you were already writing fine in UIKit and AppKit
I have some friends at Apple... The story goes something like this: SwiftUI came about because someone who has never been a software engineer, got concerned about losing mindshare to ReactNative et al. So this person created a "task force" to deal with the problem by tacking something on top of UIKit that resembles React, to create a "friendly" ingress funnel into the iOS world.
I recently began developing with Flutter for a project. And whilst I don't love Flutter/Dart that much, I must admit that Google has done an amazing job of providing resources and documentation for the framework!
The whole Flutter community and available resources feels very rich, welcoming and embracing!
Tech like Flutter is fresh and exciting to a lot of people, and Apple is right to be worried!
So say what you will about Flutter/Dart, love it or hate it, the docs and community are excellent!
I've recently begun developing with Flutter as well. I'm not sure about the community, but you're right, the resources and guidance provided by Google are excellent, especially for beginners.
There’s a lot more variety of learning resources compared to other languages; there are texts, workshops, videos, code labs, etc. It’s also nice seeing members of the Flutter team facilitate some of the workshops themselves; it’s refreshing to see developers being involved in educating users, instead of ‘outsourcing’ everything to others.
Writing controllers is complicated and underspecified in UIKit and AppKit; AppKit has old unmaintained "bindings", UIKit doesn't have them at all, and it seems to lead to third parties reinventing the whole stack rather than using any of it.
> I profiled the whole thing and discovered several things. First, the view provided by the selectable object was completely recreated with every redraw. I gained some performance back by caching it, but things remained barely usable.
That... is exactly the same thing with React. You don't notice everything is redrawn until suddenly everything is unbearably slow. And then it's useMemo etc. galore.
I'll withhold my judgement on SwiftUI for now though.
IMO, you shouldn't use React-like tools in performance-critical UI code. Direct DOM manipulation in such cases is a much better and more maintainable option. I.e., it's easier to maintain straightforward direct DOM manipulation than all the tuning around making a React-like system more performant.
To write code quicker, and make it easier to maintain. If it's a moderately complicated UI, then I could code it up much faster in React (my guess is around 10x). IMO, React isn't about functional programming, it's just a good DSL for writing UI.
React Native doesn't seem to have any degree of simplification vis-a-vis regular Android Views for example.
On the web - that's a different story entirely, but when doing strict comparison I just don't really buy it for mobile apps.
The amount of code is similar, but with React you have to deal with extra layers of abstraction, and some things are obfuscated by the framework. No performance gain, and often a performance loss.
> To write code quicker, and make it easier to maintain.
Yes, it's faster to write. Is it easier to maintain? Doubtful. Especially when you inevitably run into issues where your whole UI re-renders a couple hundred times a second, and you have to go in and butcher everything with useMemo, caching, etc.
On the web, frameworks like Solid.js are exploring these fine-grained reactivity approaches to actually re-render only what's needed, and not huge chunks of UI.
I would call Svelte a React-like system that actually does do direct DOM manipulations and works great for performance-critical UI code as well as everything else.
I like SwiftUI even if most of my views are still NSViewRepresentables, it makes the application structure so much nicer and I don't have to use Interface Builder as much. I know you can use AppKit without Interface Builder but it's more annoying.
I agree. The amount of plumbing behind the scenes is really complex. This immediately becomes apparent if, for example, you want to use @AppStorage with something that isn't a very simple type. Then suddenly you are in no man's land, reading Combine documentation with no examples.
I just wish that Apple had used this big effort to instead update their UI tooling: 1) bring stylesheets to UIKit/AppKit, and 2) let us define UIKit/AppKit views with simple, straightforward XML like Android or XAML.
What surprises me the most is that Apple didn't open source it, even though many years have passed since its unveiling. I don't see any risk for them here: the library could be used only on Apple platforms, it would let users submit fix PRs, and it would make debugging (by framework users) much easier.
Perhaps there aren’t enough internal sponsors of open source initiatives now that Chris Lattner has left? (I of course don’t think he was the only one pushing for it, but I also assume his departure triggered/coincided with other events that might not have been made public, and shuffling happened in key teams.)
The first step is usually to figure out what is actually taking all the time, Instruments is good at this. What did it say? Was it your code, or AttributeGraph stuff? Or maybe layout code in UIKit/AppKit? Once you find that you will probably want to isolate which part of your UI is actually triggering this. Ideally identifying if it's a "I got called once and SwiftUI freaked out" or "SwiftUI keeps calling me for changes and I don't have any changes, but this is still bad for performance".
The entire view shouldn’t be redrawn, only certain views (not the entire tree). Maybe look into preventing the entire view from being redrawn (React does this too, and you can prevent redraws without utilizing useMemo by understanding its internals).
Compose bakes this into the framework at the function level. By default, if the function parameters are immutable and the function doesn't return a value (basically emits compositions by calling other Composable functions) the entire composition is memoized and the function can be skipped.
The tricky part is knowing what is immutable, because standard JVM collections are not.
SwiftUI has this too. It's not particularly discoverable but if you do it correctly you can avoid these kind of issues. The problem is that SwiftUI doesn't really make it particularly obvious when and how you might want to use this, which leads to issues like these.
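For reference, the mechanism alluded to here is conforming a view to Equatable (and optionally wrapping it with `.equatable()`), which lets SwiftUI skip recomputing a body when the inputs compare equal. Here is a minimal sketch of that skip logic in plain Swift — `MemoizedRenderer` and its names are made up for illustration, not SwiftUI API:

```swift
import Foundation

// Hypothetical sketch: cache the rendered output keyed by an Equatable
// input, mimicking how Equatable views let a framework skip re-rendering.
final class MemoizedRenderer<Input: Equatable, Output> {
    private var last: (input: Input, output: Output)?
    private(set) var renderCount = 0
    private let render: (Input) -> Output

    init(render: @escaping (Input) -> Output) {
        self.render = render
    }

    func body(for input: Input) -> Output {
        // Skip the expensive render when the input is unchanged.
        if let last = last, last.input == input {
            return last.output
        }
        renderCount += 1
        let output = render(input)
        last = (input, output)
        return output
    }
}
```

Calling `body(for:)` twice with an equal input performs the render once; a changed input triggers a fresh render. The discoverability complaint stands: nothing in SwiftUI tells you when this comparison is the fix.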
I've encountered the problem he describes in WinUI, too. It can happen in any toolkit where you bind to change events of "observables". In my case I had a list view and a details view, and when you pressed down in the list view, it would get janky and struggle to update the detail view in time. To make it worse, sometimes selecting a different row would kick off an HTTP request in the background to update some cached data.
The art is to get the behavior just right. You want the first request to go through immediately, so on a single click the user sees low latency. But afterwards you need to throttle it so that it only updates every, say, 500 ms. And you must not forget to display the very latest state, if there is no new event after e.g. 50 ms.
This seems like one of those things that are really hard to get right and the toolkit developer should have solved for you. I've struggled with it a couple of times now.
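The behavior described above — fire the first update immediately, then throttle to at most one update per interval, while guaranteeing the latest state eventually shows — can be sketched as a small throttler. This is a hand-rolled illustration, not a toolkit API; the clock is injected so the decision logic is testable (the 0.5 s interval matches the 500 ms in the comment above):

```swift
import Foundation

// Hand-rolled leading-edge throttler (illustration, not a toolkit API).
// The clock is injected so the decision logic can be exercised in tests.
final class Throttler {
    enum Decision: Equatable {
        case fireNow                     // first event, or interval elapsed
        case deferred(by: TimeInterval)  // schedule a trailing update later
    }

    private let interval: TimeInterval
    private let now: () -> TimeInterval
    private var lastFire: TimeInterval?

    init(interval: TimeInterval, now: @escaping () -> TimeInterval) {
        self.interval = interval
        self.now = now
    }

    func decide() -> Decision {
        let t = now()
        if let last = lastFire, t - last < interval {
            // Too soon: defer so the latest state still gets displayed.
            return .deferred(by: interval - (t - last))
        }
        lastFire = t
        return .fireNow
    }
}
```

In a real view you would act on `.fireNow` immediately and, for `.deferred(by:)`, schedule a single trailing refresh (cancelling any previously scheduled one) so the final selection is never dropped.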
If you mean WinUI 3.0, it is in a worse state than SwiftUI, to the point that Windows 11 still makes use of plain UWP (hence why WinUI 2.x releases keep happening).
And in both cases, still not up to the set of WPF capabilities in that regard.
This is an interesting article, because SwiftUI has a lot of constraints and quirks (which I definitely will not go into here), but it identifies performance specifically as something they ran into problems with. SwiftUI, like all "reactive" frameworks, tries to make your UI a function of state, "logically" redrawing everything so that it's always up-to-date. Obviously, actually redrawing everything is untenable, so it tries to form a dependency graph of what needs updating and change only that.
The issue is that Apple sort of claims that this is all you really need to do, and that the framework will kind of handle it for you. In reality you need to be really careful about what's being updated to avoid spurious updates. Actual SwiftUI code for anything non-trivial necessarily requires some manual debouncing and equatable checks. I hear React Native often has the same issue, but there people have experience with applying memo when necessary. With SwiftUI it's considered "advanced" and bites people before they really know how to deal with it. And, of course, that's assuming the dependency graph is working correctly: if it's not, you're often stuck and there's not much you can do.
Not only spurious updates: this kind of architecture generates tons of garbage every couple of seconds, which Swift can at least mitigate thanks to its support for value types.
React and Jetpack Compose aren't so lucky, having to rely on the underlying optimizations of JS engines and ART, on escape analysis and GC algorithms.
I never understood the love for reactive for interactive GUIs, when they are so bad at memory management, with its knock-on effects on jank.
Is it really that bad (the garbage generation)? Modern GCs are insanely good, short-lived objects can be quite cheap.
Also, aren’t they basically push-based reactive frameworks? I would assume they just call a bunch of registered functions and that’s it. The graph itself shouldn’t change too often (and may be static as well, eg. React vs SolidJS) — but I am way out of my depth here.
Dealing with SwiftUI recreating Views too often can be a challenge. I would have liked to have seen more details on how this guy set up his models.
In my experience, you definitely have to minimize work done in Views and maybe avoid `@Published` properties on your model in favor of more explicit calls to `objectWillChange.send()` to signal when you really are ready for the Views to be updated. SwiftUI does not seem to do a very good job of coalescing by default.
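The coalescing idea — batching several property mutations into a single will-change notification instead of one per @Published write — can be approximated in plain Swift. `ChangeCoalescer` and its names are made up for illustration; it is not a SwiftUI/Combine API:

```swift
import Foundation

// Illustration only: batch several mutations into one change notification,
// the way an explicit objectWillChange.send() lets you coalesce updates.
final class ChangeCoalescer {
    private(set) var notificationCount = 0
    private var inTransaction = false
    private var dirty = false

    private func send() { notificationCount += 1 }

    // Record one mutation; outside a transaction this notifies immediately
    // (like a @Published write), inside it just marks the model dirty.
    func markChanged() {
        if inTransaction { dirty = true } else { send() }
    }

    // Group many mutations and emit at most one notification at the end.
    func transaction(_ body: () -> Void) {
        inTransaction = true
        dirty = false
        body()
        inTransaction = false
        if dirty { send() }
    }
}
```

Three `markChanged()` calls inside a `transaction` produce one notification instead of three — which is roughly what dropping @Published in favor of a single manual `objectWillChange.send()` buys you.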
The post didn't go into much detail as to why it seems to be so slow. Is SwiftUI recreating an entire UI's worth of components when a state update happens?
To me that’s the red flag. That an experienced game developer wasn’t able to properly troubleshoot down to the correct level, but ended up facing a black box and gave up.
I am not developing on Apple platforms, but just like with any reactive UI framework, I am almost certain the documentation of SwiftUI starts with a massive red flag that says "make your updates granular and don't repost a whole ass struct on every frame" at the very beginning, which is exactly what the author seems to be doing.
If all you want is a declarative UI, just dump in ImGui and be done with it.
Well, that's supposed to be the point of SwiftUI. You write out your declarative structure, let the library worry about how to make it fast, and move on with your day. They literally introduced it that way. Obviously that is very detached from reality.
Game developers typically redraw the entire world on each frame. This is not how UI frameworks are intended to be used, even though SwiftUI presents an interface that kind of looks like that. I'm not blaming the author for not understanding this, but I think this distinction is very important to make when working with SwiftUI.
I quite like SwiftUI most of the time, and have gotten past most of the hurdles, such as integrating state with menu bar views. But my biggest hangup is that it's used for 3 different platforms with 3 different sets of variations, and most tutorials focus on iOS.
It's also not capable right now (macOS 12/iOS 15/Xcode 13) for some parts. For example, there is no simple way to declare initial focus for a text field in a view. One has to resort to "onAppear" hacks, such as:
    .onAppear {
        // 0.05 is a guess. Anything lower seems to run too early
        // and does not have the desired effect.
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.05) {
            focusTextField = true
        }
    }
Similarly, dismissing text field focus & on-screen keyboard upon scrolling/tapping away is a source of programming pain. It seemingly was bad in the old APIs. It is worse in SwiftUI in my limited experience. Same for scrolling a view such that an active text field isn't covered by the on-screen keyboard.
Sure, I didn't necessarily mean it in a fully general way, just for the project being described. That is the obvious (comparatively) way it is still incomplete.
In 2022 developing software for iOS (now iPhoneOS) feels like you're working for Apple for free.
You have to keep a close eye on the new requirements that they roll out every few months. You must use Apple products in your app or gtfo (e.g. Sign in with Apple). You have to update your laptop every few years because Xcode (which is a monumental PoS) requires a new macOS that your hardware doesn't support. And when you go through all this trouble, you have to literally beg Apple to approve a new version of your app. You even have to pay Apple to be able to write apps for their phones/tablets.
I'm so glad I jumped off iOS development a few years ago.
If you're a young developer and haven't yet decided which area suits you most, think twice before entering iOS and macOS development.
I haven't really coded since I was a kid. Since I have an iPad/iPhone/iMac and wanted to learn how to code again, potentially as a side hustle or some sort of future money-making opportunity, I decided to learn Swift/SwiftUI, and I'm currently taking an online course. But when I read through Hacker News, all I see is articles bashing it constantly.
Here is a 30 minute podcast episode titled, "Different, But Not Worse," on SwiftUI vs UIKit.[0] This thoughtful take is a whole lot less adversarial towards adopting SwiftUI than what you tend to read on this forum.
If you’re just starting out, a lot of those complaints will probably be addressed by the time you start your Swift/SwiftUI side hustle. Unless you’re a very quick learner, in which case you’ll just have to use it in its flawed state, or learn UIKit, I guess.
If I was to build a new mobile app using a shiny new declarative framework, I’d rather use Flutter, because at least that provides the benefit of targeting Android for free. Depending on the type and scale of the app, of course.
I know that the HN crowd has a weird relationship with both Apple and Google that goes in two very different directions regardless of the evidence, but honestly I think what you are proposing is actually a really good long-term plan.
Flutter is currently rewriting a key part of its graphics rendering pipeline as we speak, which should clear up the remaining performance issues people seem to have with it. The rest of the project is incredibly well supported and documented, and most importantly, as you hinted, genuinely cross-platform.
It’s a much better bet for basically any project outside of one that you know for a fact is only ever going to target Apple operating systems.
1. SwiftUI seems mainly suited for building quick proof-of-concept intro apps, with the difficult 20% still out of reach. Which doesn’t sound all that different from the notorious downsides of cross-platform frameworks, some of which allow small simple apps to be quickly spun up while the tough under-the-hood cases remain intractable.
2. Flutter, and React Native are more mature than SwiftUI is right now. SwiftUI will get there, and maybe it’s only 1-2 years away, but it still hasn’t hit its Swift 5 moment yet. That opens up the possibility of breaking API or even conceptual changes until then.
Built a macOS app (https://posturenet.app) to monitor my posture in real time with SwiftUI last year for fun.
The lack of support for Video and Camera from SwiftUI was the most challenging part for me as a newbie Mac dev.
Took me a few weekends to get the core part done (ML and algorithm), but hooking up all the UI components and connecting them with the video stream gave me so many headaches.
With that said, it was still much easier for me (complete newbie to Mac App at the time) to learn and develop in SwiftUI than UIKit.
Hope SwiftUI keeps getting better and Apple keeps investing in the team.
You're getting downvoted because HN filters emoji, and because people who aren't on Apple devices won't see the icon you expect, because the codepoint is in the Private Use Area, so they'll see tofu.