
SwiftUI might be slower than UIKit at the moment, but it has significantly more potential for performance improvements in the frameworks.

With UIKit, developers mostly express the "how" – put this control at these coordinates etc. This makes it quite hard for the framework to a) understand if anything has actually changed, and b) optimise how the UI gets rendered.

With SwiftUI, developers much more express the "what" – show the user this information, with these constraints, etc. This gives the framework much more scope to optimise how the layout is accomplished. For example, a List of 3 items could skip a bunch of the complexity around swapping views in and out of the hierarchy, as it's unlikely ever to scroll far enough to need that recycling.
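To make the contrast concrete, here's a minimal sketch (the names and values are mine, purely illustrative):

    import UIKit
    import SwiftUI

    // UIKit, the "how": create a control, compute its coordinates,
    // and add it to the hierarchy by hand.
    func addTitle(to view: UIView) {
        let label = UILabel(frame: CGRect(x: 16, y: 64, width: 200, height: 24))
        label.text = "Settings"
        view.addSubview(label)
    }

    // SwiftUI, the "what": declare the content and let the framework
    // decide how to render it. A three-item List like this can, in
    // principle, skip the cell-recycling machinery a large scrolling
    // list would need.
    struct SettingsList: View {
        var body: some View {
            List(["Profile", "Notifications", "About"], id: \.self) { item in
                Text(item)
            }
        }
    }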

Additionally, by making the data model a first class concept more than it was with UIKit, SwiftUI has much more understanding about data flow and when the UI needs to be re-rendered, or which parts need to be re-rendered.
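A small sketch of what I mean (again, names are illustrative): because the dependency is declared via @State, the framework knows exactly which subtree depends on the value and needs re-rendering when it changes, rather than the developer invalidating views by hand.

    import SwiftUI

    struct CounterView: View {
        @State private var count = 0

        var body: some View {
            VStack {
                // Only this subtree reads `count`, so the framework can
                // re-render just this text when the button mutates it.
                Text("Count: \(count)")
                Button("Increment") { count += 1 }
            }
        }
    }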

It's still early days, compared with the maturity of UIKit (which stems from AppKit, which is 20+ years old at this point), but the scope is much deeper and I'm confident the performance will improve as Apple iterate, and as they understand how it's used by developers and improve those code paths.




Curious how you address the counter-argument: that in practice, declarative UI has historically been slower than coordinate-based systems? E.g., we're talking here about SwiftUI vs. AppKit, but there's the long history of slow interfaces with HTML/CSS.

Based on the historical evidence I'm familiar with, "declarative UI = slow" is just something I assume to be true now. For that to change, declarative UI frameworks have to stop talking about how they could be fast and start actually being fast.

(I'm also just very sensitive to latency, and I've watched new declarative UI frameworks rise in popularity, while at the same time the latency in the software I use increases, so in my head they're linked.)


The key point is this:

> significantly more potential for performance improvements in the frameworks

"Hand-crafted" imperative code will always have more potential, but most code is not this sort of code, and most engineers probably don't have the skillset to be able to do this, at least not when traded-off with implementing features and shipping customer value (no criticism, this is likely the right trade-off).

Declarative code shifts that control to the platform owner where they can improve things for everyone. Apple has a significant vested interest in this, I'd say more so than React, as people blame their iPhone for being slow, or their browser, but not React or SwiftUI.

I'd also suggest that most declarative UI frameworks I'm familiar with have been in higher-level languages such as JS, HTML, or QML, or things that run in a browser environment. I'm not sure we've seen one in a relatively performant, compiled language, for an environment that assumes reasonable graphics performance.

Apple are explicitly targeting 60fps here; it's clear performance is a first-class concern rather than an afterthought, and this is the most convincing case I've seen for a declarative UI framework achieving it.


Eh, I don't think it has to do with being "hand crafted".

I think the notion of declarative is flawed bunk: to tell the framework "what" to do, you need to specify "how". Removing the how and pretending the framework will magically infer the intention is a recipe for disaster.

"and most engineers probably don't have the skillset to be able to do this"

The irony being that declarative languages require a higher bar of skill to use well without the UI falling over.

"implementing features and shipping customer value"

Again, you can build bad code fast, but eventually more garbage is just more garbage: you have to go back and "fix" it, or the value proposition shrinks faster than new features add value.


If I want an input box to flow to fill the width from its starting point to the right side of the window/display, minus a margin... what's so bad about the UI tooling handling this? I've worked with tools where maintaining that simple thing is a lot of work, work you wind up doing over and over again. What value does spending even a half hour (often more) just handling layout reflows really add? We're talking about computers in your hand that are more powerful than supercomputers a couple of decades ago, and faster than desktops even a decade ago.
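For what it's worth, that layout is roughly a one-liner in SwiftUI (a sketch, names made up): the text field stretches to fill the remaining width, minus a margin, with no frame math at all.

    import SwiftUI

    struct SearchRow: View {
        @State private var query = ""

        var body: some View {
            HStack {
                Text("Search:")
                // The field naturally expands to take the remaining width;
                // no manual reflow handling needed.
                TextField("Type here", text: $query)
                    .textFieldStyle(.roundedBorder)
            }
            .padding(.trailing, 16) // the "minus a margin" part
        }
    }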

It isn't a new problem; HTML solved it relatively well several decades ago now. HTML hasn't been the pinnacle of performance, but it's been pretty damned effective at delivering a layout/form that scales to the user without excessive effort.


That's not what declarative is, though; "imperative" or OO frameworks have layout managers too.

There is some confusion about what is meant by declarative; it's really a spectrum (declarative-first, imperative-first, or somewhere in the middle). E.g. this article describes Android as "imperative": https://medium.com/everything-full-stack/declarative-ui-what...

While this one claims Android is declarative by default: http://www.cberr.us/tech_writings/essays/declarative_vs_prog...

Having a DOM or element tree with layout applied doesn't make you declarative; this is more what is meant by declarative:

https://en.wikipedia.org/wiki/QML or https://www.solidjs.com/examples/counter

I.e. "low code" coding with little actual behavior specified.


In addition to layout managers, UIKit and AppKit both have constraint-based layout if you use Auto Layout. Auto Layout itself follows the declarative paradigm (well, unless you use it to manually and directly set every X, Y, width, and height constraint, in which case I guess you're using it imperatively). You can declare that a button be two-thirds the width of its parent, for example. It's also fast (well, I guess I mean that between Apple's optimizations and modern CPU speeds, there is only noticeable latency when you do intentionally ridiculous things like laying out one view per pixel with nested, relative constraints).
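For example, the two-thirds-width button, declared with layout anchors (a minimal sketch, not from any real app):

    import UIKit

    final class ExampleViewController: UIViewController {
        override func viewDidLoad() {
            super.viewDidLoad()

            let button = UIButton(type: .system)
            button.setTitle("Tap me", for: .normal)
            button.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview(button)

            // We declare relationships; the constraint solver works out
            // the actual frames at runtime.
            NSLayoutConstraint.activate([
                button.centerXAnchor.constraint(equalTo: view.centerXAnchor),
                button.centerYAnchor.constraint(equalTo: view.centerYAnchor),
                button.widthAnchor.constraint(equalTo: view.widthAnchor,
                                              multiplier: 2.0 / 3.0),
            ])
        }
    }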

It’s kind of funny comparing it to SwiftUI’s new .layout API. It’s the exact opposite! A declarative UI framework with an imperative layout option.
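For anyone who hasn't seen it, here's roughly what that looks like with the Layout protocol (a hedged sketch; EvenHorizontalLayout is a made-up name): you imperatively compute and assign positions inside an otherwise declarative framework.

    import SwiftUI

    struct EvenHorizontalLayout: Layout {
        func sizeThatFits(proposal: ProposedViewSize, subviews: Subviews,
                          cache: inout Void) -> CGSize {
            // Accept whatever size the parent proposes.
            proposal.replacingUnspecifiedDimensions()
        }

        func placeSubviews(in bounds: CGRect, proposal: ProposedViewSize,
                           subviews: Subviews, cache: inout Void) {
            // Imperatively place each subview at an explicit coordinate.
            let slice = bounds.width / CGFloat(max(subviews.count, 1))
            for (index, subview) in subviews.enumerated() {
                let x = bounds.minX + slice * (CGFloat(index) + 0.5)
                subview.place(at: CGPoint(x: x, y: bounds.midY),
                              anchor: .center,
                              proposal: ProposedViewSize(width: slice,
                                                         height: bounds.height))
            }
        }
    }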


> "Hand-crafted" imperative code will always have more potential, but most code is not this sort of code, and most engineers probably don't have the skillset to be able to do this, at least not when traded-off with implementing features and shipping customer value (no criticism, this is likely the right trade-off).

This trade-off sounds like declarative frameworks make it easier for bad developers to make mediocre software while making it impossible for great developers to make great software?


In my experience SwiftUI gets the right balance here.

For a few hero screens we were able to implement what we wanted, with the performance we wanted, in non-idiomatic but not-bad ways. For almost the entirety of the rest of the app SwiftUI pretty much did the right thing without much work and gave us a performant app essentially for free.

In our previous UIKit app, so many screens had edge cases where the engineers implementing them hadn't had time to optimise for particular ways of using them, and the whole thing felt a bit janky to use. Not criticising the work they did, but the trade-offs were such that we didn't have time to polish that much.

Declarative frameworks can either make it easier for bad developers to make mediocre software while limiting great developers, or, with the right trade-offs (which I believe SwiftUI gets right), they can allow bad (or time-constrained) developers to make pretty good software while leaving the door open for great developers (or more highly resourced teams) to make great software.


Depends on the definition of "great" ... if you're a company building a one-off app that only a couple people are ever going to use, do you want efficient, and lower cost to build/deploy getting the job done, or do you want "great"? And would you be willing to pay out of your own pocket for others to do that work?

I'm all for software quality and craftsmanship. I think far too often, far too many corners are cut, and there are huge projects which have been poorly written. That said, it really depends. Most people only use a handful of one-off apps, but most apps are one-offs. Generally speaking, developer time is far more costly than the resources to run the app, or the time of the people using it. Saving 0.01 seconds may make things seem smoother, but it's not really going to make the person using the app more effective.


I'm not saying every piece of software needs to be great, just that not having the tools to build great software at all is a problem (and I'd argue that's the direction Apple is headed in today, i.e., I don't think Apple's tech is a good choice for building another Sketch https://en.m.wikipedia.org/wiki/Sketch_(software) today).


I would separate your post into two things: One is the idea that declarative interfaces can be faster, and the other is that Apple is dedicated to fast interfaces.

The former, as you are seeing from several other posters, is not a new thing, and is in my opinion a history of continual and significant failure. It is so consistent that I now have an almost visceral revulsion to people singing me the song about how wonderful "declarative" can be, which is sort of ironic because they intend the opposite. Calling something "declarative" is one of the strongest signals that a technology is going to be a pain in the ass to use. (In fact I'm sitting here racking my brains for a stronger one and I'm not sure I can come up with one. "Enterprise-ready" perhaps? Maybe "Hosted by the Apache project", which often indicates a quality project but one that is definitely going to be a real pain.)

The real reason this will go fast is that Apple is prioritizing speed.

I also was concerned about your comment above "developers mostly express the "how" – put this control at these coordinates etc." So far as I know, that is a strawman; no modern UI toolkit works that way. The web has had a major influence on them and every major toolkit has a more web-like layout available (and I am collapsing history here for simplicity, I am aware that relative layouts were available before the web, but the web definitely made them all step it up another notch, further consolidated by interface diversity between touch & mouse & screen sizes). If anyone using a major modern toolkit is dropping text at a particular coordinate, that's on them using the toolkit incorrectly, the toolkit has long since stopped forcing that.

What the toolkit needs in order to do what you're talking about is basically the DOM for the thing it is displaying right now (or whatever the toolkit calls that concept). It is not particularly a problem for the toolkit if that DOM is built by imperative code or by some 'declaration'. It probably will end up supporting both anyhow, because how the DOM tree gets built isn't the important part.

This is basically identical to how a browser functions: it doesn't matter to the browser (as a UI renderer) whether the DOM it is working with was built "imperatively" or "declaratively" or "reactively" or "purely functionally" or anything else. The DOM has what the browser UI needs to render quickly, and to the extent it has trouble rendering quickly, the solution is more information in the DOM for the browser to chew on and use in its decisions, not a rewrite in how the DOM is generated. Declarativeness is entirely irrelevant here, in some sense because regardless of how you get there, the DOM-equivalent is already by its nature guaranteed to be "declarative".


It's all about the baseline. Baseline declarative has "good enough" performance, while the same money/time investment into imperative UI will get you worse performance. It's only when you've poured a lot more resources into the imperative version that you start to see it overtake the declarative version.

This problem is illustrated well by InfernoJS. When the JS framework benchmark was first released, the vanilla (imperative) version blew away all the frameworks on the list. InfernoJS came along and beat the vanilla performance, so they updated the vanilla JS version. They went back and forth a few times, with InfernoJS constantly upping the game and the benchmark maintainers looking at what Inferno was doing for ideas on improving the vanilla version further.

Today, the vanilla version is faster, but only by a little, and only after a bunch of rewrites. Meanwhile, the InfernoJS declarative code didn't change a huge amount; the underlying framework did most of the updating. This means Inferno would be fast on a second project, while vanilla would once again start off slow and have to find app-specific optimizations all over again.

Once SwiftUI has improved a bit, only the most demanding companies and legacy apps will continue with anything else. Giving up just a little performance to save all those dev man-years is a tradeoff that most companies (even big ones) are willing to make.


I'm not familiar with InfernoJS, but it appears to be based on the DOM, which is fundamentally declarative (i.e., flow-based layout rather than coordinate-based layout)? Correct me if I'm wrong, but that doesn't sound like a pure enough way to compare declarative vs. imperative if it's built on a declarative foundation.


Yes and no. The DOM is declarative internally, but the fastest way to use it is imperative: you manually create nodes one at a time, update properties on those nodes individually, and string them together one command at a time.

The effect for the programmer isn’t so different from a series of draw calls, except that it’s more sensitive to individual updates tanking performance, which requires even more consideration from the imperative programmer.

To me, the best comparison is between an AOT language with manual memory management (e.g., C) vs a JITed language with garbage collection (e.g., Java). In theory, the JIT can produce equivalent or even faster code, but that’s not generally the case in practice. The GC isn’t generally faster either, but it is safer and easier. Some companies need their software written in C, but Java is good enough for most, given the productivity gains.

In theory, runtime inference could dynamically improve performance of declarative systems beyond what AOT imperative code can do, but in practice, companies don’t invest the resources to make this happen. At the same time, not having to manually manage the output to ensure garbage doesn’t creep in accidentally greatly improves developer productivity.


The theoretical performance improvements of declarative UIs are just... theoretical.


Same observation here.

I remember a time, not so long ago, when serious people were advocating JITed languages (Java, C#) for performance workloads, because the JIT was theoretically able to produce better code for the platform, given it had full knowledge of the architecture and execution context.


For (long running) data stream workloads, JIT does have an auto-profiler benefit. It doesn't preclude writing code well.


For long running workloads with consistent usage patterns it has a benefit.

Being long-running is not enough by itself; it is a requirement, but not a sufficient one.


Are there any studies measuring it on real-world workloads?


Building UIs with coordinates is hard and effortful. Because of this, you have no time to make it slow.

Building the same UI declaratively is much easier. And a simple UI is fast. But because it was so easy to make it, now you have extra development time to make it slower (by adding extra features).


Elm is really fast when it comes to rendering web interfaces.


But this is also one of the big problems with SwiftUI. Because it is effectively a fancy DSL for combining interactive Lego pieces, many subtle interactions are just assumed implementation details. For example: how should a "tap" on a Rectangle with a transparent color be interpreted if the rectangle is offset? Because the underlying framework (UIKit) has no notion of these things, and because the SwiftUI DSL can't foresee all permutations of its members (Rectangles, HStacks, Spacers, GeometryReaders, etc.), oftentimes people build a certain combination of things that happens to produce a certain behaviour. But then in the next iOS update that behaviour changes, because it was only ever an implementation detail. This leads to constant churn.
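To make the example concrete, here's the kind of combination I mean (a sketch; whether the tap target follows the offset rectangle or its original layout slot is exactly the sort of thing that has felt like an implementation detail across releases):

    import SwiftUI

    struct OffsetTapExample: View {
        var body: some View {
            Rectangle()
                .fill(Color.clear)          // transparent fill
                .frame(width: 100, height: 100)
                .contentShape(Rectangle())  // opt in to hit-testing the full shape
                .offset(x: 50)              // drawn 50pt right of its layout slot
                .onTapGesture {
                    print("Tapped - at the offset position, or the original one?")
                }
        }
    }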

I agree that at some point we will see fewer of these surprising behaviours, but as long as Apple iterates on SwiftUI they will happen again and again.

Also, regarding performance: any performance optimisation will lead to more of these surprises. To stick with my example: maybe at some point an engineer adds a SwiftUI optimisation for offset rectangles, and suddenly the tap doesn't work anymore.

After using SwiftUI a lot, I'd rather use a more deterministic framework. Or have it be open source, so I can understand what's happening under the hood. The current game of build, inspect, and pray at every version update is frustrating.


That line of reasoning reminds me of the fabled sufficiently smart compiler: one day, the sufficiently smart compiler will understand what we mean in our programs and generate code that is faster than what an expert could write in C or even assembly. Unfortunately, the sufficiently smart compiler still hasn't arrived.

And similarly, the performance potential of SwiftUI has not been unlocked. Maybe it will happen, but while we wait for SwiftUI to implement those fabled optimizations, developers have to wrestle with (from what I gather from the article) a lot of extra accidental complexity so that they don't impose an unusably slow interface on their users.


This is a fair point, but from what I've seen of SwiftUI, these optimisations are already happening.

Lists used to be backed by UITableView in all contexts, I believe. Now, if I remember rightly, they are backed by a mix of UICollectionView and raw combinations of views, depending on various factors like the number of elements or whether that number changes.

These sorts of changes didn't go super smoothly. I think iOS 13.0 was the first release with SwiftUI; then 13.3 changed a lot of the under-the-hood details and broke some things (we had 13.2/3-specific hacks). But since iOS 14, things have been much more stable and there have been fewer breakages.


Optimizations have been happening in compilers for years, but they've never come close to the "sufficiently smart compiler" ideal.

IOW, you need more than an observation that optimization is happening.


> one day, the sufficiently smart compiler will understand what we mean in our programs and generate code that is faster than what an expert could write in C or even assembly.

Who is making this argument? Pretty sure I’ve heard that most compilers (and maybe transpilers) will do a better job of the low-level stuff than your average dev, but AFAIK no one notable is arguing that a true expert couldn’t squeeze out some additional performance using, say, assembly, if and when it’s sensible to do so.


> That line of reasoning reminds me of the fabled sufficiently smart compiler: one day, the sufficiently smart compiler will understand what we mean in our programs and generate code that is faster than what an expert could write in C or even assembly. Unfortunately, the sufficiently smart compiler still hasn't arrived.

Hasn't it? Less and less gets written in C these days; even e.g. HFT shops or game engines tend to use higher-level languages.


It hasn't. It's possible for programmers to get massive speedups by re-architecting their code to make better use of the CPU caches or avoid branch mispredictions, and that happens because the compiler was not sufficiently smart to do the transformation itself.
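A classic example (sketched here in Swift to match the rest of the thread) is switching from an array of structs to a struct of arrays, so that a hot loop only touches the field it actually reads; compilers don't do this transformation for you.

    // Array-of-structs: summing masses strides over x/y/z too,
    // wasting cache lines on fields the loop never reads.
    struct ParticleAoS {
        var x: Double, y: Double, z: Double
        var mass: Double
    }

    func totalMass(_ particles: [ParticleAoS]) -> Double {
        particles.reduce(0) { $0 + $1.mass }
    }

    // Struct-of-arrays: the same loop now reads one dense array,
    // so every fetched cache line is fully used.
    struct ParticlesSoA {
        var xs: [Double] = [], ys: [Double] = [], zs: [Double] = []
        var masses: [Double] = []

        func totalMass() -> Double {
            masses.reduce(0, +)
        }
    }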


> It's possible for programmers to get massive speedups by re-architecting their code to make better use of the CPU caches or avoid branch mispredictions

Sure, but they can, and do, do that just as well in Java as they could in C or assembly.


How could the framework be improved to fix the problems in the OP?

It seems like they were mostly caused by (developer) design flaws: oversubscribing to events, causing wasteful redraws. What kind of changes are you expecting to fix this?


Your argument doesn't work.

It seems like a variation of:

"If I tell the compiler what instead of how, it should be able to produce the msot optimized code, even if it doesn't do so now, a sufficiently advanced compiler could".

This kind of thing never materializes.



