How Swift Achieved Dynamic Linking Where Rust Couldn't (gankra.github.io)
306 points by gok on Nov 9, 2019 | 170 comments



From the article:

"... Swift's Stabilized ABIs on Apple platforms ... are not actually properly documented, xcode just implements it and the devs will do their best not to break it. Apple is not opposed to documenting it, it's just a lot of work and shipping was understandably higher-priority."

Having worked in Apple's Developer Publications department for eleven years, I can confirm that this statement is mostly correct. Apple has the financial resources to hire more technical writers and properly document all their APIs, but it's just not a priority.

The sad thing is that developers outside Apple need this documentation and would even be willing to help write it. But they can't because, as a matter of policy, Apple does not generally involve the developer community in writing their official documentation.


In some areas Apple's documentation is dismal to the point of being ridiculously useless. Like this (from CoreAudio):

    mWordClockTime

    The word clock time.
And nothing else on that page apart from that it's UInt64. And it's been like this for years, if not decades. Nothing changed even when the APIs were bridged and the documentation re-written for Swift. I doubt it's intentional (keeping developers in the dark with regard to CoreAudio? unlikely), just neglect. Apple is otherwise one of the few companies that pays attention to detail and has virtually unlimited resources for this type of task, so what's their problem, really?


In fairness, "word clock" is a term of art in professional digital audio: this is literally just a sample count so it is unit-less.


>And nothing else on that page apart from that it's UInt64.

What else exactly do you need?

It's a reference page for the members of the struct AudioTimeStamp:

https://developer.apple.com/documentation/coreaudiotypes/aud...

I wish many C/C++ libs had such good documentation...


> What else exactly do you need?

What unit it reflects. What epoch it uses. When does it wrap around? Is it realtime, or a best guess? Is it wall-clock like CLOCK_REALTIME, or closer to CLOCK_MONOTONIC? Is the valid range the entirety of possible uint64 values, or is there a range limit?


The answer is “not applicable” to all of these. The word clock is unit-less, just ticking up each sample. It’s not actually a clock in the normal sense; there are no leap years or anything in it. It’s used to synchronize devices.

Granted, the documentation isn’t the best at explaining it. But if you know what a word clock is in terms of audio, you know what the value is: a counter.


> The word clock is unit-less, just ticking up each sample.

That should be in the documentation: each sample is guaranteed to increment the value relative to the previous one.

But, it doesn't answer all of the questions.

Does the word clock always start from 0? Is that an assumption that can be made?

Or is the valid range anywhere inside a uint64?

What happens if the number of samples exceeds the range of the word clock value? Is it allowed to overflow? In that case it violates the assumption that it always just increments.


Even if it counts in nanoseconds it’ll take several centuries to roll over, so these probably aren’t questions you ever need to ask. Though that’s kind of the point: the API and its documentation should highlight the stuff you need to worry about, such as where you get these values from and what they do, and hide everything else that you don’t (internal implementation details).

But that particular API does the opposite. Just typedef-ing that data as an opaque `WordClockTime_t` would go a long way to fixing this, telling API users to ignore how it works internally and enabling automated documentation tools to locate and list every other API that produces/consumes this particular value. A simple automation-friendly abstraction that would reduce—if not eliminate—the need for additional manually-written documentation. i.e. Put the knowledge in the code and the automation can graph it.

Alas, there’s something about C programming that seems to bring out the worst abstractions in many C programmers… and if they’re being lazy in their API design, they’ll be twice as lazy when it comes to manually documenting it.

--

"What is wrong with giving tree, here?"

"Well, he don't know talking good like me and you, so his vocabulistics is limited to 'I' and 'am' and 'Groot.' Exclusively, in that order."


> Even if it counts in nanoseconds it’ll take several centuries to roll over, so these probably aren’t questions you ever need to ask.

That's an assumption, that an edge case won't happen. Docs exist to spell out where the edge cases are.

Cisco thought a 32-bit number for the RTP timestamp would never roll over. It happened. [0] Centuries it might take if it's initialised from zero, but it doesn't have to be. And if you don't provide the documentation, then you can't expect reasonable defaults to be used.

It's important to know when something like that happens, so that developers also know how to handle behaviour that may well be completely unexpected. Hiding the type doesn't help. It just tells you you're even more on your own if you want to handle exceptional events, which leads to code with holes so big you can drive a CVE through it.

[0] https://quickview.cloudapps.cisco.com/quickview/bug/CSCvc865...


Aye, I’m well aware what can go wrong when an integer overflow occurs. My point was the way Apple presumably[1] uses this particular Uint64 precludes such an event ever occurring within macOS’s lifetime, therefore there’s no need to explain it. If a macOS API generates that value and a macOS API modifies it and a macOS API consumes it, and users should only ever pass it around as-is and never screw with it directly, it’s opaque data and its internal workings are none of their business.

Okay, it would really help if C’s so-called “typesystem” would actually enforce a custom-defined type like `WordClockTime_t` so that client code can’t do stupid/malicious things with it like stick its own arbitrary integers in it; but hey, C. While a sensible runtime would also chuck an exception if a fixed-width integer overflows, rendering rollover dangers moot; but again, C. It is what it is; and so it goes.

But if, as an API designer, you’re going to document every single way your API may potentially blow up during normal/abnormal use then perhaps you should write that documentation in the form of defensive code that validates your API’s inputs and handles bad inputs accordingly. e.g. A timestamp API should not be making its users fret about (never mind cope with) C integer overflows; guarding against any edge-case crap is the API’s job, not its users’.

Again, the problem is not a lack of documentation so much as a lack of clarity. A good abstraction shows only what its users need to know and hides everything that they don’t; the more that can be left unsaid, the better. (If an API can’t be documented clearly and concisely, that’s a huge red flag that the API design is bad and needs reworking until it can be.) The problem with an API like this is the not-knowing, which indicates deeper, more systemic, failings than merely “needs more documentation”.

..

TL;DR: If your API is puking on its users then don’t start documenting the color and odor of that puke; fix its code so it doesn’t puke again.

--

[1] I say “presumably” because damned if I’m going to spend hours spelunking Apple’s crappy documentation just to find out exactly where this mWordClockTime value comes from and where it goes to.


> If a macOS API generates that value and a macOS API modifies it and a macOS API consumes it, and users should only ever pass it around as-is and never screw with it directly, it’s opaque data and its internal workings are none of their business.

But that's not the case. You get to set mWordClockTime as part of the init [0]. If you can initialise a value, but aren't given bounds for the value, then the documentation has screwed up.

The value is something the developer can create, and pass in when creating any AudioTimeStamp, which you will be doing a lot of if you're dealing with sound.

This isn't an arbitrary value you can just rely on to be correct; there may be good reasons for altering it, such as when splitting sound into several thousand chunks and rearranging them.

It's a part of the exposed API - it needs to be documented how it behaves.

For a different take on a similar problem, let's look at how PulseAudio handles it [1].

> pa_core doesn't have any interesting functions associated with it, it is just a central collection of all the components and globally relevant variables. You're not supposed to modify the fields, at least not directly.

This is how you abstract away an API safely. The dev knows up front that pa_core is the type that'll be used, and that other functions will be modifying the sample cache for them - that is, they can't supply a value directly to the type or they've entered unsafe behaviour.

They can go off and find the right setter.

What follows on that page is only a courtesy, and can clearly not be used safely in most programs. It doesn't need to be there at all.

So the dev finds [2], where they call the client to get the sample.

And whilst duration is a uint64, it also has a few more things in the docs, such as that the value can be lazy, so you need to check it exists before using it, and that the property can raise an error when you try to access it before it exists. You'll also find that this is a property (so you won't be creating it), generated by an interface, and where to find that interface.

I mean, I'm not one to compliment PulseAudio's documentation. It is an awful mess, just like the internals.

But they've given us a lot more than Apple bothered to.

[0] https://developer.apple.com/documentation/coreaudiotypes/aud...

[1] https://www.freedesktop.org/wiki/Software/PulseAudio/Documen...

[2] https://www.freedesktop.org/wiki/Software/PulseAudio/Documen...


“You get to set mWordClockTime as part of the init”

While I’ve never had the pleasure of dealing with Core Audio, I’m getting the strongest impression that our problem is not that its API documentation is inadequate, but that its API was designed by a bunch of absolute hacks and bums.

In which case, asking for additional documentation is like asking for additional band-aids after severing your leg with an exploding chainsaw. Never mind for a moment the current bleeding; fundamentally you’re tackling the wrong problem. The right approach is first to make absolutely sure exploding chainsaws never get sold in the first place. Once that’s addressed, then worry about providing non-exploding chainsaws with adequate training and safety gear.

If the user has to initialize the struct directly then yes, the docs absolutely should state what values to use. However, unless there is some absolutely overriding reason for doing this then really, no, the right answer is to do what the old Carbon C APIs and Core Foundation do, which is to provide a constructor function that correctly populates it for them, and then document that. The documentation is shorter and simpler, and there is less to go wrong. Plus users do not need to write their own guard code—a massively pointless duplication of effort.

For instance, the LSLaunchURLSpec struct (https://developer.apple.com/documentation/coreservices/lslau...) is a good example of a manually-populated struct with appropriate documentation.

But in most cases, when working with Carbon/CF you don’t need to know any of these details because the structs are initialized and disposed for you, and these functions all follow standardized naming conventions so are trivial to locate as well. This is Abstract Data Types 101, and shame on the CA PMs for thinking they’re special snowflakes and simply dumping all their shit onto their users instead, and shame on the dev managers above them for letting them do so.

..

Incidentally, this is why I always fight anyone who says developers are too busy/special/autistic/etc to document their own APIs. Yes, producing high-quality public documentation needs specialist technical writers and editors on top of (but not instead of) its original developers, but there is no better first test of the quality of your API design than being forced to explain it yourself. I know this; I’ve thrown out and replaced more than one working API design over the years simply because I found it too difficult or confusing to explain.

--

TL;DR: Don’t document Bad APIs. First make them into good APIs, then document that.


There is a glossary in the CoreAudio docs, but it hasn't been updated since 2010. And it doesn't include an entry for WordClock.

You could argue that generally the docs are good enough to Get Shit Done. And people are Getting Shit Done on the platform as a whole. So there's no problem.

But it's a shame Apple doesn't consider better docs a priority. IMO it's not a good look for a trillion dollar company.

What's more, I don't know whether insiders use the same doc base. If so, it's definitely wasteful, because newcomers are going to waste time and effort trying to get up to speed.


> What else exactly do you need?

I always like to know the units (seconds? nanoseconds?) and the reference-year (1970? 2001? 1601?)


Also, how are leap seconds reflected in this value, if at all?

Apple should look at linux's manpage on the `time` syscall for inspiration.


That’s much better than some of the other “documentation” that Apple has…


What exactly would a third party developer do with good documentation for the Swift ABI? The only practical application I can think of would be implementing some kind of Swift FFI, which has a realistic target market of fewer than 10 developers. And in practice, they would probably just use the Swift source code instead.


Well, considering the future of APIs on macOS and iOS is Swift, any language that wants to target those platforms needs to be able to talk to Swift in the future.

My compiler can do basic interop with Swift now, but it's pretty much undocumented and unsupported, and when asking for any info the best I got was that contributions to the docs are welcome. The docs that do exist are out of date and incomplete.


Function hooking is a lot more popular than you think it is.


Seems like that really depends more on calling conventions than an ABI overall?


Calling conventions are part of the ABI, no?


And usually forbidden in modern sandboxing.


Sandboxing has almost nothing to do with (and does not prevent) function hooking.


True, they are orthogonal.

However Windows, xOS and Android sandboxing are relatively restrictive in that use case.


How so? With dynamic linking function hooking is not all that difficult.


For the stuff that comes with your own package, sure, but don't expect to override all the OS libraries without eventually getting the process killed, like when trying to use DLL injection and OS hooks on Windows.


> Apple has the financial resources to hire more technical writers and properly document all their APIs, but it's just not a priority.

Related question: what is this seemingly recurrent tendency of Apple to continually understaff their teams/projects, or to behave as if they were still poor instead of throwing resources at solving issues (e.g. better developer tools, better documentation, improving QA, AI, servers, reducing bugs, improving security...)? Is it greed? Is it a fear of growing too big and a cult of keeping teams small? Is it an inability to scale up software projects? I'm dumbfounded by this strange behavior, which more often than not leads them to unforced shortcomings.


Great question.

With respect to documentation, when I worked at Apple the understanding was that management thought developers could learn what they needed to know from reading the header files. Of course, that's nonsense. In reality, so much new software was being developed and shipped in each new release of macOS that it would have cost a fortune to document it in a timely fashion.


Developers should be documenting the APIs at a bare minimum.


I love Swift very much but every time I look at the disassembly view in Xcode while debugging, I lose faith in it bit by bit. With my (rather limited) knowledge of what a C or C++ program would compile into I have some expectations of what I'll see in Swift's case but the reality ends up being orders of magnitude more complex. Orders of magnitude is no exaggeration. For example this:

    (myObject as! SomeProtocol).someMethod()
translates into hundreds of executed instructions, a bunch of nested calls that somehow end up in objc_msgSend (!) even though none of the objects on that line have anything to do with NSObject. Let alone ARC atomic acquisitions, etc.

For one thing, Swift is hardly ready for application domains like audio, video or games. No doubt it can make the development process so much faster and safer, but also less performant by exactly that amount. Swift is beautiful, surprisingly powerful and non-trivial (something you typically don't expect from a corporate language, given the examples of Java and C#), but the run-time costs of its power and beauty are a bit too high for my taste. A bit disappointing, to be honest.


> For one thing, Swift is hardly ready for application domains like audio, video or games. No doubt it can make the development process so much faster and safer, but also less performant by exactly that amount.

I've done quite a bit of experimentation with the performance characteristics of Swift, and I think that's a slight mischaracterization of the situation.

For instance, I built a toy data-driven ECS implementation in Swift to see just what kind of performance could be squeezed out of Swift, and it was possible to achieve quite impressive performance, more in the neighborhood of C/C++ than a managed language, especially when dipping into the unsafe portion of the language for critical sections.

But it's a double edged sword: while it's possible to write high-performance swift code, it's really only possible through profiling. I was hoping to discover a rules-based approach (i.e. to avoid certain performance third-rails) and while there were some takeaways, it was extremely difficult to predict what would incur a high performance penalty.

Currently it seems like the main limiting factor in Swift is ARC: it uses atomic operations to ensure thread-safe reference counts, and this, like any use of synchronization, is very expensive. The ARC penalty can be largely avoided by avoiding reference types, and there also seems to be a lot of potential for improving its performance as discussed in this thread:

https://forums.swift.org/t/swift-performance/28776
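To make the "avoid reference types" point concrete, here is a minimal sketch (hypothetical types, not from the benchmark above): iterating an array of class instances may incur atomic retain/release traffic on each element (the optimizer can sometimes remove it), while the struct version never touches a reference count.

    // Hypothetical example: the same data as a class vs. a struct.
    final class PointClass { var x = 0.0; var y = 0.0 }
    struct PointStruct { var x = 0.0; var y = 0.0 }

    func sum(_ points: [PointClass]) -> Double {
        var total = 0.0
        for p in points { total += p.x + p.y }  // may retain/release each `p`
        return total
    }

    func sum(_ points: [PointStruct]) -> Double {
        var total = 0.0
        for p in points { total += p.x + p.y }  // plain copies, no ARC traffic
        return total
    }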


> Currently it seems like the main limiting factor in Swift is ARC: it uses atomic operations to ensure thread-safe reference counts

This is exactly what Rust avoids by having both Arc and plain-vanilla Rc. Plus reference counts are only updated when the ownership situation changes, not for any reads/writes to the object.


Rust also backs up this design with the Send and Sync traits, which statically prevent programmers from, say, accidentally sending an Rc<T> between threads when they really should have used an Arc<T> instead.


Now I'm curious, what is the difference between automated ref counting and "vanilla" ref counting? And of these two, where does the C++ shared pointer fit?


ARC as in "Atomic Reference Counting". ARC uses atomic operations to increment and decrement reference counts. That means these operations must be synchronized between threads. Synchronization between threads/cores tends to be an expensive operation.

This is required for reference counting objects between threads. Otherwise, you might have one thread try to release an object at the same time another thread is trying to increment the reference count. It's just overkill for objects which are only ever referenced from a single thread.


Well that is confusing. Any Apple developer has known for many years that ARC means Automatic Reference Counting:

https://docs.swift.org/swift-book/LanguageGuide/AutomaticRef...

I never heard this alternate and very different expansion for that term.


It's an overloaded acronym to be sure. Atomic reference counting is a familiar concept in systems programming languages like C++ and Rust. It just so happens that Apple's automatic reference counting is also atomic.


It's a bit less confusing in practice - the full types are std::rc::Rc and std::sync::Arc (where std::sync is all multithreading stuff, and you have to actually use that name to get access to Arc in your code), and both are well documented (including spelling out the acronym):

https://doc.rust-lang.org/std/rc/struct.Rc.html

https://doc.rust-lang.org/std/sync/struct.Arc.html

...I could see this causing merry hell if trying to do advanced interop between Swift and Rust, though, and it's admittedly probably going to be a minor stumbling block for Apple-first devs. (I managed to avoid confusion, but I just port to Apple targets, they're not my bread and butter.)


> GP: For one thing, Swift is hardly ready for application domains like audio, video or games.

> For instance, I built a toy data-driven ECS implementation in Swift to see just what kind of performance could be squeezed out of Swift, and it was possible to achieve quite impressive performance

I also have a pure-Swift ECS game engine [0] where I haven't had to worry about performance yet. It's meant to be 2D-only, and I haven't really put it to the test with truly complex 2D games, like massive worlds with terrain deformation like Terraria (which was/is done in C# if I'm not mistaken) or Lemmings, and in fact it's probably very sloppy, but I was surprised to see it handling 3000+ sprites on screen at 60 FPS on an iPhone X.

- They were all distinct objects; SpriteKit sprites with GameplayKit components.

- Each entity was executing a couple components every frame.

- The components were checking other components in their entity to find the touch location and rotate their sprite towards it.

- Everything was reference types with multiple levels of inheritance, including generics.

- It was all Swift code and Apple APIs.

Is that impressive? I'm a newb at all this, but given Swift's reputation for high overhead that's perpetuated by comments like GP's, I thought it was good enough for my current and planned purposes.

And performance can only improve as Swift becomes more efficient in future versions (as it previously has). If/when I ever run into a point where Swift is the problem, I could interop with ObjC/C/C++.

SwiftUI and Combine have also given me renewed hope for what can be achieved with pure Swift.

I actually spend more time fighting Apple's bugs than Swift performance issues. :)

[0] https://github.com/InvadingOctopus/octopuskit


> translates into hundreds of executed instructions

My guess is that this would also be true under Rust, as soon as you start using some pretty common facilities such as Rc and RefCell. (Swift does essentially the same things under the hood.)

That said, "hundreds of executed instructions" are literally not a concern with present-day hardware; the bottleneck is elsewhere, especially wrt. limited memory bandwidth (as we push frequencies and core counts higher, even on "low-range" hardware), so it's far more important to just use memory-efficient data representations, and avoid things like obligate GC whenever possible - and Rust is especially good at this.


> "hundreds of executed instructions" are literally not a concern with present-day hardware

Depends on the context. I have that line in a very tight loop in a CoreAudio callback that's executed in a high-priority thread. It should produce audio uninterrupted, as fast as possible, because the app also has a UI that should be kept responsive. The last thing I want to see is objc_msgSend() in that loop. Of course I know I will remove all protocols from that part of the app and lose some of the "beauty" but then what's the point of even writing this in Swift?

For most applications Swift is good enough most of the time. No, it's excellent. I absolutely love how tidy and clever your Swift code can be. Maybe a few things you wish were improved, but every language update brings some nice improvements as if someone is reading your mind. The language is evolving and is very dynamic in this regard.

However, it is not a replacement for C or C++ like we were made to believe. And now that the linked article also explains the costs of ABI stability (even the simplest structs introduce indirections at the dylib boundaries!) I realize I should re-write my audio app in mixed Swift + C.


> Of course I know I will remove all protocols from that part of the app and lose some of the "beauty"

Protocols/traits/interfaces are just indirection - we all know that indirect calls are expensive. Fixing this need not be a loss in "beauty" if the language design makes direct calls idiomatic enough.

> And now that the linked article also explains the costs of ABI stability

I definitely agree about this, though. ABI stability, and especially ABI resilience, have big pitfalls if used by default, without a proper understanding of where these drawbacks could arise. They are nowhere near "zero cost"!


Rust allows you to opt for static dispatch with traits when possible so that there is no runtime cost. See https://doc.rust-lang.org/1.8.0/book/trait-objects.html


> Protocols/traits/interfaces are just indirection

They are indeed. Look at how C++ handles multiple inheritance, for example: literally a few extra instructions for each method call, not more than that. Swift's cost of protocol method call and typecasting seems too high in comparison, and I haven't even tried this across dylibs yet.


> literally a few extra instructions for each method call, not more than that.

Yup, C++ does this by building in lightweight RTTI info as part of the vtable. Swift expands on this trick by using broadly-similar RTTI info to basically reverse excess monomorphization of generic code. (Rust could be made to support very similar things, but this does require some work on fancy typesystem features. E.g. const generics, trait-associated constants, etc.)


> even the simplest structs introduce indirections at the dylib boundaries

Not if you freeze it. The indirection is only required for resilient structs.
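For readers who haven't seen it, a minimal sketch of what "freezing" looks like (hypothetical types; this distinction only kicks in when the library is built with -enable-library-evolution):

    // Resilient by default: clients access the fields through indirection so
    // the library can reorder or add stored properties in a later version.
    public struct ResilientPoint {
        public var x: Double
        public var y: Double
        public init(x: Double, y: Double) { self.x = x; self.y = y }
    }

    // @frozen promises the stored layout will never change, so clients can
    // manipulate the struct directly, with no indirection at the dylib boundary.
    @frozen
    public struct FrozenPoint {
        public var x: Double
        public var y: Double
        public init(x: Double, y: Double) { self.x = x; self.y = y }
    }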


>That said, "hundreds of executed instructions" are literally not a concern with present-day hardware

People really need to stop saying this and stop accepting it as a "truth". It only applies in _some_ applications, and even there it stops applying once you want to do it many times over and over again.


It is "truth" in many cases. On a reasonably high-frequency, high-core count chip, instructions are almost free once you've managed to saturate your memory bandwidth. (Of course, that assumes that the code itself is "hot" enough that it's in cache somewhere, but this is the common case.)


But more instructions means a smaller portion of your code will fit in cache, which means it's less likely for any given code to be "hot".


       (myObject as! SomeProtocol).someMethod()
   
The `objc_msgSend()` call you're observing in this case is likely a call to `NSObject::conformsToProtocol:` [1]; are you absolutely certain that NSObject is not involved anywhere in your class hierarchy?

[1]: https://developer.apple.com/documentation/objectivec/nsobjec...


  > For one thing, Swift is hardly ready for application
  > domains like audio, video or games.
I am curious, is the above based purely on what you said first (hundreds of instructions generated), or do you have some other evidence for that? I know nothing about audio processing, but isn't the bulk of the work done inside highly optimized Core Audio libs, so Swift would not have a big impact here? I am pretty sure SpriteKit/SceneKit/ARKit work fine with Swift.


And as a counter-example: the most widely used platform for mobile games is Unity, and most Unity games implement their important stuff in fully-managed C# (which has, among many other performance issues, a fairly intrusive garbage collector).

Yeah, there's a move away from C# towards a Burst-compiled unmanaged subset, but it hasn't happened yet. And yes, Unity itself is C++, but all your game code is still in Mono/C#, and calling into the engine doesn't make all that go away. There are still plenty of tight loops in managed code.

In short - a lot of mobile game developers seem happy to sacrifice bare metal performance if they get something back in return.


As long as you use the standard AU components, which themselves are written in C, you should be fine. However, take just one step outside the standard functionality, e.g. processing or generating the audio sample stream yourself in Swift, and it can become troublesome. I tried to profile my audio processing loops and I saw the bottlenecks in some RTL functions that deal with Swift protocols. Like I said in the other comment, I will remove protocols from that part of my code and lose much of its "swifty-ness", but then why would I even write it in Swift?


I think another way to view this is that while Swift can be performant, idiomatic code from practically any other language may be utterly wrong in Swift.

For instance recently this nested data structure "issue" was brought up (again) in the Swift community: https://mjtsai.com/blog/2019/11/02/efficiently-mutating-nest...

If you had a nested set in a dictionary:

```
var many_sets: [String: Set<Int>] = ...
var tmp_set = many_sets["a"]!
tmp_set.insert(1)
many_sets["a"] = tmp_set
```

vs.

```
var many_sets: [String: Set<Int>] = ...
many_sets["a"]?.insert(1)
```

The performance is entirely different (e.g. you are making a copy of the Set in the first example). Prior to Swift 5, you would have had to potentially remove the set from the dictionary in order to make sure there were no unintentional copies.

While the examples are contrived to some degree, I think at least a few new Swift programmers would look up something in a dictionary, pass the value into a function thinking it's a reference, and then, when they realize it isn't being changed in the dictionary, set the value in the dictionary after the function returns, like:

```
var many_sets: [String: Set<Int>] = ...
let changed_set = process(set: many_sets["a"]!)
many_sets["a"] = changed_set
```

It is "easy" to understand what is happening when you know Swift's collections are value types and about copy on write and value vs reference semantics, but it is also an easy performance issue.

Furthermore, when web framework benchmarks like: https://www.techempower.com/benchmarks/#section=data-r18&hw=... show Java's Netty vs. Swift NIO (which is based on the architecture of Netty), I think that it indicates that you cannot just port code and expect anywhere near the same performance in Swift.


Yes, collections within collections (or any structs for that matter) are another thing in Swift that you ignore at first, until you discover some side effects and realize how horribly inefficient your code might have been so far. But to be fair, you are not protected from similar inefficiencies even in C, where structs can be copied without you fully realizing the associated costs, especially if the struct is declared by someone else, not you. And I like how C++ protects itself in this area: define the constructors that you think are most suitable for the type.

I really wish Swift moved a notch towards C++ in some areas especially where the designer of the type can define the usage in very precise terms. Is it copyable? Is this method abstract? Maybe also struct deinit, etc etc.


It does seem to me to be moving in that direction - not at the type level but at the member level.

Property wrappers already allow some interesting possibilities with customizing the storage and usage of particular member variables, and there was a thread today about exposing the memory locations of reference type members, which would unlock a lot of optimization opportunities:

https://forums.swift.org/t/pitch-exposing-the-memory-locatio...

I'm not sure whether Swift can ever really get there with respect to performance, given the foundational decisions regarding ARC and copy-on-write. But I would love a language with Swift's type system and sensibilities and a bit more control over how memory is handled.


For future reference, code blocks here on HN use Markdown syntax (that is you indent them four spaces), not GitHub-Flavored Markdown syntax (triple backquotes).


FYI, that’s a pretty expensive line of code. The compiler has to search myObject’s type metadata and protocol conformance records to find the conformance, then it has to create a new SomeProtocol existential container, copy myObject to the new container (potentially incurring retain/release traffic), use a witness table to dynamically call the method, and finally destroy the existential container. Dynamic casts are slow; if you can restructure your code to avoid the cast then it won’t have to do a bunch of that extra work.
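A rough sketch of what that restructuring could look like (hypothetical names): if the conformance is known at compile time, keep the static type information instead of recovering it with a runtime cast.

    protocol SomeProtocol { func someMethod() }
    struct MyThing: SomeProtocol { func someMethod() { /* work */ } }

    // Expensive: type information was erased, so the cast must search
    // conformance records and box the value at runtime.
    let erased: Any = MyThing()
    (erased as! SomeProtocol).someMethod()

    // Cheaper: declare the value as the protocol type up front (one
    // existential, no conformance lookup at the call site)...
    let known: SomeProtocol = MyThing()
    known.someMethod()

    // ...or keep the concrete type and let the compiler dispatch statically.
    let concrete = MyThing()
    concrete.someMethod()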


Yes, with all that in mind, the complexity I see in the generated code still far exceeds my intuitive expectations. Of course I'll end up removing protocols from critical parts of my code, but like I said in the other comments, then what's the point of writing those parts in Swift? Protocols are a core part of the language; they are offered as a substitute for multiple inheritance and even for e.g. enforcing abstract methods (there is no other idiomatic way in Swift); they are elegant and look cheap, except they are not!


The really expensive part here is not the use of a protocol, it’s the downcast (which isn’t really idiomatic Swift). Static dispatch is always faster than dynamic dispatch/polymorphism, but protocols are usually reasonably efficient (even more so if you can use generics instead of existentials).
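To illustrate the generics-vs-existentials point with the same hypothetical protocol as in the sketch above:

    protocol SomeProtocol { func someMethod() }  // same hypothetical protocol as above

    // Existential parameter: the argument is boxed and the call goes through
    // a witness table at runtime.
    func runBoxed(_ value: SomeProtocol) {
        value.someMethod()
    }

    // Generic parameter: the compiler can specialize for each concrete type,
    // which usually allows static dispatch and inlining within a module.
    func runGeneric<T: SomeProtocol>(_ value: T) {
        value.someMethod()
    }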


> For one thing, Swift is hardly ready for application domains like ... games.

That is untrue; See my comment at https://news.ycombinator.com/item?id=21491597


I really enjoy Swift's philosophy of setting up higher-level abstractions, and then pirouetting under the hood to get performance near the ballpark of C++ and family. I'm a big believer in "Just Works pretty-fast by default, optimal by deep-dive".

Too bad Apple hasn't shown much interest in supporting Swift on other platforms; I know efforts exist but they all seem like second-class citizens. I don't really want to invest the time learning a new language that's locked to a single ecosystem.


> get performance near the ballpark of C++ and family.

That's a bit of an overstatement; Swift isn't really in the same ballpark as C++. Its performance characteristics are more like those of a managed language.

In the ixy driver implementation [1], they ended up with results comparable to JavaScript in terms of throughput [2], and to C# in terms of latency [3].

[1]: https://github.com/ixy-languages/ixy-languages/blob/master/R...

[2]: https://github.com/ixy-languages/ixy-languages/raw/master/im...

[3]: https://github.com/ixy-languages/ixy-languages/raw/master/im...


The main problem with this challenge is that a) most Swift developers are more used to writing front-end code, and b) the specific network card isn't available to everyone that would like to look into this problem (like people with MacBooks). So while this driver seems to spend 3/4 of its time in retain/release, and I see the word `class` everywhere and the word `inline` nowhere (while inline is used in the C version it's based on!), I just can't do anything to improve its performance.


Yeah, I had the same reaction. If you look at the source, it looks like they lean heavily on reference types, which is a worst case for Swift performance. Specifically, running tight loops with reference-counted objects comes with a huge cost, which seems to be exactly what this code does. I would love to take a crack at optimizing it, but I can't run it on my system.

Still, Swift definitely makes some compromises when it comes to performance. ARC is pretty costly in general (in terms of time; it's pretty cheap memory-wise), thanks to heavy use of atomic operations, and copy-on-write for value types has really nice properties as far as making it easy to write correct code, but it can result in unnecessary data duplication which is hard to optimize, since you're basically at the mercy of the compiler to make it more efficient.

A lot of these problems have possible solutions which haven't been implemented yet - in the long term I'm curious how much performance could be improved since I think Swift really could be the "sweet spot" language in terms of performance and usability.


Swift has been made available for Linux for the past four years:

https://swift.org/download/#releases

https://www.digitalocean.com/community/tutorials/how-to-inst...

I’m not sure what’s missing as far as libraries, other than UIKit.


Didn't realize the Linux version was offered through the official channels. That's something.

Still, to truly compete it would need to have Windows support too. And ideally real buy-in from at least one other major tech company.


As the other poster has mentioned, IBM has put a lot of effort into server-side Swift. Also, Google is investing in Swift for TensorFlow, which means there is a team at Google whose job it is to work on the Swift compiler every day.

edit:

If you want you can use Swift on Google Colab right now:

https://colab.research.google.com/github/tensorflow/swift/bl...


A team that is led by Chris Lattner, creator of Swift. Not bad!


Which is why they went with Swift, with its poor CUDA and Windows support, instead of Julia.


https://dev.azure.com/compnerd/windows-swift

"Support" is the kicker - I consider C# and C++ to have Windows Support because the platform vendor publishes and provides support for their own developer tools.

Do you mean that, or maybe something closer to the level of "Support" where interested parties submit improvements, and platforms are included in the CI/CD process?


> Windows Support because the platform vendor publishes and provides support for their own developer tools.

It's not from Microsoft directly, but it's worth noting that LSP support for Swift is under active development. VSCode is probably currently the second-best IDE for Swift development.


IBM uses Swift for mobile apps https://developer.ibm.com/swift/2015/12/03/introducing-the-i... and develops Kitura (https://www.ibm.com/cloud/swift), a Swift web framework.


IBM is no longer offering that Swift mobile dev kit.

The IBM Swift Sandbox is no longer available as of January 2018.


The kit is available, the web browser sandbox isn't.


> I’m not sure what’s missing as far as libraries, other than UIKit.

It's quite usable. A couple years ago, I tried Swift for Linux when it came out, and it was a dreadful experience. But now I do things with Swift in Docker containers and basically don't think about it.

There are a few things which aren't implemented (IIRC things like XML parsing support), but it's mostly things I wouldn't use or would use a library for anyway.


A key point of this article is exactly that Swift abstractions aren't nearly as "high level" as they could be. That "ABI-resilience by default at dylib boundaries" choice, while understandable, is also quite costly in terms of added complexity to the language and implementation. We already saw this happen to a different extent with C++; I think that Rust developers just saw the writing on the wall and that's why they ultimately chose to stick with a very C-like approach, where the user needs to deal with all ABI-resilience concerns herself.


> ABI-resilience by default at dylib boundaries

Actually ABI-resilience is not the default, it is enabled by a compiler flag.

In addition to that, there are some current compiler limitations when it comes to cross-module usage of generics and inlining that affect performance. But those are not by design and can be improved in future compiler versions.


If only a compiler/runtime was somehow endowed with enough artificial intelligence to decide on optimal data structures and algorithms to provide this magical 'fast by default, optimal by fine tuning'.

High level languages are fat, inefficient, rigid and what every large corporation wants, because they share so much in common.


Interesting comment on Reddit: https://www.reddit.com/r/rust/comments/dtqm36/how_swift_achi...

> You can actually already do dynamic linking in Rust, but only with C-based interfaces. The C-like interfaces that gankra talks about are I believe more similar to current Rust than to C, so I think they shouldn't be called C-like. They could support structs, enums, trait objects, lifetimes, slices, etc. "Stable ABI Rust" would be a name that's more fair. Addition of generator based async and generics of course won't work, but not all interfaces actually need those features.

> I think there is definitely potential for a "better dynamic linking" RFC that adds a stable ABI to the language. Of course, the default compilation mode should keep using unstable ABIs, generics, async, etc. But allowing crate writers to opt into a stable ABI would be beneficial for precompiled crates as well as compile times which are still the biggest problem with the Rust compiler for many people.

(a crate is a Rust package)


>But allowing crate writers to opt into a stable ABI would be beneficial for precompiled crates as well as compile times

I think there is already a fuzzy convention that could basically be made to enable this. A crate name ending in -sys is expected to provide a C-like interface, and to thus be usable in a dylib context, whereas the same crate name with no suffix or with a -rs one provides a statically-linked, ABI-unstable wrapper to its corresponding -sys. The build system just needs to be made aware that the former kind of crate need not be recompiled when potential ABI-breaks are introduced.

> (a crate is a Rust package)

A crate is a Rust compilation unit. Closer to a .cxx, .o file in C/C++ than what are usually called "packages" there.


-sys crates are raw wrappers around C code, so there wouldn't be much benefit to exposing them as dylibs.

And crates are the equivalent of Python, Node, Go, etc. packages. That they are also a compilation unit is, I believe, an implementation detail. I think they also made it configurable.


A crate is a compilation unit. A package is defined by a Cargo.toml, and can have one or more crates.

Most folks use “package” and “crate” interchangeably, even if they’re technically different.

See the bottom of https://doc.rust-lang.org/book/ch07-00-managing-growing-proj...

(Additionally, “compilation unit” is a bit weird, given incremental compilation, etc. “the file containing the root module tree that gets passed to rustc” is not as succinct though.)


We can split one crate into multiple units for better parallelization, but I don't think there is a way to put multiple crates into a single compilation unit.


A crate is a compilation unit. It's not configurable.


In reading comments about Swift on this and other HN threads I see a lot of opinions that, in my experience, are completely off the mark: a) "Apple cares only about iOS so that's all Swift will be used for"; b) "Swift is in fact slow, look at this (naive) Swift code vs. this (heavily optimized) C/C++ code".

Meanwhile, people who take the time to really work with Swift (yes, those are often iOS and Mac developers by necessity) come away with the opinion that Swift is really a diamond in the rough, a new generation of language co-evolving along with Kotlin and Rust. One distinction is that Apple can afford to hire top-tier compiler developers (including the creator of Rust) and invest heavily in tooling and growing the Swift ecosystem. They can pull off projects like ABI stability, which took more than a year of focus for the whole team.

I point this out because there is always some sort of opportunity where widespread public perception is so mismatched with reality. I predict in the future there will be some tech startup that will go all in on the Swift ecosystem and be able to run circles around the competition.

Note for responders: I'm not saying the languages aren't just as good, or better. I'm not excusing the fact that Apple definitely dictates the priority of where the compiler team invests their time.


> I predict in the future there will be some tech startup that will go all in on the Swift ecosystem and be able to run circles around the competition.

Personally I'm really interested with what's going in with Swift in the math/science space. The work that's going on with Automatic Differentiation is fascinating, and the Numeric library which has just been released should make it much easier to achieve highly accurate numerical results in Swift.

I agree with you that Swift's public perception is out of step with the reality of the language. It certainly has issues, and when comparing the developer experience with something like Rust, it falls way short on things like tooling, and platform support, but it's just so easy to be productive in Swift I keep coming back to it.


Supporting both polymorphic and monomorphic compilation helped Swift a lot, but I think the key difference was ultimately just that Apple had a more significant motivation than Mozilla to pursue dynamic linking and way more resources to throw at this very hard problem.

Interesting stance.


All of macOS's system libraries are dynamically linked, so there's just no way Swift could be used in the OS if it didn't do this right.


They're C libraries; there are very few languages which can't dynamically link to C. The essay is about Swift dynamically linking to Swift directly (not through a C ABI as e.g. C++ or Rust would).


That's why I'm talking about use of Swift itself to implement platform libraries.


The last part of the line reads (rather politically incorrectly) that Apple has more talent available to solve this problem.


Or they were just willing to accept a significant increase in language complexity, to deal with things that Rust just punts on by basically expecting you to stick to #[repr(C)] at your preferred dylib boundary. (Though, potentially, that #[repr(C)] could become e.g. #[repr(SomeArbitraryStableABI)], and there have been proposals to this effect.) And they did this precisely because of that perception that Swift "wouldn't be usable" otherwise.


Resources != Talent, and the line talked about resources, not talent.

It's pretty obvious that Apple can throw more money at the problem than Mozilla can, if they choose to do so. That means they can buy more developers to work the problem, which is bound to be helpful whether those developers are particularly talented or not.

There's really nothing politically incorrect about that.


Given that the author is a Mozilla employee (working on Rust) and former Apple one (on Swift), I'd be surprised if your interpretation was correct.


Yeah motivation is the bigger factor (why would Mozilla care about the system APIs of an OS, of all things?) but also it doesn't hurt that Apple is one of the richest companies in the world (depending on the day).


I do admit that English is not my first language...but I am struggling to interpret "Apple had way more resources to throw at this very hard problem." any other way.


It depends what you mean by “talent”. If you use it in the HR way, where a “talent” is an employee, then yes, talent = resources, but then I don't understand how this is “politically incorrect”.

What I understand when you say this:

> the line reads (rather politically incorrectly) that Apple has more talent available

Is that Apple employees are more “talented” than Mozilla's. That's politically incorrect, but I'm pretty sure that's not what's meant here.


Maybe they were equal... but Apple had a greater number of "talent" available.

Which then begs the real question: is Swift what Rust would be if Mozilla had more money/resources?


Cash is a resource.


Or maybe it's just the hard truth (which wouldn't be surprising considering the valuation of Apple and Mozilla respectively)


How would it not?


I have no idea of the amount of resources spent, so I'm not sure I can comment on that.


Here's one thing I don't understand: In addition to enabling dynamic linking, this mechanism allows Swift to compile less code (generic functions only need to be compiled once) and therefore reduce instruction cache misses.

But certainly the tradeoff for this type-info-vtable "witness table" and much heavier boxing must impact the data cache miss rate. What does the tradeoff end up being in practical terms? Is it worth it?

Also, although it seems there's features that let you "freeze" a type, is there a practical way that a non-language expert could ever profile and discover that they may want to use such a feature to improve performance?

Especially given that Swift programs on iOS probably only dynamically link to the system libraries, this seems like a powerful language feature to enable something that could have been achieved by writing the Swift system libraries as typed facades around a typeless core, which you so often see in C++ STL implementations.


What a well written article! Thanks for taking the time to post it!


The technicalities behind Swift's work on ABI stability are very interesting, but I remain unconvinced that developers care about ABI stability nowadays, outside of security updates and the very basic layers (syscalls, WinAPI...).

In the past, ABI stability was way more important for many companies because there were many more closed source dependencies, way less access to online updates, way less emphasis on CI/CD, etc.

The argument for size, by avoiding several copies of the std runtime, is strange in 2019, especially considering Apple's policy of deprecating things and forcing devs to update apps constantly.


ABI stability is not about size! It's about enabling the libraries to evolve simultaneously with the app.

For example, in the next version of the OS, all table views get type select. Combo boxes work better with screen readers. Menus get dark mode. etc.

An OS can provide much more than libc or Win32 "basic layers". It can provide a whole UI vocabulary, and apps which use its frameworks enjoy new features as the OS improves. That's the hope at least.


ABI stability is absolutely (also) about size, though: one of the big issues iOS developers have/had with Swift is/was that it would make the size of the bundle explode (compared to an equivalent Objective-C application), as the application would need to bring along much of the standard library.

Until there start being core Swift-only APIs, your point is already solved, because regardless of the Swift library, the underlying functionality and OS interaction is mediated through a C library which is dynamically linked.


In the linked blog post they mention size explicitly:

> ABI stability for Apple OSes means that apps deploying to upcoming releases of those OSes will no longer need to embed the Swift standard library and “overlay” libraries within the app bundle, shrinking their download size; the Swift runtime and standard library will be shipped with the OS, like the Objective-C runtime.

For new UI backends you don't need a different interface, you provide the new UI under the old interface. If your new elements have new behavior you will need to update your app anyway.


Sorry, you are correct, I was imprecise.

"ABI stability" is about defining an ABI. "ABI resilience" is defining how libraries can evolve in a binary compatible way. Stability is a precursor to resilience.

Apple would like to write libraries in Swift, but those libs have to participate in an ABI that is stable (so apps can use them) and resilient (so Apple can evolve them without breakage).

> For new UI backends you don't need a different interface, you provide the new UI under the old interface

The challenge is how to provide new UI features without breaking existing apps. For many UI frameworks (the web in particular) the compiler/runtime has a global view and can sort it out. But if both the app and library are separately compiled, the problem becomes trickier.


Why do we need ABI stability for these features?

Libraries can evolve just fine by providing backward compatible changes as far as I can tell?


Yes that's right: ABI stability is all about nailing down which changes are "backwards compatible."

In C++, you might wish to add a field to a struct, or add a new virtual method, or change the inheritance hierarchy, or the type of a parameter, etc. But such changes are not ABI compatible and will break every app until they are recompiled. The C++ ABI compat story is very strict.

Modern ObjC has a more generous policy, leveraging its dynamic nature. For example you can add fields or methods to classes, without recompiling dependencies. But you pay an optimization penalty, since the apps have to defer offset calculation until runtime.

Swift attempts to have its cake and eat it too. Swift enables you to be explicit about which types might change in the future, and which types are "frozen" and can be aggressively optimized. Furthermore, you can explicitly draw where the ABI boundaries are: THESE parts are always compiled together and so intra-references may be aggressively optimized, but THOSE parts may need to evolve separately, so enforce an ABI there.
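A small sketch of what that explicitness looks like in practice (hypothetical types, with the library and client conceptually in separate modules, and the library built with library evolution enabled): frozen types give the optimizer a layout and case set that is fixed forever, while resilient ones force clients to tolerate future additions.

    // Library side:
    @frozen public enum Parity { case even, odd }  // cases fixed forever
    public enum ConnectionState { case idle, connecting, connected }  // may gain cases later

    // Client side: switching over a resilient enum must account for cases
    // added in a future library version.
    func describe(_ state: ConnectionState) -> String {
        switch state {
        case .idle: return "idle"
        case .connecting: return "connecting"
        case .connected: return "connected"
        @unknown default: return "a state newer than this client"
        }
    }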


> But you pay an optimization penalty, since the apps have to defer offset calculation until runtime.

isn’t that penalty only one time, when the first message is sent? after that it seems pretty dang fast [1]

[1] https://mikeash.com/pyblog/friday-qa-2012-11-16-lets-build-o...


A library can't provide backwards compatibility, or even any compatibility, if a newer version of the compiler decides to change the layout of a structure or an enum, thus breaking the ABI for everything, including parts of the library that didn't change.


ABI (and module) stability is a big thing for people shipping iOS frameworks.


Yes, but the guarantees Swift provides are not a requirement for that.


They are somewhat important if you cannot recompile the framework to match the version of Swift your app is using.


ABI stability benefits users too. It reduces payload sizes because runtime can reside in the OS as a shared library.


I answered that in the last paragraph. I don't see the appeal in 2019, in particular in the Apple ecosystem, where backwards compatibility is not respected that much.


How is that irrelevant? If the app you install is 20MB smaller, it is still 20MB not taken from your data cap and it is still less data to transfer, thus faster. Before ABI stability, every iOS app had to come with a bundled Swift runtime/libs. Even if two apps used exactly the same version, you'd still get two copies. With ABI stability, apps can use what iOS provides and no longer need their own copies.


20 MB is a trivial amount compared to anything else going on in your phone, including sending a few high-res shots. Many apps and iOS updates are an order of magnitude bigger, and they are not even games.

If size was such a huge concern, Apple would have provided it a long time ago in Swift.


Size is a huge concern. The article is exactly about why it wasn't done a long time ago, because it is a very hard problem.


Is this a real-world advantage? Are iOS and OSX binaries significantly smaller than Android, Windows, and Linux binaries? Not noticeably in my experience, but I could be wrong.


> significantly smaller than Android, Windows, and Linux binaries

Linux distributions do use dynamic linking (.so files are dynamic libraries) like OSX does (but with .dylib).

Windows also has dynamic linking (.dll), but it is less frequently used, because the lack of a package manager with dependency management requires application vendors to distribute non-system libraries with their application anyway.


Hmm, then why is the article written like Swift has any significant advantage?


You could instead interpret it as C++ and Rust (minus the C-like parts) having disadvantages. Swift performed some compiler heroics to keep the advantages while still having similar features to them.


Constantly? The Mac 32-bit x86 ABI was supported for 14 years. 32-bit iOS lasted 9 years. Those are the only two ABI switches this decade. And how do "forced" updates negate the size-of-code-on-disk/memory issue?


We are not just talking about arch/system ABIs but also about language/library ABIs.

In any case, 14 years is definitely a very short time for arch/system ABI support, especially compared to Linux or Windows, which will basically never kill x86 ABI support.

Apple has just killed thousands of apps and games that people are using.


> which will basically never kill x86 ABI support

Ubuntu decided to drop i386 support as of 19.10, for one. The x86_64 kernel still supports running 32-bit software, though, and multilib support is there. The kernel is unlikely to drop support for the architecture, but if distributions stop shipping it, it will die off at some point.


In principle, yeah, the Win32 ABI is still supported. In practice that has become untenable, so for old enough software people just run it in a Windows XP virtual machine instead.


I want to try Swift, but the fact that there is no Windows support is a deal-killer for me. And it perplexes me that over the years there have been no serious attempts to make it happen. Is it just a cult-of-Apple + Linux-masterrace thing?


I 100% agree that lacking Windows support is a dealbreaker for many.

I actually cared a lot about the ideas behind Swift and wanted to (try to) contribute in some way. But without any Windows support, and lacking tooling for Linux+WSL, it's really hard to stay motivated.

Here is what Chris Lattner said in mid-2018:

> I think that first class support for Windows is a critical thing, and I doubt anyone in the Swift community would object to it. The problem is that we need to find someone who wants it badly enough and has the resources/know-how to make it happen.

I think that's the root of the problem. First-class Windows support is too complex a task for the community alone; it should have been initiated by Apple/Microsoft.

This is what I admire about Golang and Rust: they focused on developer support early on, and as a result they are (currently) more usable.


My initial reaction is that neither Apple nor Microsoft has an incentive to get Swift going well on Windows. It would expose Windows developers to the language used to build apps on a competing platform, which could steer them towards Apple. And vice versa: it would allow Apple developers to start thinking more about writing Windows apps, possibly steering them away from the Apple platform, or at least diluting their time on it.


Windows is nonexistent as a mobile OS, and people deploying Windows Server are probably already entrenched in .NET technologies anyway.

Provided that you can still install Linux on any computer to develop on, I don't see a real market for Swift on Windows.


It's a good thing Microsoft doesn't think that way. It apparently has metrics showing that desktop development is not stagnating but rising, thanks in part to Windows desktop OS usage also rising [1]. See its plans for WinUI v3 [2]. WinUI is not cross-platform, but Microsoft is talking about making it so.

[1] Comments from Ryan at MS https://www.dotnetrocks.com/?show=1660

[2] https://github.com/microsoft/microsoft-ui-xaml/blob/master/d...


Windows does exist as a mobile OS.

It runs on tablets, hybrid laptops, and lots of custom-made handhelds.

It is the go-to option for anyone not willing or able to acquire an iPad, given the bad experience of the large majority of Android tablets.


There are Windows technologies not available on OS X or Linux; is that a Windows-masterrace thing?

If there is no support for Windows, it may indicate there was little demand for it. The biggest domain for Swift is still Apple's OSes (macOS, iOS, iPadOS, watchOS, tvOS), plus some attempts to use it on the server side. Swift is open source; if someone wants to make it happen on Windows, they are welcome to. I do not see why anyone at Apple, or anyone working on Linux, would bother to do that.


There are people (not from Apple) working on this.


> I want to try swift but the fact that there is no windows support is a deal killer for me.

There’s no official support, at least not yet. But it’s in the tree.

> And it perplexes me how over the years there have been no serious attempts to make it happen.

https://github.com/apple/swift/search?q=windows&unscoped_q=w...


If it's a real language, it's got to work on Windows. Having written a lot of portable code in C on both Unix and Windows, I can say that the reason I fell in love with Golang and Rust was that they had out-of-the-box Windows support.



It would be super cool if Rust supported the Swift ABI. Currently, using Rust for macOS or iOS applications necessarily involves adding a plain C layer between the Swifty UI and the Rusty back-end.


You can go in the other direction, no? Write a Rust library with a C FFI (directly in Rust) and call it from Swift?


This is what I meant by a "C layer". No matter how you slice it, you have to "dumb down" communication to the C level and generate C headers. Then you either work with opaque types, or have to generate extra translation and/or wrapper types for classes, closures, etc. Both languages treat C as "unsafe", so it also makes memory safety trickier than if they could use their native safe APIs.
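
For illustration, a hypothetical sketch of what that wrapper layer tends to look like from the Swift side (every name below is made up; assume a Rust crate exports these functions via extern "C", with a header generated by something like cbindgen and pulled in through a module map or bridging header):

    // Assumed C interface produced from the Rust crate:
    //
    //     typedef struct Engine Engine;              // opaque to Swift
    //     Engine  *engine_new(void);
    //     int32_t  engine_step(Engine *e, int32_t input);
    //     void     engine_free(Engine *e);

    final class EngineHandle {
        private let raw: OpaquePointer

        init() {
            raw = engine_new()        // Rust allocates; Swift only sees an opaque pointer
        }

        func step(_ input: Int32) -> Int32 {
            engine_step(raw, input)   // every call crosses the C boundary
        }

        deinit {
            engine_free(raw)          // ownership must be managed by hand
        }
    }

All type information is erased to an opaque pointer at the boundary, and lifetime management has to be reconstructed manually in deinit; that is the "dumbing down" being described.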


OK, yeah, I understand what you mean. It would be super interesting to have closer integration between Swift and Rust; for instance, if you could treat Rust traits as protocols or vice versa. The idea of writing low-level memory handling in Rust and then operating on it with high-level Swift code seems super compelling.


I definitely see the desire to replace Objective-C, but the downside of ABI stability seems to be that it prompts a lot of worries about new features or optimizations affecting it. I hope this won't add too much sand in the gears.


A stable ABI, and especially the issue of generics, is why I think bytecode+JIT is the right approach for a language that operates at this level of abstraction. The higher-level semantics of bytecode allow for much more flexibility wrt the ABI, and a JIT lets you compile generic code monomorphically across ABI boundaries, and even inline it. A long time ago I did some experiments with very carefully written C# code, and it was capable of the same abstraction as C++ STL containers and algorithms while producing highly efficient native code, because everything was inlined at runtime.


> In the extreme case, we could make a system where everything is an opaque pointer and there's only one function that just sends things strings containing commands.

Interestingly, Objective-C has other ABI issues…


what are the issues?



The fragile ivar issue is fixed in the modern Objective-C runtime.


Right, non-fragile ivars shipped with the 64-bit runtime as mentioned in the link.


Does anyone know of a good technical comparison of Swift vs. Kotlin (and eventually vs. Rust and TypeScript)? I would like to see their distinctive features and their misfeatures.


I think the most notable difference is the memory-management paradigm: Swift uses automatic reference counting, while Kotlin (on the JVM) relies on a tracing garbage collector. The syntax is very similar between the two languages, something like 80% identical, but that's trivial.
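
To make that concrete, a tiny sketch of the Swift side of the difference (class names made up): ARC frees objects deterministically when their reference count hits zero, so cyclic references have to be broken by hand, whereas Kotlin on the JVM leans on a tracing GC that collects cycles automatically.

    final class Author {
        var book: Book?
    }

    final class Book {
        // Without `weak`, Author and Book would retain each other and leak
        // under ARC; a tracing GC would reclaim the cycle on its own.
        weak var author: Author?
    }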


How do Kotlin's interfaces compare to Swift protocols? Being able to do things like provide default implementations in a protocol is one of the biggest Swift features for me.
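
For reference, this is roughly the Swift feature in question (names made up): a protocol extension supplies a default implementation, and conforming types can either take it as-is or provide their own.

    protocol Greeter {
        var name: String { get }
        func greet() -> String
    }

    // Default implementation lives in a protocol extension.
    extension Greeter {
        func greet() -> String { "Hello, \(name)!" }
    }

    struct Plain: Greeter {
        let name: String                               // uses the default greet()
    }

    struct Pirate: Greeter {
        let name: String
        func greet() -> String { "Ahoy, \(name)!" }    // overrides the default
    }

    // Plain(name: "Ada").greet()   // "Hello, Ada!"
    // Pirate(name: "Anne").greet() // "Ahoy, Anne!"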


It's not exactly the same thing, but you can kind of accomplish something similar to "Swift protocols with default implementations" with abstract classes:

https://www.programiz.com/kotlin-programming/abstract-class


Why the swipe at Rust?


I haven't read the article yet, but important context here is probably that Gankra has a lot of experience with Rust. They literally wrote a lot of the book on unsafe Rust...

See their list of publications for some examples: https://gankra.github.io/blah/

(I found this while trying to determine if Gankra = Gankro, it turns out it does).

Edit: Now that I've read it, I can say I thought they were pretty fair to both languages. This certainly isn't a comparison between them, but they made some legitimate criticisms of both when that criticism happened to be an interesting point of comparison.

Really good article, I highly recommend reading it if you like low level compiler stuff.


> Also for context on why I'm writing this, I'm just naturally inclined to compare the design of Swift to Rust, because those are the two languages I have helped develop. Also some folks like to complain that Rust doesn't bother with ABI stability, and I think looking at how Swift does helps elucidate why that is.


The author worked on both compilers (first Rust as a summer intern at Mozilla, then Swift at Apple, and now back again on Rust at Mozilla). They're one of the best-placed people to talk about the differences between these two languages.


More accurately I worked on stdlib stuff for both, with a focus on collections. It's just that this naturally pushes you into minoring in language design and majoring in the low-level details of the language. Plus it's hard to not pick this stuff up if you have to hang out with compiler people all day.


Thanks for the clarification!


There is no swipe. It seems like a totally objective statement. There is no inherent (value) judgment here.

The article is really well written and easy to digest. Especially if one has a bit of compiler background.


There is a swipe though. The title can easily be interpreted as Rust having attempted and failed to implement dynamic linking, which I don't believe is true. Replacing "couldn't" by "didn't" would be much more objective and would not be interpretable as a value judgement.


It's literally true that Rust tried and failed in the sense that Rust's early design was extremely similar to Swift's, polymorphic compilation and all, and it was thrown out when it didn't seem to work. Swift pushed on it harder, and got it to work.

But it's less so true in the sense that the two teams had different use cases, and simply took divergent paths. Rust took the "easier"* path that limited expressivity in favour of a simple execution model that requires a minimal runtime, which in turn enabled them to focus on more interesting static analyses. Swift took the other path.

* This may sound like a slight against Rust, but both the Swift and Rust folks who worked on this stuff agree that what Swift did is a comically huge amount of work in pursuit of a relatively niche use case (idiomatic-feeling system apis). There's only so much time in the world, and Rust spent its time on other problems.


> But it's less so true in the sense that the two teams had different use cases, and simply took divergent paths. Rust took the "easier"* path that limited expressivity in favour of a simple execution model that requires a minimal runtime, which in turn enabled them to focus on more interesting static analyses. Swift took the other path.

> * This may sound like a slight against Rust, but both the Swift and Rust folks who worked on this stuff agree that what Swift did is a comically huge amount of work in pursuit of a relatively niche use case (idiomatic-feeling system apis). There's only so much time in the world, and Rust spent its time on other problems.

That I completely agree with, and I don't think it's a slight at all. The work you outlined for Swift to have a stable ABI was a humongous (and truly novel and original) undertaking, but because it's so large it needs a real motivation and an expectation of ROI, and I don't think that ROI was, or would have been, there for Rust: there is little chance that a system would be shipped with Rust as the baseline for system and application libraries (in the sense of libraries expected to be consumed essentially only by Rust, which is what a Rust ABI would imply). So while there are use cases for a Rust ABI, they're limited (safer plugins and the like, maybe a system-provided dynamically linked stdlib).

On the other hand, Swift is intended as the baseline application language on all Apple platforms, meaning Apple has a large incentive to be able to ship and update Swift-level libraries as part of its OS and other packages, both to reduce application bundle sizes and to be able to update libraries as it goes (without those libraries being limited to fairly shallow shims over C-level libraries where the actual meat would be).


Advantages and disadvantages were mentioned for both languages; for example Swift's "surprising performance cliffs"


Adding back the word How softens the blow.


Agreed. We put it back. HN's software strips leading hows, which mostly but not always improves things.


Yeah it read quite a bit differently when I clicked the link. Didn't realize the title was modified.



