"... Swift's Stabilized ABIs on Apple platforms ... are not actually properly documented, xcode just implements it and the devs will do their best not to break it. Apple is not opposed to documenting it, it's just a lot of work and shipping was understandably higher-priority."
Having worked in Apple's Developer Publications department for eleven years, I can confirm that this statement is mostly correct. Apple has the financial resources to hire more technical writers and properly document all their APIs, but it's just not a priority.
The sad thing is that developers outside Apple need this documentation and would even be willing to help write it. But they can't because, as a matter of policy, Apple does not generally involve the developer community in writing their official documentation.
The word clock time.
What else exactly do you need?
It's a reference page for the members of the struct AudioTimeStamp:
I wish more C/C++ libs had documentation this good...
What unit it reflects. What epoch it uses. When does it wrap around? Is it real time, or a best guess? Is it wall-clock like CLOCK_REALTIME, or closer to CLOCK_MONOTONIC? Is the valid range the entirety of possible uint64 values, or is there a range limit?
Granted, the documentation isn’t the best at explaining it. But if you know what a word clock is in terms of audio, you know what the value is: a counter.
That should be in the documentation. Each sample is guaranteed to increment relative to the previous sample.
But, it doesn't answer all of the questions.
Does the word clock always start from 0? Is that an assumption that can be made?
Or is the valid range anywhere inside a uint64?
What happens if the number of samples exceeds the length of the word clock unit? Is it allowed to overflow? If so, that violates the assumption that it always just increments.
But that particular API does the opposite. Just typedef-ing that data as an opaque `WordClockTime_t` would go a long way toward fixing this, telling API users to ignore how it works internally and enabling automated documentation tools to locate and list every other API that produces/consumes this particular value. A simple automation-friendly abstraction that would reduce, if not eliminate, the need for additional manually-written documentation. i.e. Put the knowledge in the code and the automation can graph it.
Alas, there’s something about C programming that seems to bring out the worst abstractions in many C programmers… and if they’re being lazy in their API design, they’ll be twice as lazy when it comes to manually documenting it.
"What is wrong with giving tree, here?"
"Well, he don't know talking good like me and you, so his vocabulistics is limited to 'I' and 'am' and 'Groot.' Exclusively, in that order."
That's an assumption, that an edge case won't happen. Docs exist to spell out where the edge cases are.
Cisco thought a 32-bit number for an RTP timestamp would never roll over. It happened. It might take centuries if it's initialised from zero, but it doesn't have to be. And if you don't provide the documentation, you can't expect reasonable defaults to be used.
It's important to know when something like that happens, so that they can also know how to handle behaviour that may well be completely unexpected. Hiding the type doesn't help. It just tells you you're even more on your own if you want to handle exceptional events, which leads to code with holes so big you can drive a CVE through it.
Okay, it would really help if C’s so-called “typesystem” would actually enforce a custom-defined type like `WordClockTime_t` so that client code can’t do stupid/malicious things with it like stick its own arbitrary integers in it; but hey, C. While a sensible runtime would also chuck an exception if a fixed-width integer overflows, rendering rollover dangers moot; but again, C. It is what it is; and so it goes.
But if, as an API designer, you’re going to document every single way your API may potentially blow up during normal/abnormal use then perhaps you should write that documentation in the form of defensive code that validates your API’s inputs and handles bad inputs accordingly. e.g. A timestamp API should not be making its users fret about (never mind cope with) C integer overflows; guarding against any edge-case crap is the API’s job, not its users’.
Again, the problem is not a lack of documentation so much as a lack of clarity. A good abstraction shows only what its users need to know and hides everything that they don’t; the more that can be left unsaid, the better. (If an API can’t be documented clearly and concisely, that’s a huge red flag that the API design is bad and needs reworking until it can.) The problem with an API like this is the not-knowing, which indicates deeper, more systemic failings than merely “needs more documentation”.
TL;DR: If your API is puking on its users then don’t start documenting the color and odor of that puke; fix its code so it doesn’t puke again.
 I say “presumably” because damned if I’m going to spend hours spelunking Apple’s crappy documentation just to find out exactly where this mWordClockTime value comes from and where it goes to.
But that's not the case. You get to set mWordClockTime as part of the init. If you can initialise a value but aren't given bounds for that value, then the documentation has screwed up.
The value is something the developer can create, and pass in when creating any AudioTimeStamp, which you will be doing a lot of if you're dealing with sound.
This isn't an arbitrary value you can just rely on to be correct, there may be good reasons for altering the value, such as when splitting sound into several thousand chunks and rearranging them.
It's a part of the exposed API - it needs to be documented how it behaves.
For a different take on a similar problem, let's look at how PulseAudio handles it.
> pa_core doesn't have any interesting functions associated with it, it is just a central collection of all the components and globally relevant variables. You're not supposed to modify the fields, at least not directly.
This is how you abstract away an API safely. The dev knows up front that pa_core is the type that'll be used, and that other functions will be modifying the sample cache for them - that is, they can't supply a value directly to the type or they've entered unsafe behaviour.
They can go off and find the right setter.
What follows on that page is only a courtesy, and can clearly not be used safely in most programs. It doesn't need to be there at all.
So the dev finds the relevant page, where they call the client to get the sample.
And whilst duration is a uint64, it also has a few more things in the docs, such as: the value can be lazy, so you need to check it exists before using it, and the property can raise an error when you try to access it and it doesn't yet exist. You'll also find that this is a property (so you won't be creating it), generated by an interface, and where to find that interface.
I mean, I'm not one to compliment PulseAudio's documentation. It is an awful mess, just like the internals.
But they've given us a lot more than Apple bothered to.
While I’ve never had the pleasure of dealing with Core Audio, I’m getting the strongest impression that our problem is not that its API documentation is inadequate, but that its API was designed by a bunch of absolute hacks and bums.
In which case, asking for additional documentation is like asking for additional band-aids after severing your leg with an exploding chainsaw. Never mind for a moment the current bleeding; fundamentally you’re tackling the wrong problem. The right approach is first to make absolutely sure exploding chainsaws never get sold in the first place. Once that’s addressed, then worry about providing non-exploding chainsaws with adequate training and safety gear.
If the user has to initialize the struct directly then yes, the docs absolutely should state what values to use. However, unless there is some absolutely overriding reason for doing this then really, no, the right answer is to do what the old Carbon C APIs and Core Foundation do, which is to provide a constructor function that correctly populates it for them, and then document that. The documentation is shorter and simpler, and there is less to go wrong. Plus users do not need to write their own guard code—a massively pointless duplication of effort.
For instance, the LSLaunchURLSpec struct (https://developer.apple.com/documentation/coreservices/lslau...) is a good example of a manually-populated struct with appropriate documentation.
But in most cases, when working with Carbon/CF you don’t need to know any of these details because the structs are initialized and disposed for you, and these functions all follow standardized naming conventions so are trivial to locate as well. This is Abstract Data Types 101, and shame on the CA PMs for thinking they’re special snowflakes and simply dumping all their shit onto their users instead, and shame on the dev managers above them for letting them do so.
Incidentally, this is why I always fight anyone who says developers are too busy/special/autistic/etc to document their own APIs. Yes, producing high-quality public documentation needs specialist technical writers and editors on top of (but not instead of) its original developers, but there is no better first test of the quality of your API design than being forced to explain it yourself. I know this; I’ve thrown out and replaced more than one working API design over the years simply because I found it too difficult or confusing to explain.
TL;DR: Don’t document Bad APIs. First make them into good APIs, then document that.
You could argue that generally the docs are good enough to Get Shit Done. And people are Getting Shit Done on the platform as a whole. So there's no problem.
But it's a shame Apple doesn't consider better docs a priority. IMO it's not a good look for a trillion dollar company.
Moreover, I don't know if insiders use the same doc base. If so, it's definitely wasteful, because newcomers are going to be wasting time and effort trying to get up to speed.
I always like to know the units (seconds? nanoseconds?) and the reference-year (1970? 2001? 1601?)
Apple should look at linux's manpage on the `time` syscall for inspiration.
Related question: what is this seemingly recurrent trope of Apple continually understaffing their teams/projects, or behaving as if they were still poor instead of throwing resources at solving issues (e.g. better developer tools, better documentation, improving QA, AI, servers, reducing bugs, improving security...)? Is it greed? Is it the fear of growing too big and the cult of keeping teams small? Is it an inability to scale up software projects? I'm dumbfounded by this strange behavior, which more often than not leads them to unforced shortcomings.
With respect to documentation, when I worked at Apple the understanding was that management thought developers could learn what they needed to know from reading the header files. Of course, that's nonsense. In reality, so much new software was being developed and shipped in each new release of macOS that it would have cost a fortune to document it in a timely fashion.
My compiler can do basic interop with Swift now, but it's pretty much undocumented and unsupported, and when asking for any info, the best I got was that contributions to the docs are welcome. The docs that do exist are out of date and incomplete.
However Windows, xOS and Android sandboxing are relatively restrictive in that use case.
(myObject as! SomeProtocol).someMethod()
For one thing, Swift is hardly ready for application domains like audio, video or games. No doubt it can make the development process so much faster and safer, but also less performant by exactly that amount. Swift is beautiful, surprisingly powerful and non-trivial (something you typically don't expect from a corporate language, having examples of Java and C#), but the run-time costs of its power and beauty are a bit too high to my taste. A bit disappointing to be honest.
I've done quite a bit of experimentation with the performance characteristics of Swift, and I think that's a slight mischaracterization of the situation.
For instance, I built a toy data-driven ECS implementation in Swift to see just what kind of performance could be squeezed out of Swift, and it was possible to achieve quite impressive performance, more in the neighborhood of C/C++ than a managed language, especially when dipping into the unsafe portion of the language for critical sections.
But it's a double edged sword: while it's possible to write high-performance swift code, it's really only possible through profiling. I was hoping to discover a rules-based approach (i.e. to avoid certain performance third-rails) and while there were some takeaways, it was extremely difficult to predict what would incur a high performance penalty.
Currently it seems like the main limiting factor in Swift is ARC: it uses atomic operations to ensure thread-safe reference counts, and this, like any use of synchronization, is very expensive. The ARC penalty can be largely avoided by avoiding reference types, and there also seems to be a lot of potential for improving its performance as discussed in this thread:
This is exactly what Rust avoids by having both Arc and plain-vanilla Rc. Plus reference counts are only updated when the ownership situation changes, not for any reads/writes to the object.
This is required for reference counting objects between threads. Otherwise, you might have one thread try to release an object at the same time another thread is trying to increment the reference count. It's just overkill for objects which are only ever referenced from a single thread.
I never heard this alternate and very different expansion for that term.
...I could see this causing merry hell if trying to do advanced interop between Swift and Rust, though, and it's admittedly probably going to be a minor stumbling block for Apple-first devs. (I managed to avoid confusion, but I just port to Apple targets; they're not my bread and butter.)
> For instance, I built a toy data-driven ECS implementation in Swift to see just what kind of performance could be squeezed out of Swift, and it was possible to achieve quite impressive performance
I also have a pure-Swift ECS game engine where I haven't had to worry about performance yet. It's meant to be 2D-only, and I haven't really put it to the test with truly complex 2D games (massive worlds with terrain deformation like Terraria, which was/is done in C# if I'm not mistaken, or Lemmings), and in fact it's probably very sloppy, but I was surprised to see it handling 3000+ sprites on screen at 60 FPS, on an iPhone X.
- They were all distinct objects; SpriteKit sprites with GameplayKit components.
- Each entity was executing a couple components every frame.
- The components were checking other components in their entity to find the touch location and rotate their sprite towards it.
- Everything was reference types with multiple levels of inheritance, including generics.
- It was all Swift code and Apple APIs.
Is that impressive? I'm a newb at all this, but given Swift's reputation for high overhead that's perpetuated by comments like GP's, I thought it was good enough for my current and planned purposes.
And performance can only improve as Swift becomes more efficient in future versions (as it previously has). If/when I ever run into a point where Swift is the problem, I could interop with ObjC/C/C++.
SwiftUI and Combine have also given me renewed hope for what can be achieved with pure Swift.
I actually spend more time fighting Apple's bugs than Swift performance issues. :)
My guess is that this would also be true under Rust, as soon as you start using some pretty common facilities such as Rc and RefCell. (Swift does essentially the same things under the hood.)
That said, "hundreds of executed instructions" are literally not a concern with present-day hardware; the bottleneck is elsewhere, especially wrt. limited memory bandwidth (as we push frequencies and core counts higher, even on "low-range" hardware), so it's far more important to just use memory-efficient data representations, and avoid things like obligate GC whenever possible - and Rust is especially good at this.
Depends on the context. I have that line in a very tight loop in a CoreAudio callback that's executed in a high-priority thread. It should produce audio uninterrupted, as fast as possible because the app also has a UI that should be kept responsive. Least of all I want to see objc_msgSend() in that loop. Of course I know I will remove all protocols from that part of the app and lose some of the "beauty" but then what's the point of even writing this in Swift?
For most applications Swift is good enough most of the time. No, it's excellent. I absolutely love how tidy and clever your Swift code can be. Maybe a few things you wish were improved, but every language update brings some nice improvements as if someone is reading your mind. The language is evolving and is very dynamic in this regard.
However, it is not a replacement for C or C++ like we were made to believe. And now that the linked article also explains the costs of ABI stability (even the simplest structs introduce indirections at the dylib boundaries!), I realize I should rewrite my audio app in mixed Swift + C.
Protocols/traits/interfaces are just indirection - we all know that indirect calls are expensive. Fixing this need not be a loss in "beauty" if the language design makes direct calls idiomatic enough.
> And now that the linked article also explains the costs of ABI stability
I definitely agree about this, though. ABI stability and especially ABI-resilience, have big pitfalls if used by default, without a proper understanding of where these drawbacks could arise. They are nowhere near "zero cost"!
They are indeed. Look at how C++ handles multiple inheritance, for example: literally a few extra instructions for each method call, not more than that. Swift's cost of protocol method call and typecasting seems too high in comparison, and I haven't even tried this across dylibs yet.
Yup, C++ does this by building lightweight RTTI info into the vtable. Swift expands on this trick by using broadly-similar RTTI info to basically reverse excess monomorphization of generic code. (Rust could be made to support very similar things, but this does require some work on fancy typesystem features. E.g. const generics, trait-associated constants, etc.)
Not if you freeze it. The indirection is only required for resilient structs.
People really need to stop saying this and stop accepting it as a "truth". It only applies in _some_ applications, and even there it stops applying once you want to do it many times over and over again.
(myObject as! SomeProtocol).someMethod()
> For one thing, Swift is hardly ready for application
> domains like audio, video or games.
Yeah there's a move away from C# towards a Burst-compiled unmanaged subset but it hasn't happened yet. And yes - Unity itself is C++ but all your game code is still in Mono/C# and calling in to the engine doesn't make all that go away. There's still plenty of tight loops in managed code.
In short - a lot of mobile game developers seem happy to sacrifice bare metal performance if they get something back in return.
For instance recently this nested data structure "issue" was brought up (again) in the Swift community:
If you had a nested set in a dictionary, there's a big difference between copying the value out and back:
var many_sets: [String:Set<Int>] = ...
let tmp_set = many_sets["a"]
many_sets["a"] = tmp_set
and mutating it in place:
var many_sets: [String:Set<Int>] = ...
many_sets["a"]?.insert(42)
The performance is entirely different (e.g. you are making a copy of the Set in the first example). Prior to Swift 5, you would have had to potentially remove the set from the dictionary in order to make sure there were no unintentional copies.
While the examples are contrived to some degree, I think at least a few new Swift programmers would lookup something in a dictionary, pass the value into a function thinking it's a reference, and then when they realize it isn't being changed in the dictionary, set the value in the dictionary after the function returns like:
var many_sets: [String:Set<Int>] = ...
let changed_set = process(set: many_sets["a"])
many_sets["a"] = changed_set
It is "easy" to understand what is happening when you know Swift's collections are value types and about copy on write and value vs reference semantics, but it is also an easy performance issue.
Furthermore, when web framework benchmarks like:
https://www.techempower.com/benchmarks/#section=data-r18&hw=... show Java's Netty vs. Swift NIO (which is based on the architecture of Netty), I think that it indicates that you cannot just port code and expect anywhere near the same performance in Swift.
I really wish Swift moved a notch towards C++ in some areas especially where the designer of the type can define the usage in very precise terms. Is it copyable? Is this method abstract? Maybe also struct deinit, etc etc.
Property wrappers already allow some interesting possibilities with customizing the storage and usage of particular member variables, and there was a thread today about exposing the memory locations of reference type members, which would unlock a lot of optimization opportunities:
I'm not sure whether swift can ever really get there with respect to performance, given the foundational decisions regarding ARC and copy-on-write. But I would love a language with Swift's type system and sensibilities and a bit more control over how memory is handled.
That is untrue; See my comment at https://news.ycombinator.com/item?id=21491597
Too bad Apple hasn't shown much interest in supporting Swift on other platforms; I know efforts exist but they all seem like second-class citizens. I don't really want to invest the time learning a new language that's locked to a single ecosystem.
That's a bit of an overstatement; Swift isn't really in the same ballpark as C++. Its performance characteristics are more like those of a managed language.
Still, Swift definitely makes some compromises when it comes to performance. ARC is pretty costly in general (in terms of time; it's pretty cheap memory-wise), thanks to heavy use of atomic operations, and copy-on-write for value types has really nice properties as far as making it easy to write correct code, but it can result in unnecessary data duplication, which is hard to optimize since you're basically at the mercy of the compiler to make it more efficient.
A lot of these problems have possible solutions which haven't been implemented yet - in the long term I'm curious how much performance could be improved since I think Swift really could be the "sweet spot" language in terms of performance and usability.
I’m not sure what’s missing as far as libraries, other than UIKit.
Still, to truly compete it would need to have Windows support too. And ideally real buy-in from at least one other major tech company.
If you want you can use Swift on Google Colab right now:
"Support" is the kicker - I consider C# and C++ to have Windows Support because the platform vendor publishes and provides support for their own developer tools.
Do you mean that, or maybe something closer to the level of "Support" where interested parties submit improvements, and platforms are included in the CI/CD process?
It's not from Microsoft directly, but it's worth noting that LSP support for Swift is under active development. VSCode is probably currently the second-best IDE for Swift development.
The IBM Swift Sandbox is no longer available as of January 2018.
It's quite usable. A couple years ago, I tried Swift for Linux when it came out, and it was a dreadful experience. But now I do things with Swift in Docker containers and basically don't think about it.
There's a few things which aren't implemented (IIRC things like XML parsing support) but it's mostly things I wouldn't use or would use a library for anyway.
Actually ABI-resilience is not the default, it is enabled by a compiler flag.
In addition to that, there are some current compiler limitations when it comes to cross-module usage of generics and inlining that affect performance. But those are not by design and can be improved in future compiler versions.
High level languages are fat, inefficient, rigid and what every large corporation wants, because they share so much in common.
> You can actually already do dynamic linking in Rust, but only with C-based interfaces. The C-like interfaces that gankra talks about are, I believe, more similar to current Rust than to C, so I think they shouldn't be called C-like. They could support structs, enums, trait objects, lifetimes, slices, etc. "Stable ABI Rust" would be a fairer name. Addition of generator-based async and generics of course won't work, but not all interfaces actually need those features.
> I think there is definitely potential for a "better dynamic linking" RFC that adds a stable ABI to the language. Of course, the default compilation mode should keep using unstable ABIs, generics, async, etc. But allowing crate writers to opt into a stable ABI would be beneficial for precompiled crates as well as compile times which are still the biggest problem with the Rust compiler for many people.
(a crate is a Rust package)
I think there is already a fuzzy convention that could basically be made to enable this. A crate name ending in -sys is expected to provide a C-like interface, and to thus be usable in a dylib context, whereas the same crate name with no suffix or with a -rs one provides a statically-linked, ABI-unstable wrapper to its corresponding -sys. The build system just needs to be made aware that the former kind of crate need not be recompiled when potential ABI-breaks are introduced.
> (a crate is a Rust package)
A crate is a Rust compilation unit. Closer to a .cxx, .o file in C/C++ than what are usually called "packages" there.
And crates are the equivalent of Python, Node, Go, etc. packages. That they are also a compilation unit is, I believe, an implementation detail. I think they also made it configurable.
Most folks use “package” and “crate” interchangeably, even if they’re technically different.
See the bottom of https://doc.rust-lang.org/book/ch07-00-managing-growing-proj...
(Additionally, “compilation unit” is a bit weird, given incremental compilation, etc. “the file containing the root module tree that gets passed to rustc” is not as succinct though.)
People who take the time to really work with Swift (yes, those are often iOS and Mac developers by necessity) come away with the opinion that Swift is really a diamond in the rough, a new generation of language co-evolving along with Kotlin and Rust. One distinction is that Apple can afford to hire top-tier compiler developers (including the founder of Rust) and invest heavily in tooling and growing the Swift ecosystem. They can pull off projects like ABI stability, which took more than a year of focus for the whole team.
I point this out because there is always some sort of opportunity where widespread public perception is so mismatched against reality. I predict in the future there will be some tech startup that will go all in on the Swift ecosystem and be able to run circles around the competition.
Note for responders: I'm not saying the languages aren't just as good, or better. I'm not excusing the fact that Apple definitely dictates the priority of where the compiler team invests their time.
Personally I'm really interested with what's going in with Swift in the math/science space. The work that's going on with Automatic Differentiation is fascinating, and the Numeric library which has just been released should make it much easier to achieve highly accurate numerical results in Swift.
I agree with you that Swift's public perception is out of step with the reality of the language. It certainly has issues, and when comparing the developer experience with something like Rust, it falls way short on things like tooling, and platform support, but it's just so easy to be productive in Swift I keep coming back to it.
It's pretty obvious that Apple can throw more money at the problem than Mozilla can, if they choose to do so. That means they can buy more developers to work the problem, which is bound to be helpful whether those developers are particularly talented or not.
There's really nothing politically incorrect about that.
What I understand when you say this:
> the line reads (rather politically incorrectly) that Apple has more talent available
Is Apple employees are more “talented” than Mozilla's. That's politically incorrect, but I'm pretty sure that's not what's meant here.
Which raises the real question: is Swift what Rust would be if Mozilla had more money/resources?
But certainly the tradeoff for this type-info-vtable "witness table" and much heavier boxing must impact the data cache miss rate. What does the tradeoff end up being in practical terms? Is it worth it?
Also, although it seems there's features that let you "freeze" a type, is there a practical way that a non-language expert could ever profile and discover that they may want to use such a feature to improve performance?
Especially given that Swift programs on iOS probably only dynamically link to the system libraries, this seems like a powerful language feature to enable something that could have been achieved by writing the Swift system libraries as typed facades around a typeless core, which you so often see in C++ STL implementations.
In the past, ABI stability was way more important for many companies because there were many more closed source dependencies, way less access to online updates, way less emphasis on CI/CD, etc.
The argument for size by avoiding several std runtimes is strange in 2019, especially considering Apple's policy of deprecating things and forcing devs to update apps constantly.
For example, in the next version of the OS, all table views get type select. Combo boxes work better with screen readers. Menus get dark mode. etc.
An OS can provide much more than libc or Win32 "basic layers". It can provide a whole UI vocabulary, and apps which use its frameworks enjoy new features as the OS improves. That's the hope at least.
Until there start being core Swift-only APIs, your point is already solved, because regardless of the Swift library, the underlying functionality and OS interaction are mediated through a C library which is dynamically linked.
> ABI stability for Apple OSes means that apps deploying to upcoming releases of those OSes will no longer need to embed the Swift standard library and “overlay” libraries within the app bundle, shrinking their download size; the Swift runtime and standard library will be shipped with the OS, like the Objective-C runtime.
For new UI backends you don't need a different interface, you provide the new UI under the old interface. If your new elements have new behavior you will need to update your app anyway.
"ABI stability" is about defining an ABI. "ABI resilience" is defining how libraries can evolve in a binary compatible way. Stability is a precursor to resilience.
Apple would like to write libraries in Swift, but those libs have to participate in an ABI that is stable (so apps can use them) and resilient (so Apple can evolve them without breakage).
> For new UI backends you don't need a different interface, you provide the new UI under the old interface
The challenge is how to provide new UI features without breaking existing apps. For many UI frameworks (the web in particular) the compiler/runtime has a global view and can sort it out. But if both the app and library are separately compiled, the problem becomes trickier.
Libraries can evolve just fine by providing backward compatible changes as far as I can tell?
In C++, you might wish to add a field to a struct, or add a new virtual method, or change the inheritance hierarchy, or the type of a parameter, etc. But such changes are not ABI compatible and will break every app until they are recompiled. The C++ ABI compat story is very strict.
Modern ObjC has a more generous policy, leveraging its dynamic nature. For example you can add fields or methods to classes, without recompiling dependencies. But you pay an optimization penalty, since the apps have to defer offset calculation until runtime.
Swift attempts to have its cake and eat it too. Swift lets you be explicit about which types might change in the future and which types are "frozen" and can be aggressively optimized. Furthermore, you can explicitly draw where the ABI boundaries are: THESE parts are always compiled together, so intra-references may be aggressively optimized, but THOSE parts may need to evolve separately, so enforce an ABI there.
Isn’t that penalty paid only one time, when the first message is sent? After that it seems pretty dang fast.
If size was such a huge concern, Apple would have provided it a long time ago in Swift.
Linux distributions do use dynamic linking (.so files are dynamic libraries), much as macOS does (with .dylib).
Windows also has dynamic linking (.dll), but DLLs are less frequently used because the lack of a package manager with dependency management requires application vendors to distribute non-system libraries with their applications anyway.
In any case, 14 years is definitely a very short time for architecture/system ABI support, especially compared to Linux or Windows, which will basically never kill x86 ABI support.
Apple has just killed thousands of apps and games that people are using.
Ubuntu decided to drop i386 support as of 19.10, for one. The x86_64 kernel still supports running 32-bit software, though, and multilib support is there. The kernel is unlikely to drop support for the architecture, but if distributions stop including it, it will die off at some point.
I actually cared a lot about the ideas behind Swift and wanted to (try to) contribute in some way. But without any Windows support, and lacking tools for Linux+WSL, it's really hard to stay motivated.
Here is what Chris Lattner said in mid 2018:
> I think that first class support for Windows is a critical thing, and I doubt anyone in the Swift community would object to it. The problem is that we need to find someone who wants it badly enough and has the resources/know-how to make it happen.
I think that's the root of the problem. First-class Windows support is too complex a task for the community and should have been initiated by Apple/Microsoft.
This is what I admire about Go and Rust: they focused on developer support early on and, as a result, they are (currently) more usable.
Provided that you can still install Linux on any computer to develop on, I don't see a real market for Swift on Windows.
Comments from Ryan at MS: https://www.dotnetrocks.com/?show=1660
It runs on tablets, hybrid laptops, and lots of custom-made handhelds.
It is the go-to option for anyone not willing or able to acquire an iPad, given the bad experience of the large majority of Android tablets.
If there is no support for Windows, it may indicate there was little demand for it. The biggest domain for Swift is still Apple's OSes (macOS, iOS, iPadOS, watchOS, tvOS), plus some attempts to use it on the server side. Swift is open source; if someone wants to make it happen on Windows, they are welcome.
I do not see the reason why anyone at Apple or anyone working on Linux would bother to do that.
There’s no official support, at least not yet. But it’s in the tree.
> And it perplexes me how over the years there have been no serious attempts to make it happen.
Interestingly, Objective-C has other ABI issues…
See their list of publications for some examples: https://gankra.github.io/blah/
(I found this while trying to determine if Gankra = Gankro, it turns out it does).
Edit: Now that I've read it, I can say I thought they were pretty fair to both languages. This certainly isn't a comparison between them, but they made some legitimate criticisms of both when that criticism happened to be an interesting point of comparison.
Really good article, I highly recommend reading it if you like low level compiler stuff.
The article is really well written and easy to digest. Especially if one has a bit of compiler background.
But it's less true in the sense that the two teams had different use cases and simply took divergent paths. Rust took the "easier"* path that limited expressivity in favour of a simple execution model requiring a minimal runtime, which in turn let them focus on more interesting static analyses. Swift took the other path.
* This may sound like a slight against Rust, but both the Swift and Rust folks who worked on this stuff agree that what Swift did is a comically huge amount of work in pursuit of a relatively niche use case (idiomatic-feeling system apis). There's only so much time in the world, and Rust spent its time on other problems.
> * This may sound like a slight against Rust, but both the Swift and Rust folks who worked on this stuff agree that what Swift did is a comically huge amount of work in pursuit of a relatively niche use case (idiomatic-feeling system apis). There's only so much time in the world, and Rust spent its time on other problems.
That I completely agree with, and I don't think it's a slight at all: the work you outlined for Swift to have a stable ABI was a humongous (and truly novel and original) undertaking. But because it's so large, it needs a real motivation and an expectation of ROI, and I don't think that ROI was, or would have been, there for Rust: there is little chance that a system would ship with Rust as the baseline for its system and application libraries (in the sense of libraries expected to be consumed essentially only by Rust, which is what a Rust ABI would imply). So while there are use cases for a Rust ABI, they're limited (safer plugins and the like, maybe a system-provided dynamically linked stdlib).
On the other hand, Swift is intended as the baseline application language on all Apple platforms, meaning Apple has a large incentive to be able to ship and update Swift-level libraries as part of their OS and other packages, both to reduce application bundle sizes and to be able to update libraries as they go (without those libraries being limited to fairly shallow shims over C-level libraries where the actual meat would be).