
How Swift Achieved Dynamic Linking Where Rust Couldn't - gok
https://gankra.github.io/blah/swift-abi/
======
chmaynard
From the article:

"... Swift's Stabilized ABIs on Apple platforms ... are not actually properly
documented, xcode just implements it and the devs will do their best not to
break it. Apple is not opposed to documenting it, it's just a lot of work and
shipping was understandably higher-priority."

Having worked in Apple's Developer Publications department for eleven years, I
can confirm that this statement is mostly correct. Apple has the financial
resources to hire more technical writers and properly document all their APIs,
but it's just not a priority.

The sad thing is that developers outside Apple need this documentation and
would even be willing to help write it. But they can't because, as a matter of
policy, Apple does not generally involve the developer community in writing
their official documentation.

~~~
mojuba
In some areas Apple's documentation is dismal to the point of being
ridiculously useless. Like this (from CoreAudio):

    mWordClockTime
        The word clock time.

And nothing else on that page apart from that it's UInt64. And it's been like
this for years if not decades already. Nothing's changed since the APIs were
also bridged and the documentation re-written for Swift. I doubt it's the
intent (to keep the developers in the dark with regard to CoreAudio?
unlikely), just neglect. Apple, otherwise one of the few companies that pays
attention to details and have unlimited resources for this type of tasks,
what's their problem really?

~~~
coldtea
> _And nothing else on that page apart from that it's UInt64._

What else exactly do you need?

It's a reference page for the members of the struct AudioTimeStamp:

[https://developer.apple.com/documentation/coreaudiotypes/aud...](https://developer.apple.com/documentation/coreaudiotypes/audiotimestamp)

I wish many C/C++ libs had such good documentation...

~~~
shakna
> What else exactly do you need?

What unit it reflects. What epoch it uses. When does it wraparound? Is it
realtime? Or best guess? Is it wall-clock like CLOCK_REALTIME or is closer to
CLOCK_MONOTONIC? Is a valid range the entirety of possible uint64 values, or
is there a range limit?

~~~
justsid
The answer is “not applicable” to all of these. The word clock is unitless,
just ticking up each sample. It’s not actually a clock in the normal sense;
there are no leap years or anything in it. It’s used to synchronize devices.

Granted, the documentation isn’t the best at explaining it. But if you know
what a word clock is in terms of audio, you know what the value is; a counter.

~~~
shakna
> The word clock is unitless, just ticking up each sample.

That should be in the documentation. Each sample is guaranteed to increment
the counter relative to the previous one.

But, it doesn't answer all of the questions.

Does the word clock always start from 0? Is that an assumption that can be
made?

Or is the valid range anywhere inside a uint64?

What happens if the number of samples exceeds the range of the word clock? Is
it allowed to overflow? In which case it violates the assumption that it
always just increments.

~~~
hhas01
Even if it counts in picoseconds it’ll take several centuries to roll over, so
these probably aren’t questions you ever need to ask. Though that’s kind of
the point: the API and its documentation should highlight the stuff you need
to worry about, such as where you get these values from and what they do, and
hide everything else that you don’t (internal implementation details).

But that particular API does the opposite. Just typedef-ing that data as an
opaque `WordClockTime_t` would go a long way to fixing this, telling API users
to ignore how it works internally _and_ enabling automated documentation tools
to locate and list every other API that produces/consumes this particular
value. A simple automation-friendly abstraction that would reduce—if not
eliminate—the need for additional manually-written documentation. i.e. Put the
knowledge in the code and the automation can graph it.

Alas, there’s something about C programming that seems to bring out the worst
abstractions in many C programmers… and if they’re being lazy in their API
design, they’ll be twice as lazy when it comes to manually documenting it.

\--

"What is wrong with giving tree, here?"

"Well, he don't know talking good like me and you, so his vocabulistics is
limited to 'I' and 'am' and 'Groot.' Exclusively, in that order."

~~~
shakna
> Even if it counts in picoseconds it’ll take several centuries to roll over,
> so these probably aren’t questions you ever need to ask.

That's an assumption, that an edge case won't happen. Docs exist to spell out
where the edge cases are.

Cisco thought a 32-bit RTP timestamp would never roll over. It happened. [0]
It might take centuries if it's initialised from zero, but it doesn't have to
be. And if you don't provide the documentation, then you can't expect
reasonable defaults to be used.

It's important to know when something like that happens, so that they can also
know how to handle behaviour that may well be completely unexpected. Hiding
the type doesn't help. It just tells you you're even more on your own if you
want to handle exceptional events, which leads to code with holes so big you
can drive a CVE through it.

[0]
[https://quickview.cloudapps.cisco.com/quickview/bug/CSCvc865...](https://quickview.cloudapps.cisco.com/quickview/bug/CSCvc86525)

~~~
hhas01
Aye, I’m well aware what can go wrong when an integer overflow occurs. My
point was the way Apple presumably[1] uses this particular UInt64 precludes
such an event ever occurring within macOS’s lifetime, therefore there’s no
need to explain it. If a macOS API generates that value and a macOS API
modifies it and a macOS API consumes it, and users should only ever pass it
around as-is and never screw with it directly, it’s opaque data and its
internal workings are none of their business.

Okay, it would really help if C’s so-called “typesystem” would actually
enforce a custom-defined type like `WordClockTime_t` so that client code can’t
do stupid/malicious things with it like stick its own arbitrary integers in
it; but hey, C. While a sensible runtime would also chuck an exception if a
fixed-width integer overflows, rendering rollover dangers moot; but again, C.
It is what it is; and so it goes.

But if, as an API designer, you’re going to document every single way your API
may potentially blow up during normal/abnormal use then perhaps you should
write that documentation in the form of defensive code that validates your
API’s inputs and handles bad inputs accordingly. e.g. A timestamp API should
not be making its users fret about (never mind cope with) C integer overflows;
guarding against any edge-case crap is the API’s job, not its users’.

Again, the problem is not a lack of documentation so much as lack of clarity.
A good abstraction shows only what its users need to know and hides everything
that they don’t; the more that can be left unsaid, the better. (If an API
can’t be documented clearly and concisely, that’s a huge red flag that the API
design is bad and needs reworking until it can.) The problem with an API like
this is the not-knowing, which indicates deeper, more systemic, failings than
merely “needs more documentation”.

..

TL;DR: If your API is puking on its users then don’t start documenting the
color and odor of that puke; _fix its code_ so it doesn’t puke again.

\--

[1] I say “presumably” because damned if I’m going to spend hours spelunking
Apple’s crappy documentation just to find out exactly where this
mWordClockTime value comes from and where it goes to.

~~~
shakna
> If a macOS API generates that value and a macOS API modifies it and a macOS
> API consumes it, and users should only ever pass it around as-is and never
> screw with it directly, it’s opaque data and its internal workings are none
> of their business.

But that's not the case. You get to set mWordClockTime as part of the init
[0]. If you can initialise a value, but aren't given bounds for the value,
then the documentation has screwed up.

The value is something the developer can create, and pass in when creating any
AudioTimeStamp, which you will be doing a lot of if you're dealing with sound.

This isn't an arbitrary value you can just rely on to be correct, there may be
good reasons for altering the value, such as when splitting sound into several
thousand chunks and rearranging them.

It's a part of the exposed API - it needs to be documented how it behaves.

For a different take on a similar problem, let's look at how PulseAudio
handles it [1].

> pa_core doesn't have any interesting functions associated with it, it is
> just a central collection of all the components and globally relevant
> variables. You're not supposed to modify the fields, at least not directly.

This is how you abstract away an API safely. The dev knows up front that
pa_core is the type that'll be used, and that other functions will be
modifying the sample cache for them - that is, they can't supply a value
directly to the type or they've entered unsafe behaviour.

They can go off and find the right setter.

What follows on that page is only a courtesy, and can clearly not be used
safely in most programs. It doesn't need to be there at all.

So the dev finds [2], where they call the client to get the sample.

And whilst duration is a uint64, it also has a few more things in the docs:
the value can be lazy, so you need to check it exists before using it, and the
property can raise an error if you try to access it before it exists. You'll
also find that this is a property (so you won't be creating it), generated by
an interface, and where to find that interface.

I mean, I'm not one to compliment PulseAudio's documentation. It is an awful
mess, just like the internals.

But they've given us a lot more than Apple bothered to.

[0]
[https://developer.apple.com/documentation/coreaudiotypes/aud...](https://developer.apple.com/documentation/coreaudiotypes/audiotimestamp/)

[1]
[https://www.freedesktop.org/wiki/Software/PulseAudio/Documen...](https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/Developer/CoreAPI/)

[2]
[https://www.freedesktop.org/wiki/Software/PulseAudio/Documen...](https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/Developer/Clients/DBus/Sample/)

~~~
hhas01
“You get to set mWordClockTime as part of the init”

While I’ve never had the pleasure of dealing with Core Audio, I’m getting the
strongest impression that our problem is not that its API documentation is
inadequate, but that its API was designed by a bunch of absolute hacks and
bums.

In which case, asking for additional documentation is like asking for
additional band-aids after severing your leg with an exploding chainsaw. Never
mind for a moment the current bleeding; fundamentally you’re tackling the
wrong problem. The right approach is first to make absolutely sure exploding
chainsaws never get sold in the first place. Once that’s addressed, then worry
about providing non-exploding chainsaws with adequate training and safety
gear.

If the user has to initialize the struct directly then yes, the docs
absolutely should state what values to use. However, unless there is some
absolutely overriding reason for doing this then really, no, the _right_
answer is to do what the old Carbon C APIs and Core Foundation do, which is to
provide a constructor function that correctly populates it for them, and then
document that. The documentation is shorter and simpler, and there is less to
go wrong. Plus users do not need to write their own guard code—a massively
pointless duplication of effort.

For instance, the LSLaunchURLSpec struct
([https://developer.apple.com/documentation/coreservices/lslau...](https://developer.apple.com/documentation/coreservices/lslaunchurlspec))
is a good example of a manually-populated struct with appropriate
documentation.

But in most cases, when working with Carbon/CF you don’t need to know any
of these details because the structs are initialized and disposed for you, and
these functions all follow standardized naming conventions so are trivial to
locate as well. This is Abstract Data Types 101, and shame on the CA PMs for
thinking they’re special snowflakes and simply dumping all their shit onto
their users instead, and shame on the dev managers above them for letting them
do so.

..

Incidentally, this is why I always fight anyone who says developers are too
busy/special/autistic/etc to document their own APIs. Yes, producing high-
quality _public_ documentation needs specialist technical writers and editors
on top of (but not instead of) its original developers, but there is no better
first test of the quality of your API design than being forced to explain it
yourself. I know this; I’ve thrown out and replaced more than one working API
design over the years simply because I found it too difficult or confusing to
explain.

\--

TL;DR: Don’t document Bad APIs. First make them into _good_ APIs, then
document that.

------
mojuba
I love Swift very much but every time I look at the disassembly view in Xcode
while debugging, I lose faith in it bit by bit. With my (rather limited)
knowledge of what a C or C++ program would compile into I have some
expectations of what I'll see in Swift's case but the reality ends up being
orders of magnitude more complex. Orders of magnitude is no exaggeration. For
example this:

    (myObject as! SomeProtocol).someMethod()

translates into hundreds of executed instructions, a bunch of nested calls
that somehow end up in objc_msgSend (!) even though none of the objects on
that line have anything to do with NSObject. Let alone ARC atomic
acquisitions, etc.

For one thing, Swift is hardly ready for application domains like audio, video
or games. No doubt it can make the development process so much faster and
safer, but also less performant by exactly that amount. Swift is beautiful,
surprisingly powerful and non-trivial (something you typically don't expect
from a corporate language, having examples of Java and C#), but the run-time
costs of its power and beauty are a bit too high to my taste. A bit
disappointing to be honest.

~~~
zozbot234
> translates into hundreds of executed instructions

My guess is that this would also be true under Rust, as soon as you start
using some pretty common facilities such as Rc and RefCell. (Swift does
essentially the same things under the hood.)

That said, "hundreds of executed instructions" are literally not a concern
with present-day hardware; the bottleneck is elsewhere, especially wrt.
limited memory bandwidth (as we push frequencies and core counts higher, even
on "low-range" hardware), so it's far more important to just use memory-
efficient data representations, and avoid things like obligate GC whenever
possible - and Rust is especially good at this.

~~~
mojuba
> "hundreds of executed instructions" are literally not a concern with
> present-day hardware

Depends on the context. I have that line in a very tight loop in a CoreAudio
callback that's executed in a high-priority thread. It should produce audio
uninterrupted, as fast as possible because the app also has a UI that should
be kept responsive. Least of all I want to see objc_msgSend() in that loop. Of
course I know I will remove all protocols from that part of the app and lose
some of the "beauty" but then what's the point of even writing this in Swift?

For most applications Swift is good enough most of the time. No, it's
excellent. I absolutely love how tidy and clever your Swift code can be. Maybe
a few things you wish were improved, but every language update brings some
nice improvements as if someone is reading your mind. The language is evolving
and is very dynamic in this regard.

However, it is not a replacement for C or C++ like we were made to believe.
And now that the linked article also explains the costs of ABI stability (even
the simplest structs introduce indirections at the dylib boundaries!) I
realize I should rewrite my audio app in mixed Swift + C.

~~~
zozbot234
> Of course I know I will remove all protocols from that part of the app and
> lose some of the "beauty"

Protocols/traits/interfaces are just indirection - we all know that indirect
calls are expensive. Fixing this need not be a loss in "beauty" if the
language design makes direct calls idiomatic enough.

> And now that the linked article also explains the costs of ABI stability

I definitely agree about this, though. ABI stability and _especially_ ABI-
resilience, have big pitfalls if used _by default_ , without a proper
understanding of where these drawbacks could arise. They are nowhere near
"zero cost"!

~~~
mojuba
> Protocols/traits/interfaces are just indirection

They are indeed. Look at how C++ handles multiple inheritance, for example:
literally a few extra instructions for each method call, not more than that.
Swift's cost of protocol method call and typecasting seems too high in
comparison, and I haven't even tried this across dylibs yet.

~~~
zozbot234
> literally a few extra instructions for each method call, not more than that.

Yup, C++ does this by building in lightweight RTTI info as part of the vtable.
Swift expands on this trick by using broadly-similar RTTI info to basically
reverse excess monomorphization of generic code. (Rust could be made to support
very similar things, but this does require some work on fancy typesystem
features. E.g. const generics, trait-associated constants, etc.)

------
_bxg1
I really enjoy Swift's philosophy of setting up higher-level abstractions, and
then pirouetting under the hood to get performance near the ballpark of C++
and family. I'm a big believer in "Just Works pretty-fast by default, optimal
by deep-dive".

Too bad Apple hasn't shown much interest in supporting Swift on other
platforms; I know efforts exist but they all seem like second-class citizens.
I don't really want to invest the time learning a new language that's locked
to a single ecosystem.

~~~
melling
Swift has been made available for Linux for the past four years:

[https://swift.org/download/#releases](https://swift.org/download/#releases)

[https://www.digitalocean.com/community/tutorials/how-to-install-swift-and-vapor-on-ubuntu-16-04](https://www.digitalocean.com/community/tutorials/how-to-install-swift-and-vapor-on-ubuntu-16-04)

I’m not sure what’s missing as far as libraries, other than UIKit.

~~~
_bxg1
Didn't realize the Linux version was offered through the official channels.
That's something.

Still, to truly compete it would need to have Windows support too. And ideally
real buy-in from at least one other major tech company.

~~~
skohan
As the other poster has mentioned, IBM has put a lot of effort into server-
side Swift. Also, Google is investing in Swift for TensorFlow, which means
there is a team at Google whose job it is to work on the Swift compiler every
day.

edit:

If you want you can use Swift on Google Colab right now:

[https://colab.research.google.com/github/tensorflow/swift/bl...](https://colab.research.google.com/github/tensorflow/swift/blob/master/docs/site/tutorials/model_training_walkthrough.ipynb)

~~~
kevsim
A team that is led by Chris Lattner, creator of Swift. Not bad!

~~~
pjmlp
Which is why they chose Swift, with its poor CUDA and Windows support, instead
of Julia.

------
progval
Interesting comment on Reddit:
[https://www.reddit.com/r/rust/comments/dtqm36/how_swift_achi...](https://www.reddit.com/r/rust/comments/dtqm36/how_swift_achieved_dynamic_linking_where_rust/f6yh5jy/)

> You can actually already do dynamic linking in Rust, but only with C-based
> interfaces. The C-like interfaces that gankra talks about are I believe more
> similar to current Rust than to C, so I think they shouldn't be called
> C-like. They could support structs, enums, trait objects, lifetimes, slices,
> etc. "Stable ABI Rust" would be a name that's more fair. Addition of
> generator based async and generics of course won't work, but not all
> interfaces actually need those features.

> I think there is definitely potential for a "better dynamic linking" RFC
> that adds a stable ABI to the language. Of course, the default compilation
> mode should keep using unstable ABIs, generics, async, etc. But allowing
> crate writers to opt into a stable ABI would be beneficial for precompiled
> crates as well as compile times which are still the biggest problem with the
> Rust compiler for many people.

(a crate is a Rust package)

~~~
zozbot234
>But allowing crate writers to opt into a stable ABI would be beneficial for
precompiled crates as well as compile times

I think there is already a fuzzy convention that could basically be made to
enable this. A crate name ending in -sys is expected to provide a C-like
interface, and to thus be usable in a dylib context, whereas the same crate
name with no suffix or with a -rs one provides a statically-linked, ABI-
unstable wrapper to its corresponding -sys. The build system just needs to be
made aware that the former kind of crate need not be recompiled when potential
ABI-breaks are introduced.

> (a crate is a Rust package)

A crate is a Rust _compilation unit_. Closer to a .cxx or .o file in C/C++
than to what are usually called "packages" there.

~~~
epage
-sys crates are raw wrappers around C code, so there wouldn't be much benefit in exposing them as dylibs.

And crates are the equivalent of Python, Node, Go, etc. packages. That they
are also a compilation unit is, I believe, an implementation detail. I think
they also made it configurable.

~~~
steveklabnik
A crate is a compilation unit. A package is defined by a Cargo.toml, and can
have one or more crates.

Most folks use “package” and “crate” interchangeably, even if they’re
technically different.

See the bottom of [https://doc.rust-lang.org/book/ch07-00-managing-growing-projects-with-packages-crates-and-modules.html](https://doc.rust-lang.org/book/ch07-00-managing-growing-projects-with-packages-crates-and-modules.html)

(Additionally, “compilation unit” is a bit weird, given incremental
compilation, etc. “the file containing the root module tree that gets passed
to rustc” is not as succinct though.)

------
flipgimble
In reading comments about Swift on this and other HN threads I see a lot of
opinions that in my experience are completely off the mark: a) "Apple cares
only about iOS so that's all Swift will be used for" b) "Swift is in fact
slow, look at this (naive) Swift code vs. this (heavily optimized) C/C++
code".

Meanwhile, people who take the time to really work with Swift (yes, those are
often iOS and Mac developers by necessity) come away with the opinion that
Swift is really a diamond in the rough, a new generation of language
co-evolving along with Kotlin and Rust. One distinction is that Apple can
afford to hire top-tier compiler developers (including the founder of Rust)
and invest heavily in tooling and growing the Swift ecosystem. They can pull
off projects like ABI stability, which took more than a year of focus for the
whole team.

I point this out because there is always some sort of opportunity where
widespread public perception is mismatched with reality. I predict that in the
future some tech startup will go all in on the Swift ecosystem and be able to
run circles around the competition.

Note for responders: I'm not saying the languages aren't just as good, or
better. I'm not excusing the fact that Apple definitely dictates the priority
of where the compiler team invests their time.

~~~
skohan
> I predict in the future there will be some tech startup that will go all in
> on the Swift ecosystem and be able to run circles around the competition.

Personally I'm really interested in what's going on with Swift in the
math/science space. The work that's going on with Automatic Differentiation is
fascinating, and the Numeric library which has just been released should make
it much easier to achieve highly accurate numerical results in Swift.

I agree with you that Swift's public perception is out of step with the
reality of the language. It certainly has issues, and when comparing the
developer experience with something like Rust, it falls way short on things
like tooling, and platform support, but it's just so easy to be productive in
Swift I keep coming back to it.

------
sandGorgon
_Supporting both polymorphic and monomorphic compilation helped Swift a lot,
but I think the key difference was ultimately just that Apple had a more
significant motivation than Mozilla to pursue dynamic linking and way more
resources to throw at this very hard problem._

Interesting stance.

~~~
saagarjha
All of macOS's system libraries are dynamically linked, so there's just no way
Swift could be used in the OS if it didn't do this right.

~~~
sandGorgon
The last part of the line reads (rather politically incorrectly) that Apple
has more talent available to solve this problem.

~~~
littlestymaar
Given that the author is a Mozilla employee (working on Rust) and former Apple
one (on Swift), I'd be surprised if your interpretation was correct.

~~~
sandGorgon
I do admit that English is not my first language...but I am struggling to
interpret " _Apple had way more resources to throw at this very hard problem._
" any other way.

~~~
littlestymaar
It depends what you mean by “talent”, if you use it in the HR way where a
“talent” is an employee, then yes talent=resources, but then I don't
understand how this is “politically incorrect”.

What I understand when you say this:

> the line reads (rather politically incorrectly) that Apple has more talent
> available

Is that Apple employees are more “talented” than Mozilla's. That's politically
incorrect, but I'm pretty sure that's not what's meant here.

~~~
sandGorgon
Maybe they were equal... but Apple had a greater number of "talent" available.

Which then raises the real question: is Swift what Rust would be if Mozilla
had more money/resources?

------
oautholaf
Here's one thing I don't understand: In addition to enabling dynamic linking,
this mechanism allows Swift to compile less code (generic functions only need
to be compiled once) and therefore reduce instruction cache misses.

But surely the tradeoff for this type-info-vtable ("witness table") and much
heavier boxing must impact the data cache miss rate. What does the tradeoff
end up being in practical terms? Is it worth it?

Also, although it seems there are features that let you "freeze" a type, is
there a practical way that a non-language expert could ever profile and
discover that they may want to use such a feature to improve performance?

Especially given that Swift programs on iOS probably only dynamically link to
the system libraries, this seems like a powerful language feature to enable
something that could have been achieved by writing the Swift system libraries
as typed facades around a typeless core, which you so often see in C++ STL
implementations.

------
an_d_rew
What a well written article! Thanks for taking the time to post it!

------
gdxhyrd
The technicalities behind Swift's work on ABI stability are very interesting,
but I remain unconvinced that developers care about ABI stability nowadays,
outside of security updates and the very basic layers (syscalls, WinAPI...).

In the past, ABI stability was way more important for many companies because
there were many more closed source dependencies, way less access to online
updates, way less emphasis on CI/CD, etc.

The argument for size, by avoiding several copies of the std runtime, is
strange in 2019, especially considering Apple's policy of deprecating things
and forcing devs to update apps constantly.

~~~
ridiculous_fish
ABI stability is not about size! It's about enabling the libraries to evolve
simultaneously with the app.

For example, in the next version of the OS, all table views get type select.
Combo boxes work better with screen readers. Menus get dark mode. etc.

An OS can provide much more than libc or Win32 "basic layers". It can provide
a whole UI vocabulary, and apps which use its frameworks enjoy new features as
the OS improves. That's the hope at least.

~~~
alexashka
Why do we need ABI stability for these features?

Libraries can evolve just fine by providing backward compatible changes as far
as I can tell?

~~~
ridiculous_fish
Yes that's right: ABI stability is all about nailing down which changes are
"backwards compatible."

In C++, you might wish to add a field to a struct, or add a new virtual
method, or change the inheritance hierarchy, or the type of a parameter, etc.
But such changes are not ABI compatible and will break every app until they
are recompiled. The C++ ABI compat story is very strict.

Modern ObjC has a more generous policy, leveraging its dynamic nature. For
example you can add fields or methods to classes, without recompiling
dependencies. But you pay an optimization penalty, since the apps have to
defer offset calculation until runtime.

Swift attempts to have its cake and eat it too. Swift enables you to be
explicit about which types might change in the future, and which types are
"frozen" and can be aggressively optimized. Furthermore you can explicitly
draw the ABI boundaries: THESE parts are always compiled together and so
intra-references may be aggressively optimized, but THOSE parts may need to
evolve separately, so enforce an ABI there.

~~~
andrekandre
> But you pay an optimization penalty, since the apps have to defer offset
> calculation until runtime.

isn’t that penalty only one time, when the first message is sent? after that
it seems pretty dang fast [1]

[1] [https://mikeash.com/pyblog/friday-qa-2012-11-16-lets-build-objc_msgsend.html](https://mikeash.com/pyblog/friday-qa-2012-11-16-lets-build-objc_msgsend.html)

------
rishav_sharan
I want to try swift but the fact that there is no windows support is a deal
killer for me. And it perplexes me how over the years there have been no
serious attempts to make it happen. Is it just a cult of Apple + Linux
masterrace thing?

~~~
bsaul
Windows is nonexistent as a mobile OS, and people deploying Windows Server are
probably already entrenched in .NET technologies anyway.

Provided that you can still install Linux on any computer to develop on, I
don't see a real market for Swift on Windows.

~~~
voidmain0001
It's a good thing Microsoft doesn't think that way. It apparently has metrics
showing that desktop development is not stagnating but rising, thanks in part
to Windows desktop OS usage also rising [1]. See its plans for WinUI v3 [2].
WinUI is not cross-platform, but Microsoft is talking about it.

[1] Comments from Ryan at MS
[https://www.dotnetrocks.com/?show=1660](https://www.dotnetrocks.com/?show=1660)

[2] [https://github.com/microsoft/microsoft-ui-xaml/blob/master/docs/roadmap.md](https://github.com/microsoft/microsoft-ui-xaml/blob/master/docs/roadmap.md)

------
pornel
It would be super cool if Rust supported the Swift ABI. Currently use of Rust
for macOS or iOS applications necessarily involves adding a plain C layer
between Swifty UI and Rusty back-end.

~~~
skohan
You can go in the other direction, no? Write a Rust library with a C FFI
(directly in Rust) and call it from Swift?

~~~
pornel
This is what I meant by a "C layer". No matter how you slice it, you have to
"dumb down" communication to the C level and generate C headers. Then you
either work with opaque types, or have to generate extra translation and/or
wrapper types for classes, closures, etc. Both languages treat C as "unsafe",
so it also makes memory safety trickier than if they could use their native
safe APIs.

~~~
skohan
Ok yeah, I understand what you mean. It would be super interesting to have
closer integration between Swift and Rust; for instance, if you could treat
Rust traits as protocols or vice versa. The idea of writing low-level memory
handling in Rust and then operating on it with high-level Swift code seems
super compelling.

------
nitwit005
I definitely see the desire to replace Objective-C, but the downside of ABI
stability seems to be that it prompts a lot of worries about new features or
optimizations affecting it. I hope this won't add too much sand in the gears.

------
int_19h
Stable ABI and especially the issue of generics is why I think bytecode+JIT is
the right approach to a language that operates at this level of abstraction.
Higher-level semantics of bytecode allow for much more flexibility wrt ABI,
and JIT lets you compile generic code monomorphically across ABI boundaries,
and even inline it. A long time ago I did some experiments with very carefully
written C# code, and it was capable of the same abstraction as C++ STL
containers and algorithms, while producing highly efficient native code
because everything was inlined at runtime.

------
saagarjha
> In the extreme case, we could make a system where everything is an opaque
> pointer and there's only one function that just sends things strings
> containing commands.

Interestingly, Objective-C has other ABI issues…

~~~
olliej
what are the issues?

~~~
saagarjha
ivar layout: [http://www.sealiesoftware.com/blog/archive/2009/01/27/objc_explain_Non-fragile_ivars.html](http://www.sealiesoftware.com/blog/archive/2009/01/27/objc_explain_Non-fragile_ivars.html)

~~~
dexter0
The fragile ivar issue is fixed in the modern Objective-C runtime.

~~~
saagarjha
Right, non-fragile ivars shipped with the 64-bit runtime as mentioned in the
link.

------
The_rationalist
Does anyone know of a good technical comparison of Swift vs. Kotlin (and
eventually vs. Rust and TypeScript)? I would like to see their distinctive
features and their misfeatures.

~~~
hellofunk
I think the most notable difference is their paradigms for memory management.
The syntax is very similar between the two languages, something like 80%
identical, but that's trivial.

~~~
skohan
How do Kotlin's interfaces compare to Swift protocols? Being able to do things
like provide default implementations in a protocol are some of the biggest
Swift features for me.

~~~
andrekandre
it’s not exactly the same thing, but you can kind of accomplish something
similar to “swift protocols with default implementations” with abstract
classes

[https://www.programiz.com/kotlin-programming/abstract-class](https://www.programiz.com/kotlin-programming/abstract-class)

------
garmaine
Why the swipe at Rust?

~~~
tedunangst
Adding back the word How softens the blow.

~~~
dang
Agreed. We put it back. HN's software strips leading hows, which mostly but
not always improves things.

