Apple announces full Swift rewrite of the Foundation framework (2022) (infoq.com)
434 points by frizlab on Jan 11, 2023 | 367 comments



This will take a while, but I'm looking forward to it.

It's probably another big step towards "Swift everywhere," without worrying about bridging to C.

I've been doing little but Swift since 2014, and really like the language. I'm still "on the fence" about SwiftUI, but that's mostly because of the level of support from Apple, and the [im]maturity of the system. This will help with that.


Yes, and it means Swift scripts and modules that don’t reference UIKit/AppKit/SwiftUI/Combine will run on Linux and possibly Windows with zero or little modification.

I’m a little sad though that they didn’t start this endeavor years ago, because IMO Rust has already built so much momentum that it will win (for the popular, medium to long term definition of "winning").


I don't particularly care whether or not Swift ever leaves the Apple ecosystem (like ObjC). In that domain, Rust will never "win." I think that Rust is an awesome server language, though, and I'm glad to see it gain traction. I just hope that it doesn't get trashed by a bunch of junk dependencies written in it.

I find apps written using hybrid systems, or PWAs, to be quite painful (on Apple devices -and that includes that awful JS TVOS system), so I am a big proponent of real native apps.


I think whoever "wins", it would be necessary to have good interop at least. Currently, Rust devs are integrating core libs into native iOS apps by going the C route. All this work to make everything memory safe, and then this.
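That C route looks roughly like this; a minimal Rust sketch (the `checksum` function is a made-up example, not any real library's API):

```rust
// Minimal sketch of "the C route": a Rust core library exposes a
// C ABI that Swift can import through a bridging header.

/// Exported with an unmangled name and the platform C calling
/// convention, so any C-speaking caller (including Swift) can link it.
#[no_mangle]
pub extern "C" fn checksum(data: *const u8, len: usize) -> u32 {
    // All the type richness is gone at this boundary: just raw
    // pointers and integers, which is the complaint above.
    let bytes = unsafe { std::slice::from_raw_parts(data, len) };
    bytes.iter().map(|&b| b as u32).sum()
}

fn main() {
    // Exercising the exported function from Rust itself.
    let data = [1u8, 2, 3];
    println!("{}", checksum(data.as_ptr(), data.len()));
}
```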


Swift-based interop would provide richer type information. See how swift-bridge takes advantage of it:

https://github.com/chinedufn/swift-bridge


Which direction are you talking? If you want to link against Rust there is little choice but to use a C layer because it doesn’t have a stable ABI.


I’m talking Swift > Rust. TIL about the unstable ABI, thanks!


Got it - the other way is actually fine because Swift does have a stable ABI - swift-bridge [1] allows interop in terms of high level types instead of devolving to C.

[1]: https://github.com/chinedufn/swift-bridge


What is a real native app? Most operating systems have multiple graphics interfaces, ui toolkits and other abstractions with varying levels of inconsistency. They could all be driven by various languages, compiled or interpreted.


macOS and iOS not so much. Which is probably one of the reasons why apps for these platforms are often much better (or at least more polished) than say Windows apps.


In my view, "polish" is rather orthogonal. For trivial apps that just assemble the building blocks that come out of the box, I'd agree. Otherwise, I find the "native" developer experience (Xcode, Swift compiler) to be rather unpleasant. Moreover, the economics of developing "natively" limit the amount of polish you can actually justify.


The way to deal with that is to develop (or buy/integrate) dependencies that natively implement polish and chrome.

For example, a UI framework may implement good transition animations. That’s pretty typical. Write an extension to UINavigationController, or UIViewController (if you use UIKit) that implements these transitions, package it as a standalone project, and integrate it, or simply have project-specific baseline framework extensions.

That’s what I do. I have a ton of these packages[0].

It’s a fair bit of work, to do it yourself, but there are package[1] and extension[2] indexes. Caveat emptor.

[0] https://riftvalleysoftware.com/work/open-source-projects/

[1] https://swiftpackageindex.com/

[2] https://swifterswift.com/


> Yes, and it means Swift scripts and modules that don’t reference UIKit/AppKit/SwiftUI/Combine will run on Linux and possibly Windows with zero or little modification.

This is already pretty much true today, aside from the few, mostly down-in-the-weeds holes in Foundation on Linux that will be solved once this rewrite is complete.

Cross-platform command-line tools, web backends, and things like AWS Lambdas are all possible and pretty easy to do today.


Why do you think Swift and Rust are competing on the same grounds or for the same objective? Genuinely curious, i don't know much about Rust.


I watched a Rust intro video recently that provided a perspective I liked, so I’ll share that here: MS, Apple, Google (and more) all relied heavily on C for low level code that needed to be as performant as possible. It turned out though that C’s memory management is so problematic that many/most security issues are caused by it. To address that, Google invented Go, Apple made Swift, Mozilla gave us Rust etc.

MS is interesting - they tried to write memory safe C and invested heavily, but admitted defeat eventually - and started using Rust. And I think that’s what will happen at most companies eventually. Rust has a lot going for it (faster than C sometimes, excellent WASM support etc). Swift might have been another contender, but Apple kept things too close to their chest for too long IMO.

As another poster wrote, Swift certainly won’t die at Apple (and their ecosystem), and Google will certainly keep Go alive. But I think Rust will eventually be used at many/most other companies for anything security or performance critical. Maybe it will replace C as the de facto low level language.


If it was just a matter of avoiding memory safety Java and ML were right there, and these languages are pretty different from each other.

It's not just memory safety. Go was motivated by highly concurrent systems with large numbers of programmers, and prioritized simplicity and developer experience. Rust was aiming at very high performance at the expense of complexity and compile times, and Swift wanted to build UI hierarchies.


ML is still right there, and yet people are adding its features to more popular languages (recently Java). OCaml had a compiler to JS released in 2010 (js_of_ocaml), yet Typescript was still released 2 years later. Is this because of technical concerns? NIH syndrome? Lack of knowledge about absolutely everything that has been done? A need for control? Probably a bit of each, and other things too.


Java made the mistakes of being VM based, having no value types, and leaving AOT only available via expensive 3rd party plugins.

Had Java been like Modula-3 or Eiffel since version 1.0, Go might never have happened at all.

Thankfully they are on the right path to fix those issues.


So Microsoft created C# to address that


No it didn't; it created C# because they got sued by Sun over the J++ extensions, none of which were related to value types and AOT.

J++ extended Java with a Windows-specific framework, WFC (Windows Foundation Classes), which later became Windows Forms.

Support for events and J/Direct, which is basically how P/Invoke came to be on C#.

.NET has always supported a basic kind of AOT via NGEN, which only supports dynamic linking, AOT has to be done at install time, requires strong named Assemblies and it is tailored for fast startup leaving the rest of the work to the JIT.

If it wasn't for the lawsuit, C# would never have happened; in fact, the research being done on COM vNext used J++.


The biggest issue with C# is that the only real implementation was closed source and Windows-only until recently.

Additionally, AOT is still experimental, and doesn't support ASP.Net Core yet.


Note that Apple is known to use Rust for some of its infrastructure and Google is adopting Rust in a big way in Android and Fuchsia.


Currently the Android team doesn't have any plans to expose Rust support in the NDK/AGDK; anyone going down that path is expected to support themselves in Android Studio, Gradle, CMake, android-ndk, AAR/Bundles, JNI integration.


I mean, you say "support themselves" but e.g. https://blog.rust-lang.org/2023/01/09/android-ndk-update-r25...


Nothing of that covers "Android Studio, Gradle, CMake, android-ndk, AAR/Bundles, JNI integration", which an Android shop expects to have out of the box support in the Android SDK installer.

Note that android-ndk in that comment means the original Makefile based build tooling; CMake builds still lack some corner-case functionality, hence why I listed both.


So, just to be clear, in what languages do I not need to "support myself" under your definition and what does the "support" consist of?

Do I get an Android test phone to try my software out? Is there like free phone support so I can chat to some expert in my language about Android problems? You make it sounds like a pretty big deal, but my small experience† of writing Android software a decade ago was that it just wasn't that hard.

† I wrote an implementation of the now obscure mOTP (similar to TOTP) for in-house usage. For obvious reasons I named This One Time app "Band Camp" which was already a pretty old reference at the time but once I thought of it I couldn't help myself.


Java, Kotlin and C++.

The languages with tier 1 support on Android SDK tooling for app development, properly configured out of the box after a SDK full install.

https://developer.android.com/guide/components/fundamentals

Which by the way, also includes a phone emulator to try out your stuff, including simulation of hardware events.

No need to get a phone.


> Java, Kotlin and C++.

I just tried this and... no C++. You can add the NDK and start building stuff with C++, but that's also exactly how the Rust offering works. If the result was actually a properly configured out of the box C++ development environment that would be pretty nice besides the Android stuff, but it isn't, the actual result out of the box is you get to pick Java or Kotlin.

You can do C++ native development for Android, but only via basically the same route as Rust, there's just not the huge gap you implied.


Since when does Rust appear as language selection on the NDK installer?

C++ has out of the box support in Android Studio for:

- mixed language debugging

- project templates wizard

- code completion and linting

- JNI bindings generation

- two way editing between native JNI wrappers and Java/Kotlin native method declaration

- packaging of Android libraries for native code

And for game developers, if they so wish, plugins for Visual Studio with similar capabilities.

In both cases, official support from Android team if there are issues with the above tooling.

Apparently you haven't tried enough if you think bare bones NDK integration with cargo is enough for Android shops.

Maybe Rust will get on https://developer.android.com some day, but it isn't there today. Even despite the fact that it is being used for Android internals, there is zero documentation on how to write Android drivers in Rust.

https://source.android.com/docs/core/architecture

So let's not pretend it is the same effort using Rust on Android as it is for the official SDK languages.


Since the editing window is already closed: I am not arguing against Rust, and would welcome first class support for Rust in the Android tooling (Android/VS Studios, NDK, AGDK, Modules/Bundles) and it being visible across https://developer.android.com documentation.


This assumes that no subsequent language comes along and steals away all the, um, seekers after novelty lately chasing Rust and Swift.


> and Swift

Can’t speak for Rust (but I hear that it is now quite mature -it predates Swift), but I’ve been programming Swift, since the day it was announced. In that time, the language, itself, has matured; possibly to the point that it’s starting to look a bit “Swiss army knife”-like.

I’m not exactly your typical jargonaut. I’ve been writing software since 1983. Been through a lot of changes, paradigms, and just plain old bullshit, in that time. I don’t really go for “shiny,” just because all the kids are into it, these days.


I do not claim everyone coding Swift or Rust is a novelty-seeker; rather, novelty-seekers were drawn to those languages, and will be drawn to others as they appear.

I can understand the appeal of abandoning Objective-C.


Val[0] language?

[0] https://www.val-lang.dev


> Mozilla gave us Rust

Sorry for the nit but Mozilla didn’t give us Rust. They sponsored some of its development but that was only 3ish years after Rust started.


“Rust grew out of a personal project begun in 2006 by Mozilla Research employee Graydon Hoare. Mozilla began sponsoring the project in 2009 as a part of the ongoing development of an experimental browser engine called Servo. The project was officially announced by Mozilla in 2010.”

Splitting hairs really.


The segment in this interview where Chris talks about Rust and compares the design decisions they made in Swift tied the two languages closely in my head: https://atp.fm/371

The section on the future hopes for Swift is similar; it seems both Rust and Swift contend to replace C++ in many use cases.


Lattner had a vision for a lower-level Swift, which wouldn't need the support Swift needs today - it didn't end up happening, and I suspect it is not practical. He talked about it in several places, but obviously by 2018 or so what Chris Lattner thinks about the future of Swift doesn't matter very much.

This "lower-level Swift" felt like a similar problem to what Carbon or Herb Sutter's Cpp2 have. They've got something that's unsafe, and they want to somehow build a safe abstraction, but that's a foundation layer, and you've already built a tower of stuff above that, so you need to build it underneath all the stuff you have, which will be way harder than what Rust did where they begin at the bottom.

Is it impossible? No. But it might well be too expensive to be pulled off in practice, especially given that you need the result to pay back your expense over and above what exists today e.g. Rust.


I used Rust for a while. When I saw and read about Swift, my immediate thought was: "hey, this looks like Rust's easy mode"


Aside from Swift having automatic reference counting, and Rust relying more heavily on its borrow checker, Swift and Rust are super similar languages. If what you want is an ergonomic language that lets you combine the best of ML type systems with C family imperative programming then they’re the two obvious choices.
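A small sketch of that combination, in Rust (Swift's enums with associated values and `switch` read almost the same); the `Shape` type here is invented for illustration:

```rust
// ML-style sum type, used from ordinary C-family imperative code.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // The compiler rejects this match if a variant is left unhandled:
    // the ML-type-system half of the bargain.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    let shapes = [Shape::Circle { radius: 1.0 }, Shape::Rect { w: 2.0, h: 3.0 }];
    // The imperative half: a plain loop over the data.
    for s in &shapes {
        println!("{:.2}", area(s));
    }
}
```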


I would not describe Rust as being particularly ergonomic. While it is a suitable replacement for C++, that is not necessarily a high standard to meet.


For me it seems to almost exactly fit my model of how things ought to work, and of course the diagnostics are so much better than most languages.

We saw the other day where C and C++ just let you write what seems reasonable but then they do something insane, D says no, that's a syntax error, but Rust's diagnostic gives a name to what you intended, says you can't do that, and suggests how to write it instead.

  if (a < b < c)
C and C++ think that's doing a comparison, coercing the boolean result to whatever type to compare it with the remaining variable, and then acting on the new boolean result. D says that's a syntax error because there should be a closing parenthesis after the variable "b". Rust says "comparison operators cannot be chained" and suggests a < b && b < c
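For completeness, the form Rust's diagnostic suggests, as a runnable sketch (`between` is just an illustrative name):

```rust
// Rust rejects `a < b < c` outright ("comparison operators cannot
// be chained"); the suggested rewrite spells the chain out with `&&`.
fn between(a: i32, b: i32, c: i32) -> bool {
    a < b && b < c
}

fn main() {
    println!("{}", between(1, 2, 3)); // the chained comparison, made explicit
    println!("{}", between(2, 1, 3));
}
```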

Edited to add:

Swift says: adjacent operators are in non-associative precedence group 'ComparisonPrecedence' -- which is definitely better than D, but it doesn't offer the helpful suggestion.


I have to agree. I find writing Rust painful (probably my fault) and I'm used to writing C & C++.


There may be a causality there ...


Reading the super majority of Rust code is as easy as or easier than reading Python, or Typescript, or even Ruby.

Writing Rust applications is also very ergonomic. The Rust web server frameworks are approaching the ergonomics of Typescript web server frameworks.

The only place Rust lacks ergonomics is writing net new frameworks or missing framework pieces, or when std is not available.

95%+ of all new Rust code will fall into the bucket of “ergonomic to write and read”.


This isn't really the case. Rust code is almost as complex as C or C++ for any non-trivial application.


> Rust code is almost as complex as C or C++ for any non-trivial application.

This is also true for Swift, if not more so. I've been working with both Swift and Rust (in addition to C++) for some time now and I find real-world, advanced Rust SIGNIFICANTLY easier to read and comprehend than real-world, advanced Swift. This is, IMHO, due to the fact that where Rust chooses to be syntactically simple, explicit and consistent, Swift chooses to be syntactically complex, idiom-based and feature-bloated. Sure, Swift code can look very modern and attractive at times, but usually when it comes to superficial code samples in Apple promotional videos. Otherwise, if you, say, look at a large codebase written by someone else, Swift reads just as badly as C++ complexity-wise.


I’ve worked professionally with Java, Scala, C++, Python, Rust, JavaScript, Typescript, Ruby.

None of them are ergonomic for non-trivial applications!

The goal is to appropriately abstract away the super minority of code that deals with the non-trivial parts into a nice ergonomic interface.

Rust is frankly better than most in the list above at allowing the writer to create an ergonomic interface. Yes it’ll take the writer 3x as long in the short term to create the ergonomic interface but:

1. Relative to everything else creating/maintaining these types of internal abstractions is a super minority of time spent reading and writing code.

2. Unlike other languages, you’ll end up with fewer iterations of the interface because it’ll push the author to really understand the complete interface, rather than shipping a buggy interface that needs iteration. Also, refactoring in Rust is simpler than in any of the other listed languages (because it self-documents more assumptions).

3. The ergonomic interface likely has already been published as a crate. I.e. don’t need to write it at all. These internal abstractions are more likely to be written in their first pass as general purpose than in other languages because of the collaborative design working with the rustc compiler.


That doesn’t make it less straightforward to read than Python or Ruby, where you have a variety of dynamic ills to contend with.


That’s just not true. Just the fact that you can reuse libraries easily makes Rust much easier. That combined with memory safety means that the two biggest headaches of C and C++ are just gone.


I don't know why you are being downvoted, but in my experience this is exactly right.

The pain of adding third party C++ dependencies is undeniable, especially in a cross-platform manner. I've had the displeasure of maintaining three different C++ build systems in 3 different companies, and they were all a nightmare.

Contrast that with cargo that just... works.
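For comparison, pulling in a cross-platform third-party dependency in Cargo is a single manifest line; a hypothetical Cargo.toml (the package name and crate choice below are illustrative):

```toml
# A hypothetical manifest: no per-platform build scripts, no
# vendored headers, no hand-maintained CMake/Makefile/MSBuild trio.
[package]
name = "example-app"
version = "0.1.0"
edition = "2021"

[dependencies]
serde = "1.0"   # one declaration; cargo fetches, builds, and links it
```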


> The Rust web server frameworks are approaching the ergonomics of Typescript web server frameworks.

I don't think that's true at all, at least it wasn't for me. Async in Rust has always been hard for me; it seems that using it requires knowing exactly how it works and how your runtime of choice works. This is a lot, and requires a lot of time.

The documentation in Rust is above average; however, the Rust ecosystem tends to be very unstable. Libraries often pull in lots of dependencies, many of which aren't even at 1.0, and some that have switched to a new version but still have docs written for the old one. The guideline from semantic versioning is to release 1.0 as soon as people start depending on it, since the idea is to version the public API. This is not always respected, and goes hand in hand with libraries that are 4 years old and on version 12 or something.

I remember actix-web before the 4.0 being particularly hard to get into.


I've been writing rust recently and trying to figure out how to write generic traits using higher ranked trait bounds with understanding variance sufficiently to know I'm not creating a soundness hole is much more difficult for me than writing C++. It's like all the template hackery madness you needed to resort to to do anything mildly interesting in C++98, except it's in the core idioms of the language.
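For readers who haven't hit this corner: a minimal higher-ranked trait bound, runnable as-is (`apply_to_both` and `trim_ws` are invented names). Getting bounds like this right inside generic trait definitions, with variance in play, is where it gets hard:

```rust
// An ordinary function whose signature is valid for every lifetime.
fn trim_ws(s: &str) -> &str {
    s.trim()
}

// The HRTB: `for<'a>` says f must accept a borrow of *any* lifetime,
// not one particular lifetime chosen by the caller.
fn apply_to_both<F>(f: F, a: &str, b: &str) -> usize
where
    F: for<'a> Fn(&'a str) -> &'a str,
{
    // f is called with two independently-borrowed strings; the bound
    // guarantees it is valid for both lifetimes.
    f(a).len() + f(b).len()
}

fn main() {
    println!("{}", apply_to_both(trim_ws, "  hi  ", " world "));
}
```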


> trying to figure out how to write generic traits using higher ranked trait bounds with understanding variance sufficiently

I agree! You’re currently dealing with writing non-ergonomic Rust.

I’d argue your domain is missing a fundamental reusable library/framework or the framework is currently missing a piece. Once someone publishes the needed library (hopefully it’ll be you) then everyone consuming it can just Lego block multiple crates together. Lego blocking crates together (barring heavy macro crates) is very ergonomic.

95% of all new code written is Lego blocking other crates. 5% of new code written is to build new or improve/patch crates.


Swift would never have won, and will never win, while it's so closely associated with Apple. Many people in positions of power believe (correctly) that Apple will always put its own needs above everyone else's. Those are the people who have chosen Rust.


Swift will never win against Rust for a much simpler reason: performance. Invariably, rust is chosen where performance is critical, and Swift’s reference counting GC ensures it will never compete with Rust in those scenarios.


Slight correction: Swift’s ARC is not a GC (the compiler just inserts retain/release calls where it’s necessary), and one can write very performant code by steering clear of reference counted types, and Objective-C types (dynamic dispatch is really slow). Value types with copy-on-write shouldn’t impose much overhead. Automatic inlining is progressing nicely AFAIK, and recently introduced concurrency can collapse callstack frames (not sure Rust does that).
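The retain/release mechanics ARC automates can be made visible with Rust's `Rc`, a rough analogue (explicit clone = retain, drop = release), not a sketch of Swift's actual implementation:

```rust
use std::rc::Rc;

// Rough analogue of what ARC does implicitly: Rc::clone bumps the
// count ("retain") and drop decrements it ("release"). Swift's
// compiler inserts the equivalent calls for class instances.
fn main() {
    let shared = Rc::new(String::from("payload"));
    println!("{}", Rc::strong_count(&shared)); // 1

    let retained = Rc::clone(&shared);         // retain
    println!("{}", Rc::strong_count(&shared)); // 2

    drop(retained);                            // release
    println!("{}", Rc::strong_count(&shared)); // 1
}
```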


> dynamic dispatch is really slow

This is false. People believe it on faith without measuring:

https://mikeash.com/pyblog/friday-qa-2016-04-15-performance-...


I have never used IMP caching, and don't think the average developer uses this (aside from indirect use of Apple's frameworks). Otherwise:

> A normal Objective-C message send is a bit slower, as we'd expect. Still, the speed of objc_msgSend continues to astound me. Considering that it performs a full hash table lookup followed by an indirect jump to the result, the fact that it runs in 2.6 nanoseconds is amazing. That's about 9 CPU cycles. In the 10.5 days it was a dozen or more, so we've seen a nice improvement. To turn this number upside down, if you did nothing but Objective-C message sends, you could do about 400 million of them per second on this computer.

So it's 9 cycles vs what, one?


Or 0, given not using dynamic dispatch allows for inlining or even the compiler optimizing the call away entirely.
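The two call strategies being compared, sketched in Rust with an invented `Greeter` trait: dynamic dispatch pays an indirect call through a vtable, while static dispatch monomorphizes and can inline the call away entirely:

```rust
trait Greeter {
    fn greet(&self) -> String;
}

struct English;
impl Greeter for English {
    fn greet(&self) -> String {
        "hello".to_string()
    }
}

// Dynamic: one indirect call per invocation, resolved at runtime
// through the trait object's vtable.
fn greet_dyn(g: &dyn Greeter) -> String {
    g.greet()
}

// Static: a separate copy is compiled per concrete type, so the
// call can be inlined or optimized away entirely.
fn greet_static<G: Greeter>(g: &G) -> String {
    g.greet()
}

fn main() {
    let e = English;
    println!("{}", greet_dyn(&e));
    println!("{}", greet_static(&e));
}
```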


> So it's 9 cycles vs what, one?

I'm not sure what you're asking? What are you trying to compare?


I’m comparing a message send with a statically linked function call (or inlining, as mentioned by sibling comment).


This has the feel of premature optimization. These are the fastest operations. Their running time starts to get overwhelmed by the slower operations as you go down the list of things tested. Making the fastest operations a few times faster doesn't necessarily have any noticeable effect on your program.


You made me think, thanks for your input. I agree that it’s not a big or unsolvable problem.


It is definitely one per the CS definition of automatic memory management algorithms.

Compiler optimizations are exactly that: optimizations that any automatic memory management implementation usually takes advantage of.


I stand corrected, thanks!


Rust is pretty often chosen also in places where performance is not that critical. It has a good ecosystem, a good type system, native builds and great performance. Those factors make it a good choice in places where you'd be just fine even if it was slightly slower. If Swift loses by a small margin — and Swift is a high performance language too — it could very well edge out Rust with ergonomics.


Also great build tool and IDE support. Cargo and rust-analyzer by themselves make me not miss a GC that much.


You don't need ultra performant memory management for most of your code. Swift compiles to machine code and ARC is fine for most use cases and you can use unsafe raw pointers etc for your occasional hot code path where you want to optimize memory management.


I would love to see what this looks like in practice if you can point me to a link.

(I’m now curious if the unowned keyword would have similar performance characteristics)



Replying to both comments at once: I understood “win” to mean “be a viable replacement for C / C++”. A viable replacement has to meet _everyone’s_ needs, not just the needs of those for whom performance isn’t critical. And to be more specific about the performance, it isn’t by a small margin (unfortunately I no longer have the links handy, but the general trend was that Rust, C and C++ were in the same performance group, then Java, then Go, then Swift, then JavaScript. It was something like 3 to 5x slower).

I should also perhaps mention that Swift is my favorite language to develop in. I’m not trying to be antagonistic, just realistic about its prospects against Rust.


It is enough to win on Apple platforms.


Sure.


Swift is great. I’ve done a lot of it, but also bounced around between Typescript, Elixir and Java and every time I miss Swift’s strongly opinionated, highly descriptive and flexible style.

Also, for the most part, it’s vastly superior to Obj C. The sad thing is that when writing Obj C I often find myself not writing the clean, well structured solution that’s in my head out of sheer resistance to the verbosity that it would take. I always just keep thinking how simple it would be in Swift and how much longer and more keystrokes / files it takes in Objective C.


Waiting for things to settle down on the server side and will gladly use Swift for web APIs once dust settles as the lang is really nice for the most part.


Similar post by Michael Tsai with some quick summary points of the impact of this:

https://mjtsai.com/blog/2022/12/12/the-swifty-future-of-foun...


Thanks for that, always appreciate Tsai's perspective and this part stuck out to me:

>it sounds like the plan is to rewrite it in Swift and extend Swift to allow Objective-C to call the Swift implementation of the old API


I really like his style of 1) summarising articles he links to 2) attaching follow up from elsewhere on the internet, it's a nice pattern I'm surprised hasn't been more widely copied


Not sure why but most people don't seem to have noticed that people outside Apple will now be able to contribute to Foundation and these contributions will ship across all platforms.


Free labor for the world's most profitable corporation.

Looking forward to it.

And then they still take a cut of your revenue.


You also mean “paid labour from the world’s most profitable corporation”, right?


No?


Good for Apple. In my opinion they are laying the groundwork for reducing technical debt.

Objective-C had a good run. I haven’t used Objective-C in over 10 years but I have used Swift about 10% of the time in the last four years. Swift is a nicely designed language and I could see it supporting Apple’s business for many years.

Will Swift ever be a primary language on Linux? I would say yes, except now that Rust is used the advantages of also mixing in Swift are diminished.


I would recommend people always use a language like Rust or C++ instead of Swift, so that it is easy to make cross-platform apps. The UI can be made using a native toolkit like Swift, GTK, WinAPI, etc.; however, the core engine of the app should be written in a language like Rust.


Swift just isn't mature enough to know for sure, at least on Linux and Windows. With proper optimization, it might make a case for itself on Linux. That being said, it's going nowhere fast without official support from Apple.


I bet you would. What color are your programming socks?


Who else remembers when they re-implemented the (at-the-time version of the) Foundation framework in Java for WebObjects? That was, what, 1998? Right around when NeXT got re-absorbed by Apple; not sure if the rewrite was started before or after.

https://developer.apple.com/library/archive/documentation/Le...

https://en.wikibooks.org/wiki/WebObjects/Overview/Objective-...

Those were the days! Actually kind of amazing that the Foundation framework is the result of steady evolution of an ObjC framework written by NeXT... over 30 years ago? All those `NS` prefixes that are still hanging on are for `NeXTSTEP`.

If this really replaces the ObjC implementation... would that be the final sunset of the codebase that has been there (at least ship of theseus style) from NeXTStep days? I wonder if there's continuous version control history of Foundation source from the start, and how many, if any, lines of code remain from the initial implementation.


Foundation was developed for the needs of EOF so it makes sense there was a version for the Java WebObjects. There's almost certainly NeXT-derived code in macOS/iOS with a longer pedigree and a bunch of it will probably outlast the Foundation rewrite. As software evolution goes, it's an astonishingly long run, no doubt. Especially for a technology that very nearly went extinct.


Huh, Foundation was developed for EOF? (Enterprise Object Framework; it was actually very much like Rails ActiveRecord). I did not realize that, I always figured it came first.


I don't have a better reference than 'stuff I heard from people who worked on EOF'. The EOF Wikipedia page puts like this:

EOF 1.0 was the first product released by NeXT using the Foundation Kit

One way or the other, their development was closely connected/overlapping.


There were already many Rails-like frameworks when it came to be; I never understood the hype, especially since I was part of one written in Tcl back in 1999, whose core team went on to create OutSystems in 2001.


The "scaffold" demo really drove the hype.


WebObjects is actually still alive and kicking, but only inside of Apple

A lot of the web services that Apple provides still use that implementation of Foundation :)


From the day I realised that, I’ve never ceased to be amazed by how many things have come from Next or were derived from it. I find it quite astounding.


I thought it was NeXTSTEP/Sun, but it seems disputed. https://stackoverflow.com/questions/473758/what-does-the-ns-...


What they collaborated on was Distributed Objects Everywhere, which ended up becoming Java EE.

https://en.wikipedia.org/wiki/Distributed_Objects_Everywhere

Their collaboration is also one of the reasons why Java is basically Objective-C semantics with C++ like syntax.

https://cs.gmu.edu/~sean/stuff/java-objc.html

Interfaces (protocols), dynamic class loading (plugins), RMI (distributed objects), jars (bundles), dynamic dispatch by default, object root class,...


And I guess Cocoa (the initially Objective-C based application framework for macOS) was kind of a wordplay on Java again.

It’s like Java, just sweeter ;)


During OS X's early days, Apple wasn't sure that the Mac OS developer community, groomed on Object Pascal and C++, was that keen on embracing Objective-C.

So they jumped into the Java hype, created their own JVM implementation, with Swing extensions for the OS X UI, and Cocoa Bridge was born for Objective-C interop, with bindings for all key Apple technologies like QuickTime and such.

When it became clear that Objective-C wasn't going to be an adoption problem, instead of using a 3rd party owned language, they dropped support for Java and eventually gave their implementation to OpenJDK.


What happened to WebObjects? And why did Apple left the business market?


The article quotes this:

> With a native Swift implementation of Foundation, the framework no longer pays conversion costs between C and Swift, resulting in faster performance.

and this:

> A reimplementation of Calendar in Swift is 1.5x to 18x as fast as the C one (calling from Swift in various synthetic benchmarks like creation, date calculation).

First, that range of 1.5x-18x is kind of huge. Why?

Second, why would the interop with C be such a big performance penalty, even assuming it's just 1.5x? I know there must be an overhead, but why so large?

Also, why single out the Calendar? Is it somehow representative?


My guess is that it isn't just down to the language, but possibly better design and perhaps better programmers. Back when I used Java as my main language, I reimplemented a few projects in Java that had been written in C++ and sometimes got unreasonably high performance boosts. (I'm not implying I always got things to run faster, but often.) Most of those were down to:

- better understanding of the problem that needed solving, which leads to:

- structuring the application so it is better suited for what it actually has to do

- better choice of data structures and the algorithms that operate on them

- concurrency was more easily usable, hence the code would often run on more cores

In a couple of cases I discovered that Java was inherently faster because it coincided with "what the GC likes". In some cases Java turned out to be more CPU intensive, but that this was mitigated by being able to use more cores, so the user experience was better. (There were also setbacks: anything that looks like an LRU cache for instance is not friends with the GC, and you'd have to do silly tricks with NIO ByteBuffers plus serialization and whatnot).

A lot of Apple software is buggy, badly designed junk. It used to be worse, but you can still see that a lot of their software makes beginner mistakes such as blocking calls in the main event loop, resulting in "spinning ball of fail" type blockages.

Some of their system software tends to misbehave as well, consuming lots of CPU and making the fans spin up if left unaddressed. It seems to be some kind of rule that for each release there is at least one daemon that shits the bed.


I know nothing about their C codebase, but I assume it's because when the language is hard, you tend to keep things as simple as possible. When every line stops being a footgun, you start using optimizations you wouldn't dare before.


That's a plausible explanation which I find to have support in personal experience.


I’m not deeply familiar with Swift, Objective-C, or the NeXTSTEP APIs, but it seems like they’re describing the difference between a (virtual?) function call (i.e., Swift to Swift) vs a “message passing operation” in Objective-C. If that assessment is correct, then it’s really an exercise in small costs adding up quickly. If you call through a function pointer you have way lower latency, and if you go through the same operation hundreds of times to render a single screen (the calendar), inside of the render loop where latency matters, that’s measurably better. Especially on a device that tries to throttle CPU frequency to save battery power.

This is somewhat analogous to the arguments that JIT-compiled languages have the opportunity to exceed the performance of AoT-compiled ones because of inlining opportunities, except in this case I don’t know if the call has any chance of being actually inlined so much as at least bypassing the message-passing machinery overhead.
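To make the dispatch distinction concrete, here's an illustrative Swift sketch (all names are made up): a method on a struct with a known concrete type can be resolved statically at compile time (and often inlined), while calls through a class or a protocol existential are dynamically dispatched through a vtable/witness table. Objective-C's objc_msgSend adds yet another layer of runtime selector lookup on top of that, which is roughly the overhead being discussed here.

```swift
// Sketch of static vs dynamic dispatch in Swift (hypothetical types).
protocol Shape { func area() -> Double }

struct Square: Shape {
    let side: Double
    // On a concrete Square, this call can be resolved at compile time.
    func area() -> Double { side * side }
}

class Circle: Shape {
    let radius: Double
    init(radius: Double) { self.radius = radius }
    // On a class, this call goes through dynamic (vtable) dispatch.
    func area() -> Double { 3.14159 * radius * radius }
}

let shapes: [Shape] = [Square(side: 2), Circle(radius: 1)]
// Calls through the protocol existential are dynamically dispatched.
print(shapes.map { $0.area() })
```

An Objective-C message send is slower still: the method is looked up by selector at runtime, per call, which the statically resolved Swift path skips entirely.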


More importantly, why single out something like date calculation? Does that really eat a meaningful number of CPU cycles that needs to be optimized?


Betting it has a lot to do with string representation conversions.


I don't know Swift too well but my impression is that the support on Linux within the Foundation is not complete. (Also, is Foundation the same thing as what one might normally term a standard library? I can't tell.)

Does this change imply better (eventual) support on non-Darwin systems? Or maybe I've misread it and the change is unrelated.


> my impression is that the support on Linux within the Foundation is not complete

It is not, but it's like 95% of the way there in my experience - most things that are missing are relatively recent additions that have some complex OS interactions, like filesystem I/O with language-level concurrency features.

> Does this change imply better (eventual) support on non-Darwin systems?

Yes. The non-Darwin Foundation version is already a rewrite that takes a lot of effort to keep in lock-step with the closed-source Objective-C version, so unifying the implementations in Swift will both reduce the amount of maintenance effort and promote non-Darwin platforms to more of a "first-party" status.

> Also, is Foundation the same thing as what one might normally term a standard library?

It's more of a standard library++, including some things that other languages include in their standard libraries (Date/Time models) but also other things that are common to put in third-party libraries (networking, etc.). You do not need to use Foundation for basic things like arrays or concurrency that are built into the normal standard library.
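To illustrate the split (a minimal sketch, assuming a standard Swift toolchain): collection types and algorithms come from the standard library with no import at all, while date and calendar work pulls in Foundation.

```swift
import Foundation   // needed for Date, Calendar, URL, Data, ...

// Standard library: no import required for any of this.
let counts = ["a": 1, "b": 2]            // Dictionary is a stdlib type
let total = counts.values.reduce(0, +)   // stdlib algorithms

// Foundation: the "standard library++" layer described above.
let year = Calendar.current.component(.year, from: Date())
print(total, year)
```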


> complex OS interactions, like filesystem I/O with language-level concurrency features.

Do you mean like io_uring? That wouldn't be too surprising. Even Go doesn't have that in the standard library yet.

> so unifying the implementations in Swift will both reduce the amount of maintenance effort and promote non-Darwin platorms to more of a "first-party" status.

That's what I was hoping for. Thanks!


<bikeshed> It feels off to call things like Array part of any library when you have array literal syntax `let oddNumbers = [1, 3, 5, 7, 9, 11, 13, 15]`

I'd call that a language feature. Or maybe with Swift there is a blurry line between the two?


Array is defined in the Swift standard library, but the compiler knows about the type directly, and also recognizes various @_semantics("array.blah") annotations that appear in the standard library source code. I believe the semantics annotations are primarily to help the optimizer.

https://github.com/apple/swift/blob/main/stdlib/public/core/...


Arrays are part of the Swift standard library. Foundation is another library that provides access to things like I/O, networking, calendars, etc.


For high level applications, Foundation is almost always imported and can pretty much be assumed. It's kind of a de facto standard library extension. The problem, as you put it, is that what's available on macOS/iOS is not fully available on other platforms. So you can't just take some software that uses Foundation and compile it for a non-Apple system without potentially running into a bunch of things you need to implement or polyfill or whatever you want to call it. Foundation has been growing its support for other platforms over the years, so the situation has been getting better. But it's been hard to take the "open source cross platform" goal of Swift seriously when there is clearly a priority given to Apple platforms.


> is Foundation the same thing as what one might normally term a standard library? I can't tell.

https://developer.apple.com/documentation/foundation


Foundation is more like a standard library for Objective C. Some components are not needed in Swift.


Swift is great because it's the only alternative to Objective C. I don't really think anyone is clamoring for it to move into non-Apple domains.


I know I do. Swift is an amazing language, truly.


How so?


I would love it.

Swift is fantastic, and by far my favorite language to write client code in. (Compared to Rust, TypeScript/JS, ObjC, C++, C#, & Java.)


Why do you prefer it to Rust? I don't have a horse in the race just limited time to learn languages these days.


The removal of C from Android and iOS is going to pay off massively. The current state of things where every government is sitting on an endless collection of exploits is not ideal.


Considering that both Google and Apple are card-carrying members of PRISM, it sounds like a bit of a frying pan vs fire situation to me.


It's still fewer backdoors, and more controlled.


More controlled by who?


Apple. They now gatekeep access rather than it being open to anyone with the money to hack the platform.


So, we should feel safe knowing that Apple is protecting us from the wrong governments and working with the right ones? Considering how attached they are to the Chinese market, one would be forgiven for assuming Apple too is motivated by money over ethical responsibility.


Better just apple than apple + hackers. Not ideal but incremental progress is still progress.


It will never be ideal while Apple's ethics are so soft. Let's help them do the right thing.


Yes, but that's OUR surveillance and we're cool with that


I'd rather be spied on by foreigners who couldn't do anything about my eccentricities even if they cared.

When your own government's security services are spying on you, they own you.


The options weren't foreign government or local government, the current state is both, the future is local only. It's clearly an improvement.

These security exploits also get used by local police who wouldn't have access to the top government access used for terrorists.


They'll use the data to influence your next election. Just look at Moldova, who is infiltrated with Russian spies and dirty money. They've lost the last election but the country is paralysed by corruption.


Interesting. I didn't realize C was so easy to exploit (I am a novice). Very interesting implications for security. Thanks for the cool insight.


It's not necessarily _easy_ to exploit iOS; the problem is that world governments and hacker orgs have budgets in the millions to find these exploits. We have already gone as far as we can by just telling people to be more careful when using C, so to solve this problem the only way forward is languages which make it harder or impossible to make the mistakes that keep happening in C programs. Hence Apple opting for Swift and Google using Rust in Android.


Ahh gotcha. Thanks for the background context.


While it's a very real advance, there will still be a nearly endless supply of 0 days, they just won't be memory exploits anymore.


Google has real-world data on over 1M lines of Rust in Android and found that, based on expected bug frequency, they have astonishingly few bugs thanks to Rust.


Correction: rewrite of PARTS of Foundation

There already was an open-source project to rewrite ALL of foundation, but it had stalled on the shores of having to re-implement everything:

  https://github.com/apple/swift-corelibs-foundation
The news is actually that Apple is now instead trying to define the bits/parts to support via Swift on all platforms (the original APIs will always be supported on Darwin).

The announcement:

  https://www.swift.org/blog/future-of-foundation/
The discussion, with hairy details about which bits, esp. for async:

  https://forums.swift.org/t/what-s-next-for-foundation/61939/103
The plan is to divide up Foundation into more- and less-essential parts, to get more-essential parts locked down so people can rely on them.

What's Foundation? Swift's most-core library is the stdlib, tightly coupled to the compiler/language version, providing things like arrays, dictionaries, etc., and available wherever Swift is. Beyond that, Foundation is the library with core APIs for common features, e.g., for dates and concurrency. The stdlib is fully cross-platform and Swift-specific, but Foundation is a beast with APIs dating back to NeXT and 20 years of accumulated surface area.

Microsoft famously arrived at porridge for an operating system by maintaining backwards compatibility. Apple has cracked open Swift by developing in the open, but it's still Apple-funded and Apple-driven. Library and some integration support for other platforms has always had to come from the community (notably compnerd's heroic effort to make things work on Windows, and a revolving cast wanting Linux support for their server APIs).

But there's no good reason to impose the whole Foundation history on other platforms. And there may be a movement inside Apple to migrate internal code to newer async API's, designed after the recent Apple-silicon generation.

For developers with server experience and some free time in a tech lull, it could be a good opportunity to help rebuild a new, er, foundation for computing on all devices, that's native but type- and memory-safe. The community is large and mature, but there is plenty of room for others.


It simply sounds like there isn't much demand for Swift from the Open Source community. Apple certainly makes a big push for it internally, but outside their platform, Swift doesn't have much momentum.

If Apple wants people to use Swift like a first-class runtime, they should stop treating third parties like second-class citizens. They have $200 billion in cold, hard cash - surely some of it could go towards the selfless development of a universal future computing platform, right?


The proprietary mobile OS vendor for which you have to use their proprietary desktop OS to build apps isn't popular in the open source crowd? Shocking, really.


How great are the IDEs over there for Swift? A quick search indicates JetBrains just shut down theirs, apple.com/swift/ really screams "this is Apple stuff", and swift.org/getting-started/ seems to say they use VS2019 to build on Windows, but nothing about using it as an IDE. There is a VS Code extension with 80k installs, though.


Xcode has its flaws but mostly it's pretty great to work in. With it available for the vast majority of current Swift programmers, it's a hard market to break into.


Xcode is slow and requires a huge amount of disk space (50 GB free just to install it). Also, you can check the reviews in the App Store. The current rating is 3.2.


Sounds like a great market opportunity then!


First-class support for Linux is a must. It's a pain to distribute anything you write in Swift (unless you can now statically link?)

By this I mean common distros like Debian stable must have a package for the core libs; going to https://www.swift.org/download/ doesn't cut it.


The compiler has supported static linking the runtime libraries since 5.3.1, IIRC, and the package manager does so by default when targeting Linux since the acceptance of SE-0342: https://github.com/apple/swift-evolution/blob/main/proposals...
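For reference, a minimal manifest for such a statically linked Linux executable might look like the following sketch (the package and target names are hypothetical); on pre-SE-0342 toolchains the flag can be passed explicitly:

```swift
// swift-tools-version:5.7
// Build with:  swift build -c release
// On older toolchains, force static linking of the Swift runtime with:
//   swift build -c release --static-swift-stdlib
import PackageDescription

let package = Package(
    name: "mytool",                       // hypothetical package name
    targets: [
        .executableTarget(name: "mytool") // sources in Sources/mytool
    ]
)
```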


I don't think there's a binary choice between "totally open, general-purpose language across all platforms" and "closed-source/proprietary". Even if Swift remains a decidedly Apple language, developers can still benefit from having access to source code and to an open discussion/planning forum. Unreal Engine takes a similar approach.


I wouldn't trust Apple as far as I could kick them. Not that I think they're evil per se, just that their motivations align with their shareholders, not the dev community. Too capricious by half


And historically Apple not aligning with developers is a good thing, isn’t it?

There’s a few interesting old videos on YouTube where Steve Jobs is answering developer questions in the audience.

Apple focused on the product and the customer, while Microsoft focused on “developers developers developers.”

We don’t need another Microsoft.

We need them both to be different.


No need to wave the flag for your side - I don't trust MS for exactly the same reason. MS spouting BS to developers doesn't mean they actually prioritise them.


They want lock-in. If you write something in Swift and it's not easily portable to other platforms then you might decide porting is not worth the effort.


Does this mean Objective-C is going to be a second-class language? It would be a pity; I prefer Objective-C to Swift.


Seems so.

I often wonder what things would be like if Apple had improved upon Objective-C by fixing up the underlying C language. Yeah, it'd no longer be a superset of C, but so what? Neither is Swift. Swift is great, but I wish it were closer to Objective-C in spirit by deliberately being a more lightweight language. Its language spec is gargantuan; they've pretty much succeeded in making it a language as complex as C++.

Merely adding generics and better typing, akin to TypeScript's, to Objective-C would've worked wonders and made the transition less abrupt.


Not up to date? They already made most of what was possible without breaking compatibility.

https://medium.com/ios-os-x-development/generics-in-objectiv...


I used to too; it took me a while to switch, but now I see the benefits. Obj-C does give a lot more freedom, and sometimes you still need it, but what initially felt restrictive actually leads to less buggy and more predictable code, and some protocols like Codable cut out massive amounts of code.

I still rely on some stuff like GCD that isn't very Swift-like; there still isn't anything as fine-grained in Swift, but I'm liking Combine in some cases.


My issue with Swift is that it changes faster than I can learn it. I'm not an Apple developer; I just write some code for macOS or iOS sometimes, like once a year. I learned Objective-C a few years ago and that knowledge stays with me. I learned Swift 1, but now it's different, and it seems that I have to learn it again almost from scratch.

For professional *OS development, Swift probably makes sense. But if I occasionally need to write some code, Objective-C is preferred for me. Maybe Swift has stabilized enough already...


There were a lot of changes made in the Swift 1–3 time frame as the team learned what it was like to write Swift at scale and developed a unique language style. That’s all done now, and Swift 5 code (released 2019) should continue to be compilable with few exceptions long into the future. It’s different enough from Swift 1 that it’s basically a different language though.

The biggest source-breaking change on the near-term horizon (which is still in progress) is compile-time enforcement of concurrency safety.


Very short discussion last month:

https://news.ycombinator.com/item?id=33923484


Someone who knows the existing setup, help me understand this: right now, when I use these "Foundation" APIs in Swift, is it doing a C call? So are they now rewriting that C code in Swift? (I don't do iOS or Mac code, so I'm not sure.)

I don't understand what makes it faster now. I once wrote some JNI code to call a C++ library from Java. Does that mean there is some similar code doing the cross-language call, and they get rid of it, so things are now faster?

Off-topic, but debugging crashes in C++ over a JNI call was super hard. Logs didn't work and I never figured out how to fix it. I wonder if they had similar "troubles" and this makes everything easier :)

I always feel skeptical when seeing a large code base get a "rewrite". I've tried it many times in my career, thinking there were good reasons, but it was super hard and not the best value in the end.


It's a lot of Obj-C/C on Apple platforms and syscalls on other platforms. I think the biggest point is to get rid of annoying Obj-C calls and make it fully cross-platform, as calling Obj-C is a nightmare for both safety and performance in many cases.


Most of Foundation on Darwin used to call into Objective-C code. This is going to be replaced with Swift code while keeping the API the same.


Conversely, will existing Obj-C (and hybrid Swift/Obj-C) apps be able to call the new Swift Foundation?

And will it be binary-compatible with existing apps, or will Apple just ship a separate compatibility version of Foundation for use by legacy apps?


Yes, it’ll be binary compatible.


I don't understand how a compiled language like Swift can be binary compatible with a dynamic one like Obj-C. How are they going to implement method swizzling on NSObject, for example?


Swift and Obj-C are both compiled languages, and also dynamic in the sense that they allow a lot of runtime introspection and dynamic dispatch.

They're already binary compatible, in the sense that you can call compiled Obj-C classes from Swift apps (of course), and also - with some restrictions - call compiled Swift code from Obj-C.

In fact, you can "swizzle" methods in Swift just like in ObjC, on classes derived from NSObject:

https://medium.com/@valsamiselmaliotis/method-swizzling-in-s...


What you’re doing in this Medium post is calling the Obj-C runtime from Swift and taking advantage of Swift <-> Obj-C bridging. This does not work if the class you’re trying to swizzle is a pure Swift class, for a reason: Swift is mostly static dispatch, whereas Obj-C is pure dynamic dispatch (taking its roots from Smalltalk).

This means code that tried to swizzle Foundation types, assuming Foundation is developed in Obj-C, will not work anymore once Foundation uses a pure Swift implementation.


If Foundation is to retain binary compatibility with Obj-C, then it will by necessity still be using the Obj-C runtime, with Foundation types still extending NSObject, and thus swizzling should still work as before.

The alternative would be for Apple to break compatibility with old Obj-C apps, instead shipping a compatibility version of Foundation.framework which old apps would continue to link against. But it sounds like they're not going that route.


Yet using the objc runtime defeats the purpose of being easier to port to other platforms.


Every time something like this comes up, a bunch of people who believe safe programming is a language capability will jump out.

But in fact, the most valuable part of the code in any nontrivial system is the “unsafe” yet safe parts. You write memory-safe code by understanding how computer memory works; sometimes you can use certain patterns to make that process easier, but not always. This is true regardless of what language you use: even programming in Rust, the most valuable part is still where one gets the “unsafe” part right.

A good C programmer will always be a better Rust programmer when he wants to be. That's it.


It looks like an important step in progressively retiring Obj-C. Swift was built from the start with Obj-C compatibility in mind, which is a source of much ugliness and inefficiency in the language. With the Swift team working on a value-semantics-first language (Val), I can kind of see where this is going.

Really looking forward to OOP being phased out.


> Really looking forward to OOP being phased out.

All the higher level macOS APIs are still object oriented, and I don't see that changing TBH. And macOS application source code is essentially just minimal glue code to tie those system APIs together, in the end, the programming language used for this glue code doesn't matter all that much, since the code is completely dominated by API calls.


Cocoa is object-oriented, but modern APIs not so much. They are of course based on interfaces/traits/protocols (whatever you want to call them), but it’s not the same as ’90s-style inheritance-based OOP. It’s inheritance as the basis for behavior that I want to see gone. But even Cocoa already embraces composition more than inheritance.


Given that all modern languages support most OOP concepts, phasing it out is not happening any time soon.


By OOP I mean the entangling of behavior and type hierarchy (mainstream class-based OOP, aka C++ OOP). Dot notation and methods do not equate to OOP. But of course, it depends on your definition. Pretty much anything can be called “OOP” if one wants to.


Ah, so interface/trait/type-class hierarchies suddenly aren't OOP, nor dynamic/static method dispatch with type polymorphism. Got it.


The defining feature of mainstream OOP (as popularised by languages like C++ and Java) is the entanglement of type hierarchy and API contract: in other words, inheritance. And inheritance is the main point of criticism of mainstream OOP. You can call it class-based programming if you feel that is more accurate. Let's not get stuck on terminological issues.

At any rate, I apologise for using such an ill-defined term as "OOP" in the first place. This terminology is so overloaded and washed out at this point that it might make sense to retire it altogether.


I don't think the people working on Val are part of the Swift team anymore.


Ah, thanks for pointing this out. I was not aware that Abrahams has left Apple...


Yep, these past years have been pretty scary for Swift. First Lattner, then Abrahams...


Could someone please explain what exactly is the “Foundation”?


It's the base framework where types like NSObject, NSString, NSDictionary etc. are defined. It's right there in the article. It also explains why it exists, what others exist, and what a rewrite will bring.


The Foundation framework is a cornerstone of most macOS and iOS apps, providing a large number of ubiquitous abstractions, including NSObject, NSString, NSArray and NSDictionary, and many more.



It’s not respectful to not read the link and instead ask somebody to summarise it for you.


You’re making the wrong assumption! I read the article before I asked, and I still have no idea what it is! Saying it provides abstractions like NSObject, NSString, and NSArray tells me nothing! The article is clearly written for readers familiar with the Apple ecosystem.


Objective-C is an object-oriented programming language that is essentially a super-set of C. In fact the earliest implementation of Objective C, if I recall correctly, translated Objective-C code to C.

The only built-in types in Objective-C are the C types, like int, char and so on. "Foundation" is a set of classes that implement many useful types that all inherit from a superclass called NSObject. The "NS" prefix refers to NeXTSTEP, the operating system from NeXT, the company that Steve Jobs started and where Foundation was first created.

One of the advantages of this scheme is that it is possible to have heterogeneous collections (like an array of objects, where the objects do not all have to be of the same type/class).
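A small sketch of that heterogeneity from the Swift side (as the bridged [Any]; in Objective-C itself this would be an NSArray holding NSNumber and NSString objects):

```swift
// A heterogeneous collection: elements of different types in one array,
// recovered by casting at the point of use.
let mixed: [Any] = [42, "hello", 3.14]

for element in mixed {
    switch element {
    case let n as Int:    print("Int: \(n)")
    case let s as String: print("String: \(s)")
    case let d as Double: print("Double: \(d)")
    default:              print("something else")
    }
}
```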

Underneath the NS foundation (whose headers are all in Objective-C) is something called Core Foundation, which implements the same classes, but in pure C. To do this well requires huge programming discipline, especially around memory management, and much of that pain is taken away by using the NS classes.

Swift has already started replacing some of the foundation classes, NSString and NSDictionary, for example, giving them the names String and Dictionary without the cumbersome name-space two letter prefix.

I suspect that some of this rewrite may still "bridge" to the CF classes, but in other cases, it would be much better to simply write the class in its entirety in Swift.

I hope that helps.


GCC still compiles Objective-C as if it were translated on the fly to C. Unfortunately, there is no way to get the translated source :-( because it happens at an AST-like level.


Yes absolutely, thank you very much!


The Foundation framework is basically the standard library of Objective C (and thus pretty much the standard library for most of the Apple ecosystem as well).


What is the Linux equivalent to Foundation Framework?


It's an API for the higher-level parts of the OS, not the kernel, so the equivalent would be distribution-dependent on Linux. You would build it on top of Linux and have other programs program against it.

Windows has WinAPI which is basically the same thing.


Thanks! Can you please give a distribution specific example, such as Ubuntu/Redhat?


Is the Swift compiler itself written in Swift? How is it bootstrapped from source?


It's written in C++ although there is some Swift in it now [1]. The compiler doesn't depend on Foundation though.

[1] https://github.com/apple/swift/tree/main/SwiftCompilerSource...


What remaining Apple frameworks are not yet written in Swift?


Most of them. Very few frameworks are actually written primarily in Swift.


Would the macOS kernel (Mach) ever be rewritten in Swift?

And if so, what benefit would it provide?


I doubt the entire kernel would - there are low-level, memory-unsafe operations that an operating system performs that Swift _could_ do, but it would be fighting the design of the language and likely not a great trade-off for the kernel developers.

But pieces absolutely could be. The benefit would be compile-time checking for memory safety issues (reducing crashes) and language-level concurrency (fewer race conditions and a much easier path to parallelize single-threaded code for performance).


I believe all the code on the Secure Enclave is now Swift. I believe they use a special mode/analyzers to ensure only the “ok” stuff is used. Perhaps no memory allocation, for example.

It sounds to me like Apple would like Swift, perhaps with some extra tooling, to be able to handle the Mach kernel. I don’t know if they’ll go that far though.


IIRC, it is /some/ (not /all/) of the code on the SEP that is now Swift. SEPOS's microkernel, etc. is still C.


Ah, I must have been misremembering. Thanks.


Definitely not, it’s just used for some parts.


Wait, no more "po [$arg1 reason]"?


I am pretty certain that most of high level use cases for Rust could be replaced with Swift, with increased developer velocity, if Swift was actually cross platform. And that's coming from a Rust fan. It will be interesting to see how effective this rewrite is.


> I am pretty certain that most of high level use cases for Rust could be replaced with Swift, with increased developer velocity, if Swift was actually cross platform.

Sure, or really OCaml or any other language with ML-family features (other replies have already mentioned Kotlin or C#). But the fact that it took Rust to get adoption and not OCaml suggests that it's not language functionality that drives adoption; Rust has something that those languages don't. (My theory is that it succeeds by being the first decent language that can match C's performance-on-silly-microbenchmark numbers)


I would love to be able to write cross-platform desktops apps in Swift. The language isn't perfect of course but I enjoy it a lot more than the alternatives I've tried.

That possibility is still a ways down the road of course but Foundation getting a cross platform rewrite is a nice step in that direction.


If you or others on this thread are interested in working on a Swift compiler for Windows, please reach out to me :) a good friend is hiring a team to make this happen.


How will that be different from Apple’s Swift toolchain for Windows (https://www.swift.org/blog/swift-on-windows/)?


https://youtu.be/bOMQiMxh5Bc?t=430

relevant listen is about 2min long


To me, that sounds as if most of the work would not be on the compiler but on improving the development experience (IDE, debugging support).

On the other hand, your comment said ”working on _a_ Swift compiler for Windows”. To me, that sounded like a new compiler, not adding missing features to the existing one. Which of the two is it?


I don't know if you can write high-throughput systems with 100% reference counted GC. Swift is targeted at UI work pretty hard.


You could take a look a swift-nio (https://github.com/apple/swift-nio) which is a pretty high-throughput system. swift-nio does this using some reference-counted GC where it simplifies the code and doesn't affect performance. Otherwise, value-types are used which incur no GC overhead (unless they are copy-on-write, and backed by something that requires reference counting).


Well that's why I said high level application use cases and not systems. Ideally I'd love to see a compile mode for modules where certain conveniences were enabled like: auto-clone, unified reference/owned types, disabled lifetimes and instead borrows become auto-wrapped-arcs, etc. That way you could care about that stuff in modules where it matters, and ignore it in areas where you prefer convenience over zero-cost abstractions and raw performance.


Sure you can. C++ has reference-counted garbage collection with its smart pointers and RAII, just like Swift. People certainly write high-throughput systems in C++.

Swift is slower than C++, yes, but not because of its memory management scheme.


Unless you go absolutely ham with smart pointers (which everyone knows not to do if you care about performance), C++ isn’t really a reference counting language.

I wouldn’t call RAII “reference counting”. I mean, I guess, but it’s the programmer or the compiler doing it. I’m talking about runtime reference counting.


Afaik Swift is pretty close to using Rc and Arc everywhere. Does swift do some more escape analysis than Rust?


Swift only uses reference counting when working with objects; structs are optimised using copy-on-write. There's currently work on implementing move and ownership semantics, similar to Rust, but opt-in rather than by default.


Ummm… No.

Swift structs are just like C structs (from a memory perspective). The copy-on-write thing is implemented manually by storing a private refcounted object in your struct. See the implementation of Array for example: https://github.com/apple/swift/blob/main/stdlib/public/core/...

There’s no magical copy-on-write mechanism at the language level.
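For comparison, Rust's standard library exposes the same manual pattern directly: `Rc::make_mut` clones the backing storage only when it is actually shared. A toy sketch (the `CowVec` type is made up, just to illustrate the pattern):

```rust
use std::rc::Rc;

// A toy copy-on-write "array": cheap to clone, copies storage only on write.
#[derive(Clone)]
struct CowVec {
    storage: Rc<Vec<i32>>,
}

impl CowVec {
    fn new(items: Vec<i32>) -> Self {
        CowVec { storage: Rc::new(items) }
    }

    // Rc::make_mut clones the Vec only if another handle still shares it.
    fn push(&mut self, x: i32) {
        Rc::make_mut(&mut self.storage).push(x);
    }

    fn len(&self) -> usize {
        self.storage.len()
    }
}

fn main() {
    let a = CowVec::new(vec![1, 2, 3]);
    let mut b = a.clone(); // no deep copy yet: both handles share storage
    b.push(4);             // storage is duplicated here, on first write
    println!("a: {}, b: {}", a.len(), b.len()); // a: 3, b: 4
}
```

Same idea as Swift's Array: the refcount on the private storage tells you whether a write needs a copy first.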


I don't know why this is downvoted, it's true. There is no automatic copy on write optimization in Swift. It's a manual optimization that expensive types like Array implement manually.


>I don't know why this is downvoted, it's true.

I know, right? Some people are pretty insecure…


I haven’t read that much on Rust, but that should be pretty much it.

Swift has an emphasis on value types, which often are stored on the stack, but only up to a certain size. Copy-on-write makes this feasible - often, those value types are passed by reference until a write actually occurs, but that’s opaque to the dev. Value types can have pointers to reference-counted types - if a value type is passed/copied, any pointer it owns is retained (weirdly enough, they claim that value types have no „lifetime“, but at some point those refs have to be released - we just have to trust the compiler in this).


Swift and Carbon both seem like very strong contenders in this space. Swift is already a really strong language albeit a little too tied to Apple’s ecosystem. This along with the ownership manifesto slated for Swift 6 (and C++ interop) should make it easier to use it everywhere. In particular, I see a lot of value for tooling (eg JS code bundlers and such) that are rewritten in “thinner” languages for performance needs without sacrificing developer productivity.


Interesting. Asking as someone who's never programmed with Swift or Rust, but is familiar with the syntax/ideas of each, I'm curious why you say that?


If you don't care about performance and "zero cost abstractions" and are more interested in Rust for its memory safety, then most of your Rust programs end up with a lot of syntactic and library bloat to add those "costly" (but convenient) abstractions back in, like explicitly cloning or wrapping types in Rc and Arc or just always taking ownership of borrowed data by constructing owned types with it. And then there are lifetimes which can be and are elided in almost all use cases, so they're ultimately just confusing whenever you actually have to deal with them. C.f. Swift, where the default is in my experience what "higher level" Rust programs end up looking like, but with much nicer and cleaner syntax. And Swift has a very nice, rather memory safe API for calling into C or otherwise just generally accessing scoped pointers when needed. This same API can actually be used to expose more performant APIs to underlying data etc. when needed. It's kinda the reverse of Rust.

It feels like Rust is two languages in one: a high performance zero cost abstraction language that tracks pointer ownership, and a package-rich and hyper explicit application language built atop all the guts. That's why I say high-level application use cases, because most high-level applications are not concerned with raw performance but rather with functionality and user experience. The parts that are concerned with throughput can be implemented using tooling where those knobs are available. I have enjoyed writing a CLI and API server and various libraries in Rust. But every so often I am left wondering when Swift might be able to replace it for my higher level concerns. Alternatively, it would be neat to see some effort put behind a "convenient rust" type of compile mode for modules where you could compile with things like implied clones, unified owned vs reference types, auto-arc/box, etc.
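To illustrate the kind of ceremony I mean: shared mutable state, which a class-based language like Swift gives you implicitly, has to be spelled out in Rust. A minimal sketch (`Counter` is just a made-up example type):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Shared mutable state that a class-based language provides implicitly:
// in Rust it must be written as Rc<RefCell<...>> with explicit handle clones.
struct Counter {
    value: i32,
}

fn main() {
    let counter = Rc::new(RefCell::new(Counter { value: 0 }));

    // Every additional owner is an explicit (cheap) clone of the handle.
    let for_closure = Rc::clone(&counter);
    let bump = move || for_closure.borrow_mut().value += 1;

    bump();
    bump();
    println!("{}", counter.borrow().value); // 2
}
```

None of this is hard exactly, but it's the boilerplate you pay on every "I just want a shared object" moment in application code.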


> Alternatively, it would be neat to see some effort put behind a "convenient rust" type of compile mode for modules where you could compile with things like implied clones, unified owned vs reference types, auto-arc/box, etc.

If you can come up with a precise transform from "convenient/sloppy rust" to the underlying language, it can already be implemented via proc macros and the #[attributes] syntax. This is how async programming was prototyped in Rust before it became part of the language proper.

Though I'm not sure it's fair to describe Rc, Arc, Cow etc. as "library bloat". It certainly adds some boilerplate, but it's designed to stay manageable.

(Arguably, good coding practice should also informally document why the Rc, etc. is needed and can't be refactored away, i.e. what parts of the program are controlling the lifetime of each Rc'd object.)


Maybe bloat wasn't the right term (was thinking that it bloats the code, not the resulting program), but when you're in a scenario where you're doing one thing more often than the default, it feels like boilerplate to have to continually say `String::from("some string ref")` (as an example) just to take ownership when that's what you want every single time. Or, at least something like an "owning assignment" or "owning deref" operator that automatically adds the `to_owned()` call would help reduce boilerplate.

Maybe it was a mistake to conflate lifetimes and generics. It wouldn't be so bad to deal with references in Rust if you didn't have to include lifetime parameters when building APIs. It seems silly that (in my experience) people gravitate toward structs with owned fields just to avoid specifying the lifetime of a borrowed field.
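For example (`BorrowedUser`/`OwnedUser` are made-up names, just to show the trade-off):

```rust
// A struct borrowing its field forces a lifetime parameter onto the API...
struct BorrowedUser<'a> {
    name: &'a str,
}

// ...which is why people often reach for the owned version instead,
// paying an allocation per construction to avoid annotating lifetimes.
struct OwnedUser {
    name: String,
}

fn main() {
    let input = "alice";
    let borrowed = BorrowedUser { name: input };
    let owned = OwnedUser { name: input.to_owned() }; // explicit copy
    println!("{} / {}", borrowed.name, owned.name);
}
```

The lifetime parameter then propagates to every impl block and every containing type, which is exactly the friction that pushes people toward owned fields.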


>If you don't care about performance and "zero cost abstractions" and are more interested in Rust for its memory safety...

Just about every GC language has similar tradeoffs. The question here would be why Swift over Go/C#/Java/Python/Typescript?


Well that's easy. Swift is:

1. Much more expressive and featureful than Go

2. More concise and modern than C# (debatable maybe?)

3. Less JVM than Kotlin (the apples-to-apples; Java is a much worse language)

4. Way, way faster and more typesafe than Python

5. Better support for parallel execution (and probably fine-tuning other performance knobs) than TypeScript


1. Definitely, but Go is well very optimized for concurrent network apps.

2. I think the C# team has done very well modernizing. The runtime is a bit of a bother though.

3. There's Kotlin/Native and Kotlin/JS.

4. Python has dev speed and ML benefits.

5. If you're going this way, performance probably isn't your main goal?

All-in-all, Swift is interesting, but (IMHO) the lackluster cross-platform support and toolchain are a problem.


I have used all those languages seriously save C# (only lightly) and can safely say Swift knocks them all out of the park in terms of actually writing applications. Indeed, its cross-platform support is lackluster; that's why we're here.


Makes sense, thank you for explaining!

Personally I'd love to see a forked flavor of TypeScript with static compilation and multithreading (which implies a lot more immutability, etc). Maybe I should give Swift a try…


If Foundation is rewritten in Swift, it will become a very potent alternative to Rust for this use case. Swift+LSP has good enough IDE support, and the developer experience and debugging capabilities should make this possible.


I think the fact that Rust won't provide inheritance based OOP workflows means that something like swift will always be valuable, especially for application development.


What is the value provided by inheritance? It was my understanding that this was deliberately excluded from Rust, as it's now seen as a bit of an anti-pattern, mostly inferior to traits.


None. It's code organizational syntactic sugar and abstraction that incurs a runtime penalty (vtables).

Go ditched OO for similar reasons.

Haskell, Erlang, and more get along fine without OO.

(OO crested with Smalltalk, Java, Ruby, Python, and JS (more prototypal though). Let's not talk about C++98)


Don’t trait objects work the same way as vtables (fat pointer)?


Monomorphisation solves this. The compiler duplicates functions for every type you use them with in the finished binary.
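Roughly, the difference looks like this (toy example, `Area`/`Square` made up):

```rust
trait Area {
    fn area(&self) -> f64;
}

struct Square(f64);

impl Area for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}

// Monomorphized: the compiler emits a separate copy per concrete type,
// so the call is static (and inlinable) -- no vtable lookup.
fn static_area<T: Area>(shape: &T) -> f64 {
    shape.area()
}

// Trait object: one copy of the function, dispatched through a fat
// pointer (data pointer + vtable pointer) at runtime, like OO vtables.
fn dynamic_area(shape: &dyn Area) -> f64 {
    shape.area()
}

fn main() {
    let s = Square(3.0);
    println!("{} {}", static_area(&s), dynamic_area(&s));
}
```

So you only pay for dynamic dispatch when you explicitly opt into `dyn`; generic code gets the vtable-free path.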


Inheritance is about providing business logic in all downstream inheritors through a super() chain. If you want shared logic in Rust between several implementors of a trait you will probably end up with duplicate code. There are drawbacks to both.


You can solve duplication with macros to implement traits. The Rust way seems to provide the same convenience without the runtime costs.


Macros provide code injection; that's not the same thing. Inheritance allows for overrides, so you can have [default super logic] + [custom child logic]. That's only possible in Rust with a lot of copy and paste, which can be fragile to maintain long term. Most UI frameworks are built around inheritance to guarantee logic provided by the base view or view group, while allowing inheritors to mix in custom logic as well.
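Default trait methods do get you part of the way, to be fair: implementors inherit shared logic for free and override only a hook, though there's no `super()` call into the overridden body. A sketch (`View`/`Label` are made-up names):

```rust
// Shared "base class" logic lives in a default trait method; implementors
// override only the customization hook. What's missing vs inheritance is
// calling back into an overridden method's original body (super()).
trait View {
    fn name(&self) -> &str;

    // Inherited by every implementor unless overridden.
    fn render(&self) -> String {
        format!("<view name={}>{}</view>", self.name(), self.body())
    }

    // The hook a "subclass" would override.
    fn body(&self) -> String {
        String::new()
    }
}

struct Label;

impl View for Label {
    fn name(&self) -> &str { "label" }
    fn body(&self) -> String { String::from("hello") }
}

fn main() {
    println!("{}", Label.render()); // <view name=label>hello</view>
}
```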


How would you compare Swift with Kotlin in terms of being a good stand-in for high-level Rust?

I've not paid attention to Kotlin in a while, but it seems like there's more progress at supporting Kotlin across more platforms (Kotlin native, web?, etc). I'm curious if people feel that the language features and design are at about parity, or if one is significantly stronger/weaker.


Last time I toyed with Kotlin Multiplatform (a bit less than a year ago) it provided only really weird Objective-C bindings for use in iOS/macOS. Like, a statically defined List turned into an Obj-C iterator. Enums and sealed classes were quite problematic as well - I honestly would not consider using KMM for anything but a PoC.

BTW, JetBrains recently sunsetted AppCode, so they are bleeding their Swift talent now. I suspect that doesn’t bode so well for Kotlin‘s Swift interop.


Another scenario where an often missed and underrated choice would be C#.

Nowadays it runs everywhere and supports a variety of deployment targets (relying on local runtime, packaging it together into a single somewhat compact binary, natively compiling the code (NativeAOT, it's good but has some limitations) or mixing and matching all of the above).

It is also one of the best high-level languages to write performance-oriented code with, especially so with .NET 7 which allows writing cross-platform SIMD code in a similar fashion to Rust's 'portable-simd' crate.


My experience is probably colored by .NET/WinUI but when I was dabbling in writing a Windows desktop app a few months ago I can't say I loved C#. My background has no Java in it so maybe that's why, but a lot of things that come standard as part of Swift seem to be in external libraries with .NET/C#, and it felt like some things were different for the sake of being different. Case in point: with Swift and Kotlin I've gotten used to chaining various transformation functions (map, compactMap, flatMap, filter, sort, reduce, etc.) and some of these don't have a 1:1 equivalent in C#, which was a huge hit to productivity, even if the same result is achievable via other means.

And while it's tangential, Xaml drives me absolutely bonkers. It's like the worst parts of iOS Storyboards and Android Framework XML layouts except there's no escape hatch for those looking to build a UI in pure code (Android Framework is a bit of an offender here too, but Jetpack Compose looks to remedy that).


All these are part of the standard library. Map is Select, Filter is Where, Reduce is Aggregate, etc. You can write C# in a functional style easily; it is one of the premiere features.

UI frameworks on the other hand…I feel your pain.


I did find and use Select and Where at least, but I recall running into caveats with them or with their interactions with other parts of the language that meant that they couldn't be used identically. I don't remember specifically what it was since it's been a while but I remember it being frustrating. I should probably find a cheat sheet of equivalents if I try writing it again.


The C# equivalents use SQL naming; knowing this makes it much easier to understand.


> some of these don't have a 1:1 equivalent in C# which was a huge hit to productivity

This is interesting because all these magical functions (zip, map, Rx etc) have roots in LINQ which sprung from .Net world. I find it hard to believe that the battle tested CLR and C# doesn't have the equivalent functions.


> all these magical functions (zip, map, Rx etc) have roots in LINQ which sprung from .Net world

No they don't. They're essentially unchanged from ML back in the 1970s. The part that was new in C# was the SQL-like syntax on top of them, and most subsequent languages haven't considered that worth adopting.


Apart from language syntax similarities, Kotlin and Swift are fundamentally different languages: one runs on a VM (OK, there's Kotlin/Native) and the other runs on bare metal.

Kotlin has its roots in the JVM and the early language design choices clearly reflect that. Kotlin/Native will find it very hard to break free of its JVM counterpart because it can't diverge too much from it while maintaining compatibility.


Swift has the distinction (for better and worse) of not being tied to the JVM.


Kotlin-native is an effort to compile directly to your target non-jvm machine.


Background: I spend most of my time in Swift but do work in Kotlin too. Occasionally some Rust.

Swift and Kotlin are in many ways similar, but I find Swift to be a bit nicer in almost every respect. Swift has more powerful generics, tuples, structs, powerful enums that are value types, `if let`/`guard let` statements that I'll take over `?.let {}` any day, etc. Kotlin is somewhat more expression oriented, which is nice; Swift is moving in that direction too, but slowly.

A native Kotlin (I have no idea how mature it is) might be a decent alternative to high-level Rust, but I'll take Swift if it's available. It's both more pleasant to work in and I assume more performant with better access to stack allocated value types.


Kotlin is about as nice as Swift until you run into some bit where it's interfacing with Java, and then it becomes a lot less fun. Swift I think handles Obj-C/C interop a bit more gracefully by comparison. Various bits of other JVM baggage also aren't great.

That said I've only used Kotlin in the context of Android development. It might be nicer elsewhere.


I haven't seriously used Kotlin in 4-5 years. I too was excited about it supporting native compiles and web assembly but I have no firsthand experience with how far those efforts got.

