This will take a while, but I'm looking forward to it.
It's probably another big step towards "Swift everywhere," without worrying about bridging to C.
I've been doing little but Swift since 2014, and really like the language. I'm still "on the fence" about SwiftUI, but that's mostly because of the level of support from Apple, and the [im]maturity of the system. This will help with that.
Yes, and it means Swift scripts and modules that don’t reference UIKit/AppKit/SwiftUI/Combine will run on Linux and possibly Windows with zero or little modification.
I'm a little sad though that they didn't start this endeavor years ago, because IMO Rust has already built so much momentum that it will win (for the popular, medium-to-long-term definition of "winning").
I don't particularly care whether or not Swift ever leaves the Apple ecosystem (like ObjC). In that domain, Rust will never "win." I think that Rust is an awesome server language, though, and I'm glad to see it gain traction. I just hope that it doesn't get trashed by a bunch of junk dependencies written in it.
I find apps written using hybrid systems or PWAs quite painful to use (on Apple devices, and that includes that awful JS tvOS system), so I am a big proponent of real native apps.
I think whoever "wins", it would be necessary to have good interop at least. Currently, Rust devs are integrating core libs into native iOS apps by going the C route. All this work to make everything memory safe, and then this.
Got it - the other way is actually fine because Swift does have a stable ABI - swift-bridge [1] allows interop in terms of high level types instead of devolving to C.
What is a real native app? Most operating systems have multiple graphics interfaces, ui toolkits and other abstractions with varying levels of inconsistency. They could all be driven by various languages, compiled or interpreted.
macOS and iOS not so much. Which is probably one of the reasons why apps for these platforms are often much better (or at least more polished) than say Windows apps.
In my view, "polish" is rather orthogonal. For trivial apps that just assemble the building blocks that come out of the box, I'd agree. Otherwise, I find the "native" developer experience (Xcode, Swift compiler) to be rather unpleasant. Moreover, the economics of developing "natively" limit the amount of polish you can actually justify.
The way to deal with that is to develop (or buy/integrate) dependencies that natively implement polish and chrome.
For example, a UI framework may implement good transition animations. That’s pretty typical. Write an extension to UINavigationController, or UIViewController (if you use UIKit) that implements these transitions, package it as a standalone project, and integrate it, or simply have project-specific baseline framework extensions.
That’s what I do. I have a ton of these packages[0].
It’s a fair bit of work, to do it yourself, but there are package[1] and extension[2] indexes. Caveat emptor.
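Something like this, as a simplified sketch (the method name is hypothetical, just for illustration, and not from any of those packages):

    import UIKit

    extension UINavigationController {
        /// Pushes a view controller with a cross-fade instead of the default slide.
        func fadePush(_ viewController: UIViewController) {
            let transition = CATransition()
            transition.duration = 0.3
            transition.type = .fade
            view.layer.add(transition, forKey: kCATransition)
            pushViewController(viewController, animated: false)
        }
    }

Package that as a standalone SPM project and every app target can share it.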
> Yes, and it means Swift scripts and modules that don’t reference UIKit/AppKit/SwiftUI/Combine will run on Linux and possibly Windows with zero or little modification.
This is already pretty much true today, aside from the few, mostly down-in-the-weeds holes in Foundation on Linux that will be solved once this rewrite is complete.
Cross-platform command-line tools, web backends, and things like AWS Lambdas are all possible and pretty easy to do today.
I watched a Rust intro video recently that provided a perspective I liked, so I'll share that here: MS, Apple, Google (and more) all relied heavily on C for low level code that needed to be as performant as possible. It turned out though that C's memory management is so problematic that many/most security issues are caused by it. To address that, Google invented Go, Apple made Swift, Mozilla gave us Rust etc.
MS is interesting - they tried to write memory safe C and invested heavily, but admitted defeat eventually - and started using Rust. And I think that’s what will happen at most companies eventually. Rust has a lot going for it (faster than C sometimes, excellent WASM support etc). Swift might have been another contender, but Apple kept things too close to their chest for too long IMO.
As another poster wrote, Swift certainly won’t die at Apple (and their ecosystem), and Google will certainly keep Go alive. But I think Rust will eventually be used at many/most other companies for anything security or performance critical. Maybe it will replace C as the de facto low level language.
If it were just a matter of memory safety, Java and ML were right there. And these newer languages are pretty different from each other.
It's not just memory safety. Go was motivated by highly concurrent systems with large numbers of programmers, and prioritized simplicity and developer experience. Rust was aiming at very high performance at the expense of complexity and compile times, and Swift wanted to build UI hierarchies.
ML is still right there, and yet people are adding its features to more popular languages (recently Java). OCaml had a compiler to JS released in 2010 (js_of_ocaml), yet Typescript was still released 2 years later. Is this because of technical concerns? NIH syndrome? Lack of knowledge about absolutely everything that has been done? A need for control? Probably a bit of each, and other things too.
No it didn't; they created C# because they got sued by Sun over the J++ extensions, none of which were related to value types and AOT.
J++ extended Java with a Windows-specific framework, WFC (Windows Foundation Classes), which later became Windows Forms.
It also added support for events and J/Direct, which is basically how P/Invoke came to be in C#.
.NET has always supported a basic kind of AOT via NGEN, which only supports dynamic linking, AOT has to be done at install time, requires strong named Assemblies and it is tailored for fast startup leaving the rest of the work to the JIT.
If it wasn't for the lawsuit, C# would never have happened; in fact, the research being done with COM vNext used J++.
Currently the Android team doesn't have any plans to expose Rust support in the NDK/AGDK; anyone going down that path is expected to support themselves in Android Studio, Gradle, CMake, android-ndk, AAR/Bundles, JNI integration.
None of that covers "Android Studio, Gradle, CMake, android-ndk, AAR/Bundles, JNI integration", which an Android shop expects to have out-of-the-box support for in the Android SDK installer.
Note that android-ndk in that comment means the original Makefile-based build tooling; CMake builds still lack some corner cases of its functionality, hence why I listed both.
So, just to be clear, in what languages do I not need to "support myself" under your definition and what does the "support" consist of?
Do I get an Android test phone to try my software out? Is there like free phone support so I can chat to some expert in my language about Android problems? You make it sounds like a pretty big deal, but my small experience† of writing Android software a decade ago was that it just wasn't that hard.
† I wrote an implementation of the now obscure mOTP (similar to TOTP) for in-house usage. For obvious reasons I named This One Time app "Band Camp" which was already a pretty old reference at the time but once I thought of it I couldn't help myself.
I just tried this and... no C++. You can add the NDK and start building stuff with C++, but that's also exactly how the Rust offering works. If the result was actually a properly configured out of the box C++ development environment that would be pretty nice besides the Android stuff, but it isn't, the actual result out of the box is you get to pick Java or Kotlin.
You can do C++ native development for Android, but only via basically the same route as Rust, there's just not the huge gap you implied.
Since when does Rust appear as a language selection in the NDK installer?
C++, meanwhile, has out-of-the-box support in Android Studio for:
- mixed-language debugging
- project templates wizard
- code completion and linting
- JNI bindings generation
- two-way editing between native JNI wrappers and Java/Kotlin native method declarations
- packaging of Android libraries for native code
And for game developers, if they so wish, plugins for Visual Studio with similar capabilities.
In both cases, official support from Android team if there are issues with the above tooling.
Apparently you haven't tried enough if you think bare-bones NDK integration with cargo is enough for Android shops.
Maybe Rust will get on https://developer.android.com some day, but it isn't there today, even despite the fact that it is being used for Android internals, there is zero documentation on how to write Android drivers in Rust.
Since the editing window is already over: I am not arguing against Rust, and would welcome first-class support for Rust in the Android tooling (Android/VS Studios, NDK, AGDK, Modules/Bundles) and visibility across https://developer.android.com documentation.
Can't speak for Rust (but I hear that it is now quite mature; it predates Swift), but I've been programming Swift since the day it was announced. In that time, the language itself has matured, possibly to the point that it's starting to look a bit "Swiss army knife"-like.
I'm not exactly your typical jargonaut. I've been writing software since 1983, and I've been through a lot of changes, paradigms, and just plain old bullshit in that time. I don't really go for "shiny" just because all the kids are into it these days.
I do not claim everyone coding Swift or Rust is a novelty-seeker; rather, novelty-seekers were drawn to those languages, and will be drawn to others as they appear.
I can understand the appeal of abandoning Objective-C.
“Rust grew out of a personal project begun in 2006 by Mozilla Research employee Graydon Hoare. Mozilla began sponsoring the project in 2009 as a part of the ongoing development of an experimental browser engine called Servo. The project was officially announced by Mozilla in 2010.”
The segment in this interview where Chris talks about Rust and compares the design decisions they made in Swift tied the two languages closely in my head: https://atp.fm/371
The section on future hopes for Swift did similarly; it seems both Rust and Swift contend to replace C++ in many use cases.
Lattner had a vision for a lower-level Swift, which wouldn't need the support Swift needs today - it didn't end up happening, and I suspect it is not practical. He talked about it in several places, but obviously by 2018 or so what Chris Lattner thinks about the future of Swift doesn't matter very much.
This "lower-level Swift" felt like a similar problem to what Carbon or Herb Sutter's Cpp2 have. They've got something that's unsafe, and they want to somehow build a safe abstraction, but that's a foundation layer, and you've already built a tower of stuff above that, so you need to build it underneath all the stuff you have, which will be way harder than what Rust did where they begin at the bottom.
Is it impossible? No. But it might well be too expensive to be pulled off in practice, especially given that you need the result to pay back your expense over and above what exists today e.g. Rust.
Aside from Swift having automatic reference counting, and Rust relying more heavily on its borrow checker, Swift and Rust are super similar languages. If what you want is an ergonomic language that lets you combine the best of ML type systems with C-family imperative programming, then they're the two obvious choices.
I would not describe Rust as being particularly ergonomic. While it is a suitable replacement for C++, that is not necessarily a high standard to meet.
For me it seems to almost exactly fit my model of how things ought to work, and of course the diagnostics are so much better than most languages.
We saw an example the other day where C and C++ just let you write what seems reasonable but then do something insane, D says no, that's a syntax error, but Rust's diagnostic gives a name to what you intended, says you can't do that, and suggests how to write it instead.
if (a < b < c)
C and C++ think that's doing a comparison, coercing the boolean result to whatever type to compare it with the remaining variable, and then acting on the new boolean result. D says that's a syntax error because there should be a closing parenthesis after the variable "b". Rust says "comparison operators cannot be chained" and suggests a < b && b < c
Edited to add:
Swift says: adjacent operators are in non-associative precedence group 'ComparisonPrecedence' -- which is definitely better than D, but it doesn't offer the helpful suggestion.
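For reference, a minimal sketch of the pitfall and the fix in Swift (variable names made up):

    // This fails to compile with the "non-associative precedence group"
    // error quoted above:
    //
    //     if a < b < c { ... }
    //
    // The rewrite Rust suggests works the same way here:
    let (a, b, c) = (1, 2, 3)
    if a < b && b < c {
        print("strictly increasing")
    }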
> Rust code is almost as complex as C or C++ for any non-trivial application.
This is just as true, if not more so, for Swift. I've been working with both Swift and Rust (in addition to C++) for some time now and I find real-world, advanced Rust SIGNIFICANTLY easier to read and comprehend than real-world, advanced Swift. This is, IMHO, due to the fact that where Rust chooses to be syntactically simple, explicit and consistent, Swift chooses to be syntactically complex, idiom-based and feature-bloated. Sure, Swift code can look very modern and attractive at times, but usually when it comes to superficial code samples in Apple promotional videos. Otherwise, if you, say, look at a large codebase written by someone else, Swift reads just as badly as C++ complexity-wise.
I’ve worked professionally with Java, Scala, C++, Python, Rust, JavaScript, Typescript, Ruby.
None of them are ergonomic for non-trivial applications!
The goal is to appropriately abstract away the super minority of code that deals with the non-trivial parts into a nice ergonomic interface.
Rust is frankly better than most in the list above at allowing the writer to create an ergonomic interface. Yes it’ll take the writer 3x as long in the short term to create the ergonomic interface but:
1. Relative to everything else creating/maintaining these types of internal abstractions is a super minority of time spent reading and writing code.
2. Unlike other languages, you’ll end up with fewer iterations of the interface because it’ll push the author to really understand the complete interface, rather than shipping a buggy interface that needs iteration. Also, refactoring in Rust is simpler than in any of the other listed languages (because it self-documents more assumptions).
3. The ergonomic interface likely has already been published as a crate. I.e. don’t need to write it at all. These internal abstractions are more likely to be written in their first pass as general purpose than in other languages because of the collaborative design working with the rustc compiler.
That’s just not true. Just the fact that you can reuse libraries easily makes Rust much easier. That combined with memory safety means that the two biggest headaches of C and C++ are just gone.
I don't know why you are being downvoted, but in my experience this is exactly right.
The pain of adding third party C++ dependencies is undeniable, especially in a cross-platform manner. I've had the displeasure of maintaining three different C++ build systems in 3 different companies, and they were all a nightmare.
> The Rust web server frameworks are approaching the ergonomics of Typescript web server frameworks.
I don't think that's true at all, at least it wasn't for me. Async in Rust has always been hard for me, it seems that using it requires knowing about how exactly it works and how your runtime of choice works. This is a lot, and requires a lot of time.
The documentation in Rust is above average; however, the Rust ecosystem tends to be very unstable. Libraries often pull in lots of dependencies, many of which aren't even at 1.0, and some that have switched to a new version but still have docs written for the old one. The guideline from semantic versioning is to release 1.0 as soon as people start depending on it, since the idea is to version the public API. This is not always respected, and it goes hand in hand with libraries that are 4 years old and on version 12 or something.
I remember actix-web before the 4.0 being particularly hard to get into.
I've been writing Rust recently, and trying to figure out how to write generic traits using higher-ranked trait bounds while understanding variance sufficiently to know I'm not creating a soundness hole is much more difficult for me than writing C++. It's like all the template hackery madness you needed to resort to to do anything mildly interesting in C++98, except it's in the core idioms of the language.
> trying to figure out how to write generic traits using higher-ranked trait bounds while understanding variance sufficiently
I agree! You’re currently dealing with writing non-ergonomic Rust.
I’d argue your domain is missing a fundamental reusable library/framework or the framework is currently missing a piece. Once someone publishes the needed library (hopefully it’ll be you) then everyone consuming it can just Lego block multiple crates together. Lego blocking crates together (barring heavy macro crates) is very ergonomic.
95% of all new code written is Lego blocking other crates. 5% of new code written is to build new or improve/patch crates.
Swift would never have won, and will never win, while it's so closely associated with Apple. Many people in positions of power believe (correctly) that Apple will always put its own needs above everyone else's. Those are the people who have chosen Rust.
Swift will never win against Rust for a much simpler reason: performance. Invariably, rust is chosen where performance is critical, and Swift’s reference counting GC ensures it will never compete with Rust in those scenarios.
Slight correction: Swift's ARC is not a GC (the compiler just inserts retain/release calls where necessary), and one can write very performant code by steering clear of reference-counted types and Objective-C types (dynamic dispatch is really slow). Value types with copy-on-write shouldn't impose much overhead. Automatic inlining is progressing nicely AFAIK, and the recently introduced concurrency can collapse call-stack frames (not sure Rust does that).
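A minimal sketch of the value-type point (simplified, made-up types):

    struct Point {          // value type: no retain/release traffic
        var x, y: Double
    }

    final class Node {      // reference type: the compiler inserts retain/release
        var point: Point
        init(point: Point) { self.point = point }
    }

    var a = Point(x: 0, y: 0)
    var b = a               // plain copy, no reference counting involved
    b.x = 1                 // 'a' is unaffected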
I have never used IMP caching, and don't think the average developer uses this (aside from indirect use of Apple's frameworks). Otherwise:
> A normal Objective-C message send is a bit slower, as we'd expect. Still, the speed of objc_msgSend continues to astound me. Considering that it performs a full hash table lookup followed by an indirect jump to the result, the fact that it runs in 2.6 nanoseconds is amazing. That's about 9 CPU cycles. In the 10.5 days it was a dozen or more, so we've seen a nice improvement. To turn this number upside down, if you did nothing but Objective-C message sends, you could do about 400 million of them per second on this computer.
This has the feel of premature optimization. These are the fastest operations. Their running time starts to get overwhelmed by the slower operations as you go down the list of things tested. Making the fastest operations a few times faster doesn't necessarily have any noticeable effect on your program.
Rust is pretty often chosen also in places where performance is not that critical. It has a good ecosystem, a good type system, native builds and great performance. Those factors make it a good choice in places where you'd be just fine even if it was slightly slower. If Swift loses by a small margin — and Swift is a high performance language too — it could very well edge out Rust with ergonomics.
You don't need ultra performant memory management for most of your code. Swift compiles to machine code and ARC is fine for most use cases and you can use unsafe raw pointers etc for your occasional hot code path where you want to optimize memory management.
Replying to both comments at once: I understood “win” to mean “be a viable replacement for C / C++”. A viable replacement has to meet _everyone’s_ needs, not just the needs of those for whom performance isn’t critical. And to be more specific about the performance, it isn’t by a small margin (unfortunately I no longer have the links handy, but the general trend was that Rust, C and C++ were in the same performance group, then Java, then Go, then Swift, then JavaScript. It was something like 3 to 5x slower).
I should also perhaps mention that Swift is my favorite language to develop in. I’m not trying to be antagonistic, just realistic about its prospects against Rust.
Swift is great. I’ve done a lot of it, but also bounced around between Typescript, Elixir and Java and every time I miss Swift’s strongly opinionated, highly descriptive and flexible style.
Also, for the most part, it’s vastly superior to Obj C. The sad thing is that when writing Obj C I often find myself not writing the clean, well structured solution that’s in my head out of sheer resistance to the verbosity that it would take. I always just keep thinking how simple it would be in Swift and how much longer and more keystrokes / files it takes in Objective C.
Waiting for things to settle down on the server side and will gladly use Swift for web APIs once dust settles as the lang is really nice for the most part.
I really like his style of 1) summarising articles he links to and 2) attaching follow-up from elsewhere on the internet; it's a nice pattern I'm surprised hasn't been more widely copied.
Not sure why but most people don't seem to have noticed that people outside Apple will now be able to contribute to Foundation and these contributions will ship across all platforms.
Good for Apple. In my opinion they are laying the groundwork for reducing technical debt.
Objective-C had a good run. I haven’t used Objective-C in over 10 years but I have used Swift about 10% of the time in the last four years. Swift is a nicely designed language and I could see it supporting Apple’s business for many years.
Will Swift ever be a primary language on Linux? I would say yes, except now that Rust is used the advantages of also mixing in Swift are diminished.
I would recommend people always use a language like Rust or C++ instead of Swift, so that it is easy to make cross-platform apps. The UI can be made using a native toolkit like SwiftUI, GTK, WinAPI, etc.; however, the core engine of the app should be written in a language like Rust.
Swift just isn't mature enough to know for sure, at least on Linux and Windows. With proper optimization, it might make a case for itself on Linux. That being said, it's going nowhere fast without official support from Apple.
Who else remembers when they re-implemented the then-current version of the Foundation framework in Java for WebObjects? That was, what, 1998? Right around when NeXT got re-absorbed by Apple; not sure if the rewrite was started before or after.
Those were the days! Actually kind of amazing that the Foundation framework is the result of steady evolution of an ObjC framework written by NeXT... over 30 years ago? All those `NS` prefixes that are still hanging on are for `NeXTSTEP`.
If this really replaces the ObjC implementation... would that be the final sunset of the codebase that has been there (at least ship of theseus style) from NeXTStep days? I wonder if there's continuous version control history of Foundation source from the start, and how many, if any, lines of code remain from the initial implementation.
Foundation was developed for the needs of EOF so it makes sense there was a version for the Java WebObjects. There's almost certainly NeXT-derived code in macOS/iOS with a longer pedigree and a bunch of it will probably outlast the Foundation rewrite. As software evolution goes, it's an astonishingly long run, no doubt. Especially for a technology that very nearly went extinct.
Huh, Foundation was developed for EOF? (Enterprise Object Framework; it was actually very much like Rails ActiveRecord). I did not realize that, I always figured it came first.
There were already many Rails-like frameworks when it came to be; I never understood the hype, especially since I was part of one written in Tcl back in 1999, whose core team went on to create OutSystems in 2001.
From the day I realised that, I've never ceased to be amazed by how many things have come from NeXT or were derived from it. I find it quite astounding.
During OS X's early days, Apple wasn't sure that the Mac OS developer community, raised on Object Pascal and C++, was that keen on embracing Objective-C.
So they jumped on the Java hype, created their own JVM implementation with Swing extensions for the OS X UI, and the Cocoa Bridge was born for Objective-C interop, with bindings for all key Apple technologies like QuickTime and such.
When it became clear that Objective-C wasn't going to be an adoption problem, instead of using a 3rd party owned language, they dropped support for Java and eventually gave their implementation to OpenJDK.
> With a native Swift implementation of Foundation, the framework no longer pays conversion costs between C and Swift, resulting in faster performance.
and this:
> A reimplementation of Calendar in Swift is 1.5x to 18x as fast as the C one (calling from Swift in various synthetic benchmarks like creation, date calculation).
First, that range of 1.5x-18x is kind of huge. Why?
Second, why would the interop with C be such a big performance penalty, even assuming it's just 1.5x? I know there must be an overhead, but why so large?
Also, why single out the Calendar? Is it somehow representative?
My guess is that it isn't just down to the language, but possibly better design and perhaps better programmers. Back when I used Java as my main language I reimplemented a few projects in Java that had been C++ and sometimes got unreasonably high performance boosts. (I'm not implying I always got things to run faster, but often). Most of those were down to:
- better understanding of the problem that needed solving, which leads to:
- structuring the application so it is better suited for what it actually has to do
- better choice of data structures and the algorithms that operate on them
- concurrency was more easily usable, hence the code would often run on more cores
In a couple of cases I discovered that Java was inherently faster because it coincided with "what the GC likes". In some cases Java turned out to be more CPU intensive, but that this was mitigated by being able to use more cores, so the user experience was better. (There were also setbacks: anything that looks like an LRU cache for instance is not friends with the GC, and you'd have to do silly tricks with NIO ByteBuffers plus serialization and whatnot).
A lot of Apple software is buggy, badly designed junk. It used to be worse, but you can still see that a lot of their software makes beginner mistakes such as blocking calls in the main event loop, resulting in "spinning ball of fail" type blockages.
Some of their system software tends to misbehave as well, consuming lots of CPU and making the fans spin up if left unaddressed. It seems to be some kind of rule that for each release there is at least one daemon that shits the bed.
I know nothing about their C codebase, but I assume it's because when the language is hard, you tend to keep it as simple as possible. When every line stops being a footgun, you start using optimizations you wouldn't have dared before.
I’m not deeply familiar with swift, objective-C or the next-step APIs, but it seems like they’re describing the difference between a (virtual?) function call (ie: swift to swift) vs a “message passing operation” in objective C. If that assessment is correct, then it’s really an exercise in small costs adding up quickly. If you call through the function pointer you have way lower latency, and if you call through the same operation hundreds of times to render a single screen (the calendar) inside of the render loop where latency matters- that’s measurably better. Especially on a device that tries to throttle CPU frequency to save battery power.
This is somewhat analogous to the arguments that can be had about JIT-compiled languages having the opportunity to exceed the performance of AoT-compiled ones because of inlining opportunities - except in this case I don't know if the call has any chance of being actually inlined so much as at least bypassing the message-passing machinery overhead.
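A minimal sketch of the two dispatch paths being compared (class names made up):

    import Foundation

    final class FastCounter {
        // Statically dispatched: resolved at compile time, can be inlined.
        func increment(_ x: Int) -> Int { x + 1 }
    }

    class DynamicCounter: NSObject {
        // Routed through objc_msgSend at runtime - the message-passing
        // machinery discussed above.
        @objc dynamic func increment(_ x: Int) -> Int { x + 1 }
    }

    print(FastCounter().increment(1), DynamicCounter().increment(1))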
I don't know Swift too well but my impression is that the support on Linux within the Foundation is not complete. (Also, is Foundation the same thing as what one might normally term a standard library? I can't tell.)
Does this change imply better (eventual) support on non-Darwin systems? Or maybe I've misread it and the change is unrelated.
> my impression is that the support on Linux within the Foundation is not complete
It is not, but it's like 95% of the way there in my experience - most things that are missing are relatively recent additions that have some complex OS interactions, like filesystem I/O with language-level concurrency features.
> Does this change imply better (eventual) support on non-Darwin systems?
Yes. The non-Darwin Foundation version is already a rewrite that takes a lot of effort to keep in lock-step with the closed-source Objective-C version, so unifying the implementations in Swift will both reduce the amount of maintenance effort and promote non-Darwin platforms to more of a "first-party" status.
> Also, is Foundation the same thing as what one might normally term a standard library?
It's more of a standard library++, including some things that other languages include in their standard libraries (Date/Time models) but also other things that are common to put in third-party libraries (networking, etc.). You do not need to use Foundation for basic things like arrays or concurrency that are built into the normal standard library.
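A minimal sketch of that split (assuming a simple script):

    import Foundation   // the "standard library++" extras

    // From the standard library alone, no Foundation needed:
    let counts = ["a": 1, "b": 2]

    // From Foundation: the Date/Calendar models mentioned above.
    let now = Date()
    let weekday = Calendar.current.component(.weekday, from: now)
    print(counts, weekday)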
> complex OS interactions, like filesystem I/O with language-level concurrency features.
Do you mean like io_uring? That wouldn't be too surprising. Even Go doesn't have that in the standard library yet.
<bikeshed>
It feels off to call things like Array part of any library when you have array literal syntax `let oddNumbers = [1, 3, 5, 7, 9, 11, 13, 15]`
I'd call that a language feature. Or maybe with Swift there is a blurry line between the two?
Array is defined in the Swift standard library, but the compiler knows about the type directly, and also recognizes various @_semantics("array.blah") annotations that appear in the standard library source code. I believe the semantics annotations are primarily to help the optimizer.
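The blurriness goes the other way too: the literal syntax is a language feature, but any library type can opt into it. A minimal sketch (the `Bag` type is made up):

    // Array *literal* syntax is a language feature, but it is wired to a
    // library protocol that any type can adopt:
    struct Bag: ExpressibleByArrayLiteral {
        var items: [Int]
        init(arrayLiteral elements: Int...) {
            self.items = elements
        }
    }

    let bag: Bag = [1, 3, 5]   // same literal syntax, custom library type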
For high level applications, Foundation is almost always imported and can pretty much be assumed. It's kind of a de facto standard library extension. The problem, as you put it, is that what's available on macOS/iOS is not fully available on other platforms. So you can't just take some software that uses Foundation and compile it for a non-Apple system without potentially running into a bunch of things you need to implement or polyfill or whatever you want to call it. Foundation has been growing its support for other platforms over the years, so the situation has been getting better. But it's been hard to take the "open source cross platform" goal of Swift seriously when there is clearly a priority given to Apple platforms.
The removal of C from Android and iOS is going to pay off massively. The current state of things where every government is sitting on an endless collection of exploits is not ideal.
So, we should feel safe knowing that Apple is protecting us from the wrong governments and working with the right ones? Considering how attached they are to the Chinese market, one would be forgiven for assuming Apple too is motivated by money over ethical responsibility.
They'll use the data to influence your next election. Just look at Moldova, who is infiltrated with Russian spies and dirty money. They've lost the last election but the country is paralysed by corruption.
It's not necessarily _easy_ to exploit iOS; the problem is that world governments and hacker orgs have budgets in the millions to find these exploits. We have already gone as far as we can by just telling people to be more careful when using C, so to solve this problem the only way forward is languages which make it harder or impossible to make the mistakes that keep happening in C programs. Hence Apple opting for Swift and Google using Rust in Android.
Google has real-world data on over 1M lines of Rust in Android and found that, based on expected bug frequency, the Rust code has astonishingly few bugs.
The news is actually that Apple is now instead trying to define the bits/parts to support via Swift on all platforms (the original API's will always be supported on Darwin).
The announcement:
https://www.swift.org/blog/future-of-foundation/
The discussion, with hairy details about which bits, esp. for async:
The plan is to divide up Foundation into more- and less-essential parts, to get more-essential parts locked down so people can rely on them.
What's Foundation? Swift's most-core library is the stdlib, tightly coupled to the compiler/language version, providing things like arrays, dictionaries, etc., and available wherever Swift is. Beyond that, Foundation is the library with core API's for common features, e.g., for dates and concurrency. Stdlib is fully cross-platform and Swift-specific, but Foundation is a beast, with API's dating back to NeXT and 20 years of accumulated surface area.
Microsoft famously arrived at porridge for an operating system by maintaining backwards compatibility. Apple has cracked open Swift by developing in the open, but it's still Apple-funded and Apple-driven. Library and some integration support for other platforms has always had to come from the community (notably compnerd's heroic effort to make things work on Windows, and a revolving cast wanting Linux support for their server API's).
But there's no good reason to impose the whole Foundation history on other platforms. And there may be a movement inside Apple to migrate internal code to newer async API's, designed after the recent Apple-silicon generation.
For developers with server experience and some free time in a tech lull, it could be a good opportunity to help rebuild a new, er, foundation for computing on all devices, that's native but type- and memory-safe. The community is large and mature, but there is plenty of room for others.
It simply sounds like there isn't much demand for Swift from the Open Source community. Apple certainly makes a big push for it internally, but outside their platform, Swift doesn't have much momentum.
If Apple wants people to use Swift like a first-class runtime, they should stop treating third parties like second-class citizens. They have $200 billion in cold, hard cash - surely some of it could go towards the selfless development of a universal future computing platform, right?
The proprietary mobile OS vendor for which you have to use their proprietary desktop OS to build apps isn't popular in the open source crowd? Shocking, really.
How great are the IDEs over there for Swift? A quick search indicates JetBrains just shut down theirs, apple.com/swift/ really screams "this is Apple stuff", and swift.org/getting-started/ seems to say you use VS2019 to build on Windows, but says nothing about using it as an IDE. There is a VS Code extension with 80k installs, though.
Xcode has its flaws but mostly it's pretty great to work in. With it available for the vast majority of current Swift programmers, it's a hard market to break into.
Xcode is slow and requires a huge amount of disk space (50 GB free just to install it).
Also, you can check the reviews in the App Store. The current rating is 3.2.
I don't think there's a binary choice between "totally open, general-purpose language across all platforms" and "closed-source/proprietary". Even if Swift remains a decidedly Apple language, developers can still benefit from having access to source code and to an open discussion/planning forum. Unreal Engine takes a similar approach.
I wouldn't trust Apple as far as I could kick them. Not that I think they're evil per se, just that their motivations align with their shareholders, not the dev community. Too capricious by half.
No need to wave the flag for your side - I don't trust MS for exactly the same reason. MS spouting BS to developers doesn't mean they actually prioritise them.
They want lock-in. If you write something in Swift and it's not easily portable to other platforms then you might decide porting is not worth the effort.
I often wonder what things would be like if Apple had improved upon Objective-C by fixing up the underlying C language. Yeah, it'd no longer be a superset of C, but so what? Neither is Swift. Swift is great, but I wish it was closer to Objective-C in spirit by deliberately being a more lightweight language. Its language spec is gargantuan; they've pretty much succeeded in making it a language as complex as C++.
Merely adding generics and better typing, akin to TypeScript's, to Objective-C would've worked wonders and made the transition less abrupt.
I used to too; it took me a while to switch, but now I see the benefits. Obj-C does give a lot more freedom, and sometimes you still need it, but what initially felt restrictive does actually lead to less buggy, more predictable code, and some protocols like Codable cut out massive amounts of code.
I still rely on some stuff like GCD that isn't very Swift-like - there still isn't anything as fine-grained in Swift - but I'm liking Combine in some cases.
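A minimal sketch of the contrast (fire-and-forget work; the sleep is only there to keep the script alive long enough to see the output):

    import Foundation

    // GCD gives fine-grained queue control:
    let queue = DispatchQueue(label: "worker", qos: .utility)
    queue.async {
        print("work on a background queue")
    }

    // The Swift-concurrency analogue of the same fire-and-forget job:
    Task(priority: .utility) {
        print("work in an unstructured task")
    }

    Thread.sleep(forTimeInterval: 0.1)   // sketch only: let the work finish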
My issue with Swift is that it changes faster than I can learn it. I'm not an Apple developer; I just write some code for macOS or iOS occasionally, like once a year. I learned Objective-C a few years ago and that knowledge has stayed with me.
I learned Swift 1, but the language is different now, and it seems I'd have to learn it again almost from scratch.
For professional *OS development Swift probably makes sense. But if I just occasionally need to write some code, Objective-C is preferable for me. Maybe Swift is stabilized enough by now...
There were a lot of changes made in the Swift 1–3 time frame as the team learned what it was like to write Swift at scale and developed a unique language style. That’s all done now, and Swift 5 code (released 2019) should continue to be compilable with few exceptions long into the future. It’s different enough from Swift 1 that it’s basically a different language though.
The biggest source-breaking change on the near-term horizon (which is still in progress) is compile-time enforcement of concurrency safety.
Can someone familiar with the existing setup help me understand this: right now, when I use these "Foundation" APIs from Swift, is it making a C call under the hood? And are they now rewriting that C code in Swift? (I don't do iOS or Mac development, so I'm not sure.)
I don't understand what makes it faster now. I once wrote some JNI code to call a C++ library from Java; is there similar glue code here for the cross-language call, and getting rid of it is what makes things faster?
Offtopic, but debugging crashes in C++ across the JNI call was super hard - logs didn't work and I never figured out how to fix that. I wonder if they had similar "troubles" and this makes everything easier :)
I always feel skeptical when a large codebase gets a "rewrite". I've tried it many times in my career, thinking there were good reasons, but it was super hard and not the best value in the end.
It's a lot of Obj-C/C on Apple platforms and syscalls on other platforms. I think the biggest point is to get rid of annoying Obj-C calls and make it fully cross-platform, as calling into Obj-C is a nightmare for both safety and performance in many cases.
I don't understand how a compiled language like Swift can be binary compatible with a dynamic one like ObjC. How are they going to implement method swizzling on NSObject, for example?
Swift and Obj-C are both compiled languages, and also dynamic in the sense that they allow a lot of runtime introspection and dynamic dispatch.
They're already binary compatible, in the sense that you can call compiled Obj-C classes from Swift apps (of course), and also - with some restrictions - call compiled Swift code from Obj-C.
In fact, you can "swizzle" methods in Swift just like in ObjC, on classes derived from NSObject:
What you're doing in this Medium post is calling the ObjC runtime from Swift and taking advantage of the Swift <-> ObjC bridging. This does not work if the class you're trying to swizzle is a pure Swift class, for a reason: Swift is mostly static dispatch, whereas ObjC is pure dynamic dispatch (taking its roots from Smalltalk).
This means code that tried to swizzle Foundation types, assuming Foundation is developed in ObjC, will not work anymore once Foundation uses a pure Swift implementation.
If Foundation is to retain binary compatibility with Obj-C, then it will by necessity still be using the Obj-C runtime, with Foundation types still extending NSObject, and thus swizzling should still work as before.
The alternative would be for Apple to break compatibility with old Obj-C apps, instead shipping a compatibility version of Foundation.framework which old apps would continue to link against. But it sounds like they're not going that route.
Every time something like this comes up, a bunch of people who believe safe programming is a language capability jump out.
But in fact, the most valuable part of the code in any nontrivial system is the "unsafe" yet safe parts. You write memory-safe code by understanding how a computer's memory works; sometimes you can use certain patterns to make that process easier, but not always. This is true regardless of what language you use: even if you program in Rust, the most valuable part is still where one gets the "unsafe" part right.
A good C programmer will always be a better Rust programmer when he wants to be. That's it.
It looks like an important step in progressively retiring ObjC. Swift was built from the start with ObjC compatibility in mind, which is a source of much ugliness and inefficiency in the language. With the Swift team working on a value-semantics-first language (Val), I can kind of see where this is going.
All the higher level macOS APIs are still object oriented, and I don't see that changing TBH. And macOS application source code is essentially just minimal glue code to tie those system APIs together, in the end, the programming language used for this glue code doesn't matter all that much, since the code is completely dominated by API calls.
Cocoa is object-oriented, but modern APIs not so much. They are of course based on interfaces/traits/protocols (whatever you want to call them), but it's not the same as '90s-style inheritance-based OOP. Inheritance as the basis for behavior is what I want to see gone. But even Cocoa already embraces composition more than inheritance.
By OOP I mean the entangling of behavior and type hierarchy (mainstream class-based OOP, aka C++ OOP). Dot notation and methods do not equate to OOP. But of course, it depends on your definition. Pretty much anything can be called "OOP" if one wants to.
The defining feature of mainstream OOP (as popularised by languages like C++ and Java) is the entanglement of type hierarchy and API contract. In other words, inheritance. And inheritance is the main point of criticism of mainstream OOP. You can call it class-based programming if you feel this is more accurate. Let's not get stuck on terminological issues.
At any rate, I apologise for using such an ill-defined term as "OOP" in the first place. This terminology is so overloaded and washed out at this point that it might make sense to retire it altogether.
It's the base framework where types like NSObject, NSString, NSDictionary etc. are defined. It's right there in the article. It also explains why it exists, what others exist, and what a rewrite will bring.
The Foundation framework is a cornerstone of most macOS and iOS apps, providing a large number of ubiquitous abstractions, including NSObject, NSString, NSArray and NSDictionary, and many more.
You're making the wrong assumption! I had read the article before I asked, and I still have no idea what it is! Saying it provides the abstractions NSObject, NSString, NSArray tells me nothing. The article is clearly written for readers already familiar with the Apple ecosystem.
Objective-C is an object-oriented programming language that is essentially a superset of C. In fact, the earliest implementation of Objective-C, if I recall correctly, translated Objective-C code to C.
The only built-in types in Objective-C are the C types, like int, char and so on. The "Foundation" is a set of classes that implement many useful types, all inheriting from a superclass called NSObject. The "NS" prefix refers to NeXTSTEP, the operating system from NeXT, the company that Steve Jobs started and where Foundation was first created.
One of the advantages of this scheme is that it is possible to have heterogeneous collections (like an array of objects, where the objects do not all have to be of the same type/class).
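A minimal sketch of such a heterogeneous collection, using the Foundation types from Swift:

    import Foundation

    // NSArray holds any object, so different classes can share one array:
    let mixed: NSArray = ["a string" as NSString, 42 as NSNumber, NSDate()]
    for element in mixed {
        print(type(of: element), element)
    }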
Underneath the NS foundation (whose headers are all in Objective-C) is something called Core Foundation, which implements the same classes, but in pure C. To do this well requires huge programming discipline, especially around memory management, and much of that pain is taken away by using the NS classes.
Swift has already started replacing some of the Foundation classes (NSString and NSDictionary, for example), giving them the names String and Dictionary without the cumbersome two-letter namespace prefix.
I suspect that some of this rewrite may still "bridge" to the CF classes, but in other cases, it would be much better to simply write the class in its entirety in Swift.
GCC still compiles Objective-C as if it is translated on-the-fly to C.
Unfortunately, there is no way to get the translated source :-( because it happens at an AST-like level.
The Foundation framework is basically the standard library of Objective C (and thus pretty much the standard library for most of the Apple ecosystem as well).
It's an API for the higher-level parts of the OS, not the kernel, so on Linux that would be distribution-dependent. You would build it on top of Linux and have other programs program against it.
Windows has WinAPI which is basically the same thing.
I doubt the entire kernel would - there are low-level, memory-unsafe operations that an operating system performs that Swift _could_ do, but it would be fighting the design of the language and likely not a great trade-off for the kernel developers.
But pieces absolutely could be. The benefit would be compile-time checking for memory safety issues (reducing crashes) and language-level concurrency (fewer race conditions and a much easier path to parallelize single-threaded code for performance).
I believe all the code on the Secure Enclave is now Swift. I believe they use a special mode/analyzers to ensure only the “ok” stuff is used. Perhaps no memory allocation, for example.
It sounds to me like Apple would like Swift, perhaps with some extra tooling, to be able to handle the Mach kernel. I don’t know if they’ll go that far though.
I am pretty certain that most of high level use cases for Rust could be replaced with Swift, with increased developer velocity, if Swift was actually cross platform. And that's coming from a Rust fan. It will be interesting to see how effective this rewrite is.
> I am pretty certain that most of high level use cases for Rust could be replaced with Swift, with increased developer velocity, if Swift was actually cross platform.
Sure, or really OCaml or any other language with ML-family features (other replies have already mentioned Kotlin or C#). But the fact that it took Rust to get adoption and not OCaml suggests that it's not language functionality that drives adoption; Rust has something that those languages don't. (My theory is that it succeeds by being the first decent language that can match C's performance-on-silly-microbenchmark numbers)
I would love to be able to write cross-platform desktops apps in Swift. The language isn't perfect of course but I enjoy it a lot more than the alternatives I've tried.
That possibility is still a ways down the road of course but Foundation getting a cross platform rewrite is a nice step in that direction.
If you or others on this thread are interested in working on a Swift compiler for Windows, please reach out to me :) a good friend is hiring a team to make this happen.
To me, that sounds as if most of the work would not be on the compiler but on improving the development experience (IDE, debugging support).
On the other hand, your comment said ”working on _a_ Swift compiler for Windows”. To me, that sounded like a new compiler, not adding missing features to the existing one. Which of the two is it?
You could take a look a swift-nio (https://github.com/apple/swift-nio) which is a pretty high-throughput system. swift-nio does this using some reference-counted GC where it simplifies the code and doesn't affect performance. Otherwise, value-types are used which incur no GC overhead (unless they are copy-on-write, and backed by something that requires reference counting).
Well that's why I said high level application use cases and not systems. Ideally I'd love to see a compile mode for modules where certain conveniences were enabled like: auto-clone, unified reference/owned types, disabled lifetimes and instead borrows become auto-wrapped-arcs, etc. That way you could care about that stuff in modules where it matters, and ignore it in areas where you prefer convenience over zero-cost abstractions and raw performance.
Sure you can. C++ has reference-counted garbage collection with its smart pointers and RAII, just like Swift. People certainly write high-throughput systems in C++.
Swift is slower than C++, yes, but not because of its memory management scheme.
Unless you go absolutely ham with smart pointers (which everyone knows not to do if you care about performance), C++ isn’t really a reference counting language.
I wouldn’t call RAII “reference counting”. I mean, I guess, but it’s the programmer or the compiler doing it. I’m talking about runtime reference counting.
Swift only uses reference counting when working with objects; Structs are optimised using copy-on-write. There's currently work on implementing move and ownership semantics, similar to Rust, but opt-in rather than by default.
Swift structs are just like C structs (from a memory perspective). The copy-on-write thing is implemented manually by storing a private refcounted object in your struct. See the implementation of Array for example: https://github.com/apple/swift/blob/main/stdlib/public/core/...
There’s no magical copy-on-write mechanism at the language level.
I don't know why this is downvoted, it's true. There is no automatic copy on write optimization in Swift. It's a manual optimization that expensive types like Array implement manually.
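A minimal sketch of that manual pattern (hugely simplified compared to the real Array sources; the types here are made up):

    final class Storage {
        var values: [Int]   // stand-in for a raw buffer
        init(values: [Int]) { self.values = values }
    }

    struct CowBox {
        private var storage: Storage
        init(values: [Int]) { storage = Storage(values: values) }

        var values: [Int] { storage.values }

        mutating func append(_ value: Int) {
            // Copy only if the storage is shared with another CowBox.
            if !isKnownUniquelyReferenced(&storage) {
                storage = Storage(values: storage.values)
            }
            storage.values.append(value)
        }
    }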
I haven’t read that much on Rust, but that should be pretty much it.
Swift has an emphasis on value types, which often are stored on the stack, but only up to a certain size. Copy-on-write makes this feasible - often, those value types are passed by reference until a write actually occurs, but that’s opaque to the dev.
Value types can have pointers to reference-counted types - if a value type is passed/copied, any pointer it owns is retained (weirdly enough, they claim that value types have no "lifetime", but at some point those refs have to be released - we just have to trust the compiler in this).
Swift and Carbon both seem like very strong contenders in this space. Swift is already a really strong language albeit a little too tied to Apple’s ecosystem. This along with the ownership manifesto slated for Swift 6 (and C++ interop) should make it easier to use it everywhere. In particular, I see a lot of value for tooling (eg JS code bundlers and such) that are rewritten in “thinner” languages for performance needs without sacrificing developer productivity.
If you don't care about performance and "zero cost abstractions" and are more interested in Rust for its memory safety, then most of your Rust programs end up with a lot of syntactic and library bloat to add those "costly" (but convenient) abstractions back in, like explicitly cloning or wrapping types in Rc and Arc or just always taking ownership of borrowed data by constructing owned types with it. And then there are lifetimes, which can be and are elided in almost all use cases, so they're ultimately just confusing whenever you actually have to deal with them. Cf. Swift, where the default is in my experience what "higher level" Rust programs end up looking like, but with much nicer and cleaner syntax. And Swift has a very nice, rather memory-safe API for calling into C or otherwise just generally accessing scoped pointers when needed. This same API can actually be used to expose more performant APIs to underlying data etc. when needed. It's kinda the reverse of Rust.
It feels like Rust is two languages in one: a high performance zero-cost-abstraction language that tracks pointer ownership, and a package-rich, hyper-explicit application language built atop all the guts. That's why I say high-level application use cases, because most high-level applications are not concerned with raw performance but rather with functionality and user experience. The parts that are concerned with throughput can be implemented using tooling where those knobs are available. I have enjoyed writing a CLI and API server and various libraries in Rust. But every so often I am left wondering when Swift might be able to replace it for my higher level concerns. Alternatively, it would be neat to see some effort put behind a "convenient rust" type of compile mode for modules where you could compile with things like implied clones, unified owned vs reference types, auto-arc/box, etc.
> Alternatively, it would be neat to see some effort put behind a "convenient rust" type of compile mode for modules where you could compile with things like implied clones, unified owned vs reference types, auto-arc/box, etc.
If you can come up with a precise transform from "convenient/sloppy rust" to the underlying language, it can already be implemented via proc macros and the #[attributes] syntax. This is how async programming was prototyped in Rust before it became part of the language proper.
Though I'm not sure it's fair to describe Rc, Arc, Cow etc. as "library bloat". It certainly adds some boilerplate, but it's designed to stay manageable.
(Arguably, good coding practice should also informally document why the Rc, etc. is needed and can't be refactored away, i.e. what parts of the program are controlling the lifetime of each Rc'd object.)
Maybe bloat wasn't the right term (was thinking that it bloats the code, not the resulting program), but when you're in a scenario where you're doing one thing more often than the default, it feels like boilerplate to have to continually say `String::from("some string ref")` (as an example) just to take ownership when that's what you want every single time. Or, at least something like an "owning assignment" or "owning deref" operator that automatically adds the `to_owned()` call would help reduce boilerplate.
Maybe it was a mistake to conflate lifetimes and generics. It wouldn't be so bad to deal with references in Rust if you didn't have to include lifetime parameters when building APIs. It seems silly that (in my experience) people gravitate toward structs with owned fields just to avoid specifying the lifetime of a borrowed field.
I have used all those languages seriously save C# (only lightly) and can safely say Swift knocks them all out of the park in terms of actually writing applications. Indeed, its cross-platform support is lackluster; that's why we're here.
Personally I'd love to see a forked flavor of TypeScript with static compilation and multithreading (which implies a lot more immutability, etc). Maybe I should give Swift a try…
If Foundation is rewritten in Swift, it will become a very potent alternative to Rust for this use case. Swift+LSP has good-enough IDE support, and the developer experience and debugging capabilities should make it viable.
I think the fact that Rust won't provide inheritance based OOP workflows means that something like swift will always be valuable, especially for application development.
What is the value provided by inheritance? It was my understanding that this was deliberately excluded from Rust, as it's now seen as a bit of an anti-pattern, mostly inferior to traits.
Inheritance is about providing business logic to all downstream inheritors through a super() chain. If you want shared logic in Rust between several implementors of a trait, you will probably end up with duplicate code. There are drawbacks to both.
Macros provide code injection; that's not the same thing. Inheritance allows for overrides, so you can have [default super logic] + [custom child logic]. That's only possible in Rust with a lot of copy and paste, which can be fragile to maintain long term. Most UI frameworks are built around inheritance to guarantee logic provided by the base view or view group, while allowing inheritors to mix in custom logic as well.
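To make that concrete, here's a minimal Swift sketch of the override pattern (the BaseView/ProfileView names are invented for illustration):

    // The base class provides default logic that every inheritor gets for free.
    class BaseView {
        func willAppear() {
            print("BaseView: start analytics, reset layout")  // default super logic
        }
    }

    // An inheritor keeps the default behavior and mixes in its own.
    class ProfileView: BaseView {
        override func willAppear() {
            super.willAppear()                 // [default super logic]
            print("ProfileView: load avatar")  // + [custom child logic]
        }
    }

    ProfileView().willAppear()

With traits you'd instead share the common piece via a default method, but overriding-and-extending like this is where inheritance is more direct.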
How would you compare Swift with Kotlin in terms of being a good stand-in for high-level Rust?
I've not paid attention to Kotlin in a while, but it seems like there's more progress at supporting Kotlin across more platforms (Kotlin native, web?, etc). I'm curious if people feel that the language features and design are at about parity, or if one is significantly stronger/weaker.
Last time I toyed with Kotlin Multiplatform (a bit less than a year ago) it provided only really weird Objective-C bindings for use in iOS/macOS. Like, a statically defined List turned into an Obj-C iterator. Enums and sealed classes were quite problematic too - I honestly would not consider using KMM for anything but a PoC.
BTW, JetBrains recently sunsetted AppCode, so they are bleeding their Swift talent now. I suspect that doesn't bode well for Kotlin's Swift interop.
Another scenario where an often-missed and underrated choice would be C#.
Nowadays it runs everywhere and supports a variety of deployment targets (relying on local runtime, packaging it together into a single somewhat compact binary, natively compiling the code (NativeAOT, it's good but has some limitations) or mixing and matching all of the above).
It is also one of the best high-level languages to write performance-oriented code with, especially so with .NET 7 which allows writing cross-platform SIMD code in a similar fashion to Rust's 'portable-simd' crate.
My experience is probably colored by .NET/WinUI, but when I was dabbling in writing a Windows desktop app a few months ago I can't say I loved C#. My background has no Java in it, so maybe that's why, but a lot of things that come standard as part of Swift seem to live in external libraries with .NET/C#, and it felt like some things were different for the sake of being different. Case in point: with Swift and Kotlin I've gotten used to chaining various transformation functions (map, compactMap, flatMap, filter, sort, reduce, etc.), and some of these don't have a 1:1 equivalent in C# which was a huge hit to productivity, even if the same result is achievable via other means.
And while it's tangential, Xaml drives me absolutely bonkers. It's like the worst parts of iOS Storyboards and Android Framework XML layouts except there's no escape hatch for those looking to build a UI in pure code (Android Framework is a bit of an offender here too, but Jetpack Compose looks to remedy that).
All of these are part of the standard library.
Map is Select, Filter is Where, Reduce is Aggregate, etc. You can write C# in a functional style easily; it's one of its premiere features.
I did find and use Select and Where at least, but I recall running into caveats with them or with their interactions with other parts of the language that meant that they couldn't be used identically. I don't remember specifically what it was since it's been a while but I remember it being frustrating. I should probably find a cheat sheet of equivalents if I try writing it again.
> some of these don't have a 1:1 equivalent in C# which was a huge hit to productivity
This is interesting because all these magical functions (zip, map, Rx, etc.) have roots in LINQ, which sprang from the .NET world. I find it hard to believe that the battle-tested CLR and C# don't have the equivalent functions.
> all these magical functions (zip, map, Rx, etc.) have roots in LINQ, which sprang from the .NET world
No they don't. They're essentially unchanged from ML back in the 1970s. The part that was new in C# was the SQL-like syntax on top of them, and most subsequent languages haven't considered that worth adopting.
Apart from language syntax similarities, Kotlin and Swift are fundamentally different languages, one runs on a VM (ok there's KNative) and other runs on bare metal.
Kotlin has its roots in the JVM, and the early language design choices clearly reflect that. Kotlin/Native will find it very hard to break free of its JVM counterpart because it can't diverge too much from it while maintaining compatibility.
Background: I spend most of my time in Swift but do work in Kotlin too. Occasionally some Rust.
Swift and Kotlin are in many ways similar, but I find Swift to be a bit nicer in almost every respect. Swift has more powerful generics, tuples, structs, powerful enums that are value types, `if let`/`guard let` statements that I'll take over `?.let {}` any day, etc. Kotlin is somewhat more expression oriented, which is nice; Swift is moving in that direction too, but slowly.
A native Kotlin (I have no idea how mature it is) might be a decent alternative to high-level Rust, but I'll take Swift if it's available. It's both more pleasant to work in and, I assume, more performant, with better access to stack-allocated value types.
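For what it's worth, here's the kind of thing I mean - a value-type enum with associated values, unwrapped with `if let` (a small sketch; the Payment type is invented):

    // A value-type enum with associated values.
    enum Payment {
        case cash
        case card(last4: String)
    }

    let payment: Payment? = .card(last4: "4242")

    // `if let` unwraps the optional; `switch` then pattern-matches exhaustively.
    if let payment = payment {
        switch payment {
        case .cash:
            print("paid cash")
        case .card(let last4):
            print("paid with card ending \(last4)")
        }
    }

The Kotlin equivalent needs a sealed class hierarchy of reference types, plus `?.let {}` for the unwrap.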
Kotlin is about as nice as Swift until you run into some bit where it's interfacing with Java, and then it becomes a lot less fun. Swift, I think, handles Obj-C/C interop a bit more gracefully by comparison. Various bits of other JVM baggage also aren't great.
That said I've only used Kotlin in the context of Android development. It might be nicer elsewhere.
I haven't seriously used Kotlin in 4-5 years. I too was excited about it supporting native compiles and web assembly but I have no firsthand experience with how far those efforts got.
Tangential question: how do you Swift-using, non-app developers find out how to do basic non-app things, such as reading files and making network connections?
I Google for stuff, and probably 3/4 of the results are out of date and won't work in any reasonably modern version of Swift. I find Apple's developer documentation sub-par, to say the least. So how do you navigate it all?
It's quite a shock coming from stuff like Go, C#, and Rust.
It'd be pretty cool if there were a good Swift cookbook à la the classic Perl Cookbook.
Is there anything genuinely innovative about Swift or is it just another "cross-platform scripting language" that I'll only see on 1 platform?
The only reason I learned Powershell is because I have to admin for Windows. At least Powershell is interesting for its pipeline and object handling. I get not one spark of joy from Swift, and the comments I see are "at least we're not writing C anymore". Stockholm syndrome.
Sorry this is so negative, but I can't get excited about $NewLanguage unless I know it offers real benefit. I have drunk the Rust koolaid, and for very good reasons. What am I missing in Swift?
It’s pleasant to use, that’s the main thing about Swift.
A very Apple thing, to be honest: just like most Apple products, it's very good at a few things, good enough at others, and probably nothing about it is completely new.
The overall experience is pleasant, and Apple is building equally pleasant tooling around it - Swift Playgrounds is amazing, for example.
Newbies can learn coding by doing exercises disguised as games, but that's not what's amazing. What's amazing is that you can make full apps with all the libraries, Swift packages, and SwiftUI. Apple did a good job hiding the boilerplate away, did a good job removing the decision-making about project file structure, and created an intuitive interface where you can just code without worrying about anything else.
In the 2020s, Swift is probably the only language with tooling that lets you just code your ideas and deploy without dealing with anything else - like creating a Word document and writing your thoughts down.
I don't know if submitting to the App Store from Swift Playgrounds has been released yet, but the beta I was trying last month had this functionality. From idea to market, start-to-finish integration.
> Is there anything genuinely innovative about Swift
Apple's thing is innovation - taking technology that exists but is clunky and making it accessible.
Swift is intended to be a more accessible alternative to Objective-C. It is memory-safe (reference counted like Objective-C's ARC, but built into the language) and type-safe, and has modern features such as named parameters, generics, duck typing (via protocols), closures, multiple return values (tuples), inline array and dictionary literals, etc. It's intended for writing user apps, but is also suitable for writing system frameworks. It also has some (possibly overly) clever syntactic sugar that has enabled SwiftUI's pseudo-declarative API.
Regarding memory safety we have: Swift (mostly reference counting), Rust (mostly compile-time checks), Java (mostly garbage collection), and C++ (smart pointers, or manual/unsafe).
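A rough illustration of a few of the features mentioned above - generics, named parameters, closures, and array literals - in one invented snippet:

    // A generic function with a constraint and a named parameter.
    func clamp<T: Comparable>(_ value: T, to range: ClosedRange<T>) -> T {
        min(max(value, range.lowerBound), range.upperBound)
    }

    let scores = [3, 9, 12]                            // inline array literal
    let capped = scores.map { clamp($0, to: 0...10) }  // closure + named argument
    print(capped)                                      // [3, 9, 10]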
A good point, and it bears emphasising the subtle but important distinction between innovation and invention. Taking an invented thing and honing it into something that seems obvious in its effortlessness is often especially difficult - and that difficulty is often impossible to see in hindsight.
Swift doesn't really have to be "genuinely innovative" - it's a modern, ergonomic language which is pretty easy to pick up and use, has decent performance, and combines a solid type system with the familiarity of a C-like imperative language. It has pretty obvious usability and expressiveness benefits relative to Objective-C while being designed for effective interoperability with a massive existing set of codebases.
It actually feels quite a lot like Rust to write it. They're obviously designed for two different use-cases – Apple platform development versus safe systems programming, but they share a lot of similarities in other ways.
Innovation isn't the same as invention. Apple didn't invent the MP3 player or the smartphone, but they certainly innovated with the iPod and the iPhone, which combined technologies into useful and accessible products.
Swift might not change the industry, but it makes life easier for anyone who wants to develop on an Apple platform but finds Objective-C unwieldy. Also it provides a path for Apple to migrate its platforms to memory-safe languages, which can reduce bugs and security vulnerabilities.
Swift isn't a scripting language; it's a memory-safe compiled language suitable for systems programming, in the Obj-C/C++/Rust/Go category. But unlike Rust, it's a bit more automatic, and you don't feel compelled to micro-optimize every memory allocation and RC operation.
…also, the multithreading design is better and more performant.
…also, it has a stable ABI that supports shared libraries, which no other compiled language except C ever manages to get right.
Powershell and Swift are so far apart in design as well as intended use cases that I'm not sure why you'd even want to compare them. It's like comparing Zsh to C#.
I suppose their point is that much as you can install and run PowerShell outside Windows, you can run Swift outside Apple's ecosystem. People just don't do either of those things very often.
With Swift, the support for other platforms is mostly token right now. As in, you can write code in it, but e.g. their sample app for Windows uses the Win32 API directly - something Windows developers would have considered outdated 20 years ago.
OTOH PowerShell actually works largely the same across all supported platforms, so it's equally useful on all of them in absolute terms. Now, outside of Windows, we've had decent shells for much longer, so its relative utility is indeed lower on those platforms (although not non-zero, if you find the notion of an object-oriented shell with a rich standard library useful). One particular use case where it comes in handy is when you have to script some automation for both Windows and Linux; e.g. cross-platform builds or CI. So mostly shared code, but in a few places you might need to do "if LINUX then ...".
That's kind of one of Swift's things - it is designed to be as accessible as a scripting language but scalable to full applications and system software.
Ideally it gives you some of the convenience and accessibility of a language like Python, combined with runtime efficiency of a language like Objective-C.
print("hello, world") is a cute example since it's a valid program in both Python and Swift.
(Though I still haven't forgiven Python for removing the print statement, which dates back to BASIC antiquity.)
Top level programming is actually a little too surprising in Swift. For instance, you can access variables before they're declared - so you're not just implicitly inside a main function.
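For instance, this compiles and runs as a top-level main.swift, which it couldn't if the code were literally inside a function body (a small sketch; assumes a recent Swift):

    // show() refers to a variable declared further down. Inside a real
    // function, a forward reference to a local like this wouldn't compile.
    func show() { print(x) }

    let x = 42
    show()  // prints 42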
I've never done any Mac or iOS development, and even I can see this is a strange question. I don't know why you think it's a scripting language, or how it compares in any way to PowerShell.
"It's not Objective-C" is pretty much the only advantage it has. You're not missing anything, it's basically a superfluous language that only exists in its current form because Objective C needed replacing and Apple would never adopt a language they didn't have effective control over. Language-wise, I see zero benefit over Rust, possibly Go (I'm not familiar enough to comment) or even JVM-less Kotlin.
But hey, we're stuck with it now so no use complaining.
Static and concise NPE protection via optional syntax, and the guard construct, both make defensive programming much easier in Swift. Tons of other little things.
Memory management is easier than it is in Rust. It has generics, which made it more flexible than Go [until Go's generics were recently introduced]. Kotlin/Native wasn't announced until three years after Swift.
I don’t think it’s as simple or one-sided as you’re making it out to be.
For the record, I find the guard keyword totally unnecessary, and I still have NFI what's wrong with this in Swift:
if bar == nil {
...
}
I also totally disagree about memory management being "easier" (at least with C-interop), because I think Swift's handling of UnsafePointer/UnsafeRawPointer/UnsafeBufferPointer/mutable variants/withMemoryRebound/etc isn't particularly easy to follow at all.
To be clear though: I don't think Swift is a bad language. It's perfectly adequate. But Apple decided to force Yet Another Programming Language down our throats - one that, in my opinion, offers no real advantage over many of the other choices available at the time.
"We're Apple and you will only use a language that we control" was, IMO, the driving force behind Swift, not the inherent superiority of the language itself.
The guard keyword behaves like a non-crashing assertion that promotes code readability with a compile-time required early exit. “Don’t bother executing past this point if these conditions are not met” is fundamentally different from “if these conditions are met create a new local scope and then continue executing”.
> Swift's handling of UnsafePointer/UnsafeRawPointer/UnsafeBufferPointer/mutable variants/withMemoryRebound/etc isn't particularly easy to follow at all.
And that’s why we have inout instead? Unsafe* is useful if you need to bit pack frozen structs into a ring buffer and increment the r/w offsets by the stride, maybe? I think inout can handle that as well.
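For reference, a tiny sketch of the `inout` version of that ring-buffer-offset idea (names and buffer size invented):

    // inout passes the variable by reference, safely - no Unsafe* pointers needed.
    func advance(_ offset: inout Int, byStrideOf stride: Int) {
        offset = (offset + stride) % 4096  // wrap around a hypothetical buffer size
    }

    var readOffset = 4090
    advance(&readOffset, byStrideOf: 16)
    print(readOffset)  // 10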
In addition to what replies have already pointed out to you, guard places the assigned new variable in the same scope below. This encourages early exits and avoids pyramids of doom.
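Concretely, something like this (a minimal sketch with made-up names):

    func sendEmail(to address: String?, body: String?) {
        // Each guard unwraps into the enclosing scope and exits early on nil,
        // instead of nesting if-lets into a pyramid of doom.
        guard let address = address else { return }
        guard let body = body else { return }
        print("Sending \(body.count)-character message to \(address)")
    }

    sendEmail(to: "hn@example.com", body: "hi")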
Swift isn't new, but yeah, I agree it's not as exciting to think about as Rust or Go. I think I prefer it over some of the oddities of TS, but certainly over JS.
Swift can do everything Go does, but better, now that it has async/await, because it has real types and a real unsafe API. Calling C from Go is horrendous.
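For what async/await looks like in Swift (a sketch with invented names; top-level await assumes Swift 5.7+ in a main.swift):

    // Two calls started concurrently with async let, then awaited together.
    func fetchScore() async -> Int {
        try? await Task.sleep(nanoseconds: 10_000_000)  // simulate I/O latency
        return 42
    }

    func total() async -> Int {
        async let a = fetchScore()
        async let b = fetchScore()
        return await a + b
    }

    print(await total())  // 84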
I really should have started using Long Bets years ago.
Ten years ago a “Tanya” told me she would never use an ebook to read literature. I’m pretty sure if I went to her house today I’d find 2. In fact I may be the only person I know who doesn’t own one (not wanting one and being fundamentally against them are two different things).
That’s the most memorable but I’ve had these sorts of “over my dead body” conversations on a hundred topics over the years.
Had never heard of Long Bets, I really like it (https://longbets.org/ for others who, like me, were unfamiliar). Is there a way to see the month and day of month when the bet or prediction was made? I can see the duration, e.g. "Duration 4 years 02017-02020", but don't see the actual date.
Maybe Tanya reads a lot of PDFs - the de facto default format for all kinds of technical things and a very common format for downloadable ebooks. AFAIK ebook readers are terrible at PDF rendering.
> AFAIK ebook readers are terrible at PDF rendering.
They're not really comparable formats. PDFs are documents rendered onto virtual paper: there is only so much a mobile device can do there, as a PDF is likely rendered as an 8½ by 11 or A4, which isn't going to be great on a tiny mobile screen. Bad tradeoffs between tiny text or pan-and-scan.
eBooks, OTOH, are reflowable: ePub is "just" a ZIP of HTML¹. But the viewer can lay them out appropriately, so they can be formatted to fit your device, whether that's a tablet, a phone, or something else.
IMO if you want to read something on a device, ePub > PDF for those reasons. PDF is good for print, & ensuring that what comes out of the printer has a shot at resembling the screen. (Though there are a good number of print settings to mess that up, too.) And, as a passably portable document format, if you're just doing short term viewing or something.
¹Plus a lot of other metadata that I'm eliding; it's a fair bit more complicated, of course.
Punch line: Swift Codable does around 10 MB/s, despite all the supposed performance goodies and compiler support everyone always talks about and nobody appears to ever measure. A pure Objective-C implementation does 284 MB/s.
I also did a more general survey for my book[1]. While that's been a while, I haven't seen any indication that things have fundamentally changed, and lots of indications that they haven't.
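For anyone who wants to sanity-check numbers like these, a rough sketch of this kind of measurement (the Record type and sizes here are illustrative, not the benchmark from the book):

    import Foundation

    struct Record: Codable { let id: Int; let name: String }

    // Build a ~2.4 MB JSON array of small records.
    let one = #"{"id":1,"name":"test"}"#
    let json = "[" + Array(repeating: one, count: 100_000).joined(separator: ",") + "]"
    let blob = Data(json.utf8)

    let start = Date()
    let records = try! JSONDecoder().decode([Record].self, from: blob)
    let seconds = Date().timeIntervalSince(start)
    print(records.count, "records at", Double(blob.count) / seconds / 1_000_000, "MB/s")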
This largely depends on how much of the Swift<>ObjC bridging you'll hit; as such, it'll get faster as more of Foundation is converted to Swift.
In particular Swift is more memory efficient than ObjC because there's less boxing overhead for small values. That should matter more than the overhead from more overflow checking.
All the pure Swift implementations are even slower than the bridged ones, and the bridged ones are slower than the most comically inefficient Objective-C one (using KVC, for example), which is slower than the reasonable Objective-C ones.
> In particular Swift is more memory efficient than ObjC ...
It's not.
> ... because there's less boxing overhead for small values
You would think, yes. I actually did think that as well, because it really sounds plausible. So imagine my surprise when I measured some common cases for my book (Chapter 9) and it turned out that even in the cases where I just knew™ that Swift would be faster, because of this very reason, it just wasn't.
> That should matter more than the overhead from more overflow checking.
Computers don't care what you think "should" be the case, and they cannot be argued into better performance due to plausible arguments and strongly held beliefs.
Just to explain why Objective C code, in practice, is not especially fast:
You generally make the choice between writing something in pure C, which gives you plenty of performance but little safety, or using Objective-C classes and methods, which give you plenty of safety but make you pay the price for dynamic dispatch (objc_msgSend) and pointer indirection all the time.
Swift makes it easier to eliminate things like dynamic dispatch while still keeping the safety. So it should be normal and expected that a rewrite from Objective-C to Swift would result in a speedup (and an increase in code size).
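A small sketch of the dispatch options Swift gives you (invented names):

    class Dynamic {
        func cost() -> Int { 1 }   // overridable: dispatched through a vtable
    }

    final class Sealed {
        func cost() -> Int { 1 }   // final: the compiler can devirtualize and inline
    }

    struct Value {
        func cost() -> Int { 1 }   // struct method: statically dispatched
    }

    print(Dynamic().cost() + Sealed().cost() + Value().cost())  // 3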
I read the blog post, but it just seems to provide a single example of a case where some particular piece of Objective C code was faster than the equivalent piece of code in Swift. Maybe there's something I'm missing here... what should I be looking for in these blog posts?
In practice would be what the resulting apps turn out like.
In my experience, old-school ObjC iOS and Mac apps generally tended to be quite snappy.
On the other hand, the new Swift hotness of Messages and Settings on macOS are noticeably slower than what they're replacing.
Of course I haven't done a comprehensive inventory, but I certainly don't have the general impression of things getting faster with Swift. (Are there examples of this?)
My guess is any theoretical gains are generally swamped by the language culture of complex abstractions.
Somebody rewrote an app in Swift -> the app is slower.
Somebody rewrote a library in Swift -> the library is faster.
There's a lot to talk about and unpack here. My experience is that people develop a different mindset when they do library development, because any code that they write will be code that they're forced to support for decades to come. You put bad code in an app and you can simply rip it out with the next revision, no questions asked. So you are much more likely to end up with questionable code in an app than in a library.
I'm also old enough to remember not only the same complaints when people were rewriting apps into Objective C (lots of early Mac OS X apps were Carbon, including Finder), but when the same complaints were made about rewrites from assembler into C. Every decade your computer has a hundred times more computing power and your code takes 10x as much footprint, so you come out ahead, on average. It's just that any given year you might come out behind.
What are you basing that on? It seems unlikely that they'd reverse the ongoing trend of improved performance, especially at a time when they're shedding backwards compatibility concerns with the migration to ARM removing the least-maintained apps.
So Apple seems to be dropping C-like languages, Microsoft says new stuff should not be written in C-like languages anymore, and Google has steadily worked toward using Rust and Go in place of C and C++ where it makes sense to do so.
Lots of languages, including Java and Javascript, have been described as being "in the C family", but this papers over a whole lot of important details. Swift is a low-ish-level imperative-ish language with C-ish syntax.
You could just as plausibly say that Swift is a high-ish-level functional-ish language with a modern syntax. Whereas C blows up at runtime, Swift nags and whines at compile time - in that way it's very unlike C. All that said, "in the C family" is highly ambiguous at best...
When I hear that C-family languages are no longer to be utilized, I assume they mean the memory-unsafe options: C & C++.
Everything else is just as good an idea as the next. The C family is the most successful language paradigm in the history of software. Abandoning those constructs would be suicide. Nothing would get done.
As I would use the terminology, the C family of languages generally corresponds to languages that follow C's general syntactic rules, such as how you represent operators (particularly = is assignment and == is equality), curly braces for blocks, and to lesser degrees, making distinction between statement/expression, or having types precede variable names.
"C-like", by contrast, is a language whose semantics follow closely to C, and in particular, the C-with-extensions languages like C++ or Objective-C are usually going to fall into this category.
Based on the context I think by "C-like" they meant languages with a culture of dealing with raw pointers. C# is C in name and curly braces, but primarily uses managed memory.
I think the GP probably phrased that poorly. "Memory-unsafe languages" would be a better term; apart from the basic syntax, C# really isn't very C-like.
When the product of this hits the OS I predict massive instability. I don't think the Apple of 2022 has the level of systems programming talent that was behind the 1990s code in the core foundation. Today I feel like most of their attempts to move the OS forward beyond the shiny UI gimmicks (Tired of Exposé™? Try Stage Manager!™) are sloppy and only barely work (like "discoveryd" - google that if you don't remember).
If you don't believe me, quit all your non-Apple apps, and open up Console.app and hit Start streaming, and limit only to the "Errors and Faults" and watch the computer scream in logs until you're convinced. This is what modern Apple software quality looks like. I really don't want them touching anything near the core OS. Especially because when Apple software doesn't work, either you'll get no feedback, or it will say something comically vague like, that it "can't be completed."
I'd LOVE to be proven wrong here because otherwise I'll have to finally catch up on my Windows knowledge and get used to not having a GUI metakey separate from Ctrl!
This is just plain cynicism and dramatization. “Massive instability” lol as if anyone with a desktop OS of any sort in 2023 has had anything close to that kind of experience. Let me know when my scroll bar UI is lagging like it’s 2003 in Cupertino.
I’m sure Jaguar had zero errors and had all the code audited by Steve Jobs himself to make sure it had top quality.
And for real what indicator is there that Apple has low-talent employees? Do they not pay well with good benefits? Where exactly do you expect all the good systems programmers are working?
I don’t think a company with bad low level engineers would be bragging about wake from sleep times in their marketing materials.
I don’t know of any other desktop OSes that replaced their entire file system with a routine OTA update transparent to the user.
I can think of other big recent changes like how people criticized the new System Settings panel a bunch, but at least Apple didn’t take 10 years of iterative updates to not even finish the transition. Please, someone tell me why Windows 11 still has the old Control Panel when Apple replaced their entire settings panel in a single update.
As stupid as it is, in my mind the simple fact that iOS just added a “copy and delete” button to the screenshot capture UI tells me that there are still plenty of power user geeks at Apple. We shouldn’t be surprised that they happen to care about different things than what we cared about in the 90s.
We can nitpick the bugs all day but I can’t think of another desktop OS that executes on its goals as consistently as macOS, commercial or not.
However degraded the Apple UX is from the glory days, as someone who recently converted over from Windows I can assure you the UX there is garbage compared to 2022 Apple.
Ok, I can definitely agree with that. They could do a lot more with the window manager. I think there are a couple other pain points I've had along those lines (things that should be in the base system, but aren't). Stuff like, how do I reverse the scroll direction _only_ when I have a mouse plugged in? The answer "oh go install this plugin and give it full access to your mouse and keyboard" is pretty underwhelming.
But I do stand by the statement that, overall, the experience is a lot better on macOS.
Yeah, this is bullshit - no one remembers "the beachball of death". It plagued OS X 10.0 and 10.1.
Cocoa and Carbon and OS 9 emulation... it was terribly unstable. But sure, the current rock-solid, haven't-restarted-my-computer-in-6-months version is terribly unstable! Want proof? Explicitly turn on logging and watch the logs... log. xD
Apple has such an unpredictable stance on software development.
For example after all these years and lots of requests for such a feature it's still not possible to get notified about new reviews for your apps, but it's possible to get notified when a user edits a review after you've replied to it.
Apple’s style reminds me a lot of something I once read (and later experienced) about Japan and tourism:
> (roughly) Japan is a country for the Japanese. While they may welcome you and invite you to experience their country and culture, the country is very much setup to serve the Japanese populace.
Apple feels similarly. They build what they need and enough to capture the revenue they want, but I don’t perceive them to really cater to external needs beyond what will monetize.
I’m not sure about macOS, but on iOS you can install the App Store Connect app (https://apps.apple.com/app/id1234793120) and then enable notifications for new reviews from there. ([app] → Notifications → Mobile → Reviews, [your initials] → Notifications → Customer Reviews)
I was in a line once at WWDC in 2011. I happened to be in front of a group of Apple engineers. I overheard them discussing a rumored functional-ish language that Apple was to switch to from Obj-C. They didn't know what it was but they seemed skeptical that it would ever be put into real use. Glad to see it only took 11 years for it to permeate down to the Foundation framework. :)
I don't think Apple can do a full rewrite of Foundation unless it's opt-in only for Swift apps that have been ported to the new Swift-only codebase. If they are trying to opt in non-Swift apps, well, I'm glad I don't use macOS nor have to support it.
The linked article has few details on how they plan to do this, or how they plan on limiting the blast radius.
Just switched back to Windows after a decade with macOS.
It seems that Microsoft cares way more about developers at the moment than Apple does.
So much good stuff: open source, the Linux subsystem, VS Code, Copilot, documentation, plugins, etc.
Things work and are cheap and plentiful.
Besides the sleek and high-quality hardware, nothing is missing.
Learning iOS (Swift) could be a gateway to the next-generation of platforms: carOS, realityOS (AR/VR). Or both of these efforts could be complete duds!
Objective-C was/is an awful programming language, one of the worst I ever had the displeasure to work with.
But at least they relied on the LLVM toolchain and the NeXTStep heritage, so it was possible to use clang and many C APIs, and a few wrappers around the horrible Obj-C APIs.
With Swift I don't think this will be as practical, if even possible.
They are so rich and powerful that they don't care. And most devs on their platforms are happy to drink the kool-aid.
This will take time, because they have relatively strong foundations and deep pockets, but they are going to slowly sink their ship; what they are building these days is technically unsound, and pretty soon their engineering culture will have completely changed.
> A reimplementation of Calendar in Swift is 1.5x to 18x as fast as the C one (calling from Swift in various synthetic benchmarks like creation, date calculation).
These sorts of statements in the industry always make me ask, "So did you guys make something terribly slow first, and then bring performance back close to where it originally was?"
I'm not sure if the current Calendar app was ported to Swift or was originally Objective-C based, and whether such efforts to move Foundation to Swift will just... make Calendar fast again?
Also, Calendar doesn't seem to be particularly slow in the first place. But I love performance engineering, and quality of life is big, so... hey cool.
> "So did you guys make something terribly slow first, and then bring performance back close to where it originally was?"
In this case, the answer is basically "yes". Objective-C is a neat language and really powerful in terms of flexibility and extensibility. But it gets that by being essentially dynamically typed. It's Smalltalk duct-taped onto C.
Dynamically typed languages are much slower unless you do lots of very powerful JIT magic, and even then they still tend to be quite a bit slower than most statically typed compiled languages.
Building Swift on top of an Objective-C core library makes a lot of sense in terms of getting Swift adoption when Swift was new. But in terms of performance, it's sort of like building a concrete bunker on top of a straw hut. You really want the bottom of your stack to be the faster, statically typed language. Then you can layer dynamic scripting languages on top for the users who want it.
“A reimplementation of Calendar in Swift is 1.5x to 18x as fast as the C one (calling from Swift in various synthetic benchmarks like creation, date calculation).”
I’m sure it is, but this is not solving the right problem at all. Calendar is all UI. It needs to be lovingly gone over, not this.
The reimplementation itself is not 1.5x to 18x as fast as the C one. It is calling from Swift that is faster.
Yes, they correct it in the parenthesis, but that's not how parenthetical expressions work:
A parenthetical expression is extra information added to a sentence or question that clarifies, explains, or adds information without changing the basic meaning. Think of it as an aside providing readers with helpful information that they don’t absolutely have to have, but that is helpful to them.
So basically, they are fixing the problem they've been having with bridging performance, which of course is due purely to them mis-designing Swift in such a way that it doesn't bridge well with all the existing code they have.
No, I think you are misunderstanding the parenthetical claim. They are still benchmarking the actual reimplementation, they are just parenthetically pointing out that the reimplementations are being called from Swift, not Obj-C.
The reimplementation itself is 1.5x to 18x. They are using the parenthetical in precisely the way the style and usage guide suggests, adding the additional information that the new implementation is called from Swift.