Super cool! I absolutely adore Swift; it has so many simple features I love that I'm sometimes surprised to find are missing in other languages. (Ex off the top of my head: switching on enums with associated values is super powerful in Swift, and I was surprised Dart was completely missing that when I tried Flutter a while back.)
Ex:
enum MyEnum<T> {
    case one
    case two(twosChild: Child)
    case three(threesGenericChild: T)
    case four(foursCallbackChild: () -> Void)
}
You can then pass this enum around and switch on it, with compiler-guaranteed completeness everywhere in your codebase. It seems so freaking simple, but it's incredibly useful.
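Concretely, with a stub Child type to make the sketch compile, the exhaustive switch looks like this; dropping any case is a compile error:

```swift
struct Child { let name: String }

enum MyEnum<T> {
    case one
    case two(twosChild: Child)
    case three(threesGenericChild: T)
    case four(foursCallbackChild: () -> Void)
}

// Remove any case below and the compiler rejects the switch.
func describe(_ value: MyEnum<Int>) -> String {
    switch value {
    case .one:
        return "one"
    case .two(let child):
        return "two: \(child.name)"
    case .three(let generic):
        return "three: \(generic)"
    case .four(let callback):
        callback()
        return "four"
    }
}
```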
As others have pointed out, that is actually a stunted and quite limited version of pattern matching that existed in other languages for years and even decades before Swift.
F#, OCaml, and Standard ML are the primary examples. It's surprising to me how ignored these languages are.
I don't really understand the point of pointing this out. The GP already says that it's a simple feature that they're also surprised isn't available in more languages - nowhere did they claim that Apple invented it or that it's a super innovative thing or anything.
> it has so many simple features I love that I am sometimes surprised to find are missing in other languages
and a few other things that led me to believe they were unaware of discriminated unions. There was an implication that this was a unique or novel feature in Swift. It's like someone being excited about Python's pattern matching, thinking Python came up with it and wishing other languages had it when it's a watered down, less useful version of that found in languages it ignored for 30 years.
I don't know, personally I think it is an immense leap of logic to look at someone's surprise that a simple feature is missing in [another] language and conclude that they're saying the feature is unique or novel.
GP is probably talking about other mainstream languages, or languages that they regularly use such as JavaScript/TypeScript. It makes no sense to interpret this as "are missing in every other language ever invented."
You're right that plenty of people do ascribe Apple credit for inventions that they didn't invent, but there isn't enough context/detail here to conclude that has happened in this case.
We're in the weeds here, but I actually think it was the other commenter responding to me that was under that impression. I didn't interpret the original comment as saying it wasn't in any other language ever. There are all sorts of niche languages that do things in very interesting and novel ways. I just consider F#, OCaml, and Standard ML as mainstream-ish languages. They're just poorly known, despite their influences on languages, although those influences are still not as strong as they should be.
I know this sounds very Emacs-fanatic of me, but it does get a bit tiring in the software world to have spent a lot of time in tool and language discovery only to be forced to work in a language that feels like a step backwards, where people suddenly get excited or finally acknowledge the benefits of language designs that came before them. Really, this primarily reflects on Python, which I am currently having to use, and it feels like several step-functions down in capability, robustness, cleanness, and several other qualitative and quantitative factors compared to something like F# and OCaml. There's a huge amount of "let's ignore other languages but then ad-hoc reinvent what they do in a mudball type of fashion". See Python's concurrency story for a particularly egregious example. I am constantly bitten by things in Python that I'm honestly aware of no other language doing.
So it's probably fair to say I read in a bit to the above comment. I'm happy either way that people are appreciating ML-like features.
I could not agree more! Python in particular is so painful to work in, and the insult added to injury by other Python people definitely gets under my skin. I think one of the most disappointing realizations I've had is the truth in "the best product doesn't win," which is especially apparent when it comes to programming languages.
I've been doing Elixir for several years and have tried to get others excited about it, and the frustrating chicken and egg problem of "there's not a good pool of programmers to hire or ecosystem of libraries" often seems to kill it in its tracks. Especially painful when it comes to machine learning where Python has de-facto won.
Rust has a borrow checker. I think it is unique/novel, and I definitely don't think it is simple. In fact, it's precisely because I think it is unique/novel that I very much am not surprised that it doesn't exist in other languages that I use. Why on earth would I be surprised that a feature I think was just invented isn't yet present in other things? That's not how the passage of time works.
On the other hand, Rust also has first-class functions including closures. I do think that closures are a simple programming language feature, and I'm definitely surprised/put off when I use a language that doesn't have them (though these days I'm not sure there are any mainstream ones, now that Java and C++ both have them in the spec). Again it's precisely because I think of closures as the opposite of unique/novel that I am surprised at their absence. Why would I be surprised that a language does not have closures if I thought the Rust team just made them up?
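For what it's worth, this is all I mean by closures being "simple", sketched in Swift (a toy counter):

```swift
// makeCounter returns a closure that captures `count`,
// so each call sees the state left by the previous call.
func makeCounter() -> () -> Int {
    var count = 0
    return {
        count += 1
        return count
    }
}

let next = makeCounter()
print(next())  // 1
print(next())  // 2
```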
So yes, I have no idea how one would conclude that someone's surprise at a simple feature being unavailable indicates that they think the feature is unique or novel, other than just being a knee-jerk reaction.
[This is a reference to Ignaz Semmelweis, a Hungarian medic who wanted doctors to wash their hands properly based on his observations of the difference between the outcomes for women giving birth who were cared for by midwives (who don't work with corpses) and medical students (who do). This is very slightly before modern germ theory is established, so the doctors who reject Semmelweis' suggestions aren't strictly rejecting known facts about the world - Semmelweis can't explain why it's important to do this, only that it will work (which it does).]
Thanks for explaining the reference! I was wondering about that as I had never heard that turn of phrase before. I looked him up and apparently he was put into an asylum due in part to his recommendations and the rejection of them, and subsequently beaten to death while committed. An absolutely brutal example of being persecuted for being right.
F# is my “primary” programming language, I work with it daily and absolutely love it. That said, it doesn’t surprise me that people ignore it or don’t know about it given Microsoft’s own lack of attention and focus on it. Half the time it seems new dotnet features aren’t even compatible with F#.
The day C# gets discriminated union types and true exhaustive pattern matching is the day I’ll have to think long and hard about how much more time I want to put into working with my beloved F#.
I strongly believe (waves hands while making incoherent small brained primate noises) that discriminated union types would be fabo for C as well.
Example of why it would be sweet: the compiler could spit out a warning if it's not sure the correct union member is being used. Say the pet union has dog and cat. Cat has respawn initially set to 9; dog doesn't have respawn. If you try to set respawn without checking that it's a cat, the compiler whines about it.
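To make that concrete, here's the pet example sketched as a Swift enum (names invented); this is roughly the guarantee one would hope a C compiler could give:

```swift
enum Pet {
    case cat(respawn: Int)  // cats start at 9
    case dog
}

// The respawn field only exists inside the .cat case, so
// "set respawn on a dog" isn't even expressible here.
func spendRespawn(_ pet: Pet) -> Pet {
    switch pet {
    case .cat(let respawn):
        return .cat(respawn: max(0, respawn - 1))
    case .dog:
        return pet
    }
}
```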
Funnily enough, I sort of had the opposite journey: Swift seems horribly complicated and over-specified. The Dart compiler knows that if I do a null check, it can treat following references as safe, instead of `if let x = x` etc.
And it just hasn't lived up to any of the initial starry-eyed hype I had for it and expectations for a wider Swift ecosystem...
Like, here, it's great it runs on Raspberry Pi Pico but...it can't render UI unless someone writes a new UI library from scratch. And whoever does that inevitably will have to copy SwiftUI directly or alienate the people who actually know Swift.
> And whoever does that inevitably will have to copy SwiftUI directly or alienate the people who actually know Swift.
I may be wrong, but I don’t think this is actually true. SwiftUI isn’t universally loved in the community, largely thanks to its rough edges and how it struggles with that last 10% of polish in projects written with it. While copying SwiftUI wouldn’t necessarily be wrong per se, you’d actually probably get more rapid buy-in by copying “boring” old imperative UIKit, as that’s what most writers of Swift are using.
s/SwiftUI/UIKit, doesn't matter. (though I'm happy to have someone else point out SwiftUI has...some growing pains...it would have felt unfair for me to do so)
Point is it's DOA for UI on non-Apple platforms.
Either you:
- attract current Swift programmers and keep up with $APPLE_UI_FRAMEWORK with no assistance.*
- attract no one and make a brand new framework in Swift.
- bind to some other UI framework, alienate current Swift programmers**, get all the people who have been dying to write Swift yet didn't build a UI framework for 7 years
There's just no universe where using Swift to write UI on non-Apple platforms makes sense to any constituency. You have to be simultaneously obsessed with Swift and not care about platform-first UI, or want to maintain two separate UI frontends.
I hate being that absolutist, but that is what 16 years of experience on Apple platforms has taught me.
I'm surprised by the # of people arguing there's a possibility something else could work. I guess what I'd say is: if there is, where is it? Swift was on Linux eons ago; I believe before I started at Google, so 7+ years.
* People used to try this to do iOS/OSX cross-platform, those projects are dead. One could argue this is just because Catalyst exists. Sure, but, let's just say you need at least N / 2 engineers, where N = the number of engineers Apple has on an arbitrary UI framework, in order to keep up with it. That's a lot.
** I've been in Apple dev since iPhone OS 2.0, and while in theory this could work, in practice it doesn't. Ex. Qt won't appeal to Apple programmers. It's antithetical to community values.
> Swift seems horribly complicated and over-specified. The Dart compiler knows if I do a null-check, it can treat following references as safe, instead of if let x = x etc.
`if let x = x` is an alternative to a null check, it's not as if now you need both a null check and an `if let`. What makes an `if let` more complicated?
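Side by side with a toy optional, the two spellings are about the same weight:

```swift
let x: Int? = 5

// Plain nil check: x stays optional inside the branch,
// so you still need a force-unwrap (or ?.) to use it.
if x != nil {
    print(x! + 1)  // 6
}

// if let: binds a non-optional copy for the scope of the branch.
if let x = x {
    print(x + 1)   // 6
}
```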
A small change made in Swift means an unwrap can just be
`if let x {
}`
Which, while small, is great if you do it all the time.
Swift, though, made me question how much I use optionals. Now I get annoyed if they're overused; it usually means someone down the line has not made a decision. They propagate through the program, and eventually someone wonders why they didn't get a value.
I like the Result type and the flexibility of the enums a lot for this.
If you return an optional with no reason, it colours all subsequent calls.
When I was younger, I thought of them as an extension of bool: yes/no/maybe. But now: was it cancelled? Did an error occur? Did it just time out? How many CPU cycles am I wasting throughout my code because everything down the line checks it for null? It snowballs.
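That's where Result shines over a bare optional: the "why" travels with the failure. A sketch with made-up error cases:

```swift
enum FetchError: Error {
    case cancelled
    case timedOut
}

// Returning Result instead of Optional forces callers to handle
// *why* there's no value, not just that there isn't one.
func fetchValue(failingWith error: FetchError?) -> Result<Int, FetchError> {
    if let error { return .failure(error) }
    return .success(42)
}

switch fetchValue(failingWith: .timedOut) {
case .success(let value): print("got \(value)")
case .failure(.cancelled): print("cancelled")
case .failure(.timedOut):  print("timed out")
}
// prints "timed out"
```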
It's not the same op though. One is a check, the other creates an in-context copy that's unwrapped from the optional. The compiler of course optimizes this out but it's not the same.
They have different use cases.
if x != nil is to do some work without needing x, because you'd still have to unwrap it if you used it.
That's what if let y = x is for, it creates a safe lexical context to use the unwrapped value.
And then you have guard let y = x which is a stronger guarantee that no code further down in the context is executed if x is nil. This helps to write happy-path code with early returns, avoiding pyramids of doom and encouraging writing code where all the assumptions are listed up front with logic handling exceptional cases before finally arriving at the logic that works under all the assumptions.
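A toy sketch of that happy-path shape (findUser is a hypothetical lookup):

```swift
struct User { let name: String }

// Hypothetical lookup that may come up empty.
func findUser(id: Int) -> User? {
    id == 1 ? User(name: "Ada") : nil
}

func greet(id: Int) -> String {
    // The assumption is handled up front; everything
    // after the guard sees a non-optional User.
    guard let user = findUser(id: id) else {
        return "no such user"
    }
    return "hello, \(user.name)"
}
```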
Dart feels like a step backwards after seeing the benefits these provide.
But there’s also no harm in combining them. I agree with the OP; TypeScript does what they describe Dart doing, and I find it much simpler than the way Swift does it. It’s never confusing.
Disagree. I’ve seen people write things like the following:
if x != nil { x!.foo() }
and then later the x != nil condition is changed to no longer check for nil, or the x!.foo() is moved out of that lexical scope, or copy-pasted elsewhere, and then the force unwrap crashes. I’ve seen similar yet more subtle bugs with soft unwraps instead of forced: x?.foo()
Not possible if you use if let y = x. If you copy-paste the inner scope with the different variable name then it won’t compile.
(ofc these are trivial representative examples, in reality these were lexical scopes with many lines and more complicated code)
It’s not just about how the code looks when you write it, it’s the paths that can be taken by future maintenance programmers.
I've complained about the large surface area of Swift syntax, stdlib, etc. in the past, but I don’t personally find this specific thing confusing.
Changing the type of something based on a conditional check is not great, because if I wanted a non-optional I would request it. Sometimes a nil check is literally just a nil check. What if I want to set the value back to nil if it is not-nil? Why should I have to wrap things back in an optional if I want to pass it along as such?
The reason why languages promote variable types based on control flow is because developers en masse actually expect that to happen, e.g. facing the code like
Dog? maybeDog;
if (maybeDog != null) {
  maybeDog.bark();
}
If the compiler says "sorry, maybeDog might be null", the developer (rightfully so) usually responds "but I have just checked and I know it is not, why do you bother me?". So languages chose to accommodate this.
> What if I want to set the value back to nil if it is not-nil?
You can. The type of the variable does not actually change; the information about the more precise type is simply propagated to the uses which are guarded by the control flow, and assigning nil back compiles just fine.
See, I don't think languages should accommodate this, because I see it as an ugly solution. It's nice that it works in a few cases, but then it very quickly breaks down: a developer finds that their null check is enough to strip an optional, but a test against zero doesn't convert their signed integer to unsigned. Checking that a collection has elements in it doesn't magically turn it into a "NonEmptyCollection" with guaranteed first and last elements. I'm all for compilers getting smarter to help do what programmers expect, but when they can't do a very good job I don't find the tradeoff to be very appealing. Basically, I think pattern matching is a much better solution to this problem, rather than repurposing syntax that technically means something else (even though 90% of the time people who reach for it mean to do the other thing).
Also, fwiw, I was mostly talking about things like the last example you gave. I guess it would be possible that in circumstances where T is invalid but T? would be valid, the language actually silently undoes the refinement to make that code work. However, I am not sure this is actually a positive, and it doesn't help with the ambiguous cases anyway.
This isn't the same thing as "magically" changing the type.
What does the syntax technically mean?
What refinement is undone?
I think you're a bit too wedded to the idea that there's a type conversion going on or some dark magic or something behind the scenes that's "done" then "undone" like a mechanism. There isn't. It's just static analysis. Same as:
let x: Animal;
x = Dog()
if (x is Dog) {
  x.bark()
}
The Zen koan you want to ponder on your end is: why do you want to A) eliminate polymorphism from OOP, and B) remove the safety of a compiler error if the code is changed to x = Cat()?
I don’t like that either. x is Dog is a boolean expression. Presumably I can write
let isDog = x is Dog;
And the value whether it is a dog or not goes into that new variable. The fact that I can’t then immediately go
if (isDog) {
  x.bark()
}
shows the deficiencies of this static analysis (even if you could do some simple value tracking to make this work, it doesn’t really address the real issue I have with it, which I described above: why can’t I do this refinement for other things?) The conceptual model is one of “we special cased 2-3 cases that we can detect”, which I don’t find very elegant, especially considering that other languages seem to have better solutions that express the operation in what I feel is a better way.
(The equivalent Swift code would be something like this:
let x = Animal()
if let dog = x as? Dog {
    dog.bark()
}
I see this as much superior. One, because x is still available in the scope if I want an Animal and not a Dog for whatever reason. I understand that most of the time you don’t do this, but to have it available and not have to do a special case for it is nice. The second thing is that I just like the syntax, since it gives you an opportunity to give a more specific name in that branch as part of the assignment. Third, it composes, because none of the operations are special save for the “if let” binding. This code is also valid:
let x = Animal()
let dog /* : Dog? */ = x as? Dog
if let dog /* = dog */ {
}
The static analysis for undoing polymorphism is the exact same as the one for binding to optionals, because the idiomatic way to cast results in an optional.)
Who said you can't? :) This actually works in Dart:
Dog? dog;
bool isDog = dog is Dog;
if (isDog) {
  dog.bark();
}
i.e. boolean variables which serve as witnesses for type judgements are integrated into promotion machinery.
I do agree with you that
1. In general there is no limit to developers' expectations with respect to promotion, but there is a limit to what a compiler can reasonably achieve. At some point the rules become too complex for developers to understand and rely upon.
2. Creating local variables when you need to refine the type is conceptually simpler and more explicit, both for language designers, language developers and language users.
I don't hate excessive typing, especially where it helps to keep things simple and readable, so I would not be against a language that does not do any control-flow-based variable type promotion. Alas, many developers don't share this view of the world and are vehemently opposed to typing anything they believe the compiler can already infer.
Kotlin can't do this, which is the language I have more experience with. It's good that Dart does a little better in that regard. And I think we do agree; I just don't really like the viewpoint of doing this, because I feel like it's not really general enough.
I don't think any of that has to do anything with static analysis deficiencies. The analysis is the same. What Swift requires is for the user to manually help the compiler with a rather verbose and otherwise unnecessary construct.
It’s not a new idea, but I don’t think it’s a good idea. That’s just it. Swift has a construct to do the equivalent and it’s fewer characters to boot when compared to Dart when you’re going from an optional to non-optional type.
But that's exactly what Swift does: it narrows the types. To do that, it uses a separate construct that only exists to make the parser ever so slightly simpler.
The ugly solution is Swift's if let x = x which is literally the same check, but in a more verbose manner.
Yes, compilers should be able to help in this and many other cases, and not just give up and force the programmer to do all the unnecessary manual work.
It’s not a check, it’s binding a new variable with a different type (which may or may not have the same name; if it does you can use if let x these days). And solving the other examples I mentioned is generally out of the realm of most programming languages in use today (except maybe TypeScript?) It comes with significant challenges that I’m not sure we are ready to address properly yet.
> it can't render UI unless someone writes a new UI library from scratch
This is true for literally every programming language. You can’t render a UI, in general, without a target compatible UI library.
SwiftUI and UIKit are specific Apple things built on top of Core Animation and completely separate from the Swift language. And Swift will happily link to C and somewhat to C++, so it's not at all evident why one would copy Apple’s SDK to render UI on a completely separate platform and environment.
> The Dart compiler knows if I do a null-check, it can treat following references as safe, instead of if let x = x etc.
Does Dart bind a new variable 'x' for the if-statement scope or does the if-statement scope refer to the same 'x' from the outer scope?
The reason Swift and other languages bind a new 'x' variable (if let x = x) is because the original 'x' can be assigned a different value elsewhere, like in another thread or a closure. If a closure captures and assigns null to 'x' and there is only one binding of 'x', then that means the 'x' within the if-statement will become null _post null-check_ which is type unsound. I believe TypeScript has this "problem" as well.
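A toy illustration of why the fresh binding stays sound even when a capture mutates the original:

```swift
var shared: Int? = 1
let clear = { shared = nil }  // captures and mutates the outer variable

if let shared = shared {
    clear()            // the outer `shared` is nil from here on...
    print(shared + 1)  // ...but this bound copy is still a plain Int: 2
}
```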
1) Dart doesn't have threads, it has isolates (where memory is not shared). So no variable modifications here.
2) Dart will promote a variable as not null after a non-null check only if it cannot be modified elsewhere, i.e., the variable is final (can't be modified once assigned), or there's no other reference to it.
To clarify, in the above scenario in Dart I can have an input of type MyEnum and do this?
switch myEnum {
case .one:
    // react to this here
case .two(let twosChild):
    // call an API passing in a child element of twosChild here
case .three(let threesGenericChild):
    // T: Protocol, and call an extension function threesGenericChild.extension() here?
case .four(let foursCallbackChild):
    // Pass the foursCallbackChild callback into another context, and execute an escaping closure if needed?
}
Last I checked Dart required a nested if/else tree and didn't have compiler-guarantees of switch completeness like the above.
That was correct; Dart gained enum exhaustiveness checks, associated types, and switch expressions 11-13 months ago: https://cfdevelop.medium.com/dart-switch-expressions-33145c3... (keeping it short and simple because I'm at -2 on the original post, don't want to look like I'm argumentative)
In Dart you use class hierarchies instead, rather than enums (which in Dart are a way to define a compile-time constant set of values). So the original example becomes:
sealed class MySomething<T> {
}
final class One extends MySomething<Never> {
}
final class Two extends MySomething<Never> {
  final Child child;
  Two(this.child);
}
final class Three<T> extends MySomething<T> {
  final T value;
  Three(this.value);
}
final class Four extends MySomething<Never> {
  final int Function() callback;
  Four(this.callback);
}
And then you can exhaustively switch over values of MySomething with a switch expression.
The declaration of a sealed family is considerably more verbose than Swift's, and I really would like[1] us to optimize things for the case where people declare such families often. Macros and primary constructors will likely provide reprieve from this verbosity.
But importantly it composes well with Dart's OOPy nature to achieve things which are not possible in Swift: you can mix and match pattern matching and classical virtual dispatch as you see fit. This, for example, means you can attach different virtual behavior to every member of the sealed class family.
I miss Swift enums when writing Kotlin. Kotlin enums are more restricted in that associated values can’t differ between cases, which limits their usefulness and means that much of the time what you want are instead sealed classes.
There are actually several bits of Kotlin like this which feel somewhat pedantic/idiosyncratic and fussy if you’re used to writing Swift. I’m sure there are valid design reasons behind these decisions, but sometimes it feels like unnecessary friction.
Huh. I was thinking about this in much lower-level terms. It seems weird to me to define these structures recursively when how they're actually stored is thus omitted, yet that's actually crucial. But I guess if you work in a sufficiently high-level system you don't care about the implementation details.
Probably; I didn't really check it, but I found [1]. Rust has a lot of support for embedded systems, even from the companies that provide the chips, like Espressif.
Absolutely. There have been a lot of discussions here on HN about embedded Rust in the past, so you might want to search those on Algolia to catch up, but look at embedded-hal, Embassy, or Drone OS as starting points.
Swift is really an underrated language mainly due to the Apple baggage.
It hits the sweet spot between Rust and C++ due to its sane design choices, like structured concurrency, or simple things like tagged enums, Result<T, E>, etc.
I wish it would gain more popularity outside Apple land and be discussed more, like Zig or Rust.
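The structured concurrency bit is easy to show: async let starts child tasks that cannot outlive the function's scope (toy example; compute is made up):

```swift
func compute(_ n: Int) async -> Int {
    n * 10
}

// Both child tasks start concurrently, and the function only
// returns once both have finished; neither can leak past it.
func fetchBoth() async -> (Int, Int) {
    async let a = compute(1)
    async let b = compute(2)
    return await (a, b)
}

let (a, b) = await fetchBoth()
print(a + b)  // 30
```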
Having written a bit of Swift code on macOS and a bit for iOS, I agree. This language has a very nice flow to it which should appeal to developers. It's hard to explain, but somehow the language is a joy to work in (Xcode also helps with that experience, of course).
If I could use a minimized Swift on embedded, I'd be very happy. Right now I'm just doing C for ESP32 and that sort of thing.
Yeah, I was wondering the same! I fondly remember coding Swift almost 10 years ago, but a large part of that was the good integration with Xcode, I believe. Seems like they have an OSS LSP, so that experience probably translates to other IDEs nowadays.
It looks like a nice language, but my concern would be that Apple might decide not to support it on other platforms at some point. Just too dependent on a single company.
Are you willing? Do you want a certain feature? There's nothing stopping you from making it yourself.
A company literally makes an entire project open-source (and with major contributions from people outside that company) and you still say it's pRoPrIeTaRy??
Have you submitted any concrete requests or proposals? You want other people to magically guess exactly what you want AND spend their time and effort to make it for you, for free?
It scares me how many people don't realize this. A language is only as good as the things it allows you to build. Swift can be a great language from a PLT perspective, but it is pretty much only good at making Apple UIs. Anything else and you are making some important compromises you might not be aware of.
What important compromises are being made that aren't "libraries may not exist", which is true of any of the equally niche languages that get a lot more love because they aren't associated with Apple? Even the severely limited ecosystem is mitigated by the language having excellent C and decent-enough C++ interop.
That theory works well in a vacuum, not when there are languages with similar features and mature ecosystems to choose from.
Microsoft is discovering how hard it is to take .NET outside Windows, the fierce competition of UNIX first programming language ecosystems, and Apple is not doing with Swift half as much as Microsoft is trying to.
The sentiment I was responding to was that Swift is only good for making Apple UIs, which is patently false.
Again, newer languages having less of an ecosystem than older languages is not news, or somehow unique to Apple (or Microsoft for that matter). The existence of more mature languages than whichever one you're considering is not news; if it was, then nobody would start any project in anything other than like three languages. And as I mentioned, there is a gigantic ecosystem that Swift and several others have access to by design, which for some reason people refuse to acknowledge whenever they shut down newer languages with "but no ecosystem".
> Microsoft is discovering how hard it is to take .NET outside Windows
Ten years ago you would have had a point. However it's 2024 and .NET is solid, stable, and widely deployed on Linux et al - and that's without taking projects like Unity into account. That's far from only working well in a vacuum.
As someone that does polyglot development across Windows and UNIX-like systems, I am pretty well aware of how well .NET adoption is going in UNIX shops.
It doesn't matter how solid and stable it happens to be; those widely-deployed-on-Linux workloads are mostly from Microsoft enterprise shops cutting down on server costs, not startups filled with macOS, FreeBSD, and GNU/Linux desktops deciding that C# would be a great language to use at their startup.
Unity is a special snowflake, they have their own .NET infrastructure, which is also a reason why you can't just use .NET vlatest on Unity.
Going back to Swift, so far it hasn't shown to be any different than open source Objective-C, and Objective-C at least has GNUStep outside NeXT/Apple.
The reason they don't pick it is that they are just as illiterate as their Microsoft-shop counterparts.
I have colleagues who worked in a "macOS + Rider and Linux Hosts" style teams for a long time, and had great experience, but when they talk about this to a wider audience they get a similar treatment to what another commenter described when dealing with an architect from Walmart. People only ever listen to what confirms their beliefs and at best ignore when being exposed to something different, or sometimes react in an outright hostile manner.
Microsoft has a lot of baggage they’re dragging around, though.
Getting started on C# and Windows App SDK sucks a lot for example, because everything you google about them returns boatloads of results for myriad incarnations of C#, .NET, and XAML-based UI frameworks which makes figuring it all out somewhat painful.
That’s certainly a valid weakness, but at least getting up and running on Swift’s most native platforms is straightforward. One wouldn’t expect C# and Windows App SDK on Windows to be as much of a struggle as it is.
I don't get why you're so down on WinAppSDK, something that is largely ignored by .NET developers, and is really only used by Windows team or those poor souls that still believe WinRT has a future.
ASP.NET, EF, Avalonia, Uno, Akka.NET, Silk.NET, Unity, Godot and tons of other software packages work just fine on macOS and Linux, with JetBrains Rider and VSCode/C# Dev Kit/Ionide.
I like to develop in platforms’ “preferred” frameworks when possible and am not really interested in solutions trying to be a “silver bullet”. In the case of Windows, I find the design language of Win11 actually pretty nice and would like to have that, which involves using WinUI/App SDK. There are alternatives, such as writing an app with some other framework using a third-party theme, but that can instantly become outdated with a system update, or comes with weirdness like how some of the older Windows C#/XAML stuff gets all stuttery on displays running refresh rates over 60Hz.
That has nothing to do with cross-platform .NET, and as mentioned, only Redmond cares about it, and not even everyone there: the Office team doesn't want anything to do with it, preferring to ship Web tech on Windows.
And despite all of this, Swift is a toy on Linux compared with .NET ecosystem on Linux.
The two are completely separate. Getting C# to build and run only needs executing 'winget install dotnet-sdk-8' on windows or 'sudo apt-get install -y dotnet-sdk-8.0' on e.g. Ubuntu.
Couldn't agree more! I find it to be a very elegant language and a real treat to program in. I do mostly Java for work, and wish Swift had the open source ecosystem that Java has. There are so many support libraries and frameworks in the Java world that make it easy to develop complex applications. I wish Swift could say the same.
In my experience it has a solid 90+% of the semantic expressiveness - ADTs, advanced pattern matching, if statements and similar being expressions, traits (split into "protocols" and "extensions" in Swift), automatic reference counting, a well-defined ownership model (albeit not quite as strict as Rust's), constructs to prevent data races, etc. It lacks things like explicit control over lifetimes, and it also defaults to copy-on-write rather than to move semantics - that isn't unsafe, but it does lead to beginners writing programs that compile fine but behave in ways that surprise them. It also doesn't expose threads directly (similar to Go only having goroutines), but the concurrency model is quite nice to use. The language is obviously opinionated about its async runtime (Grand Central Dispatch, which is also available on Windows and Linux), so you don't have to wrangle with that yourself, though of course you lose the power and flexibility of bringing your own.
On the flip side, Swift provides a lot of syntactic sugar that makes it feel faster/smoother to engage with than Rust. For example, enum literals don't have to specify the enum's name wherever their type can be inferred by the compiler. So e.g. when you are pattern matching on an enum value, the match arm `MyEnum::Red => {}` in Rust would just be `case .Red:` in Swift. It isn't much on its own, but each tiny QoL thing like this adds up.
I think Swift’s syntax also makes it more approachable for people coming from other languages. Rust, by contrast, looks rather intimidating to anybody without a background in C++.
Thanks for the feedback here -- this definitely makes sense. Interested in the copy-on-write vs move semantics... I'm so used to it now that I almost prefer the harder/more manual control/checking that Rust provides, but I definitely see the ergonomics of going another way.
I find Rust a bit easier for me to read, quite preference-driven as I prefer snake_case to camelCase but that's a tiny thing.
And weirdly enough I think I actually prefer `MyEnum::Red => {}`, but maybe that's just stockholm syndrome.
While I do appreciate macros and use them sparingly, I find the best case for them is doing things like generating to/from <protocol> (e.g. JSON) implementations.
In Haskell, there's a fully baked system for derivation where you can set up implementations that walk the type trees, but most other languages don't have that -- I find macros to be a great slightly lower complexity way to do this kind of thing.
What is Swift's answer in production? Does everyone just write extensions for their types? Rust's serde is a joy to use and 99% of the time "just works", despite the macro magic.
> Type system function resolution
I'm not sure this is a real problem for me or most people, but taken with the point about polymorphism (I guess this point is basically about late binding), I see the benefit here.
> Protocols vs Traits as implemented
Generic methods being allowed in Swift is quite nice.
> Sugar
Great points in there -- some of them are style/preference but I can definitely see benefits to a bunch of them.
The string interpolation point you might want to update (Rust does that now!)
Named arguments are also quite amazing.
Again, fantastic list -- saved! I've been meaning to give Swift a proper try lately and this is a great reference.
For the record "No macros", is one of the things that is out-of-date. There is some discussion about it in the comments... maybe I'll be forced to actually update it :)
After having written a few of them at this point at $DAYJOB and on my own... They're actually not so bad at all -- it's quite nice the world that they open up.
I'd venture to say most people are mostly annoyed at the overuse of macros (where a simple function would do).
Rust's attribute system is something it REALLY got right -- stuff like #[cfg(test)] and the conditional compilation stuff is really really impressive. Not sure what cross platform dev looks like in Swift but the bar is quite high in Rust land.
Yeah I played around with it a bit and I really like it, but I've never really had a reason to use it professionally since I'm mostly targeting Linux and, as you say, it's definitely seen as being a bit exclusive to Apple land.
Can't complain though my current daily driver is Scala and that's also a fantastic language to work with.
How usable/useful is Swift on not-Apple platforms in general nowadays? I've been giving it a wide berth since I got the impression that other platforms were more of an afterthought, but maybe things have improved when I wasn't looking.
The automatic C/C++ bridging got way better in 5.9, so building an app integrating native libraries is pretty easy (~ 1 day to start iterating).
As for Swift libraries, the stdlib is fully ported, Foundation can be spotty, and the new Foundation-Essentials is small but growing. Swift-System has portable low-level path handling and such, and Shwift helps with fully async scripting.
As for supported platforms, Ubuntu seems robust, while AWS is usable but marginal. Windows has some hurdles - technical, legal, and incentive-related - but VSCode support seems relatively good for non-GUI stuff outside the Windows APIs.
Embedded has been landing in the main compiler and pushed by the Swift team lead. They're still issue-spotting, planning how (much) to live without a heap and the runtime metadata used for protocols, but the number of issues seems manageable for the intensity of effort, even though it's not Apple's priority.
With Swift 6.0, a breaking release, up next (in September?), it's likely embedded features could be both driven by the desire to make any breaking changes now, and blocked by other priorities.
One addition: Swift WASM support has recently become stable enough to be [included in CI](https://forums.swift.org/t/stdlib-and-runtime-tests-for-wasm...). The folks working on that have done a great job and I’m very excited for that to get more fleshed out!
In this (and highlighted in a comment) there's a quote:
> Swift benefits greatly from being an embedded part of Apple's ecosystem, but it also provides challenges for its organic growth as there's a lot of dependency on Apple for capabilities that the server ecosystem requires. For example, almost all Swift developers use Apple's Xcode as their IDE and it has fantastic support for developing for iOS devices, including the ability to run locally in emulator environments. It would be great to add support for writing server code that allows developers to develop directly into containerized environments from their local IDE by supporting the simple integration into Xcode of development tools such as Appsody. Where there is open governance and open ecosystems, it enables the community to contribute and to solve the problems and use cases that are important to them.
If you're a vim user, you're gonna find vscode close enough that it almost feels like home, but far enough that it's frustrating. I finally set up coc and have been quite happy.
I’ve been playing around with Swift on Windows, building a small raylib-based game. The only raylib bindings out there are pretty outdated so I had to write my own, with little to no experience with either C or Swift. It was way easier to get that set up than I expected it to be!
The language server seems a bit clunky on Windows, I’ve only used Swift with Xcode on macOS so I don’t know if that’s platform specific.
It's pretty usable on Linux, the odd package needs a patch, but it's pretty smooth sailing. Of course, if you're trying to write an application with a UI, YMMV (although swift-cross-ui looks promising).
I’m using Swift on the server with Vapor and it’s really working great. Non-Apple platform support has been greatly enhanced over time and is still improving.
The Arc browser borrows very little UI or tab/account syncing code from Chromium. The non-V8+Blink bits are almost entirely written in Swift as far as I know.
This means it probably has the most Windows Swift code of any app out there and that’s why I mentioned it.
Yes. Ignoring the engine, there’s a lot of Swift in how they build the UI components, sync tabs, etc. They’ve built something similar to SwiftUI, but it uses WinUI/WPF (or whatever it’s called these days) underneath.
Depends on the platform. Android is still super rough (expect debugging custom third-party experimental compiler build scripts).
Linux support is, I think, a bit better. But for now it's still the biggest handicap for the language for sure. Any sane project would have solved those issues a long time ago, but since it's an Apple language, they don't seem to care much.
Xcode is fine as long as you don’t have a ton of Obj-C ↔ Swift bridging going on and avoid XIBs/Storyboards, with writing UIKit in pure code being the most stable option (SwiftUI still needs some time in the oven). Writing idiomatic Swift also helps (code smells like deeply nested closures can make SourceKit grumpy).
I’ve spent countless hours working in it, and crashes and errors have been rare for years now, especially since ditching CocoaPods in favor of Swift Package Manager for dependencies. I’ve found Android development to be considerably more frustrating, with the mess of Gradle and Proguard paired with the surprisingly anemic Android Framework. Jetpack Compose is thankfully replacing the Android Framework and is a major improvement, but from time to time Gradle and Proguard make me want to tear my hair out.
I briefly worked on a Swift app that had just a login screen and a background process, so no Storyboard stuff. It was still so horribly, horribly buggy that it started getting comical. When I turned to seasoned full-time iOS developers, they would just shrug it off as a normal thing. It constantly needs to be restarted, and actions retried. An insane amount of "clicking around". It was a complete joke; I cannot fathom that some people just take it as state of the art.
I tried the IntelliJ IDE, which didn't feel so shoddy, but in turn has a lot of other problems that boil down to not being the "blessed" solution.
It’s not that I’m shrugging bugginess off, I legitimately am not encountering these things. I restart Xcode maybe once every 1-2 weeks, if that. Maybe I’ve just learned how to not anger it?
Android Studio (IntelliJ) on the other hand I’ve run into mysterious things with fairly frequently, like it failing to install the app on the connected phone after hitting run for no reason or builds failing with bogus errors, requiring a project clean to fix. Its “smart” refactoring tools also can’t always be trusted to do the right thing, at least where XML UI layout files are concerned.
It makes sense that UIKit is more solid, because not only does it have 17 years of its own history, but it also builds upon the legacy of AppKit, which stretches back another 20 years. By comparison, SwiftUI is still quite young.
Haven’t touched Flutter because Material Design feels weird on iOS, its Cupertino theme isn’t great, and I’m not the type to reinvent the wheel on UI design in the apps I work on. Also don’t really feel like picking up a language, toolchain, and ecosystem for a single UI framework.
We just started migrating our app to Jetpack Compose after migrating our iOS app to SwiftUI, and so far Jetpack has been more fun. The problem is just the horrible Android documentation and starter tutorial, which is more video-centric; I hate it.