Swift System Is Now Open Source (swift.org)
500 points by NobodyNada on Sept 25, 2020 | 328 comments



To me Swift is one of the most interesting languages right now. It's killer for iOS development of course, but Swift for Tensorflow[0] is exciting and could become a refreshing alternative to python for machine learning. Here Swift System + Swift for Linux could make it a compelling alternative to Go or Rust in the future. Looking at Swift System, one interesting thing to note is that they are wrapping the library functions rather than calling into the kernel directly. The source for that is here: https://github.com/apple/swift-system/blob/main/Sources/Syst...

[0] Yes, I'm aware of the staffing issues. No need to beat that dead horse again.


I don't think Swift really competes with Rust or Go at all.

- Go aims to be simple enough for many engineers to be productive with little investment.

- Rust aims to be extremely reliable and efficient.

Swift is very complex, and makes many compromises away from reliability or efficiency in favor of application use cases.

Swift may fill an interesting role for a "native C#" that is mostly reliable, mostly efficient, and somewhat productive, but on the other hand, C#, OCaml, Java, Kotlin, and so on already have various answers to that, and they're only becoming better at it.

The only real advantage it has (as far as I can tell) is iOS (and TensorFlow almost as a knock-on of being the only serious language for iOS).


> - Go aims to be simple enough for many engineers to be productive with little investment.

Go isn't simpler; there is just a deferred cost engineers pay later on, when it turns out one cannot just do away with complexity by deeming it irrelevant.


But Swift isn't as easy as Python or JavaScript either. It is arguably more complex than Java or C#, and probably more complex than Kotlin. It is surely less complex than Scala.


IMO Swift is easier than JS or python for medium to large projects. Once you get to a certain scale of codebase where you can't keep all the systems in your mind at one time, weakly typed, interpreted languages start to reach a point of diminishing returns, and the inability to catch issues at compile time starts to be missed. You can mitigate this by writing a lot of tests, but this creates a large additional workload which more than outweighs the additional complexity you would have encountered working with a language like Swift in the first place.
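As a trivial (hypothetical) illustration of the kind of mistake Swift rejects before the code ever runs:

  func totalPrice(unitPrice: Double, quantity: Int) -> Double {
      return unitPrice * Double(quantity)
  }

  totalPrice(unitPrice: "9.99", quantity: 3)  // compile-time error: a String is not a Double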


I know the advantage of static typing (see my other comments).

Swift just seems quite a bit more complex for the average programmer.


I think one advantage of Swift, as compared to say Rust, on this topic is that one of its core values is the concept of "progressive disclosure". In other words, Swift is designed in such a way that you can start programming at a low level of complexity and produce functional programs, and over time you can introduce more complex and esoteric concepts into your code as you become more familiar with the language.

If you're jumping straight into a large, mature code-base I can understand how it might seem complex, but it's very possible to write Swift code which is as simple and readable as Go code. The same can probably not be said for C++ or Rust, where there's a certain baseline complexity which cannot be escaped.
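To illustrate with a trivial (hypothetical) snippet, everyday Swift can read almost like a scripting language, with none of the advanced machinery in sight:

  let names = ["Ada", "Grace", "Edsger"]
  for name in names {
      print("Hello, \(name)!")
  }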


Some FUD right there... What exactly is the deferred cost? Give real examples, please.


I love Go, but there is a strong case that Kubernetes has a messy codebase because Go didn’t have good dependency management or generics when they started it. The strength of the ecosystem in other areas let them get started and now they have the tragedy of success, a la Wordpress.


Part of the issues with Kubernetes is that it started as a Java project and then it became a Go project. If anyone started a large Go project with that mindset it would look messy no matter what you did.


Can you point to any large Go project that isn’t a mess?

I agree that from a single sample you can’t reach conclusions, but “it was their fault” isn’t a great defense.


Is rclone a mess? I don't do Go, but some glances at it don't scream "mess". https://github.com/rclone/rclone



Can YOU point to any large project that isn’t a mess?


Isn't mess in the eye of the beholder?

Many large projects may look like a mess but their success is because they got the high level architecture right and perhaps had the moat of a large tech behemoth behind them.

VSCode, Tensorflow, Kubernetes, Go, Typescript, React. These are a few that come to mind.

At a high level, they are all fantastic projects and perhaps messy in their own way, but they are absolutely fantastic, hardened proven codebases with ~millions of users.


Servo


Ahh, with 3250 open issues and 8k closed ones? And with the main driver being Mozilla?


There is one video about the Kubernetes code base: https://www.youtube.com/watch?v=4VNDjwzzKPo and it has nothing to do with the Go language itself; it's about organizing the repo and code structure.

Kubernetes is a complex distributed system with over 2M LOC; no matter what language you use, there are going to be some issues.

It's probably one of the largest open source projects out there.


2MLOC is too big for what it’s supposed to be doing. That’s a sign that something went wrong. I wouldn’t blame Go qua language per se, but the lack of generics and lack of dependency management (when they started) couldn’t have helped.


Yeah, let's just use Docker Swarm or Nomad. Or which features would you remove to make it less bloated?


The argument - as I understood it, knowing only a little about Go and Kubernetes - wasn't about bloat in application features, but bloat caused by the language. The stated argument was "missing generics" - missing generics means that similar algorithms have to be re-created multiple times instead of being reused. Thus comparing this to another language might be interesting.

For example, C++ is extremely powerful for writing generic code: make it a template and provide a set of traits to modify the behavior, and you get a set of algorithms you can re-use for anything – at the cost of unreadable code in the implementation (if you aren't careful; with a chance of quite precise code at the call site) and of course a bloated version in the resulting binary (after the compiler has instantiated and inlined (thus copied) everything; the programmer typically doesn't have to care about any of it until the binary is too large to handle).


So the better question is: what code got duplicated exactly? And if the person making the claim can't give examples that make a dent in 2 MLOC, then it is not due to the language in the first place.

I assumed it is about features because there are really a lot of features inside Kubernetes which are not available in Swarm or Nomad by default.


2 MLOC is insane. I assume some of that might be test code and other infrastructure, and it's basically impossible to compare apples to apples in terms of code size. But for reference, 2 MLOC is around the same code size as PostgreSQL, which is also a complex distributed system, has arguably a lot more user-exposed features and a long history, and is written in C of all things.


Kubernetes is the exception, not the norm.


Because of no overloading and generics, you are often forced to repeat code or resort to unsafe runtime practices.
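For contrast, a rough sketch of a single generic helper in Swift (hypothetical names), where one implementation covers every Comparable element type instead of being copy-pasted per type:

  // One implementation, reused for Int, Double, String, ...
  func largest<T: Comparable>(_ values: [T]) -> T? {
      return values.max()
  }

  largest([3, 1, 4])        // Optional(4)
  largest(["go", "swift"])  // Optional("swift")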


Everyone is posting some stuff about the kubernetes codebase, but:

https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-...

The monotonic time thing is an especially good example.


I think one of Swift's advantages is its progressive disclosure. It can be written without using any of its more advanced features, making it approachable without limiting possibilities for more advanced users.


Can it be read without knowing about its more advanced features though? Production software is read much more often than it is written.


One of Swift's strengths is that most of its features really make sense if you read them, even if you aren't aware they existed. The entire language is optimized around readability. (No, I'm not counting the function builder fiasco.) For example, you may not know that Swift's loops support where clauses:

  for i in 1..<10 where i % 2 == 0 {
      print(i)
  }
But you can immediately understand how it works, because it's consistent with the rest of the language and reads well.


Any claim of the form "this language is magical! every statement is self-evident" has been bunk since the dawn of programming. The people making such claims are always already too close to the language to realise their own confirmation bias.

So, feedback, from someone whose exposure to Swift consists basically of this article but has eaten a metric crapload of many other braced languages over the decades.

What I can tell is that the programmer wanted to print 2,4,6,8. So the intentional readability is there. However when it comes to understanding how it works (i.e. the language mechanics delivering the intuitive reading) I had lexical concerns. I couldn't immediately tell whether the where-clause acts like a guard predicate in the for-statement or an unbraced if-statement i.e. equivalent to

    for i in 1..<10 {
        next if !(i % 2 == 0)
        print(i)
    }
or

    for i in 1..<10 { 
        if (i % 2 == 0) {
            print(i)
        }
    }
or even (let's hope not)

    if (i % 2 == 0) {
        for i in 1..<10 { 
            print(i)
        }
    }

and in a similar vein the lexical scope and binding of i was not clear to me; some languages bind the iterated variable at the level of the statement, others inside the iterated body (which may even be a closure for all I know), and this has implications particularly for the value of i after the loop, for the binding of i in any function generated inside the loop (example: print could actually be talking to an IO object that defers output by prepending lambdas to a chain - I've seen web containers deferring view rendering this way), for name masking, and for the consequences of any flow-control/continuation/exception mechanism that may cause an early loop exit.


I'm unsure what the difference is between the first two…are they not equivalent? The lexical scoping could be interpreted differently, sure (allowing your third example the benefit of the doubt) but this is where the "Swift is consistent" comes into play. You already know Swift's scoping rules, it's one of the first things you learn, so using that knowledge you can figure out this code as well.
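For example, the loop variable in a Swift for-in is scoped to the loop body, so a sketch like this should fail to compile:

  for i in 1..<10 where i % 2 == 0 {
      print(i)
  }
  print(i)  // compile error: 'i' only exists inside the loop body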


No, they're not equivalent. See, that requires making an assumption that I didn't about lexical nesting. They merely do the same thing in this example.

I do not know Swift's scoping rules and having read through the introductory guide and the flow control chapter I still can't tell you if the loop variable remains visible after the loop, and what it is bound to if so.

No doubt that if I sat down with a REPL or an IDE it'd hopefully be evident by example within a few seconds, but the claim was about dry-reading, not interactive experience.

I'll leave it there with an admonishment against making assumptions/declarations about the obvious-ness of something you're already familiar with.


In every language I’ve used with a for-in construct (or equivalent), the iterated variable is scoped to the loop. That seems to be implied by the syntax, too; it’s equivalent to “for all i in [1,9] where i mod 2 = 0”. If you saw that in a math context, you’d expect i to have no meaning beyond the iteration.

Is there a case to be made for other behavior? Or an example of a language that intentionally treats it any differently? (Unless you go out of your way to iterate over a variable you’ve declared in an outer scope in C, but then that’s not really a for-in construct.)


Python and JavaScript are languages that do not behave the way you mentioned.


JS works that way with plain for loops, but not for-in or for-of. The following fails on the last line, for example:

  for (const i in [1,2,3,4]) {
    alert(i);
  }
  alert("i is now " + i); // ReferenceError
(Not that JS is exactly known for consistent and predictable behavior on edge cases, of course...)

Python I don’t have experience with, and it appears I stand corrected on that front! I wonder how intentional it is, and what sort of use case it enables (and if it’s considered good practice to use).


Python does this mostly because it doesn't have block scope, only function scope like JS's `var`. After using Python for a decade, I think this is mostly a mistake. I would configure (or edit) my linter to prevent me from using it if I needed to write a lot more Python.


Lexical nesting? While I agree with the statement you’re making — that reading code requires you to understand more about what the code means in that context/runtime — I also think the example you’re picking apart reminds me of the dangers of undefined behaviour in C++. If the language specification plus its test cases still leaves implementation-specific details, then by all means measure or discover them, but I would say this is not unique to Swift and is just part of the complexity of not writing CPU instructions directly that target only one type of CPU design. Maybe I’m taking this too far, but my point, I suppose, is that I know what was intended by the Swift code even if the exact details are optimized out from underneath me by a compiler or language library. Might as well argue that you should have full unit and integration tests for every target platform to ensure the code behaves the way you expect, and even then you’ll still hit platform-specific edge cases, particularly on the wide variety of platforms Linux supports... I do tend to trust Clang, but even it can vary greatly from (hopefully major) version to version.


Yes, I'm definitely trying to point out the distinction between reading code and understanding its intention vs understanding implementation mechanics and awareness of the possible alternative behaviours. It's easy to point out that C++ is amongst the worst offenders in that regard, but even a language with a reputation for "programmer happiness" like Ruby (which I absolutely adore) is full of magic, almost every nontrivial expression glossing over a huge pile of conceptual and mechanical object-functional devices that beginners trip over quite routinely (don't even get me started on the thick layer of sorcery that Rails slathers over the top of that).

So I'm not actually calling Swift a major offender, but noting generally that I don't think there's ever been a language in which the intuition gap between intention and mechanics was small enough to be irrelevant.

(something at the back of my mind is now whispering "Scheme", but too quietly to be taken seriously)


A where clause on a loop is nice, but I think it's likely it will either be abused or be limiting. I've always favored systems that work well together to build something more than the sum of the parts. Perl is an example of that in many cases.

  for my $i ( 1..(10-1) ) {
    next unless $i % 2 == 0;
    print $i;
  }
Here Perl's post-conditional syntax (which works only on a statement and not a block so it's more manageable) combines well with "next" (Perl's version of "continue") to very clearly convey intent without a lot of extra boilerplate. An additional clause is often just an additional line. Comments after each can provide additional context as needed. I think this is a more flexible way to accomplish the same thing, and more readable in all but the simplest of cases. And for those simplest of cases, I would use a grep, which is also useful in other parts of the language:

  for my $i ( grep { not $_ % 2 } 1..(10-1) ) {
    print $i;
  }
Any syntax learned that works as a special case to only a single structure is either a case of missed opportunity or a case of extra syntax that needs to be learned which shouldn't be, IMO.


That where is not a one-off construct, it's used with switch statements:

  switch i {
  case 1...100 where i % 2 == 0:
      print("\(i) is a small even number")
  case ..<0:
      print("\(i) is negative")
  default:
      print("I couldn't care less")
  }
Or generic constraints:

  struct Vector<T> where T: Numeric {
      // ...
  }
If you so wish, you can always use any of the functional constructs as well, as in

  for i in (0..<10).filter({ $0 % 2 == 0 }) {
      print(i)
  }


So it looks like it's a general modifier to a range type, so can be used where those work. That's slightly better, but if filter exists, why not just use that?

One of the biggest criticisms that Perl gets is that there's so many different ways to do things, and that's somewhat deserved, so the question is what does "where" offer that "filter" doesn't other than another keyword and concept you have to learn to understand the language? What's the point if it's not really that much more expressive or clear?

> struct Vector<T> where T: Numeric

Is this really the same thing? This seems like a case where the same word is used to mean something else, even if they are loosely conceptually the same.


It isn't actually a modifier to a range type -- it modifies other constructs, like looping or conditionals. The examples don't show it but literally any condition can go in the where clause, even something like `where random() % 7 == 2`.


Better to learn a new, generic concept than obscure syntax I’ll see combined 1000 different ways in the same codebase.


I find this style (Rust) more readable and easier to extend:

  RangeInclusive::new(1, 10)
      .filter(|x| x % 2 == 0)
      .for_each(|x| { println!("{}", x); });

(though you could write it the same way: for i in 1..=10 { if .. { .. } })


The same can be written in Swift as

  (1...10).filter { $0.isMultiple(of: 2) }.forEach { print($0) }


I've been able to dip into Swift codebases with zero previous experience without too much trouble. I do have Rust experience, which helps. But nevertheless, everyday Swift seemed pretty readable to me.


Swift has some really clunky "features" such as dual-identifier parameters and escaped opening parens as identifiers for string templates. WTF! As if function/method declarations weren't complex enough in a statically-typed language which supports generics.


They’re not dual identifiers; one is an argument label and the other is the parameter name. Maybe it’s because I come from an ObjC background, but this is one of my favorite aspects of the syntax; IMO it makes code way easier to read without having to look up method definitions to figure out what each argument does.


From State of the Platform this year: https://pbs.twimg.com/media/Ebb51t6XkAIF4YD?format=jpg&name=...

Which for is for and would work as for for?


Madness.


As a disclaimer, I’ve written Swift, but this seems to be a simple concept. The first name is the interface and is used at the call site, and the second (optional) name is used within the scope of the function, should the developer want a more descriptive name.
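A quick hypothetical sketch of both names in use:

  func send(_ message: String, to recipient: String) {
      // 'recipient' is the parameter name used inside the function
      print("sending \(message) to \(recipient)")
  }

  send("hi", to: "HN")  // 'to' is the argument label used at the call site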


> escaped opening parens as identifiers for string templates

Do you mean the use of “\(variable)” for string interpolation? What’s wrong with that?


It's just downright clunky. What's wrong with ${variable} as in Kotlin and Javascript?


I think that’s more a question of taste/what one is used to. I find that dollar sign a bit heavy, visually, but one could also argue that’s a plus.

Using backslash has the advantage that there’s only one ‘special’ character in strings, but of course it also has the problem that you can escape any character with a backslash, except for opening parentheses.
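To make that concrete with a small (hypothetical) example:

  let name = "world"
  print("Hello, \(name)!")        // interpolation: Hello, world!
  print("A backslash: \\")        // the one special character, escaped
  print("Literal \\(name) text")  // prints: Literal \(name) text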


They had good reasons, it means there's only two substrings that need escaping: the backslash and end string sequence. In hindsight, it's ugly and hard to read. A case of optimising for the wrong thing.


Totally. It's very easy to read.


Isn't that true of almost all programming languages? Or do you mean cleanly?

I think D has to be this idea (or disease depending on who you ask i.e. Go is based on the idea of only keeping the simple bits) taken to the extreme - you can quite happily write Java in D but also implement a Java in D all the way down to the metal (merrily metaprogramming all the way down)


All of this sounds great but it doesn’t mean that the code written in it will maintain the same level of simplicity/approachability. Go has the advantage of enforcing it. Writing code is easier than reading it.

Don’t get me wrong, I love swift but I think that for what you’re talking about constraints are necessary because we naturally shift towards more complex ways of expression.


Exactly, I was in love @ Swift 2, but as the language evolved to cover corner cases and enable things like SwiftUI, it became nigh-unreadable, there's simply too many 'dialects'.

Dart is the closest I've found to optimizing the surface area of a language while maintaining flexibility like ObjC.


This describes Perl - one could get very clever with it. As a recovering Perl programmer - that is a scary thing for a language to be, especially for large code-bases worked on by people with different skill levels as the code quickly becomes inconsistent, with mixed paradigms.


I once came across a production bash script that contained sections in Korn shell syntax, others in C shell syntax, and yet others in Bourne shell syntax. Lovely!


I agree. I referenced this (perhaps a bit too implicitly) in a sibling comment. Swift's complexity is there but only when you need it.


> It can be written without using any of its more advanced features

If anything, that's bad, because it invites projects to adopt an impoverished subset of the language that avoids those features, in which case they might as well not exist because you aren't allowed to use them.


I have yet to encounter a project that does not allow contributions that use advanced features.


I'm not sure where the problem is here for a team that doesn't use them.


>It can be written without using any of its more advanced features...

So can Perl.


It can, sure, but I remember having to spend 2 hours (with a reference manual open) rewriting two lines of a Perl script that a sub wrote (after a colleague told me that he couldn't understand the script) into two lines that a Perl beginner could understand.


I've never understood this notion that a professional codebase should be dumbed down to the level of the beginner. In most professions the neophyte must jump through hoops to reach mastery, yet in programming we seem to have turned this idea on its head. Whatever happened to mastering your tools? A seasoned Perl veteran should be able to play a damned good round of golf.


We have too many different, interacting parts in most software systems these days to demand devs have pro-level expertise in every part they might have to deal with. If you can simplify the code (without significant harm to functionality) so that only 10% of your potential devs wouldn't understand it very well instead of only 10% would understand it well, you have just increased the value of the code.


It is an appeal to the artistry inherent in making the difficult as simple as possible.

Or the internalization of the managerial imperative for cheap labor, take your pick.


If the usage of more advanced concepts had helped reduce the number of lines, improve maintainability, or increase performance, you might have a point, but notice that it was two lines replaced by two lines. I've always suspected the sub of artificially increasing the difficulty of the code to ensure he stayed employed; it didn't work.


>Whatever happened to mastering your tools?

Companies prefer a churn of cheaper juniors happened.


I perled a bit in the early 00s, and I never used any of those so-called advanced features, because they made the code totally unreadable.


>Swift is very complex, and makes many compromises away from reliability or efficiency in favor of application use cases.

Swift is only complex if you need it to be; otherwise it's very easy, with a few caveats.

And it's efficient enough for all application / cli etc use cases -- if Go can do it, Swift can do it even better.

It's not Rust in performance and low-levelness, but Rust is the "really complex" one in its semantics and learning curve.


> if Go can do it, Swift can do it even better.

This might be hyperbole, but if not I'd love to see some examples on how Swift can improve on Go's concurrency patterns.


Swift currently has no language-level concurrency model at all (the only concurrency support available is through libraries such as pthread and libdispatch). However, real concurrency support is on the roadmap for Swift 6 [0] (probably based on some combination of async/await and an actor model), and the core team is expected to release an initial design document within the next few weeks.

[0]: https://forums.swift.org/t/on-the-road-to-swift-6/32862/


It might not offer first-class support for green threads like Go, but GCD is, imho, an even better model for 90% of the concurrency needs of the average app:

https://developer.apple.com/documentation/DISPATCH
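As a rough sketch of the everyday pattern (do work on a background queue, then hop back to the main queue):

  import Dispatch

  DispatchQueue.global(qos: .userInitiated).async {
      let sum = (1...1_000).reduce(0, +)  // some work off the main thread
      DispatchQueue.main.async {
          print("sum = \(sum)")           // UI updates happen back on main
      }
  }
  // (assumes a running main run loop, e.g. inside an app)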


"concurrency needs of the average app" is different from what Go does well with green threads; this seems like something Swift does notably worse, but well enough that it doesn't annoy you in your use case. I feel like this is a different standard from "if Go can do it, Swift can do it better".


Isn’t that just a bridge to an existing OS-level feature? Also, is GCD green? (As in lightweight, co-operative multi-threading)


>Isn’t that just a bridge to an existing OS level feature?

Yes. A feature well designed for this very purpose.

>Also is GCD green?

GCD tasks are not green threads but they're not direct threads either (even though they're used under the hood). They're more lightweight and much faster to create (than direct OS threads).


> And it's efficient enough for all application / cli etc use cases -- if Go can do it, Swift can do it even better.

If you claim something like this, please back up your claims or, at the very least, cite a simple example.


What you're asserting could be said of the initial claim as well. Where are all the examples of why it's so much more complex than other languages?


Swift isn't good for backend services. The Linux port is also average, etc.


> Rust aims to be extremely reliable and efficient.

Fair point, Swift will never be as predictable or efficient as Rust (not a negative per se, just different goals). But I disagree on the complexity re: Go. What about Swift gives you the impression of extreme complexity? Go is definitely comparatively simpler, but I do not think it is so much so as to put it in a different league.


The type errors you start running into with generics with all of their weird limitations in swift can make things complicated fast. That and RxSwift.

Also Swift as a language doesn't scale well. There are a lot of bottlenecks in the build process that don't let you scale simply across many cores like you can with C++, Obj-C & C and probably many other languages too. You're also effectively limited to Xcode & Apple desktops, so you can't go rent out a 100-core build server on AWS for builds like you can for every other platform out there, not that it matters much yet with Swift's build scaling issues.

Also stuff like basic debugging often just... dies.

The more I work with a badly scaling language the more I appreciate a design decision like go made with building fast. I hope with generics in go v2 the boilerplate should reduce a lot.


RxAnything makes any language unreadable. RxJava is the bane of my existence.


I find Rx to be super clear usually. What problems have you run into?

Combine (Apple's Rx framework) is pretty understandable/usable as well.


In my opinion it's just easy to do wrong. Most uses I've seen (at least in java land) are attempts at non-blocking io which ends up turning the whole application into observables from bottom up. Which in turn makes your app hard to debug and reason about.

When done right, when you actually need the observer pattern, and use it as an event queue I'm sure it's probably amazing though.


Those are good points, but I agree with the other commenter that these are just difficulties with asynchronous programming generally. I like that Rx makes reasoning about those issues more straightforward and something you must handle instead of something that will bite you later if you didn't think it through.


This is the problem with async programming in general, not just Rx. Relevant read:

https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...


Yes I know about colored functions. I'm looking forward to loom and virtual threads on the jvm to solve that.

BUT it doesn't change the fact that Rx is hard to implement correctly and is often misused when you don't need the observer pattern.


When was the last time you compiled Swift code? Xcode's new(er) build system has solved many of the scaling issues that affected the previous system, to the point where it scales linearly (on large enough projects) even on a 28/56 Mac Pro, and my 10/20 iMac. Additionally, with cmake's native Swift support in 3.18(?), the scaling should be more solved across platform as well. There's certainly nothing in the language itself that prevents scaling builds, only the compiler and build systems.

The debugger is still rough, true, but improving. Apple just isn't spending the resources needed to bring lldb up to a great experience in a timely fashion.


Yesterday!

Swift's build system can split off enough threads to consume all of your CPU cores, and batch mode did improve things, but it's not actually doing so efficiently. There is a trade-off between total compute time used and number of threads used. In compute time consumed, Swift is most efficient when it's single-threaded in WMO mode, but this means you can only compile in parallel with separate modules that don't depend on each other. And even then it's not a very fast-compiling language itself, multithreading issues notwithstanding.

Maybe something has changed recently, but as far as Xcode 12 goes, I haven't noticed much of a difference. I last checked deeply with swift 4.

More details here: https://github.com/apple/swift/blob/master/docs/CompilerPerf...


> You're also effectively limited to xcode & apple desktops

Swift supports LSP now, so you can use VS Code if that's your thing.


But what is going to build your iOS/macOS application, the target of 99.9% of Swift code? It's only going to build on macOS unless you are willing to illegally virtualize macOS on non-Apple hardware.


Swift has almost seamless interoperability with existing C and C++ libraries and has much stronger safety guarantees than either of those languages. It's not on the same level as Rust, but ARC eliminates entire classes of memory safety issues.

The Swift runtime is large-ish, but not outrageously so, and is generally statically compiled-in to the binaries, so there's no dependencies on dynamic libraries.

It is, as you say, quite complex, but still an interesting choice IMO.


Minor nitpick but it does not have seamless interfacing with anything other than Objective C.

C can be exposed as Objective C, but C++ has to go through C (potential objective C++) first.

This is really no different than most other languages.

What it does do, however, is provide good auto-binding tools to expose classes and objects bidirectionally to Objective-C.


My understanding was that there is no need to involve Objective-C if you’re directly interfacing with C. That interop is direct/seamless, and works on Linux etc. even without Objective-C or Darwin [0].

But the C++ story is what you’ve described above.

[0] https://developer.apple.com/documentation/swift/imported_c_a...

and

https://developer.apple.com/documentation/swift/imported_c_a...


I wouldn't say "seamless". Interacting with C directly from Swift is possible, and fine from a binary perspective, but quite clunky at the source level. (Which obviously is a motivation for the library that is being announced here.)

There are still assorted gaps in C imports -- a key example is forward-declared structs, that _all_ arrive in Swift as a _single shared_ `OpaquePointer` type: https://forums.swift.org/t/opaque-pointers-in-swift/6875
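To give a flavor with a hypothetical snippet: plain C functions import directly, but you're handed C types like raw char pointers rather than Swift strings:

  #if canImport(Darwin)
  import Darwin
  #else
  import Glibc
  #endif

  let pid = getpid()                     // pid_t comes across as Int32
  if let home = getenv("HOME") {         // char* arrives as UnsafeMutablePointer<CChar>?
      print("pid \(pid), HOME = \(String(cString: home))")
  }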


I was thinking of Objective-C++ which can freely call into C++.

Swift seems to be one step removed from that. I guess you would have to write an Objective-C to Objective-C++ bridge to then use from Swift, ugh.


Obj-C is a direct superset of C, so that interop already requires plain C capabilities right out of the box.

And AFAIR Obj-C interop is only available/supported on Apple platforms.


Swift can interface with C++? I've just done some googling but I cannot find any documentation at all relating to C++ interop. I make no assumption that I'm right but all I can see is interop via a C API

Seamless in my mind is what D has, where name mangling, templates, and classes all work on Windows and Linux (possibly macOS - I don't have one). Does Swift have any of that?


> Swift can interface with C++? I've just done some googling but I cannot find any documentation at all relating to C++ interop.

It is not there yet but it is actually being implemented on a native level. A quick Google gave me this. [0] [1] [2]

The approach is first-party, unlike what you get with Rust's third-party crates and tools like bindgen, cxx, etc. Swift's C-interop approach is built-in and much more seamless and automated than bindgen's toggles and switches, or creating bindings by hand with cgo in Golang.

[0] https://github.com/apple/swift/blob/master/docs/CppInteroper...

[1] https://github.com/apple/swift/pulls?q=label%3A%22C%2B%2B+In...

[2] https://forums.swift.org/t/manifesto-interoperability-betwee...


It's being worked on, but it's nowhere near being usable in any sense. Very fundamental things still need to be tied down and decided, much less implemented.


Well I did say it's 'not there yet' and it is 'being implemented' so of course it is not yet usable as of now. Your same point could be said about Dart C/C++ FFI, Rust async-await, etc.

The parent comment I replied to said: "...I cannot find any documentation at all relating to C++ interop," and this is some kind of "documentation" that is related to C++ interop.


On most platforms, if you can interface with C, you can also interface with C++. Usually you have to find the mangled symbol in the C++ binary using a tool like nm, and treat it like a C function with whatever ffi you're using.

If you are targeting multiple platforms this can get tricky because the mangled C++ symbol will (probably) be different for each platform, which can be solved with a good build system


C++ doesn't have a stable ABI, so if you target a C++ symbol manually like this, as soon as the compiler revs to, say, C++20, the ABI will change and you'll get a linker error.

Also, this may work for simple C++ methods, but once you cross over into vtable territory, you're going to be in a world of hurt.

Also as far as interoperability, I just don't see a smooth path forward between Swift and C++. Philosophically, much of the C++ code you write ends up compiling away or building specialized generated code at compile time. Anything written in C++17 and newer will also have a lot of constexpr code that generates immutable results baked into the final output.

I'd say, wrap your C++ in a simple C API and expose that to Swift (same would apply to Go/Rust or any other language honestly).


Hot take: ABI stability is overrated. How often do you need to relink specifically without recompiling everything? As an OS packager, it's no big deal to rebuild everything depending on a C++ shared library when that library is updated. For self-contained binary distributions, statically linking the C++ library or putting the shared objects together works fine still.

vtables are always a pain, but I think the Swift team had some cool ideas about vtables and other dynamic stuff across dynamically linked shared objects..

upd: https://gankra.github.io/blah/swift-abi/ this


A good build system would solve those problems. Yes a C wrapper is way easier.


Stable C++ ABI depends on implementation, it’s not inherent to the language.


Eek. I've done it, but it's not exactly seamless as implied above! (feels like using a landmine as a baking scale)


What issues did you run into?


I've only ever done it out of curious laziness but it's one of those things where it's theoretically clean enough to do right but if you do it wrong you might be fixing it all day. If something changes downstream or you make a mistake at 9AM you can end up with subtle non-failing garbage everywhere.

The only way I would do it and trust it, would be if the binding is generated and tested (separately) automatically, but at that stage why not just write one with a proper ABI.


Good luck instantiating and using classes if you do it that way. You pretty much have to reimplement the itanium abi in whatever language you're using (hint: if your language doesn't already support it, then even implementing it in assembly would be easier).


Swift has great C interop for a language that is not a C superset, but C++ interop is at best a thing you can theoretically do. In practice c++ interop is usually done via a wrapper.


I think you are focusing on somewhat artificial things here, like intentions and other biases and notions.

Swift is a statically compiled, C like language, that can be used for system programming. Just like C, C++, Rust, Go, etc. This is not an accident. It was built from the ground up to do that.

The goal with this OSS library is literally making swift easier to use in projects where you'd otherwise reach for exactly those languages. So, whether you like it or not, it's competing (or at least trying to) in that space (i.e. system programming). Actually, Swift was designed to replace Objective C, which of course was the system programming language that Apple standardized on as an alternative to C decades ago. It's designed from the ground up to be a drop in replacement in any project where you'd previously be using that. So, it's a natural fit for any kind of project where you'd otherwise be considering things like Rust, C++, Go, C, etc.

Whether that makes sense or not in your context is of course up for debate and highly subjective.


Where do you think Ada - a mature and tested language - fits in among these 3 newbies?


Ada is a superior language if you are trying to write safe code, but given that the tools (compilers and such) cost serious money, it will never get any adoption apart from avionics, defense contractors, and mission critical/safety type of software.


Free Software Foundation's GNAT is FREE and has a GNU GCC frontend to compile Ada - https://gcc.gnu.org/wiki/GNAT .


I don't know Ada well personally, but from what I've read, it's a bit less advanced than Rust at static checks, a good bit simpler than Rust and Swift but more complex than Go, and a little bit old-fashioned in approach (verbose syntax and all that). But I'm not very educated.

I would say that Ada and Rust probably compete on some things, but given the history of Ada in industry, it's probably only in fields that already use Ada (aerospace and what-not).


Rust lacks the formal specifications from SPARK.

And SPARK 2014 will support ownership specifications.

Rust is the less advanced one.


Good feedback there :)


>> I don't know Ada well personally
>> But I'm not very educated.

If you don't know, please don't speculate.

I use Ada professionally and I am experimenting with Rust. Rust has great promise and I am interested to see where it will go, especially as an alternative to C++.

>> it's a bit less advanced than Rust at static checks

Ada 2012 has comparable and in many cases more advanced static checks than Rust with the exception of memory management. This is especially true of the Ada type system (https://learn.adacore.com/courses/intro-to-ada/chapters/stro...) and design by contracts (https://learn.adacore.com/courses/intro-to-ada/chapters/cont...).

If you use the SPARK subset of Ada, you can use advanced tools to prove the correctness of your program. See https://learn.adacore.com/courses/intro-to-spark/index.html

Rust's borrow checker approach to memory management is novel, and SPARK is actually adding similar concepts to the next version of SPARK (https://blog.adacore.com/using-pointers-in-spark).

>> a good bit simpler than Rust and Swift but more complex than Go

Ada is a large and fairly complex language because it was designed for hard real-time, safety-critical, embedded systems and it has been in real-world use for 40+ years. I don't think it is that simple, but judging simplicity is subjective: what one person sees as complex, another person might see as simple. You can browse the Ada 2012 Language Reference Manual and see what you think: http://ada-auth.org/standards/12rm/html/RM-TOC.html

>> a little bit old-fashioned in approach (verbose syntax and all that).

This comes from the Ada design philosophy to be explicit in everything and to prefer the use of keywords over symbols.

"Readability is more important than conciseness. Syntactically this shows through the fact that keywords are preferred to symbols, that no keyword is an abbreviation, etc." (See https://learn.adacore.com/courses/intro-to-ada/chapters/intr...)

Long-life programs tend to be read more than they are written. The Ada way is to make programs easier to read rather than faster to write.

>> I would say that Ada and Rust probably compete on some things, but given the history of Ada in industry, it's probably only in fields that already use Ada (aerospace and what-not).

This is largely true. Ada occupies a niche for aerospace and other safety-critical areas, but has not been widely adopted due to the "uncoolness" factor and the cost of most of the available Ada compilers and toolchains.

I think the popularity of Rust has piqued some interest in Ada as well, but I am not sure if it will cause any change in where either is used.

I would like to see Rust continue to mature and get adopted for widespread use, with multiple implementations and a language standard. As it currently stands, many aerospace and safety-critical spaces would not be willing / able to adopt Rust without a language standard and certifications. Here's hoping . . .


>> This comes from the Ada design philosophy to be explicit in everything and to prefer the use of keywords over symbols.

This may be true, but it doesn't contradict the claim that Ada is old-fashioned in this regard. It was an old fashion in programming languages to prefer words over symbols, and to try to make programming languages look more natural language-like to make them more readable. See Ada, Cobol, Pascal or AppleScript for some examples. This is much less common with newer programming languages, where it's much more common to favor terseness and shortcut syntax. It's debatable whether this is better, but it seems undeniable that Ada-style verbosity is no longer in fashion.


The claim that more verbose, with keywords instead of braces/symbols is more readable is, imho, very subjective. It's easy to state "code is read more often than it is written" and jump to conclusions from there with no evidence that Ada is actually more readable.

An alternative syntax could be helpful for Ada to gain more traction.


The “verbose syntax is good” stance seems to have been a lot more common back in 70s–80s language design, hence why I call it “old-fashioned” :-) not trying to make a judgement against the language (I personally love Cocoa-style method names, for example).


I'll tell you: You're wrong, so is OP.

Benching languages against each other is like pitting birds of prey against each other; it's pointless because they will hunt what they hunt.

This language/framework dispute, which I thought was a fever back in the bad PHP days, has not changed. I'm not saying 'oh, you kids don't know shit'; I'm saying the language is not the problem. The problem is the problem, and the right tool for said problem is the answer.

You find Swift complex and speak about why; you're not wrong about its application. iOS it does very well, therefore it's a tool for an iOS job.

If I grew up just learning Swift and nothing else, I would say Swift is the best. Plato's cave springs to mind.

We are all Engineers, Developers, Hackers, Designers and/or Code Monkeys. Don't try to pit them against each other; know the right tool for the job, and if you can't find it, make it. With people who can.

I thought that's how we roll.


To be clear, that’s kind of what I was trying to say :) maybe the “only advantage” part at the end made that unclear.

I think the problems you’d want to use Go for, the problems you’d want to use Rust for, and the problems you’d want to use Swift for are largely non-overlapping (in spite of whatever similarities they do have).


I will gather a lot of downvotes, but still: I find Swift a very haphazardly designed language with very little foresight and forethought. This is further compounded by its standard libraries which are directly lifted from MacOS with all of their idiosyncrasies and huge incompatible changes from version to version.

For what it's worth, [1] [2] [3] [4]

[1] A type system that can't cope with SwiftUI: https://tonsky.me/blog/swiftui/

[2] More syntax to shake a stick at https://twitter.com/dmitriid/status/1276482336486576133

[3] Even more syntax weirdness https://twitter.com/bradfitz/status/1285302091544576000

[4] Standard library between versions: https://twitter.com/dmitriid/status/1201441652507844608 File manipulation functions on String, great design.


So you ding the language because of a suboptimal choice made by one tool for it?

The good and bad of Swift is the regular breaking updates. Sure it’s a pain when things change, but it’s also healthy when they improve, and almost every change has been an improvement.

A language as simple as C can slowly grow without breaking existing source code. But it’s also never been able to address its most significant flaws.


> So you ding the language because of a suboptimal choice made by one tool for it?

I provided 4 different links showing multiple different features in the language I find suboptimal.

> Sure it’s a pain when things change, but it’s also healthy when they improve, and almost every change has been an improvement.

It's a very dubious statement at the very least.

> But it’s also never been able to address its most significant flaws.

Can Swift address its flaws even with its breaking changes? So far it's been piling on more and more syntax, and continuously breaking its standard library (whose design choices, like writing to files from Strings, are very dubious).


Swift is an extremely well designed language, probably one of the best I have encountered. Dinging it for its syntax is something that I didn't think I would ever hear. To respond to your links: SwiftUI is not part of Swift, so 1 and 2 are not relevant; 3 is clearly explained and is never an issue in practice; and 4 is you faulting the language for being able to interface with APIs designed in the last century–you might as well laugh at open(2) taking a bitmask.


> Dinging it for its syntax is something that I didn't think I would ever hear.

Not just syntax, but the amount of it, and the amount of special cases in it.

> SwiftUI is not part of Swift, so 1 and 2 are not relevant

SwiftUI exposes deficiencies in the "one of the best designed languages" and shows how Swift fails to scale in complex scenarios.

(1) Shows that its design cannot cope with anything complex. You have to write clunky wrappers for `if` and `foreach` (but not for while and for, apparently), because the design of the language didn't leave room for expressions. The type system cannot handle the complex requirements of the library and can't provide better facilities than Java.

(2) Shows just how bad the design decisions are in the language that they all but force you to write code this way. There's syntax upon syntax with no internal consistency or actual design to figure out how they all operate with each other.

And Swift UI forces more half-baked escape hatches (like the `some` "type", like the lambda changes described in (1)) etc.

I'll just once again expose my favorite piece of code ever:

  guard let self = self { return }
instead of

  if self == nil { return }
This is not an "extremely well designed language". These are badly thrown together pieces of half-baked ideas.

(3) Once again shows just how badly it's designed. Or, rather, how badly the parser is designed that it can't reliably figure out what is going on. In a language that has no significant whitespace you have to care about significant whitespace. Great design.

(4) I'm faulting the design of the standard library that is only compounding the problems already present. Many languages managed to figure out standard libraries that don't make you read and write files from a string object, or need 15 lines (different every year) of code to figure out how to write/read a file. This only continues to show that very little thought is going into this project, as they are being pressured for time to deliver "something".


Guard statements are great.

First of all, this is still perfectly valid swift:

    if self == nil { return }
But guard is really nice, because it signals to the reader of a code block that if the guard condition can't be met, execution cannot continue past the guard. It does a lot to signal intent which using an if statement for early return does not.

It's something I sorely miss in Rust for example.


> It does a lot to signal intent which using an if statement for early return does not.

Since every single language has an if statement, we clearly know what intent an if block signals.

If the if condition is not met, execution cannot continue. See? Easy.

The only reason guard exists is not to aid the reader, but to aid the compiler in recognising nullability checks. That's about it. It really is a very specific if statement that is given special treatment. And all the comparisons of how great it is compared to an if statement that I've seen are hilariously funny in how they willingly lie about ifs. Starting with the actual Swift docs [1]:

> Using a guard statement for requirements improves the readability of your code, compared to doing the same check with an if statement. It lets you write the code that’s typically executed without wrapping it in an else block

Here's how people take it to heart [2]:

> Without using guard, we’d end up with a big pile of code that resembles a pyramid of doom. This doesn’t scale well

Nope, you don't end up with a pyramid of doom, and yes, it does scale well, provided your language designers actually design the language.

[1] https://docs.swift.org/swift-book/LanguageGuide/ControlFlow....

[2] https://thatthinginswift.com/guard-statement-swift/


> Since every single language has an if statement, we clearly know what intent an if block signals.

Guard has different semantics than if. For instance, I can write:

    if self == nil { 
        // ... do some cleanup
        return 
    }
Or:

    if self == nil { 
        // ... do something else without returning 
    }
If it's a simple early return, then yes it's pretty easy to tell the intent. But imagine that I have to do some complex cleanup work over many lines before the return. In the case of a guard, I can look at the guard statement and immediately know that this block of code must end the control flow within this function. With an if statement, I have to read and parse the content of the block before I can understand that execution should not progress past this block.

This is especially relevant when you're maintaining code which was written by someone else. With an if statement, you might accidentally remove a return statement from an if block which is required for correctness, and you may not find out about it until you run your program (or in the worst case after you release your code, and it causes unexpected bugs in production). With the guard statement, the compiler will enforce this constraint. It's one of many tools which Swift gives you to help you write correct code.

Also, with your example:

    if self == nil { return }
I assume you are alluding to languages where self would be implicitly unwrapped in this case, so it can be treated as a concrete instance rather than an optional after this statement. IMO it is a strength of swift that you don't have this magical conversion of an optional to a concrete value which never explicitly written, but is inferred by the compiler. Guard offers clarity and consistency about exactly where the unwrap happens, and it's barely more verbose than a bare if statement. IMO this is a very elegant solution, and in practice it's something I miss in languages which don't have it.


> Guard has different semantics than if.

Yes, it has. But why? Everyone parrots the same "different semantics" excuse never bothering to ask why.

The only reason `guard` exists is to manually tell the compiler to do a nullability check. And that is it. It's a very small, very specific case wrapped into a separate syntax of its own because the compiler and the type checker are just not good enough.

  guard X else {}
is exactly the same as

  if X is not null {} // pseudo code
but the compiler/type checker are not good enough to know that once null checks are passed in a statement, it's safe to use that value in the code that follows.

This leads to horrendous piles of useless code like the one I showed:

  guard let self = self else { return }
How can one look at that and say, "yup, that's good design"?

> But imagine that I have to do some complex cleanup work over many lines before the return. In the case of a guard, I can look at the guard statement and immediately know

And immediately know almost nothing. Except that it's a very specific `if` case that returns. Which brings us to:

> you might accidentally remove a return statement from an if block which is required for correctness, and you may not find out about it until you run your program (or in the worst case after you release your code, and it causes unexpected bugs in production).

It won't cause issues if the language is actually properly designed. Because the variables you'll use will not be checked for nullability in this case. And even Java will warn you with "potentially null value". But Swift can't do that without a `guard` statement. Go figure.

> IMO it is a strength of swift that you don't have this magical conversion of an optional to a concrete value

Yes, you do. You wrap it in a guard statement, assign a variable to itself, and poof, magic, it's suddenly unwrapped.


The difference between guard and if isn’t really about optionals. ‘guard x ...’ tells you that after the guard block, ‘x’ is the case. If x is a Boolean, it’s true; if it’s a matched pattern, it has matched; if it’s an ‘if let’, the value is non-nil.

A guard must return if the condition doesn’t match. There’s no such requirement for if.

I also like that unwrapping is explicit in Swift. ‘if/guard let x = optional’ is the syntax for unwrapping. ‘optional == nil’ is the syntax for checking whether an optional is nil. Why would the latter do any unwrapping? The first time I saw that syntax in Kotlin, I thought that it’s weird to add functionality like this to a normal looking if.


> Everyone parrots the same "different semantics" excuse never bothering to ask why.

It's like you didn't even read my comment; I gave a reasoned answer why it's different. It gives a cue to the user that the condition must be met, or else there will be an early return.

The example you gave is also not equivalent - here you have:

    guard X else { /* branch A */ }
    // Branch B
And here you have:

    if X is not null {
        // Branch B
    }
    // Branch A
The problem with your argument is that you are taking a subjective preference - that magical unwrapping via if-statements is the best form of unwrapping - and stating it as if it is an objective fact. You haven't actually given any arguments for why guard is an inferior approach beyond basically saying: just look at it, it's bad language design. Why is it bad?

> It won't cause issues if the language is actually properly designed. Because the variables you'll use will not checked for nullability in this case. And even Java will warn you with "potentially null value". But Swift can't do that without a `guard` statement.

Again, it's not a question of proper/improper design or a failure of the compiler. The designers of swift made a conscious decision to provide exactly 3 ways to unwrap an optional:

1) the force unwrap `!` operator,

2) the `if let` statement,

3) the `guard let` statement

Just because that is not the exact design you would prefer does not make it a bad design.

You're also ignoring the uses for `guard` beyond null checking. For instance, if I wanted to check that my arguments are within a certain bounds, I could do something like this:

    func foo(x: Int) -> Int {
        guard x >= 0, x <= 42 else { return 0 }
        return 2*x
    }
This has nothing to do with "manually tell[ing] the compiler to do a nullability check ... because the compiler and the type checker are just not good enough" - it's about having an explicit language construct to signal that you do not want to continue with the logic of this function if the condition is not met.

> Yes, you do. You wrap it in a guard statement, assign a variable to itself, and poof, magic, it's suddenly unwrapped.

These are actually very different things.

In this statement:

    if x == nil { ... }
you are taking the conditional (`x == nil`) and giving it the magic side effect that once it's evaluated, a new variable of a different type is created and replaces `x` for the rest of the scope. Every other conditional simply evaluates to a boolean, so why should this one special case have a side effect where it's also declaring and assigning a new variable?

To contrast that with the statement which bothers you so much aesthetically:

    guard let self = self else { return }
Here the `let <identifier> = <value>` denotes the fact that you are declaring and assigning a new variable here, which is why the type change can take place. It happens to have the same name as the previous `self`, but every aspect of this is consistent with the rules of the language, and there is no special case being created here.
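
For what it's worth, the usual home for that line is an escaping closure that captures `self` weakly to avoid a retain cycle. A rough sketch (the class and callback are made up):

    final class Downloader {
        var lastResult: String?
        func start(fetch: @escaping (@escaping (String) -> Void) -> Void) {
            fetch { [weak self] value in
                guard let self = self else { return }   // the object may have been deallocated
                self.lastResult = value                  // `self` is non-optional from here on
            }
        }
    }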

You may not like it, and that's fine, but your personal taste does not reflect an objective truth about language design. Tens of thousands of swift developers around the world are able to understand the motivation for, and benefits of `guard` statements just fine. If you are bizarrely confused or angered by them, it does not mean they are categorically bad.


I'm not interested in talking about SwiftUI, but you claiming that its type system is on the same level as Java's is just hilariously off-base. The "some Type" thing is fairly standard type erasure. Your "favorite piece of code" (which, FWIW, is missing an else on the guard) has different semantics than the other snippet you present because it rebinds self to a non-optional type, whereas the second one doesn't. The parser is largely whitespace agnostic, just like pretty much every other language. Oh, maybe you were going to bring up C++, the one where you have vexing parses and "most" vexing parses and until recently couldn't put >> down without it looking like an operator? Or maybe Rust, where you need turbofish or :: to help the parser out? In Swift at least the issue you mentioned doesn't actually show up if you care about your code style…

Oh, and for the last point, I suggest you read the article :P


> I'm not interested in talking about SwiftUI, but you claiming that its type system is on the same level as Java's is just hilariously off-base. The "some Type" thing is fairly standard type erasure.

It amazes me how people completely ignore everything except the one thing they can potentially defend.

Once again, SwiftUI exposes deficiencies in the language, no matter if you care about it or not. Let's take it from the top, shall we?

- SwiftUI says: hey, I need to have a list of values inside my nice little functions, what can you give me? Swift: I can give you Java:

  static func buildBlock<C0, C1>(C0, C1) -> TupleView<(C0, C1)>

  static func buildBlock<C0, C1, C2>(C0, C1, C2) -> TupleView<(C0, C1, C2)>

  static func buildBlock<C0, C1, C2, C3>(C0, C1, C2, C3) -> TupleView<(C0, C1, C2, C3)>

  static func buildBlock<C0, C1, C2, C3, C4>(C0, C1, C2, C3, C4) -> TupleView<(C0, C1, C2, C3, C4)>

  static func buildBlock<C0, C1, C2, C3, C4, C5>(C0, C1, C2, C3, C4, C5) -> TupleView<(C0, C1, C2, C3, C4, C5)>
- SwiftUI: I obviously need some conditional and list management logic in my nice little lambdas. Swift: well, our "best designed language" can only give you Java. Because our "best designed language" skipped class on the language development of the past 40 years and can't even imagine that conditional statements can be expressions, for example:

  static func buildEither<TrueContent, FalseContent>(first: TrueContent) -> _ConditionalContent<TrueContent, FalseContent>

  static func buildEither<TrueContent, FalseContent>(second: FalseContent) -> _ConditionalContent<TrueContent, FalseContent>

  static func buildIf<Content>(Content?) -> Content?
> The "some Type" thing is fairly standard type erasure.

Once again, the only reason `some` exists is to aid the compiler. Because the compiler in "the best designed language" cannot figure out the difference between protocols and types on its own.

In general, in Swift you constantly have to manually aid the compiler in everything. Also see `guard` below.

> Your "favorite piece of code" (which, FWIW, is missing an else on the guard) has different semantics than the other snipped you present because it rebinds self to a non-optional type, whereas the second one doesn't.

Yeah, yeah. I've seen the "it's different semantics" argument everywhere. It's not "different semantics". The only reason this exists is because they rushed the language out, and the compiler and the type checker were very limited. The only reason guard exists is to force nullability checks by the compiler. It's an inelegant and clunky crutch to aid the compiler. That is it. "The best designed language" could instead have this:

   if x == nil { return }

   // the compiler knows that x is now not nil, 
   // it's no longer an optional type beyond this point

> The parser is largely whitespace agnostic, just like pretty much every other language. Oh, maybe you were going to bring up C++

No I'm not going to bring it up, but it's funny how you brought it up.


I haven’t said the language is perfect; I think I probably know more about where the language has issues than most people would. The lack of variadic generics is an annoying limitation they’re working on bringing to the language, as they are with improving the situation around building conditionals.

Actually, the reason I chose to refuse to engage with you about SwiftUI is that Swift doesn’t support every programming paradigm in existence, nor should it. The fact that SwiftUI is running into issues where it is trying to use language features that don’t exist and then shoehorn them into the language retroactively is not really a great situation. That said, you claiming (multiple times!) that the language had no thought put into it and that it’s the same as Java is just outright trolling/bait at this point. The language has had a huge amount of effort put into it, and many of the questionable decisions made earlier have been rolled back; at this point its type system is really at a similar place to where Rust’s or Scala’s is.

The lack of conditional statements being expressions (note: the ternary operator does exist) and of type narrowing is a conscious choice, not something that the language has to have in order to “be modern” or some sort of evidence that this wasn’t considered. Type narrowing in particular is very common in languages that interface with code that is not well typed (TypeScript with untyped JavaScript, Kotlin with unannotated Optionals and inheritance hierarchies from Java) and Swift has generally had a much better interface with system libraries than that (this being largely controlled by Apple, they can roll out annotations fairly widely). Statements as expressions are simply a feature that Swift chooses not to have, although as seen with function builders perhaps Apple will force it into the language anyway. And type erasure is good not only for the compiler but also a huge benefit for users and API designers: it allows for “class cluster” designs, and it keeps users from having to see SomeMonsterGeneric<Wrapper<Type1, Type2, Type3>, OtherGarbage> for no reason.


> Swift doesn’t support every programming paradigm in existence, nor should it.

Of course it shouldn't. But going ahead and saying that "we shouldn't look at SwiftUI" for whatever reason stops most of the discussion about the deficiencies in the language.

Look, even you are saying things like this:

-- start quote --

The lack of variadic generics is an annoying limitation...

The fact that SwiftUI is running into issues... and then shoehorn them into the language retroactively is not really a great situation.

-- end quote --

You are basically repeating my words, but somehow I'm wrong in my assessment.

> that the language had no thought put into it and that it’s the same as Java is just outright trolling/bait at this point.

When you say that "lack of variadic generics is an annoying limitation" and "language features that don’t exist and then shoehorn them into the language retroactively" it's all right. When I say the same things, it's trolling. Got ya.

> The lack of conditional statements being expressions (note: the ternary operator does exist) and type narrowing is a conscious choice, not something that the language has to have in order to “be modern”

That's why SwiftUI has to "retroactively shoehorn" things like `buildEither<TrueContent, FalseContent>` because a "best designed language" doesn't have any facilities to, well, facilitate this.

> although as seen with function builders perhaps Apple will force it into the language anyway.

Me: Swift lacks this and that.

You: You are a troll, and you are wrong, and this is a good design decision <literally half a sentence later> it will likely become a part of the language because <a few paragraphs before> there are features that don't exist in the language

> And type erasure is good not only for the compiler but it’s also a huge benefit for users and API designers: it allows for “class cluster” designs, it keeps users from having to see SomeMonsterGeneric<Wrapper<Type1, Type2, Type3>, OtherGarbage> for no reason.

There's literally no reason to have SomeMonsterGeneric<Wrapper<Type1, Type2, Type3>, OtherGarbage>. The only reason `some` exists is, and I repeat myself, because the compiler and the type checker literally can't distinguish between a type and a protocol without the developer babysitting them.


Ok, I'll summarize myself once more and then stop trying: Swift is not perfect, but it's way better than Java. Some generally useful features are missing that were identified early but took a back seat for now because other things had priority, but they might finally be coming now. SwiftUI is wrong because it is trying to use language features that don't exist, ones you think are "obvious" to include but I say are not necessarily "better".

The reason I suspect trolling is that you have repeatedly taken my comments out of context, glommed them with other things that are not related, and then responded to that strawman, plus created what I can only refer to as "bait" because I have to waste my time responding to them when I could be having a much more productive conversation. So tell me, would you rather talk about how Swift's type system is the same as Java, how the compiler is designed by incompetent fools who rush out releases, and how any language that doesn't include the three features you brought up is automatically stupid, or maybe we can discuss this more productively from the viewpoint of "why didn't Swift include these things I like?" or "was SwiftUI poorly designed if it is trying to create new parts of the language out of thin air to support itself?" or "has Swift prioritized the wrong set of features?"


Yup, guard statements are indeed great. I am using them all the time. They say more than an if statement, they "guard" the rest of the code that comes afterwards. And especially "guard let" is super useful.

It's ok for you not to like Swift, though. After all, taste is subjective.


The versioning issues have improved significantly as the language has matured.


How many active Linux projects are still written in Objective C? Personally, I see no compelling reason to use Swift on Linux. Programming languages are dime-a-dozen. The libraries and ecosystem surrounding a language are what’s important to me. Currently, I’m at a handicap if I try to write Swift on anything but macOS; no Xcode, most third party libraries depend on proprietary Apple SDKs and most of the Swift community just assumes you’re working on a Mac because why wouldn’t you be? So if I’m a Linux user and I’m choosing a language, why choose Swift over other languages like Rust, Go or even modern C++?

Don’t get me wrong, I love Swift. It’s an absolute pleasure to work in and my first language of choice for projects targeting Apple platforms. But I’m very skeptical of its long-term potential outside of that ecosystem, especially with Apple’s move toward custom silicon and relying on hardware-level implementations of what would traditionally reside in software. This approach has worked out well for Apple with the rest of their product line and I’m completely on board from an engineering perspective - but it’s a path that will not result in higher cross-platform adoption.


It could be useful as a backend for a Swift iOS application, so you do not have to switch languages and can share a library for the protocol.


IMO (as someone who has written plenty of Swift and Rust) Rust is in category of its own here. It's a fantastic language and I love working with it... but it has a mental overhead in dealing with ownership, borrow checking etc that other languages don't.

It's amazing in situations where performance is critical, or where you have constrained resources. But I'd much rather use Swift in other situations.


> but Swift for Tensorflow[0] is exciting and could become a refreshing alternative to python for machine learning

I prefer the landscape for ML not to be fragmented based on non-essential qualities like the language that is used.

How great is it that researchers publish code in the same language, and everybody can use that code immediately without having to learn language X and/or porting the implementation?


My problem with Python is that once you start working with a statically typed language, a duck-typed, interpreted language starts to feel like it's missing a major tool in terms of writing correct code.

I also think S4TF's approach has been really neat here: there's a really nice python interop so you still have access to the entire body of work and tools in python available, and it basically feels like writing native swift code.


> My problem with Python is that once you start working with a statically typed language, a duck-typed, interpreted language starts to feel like it's missing a major tool in terms of writing correct code.

I primarily write C++ and Python and never feel like this.


Not all type systems are created equal. Swift's is powerful and expressive.


You could at least explain what features set it apart, in your opinion, from other type systems and why they are important in this case ...


Well a good example would be the 1st class optional handling in Swift - Swift's type system gives you explicit tools which make it possible to guarantee you won't have an NPE at runtime, as long as you write idiomatic Swift. This is a whole class of errors which are systematically avoidable in Swift (and other languages, like Rust), but take special care to avoid in c++ or python. So with Swift you get this "if it compiles it runs" experience, which is a difference you would not feel between c++ and python.
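
For example, a tiny sketch (hypothetical types) of how the compiler refuses to let you touch the value until the nil case is handled:

    struct User { var email: String? }

    func contact(_ user: User?) -> String {
        // user.email.count would not compile: both optionals must be handled first
        let address = user?.email ?? "unknown"   // optional chaining + nil-coalescing
        return "contacting \(address)"
    }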

But in general, c++ to python is an awkward comparison because c++ is extremely low-level by comparison. So you do get static typing, which is nice, but you also get a lot of other issues to deal with - namely manual memory management - so with either python or c++ you have a high probability you will have code which fails at runtime. It would just be the case that in c++ most of the time it would be because you handled memory incorrectly, whereas with python it would more often be because you made a type-related error. Swift is much more comparable with Python in terms of reducing cognitive load by obviating low-level details from the programmer.


Yeah, C++ is not a language with a proper type system. Swift is.


swift for tensorflow is dead, and even if it weren't it hardly makes the language compelling for ml. Machine learning != deep learning, and pytorch is in the lead in deep learning anyway.

the alternative to python is julia


Every time Swift for Tensorflow gets brought up I am reminded of the disappointment that they didn’t make arguably the smarter choice and do it in Julia


What is the unique value set that it brings to the table?


Why would you mention Swift, Go and Rust in the same context? I mean Swift vs. Go kinda has some merit, although they have completely different intentions. At least Go doesn't aspire to be a systems programming language. Rust definitely has no place in this comparison. Rust serves a niche market of systems programming that is so security focused that C/C++ just won't do. It's not useful for anything else (at least not any more than Haskell is useful for anything besides university projects, in the real world at least).

Swift has many nice similarities with Rust and other current languages from a pure syntactical/language perspective, while aiming to be a GP language. It definitely serves a broader audience than Go, whose main target is server programming. Swift's main target is Apple UI programming, however, the language is capable of so much more.

Which is why Open Source is good news. It may make it escape its Apple box.


Why the cheap swipe at Haskell? We use Haskell very productively in the real world. You can take a look at some of our blog posts here, if you want to learn more: https://tech.channable.com/

Here is one post about writing an Aho-Corasick implementation in Haskell which is as fast as the fastest Rust implementation: https://tech.channable.com/posts/2019-03-13-how-we-made-hask...


I strongly disagree with your view of Rust.

While it’s true that Rust shows promise in security and embedded, some of the early adopters have been in the server side and microservices.


I'm exploring Rust in embedded (for fun-time projects), and there is a lot of activity across the programming spectrum. I think Rust has a good story for anyone doing multi-threaded programming and values correctness.


>Rust serves a niche market of systems programming that is so security focused that C/C++ just won't do. It's not useful for anything else (at least not anymore than Haskell is useful for anything besides university projects, in the real world at least).

You could call Java niche for the same reason. Some of us want to write native, low-overhead code in a language that has a sane design (only disparaging C++ here). It's about time C++ had more alternatives.


What do you mean by "so security focused that C/C++ just won't do"?

Anything on the internet should be cautious of memory unsafety issues. I wouldn't consider something like an image-processing library 'security focused', yet I would trust one written in Rust over one written in C.


Harsh words. I agree with the statement, but Rust is used not only for system programming and Haskell has its niche beyond university.

I think Swift is where .NET was in its first decade. While capable of so much more, it is limited by its designers and primary purpose, so it cannot go beyond them. The tight coupling to UI products is a burden for an ecosystem to carry. Java won the backend not with its UI; JavaScript won with node/npm before Angular reset the frontend; and C# only reached spaces it had not been before (APIs, lambdas, etc.) once .NET Core pushed it there.


I used to be a bit of a Swift fan boy, but then I switched to Rust (I was a Rust fanboy for even longer but thought that Swift was better for my use case which is a native macos app). I have not looked back. The Swift ecosystem is mostly people wrapping Apple APIs.

God help you if you want to have a swift package with some metal code in it. Rust cargo manages this without a hiccup.

Swift is nice as long as daddy Apple had your use case in mind, God forbid you have to tweak something. Rust feels more "timeless".

Swift for TF is dead.


For a while I was looking aggressively at using swift everywhere, but then I tried rust and read this https://v4.chriskrycho.com/rust-and-swift.html and gained a new respect for Rust and have been using it whenever I can since.


Boy that is really old. And oppressively detailed. And ends up sounding like a Coke vs Pepsi argument. Do I really care about the minutia of minor features when I can build the same things with both tools?

For server side I’m probably going to use Rust, for MacOS or iOS I’m certainly using Swift, for Android I choose Kotlin, and for Windows I would choose suicide.


It definitely is old, but a lot of the points still stand with respect to swift syntax.

Anyway, what I'm really getting at is that I was trying to use Swift for things outside of iOS work to share code and it wasn't worth it. So agreed on your whole second sentence haha


C++17/20 on Visual Studio 2019 with vcpkg is a pretty nice place to be for Windows development.

Oh, unless you're developing GUI stuff. In which case, good luck. Use Qt or something, I dunno.


WinUI is quite alright.


I'm using rust and metal on macos and it's pretty nice. I'll switch to wgpu eventually, it's similar enough to metal.


There's nothing about Swift that has me interested, curious or excited. That said, I haven't done much research into it. What is new and interesting about Swift?

From looking through the swift official site, it just seems like a new C# or Java with ahead of time compilation.

The only slightly unique bit I can find is that the syntax is a bit more JavaScript inspired.

Am I missing something? Or that's it? It's just the C# of Apple?

For example, Rust has borrow checking for zero cost memory safety.

Go has communicating sequential processes and its accompanying goroutines.

Haskell is a fully pure language.

Python has cool indent based syntax that looks like pseudocode.

Ruby is objects all the way down and has this unique concept of blocks for passing code around to be yielded.

Clojure has immutable persistent data-structures as default and software transactional memory, while also being a Lisp bringing macros and all to the JVM.

Erlang has actor concurrency.

Etc.

So in a similar vein, what would be Swift's innovation or originality here?


Firstly, what's wrong with being “the C# of Apple”? I think they wanted a language that can coexist with Objective-C libraries, is reasonably memory-efficient (adding RAM to phones decreases battery life), and that they could control, and so they created it.

They certainly didn’t just make run-of-the mill choices for all language features. Some non-standard design choices they made (for better or for worse):

- Reference counting without any automated cycle-breaking

- Collections are value types (implemented somewhat efficiently by having copy-on-write collections)

- the Character type is closer to what ‘normal’ people think a character is (https://developer.apple.com/documentation/swift/character)

- consequently, string length is closer to what users who don’t know Unicode internals expect it to be.

- Arrays are ‘different’ (‘inherited’ from NSArray. See https://ridiculousfish.com/blog/posts/array.html)

- protocols are a bit like interfaces, but protocol conformance can retroactively be added to classes, even to ones you didn’t write or don’t have the source code of.
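
On that last point, a minimal sketch of what retroactive conformance looks like (the protocol is made up; `Int` stands in for a type "you didn't write"):

    protocol Describable {
        var summary: String { get }
    }
    extension Int: Describable {           // conformance added after the fact
        var summary: String { "the integer \(self)" }
    }
    print(7.summary)   // "the integer 7"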


> what's wrong with being “the C# of Apple”?

Oh nothing. It makes sense for Apple to want a more modern language to replace the now aging Objective C. In turn, that meant Swift was not on my list of languages to try, but instead on my "languages I'll possibly be forced to use one day" list, just as Java, C#, JS, and the rest are, simply by being de facto languages for some platform that, depending on what you work on, you have no choice over.

But then I started seeing people in the comments claiming things like: "Swift is one of the most interesting languages right now". So I was like... Hum okay, what did I overlook?

Now, I'd be okay with people saying that it improves drastically on C# and Java. Maybe people think Swift is a dramatic improvement because of the sum of its parts, just little details here and there that end up making it much better, and they think that in turn could lead Swift to replace both Java and C# if it gained support on other platforms. That could make it quite interesting as well. So I'd be open to hearing those arguments too.


Swift takes a lot of good choices from other languages and puts them together in a cohesive package. It's not necessarily something on the bleeding edge of a new frontier, going someplace that nothing else has gone, but it is a pleasant way to use fairly recent programming language theory in your day-to-day code.


> the Character type is closer to what ‘normal’ people think a character is

I'm sure I'm missing some nuanced downside to this, but this sounds fantastic. I've never fully understood the difference between Unicode code points and code units[0], and would love to think less about this sort of thing in Java (for ex. when using String#toCharArray).

[0] I'm just regurgitating language from the String javadoc, this sentence might not even make sense.


Downsides: indexing into a string isn’t O(1). To find, say, the 200th character in a string, you have to go through the string data from the start.

So (not Swift, nor, AFAIK, any other existing language)

  for i = 0 to stringlen(s)
    print s(i)
would be O(n²), so slow for long strings. It’s rare to have code that accesses the n-th character of a string without accessing all earlier ones, though, and idiomatic code

  for c in s
    print c
can be efficient (not as efficient as iterating over fixed size units, but not dramatic, either)
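
Swift itself nudges you toward the efficient form: String has no integer subscript, so the accidental O(n²) loop is hard to write, and index arithmetic is explicit. A quick sketch:

    let s = "café au lait"
    for c in s {                                  // O(n): iterates grapheme clusters
        print(c)
    }
    let i = s.index(s.startIndex, offsetBy: 3)    // explicit, O(n) index computation
    print(s[i])                                   // "é"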

Also, requiring String to know of what code points in Unicode are “combining” means carrying a few tables around in the implementation. That could be problematic when porting Swift to devices with small amounts of memory. It also, I think, ties Swift to a version of Unicode. You cannot predict what code points in future versions will be combining characters (https://en.wikipedia.org/wiki/Combining_character)

I also think there are a few corners where Swift doesn’t reach the goal of “a character is what naive users expect”. Ligatures in Unicode might be problematic. For example, the ligature “ﬂ” is a single Unicode code point, but, ideally, it would be two characters, “f” and “l”.


I could see issues if using it for splitting strings, but I guess it also depends on what Swift assumes normal people think a character is.

Most normal people assume Unicode graphemes to be characters. That would be the set of symbols you'd consider a single letter if you were to visually count the number of symbols in the string.

In Unicode though, some code points are non visible, yet are part of the string. So for example, the character at i+1 for some match against a letter might not be the next visible letter. I can see this causing issue either way.

Honestly, I'd say it's best to just learn this, or you'll have some weird bugs one way or another.

Basically, you have code units: those are the smallest chunks of bits that have meaning in an encoding and thus can be sent over a wire, streamed or decoded. In the UTF-16 encoding, which is what the JVM uses, code units are 16 bits.

The history helps. In the beginning, each character in Unicode was 16 bits, so you could represent every Unicode character in 16 bits. Later, more characters were added and they no longer fit, so UTF-16 (where the 16 tells you a code unit is 16 bits) could no longer represent every character in a single 16-bit unit. Thus surrogate pairs were invented: a character outside the original range is encoded as two 16-bit code units. Together the pair represents the new characters that didn't fit in the single 16-bit range.

So nowadays it actually takes 21 bits to represent every Unicode character. There are three encodings: UTF-8, UTF-16 and UTF-32. Their numbers correspond to the bit size of their code units. In UTF-8 we break each character into 8-bit units, in UTF-16 into 16-bit units, and in UTF-32 into 32-bit units.

Ok, now back to measuring length. We can say: tell me the number of code units used to represent the string. That's what Java's string.length() does. Or we can say: tell me the number of bytes needed to represent the string (a byte is 8 bits). That's what converting to a byte array and getting the length of the array does. Or you can say: tell me the number of code points (Unicode characters) the string contains. In UTF-16 a code point can be 16 bits (composed of one unit only), or it can be 32 bits (composed of two units). But not all Unicode characters are visible, so the user might be surprised to get a length that is greater than what they count visually. So you can also get the number of graphemes in the string, which are "visual characters". In Java you can use BreakIterator to count those. That said, Unicode graphemes don't always match what each speaker of each language would consider a character. That's because, for example, some languages might consider a combination of two letters to be one character with one pronunciation, etc.

Finally, I'll say there's another interesting length. It's the font width in terms of physical measures like pixel count.

So basically, each measure might be the most appropriate depending on what you're measuring for.

I think the real issue is that we haven't adopted Unicode's terminology yet. You should try to start referring to things as code units, code points, graphemes and font width/height, and stop using the words character or letter.

And then just make yourself a little library that has string.codeUnitCount(), string.codePointCount(), string.graphemeCount() and string.pixelWidth().

And when you don't care about the difference between these, I'd argue that code unit count is the best default for length, which is why I don't actually disagree with string.length counting code units. Cause I think of string.length from the perspective of looping over the smallest units of the string.
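
Since the thread is about Swift anyway, its standard String views expose exactly these counts side by side (a small sketch; the exact numbers depend on how the accented character is composed):

    let s = "cafe\u{301}"             // "café" written with a combining acute accent
    print(s.count)                    // 4  graphemes (Characters)
    print(s.unicodeScalars.count)     // 5  code points
    print(s.utf16.count)              // 5  UTF-16 code units
    print(s.utf8.count)               // 6  UTF-8 code units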


To make sure I have this right:

Graphemes are comprised of code points, code points are comprised of code units, and code units are a chunk of binary of variable length depending on the UTF flavor.

UTF-8 uses smaller code units so that strings containing only ASCII (for ex.) are of optimal size (in terms of memory). UTF-32 would mean each "character" takes up 32 bits regardless.

I think the String javadoc could be clearer here. The class overview mentions these concepts but the methods are not as forthcoming.

Thanks!


You got it. I find this SO does a good concise explanation for them: https://stackoverflow.com/a/27331885/172272


I am unsure how your arrays link is relevant?


You’re right. I had forgotten that the bridging between Swift Arrays and NSArray isn’t purely 1:1 (https://developer.apple.com/documentation/swift/array#284673...). (IIRC, it used to be)

Arrays still are not guaranteed to be contiguous in memory, though. If you want that, there’s ContiguousArray (https://developer.apple.com/documentation/swift/contiguousar...)


> - Collections are value types (implemented somewhat efficiently by having copy-on-write collections)

Isn't copy on write data structures mostly discouraged idea these days?


Swift already uses reference counting for memory management, so copy-on-write is basically free -- it just checks to see if the collection's reference count is greater than 1 before writing.
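
A minimal sketch of that mechanism, not the actual standard library implementation (`isKnownUniquelyReferenced` is a real standard library function; the wrapper type here is made up):

    final class Storage {
        var values: [Int] = []
    }
    struct CoWArray {
        private var storage = Storage()
        mutating func append(_ value: Int) {
            if !isKnownUniquelyReferenced(&storage) {   // someone else shares the buffer
                let copy = Storage()
                copy.values = storage.values            // the actual copy only happens here
                storage = copy
            }
            storage.values.append(value)                // otherwise mutate in place
        }
    }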


It's decidedly not free. Reference counting in Swift can be (is?) a major bottleneck. See this project: https://github.com/ixy-languages/ixy-languages and its discussion on Swift performance: https://github.com/ixy-languages/ixy.swift/blob/master/perfo...


It’s free since the language is already using reference counting. But yes, retain/release traffic is often a huge problem in performance-critical code.


In my personal experience, Swift occupies the sweet spot in between Rust and something like Python for application-level programming. It's a modern and well-designed language with a Rust-like strong and expressive type system, including useful generics, powerful enums, and safe error handling. It also has fairly straightforward memory management, really straightforward polymorphism, seamless C interop, and good performance -- though it's not quite as fast as Rust because it prioritizes ease of use over zero-cost abstractions.

So no, it doesn't really have a whole lot of "innovation or originality" -- but it essentially takes some of the best ideas of modern language design and wraps them into one general-purpose language for application development, and as a result I find it to be a really useful and practical language.


The best thing about Swift is its ML-style type system (incidentally this is also one of the best things about Rust, and one of the reasons why Rust is competitive in "higher level" application development scenarios as well as "systems programming").

For me Sum Types and pattern matching in particular are an absolutely massive improvement over languages that only have classes/structs. They make statically typed languages feel almost as expressive as dynamic ones.
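
For anyone who hasn't seen it, a small sketch of that style in Swift (types made up for illustration):

    enum PaymentMethod {
        case cash
        case card(number: String)
        case voucher(code: String, amount: Int)
    }
    func describe(_ method: PaymentMethod) -> String {
        switch method {                              // the compiler enforces exhaustiveness
        case .cash:
            return "cash"
        case .card(let number):
            return "card ending in \(number.suffix(4))"
        case .voucher(let code, let amount):
            return "voucher \(code) worth \(amount)"
        }
    }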


TypeScript has this too


Sort of. Although I will say that the upcoming TypeScript 4.1 with its string-template types potentially opens up TypeScript to limitless power. Imagine the expressiveness and power of tcl, but as your type system / type definitions.

The weird thing about TypeScript is that this incredible power and expressiveness must be backported into the runtime environment. And because of that, the promises that the type system makes are unreliable and also lag way behind the actual language features that are implemented.

I hope these criticisms are read in the proper context: I absolutely love TypeScript, and find it to be a really special, unique oddball in the world of language design. I wish it had been ReasonML instead of TypeScript, but TypeScript is incrementally working its way there and bringing the whole JS community with it, so I can’t complain.

Edit: Prediction: prepare for switch statements to become popular as the typescript community gradually adopts exhaustive pattern-matching types.


What do you mean by “the promises that the type system makes are unreliable”?


The type of an underlying variable at runtime may be different than what the typescript type declares. Practically, this would probably be due to either poor, incorrect, out-of-sync, or incomplete type definitions.

A poor example, but this compiles without error and crashes at runtime:

let x: string = "foo"; (x as any) = 1; x.substr(0);

Now, technically, you can do this sort of thing in most, if not all, languages, but this happens organically more often with typescript due to many libraries being written for plain javascript and being looser on their types, or type definitions getting out of sync with the libraries they are written for.


> Am I missing something?

Yes: Swift is now (and has been for a while) the standard language for developing iOS and MacOS apps. That makes it important completely independent of any features it may or may not have.


That is my current impression. Practically speaking, this is huge of course, but it's nothing inherent to Swift. Apple could have embraced any language, no matter what it was like, good or bad, and it would be in this privileged position that would undoubtedly make it successful.

And up until now, I was thinking I'd try Swift the day I'm forced to because I'm working on an app for Mac or iOS.

But then I started seeing some comments like: "For me, Swift is currently the most interesting language". So I thought I might be overlooking something.

The most innovative feature I can find is null safety, which is alright, but I guess I've had my fun with it with Kotlin already. Someone mentioned good ABI compatibility, that is interesting, and I'll need to read up on that. Others say, it's just a nice mix of modern features in a well maintained package. That's great as well, you could argue C# and Java being older have some less modern aspects lingering. But that's nothing that makes me want to jump in it and try to use it for my next project.


While a huge boon for the poor ObjC devs, this doesn't seem like much of a thing for the rest of the world.


Also, through the various *Kit libraries and now (sooner or later) SwiftUI, Apple is staking out a rapid-development territory that will be hard for the other ecosystem(s?) to match.

So, if you have to choose between making your app for iOS first, or Android first, then in addition to any consideration of markets which might or might not apply to your business model, you will have a very strong incentive to choose Apple for time to production.

I would love to see more serious competition on that front.


Swift has done some very original things around ABI stability: https://gankra.github.io/blah/swift-abi/


The primary feature of Swift is that it is a safer language (via many of the same mechanisms as the languages you referred to above), while being extremely compatible with the very large API surface area of iOS/macOS (which is both in C and Objective-C), and allowing incremental adoption without having to rewrite existing application/library code.

To have adopted any of those other languages requires dramatic changes to a huge amount of platform APIs, along with on-going challenges of divergence. Put another way, any other language would have basically forked the entire platform.

The Swift developers definitely have ambitions beyond being a compatible with Apple's existing SDKs, but that's the fundamental thing Swift does that no other language can do.


In terms of syntax it's much closer to Kotlin than it is C# (though still different).

It has seamless interop with Objective-C and C, and can be mixed with about any language LLVM/Clang can handle (as long as you have C headers), compiling it all down to a small, self-contained binary.

For now it's mainly useful on Apple platforms, but there's interest and efforts in cross platform from both Apple and the Swift community, so there's a lot of potential for it to be a great language for the main body of application code, with individual bits of functionality written in languages that make the most sense without too many layers of adapters or wrappers.


> seamless interop with Objective-C and C

The interop actually has lots of seams. A huge engineering effort had to be made and has been made to paper over those seams, but they are definitely there.

Calling conventions are gratuitously incompatible, objects have to be converted at API boundaries at significant cost, the meta-systems are incompatible, etc.


I think the person meant in terms of an identifiable trait/philosophy/focus. Swift, C#, and maybe even Kotlin are languages that are just big piles of features and do not really have a principle behind their design, like the dead-simple philosophy of Go or the laziness of Haskell.


at a time when every language is trying to be edgy/different, it'd be nice to have an actual language that is comfortable and boring without being dumb.

Seems like a viable niche to me.


this is not a boring language; it has a lot of complexity. I prefer a simple but interesting language; F# comes to mind.


Boring in the sense of not being, well, Rust, or a lisp, or an ML derivative or...


The people that don't like edgy PL features normally also prefer a clean and simple language with orthogonal design not the big-pile-of-features kind of language though.


The really cool thing about Swift is its lack of innovation or originality. It is ALGOL 2014. It tries to integrate a few more patterns into the structured programming paradigm.


You realize you are speaking of a 6 year old language right? For some reason, it sounds like you are referring to some newly launched shiny new language that nobody has ever done anything with.


You do realize Swift has only been officially supported for early adopters on Windows for less than a week now? Ubuntu was the only officially supported distribution of Linux until earlier this year. Were any of the Linux distributions creating their own Swift packages?


that doesn't answer the question


I think it’s a good point actually. I think it’s reasonable to say swift isn’t intended as a language to experiment with a specific concept, it’s a six-year-old production language designed for a general set of use cases where it’s good at a bunch of things that the languages it is replacing weren’t as good at. There are some interesting discussions to be had about tradeoffs it makes but a question like “what’s the one thing swift does that other languages don’t do” is not the right way to evaluate a production language. It might explain why some people don’t find it interesting but swift was built to be a better tool, not merely an interesting one.

A lot of really good programming languages are “uninteresting” in this way, and that’s probably a good sign, not a bad one.


There was no point made other than the language is 6 years old and "someone" uses it. That's nice. I'm sure I can find other languages 6 years or older which people use and are not worth using.

How long has Swift been available outside of Apple platforms? And I don't mean experimentally available either. They only posted an announcement a few days ago where Swift is deemed ready for early adopters on Windows. That certainly does not sound production quality there.


Swift has been officially available on Linux since it was open sourced in Dec. 2015.


The question was in the context of Swift open sourcing something. If it's an Apple-ecosystem-specific language, it's not much use to the general audience.


Swift is slowly evolving into a formidable competitor to C++ and Rust. With SwiftNIO, Swift on Windows and now this multiplatform System library, Swift is truly on its way to becoming a mainstream systems language. Exciting times indeed!


Only that Apple's track record for supporting anything long term, cross platform, "open", is non-existent.

At this point I don't even care about how good their language is. Apple has a very, very long way to go before I can trust them on anything of that magnitude. I would instead assume they WILL pull the rug from under developers for any random reason.

(And I say this as an iOS developer)


One counterexample that comes to mind is CUPS (the printing system). LLVM and WebKit are also extremely non-trivial open source (and cross platform) contributions.

Most Linux users here likely use one or two out of three of these on a weekly basis. Many likely use all three on a weekly basis. Some use all three daily.


Apple has removed most of the stuff that's needed for non-Mac platforms from CUPS over the years; that functionality is now maintained separately by the Linux community.

WebKit is probably pretty rare outside macos, most browsers will be built from Chromium/Blink instead.

LLVM is developed by many companies these days, including Intel, Sony and Google.


> Apple has removed most stuff that's needed for non-mac platforms from CUPS over the years

Wrong. Upstream CUPS on Apple's github has support for everything, even systemd.

> WebKit is probably pretty rare outside macos, most browsers will be built from Chromium/Blink

GTK's WebView (and browsers like Epiphany aka GNOME Web) use WebKit, complete with Apple's web inspector / devtools.


https://github.com/OpenPrinting and cups-filters in particular.

> GTK's WebView (and browsers like Epiphany aka GNOME Web) use WebKit, complete with Apple's web inspector / devtools.

Good point, Qt's legacy HTML engine (Qt WebKit) is, well, WebKit, too. The newer Qt WebEngine uses Blink. I suspect this is a common pattern with components built on KHTML/WebKit before Chromium became Blink. In any case, Blink is clearly the fork that won.


At least one command line tool in my toolbox uses webkit. (wkhtmltopdf)


On Apple platforms yes, on the other ones still needs to learn to walk, before it can fight.


OP said systems language, and what’s Swift lacking there on other platforms?


Technically? Nothing, you can do many things with Swift as a systems level language, and there are some interesting aspects to it. Swift was my first introduction to the power of traits and protocols, and I'll be forever fond of it for that.

However, for the project I was on, when I was working on it, Swift on Linux was a massive pain in the ass, mostly due to documentation and dependency hell. Documentation was extremely useless; the number of times I was told "oh yes there's a swift way to do that!" only to find out the 'swift' way was to wrap a Cocoa library or just drop in some Objective-C module. If I recall correctly, there was an issue with pthreads not being properly implemented at the time, relying on some hacks to get it to 'work' while they figured out a 1.0 implementation. We got it to work, eventually, but the hoops we had to jump through really pushed our team to Rust as our general purpose systems language.

I'm glad they seem to have got it working, but I'm never subjecting myself to trying to get Apple's nonsense working outside of iOS or macOS again.


Supporting a language fully on a variety of non-Apple platforms is a massive undertaking. Even assuming for the sake of argument, unlimited funding, there's still the issue of finding the right talent, as well as expending limited managerial bandwidth for oversight & prioritization.

I think the Linux support that already exists is an awesome start. But the only realistic way to see more from Apple on this is if they themselves start to deploy critical Swift web-services on their own backend infra. (I'm assuming here that they run Linux systems somewhere in their stack.) That will give them the incentive to take it to the next level. Until then it's largely left to the community to find the energy and time to do this.


Apparently Google, Microsoft, Oracle, IBM, Azul, Mozilla managed to do it.

Apple has enough in the bank to do it as well.


So, you had problems with the exact same thing Apple set out to fix with open System library? I’d say they are on the right track then.


Sane APIs (with the exception, of course, of SwiftUI, which looked beyond Apple's ecosystem for inspiration).

If you've been putting up with the [lack of] iOS API ergonomics for years, by all means, Swift will look like a substantial step forward. If you're used to thoughtful and quality APIs, the marriage between Swift and NSDrunkApiDesign is an incredibly unattractive proposition.

I really just refuse to tolerate those APIs on platforms (.Net) and languages (Rust stdlib) where more sensible APIs are available. If there were an alternative stdlib for Swift that lacked/wrapped the iOS quirks, I'd probably be all for it.


Swift ships with an extremely polished, well designed standard library out-of-the-box. You can use it with a simple "import Swift"…wait, you don't even need to do that, because it's available by default.


Why do I then see NSGarbage all over the place in Swift code? Swift is a language facade over Objective C.


In newer versions of Swift the need for almost all prefixing has been removed.

You can still say it's a facade when using things like UIKit, but as we've seen in the past few years w/ SwiftUI that is changing. It's already no longer the case if you're doing anything related to heavy duty computation [Accelerate, Metal & others have Swift-specific APIs now] or networking [SwiftNIO].


Because Swift allows interoperability with Objective-C frameworks, such as the ones that ship with Apple's platforms. It's easy to see that Swift is not a language façade over Objective-C because you can almost trivially write a type that cannot be represented in Objective-C.
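
For instance, a tiny sketch: neither generics nor enums with associated values have any Objective-C representation at all:

    enum FetchResult<Value> {          // @objc would be rejected on this declaration
        case success(Value)
        case failure(Error)
    }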


But you still have to use the awful Objective-C idioms. That's the problem. I was not referring to the limited Swift wrappers; in fact, I explicitly excluded them.


You have to use Objective-C idioms because Apple wrote that code in Objective-C, not Swift. If your complaint is "why can't Swift take my Objective-C code and, beyond providing a fairly decent interface to it, completely change the API so that it fits perfectly into Swift regardless of issues like ownership and interface design", that's just an impossible bar to set for it.


Rust has done that, with multiple platforms.

.Net has done that, with multiple platforms.

Dart has done that, with multiple platforms.

Apple doesn't care for anything except iOS. That is why Swift won't be taken seriously on anything except iOS.


As far as I am aware, all of those require someone to explicitly go in and create a wrapper, which is worse than what Swift gives you out of the box. So how are they better?


Because they all have 1st-party wrappers.


Swift is adding their own as you can see, they’ve been using Foundation until now because it’s really not that bad of an interface. Also, I disagree with the claim that Rust has first-party wrappers, it barely has first-party anything to be honest.


Truly an exceptionally beautiful and polished stdlib: https://twitter.com/dmitriid/status/1201441652507844608

I especially like that you have file operations defined on String type.


The first two lines of your example interact with the standard library.


The first two lines of which version?

Also, how is it that a "well-designed and polished" standard library doesn't have a means to write to a file?


Either version. And, FWIW, a standard library that is well-designed and polished can be spartan; Swift's has been on the operating system interface side because Foundation exists.


That is why the Linux examples are full of import Glibc.
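
For reference, the pattern being referred to typically looks like this in cross-platform Swift code (a sketch; the path is arbitrary):

    #if canImport(Glibc)
    import Glibc          // Linux: the C library imported directly
    #else
    import Darwin         // Apple platforms
    #endif

    // Plain C stdio, since the standard library itself has no file APIs:
    if let file = fopen("/tmp/example.txt", "r") {
        var buffer = [CChar](repeating: 0, count: 64)
        fgets(&buffer, Int32(buffer.count), file)
        fclose(file)
    }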


Idiomatic libraries for a cross-platform UI, same as every language, system or non.

If you're not targeting an Apple UI, the choice of Swift is entirely confusing, because it has even less of an ecosystem to build off of than where it's already popular.


Well, it has only been available on Windows specifically for early adopters for a few days now. I imagine it has a long ways to go to prove itself there.


What's the USP for Swift? To my eye it looks too far away from C++ to catch on without a big selling point, but it also doesn't have a central thesis at its core like Rust does.


Swift can be a great alternative to C++, and its learning curve is gentler than Rust's. Of course it has a long way to go, but nevertheless it is an interesting language.


"Swift is slowly evolving into a formidable competitor to ... Rust"

"Swift uses Automatic Reference Counting (ARC) to manage memory."


Just like many Rust code bases that are full of Rc<RefCell<>>, which don't get optimised away by the compiler.


Even that poor case in Rust is already better than what Swift can do, because Swift's refcounting has to be atomic. Swift is too dynamic to prove objects don't escape threads, so it can't optimize that cost away.

Rust's Rc/Arc can still selectively use borrowing, which guarantees the level of efficiency that in Swift may or may not happen depending on the optimizer. A borrowed Arc can often stay borrowed across many non-inlined method calls, which you're unlikely to get in Swift.


That assuming that the developer hasn't plagued the code with unnecessary clone() in an attempt to make everything finally compile.


I really have to learn more about Rust.


That's what they said about Go, but the inclusion of a GC made it popular only among Python-like programmers.


And Java developers. And Ruby developers. And node developers.


Most Java devs I know wouldn't touch go. Lack of generics, proper collection classes and having to manually iterate everything isn't very attractive to people who have these features.


Harsh take, but IMO you are irrelevant & asleep as a language unless you have something to offer on:

https://github.com/quicwg/base-drafts/wiki/Implementations

Phrased another way, your language should be participating in the future of HTTP, which is HTTP3 aka HTTP-over-QUIC. Swift's absence here indicates, to me, that Swift does not take itself seriously as a tool for delivering server side systems.

IBM dropping Swift was another pretty strong indicator to me. They really tried, they wanted to believe. But this seems like an insular, closed-off world, however much they keep going to great lengths, like this effort here, to open themselves up & make themselves accessible to the rest of the world. Their attempts to build bridges haven't seemed to make people very interested in transiting over to their parcel of land; the advantages aren't clear, choices of language just don't seem that relevant, there are some advantages but overall the day-to-day won't be radically different, except that you'll be an outsider hanging out with a bunch of people mostly doing iOS. I'd be happy to be wrong, it seems decent enough, maybe there's some real differentiation that truly improves life, but it also seems like it's a C# type situation, where it exists & is developed only to keep the natives in good spirits & from getting restless.

I want to reiterate that I think Swift seems pretty ok. More than not-bad. But languages just don't seem that relevant to me any more. Rust is being extra-strict, but otherwise, the feature sets of languages don't seem very notable. There are some preferences & styles, some community favor that distinguishes languages, but by & large the work is not that different. I'm not sure what I would suggest to Swift to help themselves rise above, to underscore their own meaningfulness, in this kind of murky abyss-like scenario I've painted. I do think trying to win some AI champions makes sense; Python seems to be unshakeable there, but that also means there's opportunity for better/different. Getting some web-developer/web-platform folk on your side is usually a pretty big win, but not easy, & very factionalized already. It's weird days for languages.


> your language should be participating in the future of HTTP

Nah. It's fine to leave any networking to external libraries.


If you had clicked the link, you'd see that it is primarily a list of libraries. Admittedly I had not considered that Swift can use native C libraries. I'm not sure whether it makes sense to expect a wrapper library or not, or whether that would help. But overall, I'm pretty sure no one is doing or advancing towards HTTP3 in Swift at the moment, which makes me feel like the Swift community is not a serious player for general development.

Some trivia, not necessarily a recommendation: Node.js is the only runtime I know of working towards HTTP3 support in the platform itself (also in this link, evidence that I may be biased in my priorities):

https://github.com/nodejs/node/issues/23064


This is great. I played around with Swift three years ago and tried to use the socket APIs. In the end, it felt like I was essentially writing C just with prettier syntax. Now that we have safer languages we need safer foundational pieces too.

As a side note, the approach that Swift is taking here is exactly what I really like about the Zig programming language. The standard library nearly exclusively takes the approach of wrapping the system calls so you get nice Zig error types rather than the archaic return codes. These things go a long way in making correct usage of really essential but hard to use APIs possible.
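
That's essentially the direction Swift System takes too: typed descriptors and thrown `Errno` values instead of -1/errno. A rough sketch based on the swift-system package's announced API (module name and exact call signatures should be treated as approximate):

    import SystemPackage    // "import System" when using Apple's SDKs

    do {
        let fd = try FileDescriptor.open("/tmp/example.txt", .readOnly)
        try fd.closeAfter {
            var buffer = [UInt8](repeating: 0, count: 64)
            let n = try buffer.withUnsafeMutableBytes { try fd.read(into: $0) }
            print("read \(n) bytes")
        }
    } catch {
        print("failed: \(error)")   // a typed error such as Errno.noSuchFileOrDirectory, not a raw return code
    }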


We always had them.

What we have now is the acknowledgement that after 60 years even 10x C developers don't write corruption free code.


I initially was excited that Swift https://www.swift.com/ had opened its payment APIs.


For those that don’t know Swift is:

> The Society for Worldwide Interbank Financial Telecommunication, legally S.W.I.F.T. SCRL, provides a network that enables financial institutions worldwide to send and receive information about financial transactions in a secure, standardized and reliable environment.

https://en.wikipedia.org/wiki/Society_for_Worldwide_Interban...


Your comment is funny, but I bet the majority of people here won't get your joke.

Most people on this thread are the ones that care a lot about the Swift programming language. Most of the ones that would understand the joke are not reading this thread.


I knew there was no chance of that, so I parsed it correctly.


The article states that operating systems "offer a C interface"; that's not really true of Linux, at least. It provides a series of system calls on which a C API is built. It's a subtle difference, but an important one if you're going to provide a runtime system abstraction.


I think that's fair to say for Linux, since it aims for syscall compatibility, but I don't think that's true for Windows or macOS, where (unless I'm mistaken) your only options are to use libc (or an equivalent) or to constantly track breaking changes to the syscall interface.


I don’t recall any breaking changes on Windows syscall interface, can you give an example?


The Windows syscall interface changes all the time. [1] has a table of Windows syscall numbers. Just looking at the first syscall, NtAcceptConnectPort, you can see that all Windows versions up to and including 7 used 0x60; Windows 8 changed it to 0x61; 8.1 changed it to 1; and then 10 changed it to 2.

1. https://j00ru.vexillium.org/syscalls/nt/64/


I see, that's interesting. I guess the software ecosystem almost always uses ntdll instead of raw syscalls. I'm still curious why there's been a reshuffling at all.


The kernel has been refactored several times.

The most famous one being MinWin

https://arstechnica.com/information-technology/2007/10/core-...

During Windows 10 that also happened a couple of times.


You don't do direct syscalls on Windows; you make them through system DLLs.


There is nothing preventing you from doing that though, that’s why I was curious.


Hopefully your brain will prevent you from doing that. When OS developers say that the C library is the public interface and syscalls are private, you listen and use the C library.

Golang authors completely ignored FreeBSD's policy and went with syscalls directly because "they don't change in practice" and that is infuriating. Porting to new architectures is much harder, for one.


If you're going to be so pedantic as to say that Linux doesn't offer a C interface, then you also have to acknowledge that Linux is not an operating system.

If Linux is not an operating system, what is? GNU/Linux is a very popular operating system offering a C interface, as is Android Linux. I do not know of any Linux-based operating systems not offering a C interface.


The C API for syscalls is mostly (almost entirely) just thin, 1:1 wrappers around the system calls themselves. In FreeBSD most of it is automatically generated from the same sys/kern/syscalls.master file that is used to generate the kernel syscall table.


Can you elaborate on the difference you're pointing to here?

I'm seeing a distinction without a difference, but clearly you aren't.


You can't write code that expects to call C functions to interoperate with the Linux system interface itself. You invoke the syscalls directly, and the kernel's calling convention is not a C ABI.

There is some functionality you can access from glibc at link time, but not all of it (as some are implemented as macros). Even then, that's glibc providing the interface, not Linux.

As the other commenter said, this is subtle (and probably non-consequential) for most people, but if you are writing an abstraction over the system interfaces, it becomes very significant to your codebase.

As an example, you can lookup how Go does syscalls on Linux. It has some important consequences for their low-level design decisions: https://utcc.utoronto.ca/~cks/space/blog/programming/GoSched...


OTOH, Linux's setuid and similar interfaces are per-thread, not per-process as required by POSIX and as most people would expect--particularly in the context of software security. glibc and musl have to emulate the POSIX behavior, and doing it correctly is extremely tricky. See https://ewontfix.com/17/

Linux's various APIs used for container frameworks also tend to be per-thread, not per-process; that can be both useful and a giant headache, depending on context. It's somewhat ironic that Go was used for Docker as Go is pretty much the worst possible language you could choose in this regard. Go deliberately and thoroughly obscures native thread and process semantics. IIRC, Docker was already well along and established before Go even introduced an API for pinning a goroutine to a machine (kernel) thread. I once ran across a comment where the author of (I think) runc lamented his choice of Go. But I haven't been able to find it again, so it's entirely possible it's a misattribution on my part.


It's a bit more complicated than some of the other posters are making it seem:

https://utcc.utoronto.ca/~cks/space/blog/unix/UnixAPIAndCRun...

> A few Unixes explicitly say that the standard C library is the stable API and point of interface with the system; one example is Solaris (and now Illumos). Although they don't casually change the low level system call implementation, as far as I know Illumos officially reserves the right to change all of their actual system calls around, breaking any user space code that isn't dynamically linked to libc. If your code breaks, it's your fault; Illumos told you that dynamic linking to libc is the official API.

> Other Unixes simply do this tacitly and by accretion. For example, on any Unix using nsswitch.conf, it's very difficult to always get the same results for operations like getaddrinfo() without going through the standard C library, because these may use arbitrary and strange dynamically loaded modules that are accessed through libc and require various random libc APIs to work. This points out one of the problems here; once you start (indirectly) calling random bits of the libc API, they may quite reasonably make assumptions about the runtime environment that they're operating in. How to set up a limited standard C library runtime environment is generally not documented; instead the official view is generally 'let the standard C library runtime code start your main() function'.

The kernel's API (which isn't C, but assembly language, as it relies on special opcodes) might be guaranteed stable, as it is in Linux, but even so there are or might be reasons you should call into libc anyway, and take advantage of the official functionality there.
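
Concretely, the sanctioned route from Swift is to import the platform C library module and go through its wrappers. A minimal sketch (hostname and port chosen arbitrarily) that leans on getaddrinfo(), which is exactly the kind of functionality you don't get by bypassing libc:

    #if canImport(Darwin)
    import Darwin
    #else
    import Glibc
    #endif

    // Resolve a name through libc's getaddrinfo(), which may consult
    // nsswitch.conf, mDNS, etc.; none of that is reachable by
    // reimplementing resolution on top of raw syscalls.
    var result: UnsafeMutablePointer<addrinfo>?
    let status = getaddrinfo("example.com", "443", nil, &result)
    if status == 0, let first = result {
        print("resolved, address family:", first.pointee.ai_family)
        freeaddrinfo(result)
    } else {
        print("lookup failed:", String(cString: gai_strerror(status)))
    }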


Not only take advantage of functionality, but also:

- make porting to new architectures easier (have you tried writing syscall wrappers in e.g. golang's insane assembly syntax? Not a fun time);

- not break anyone's fucking LD_PRELOAD hooks!!


Thank you, they were all good answers, but this one gets at why I was confused on the matter.


On many POSIX compliant systems the stable interface to the system is libc, with syscalls being unstable and changing between releases.

Linux itself does not provide a libc or other similar system library implementation (even NT has ntdll as the stable syscall layer), and it’s the syscall numbers and parameters themselves that are the stable interface.


I guess I just meant that you don't have to code against a C ABI to make system calls: it may be more efficient to do the grunt work around the system call in the "local" language and only make the system call at the end.


> It provide a series of system calls on which a C API is built.

I can't see how that isn't offering a C interface.


One thing not mentioned by others so far is that not all system calls available on Linux are actually available in (say) glibc; for example, if you want to access the perf_event subsystem, you'll have to write your own C function to handle it (this is not true for eBPF, IIRC).


As one of the cousin comments said, on Linux the syscall table is defined as stable and various POSIX(-like) C APIs (glibc for example) are built on top.


... thus glibc is one of the C interfaces to which the article refers!

I guess I should just stop since you're making a distinction I just am not getting. I realize glibc doesn't == syscalls, but I don't see how that's relevant to the article.


To be really pedantic, the pure Linux OS doesn't provide the C interface - a layer on top does.


Is this just a wrapper around a subset of system calls? Looking at the code it seems to be, not sure.


Yes, it's a typed, safe wrapper around a subset of Darwin or Glibc (depending on platform) functions. See: https://github.com/apple/swift-system/blob/main/Sources/Syst...


It looks like it:

> a new library for Apple platforms that provides idiomatic interfaces to system calls and low-level currency types

In other words, it's a set of native, type-safe Swift APIs that wrap the usual C system calls.

> System pervasively uses raw representable structs and option sets. These strong types help catch mistakes at compile time and are trivial to convert to and from the weaker C types.

> Errors are thrown using the standard language mechanism and cannot be missed. Further, all system calls interruptible by a signal take a defaulted-true retryOnInterrupt argument, causing them to retry on failure. When combined, these two changes dramatically simplify error and signal handling.

> FilePath is a managed, null-terminated bag-of-bytes that conforms to ExpressibleByStringLiteral — far safer to work with than a UnsafePointer<CChar> [Swift's spelling of `char *`].
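
Put together, basic usage looks roughly like this; a sketch based on the descriptions above, assuming the open-source package's SystemPackage module (on Apple platforms you'd `import System` instead):

    import SystemPackage

    let message = "Hello, world!\n"
    let path: FilePath = "/tmp/log"

    // open(2) with typed options instead of OR'd integer flags; errors are thrown.
    let fd = try FileDescriptor.open(
        path, .writeOnly, options: [.append, .create], permissions: .ownerReadWrite)

    // closeAfter guarantees the descriptor is closed even if writeAll throws.
    try fd.closeAfter {
        _ = try fd.writeAll(message.utf8)
    }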


>causing them to retry on failure

I hope they meant in case of failure with EINTR.



Yes, and I'm glad for it. Not sure if it's "ready for prime time," yet (it sounds like it may still be too "nascent" for me), but I'll have to see if I can use it to tweak stuff like this[0]:

    var md5: String {
        // The reason we are declaring these here is so we don't have to actually import the CC module. We will just grope around and find the entry point, ourselves.
        
        /// This is a cast for [the MD5 function](https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man3/CC_MD5.3cc.html#//apple_ref/doc/man/3cc/CC_MD5). [The convention attribute](https://docs.swift.org/swift-book/ReferenceManual/Attributes.html#ID600) just says that it's a "raw" C function.
        typealias CC_MD5_TYPE = @convention(c) (UnsafeRawPointer, UInt32, UnsafeMutableRawPointer) -> UnsafeMutableRawPointer
        
        // This is a flag, telling the name lookup to happen in the global scope. No dlopen required.
        let RTLD_DEFAULT = UnsafeMutableRawPointer(bitPattern: -2)
        
        // This loads a function pointer with the CommonCrypto MD5 function.
        // [dlsym](https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man3/dlsym.3.html) is a symbol lookup. It finds the symbol in our library, and returns a pointer to it.
        let CC_MD5 = unsafeBitCast(dlsym(RTLD_DEFAULT, "CC_MD5")!, to: CC_MD5_TYPE.self)
        
        // This is the length of the hash
        let CC_MD5_DIGEST_LENGTH = 16
    
        guard let strData = self.data(using: .utf8) else { return "" }
        
        /// Creates an array of unsigned 8 bit integers that contains 16 zeros
        var digest = [UInt8](repeating: 0, count: Int(CC_MD5_DIGEST_LENGTH))

        /// CC_MD5 performs digest calculation and places the result in the caller-supplied buffer for digest (md)
        /// Calls the given closure with a pointer to the underlying unsafe bytes of the strData’s contiguous storage.
        _ = strData.withUnsafeBytes { (inBytes) -> Int in
            // CommonCrypto
            // extern unsigned char *CC_MD5(const void *data, CC_LONG len, unsigned char *md) --|
            // OpenSSL                                                                          |
            // unsigned char *MD5(const unsigned char *d, size_t n, unsigned char *md)        <-|
            if let baseAddr = inBytes.baseAddress {
                _ = CC_MD5(baseAddr, UInt32(strData.count), &digest)
            }

            return 0
        }
        
        // Convert the numerical response to an uppercase hex string.
        return digest.reduce("") { (current, new) -> String in String(format: "\(current)%02X", new) }
    }
As a Swift programmer, the above hurts my heart (but it works very well).

[0] https://github.com/RiftValleySoftware/RVS_Generic_Swift_Tool...


You know you can just call the function directly, right? (Stupid warnings about MD5 being insecure aside…)


Didn’t want to import CC.

This is one of a couple of computed CC functions in a very general-purpose StringProtocol extension. I wanted to reduce as many external requirements as possible. I don’t even like importing Foundation, if I can help it.


    import Foundation
    import CryptoKit

    extension String {
        var md5: String {
            let computed = Insecure.MD5.hash(data: self.data(using: .utf8)!)
            return computed.map { String(format: "%02hhx", $0) }.joined()
        }
    }


This reminds me of Node.js's fs module, which enables interacting with the file system via standard POSIX functions. Probably in 2030 we will have GPT-400 wrapping all of these primitive functions again, and we will be relieved that progress is being made once again.


Except Swift is developed by Apple, with System integrating the *NIX functions within a BSD foundation.

Very shrewd from Apple: with Linux on ARM looking like the next big leap for cloud computing (Amazon etc.), being able to program in Swift natively on a Mac running Apple silicon and host on an ARM box in the cloud, now that the System API is open-sourced, is a very attractive offering.


I looked at Swift a bit recently, for no reason I can remember, and was quite pleasantly surprised. A decent type system, mostly-sane memory management and error handling, easy ways to call out to C - System will make that even better - seems to do OK on performance and library support. It doesn't seem like any kind of breakthrough, but nothing wrong with having another solid choice on the menu.


Swift for Android! The Swift Android compiler allows the standard library to be compiled for Android armv7 targets, so you can execute Swift code on mobile devices running Android. This is where the LLVM architecture comes in, enabling those targets to be reached via an LLVM backend, something Google already does for Android development in C/C++. LLVM thus provides a window into Swift's compatibility with other systems, software, and devices.


What's the advantage of Swift over Rust if I'm looking at ditching C++ on a project that has nothing to do with Apple's ecosystem?

Currently the top two comments suggest that Swift looks compelling for this use case, but I don't really see it.


If you don't need super-high performance, it should in theory be easier to develop in Swift, as the automatic memory management is easier to deal with than Rust's lifetimes/borrow checking. I tried them both out a few years ago, but gave up on Swift because the cross-platform development story was poor. That's gotten better now, it seems, but I'm already comfortable enough writing Rust for everything that I see no reason to switch.


Swift is designed for making nice, concise, clever APIs. It favors convenient abstractions over raw performance. It's very well-suited for GUI applications (that's the #1 task it's been created for). That may extend to application back-ends that have lots of business logic, but don't need to process large amounts of data.

This is in contrast to Rust, where more explicit memory management makes GUI frameworks a bit tedious. Rust is more explicit and gives lower-level of control, but borrow checking favors simple code over abstract interfaces.

To me Swift is more like native TypeScript, and a more practical alternative to JavaScript and Python for app development. Rust is a more direct replacement for C++, because it offers similar level of performance and control over every byte in the program.

Rust's compiler doesn't dare to insert any implicit allocations or non-trivial code itself, and makes the programmer make every choice intentionally. Swift's compiler does whatever it takes to implement whatever abstraction it presents.


Is this just the language? Will we be able to develop iOS apps on Windows?


The core language has been open source for a while. Windows support was added a few days ago. It will not allow you to develop iOS apps on Windows, as the tools to do that are part of Xcode, which is not open source and not cross-platform.

This announcement concerns Swift System, which provides a wrapper over UNIX functions like `open`. Since those functions come from C, they do things like using integers to represent files ("file descriptors") and setting errno to indicate failure. System wraps them into proper Swift functions that throw errors and make full use of the type system. It has been open-sourced and Linux support has been added.
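
For example, a failing call surfaces as a thrown Errno value instead of a -1 return plus a global errno check. A small sketch, assuming the path below doesn't exist and that the open-source module is imported as SystemPackage:

    import SystemPackage

    do {
        _ = try FileDescriptor.open("/no/such/file", .readOnly)
    } catch let error as Errno {
        // The failure arrives as a typed value (here .noSuchFileOrDirectory),
        // not as a magic integer you have to remember to look up.
        print("open failed:", error)
    }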


Yes, just the language. iOS apps are built using the UIKit or SwiftUI GUI frameworks, neither of which has been ported to platforms other than macOS or iOS.

This, and Foundation, are frameworks that allow you to write non-GUI Linux apps in Swift (e.g. Swift on the server) - and these could potentially be ported to Windows as well.


Got it, thanks. <3


Not iOS apps, but windows applications[1].

[1] - https://swift.org/blog/swift-on-windows/


You might want to check out https://github.com/compnerd/swift-win32 as well


Most everything has pros and cons but apparently Swift System has no cons! Amazing!

“... provides idiomatic interfaces to system calls and low-level currency types.”


Isn't the core value of Swift in Apple's large set of libraries?


Is this built on top of libc or using Linux syscalls directly?


As the App Store goes, so goes Swift.


They're trying hard, but the main draw of the language will always be their proprietary UI frameworks, same as with Objective-C.


Can you create decent GUIs with Swift on GNU/Linux???


In theory, yes. It's a Turing-complete language with quite nice C FFI, so you can call into Gtk just fine. In practice, uhm, no.

Apple's branding of SwiftUI and Swift is a bit of a double-edged sword, I think. The former is a highly proprietary, undocumented (but beautifully engineered) walled garden that only works on Apple hardware, and only recent OS versions at that. The latter is open source and purports to be a general-purpose, cross-platform language, comparable in scope to C++, C#, Go, Rust, Kotlin, and other contenders. They have only themselves to blame if some of the perception of the former rubs off when people think about the latter.


I meant to say "underdocumented" here. Its internals are undocumented, which is potentially a problem because there's no source available, but there is basic documentation for using SwiftUI. It's still sparse but I'm sure will improve as it matures.


Not really. But I did this: https://liuliu.me/eyes/write-cross-platform-gui-in-swift-lik...

Unlike most comments here, to me Swift felt like a Python replacement.


> In June, Apple introduced Swift System, a new library for Apple platforms that provides idiomatic interfaces to system calls and low-level currency types.

"One of these things is not like the other, one of these things does not belong"

I mean, it kind of fits, it works, but I did do a quintuple take trying to process & make sure I was reading this first line correctly. Posting just to share. Read a little more to confirm: yeah, system calls & currency. Check & check.

One other idle thought: it would be interesting for something like the web platform to try to expose its APIs in a polyglot fashion. For some reason, this exposure, of Swift trying to open up more of its platform to other platforms (that makes sense in context, I swear), makes me think: how can the web platform keep expanding to offer more?

We're seeing really neat early indicators with projects like Rust's web-sys, which wraps & exposes the web platform in Rust/WebAssembly. But what would it look like to try to expose & make useful JavaScript's new Temporal standard library, or Intl, or currenc... oh wait, we don't have currency. ;)


The term "currency type" is used here to mean a type that is commonly used.

For example, although there are a variety of range types in the Swift standard library ('Range', 'ClosedRange', 'PartialRangeFrom', 'PartialRangeUpTo', etc.), 'Range' is considered the currency type. Similarly, among string types, 'String' is considered the currency type, as opposed to 'Substring', 'StaticString', etc.

Like currency (money), the idea is that APIs in different libraries across different domains of programming will generally take values of the currency type as input and produce values of the currency type as output unless there's a good reason to use a different type.

For Swift System, the stated goal is to provide low-level currency types; if that goal is accomplished, other users of Swift can rely on these types instead of supporting multiple disparate third-party wrappers of system calls that may provide similar functionality just so that they can interoperate with other libraries.
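
A hypothetical sketch of what that buys you: two otherwise unrelated libraries can hand values to each other because both speak the shared currency types (FilePath, FileDescriptor) rather than each shipping its own private wrapper. The function names below are invented for illustration, and the module name assumes the open-source SystemPackage product:

    import SystemPackage

    // Imagine this lives in one library...
    func openInput(at path: FilePath) throws -> FileDescriptor {
        try FileDescriptor.open(path, .readOnly)
    }

    // ...and this in a completely separate library. They interoperate without
    // either one knowing about the other, because both use the same
    // low-level currency types.
    func readHeader(from fd: FileDescriptor, maxBytes: Int) throws -> [UInt8] {
        var buffer = [UInt8](repeating: 0, count: maxBytes)
        let count = try buffer.withUnsafeMutableBytes { try fd.read(into: $0) }
        return Array(buffer.prefix(count))
    }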


I’ve never before heard of the phrase “currency types”. I even searched the web for “system currency types” and only got money-related results.


It may just be jargon specific to the Swift community:

https://www.google.com/search?q=swift+evolution+%22currency+...


why do i always get downvoted for having fun? i'm not sure if i want to be coached, but i wish dissenters had to offer something, other than minuses. did you all not also have fun at this mismatch? does the question of how to offer platforms up not entice you at all? is my tone off? so many downvotes on hacker news. i usually chalk it up to ya'll but i would like to know. -1! bah. who are you?! why?!


I didn't downvote you, but initially I did not get your joke.

It does stand to reason, I guess, that currency is as fundamental to Apple as system calls are to Linux.


You're rambling a little, is my guess. Plus, humor seems appreciated here, but not as the single point of a post.


The trolls need constant feeding. Wear your negative karma as a badge of honour.



