Swift Numerics (swift.org)
177 points by hiraki9 12 days ago | 44 comments





You may find this article I wrote useful background for understanding the capabilities of numerical computing in Swift, including details about a library which provides features overlapping with the new Swift package: https://www.fast.ai/2019/01/10/swift-numerics/

Thanks, it is a well-written article. The only part that tarnishes an otherwise excellent article is the language-comparison section. Such comparisons are best avoided, as they almost always reflect the author's biases or the areas where the author has the most experience.

As you acknowledge, you were unfair to Julia. Julia has excellent support for general-purpose programming. For example, it supports multiple dispatch, a quite powerful organizational construct rarely found in other languages. Where it might fall short is in the number of general-purpose libraries, but that shouldn't matter as much for ML. It certainly isn't lacking for numeric libraries.
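For readers unfamiliar with it, multiple dispatch picks an implementation based on the runtime types of all arguments, not just the receiver. A toy illustration in Python (the `register`/`combine` helpers are made up for this sketch, not a real library):

```python
# Toy multiple dispatch: choose an implementation based on the
# runtime types of *all* arguments, not just the first one.
_registry = {}

def register(*types):
    def deco(fn):
        _registry[types] = fn
        return fn
    return deco

def combine(a, b):
    return _registry[type(a), type(b)](a, b)

@register(int, int)
def _(a, b):
    return a + b

@register(str, str)
def _(a, b):
    return a + " " + b

print(combine(2, 3))                  # 5
print(combine("multi", "dispatch"))   # multi dispatch
```

Julia builds this into the language, and its compiler specializes each combination of argument types, which is part of what makes it so well suited to numeric code.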

For Java, it's unclear what you mean by JVM issues, but with GraalVM and Scala or Kotlin, I fail to identify any major ones.

F# on .NET Core can be distributed without installing a runtime, nor does garbage collection limit its expressiveness. It is among the fastest languages, in the Go/Java/OCaml/Swift tier, which is the next fastest after C/C++/Rust/Fortran. Compare https://benchmarksgame-team.pages.debian.net/benchmarksgame/... to https://benchmarksgame-team.pages.debian.net/benchmarksgame/.... Setting aside the problems of microbenchmarks and unidiomatic code, the two are within spitting distance of each other, winning and losing with no significant margin. The .NET runtime also recently gained SIMD and fine-grained memory-management support.

I'd say that, practically speaking, Swift's great weakness right now is its cross-platform support, compared with Julia, Kotlin/Scala, or C#/F#.

Swift is an excellent language, and it's great that a numeric programming ecosystem is being built for it. There's no need to justify its existence by misrepresenting or downplaying the capabilities of competing ecosystems. I saw the same thing in the arguments for why differentiable programming landed in Swift and not Julia. The more the merrier, I say. Let the various communities collaborate and share ideas in order to better explore the space of possibilities.


On paper F# seems pretty great. Unfortunately, in practice it was the first and so far the only language I've ever used that I never found myself enjoying.

I wrote a simple, but relatively robust toy language interpreter in it, so the time I put in wasn't trivial and I kept thinking it would get better, but no such luck. This is just my anecdotal experience though, I know some companies have been wildly successful with F# as the technical foundation for their entire platform. Jet is a great example of one such company.


Certainly, not every language will fit everyone's preferred style of thinking and approach to breaking down problems. This is why I think a diversity in available options is so useful.

> JVM issues

stop the world GC (my car ran you over because of a random unpredictable pause!)

its bytecode is object-oriented

impossible at the moment to do CUDA style GPU programming without horrible JNI calls

> F# on .net core can be distributed without installing a runtime nor does garbage collection limit its expressiveness

F# has two big crippling factors: (1) the .NET at the end of its name and (2) functional programming, which is a huge barrier for 99% of developers (and it's not functional enough for the other 1%). F# is dead in the water--I mean that respectfully. To replace Python for deep learning, you need a language that other people will actually use, not should.


> stop the world GC (my car ran you over because of a random unpredictable pause!)

Given a certain baseline of available resources, and in the places where GC can be problematic (regimes I do not believe are common), unpredictability is the main issue with GC, and it can be readily addressed. You can, for example, preallocate and then manage memory manually, or opt for a specialized JVM. If startup time is an issue, there are native-compilation options. Or just don't use the JVM; that's an option too.

I'll also note that automatic reference counting is no panacea either, and Python is no better suited for the scenario you've given. Personally, I haven't found this to be an issue given modern low-pause concurrent GCs, but your mileage may vary.

> its bytecode is object-oriented

> impossible at the moment to do CUDA style GPU programming without horrible JNI calls

I don't think these are deal-breakers. Writing kernels easily in a high-level language is very much an open problem; even TensorFlow faces this issue, with most workflows optimized for a handful of prewritten kernels. On the JVM there are options such as https://index.scala-lang.org/thoughtworksinc/compute.scala/b... or http://aparapi.com/ for GPU-backed ND-arrays or JVM-to-GPU translation.

> F# has two big crippling factors: (1) the .NET at the end of its name and (2) functional programming, which is a huge barrier for 99% of developers (and it's not functional enough for the other 1%). F# is dead in the water--I mean that respectfully. To replace Python for deep learning, you need a language that other people will actually use, not should.

I don't think (1) is true, and even if it were, I don't see its technical relevance. If you're doing numeric, array-based differentiable programming, then you shouldn't have any problem with functional programming. I'd even argue functional programming makes things easier, since you get many things for free from existing combinators. Many concepts are naturally expressed with the features such languages tend to have.
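As a hint of what "free from existing combinators" means: ordinary function composition already gives you layer stacking for nothing. A minimal Python sketch (the `compose` helper is hypothetical, not from any particular library):

```python
from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# Layers expressed as plain functions compose for free:
scale = lambda x: 2.0 * x
shift = lambda x: x + 1.0
model = compose(shift, scale)   # shift(scale(x))

print(model(3.0))  # 7.0
```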

I don't think Python needs replacing; I'd much rather see interoperability and language agnosticism. I can tell you that for each of Julia, Haskell, Scala, Kotlin, OCaml, Nim, F#, Rust, and of course Swift, at least one fascinating machine-learning library is being built. I think that's a great thing.


Can you expand on F# not being functional enough for the other 1%?

I do not believe this generalization is accurate. It wasn't me who said that but the person I quoted. F# is plenty functional; what it does lack are organizational constructs that reduce code duplication. In practice, these are not deal-breakers given what you get in return (especially since you can work around them by leaning on more OOP features, which isn't necessarily a bad thing), but that, as always, depends on preferences and priorities. Among functional languages, F# is pretty streamlined, believe it or not. F#'s pragmatism means the subset of Haskell that gets used in practice to keep things simple overlaps a fair amount with F#. Even more so for OCaml.

But the person might have meant something like: if you already know OCaml, you might complain about the lack of functors or polymorphic variants. Similarly, if you already know Haskell, you might complain about the lack of higher-kinded types or GADTs. On the other hand, F# offers its own features (clean implementations of active patterns, computation expressions, type providers, async, multicore) and advantages, mostly from being able to leverage the .NET ecosystem (which also explains why those specific features are missing).


For what it's worth, Neanderthal in Clojure gives very smooth access to CUDA:

https://neanderthal.uncomplicate.org/


Is Swift's GC real-time, then? I thought it was just normal refcounting with unbounded stop times.

No, but ARC does not have to pause all threads. Also this exists:

https://developer.apple.com/documentation/swift/unsafemutabl...


So your scenario there is a separate thread that doesn't generate or free garbage? Because if it does, there will be synchronization inside malloc/free and your thread will pause. Also the Swift compiler is unpredictable about when it will use the heap instead of stack, allocations are not explicit. So it might be hard to accomplish in practice.

I agree that this special case could be more realtime with ARC, but on other GCs such an isolated worker can be implemented as a separate program instance (separate process or separate VM instance in same process) that has memory allocation and GC disabled after init.

In the other cases, where you keep the GC on, I think the low-pause GCs of today should be suitable for tasks like computer vision in cars, because the pause times are so short (sub-millisecond) compared with the processing time for a frame. And the generally higher performance of non-RC GC should more than make up for it.


Swift has no GC, just ARC (automatic refcounting).

Swift has a GC, what it doesn't have is a tracing GC.

Technically it is a form of garbage collection. However, garbage collection is a very broad term that arguably encompasses even C++ smart pointers. Typically, "garbage collector" refers to a separate process that runs independently of the application, either in the background or "stopping the world" to collect. ARC is part of the application code. It is also visible to the compiler's optimizer, which can eliminate redundant reference-count operations.

Typically one refers to the CS definition of automatic memory management, using authoritative references like "The Garbage Collection Handbook: The Art of Automatic Memory Management".

Everything about ARC is there, in chapter 5.


In your division, ARC is of the stop-the-world variety, because things like cascading deallocations cause unbounded work.
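CPython, which also relies on reference counting, exhibits the same effect: dropping one reference can free an arbitrarily long chain of objects in a single cascade. A small demonstration:

```python
import weakref

class Node:
    def __init__(self, nxt=None):
        self.next = nxt

# Build a chain of 100,000 nodes; keep only the head alive.
head = None
for _ in range(100_000):
    head = Node(head)

# Watch the innermost node (the chain's tail) via a weak reference.
tail = head
while tail.next is not None:
    tail = tail.next
alive = weakref.ref(tail)
del tail

# Dropping the single head reference deallocates the entire chain
# in one cascading pass: unbounded work at one release point.
del head
print(alive() is None)  # True -- the tail was freed by the cascade
```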

>F# on .net core can be distributed without installing a runtime

how?


By bundling. I believe there are a couple of options to reduce the resulting package size somewhat. As with all things, it's a matter of priorities when weighing language and deployment options.

> Setting aside the problems of microbenchmarks and unidiomatic code …

We don't get to set aside problems when they would undermine a premise of your conclusion.

Use of the benchmarks game as evidence must imply that you have confidence in that evidence.


It does not. Since both systems under comparison are subject to the same violations, the comparison is meaningful once normalized to this shared setting. That is, while microbenchmarks should not replace task-relevant testing, they do have utility as a coarse indicator. The information should be taken with a large helping of uncertainty, but it still points in the generally correct direction for the relative ordering of the languages compared.

The reality is that other things will often dominate: computational complexity, the appropriateness and optimization of the data structures in use, I/O bounds, cache locality, and problem-specific details. When things are done properly, these tend to shrink, not magnify, the differences between languages that sit near each other in the relative ordering; when things are done improperly, they slow everything down. This holds especially when idiomatic code is no more expensive to write in any of the compared languages, as is the case here.


> both systems in comparison are in violation

Invalid + Invalid != Valid

> they do have utility as a coarse indicator

Which, once again, must imply that you have confidence in that evidence.


Your arithmetic is not relevant to my point because, as I said, when normalized to the same domain, the comparison is valid.

> Which, once again, must imply that you have confidence in that evidence.

Nope; think of it as a Bayesian update that induces a non-zero but low relative entropy with respect to the new posterior distribution.
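To make that concrete, here's a toy numeric reading of the claim in Python, with entirely made-up numbers: a weak benchmark signal shifts the posterior only slightly, so the relative entropy (KL divergence) between prior and posterior is small but non-zero.

```python
from math import log

# Prior belief over two hypotheses: "language A faster" vs "B faster".
prior = [0.5, 0.5]
# A noisy microbenchmark only weakly favors the first hypothesis.
likelihood = [0.6, 0.4]

# Bayes' rule: posterior ∝ prior × likelihood.
evidence = sum(p * l for p, l in zip(prior, likelihood))
posterior = [p * l / evidence for p, l in zip(prior, likelihood)]

# KL divergence from prior to posterior: non-zero but small,
# i.e. the benchmark moves beliefs only a little.
kl = sum(q * log(q / p) for q, p in zip(posterior, prior))
print([round(q, 2) for q in posterior])  # [0.6, 0.4]
print(round(kl, 4))                      # 0.0201
```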


> normalized to just this setting

> normalized to the same domain

What do you mean by "normalized" ? What "setting" ? What "domain" ?

> Nope, think of it as …

Think of it as being honest or not being honest.


I am also happy to see that sometime in the last several years slices of arrays became views rather than copies. https://developer.apple.com/documentation/swift/arrayslice
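The same copy-versus-view distinction exists in Python, which makes the difference easy to demonstrate (using `bytearray`/`memoryview` as a rough stand-in for Swift's `Array`/`ArraySlice`):

```python
data = bytearray(b"0123456789")

copy_slice = bytes(data[2:5])        # independent copy of b"234"
view_slice = memoryview(data)[2:5]   # shares storage with `data`

data[2] = ord("X")                   # mutate the underlying buffer

print(bytes(view_slice))  # b'X34' -- the view sees the mutation
print(copy_slice)         # b'234' -- the copy does not
```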

Yes, and it makes working with Data so fast and easy.

Does anyone use Swift (in production) for backend services not related at all to the Apple ecosystem? It seems like a better Golang to me, but the tooling space only targets Apple stuff.

The issue with Swift on the server currently is that it's an entirely different compilation toolchain from the one used on macOS and iOS, so the language has different bugs from those that exist for app development, and it gets less attention from Swift developers.

If you want to take a stab at server side swift, I’d recommend looking at the swift port of Netty that Apple released called swift-nio (after the name “swetty” was nixed by Apple marketing and communications) - https://github.com/apple/swift-nio


The NIO version of Swift gRPC is currently at v1.0.0-alpha.6. Hopefully we'll see a 1.0 version soon.

Swift gRPC repo: https://github.com/grpc/grpc-swift


This post on the Swift forums gives a good overview of Swift gRPC: https://forums.swift.org/t/discussion-grpc-swift/29584

There's a lot of work being done to make Swift a good choice for ML.

TensorFlow for Swift for example - https://www.tensorflow.org/swift


Still no Windows binaries.

I recently tried to use Swift for a UDP server that was meant to run on Linux. I quickly realized the Network Framework is for iOS/MacOS/iPadOS only.

I ended up using Go and I'm happy with it. I really wanted to do full stack Swift though. Just so I could become better at the language.


Honestly that's no different than realizing that AVFoundation or CoreGraphics is Apple-platform-only...

Network.framework is an Objective-C framework (I'm honestly surprised I don't see any C++ symbols in the binary) – it was never supposed to be a cross-platform framework.

SwiftNIO is the official base of the OSS Swift cross-platform networking toolchain. It's the equivalent of Java's Jetty package.


Netty, not Jetty.

Right, thanks!

I've pushed a few admittedly minor backend services in Swift to production. It is ok.

You basically have one viable non-Apple OS platform (Ubuntu) to deploy on. This means that your basic Golang service is a 10MB Docker image while it can be over 100MB for a basic Swift service. There are frameworks like Swift NIO which is based on Java's Netty (and there are some Apple developers who work on Netty also working on Swift NIO). It works well enough but in most benchmarks, Swift is not even close to Netty ( https://www.techempower.com/benchmarks/#section=data-r18&hw=... ), so if you're pushing the code expecting high performance, I would think twice. Swift gRPC (latest version based on Swift NIO) is also available, and while I think it works very well, it is still relatively new.

As far as tooling, Swift is very immature IMO if you step outside Xcode. There are efforts to get a LSP service fully working (SourceKit-LSP), but I find it to be ok at best (performance and code completion suggestions are often very hit or miss). From benchmarking to diagnostics/backtraces to logging and metrics frameworks to shared common knowledge/answers on Stack Overflow, it is still very early days for Swift. Golang is so far ahead here that I personally think (at least today) the only reason you should launch a Swift service into production is because you want to reuse code that you have in your app.

If you like Swift's type system and want a backend service equivalent, I would strongly recommend looking into Rust. IMO, Rust is a version of Swift where the programmer is given more control of what the code is actually doing (along with the associated responsibility). It sounds like a lot of trouble, but I find most Swift code naturally translates to Rust code (especially if you follow the "value vs reference" semantics ideology that the Swift compiler team advocates). At worst, if you learn Rust, you will understand Swift a lot better (like what really is an escaping vs. non-escaping closure or what is the difference if I use a generic versus a dynamic protocol type (dyn trait in Rust) in a function definition).


Really looking forward to ShapedArray. Eventually a lot of what one might do with Python may be available in Swift.

You can already use the Julia language for a more "native" experience and more computational syntax.

I wonder what it can offer vs Julia language for HPC.

Awesome. Now who wants to make a hypercomplex numbers library (quaternions, Clifford, Cayley-Dixon...) in Swift?

Personally, I'd rather get the Swift equivalent of java.util.concurrent.

How about the equivalent of RxJava instead: https://developer.apple.com/documentation/combine

Unfortunately not available on Linux, so server side is out :(


There’s an open source implementation that can be worth exploring:

https://github.com/broadwaylamb/OpenCombine



