
Swift: Challenges and Opportunity for Language and Compiler Research [pdf] - Jerry2
http://researcher.watson.ibm.com/researcher/files/us-lmandel/lattner.pdf
======
bsaul
Funny how people here mention source-breaking changes as the main issue with
the language. I think it's because they haven't used Swift on a large
codebase. The main issues once you start to work on a big project are compiler
crashes ([https://github.com/practicalswift/swift-compiler-crashes](https://github.com/practicalswift/swift-compiler-crashes))
and compilation time.

I really don't understand how a compiler can crash that much (I've been coding
in many client- and server-side languages before, and it's the first time I've
seen a compiler crash while I code). It makes me really nervous, because it
shows that the unit tests aren't good enough, and that the language is really
still at the alpha stage, no more.

As for compilation time, you may think it's just a matter of optimisation. But
the problem is that until those optimisations arrive, you can't rely too much
on type inference (the main source of slowdowns), which greatly diminishes the
beauty of the code.

Now that was for client side. Server side probably adds its share of issues.

I used to be very optimistic about this language, but I'd say the Swift team
only has one more shot to make it really production-grade before word spreads
that this language is, for now, just a joke (maybe Apple using it for its own
apps would help allocate more resources).

~~~
sssilver
I actually love the source breaking changes. The alternative is easier and
much worse, and it takes boldness to not roll with the alternative, e.g. add
language bloat and keep suboptimal decisions as part of the language just to
support code that's already shipped.

C++ is the prime example of what the alternative looks like.

~~~
mike_hearn
I'm not sure I agree.

My girlfriend has decided to learn programming. She started with Swift because
she wanted to write iOS apps. The course she's using targets Swift 2 but her
install is Swift 3, or something like that (I haven't looked into it much).

Xcode constantly flags stuff like x++ being replaced with x += 1. Really?
Surely that's the kind of decision you make before version one of a language,
not years after release. Why this pointless churn?

If these sorts of totally-indecisive deprecations were rare then you could
overlook them, as otherwise Swift is a rather nice language. But they're
everywhere. Even in trivial examples intended for beginners line after line of
code gets the yellow deprecation warnings. "What does deprecated mean" was
literally one of the first questions my girl asked me as she started out
learning programming, which is ludicrous. You shouldn't be encountering
deprecation warnings over and over when targeting a brand new platform using a
teaching course barely a year old.

~~~
pivo
If your girlfriend wants a stable language for iOS development she should use
Objective-C. Apple have been very clear about the fact that Swift is evolving
and that there will be breaking changes.

Changes to Swift are hardly indecisive or pointless. A very clear rationale
for removing the ++ operator was laid out here:
[https://github.com/apple/swift-
evolution/blob/master/proposa...](https://github.com/apple/swift-
evolution/blob/master/proposals/0004-remove-pre-post-inc-decrement.md).

While these changes may be painful now (though the removal of the ++ operator
seems minor to me), they should result in a stronger and simpler language in
the long run.

~~~
mike_hearn
The course she bought (without my input btw, she's pretty independent!) uses
Swift. And that's probably right. Swift is a much nicer language than
Objective-C, much closer to modern languages and has far less of the C
heritage poking through. Even with the deprecation warnings, it's probably
easier to deal with.

Removing x++ is indeed minor, which is why it's so curious. Sure, there are
arguments for removing it. But there are also arguments for minimising
language churn and just living with these things.

~~~
mmariani
You could give your girlfriend a little help. How about installing the Swift
version her course requires, using swiftenv? That way she could learn the
basics without being nagged by Xcode. Later she could read up on the
differences and port her code for practice.

------
makecheck
I found that on macOS, refactoring even a tiny Xcode project to use Swift
added 10 MB of libraries to the resulting bundle. The OS needs to start
including a stable set of Swift frameworks _by default_ so that Swift-based
programs do not require users to download much fatter binaries.

~~~
mikeash
Swift needs a stable ABI before that can happen. That was planned for Swift 3,
but it didn't work out. Currently it's planned for Swift 4, which will ship
next fall. If Apple immediately takes the opportunity to bundle the libraries
with the system, then apps targeting iOS 11 or macOS 10.13 written in Swift 4
will be able to avoid embedding the libraries.

~~~
makecheck
I don’t necessarily mind paying the cost _once_ but since I tend to refactor
applications into multiple utility applications (separation of concerns, crash
stability, security, etc.) this means _each_ sub-bundle pays the cost too. At
the moment I have not found an obvious way to avoid this, aside from somehow
hacking all of them to symbolically-link to the same copy of the Swift
libraries or something. Therefore, instead, I stick with Objective-C.

------
scribu
I've recently started working on an iOS app in Swift 3 and it's a mostly
pleasant experience, even though Xcode doesn't have Vim keybindings.

I hope Swift can break out of the app-building niche, but from these slides it
sounds like it will be a while until it can compete with Go and others in the
high-concurrency space.

~~~
SmileyKeith
XVim[0] still works well if you're willing to re-sign Xcode. There are simple
instructions in INSTALL_Xcode8.md.

0: [https://github.com/XVimProject/XVim](https://github.com/XVimProject/XVim)

~~~
rudedogg
I've been using XVim as well. The only downside is that it hides the cursor in
Swift Playgrounds, so they can't really be used.

No problems inside Xcode projects though.

~~~
SmileyKeith
That is fixed in an open PR:
[https://github.com/XVimProject/XVim/pull/1016](https://github.com/XVimProject/XVim/pull/1016)

~~~
rudedogg
This is awesome, thank you!

I shared on
[https://github.com/XVimProject/XVim/issues/998](https://github.com/XVimProject/XVim/issues/998)
since a lot of people have been waiting there.

------
rudedogg
Kitura is mentioned in the slides since it's an IBM talk, but there's also
[http://vapor.codes/](http://vapor.codes/)

------
ngrilly
The presentation claims that Swift offers "progressive disclosure of
complexity", but I don't really buy the argument. "Progressive disclosure of
complexity" works when you are learning a language and writing code, because
you choose what language features you use. But it doesn't work when you read
code written by someone else.

~~~
coldtea
> _" Progressive disclosure of complexity" works when you are learning a
> language and writing code, because you choose what language features you
> use. But it doesn't work when you read code written by someone else._

So? Most app developers are small or one-person shops, and don't "read code
written by someone else".

~~~
ngrilly
That's debatable. And even if it was true, most apps rely on open source
libraries which are written by other developers.

~~~
coldtea
That's beside the point, since "progressive disclosure of complexity" would
still work for using those open source libraries: the most common case of
bringing a third-party library into a Swift app is to consume and/or extend
its API, not to refactor or maintain it.

I've used lots of third party libraries and for the most part I don't care at
all what's going on in their code as long as they work for what they do.

------
sydd
Sadly this will go nowhere until Apple invests significantly more resources
into non-Apple platforms.

They don't support Ubuntu 16.10, there is no IDE support besides Xcode, and no
Windows support at all. And I haven't heard even a mention of Android support.

Such half-assed Linux support and non-existent Windows support will leave it
as a toy language on these platforms.

------
wsc981
For a while I wanted to hold off on investing too much time in Swift, due to
its instability (breaking changes on every big release). But I noticed many
potential clients in the Netherlands already work on Swift projects, and I've
already lost some freelance work due to my limited (<1 year) Swift experience.
Even though the language is still unstable, it's probably better to just bite
the bullet and build up some Swift experience (perhaps by working on personal
projects) if one is a freelancer.

~~~
dep_b
So I understand you have Objective-C experience but not Swift experience? It
was so easy to pick up Swift; just remember to _never_ force unwrap and
always use _guard_ or _if let_. Stick to that religiously, even when it
doesn't seem to make sense sometimes, and a dramatically more stable app will
be your reward.

Of course you should try to actually do something useful inside the _guard
else_ so the user knows what went wrong or just log it with a remote logger so
you can fix it for your next release.

The hard part is maintaining a large Swift application, but as long as your
clients are paying the bills you shouldn't worry as a freelancer.
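A minimal sketch of that advice (the type, the lookup, and all names here are made up for illustration):

```swift
struct User { let name: String }

// Hypothetical lookup that returns a fallback instead of force-unwrapping.
func userName(for id: Int, in store: [Int: User]) -> String {
    guard let user = store[id] else {
        // Do something useful in the guard-else: surface or log the failure.
        return "unknown user \(id)"
    }
    return user.name
}

let store = [1: User(name: "Ada")]
print(userName(for: 1, in: store)) // "Ada"
print(userName(for: 2, in: store)) // "unknown user 2"
```

The same shape works with _if let_ when you want to continue rather than exit early.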

~~~
wsc981
I have Swift experience (about 7-8 months for my own tvOS game) and I like the
language a lot. It is not hard to pick up Swift. But to write Swift code with
the same quality as my Objective-C code, I would need more experience. Which
is why many clients preferred to pick up a more experienced Swift developers
for some projects. After all, idiomatic Swift code can be quite different from
idiomatic Objective-C code. Learning the Swift idioms takes some time.

I think clients have much higher expectations of a freelancer than of a
permanent employee. A client doesn't want a freelancer to learn (much) on the
job; they're paying for an experienced developer who can deliver quality from
day one. After all, freelancers are usually much more expensive.

~~~
pjmlp
The sad part is the vicious circle: side projects aren't accepted as
experience, even cool ones, so one never gets the contracts needed to actually
earn that experience on official customer projects.

Been on that path a few times...

------
mike_hearn
Even though I haven't used it much, I like Swift: it's a big upgrade over
Objective-C.

I do have some thoughts on the JIT/AOT part of the presentation though, which
seems the most interesting to me. After many years where the two camps were
largely separate, what we're seeing recently are more cases where JIT and AOT
compilation get combined in new and interesting ways:

• Swift is AOT except when developers need fast response times, then it
becomes JIT.

• Rust and Go are pure AOT always.

• Android went interpreted, JIT, AOT, back to JIT with AOT at night.

• Java 9 introduces mixed AOT/JIT mode, in which you can pre-compile Java
modules to native code ahead of time, but that native code still self-profiles
and reports behaviour data into the runtime which can then schedule a JIT-
compiled replacement for any given method using the new profiling data. It
also introduces ahead of time cross-module optimisation (a "static
linker"-like thing called jlink).

Obviously these are quite different approaches, but they're working with
similar(ish) languages on identical hardware. So which approach is going to
win out, in the long run, if any?

It's fair to say that LLVM is the most advanced compiler toolchain for C-like
languages. The JVM is, I'd argue, the most advanced runtime for more dynamic
languages like Java, JavaScript, Python, Ruby, etc. But it seems to me that
LLVM struggles with some optimisations that should be quite basic and
important - the way I read the presentation is that it won't do things like
inline parts of the standard/collections library into calling code because
they reside in different modules, and inter-procedural optimisation is only
done at the level of the module. Otherwise compilation times become too
problematic. A profile-directed JIT compiler has no such problems and will
happily optimise across module boundaries.

As languages evolve, they seem to take on more dynamic features. This is
especially true with the incorporation of functional programming styles where
you're frequently passing functions to other functions and working with
immutable data, which implies dynamic dispatch (when you can't inline through
the call chain) and lots of copying of short-lived data structures (what a
generational GC eats for breakfast).

Another major trend is multi-core processors, which we still aren't as good at
exploiting as we should be. But more dynamic runtimes tend to find ways of
using multiple cores even for single-threaded programs: if your program is
inherently only able to use 1 or 2 cores at once, but you're on an 8 core
machine and you aren't heat/power constrained, then using the other cores for
concurrent GC or JIT compilers is basically free. If these techniques can
speed up the execution of your program threads then it's a win, even though
analysed holistically it might look like a loss. It is notable that multi-
threading LLVM is apparently only a recent feature.

These trends lead me to believe that the JVM architects have the right general
direction:

• Allow code to be AOT compiled but still optimise at runtime using multiple
spare cores at once, to extract performance even in more modern, heavily OOP
or FP-oriented code.

• Use a generational GC that is optimised for objects being mutated through
copying and which doesn't cause large quantities of cache-coherency traffic
due to the use of atomics all over the place.

• Support inter-procedural ahead of time optimisations like dead code
elimination and statically resolved reflection with an optional link phase.

• Rely on pure JIT when developing to keep the edit-compile-run loop tight.

• Use a high level code transport format like bytecode that minimises the
exposed ABI, thus allowing you to tweak and optimise the ABI used at runtime
without breaking the world.

Apple has gone in this direction with the app store compiling bitcode to
binaries for you, but LLVM bitcode was never really designed for that use
case. Still, it seems like LLVM will be heading further in this direction in
the future, or at least would like to, judging from the following comment:

 _I dream about the day when we can speed up the edit /compile/run cycle by
using a JIT compiler to get the process running, before all the code is even
compiled. In addition to speeding the development cycle, this sort of approach
could provide much higher execution performance for debug builds, by using the
time after the process starts up to continuously optimize the hot code._

 _Unfortunately, getting here requires a lot of work and has some interesting
open questions when it comes to dependence tracking for changes (how you
invalidate previously compiled code). That said, this would be a really
phenomenal research area for someone to tackle._

These problems were already tackled years ago by the HotSpot project, which is
capable of tracking the dependencies between compiled methods and invalidating
compiled code when assumptions used in their compilation become invalidated.
As you can also do manual memory management and bypass the GC in languages
that target the JVM, I wonder if Swift would benefit more than Chris Lattner
imagines from a port that targets it (at least, once the JVM supports value
types, which it's getting experimental support for at the moment).

~~~
valarauca1
The LLVM inlining behavior is left over from C: different compilation units
won't inline into each other.

LLVM and its gold linker plugin actually allow their input to be LLVM bitcode,
not just object files.

The advantage is cross-language and cross-module optimization.

As it works today, with the right makefile one can link Rust/C/C++ and
mutually inline/optimize across language barriers.

------
mirekrusin
Great to hear they want to add Rust-like borrow checker as one of memory
management options.

------
thyselius
Is it not possible to get multi-threaded compilation with Swift in Xcode today?

------
dep_b
I've noticed that the compiler (both real-time analysis and actual compiling)
gets sluggish and unreliable over time when a project grows. I think I've
found the source of the problem and unfortunately for people in large code
bases it does take a bit of rewriting to get it back into shape.

Since the language is very strictly typed yet allows a loose syntax, the
compiler needs to guess which type you are actually using. For example, the
type Any? always works, but perhaps you are declaring something like a
dictionary that always uses a String as a key? The compiler doesn't know until
it has analyzed all the keys and seen that you are indeed always using a
String as the index. Put one Int at the end of a long list and it breaks.

Now something like this doesn't hurt:

    let indexOfTab = 4

Because the compiler sees you're trying to put an Int in a constant.

But this is a bit more difficult:

    let tabDict = [ /* long list of declarations here */ ]
    let indexOfTab = tabDict.first(where: { key, value in value == "contacts" })?.key ?? 0

First it needs to know whether tabDict indeed has only Ints (or similar) as
keys and only Strings as values, because they now need to match what happens
in the closure.

This makes the complexity of the code to analyze explode: if your code
contains many of these constructs, you suddenly find your computer slowing to
a crawl.

The alternative, however, is a bit uglier (I could use a bad word to describe
it, like: "Java"):

    let tabDict: [Int: String] = [ /* long list of declarations here */ ]
    let tabWithContact: (key: Int, value: String)? = tabDict.first(where: { key, value in value == "contacts" })
    let indexOfTab: Int = tabWithContact?.key ?? 0

(bear with me, this code is not checked in Xcode, just a hasty example)

Now, step by step, the compiler does not have to guess anymore: if the first
line doesn't compile (because, for example, you've added a String as a key),
it just won't compile. The same goes for the closure, if you treat the types
wrong inside it. There is also not much to guess in the third line: the
compiler never has to guess what type the key is, and it won't compile if you
declare indexOfTab as a String.

What I would like to see is:

* A tool that can actually show the points where your code is slowing down the compiler a lot

* A tool that proposes splitting declarations up for faster compilation, just like much of the migration can be automated after upgrading to a new version of Swift

All I do now is guessing and adding more code to get performance back to
acceptable levels.

~~~
timjver
Not disagreeing with anything that you're saying, except that dictionaries are
declared `[Key: Value]` and not `[Key, Value]`.

By the way, rather than using `.filter { ... }.first`, you should really use
`.first(where: { ... })` instead. `filter` will iterate the entire sequence,
while `first(where:)` stops after it finds a match.
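The difference can be made visible by counting how many elements each version examines (a small self-contained sketch):

```swift
let numbers = [1, 2, 3, 4, 5]
var checks = 0

// filter runs the predicate over the whole array before .first takes one element...
let viaFilter = numbers.filter { n -> Bool in checks += 1; return n > 2 }.first
let filterChecks = checks // 5

// ...while first(where:) stops as soon as the predicate matches.
checks = 0
let viaFirstWhere = numbers.first(where: { n -> Bool in checks += 1; return n > 2 })
let firstWhereChecks = checks // 3

print(viaFilter == viaFirstWhere, filterChecks, firstWhereChecks) // true 5 3
```

On a short array this hardly matters, but on a long or lazily generated sequence the early exit is the difference between O(n) always and stopping at the first hit.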

~~~
dep_b
All true! Let me fix it. I remember the "where" approach not completely
working like I expected, so I didn't dare to put it in without checking :)

------
kensai
So Swift 4 is coming in Late 2017. Will it break all previous code again?

~~~
agildehaus
From Lattner:

"While our community has generally been very kind and understanding about
Swift evolving under their feet, we cannot keep doing this for long. While I
don’t think we’ll want to guarantee 100% source compatibility from Swift 3 to
Swift 4, I’m hopefully that it will be much simpler than the upgrade to Swift
2 was or Swift 3 will be."

[https://lists.swift.org/pipermail/swift-evolution/Week-of-
Mo...](https://lists.swift.org/pipermail/swift-evolution/Week-of-
Mon-20160125/007737.html)

Swift 4 won't have anything like Swift 3's Cocoa renaming. It'll be minor
language changes.

ABI stability and source compatibility should start with Swift 4.

~~~
dep_b
I think the Swift 4 changes are mostly improvements in interop with C and
Objective-C; maybe Cocoa will get some changes to make APIs more Swift-y.

I don't expect much big stuff anymore.

------
stiGGG
Horrible slides! Is there a video recording from Chris' talk?

~~~
chmaynard
The IBM corporate website has a conference page:

[http://researcher.watson.ibm.com/researcher/view_group_subpa...](http://researcher.watson.ibm.com/researcher/view_group_subpage.php?id=6940)

I contacted one of the organizers and was told that the keynote was recorded.
I can only assume that it will show up online eventually, perhaps on YouTube.

------
cyphreak
I thought about sitting down and learning Swift, but they make breaking
changes so often I really can't justify the time yet. I'm very curious what
IBM comes up with; not sure yet why they're so interested in it, other than
LLVM.

Interesting news anyways.

~~~
kalleboo
This is why I avoided embracing Swift until this year - we started using it in
parts of our app, but the 200 breaking changes in our project every 6 months
were a major PITA.

Starting with Swift 3 they claim to be source-stable which is why I've dared
embrace it wholeheartedly now.

~~~
cyphreak
Well, but now Scala (what I knew and just kept running with) is getting a
native compiler, eventually iOS support, an academic underlying calculus for
the language, all kinds of good stuff.

I have a lot of respect for Lattner, obviously, but Swift has spent a LOT of
goodwill that people were willing to spare it. Unless IBM pulls a really good
thing out of their hat, I fear that swift might be in a perl6 position where
they finally made a language worth a shit, but ran out of gas as they reached
top speed.

I guess time will tell. I WANT Swift to be awesome, but in order to make it a
rational choice, I NEED Swift to be stable, common, and approachable for new
employees (something Scala can struggle with for some programmers).

It's really gonna bum me out when Swift 4 gets announced this spring (/s) and
completely ruins everything, and then Swift 5 would of course be announced
around the time Swift 4 becomes even remotely stable.

It's just been a disaster so far.

~~~
pjmlp
Where Apple OSes are concerned, just as with any other first-class programming
language, developers who want to target them won't have any other option.

All the language improvements Objective-C has gotten since Swift was made
available were only to improve the interoperability between the two languages.

Also, just check the number of WWDC talks from the past two years that still
used Objective-C in their presentations.

This is why it is so important to have OS vendors sponsor new languages.

------
Entangled
Naysayers. The world will pour billions of dollars into Swift in the coming
years and they will keep complaining about it.

The most important advantage of Swift is that it allows you to code from the
server to the desktop, mobile, watch, tv, IoT, in one language.

Swift will be everywhere and some are afraid of that, very afraid.

~~~
seanparsons
What on earth is this all about? Having spent a year working mostly in Swift I
can't wait to see the back of it, even "billions" of dollars spent wouldn't
turn it into a halfway decent programming language.

~~~
Razengan
The problems you've listed in reply to my sibling commenter are all on the
tooling side, not with the language, and yes they could definitely be fixed by
spending "billions" on people and resources dedicated solely to improving
them.

You didn't specify anything that says Swift isn't a "halfway decent
programming _language_."

~~~
seanparsons
I don't think I've ever seen anyone regard the compiler as tooling; SourceKit,
sure, that makes sense, but not the compiler. Especially in this case, as
there really is just the one compiler for it, so these are issues that will
hit anyone using Swift now. But if you want problems with the language itself,
let me see:

It suffers from similar problems to Scala, where it tries to blend OO concepts
in with FP concepts. Variance is where this flares up terribly because it
doesn't provide any explicit support for covariance or contravariance. You end
up with invariance and subtyping in a bunch of places which is not a nice
combination.
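A small sketch of the invariance being described (`Box` here is a made-up generic type): plain values get subtyping, but user-defined generic types don't, and there is no annotation to opt in to covariance.

```swift
class Animal {}
class Cat: Animal {}

struct Box<T> { let value: T }

let animal: Animal = Cat()      // fine: plain subtyping
let catBox = Box(value: Cat())  // inferred as Box<Cat>

// let animalBox: Box<Animal> = catBox
// ^ error: Box<Cat> is not convertible to Box<Animal>. Generic types are
//   invariant, and Swift offers no covariance/contravariance annotations.
//   (Array and Optional behave covariantly only via compiler special-casing.)

print(animal is Cat, type(of: catBox)) // true Box<Cat>
```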

Protocols like Equatable and Hashable are implemented with compiler magic for
arrays and tuples. This means you run into trouble with generic functions
that, say, accept two instances of the same Equatable type, if you try to pass
an array of Ints. The underlying reasons behind this (which mostly escape me
at midnight on a Sunday) are something to do with protocols on generic types:
you can't say "a List is Equatable if its elements are Equatable", IIRC. This
one annoys me no end, as we've had to fudge our way around it several times on
the same project.
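A sketch of the problem as it stood around Swift 3 (`areEqual` is a made-up example function): `==` worked on arrays through a special-cased overload, but `[Int]` itself didn't conform to `Equatable`, so generic code couldn't accept it.

```swift
// A generic function that demands Equatable:
func areEqual<T: Equatable>(_ a: T, _ b: T) -> Bool { return a == b }

let xs = [1, 2, 3]
let ys = [1, 2, 3]

print(xs == ys) // true: a special-cased == overload exists for arrays

// In Swift 3, areEqual(xs, ys) would not compile, because [Int] did not
// conform to Equatable even though Int does. The usual fudge was an
// array-specific overload:
func areEqual<T: Equatable>(_ a: [T], _ b: [T]) -> Bool {
    return a == b
}
print(areEqual(xs, ys)) // true
```

Swift 4.1's conditional conformances (`extension Array: Equatable where Element: Equatable` in the standard library) later removed the need for this workaround.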

For some reason that eludes me, Optional<T> is given all sorts of special-case
syntax ("if let" and "guard let"), which is like a crap version of "for yield"
from Scala or do notation in Haskell that only works on that one type. As a
result, a lot of convenient abstractions are just not possible; you end up
writing all kinds of horrid-looking chained map/flatMap calls instead.
Sometimes, because of this dissonance, a block of code might have a guard let
block, then some other stuff, then an if let block with an else condition,
whereas if you wrote the same thing in Haskell it would be one do block and
that's it.
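A sketch of the two styles being contrasted, with made-up names:

```swift
struct Config { let port: String? }

// Sugared version: special-case syntax that only Optional gets.
func parsePort(_ config: Config?) -> Int? {
    if let cfg = config, let text = cfg.port, let port = Int(text) {
        return port
    }
    return nil
}

// The same logic as explicit flatMap chaining: what you fall back to once
// the sugar stops fitting the shape of your code.
func parsePortChained(_ config: Config?) -> Int? {
    return config.flatMap { $0.port }.flatMap { Int($0) }
}

print(parsePort(Config(port: "8080")) as Any)        // Optional(8080)
print(parsePortChained(Config(port: "oops")) as Any) // nil
```

In Haskell or Scala the same shape generalizes to any monadic type via do notation / for-yield; in Swift the sugar is hard-wired to Optional.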

Concurrency and parallelism support is effectively Grand Central Dispatch,
which is the most imperative API ever on macOS/iOS, and which on other
platforms (according to the GitHub page I just looked at) is in the early
stages of development. You're just calling the old Objective-C API, and the
language doesn't help at all there.
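For context, this is roughly what that looks like in practice: queues, groups, and closures driven by imperative calls, with no language-level checking (the queue label is arbitrary):

```swift
import Dispatch

let worker = DispatchQueue(label: "com.example.worker") // serial queue
let group = DispatchGroup()
var total = 0

for i in 1...4 {
    worker.async(group: group) {
        // Safe only because the queue is serial; the compiler would happily
        // accept the same mutation from a concurrent queue (a data race).
        total += i
    }
}

group.wait() // block until all submitted work items have run
print(total) // 10
```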

Edit: As a bonus addition, it's a statically typed "FP" language without
higher-kinded types, which means a bunch of handy abstractions are a real
pain. See the Swiftz project for an example of how they have to define things
like monads.

Is that enough problems?

