Funny how people here mention source-breaking changes as the main issue with the language. I think it's because they haven't used Swift on a large codebase. The main issues once you start to work on a big project are compiler crashes ( https://github.com/practicalswift/swift-compiler-crashes ) and compilation time.
I really don't understand how a compiler can crash that much (I've been coding in many client and server languages before, and it's the first time I've seen a compiler crash while I code). It makes me really nervous, because it shows that the unit tests aren't good enough and that the language is really still at the alpha stage, nothing more.
As for compilation time, you may think it's just a matter of optimisation. But the problem is that until those optimisations arrive, you can't rely too much on type inference (the main source of slowdowns), which takes away a lot of the beauty of the code.
That was all client side; server side probably adds its own share of issues.
I used to be very optimistic about this language, but I'd say the Swift team only has one more shot to make it really production grade before word spreads that, for now, this language is just a joke (maybe Apple using it for its own apps would help allocate more resources).
Yes, as part of a project that has used Swift for the last year, migrating away from Objective-C on a decent-sized codebase, I can say it is not ready for primetime.
Swift 2.2 + Xcode 7 wasn't great, but it was livable.
Constant SourceKit crashes make Xcode essentially a text editor, and not a good one.
Indexing, highlighting, essentially all IDE functionality is lost.
This is the worst development experience that I've seen in 20+ years as a developer.
I thought the CoreData / CloudKit debacle from 3 years ago was bad, but oh my God, I just want to jump ship and go to Android, switch to Xamarin, or just leave mobile at this point.
It would be nice to have some level of optimism and say this is growing pains, but I don't have any faith that the Apple developers are competent in making this better.
Most of my problems seem to be with Xcode. The crashes can be pretty frustrating, and most I encounter are repeatable (which makes me think they should have been caught in testing).
The failure in releasing Xcode 8.2/iOS 10.2, but not updating the iTunes Connect backend to allow iOS 10.2 as the max, was pretty disheartening. All apps were automatically rejected for half the day. How does something like that slip through the cracks? To find out if it was fixed I had to periodically check Twitter :\. There was no blog post, no status page I could check; my only hope was some unlucky dev I found who didn't even work on the iTunes Connect/App Submission team.
I think the consensus is that we need nothing but bug fixes on the core stuff (Xcode, SourceKit, etc).
Indeed. I think Apple has a general software quality issue and should now think about hiring some senior devs from Microsoft to help them sort their process out.
1000 times this. I've been really enthusiastic about the idea of a modern, compiled, type-safe systems language backed by a major tech juggernaut.
But the compiler crashes and SourceKit instability are just breathtaking (even in 3.0+). I'm surprised I haven't seen this get more attention.
I'd (somewhat) expected something like this for the first few releases, but two years in I'm starting to think the team bit off more than they can chew with the type system.
SourceKit is unbelievable! For most of the time I'm using Xcode (as in >50% of it), it causes very high CPU load, and it's not uncommon for it to use >3GB of RAM.
SourceKit issues fall into two categories: either the source causes the compiler itself to crash (due to one of those compiler defects mentioned elsewhere in the thread), or there's some issue with the build configuration.
The latter category is especially nasty, and SourceKit won't inform you of this, except perhaps to crash. Oh, and that dumb bar in Xcode.
You can compile all the Swift source in your target successfully but still have SourceKit choke on your build configuration. Accidentally introduce a duplicate set of headers in HEADER_SEARCH_PATHS? Degraded performance, and at worst the compiler crashes every time you edit text and SourceKit invokes the compiler. In cases like this the SOURCEKIT_LOGGING environment variable is your friend. Have fun combing through those logs to tease out which build option you set is causing your issue.
I spent the better part of two days sifting through SourceKit logs to figure out why a project's autocomplete wouldn't work. Yet, I wouldn't want to go back to writing Objective-C primarily. Because Swift as a language rocks.
I actually love the source breaking changes. The alternative is easier and much worse, and it takes boldness to not roll with the alternative, e.g. add language bloat and keep suboptimal decisions as part of the language just to support code that's already shipped.
C++ is the prime example of what the alternative looks like.
My girlfriend has decided to learn programming. She started with Swift because she wanted to write iOS apps. The course she's using targets Swift 2 but her install is Swift 3, or something like that (I haven't looked into it much).
Xcode constantly flags stuff like x++ being replaced with x += 1. Really? Surely that's the kind of decision you make before version one of a language, not years after release. Why this pointless churn?
If these sorts of totally-indecisive deprecations were rare then you could overlook them, as otherwise Swift is a rather nice language. But they're everywhere. Even in trivial examples intended for beginners line after line of code gets the yellow deprecation warnings. "What does deprecated mean" was literally one of the first questions my girl asked me as she started out learning programming, which is ludicrous. You shouldn't be encountering deprecation warnings over and over when targeting a brand new platform using a teaching course barely a year old.
If your girlfriend wants a stable language for iOS development she should use Objective-C. Apple have been very clear about the fact that Swift is evolving and that there will be breaking changes.
While these changes may be painful now (though the removal of the ++ operator seems minor to me), they should result in a stronger and simpler language in the long run.
The course she bought (without my input btw, she's pretty independent!) uses Swift. And that's probably right. Swift is a much nicer language than Objective-C, much closer to modern languages and has far less of the C heritage poking through. Even with the deprecation warnings, it's probably easier to deal with.
Removing x++ is indeed minor, which is why it's so curious. Sure, there are arguments for removing it. But there are also arguments for minimising language churn and just living with these things.
You could give your girlfriend a little help. How about installing the Swift version her course requires with swiftenv? That way she could learn the basics without getting nagged by Xcode. Later she could read up on the differences and port her code as practice.
Living with x++ wouldn't be terrible, but I don't mind having to make changes to my code while Swift is still very new if it ends up becoming a better language. Teaching x++ and ++x to beginners can be quite a pain.
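For anyone curious, the migration is mechanical. A rough sketch of what the Swift 3 replacement looks like (illustrative only):

// Swift 2 allowed:  let old = x++   (yields the old value, then increments)
// Swift 3 spells the two steps out explicitly:
var x = 0
let old = x   // grab the old value first, if you actually need it
x += 1        // then increment; += is a plain statement returning Void
print(old, x) // 0 1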
It feels backward to modify the language in a way that makes it less functional: replacing a construct that returns a value with one that returns Void.
Definitely going against the general language trend, there.
And the justifications to remove ++ are downright bizarre in my opinion.
The C-style for loops were also dropped. Now you do `for idx in 0...5`, or you can also do `for idx in 0..<5`. I think this makes it clearer whether you want to include or exclude the last index in the loop.
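Concretely, the two range operators side by side:

for idx in 0...5 { print(idx) }   // closed range: prints 0, 1, 2, 3, 4, 5
for idx in 0..<5 { print(idx) }   // half-open range: prints 0, 1, 2, 3, 4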
I think this kind of break is not that bad. Swift is a young language, and there are some things you only realize could be better after a lot of real-world use.
Swift is 2.5 years old and has only been open source for a year, so I don't see these breaking changes as a big deal. It is not in a situation like Python 3. The compiler even helps you fix things, and Xcode is very helpful when translating Swift 2 to Swift 3, so it is not that bad.
Rust didn't start real development until 2010, it was mostly notes and incomplete PoCs before then (which I'm sure Lattner was doing for Swift before 2010, and Pike for Go as well).
Of course, age only tells you so much about the amount of work put into the project, and Rust in particular took a huge amount of conceptual work to get its current model of ownership.
No, it is Java that is the prime example of what the alternative looks like.
And in spite of all problems with Java and its standard library, there's something very liberating about having code written in 1997 that still compiles and works in 2017.
And the beauty of Java is that it became a platform, so if you hate the language, you can pick another that's closer to what you think programming should be, like Scala, Clojure, JRuby, Groovy, Kotlin, etc. and still benefit from that piece of code written in 1997, all made possible due to Sun's fanatical devotion to backwards compatibility.
I've been a heavy Java platform user for years, mainly the Java language but also with some Groovy and Scala. I agree with what you're saying as far as it goes.
Where Swift excels is that it has the performance of C for the most part, and it will continue to improve. Another strength is ARC over Java-style GC. As the slides point out, to get the same performance with GC, you need 4x the memory. With only 2x the memory, you get 70% lower performance with Java. Most important, though, is determinism. GC is simply not suitable for real time, hard or soft.
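To illustrate the determinism point, a minimal sketch (Resource is a made-up class): with ARC, deinit runs at the exact point the last reference disappears, not whenever a collector decides to run.

final class Resource {
    let name: String
    init(name: String) { self.name = name }
    deinit { print("\(name) released") }   // runs deterministically under ARC
}

func work() {
    let r = Resource(name: "db connection")
    print("using \(r.name)")
}   // last reference to r goes away here, so deinit fires right now

work()
print("after work()")   // "db connection released" always prints before this line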
I'm very excited about Swift, and the current version of Xcode is working pretty well for me. I love the aggressive goal of replacing the C family of languages down the road...
> As the slides point out, to get the same performance with GC, you need 4x the memory.
I'd take that claim with a pretty big grain of salt. The paper they referenced used an experimental GC that isn't used in production. Who knows how well or badly the default collector in the JVM would perform.
Also, it is not exactly recent. Hardware changes could have shifted the result in any direction.
The exact numbers are worth a grain of salt. The general point aligns well with my JVM experience. Requiring at least double the working set for decent performing GC doesn't seem congruent with running the leanest possible data centers.
Regardless, determinism is the really big win. :-)
Scala and Apache Groovy benefit from the JVM's backwards compatibility, but programmers can't benefit from backwards compatibility in those languages. Groovy even broke compatibility between various 1.x releases.
Which is a pity. We take backwards-incompatible changes too lightly. I'm working with Scala, for example, and love the language, but it breaks binary compatibility between major versions because Scala's features don't map that well to the JVM's bytecode, and the encoding of things like traits is fragile. The community is kind of coping with it by having libraries support multiple major versions at the same time. It's not much, but at least the tools have evolved to support this (e.g. SBT, IDEs, etc.).
That said, if you're looking for a language that has kept backwards compatibility, that's Clojure. Rich Hickey has a recent keynote that I found very interesting: https://www.youtube.com/watch?v=oyLBGkS5ICk
I don't know how to feel about it. On one hand this means Clojure will probably never fix some obvious mistakes in its standard library, without changing its name (like what they did with ClojureScript). On the other hand, as I said, it is liberating to have code written in Clojure 1.0 still working in 1.8.
Your code stops compiling with a pretty unhelpful error message. Y'know, like Segmentation Fault 11, and a stack trace into the swift compiler sourcecode if you're lucky.
Xcode doesn't always crash (though it does, frequently), but syntax highlighting goes away at the first hint of an issue.
I found that on macOS, refactoring even a tiny Xcode project to use Swift added 10 MB of libraries to the resulting bundle. The OS needs to start including a stable set of Swift frameworks by default so that Swift-based programs do not require users to download much fatter binaries.
Swift needs a stable ABI before that can happen. That was planned for Swift 3, but it didn't work out. Currently it's planned for Swift 4, which will ship next fall. If Apple immediately takes the opportunity to bundle the libraries with the system, then apps targeting iOS 11 or macOS 10.13 written in Swift 4 will be able to avoid embedding the libraries.
I don’t necessarily mind paying the cost once but since I tend to refactor applications into multiple utility applications (separation of concerns, crash stability, security, etc.) this means each sub-bundle pays the cost too. At the moment I have not found an obvious way to avoid this, aside from somehow hacking all of them to symbolically-link to the same copy of the Swift libraries or something. Therefore, instead, I stick with Objective-C.
I've recently started working on an iOS app in Swift 3 and it's a mostly pleasant experience, even though Xcode doesn't have Vim keybinds.
I hope Swift can break out of the app-building niche, but from these slides it sounds like it will be a while until it can compete with Go and others in the high-concurrency space.
I use AppCode - it has Vim bindings, I believe. AppCode also has bookmarks, which for some reason Xcode has removed. The latest AppCode is almost as good as Xcode for 'fix-its', though still a bit slower.
The presentation claims that Swift offers "progressive disclosure of complexity", but I don't really buy the argument. "Progressive disclosure of complexity" works when you are learning a language and writing code, because you choose what language features you use. But it doesn't work when you read code written by someone else.
>"Progressive disclosure of complexity" works when you are learning a language and writing code, because you choose what language features you use. But it doesn't work when you read code written by someone else.
So? Most app developers are small or one-person shops, and don't "read code written by someone else".
That's beside the point, since "progressive disclosure of complexity" would still work for using those "open source libraries", as the most common case of bringing a third-party library into a Swift app is to consume and/or extend its API, not to refactor or maintain it.
I've used lots of third party libraries and for the most part I don't care at all what's going on in their code as long as they work for what they do.
Sadly this will go nowhere until Apple invests significantly more resources into non-Apple platforms.
They don't support Ubuntu 16.10, there is no IDE support besides Xcode, and no Windows support at all. And I haven't heard even a mention of Android support.
Such half-assed Linux support and nonexistent Windows support will leave it a toy language on these platforms.
For a while I held off investing too much time in Swift, due to its instability (breaking changes in every big release). But I noticed many potential clients in the Netherlands already work on Swift projects. I've already lost some freelance work due to my limited (<1 year) Swift experience. Even though the language is still unstable, it's probably better to just bite the bullet and build up some Swift experience (perhaps by working on personal projects) if one is a freelancer.
So I understand you have Objective-C experience but not Swift experience? It was so easy to pick up Swift: just remember to never force unwrap and always use guard let or if let. Stick to that religiously, even when it doesn't seem to make sense, and a dramatically more stable app will be your reward.
Of course, you should try to actually do something useful inside the guard's else, so the user knows what went wrong, or at least log it with a remote logger so you can fix it in your next release.
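A minimal sketch of that style (fetchUser and log are made-up stand-ins, not a real API):

struct User { let name: String }

func fetchUser(_ id: String) -> User? {
    return id == "42" ? User(name: "Ada") : nil   // stand-in for a real lookup
}

func log(_ message: String) {
    print("LOG: \(message)")   // stand-in for a remote logger
}

func greet(userId: String?) {
    // guard let exits early with something useful,
    // instead of force unwrapping and crashing
    guard let id = userId, let user = fetchUser(id) else {
        log("greet failed for id \(String(describing: userId))")
        return
    }
    print("Hello, \(user.name)")
}

greet(userId: "42")   // Hello, Ada
greet(userId: nil)    // LOG: greet failed for id nil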
The hard part is maintaining a large Swift application, but as long as your clients are paying the bills you shouldn't worry as a freelancer.
I have Swift experience (about 7-8 months for my own tvOS game) and I like the language a lot. It is not hard to pick up Swift. But to write Swift code with the same quality as my Objective-C code, I would need more experience, which is why many clients preferred to pick more experienced Swift developers for some projects. After all, idiomatic Swift code can be quite different from idiomatic Objective-C code. Learning the Swift idioms takes some time.
I think a client has much higher expectations of a freelancer than of a permanent employee. A client doesn't want a freelancer to learn (much) on the job; they are paying for an experienced developer who can deliver quality from day one. After all, freelancers are usually much more expensive.
The sad part is the vicious circle: side projects aren't accepted as experience, even cool ones, so one never gets the contracts needed to earn that experience on official customer projects.
Even though I haven't used it much, I like Swift: it's a big upgrade over Objective-C.
I do have some thoughts on the JIT/AOT part of the presentation though, which seems the most interesting to me. After many years where the two camps were largely separate, what we're seeing recently are more cases where JIT and AOT compilation get combined in new and interesting ways:
• Swift is AOT except when developers need fast response times, then it becomes JIT.
• Rust and Go are pure AOT always.
• Android went from interpreted, to JIT, to AOT, and back to JIT with AOT at night.
• Java 9 introduces mixed AOT/JIT mode, in which you can pre-compile Java modules to native code ahead of time, but that native code still self-profiles and reports behaviour data into the runtime which can then schedule a JIT-compiled replacement for any given method using the new profiling data. It also introduces ahead of time cross-module optimisation (a "static linker"-like thing called jlink).
Obviously these are quite different approaches, but they're working with similar(ish) languages on identical hardware. So which approach is going to win out, in the long run, if any?
It's fair to say that LLVM is the most advanced compiler toolchain for C-like languages. The JVM is, I'd argue, the most advanced runtime for more dynamic languages like Java, JavaScript, Python, Ruby etc. But it seems to me that LLVM struggles with some optimisations that should be quite basic and important: the way I read the presentation, it won't do things like inline parts of the standard/collections library into calling code because they reside in different modules, and inter-procedural optimisation is only done at the level of the module. Otherwise compilation times become too problematic. A profile-directed JIT compiler has no such problems and will happily optimise across module boundaries.
As languages evolve, they seem to take on more dynamic features. This is especially true with the incorporation of functional programming styles where you're frequently passing functions to other functions and working with immutable data, which implies dynamic dispatch (when you can't inline through the call chain) and lots of copying of short-lived data structures (what a generational GC eats for breakfast).
Another major trend is multi-core processors, which we still aren't as good at exploiting as we should be. But more dynamic runtimes tend to find ways of using multiple cores even for single-threaded programs: if your program is inherently only able to use 1 or 2 cores at once, but you're on an 8 core machine and you aren't heat/power constrained, then using the other cores for concurrent GC or JIT compilers is basically free. If these techniques can speed up the execution of your program threads then it's a win, even though analysed holistically it might look like a loss. It is notable that multi-threading LLVM is apparently only a recent feature.
These trends lead me to believe that the JVM architects have the right general direction:
• Allow code to be AOT compiled but still optimise at runtime using multiple spare cores at once, to extract performance even in more modern, heavily OOP or FP-oriented code.
• Use a generational GC that is optimised for objects being mutated through copying and which doesn't cause large quantities of cache-coherency traffic due to the use of atomics all over the place.
• Support inter-procedural ahead of time optimisations like dead code elimination and statically resolved reflection with an optional link phase.
• Rely on pure JIT when developing to keep the edit-compile-run loop tight.
• Use a high level code transport format like bytecode that minimises the exposed ABI, thus allowing you to tweak and optimise the ABI used at runtime without breaking the world.
Apple has gone in this direction with the app store compiling bitcode to binaries for you, but LLVM bitcode was never really designed for that use case. Still, it seems like LLVM will be heading further in this direction in the future, or at least would like to, judging from the following comment:
> I dream about the day when we can speed up the edit/compile/run cycle by using a JIT compiler to get the process running, before all the code is even compiled. In addition to speeding the development cycle, this sort of approach could provide much higher execution performance for debug builds, by using the time after the process starts up to continuously optimize the hot code.
> Unfortunately, getting here requires a lot of work and has some interesting open questions when it comes to dependence tracking for changes (how you invalidate previously compiled code). That said, this would be a really phenomenal research area for someone to tackle.
These problems were already tackled years ago by the HotSpot project, which is capable of tracking the dependencies between compiled methods and invalidating compiled code when the assumptions used in its compilation become invalid. As you can also do manual memory management and bypass the GC in languages that target the JVM, I wonder if Swift would benefit more than Chris Lattner imagines from a port targeting it (at least once the JVM supports value types, which is getting experimental support at the moment).
> After many years where the two camps were largely separate, what we're seeing recently are more cases where JIT and AOT compilation get combined in new and interesting ways:
The sad part, in regard to mainstream languages, is that mixed AOT/JIT environments were already available in Lisp, Eiffel and Oberon systems, for example.
JIT for the programmer workflow, development environment, with AOT for release builds deployed to production.
As Alan Kay says, the industry would have so much to gain if it wasn't a pop culture.
I've noticed that the compiler (both the real-time analysis and actual compiling) gets sluggish and unreliable as a project grows. I think I've found the source of the problem, and unfortunately for people with large code bases, it does take a bit of rewriting to get things back into shape.
Since the language is strictly typed yet allows a loose syntax, the compiler has to infer which type you actually mean. For example, the type Any? always works, but perhaps you are declaring a dictionary that always uses a string as a key? The compiler doesn't know until it has analyzed all the keys and seen that you are indeed always using a string as the index. Put one Int at the end of a long list and that inference breaks.
Now something like this doesn't hurt:
let indexOfTab = 4
Because the compiler sees you're trying to put an Int in a constant.
But this is a bit more difficult:
let tabDict = [ /* long list of declarations here */ ]
let indexOfTab = tabDict.first(where: { key, value in value == "contacts" })?.key ?? 0
First it needs to know whether tabDict really has only Ints (or similar) as keys and only Strings as values, because those types now need to match what happens inside the closure.
This makes the complexity of the analysis explode, to the point that if your code consists of many of these constructs, you suddenly find your computer slowing to a crawl.
The alternative, however, is a bit uglier (I could use a bad word to describe it, like "Java"):
let tabDict: [Int: String] = [ /* long list of declarations here */ ]
let tabWithContact: (key: Int, value: String)? = tabDict.first(where: { key, value in value == "contacts" })
let indexOfTab: Int = tabWithContact?.key ?? 0
(bear with me, this code is not checked in Xcode, just a hasty example)
Now, step by step, the compiler no longer has to guess: if the first line doesn't type-check (because, for example, you've added a string as a key), it just won't compile. The same goes for the closure, if you get the types wrong inside it. There's also not much to guess in the third line: the compiler never has to infer what type the key is, and it won't compile at all if you declare indexOfTab as a String.
What I would like to see is:
* A tool that can actually show the points where your code is slowing down the compiler a lot
* A tool that proposes splitting declarations up for faster compilation, just as much of the migration to a new version of Swift can be automated
All I do now is guess, adding more annotations to get performance back to acceptable levels.
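For what it's worth, there is a partial answer to the first wish already: the Swift frontend has an unofficial, unsupported flag that prints how long type-checking each function body takes, which at least tells you where the hot spots are. In Xcode you'd add it to OTHER_SWIFT_FLAGS:

-Xfrontend -debug-time-function-bodies

The build log then reports a per-function type-checking time, so you can sort the output and attack the worst offenders first.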
Not disagreeing with anything that you're saying, except that dictionaries are declared `[Key: Value]` and not `[Key, Value]`.
By the way, if you're ever tempted to write `.filter { ... }.first`, use `.first(where: { ... })` instead. `filter` will iterate the entire sequence, while `first(where:)` stops after it finds a match.
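The difference in a nutshell:

let numbers = Array(1...1_000_000)

// builds a 999,990-element intermediate array before taking the first element:
let viaFilter = numbers.filter { $0 > 10 }.first

// stops iterating as soon as a match is found (after 11 elements here):
let viaFirst = numbers.first(where: { $0 > 10 })

print(viaFilter as Any, viaFirst as Any)   // Optional(11) Optional(11)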
This sounds bad, what type inference algorithm is Swift using? If they are computing some kind of least upper bound, I'm not sure why it would get so slow.
The difference between JS and Swift is that Swift compiles everything with static types. JS just plods along and does one of its funky things with types when it encounters the next instruction, but Swift actually needs hard types to be able to compile and run at all. So you need to provide them, or it figures them out by itself. There used to be a famous error about statements being "too complex". Those still exist, but have become a lot rarer.
I'm pretty well versed in static vs. dynamic typing.
In that example, they aren't inferring through a generic + type, but rather doing it in a brute-force manner, forking the world each time they have to consider an alternative. I guess this pitfall is why experienced type system designers make the big bucks.
"While our community has generally been very kind and understanding about Swift evolving under their feet, we cannot keep doing this for long. While I don’t think we’ll want to guarantee 100% source compatibility from Swift 3 to Swift 4, I’m hopefully that it will be much simpler than the upgrade to Swift 2 was or Swift 3 will be."
I contacted one of the organizers and was told that the keynote was recorded. I can only assume that it will show up online eventually, perhaps on YouTube.
I thought about sitting down and learning Swift, but they make breaking changes so often I really can't justify the time yet. I'm very curious what IBM comes up with; not sure yet why they're so interested in it other than LLVM.
This is why I avoided embracing Swift until this year - we started using it in parts of our app but the 200 breaking changes in our project every 6 months was a major PITA.
Starting with Swift 3 they claim to be source-stable which is why I've dared embrace it wholeheartedly now.
Well, but now Scala (which I knew and just kept running with) is getting a native compiler, eventually iOS support, an academic underlying calculus for the language, all kinds of good stuff.
I have a lot of respect for Lattner, obviously, but Swift has spent a LOT of the goodwill that people were willing to extend it. Unless IBM pulls a really good thing out of their hat, I fear that Swift might end up in a Perl 6 position, where they finally made a language worth a shit but ran out of gas as they reached top speed.
I guess time will tell. I WANT Swift to be awesome, but in order to make it a rational choice, I NEED Swift to be stable, common, and approachable for new employees (something Scala can struggle with for some programmers).
It's really gonna bum me out when Swift 4 gets announced this spring (/s) and completely ruins everything, and then Swift 5 would of course be announced around the time Swift 4 becomes even remotely stable.
Where Apple's OSes are concerned, it's just like with any other first-class platform language: developers who want to target them won't have any other option.
All the language improvements Objective-C has received since Swift was made available were only there to improve the interoperability between the two languages.
Also, just check the number of WWDC talks from the past two years that still used Objective-C in their presentations.
This is why it is so important to have OS vendors sponsor new languages.
> I'm very curious what IBM comes up with, not sure yet why they're so interested in it other than LLVM.
They are diversifying outside Java as their bet on enterprise languages.
J9 (their JVM) has been modularized and became the basis of Eclipse OMR, an infrastructure to implement programming languages, with existing support for PHP, Python and Ruby.
Actually the opposite is true: Swift is an Apple ecosystem language, and that's why it sucks. I'd love it if it could replace languages like Java or C#, but we're decades from that at the current pace (if it ever happens).
They open sourced the language more than a year ago. The Linux builds are crappy, for example there is no release for Ubuntu 16.10. The IDE support on Linux is basically non-existent. There are some initial offerings, but they are in alpha or pre-alpha state.
The situation on Windows is even more bleak, you can only get the compiler via some Linux emulation.
All of this obviously results in no, or very very few, libraries that aren't aimed at iOS/OSX development.
So no, Swift does not allow you to code anywhere. And if Apple continues the way they do it now, it will end up as Objective-C, a language that is solely used by iOS/OSX devs.
I don't know about this. I think maybe it will take the node.js route: A language is required for one platform (web/iOS) and since developers like using a single language for everything it will be used for all kinds of other things, even things the language is not suited for.
What on earth is this all about? Having spent a year working mostly in Swift I can't wait to see the back of it, even "billions" of dollars spent wouldn't turn it into a halfway decent programming language.
The problems you've listed in reply to my sibling commenter are all on the tooling side, not with the language, and yes they could definitely be fixed by spending "billions" on people and resources dedicated solely to improving them.
You didn't specify anything that says Swift isn't a "halfway decent programming language."
I don't think I've ever seen anyone regard the compiler as tooling; SourceKit, sure, that makes sense, but not the compiler. Especially in this case, as there really is just the one compiler, so those are issues that will hit anyone using Swift now. But if you want problems with the language itself, let me see:
It suffers from problems similar to Scala's, where it tries to blend OO concepts with FP concepts. Variance is where this flares up terribly, because the language doesn't provide any explicit support for covariance or contravariance. You end up with invariance plus subtyping in a bunch of places, which is not a nice combination.
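A quick sketch of what I mean (Box is a made-up generic): the compiler special-cases covariance for arrays, but user-defined generic types are invariant, so mixing them with subtyping gets awkward fast.

class Animal {}
class Dog: Animal {}

let dogs: [Dog] = [Dog()]
let animals: [Animal] = dogs          // fine: arrays get built-in covariance
print(animals.count)                  // 1

struct Box<T> { let value: T }
let dogBox = Box(value: Dog())        // inferred as Box<Dog>
// let animalBox: Box<Animal> = dogBox   // error: Box<Dog> is not convertible to Box<Animal>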
Protocols like Equatable and Hashable are implemented with compiler magic for arrays and tuples. This means you run into trouble with generic functions that, say, accept two instances of the same Equatable type, if you pass in an array of Ints. The underlying reasons behind this (which mostly escape me at midnight on a Sunday) are something to do with protocols on generic types: you can't say "for a List, if the elements are Equatable, then the List is Equatable", IIRC. This one annoys me no end, as we've had to fudge our way around it several times on the same project.
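Here's the shape of the problem (areEqual is a made-up function; this is Swift 3 behaviour):

func areEqual<T: Equatable>(_ a: T, _ b: T) -> Bool {
    return a == b
}

_ = areEqual(1, 2)      // fine: Int is Equatable
// _ = areEqual([1], [1])   // error: '[Int]' does not conform to 'Equatable',
//                          // even though [1] == [1] compiles via a special == overload on arrays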
For some reason that eludes me, Optional<T> is given all sorts of special-case syntax ("if let" and "guard let"), which is like a crap version of "for yield" from Scala or do notation in Haskell that only works on that one type. As a result, a lot of convenient abstractions are just not possible, and you end up writing all kinds of horrid-looking chained map/flatMap calls instead. Sometimes, because of this dissonance, a block of code might have a guard let block, then some other stuff, then an if let block with an else condition, whereas if you wrote the same thing in Haskell it would be one do block and that's it.
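A sketch of the dissonance (User and Address are made-up types): the blessed syntax below only works because the abstraction happens to be Optional; for anything else you're stuck with the flatMap version.

struct Address { let city: String? }
struct User { let address: Address? }

// The special-cased syntax:
func city(of user: User?) -> String? {
    guard let address = user?.address, let city = address.city else { return nil }
    return city
}

// The same logic written against Optional's monadic interface,
// which is all you get for any abstraction that isn't Optional:
func cityViaFlatMap(of user: User?) -> String? {
    return user.flatMap { $0.address }.flatMap { $0.city }
}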
Concurrency and parallelism support is effectively Grand Central Dispatch, which is the most imperative API ever on macOS/iOS and which, on other platforms (according to the GitHub page I just looked at), is in the early stages of development. You're just calling the old Objective-C API, and the language doesn't help at all there.
Edit: As a bonus addition, it's a statically typed "FP" language without higher-kinded types, which makes a bunch of handy abstractions a real pain. See the Swiftz project for how they have to define things like monads as an example.
Other people in here have listed issues like compiler crashes on valid code; you've got to spend time unwinding changes until it works and then write the code differently.
Simple things like "a + b + ... + n" cause compile times to balloon exponentially, which means for arrays you end up refactoring concatenation of immutable arrays into a mutable array you call append(contentsOf:) on a bunch of times.
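The refactoring looks something like this (array names made up):

let a = [1, 2], b = [3, 4], c = [5, 6]

// slow to type-check: the compiler considers every overload of + at each step
// let all = a + b + c

// the fast-to-compile version:
var all: [Int] = []
all.append(contentsOf: a)
all.append(contentsOf: b)
all.append(contentsOf: c)
print(all)   // [1, 2, 3, 4, 5, 6]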
SourceKit on the surface sounds great, but in truth it's a simple text parser that doesn't understand type aliases for example.
The compiler _really_ struggles with anything beyond very simple generics, first the type inference starts to fail and then it'll refuse to compile until you break the code apart and add type ascriptions.
The compile errors are often absolute nonsense, pointing nowhere near the actual errors or are like the Magic 8-ball coming up "ask again later".
The project I'm working on has been a mix of Swift and JavaScript, we're happily increasing the ratio in favour of the latter because of all the pain we've had with Swift. I personally don't even like JavaScript but it's not even a contest between the two.