It has the advantages of C# as a language but no garbage collection. Granted, the reference counting in Swift can be problematic, but it seems to be improving with each release.
The functional programming aspects of Swift I find useful. There are a number of features in Swift that decrease the amount of code, e.g. if let/guard. The use of optional types increases program safety.
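A minimal sketch of the two features mentioned above (the function names are hypothetical, just for illustration):

```swift
// `guard let` exits early when the optional is nil,
// keeping the happy path unindented.
func greeting(for name: String?) -> String {
    guard let name = name else { return "Hello, stranger" }
    return "Hello, \(name)"
}

// `if let` binds the unwrapped value only inside the branch.
func shout(_ word: String?) -> String {
    if let word = word {
        return word.uppercased()
    }
    return ""
}
```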
The fact that you can use classes declared elsewhere without any consideration of dependency I find very powerful, and of course with great power comes great responsibility.
The power of swift collections is fantastic, and the syntax compared to Objective C is so succinct.
Since its arrays and strings are value types (there are no reference-semantics mutable containers like NSMutableArray), your code often ends up copying a lot of memory around if you're not extremely careful, and you quickly get accidentally quadratic complexity.
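A hedged sketch of the accidental-quadratic pattern being described (helper names are made up): rebuilding a value-type array with `+` copies the accumulated prefix on every iteration, while appending in place grows it with amortized constant cost per element.

```swift
// Each `result + chunk` allocates a new array and copies the whole
// prefix again, so total work grows quadratically with input size.
func joinSlowly(_ chunks: [[Int]]) -> [Int] {
    var result: [Int] = []
    for chunk in chunks {
        result = result + chunk
    }
    return result
}

// `append(contentsOf:)` mutates in place when the storage is uniquely
// referenced, avoiding the repeated copies.
func joinQuickly(_ chunks: [[Int]]) -> [Int] {
    var result: [Int] = []
    for chunk in chunks {
        result.append(contentsOf: chunk)
    }
    return result
}
```

Both produce the same result; only the allocation behavior differs.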
A lot of APIs are just wrappers around ObjC APIs, with lots of memory management calls required for bridging. You really need to be careful what calls you put in a tight loop.
And that's before you even start with the good stuff. Protocols with associated types are so slow that they are unusable for anything performance sensitive. Theoretically the compiler should be able to optimize it incredibly well since it's all statically typed, but in practice the compiler seems to never be smart enough. Call stacks get very deep very quickly (protocol witnesses, closures, ...).
In my experience the easiest way to make Swift code fast is to rewrite hotspots in ObjC.
In practice I agree it's easy to write code that's slow, but you can't knock the language itself for user error. They are still working on improving the user experience, such as how 4.2 will remove redundant generated retain/release calls. And hopefully they make the compiler smarter at figuring out which specialization functions to output for generics/protocols.
There is a subset of Swift that is fast. But if you tell people that Swift is faster than Obj C, they will think that they can rewrite code in Swift and it will be faster. That may be the case at some point in the future, but right now that's just not true yet.
In my experience, the opposite is the case. Swift is a great language, it's safe, and nice to work with. But those bits where you are processing data in a tight loop -- you still may have to do that in Obj C.
Obviously this depends a lot on your problem, but I have seen 2x speedups and more for things like string processing (rewriting things from Swift in Obj C)
And as I said, that's without using the fancy parts in Swift (things like type erasure absolutely kill performance)
No it's not.
> because of the static type checking, which is a good thing
That's a reason why you think it should be faster. Actual performance is measured, not argued.
When you actually measure Swift, it is horribly slow.
UPDATE: details in other posts here. In general, my measurements have found Swift code to be 50% to 50x slower.
If you are comparing primitive types in Swift with objects in Objective-C, then you are comparing apples to oranges. You need to compare objects in Swift with objects in Objective-C and primitive types in Swift with primitive types in Objective-C.
Rephrasing, my findings are similar:
Swift in debug mode is so incredibly slow that even using primitive types with Swift is slower than Objective-C objects. In release mode, it approaches, but rarely reaches native Objective-C speed.
More details here:
That's how all Objective-C code is written -- every method call goes through objc_msgSend.
Both of these factors have forced me into using lower-level Objective-C++ code on numerous occasions.
nitpick: reference counting is a form of GC in standard CS terminology. It saves some memory compared to other GC methods but is slowish.
However, to say that Swift is a regression over Objective C seems very short sighted to me. Can you imagine if Apple continued with Objective-C? There would be no way to reverse decade old decisions that simply aren't the right decision anymore. Every new feature would need to be built on top of code from 1984. Swift was very future focused from the get go, and is a bid to ensure that developing for Apple devices in 2025 is not an archaic mess.
Like anything new, it's not perfect at first. That is the world we live in now. However, Swift is getting incrementally better at a very good pace. There are some reasons left to prefer Objective-C right now, but I'm sure that in a few years these reasons will be far fewer.
My own opinion is that I'm much more productive writing Swift, after spending a similar amount of time with each language. If you think the same is true of Objective-C, then that's great - there are certainly still some upsides to developing with Objective-C. However, to say that Swift is a mistake is something that I can't reason with, since it's one of Apple's most forward thinking decisions to date.
Of all the reasons to prefer a new language, this is the worst one from a platform perspective.
C and POSIX allowed Unix to conquer the world. They allow code written since the 1980s to run with minimal modifications even on modern systems. The other major operating system family -- Windows -- is also strongly associated with backward compatibility.
Everybody always wants to throw away the legacy code, because maintaining it is expensive. But throwing it all away and starting over from scratch can be more expensive. Especially when you're dealing with millions of lines of third party code. Which is why the only platforms that are still popular after 30+ years are the ones that didn't force everybody to do that.
I mean take over the world.
I know it's popular to hate C these days, but remember the context. The competition in the 80s and 90s was operating systems that didn't even have memory protection or the concept of user accounts. Unix was never going to lose over security.
And we're not talking about the language to write daemons or kernels in, just the system API. A system should make it easy to write your code in any language you want regardless of whether any other part of the system uses it. There isn't supposed to be a One True Language. System APIs that advantage any specific language are problematic -- which is why the C family is popular for APIs. It's simple enough that it's easy to wrap using nearly any other language.
For example, Java was the Next Big Thing around the time when IPv6 was new, and Java is "safer" than C. That would have been an opportunity for an operating system to deprecate the BSD sockets API, never extend it to support IPv6, and require internet-facing code to use a Java API in order to use IPv6.
The problem is that's just making trouble for everyone. People who want to keep using their C programs, or use any other language (like Rust) that isn't Java, will either get saddled with some awkward and slow kludge to use the Java API from another language, or will just not bother to support IPv6, making IPv6 less popular than it is already. People who do want to use IPv6 but not Java will prefer other operating systems, making a system that does that less popular in general.
It's no use to make something nominally more secure in a way that just causes people to not use it.
Really? Apparently we lived on different planets.
VAX/VMS, OS/360, Burroughs, AS/400, ....
UNIX was available with source code for a symbolic price to universities, that was the difference.
On mainframe hardware similar in price to a single family house. The competition on hardware individuals could hope to afford was the likes of DOS and MacOS Classic. And even on mainframes, the ones that have survived like OS/400 are the ones obsessed with backward compatibility.
Doing so was one of the earliest production uses of Rust, and is still among the largest.
Objective-C was hailed as the one true OO language by its smug fans, then Apple said one day that it's legacy software and it became legacy.
Many still don't completely understand that and try to argue that it should be somehow brought back.
To those I ask: did Apple bring back the floppy, DVD, thick laptops and non-butterfly keyboards? No. Objective-C is destined for oblivion and it's time for you all to accept that.
Objective-C was always a reasoned compromise. No more, no less.
Apple is usually quick to deprecate (which does have its own advantages), but given that a large part of their own tech stack is built on Objective-C, I think they'll play the long game here. I would be surprised to see it deprecated before the end of next decade (although I wouldn't put it past Apple either...).
No, wait… that was their failing as well. They took a compiler engineer’s vanity project to invent Rubberized C++, and made it their App development “platform of the future”. And in so doing, trashed three decades’ investment in their Cocoa Applications development platform, which is the actual valuable part, simply to reinvent the same old wheel they had before. While not actually adding any fresh value. To what end; stroking nerd ego?
Languages are ten-a-penny; you create a language to fit the problem space, not vice-versa. If you can’t get a 10x efficiency on developer time/LOC, don’t even bother. You’re just wasting everyone’s time.
Tony Hoare called NULL his billion-dollar mistake. Swift is easily a million man-hour mistake, and still counting. Google must be laughing right now.
Um, no. Macs were programmed in Pascal in 1984.
If a variable is read-only, you're forced to think about that by deciding between let/var. Contrast this with objc, where all vars are writable unless you do the extra work of adding the const keyword. objc makes us do more work to get the faster (and safer) behavior.
Similar with Optionals, you're now forced to decide right away if a parameter can ever be nil, whereas with objc you didn't need to declare nullability. Again, it makes the safest and fastest choice easier to make, and allowing nullability is actually more work.
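A hedged sketch of both points (names are hypothetical): immutability and nullability are spelled out in the declaration rather than bolted on afterwards.

```swift
let fixed = 10        // `let`: reassigning `fixed` is a compile error
var counter = 0       // `var`: mutation is allowed because it was declared
counter += fixed

// Nullability lives in the type: `String?` may be nil, `String` may not.
func label(_ title: String, subtitle: String?) -> String {
    // the optional must be handled before use
    return subtitle.map { "\(title): \($0)" } ?? title
}
```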
Generally Swift forces you to pass as much information to the compiler as possible at compile time, and it does it with a delightfully readable syntax. This theme repeats itself throughout the language. more information at compile time is always going to result in safer and more predictable behavior.
full disclosure: i'm an iOS dev working in objc and swift. i love both languages, and i think swift is the obvious path forward.
Swift is designed around premature optimization. In my experience, improving programmer productivity is more important. The more time spent with compiler enforced busywork is less time in Instruments. The same goes for compile times. If you want people to write fast code then help them to iterate quickly.
One of the problems I have is that I love python, but I was hoping that swift would be like python on steroids. Instead, I can’t make the leap because python lets me iterate so freaking fast. If I could have something like that with the ability to incrementally improve my code and compile to a binary, I’d be happy. I thought swift would be it, but your comment helped me understand why it isn’t working for me.
Full disclosure, I’m not a full time dev, so YMMV.
I'm not convinced; the examples you give require _more_ work for the compiler writer, not less. In general, a stronger type system means that the compiler has to do more.
Exactly. In Objective-C, the compiler can't do much, which is kind of a bummer if you are a compiler guy. Chris is a compiler guy.
Consider Swift a kind of public works program for your local compiler team.
"So now contrast that to Swift. First of all: Which question did it desire to answer? Think about it. There is no one clean answer."
I feel that Swift is just a language designed to be a syntax swap of Objective-C, making a few things better, yet some others worse.
To folks that are proficient with Objective-C,
1) How do you like working in Swift?
2) Are you more productive (for medium or large projects)?
3) Do you enjoy it more?
For my part (if you are already very proficient with Objective-C), those answers are no. If you are new to iOS, though, Swift is easier to get started with.
After I left, I picked up Swift. I use it for all of my personal iOS projects. I don't disagree that there are flaws in the language, but they definitely aren't the ones the author pointed out.
> Adding compile time static dispatch, and making dynamic dispatch and message passing a second class citizen and introspection a non-feature.
I can't express how happy I am that an object is exactly what it tells me it is at compile time. The number of nuanced bugs that have bit me over the years, my lord. Let alone explaining this stuff to a new coworker? Dynamic dispatch / message passing / introspection are huge bug sources. If you can't rely on the compiler to tell you what something is, what can you rely on?! This alone dropped bugs an order of magnitude.
> Define the convenience and elegance of nil-message passing only as a source of problems.
Nil messaging is nothing but a source of problems. Can confirm.
> Classify the implicit optionality of objects purely as a source of bugs.
Implicit optionality was a massive design failure in C. Those days are behind us, it's time to move on.
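A small sketch (hypothetical types) of the point the replies above are making: Swift surfaces nil at compile time instead of silently swallowing messages to nil at runtime.

```swift
class Account {
    var balance = 100
}

func balanceDescription(_ account: Account?) -> String {
    // `account?.balance` has type Int?; the nil case cannot be ignored,
    // unlike Objective-C's nil-messaging, which silently returns 0/nil.
    if let balance = account?.balance {
        return "balance: \(balance)"
    }
    return "no account"
}
```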
(1) I love working in Swift. It opens up all sorts of great design patterns -- lightweight view model structs, enums with associated values, easy to manage protocols. Readability is dramatically better. No more copy-paste between header and source. No more dealing with unintelligible types, first-class generics, etc.
(2) Productivity boosts come from less boilerplate, less repetition, less typing, better design patterns being within reach. Static typing eliminates many classes of common bugs. No automatic nils is huge. I can't remember the last time my app crashed in something other than an Objective-C component or framework.
(3) Yes, I do.
2.) Swift confers upon me a quantum-leap kind of productivity increase -- the kind of "programmer time" optimization the Ruby guys used to talk about over Java. The core language is great. Swift enums concisely solve lots of problems that are easy-but-annoying in Objective-C. Protocol extensions are an absolute godsend and I hate the feeling (when programming in something else) of seeing how a protocol extension would solve the problem at hand, but not having them.
Incidentally, I remember vividly when there was talk of adding "concrete protocols" to Objective-C 2.0. (They let you do a similar thing; adopt a protocol to gain a bunch of implementation behavior, not just declare that you support an interface). I was really, really excited about that, too. But the rumor didn't pan out and concrete protocols were never added.
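A hedged sketch of a protocol extension (all names invented): adopters get shared behavior for free, which is roughly what "concrete protocols" would have offered Objective-C.

```swift
protocol Describable {
    var name: String { get }
}

extension Describable {
    // Default implementation inherited by every adopter.
    var summary: String { "<\(name)>" }
}

// Each struct only declares the requirement; `summary` comes along free.
struct City: Describable { let name: String }
struct Person: Describable { let name: String }
```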
3.) Yep, love it. I changed jobs in 2016 and don't even do much Apple platform development anymore; I am mainly using Swift on Linux.
My comments apply to current Swift, so right now that means 4.2. As you go back in time to older versions, I liked it less. In the first year doing Swift, there were still times I enjoyed Objective-C more, but not anymore.
Aww, it's kind of sad when I think about it; after all those years, Objective-C is now dead to me, like NewtonScript...
I had my own instance of that with Java -> Kotlin — and this is on the backend. I can’t imagine what that transition feels like on Android.
Curious as to what those might be, can you shed some light?
I was also puzzled by the emphasis on enums in Swift, as they are a language feature I rarely if ever use in Objective-C, not because I want to use them and find them inadequate, but because I generally don't see much of a need for them.
For example, in Swift, you can have an enum which says its values holds exactly one of:
A) several specific semantically meaningful strings
B) some other object
C) several objects
In Obj-C, you would probably express this as a single object, and write a bunch of error-prone logic to handle which of these states you're in, possibly missing an important last-minute addition in one of your methods for walking over all the possible states.
In any OO language, I’d write different classes for cases A, B and C that share a common message protocol (whether formal or informal) and use them uniformly/polymorphically.
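For concreteness, a hedged sketch of the enum the parent comment describes (case and payload names are invented): exactly one of a known keyword, a single payload, or several payloads, with an exhaustive switch.

```swift
enum Value {
    case keyword(String)   // one of several semantically meaningful strings
    case single(Int)       // some other object (an Int here for brevity)
    case many([Int])       // several objects
}

func describe(_ value: Value) -> String {
    // The switch must be exhaustive: forget a case and it will not compile,
    // which is what catches the "important last minute addition".
    switch value {
    case .keyword(let word): return "keyword \(word)"
    case .single(let item):  return "one item: \(item)"
    case .many(let items):   return "\(items.count) items"
    }
}
```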
2) For medium projects: absolutely. For large projects: I haven't had the pleasure yet, but the truth is that Swift encourages a much better approach to modularity, and if you're working in a large project as a monolith, you're probably doing something wrong.
3) Hands down, yes.
The entire language is designed to make the most costly part of development, quality, so much easier. That’s been its main purpose and theme since day one. And it’s not just optionals.
Today some code wouldn’t compile because I forgot a switch case. I remember when I didn’t have to worry about stuff like that: Objective-C would let me compile, deferring until weeks later the obscure bug it caused and the painful multi-hour debugging session to find it.
"Version 3 and subsequent versions were essentially a subset of C++ and supported basic object oriented programming concepts such as single inheritance as well as extensions to the C standard that conformed more closely to the requirements of Mac OS programming. After version 6, the OOP facilities were expanded to a full C++ implementation, and the product was rebranded Symantec C++ for versions 7 and 8, now under development by different authors"
It’s not about the people who are already familiar with Objective-C. It’s about the next generation of developers, who find Objective-C obtuse.
The current iOS developers can stay with Objective-C for the foreseeable future. Microsoft had to drag developers from C/C++ and VB6 to .NET. Apple has to do something similar.
Recently, I've begun re-writing some of my personal server-side projects from Java EE to Swift. While the new frameworks (Perfect, Vapor, Kitura, etc.) are good, there are still some areas where they are lacking. Cryptography, for example. There's a real lack of good crypto libraries for Swift that deploy on Linux. I'd LOVE to see the equivalent of the JCE (Java Cryptography Extensions) in Swift. Right now, most of them are just weak wrappers around OpenSSL. But, from a performance and resource perspective, Swift is GREAT compared to Java. Faster, and much less resource overhead.
I think Swift solves the problem of jailbreaking. If iOS were written entirely in Swift, there would be no jailbreak tweaks, because there is no method swizzling.
I see Swift as an evolution of C++, similar to Java and C#. Plenty of people can program in Objective-C, but not many people actually understand it IMO. I think an expert Objective-C programmer can be more productive than an expert Swift programmer. But Swift is better for large teams of average, interchangeable programmers.
I think those who say that are more productive in Swift were simply not very productive in Objective-C. People who are productive in Objective-C will despise having to work with another layer of abstraction that provides no benefit for them (it prevents bugs that they never created in the first place).
These are all things that could have just been added to ObjC through language and IDE updates, but they ended up in Swift, and so there I go.
I like plonking down some quick ObjC, but my Swift programs feel much, much more correct.
(Of course, when dealing with C APIs, Swift is downright nauseating.)
Would that we could do this with other languages.
"Swift is a successor to both the C and Objective-C languages."
This. Even then Swift (the last I really looked like v2) was full of special one offs and smacked of design by committee.
It would’ve been much more interesting if they’d worked on making an updated Objective C language. Maybe even break C superset constraint and add in some new syntax. It’s even more sad that ObjC message passing can support distributed systems.
Probably time to move back to Linux. Too bad GNUStep never took off. Or Etoile (?).
Mainly it seemed that instead of supporting full functional concepts, they one-offed feature that _looked_ like it was functional but weren't. Unfortunately I can't find the original article that reviewed the Swift type system.
But the OP is right in that Swift aims at being a silver bullet, yet the pace at which it evolves compared to that goal is absolutely scary. Concurrency hasn't moved at all, and in the meantime server side performance is a total disaster (https://www.techempower.com/benchmarks/#section=data-r16&hw=...).
There is also nothing for cross-platform development (real cross-platform, not just iOS and macOS).
So, personally, my next experiment is going to be full stack dart.
> Concurrency hasn't moved at all
Yeah, it hasn't moved because GCD is already pretty mature, and they don't seem to have made any effort at other concurrency paradigms. I'd love to see how something more like CSP would work (Zewo has an implementation, but I haven't had a chance to work with it yet).
> There is also nothing for cross-platform development (real cross-platform, not just iOS and macOS).
Not yet; it doesn't quite seem like it's ready to be a practical goal. I'm curious to see if that changes when they manage ABI stability.
It also doesn't provide more parallelism regarding I/O than OS-based threading. Go has lightweight threads, which makes context switching fast, and Node has async I/O all over the place. Maybe SwiftNIO would make a difference, but it would only bring Swift up to what the first version of Node.js provided.
My point was that this manifesto https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9... was written a year ago and nothing has been implemented or announced at WWDC (but maybe I missed a talk...)
Although as I mentioned in another comment, if you're looking for something more mature, Kotlin is a great option. Vert.x has the widest database support and is lean and fast, so this is probably what you're looking for. But if just Mongo/Cassandra/Redis is acceptable, Spring Boot w/ Reactive Web looks pretty cool, too.
If you want something like a more viable server-side Swift, with a much better story around concurrency, you should check out Kotlin + Vert.x
I've read that AngularDart is particularly little-used and unpopular. Maybe Flutter is where Google is placing their efforts now.
> Swift actively distracts me in that endeavor by making me answer questions I really don't want to answer right now. Yes, stuff might be less correct in the meantime, but heck that is what I want during the design phase. Find my concept, sweet spot, iterate, pivot quickly.
I have much of the same feelings with Rust.
This also fits into their "we don't need to collect data" mindset; they seem to iterate on assumptions, not direct feedback from customers. They create many prototypes and then implement the most viable one, and at that point they know exactly what they need.
My problem with Swift is my feeling that the people who developed it don't like Objective-C. It really feels like the Java direction Apple tried to take in the late 1990s. You can argue about nil messaging, but it's part of the landscape.
I should be much more productive, but I'm not and that sucks.
Swift is supposed to be modern, only to be burdened by API compatibility work, which is non-trivial.
This significantly slows certain important developments, like ABI stability (5.0), full generics (5.0), and concurrency (maybe 6.0?).
A year after the 4.0 release, and in the coming few months, 4.2 will be released rather than 5.0. This means the timeline for 6.0 gets pushed even further back.
While Swift is modern in areas like optionality, first-class immutable structs and (my favourite feature) enums with associated values, it lacks many other modern features we have come to expect from a modern language. E.g. callbacks are still the way to handle async control flow (one of the regrets of Ryan Dahl in his JSConf EU talk https://www.youtube.com/watch?v=M3BM9TB-8yA)
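A hedged sketch of the callback style being criticized (function names are made up, and the "async" work is simulated synchronously for brevity): without language-level async support, each step nests one level deeper.

```swift
// Stand-ins for asynchronous work, invoked via completion handlers.
func fetchUser(id: Int, completion: @escaping (String) -> Void) {
    completion("user-\(id)")
}

func fetchScore(for user: String, completion: @escaping (Int) -> Void) {
    completion(user.count)
}

// Chained calls nest one closure per step: the "pyramid of doom"
// that async/await-style control flow would flatten.
var result = 0
fetchUser(id: 7) { user in
    fetchScore(for: user) { score in
        result = score
    }
}
```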
1.0 to 3.0 was spent getting the API right. This is a significant positive investment in the long run, but as someone who has to maintain code, it was not pleasant at all, and I still have code stuck in the 1.0/2.0 eras. I have crashes getting conditional conformance to work with generics. Some code that wasn't crashing on 1.0 or 2.0 crashes on 3.0. Swift clearly is a WIP.
TypeScript had no choice but to be pragmatic (probably after seeing how Dart was not adopted by the larger community for going the Swift way).
Apple basically acts like a benevolent dictator: whatever direction they take is more or less the future, and we have to figure out how to work with the new "world order", which gets updated every year in June.
The best iOS/Mac developers thrive in this environment and get handsomely rewarded (App Store rankings, recognition from Apple); I tried and failed miserably.
>> “It wants to be strongly typed, but at the same time so convenient in type inference, that it falls over its own feet while trying to grasp simple expressions, and they become too complex to manage. By design.”
What is an example of an expression that is tripping over its own feet?
time swiftc too-complex.swift
too-complex.swift:1:49: error: expression was too complex to be solved in reasonable time; consider breaking up the expression into distinct sub-expressions
let a: [Int] = [1] + [2] + [3] + [4] + [5] + [6] + [7]
This is also what annoys me about TDD. Writing code is a process of refinement; if I know exactly what the API I am going to program against looks like, if I know the API I will offer, and I know what the internals of my code are before I write it, then hell, I have done it before.
Often times coding is exploring.
Some other good ones tracked by Michael Tsai:
There was a proposal recently to make this first-class, and not just limited to Objective-C.
Not purely, but often optionality is a source of bugs.
> At the same time it failed to attack, solve and support some of the most prominent current and future computing problems, which in my book all would be more important than most of the areas it tries to be good in:
Concurrency is being hashed out, and will probably be available in a later version of Swift.
> overall API interaction complexity
The Swift API is refreshingly small. Are you talking about Foundation?
What's wrong with the debug-ability, especially with solutions like playgrounds?
> actual App and UI development
…this is literally the main use of Swift.
> developer productivity
Personally, I feel it's much improved.
> While Apple did a great job on exposing Cocoa/Foundation as graspable into Swift as they could, there is still great tension in the way Swift wants to see the world, and the design paradigms that created the existing frameworks. That tension is not resolved yet, and since it is a design conflict, essentially can't be resolved. Just mitigated. From old foundational design patterns of Cocoa, like delegation, data sources, flat class hierarchies, over to the way the collection classes work, and how forgiving the API in general should be.
This is almost a non-issue these days, unless you're interacting with C APIs.
> Just imagine a world where Objective‑C would have gotten the same amount of drive and attention Swift got from Apple?
The ten years before Swift existed?
> Concurrency is being hashed out, and will probably be available in a later version of Swift.
> It keeps defering the big wins to the future
>Concurrency is being hashed out, and will probably be available in a later version of Swift
Sounds to me like "modern Swift" doesn't actually have any of these things. Did you mean to say that the author wasn't familiar with some future version of Swift that doesn't yet exist?
In some cases, wordy Objective-C names will get mapped to Swift's tidier conventions, but the public APIs are all documented in both languages, and Xcode's auto-complete is pretty good at helping you figure it out.
Typically a maintained Objective-C library will evaluate their ability to be used from Swift, and use attributes/macros where necessary to tweak the code import to generate a higher quality Swift interface. Examples would be declaring that a method never returns a nil value, or creating a clearer function name, or switching to the newer Objective-C enum/set styles to have the enumerations generated as Swift types rather than having their API take a raw integer constant or string.
AFAIK, all of apple's frameworks are C or Objective C interfaces, and the swift interface is nearly all generated based on the code and public annotations.
> It should scale from App/UI language down to system language.
Did I miss something where Apple or the Swift team said anything about writing a kernel in Swift? They wanted to make it fast enough for performance-critical situations, yes, but there are plenty of those in userspace -- and I haven't found Obj-C's inability to perform in those situations to be an asset.
> It should inter-op with existing Foundation and Cocoa, but still bring its own wide reaching standard library, adding a lot of duplication.
This is missing the forest for the trees. Bringing Swift-native Foundation APIs into the standard library means they're available on non-Mac platforms, which is huge. I'm also not sure what alternative would be preferable -- should it _not_ interop with Foundation/Cocoa, and totally fragment the ecosystem? Or should _not_ reimplement them with native calls, so that it inherits the performance penalty of objc_msgSend?
> It is functional, object-oriented and protocol oriented all at the same time.
Apple describes Swift as protocol-oriented. It's flexible enough to support other paradigms, but I don't see how that's a liability, and I haven't seen Apple claim "Swift is an FP language".
> It wants to be strongly typed, but at the same time so convenient in type inference, that it falls over its own feet while trying to grasp simple expressions, and they become too complex to manage. By design.
This complaint is so vague as to be impossible to address, except to say that I haven't managed to come across a case of it yet. An example would really help make the case.
> It is compiled and static, but emphasized the REPL and playground face that makes it want to look like a great scripting solution. Which it isn't.
Again, I haven't heard any claim that "Swift is a scripting language". Sure, it's not, but neither is Objective-C, so... I'm not really sure what the complaint is here?
> It seems to have been driven by the needs of the compiler and the gaps that needed to be filled for the static analyzer. Those seem to have been super-charged instead of catering to app developer's actual needs: efficient, hassle free, productive (iOS) App development.
This sounds like a complaint about strong typing and static dispatch in general. Yes, you have to code more carefully, but the flipside is that a huge number of errors that crop up at runtime in Objective-C become compile-time errors in Swift. I'd much rather bang my head against the compiler for a while before I ship than have reams of undebuggable crash logs pour in.
> It is meant offer progressive disclosure and be simple, to be used in playgrounds and learning. At the same time learning and reading through the Swift book and standard library is more akin to mastering C++. It is quite unforgiving, harsh, and complex.
That's true, but that's also kind of a big part of why playgrounds exist. So again, what's the complaint? Is the fact that playgrounds make the language more accessible a problem somehow?
Also, the only way I can view Objective-C as "simple" is by ignoring the C underpinnings and looking only at the relatively thin layer of classes and message passing on top. For somebody coming to coding with fresh eyes, both Swift and Objective-C are going to have a learning curve; the difference is that with Swift you don't have to pick up K&R first.
Yes, thats right. Lua for everything. Lua on iOS, Lua on MacOS, Lua on Linux. Lua on Windows.
I absolutely love the freedom, flexibility, and downright sexiness of using one language on all of the platforms. It's a beautiful, difficult, lonely place to be - but if you haven't tried it, you can't really knock it.
The old adage of C devs against those of us on the strongly typed side of the fence, the Algol family.
We all know how that freedom for creativity ends up.
You mean, it completely dominates the programming world?
C is a massive success story, no other language even comes close.
Had AT&T been allowed to charge for UNIX from the beginning, rather than ten years later after being split up by the government, C would have been a footnote in the history of systems programming languages.
In a world where many don't want to pay for software, free trumps technical merits.
Kudos to Apple.
Swift makes it easy to write software that is incredibly fast and safe by design. Our goals for Swift are ambitious: we want to make programming simple things easy, and difficult things possible.
I don't see how say Go can be a system language and Swift can't.
Much of Apple's investment has been around making App and Framework development more productive on their platforms, but Swift is open and other companies like IBM have been focused on things like Web frameworks.
>It should inter-op with existing Foundation and Cocoa, but still bring its own wide reaching standard library, adding a lot of duplication.
I assume they mean the standard Swift package as the standard library, since they listed Foundation separately. Swift's standard library is tiny: basically core types like Int, String, Optional, and pointers; collections (Dictionary, Set, and Array); ranges like 1..<10; and essential common protocols like Equatable and Hashable.
It isn't until you get to Foundation that you find things like I/O, networking, or binary data types.
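As an illustrative sketch (the `Point` type is hypothetical), everything below comes from that small standard library alone - core types, collections, ranges, and synthesized Hashable - with no Foundation import at all:

```swift
// Standard library only: Int, Set, Dictionary, Array, ranges, Optional,
// and compiler-synthesized Hashable/Equatable conformance.
struct Point: Hashable {
    let x: Int
    let y: Int
}

// Set deduplicates the repeated Point(x: 0, y: 0).
let points: Set<Point> = [Point(x: 0, y: 0), Point(x: 1, y: 1), Point(x: 0, y: 0)]
let squares = (1..<5).map { $0 * $0 }  // [1, 4, 9, 16]
let index: [Int: Point] = Dictionary(uniqueKeysWithValues: points.map { ($0.x, $0) })
let maybe: Int? = squares.first        // Optional(1)
print(points.count, squares, maybe ?? 0)
```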
Several of the languages the article lists as positives (like Ruby) are also multi-paradigm. Swift is hardly a functional language; it just has functional influences - like nearly all the languages listed.
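Those influences amount to first-class closures and the usual map/filter/reduce combinators, sitting alongside ordinary imperative and object-oriented code. A small sketch (the word list is arbitrary):

```swift
// Functional-style pipeline over an ordinary Array - closures,
// filter/map/reduce - in an otherwise multi-paradigm language.
let words = ["swift", "is", "multi", "paradigm"]
let shouting = words
    .filter { $0.count > 2 }      // drop short words
    .map { $0.uppercased() }      // transform the rest
let totalLength = shouting.reduce(0) { $0 + $1.count }
print(shouting, totalLength)  // ["SWIFT", "MULTI", "PARADIGM"] 18
```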
Since there wasn't an example given, all I can really say is that nothing requires you to use type inference. My experience across multiple languages is that such an expression is usually wrong, and the compiler can't suggest a fix because a broken expression gives it no type hints to work from. A dynamic language would just let it crash and burn at runtime, which (assuming you have a test suite capturing the issue) is such a different debugging process from fixing type errors at compile time that the two are hard to compare.
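When the solver does time out on a long inferred expression, the usual fix is to annotate and split it into sub-expressions, which hands the compiler its type hints back. A sketch (the literals are arbitrary):

```swift
// Explicitly typed sub-expressions keep each constraint problem small,
// instead of one giant expression the solver has to infer end to end.
let head: [Int] = [1, 2] + [3, 4]
let tail: [Int] = [5, 6] + [7, 8]
let combined: [Int] = head + tail
print(combined)  // [1, 2, 3, 4, 5, 6, 7, 8]
```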
REPL and Playgrounds are both for experimentation, not necessarily for scripting. You can do command-line swift scripts, but generally the compiler makes them a bit heavyweight for many things.
I can't say much here, except that having a computer analyze your code is meant to be a boon, not a hindrance. My experience is that when I write Java projects now, I wish I could get back the expressiveness, performance, and personal productivity I have writing Swift code.
If nothing else, someone is underestimating the effort of mastering C++.
Try Scala. Or Kotlin if you don't want to have to learn too much along the way.
Except it actually is incredibly slow, not fast. The only thing fast about it is the marketing, and that is mostly fast and loose.
I ran a ton of benchmarks for my performance book, and despite the fact that I didn't expect much, I was still surprised by just how badly it misses on the performance front.
As an example see how Kitura compares to other web frameworks in performance: https://www.techempower.com/benchmarks/#section=data-r16&hw=...
And Kitura actually implements the central HTTP parser in C. Porting that same code to Swift made it (that code, not Kitura) 10x slower.
As a second example, see JSON parsing. Putting Swift on top of NSJSONSerialization typically adds a 25x performance overhead. Yes, that's twenty-five. And NSJSONSerialization uses NSDictionary and friends to represent the parse result, which is already very, very slow. 25x on top of that is no mean feat.
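For context, the pattern being measured is roughly the following sketch (the JSON payload is illustrative). The parse result comes back as bridged collection and number objects, and every access from Swift pays a bridging cost on top of the parse itself:

```swift
import Foundation

// Parsing with (NS)JSONSerialization from Swift: the result is a
// bridged dictionary, and each downcast crosses the ObjC bridge.
let json = "{\"name\": \"kitura\", \"port\": 8080}"
let data = json.data(using: .utf8)!
let object = try! JSONSerialization.jsonObject(with: data) as! [String: Any]
let name = object["name"] as! String
let port = object["port"] as! Int
print(name, port)  // kitura 8080
```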
Anyway, the good folk from Big Nerd Ranch decided the problem was the interop and decided to write a parser in pure Swift, called Freddy. It was a "success", as Freddy is only 8x slower than NSJSONSerialization.
Oh, and if you think Swift serialization fixes this: it doesn't. Still around an order of magnitude slower.
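A sketch of that Codable path (the `Message` type is illustrative). As of the Swift 4.x era, `JSONDecoder` was itself implemented on top of `JSONSerialization`, so it inherits that parser's cost and adds its own decoding layer:

```swift
import Foundation

// Codable decoding via JSONDecoder.
struct Message: Codable {
    let id: Int
    let body: String
}

let data = "{\"id\": 1, \"body\": \"hello\"}".data(using: .utf8)!
let message = try! JSONDecoder().decode(Message.self, from: data)
print(message.id, message.body)  // 1 hello
```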
And I didn't even go into the role of the optimizer. Unoptimized code is much, much slower relatively speaking, sometimes 100x to 1000x (yes, that's a thousand) slower than optimized code. Note that the default debug mode in Xcode is unoptimized and that 1000x is the difference between 1 second and 20 minutes. And of course you don't get a diagnostic if an optimization is not applied.
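Concretely, the gap being described is between the compiler's two optimization modes (the file name here is hypothetical; the flags are real):

```shell
# Debug: no optimization - this is what Xcode's default scheme uses.
swiftc -Onone hotloop.swift -o hotloop-debug

# Release: full optimization.
swiftc -O hotloop.swift -o hotloop-release

# For generic/protocol-heavy code the gap between these two binaries can be
# orders of magnitude, and there is no diagnostic when a specialization or
# devirtualization was not applied.
```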
The "Swift is fast" meme is pure Apple marketing at its most deceptive. It has never been true in the past and isn't true now.
As to safety: there is very little to no evidence that static typing actually increases safety, studies go either way and when there is a positive effect it is tiny. See also: http://blog.metaobject.com/2014/06/the-safyness-of-static-ty...
> [type inference falling over] usually such an expression is wrong
time swiftc too-complex.swift
too-complex.swift:1:49: error: expression was too complex to be solved in reasonable time; consider breaking up the expression into distinct sub-expressions
    let a:[Int] = [1] + [2] + [3] + [4] + [5] + [6] + [7]
> there is very little to no evidence that static typing actually increases safety
Well, C is statically typed, Python or Common Lisp aren't. We need better categories than "static/dynamic".
There were quite a few system languages that replaced Assembly, some of them even 10 years before C was invented.